[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: \"\"\nlabels: \"\"\nassignees: \"\"\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nA clear and minimal example to reproduce the bug.\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n\n**Additional context**\nAdd any other context about the problem here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "# To get started with Dependabot version updates, you'll need to specify which\n# package ecosystems to update and where the package manifests are located.\n# Please see the documentation for all configuration options:\n# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file\n\nversion: 2\nupdates:\n  - package-ecosystem: \"cargo\" # See documentation for possible values\n    directory: \"/\" # Location of package manifests\n    schedule:\n      interval: \"daily\"\n    groups:\n      development:\n        dependency-type: \"development\"\n      tree-sitter:\n        patterns:\n          - \"tree-sitter*\"\n      aws:\n        patterns:\n          - \"aws*\"\n      minor:\n        update-types:\n          - \"minor\"\n          - \"patch\"\n\n  - package-ecosystem: \"github-actions\"\n    directory: \"/\"\n    schedule:\n      interval: \"daily\"\n"
  },
  {
    "path": ".github/workflows/bench.yml",
    "content": "name: Bench\non:\n  push:\n    branches:\n      - master\n\npermissions:\n  contents: write\n  deployments: write\n\njobs:\n  benchmark:\n    name: Benchmark\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@stable\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n      - name: Run benchmark\n        run: cargo bench -p benchmarks -- --output-format bencher | tee benchmarks/output.txt\n\n      - name: Store benchmark result\n        uses: benchmark-action/github-action-benchmark@v1\n        with:\n          name: Rust Benchmark\n          tool: \"cargo\"\n          output-file-path: benchmarks/output.txt\n          github-token: ${{ github.token }}\n          auto-push: true\n          # Show alert with commit comment on detecting possible performance regression\n          alert-threshold: \"200%\"\n          comment-on-alert: true\n          fail-on-alert: true\n          alert-comment-cc-users: \"@timonv\"\n"
  },
  {
    "path": ".github/workflows/coverage.yml",
    "content": "name: Coverage\n\non:\n  pull_request:\n  push:\n    branches:\n      - master\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-coverage\n  cancel-in-progress: true\n\nenv:\n  RUSTFLAGS: \"-Dwarnings -Clink-arg=-fuse-ld=lld\"\n\njobs:\n  test:\n    name: coverage\n    runs-on: ubuntu-latest\n    steps:\n      - name: Free Disk Space (Ubuntu)\n        uses: jlumbroso/free-disk-space@main\n      - name: Checkout repository\n        uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@nightly\n        with:\n          components: llvm-tools-preview\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n      - name: Install cargo-llvm-cov\n        uses: taiki-e/install-action@v2\n        with:\n          tool: cargo-llvm-cov\n      - name: Install system dependencies\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y lld libcurl4-openssl-dev\n      - name: Generate code coverage\n        run: |\n          cargo llvm-cov --tests -j 2 --all-features --lcov --output-path lcov.info\n\n      - name: Coveralls\n        uses: coverallsapp/github-action@v2\n"
  },
  {
    "path": ".github/workflows/discord.yml",
    "content": "on:\n  release:\n    types: [published]\n\njobs:\n  github-releases-to-discord:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v6\n      - name: Github Releases To Discord\n        uses: SethCohen/github-releases-to-discord@v1.19.0\n        with:\n          webhook_url: ${{ secrets.DISCORD_WEBHOOK_URL }}\n          color: \"2105893\"\n          username: \"Release Changelog\"\n          avatar_url: \"https://cdn.discordapp.com/avatars/487431320314576937/bd64361e4ba6313d561d54e78c9e7171.png\"\n          footer_title: \"Changelog\"\n          footer_icon_url: \"https://cdn.discordapp.com/avatars/487431320314576937/bd64361e4ba6313d561d54e78c9e7171.png\"\n          footer_timestamp: true\n"
  },
  {
    "path": ".github/workflows/lint.yml",
    "content": "name: CI\n\non:\n  pull_request:\n  merge_group:\n  push:\n    branches:\n      - master\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-lint\n\nenv:\n  CARGO_TERM_COLOR: always\n\njobs:\n  lint:\n    name: Lint\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@stable\n        with:\n          components: clippy\n      - uses: r7kamura/rust-problem-matchers@v1\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n      - name: Install system dependencies\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y libcurl4-openssl-dev\n      - name: Check typos\n        uses: crate-ci/typos@master\n      # - name: Lint dependencies\n      #   uses: EmbarkStudios/cargo-deny-action@v2\n      - name: clippy\n        run: cargo clippy --all-targets --all-features --workspace\n        env:\n          RUSTFLAGS: \"-Dwarnings\"\n\n  lint-formatting:\n    name: Lint formatting\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@nightly\n        with:\n          components: rustfmt\n      - uses: r7kamura/rust-problem-matchers@v1\n      - name: \"Rustfmt\"\n        run: cargo +nightly fmt --all -- --check\n        env:\n          RUSTFLAGS: \"-Dwarnings\"\n\n  hack:\n    name: Cargo Hack\n    runs-on: ubuntu-latest\n\n    steps:\n      - uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@stable\n        with:\n          components: rustfmt\n      - uses: r7kamura/rust-problem-matchers@v1\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n      - name: Install system dependencies\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y libcurl4-openssl-dev\n      - name: Install cargo-hack\n        uses: taiki-e/install-action@v2\n        with:\n          tool: cargo-hack\n      - name: Check features with Cargo Hack\n        run: cargo hack check --each-feature --no-dev-deps\n"
  },
  {
    "path": ".github/workflows/pr.yml",
    "content": "name: Check Pull Requests\n\non:\n  pull_request_target:\n    types:\n      - opened\n      - edited\n      - synchronize\n      - labeled\n      - unlabeled\n  merge_group:\n\npermissions:\n  pull-requests: write\n\njobs:\n  check-title:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Check PR title\n        if: github.event_name == 'pull_request_target'\n        uses: amannn/action-semantic-pull-request@v6\n        id: check_pr_title\n        env:\n          GITHUB_TOKEN: ${{ github.token }}\n      # Add comment indicating we require pull request titles to follow conventional commits specification\n      - uses: marocchino/sticky-pull-request-comment@v2\n        if: always() && (steps.check_pr_title.outputs.error_message != null)\n        with:\n          header: pr-title-lint-error\n          message: |\n            Thank you for opening this pull request!\n\n            We require pull request titles to follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) and it looks like your proposed title needs to be adjusted.\n\n            Details:\n\n            > ${{ steps.check_pr_title.outputs.error_message }}\n\n      # Delete a previous comment when the issue has been resolved\n      - if: ${{ steps.check_pr_title.outputs.error_message == null }}\n        uses: marocchino/sticky-pull-request-comment@v2\n        with:\n          header: pr-title-lint-error\n          delete: true\n\n  check-breaking-change-label:\n    runs-on: ubuntu-latest\n    env:\n      # use an environment variable to pass untrusted input to the script\n      # see https://securitylab.github.com/research/github-actions-untrusted-input/\n      PR_TITLE: ${{ github.event.pull_request.title }}\n    steps:\n      - name: Check breaking change label\n        id: check_breaking_change\n        run: |\n          pattern='^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)(\\(\\w+\\))?!:'\n          # Check if pattern matches\n          if echo \"${PR_TITLE}\" | grep -qE \"$pattern\"; then\n            echo \"breaking_change=true\" >> \"$GITHUB_OUTPUT\"\n          else\n            echo \"breaking_change=false\" >> \"$GITHUB_OUTPUT\"\n          fi\n      - name: Add label\n        if: steps.check_breaking_change.outputs.breaking_change == 'true'\n        uses: actions/github-script@v8\n        with:\n          github-token: ${{ github.token }}\n          script: |\n            github.rest.issues.addLabels({\n              issue_number: context.issue.number,\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              labels: ['breaking change']\n            })\n\n  do-not-merge:\n    if: ${{ contains(github.event.*.labels.*.name, 'do not merge') }}\n    name: Prevent Merging\n    runs-on: ubuntu-latest\n    steps:\n      - name: Check for label\n        run: |\n          echo \"Pull request is labeled as 'do not merge'\"\n          echo \"This workflow fails so that the pull request cannot be merged\"\n          exit 1\n"
  },
  {
    "path": ".github/workflows/release.yml",
    "content": "name: Release\n\npermissions:\n  pull-requests: write\n  contents: write\n\non:\n  push:\n    branches:\n      - master\n\njobs:\n  release-swiftide:\n    name: Crates.io\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v6\n        with:\n          fetch-depth: 0\n          token: ${{ secrets.RELEASE_PLZ_TOKEN }}\n      - name: Install Rust toolchain\n        uses: dtolnay/rust-toolchain@stable\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n        with:\n          repo-token: ${{ secrets.GITHUB_TOKEN }}\n      - name: Run release-plz\n        uses: MarcoIeni/release-plz-action@v0.5\n        env:\n          GITHUB_TOKEN: ${{ secrets.RELEASE_PLZ_TOKEN }}\n          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}\n          GITHUB_REPO: ${{ github.repository }}\n"
  },
  {
    "path": ".github/workflows/test.yml",
    "content": "name: CI\n\non:\n  pull_request:\n  merge_group:\n  push:\n    branches:\n      - master\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}-test\n\nenv:\n  CARGO_TERM_COLOR: always\n  RUSTFLAGS: \"-Dwarnings -Clink-arg=-fuse-ld=lld\"\n\njobs:\n  test:\n    name: Test\n    runs-on: ubuntu-latest\n    steps:\n      - name: Free Disk Space (Ubuntu)\n        uses: jlumbroso/free-disk-space@main\n      - uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@stable\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n      - name: Install system dependencies\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y lld libcurl4-openssl-dev\n      - name: \"Test\"\n        run: cargo test -j 2 --tests --all-features --no-fail-fast\n  docs:\n    name: Docs\n    runs-on: ubuntu-latest\n    steps:\n      - name: Free Disk Space (Ubuntu)\n        uses: jlumbroso/free-disk-space@main\n      - uses: actions/checkout@v6\n      - uses: dtolnay/rust-toolchain@stable\n      - name: Install Protoc\n        uses: arduino/setup-protoc@v3\n      - name: Install system dependencies\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y lld libcurl4-openssl-dev\n      - name: \"Test\"\n        run: cargo test --doc --all-features --no-fail-fast\n"
  },
  {
    "path": ".gitignore",
    "content": "# Generated by Cargo\n# will have compiled files and executables\ndebug/\ntarget/\n\n# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries\n# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html\n\n# These are backup files generated by rustfmt\n**/*.rs.bk\n\n# MSVC Windows builds of rustc generate these, which store debugging information\n*.pdb\n\n\n# Added by cargo\n\ntarget\ntmp\n\n.env\n.env*.local\n\n**/.fastembed_cache\n.idea/\n.history.cargo\n"
  },
  {
    "path": ".markdownlint.yaml",
    "content": "# configuration for https://github.com/DavidAnson/markdownlint\n\nfirst-line-heading: false\nno-inline-html: false\nline-length: false\n\n# to support repeated headers in the changelog\nno-duplicate-heading: false\n"
  },
  {
    "path": "AGENTS.md",
    "content": "# Repository Guidelines\n\n## Project Structure & Module Organization\n\nSwiftide is a Rust workspace driven by the library in `swiftide/`, with supporting crates such as `swiftide-core/` for shared primitives, `swiftide-agents/` for agent orchestration, `swiftide-indexing/` and `swiftide-query/` for pipeline flows, and `swiftide-integrations/` for external connectors. Shared fixtures live in `swiftide-test-utils/`, while `examples/` hosts runnable demos and `benchmarks/` tracks performance scenarios. Static assets (logos and diagrams) are under `images/`.\n\n## Build, Test, and Development Commands\n\n- `cargo check --workspace --all-features` quickly verifies the entire workspace compiles with all feature flags enabled.\n- `cargo build --workspace --all-features` compiles every crate and surfaces feature-gating issues early.\n- `cargo check -p swiftide-agents` is a fast way to probe agent changes before touching the rest of the workspace.\n- `cargo +nightly fmt --all` applies the repo `rustfmt.toml` (comment wrapping requires nightly); use `cargo +nightly fmt --all -- --check` to mirror CI formatting validation.\n- `cargo clippy --workspace --all-targets --all-features -- -D warnings` mirrors the main lint job and keeps us aligned with the pedantic lint profile baked into `Cargo.toml`.\n- `cargo test -j 2 --tests --all-features --no-fail-fast` mirrors the main CI test job for unit and integration tests.\n- `cargo test --doc --all-features --no-fail-fast` mirrors the docs test job in CI.\n- `cargo hack check --each-feature --no-dev-deps` mirrors the Cargo Hack feature-matrix check run in CI.\n- `typos` mirrors the spelling check run in CI.\n- `cargo test --workspace` is still useful locally when you want a broader default test sweep; use `RUST_LOG=info` if you need verbose diagnostics.\n- Snapshot updates flow through `cargo insta review` after tests rewrite `.snap` files.\n\n## Coding Style & Naming Conventions\n\nFollow Rust 2024 idioms with four-space indentation. Public APIs should embrace builder patterns and the naming guidance from the Rust API Guidelines: `snake_case` for functions, `UpperCamelCase` for types, and `SCREAMING_SNAKE_CASE` for constants. Avoid `unsafe` blocks—`Cargo.toml` forbids them at the workspace level. Keep comments concise so `wrap_comments = true` can format them within 100 columns.\n\n## Testing Guidelines\n\nPrefer focused crate runs such as `cargo test -p swiftide-integrations` when iterating, and opt into `-- --ignored` for heavier scenarios. Integration tests rely on `testcontainers`, so ensure Docker is available; keep fixtures inside `swiftide-test-utils/` to reuse container helpers. For `insta` snapshots, commit reviewed `.snap.new` diffs only after `cargo insta review` removes pending files.\n\n## Commit & Pull Request Guidelines\n\nCommits follow conventional syntax (`feat(agents): …`, `fix(indexing): …`) with a lowercase imperative summary. Pull request titles are also checked against the conventional commits format in CI, and titles ending in `!` receive the `breaking change` label automatically. Each PR should describe the change, link any GitHub issue, note API or schema impacts, and include before/after traces or logs when behavior changes. Update docs (README, website, or inline rustdoc) and add tests or benchmarks alongside functional work. Before requesting review, run the full lint and test suite listed above.\n\n## Tooling & Environment Notes\n\nThe workspace pins `stable` in `rust-toolchain.toml`; use the same channel unless a nightly tool is explicitly required. Dependency hygiene is enforced with `cargo deny --workspace`, and spelling checks may run via `typos`. Store local credentials with `mise` or environment variables—never commit secrets.\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n## [0.32.1](https://github.com/bosun-ai/swiftide/compare/v0.32.0...v0.32.1) - 2025-11-08\n\n### New features\n\n- [8bca0ef](https://github.com/bosun-ai/swiftide/commit/8bca0efa246e6adac061006f5f72cc9dd038cc8f) *(integrations/tree-sitter)*  Add C# support ([#967](https://github.com/bosun-ai/swiftide/pull/967))\n\n- [da35870](https://github.com/bosun-ai/swiftide/commit/da358708c83459c7f990027759fa5c56a2b647b9)  Custom schema for fail tool ([#966](https://github.com/bosun-ai/swiftide/pull/966))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.32.0...0.32.1\n\n\n\n## [0.32.0](https://github.com/bosun-ai/swiftide/compare/v0.31.3...v0.32.0) - 2025-11-05\n\n### New features\n\n- [9ae3331](https://github.com/bosun-ai/swiftide/commit/9ae33317bbcbf5e65e3aa7eb0bf378190b7c33b5) *(agents)*  [**breaking**] Improve toolspec api with schemars and support all possible types ([#940](https://github.com/bosun-ai/swiftide/pull/940))\n\n**BREAKING CHANGE**: macro-level `json_type` overrides beyond the basic\nprimitives are no longer enforced; rely on Rust type inference or\nprovide an explicit schemars-derived struct/custom schema when specific\nshapes are required\n\n- [a0cc8d7](https://github.com/bosun-ai/swiftide/commit/a0cc8d73a6ce9a82a03a78e8f83957d3c1455584) *(agents)*  Stop with args with optional schema ([#950](https://github.com/bosun-ai/swiftide/pull/950))\n\n- [8ad7d97](https://github.com/bosun-ai/swiftide/commit/8ad7d97b6911bd3c676c79a2d5318c31dad23e9f) *(agents)*  Add configurable timeouts to commands and local executor ([#963](https://github.com/bosun-ai/swiftide/pull/963))\n\n- [29289d3](https://github.com/bosun-ai/swiftide/commit/29289d37cb9c49fba89376c125194fc430c57a37) *(agents)*  [**breaking**] Add working directories for executor and commands ([#941](https://github.com/bosun-ai/swiftide/pull/941))\n\n**BREAKING CHANGE**: Add working 
directories for executor and commands ([#941](https://github.com/bosun-ai/swiftide/pull/941))\n\n- [ce724e5](https://github.com/bosun-ai/swiftide/commit/ce724e56034d717aafde08bb6c2d9dc163c66caf) *(agents/mcp)*  Prefix mcp tools with the server name ([#958](https://github.com/bosun-ai/swiftide/pull/958))\n\n### Bug fixes\n\n- [04cd88b](https://github.com/bosun-ai/swiftide/commit/04cd88b74c7a0dd962c093181884db0afe7b6d2d) *(docs)*  Replace `feature(doc_auto_cfg)` with `doc(auto_cfg)`\n\n- [7873ce5](https://github.com/bosun-ai/swiftide/commit/7873ce5941a7abf8ed60df4ec2ea8a7a4c1d1316) *(integrations/openai)*  Simplefy responses api and improve chat completion request ergonomics ([#956](https://github.com/bosun-ai/swiftide/pull/956))\n\n- [24328d0](https://github.com/bosun-ai/swiftide/commit/24328d07e61a4f02679ee6b63a38561d12acefd4) *(macros)*  Ensure deny_unknown_attributes is set on generated args ([#948](https://github.com/bosun-ai/swiftide/pull/948))\n\n- [54245d0](https://github.com/bosun-ai/swiftide/commit/54245d0e70aff580d0e12d68e174026edfdb4801)  Update async-openai and fix responses api ([#964](https://github.com/bosun-ai/swiftide/pull/964))\n\n- [72a6c92](https://github.com/bosun-ai/swiftide/commit/72a6c92764aeda4e88a7cf18d26ce600b7ba8a28)  Force additionalProperties properly on completion requests ([#949](https://github.com/bosun-ai/swiftide/pull/949))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.31.3...0.32.0\n\n\n\n## [0.31.3](https://github.com/bosun-ai/swiftide/compare/v0.31.2...v0.31.3) - 2025-10-06\n\n### New features\n\n- [a189ae6](https://github.com/bosun-ai/swiftide/commit/a189ae6de51571810f98cf58f9fdb58e7707f29a) *(integrations/openai)*  Opt-in responses api ([#943](https://github.com/bosun-ai/swiftide/pull/943))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.31.2...0.31.3\n\n\n\n## [0.31.2](https://github.com/bosun-ai/swiftide/compare/v0.31.1...v0.31.2) - 2025-09-23\n\n### New features\n\n- 
[f35c9b5](https://github.com/bosun-ai/swiftide/commit/f35c9b507e11f76ff7e78de35843b3310a25f3db) *(agents)*  Add builder lite methods to SystemPrompt\n\n- [9f533f5](https://github.com/bosun-ai/swiftide/commit/9f533f57b2c7ed4ac1988f9e3567cda42f64b824) *(agents)*  Add helpers to retrieve or mutate the system prompt\n\n- [febb7eb](https://github.com/bosun-ai/swiftide/commit/febb7eb282af98ce1124636cb66a8819265e3585) *(agents)*  Support appending any kind of string to default SystemPrompt\n\n- [992478e](https://github.com/bosun-ai/swiftide/commit/992478ec8912554f73e3af6467784fd9326461c5) *(integrations/tree-sitter)*  Splitter support for PHP ([#932](https://github.com/bosun-ai/swiftide/pull/932))\n\n### Bug fixes\n\n- [5df7a48](https://github.com/bosun-ai/swiftide/commit/5df7a483bed7d980bceef5e69fd7e1415da7563f) *(agents)*  Only log error tool calls if error after hook\n\n- [54dceec](https://github.com/bosun-ai/swiftide/commit/54dceece5b939a0b534891ee5902593920a3fdeb) *(agents/local-executor)*  Also respect workdir in read file and write file\n\n- [6a688b4](https://github.com/bosun-ai/swiftide/commit/6a688b4be6a5a443ac72aa8ec0165ce6a0bebf11) *(agents/local-executor)*  Respect workdir when running commands\n\n- [5b01c58](https://github.com/bosun-ai/swiftide/commit/5b01c5854432569638fa54225268e48b4133178d) *(langfuse)*  Use swiftide Usage in SimplePrompt ([#929](https://github.com/bosun-ai/swiftide/pull/929))\n\n### Miscellaneous\n\n- [ec1e301](https://github.com/bosun-ai/swiftide/commit/ec1e301eec2793613186b9e3bcb02de52741b936) *(agents)*  Explicit read file test for local executor\n\n- [8882a53](https://github.com/bosun-ai/swiftide/commit/8882a538f30c7ff457dcb3a1d48e623fbc5aad1d)  Improve tests for control tools ([#928](https://github.com/bosun-ai/swiftide/pull/928))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.31.1...0.31.2\n\n\n\n## [0.31.1](https://github.com/bosun-ai/swiftide/compare/v0.31.0...v0.31.1) - 2025-09-16\n\n### Docs\n\n- 
[866b77a](https://github.com/bosun-ai/swiftide/commit/866b77a8c33b6b7935f260c1df099d89492cb048) *(readme)*  Use raw links for images so they work on crates/docs\n\n- [513c143](https://github.com/bosun-ai/swiftide/commit/513c143cd11ae6ddda48f73012844f1f6d026ef7) *(readme)*  Remove double back-to-top\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.31.0...0.31.1\n\n\n\n## [0.31.0](https://github.com/bosun-ai/swiftide/compare/v0.30.1...v0.31.0) - 2025-09-16\n\n### New features\n\n- [ad6655d](https://github.com/bosun-ai/swiftide/commit/ad6655dc448defc3a9ef8401f0528da11e16a256) *(agents)*  Add helper to remove default stop tool from agent builder\n\n- [708ebe4](https://github.com/bosun-ai/swiftide/commit/708ebe436b4d2e9456723cfc95557071f2c636c9) *(agents)*  Implement From<SystemPrompt> for SystemPromptBuilder\n\n- [db79f21](https://github.com/bosun-ai/swiftide/commit/db79f21c323abca462a5f469814c4c03cc949b7e) *(agents/tasks)*  Add helper to create instant transitions from node ids\n\n- [ac7cd22](https://github.com/bosun-ai/swiftide/commit/ac7cd2209e1792b693acbde251a1aa756bb35541) *(indexing)*  [**breaking**] Prepare for multi modal and node transformations with generic indexing ([#899](https://github.com/bosun-ai/swiftide/pull/899))\n\n**BREAKING CHANGE**: Indexing pipelines are now generic over their inner\ntype. This is a major change that enables major cool stuff in the\nfuture. Most of Swiftide still runs on Node<String>, and will be\nmigrated when needed/appropriate. 
A `TextNode` alias is provided and\nmost indexing traits now take the node's inner generic parameter as\nInput/Output associated types.\n\n- [4e20804](https://github.com/bosun-ai/swiftide/commit/4e20804cc78a90e61a1c816abe5810b2a34007af) *(integrations)*  More convenient usage reporting via callback ([#897](https://github.com/bosun-ai/swiftide/pull/897))\n\n- [5923532](https://github.com/bosun-ai/swiftide/commit/592353259018b39d4ce43b4a15a9dea1aa1d2904) *(integrations/openai, core)*  Add `StructuredPrompt` and implement for OpenAI ([#912](https://github.com/bosun-ai/swiftide/pull/912))\n\n- [d2681d5](https://github.com/bosun-ai/swiftide/commit/d2681d53ce235439885ace40ac08a6d4a058259a)  Integrate with Langfuse via tracing and make traces consistent and pretty ([#907](https://github.com/bosun-ai/swiftide/pull/907))\n\n- [b3f18cd](https://github.com/bosun-ai/swiftide/commit/b3f18cd00f9019496274142aa89342da115c6843)  Add convenience helpers to get ToolOutput values as ref\n\n### Bug fixes\n\n- [0071b72](https://github.com/bosun-ai/swiftide/commit/0071b721520d585f36d1ec6ff90eb88d669da043) *(agents)*  Replace tools when adding multiple with the same name\n\n- [dab4cf7](https://github.com/bosun-ai/swiftide/commit/dab4cf771cd9a6d90ae0985c83171fd87b213cba) *(integrations)*  Remove sync requirement in future from `on_usage_async`\n\n- [6702314](https://github.com/bosun-ai/swiftide/commit/6702314eb6d937353324ce601f2a35c2a13d4cc1) *(langfuse)*  Ensure all data is on the right generation span ([#913](https://github.com/bosun-ai/swiftide/pull/913))\n\n- [e389c8b](https://github.com/bosun-ai/swiftide/commit/e389c8ba72435ba1c1af109934b2b580fb6be7c1) *(langfuse)*  Set type field correctly on `SimplePrompt`\n\n### Miscellaneous\n\n- [5ba9a7d](https://github.com/bosun-ai/swiftide/commit/5ba9a7db6f844687b04c5fa5d9a2119f456108c6) *(agents)*  Implement default for `AgentCanFail` tool\n\n- [412dacb](https://github.com/bosun-ai/swiftide/commit/412dacb554d2b1478f3286a47352a6daed3079b9) 
*(agents/tasks)*  Clean up closure api for node registration\n\n- [478d583](https://github.com/bosun-ai/swiftide/commit/478d5830fa194b880595b2c2ef9ef409cc5b34c4) *(openai)*  Remove double `include_usage` in complete_stream\n\n### Docs\n\n- [2117190](https://github.com/bosun-ai/swiftide/commit/211719038d1912f3ee3f165cdb721c216fa48286)  Update blog post links in readme\n\n- [d5e0323](https://github.com/bosun-ai/swiftide/commit/d5e0323691a22a0b413d14d02e3bafb391e9dd7a)  Update readme\n\n- [a574860](https://github.com/bosun-ai/swiftide/commit/a5748604d14e10c4010384e020e09c6082d2a7c1)  Update readme\n\n### Style\n\n- [7081e29](https://github.com/bosun-ai/swiftide/commit/7081e291216491618fb07e1ac3f947a99b140c7f)  Fmt\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.30.1...0.31.0\n\n\n\n## [0.30.1](https://github.com/bosun-ai/swiftide/compare/v0.30.0...v0.30.1) - 2025-08-19\n\n### Bug fixes\n\n- [0114573](https://github.com/bosun-ai/swiftide/commit/011457367b7bfdc207f1f6d9ebfcbf2a2de4ac58) *(agents)*  Explicitly handle out of bounds and empty edge cases for message history\n\n- [1005ac2](https://github.com/bosun-ai/swiftide/commit/1005ac219e2078c6ee12b050a7e73d48ef7f46a5) *(core)*  Export tokenizer traits from the root crate\n\n- [e4c01e1](https://github.com/bosun-ai/swiftide/commit/e4c01e14fbe89cb5a16beddcb3819b66c7f1a087) *(integrations/tiktoken)*  Tiktoken feature flag in root crate\n\n- [d56496d](https://github.com/bosun-ai/swiftide/commit/d56496d60719eea3752f849aee2a780eb435130e) *(integrations/tiktoken)*  Fix my inability to count in late hours\n\n### Miscellaneous\n\n- [352bf40](https://github.com/bosun-ai/swiftide/commit/352bf40ad5f74778bf41f00cff936805b8633b30) *(core)*  Implement AsRef<str> for ChatMessage\n\n- [aadfb7b](https://github.com/bosun-ai/swiftide/commit/aadfb7b89fe1fd6d04f27bc7209458de3571d1cc) *(integrations/openai)*  Concise debug logs and more verbose trace\n\n- 
[f975d40](https://github.com/bosun-ai/swiftide/commit/f975d40beccdebd98c896d8492243a489a9b287b) *(query)*  Reduce debugging noise for queries\n\n### Style\n\n- [6a744e0](https://github.com/bosun-ai/swiftide/commit/6a744e0290ebceca3c14b675a35a460f532c4cff)  Fix typos\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.30.0...0.30.1\n\n\n\n## [0.30.0](https://github.com/bosun-ai/swiftide/compare/v0.29.0...v0.30.0) - 2025-08-16\n\n### New features\n\n- [dc574b4](https://github.com/bosun-ai/swiftide/commit/dc574b41b259f430bb4dc38338416ea1aa9480bb) *(agents)*  Multi agent setup with graph-like Tasks ([#861](https://github.com/bosun-ai/swiftide/pull/861))\n\n- [8740762](https://github.com/bosun-ai/swiftide/commit/87407626ef75c254fae0a677148609738fd64ccc) *(agents)*  Allow mutating an existing system prompt in the builder ([#887](https://github.com/bosun-ai/swiftide/pull/887))\n\n- [4bbf207](https://github.com/bosun-ai/swiftide/commit/4bbf207637a1aebe4e0d5b2d4030c3d1f99d4c1c) *(agents/local-executor)*  Allow clearing, adding and removing env variable ([#875](https://github.com/bosun-ai/swiftide/pull/875))\n\n- [7873493](https://github.com/bosun-ai/swiftide/commit/787349329e34956bcd205b8da64bb241c15c8e65) *(agents/local-executor)*  Support running inline shebang scripts ([#874](https://github.com/bosun-ai/swiftide/pull/874))\n\n- [a6d4379](https://github.com/bosun-ai/swiftide/commit/a6d43794ae8e549b3716ef15344471b22041cbc1)  Proper streaming backoff for Chat Completion ([#895](https://github.com/bosun-ai/swiftide/pull/895))\n\n### Bug fixes\n\n- [2b8e138](https://github.com/bosun-ai/swiftide/commit/2b8e1389b630283a2e8c55b9997f09322b7378a9) *(openai)*  More gracefully allow handling streaming errors if the client is decorated ([#891](https://github.com/bosun-ai/swiftide/pull/891))\n\n- [f2948b5](https://github.com/bosun-ai/swiftide/commit/f2948b596d7c91c518e700c5d2589fba5a45b649) *(pipeline)*  Revert cache nodes after they've been successfully ran 
([#800](https://github.com/bosun-ai/swiftide/pull/800)) ([#852](https://github.com/bosun-ai/swiftide/pull/852))\n\n### Performance\n\n- [63a91bd](https://github.com/bosun-ai/swiftide/commit/63a91bd2d8290cbd20f4ae3914d820192ef160d2)  Use Cow to in Prompt\n\n### Miscellaneous\n\n- [09f421b](https://github.com/bosun-ai/swiftide/commit/09f421bcc934721ab5fcf3dc2808fe5beefcc9a2)  Update rmcp and schemars ([#881](https://github.com/bosun-ai/swiftide/pull/881))\n\n### Docs\n\n- [84ffa45](https://github.com/bosun-ai/swiftide/commit/84ffa4507e57b252f72204f8e0df67191d97fe72)  Minimal updates for tasks\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.29.0...0.30.0\n\n\n\n## [0.29.0](https://github.com/bosun-ai/swiftide/compare/v0.28.1...v0.29.0) - 2025-07-29\n\n### New features\n\n- [25a86fa](https://github.com/bosun-ai/swiftide/commit/25a86fa0403581c3c5ddc5bd237bee98f41bc153) *(agents)*  Lots of utility functions for agents ([#862](https://github.com/bosun-ai/swiftide/pull/862))\n\n- [a70840b](https://github.com/bosun-ai/swiftide/commit/a70840b4dca983bd23b54f1f7cf12b33d60b733c) *(openai)*  Add helper to set the end user field for requests\n\n- [f8ddeba](https://github.com/bosun-ai/swiftide/commit/f8ddebaf57001671516db193140c2e5618000206) *(tree-sitter)*  Add html support for splitting and parsing ([#850](https://github.com/bosun-ai/swiftide/pull/850))\n\n### Bug fixes\n\n- [aaa5cd9](https://github.com/bosun-ai/swiftide/commit/aaa5cd99d0316dcdc46afb922bbcefdfaa97da86) *(agents)*  Add user message before invoking hooks ([#853](https://github.com/bosun-ai/swiftide/pull/853))\n\n- [592be04](https://github.com/bosun-ai/swiftide/commit/592be049b798d80d6dadce6317889a14404643c8) *(agents)*  Reduce verbosity of streaming hook ([#854](https://github.com/bosun-ai/swiftide/pull/854))\n\n- [9778295](https://github.com/bosun-ai/swiftide/commit/977829550d58301f53f663b4c25fa5650ab15359) *(agents)*  Ensure error causes are always accessible\n\n- 
[efd35da](https://github.com/bosun-ai/swiftide/commit/efd35da842288616abd55c789b727265bc549ffb) *(docs)*  Fix prompt doctests\n\n- [e2670c0](https://github.com/bosun-ai/swiftide/commit/e2670c04d471dd7654e903e79f48bcfe61603b9f) *(duckdb)*  Force install and update extensions ([#851](https://github.com/bosun-ai/swiftide/pull/851))\n\n- [6a7ea3b](https://github.com/bosun-ai/swiftide/commit/6a7ea3b1472df209669fdf1231f0bdf4ebe6007f) *(redis)*  Redis instrumentation only at trace level\n\n### Miscellaneous\n\n- [0a8ce37](https://github.com/bosun-ai/swiftide/commit/0a8ce373325fac53946c245209afcd8bb7b2caa9)  Public chat completion streaming types\n\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.28.1...0.29.0\n\n\n\n## [0.28.1](https://github.com/bosun-ai/swiftide/compare/v0.28.0...v0.28.1) - 2025-07-01\n\n### New features\n\n- [c671e6a](https://github.com/bosun-ai/swiftide/commit/c671e6aec7b381235f8450a8be0cbc766df72985) *(agents)*  Add is_approved() and is_refused() to ToolFeedback\n\n### Bug fixes\n\n- [68c5cda](https://github.com/bosun-ai/swiftide/commit/68c5cdafc6e457739bcfeb12d2810350659f2979) *(agents)*  Prevent stack overflow when ToolExecutor has ambiguous refs\n\n- [07198d2](https://github.com/bosun-ai/swiftide/commit/07198d26389e1606e6e0f552e411196f42cf6600) *(duckdb)*  Resolve 'x is an existing extension'\n\n- [e8ecc2f](https://github.com/bosun-ai/swiftide/commit/e8ecc2ff532efd07bd21e5350b8d2b6f600ca1c6) *(qdrant)*  Re-export the full qdrant client\n\n- [242b8f5](https://github.com/bosun-ai/swiftide/commit/242b8f5e3d427967aa238115047a58bb9debad3b) *(qdrant)*  Re-export qdrant::Filter properly\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.28.0...0.28.1\n\n\n\n## [0.28.0](https://github.com/bosun-ai/swiftide/compare/v0.27.2...v0.28.0) - 2025-06-30\n\n### New features\n\n- 
[9d11386](https://github.com/bosun-ai/swiftide/commit/9d11386c155773fcc77a60591cd57bc366044c71)  Token usage metrics for embeddings, SimplePrompt and ChatCompletion with metric-rs ([#813](https://github.com/bosun-ai/swiftide/pull/813))\n\n- [59c8b9c](https://github.com/bosun-ai/swiftide/commit/59c8b9cef721c3861a9d352c7fbef28e27d2f649)  Stream files from tool executor for indexing ([#835](https://github.com/bosun-ai/swiftide/pull/835))\n\n### Bug fixes\n\n- [ba6ec04](https://github.com/bosun-ai/swiftide/commit/ba6ec0485dc950e83e91e6a8102becc0e8a13158) *(pipeline)*  Cache nodes after they've been successfully run ([#800](https://github.com/bosun-ai/swiftide/pull/800))\n\n- [d98827c](https://github.com/bosun-ai/swiftide/commit/d98827c9cd7bb476fdda0ef2ebb6939150b8781c) *(qdrant)*  Re-export qdrant::Filter\n\n- [275efcd](https://github.com/bosun-ai/swiftide/commit/275efcdf91e85ed4327ffa948dcebe5903b178fa)  Mark Loader as Send + Sync\n\n- [5974b72](https://github.com/bosun-ai/swiftide/commit/5974b72de4da2fc18d1f76adde02d02035104d5c)  Integrations metrics depends on core/metrics\n\n### Miscellaneous\n\n- [2f8c7cc](https://github.com/bosun-ai/swiftide/commit/2f8c7cc96b194264a47a8fe21abb7af5c63204f6) *(deps)*  Up all crates ([#837](https://github.com/bosun-ai/swiftide/pull/837))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.27.2...0.28.0\n\n\n\n## [0.27.2](https://github.com/bosun-ai/swiftide/compare/v0.27.1...v0.27.2) - 2025-06-26\n\n### New features\n\n- [66cd7e9](https://github.com/bosun-ai/swiftide/commit/66cd7e9349673a77d8cc79e6b5acab8d56078a42) *(qdrant)*  Add support for a filter in hybrid search ([#830](https://github.com/bosun-ai/swiftide/pull/830))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.27.1...0.27.2\n\n\n\n## [0.27.1](https://github.com/bosun-ai/swiftide/compare/v0.27.0...v0.27.1) - 2025-06-12\n\n### Bug fixes\n\n- 
[0892151](https://github.com/bosun-ai/swiftide/commit/0892151d2d02c30e38fa8629c386eaf4475da7f8) *(duckdb)*  Avoid panic if duckdb gets created twice ([#818](https://github.com/bosun-ai/swiftide/pull/818))\n\n- [0815923](https://github.com/bosun-ai/swiftide/commit/081592334f2bd8c2da30535b4e1b51e8ddd15834) *(tool-executor)*  Remove conflicting implementation of AsRef<str> for Output\n\n### Miscellaneous\n\n- [2b64410](https://github.com/bosun-ai/swiftide/commit/2b644109796c8870d29fa1b54f6a0802cae9aaf8) *(tool-executor)*  Implement AsRef<str> for CommandOutput\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.27.0...0.27.1\n\n\n\n## [0.27.0](https://github.com/bosun-ai/swiftide/compare/v0.26.0...v0.27.0) - 2025-06-09\n\n### New features\n\n- [c636eba](https://github.com/bosun-ai/swiftide/commit/c636ebaa2eb8d4ace1b5a370698c5f2817fc9c99) *(agents)*  [**breaking**] Context is now generic over its backend ([#810](https://github.com/bosun-ai/swiftide/pull/810))\n\n**BREAKING CHANGE**: The signature of the AgentContext is now slightly\ndifferent. If you have implemented your own, e.g. for a persisted\nsolution, and it is *just that*, the implementation is now a lot more\nstraightforward with the `MessageHistory` trait.\n\n- [3c937a8](https://github.com/bosun-ai/swiftide/commit/3c937a8ed4f7d28798a24b0d893f1613cd298493) *(agents)*  Add helpers for creating tool errors ([#805](https://github.com/bosun-ai/swiftide/pull/805))\n\n- [9e831d3](https://github.com/bosun-ai/swiftide/commit/9e831d3eb072748ebb21c9a16cd7d807b4d42469) *(agents)*  [**breaking**] Easy human-in-the-loop flows by decorating tools ([#790](https://github.com/bosun-ai/swiftide/pull/790))\n\n**BREAKING CHANGE**: The `Tool` trait now receives a `ToolCall` as argument\ninstead of an `Option<&str>`. 
The latter is still accessible via\n`tool_call.args()`.\n\n- [814c217](https://github.com/bosun-ai/swiftide/commit/814c2174c742ff4277246505537070726ce8af92) *(duckdb)*  Hybrid Search ([#807](https://github.com/bosun-ai/swiftide/pull/807))\n\n- [254bd3a](https://github.com/bosun-ai/swiftide/commit/254bd3a32ffbd4d06abd6a4f3950a2b8556dc310) *(integrations)*  Add kafka as loader and persist support ([#808](https://github.com/bosun-ai/swiftide/pull/808))\n\n- [19a2e94](https://github.com/bosun-ai/swiftide/commit/19a2e94d262cc68c629d88b6b02a72bb9b159036) *(integrations)*  Add support for Google Gemini ([#754](https://github.com/bosun-ai/swiftide/pull/754))\n\n- [990fa5e](https://github.com/bosun-ai/swiftide/commit/990fa5e9edffebd9b70da6b57fa454f7318d642d) *(redis)*  Support `MessageHistory` for redis ([#811](https://github.com/bosun-ai/swiftide/pull/811))\n\n### Bug fixes\n\n- [ca119bd](https://github.com/bosun-ai/swiftide/commit/ca119bdc473140437abb1bf14b496bb7bd9378de) *(agents)*  Ensure approved / refused tool calls are in new completions ([#799](https://github.com/bosun-ai/swiftide/pull/799))\n\n- [df6a12d](https://github.com/bosun-ai/swiftide/commit/df6a12dabe855f351acc3e0d104048321cb9bc0e) *(agents)*  Ensure agents with no tools still have the stop tool\n\n- [cd57d12](https://github.com/bosun-ai/swiftide/commit/cd57d1207ced8651a277526d706bc3b7703912c0) *(openai)*  Opt-out streaming accumulated response and only get the delta ([#809](https://github.com/bosun-ai/swiftide/pull/809))\n\n- [da2d604](https://github.com/bosun-ai/swiftide/commit/da2d604e7e6209c83f382cf6de44f5f5c2042596) *(redb)*  Explicit lifetime in table definition\n\n### Miscellaneous\n\n- [7ac92a4](https://github.com/bosun-ai/swiftide/commit/7ac92a4f2ff4b1d1ba7e86c90c4f6c5c025cabc9) *(agents)*  Direct access to executor via context ([#794](https://github.com/bosun-ai/swiftide/pull/794))\n\n- [a21883b](https://github.com/bosun-ai/swiftide/commit/a21883b219a0079c1edc1d3c36d1d06ac906ba18) *(agents)*  
[**breaking**] Improved naming for existing messages and message history in default context\n\n**BREAKING CHANGE**: Improved naming for existing messages and message history in default context\n\n- [40bfa9c](https://github.com/bosun-ai/swiftide/commit/40bfa9c2d5685e54f247becb49698f8fdc347172) *(indexing)*  Implement ChunkerTransformer for closures\n\n- [c8d7ab9](https://github.com/bosun-ai/swiftide/commit/c8d7ab90c86e674d5df5f4985121e4e81d1e4a37) *(integrations)*  Improved warning when a qdrant collection exists\n\n- [d6769eb](https://github.com/bosun-ai/swiftide/commit/d6769eba0b87750fd3173ba73315973f720263ec) *(tree-sitter)*  Implement Eq, Hash and AsRefStr for SupportedLanguages\n\n- [04ec29d](https://github.com/bosun-ai/swiftide/commit/04ec29d7240a8542ccd1d530bb9b104bcd57631e)  Consistent logging for indexing pipeline ([#792](https://github.com/bosun-ai/swiftide/pull/792))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.26.0...0.27.0\n\n\n\n## [0.26.0](https://github.com/bosun-ai/swiftide/compare/v0.25.1...v0.26.0) - 2025-05-06\n\n### New features\n\n- [11051d5](https://github.com/bosun-ai/swiftide/commit/11051d5a1df6ea158ee84de274767fbdc70cc74e) *(agents)*  `tools` on `Agent` is now public and can be used in hooks\n\n- [ebe68c1](https://github.com/bosun-ai/swiftide/commit/ebe68c104b8198b80ee5ee1f451c3272ce36841c) *(integrations)*  Streaming chat completions for anthropic ([#773](https://github.com/bosun-ai/swiftide/pull/773))\n\n- [7f5b345](https://github.com/bosun-ai/swiftide/commit/7f5b345115a3443afc9b32ca54a292fae3f5d38b) *(integrations)*  Streaming chat completions for OpenAI ([#741](https://github.com/bosun-ai/swiftide/pull/741))\n\n- [e2278fb](https://github.com/bosun-ai/swiftide/commit/e2278fb133e51f15025e114135a2bc29157242ee) *(integrations)*  Customize common default settings for OpenAI requests ([#775](https://github.com/bosun-ai/swiftide/pull/775))\n\n- 
[c563cf2](https://github.com/bosun-ai/swiftide/commit/c563cf270c60957dbb948113fb2299ec5eb7ed58) *(treesitter)*  Add elixir support ([#776](https://github.com/bosun-ai/swiftide/pull/776))\n\n- [13ae991](https://github.com/bosun-ai/swiftide/commit/13ae991b632cc95d1ae0bc7107146a145af59c74)  Add usage to chat completion response ([#774](https://github.com/bosun-ai/swiftide/pull/774))\n\n### Bug fixes\n\n- [7836f9f](https://github.com/bosun-ai/swiftide/commit/7836f9ff31f2abeab966f80a91eab32054e61ff1) *(agents)*  Use an RwLock to properly close a running MCP server\n\n- [0831c98](https://github.com/bosun-ai/swiftide/commit/0831c982cd6bb0b442396268c0681c908b6dadc2) *(openai)*  Disable parallel tool calls by default\n\n### Miscellaneous\n\n- [18dc99c](https://github.com/bosun-ai/swiftide/commit/18dc99ca1f597586ffed36e163f04f7c3689d2be) *(integrations)*  Use generics for all openai variants ([#764](https://github.com/bosun-ai/swiftide/pull/764))\n\n- [2a9d062](https://github.com/bosun-ai/swiftide/commit/2a9d062c6e19721c49c6233690ac71e9e28b6a04) *(openai)*  Consistent exports across providers\n\n- [4df6dbf](https://github.com/bosun-ai/swiftide/commit/4df6dbf17fd4b87afc2cf7159c6518fcebc27438)  Export macros from main crate and enable them by default ([#778](https://github.com/bosun-ai/swiftide/pull/778))\n\n- [8b30fde](https://github.com/bosun-ai/swiftide/commit/8b30fde5e20ecbd4f0387c26e441d39f78ddca32)  Rust like it's 2024 ([#763](https://github.com/bosun-ai/swiftide/pull/763))\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.25.1...0.26.0\n\n\n\n## [0.25.1](https://github.com/bosun-ai/swiftide/compare/v0.25.0...v0.25.1) - 2025-04-17\n\n### Bug fixes\n\n- [7102091](https://github.com/bosun-ai/swiftide/commit/710209123ba6972cd11fb0f3d364c9c83478e184) *(agents)*  AgentBuilder and AgentBuilderError should be public\n\n\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.25.0...0.25.1\n\n\n# Changelog\r\n\r\nAll notable changes to this 
project will be documented in this file.\r\n\r\n## [0.25.0](https://github.com/bosun-ai/swiftide/compare/v0.24.0...v0.25.0) - 2025-04-16\r\n\r\n### New features\r\n\r\n- [4959ddf](https://github.com/bosun-ai/swiftide/commit/4959ddfe00e0424215dd9bd3e8a6acb579cc056c) *(agents)*  Restore agents from an existing message history ([#742](https://github.com/bosun-ai/swiftide/pull/742))\r\n\r\n- [6efd15b](https://github.com/bosun-ai/swiftide/commit/6efd15bf7b88d8f8656c4017676baf03a3bb510e) *(agents)*  Agents now take an Into Prompt when queried ([#743](https://github.com/bosun-ai/swiftide/pull/743))\r\n\r\n### Bug fixes\r\n\r\n- [5db4de2](https://github.com/bosun-ai/swiftide/commit/5db4de2f0deb2028f5ffaf28b4d26336840e908c) *(agents)*  Properly support nullable types for MCP tools ([#740](https://github.com/bosun-ai/swiftide/pull/740))\r\n\r\n- [dd2ca86](https://github.com/bosun-ai/swiftide/commit/dd2ca86b214e8268262075a513711d6b9c793115) *(agents)*  Do not log twice if mcp failed to stop\r\n\r\n- [5fea2e2](https://github.com/bosun-ai/swiftide/commit/5fea2e2acdca0782f88d4274bb8e106b48e1efe4) *(indexing)*  Split pipeline concurrently ([#749](https://github.com/bosun-ai/swiftide/pull/749))\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n- [0f2605a](https://github.com/bosun-ai/swiftide/commit/0f2605a61240d2c99e10ce6f5a91e6568343a78b)  Pretty print RAGAS output ([#745](https://github.com/bosun-ai/swiftide/pull/745))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.24.0...0.25.0\r\n\r\n\r\n\r\n## [0.24.0](https://github.com/bosun-ai/swiftide/compare/v0.23.0...v0.24.0) - 2025-04-11\r\n\r\n### New features\r\n\r\n- [3117fc6](https://github.com/bosun-ai/swiftide/commit/3117fc62c146b0bf0949adb3cfe4e6c7f40427f7)  Introduce LanguageModelError for LLM traits and an optional backoff decorator ([#630](https://github.com/bosun-ai/swiftide/pull/630))\r\n\r\n### Bug 
fixes\r\n\r\n- [0134dae](https://github.com/bosun-ai/swiftide/commit/0134daebef5d47035e986d30e1fa8f2c751c2c48) *(agents)*  Gracefully stop mcp service on drop ([#734](https://github.com/bosun-ai/swiftide/pull/734))\r\n\r\n### Miscellaneous\r\n\r\n- [e872c5b](https://github.com/bosun-ai/swiftide/commit/e872c5b24388754b371d9f0c7faad8647ad4733b)  Core test utils available behind feature flag ([#730](https://github.com/bosun-ai/swiftide/pull/730))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.23.0...0.24.0\r\n\r\n\r\n\r\n## [0.23.0](https://github.com/bosun-ai/swiftide/compare/v0.22.8...v0.23.0) - 2025-04-08\r\n\r\n### New features\r\n\r\n- [fca4165](https://github.com/bosun-ai/swiftide/commit/fca4165c5be4b14cdc3d20ed8215ef64c5fd69a9) *(agents)*  Return typed errors and yield error in `on_stop` ([#725](https://github.com/bosun-ai/swiftide/pull/725))\r\n\r\n- [29352e6](https://github.com/bosun-ai/swiftide/commit/29352e6d3dc51779f3202e0e9936bf72e0b61605) *(agents)*  Add `on_stop` hook and `stop` now takes a `StopReason` ([#724](https://github.com/bosun-ai/swiftide/pull/724))\r\n\r\n- [a85cd8e](https://github.com/bosun-ai/swiftide/commit/a85cd8e2d014f198685ee6bfcfdf17f7f34acf91) *(macros)*  Support generics in Derive for tools ([#720](https://github.com/bosun-ai/swiftide/pull/720))\r\n\r\n- [52c44e9](https://github.com/bosun-ai/swiftide/commit/52c44e9b610c0ba4bf144881c36eacc3a0d10e53)  Agent mcp client support  ([#658](https://github.com/bosun-ai/swiftide/pull/658))\r\n\r\n````text\r\nAdds support for agents to use tools from MCP servers. All transports\r\n  are supported via the `rmcp` crate.\r\n\r\n  Additionally adds the possibility to add toolboxes to agents (of which\r\n  MCP is one). Tool boxes declare their available tools at runtime.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [69706ec](https://github.com/bosun-ai/swiftide/commit/69706ec6630b70ea9d332c151637418736437a99)  [**breaking**] Remove templates ([#716](https://github.com/bosun-ai/swiftide/pull/716))\r\n\r\n````text\r\nTemplate / prompt interface got confusing and bloated. This removes\r\n  `Template` fully, and changes Prompt such that it can either refer to a\r\n  one-off prompt, or to a named template compiled into the swiftide\r\n  repository.\r\n````\r\n\r\n**BREAKING CHANGE**: This removes `Template` from Swiftide and simplifies\r\nthe whole setup significantly. The internal Swiftide Tera repository can\r\nstill be extended, just like with Templates. Same behaviour with less code\r\nand abstractions.\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.8...0.23.0\r\n\r\n\r\n\r\n## [0.22.8](https://github.com/bosun-ai/swiftide/compare/v0.22.7...v0.22.8) - 2025-04-02\r\n\r\n### Bug fixes\r\n\r\n- [6b4dfca](https://github.com/bosun-ai/swiftide/commit/6b4dfca822f39b3700d60e6ea31b9b48ccd6d56f)  Tool macros should work with latest darling version ([#712](https://github.com/bosun-ai/swiftide/pull/712))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.7...0.22.8\r\n\r\n\r\n\r\n## [0.22.7](https://github.com/bosun-ai/swiftide/compare/v0.22.6...v0.22.7) - 2025-03-30\r\n\r\n### Bug fixes\r\n\r\n- [b0001fb](https://github.com/bosun-ai/swiftide/commit/b0001fbb12cf6bb85fc4d5a8ef0968219e8c78db) *(duckdb)*  Upsert is now opt in as it requires duckdb >= 1.2 ([#708](https://github.com/bosun-ai/swiftide/pull/708))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.6...0.22.7\r\n\r\n\r\n\r\n## [0.22.6](https://github.com/bosun-ai/swiftide/compare/v0.22.5...v0.22.6) - 2025-03-27\r\n\r\n### New features\r\n\r\n- [a05b3c8](https://github.com/bosun-ai/swiftide/commit/a05b3c8e7c4224c060215c34490b2ea7729592bf) 
*(macros)*  Support optional values and make them even nicer to use ([#703](https://github.com/bosun-ai/swiftide/pull/703))\r\n\r\n### Bug fixes\r\n\r\n- [1866d5a](https://github.com/bosun-ai/swiftide/commit/1866d5a081f40123e607208d04403fb98f34c057) *(integrations)*  Loosen up duckdb requirements even more and make it more flexible for version requirements ([#706](https://github.com/bosun-ai/swiftide/pull/706))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.5...0.22.6\r\n\r\n\r\n\r\n## [0.22.5](https://github.com/bosun-ai/swiftide/compare/v0.22.4...v0.22.5) - 2025-03-23\r\n\r\n### New features\r\n\r\n- [eb4e044](https://github.com/bosun-ai/swiftide/commit/eb4e0442293e17722743aa2b88d8dd7582dd9236)  Estimate tokens for OpenAI-like APIs with tiktoken-rs ([#699](https://github.com/bosun-ai/swiftide/pull/699))\r\n\r\n### Miscellaneous\r\n\r\n- [345c57a](https://github.com/bosun-ai/swiftide/commit/345c57a663dd0d315a28f0927c5d598ba21d019d)  Improve file loader logging ([#695](https://github.com/bosun-ai/swiftide/pull/695))\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.4...0.22.5\r\n\r\n\r\n\r\n## [0.22.4](https://github.com/bosun-ai/swiftide/compare/v0.22.3...v0.22.4) - 2025-03-17\r\n\r\n### Bug fixes\r\n\r\n- [4ec00bb](https://github.com/bosun-ai/swiftide/commit/4ec00bb0fed214f27629f32569406bfa2c786dd7) *(integrations)*  Add chrono/utc feature flag when using qdrant ([#684](https://github.com/bosun-ai/swiftide/pull/684))\r\n\r\n````text\r\nThe Qdrant integration calls chrono::Utc::now(), which requires the now\r\n  feature flag to be enabled in the chrono crate when using qdrant\r\n````\r\n\r\n- [0b204d9](https://github.com/bosun-ai/swiftide/commit/0b204d90a68978bb4b75516c537a56d665771c55)  Ensure `groq`, `fastembed`, `test-utils` features compile individually 
([#689](https://github.com/bosun-ai/swiftide/pull/689))\r\n\r\n### Miscellaneous\r\n\r\n- [bd4ef97](https://github.com/bosun-ai/swiftide/commit/bd4ef97f2b9207b5ac03d610b76bdb3440e3d5c0)  Include filenames in errors in file io ([#694](https://github.com/bosun-ai/swiftide/pull/694))\r\n\r\n````text\r\nUses fs-err crate to automatically include filenames in the error\r\n  messages\r\n````\r\n\r\n- [9453e06](https://github.com/bosun-ai/swiftide/commit/9453e06d5338c99cec5f51b085739cc30a5f12be)  Use std::sync::Mutex instead of tokio mutex ([#693](https://github.com/bosun-ai/swiftide/pull/693))\r\n\r\n- [b3456e2](https://github.com/bosun-ai/swiftide/commit/b3456e25af99f661aff1779ae5f2d4da460f128c)  Log qdrant setup messages at debug level ([#696](https://github.com/bosun-ai/swiftide/pull/696))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.3...0.22.4\r\n\r\n\r\n\r\n## [0.22.3](https://github.com/bosun-ai/swiftide/compare/v0.22.2...v0.22.3) - 2025-03-13\r\n\r\n### Miscellaneous\r\n\r\n- [834fcd3](https://github.com/bosun-ai/swiftide/commit/834fcd3b2270904bcfe8998a7015de15626128a8)  Update duckdb to 1.2.1 ([#680](https://github.com/bosun-ai/swiftide/pull/680))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.2...0.22.3\r\n\r\n\r\n\r\n## [0.22.2](https://github.com/bosun-ai/swiftide/compare/v0.22.1...v0.22.2) - 2025-03-11\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n- [e1c097d](https://github.com/bosun-ai/swiftide/commit/e1c097da885374ec9320c1847a7dda7c5d9d41cb)  Disable default features on all dependencies ([#675](https://github.com/bosun-ai/swiftide/pull/675))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.1...0.22.2\r\n\r\n\r\n\r\n## 
[0.22.1](https://github.com/bosun-ai/swiftide/compare/v0.22.0...v0.22.1) - 2025-03-09\r\n\r\n### New features\r\n\r\n- [474d612](https://github.com/bosun-ai/swiftide/commit/474d6122596e71132e35fcb181302dfed7794561) *(integrations)*  Add Duckdb support ([#578](https://github.com/bosun-ai/swiftide/pull/578))\r\n\r\n````text\r\nAdds support for Duckdb. Persist, Retrieve (Simple and Custom), and\r\n  NodeCache are implemented. Metadata and full upsert are not. Once 1.2\r\n  has its issues fixed, it's easy to add.\r\n````\r\n\r\n- [4cf417c](https://github.com/bosun-ai/swiftide/commit/4cf417c6a818fbec2641ad6576b4843412902bf6) *(treesitter)*  C and C++ support for splitter only ([#663](https://github.com/bosun-ai/swiftide/pull/663))\r\n\r\n\r\n### Bug fixes\r\n\r\n- [590eaeb](https://github.com/bosun-ai/swiftide/commit/590eaeb3c6b5c14c56c925e038528326f88508a1) *(integrations)*  Make openai parallel_tool_calls an Option ([#664](https://github.com/bosun-ai/swiftide/pull/664))\r\n\r\n````text\r\no3-mini needs to omit parallel_tool_calls - so we need to allow for a\r\n  None option to not include that field\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n- [d864c7e](https://github.com/bosun-ai/swiftide/commit/d864c7e72ba01d3f187e4f6ab6ad3e6244ae0dc4)  Downgrade duckdb to 1.1.1 and fix ci ([#671](https://github.com/bosun-ai/swiftide/pull/671))\r\n\r\n- [9b685b3](https://github.com/bosun-ai/swiftide/commit/9b685b3281d9694c5faa58890a9aba32cba90f1c)  Update and loosen deps ([#670](https://github.com/bosun-ai/swiftide/pull/670))\r\n\r\n- [a64ca16](https://github.com/bosun-ai/swiftide/commit/a64ca1656b903a680cc70ac7b33ac40d9d356d4a)  Tokio_stream features should include `time`\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.22.0...0.22.1\r\n\r\n\r\n\r\n## [0.22.0](https://github.com/bosun-ai/swiftide/compare/v0.21.1...v0.22.0) - 2025-03-03\r\n\r\n### New 
features\r\n\r\n- [a754846](https://github.com/bosun-ai/swiftide/commit/a7548463367023d3e5a3a25dd84f06632b372f18) *(agents)*  Implement Serialize and Deserialize for chat messages\r\n\r\n````text\r\nPersist, retry later, evaluate completions in a script, you name it.\r\n````\r\n\r\n- [0a592c6](https://github.com/bosun-ai/swiftide/commit/0a592c67621f3eba4ad6e0bfd5a539e19963cf17) *(indexing)*  Add `iter()` for file loader ([#655](https://github.com/bosun-ai/swiftide/pull/655))\r\n\r\n````text\r\nAllows playing with the iterator outside of the stream.\r\n\r\n  Relates to https://github.com/bosun-ai/kwaak/issues/337\r\n````\r\n\r\n- [57116e9](https://github.com/bosun-ai/swiftide/commit/57116e9a30c722f47398be61838cc1ef4d0bbfac)  Groq ChatCompletion ([#650](https://github.com/bosun-ai/swiftide/pull/650))\r\n\r\n````text\r\nUse the new generics to _just-make-it-work_.\r\n````\r\n\r\n- [4fd3259](https://github.com/bosun-ai/swiftide/commit/4fd325921555a14552e33b2481bc9dfcf0c313fc)  Continue Agent on Tool Failure ([#628](https://github.com/bosun-ai/swiftide/pull/628))\r\n\r\n````text\r\nEnsure tool calls and responses are always balanced, even when the tool retry limit is reached\r\n  https://github.com/bosun-ai/kwaak/issues/313\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.21.1...0.22.0\r\n\r\n\r\n\r\n## [0.21.1](https://github.com/bosun-ai/swiftide/compare/v0.21.0...v0.21.1) - 2025-02-28\r\n\r\n### Bug fixes\r\n\r\n- [f418c5e](https://github.com/bosun-ai/swiftide/commit/f418c5ee2f0d3ee87fb3715ec6b1d7ecc80bf714) *(ci)*  Run just a single real rerank test to please the flaky gods\r\n\r\n- [e387e82](https://github.com/bosun-ai/swiftide/commit/e387e826200e1bc0a608e1f680537751cfc17969) *(lancedb)*  Update Lancedb to 0.17 and pin Arrow to a lower version\r\n\r\n### Miscellaneous\r\n\r\n- 
[0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.21.0...0.21.1\r\n\r\n\r\n\r\n## [0.21.0](https://github.com/bosun-ai/swiftide/compare/v0.20.1...v0.21.0) - 2025-02-25\r\n\r\n### New features\r\n\r\n- [12a9873](https://github.com/bosun-ai/swiftide/commit/12a98736ab171c25d860000bb95b1e6e318758fb) *(agents)*  Improve flexibility for tool generation (#641)\r\n\r\n````text\r\nPreviously, ToolSpec and name in the `Tool` trait only worked with\r\n  static values. With these changes, there is a lot more flexibility,\r\n  allowing for e.g. run-time tool generation.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.20.1...0.21.0\r\n\r\n\r\n\r\n## [0.20.1](https://github.com/bosun-ai/swiftide/compare/v0.20.0...v0.20.1) - 2025-02-21\r\n\r\n### Bug fixes\r\n\r\n- [0aa1248](https://github.com/bosun-ai/swiftide/commit/0aa124819d836f37d1fcaf88e6f88b5affb46cf9) *(indexing)*  Handle invalid utf-8 in fileloader lossy (#632)\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.20.0...0.20.1\r\n\r\n\r\n\r\n## [0.20.0](https://github.com/bosun-ai/swiftide/compare/v0.19.0...v0.20.0) - 2025-02-18\r\n\r\n### New features\r\n\r\n- [5d85d14](https://github.com/bosun-ai/swiftide/commit/5d85d142339d24c793bd89a907652bede0d1c94d) *(agents)*  Add support for numbers, arrays and booleans in tool args (#562)\r\n\r\n````text\r\nAdd support for numbers, arrays and boolean types in the\r\n  `#[swiftide_macros::tool]` attribute macro. 
For enum and object a custom\r\n  implementation is now properly supported as well, but not via the macro.\r\n  For now, tools using Derive also still need a custom implementation.\r\n````\r\n\r\n- [b09afed](https://github.com/bosun-ai/swiftide/commit/b09afed72d463d8b59ffa2b325eb6a747c88c87f) *(query)*  Add support for reranking with `Fastembed` and multi-document retrieval (#508)\r\n\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.19.0...0.20.0\r\n\r\n\r\n\r\n## [0.19.0](https://github.com/bosun-ai/swiftide/compare/v0.18.2...v0.19.0) - 2025-02-13\r\n\r\n### New features\r\n\r\n- [fa5112c](https://github.com/bosun-ai/swiftide/commit/fa5112c9224fdf5984d26db669f04dedc8ebb561) *(agents)*  By default retry failed tools with LLM up to 3 times (#609)\r\n\r\n````text\r\nSpecifically meant for LLMs sending invalid JSON, these tool calls are\r\n  now retried by feeding back the error into the LLM up to a limit\r\n  (default 3).\r\n````\r\n\r\n- [14f4778](https://github.com/bosun-ai/swiftide/commit/14f47780b4294be3a9fa3670aa18a952ad7e9d6e) *(integrations)*  Parallel tool calling in OpenAI is now configurable (#611)\r\n\r\n````text\r\nAdds support for reasoning models in agents and for chat completions.\r\n````\r\n\r\n- [37a1a2c](https://github.com/bosun-ai/swiftide/commit/37a1a2c7bfd152db56ed929e0ea1ab99080e640d) *(integrations)*  Add system prompts as `system` instead of message in Anthropic requests\r\n\r\n### Bug fixes\r\n\r\n- [ab27c75](https://github.com/bosun-ai/swiftide/commit/ab27c75b8f4a971cb61e88b26d94231afd35c871) *(agents)*  Add back anyhow catch all for failed tools\r\n\r\n- [2388f18](https://github.com/bosun-ai/swiftide/commit/2388f187966d996ede4ff42c71521238b63d129c) *(agents)*  Use name/arg hash on tool retries (#612)\r\n\r\n- 
[da55664](https://github.com/bosun-ai/swiftide/commit/da5566473e3f8874fce427ceb48a15d002737d07) *(integrations)*  Scraper should stop when finished (#614)\r\n\r\n### Miscellaneous\r\n\r\n- [990a8ea](https://github.com/bosun-ai/swiftide/commit/990a8eaeffdbd447bb05a0b01aa65a39a7c9cacf) *(deps)*  Update tree-sitter (#616)\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.18.2...0.19.0\r\n\r\n\r\n\r\n## [0.18.2](https://github.com/bosun-ai/swiftide/compare/v0.18.1...v0.18.2) - 2025-02-11\r\n\r\n### New features\r\n\r\n- [50ffa15](https://github.com/bosun-ai/swiftide/commit/50ffa156e28bb085a61a376bab71c135bc09622f)  Anthropic support for prompts and agents (#602)\r\n\r\n### Bug fixes\r\n\r\n- [8cf70e0](https://github.com/bosun-ai/swiftide/commit/8cf70e08787d1376ba20001cc9346767d8bd84ef) *(integrations)*  Ensure anthropic tool call format is consistent with specs\r\n\r\n### Miscellaneous\r\n\r\n- [98176c6](https://github.com/bosun-ai/swiftide/commit/98176c603b61e3971ca5583f9f4346eb5b962d51)  Clippy\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.18.1...0.18.2\r\n\r\n\r\n\r\n## [0.18.1](https://github.com/bosun-ai/swiftide/compare/v0.18.0...v0.18.1) - 2025-02-09\r\n\r\n### New features\r\n\r\n- [78bf0e0](https://github.com/bosun-ai/swiftide/commit/78bf0e004049c852d4e32c0cd67725675b1250f9) *(agents)*  Add optional limit for agent iterations (#599)\r\n\r\n- [592e5a2](https://github.com/bosun-ai/swiftide/commit/592e5a2ca4b0f09ba6a9b20cef105539cb7a7909) *(integrations)*  Support Azure openai via generics (#596)\r\n\r\n- [c8f2eed](https://github.com/bosun-ai/swiftide/commit/c8f2eed9964341ac2dad611fc730dc234436430a) *(tree-sitter)*  Add solidity support (#597)\r\n\r\n\r\n**Full Changelog**: 
https://github.com/bosun-ai/swiftide/compare/0.18.0...0.18.1\r\n\r\n\r\n\r\n## [0.18.0](https://github.com/bosun-ai/swiftide/compare/v0.17.5...v0.18.0) - 2025-02-01\r\n\r\n### New features\r\n\r\n- [de46656](https://github.com/bosun-ai/swiftide/commit/de46656f80c5cf68cc192d21b5f34eb3e0667a14) *(agents)*  Add `on_start` hook (#586)\r\n\r\n- [c551f1b](https://github.com/bosun-ai/swiftide/commit/c551f1becfd1750ce480a00221a34908db61e42f) *(integrations)*  OpenRouter support (#589)\r\n\r\n````text\r\nAdds OpenRouter support. OpenRouter allows you to use any LLM via their\r\n  own api (with a minor upsell).\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [3ea5839](https://github.com/bosun-ai/swiftide/commit/3ea583971c0d2cc5ef0594eaf764ea149bacd1d8) *(redb)*  Disable per-node tracing\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.lock dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.17.5...0.18.0\r\n\r\n\r\n\r\n## [0.17.5](https://github.com/bosun-ai/swiftide/compare/v0.17.4...v0.17.5) - 2025-01-27\r\n\r\n### New features\r\n\r\n- [825a52e](https://github.com/bosun-ai/swiftide/commit/825a52e70a74e4621d370485346a78d61bf5d7a9) *(agents)*  Tool description now also accepts paths (i.e. 
a const) (#580)\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.lock dependencies\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.17.4...0.17.5\r\n\r\n\r\n\r\n## [0.17.4](https://github.com/bosun-ai/swiftide/compare/v0.17.3...v0.17.4) - 2025-01-24\r\n\r\n### Bug fixes\r\n\r\n- [0d9e250](https://github.com/bosun-ai/swiftide/commit/0d9e250e2512fe9c66d5dfd2ac688dcd56bd07e9) *(tracing)*  Use `or_current()` to prevent orphaned tracing spans (#573)\r\n\r\n````text\r\nWhen a span is emitted that would be selected by the subscriber, but we\r\n  instrument its closure with a span that would not be selected by the\r\n  subscriber, the span would be emitted as an orphan (with a new\r\n  `trace_id`) making them hard to find and cluttering dashboards.\r\n\r\n  This situation is also documented here:\r\n  https://docs.rs/tracing/latest/tracing/struct.Span.html#method.or_current\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.17.3...0.17.4\r\n\r\n\r\n\r\n## [0.17.3](https://github.com/bosun-ai/swiftide/compare/v0.17.2...v0.17.3) - 2025-01-24\r\n\r\n### New features\r\n\r\n- [8e22442](https://github.com/bosun-ai/swiftide/commit/8e2244241f16fff77591cf04f40725ad0b05ca81) *(integrations)*  Support Qdrant 1.13 (#571)\r\n\r\n### Bug fixes\r\n\r\n- [c5408a9](https://github.com/bosun-ai/swiftide/commit/c5408a96fbed6207022eb493da8d2cbb0fea7ca6) *(agents)*  Io::Error should always be a NonZeroExit error for tool executors (#570)\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.lock dependencies\r\n\r\n\r\n**Full Changelog**: 
https://github.com/bosun-ai/swiftide/compare/0.17.2...0.17.3\r\n\r\n\r\n\r\n## [0.17.2](https://github.com/bosun-ai/swiftide/compare/v0.17.1...v0.17.2) - 2025-01-21\r\n\r\n### Bug fixes\r\n\r\n- [47db5ab](https://github.com/bosun-ai/swiftide/commit/47db5ab138384a6c235a90024470e9ab96751cc8) *(agents)*  Redrive uses the correct pointer and works as intended\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.17.1...0.17.2\r\n\r\n\r\n\r\n## [0.17.1](https://github.com/bosun-ai/swiftide/compare/v0.17.0...v0.17.1) - 2025-01-20\r\n\r\n### New features\r\n\r\n- [e4e4468](https://github.com/bosun-ai/swiftide/commit/e4e44681b65b07b5f1e987ce468bdcda61eb30da) *(agents)*  Implement AgentContext for smart dyn pointers\r\n\r\n- [70181d9](https://github.com/bosun-ai/swiftide/commit/70181d9642aa2c0a351b9f42be1a8cdbd83c9075) *(agents)*  Add pub accessor for agent context (#558)\r\n\r\n- [274d9d4](https://github.com/bosun-ai/swiftide/commit/274d9d46f39ac2e28361c4881c6f8f7e20dd8753) *(agents)*  Preprocess tool calls to fix common, fixable errors (#560)\r\n\r\n````text\r\nOpenAI has a tendency to sometimes send double keys. With this, Swiftide\r\n  will now take the first key and ignore any duplicates after that. 
Sets the stage for any future preprocessing before it gets strictly parsed by serde.\r\n````\r\n\r\n- [0f0f491](https://github.com/bosun-ai/swiftide/commit/0f0f491b2621ad82389a57bdb521fcf4021b7d7a) *(integrations)*  Add Dashscope support (#543)\r\n\r\n### Bug fixes\r\n\r\n- [b2b15ac](https://github.com/bosun-ai/swiftide/commit/b2b15ac073e4f6b035239791a056fbdf6f6e704e) *(openai)*  Enable strict mode for tool calls (#561)\r\n\r\n````text\r\nEnsures OpenAI sticks much better to the schema and avoids accidental\r\n  mistakes.\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.17.0...0.17.1\r\n\r\n\r\n\r\n## [0.17.0](https://github.com/bosun-ai/swiftide/compare/v0.16.4...v0.17.0) - 2025-01-16\r\n\r\n### New features\r\n\r\n- [835c35e](https://github.com/bosun-ai/swiftide/commit/835c35e7d74811daa90f7ca747054d1919633058) *(agents)*  Redrive completions manually on failure (#551)\r\n\r\n````text\r\nSometimes LLMs fail a completion without deterministic errors, or there is the\r\n  use case where you just want to retry. 
`redrive` can now be called on a\r\n  context, popping any new messages (if any), and making the messages\r\n  available again to the agent.\r\n````\r\n\r\n- [f83f3f0](https://github.com/bosun-ai/swiftide/commit/f83f3f03bbf6a9591b54521dde91bf1a5ed19c5c) *(agents)*  Implement ToolExecutor for common dyn pointers (#549)\r\n\r\n- [7f85735](https://github.com/bosun-ai/swiftide/commit/7f857358e46e825494ba927dffb33c3afa0d762e) *(query)*  Add custom lancedb query generation for lancedb search (#518)\r\n\r\n- [ce4e34b](https://github.com/bosun-ai/swiftide/commit/ce4e34be42ce1a0ab69770d03695bd67f99a8739) *(tree-sitter)*  Add golang support (#552)\r\n\r\n````text\r\nSeems someone conveniently forgot to add Golang support for the\r\n  splitter.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.lock dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.16.4...0.17.0\r\n\r\n\r\n\r\n## [0.16.4](https://github.com/bosun-ai/swiftide/compare/v0.16.3...v0.16.4) - 2025-01-12\r\n\r\n### New features\r\n\r\n- [c919484](https://github.com/bosun-ai/swiftide/commit/c9194845faa12b8a0fcecdd65f8ec9d3d221ba08)  Ollama via async-openai with chatcompletion support (#545)\r\n\r\n````text\r\nAdds support for chatcompletions (agents) for ollama. 
SimplePrompt and embeddings now use async-openai underneath.\r\n\r\n  Copy pasted as I expect some differences in the future.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.16.3...0.16.4\r\n\r\n\r\n\r\n## [0.16.3](https://github.com/bosun-ai/swiftide/compare/v0.16.2...v0.16.3) - 2025-01-10\r\n\r\n### New features\r\n\r\n- [b66bd79](https://github.com/bosun-ai/swiftide/commit/b66bd79070772d7e1bfe10a22531ccfd6501fc2a) *(fastembed)*  Add support for jina v2 code (#541)\r\n\r\n````text\r\nAdd support for jina v2 code in fastembed.\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.16.2...0.16.3\r\n\r\n\r\n\r\n## [0.16.2](https://github.com/bosun-ai/swiftide/compare/v0.16.1...v0.16.2) - 2025-01-08\r\n\r\n### Bug fixes\r\n\r\n- [2226755](https://github.com/bosun-ai/swiftide/commit/2226755f367d9006870a2dea2063655a7901d427)  Explicit cast on tools to Box<dyn> to make analyzer happy (#536)\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.16.1...0.16.2\r\n\r\n\r\n\r\n## [0.16.1](https://github.com/bosun-ai/swiftide/compare/v0.16.0...v0.16.1) - 2025-01-06\r\n\r\n### Bug fixes\r\n\r\n- [d198bb0](https://github.com/bosun-ai/swiftide/commit/d198bb0807f5d5b12a51bc76721cc945be8e65b9) *(prompts)*  Skip rendering prompts if no context and forward as is (#530)\r\n\r\n````text\r\nFixes an issue if strings suddenly include jinja style values by\r\n  mistake. 
Bonus performance boost.\r\n````\r\n\r\n- [4e8d59f](https://github.com/bosun-ai/swiftide/commit/4e8d59fbc0fbe72dd0f8d6a95e6e335280eb88e3) *(redb)*  Log errors and return uncached instead of panicking (#531)\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.16.0...0.16.1\r\n\r\n\r\n\r\n## [0.16.0](https://github.com/bosun-ai/swiftide/compare/v0.15.0...v0.16.0) - 2025-01-02\r\n\r\n### New features\r\n\r\n- [52e341e](https://github.com/bosun-ai/swiftide/commit/52e341ee9777d04f9fb07054980ba087c55c033e) *(lancedb)*  Public method for opening table (#514)\r\n\r\n- [3254bd3](https://github.com/bosun-ai/swiftide/commit/3254bd34d0eeb038c8aa6ea56ac2940b3ca81960) *(query)*  Generic templates with document rendering (#520)\r\n\r\n````text\r\nReworks `PromptTemplate` to a more generic `Template`, such that they\r\n  can also be used elsewhere. This deprecates `PromptTemplate`.\r\n\r\n  As an example, there is now an optional `Template` in the `Simple` answer\r\n  transformer, which can be used to customize the output of retrieved\r\n  documents. This has excellent synergy with the metadata changes in #504.\r\n````\r\n\r\n- [235780b](https://github.com/bosun-ai/swiftide/commit/235780b941a0805b69541f0f4c55c3404091baa8) *(query)*  Documents as first class citizens (#504)\r\n\r\n````text\r\nFor simple RAG, just adding the content of a retrieved document might be\r\n  enough. However, in more complex use cases, you might want to add\r\n  metadata as well, as-is or for conditional formatting.\r\n\r\n  For instance, when dealing with large amounts of chunked code, providing\r\n  the path goes a long way. If the generated metadata is good enough, it could be\r\n  useful as well.\r\n\r\n  With this, retrieved Documents are treated as first class citizens,\r\n  including any metadata as well. 
Additionally, this also paves the way\r\n  for multi-retrieval (and multi-modal).\r\n````\r\n\r\n- [584695e](https://github.com/bosun-ai/swiftide/commit/584695e4841a3c9341e521b81e9f254270b3416e) *(query)*  Add custom SQL query generation for pgvector search (#478)\r\n\r\n````text\r\nAdds support for custom retrieval queries with the sqlx query builder for PGVector. Lays down the fundamentals for custom query building for any retriever.\r\n````\r\n\r\n- [b55bf0b](https://github.com/bosun-ai/swiftide/commit/b55bf0b318042459a6983cf725078c4da662618b) *(redb)*  Public database and table definition (#510)\r\n\r\n- [176378f](https://github.com/bosun-ai/swiftide/commit/176378f846ddecc3ddba74f6b423338b793f29b4)  Implement traits for all Arc dynamic dispatch (#513)\r\n\r\n````text\r\nIf you use e.g. a `Persist` or a `NodeCache` outside swiftide as well, and you already have it Arc'ed, now it just works.\r\n````\r\n\r\n- [dc9881e](https://github.com/bosun-ai/swiftide/commit/dc9881e48da7fb5dc744ef33b1c356b4152d00d3)  Allow opt out of pipeline debug truncation\r\n\r\n### Bug fixes\r\n\r\n- [2831101](https://github.com/bosun-ai/swiftide/commit/2831101daa2928b5507116d9eb907d98fb77bf50) *(lancedb)*  Metadata should be nullable in lancedb (#515)\r\n\r\n- [c35df55](https://github.com/bosun-ai/swiftide/commit/c35df5525d4d88cfb9ada89a060e1ab512b471af) *(macros)*  Explicit box dyn cast fixing Rust Analyzer troubles (#523)\r\n\r\n### Miscellaneous\r\n\r\n- [1bbbb0e](https://github.com/bosun-ai/swiftide/commit/1bbbb0e548cafa527c34856bd9ac6f76aca2ab5f)  Clippy\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.15.0...0.16.0\r\n\r\n\r\n\r\n## [0.15.0](https://github.com/bosun-ai/swiftide/compare/v0.14.4...v0.15.0) - 2024-12-23\r\n\r\n### New features\r\n\r\n- [a1b9a2d](https://github.com/bosun-ai/swiftide/commit/a1b9a2d37715420d3e2cc80d731e3713a22c7c50) *(query)*  Ensure concrete names for transformations are used when debugging (#496)\r\n\r\n- 
[7779c44](https://github.com/bosun-ai/swiftide/commit/7779c44de3581ac865ac808637c473525d27cabb) *(query)*  Ensure query pipeline consistently debug logs in all other stages too\r\n\r\n- [55dde88](https://github.com/bosun-ai/swiftide/commit/55dde88df888b60a7ccae5a68ba03d20bc1f57df) *(query)*  Debug full retrieved documents when debug mode is enabled (#495)\r\n\r\n- [66031ba](https://github.com/bosun-ai/swiftide/commit/66031ba27b946add0533775423d468abb3187604) *(query)*  Log query pipeline answer on debug (#497)\r\n\r\n### Miscellaneous\r\n\r\n- [d255772](https://github.com/bosun-ai/swiftide/commit/d255772cc933c839e3aaaffccd343acf75dcb251) *(agents)*  Rename `CommandError::FailedWithOutput` to `CommandError::NonZeroExit` (#484)\r\n\r\n````text\r\nBetter describes what is going on. E.g. `rg` exits with 1 if nothing is\r\n  found, and tests generally do the same if they fail.\r\n````\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.14.4...0.15.0\r\n\r\n\r\n\r\n## [0.14.4](https://github.com/bosun-ai/swiftide/compare/v0.14.3...v0.14.4) - 2024-12-11\r\n\r\n### New features\r\n\r\n- [7211559](https://github.com/bosun-ai/swiftide/commit/7211559936d8b5e16a3b42f9c90b42a39426be8a) *(agents)*  **EXPERIMENTAL** Agents in Swiftide (#463)\r\n\r\n````text\r\nAgents are coming to Swiftide! We are still ironing out all the kinks,\r\n  while we make it ready for a proper release. You can already experiment\r\n  with agents, see the rustdocs for documentation and an example in\r\n  `/examples`, and feel free to contact us via GitHub or Discord. Better\r\n  documentation, examples, and tutorials are coming soon.\r\n\r\n  Run completions in a loop, define tools with two handy macros, customize\r\n  the agent by hooking in on lifecycle events, and much more.\r\n\r\n  Besides documentation, expect a big release for what we built this for\r\n  soon! 
🎉\r\n````\r\n\r\n- [3751f49](https://github.com/bosun-ai/swiftide/commit/3751f49201c71398144a8913a4443f452534def2) *(query)*  Add support for single embedding retrieval with PGVector (#406)\r\n\r\n### Miscellaneous\r\n\r\n- [5ce4d21](https://github.com/bosun-ai/swiftide/commit/5ce4d21725ff9b0bb7f9da8fe026075fde9fc9a5)  Clippy and deps fixes for 1.83 (#467)\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.14.3...0.14.4\r\n\r\n\r\n\r\n## [0.14.3](https://github.com/bosun-ai/swiftide/compare/v0.14.2...v0.14.3) - 2024-11-20\r\n\r\n### New features\r\n\r\n- [1774b84](https://github.com/bosun-ai/swiftide/commit/1774b84f00a83fe69af4a2b6a6daf397d4d9b32d) *(integrations)*  Add PGVector support for indexing ([#392](https://github.com/bosun-ai/swiftide/pull/392))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.14.2...0.14.3\r\n\r\n\r\n\r\n## [0.14.2](https://github.com/bosun-ai/swiftide/compare/v0.14.1...v0.14.2) - 2024-11-08\r\n\r\n### Bug fixes\r\n\r\n- [3924322](https://github.com/bosun-ai/swiftide/commit/39243224d739a76cf2b60204fc67819055b7bc6f) *(querying)*  Query pipeline is now properly send and sync when possible ([#425](https://github.com/bosun-ai/swiftide/pull/425))\r\n\r\n### Miscellaneous\r\n\r\n- [52198f7](https://github.com/bosun-ai/swiftide/commit/52198f7fe76376a42c1fec8945bda4bf3e6971d4)  Improve local dev build speed ([#434](https://github.com/bosun-ai/swiftide/pull/434))\r\n\r\n````text\r\n- **Tokio on rt-multi-thread only**\r\n  - **Remove manual checks from lancedb integration test**\r\n  - **Ensure all deps in workspace manifest**\r\n  - **Remove unused deps**\r\n  - **Remove examples and benchmarks from default members**\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.14.1...0.14.2\r\n\r\n\r\n\r\n## [0.14.1](https://github.com/bosun-ai/swiftide/compare/v0.14.0...v0.14.1) - 2024-10-27\r\n\r\n### Bug fixes\r\n\r\n- 
[5bbcd55](https://github.com/bosun-ai/swiftide/commit/5bbcd55de65d73d7908e91c96f120928edb6b388)  Revert 0.14 release as mistralrs is unpublished ([#417](https://github.com/bosun-ai/swiftide/pull/417))\r\n\r\n````text\r\nRevert the 0.14 release as `mistralrs` is unpublished and unfortunately\r\n  cannot be released.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [07c2661](https://github.com/bosun-ai/swiftide/commit/07c2661b7a7cdf75cdba12fab0ca91866793f727)  Re-release 0.14 without mistralrs ([#419](https://github.com/bosun-ai/swiftide/pull/419))\r\n\r\n````text\r\n- **Revert \"fix: Revert 0.14 release as mistralrs is unpublished\r\n  ([#417](https://github.com/bosun-ai/swiftide/pull/417))\"**\r\n  - **Fix changelog**\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.14.0...0.14.1\r\n\r\n\r\n\r\n## [0.14.0](https://github.com/bosun-ai/swiftide/compare/v0.13.4...v0.14.0) - 2024-10-27\r\n\r\n### Bug fixes\r\n\r\n- [551a9cb](https://github.com/bosun-ai/swiftide/commit/551a9cb769293e42e15bae5dca3ab677be0ee8ea) *(indexing)*  [**breaking**] Node ID no longer memoized ([#414](https://github.com/bosun-ai/swiftide/pull/414))\r\n\r\n````text\r\nAs @shamb0 pointed out in [#392](https://github.com/bosun-ai/swiftide/pull/392), there is a potential issue where Node\r\n  ids get cached before chunking or other transformations, breaking\r\n  upserts and potentially resulting in data loss.\r\n````\r\n\r\n**BREAKING CHANGE**: This PR reworks Nodes with a builder API and a private\r\nid. Hence, manually creating nodes no longer works. 
In the future, all\r\nthe fields are likely to follow the same pattern, so that we can\r\ndecouple the inner fields from the Node's implementation.\r\n\r\n- [c091ffa](https://github.com/bosun-ai/swiftide/commit/c091ffa6be792b0bd7bb03d604e26e40b2adfda8) *(indexing)*  Use atomics for key generation in memory storage ([#415](https://github.com/bosun-ai/swiftide/pull/415))\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.13.4...0.14.0\r\n\r\n\r\n\r\n## [0.13.4](https://github.com/bosun-ai/swiftide/compare/v0.13.3...v0.13.4) - 2024-10-21\r\n\r\n### Bug fixes\r\n\r\n- [47455fb](https://github.com/bosun-ai/swiftide/commit/47455fb04197a4b51142e2fb4c980e42ac54d11e) *(indexing)*  Visibility of ChunkMarkdown builder should be public\r\n\r\n- [2b3b401](https://github.com/bosun-ai/swiftide/commit/2b3b401dcddb2cb32214850b9b4dbb0481943d38) *(indexing)*  Improve splitters consistency and provide defaults ([#403](https://github.com/bosun-ai/swiftide/pull/403))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.13.3...0.13.4\r\n\r\n\r\n## [0.13.3](https://github.com/bosun-ai/swiftide/compare/v0.13.2...v0.13.3) - 2024-10-11\r\n\r\n### Bug fixes\r\n\r\n- [2647f16](https://github.com/bosun-ai/swiftide/commit/2647f16dc164eb5230d8f7c6d71e31663000cb0d) *(deps)*  Update rust crate text-splitter to 0.17 ([#366](https://github.com/bosun-ai/swiftide/pull/366))\r\n\r\n- [d74d85b](https://github.com/bosun-ai/swiftide/commit/d74d85be3bd98706349eff373c16443b9c45c4f0) *(indexing)*  Add missing `Embed::batch_size` implementation ([#378](https://github.com/bosun-ai/swiftide/pull/378))\r\n\r\n- [95f78d3](https://github.com/bosun-ai/swiftide/commit/95f78d3412951c099df33149c57817338a76553d) *(tree-sitter)*  Compile regex only 
once ([#371](https://github.com/bosun-ai/swiftide/pull/371))\r\n\r\n````text\r\nRegex compilation is not cheap; use a static with a `OnceLock` instead.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.13.2...0.13.3\r\n\r\n\r\n\r\n## [0.13.2](https://github.com/bosun-ai/swiftide/compare/v0.13.1...v0.13.2) - 2024-10-05\r\n\r\n### New features\r\n\r\n- [4b13aa7](https://github.com/bosun-ai/swiftide/commit/4b13aa7d76dfc7270870682e2f757f066a99ba4e) *(core)*  Add support for cloning all trait objects ([#355](https://github.com/bosun-ai/swiftide/pull/355))\r\n\r\n````text\r\nFor instance, if you have a `Box<dyn SimplePrompt>`, you can now clone\r\n  into an owned copy and more effectively use the available generics. This\r\n  also works for borrowed trait objects.\r\n````\r\n\r\n- [ed3da52](https://github.com/bosun-ai/swiftide/commit/ed3da52cf89b2384ec6f07c610c591b3eda2fa28) *(indexing)*  Support Redb as embeddable node cache ([#346](https://github.com/bosun-ai/swiftide/pull/346))\r\n\r\n````text\r\nAdds support for Redb as an embeddable node cache, allowing full local\r\n  app development without needing external services.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [06f8336](https://github.com/bosun-ai/swiftide/commit/06f83361c52010a451e8b775ce9c5d67057edbc5) *(indexing)*  Ensure `name()` returns concrete name on trait objects ([#351](https://github.com/bosun-ai/swiftide/pull/351))\r\n\r\n### Miscellaneous\r\n\r\n- [8237c28](https://github.com/bosun-ai/swiftide/commit/8237c2890df681c48117188e80cbad914b91e0fd) *(core)*  Mock traits for testing should not have their docs hidden\r\n\r\n- [0000000](https://github.com/bosun-ai/swiftide/commit/0000000)  Update Cargo.toml dependencies\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.13.1...0.13.2\r\n\r\n\r\n\r\n## 
[0.13.1](https://github.com/bosun-ai/swiftide/compare/v0.13.0...v0.13.1) - 2024-10-02\r\n\r\n### Bug fixes\r\n\r\n- [e6d9ec2](https://github.com/bosun-ai/swiftide/commit/e6d9ec2fe034c9d36fd730c969555c459606d42f) *(lancedb)*  Should not error if table exists ([#349](https://github.com/bosun-ai/swiftide/pull/349))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.13.0...0.13.1\r\n\r\n\r\n\r\n## [0.13.0](https://github.com/bosun-ai/swiftide/compare/v0.12.3...v0.13.0) - 2024-09-26\r\n\r\n### New features\r\n\r\n- [7d8a57f](https://github.com/bosun-ai/swiftide/commit/7d8a57f54b2c73267dfaa3b3a32079b11d9b32bc) *(indexing)*  [**breaking**] Removed duplication of batch_size ([#336](https://github.com/bosun-ai/swiftide/pull/336))\r\n\r\n**BREAKING CHANGE**: The batch size of batch transformers when indexing is\r\nnow configured on the batch transformer. If no batch size or default is\r\nconfigured, a configurable default is used from the pipeline. The\r\ndefault batch size is 256.\r\n\r\n- [fd110c8](https://github.com/bosun-ai/swiftide/commit/fd110c8efeb3af538d4e51d033b6df02e90e05d9) *(tree-sitter)*  Add support for Java 22 ([#309](https://github.com/bosun-ai/swiftide/pull/309))\r\n\r\n### Bug fixes\r\n\r\n- [23b96e0](https://github.com/bosun-ai/swiftide/commit/23b96e08b4e0f10f5faea0b193b404c9cd03f47f) *(tree-sitter)* [**breaking**]  SupportedLanguages are now non-exhaustive ([#331](https://github.com/bosun-ai/swiftide/pull/331))\r\n\r\n**BREAKING CHANGE**: SupportedLanguages are now non-exhaustive. 
This means that matching on SupportedLanguages will now require a catch-all arm.\r\nThis change was made to allow for future languages to be added without breaking changes.\r\n\r\n### Miscellaneous\r\n\r\n- [923a8f0](https://github.com/bosun-ai/swiftide/commit/923a8f0663e7d2b7138f54069f7a74c3cf6663ed) *(fastembed,qdrant)*  Better batching defaults ([#334](https://github.com/bosun-ai/swiftide/pull/334))\r\n\r\n````text\r\nQdrant and FastEmbed now have a default batch size, removing the need to set it manually. The default batch sizes are 50 and 256, respectively.\r\n````\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.12.3...0.13.0\r\n\r\n\r\n\r\n## [0.12.3](https://github.com/bosun-ai/swiftide/releases/tag/0.12.3) - 2024-09-23\r\n\r\n### New features\r\n\r\n- [da5df22](https://github.com/bosun-ai/swiftide/commit/da5df2230da81e9fe1e6ab74150511cbe1e3d769) *(tree-sitter)*  Implement Serialize and Deserialize for SupportedLanguages ([#314](https://github.com/bosun-ai/swiftide/pull/314))\r\n\r\n### Bug fixes\r\n\r\n- [a756148](https://github.com/bosun-ai/swiftide/commit/a756148f85faa15b1a79db8ec8106f0e15e4d6a2) *(tree-sitter)*  Fix javascript and improve tests ([#313](https://github.com/bosun-ai/swiftide/pull/313))\r\n\r\n````text\r\nAs learned from [#309](https://github.com/bosun-ai/swiftide/pull/309), test coverage for the refs defs transformer was\r\n  not great. There _are_ more tests in code_tree. 
Turns out, with the\r\n  latest treesitter update, javascript broke as it was the only language\r\n  not covered at all.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [e8e9d80](https://github.com/bosun-ai/swiftide/commit/e8e9d80f2b4fbfe7ca2818dc542ca0a907a17da5) *(docs)*  Add documentation to query module ([#276](https://github.com/bosun-ai/swiftide/pull/276))\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/0.12.2...0.12.3\r\n\r\n\r\n\r\n\r\n## [v0.12.2](https://github.com/bosun-ai/swiftide/releases/tag/v0.12.2) - 2024-09-20\r\n\r\n### Docs\r\n\r\n- [d84814e](https://github.com/bosun-ai/swiftide/commit/d84814eef1bf12e485053fb69fb658d963100789)  Fix broken documentation links and other cargo doc warnings (#304) by @tinco\r\n\r\n````text\r\nRunning `cargo doc --all-features` resulted in a lot of warnings.\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.12.1...v0.12.2\r\n\r\n\r\n## [v0.12.1](https://github.com/bosun-ai/swiftide/releases/tag/v0.12.1) - 2024-09-16\r\n\r\n### New features\r\n\r\n- [ec227d2](https://github.com/bosun-ai/swiftide/commit/ec227d25b987b7fd63ab1b3862ef19b14632bd04) *(indexing,query)*  Add concise info log with transformation name by @timonv\r\n\r\n- [01cf579](https://github.com/bosun-ai/swiftide/commit/01cf579922a877bb78e0de20114ade501e5a63db) *(query)*  Add query_mut for reusable query pipelines by @timonv\r\n\r\n- [081a248](https://github.com/bosun-ai/swiftide/commit/081a248e67292c1800837315ec53583be5e0cb82) *(query)*  Improve query performance similar to indexing in 0.12 by @timonv\r\n\r\n- [8029926](https://github.com/bosun-ai/swiftide/commit/80299269054eb440e55a42667a7bcc9ba6514a7b) *(query,indexing)*  Add duration in log output on pipeline completion by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [39b6ecb](https://github.com/bosun-ai/swiftide/commit/39b6ecb6175e5233b129f94876f95182b8bfcdc3) *(core)*  Truncate long strings safely when printing debug logs by @timonv\r\n\r\n- 
[8b8ceb9](https://github.com/bosun-ai/swiftide/commit/8b8ceb9266827857859481c1fc4a0f0c40805e33) *(deps)*  Update redis by @timonv\r\n\r\n- [16e9c74](https://github.com/bosun-ai/swiftide/commit/16e9c7455829100b9ae82305e5a1d2568264af9f) *(openai)*  Reduce debug verbosity by @timonv\r\n\r\n- [6914d60](https://github.com/bosun-ai/swiftide/commit/6914d607717294467cddffa867c3d25038243fc1) *(qdrant)*  Reduce debug verbosity when storing nodes by @timonv\r\n\r\n- [3d13889](https://github.com/bosun-ai/swiftide/commit/3d1388973b5e2a135256ae288d47dbde0399487f) *(query)*  Reduce and improve debugging verbosity by @timonv\r\n\r\n- [133cf1d](https://github.com/bosun-ai/swiftide/commit/133cf1d0be09049ca3e90b45675a965bb2464cb2) *(query)*  Remove verbose debug and skip self in instrumentation by @timonv\r\n\r\n- [ce17981](https://github.com/bosun-ai/swiftide/commit/ce179819ab75460453236723c7f9a89fd61fb99a)  Clippy by @timonv\r\n\r\n- [a871c61](https://github.com/bosun-ai/swiftide/commit/a871c61ad52ed181d6f9cb6a66ed07bccaadee08)  Fmt by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [d62b047](https://github.com/bosun-ai/swiftide/commit/d62b0478872e460956607f52b72470b76eb32d91) *(ci)*  Update testcontainer images and fix tests by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.12.0...v0.12.1\r\n\r\n\r\n## [v0.12.0](https://github.com/bosun-ai/swiftide/releases/tag/v0.12.0) - 2024-09-13\r\n\r\n### New features\r\n\r\n- [e902cb7](https://github.com/bosun-ai/swiftide/commit/e902cb7487221d3e88f13d88532da081e6ef8611) *(query)*  Add support for filters in SimilaritySingleEmbedding (#298) by @timonv\r\n\r\n````text\r\nAdds support for filters for Qdrant and Lancedb in\r\n  SimilaritySingleEmbedding. 
Also fixes several small bugs and brings\r\n  improved tests.\r\n````\r\n\r\n- [f158960](https://github.com/bosun-ai/swiftide/commit/f1589604d1e0cb42a07d5a48080e3d7ecb90ee38)  Major performance improvements (#291) by @timonv\r\n\r\n````text\r\nFutures that do not yield were not run in parallel properly. With this,\r\n  futures are spawned on a tokio worker thread by default.\r\n\r\n  When embedding (fastembed) and storing an 85k row dataset, there's a\r\n  ~1.35x performance improvement:\r\n  <img width=\"621\" alt=\"image\"\r\n  src=\"https://github.com/user-attachments/assets/ba2d4d96-8d4a-44f1-b02d-6ac2af0cedb7\">\r\n\r\n  ~~Need to do one more test with IO bound futures as well. Pretty huge,\r\n  not that it was slow.~~\r\n\r\n  With IO-bound OpenAI it's 1.5x.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [f8314cc](https://github.com/bosun-ai/swiftide/commit/f8314ccdbe16ad7e6691899dd01f81a61b20180f) *(indexing)*  Limit logged chunk to max 100 chars (#292) by @timonv\r\n\r\n- [f95f806](https://github.com/bosun-ai/swiftide/commit/f95f806a0701b14a3cad5da307c27c01325a264d) *(indexing)*  Debugging nodes should respect utf8 char boundaries by @timonv\r\n\r\n- [8595553](https://github.com/bosun-ai/swiftide/commit/859555334d7e4129215b9f084d9f9840fac5ce36)  Implement into_stream_boxed for all loaders by @timonv\r\n\r\n- [9464ca1](https://github.com/bosun-ai/swiftide/commit/9464ca123f08d8dfba3f1bfabb57e9af97018534)  Bad embed error propagation (#293) by @timonv\r\n\r\n````text\r\n- **fix(indexing): Limit logged chunk to max 100 chars**\r\n  - **fix: Embed transformers must correctly propagate errors**\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [45d8a57](https://github.com/bosun-ai/swiftide/commit/45d8a57d1afb4f16ad76b15236308d753cf45743) *(ci)*  Use llvm-cov preview via nightly and improve test coverage (#289) by @timonv\r\n\r\n````text\r\nFix test coverage in CI. 
Simplified the trait bounds on the query\r\n  pipeline for now to make it all work and fit together, and added more\r\n  tests to assert boxed versions of trait objects work in tests.\r\n````\r\n\r\n- [408f30a](https://github.com/bosun-ai/swiftide/commit/408f30ad8d007394ba971b314d399fcd378ffb61) *(deps)*  Update testcontainers (#295) by @timonv\r\n\r\n- [37c4bd9](https://github.com/bosun-ai/swiftide/commit/37c4bd9f9ac97646adb2c4b99b8f7bf0bee4c794) *(deps)*  Update treesitter (#296) by @timonv\r\n\r\n- [8d9e954](https://github.com/bosun-ai/swiftide/commit/8d9e9548ccc1b39e302ee42dd5058f50df13270f)  Cargo update by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.11.1...v0.12.0\r\n\r\n\r\n## [v0.11.1](https://github.com/bosun-ai/swiftide/releases/tag/v0.11.1) - 2024-09-10\r\n\r\n### New features\r\n\r\n- [3c9491b](https://github.com/bosun-ai/swiftide/commit/3c9491b8e1ce31a030eaac53f56890629a087f70)  Implement traits T for Box<T> for indexing and query traits (#285) by @timonv\r\n\r\n````text\r\nWhen working with trait objects, some pipeline steps now allow for\r\n  Box<dyn Trait> as well.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [dfa546b](https://github.com/bosun-ai/swiftide/commit/dfa546b310e71a7cb78a927cc8f0ee4e2046a592)  Add missing parquet feature flag by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.11.0...v0.11.1\r\n\r\n\r\n## [v0.11.0](https://github.com/bosun-ai/swiftide/releases/tag/v0.11.0) - 2024-09-08\r\n\r\n### New features\r\n\r\n- [bdf17ad](https://github.com/bosun-ai/swiftide/commit/bdf17adf5d3addc84aaf45ad893b816cb46431e3) *(indexing)*  Parquet loader (#279) by @timonv\r\n\r\n````text\r\nIngest and index data from parquet files.\r\n````\r\n\r\n- [a98dbcb](https://github.com/bosun-ai/swiftide/commit/a98dbcb455d33f0537cea4d3614da95f1a4b6554) *(integrations)*  Add ollama embeddings support (#278) by @ephraimkunz\r\n\r\n````text\r\nUpdate to the most recent ollama-rs, which 
exposes the batch embedding\r\n  API that Ollama provides (https://github.com/pepperoni21/ollama-rs/pull/61).\r\n  This allows the Ollama struct in Swiftide to implement `EmbeddingModel`.\r\n\r\n  Use the same pattern that the OpenAI struct uses to manage separate\r\n  embedding and prompt models.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [873795b](https://github.com/bosun-ai/swiftide/commit/873795b31b3facb0cf5efa724cb391f7bf387fb0) *(ci)*  Re-enable coverage via Coveralls with tarpaulin (#280) by @timonv\r\n\r\n- [465de7f](https://github.com/bosun-ai/swiftide/commit/465de7fc952d66f4cd15002ef39aab0e7ec3ac26)  Update CHANGELOG.md with breaking change by @timonv\r\n\r\n### New Contributors\r\n* @ephraimkunz made their first contribution in [#278](https://github.com/bosun-ai/swiftide/pull/278)\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.10.0...v0.11.0\r\n\r\n\r\n## [v0.10.0](https://github.com/bosun-ai/swiftide/releases/tag/v0.10.0) - 2024-09-06\r\n\r\n### Bug fixes\r\n\r\n- [5a724df](https://github.com/bosun-ai/swiftide/commit/5a724df895d35cfa606721d611afd073a23191de)  [**breaking**] Rust 1.81 support (#275) by @timonv\r\n\r\n````text\r\nFixing id generation properly as per #272, will be merged in together.\r\n\r\n  - **Clippy**\r\n  - **fix(qdrant)!: Default hasher changed in Rust 1.81**\r\n````\r\n\r\n**BREAKING CHANGE**: Rust 1.81 support (#275)\r\n\r\n### Docs\r\n\r\n- [3711f6f](https://github.com/bosun-ai/swiftide/commit/3711f6fb2b51e97e4606b744cc963c04b44b6963) *(readme)*  Fix date (#273) by @dzvon\r\n\r\n````text\r\nI suppose this should be 09-02.\r\n````\r\n\r\n### New Contributors\r\n* @dzvon made their first contribution in [#273](https://github.com/bosun-ai/swiftide/pull/273)\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.9.2...v0.10.0\r\n\r\n\r\n## [v0.9.2](https://github.com/bosun-ai/swiftide/releases/tag/v0.9.2) - 2024-09-04\r\n\r\n### New features\r\n\r\n- 
[84e9bae](https://github.com/bosun-ai/swiftide/commit/84e9baefb366f0a949ae7dcbdd8f97931da0b4be) *(indexing)*  Add chunker for text with text_splitter (#270) by @timonv\r\n\r\n- [387fbf2](https://github.com/bosun-ai/swiftide/commit/387fbf29c2bce06284548f9af146bb3969562761) *(query)*  Hybrid search for qdrant in query pipeline (#260) by @timonv\r\n\r\n````text\r\nImplement hybrid search for qdrant with their new Fusion search. The\r\n  example in /examples includes an indexing and query pipeline, along\r\n  with the example answer.\r\n````\r\n\r\n### Docs\r\n\r\n- [064c7e1](https://github.com/bosun-ai/swiftide/commit/064c7e157775a7aaf9628a39f941be35ce0be99a) *(readme)*  Update intro by @timonv\r\n\r\n- [1dc4c90](https://github.com/bosun-ai/swiftide/commit/1dc4c90436c9c8c8d0eb080e300afce53090c73e) *(readme)*  Add new blog links by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.9.1...v0.9.2\r\n\r\n\r\n## [v0.9.1](https://github.com/bosun-ai/swiftide/releases/tag/v0.9.1) - 2024-09-01\r\n\r\n### New features\r\n\r\n- [b891f93](https://github.com/bosun-ai/swiftide/commit/b891f932e43b9c76198d238bcde73a6bb1dfbfdb) *(integrations)*  Add fluvio as loader support (#243) by @timonv\r\n\r\n````text\r\nAdds Fluvio loader support, enabling Swiftide indexing streams to\r\n  process messages from a Fluvio topic.\r\n````\r\n\r\n- [c00b6c8](https://github.com/bosun-ai/swiftide/commit/c00b6c8f08fca46451387f3034d3d53805f3e401) *(query)*  Ragas support (#236) by @timonv\r\n\r\n````text\r\nWork in progress on support for ragas as per\r\n  https://github.com/explodinggradients/ragas/issues/1165 and #232\r\n\r\n  Add an optional evaluator to a pipeline. Evaluators need to handle\r\n  transformation events in the query pipeline. 
The Ragas evaluator\r\n  captures the transformations as per\r\n  https://docs.ragas.io/en/latest/howtos/applications/data_preparation.html.\r\n\r\n  You can find a working notebook here\r\n  https://github.com/bosun-ai/swiftide-tutorial/blob/c510788a625215f46575415161659edf26fc1fd5/ragas/notebook.ipynb\r\n  with a pipeline using it here\r\n  https://github.com/bosun-ai/swiftide-tutorial/pull/1\r\n````\r\n\r\n- [a1250c1](https://github.com/bosun-ai/swiftide/commit/a1250c1cef57e2b74760fd31772e106993a3b079)  LanceDB support (#254) by @timonv\r\n\r\n````text\r\nAdd LanceDB support for indexing and querying. LanceDB separates compute\r\n  from storage, where storage can be local or hosted elsewhere.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [f92376d](https://github.com/bosun-ai/swiftide/commit/f92376d551a3bf4fe39d81a64c4328a742677669) *(deps)*  Update rust crate aws-sdk-bedrockruntime to v1.46.0 (#247) by @renovate[bot]\r\n\r\n- [732a166](https://github.com/bosun-ai/swiftide/commit/732a166f388d4aefaeec694103e3d1ff57655d69)  Remove no default features from futures-util by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [9b257da](https://github.com/bosun-ai/swiftide/commit/9b257dadea6c07f720ac4ea447342b2f6d91d0ec)  Default features cleanup (#262) by @timonv\r\n\r\n````text\r\nIntegrations are messy and pull a lot in. A potential solution is to\r\n  disable default features, only add what is actually required, and put\r\n  the responsibility on users if they need anything specific. 
Feature\r\n  unification should then take care of the rest.\r\n````\r\n\r\n### Docs\r\n\r\n- [fb381b8](https://github.com/bosun-ai/swiftide/commit/fb381b8896a5fc863a4185445ce51fefb99e6c11) *(readme)*  Copy improvements (#261) by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.9.0...v0.9.1\r\n\r\n\r\n## [v0.9.0](https://github.com/bosun-ai/swiftide/releases/tag/v0.9.0) - 2024-08-15\r\n\r\n### New features\r\n\r\n- [2443933](https://github.com/bosun-ai/swiftide/commit/24439339a9b935befcbcc92e56c01c5048605138) *(qdrant)*  Add access to inner client for custom operations (#242) by @timonv\r\n\r\n- [4fff613](https://github.com/bosun-ai/swiftide/commit/4fff613b461e8df993327cb364cabc65cd5901d8) *(query)*  Add concurrency on query pipeline and add query_all by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [4e31c0a](https://github.com/bosun-ai/swiftide/commit/4e31c0a6cdc6b33e4055f611dc48d3aebf7514ae) *(deps)*  Update rust crate aws-sdk-bedrockruntime to v1.44.0 (#244) by @renovate[bot]\r\n\r\n- [501321f](https://github.com/bosun-ai/swiftide/commit/501321f811a0eec8d1b367f7c7f33b1dfd29d2b6) *(deps)*  Update rust crate spider to v1.99.37 (#230) by @renovate[bot]\r\n\r\n- [8a1cc69](https://github.com/bosun-ai/swiftide/commit/8a1cc69712b4361893c0564c7d6f7d1ed21e5710) *(query)*  After retrieval the current transformation should be empty by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [e9d0016](https://github.com/bosun-ai/swiftide/commit/e9d00160148807a8e2d1df1582e6ea85cfd2d8d0) *(indexing,integrations)*  Move tree-sitter dependencies to integrations (#235) by @timonv\r\n\r\n````text\r\nRemoves the dependency of indexing on integrations, resulting in much\r\n  faster builds when developing on indexing.\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.8.0...v0.9.0\r\n\r\n\r\n## [v0.8.0](https://github.com/bosun-ai/swiftide/releases/tag/v0.8.0) - 2024-08-12\r\n\r\n### New features\r\n\r\n- 
[2e25ad4](https://github.com/bosun-ai/swiftide/commit/2e25ad4b999a8562a472e086a91020ec4f8300d8) *(indexing)*  [**breaking**] Default LLM for indexing pipeline and boilerplate Transformer macro (#227) by @timonv\r\n\r\n````text\r\nAdds support for setting a default LLM for an indexing pipeline,\r\n  avoiding the need to clone it multiple times.\r\n\r\n  More importantly, introduced `swiftide-macros` with\r\n  `#[swiftide_macros::indexing_transformer]` that generates\r\n  all boilerplate code used for internal transformers. This ensures all\r\n  transformers are consistent and makes them\r\n  easy to change in the future. This is a big win for maintainability and\r\n  ease of extension. Users are encouraged to use the macro\r\n  as well.\r\n````\r\n\r\n**BREAKING CHANGE**: Introduces `WithIndexingDefaults` and\r\n`WithBatchIndexingDefaults` trait constraints for transformers. They can\r\nbe used as a marker\r\nwith a noop (i.e. just `impl WithIndexingDefaults for MyTransformer\r\n{}`). However, when implemented fully, they can be used to provide\r\ndefaults from the pipeline to your transformers.\r\n\r\n- [67336f1](https://github.com/bosun-ai/swiftide/commit/67336f1d9c7fde474bdddfd0054b40656df244e0) *(indexing)*  Sparse vector support with Splade and Qdrant (#222) by @timonv\r\n\r\n````text\r\nAdds sparse vector support to the indexing pipeline, enabling hybrid\r\n  search for vector databases. The design should work for any form of\r\n  sparse embedding, and works with existing embedding modes and multiple\r\n  named vectors. Additionally, added `try_default_sparse` to FastEmbed,\r\n  using Splade, so it's fully usable.\r\n\r\n  Hybrid search in the query pipeline coming soon.\r\n````\r\n\r\n- [e728a7c](https://github.com/bosun-ai/swiftide/commit/e728a7c7a2fcf7b22c31e5d6c66a896f634f6901)  Code outlines in chunk metadata (#137) by @tinco\r\n\r\n````text\r\nAdded a transformer that generates outlines for code files using tree-sitter. 
Another transformer compresses the outline to be more relevant to individual chunks. Additionally, a step was added to the metadata QA tool that uses the outline to improve contextual awareness during QA generation.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [dc7412b](https://github.com/bosun-ai/swiftide/commit/dc7412beda4377e8a6222b3ad576f0a1af332533) *(deps)*  Update aws-sdk-rust monorepo (#223) by @renovate[bot]\r\n\r\n### Miscellaneous\r\n\r\n- [9613f50](https://github.com/bosun-ai/swiftide/commit/9613f50c0036b42411cd3a3014f54b592fe4958a) *(ci)*  Only show remote github url if present in changelog by @timonv\r\n\r\n### Docs\r\n\r\n- [73d1649](https://github.com/bosun-ai/swiftide/commit/73d1649ca8427aa69170f6451eac55316581ed9a) *(readme)*  Add Ollama support to README by @timonv\r\n\r\n- [b3f04de](https://github.com/bosun-ai/swiftide/commit/b3f04defe94e5b26876c8d99049f4d87b5f2dc18) *(readme)*  Add link to discord (#219) by @timonv\r\n\r\n- [4970a68](https://github.com/bosun-ai/swiftide/commit/4970a683acccc71503e64044dc02addaf2e9c87c) *(readme)*  Fix discord links by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.7.1...v0.8.0\r\n\r\n\r\n## [v0.7.1](https://github.com/bosun-ai/swiftide/releases/tag/v0.7.1) - 2024-08-04\r\n\r\n### New features\r\n\r\n- [b2d31e5](https://github.com/bosun-ai/swiftide/commit/b2d31e555cb8da525513490e7603df1f6b2bfa5b) *(integrations)*  Add ollama support (#214) by @tinco\r\n\r\n- [9eb5894](https://github.com/bosun-ai/swiftide/commit/9eb589416c2a56f9942b6f6bed3771cec6acebaf) *(query)*  Add support for closures in all steps (#215) by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [53e662b](https://github.com/bosun-ai/swiftide/commit/53e662b8c30f6ac6d11863685d3850ab48397766) *(ci)*  Add cargo deny to lint dependencies (#213) by @timonv\r\n\r\n### Docs\r\n\r\n- [1539393](https://github.com/bosun-ai/swiftide/commit/15393932dd756af134a12f7954faa75893f8c3fb) *(readme)*  Update README.md by @timonv\r\n\r\n- 
[ba07ab9](https://github.com/bosun-ai/swiftide/commit/ba07ab93722d974ac93ed5d4a22bf53317bc11ae) *(readme)*  Readme improvements by @timonv\r\n\r\n- [f7accde](https://github.com/bosun-ai/swiftide/commit/f7accdeecf01efc291503282554257846725ce57) *(readme)*  Add 0.7 announcement by @timonv\r\n\r\n- [084548f](https://github.com/bosun-ai/swiftide/commit/084548f0fbfbb8cf6d359585f30c8e2593565681) *(readme)*  Clarify on closures by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.7.0...v0.7.1\r\n\r\n\r\n## [swiftide-v0.7.0](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.7.0) - 2024-07-28\r\n\r\n### New features\r\n\r\n- [ec1fb04](https://github.com/bosun-ai/swiftide/commit/ec1fb04573ab75fe140cbeff17bc3179e316ff0c) *(indexing)*  Metadata as first class citizen (#204) by @timonv\r\n\r\n````text\r\nAdds our own implementation for metadata, internally still using a\r\n  BTreeMap. The Value type is now a `serde_json::Value` enum. This allows\r\n  us to store the metadata in the same format as the rest of the document,\r\n  and also allows us to use values programmatically later.\r\n\r\n  As is, all current metadata is still stored as strings.\r\n````\r\n\r\n- [16bafe4](https://github.com/bosun-ai/swiftide/commit/16bafe4da8c98adcf90f5bb63070832201c405b9) *(swiftide)*  [**breaking**] Rework workspace preparing for swiftide-query (#199) by @timonv\r\n\r\n````text\r\nSplits up the project into multiple small, unpublished crates. 
Boosts\r\n  compile times, makes the code a bit easier to grok, and enables\r\n  swiftide-query to be built separately.\r\n````\r\n\r\n**BREAKING CHANGE**: All indexing related tools are now in\r\n\r\n- [63694d2](https://github.com/bosun-ai/swiftide/commit/63694d2892a7c97a7e7fc42664d550c5acd7bb12) *(swiftide-query)*  Query pipeline v1 (#189) by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [ee3aad3](https://github.com/bosun-ai/swiftide/commit/ee3aad37a40eb9f18c9a3082ad6826ff4b6c7245) *(deps)*  Update rust crate aws-sdk-bedrockruntime to v1.42.0 (#195) by @renovate[bot]\r\n\r\n- [be0f31d](https://github.com/bosun-ai/swiftide/commit/be0f31de4f0c7842e23628fd6144cc4406c165c0) *(deps)*  Update rust crate spider to v1.99.11 (#190) by @renovate[bot]\r\n\r\n- [dd04453](https://github.com/bosun-ai/swiftide/commit/dd04453ecb8d04326929780e9e52155b37d731e2) *(swiftide)*  Update main lockfile by @timonv\r\n\r\n- [bafd907](https://github.com/bosun-ai/swiftide/commit/bafd90706346c3e208390f1296f10e2c17ad61b1)  Update all cargo package descriptions by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [e72641b](https://github.com/bosun-ai/swiftide/commit/e72641b677cfd1b21e98fd74552728dbe3e7a9bc) *(ci)*  Set versions in dependencies by @timonv\r\n\r\n### Docs\r\n\r\n- [2114aa4](https://github.com/bosun-ai/swiftide/commit/2114aa4394f4eda2e6465e1adb5602ae1b3ff61f) *(readme)*  Add copy on the query pipeline by @timonv\r\n\r\n- [573aff6](https://github.com/bosun-ai/swiftide/commit/573aff6fee3f891bae61e92e131dd15425cefc29) *(indexing)*  Document the default prompt templates and their context (#206) by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.7...swiftide-v0.7.0\r\n\r\n\r\n## [swiftide-v0.6.7](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.7) - 2024-07-23\r\n\r\n### New features\r\n\r\n- [beea449](https://github.com/bosun-ai/swiftide/commit/beea449301b89fde1915c5336a071760c1963c75) *(prompt)*  Add Into for strings to PromptTemplate 
(#193) by @timonv\r\n\r\n- [f3091f7](https://github.com/bosun-ai/swiftide/commit/f3091f72c74e816f6b9b8aefab058d610becb625) *(transformers)*  References and definitions from code (#186) by @timonv\r\n\r\n### Docs\r\n\r\n- [97a572e](https://github.com/bosun-ai/swiftide/commit/97a572ec2e3728bbac82c889bf5129b048e61e0c) *(readme)*  Add blog posts and update doc link (#194) by @timonv\r\n\r\n- [504fe26](https://github.com/bosun-ai/swiftide/commit/504fe2632cf4add506dfb189c17d6e4ecf6f3824) *(pipeline)*  Add note that closures can also be used as transformers by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.6...swiftide-v0.6.7\r\n\r\n\r\n## [swiftide-v0.6.6](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.6) - 2024-07-16\r\n\r\n### New features\r\n\r\n- [d1c642a](https://github.com/bosun-ai/swiftide/commit/d1c642aa4ee9b373e395a78591dd36fa0379a4ff) *(groq)*  Add SimplePrompt support for Groq (#183) by @timonv\r\n\r\n````text\r\nAdds simple prompt support for Groq by using async_openai. ~~Needs some\r\n  double checks~~. 
Works great.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [5d4a814](https://github.com/bosun-ai/swiftide/commit/5d4a8145b6952b2f4f9a1f144913673eeb3aaf24) *(deps)*  Update rust crate aws-sdk-bedrockruntime to v1.40.0 (#169) by @renovate[bot]\r\n\r\n### Docs\r\n\r\n- [143c7c9](https://github.com/bosun-ai/swiftide/commit/143c7c9c2638737166f23f2ef8106b7675f6e19b) *(readme)*  Fix typo (#180) by @eltociear\r\n\r\n- [d393181](https://github.com/bosun-ai/swiftide/commit/d3931818146bff72499ebfcc0d0e8c8bb13a760d) *(docsrs)*  Scrape examples and fix links (#184) by @timonv\r\n\r\n### New Contributors\r\n* @eltociear made their first contribution in [#180](https://github.com/bosun-ai/swiftide/pull/180)\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.5...swiftide-v0.6.6\r\n\r\n\r\n## [swiftide-v0.6.5](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.5) - 2024-07-15\r\n\r\n### New features\r\n\r\n- [0065c7a](https://github.com/bosun-ai/swiftide/commit/0065c7a7fd1289ea227391dd7b9bd51c905290d5) *(prompt)*  Add extending the prompt repository (#178) by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [b54691f](https://github.com/bosun-ai/swiftide/commit/b54691f769e2d0ac7886938b6e837551926eea2f) *(prompts)*  Include default prompts in crate (#174) by @timonv\r\n\r\n````text\r\n- **add prompts to crate**\r\n  - **load prompts via cargo manifest dir**\r\n````\r\n\r\n- [3c297bb](https://github.com/bosun-ai/swiftide/commit/3c297bbb85fd3ae9b411a691024f622702da3617) *(swiftide)*  Remove include from Cargo.toml by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [73d5fa3](https://github.com/bosun-ai/swiftide/commit/73d5fa37d23f53919769c2ffe45db2e3832270ef) *(traits)*  Cleanup unused batch size in `BatchableTransformer` (#177) by @timonv\r\n\r\n### Docs\r\n\r\n- [b95b395](https://github.com/bosun-ai/swiftide/commit/b95b3955f89ed231cc156dab749ee7bb8be98ee5) *(swiftide)*  Documentation improvements and cleanup (#176) by @timonv\r\n\r\n````text\r\n- **chore: remove 
ingestion stream**\r\n  - **Documentation and grammar**\r\n````\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.3...swiftide-v0.6.5\r\n\r\n\r\n## [swiftide-v0.6.3](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.3) - 2024-07-14\r\n\r\n### Bug fixes\r\n\r\n- [47418b5](https://github.com/bosun-ai/swiftide/commit/47418b5d729aef1e2ff77dabd7e29b5131512b01) *(prompts)*  Fix breaking issue with prompts not found by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.2...swiftide-v0.6.3\r\n\r\n\r\n## [swiftide-v0.6.2](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.2) - 2024-07-12\r\n\r\n### Miscellaneous\r\n\r\n- [2b682b2](https://github.com/bosun-ai/swiftide/commit/2b682b28fd146fac2c61f1ee430534a04b9fa7ce) *(deps)*  Limit feature flags on qdrant to fix docsrs by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.1...swiftide-v0.6.2\r\n\r\n\r\n## [swiftide-v0.6.1](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.1) - 2024-07-12\r\n\r\n### Miscellaneous\r\n\r\n- [aae7ab1](https://github.com/bosun-ai/swiftide/commit/aae7ab18f8c9509fd19f83695e4eca942c377043) *(deps)*  Patch update all by @timonv\r\n\r\n### Docs\r\n\r\n- [085709f](https://github.com/bosun-ai/swiftide/commit/085709fd767bab7153b2222907fc500ad4412570) *(docsrs)*  Disable unstable and rustdoc scraping by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.6.0...swiftide-v0.6.1\r\n\r\n\r\n## [swiftide-v0.6.0](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.6.0) - 2024-07-12\r\n\r\n### New features\r\n\r\n- [70ea268](https://github.com/bosun-ai/swiftide/commit/70ea268b19e564af83bb834f56d406a05e02e9cd) *(prompts)*  Add prompts as first class citizens (#145) by @timonv\r\n\r\n````text\r\nAdds Prompts as first class citizens. 
This is a breaking change as\r\n  SimplePrompt with just a `&str` is no longer allowed.\r\n\r\n  This introduces `Prompt` and `PromptTemplate`. A template uses jinja-style\r\n  templating built on tera. Templates can be converted into prompts,\r\n  and have context added. A prompt is then sent to something that prompts,\r\n  e.g. OpenAI or Bedrock.\r\n\r\n  Additional prompts can be added either compiled or as one-offs.\r\n  Additionally, it's perfectly fine to prompt with just a string as well,\r\n  just provide an `.into()`.\r\n\r\n  For future development, some LLMs really benefit from system prompts,\r\n  which this would enable. For the query pipeline we can also take a much\r\n  more structured approach with composed templates and conditionals.\r\n````\r\n\r\n- [699cfe4](https://github.com/bosun-ai/swiftide/commit/699cfe44fb0e3baddba695ad09836caec7cb30a6)  Embed modes and named vectors (#123) by @pwalski\r\n\r\n````text\r\nAdded named vector support to qdrant. A pipeline can now have its embed\r\n  mode configured, either per field, chunk and metadata combined (default)\r\n  or both. Vectors need to be configured on the qdrant client side.\r\n\r\n  See `examples/store_multiple_vectors.rs` for an example.\r\n\r\n  Shoutout to @pwalski for the contribution. 
Closes #62.\r\n\r\n  ---------\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [9334934](https://github.com/bosun-ai/swiftide/commit/9334934e4af92b35dbc61e1f92aa90abac29ca12) *(chunkcode)*  Use correct chunksizes (#122) by @timonv\r\n\r\n- [dfc76dd](https://github.com/bosun-ai/swiftide/commit/dfc76ddfc23d9314fe88c8362bf53d7865a03302) *(deps)*  Update rust crate serde to v1.0.204 (#129) by @renovate[bot]\r\n\r\n- [28f5b04](https://github.com/bosun-ai/swiftide/commit/28f5b048f5acd977915ae20463f8fbb473dfab9a) *(deps)*  Update rust crate tree-sitter-typescript to v0.21.2 (#128) by @renovate[bot]\r\n\r\n- [9c261b8](https://github.com/bosun-ai/swiftide/commit/9c261b87dde2e0caaff0e496d15681466844daf4) *(deps)*  Update rust crate text-splitter to v0.14.1 (#127) by @renovate[bot]\r\n\r\n- [ff92abd](https://github.com/bosun-ai/swiftide/commit/ff92abd95908365c72d96abff37e0284df8fed32) *(deps)*  Update rust crate tree-sitter-javascript to v0.21.4 (#126) by @renovate[bot]\r\n\r\n- [7af97b5](https://github.com/bosun-ai/swiftide/commit/7af97b589ca45f2b966ea2f61ebef341c881f1f9) *(deps)*  Update rust crate spider to v1.98.7 (#124) by @renovate[bot]\r\n\r\n- [adc4bf7](https://github.com/bosun-ai/swiftide/commit/adc4bf789f679079fcc9fac38f4a7b8f98816844) *(deps)*  Update aws-sdk-rust monorepo (#125) by @renovate[bot]\r\n\r\n- [dd32ef3](https://github.com/bosun-ai/swiftide/commit/dd32ef3b1be7cd6888d2961053d0b3c1a882e1a4) *(deps)*  Update rust crate async-trait to v0.1.81 (#134) by @renovate[bot]\r\n\r\n- [2b13523](https://github.com/bosun-ai/swiftide/commit/2b1352322e574b62cb30268b35c6b510122f0584) *(deps)*  Update rust crate fastembed to v3.7.1 (#135) by @renovate[bot]\r\n\r\n- [8e22937](https://github.com/bosun-ai/swiftide/commit/8e22937427b928524dacf2b446feeff726b6a5e1) *(deps)*  Update rust crate aws-sdk-bedrockruntime to v1.39.0 (#143) by @renovate[bot]\r\n\r\n- [353cd9e](https://github.com/bosun-ai/swiftide/commit/353cd9ed36fcf6fb8f1db255d8b5f4a914ca8496) *(qdrant)*  Upgrade and better 
defaults (#118) by @timonv\r\n\r\n````text\r\n- **fix(deps): update rust crate qdrant-client to v1.10.1**\r\n  - **fix(qdrant): upgrade to new qdrant with sensible defaults**\r\n  - **feat(qdrant): safe to clone with internal arc**\r\n\r\n  ---------\r\n````\r\n\r\n- [b53636c](https://github.com/bosun-ai/swiftide/commit/b53636cbd8f179f248cc6672aaf658863982c603)  Inability to store only some of `EmbeddedField`s (#139) by @pwalski\r\n\r\n### Performance\r\n\r\n- [ea8f823](https://github.com/bosun-ai/swiftide/commit/ea8f8236cdd9c588e55ef78f9eac27db1f13b2d9)  Improve local build performance and crate cleanup (#148) by @timonv\r\n\r\n````text\r\n- **tune cargo for faster builds**\r\n  - **perf(swiftide): increase local build performance**\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [eb8364e](https://github.com/bosun-ai/swiftide/commit/eb8364e08a9202476cca6b60fbdfbb31fe0e1c3d) *(ci)*  Try overriding the github repo for git cliff by @timonv\r\n\r\n- [5de6af4](https://github.com/bosun-ai/swiftide/commit/5de6af42b9a1e95b0fbd54659c0d590db1d76222) *(ci)*  Only add contributors if present by @timonv\r\n\r\n- [4c9ed77](https://github.com/bosun-ai/swiftide/commit/4c9ed77c85b7dd0e8722388b930d169cd2e5a5c7) *(ci)*  Properly check if contributors are present by @timonv\r\n\r\n- [c5bf796](https://github.com/bosun-ai/swiftide/commit/c5bf7960ca6bec498cdc987fe7676acfef702e5b) *(ci)*  Add clippy back to ci (#147) by @timonv\r\n\r\n- [7a8843a](https://github.com/bosun-ai/swiftide/commit/7a8843ab9e64b623870ebe49079ec976aae56d5c) *(deps)*  Update rust crate testcontainers to 0.20.0 (#133) by @renovate[bot]\r\n\r\n- [364e13d](https://github.com/bosun-ai/swiftide/commit/364e13d83285317a1fb99889f6d74ad32b58c482) *(swiftide)*  Loosen up dependencies (#140) by @timonv\r\n\r\n````text\r\nLoosen up dependencies so swiftide is a bit more flexible to add to\r\n  existing projects\r\n````\r\n\r\n- [84dd65d](https://github.com/bosun-ai/swiftide/commit/84dd65dc6c0ff4595f27ed061a4f4c0a2dae7202)  
[**breaking**] Rename all mentions of ingest to index (#130) by @timonv\r\n\r\n````text\r\nSwiftide is not an ingestion pipeline (loading data), but an indexing\r\n  pipeline (prepping for search).\r\n\r\n  There is now a temporary, deprecated re-export to match the previous api.\r\n````\r\n\r\n**BREAKING CHANGE**: rename all mentions of ingest to index (#130)\r\n\r\n- [51c114c](https://github.com/bosun-ai/swiftide/commit/51c114ceb06db840c4952d3d0f694bfbf266681c)  Various tooling & community improvements (#131) by @timonv\r\n\r\n````text\r\n- **fix(ci): ensure clippy runs with all features**\r\n  - **chore(ci): coverage using llvm-cov**\r\n  - **chore: drastically improve changelog generation**\r\n  - **chore(ci): add sanity checks for pull requests**\r\n  - **chore(ci): split jobs and add typos**\r\n````\r\n\r\n- [d2a9ea1](https://github.com/bosun-ai/swiftide/commit/d2a9ea1e7afa6f192bf9c32bbb54d9bb6e46472e)  Enable clippy pedantic (#132) by @timonv\r\n\r\n### Docs\r\n\r\n- [8405c9e](https://github.com/bosun-ai/swiftide/commit/8405c9efedef944156c2904eb709ba79aa4d82de) *(contributing)*  Add guidelines on code design (#113) by @timonv\r\n\r\n- [3e447fe](https://github.com/bosun-ai/swiftide/commit/3e447feab83a4bf8d7d9d8220fe1b92dede9af79) *(readme)*  Link to CONTRIBUTING (#114) by @timonv\r\n\r\n- [4c40e27](https://github.com/bosun-ai/swiftide/commit/4c40e27e5c6735305c70696ddf71dd5f95d03bbb) *(readme)*  Add back coverage badge by @timonv\r\n\r\n- [5691ac9](https://github.com/bosun-ai/swiftide/commit/5691ac930fd6547c3f0166b64ead0ae647c38883) *(readme)*  Add preproduction warning by @timonv\r\n\r\n- [37af322](https://github.com/bosun-ai/swiftide/commit/37af3225b4c3464aa4ed67f8f456c26f3d445507) *(rustdocs)*  Rewrite the initial landing page (#149) by @timonv\r\n\r\n````text\r\n- **Add homepage and badges to cargo toml**\r\n  - **documentation landing page improvements**\r\n````\r\n\r\n- 
[7686c2d](https://github.com/bosun-ai/swiftide/commit/7686c2d449b5df0fddc08b111174357d47459f86)  Templated prompts are now a major feature by @timonv\r\n\r\n### New Contributors\r\n* @pwalski made their first contribution in [#139](https://github.com/bosun-ai/swiftide/pull/139)\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.5.0...swiftide-v0.6.0\r\n\r\n\r\n## [swiftide-v0.5.0](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.5.0) - 2024-07-01\r\n\r\n### New features\r\n\r\n- [6a88651](https://github.com/bosun-ai/swiftide/commit/6a88651df8c6b91add03acfc071fb9479545b8af) *(ingestion_pipeline)*  Implement filter (#109) by @timonv\r\n\r\n- [5aeb3a7](https://github.com/bosun-ai/swiftide/commit/5aeb3a7fb75b21b2f24b111e9640ea4985b2e316) *(ingestion_pipeline)*  Splitting and merging streams by @timonv\r\n\r\n- [8812fbf](https://github.com/bosun-ai/swiftide/commit/8812fbf30b882b68bf25f3d56b3ddf17af0bcb7a) *(ingestion_pipeline)*  Build a pipeline from a stream by @timonv\r\n\r\n- [6101bed](https://github.com/bosun-ai/swiftide/commit/6101bed812c5167eb87a4093d66005140517598d)  AWS bedrock support (#92) by @timonv\r\n\r\n````text\r\nAdds an integration with AWS Bedrock, implementing SimplePrompt for\r\n  Anthropic and Titan models. More can be added if there is a need. 
Same\r\n  for the embedding models.\r\n````\r\n\r\n### Bug fixes\r\n\r\n- [17a2be1](https://github.com/bosun-ai/swiftide/commit/17a2be1de6c0f3bda137501db4b1703f9ed0b1c5) *(changelog)*  Add scope by @timonv\r\n\r\n- [a12cce2](https://github.com/bosun-ai/swiftide/commit/a12cce230032eebe2f7ff1aa9cdc85b8fc200eb1) *(openai)*  Add tests for builder by @timonv\r\n\r\n- [963919b](https://github.com/bosun-ai/swiftide/commit/963919b0947faeb7d96931c19e524453ad4a0007) *(transformers)*  [**breaking**] Fix too small chunks being retained and api by @timonv\r\n\r\n**BREAKING CHANGE**: Fix too small chunks being retained and api\r\n\r\n- [5e8da00](https://github.com/bosun-ai/swiftide/commit/5e8da008ce08a23377672a046a4cedd48d4cf30c)  Fix oversight in ingestion pipeline tests by @timonv\r\n\r\n- [e8198d8](https://github.com/bosun-ai/swiftide/commit/e8198d81354bbca2c21ca08b9522d02b8c93173b)  Use git cliff manually for changelog generation by @timonv\r\n\r\n- [2c31513](https://github.com/bosun-ai/swiftide/commit/2c31513a0ded87addd0519bbfdd63b5abed29f73)  Just use keepachangelog by @timonv\r\n\r\n- [6430af7](https://github.com/bosun-ai/swiftide/commit/6430af7b57eecb7fdb954cd89ade4547b8e92dbd)  Use native cargo bench format and only run benchmarks crate by @timonv\r\n\r\n- [cba981a](https://github.com/bosun-ai/swiftide/commit/cba981a317a80173eff2946fc551d1a36ec40f65)  Replace unwrap with expect and add comment on panic by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [e243212](https://github.com/bosun-ai/swiftide/commit/e2432123f0dfc48147ebed13fe6e3efec3ff7b3f) *(ci)*  Enable continuous benchmarking and improve benchmarks (#98) by @timonv\r\n\r\n- [2dbf14c](https://github.com/bosun-ai/swiftide/commit/2dbf14c34bed2ee40ab79c0a46d011cd20882bda) *(ci)*  Fix benchmarks in ci by @timonv\r\n\r\n- [b155de6](https://github.com/bosun-ai/swiftide/commit/b155de6387ddfe64d1a177b31c8e1ed93739b2c9) *(ci)*  Fix naming of github actions by @timonv\r\n\r\n- 
[206e432](https://github.com/bosun-ai/swiftide/commit/206e432dd291dd6a4592a6fb5f890049595311cb) *(ci)*  Add support for merge queues by @timonv\r\n\r\n- [46752db](https://github.com/bosun-ai/swiftide/commit/46752dbfc8ccd578ddba915fd6cd6509e3e6fb14) *(ci)*  Add concurrency configuration by @timonv\r\n\r\n- [5f09c11](https://github.com/bosun-ai/swiftide/commit/5f09c116f418cecb96fb1e86161333908d1a4d70)  Add initial benchmarks by @timonv\r\n\r\n- [162c6ef](https://github.com/bosun-ai/swiftide/commit/162c6ef2a07e40b8607b0ab6773909521f0bb798)  Ensure feat is always in Added by @timonv\r\n\r\n### Docs\r\n\r\n- [929410c](https://github.com/bosun-ai/swiftide/commit/929410cb1c2d81b6ffaec4c948c891472835429d) *(readme)*  Add diagram to the readme (#107) by @timonv\r\n\r\n- [b014f43](https://github.com/bosun-ai/swiftide/commit/b014f43aa187881160245b4356f95afe2c6fe98c)  Improve documentation across the project (#112) by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.4.3...swiftide-v0.5.0\r\n\r\n\r\n## [swiftide-v0.4.3](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.4.3) - 2024-06-28\r\n\r\n### Bug fixes\r\n\r\n- [ab3dc86](https://github.com/bosun-ai/swiftide/commit/ab3dc861490a0d1ab94f96e741e09c860094ebc0) *(memory_storage)*  Fallback to incremental counter when missing id by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [bdebc24](https://github.com/bosun-ai/swiftide/commit/bdebc241507e9f55998e96ca4aece530363716af)  Clippy by @timonv\r\n\r\n### Docs\r\n\r\n- [dad3e02](https://github.com/bosun-ai/swiftide/commit/dad3e02fdc8a57e9de16832090c44c536e7e394b) *(readme)*  Add ci badge by @timonv\r\n\r\n- [4076092](https://github.com/bosun-ai/swiftide/commit/40760929d24e20631d0552d87bdbb4fdf9195453) *(readme)*  Clean up and consistent badge styles by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.4.2...swiftide-v0.4.3\r\n\r\n\r\n## 
[swiftide-v0.4.2](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.4.2) - 2024-06-26\r\n\r\n### New features\r\n\r\n- [926cc0c](https://github.com/bosun-ai/swiftide/commit/926cc0cca46023bcc3097a97b10ce03ae1fc3cc2) *(ingestion_stream)*  Implement Into for Result<Vec<IngestionNode>> by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [3143308](https://github.com/bosun-ai/swiftide/commit/3143308136ec4e71c8a5f9a127119e475329c1a2) *(embed)*  Panic if number of embeddings and nodes are equal by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [5ed08bb](https://github.com/bosun-ai/swiftide/commit/5ed08bb259b7544d3e4f2acdeef56231aa32e17c)  Cleanup changelog by @timonv\r\n\r\n### Docs\r\n\r\n- [47aa378](https://github.com/bosun-ai/swiftide/commit/47aa378c4a70c47a2b313b6eca8dcf02b4723963)  Create CONTRIBUTING.md by @timonv\r\n\r\n- [0660d5b](https://github.com/bosun-ai/swiftide/commit/0660d5b08aed15d62f077363eae80f621ddaa510)  Readme updates by @timonv\r\n\r\n### Refactor\r\n\r\n- [d285874](https://github.com/bosun-ai/swiftide/commit/d28587448d7fe342a79ac687cd5d7ee27354cae6) *(ingestion_pipeline)*  Log_all combines other log helpers by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.4.1...swiftide-v0.4.2\r\n\r\n\r\n## [swiftide-v0.4.1](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.4.1) - 2024-06-24\r\n\r\n### New features\r\n\r\n- [3898ee7](https://github.com/bosun-ai/swiftide/commit/3898ee7d6273ee7034848f9ab08fd85613cb5b32) *(memory_storage)*  Can be cloned safely preserving storage by @timonv\r\n\r\n- [92052bf](https://github.com/bosun-ai/swiftide/commit/92052bfdbca8951620f6d016768d252e793ecb5d) *(transformers)*  Allow for arbitrary closures as transformers and batchable transformers by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.4.0...swiftide-v0.4.1\r\n\r\n\r\n## [swiftide-v0.4.0](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.4.0) - 
2024-06-23\r\n\r\n### New features\r\n\r\n- [477a284](https://github.com/bosun-ai/swiftide/commit/477a284597359472988ecde372e080f60aab0804) *(benchmarks)*  Add benchmark for the file loader by @timonv\r\n\r\n- [1567940](https://github.com/bosun-ai/swiftide/commit/15679409032e9be347fbe8838a308ff0d09768b8) *(benchmarks)*  Add benchmark for simple local pipeline by @timonv\r\n\r\n- [2228d84](https://github.com/bosun-ai/swiftide/commit/2228d84ccaad491e2c3cd0feb948050ad2872cf0) *(examples)*  Example for markdown with all metadata by @timonv\r\n\r\n- [9a1e12d](https://github.com/bosun-ai/swiftide/commit/9a1e12d34e02fe2292ce679251b96d61be74c884) *(examples,scraping)*  Add example scraping and ingesting a url by @timonv\r\n\r\n- [15deeb7](https://github.com/bosun-ai/swiftide/commit/15deeb72ca2e131e8554fa9cbefa3ef369de752a) *(ingestion_node)*  Add constructor with defaults by @timonv\r\n\r\n- [4d5c68e](https://github.com/bosun-ai/swiftide/commit/4d5c68e7bb09fae18832e2a453f114df5ba32ce1) *(ingestion_node)*  Improved human readable Debug by @timonv\r\n\r\n- [a5051b7](https://github.com/bosun-ai/swiftide/commit/a5051b79b2ce62d41dd93f7b34a1a065d9878732) *(ingestion_pipeline)*  Optional error filtering and logging (#75) by @timonv\r\n\r\n- [062107b](https://github.com/bosun-ai/swiftide/commit/062107b46474766640c38266f6fd6c27a95d4b57) *(ingestion_pipeline)*  Implement throttling a pipeline (#77) by @timonv\r\n\r\n- [a2ffc78](https://github.com/bosun-ai/swiftide/commit/a2ffc78f6d25769b9b7894f1f0703d51242023d4) *(ingestion_stream)*  Improved stream developer experience (#81) by @timonv\r\n\r\n````text\r\nImproves stream ergonomics by providing convenient helpers and `Into`\r\n  for streams, vectors and iterators that match the internal type.\r\n\r\n  This means that in many cases, trait implementers can simply call\r\n  `.into()` instead of manually constructing a stream. 
In the case it's an\r\n  iterator, they can now use `IngestionStream::iter(<IntoIterator>)`\r\n  instead.\r\n````\r\n\r\n- [d260674](https://github.com/bosun-ai/swiftide/commit/d2606745de8b22dcdf02e244d1b044efe12c6ac7) *(integrations)*  [**breaking**] Support fastembed (#60) by @timonv\r\n\r\n````text\r\nAdds support for FastEmbed with various models. Includes a breaking change, renaming the Embed trait to EmbeddingModel.\r\n````\r\n\r\n**BREAKING CHANGE**: support fastembed (#60)\r\n\r\n- [9004323](https://github.com/bosun-ai/swiftide/commit/9004323dc5b11a3556a47e11fb8912ffc49f1e9e) *(integrations)*  [**breaking**] Implement Persist for Redis (#80) by @timonv\r\n\r\n**BREAKING CHANGE**: implement Persist for Redis (#80)\r\n\r\n- [eb84dd2](https://github.com/bosun-ai/swiftide/commit/eb84dd27c61a1b3a4a52a53cc0404203eac729e8) *(integrations,transformers)*  Add transformer for converting html to markdown by @timonv\r\n\r\n- [ef7dcea](https://github.com/bosun-ai/swiftide/commit/ef7dcea45bfc336e7defcaac36bb5a6ff27d5acd) *(loaders)*  File loader performance improvements by @timonv\r\n\r\n- [6d37051](https://github.com/bosun-ai/swiftide/commit/6d37051a9c2ef24ea7eb3815efcf9692df0d70ce) *(loaders)*  Add scraping using `spider` by @timonv\r\n\r\n- [2351867](https://github.com/bosun-ai/swiftide/commit/235186707182e8c39b8f22c6dd9d54eb32f7d1e5) *(persist)*  In memory storage for testing, experimentation and debugging by @timonv\r\n\r\n- [4d5d650](https://github.com/bosun-ai/swiftide/commit/4d5d650f235395aa81816637d559de39853e1db1) *(traits)*  Add automock for simpleprompt by @timonv\r\n\r\n- [bd6f887](https://github.com/bosun-ai/swiftide/commit/bd6f8876d010d23f651fd26a48d6775c17c98e94) *(transformers)*  Add transformers for title, summary and keywords by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [7cbfc4e](https://github.com/bosun-ai/swiftide/commit/7cbfc4e13745ee5a6776a97fc6db06608fae8e81) *(ingestion_pipeline)*  Concurrency does not work when spawned (#76) by 
@timonv\r\n\r\n````text\r\nConcurrency did not work as expected. When spawning via `tokio::spawn`\r\n  the future would be polled directly, and any concurrency setting would\r\n  not be respected. Because it had to be removed, improved tracing for\r\n  each step as well.\r\n````\r\n\r\n### Miscellaneous\r\n\r\n- [f4341ba](https://github.com/bosun-ai/swiftide/commit/f4341babe5807b268ce86a88e0df4bfc6d756de4) *(ci)*  Single changelog for all (future) crates in root (#57) by @timonv\r\n\r\n- [7dde8a0](https://github.com/bosun-ai/swiftide/commit/7dde8a0811c7504b807b3ef9f508ce4be24967b8) *(ci)*  Code coverage reporting (#58) by @timonv\r\n\r\n````text\r\nPost test coverage to Coveralls\r\n\r\n  Also enabled --all-features when running tests in ci, just to be sure\r\n````\r\n\r\n- [cb7a2cd](https://github.com/bosun-ai/swiftide/commit/cb7a2cd3a72f306a0b46556caee0a25c7ba2c0e0) *(scraping)*  Exclude spider from test coverage by @timonv\r\n\r\n- [7767588](https://github.com/bosun-ai/swiftide/commit/77675884a2eeb0aab6ce57dccd2a260f5a973197) *(transformers)*  Improve test coverage by @timonv\r\n\r\n- [3b7c0db](https://github.com/bosun-ai/swiftide/commit/3b7c0dbc2f020ce84a5da5691ee6eb415df2d466)  Move changelog to root by @timonv\r\n\r\n- [d6d0215](https://github.com/bosun-ai/swiftide/commit/d6d021560a05508add07a72f4f438d3ea3f1cb2c)  Properly quote crate name in changelog by @timonv\r\n\r\n- [f251895](https://github.com/bosun-ai/swiftide/commit/f2518950427ef758fd57e6e6189ce600adf19940)  Documentation and feature flag cleanup (#69) by @timonv\r\n\r\n````text\r\nWith fastembed added our dependencies become rather heavy. 
By default\r\n  now disable all integrations and either provide 'all' or cherry pick\r\n  integrations.\r\n````\r\n\r\n- [f6656be](https://github.com/bosun-ai/swiftide/commit/f6656becd199762843a59b0f86871753360a08f0)  Cargo update by @timonv\r\n\r\n### Docs\r\n\r\n- [53ed920](https://github.com/bosun-ai/swiftide/commit/53ed9206835da1172295e296119ee9a883605f18)  Hide the table of contents by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.3.3...swiftide-v0.4.0\r\n\r\n\r\n## [swiftide-v0.3.3](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.3.3) - 2024-06-16\r\n\r\n### New features\r\n\r\n- [bdaed53](https://github.com/bosun-ai/swiftide/commit/bdaed5334b3e122f803370cc688dd2f662db0b8d) *(integrations)*  Clone and debug for integrations by @timonv\r\n\r\n- [318e538](https://github.com/bosun-ai/swiftide/commit/318e538acb30ca516a780b5cc42c8ab2ed91cd6b) *(transformers)*  Builder and clone for chunk_code by @timonv\r\n\r\n- [c074cc0](https://github.com/bosun-ai/swiftide/commit/c074cc0edb8b0314de15f9a096699e3e744c9f33) *(transformers)*  Builder for chunk_markdown by @timonv\r\n\r\n- [e18e7fa](https://github.com/bosun-ai/swiftide/commit/e18e7fafae3007f1980bb617b7a72dd605720d74) *(transformers)*  Builder and clone for MetadataQACode by @timonv\r\n\r\n- [fd63dff](https://github.com/bosun-ai/swiftide/commit/fd63dffb4f0b11bb9fa4fadc7b076463eca111a6) *(transformers)*  Builder and clone for MetadataQAText by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [678106c](https://github.com/bosun-ai/swiftide/commit/678106c01b7791311a24425c22ea39366b664033) *(ci)*  Pretty names for pipelines (#54) by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.3.2...swiftide-v0.3.3\r\n\r\n\r\n## [swiftide-v0.3.2](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.3.2) - 2024-06-16\r\n\r\n### New features\r\n\r\n- 
[b211002](https://github.com/bosun-ai/swiftide/commit/b211002e40ef16ef240e142c0178b04636a4f9aa) *(integrations)*  Qdrant and openai builder should be consistent (#52) by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.3.1...swiftide-v0.3.2\r\n\r\n\r\n## [swiftide-v0.3.1](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.3.1) - 2024-06-15\r\n\r\n### Docs\r\n\r\n- [6f63866](https://github.com/bosun-ai/swiftide/commit/6f6386693f3f6e0328eedaa4fb69cd8d0694574b)  We love feedback <3 by @timonv\r\n\r\n- [7d79b64](https://github.com/bosun-ai/swiftide/commit/7d79b645d2e4f7da05b4c9952a1ceb79583572b3)  Fixing some grammar typos on README.md (#51) by @hectorip\r\n\r\n### New Contributors\r\n* @hectorip made their first contribution in [#51](https://github.com/bosun-ai/swiftide/pull/51)\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.3.0...swiftide-v0.3.1\r\n\r\n\r\n## [swiftide-v0.3.0](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.3.0) - 2024-06-14\r\n\r\n### New features\r\n\r\n- [745b8ed](https://github.com/bosun-ai/swiftide/commit/745b8ed7e58f76e415501e6219ecec65551d1897) *(ingestion_pipeline)*  [**breaking**] Support chained storage backends (#46) by @timonv\r\n\r\n````text\r\nPipeline now supports multiple storage backends. This makes the order of adding storage important. 
Changed the name of the method to reflect that.\r\n````\r\n\r\n**BREAKING CHANGE**: support chained storage backends (#46)\r\n\r\n- [cd055f1](https://github.com/bosun-ai/swiftide/commit/cd055f19096daa802fe7fc34763bfdfd87c1ec41) *(ingestion_pipeline)*  Concurrency improvements (#48) by @timonv\r\n\r\n- [1f0cd28](https://github.com/bosun-ai/swiftide/commit/1f0cd28ce4c02a39dbab7dd3c3f789798644daa3) *(ingestion_pipeline)*  Early return if any error encountered (#49) by @timonv\r\n\r\n- [fa74939](https://github.com/bosun-ai/swiftide/commit/fa74939b30bd31301e3f80c407f153b5d96aa007)  Configurable concurrency for transformers and chunkers (#47) by @timonv\r\n\r\n### Docs\r\n\r\n- [473e60e](https://github.com/bosun-ai/swiftide/commit/473e60ecf9356e2fcabe68245f8bb8be7373cdfb)  Update linkedin link by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.2.1...swiftide-v0.3.0\r\n\r\n\r\n## [swiftide-v0.2.1](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.2.1) - 2024-06-13\r\n\r\n### Docs\r\n\r\n- [cb9b4fe](https://github.com/bosun-ai/swiftide/commit/cb9b4feec1c3654f5067f9478b1a7cf59040a9fe)  Add link to bosun by @timonv\r\n\r\n- [e330ab9](https://github.com/bosun-ai/swiftide/commit/e330ab92d7e8d3f806280fa781f0e1b179d9b900)  Fix documentation link by @timonv\r\n\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/swiftide-v0.2.0...swiftide-v0.2.1\r\n\r\n\r\n## [swiftide-v0.2.0](https://github.com/bosun-ai/swiftide/releases/tag/swiftide-v0.2.0) - 2024-06-13\r\n\r\n### New features\r\n\r\n- [9ec93be](https://github.com/bosun-ai/swiftide/commit/9ec93be110bd047c7e276714c48df236b1a235d7)  Api improvements with example (#10) by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [42f8008](https://github.com/bosun-ai/swiftide/commit/42f80086042c659aef74ddd0ea1463c84650938d)  Clippy & fmt by @timonv\r\n\r\n- [5b7ffd7](https://github.com/bosun-ai/swiftide/commit/5b7ffd7368a2688f70892fe37f28c0baea7ad54f)  Fmt by 
@timonv\r\n\r\n### Docs\r\n\r\n- [95a6200](https://github.com/bosun-ai/swiftide/commit/95a62008be1869e581ecaa0586a48cfbb6a7606a) *(swiftide)*  Documented file swiftide/src/ingestion/ingestion_pipeline.rs (#14) by @bosun-ai[bot]\r\n\r\n- [7abccc2](https://github.com/bosun-ai/swiftide/commit/7abccc2af890c8369a2b46940f35274080b3cb61) *(swiftide)*  Documented file swiftide/src/ingestion/ingestion_stream.rs (#16) by @bosun-ai[bot]\r\n\r\n- [755cd47](https://github.com/bosun-ai/swiftide/commit/755cd47ad00e562818162cf78e6df0c5daa99d14) *(swiftide)*  Documented file swiftide/src/ingestion/ingestion_node.rs (#15) by @bosun-ai[bot]\r\n\r\n- [2ea5a84](https://github.com/bosun-ai/swiftide/commit/2ea5a8445c8df7ef36e5fbc25f13c870e5a4dfd5) *(swiftide)*  Documented file swiftide/src/integrations/openai/mod.rs (#21) by @bosun-ai[bot]\r\n\r\n- [b319c0d](https://github.com/bosun-ai/swiftide/commit/b319c0d484db65d3a4594347e70770b8fac39e10) *(swiftide)*  Documented file swiftide/src/integrations/treesitter/splitter.rs (#30) by @bosun-ai[bot]\r\n\r\n- [29fce74](https://github.com/bosun-ai/swiftide/commit/29fce7437042f1f287987011825b57c58c180696) *(swiftide)*  Documented file swiftide/src/integrations/redis/node_cache.rs (#29) by @bosun-ai[bot]\r\n\r\n- [7229af8](https://github.com/bosun-ai/swiftide/commit/7229af8535daa450ebafd6c45c322222a2dd12a0) *(swiftide)*  Documented file swiftide/src/integrations/qdrant/persist.rs (#24) by @bosun-ai[bot]\r\n\r\n- [6240a26](https://github.com/bosun-ai/swiftide/commit/6240a260b582034970d2ee46da9f5234cf317820) *(swiftide)*  Documented file swiftide/src/integrations/redis/mod.rs (#23) by @bosun-ai[bot]\r\n\r\n- [7688c99](https://github.com/bosun-ai/swiftide/commit/7688c993125a129204739fc7cd8d23d0ebfc9022) *(swiftide)*  Documented file swiftide/src/integrations/qdrant/mod.rs (#22) by @bosun-ai[bot]\r\n\r\n- [d572c88](https://github.com/bosun-ai/swiftide/commit/d572c88f2b4cfc4bbdd7bd5ca93f7fd8460f1cb0) *(swiftide)*  Documented file 
swiftide/src/integrations/qdrant/ingestion_node.rs (#20) by @bosun-ai[bot]\r\n\r\n- [14e24c3](https://github.com/bosun-ai/swiftide/commit/14e24c30d28dc6272a5eb8275e758a2a989d66be) *(swiftide)*  Documented file swiftide/src/ingestion/mod.rs (#28) by @bosun-ai[bot]\r\n\r\n- [502939f](https://github.com/bosun-ai/swiftide/commit/502939fcb5f56b7549b97bb99d4d121bf030835f) *(swiftide)*  Documented file swiftide/src/integrations/treesitter/supported_languages.rs (#26) by @bosun-ai[bot]\r\n\r\n- [a78e68e](https://github.com/bosun-ai/swiftide/commit/a78e68e347dc3791957eeaf0f0adc050aeac1741) *(swiftide)*  Documented file swiftide/tests/ingestion_pipeline.rs (#41) by @bosun-ai[bot]\r\n\r\n- [289687e](https://github.com/bosun-ai/swiftide/commit/289687e1a6c0a9555a6cbecb24951522529f9e1a) *(swiftide)*  Documented file swiftide/src/loaders/mod.rs (#40) by @bosun-ai[bot]\r\n\r\n- [ebd0a5d](https://github.com/bosun-ai/swiftide/commit/ebd0a5dda940c5ef8c2b795ee8ab56e468726869) *(swiftide)*  Documented file swiftide/src/transformers/chunk_code.rs (#39) by @bosun-ai[bot]\r\n\r\n- [fb428d1](https://github.com/bosun-ai/swiftide/commit/fb428d1e250eded80d4edc8ccc0c9a9b840fc065) *(swiftide)*  Documented file swiftide/src/transformers/metadata_qa_text.rs (#36) by @bosun-ai[bot]\r\n\r\n- [305a641](https://github.com/bosun-ai/swiftide/commit/305a64149f015539823d748915e42ad440a7b4b4) *(swiftide)*  Documented file swiftide/src/transformers/openai_embed.rs (#35) by @bosun-ai[bot]\r\n\r\n- [c932897](https://github.com/bosun-ai/swiftide/commit/c93289740806d9283ba488dd640dad5e4339e07d) *(swiftide)*  Documented file swiftide/src/transformers/metadata_qa_code.rs (#34) by @bosun-ai[bot]\r\n\r\n- [090ef1b](https://github.com/bosun-ai/swiftide/commit/090ef1b38684afca8dbcbfe31a8debc2328042e5) *(swiftide)*  Documented file swiftide/src/integrations/openai/simple_prompt.rs (#19) by @bosun-ai[bot]\r\n\r\n- [7cfcc83](https://github.com/bosun-ai/swiftide/commit/7cfcc83eec29d8bed44172b497d4468b0b67d293)  Update 
readme template links and fix template by @timonv\r\n\r\n- [a717f3d](https://github.com/bosun-ai/swiftide/commit/a717f3d5a68d9c79f9b8d85d8cb8979100dc3949)  Template links should be underscores by @timonv\r\n\r\n### New Contributors\r\n* @bosun-ai[bot] made their first contribution in [#19](https://github.com/bosun-ai/swiftide/pull/19)\r\n\r\n**Full Changelog**: https://github.com/bosun-ai/swiftide/compare/v0.1.0...swiftide-v0.2.0\r\n\r\n\r\n## [v0.1.0](https://github.com/bosun-ai/swiftide/releases/tag/v0.1.0) - 2024-06-13\r\n\r\n### New features\r\n\r\n- [2a6e503](https://github.com/bosun-ai/swiftide/commit/2a6e503e8abdab83ead7b8e62f39e222fa9f45d1) *(doc)*  Setup basic readme (#5) by @timonv\r\n\r\n- [b8f9166](https://github.com/bosun-ai/swiftide/commit/b8f9166e1d5419cf0d2cc6b6f0b2378241850574) *(fluyt)*  Significant tracing improvements (#368) by @timonv\r\n\r\n````text\r\n* fix(fluyt): remove unnecessary cloning and unwraps\r\n\r\n  * fix(fluyt): also set target correctly on manual spans\r\n\r\n  * fix(fluyt): do not capture raw result\r\n\r\n  * feat(fluyt): nicer tracing for ingestion pipeline\r\n\r\n  * fix(fluyt): remove instrumentation on lazy methods\r\n\r\n  * feat(fluyt): add useful metadata to the root span\r\n\r\n  * fix(fluyt): fix dangling spans in ingestion pipeline\r\n\r\n  * fix(fluyt): do not log codebase in rag utils\r\n````\r\n\r\n- [0986136](https://github.com/bosun-ai/swiftide/commit/098613622a7018318f2fffe0d51cd17822bf2313) *(fluyt/code_ops)*  Add languages to chunker and range for chunk size (#334) by @timonv\r\n\r\n````text\r\n* feat(fluyt/code_ops): add more treesitter languages\r\n\r\n  * fix: clippy + fmt\r\n\r\n  * feat(fluyt/code_ops): implement builder and support range\r\n\r\n  * feat(fluyt/code_ops): implement range limits for code chunking\r\n\r\n  * feat(fluyt/indexing): code chunking supports size\r\n````\r\n\r\n- [f10bc30](https://github.com/bosun-ai/swiftide/commit/f10bc304b0b2e28281c90e57b6613c274dc20727) 
*(ingestion_pipeline)*  Default concurrency is the number of cpus (#6) by @timonv\r\n\r\n- [7453ddc](https://github.com/bosun-ai/swiftide/commit/7453ddc387feb17906ae851a17695f4c8232ee19)  Replace databuoy with new ingestion pipeline (#322) by @timonv\r\n\r\n- [054b560](https://github.com/bosun-ai/swiftide/commit/054b560571b4a4398a551837536fb8fbff13c149)  Fix build and add feature flags for all integrations by @timonv\r\n\r\n### Bug fixes\r\n\r\n- [fdf4be3](https://github.com/bosun-ai/swiftide/commit/fdf4be3d0967229a9dd84f568b0697fea4ddd341) *(fluyt)*  Ensure minimal tracing by @timonv\r\n\r\n- [389b0f1](https://github.com/bosun-ai/swiftide/commit/389b0f12039f29703bc8bb71919b8067fadf5a8e)  Add debug info to qdrant setup by @timonv\r\n\r\n- [bb905a3](https://github.com/bosun-ai/swiftide/commit/bb905a30d871ea3b238c3bc5cfd1d96724c8d4eb)  Use rustls on redis and log errors by @timonv\r\n\r\n- [458801c](https://github.com/bosun-ai/swiftide/commit/458801c16f9111c1070878c3a82a319701ae379c)  Properly connect to redis over tls by @timonv\r\n\r\n### Miscellaneous\r\n\r\n- [ce6e465](https://github.com/bosun-ai/swiftide/commit/ce6e465d4fb12e2bbc7547738b5fbe5133ec2d5a) *(fluyt)*  Add verbose log on checking if index exists by @timonv\r\n\r\n- [6967b0d](https://github.com/bosun-ai/swiftide/commit/6967b0d5b6221f7620161969865fb31959fc93b8)  Make indexing extraction compile by @tinco\r\n\r\n- [f595f3d](https://github.com/bosun-ai/swiftide/commit/f595f3dae88bb4da5f4bbf6c5fe4f04abb4b7db3)  Add rust-toolchain on stable by @timonv\r\n\r\n- [da004c6](https://github.com/bosun-ai/swiftide/commit/da004c6fcf82579c3c75414cb9f04f02530e2e31)  Start cleaning up dependencies by @timonv\r\n\r\n- [cccdaf5](https://github.com/bosun-ai/swiftide/commit/cccdaf567744d58e0ee8ffcc8636f3b35090778f)  Remove more unused dependencies by @timonv\r\n\r\n- [7ee8799](https://github.com/bosun-ai/swiftide/commit/7ee8799aeccc56fb0c14dbe68a7126cabfb40dd3)  Remove more crates and update by @timonv\r\n\r\n- 
[951f496](https://github.com/bosun-ai/swiftide/commit/951f496498b35f7687fb556e5bf7f931a662ff8a)  Clean up more crates by @timonv\r\n\r\n- [1f17d84](https://github.com/bosun-ai/swiftide/commit/1f17d84cc218602a480b27974f23f64c4269134f)  Cargo update by @timonv\r\n\r\n- [730d879](https://github.com/bosun-ai/swiftide/commit/730d879e76c867c2097aef83bbbfa1211a053bdc)  Create LICENSE by @timonv\r\n\r\n- [44524fb](https://github.com/bosun-ai/swiftide/commit/44524fb51523291b9137fbdcaff9133a9a80c58a)  Restructure repository and rename (#3) by @timonv\r\n\r\n````text\r\n* chore: move traits around\r\n\r\n  * chore: move crates to root folder\r\n\r\n  * chore: restructure and make it compile\r\n\r\n  * chore: remove infrastructure\r\n\r\n  * fix: make it compile\r\n\r\n  * fix: clippy\r\n\r\n  * chore: remove min rust version\r\n\r\n  * chore: cargo update\r\n\r\n  * chore: remove code_ops\r\n\r\n  * chore: settle on swiftide\r\n````\r\n\r\n- [e717b7f](https://github.com/bosun-ai/swiftide/commit/e717b7f0b1311b11ed4690e7e11d9fdf53d4a81b)  Update issue templates by @timonv\r\n\r\n- [8e22e0e](https://github.com/bosun-ai/swiftide/commit/8e22e0ef82fffa4f907b0e2cccd1c4e010ffbd01)  Cleanup by @timonv\r\n\r\n- [4d79d27](https://github.com/bosun-ai/swiftide/commit/4d79d27709e3fed32c1b1f2c1f8dbeae1721d714)  Tests, tests, tests (#4) by @timonv\r\n\r\n- [1036d56](https://github.com/bosun-ai/swiftide/commit/1036d565d8d9740ab55995095d495e582ce643d8)  Configure cargo toml (#7) by @timonv\r\n\r\n- [0ae98a7](https://github.com/bosun-ai/swiftide/commit/0ae98a772a751ddc60dd1d8e1606f9bdab4e04fd)  Cleanup Cargo keywords by @timonv\r\n\r\n### Refactor\r\n\r\n- [0d342ea](https://github.com/bosun-ai/swiftide/commit/0d342eab747bc5f44adaa5b6131c30c09b1172a2)  Models as first class citizens (#318) by @timonv\r\n\r\n````text\r\n* refactor: refactor common datastructures to /models\r\n\r\n  * refactor: promote to first class citizens\r\n\r\n  * fix: clippy\r\n\r\n  * fix: remove duplication in http 
handler\r\n\r\n  * fix: clippy\r\n\r\n  * fix: fmt\r\n\r\n  * feat: update for latest change\r\n\r\n  * fix(fluyt/models): doctest\r\n````\r\n\r\n\r\n\r\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contribution guidelines\n\nSwiftide is at an early stage, and we are aware that we still lack features the wider community may need. Contributions are very welcome. :tada:\n\nIndexing and querying are performance-sensitive tasks. Please make sure to consider allocations and performance when contributing.\n\nAI-generated code is welcome and not frowned upon. Please be genuine and think critically about what you add.\n\nAI agents should read [AGENTS.md](AGENTS.md) for workspace layout, commands, and expectations tailored to agents.\n\n## Feature requests and feedback\n\nWe love them; please let us know what you would like. Use one of the templates provided.\n\n## Code design\n\n* Simple, thin wrappers with sane defaults\n* Provide a builder (derive_builder) for easy customization\n* Keep Rust complexity (Arc/Box/Lifetimes/Pinning ...) encapsulated and away from library users\n* Adhere to [Rust API naming](https://rust-lang.github.io/api-guidelines/naming.html) as much as possible\n\n## Bug reports\n\nIt happens, but we still love them.\n\n## Submitting pull requests\n\nIf you have a great idea, please fork the repo and create a pull request. You can also simply open an issue with the tag \"enhancement\".\nDon't forget to give the project a star! Thanks again!\n\nIf you just want to contribute (bless you!), see [our issues](https://github.com/bosun-ai/swiftide/issues).\n\n1. Fork the Project\n2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)\n3. Commit your Changes (`git commit -m 'feat: Add some AmazingFeature'`)\n4. Push to the Branch (`git push origin feature/AmazingFeature`)\n5. Open a Pull Request\n\nMake sure that:\n\n* Public functions are documented in code\n* Documentation is updated in the [user documentation](https://github.com/bosun-ai/swiftide-website)\n* Tests are added\n* Performance is verified with benchmarks where applicable\n"
  },
  {
    "path": "Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[workspace]\nmembers = [\"swiftide\", \"swiftide-*\", \"examples\", \"benchmarks\"]\ndefault-members = [\"swiftide\", \"swiftide-*\"]\nresolver = \"2\"\n\n[workspace.package]\nversion = \"0.32.1\"\nedition = \"2024\"\nlicense = \"MIT\"\nreadme = \"README.md\"\nkeywords = [\"llm\", \"rag\", \"ai\", \"data\", \"openai\"]\ndescription = \"Fast, streaming indexing, query, and agentic LLM applications in Rust\"\ncategories = [\"asynchronous\"]\nrepository = \"https://github.com/bosun-ai/swiftide\"\nhomepage = \"https://swiftide.rs\"\n\n[profile.dev.package]\ninsta.opt-level = 3\nsimilar.opt-level = 3\n\n[workspace.dependencies]\nanyhow = { version = \"1.0\", default-features = false }\nthiserror = { version = \"2.0\", default-features = false }\nasync-trait = { version = \"0.1\", default-features = false }\nderive_builder = { version = \"0.20\", default-features = true }\nfs-err = { version = \"3.1\", default-features = false }\nfutures-util = { version = \"0.3\", default-features = true }\ntokio = { version = \"1.46\", features = [\n  \"rt-multi-thread\",\n  \"time\",\n], default-features = false }\ntokio-stream = { version = \"0.1\", default-features = false, features = [\n  \"time\",\n] }\ntokio-util = { version = \"0.7\", default-features = false }\ntracing = { version = \"0.1\", features = [\n  \"log\",\n  \"attributes\",\n], default-features = false }\nnum_cpus = { version = \"1.17\", default-features = false }\npin-project = { version = \"1.1\", default-features = false }\nitertools = { version = \"0.14\", default-features = true }\nserde = { version = \"1.0\", features = [\n  \"derive\",\n  \"std\",\n], default-features = false }\nserde_json = { version = \"1.0\", default-features = false, features = [\"std\"] }\nstrum = { version = \"0.28\", default-features = false }\nstrum_macros = { version = \"0.28\", default-features = false }\nlazy_static = { version = \"1.5\", default-features = false }\nchrono = { 
version = \"0.4\", default-features = false }\nindoc = { version = \"2.0\", default-features = false }\nregex = { version = \"1.11\", default-features = false }\nuuid = { version = \"1.18\", features = [\n  \"v3\",\n  \"v4\",\n  \"serde\",\n], default-features = false }\ndyn-clone = { version = \"1.0\", default-features = false }\nconvert_case = { version = \"0.11\", default-features = false }\nbase64 = { version = \"0.22\", default-features = false, features = [\"std\"] }\n\n# Mcp\nrmcp = { version = \"0.17\", default-features = false, features = [\n  \"base64\",\n  \"macros\",\n  \"server\",\n] }\nschemars = { version = \"1.0\", default-features = false }\n\n# Integrations\nspider = { version = \"2.45\", default-features = false }\nasync-openai = { version = \">=0.33.0\", default-features = false }\nqdrant-client = { version = \"1.17\", default-features = false, features = [\n  \"serde\",\n] }\nfluvio = { version = \"0.50.1\", default-features = false }\nrdkafka = { version = \"0.39.0\", features = [\"cmake-build\"] }\nlancedb = { version = \"0.26\", default-features = false, features = [\"remote\"] }\n# Needs to stay in sync with whatever lancedb uses\narrow-array = { version = \"57.1\", default-features = false }\nparquet = { version = \"57.1\", default-features = false, features = [\"async\"] }\nredb = { version = \"3.1\", default-features = false }\nsqlx = { version = \"0.8\", features = [\n  \"postgres\",\n  \"uuid\",\n], default-features = false }\naws-config = { version = \"1.8\", default-features = true }\npgvector = { version = \"0.4\", features = [\"sqlx\"], default-features = false }\naws-credential-types = { version = \"1.2\", default-features = false }\naws-sdk-bedrockruntime = { version = \"1.126\", default-features = false }\naws-smithy-types = { version = \"1.3\", default-features = false }\ncriterion = { version = \"0.8\", default-features = false }\ndarling = { version = \"0.23\", default-features = false }\ndeadpool = { version = \"0.13\", 
default-features = false }\ndocument-features = { version = \"0.2\" }\nfastembed = { version = \"5.5\", default-features = false }\nflv-util = { version = \"0.5\", default-features = false }\nhtmd = { version = \"0.5\", default-features = false }\nignore = { version = \"0.4\", default-features = false }\nproc-macro2 = { version = \"1.0\", default-features = false }\nquote = { version = \"1.0\", default-features = false }\nredis = { version = \"1.0\", default-features = false }\nreqwest = { version = \"0.13\", default-features = false }\nsecrecy = { version = \"0.10\", default-features = false }\nsyn = { version = \"2.0\", default-features = false }\ntera = { version = \"1.20\", default-features = false }\ntext-splitter = { version = \"0.29\", default-features = false }\ntracing-subscriber = { version = \"0.3\", default-features = true }\ntree-sitter = { version = \"0.26\", default-features = false, features = [\"std\"] }\ntree-sitter-java = { version = \"0.23\", default-features = false }\ntree-sitter-javascript = { version = \"0.25\", default-features = false }\ntree-sitter-python = { version = \"0.25\", default-features = false }\ntree-sitter-ruby = { version = \"0.23\", default-features = false }\ntree-sitter-rust = { version = \"0.24\", default-features = false }\ntree-sitter-typescript = { version = \"0.23\", default-features = false }\ntree-sitter-go = { version = \"0.25\", default-features = false }\ntree-sitter-solidity = { version = \"1.2\", default-features = false }\ntree-sitter-c = { version = \"0.24\", default-features = false }\ntree-sitter-cpp = { version = \"0.23\", default-features = false }\ntree-sitter-elixir = { version = \"0.3.4\", default-features = false }\ntree-sitter-html = { version = \"0.23\", default-features = false }\ntree-sitter-php = { version = \"0.24\", default-features = false }\ntree-sitter-c-sharp = { version = \"0.23\", default-features = false }\nasync-anthropic = { version = \"0.6.0\", default-features = false }\nduckdb = { 
version = \"1\", default-features = false }\nlibduckdb-sys = { version = \"1\", default-features = false }\nmetrics = { version = \"0.24\", default-features = false }\ntiktoken-rs = { version = \"0.9\", default-features = false }\nreqwest-eventsource = { version = \"0.6\", default-features = false }\n\n# Testing\ntest-log = { version = \"0.2\" }\ntestcontainers = { version = \"0.27\", features = [\"http_wait\"] }\ntestcontainers-modules = { version = \"0.15\" }\nmockall = { version = \"0.14\" }\ntemp-dir = { version = \"0.2\" }\nwiremock = { version = \"0.6\" }\ntest-case = { version = \"3.3\" }\npretty_assertions = { version = \"1.4\" }\ninsta = { version = \"1.45\", features = [\"yaml\", \"filters\"] }\neventsource-stream = { version = \"0.2\" }\n\n[workspace.lints.rust]\nunsafe_code = \"forbid\"\nunexpected_cfgs = { level = \"warn\", check-cfg = [\n  'cfg(coverage,coverage_nightly)',\n] }\n\n[workspace.lints.clippy]\ncargo = { level = \"warn\", priority = -1 }\npedantic = { level = \"warn\", priority = -1 }\nblocks_in_conditions = \"allow\"\nmust_use_candidate = \"allow\"\nmodule_name_repetitions = \"allow\"\nmissing_fields_in_debug = \"allow\"\n# Should be fixed asap\nmultiple_crate_versions = \"allow\"\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2024 Bosun.ai\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "<details>\n  <summary>Table of Contents</summary>\n\n<!--toc:start-->\n\n- [What is Swiftide?](#what-is-swiftide)\n  - [High level features](#high-level-features)\n- [Latest updates on our blog :fire:](#latest-updates-on-our-blog-fire)\n- [Examples](#examples)\n- [Vision](#vision)\n- [Features](#features)\n  - [In detail](#in-detail)\n- [Getting Started](#getting-started)\n  - [Prerequisites](#prerequisites)\n  - [Installation](#installation)\n- [Usage and concepts](#usage-and-concepts)\n  - [Indexing](#indexing)\n  - [Querying](#querying)\n- [Contributing](#contributing)\n- [Core Team Members](#core-team-members)\n- [License](#license)\n<!--toc:end-->\n\n</details>\n\n<a name=\"readme-top\"></a>\n\n<!-- PROJECT SHIELDS -->\n<!--\n*** I'm using markdown \"reference style\" links for readability.\n*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).\n*** See the bottom of this document for the declaration of the reference variables\n*** for contributors-url, forks-url, etc. 
This is an optional, concise syntax you may use.\n*** https://www.markdownguide.org/basic-syntax/#reference-style-links\n-->\n\n![CI](https://img.shields.io/github/actions/workflow/status/bosun-ai/swiftide/test.yml?style=flat-square)\n![Coverage Status](https://img.shields.io/coverallsCoverage/github/bosun-ai/swiftide?style=flat-square)\n[![Crate Badge]][Crate]\n[![Docs Badge]][API Docs]\n[![Contributors][contributors-shield]][contributors-url]\n[![Stargazers][stars-shield]][stars-url]\n![Discord](https://img.shields.io/discord/1257672801553354802?style=flat-square&link=https%3A%2F%2Fdiscord.gg%2F3jjXYen9UY)\n[![MIT License][license-shield]][license-url]\n[![LinkedIn][linkedin-shield]][linkedin-url]\n\n<!-- PROJECT LOGO -->\n<br />\n<div align=\"center\">\n  <a href=\"https://github.com/bosun-ai/swiftide\">\n    <img src=\"https://raw.githubusercontent.com/bosun-ai/swiftide/master/images/logo.png\" alt=\"Logo\" width=\"250\" height=\"250\">\n  </a>\n\n  <h3 align=\"center\">Swiftide</h3>\n\n  <p align=\"center\">\nFast, streaming indexing, query, and agentic LLM applications in Rust\n    <br />\n    <a href=\"https://swiftide.rs\"><strong>Read more on swiftide.rs »</strong></a>\n    <br />\n    <br />\n    <!-- <a href=\"https://github.com/bosun-ai/swiftide\">View Demo</a> -->\n    <a href=\"https://docs.rs/swiftide/latest/swiftide/\">API Docs</a>\n    ·\n    <a href=\"https://github.com/bosun-ai/swiftide/issues/new?labels=bug&template=bug_report.md\">Report Bug</a>\n    ·\n    <a href=\"https://github.com/bosun-ai/swiftide/issues/new?labels=enhancement&template=feature_request.md\">Request Feature</a>\n    ·\n    <a href=\"https://discord.gg/3jjXYen9UY\">Discord</a>\n  </p>\n</div>\n\n<!-- ABOUT THE PROJECT -->\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n## What is Swiftide?\n\n<!-- [![Product Name Screen Shot][product-screenshot]](https://example.com) -->\n\nSwiftide is a Rust library for building LLM applications. 
It scales from performing a simple prompt completion, to building fast, streaming indexing and querying pipelines, to building agents that can use tools and call other agents.\n\n### High level features\n\n- Simple primitives for common LLM tasks\n- Build fast, streaming indexing and querying pipelines\n- Easily build agents, mix and match with previously built pipelines\n- A modular and extendable API, with minimal abstractions\n- Integrations with popular LLMs and storage providers\n- Ready-to-use pipeline transformations or bring your own\n- Build graph-like workflows with Tasks\n- [Langfuse](https://langfuse.com) support\n\n<div align=\"center\">\n    <img src=\"https://raw.githubusercontent.com/bosun-ai/swiftide/master/images/overview.png\" alt=\"Swiftide overview\" width=\"100%\" >\n</div>\n\nPart of the [bosun.ai](https://bosun.ai) project, an upcoming platform for autonomous code improvement.\n\nWe <3 feedback: project ideas, suggestions, and complaints are very welcome. Feel free to open an issue or contact us on [discord](https://discord.gg/3jjXYen9UY).\n\n> [!CAUTION]\n> Swiftide is under heavy development and can have breaking changes. Documentation might fall short of all features and may, despite our efforts, be slightly outdated. We recommend always keeping an eye on our [GitHub](https://github.com/bosun-ai/swiftide) and [API documentation](https://docs.rs/swiftide/latest/swiftide/). 
If you find an issue or have any kind of feedback, we'd love to hear from you.\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n## Latest updates on our blog :fire:\n\n- [Swiftide 0.31 - Tasks, Langfuse, Multi-Modal, and more](http://blog.bosun.ai/swiftide-0-31/)\n- [Swiftide 0.27 - Easy human-in-the-loop flows for agentic AI](http://blog.bosun.ai/swiftide-0-27/)\n- [Swiftide 0.26 - Streaming agents](http://blog.bosun.ai/swiftide-0-26/)\n- [Releasing kwaak with kwaak](https://bosun.ai/posts/releasing-kwaak-with-kwaak/)\n- [Swiftide 0.16 - AI Agents in Rust](https://bosun.ai/posts/swiftide-0-16/)\n- [Rust in LLM based tools for performance](https://bosun.ai/posts/rust-for-genai-performance/)\n- [Evaluate Swiftide pipelines with Ragas](https://bosun.ai/posts/evaluating-swiftide-with-ragas/) (2024-09-15)\n- [Release - Swiftide 0.12](https://bosun.ai/posts/swiftide-0-12/) (2024-09-13)\n- [Local code intel with Ollama, FastEmbed and OpenTelemetry](https://bosun.ai/posts/ollama-and-telemetry/) (2024-09-04)\n\nMore on our [blog](https://blog.bosun.ai/).\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n## Examples\n\nIndexing a local code project, chunking into smaller pieces, enriching the nodes with metadata, and persisting into [Qdrant](https://qdrant.tech):\n\n```rust\nindexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]))\n        .with_default_llm_client(openai_client.clone())\n        .filter_cached(Redis::try_from_url(\n            redis_url,\n            \"swiftide-examples\",\n        )?)\n        .then_chunk(ChunkCode::try_for_language_and_chunk_size(\n            \"rust\",\n            10..2048,\n        )?)\n        .then(MetadataQACode::default())\n        .then(move |node| my_own_thing(node))\n        .then_in_batch(Embed::new(openai_client.clone()))\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                .vector_size(1536)\n             
   .build()?,\n        )\n        .run()\n        .await?;\n```\n\nQuerying the indexed project for an example of how to use the query pipeline:\n\n```rust\nquery::Pipeline::default()\n    .then_transform_query(GenerateSubquestions::from_client(\n        openai_client.clone(),\n    ))\n    .then_transform_query(Embed::from_client(\n        openai_client.clone(),\n    ))\n    .then_retrieve(qdrant.clone())\n    .then_answer(Simple::from_client(openai_client.clone()))\n    .query(\"How can I use the query pipeline in Swiftide?\")\n    .await?;\n```\n\nRunning an agent that can search code:\n\n```rust\n#[swiftide::tool(\n    description = \"Searches code\",\n    param(name = \"code_query\", description = \"The code query\")\n)]\nasync fn search_code(\n    context: &dyn AgentContext,\n    code_query: &str,\n) -> Result<ToolOutput, ToolError> {\n    let command_output = context\n        .executor()\n        .exec_cmd(&Command::shell(format!(\"rg '{code_query}'\")))\n        .await?;\n\n    Ok(command_output.into())\n}\n\nagents::Agent::builder()\n    .llm(&openai)\n    .tools(vec![search_code()])\n    .build()?\n    .query(\"In what file can I find an example of a swiftide agent?\")\n    .await?;\n```\n\nAgents loop over LLM calls, tool calls, and lifecycle hooks until a final answer is reached.\n\n_You can find more detailed examples in [/examples](https://github.com/bosun-ai/swiftide/tree/master/examples)_\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n## Vision\n\nOur goal is to create a fast, extendable platform for building LLM applications in Rust, with an easy-to-use and easy-to-extend API, to further the development of automated AI applications.\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n## Features\n\n- Simple primitives for common LLM tasks\n- Fast, modular streaming indexing pipeline with async, parallel processing\n- Experimental query pipeline\n- Experimental agent framework\n- A variety of loaders, transformers, semantic 
chunkers, embedders, and more\n- Bring your own transformers by extending straightforward traits or using a closure\n- Splitting and merging pipelines\n- Jinja-like templating for prompts\n- Store into multiple backends\n- Integrations with OpenAI, Groq, Gemini, Anthropic, Redis, Qdrant, Ollama, FastEmbed-rs, Fluvio, LanceDB, and tree-sitter\n- Evaluate pipelines with RAGAS\n- Sparse vector support for hybrid search\n- `tracing` is supported for logging and tracing; see /examples and the `tracing` crate for more information.\n- Tracing layer for exporting to Langfuse\n\n### In detail\n\n| **Feature**                                  | **Details**                                                                                                                                                          |\n| -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| **Supported Large Language Model providers** | OpenAI (and Azure) <br> Anthropic <br> Gemini <br> OpenRouter <br> AWS Bedrock (Converse API) <br> Groq - All models <br> Ollama - All models                 |\n| **Agents**                           | All the boilerplate for autonomous agents so you don't have to                                                                                     |\n| **Tasks** | Build graph-like workflows with tasks, combining all the above to build complex applications                                                                                     |\n| **Loading data**                             | Files <br> Scraping <br> Fluvio <br> Parquet <br> Kafka <br> Other pipelines and streams                                                                                        |\n| **Example and pre-built transformers and metadata generation**     | Generate questions and answers for both text and code (HyDE) <br> Summaries, 
titles and queries via an LLM <br> Extract definitions and references with tree-sitter |\n| **Splitting and chunking**                   | Markdown <br> Text (text_splitter) <br> Code (with tree-sitter)                                                                                                      |\n| **Storage**                                  | Qdrant <br> Redis <br> LanceDB <br> Postgres <br> DuckDB                                                                                                                                       |\n| **Query pipeline**                           | Similarity and hybrid search, query and response transformations, and evaluation                                                                                     |\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n<!-- GETTING STARTED -->\n\n## Getting Started\n\n### Prerequisites\n\nMake sure you have the Rust toolchain installed; [rustup](https://rustup.rs) is the recommended approach.\n\nTo use OpenAI, an API key is required. Note that by default `async_openai` uses the `OPENAI_API_KEY` environment variable.\n\nOther integrations might have their own requirements.\n\n### Installation\n\n1. Set up a new Rust project\n2. Add swiftide\n\n   ```sh\n   cargo add swiftide\n   ```\n\n3. Enable the features of the integrations you would like to use in your `Cargo.toml`\n4. Write a pipeline (see our examples and documentation)\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n<!-- USAGE EXAMPLES -->\n\n## Usage and concepts\n\nBefore building your streams, you need to enable and configure any integrations required. See /examples.\n\n_We have a lot of examples; please refer to /examples and the [Documentation](https://docs.rs/swiftide/latest/swiftide/)_\n\n> [!NOTE]\n> No integrations are enabled by default as some are code-heavy. We recommend cherry-picking the integrations you need. 
By convention, feature flags have the same name as the integration they represent.\n\n### Indexing\n\nAn indexing stream starts with a Loader that emits Nodes. For instance, with the FileLoader each file is a Node.\n\nYou can then slice and dice, augment, and filter nodes. Each kind of step in the pipeline requires a different trait, which makes the pipeline easy to extend.\n\nNodes are generic over their inner type. This is a transition in progress, but when you bring your own transformers, feel free to slice and dice. The inner type can change midway through the pipeline.\n\n- **from_loader** `(impl Loader)` starting point of the stream, creates and emits Nodes\n- **filter_cached** `(impl NodeCache)` filters cached nodes\n- **then** `(impl Transformer)` transforms the node and puts it on the stream\n- **then_in_batch** `(impl BatchTransformer)` transforms multiple nodes and puts them on the stream\n- **then_chunk** `(impl ChunkerTransformer)` transforms a single node and emits multiple nodes\n- **then_store_with** `(impl Storage)` stores the nodes in a storage backend; this step can be chained\n\nAdditionally, several generic transformers are implemented. They take implementers of `SimplePrompt` and `EmbedModel` to do their work.\n\n> [!WARNING]\n> Because the pipeline is fast, chunking before adding metadata triggers rate limit errors on OpenAI very quickly, especially with faster models like gpt-5-nano. Be aware. The `async-openai` crate provides an exponential backoff strategy. If that is still a problem, there is also a decorator that supports streaming in `swiftide_core/indexing_decorators`.\n\n### Querying\n\nA query stream starts with a search strategy. In the query pipeline a `Query` goes through several stages. Transformers and retrievers work together to get the right context into a prompt, before generating an answer. Transformers and Retrievers operate on different stages of the Query via a generic state machine. 
Additionally, the search strategy is generic over the pipeline, and Retrievers need to be implemented specifically for each strategy.\n\nThat sounds like a lot, but tl;dr: the query pipeline is _fully and strongly typed_.\n\n- **Pending** The query has not been executed, and can be further transformed with transformers\n- **Retrieved** Documents have been retrieved, and can be further transformed to provide context for an answer\n- **Answered** The query is done\n\nQuery pipelines can also be evaluated, e.g. with [Ragas](https://ragas.io).\n\nSimilar to the indexing pipeline, each step is governed by simple traits, and closures implement these traits as well.\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n<!-- ROADMAP -->\n\n## Contributing\n\nSwiftide is in a very early stage, and we are aware that we lack features for the wider community. Contributions are very welcome. :tada:\n\nIf you have a great idea, please fork the repo and create a pull request. You can also simply open an issue with the tag \"enhancement\".\nDon't forget to give the project a star! Thanks again!\n\nIndexing and querying are performance-sensitive tasks. Please make sure to consider allocations and performance when contributing.\n\nAI-generated code is welcome and not frowned upon. Please be genuine and think critically about what you add.\n\nIf you just want to contribute (bless you!), see [our issues](https://github.com/bosun-ai/swiftide/issues) or join us on Discord.\n\n1. Fork the Project\n2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)\n3. Commit your Changes (`git commit -m 'feat: Add some AmazingFeature'`)\n4. Push to the Branch (`git push origin feature/AmazingFeature`)\n5. 
Open a Pull Request\n\nAI Agents can refer to [AGENTS.md](AGENTS.md) for workspace layout, commands, and expectations tailored to agents.\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n## Core Team Members\n\n<table>\n  <tr>\n    <td align=\"center\">\n      <a href=\"https://github.com/timonv\">\n        <img\n          src=\"https://avatars.githubusercontent.com/u/49373?s=100\"\n          width=\"100px;\"\n          alt=\"\"\n        />\n        <br /><sub><b>timonv</b></sub>\n        <br /><sub>open for swiftide consulting</sub>\n      </a>\n    </td>\n    <td align=\"center\">\n      <a href=\"https://github.com/tinco\">\n        <img\n          src=\"https://avatars.githubusercontent.com/u/22532?s=100\"\n          width=\"100px;\"\n          alt=\"\"\n        />\n        <br /><sub><b>tinco</b></sub>\n        <br /><br />\n      </a>\n    </td>\n  </tr>\n</table>\n\n<!-- LICENSE -->\n\n## License\n\nDistributed under the MIT License. See `LICENSE` for more information.\n\n<p align=\"right\">(<a href=\"#readme-top\">back to top</a>)</p>\n\n<!-- MARKDOWN LINKS & IMAGES -->\n<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->\n\n[contributors-shield]: https://img.shields.io/github/contributors/bosun-ai/swiftide.svg?style=flat-square\n[contributors-url]: https://github.com/bosun-ai/swiftide/graphs/contributors\n[stars-shield]: https://img.shields.io/github/stars/bosun-ai/swiftide.svg?style=flat-square\n[stars-url]: https://github.com/bosun-ai/swiftide/stargazers\n[license-shield]: https://img.shields.io/github/license/bosun-ai/swiftide.svg?style=flat-square\n[license-url]: https://github.com/bosun-ai/swiftide/blob/master/LICENSE.txt\n[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=flat-square&logo=linkedin&colorB=555\n[linkedin-url]: https://www.linkedin.com/company/bosun-ai\n[Crate Badge]: 
https://img.shields.io/crates/v/swiftide?logo=rust&style=flat-square&logoColor=E05D44&color=E05D44\n[Crate]: https://crates.io/crates/swiftide\n[Docs Badge]: https://img.shields.io/docsrs/swiftide?logo=rust&style=flat-square&logoColor=E05D44\n[API Docs]: https://docs.rs/swiftide\n"
  },
  {
    "path": "benchmarks/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"benchmarks\"\npublish = false\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\"full\"] }\nswiftide = { path = \"../swiftide\", features = [\"all\", \"redb\"] }\nserde_json = { workspace = true }\ncriterion = { workspace = true, features = [\"html_reports\", \"async_tokio\"] }\nanyhow = { workspace = true }\nfutures-util = { workspace = true }\ntestcontainers = { workspace = true, features = [\"blocking\"] }\ntemp-dir = { workspace = true }\n\n[[bench]]\nname = \"fileloader\"\npath = \"fileloader.rs\"\nharness = false\n\n[[bench]]\nname = \"index-readme-local\"\npath = \"local_pipeline.rs\"\nharness = false\n\n\n[[bench]]\nname = \"node-cache\"\npath = \"node_cache_comparison.rs\"\nharness = false\n"
  },
  {
    "path": "benchmarks/fileloader.rs",
    "content": "use std::hint::black_box;\n\nuse anyhow::Result;\nuse criterion::{Criterion, criterion_group, criterion_main};\nuse futures_util::stream::{StreamExt, TryStreamExt};\nuse swiftide::traits::Loader;\n\nasync fn run_fileloader(num_files: usize) -> Result<usize> {\n    let mut total_nodes = 0;\n    let mut stream = swiftide::indexing::loaders::FileLoader::new(\"./benchmarks/fileloader.rs\")\n        .with_extensions(&[\"rs\"])\n        .into_stream()\n        .take(num_files);\n\n    while stream.try_next().await?.is_some() {\n        total_nodes += 1;\n    }\n    assert!(total_nodes == num_files);\n    Ok(total_nodes)\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    c.bench_function(\"load_1\", |b| b.iter(|| run_fileloader(black_box(1))));\n    c.bench_function(\"load_10\", |b| b.iter(|| run_fileloader(black_box(10))));\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "benchmarks/local_pipeline.rs",
    "content": "use anyhow::Result;\nuse criterion::{Criterion, criterion_group, criterion_main};\nuse swiftide::{\n    indexing::Pipeline,\n    indexing::loaders::FileLoader,\n    indexing::persist::MemoryStorage,\n    indexing::transformers::{ChunkMarkdown, Embed},\n    integrations::fastembed::FastEmbed,\n};\n\nasync fn run_pipeline() -> Result<()> {\n    Pipeline::from_loader(FileLoader::new(\"README.md\").with_extensions(&[\"md\"]))\n        .then_chunk(ChunkMarkdown::from_chunk_range(20..256))\n        .then_in_batch(Embed::new(FastEmbed::builder().batch_size(10).build()?))\n        .then_store_with(MemoryStorage::default())\n        .run()\n        .await\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    c.bench_function(\"run_local_pipeline\", |b| b.iter(run_pipeline));\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "benchmarks/node_cache_comparison.rs",
    "content": "use anyhow::Result;\nuse criterion::{BenchmarkId, Criterion, criterion_group, criterion_main};\nuse swiftide::indexing::transformers::ChunkCode;\nuse swiftide::{\n    indexing::{Pipeline, loaders::FileLoader, persist::MemoryStorage},\n    traits::NodeCache,\n};\nuse temp_dir::TempDir;\nuse testcontainers::Container;\nuse testcontainers::{\n    GenericImage,\n    core::{IntoContainerPort, WaitFor},\n    runners::SyncRunner,\n};\n\nasync fn run_pipeline(node_cache: Box<dyn NodeCache<Input = String>>) -> Result<()> {\n    Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]))\n        .filter_cached(node_cache)\n        .then_chunk(ChunkCode::try_for_language_and_chunk_size(\"rust\", 10..256)?)\n        .then_store_with(MemoryStorage::default())\n        .run()\n        .await\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let redis_container = start_redis();\n\n    let redis_url = format!(\n        \"redis://{host}:{port}\",\n        host = redis_container.get_host().unwrap(),\n        port = redis_container.get_host_port_ipv4(6379).unwrap()\n    );\n\n    let redis: Box<dyn NodeCache<Input = String>> = Box::new(\n        swiftide::integrations::redis::Redis::try_from_url(redis_url, \"criterion\").unwrap(),\n    );\n\n    let tempdir = TempDir::new().unwrap();\n    let redb: Box<dyn NodeCache<Input = String>> = Box::new(\n        swiftide::integrations::redb::Redb::builder()\n            .database_path(tempdir.child(\"criterion\"))\n            .build()\n            .unwrap(),\n    );\n\n    let runtime = tokio::runtime::Builder::new_multi_thread()\n        .enable_all()\n        .build()\n        .unwrap();\n\n    for node_cache in [(redis, \"redis\"), (redb, \"redb\")] {\n        c.bench_with_input(\n            BenchmarkId::new(\"node_cache\", node_cache.1),\n            &node_cache,\n            |b, s| {\n                let cache_clone = s.0.clone();\n                runtime.spawn_blocking(move || async move { 
cache_clone.clear().await.unwrap() });\n\n                b.to_async(&runtime).iter(|| run_pipeline(s.0.clone()))\n            },\n        );\n    }\n}\n\nfn start_redis() -> Container<GenericImage> {\n    GenericImage::new(\"redis\", \"7.2.4\")\n        .with_exposed_port(6379.tcp())\n        .with_wait_for(WaitFor::message_on_stdout(\"Ready to accept connections\"))\n        .start()\n        .expect(\"Redis started\")\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "benchmarks/output.txt",
    "content": "test load_1 ... bench:           6 ns/iter (+/- 0)\n\ntest load_10 ... bench:           6 ns/iter (+/- 0)\n\ntest run_local_pipeline ... bench:         846 ns/iter (+/- 7)\n\n"
  },
  {
    "path": "cliff.toml",
    "content": "[remote.github]\nowner = \"bosun-ai\"\nrepo = \"swiftide\"\n\n[git]\ncommit_parsers = [\n  { message = \"(r|R)elease\", skip = true },\n  { message = \"^(feat|fix|perf|chore)\\\\(ci\\\\)\", group = \"<!-- 3 -->Miscellaneous\" },\n  { message = \"^feat*\", group = \"<!-- 0 -->New features\" },\n  { message = \"^fix*\", group = \"<!-- 1 -->Bug fixes\" },\n  { message = \"^perf*\", group = \"<!-- 2 -->Performance\" },\n  { message = \"^chore*\", group = \"<!-- 3 -->Miscellaneous\" },\n]\n\n[changelog]\n# changelog header\nheader = \"\"\"\n# Changelog\n\nAll notable changes to this project will be documented in this file.\n\"\"\"\nbody = \"\"\"\n{%- if not version %}\n## [unreleased]\n{% else -%}\n## [{{ version }}](https://github.com/bosun-ai/swiftide/releases/tag/{{ version }}) - {{ timestamp | date(format=\"%Y-%m-%d\") }}\n{% endif -%}\n\n{% macro commit(commit) -%}\n- [{{ commit.id | truncate(length=7, end=\"\") }}]({{ \"https://github.com/bosun-ai/swiftide/commit/\" ~ commit.id }}) \\\n{% if commit.scope %}*({{commit.scope | default(value = \"uncategorized\") | lower }})* {% endif %}\\\n{%- if commit.breaking %} [**breaking**]{% endif %} \\\n{{ commit.message | upper_first | trim }}\\\n{% if commit.remote.username %} by @{{ commit.remote.username }}{%- endif -%}\\\n{%- if commit.links %} \\\n   in {% for link in commit.links %}[{{link.text}}]({{link.href}}) {% endfor -%}\\\n{% endif %}\n{%- if commit.body and commit.remote.username and commit.remote.username is not containing(\"[bot]\") %}\n\n````text {#- 4 backticks escape any backticks in body #}\n{{commit.body | indent(prefix=\"  \") }}\n````\n{%- endif %}\n{%- if commit.breaking_description %}\n\n**BREAKING CHANGE**: {{ commit.breaking_description }}\n\n{%- endif %}\n{% endmacro -%}\n\n{% for group, commits in commits | group_by(attribute=\"group\") %}\n### {{ group | striptags | trim | upper_first }}\n{% for commit in commits | filter(attribute=\"scope\") | sort(attribute=\"scope\") %}\n{{ 
self::commit(commit=commit) }}\n{%- endfor -%}\n{% for commit in commits %}\n{%- if not commit.scope %}\n{{ self::commit(commit=commit) }}\n{%- endif -%}\n{%- endfor -%}\n{%- endfor %}\n\n{%- if github.contributors -%}\n{% if github.contributors | filter(attribute=\"is_first_time\", value=true) | length != 0 %}\n### New Contributors\n{%- endif %}\\\n{% for contributor in github.contributors | filter(attribute=\"is_first_time\", value=true) %}\n* @{{ contributor.username }} made their first contribution\n{%- if contributor.pr_number %} in \\\n[#{{ contributor.pr_number }}]({{ self::remote_url() }}/pull/{{ contributor.pr_number }}) \\\n{%- endif %}\n{%- endfor -%}\n{% endif -%}\n\n{% if version %}\n{% if previous.version %}\n**Full Changelog**: {{ self::remote_url() }}/compare/{{ previous.version }}...{{ version }}\n{% endif %}\n{% else -%}\n  {% raw %}\\n{% endraw %}\n{% endif %}\n\n{%- macro remote_url() -%}\n{%- if remote.github -%}\nhttps://github.com/{{ remote.github.owner }}/{{ remote.github.repo }}\\\n{% else -%}\nhttps://github.com/bosun-ai/swiftide\n{%- endif -%}\n{% endmacro %}\n\"\"\" # template for the changelog body\n# https://keats.github.io/tera/docs/#introduction\n# note that the - before / after the % controls whether whitespace is rendered between each line.\n# Getting this right so that the markdown renders with the correct number of lines between headings\n# code fences and list items is pretty finicky. Note also that the 4 backticks in the commit macro\n# is intentional as this escapes any backticks in the commit body.\n\n\n# remove the leading and trailing whitespace from the template\ntrim = false\n# changelog footer\n"
  },
  {
    "path": "deny.toml",
    "content": "[graph]\nall-features = true\n\n[licenses]\nconfidence-threshold = 0.8\nallow = [\n  \"Apache-2.0\",\n  \"BSD-2-Clause\",\n  \"BSD-3-Clause\",\n  \"ISC\",\n  \"MIT\",\n  \"Unicode-DFS-2016\",\n  \"MPL-2.0\",\n  \"Apache-2.0 WITH LLVM-exception\",\n  \"Unlicense\",\n  \"CC0-1.0\",\n  \"zlib-acknowledgement\",\n  \"Zlib\",\n  \"0BSD\",\n  \"Unicode-3.0\",\n  \"NCSA\",\n]\nexceptions = [{ allow = [\"OpenSSL\"], crate = \"ring\" }]\n\n[advisories]\nversion = 2\nignore = [\n  { id = \"RUSTSEC-2023-0086\", reason = \"Ignore a security adivisory on lexical-core\" },\n  { id = \"RUSTSEC-2021-0141\", reason = \"Dotenv is used by spider\" },\n  { id = \"RUSTSEC-2024-0384\", reason = \"Instant is unmaintained\" },\n  { id = \"RUSTSEC-2024-0421\", reason = \"Older version of idna used by reqwest\" },\n]\n\n[bans]\nmultiple-versions = \"allow\"\n\n[sources]\nunknown-registry = \"deny\"\nunknown-git = \"warn\"\nallow-registry = [\"https://github.com/rust-lang/crates.io-index\"]\n\n[[licenses.clarify]]\ncrate = \"ring\"\n# SPDX considers OpenSSL to encompass both the OpenSSL and SSLeay licenses\n# https://spdx.org/licenses/OpenSSL.html\n# ISC - Both BoringSSL and ring use this for their new files\n# MIT - \"Files in third_party/ have their own licenses, as described therein. The MIT\n# license, for third_party/fiat, which, unlike other third_party directories, is\n# compiled into non-test libraries, is included below.\"\n# OpenSSL - Obviously\nexpression = \"ISC AND MIT AND OpenSSL\"\nlicense-files = [{ path = \"LICENSE\", hash = 0xbd0eed23 }]\n"
  },
  {
    "path": "examples/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-examples\"\npublish = false\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\ntokio = { workspace = true, features = [\"full\"] }\nswiftide = { path = \"../swiftide/\", features = [\n  \"all\",\n  \"scraping\",\n  \"aws-bedrock\",\n  \"groq\",\n  \"ollama\",\n  \"fluvio\",\n  \"kafka\",\n  \"lancedb\",\n  \"pgvector\",\n  \"swiftide-agents\",\n  \"dashscope\",\n  \"mcp\",\n  \"anthropic\",\n  \"gemini\",\n  \"metrics\",\n  \"langfuse\",\n] }\nswiftide-macros = { path = \"../swiftide-macros\" }\ntracing-subscriber = { workspace = true, features = [\"fmt\", \"env-filter\"] }\nserde_json = { workspace = true }\nspider = { workspace = true }\nfluvio = { workspace = true }\ntemp-dir = { workspace = true }\nanyhow = { workspace = true }\nfutures-util = { workspace = true }\nsqlx = { workspace = true }\nswiftide-test-utils = { path = \"../swiftide-test-utils\" }\ntracing = { workspace = true }\nserde = { workspace = true }\nrmcp = { workspace = true, features = [\n  \"transport-child-process\",\n  \"client\",\n  \"server\",\n] }\nmetrics = { workspace = true }\nschemars.workspace = true\nbase64 = { workspace = true }\n\n\n[[example]]\ndoc-scrape-examples = true\nname = \"index-codebase\"\npath = \"index_codebase.rs\"\n\n[[example]]\nname = \"index-codebase-reduced-context\"\npath = \"index_codebase_reduced_context.rs\"\n\n[[example]]\ndoc-scrape-examples = true\nname = \"fastembed\"\npath = \"fastembed.rs\"\n\n[[example]]\ndoc-scrape-examples = true\nname = \"index-redis\"\npath = \"index_into_redis.rs\"\n\n[[example]]\ndoc-scrape-examples = true\nname = \"index-markdown-metadata\"\npath = \"index_markdown_lots_of_metadata.rs\"\n\n[[example]]\ndoc-scrape-examples = true\nname = 
\"scraping-index\"\npath = \"scraping_index_to_markdown.rs\"\n\n[[example]]\ndoc-scrape-examples = true\nname = \"aws-bedrock\"\npath = \"aws_bedrock.rs\"\n\n[[example]]\nname = \"aws-bedrock-agent\"\npath = \"aws_bedrock_agent.rs\"\n\n[[example]]\ndoc-scrape-examples = true\nname = \"store-multiple-vectors\"\npath = \"store_multiple_vectors.rs\"\n\n[[example]]\nname = \"index-groq\"\npath = \"index_groq.rs\"\n\n[[example]]\nname = \"index-ollama\"\npath = \"index_ollama.rs\"\n\n[[example]]\nname = \"query-pipeline\"\npath = \"query_pipeline.rs\"\n\n[[example]]\nname = \"hybrid-search\"\npath = \"hybrid_search.rs\"\n\n[[example]]\nname = \"fluvio\"\npath = \"fluvio.rs\"\n\n[[example]]\nname = \"kakfa\"\npath = \"kafka.rs\"\n\n[[example]]\nname = \"lancedb\"\npath = \"lancedb.rs\"\n\n[[example]]\nname = \"describe-image\"\npath = \"describe_image.rs\"\n\n[[example]]\nname = \"hello-agents\"\npath = \"hello_agents.rs\"\n\n[[example]]\nname = \"index-md-pgvector\"\npath = \"index_md_into_pgvector.rs\"\n\n[[example]]\nname = \"dashscope\"\npath = \"dashscope.rs\"\n\n[[example]]\nname = \"reranking\"\npath = \"reranking.rs\"\n\n[[example]]\nname = \"agents-mcp\"\npath = \"agents_mcp_tools.rs\"\n\n[[example]]\nname = \"agents-resume\"\npath = \"agents_resume.rs\"\n\n[[example]]\nname = \"streaming-agents\"\npath = \"streaming_agents.rs\"\n\n[[example]]\nname = \"agents-hitl\"\npath = \"agents_with_human_in_the_loop.rs\"\n\n[[example]]\nname = \"usage-metrics\"\npath = \"usage_metrics.rs\"\n\n[[example]]\nname = \"tasks\"\npath = \"tasks.rs\"\n\n[[example]]\nname = \"agent-can-fail-custom-schema\"\npath = \"agent_can_fail_custom_schema.rs\"\n\n[[example]]\nname = \"stop-with-args-custom-schema\"\npath = \"stop_with_args_custom_schema.rs\"\n\n[[example]]\nname = \"responses-api\"\npath = \"responses_api.rs\"\n\n[[example]]\nname = \"responses-api-reasoning\"\npath = \"responses_api_reasoning.rs\"\n\n[[example]]\nname = \"structured-prompt\"\npath = 
\"structured_prompt.rs\"\n\n[[example]]\nname = \"langfuse\"\npath = \"langfuse.rs\"\n\n[[example]]\nname = \"tool-custom-schema\"\npath = \"tool_custom_schema.rs\"\n"
  },
  {
    "path": "examples/agent_can_fail_custom_schema.rs",
    "content": "//! Demonstrates how to replace the default failure arguments for `AgentCanFail` with a custom\n//! JSON schema and capture the structured failure payload when the agent stops.\n//!\n//! Set the `OPENAI_API_KEY` environment variable before running the example. The agent is guided\n//! to use the `task_failed` tool with the schema defined below whenever it cannot complete the\n//! task.\nuse anyhow::Result;\nuse schemars::{JsonSchema, Schema, schema_for};\nuse serde::{Deserialize, Serialize};\nuse serde_json::{self, to_string_pretty};\nuse swiftide::agents::tools::control::AgentCanFail;\nuse swiftide::agents::{Agent, StopReason};\nuse swiftide::traits::Tool;\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[serde(rename_all = \"snake_case\")]\nenum FailureCategory {\n    MissingDependency,\n    PermissionDenied,\n    UnexpectedRegression,\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[serde(rename_all = \"snake_case\")]\nenum RemediationStatus {\n    Planned,\n    Blocked,\n    Complete,\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[schemars(deny_unknown_fields)]\nstruct FailureReport {\n    category: FailureCategory,\n    summary: String,\n    impact: String,\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    recommended_actions: Vec<String>,\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    remediation_status: Option<RemediationStatus>,\n}\n\nfn failure_schema() -> Schema {\n    schema_for!(FailureReport)\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt::init();\n\n    let schema = failure_schema();\n    let failure_tool = AgentCanFail::with_parameters_schema(schema.clone());\n\n    println!(\n        \"task_failed tool schema:\\n{}\",\n        to_string_pretty(&failure_tool.tool_spec())?,\n    );\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o-mini\")\n   
     .default_embed_model(\"text-embedding-3-small\")\n        .build()?;\n\n    let mut builder = Agent::builder();\n    builder\n        .llm(&openai)\n        .tools([failure_tool.clone()])\n        .on_stop(|_, reason, _| {\n            Box::pin(async move {\n                if let StopReason::AgentFailed(Some(payload)) = reason {\n                    let json = to_string_pretty(&payload).unwrap();\n                    println!(\"agent reported failure:\\n{json}\");\n                }\n                Ok(())\n            })\n        });\n\n    if let Some(prompt) = builder.system_prompt_mut() {\n        prompt\n            .with_role(\"Incident response coordinator\")\n            .with_guidelines([\n                \"If the task cannot be completed, call the `task_failed` tool using the provided JSON schema.\",\n                \"Populate all required fields and list at least one `recommended_actions` entry.\",\n                \"Clearly document the impact so downstream teams can prioritize remediation.\",\n            ])\n            .with_constraints([\"Do not claim success when blockers remain unresolved.\"]);\n    }\n\n    let mut agent = builder.build()?;\n\n    agent\n        .query_once(\n            \"You must restore last night's database backup, but the only backup file is corrupted and no redundant copy exists. Report the failure.\",\n        )\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/agents_mcp_tools.rs",
    "content": "//! This is an example of how to build a Swiftide agent with tools using the MCP protocol.\n//!\n//! The agent in this example prints all messages using a channel.\nuse anyhow::Result;\nuse rmcp::{\n    ServiceExt as _,\n    model::{ClientInfo, Implementation},\n    transport::{ConfigureCommandExt as _, TokioChildProcess},\n};\nuse swiftide::agents::{self, tools::mcp::McpToolbox};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    println!(\"Hello, agents!\");\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-4o-mini\")\n        .build()?;\n\n    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<String>();\n\n    tokio::spawn(async move {\n        while let Some(msg) = rx.recv().await {\n            println!(\"{msg}\");\n        }\n    });\n\n    // First set up our client info to identify ourselves to the server\n    let client_info = ClientInfo {\n        client_info: Implementation {\n            name: \"swiftide-example\".into(),\n            version: env!(\"CARGO_PKG_VERSION\").into(),\n            title: None,\n            description: None,\n            icons: None,\n            website_url: None,\n        },\n        ..Default::default()\n    };\n\n    // Use `rmcp` to start the server\n    let running_service = client_info\n        .serve(TokioChildProcess::new(\n            tokio::process::Command::new(\"npx\").configure(|cmd| {\n                cmd.args([\"-y\", \"@modelcontextprotocol/server-everything\"]);\n            }),\n        )?)\n        .await?;\n\n    // Create a toolbox from the running server, and only use the `add` tool\n    //\n    // A toolbox reveals its tools to the swiftide agent the first time it starts (if the state of\n    // the agent was pending). You can add as many toolboxes as you want. MCP services are an\n    // implementation of a toolbox. 
A list of tools is another.\n    let everything_toolbox = McpToolbox::from_running_service(running_service)\n        .with_whitelist([\"add\"])\n        .to_owned();\n\n    agents::Agent::builder()\n        .llm(&openai)\n        // Add the toolbox to the agent\n        .add_toolbox(everything_toolbox)\n        // Every message added by the agent will be printed to stdout\n        .on_new_message(move |_, msg| {\n            let msg = msg.to_string();\n            let tx = tx.clone();\n            Box::pin(async move {\n                tx.send(msg).unwrap();\n                Ok(())\n            })\n        })\n        .build()?\n        .query(\"Use the add tool to add 1 and 2\")\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/agents_resume.rs",
    "content": "//! This example illustrates how to resume an agent from existing messages.\nuse anyhow::Result;\nuse swiftide::agents::{self, DefaultContext};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    println!(\"Hello, agents!\");\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-4o-mini\")\n        .build()?;\n\n    let mut first_agent = agents::Agent::builder().llm(&openai).build()?;\n\n    first_agent.query(\"Say hello!\").await?;\n\n    // Let's store the messages in a database, retrieve them back, and start a new agent\n    let stored_history = serde_json::to_string(&first_agent.history().await?)?;\n    let retrieved_history: Vec<_> = serde_json::from_str(&stored_history)?;\n\n    let restored_context = DefaultContext::default()\n        .with_existing_messages(retrieved_history)\n        .await?\n        .to_owned();\n\n    let mut second_agent = agents::Agent::builder()\n        .llm(&openai)\n        .context(restored_context)\n        // We'll use the one from the first agent; alternatively we could also pop it from the\n        // previous history and add a new one here\n        .no_system_prompt()\n        .build()?;\n\n    second_agent.query(\"What did you say?\").await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/agents_with_human_in_the_loop.rs",
    "content": "//! This is an example of using a human-in-the-loop pattern with Swiftide agents.\n//!\n//! In the example we send the tool call over a channel, and then manually approve it.\n//!\n//! In a more realistic example, you can use other Rust primitives to make it work for your\n//! use case, e.g. make an API request with a callback URL that will add the feedback.\n//!\n//! Both requesting feedback and providing feedback support an optional payload (as a\n//! `serde_json::Value`).\n//!\n//! This allows for more custom workflows, to either display or provide more input to the\n//! underlying tool call.\n//!\n//! For an example on how to implement your own custom wrappers, refer to\n//! `tools::control::ApprovalRequired`\n\nuse anyhow::Result;\nuse swiftide::{\n    agents::{self, StopReason, tools::control::ApprovalRequired},\n    chat_completion::{ToolOutput, errors::ToolError},\n    traits::{AgentContext, ToolFeedback},\n};\nuse tracing_subscriber::EnvFilter;\n\n#[swiftide::tool(\n    description = \"Guess a number\",\n    param(name = \"number\", description = \"Number to guess\")\n)]\nasync fn guess_a_number(\n    _context: &dyn AgentContext,\n    number: usize,\n) -> Result<ToolOutput, ToolError> {\n    let actual_number = 42;\n\n    if number == actual_number {\n        Ok(\"You guessed it!\".into())\n    } else {\n        Ok(\"Try again!\".into())\n    }\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt()\n        .compact()\n        .with_env_filter(EnvFilter::from_default_env())\n        .init();\n\n    println!(\"Hello, agents!\");\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o\")\n        .build()?;\n\n    // ApprovalRequired is a simple wrapper. 
You can also implement your own approval\n    // flows by returning a `ToolOutput::FeedbackRequired` in a tool;\n    // you can then use `has_received_feedback` and `received_feedback` on the context\n    // to build your custom workflow.\n    let guess_with_approval = ApprovalRequired::new(guess_a_number());\n\n    let mut agent = agents::Agent::builder()\n        .llm(&openai)\n        .tools(vec![guess_with_approval])\n        // Every message added by the agent will be printed to stdout\n        .on_new_message(move |_, msg| {\n            println!(\"{msg}\");\n\n            Box::pin(async move { Ok(()) })\n        })\n        .limit(5)\n        .build()?;\n\n    // First query the agent; it will stop with a reason that feedback is required\n    agent\n        .query(\"Guess a number between 0 and 100 using the `guess_a_number` tool\")\n        .await?;\n\n    // The agent stopped; let's get the tool call\n    let Some(StopReason::FeedbackRequired { tool_call, .. }) = agent.stop_reason() else {\n        panic!(\"expected a tool call to approve\")\n    };\n\n    // Alternatively, you can also get the stop reason from the agent state\n    // agent.state().stop_reason().unwrap().feedback_required().unwrap()\n\n    // Register that this tool call is ok.\n    println!(\"Approving number guessing\");\n    agent\n        .context()\n        .feedback_received(tool_call, &ToolFeedback::approved())\n        .await\n        .unwrap();\n\n    // Run the agent again and it will pick up where it stopped.\n    agent.run().await.unwrap();\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/aws_bedrock.rs",
    "content": "//! # [Swiftide] AWS Bedrock example\n//!\n//! This example demonstrates how to use the `AwsBedrock` v2 integration to interact with the\n//! Bedrock service.\n//!\n//! To use Bedrock you will need the following:\n//! - The AWS CLI or environment variables configured\n//! - An AWS region configured\n//! - Access to the Bedrock models you want to use\n//! - A model ID or ARN\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n//! [AWS Bedrock documentation]: https://docs.aws.amazon.com/bedrock/\n\nuse swiftide::{\n    indexing, indexing::loaders::FileLoader, indexing::persist::MemoryStorage,\n    indexing::transformers, integrations,\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let aws_bedrock = integrations::aws_bedrock_v2::AwsBedrock::builder()\n        .default_prompt_model(\"global.anthropic.claude-haiku-4-5-20251001-v1:0\")\n        .build()?;\n\n    let memory_storage = MemoryStorage::default();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"./README.md\"))\n        .log_nodes()\n        .then_chunk(transformers::ChunkMarkdown::from_chunk_range(100..512))\n        .then(transformers::MetadataSummary::new(aws_bedrock.clone()))\n        .then_store_with(memory_storage.clone())\n        .log_all()\n        .run()\n        .await?;\n\n    println!(\"Summaries:\");\n    println!(\n        \"{}\",\n        memory_storage\n            .get_all_values()\n            .await\n            .iter()\n            .filter_map(|n| n.metadata.get(\"Summary\").map(|v| v.to_string()))\n            .collect::<Vec<_>>()\n            .join(\"\\n---\\n\")\n    );\n    Ok(())\n}\n"
  },
  {
    "path": "examples/aws_bedrock_agent.rs",
    "content": "//! # [Swiftide] AWS Bedrock Agent Example\n//!\n//! This example demonstrates a simple agent setup with `AwsBedrock` v2.\n//!\n//! Requirements:\n//! - AWS credentials and region configured (CLI profile or environment variables)\n//! - Access to the Bedrock model you choose\n//! - A model with tool use support (the Claude model below supports this)\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n\nuse anyhow::Result;\nuse schemars::JsonSchema;\nuse serde::{Deserialize, Serialize};\nuse swiftide::{\n    agents,\n    chat_completion::{ToolOutput, errors::ToolError},\n    integrations::aws_bedrock_v2::AwsBedrock,\n    traits::{AgentContext, Command},\n};\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\nstruct FormatTimestampRequest {\n    /// Prefix to prepend to the timestamp.\n    prefix: String,\n    /// Timestamp to format.\n    timestamp: String,\n}\n\n#[swiftide::tool(description = \"Get the current UTC date and time in RFC3339 format\")]\nasync fn current_utc_time(context: &dyn AgentContext) -> Result<ToolOutput, ToolError> {\n    let command_output = context\n        .executor()\n        .exec_cmd(&Command::shell(\"date -u +\\\"%Y-%m-%dT%H:%M:%SZ\\\"\"))\n        .await?;\n\n    Ok(command_output.into())\n}\n\n#[swiftide::tool(\n    description = \"Format a timestamp with a caller-provided prefix\",\n    param(name = \"request\", description = \"Timestamp formatting input\")\n)]\nasync fn format_timestamp(\n    _context: &dyn AgentContext,\n    request: FormatTimestampRequest,\n) -> Result<ToolOutput, ToolError> {\n    Ok(ToolOutput::text(format!(\n        \"{}{}\",\n        request.prefix, request.timestamp\n    )))\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt::init();\n\n    let bedrock = AwsBedrock::builder()\n        .default_prompt_model(\"global.anthropic.claude-sonnet-4-6\")\n        .build()?;\n\n    let mut agent = agents::Agent::builder()\n        .llm(&bedrock)\n       
 .tools(vec![current_utc_time(), format_timestamp()])\n        .on_new_message(|_, msg| {\n            let rendered = msg.to_string();\n            Box::pin(async move {\n                println!(\"{rendered}\");\n                Ok(())\n            })\n        })\n        .limit(6)\n        .build()?;\n\n    agent\n        .query(\n            \"Call current_utc_time once. Then call format_timestamp with prefix \\\"UTC now: \\\" and \\\n             that timestamp. After that, report the formatted result and stop.\",\n        )\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/dashscope.rs",
    "content": "use swiftide::{\n    indexing::{\n        self, EmbeddedField,\n        loaders::FileLoader,\n        transformers::{ChunkMarkdown, Embed, MetadataQAText, metadata_qa_text},\n    },\n    integrations::{dashscope::DashscopeBuilder, lancedb::LanceDB},\n    query::{\n        self,\n        answers::{self},\n        query_transformers::{self},\n        response_transformers,\n    },\n};\nuse temp_dir::TempDir;\n\n#[tokio::main]\nasync fn main() -> anyhow::Result<()> {\n    tracing_subscriber::fmt::init();\n\n    let client = DashscopeBuilder::default()\n        .default_embed_model(\"text-embedding-v2\")\n        .default_prompt_model(\"qwen-long\")\n        .build()?;\n    let tempdir = TempDir::new().unwrap();\n    let lancedb = LanceDB::builder()\n        .uri(tempdir.child(\"lancedb\").to_str().unwrap())\n        .vector_size(1536)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(metadata_qa_text::NAME)\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"md\"]))\n        .with_default_llm_client(client.clone())\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then(MetadataQAText::new(client.clone()))\n        .then_in_batch(Embed::new(client.clone()).with_batch_size(10))\n        .then_store_with(lancedb.clone())\n        .run()\n        .await?;\n\n    let pipeline = query::Pipeline::default()\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(client.clone()))\n        .then_retrieve(lancedb.clone())\n        .then_transform_response(response_transformers::Summary::from_client(client.clone()))\n        .then_answer(answers::Simple::from_client(client.clone()));\n\n    let result = pipeline\n        .query(\"What is swiftide? 
Please provide an elaborate explanation\")\n        .await?;\n\n    println!(\"====\");\n    println!(\"{:?}\", result.answer());\n    Ok(())\n}\n"
  },
  {
    "path": "examples/describe_image.rs",
    "content": "//! Demonstrates passing an image to Chat Completions using a data URL.\n//!\n//! Set the `OPENAI_API_KEY` environment variable before running.\n\nuse anyhow::{Context as _, Result};\nuse base64::{Engine as _, engine::general_purpose};\nuse swiftide::chat_completion::{ChatCompletionRequest, ChatMessage, ChatMessageContentPart};\nuse swiftide::traits::ChatCompletion;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt::init();\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o-mini\")\n        .build()?;\n\n    let image_path = std::path::Path::new(env!(\"CARGO_MANIFEST_DIR\")).join(\"../images/logo.png\");\n    let image_bytes = std::fs::read(&image_path).with_context(|| format!(\"Read {image_path:?}\"))?;\n    let encoded = general_purpose::STANDARD.encode(&image_bytes);\n    let data_url = format!(\"data:image/png;base64,{encoded}\");\n\n    let message = ChatMessage::new_user_with_parts(vec![\n        ChatMessageContentPart::text(\"Describe this image in one sentence.\"),\n        ChatMessageContentPart::image(data_url),\n    ]);\n\n    let request = ChatCompletionRequest::builder()\n        .messages(vec![message])\n        .build()?;\n\n    let response = openai.complete(&request).await?;\n    println!(\n        \"Image description: {}\",\n        response.message().unwrap_or(\"<no response>\")\n    );\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/fastembed.rs",
    "content": "//! # [Swiftide] Indexing Swiftide itself with FastEmbed\n//!\n//! This example demonstrates how to index the Swiftide codebase itself using FastEmbed.\n//!\n//! The pipeline will:\n//! - Load all `.rs` files from the current directory\n//! - Embed the files in batches of 10 using FastEmbed\n//! - Store the nodes in Qdrant\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing,\n    indexing::loaders::FileLoader,\n    indexing::transformers::Embed,\n    integrations::{fastembed::FastEmbed, qdrant::Qdrant},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let qdrant_url = std::env::var(\"QDRANT_URL\")\n        .as_deref()\n        .unwrap_or(\"http://localhost:6334\")\n        .to_owned();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]))\n        .then_in_batch(Embed::new(FastEmbed::builder().batch_size(10).build()?))\n        .then_store_with(\n            Qdrant::try_from_url(qdrant_url)?\n                .batch_size(50)\n                .vector_size(384)\n                .collection_name(\"swiftide-examples-fastembed\".to_string())\n                .build()?,\n        )\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/fluvio.rs",
    "content": "//! # [Swiftide] Loading data from Fluvio\n//!\n//! This example demonstrates how to index records streamed from a Fluvio topic.\n//! Note that for it to work correctly you need to have Fluvio and Qdrant running.\n//!\n//! The pipeline will:\n//! - Load records from the `hello-rust` topic on Fluvio\n//! - Embed the records in batches of 10 using FastEmbed\n//! - Store the nodes in Qdrant\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing::{self, transformers::Embed},\n    integrations::{\n        fastembed::FastEmbed,\n        fluvio::{ConsumerConfigExt, Fluvio},\n        qdrant::Qdrant,\n    },\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    static TOPIC_NAME: &str = \"hello-rust\";\n    static PARTITION_NUM: u32 = 0;\n\n    let loader = Fluvio::builder()\n        .consumer_config_ext(\n            ConsumerConfigExt::builder()\n                .topic(TOPIC_NAME)\n                .partition(PARTITION_NUM)\n                .offset_start(fluvio::Offset::from_end(1))\n                .build()\n                .unwrap(),\n        )\n        .build()\n        .unwrap();\n\n    indexing::Pipeline::from_loader(loader)\n        .then_in_batch(Embed::new(FastEmbed::try_default().unwrap()).with_batch_size(10))\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                
.vector_size(384)\n                .collection_name(\"swiftide-examples\")\n                .build()?,\n        )\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/hello_agents.rs",
    "content": "//! This is an example of how to build a Swiftide agent\n//!\n//! A swiftide agent runs completions in a loop, optionally with tools, to complete a task\n//! autonomously. Agents stop when either the LLM calls the always included `stop` tool, or\n//! (configurable) if the last message in the completion chain was from the assistant.\n//!\n//! Tools can be created by using the `tool` attribute macro as shown here. For more control (e.g.\n//! internal state), there\n//! is also a `Tool` derive macro for convenience. Anything that implements the `Tool` trait can\n//! act as a tool.\n//!\n//! Agents operate on an `AgentContext`, which is responsible for managing the completion history\n//! and providing access to the outside world. For the latter, the context is expected to have a\n//! `ToolExecutor`, which by default runs locally.\n//!\n//! When building the agent, hooks are available to influence the state, completions, and general\n//! behaviour of the agent. Hooks are also traits.\n//!\n//! 
Refer to the api documentation for more detailed information.\nuse anyhow::Result;\nuse schemars::JsonSchema;\nuse serde::{Deserialize, Serialize};\nuse swiftide::{\n    agents,\n    chat_completion::{ToolOutput, errors::ToolError},\n    traits::{AgentContext, Command},\n};\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\nstruct CodeSearchRequest {\n    /// Search query to pass to ripgrep\n    query: String,\n    /// Optional repository root (defaults to the current working directory)\n    repo: Option<String>,\n    /// Optional list of glob filters for the search\n    file_globs: Option<Vec<String>>,\n}\n\n#[swiftide::tool(\n    description = \"Searches code\",\n    param(name = \"request\", description = \"Code search parameters\")\n)]\nasync fn search_code(\n    context: &dyn AgentContext,\n    request: CodeSearchRequest,\n) -> Result<ToolOutput, ToolError> {\n    let repo = request.repo.as_deref().unwrap_or(\".\");\n    let mut command = format!(\"cd {repo} && rg '{query}'\", query = request.query);\n\n    if let Some(globs) = &request.file_globs {\n        for glob in globs {\n            command.push_str(&format!(\" -g '{glob}'\"));\n        }\n    }\n\n    let command_output = context\n        .executor()\n        .exec_cmd(&Command::shell(command))\n        .await?;\n\n    Ok(command_output.into())\n}\n\nconst READ_FILE: &str = \"Read a file\";\n\n#[swiftide::tool(\n    description = READ_FILE,\n    param(name = \"path\", description = \"Path to the file\")\n)]\nasync fn read_file(context: &dyn AgentContext, path: &str) -> Result<ToolOutput, ToolError> {\n    let command_output = context\n        .executor()\n        .exec_cmd(&Command::shell(format!(\"cat {path}\")))\n        .await?;\n\n    Ok(command_output.into())\n}\n\n// The macro understands common Rust types (strings, numbers, bools, vectors, maps, structs, etc.)\n// and automatically derives a JSON Schema via `schemars`. 
If you need to tweak the schema\n// manually, implement the `Tool` trait and attach your own `parameters_schema`.\n//\n#[swiftide::tool(\n    description = \"Guess a number\",\n    param(name = \"number\", description = \"Number to guess\")\n)]\nasync fn guess_a_number(\n    _context: &dyn AgentContext,\n    number: usize,\n) -> Result<ToolOutput, ToolError> {\n    let actual_number = 42;\n\n    if number == actual_number {\n        Ok(\"You guessed it!\".into())\n    } else {\n        Ok(\"Try again!\".into())\n    }\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    println!(\"Hello, agents!\");\n\n    tracing_subscriber::fmt::init();\n\n    let gemini = swiftide::integrations::gemini::Gemini::builder()\n        .default_embed_model(\"gemini-embedding-exp-03-07\")\n        .default_prompt_model(\"gemini-2.0-flash\")\n        .build()?;\n\n    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<String>();\n\n    tokio::spawn(async move {\n        while let Some(msg) = rx.recv().await {\n            println!(\"{msg}\");\n        }\n    });\n\n    agents::Agent::builder()\n        .llm(&gemini)\n        .tools(vec![search_code(), read_file(), guess_a_number()])\n        .before_all(move |_context| {\n            // This is a hook that runs before the agent starts\n            // No native async closures in Rust yet, so we have to use Box::pin\n            Box::pin(async move {\n                println!(\"Hello hook!\");\n                Ok(())\n            })\n        })\n        // Every message added by the agent will be printed to stdout\n        .on_new_message(move |_, msg| {\n            let msg = msg.to_string();\n            let tx = tx.clone();\n            Box::pin(async move {\n                tx.send(msg).unwrap();\n                Ok(())\n            })\n        })\n        .limit(5)\n        .build()?\n        .query(\"In what file can I find an example of a swiftide agent? 
When you are done guess a number and stop\")\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/hybrid_search.rs",
    "content": "//! # [Swiftide] Hybrid search with Qdrant\n//!\n//! This example demonstrates how to do hybrid search with Qdrant using sparse vectors.\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing::{\n        self, EmbeddedField,\n        loaders::FileLoader,\n        transformers::{self, ChunkCode, MetadataQACode},\n    },\n    integrations::{fastembed::FastEmbed, openai, qdrant::Qdrant},\n    query::{self, answers, query_transformers, search_strategies::HybridSearch},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    // Ensure all batching is consistent\n    let batch_size = 64;\n\n    let fastembed_sparse = FastEmbed::try_default_sparse().unwrap().to_owned();\n    let fastembed = FastEmbed::try_default().unwrap().to_owned();\n\n    // Set up openai with the mini model, which is great for indexing\n    let openai = openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o-mini\")\n        .build()\n        .unwrap();\n\n    // Set up qdrant and use the combined fields (metadata + chunks) for both sparse and dense\n    // vectors\n    let qdrant = Qdrant::builder()\n        .batch_size(batch_size)\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .with_sparse_vector(EmbeddedField::Combined)\n        .collection_name(\"swiftide-hybrid-example\")\n        .build()?;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"swiftide-core/\").with_extensions(&[\"rs\"]))\n        // Chunk fairly large as the context window is big\n        .then_chunk(ChunkCode::try_for_language_and_chunk_size(\n            \"rust\",\n            10..2048,\n        )?)\n        // Generate metadata on the code chunks to increase our chances of finding the right code\n        .then(MetadataQACode::from_client(openai.clone()).build().unwrap())\n        
.then_in_batch(\n            transformers::SparseEmbed::new(fastembed_sparse.clone()).with_batch_size(batch_size),\n        )\n        .then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(batch_size))\n        .then_store_with(qdrant.clone())\n        .run()\n        .await?;\n\n    // Use a more sophisticated model for our query\n    let openai = openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o\")\n        .build()\n        .unwrap();\n\n    let query_pipeline = query::Pipeline::from_search_strategy(\n        // Return a large number of documents because we have a large context window\n        // By default it uses the Combined fields, no need to configure\n        HybridSearch::default()\n            .with_top_n(20)\n            .with_top_k(20)\n            .to_owned(),\n    )\n    // Generate subquestions on the initial query to increase our query coverage\n    .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n        openai.clone(),\n    ))\n    // Generate the same embeddings we used for indexing\n    .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n    .then_transform_query(query_transformers::SparseEmbed::from_client(\n        fastembed_sparse.clone(),\n    ))\n    .then_retrieve(qdrant.clone())\n    // Answer with Simple, which either takes the documents as-is (in this case), or any\n    // transformations applied after querying\n    .then_answer(answers::Simple::from_client(openai.clone()));\n\n    let answer = query_pipeline\n        .query(\"What are the different pipelines in Swiftide and how do they work? Provide an elaborate answer with examples.\")\n        .await\n        .unwrap();\n\n    println!(\"{}\", answer.answer());\n\n    // ## Different Pipelines in Swiftide and How They Work\n    //\n    // Swiftide offers multiple pipelines, notably the indexing pipeline and the query pipeline. 
The\n    // functionality of these pipelines is enhanced using traits and components like transformers,\n    // stream handlers, and more. Below we elaborate on the key components and how they become part\n    // of the larger pipeline system:\n    //\n    // ### Indexing Pipeline\n    //\n    // 1. **Transformers**:\n    //     - **Transformer Trait**: Transforms single nodes into single nodes. Mainly used for\n    //       transforming data in a singular manner.\n    //     - **BatchableTransformer Trait**: Transforms a batch of nodes into a stream of nodes,\n    //       useful for bulk processing.\n    //\n    //     ```rust\n    //     #[async_trait]\n    //     pub trait Transformer: Send + Sync {\n    //         async fn transform_node(&self, node: Node) -> Result<Node>;\n    //         fn concurrency(&self) -> Option<usize> { None }\n    //     }\n    //\n    //     #[async_trait]\n    //     impl<F> Transformer for F where F: Fn(Node) -> Result<Node> + Send + Sync {\n    //         async fn transform_node(&self, node: Node) -> Result<Node> {\n    //             self(node)\n    //         }\n    //     }\n    //\n    //     #[async_trait]\n    //     pub trait BatchableTransformer: Send + Sync {\n    //         async fn batch_transform(&self, nodes: Vec<Node>) -> IndexingStream;\n    //         fn concurrency(&self) -> Option<usize> { None }\n    //     }\n    //\n    //     #[async_trait]\n    //     impl<F> BatchableTransformer for F where F: Fn(Vec<Node>) -> IndexingStream + Send + Sync\n    // {         async fn batch_transform(&self, nodes: Vec<Node>) -> IndexingStream {\n    //             self(nodes)\n    //         }\n    //     }\n    //     ```\n    //\n    // 2. **Loaders**:\n    //     - Defines methods for converting a loader into an `IndexingStream`.\n    //\n    //     ```rust\n    //     pub trait Loader {\n    //         fn into_stream(self) -> IndexingStream;\n    //     }\n    //     ```\n    //\n    // 3. 
**Chunker Transformers**:\n    //     - Splits one node into multiple nodes. It's useful for breaking down large nodes into\n    //       smaller, manageable chunks.\n    //\n    //     ```rust\n    //     #[async_trait]\n    //     pub trait ChunkerTransformer: Send + Sync + Debug {\n    //         async fn transform_node(&self, node: Node) -> IndexingStream;\n    //         fn concurrency(&self) -> Option<usize> { None }\n    //     }\n    //     ```\n    //\n    // 4. **IndexingStream**:\n    //     - An asynchronous stream of nodes, used internally by the indexing pipeline to handle\n    //       streams of `Node` items.\n    //\n    //     ```rust\n    //     pub struct IndexingStream {\n    //         #[pin]\n    //         pub(crate) inner: Pin<Box<dyn Stream<Item = Result<Node>> + Send>>,\n    //     }\n    //     ```\n    //\n    // ### Query Pipeline\n    //\n    // 1. **QueryStream**:\n    //     - Handles query streams, ensuring data flows correctly through various query states.\n    //\n    //     ```rust\n    //     pub struct QueryStream<'stream, Q: 'stream> {\n    //         #[pin]\n    //         pub(crate) inner: Pin<Box<dyn Stream<Item = Result<Query<Q>>> + Send + 'stream>>,\n    //         #[pin]\n    //         pub sender: Option<Sender<Result<Query<Q>>>>,\n    //     }\n    //     ```\n    //\n    // 2. 
**Query Handling**:\n    //     - Various state transitions and handling for queries in the pipeline.\n    //\n    //     ```rust\n    //     pub struct Query<State> {\n    //         original: String,\n    //         current: String,\n    //         state: State,\n    //         transformation_history: Vec<TransformationEvent>,\n    //         pub embedding: Option<Embedding>,\n    //         pub sparse_embedding: Option<SparseEmbedding>,\n    //     }\n    //     ```\n    //\n    // ### Extending the Pipeline with Traits\n    //\n    // Swiftide allows developers to extend the pipeline by implementing custom transformers,\n    // loaders, and other components by implementing the respective traits. This design ensures\n    // flexibility and modularity, allowing seamless integration of custom functionality.\n    //\n    // For example, to create a custom transformer:\n    // ```rust\n    // use crate::node::Node;\n    // use anyhow::Result;\n    //\n    // struct MyCustomTransformer;\n    //\n    // #[async_trait]\n    // impl Transformer for MyCustomTransformer {\n    //     async fn transform_node(&self, node: Node) -> Result<Node> {\n    //         // Custom transformation logic here...\n    //         Ok(node)\n    //     }\n    // }\n    // ```\n    //\n    // ### Usage of Prompts in Transformers\n    //\n    // Swiftide utilizes the [`Template`] for templating prompts, making it easy to define and\n    // manage prompts within transformers.\n    //\n    // ```rust\n    // let template = PromptTemplate::try_compiled_from_str(\"hello {{world}}\").await.unwrap();\n    // let prompt = template.to_prompt().with_context_value(\"world\", \"swiftide\");\n    // assert_eq!(prompt.render().await.unwrap(), \"hello swiftide\");\n    // ```\n    //\n    // ### Conclusion\n    //\n    // The Indexing and Query Pipelines in Swiftide are made extensible and modular via traits such\n    // as `Transformer`, `BatchableTransformer`, `Loader`, and more. 
Custom implementations can\n    // seamlessly integrate into the pipeline, providing flexibility in how data is processed,\n    // transformed, and indexed. The use of prompts further enhances the capability to manage\n    // dynamic and templated data within these pipelines.\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_codebase.rs",
    "content": "//! # [Swiftide] Indexing the Swiftide itself example\n//!\n//! This example demonstrates how to index the Swiftide codebase itself.\n//! Note that for it to work correctly you need to have OPENAI_API_KEY set, redis and qdrant\n//! running.\n//!\n//! The pipeline will:\n//! - Load all `.rs` files from the current directory\n//! - Skip any nodes previously processed; hashes are based on the path and chunk (not the\n//!   metadata!)\n//! - Run metadata QA on each chunk; generating questions and answers and adding metadata\n//! - Chunk the code into pieces of 10 to 2048 bytes\n//! - Embed the chunks in batches of 10, Metadata is embedded by default\n//! - Store the nodes in Qdrant\n//!\n//! Note that metadata is copied over to smaller chunks when chunking. When making LLM requests\n//! with lots of small chunks, consider the rate limits of the API.\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing,\n    indexing::LanguageModelWithBackOff,\n    indexing::loaders::FileLoader,\n    indexing::transformers::{ChunkCode, Embed, MetadataQACode},\n    integrations::{self, qdrant::Qdrant, redis::Redis},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-3.5-turbo\")\n        .build()?;\n\n    // Optionally use the backoff decorator to handle rate limits and transient errors.\n    //\n    // This works with streaming as well, async openai does not support this properly yet.\n    let openai_client = LanguageModelWithBackOff::new(openai_client, Default::default());\n\n    let redis_url = std::env::var(\"REDIS_URL\")\n        .as_deref()\n        .unwrap_or(\"redis://localhost:6379\")\n        .to_owned();\n\n    
indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]))\n        .filter_cached(Redis::try_from_url(redis_url, \"swiftide-examples\")?)\n        .then(MetadataQACode::new(openai_client.clone()))\n        .then_chunk(ChunkCode::try_for_language_and_chunk_size(\n            \"rust\",\n            10..2048,\n        )?)\n        .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                .vector_size(1536)\n                .collection_name(\"swiftide-examples\")\n                .build()?,\n        )\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_codebase_reduced_context.rs",
    "content": "//! # [Swiftide] Indexing the Swiftide itself example with reduced context size\n//!\n//! This example demonstrates how to index the Swiftide codebase itself, optimizing for a smaller\n//! context size. Note that for it to work correctly you need to have OPENAI_API_KEY set, redis and\n//! qdrant running.\n//!\n//! The pipeline will:\n//! - Load all `.rs` files from the current directory\n//! - Skip any nodes previously processed; hashes are based on the path and chunk (not the\n//!   metadata!)\n//! - Generate an outline of the symbols defined in each file to be used as context in a later step\n//!   and store it in the metadata\n//! - Chunk the code into pieces of 10 to 2048 bytes\n//! - For each chunk, generate a condensed subset of the symbols outline tailored for that specific\n//!   chunk and store that in the metadata\n//! - Run metadata QA on each chunk; generating questions and answers and adding metadata\n//! - Embed the chunks in batches of 10, Metadata is embedded by default\n//! - Store the nodes in Qdrant\n//!\n//! Note that metadata is copied over to smaller chunks when chunking. When making LLM requests\n//! with lots of small chunks, consider the rate limits of the API.\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! 
[examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::indexing;\nuse swiftide::indexing::loaders::FileLoader;\nuse swiftide::indexing::transformers::{ChunkCode, Embed, MetadataQACode};\nuse swiftide::integrations::{self, qdrant::Qdrant, redis::Redis};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-3.5-turbo\")\n        .build()?;\n\n    let redis_url = std::env::var(\"REDIS_URL\")\n        .as_deref()\n        .unwrap_or(\"redis://localhost:6379\")\n        .to_owned();\n\n    let chunk_size = 2048;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]))\n        .filter_cached(Redis::try_from_url(\n            redis_url,\n            \"swiftide-examples-codebase-reduced-context\",\n        )?)\n        .then(\n            indexing::transformers::OutlineCodeTreeSitter::try_for_language(\n                \"rust\",\n                Some(chunk_size),\n            )?,\n        )\n        .then(MetadataQACode::new(openai_client.clone()))\n        .then_chunk(ChunkCode::try_for_language_and_chunk_size(\n            \"rust\",\n            10..chunk_size,\n        )?)\n        .then(indexing::transformers::CompressCodeOutline::new(\n            openai_client.clone(),\n        ))\n        .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                .vector_size(1536)\n                .collection_name(\"swiftide-examples-codebase-reduced-context\")\n                .build()?,\n        )\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_groq.rs",
    "content": "//! # [Swiftide] Indexing with Groq\n//!\n//! This example demonstrates how to index the Swiftide codebase itself.\n//! Note that for it to work correctly you need to have set the GROQ_API_KEY\n//!\n//! The pipeline will:\n//! - Loads the readme from the project\n//! - Chunk the code into pieces of 10 to 2048 bytes\n//! - Run metadata QA on each chunk with Groq; generating questions and answers and adding metadata\n//! - Embed the chunks in batches of 10, Metadata is embedded by default\n//! - Store the nodes in Memory Storage\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing,\n    indexing::loaders::FileLoader,\n    indexing::persist::MemoryStorage,\n    indexing::transformers::{ChunkMarkdown, Embed, MetadataQAText},\n    integrations,\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let groq_client = integrations::groq::Groq::builder()\n        .default_prompt_model(\"llama3-8b-8192\")\n        .to_owned()\n        .build()?;\n\n    let fastembed = integrations::fastembed::FastEmbed::try_default()?;\n    let memory_store = MemoryStorage::default();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"README.md\"))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then(MetadataQAText::new(groq_client.clone()))\n        .then_in_batch(Embed::new(fastembed).with_batch_size(10))\n        .then_store_with(memory_store.clone())\n        .run()\n        .await?;\n\n    println!(\"Example results:\");\n    println!(\n        \"{}\",\n        memory_store\n            .get_all_values()\n            .await\n            .into_iter()\n            .flat_map(|n| n.metadata.into_values().map(|v| v.to_string()))\n            .collect::<Vec<_>>()\n            .join(\"\\n\")\n    );\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_into_redis.rs",
    "content": "//! # [Swiftide] Indexing the Swiftide itself example\n//!\n//! This example demonstrates how to index the Swiftide codebase itself.\n//! Note that for it to work correctly you need to have OPENAI_API_KEY set, redis and qdrant\n//! running.\n//!\n//! The pipeline will:\n//! - Load all `.rs` files from the current directory\n//! - Skip any nodes previously processed; hashes are based on the path and chunk (not the\n//!   metadata!)\n//! - Run metadata QA on each chunk; generating questions and answers and adding metadata\n//! - Chunk the code into pieces of 10 to 2048 bytes\n//! - Embed the chunks in batches of 10, Metadata is embedded by default\n//! - Store the nodes in Qdrant\n//!\n//! Note that metadata is copied over to smaller chunks when chunking. When making LLM requests\n//! with lots of small chunks, consider the rate limits of the API.\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing, indexing::loaders::FileLoader, indexing::transformers::ChunkCode,\n    integrations::redis::Redis,\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let redis_url = std::env::var(\"REDIS_URL\")\n        .as_deref()\n        .unwrap_or(\"redis://localhost:6379\")\n        .to_owned();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]))\n        .then_chunk(ChunkCode::try_for_language_and_chunk_size(\n            \"rust\",\n            10..2048,\n        )?)\n        .then_store_with(\n            // By default the value is the full node serialized to JSON.\n            // We can customize this by providing a custom function.\n            Redis::try_build_from_url(&redis_url)?\n                .persist_value_fn(|node| Ok(serde_json::to_string(&node.metadata)?))\n                .batch_size(50)\n                .build()?,\n        
)\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_markdown_lots_of_metadata.rs",
    "content": "//! # [Swiftide] Indexing the Swiftide README with lots of metadata\n//!\n//! This example demonstrates how to index the Swiftide README with lots of metadata.\n//!\n//! The pipeline will:\n//! - Load the README.md file from the current directory\n//! - Chunk the file into pieces of 20 to 1024 bytes\n//! - Generate questions and answers for each chunk\n//! - Generate a summary for each chunk\n//! - Generate a title for each chunk\n//! - Generate keywords for each chunk\n//! - Embed each chunk\n//! - Store the nodes in Qdrant\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing,\n    indexing::loaders::FileLoader,\n    indexing::transformers::{\n        ChunkMarkdown, Embed, MetadataKeywords, MetadataQAText, MetadataSummary, MetadataTitle,\n    },\n    integrations::{self, qdrant::Qdrant},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-4o\")\n        .build()?;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"README.md\").with_extensions(&[\"md\"]))\n        .with_concurrency(1)\n        .then_chunk(ChunkMarkdown::from_chunk_range(20..2048))\n        .then(MetadataQAText::new(openai_client.clone()))\n        .then(MetadataSummary::new(openai_client.clone()))\n        .then(MetadataTitle::new(openai_client.clone()))\n        .then(MetadataKeywords::new(openai_client.clone()))\n        .then_in_batch(Embed::new(openai_client.clone()))\n        .log_all()\n        .filter_errors()\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                .vector_size(1536)\n                .collection_name(\"swiftide-examples\")\n                .build()?,\n        )\n  
      .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_md_into_pgvector.rs",
    "content": "/// This example demonstrates how to index markdown into PGVector\nuse std::path::PathBuf;\nuse swiftide::{\n    indexing::{\n        self, EmbeddedField,\n        loaders::FileLoader,\n        transformers::{\n            ChunkMarkdown, Embed, MetadataQAText, metadata_qa_text::NAME as METADATA_QA_TEXT_NAME,\n        },\n    },\n    integrations::{self, fastembed::FastEmbed, pgvector::PgVector},\n    query::{self, answers, query_transformers, response_transformers},\n    traits::SimplePrompt,\n};\n\nasync fn ask_query(\n    llm_client: impl SimplePrompt + Clone + 'static,\n    embed: FastEmbed,\n    vector_store: PgVector,\n    questions: Vec<String>,\n) -> Result<Vec<String>, Box<dyn std::error::Error>> {\n    // By default the search strategy is SimilaritySingleEmbedding\n    // which takes the latest query, embeds it, and does a similarity search\n    //\n    // Pgvector will return an error if multiple embeddings are set\n    //\n    // The pipeline generates subquestions to increase semantic coverage, embeds these in a single\n    // embedding, retrieves the default top_k documents, summarizes them and uses that as context\n    // for the final answer.\n    let pipeline = query::Pipeline::default()\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            llm_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(embed))\n        .then_retrieve(vector_store.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            llm_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(llm_client.clone()));\n\n    let results: Vec<String> = pipeline\n        .query_all(questions)\n        .await?\n        .iter()\n        .map(|result| result.answer().to_string())\n        .collect();\n\n    Ok(results)\n}\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    
tracing_subscriber::fmt::init();\n    tracing::info!(\"Starting PgVector indexing test\");\n\n    // Get the manifest directory path\n    let manifest_dir = std::env::var(\"CARGO_MANIFEST_DIR\").expect(\"CARGO_MANIFEST_DIR not set\");\n\n    // Create a PathBuf to the test dataset from the manifest directory\n    let test_dataset_path = PathBuf::from(manifest_dir).join(\"../README.md\");\n\n    tracing::info!(\"Test Dataset path: {:?}\", test_dataset_path);\n\n    let (_pgv_db_container, pgv_db_url) = swiftide_test_utils::start_postgres().await;\n\n    tracing::info!(\"pgv_db_url :: {:#?}\", pgv_db_url);\n\n    let llm_client = integrations::ollama::Ollama::default()\n        .with_default_prompt_model(\"llama3.2:latest\")\n        .to_owned();\n\n    let fastembed =\n        integrations::fastembed::FastEmbed::try_default().expect(\"Could not create FastEmbed\");\n\n    // Configure PgVector with a default vector size, a single embedding,\n    // and in addition to embedding the text metadata, also store it in a field\n    let pgv_storage = PgVector::builder()\n        .db_url(pgv_db_url)\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(METADATA_QA_TEXT_NAME)\n        .table_name(\"swiftide_pgvector_test\".to_string())\n        .build()\n        .unwrap();\n\n    // Drop the existing test table before running the test\n    tracing::info!(\"Dropping existing test table & index if they exist\");\n    let drop_table_sql = \"DROP TABLE IF EXISTS swiftide_pgvector_test\";\n    let drop_index_sql = \"DROP INDEX IF EXISTS swiftide_pgvector_test_embedding_idx\";\n\n    if let Ok(pool) = pgv_storage.get_pool().await {\n        sqlx::query(drop_table_sql).execute(pool).await?;\n        sqlx::query(drop_index_sql).execute(pool).await?;\n    } else {\n        return Err(\"Failed to get database connection pool\".into());\n    }\n\n    tracing::info!(\"Starting indexing pipeline\");\n\n    
indexing::Pipeline::from_loader(FileLoader::new(test_dataset_path).with_extensions(&[\"md\"]))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then(MetadataQAText::new(llm_client.clone()))\n        .then_in_batch(Embed::new(fastembed.clone()).with_batch_size(100))\n        .then_store_with(pgv_storage.clone())\n        .run()\n        .await?;\n\n    tracing::info!(\"PgVector Indexing completed successfully\");\n\n    let questions: Vec<String> = vec![\n            \"What is SwiftIDE? Provide a clear, comprehensive summary in under 50 words.\".into(),\n            \"How can I use SwiftIDE to connect with the Ethereum blockchain? Please provide a concise, comprehensive summary in less than 50 words.\".into(),\n        ];\n\n    ask_query(\n        llm_client.clone(),\n        fastembed.clone(),\n        pgv_storage.clone(),\n        questions,\n    )\n    .await?\n    .iter()\n    .enumerate()\n    .for_each(|(i, result)| {\n        tracing::info!(\"*** Answer Q{} ***\", i + 1);\n        tracing::info!(\"{}\", result);\n        tracing::info!(\"===X===\");\n    });\n\n    tracing::info!(\"PgVector Indexing & retrieval test completed successfully\");\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/index_ollama.rs",
    "content": "//! # [Swiftide] Indexing with Ollama\n//!\n//! This example demonstrates how to index the Swiftide codebase itself.\n//! Note that for it to work correctly you need to have ollama running on the default local port.\n//!\n//! The pipeline will:\n//! - Loads the readme from the project\n//! - Chunk the code into pieces of 10 to 2048 bytes\n//! - Run metadata QA on each chunk with Ollama; generating questions and answers and adding\n//!   metadata\n//! - Embed the chunks in batches of 10, Metadata is embedded by default\n//! - Store the nodes in Memory Storage\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing,\n    indexing::loaders::FileLoader,\n    indexing::persist::MemoryStorage,\n    indexing::transformers::{ChunkMarkdown, Embed, MetadataQAText},\n    integrations,\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let ollama_client = integrations::ollama::Ollama::default()\n        .with_default_prompt_model(\"llama3.1\")\n        .to_owned();\n\n    let fastembed = integrations::fastembed::FastEmbed::try_default()?;\n    let memory_store = MemoryStorage::default();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"README.md\"))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then(MetadataQAText::new(ollama_client.clone()))\n        .then_in_batch(Embed::new(fastembed).with_batch_size(10))\n        .then_store_with(memory_store.clone())\n        .run()\n        .await?;\n\n    println!(\"Example results:\");\n    println!(\n        \"{}\",\n        memory_store\n            .get_all_values()\n            .await\n            .into_iter()\n            .flat_map(|n| n.metadata.into_values().map(|v| v.to_string()))\n            .collect::<Vec<_>>()\n            .join(\"\\n\")\n    );\n    Ok(())\n}\n"
  },
  {
    "path": "examples/kafka.rs",
    "content": "//! # [Swiftide] Loading data from Kafka\n//!\n//! This example demonstrates how to index data from a Kafka topic and store the data in another\n//! Kafka topic. Note that for it to work correctly you need to have kafka.\n//!\n//! The pipeline will:\n//! - Load messages from a Kafka topic\n//! - Embed the chunks in batches of 10\n//! - Store the nodes in kafka\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing::{self, transformers::Embed},\n    integrations::{\n        fastembed::FastEmbed,\n        kafka::{ClientConfig, Kafka},\n    },\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    static LOADER_TOPIC: &str = \"loader\";\n    static STORAGE_TOPIC: &str = \"storage\";\n\n    let mut client_config = ClientConfig::new();\n    client_config.set(\"bootstrap.servers\", \"localhost:9092\");\n    client_config.set(\"group.id\", \"group_id\");\n    client_config.set(\"auto.offset.reset\", \"earliest\");\n\n    let loader = Kafka::builder()\n        .client_config(client_config.clone())\n        .topic(LOADER_TOPIC)\n        .build()\n        .unwrap();\n\n    let storage = Kafka::builder()\n        .client_config(client_config)\n        .topic(STORAGE_TOPIC)\n        .create_topic_if_not_exists(true)\n        .batch_size(2usize)\n        .build()\n        .unwrap();\n\n    indexing::Pipeline::from_loader(loader)\n        .then_in_batch(Embed::new(FastEmbed::try_default().unwrap()).with_batch_size(10))\n        .then_store_with(storage)\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/lancedb.rs",
    "content": "/// This example demonstrates how to use the LanceDB integration with Swiftide\nuse swiftide::{\n    indexing::{\n        self, EmbeddedField,\n        loaders::FileLoader,\n        transformers::{\n            ChunkMarkdown, Embed, MetadataQAText, metadata_qa_text::NAME as METADATA_QA_TEXT_NAME,\n        },\n    },\n    integrations::{self, lancedb::LanceDB},\n    query::{self, answers, query_transformers, response_transformers},\n};\nuse temp_dir::TempDir;\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-4o-mini\")\n        .build()?;\n\n    let tempdir = TempDir::new().unwrap();\n\n    // Configure lancedb with a default vector size, a single embedding\n    // and in addition to embedding the text metadata, also store it in a field\n    let lancedb = LanceDB::builder()\n        .uri(tempdir.child(\"lancedb\").to_str().unwrap())\n        .vector_size(1536)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(METADATA_QA_TEXT_NAME)\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"README.md\"))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then(MetadataQAText::new(openai_client.clone()))\n        .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n        .then_store_with(lancedb.clone())\n        .run()\n        .await?;\n\n    // By default the search strategy is SimilaritySingleEmbedding\n    // which takes the latest query, embeds it, and does a similarity search\n    //\n    // LanceDB will return an error if multiple embeddings are set\n    //\n    // The pipeline generates subquestions to increase semantic coverage, embeds these in a single\n    // embedding, retrieves 
the default top_k documents, summarizes them and uses that as context\n    // for the final answer.\n    let pipeline = query::Pipeline::default()\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(\n            openai_client.clone(),\n        ))\n        .then_retrieve(lancedb.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            openai_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result = pipeline\n        .query(\"What is swiftide? Please provide an elaborate explanation\")\n        .await?;\n\n    println!(\"{:?}\", result.answer());\n    Ok(())\n}\n"
  },
  {
    "path": "examples/langfuse.rs",
    "content": "//! This is an example of using the langfuse integration with Swiftide.\n//!\n//! Langfuse is a platform for tracking and monitoring LLM usage and performance.\n//!\n//! When the feature `langfuse` is enabled, Swiftide can report tracing information,\n//! usage, inputs, and outputs to langfuse.\n//!\n//! For this to work, you need to set the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY\n//! to the appropriate values. You can also set the LANGFUSE_URL environment variable\n//! to overwrite the default URL (http://localhost:3000).\n//!\n//! You can find more information about langfuse at https://langfuse.com/. On their github they\n//! also have a handy docker compose setup.\n//!\n//! More advanced usage is possible by using the `LangfuseLayer` directly.\nuse anyhow::Result;\nuse swiftide::traits::SimplePrompt;\nuse tracing::level_filters::LevelFilter;\nuse tracing_subscriber::{\n    EnvFilter, Layer as _, layer::SubscriberExt as _, util::SubscriberInitExt as _,\n};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    println!(\"Hello, langfuse!\");\n\n    let fmt_layer = tracing_subscriber::fmt::layer()\n        .compact()\n        .with_target(false)\n        .boxed();\n\n    let langfuse_layer = swiftide::langfuse::LangfuseLayer::default()\n        .with_filter(LevelFilter::DEBUG)\n        .boxed();\n\n    let registry = tracing_subscriber::registry()\n        .with(EnvFilter::from_default_env())\n        .with(vec![fmt_layer, langfuse_layer]);\n\n    registry.init();\n\n    prompt_openai().await?;\n\n    Ok(())\n}\n\n#[tracing::instrument]\nasync fn prompt_openai() -> Result<()> {\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-5\")\n        .build()\n        .unwrap();\n\n    let paris = openai\n        .prompt(\"What is the capital of France?\".into())\n        .await?;\n\n    println!(\"The capital of France is {paris}\");\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/query_pipeline.rs",
    "content": "use swiftide::{\n    indexing::{\n        self,\n        loaders::FileLoader,\n        transformers::{ChunkMarkdown, Embed, MetadataQAText},\n    },\n    integrations::{self, qdrant::Qdrant},\n    query::{self, answers, query_transformers, response_transformers},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-large\")\n        .default_prompt_model(\"gpt-4o\")\n        .build()?;\n\n    let qdrant = Qdrant::builder()\n        .batch_size(50)\n        .vector_size(3072)\n        .collection_name(\"swiftide-examples\")\n        .build()?;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"README.md\"))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then(MetadataQAText::new(openai_client.clone()))\n        .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n        .then_store_with(qdrant.clone())\n        .run()\n        .await?;\n\n    // By default the search strategy is SimilaritySingleEmbedding\n    // which takes the latest query, embeds it, and does a similarity search\n    let pipeline = query::Pipeline::default()\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(\n            openai_client.clone(),\n        ))\n        .then_retrieve(qdrant.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            openai_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result = pipeline\n        .query(\"What is swiftide? Please provide an elaborate explanation\")\n        .await?;\n\n    println!(\"{:?}\", result.answer());\n    Ok(())\n}\n"
  },
  {
    "path": "examples/reranking.rs",
    "content": "/// Demonstrates reranking retrieved documents with fastembed\n///\n/// When reranking, many more documents are retrieved than used for the initial query. Maybe\n/// even from multiple sources.\n///\n/// Reranking compares the relevancy of the documents with the initial query, then filters out\n/// the `top_k` documents.\n///\n/// By default the model uses 'bge-reranker-base'.\nuse swiftide::{\n    indexing::{\n        self,\n        loaders::FileLoader,\n        transformers::{ChunkMarkdown, Embed},\n    },\n    integrations::{self, fastembed, qdrant::Qdrant},\n    query::{self, answers, query_transformers},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o\")\n        .build()?;\n\n    let fastembed = fastembed::FastEmbed::builder().batch_size(10).build()?;\n    let reranker = fastembed::Rerank::builder().top_k(5).build()?;\n\n    let qdrant = Qdrant::builder()\n        .batch_size(50)\n        .vector_size(384)\n        .collection_name(\"swiftide-reranking\")\n        .build()?;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"README.md\"))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n        .then_in_batch(Embed::new(fastembed.clone()))\n        .then_store_with(qdrant.clone())\n        .run()\n        .await?;\n\n    // By default the search strategy is SimilaritySingleEmbedding\n    // which takes the latest query, embeds it, and does a similarity search\n    let pipeline = query::Pipeline::default()\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n        .then_retrieve(qdrant.clone())\n        .then_transform_response(reranker)\n        
.then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result = pipeline\n        .query(\"What is swiftide? Please provide an elaborate explanation\")\n        .await?;\n\n    println!(\"{:?}\", result.answer());\n    Ok(())\n}\n"
  },
  {
    "path": "examples/responses_api.rs",
    "content": "use anyhow::{Context, Result};\nuse futures_util::StreamExt as _;\nuse schemars::JsonSchema;\nuse serde::{Deserialize, Serialize};\nuse std::io::Write as _;\nuse swiftide::{\n    chat_completion::{ChatCompletionRequest, ChatMessage, ToolOutput, errors::ToolError},\n    integrations::openai::{OpenAI, Options},\n    traits::{AgentContext, ChatCompletion, SimplePrompt, StructuredPrompt},\n};\nuse tracing_subscriber::EnvFilter;\n\n#[derive(Debug, Serialize, Deserialize, JsonSchema)]\n#[serde(deny_unknown_fields)]\n#[allow(dead_code)]\nstruct WeatherSummary {\n    description: String,\n}\n\n#[derive(Debug, Serialize, Deserialize, JsonSchema)]\n#[serde(deny_unknown_fields)]\nstruct EchoArgs {\n    message: String,\n}\n\n/// Minimal echo tool used to demonstrate tool calling with the Responses API.\n/// The macro implements the `Tool` trait, derives the JSON schema, and generates\n/// a helper constructor (`echo_tool()`) that returns a boxed tool ready for use.\n#[swiftide::tool(\n    description = \"Echos the provided message back to the caller.\",\n    param(name = \"payload\", description = \"Text to echo back\")\n)]\nasync fn echo_tool(\n    _context: &dyn AgentContext,\n    payload: EchoArgs,\n) -> Result<ToolOutput, ToolError> {\n    Ok(ToolOutput::text(format!(\"Echo: {}\", payload.message)))\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt()\n        .with_env_filter(EnvFilter::from_default_env())\n        .init();\n\n    let openai = OpenAI::builder()\n        .default_prompt_model(\"gpt-4.1-mini\")\n        .default_options(Options::builder().temperature(0.2))\n        .use_responses_api(true)\n        .build()?;\n\n    let greeting = openai\n        .prompt(\"Say hello in one short sentence\".into())\n        .await?;\n    println!(\"Prompt result: {greeting}\");\n\n    let structured: WeatherSummary = openai\n        .structured_prompt(\"Summarise today's weather in Amsterdam as JSON\".into())\n        
.await?;\n    println!(\"Structured result: {structured:?}\");\n\n    let chat_request = ChatCompletionRequest::builder()\n        .messages(vec![\n            ChatMessage::new_system(\"You are a concise assistant.\"),\n            ChatMessage::new_user(\"Share one fun fact about Amsterdam.\"),\n        ])\n        .build()?;\n\n    let completion = openai.complete(&chat_request).await?;\n    println!(\n        \"Complete result: {}\",\n        completion.message().unwrap_or(\"<no message>\")\n    );\n\n    let mut stream = openai.complete_stream(&chat_request).await;\n    print!(\"Streaming result: \");\n    let mut streamed_message = String::new();\n    while let Some(chunk) = stream.next().await {\n        let chunk = chunk?;\n        if let Some(delta) = chunk\n            .delta\n            .as_ref()\n            .and_then(|delta| delta.message_chunk.as_deref())\n        {\n            print!(\"{delta}\");\n            std::io::stdout().flush().ok();\n        }\n\n        if let Some(message) = chunk.message() {\n            streamed_message = message.to_string();\n        }\n    }\n    println!();\n    if streamed_message.is_empty() {\n        println!(\"Full streamed result: <no message>\");\n    } else {\n        println!(\"Full streamed result: {streamed_message}\");\n    }\n\n    let tool_request = ChatCompletionRequest::builder()\n        .messages(vec![\n            ChatMessage::new_system(\n                \"You are a precise assistant. 
Use available tools before replying directly.\",\n            ),\n            ChatMessage::new_user(\n                \"Call the echo tool with the phrase \\\"Hello Responses API\\\" and then summarise the result.\",\n            ),\n        ])\n        .tool(echo_tool())\n        .build()?;\n\n    let tool_completion = openai.complete(&tool_request).await?;\n\n    if let Some(tool_call) = tool_completion\n        .tool_calls()\n        .and_then(|calls| calls.first())\n        .cloned()\n    {\n        println!(\n            \"Assistant requested tool `{}` with arguments {}\",\n            tool_call.name(),\n            tool_call.args().unwrap_or(\"<missing arguments>\")\n        );\n\n        let args_json = tool_call\n            .args()\n            .context(\"echo tool call missing arguments\")?;\n        let args: EchoToolArgs = serde_json::from_str(args_json)?;\n        let tool_output = format!(\"Echo: {}\", args.payload.message);\n\n        let mut follow_up_messages = tool_request.messages().to_vec();\n        follow_up_messages.push(ChatMessage::new_assistant(\n            None::<String>,\n            Some(vec![tool_call.clone()]),\n        ));\n        follow_up_messages.push(ChatMessage::new_tool_output(\n            tool_call.clone(),\n            ToolOutput::text(tool_output),\n        ));\n\n        let follow_up_request = ChatCompletionRequest::builder()\n            .messages(follow_up_messages)\n            .tool(echo_tool())\n            .build()?;\n\n        let final_completion = openai.complete(&follow_up_request).await?;\n        println!(\n            \"Final response after tool call: {}\",\n            final_completion.message().unwrap_or(\"<no message>\")\n        );\n    } else {\n        println!(\n            \"Assistant responded without tool calls: {}\",\n            tool_completion.message().unwrap_or(\"<no message>\")\n        );\n    }\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/responses_api_reasoning.rs",
    "content": "//! Simple agent example that enables reasoning summaries via the Responses API.\n\nuse anyhow::Result;\nuse swiftide::agents::Agent;\nuse swiftide::chat_completion::{ChatMessage, ReasoningItem};\nuse swiftide::integrations::openai::{OpenAI, Options, ReasoningEffort};\nuse tracing_subscriber::EnvFilter;\n\nfn reasoning_summary(reasoning: Option<&[ReasoningItem]>) -> Option<String> {\n    let summary = reasoning\n        .unwrap_or(&[])\n        .iter()\n        .flat_map(|item| item.summary.iter())\n        .cloned()\n        .collect::<Vec<_>>()\n        .join(\"\\n\");\n\n    if summary.is_empty() {\n        None\n    } else {\n        Some(summary)\n    }\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt()\n        .with_env_filter(EnvFilter::from_default_env())\n        .init();\n\n    // Reasoning models require the Responses API. Enabling reasoning effort also asks for a\n    // summary and encrypted reasoning content (enabled by default). If your OpenAI org is not\n    // verified for reasoning access, summaries may be absent. 
Disable with\n    // `reasoning_features(false)` if desired.\n    let openai = OpenAI::builder()\n        .default_prompt_model(\"o3-mini\")\n        .default_options(Options::builder().reasoning_effort(ReasoningEffort::Low))\n        .use_responses_api(true)\n        .build()?;\n\n    let mut agent = Agent::builder()\n        .llm(&openai)\n        .on_new_message(|_, message| {\n            if let ChatMessage::Assistant(content, _) = message\n                && let Some(content) = content.as_deref()\n            {\n                println!(\"Assistant: {content}\");\n            }\n            Box::pin(async move { Ok(()) })\n        })\n        .after_completion(|_, response| {\n            if let Some(summary) = reasoning_summary(response.reasoning.as_deref()) {\n                println!(\"Reasoning summary:\\n{summary}\");\n            }\n\n            let has_encrypted = response\n                .reasoning\n                .as_ref()\n                .is_some_and(|items| items.iter().any(|item| item.encrypted_content.is_some()));\n            println!(\"Encrypted reasoning content present: {has_encrypted}\");\n            Box::pin(async move { Ok(()) })\n        })\n        .build()?;\n\n    agent\n        .query(\"Explain why the sky is blue in one short paragraph.\")\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/scraping_index_to_markdown.rs",
    "content": "//! # [Swiftide] Indexing the Swiftide README with lots of metadata\n//!\n//! This example demonstrates how to index the Swiftide README with lots of metadata.\n//!\n//! The pipeline will:\n//! - Scrape the Bosun website\n//! - Transform the html to markdown\n//! - Chunk the markdown into smaller pieces\n//! - Store the nodes in Memory\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\nuse spider::website::Website;\nuse swiftide::{\n    indexing,\n    indexing::persist::MemoryStorage,\n    indexing::transformers::ChunkMarkdown,\n    integrations::scraping::{HtmlToMarkdownTransformer, ScrapingLoader},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    indexing::Pipeline::from_loader(ScrapingLoader::from_spider(\n        Website::new(\"https://www.bosun.ai/\")\n            .with_limit(1)\n            .to_owned(),\n    ))\n    .then(HtmlToMarkdownTransformer::default())\n    .then_chunk(ChunkMarkdown::from_chunk_range(20..2048))\n    .log_all()\n    .then_store_with(MemoryStorage::default())\n    .run()\n    .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/stop_with_args_custom_schema.rs",
    "content": "//! Demonstrates how to plug a custom JSON schema into the stop tool for an OpenAI-powered agent.\n//!\n//! Set the `OPENAI_API_KEY` environment variable before running the example. The agent guides the\n//! model to call the `stop` tool with a structured payload that matches the schema defined below.\n//! The on-stop hook prints the structured payload that made the agent stop.\nuse anyhow::Result;\nuse schemars::{JsonSchema, Schema, schema_for};\nuse serde::{Deserialize, Serialize};\nuse serde_json::to_string_pretty;\nuse swiftide::agents::tools::control::StopWithArgs;\nuse swiftide::agents::{Agent, StopReason};\nuse swiftide::traits::Tool;\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[serde(rename_all = \"snake_case\")]\nenum TaskStatus {\n    Succeeded,\n    Failed,\n    Cancelled,\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[schemars(deny_unknown_fields)]\nstruct StopPayload {\n    status: TaskStatus,\n    summary: String,\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    details: Option<serde_json::Value>,\n}\n\nfn stop_schema() -> Schema {\n    schema_for!(StopPayload)\n}\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt::init();\n\n    let schema = stop_schema();\n    let stop_tool = StopWithArgs::with_parameters_schema(schema.clone());\n\n    println!(\n        \"stop tool schema:\\n{}\",\n        to_string_pretty(&stop_tool.tool_spec())?,\n    );\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-4o-mini\")\n        .default_embed_model(\"text-embedding-3-small\")\n        .build()?;\n\n    let mut builder = Agent::builder();\n    builder\n        .llm(&openai)\n        .without_default_stop_tool()\n        .tools([stop_tool.clone()])\n        .on_stop(|_, reason, _| {\n            Box::pin(async move {\n                if let StopReason::RequestedByTool(_, payload) = reason\n                 
   && let Some(payload) = payload\n                {\n                    println!(\n                        \"agent stopped with structured payload:\\n{}\",\n                        to_string_pretty(&payload).unwrap_or_else(|_| payload.to_string()),\n                    );\n                }\n                Ok(())\n            })\n        });\n\n    if let Some(prompt) = builder.system_prompt_mut() {\n        prompt\n            .with_role(\"Workflow finisher\")\n            .with_guidelines([\n                \"Summarize the work that was just completed and recommend next actions.\",\n                \"When you are done, call the `stop` tool using the provided JSON schema.\",\n                \"Always include the `details` field; use null when there is nothing to add.\",\n            ])\n            .with_constraints([\"Never fabricate task status values outside the schema.\"]);\n    }\n\n    let mut agent = builder.build()?;\n\n    agent\n        .query_once(\n            \"You completed onboarding five merchants today. Prepare a final handoff report and stop.\",\n        )\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/store_multiple_vectors.rs",
    "content": "//! # [Swiftide] Ingesting file with multiple metadata stored as named vectors\n//!\n//! This example demonstrates how to ingest a LICENSE file, generate multiple metadata, and store it\n//! all in Qdrant with individual named vectors\n//!\n//! The pipeline will:\n//! - Load the LICENSE file from the current directory\n//! - Chunk the file into pieces of 20 to 1024 bytes\n//! - Generate questions and answers for each chunk\n//! - Generate a summary for each chunk\n//! - Generate a title for each chunk\n//! - Generate keywords for each chunk\n//! - Embed each chunk\n//! - Embed each metadata\n//! - Store the nodes in Qdrant with chunk and metadata embeds as named vectors\n//!\n//! [Swiftide]: https://github.com/bosun-ai/swiftide\n//! [examples]: https://github.com/bosun-ai/swiftide/blob/master/examples\n\nuse swiftide::{\n    indexing::loaders::FileLoader,\n    indexing::transformers::{\n        ChunkMarkdown, Embed, MetadataKeywords, MetadataQAText, MetadataSummary, MetadataTitle,\n        metadata_keywords, metadata_qa_text, metadata_summary, metadata_title,\n    },\n    indexing::{self, EmbedMode, EmbeddedField},\n    integrations::{\n        self,\n        qdrant::{Distance, Qdrant, VectorConfig},\n    },\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    tracing_subscriber::fmt::init();\n\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-4o\")\n        .build()?;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\"LICENSE\"))\n        .with_concurrency(1)\n        .with_embed_mode(EmbedMode::PerField)\n        .then_chunk(ChunkMarkdown::from_chunk_range(20..2048))\n        .then(MetadataQAText::new(openai_client.clone()))\n        .then(MetadataSummary::new(openai_client.clone()))\n        .then(MetadataTitle::new(openai_client.clone()))\n        
.then(MetadataKeywords::new(openai_client.clone()))\n        .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n        .log_all()\n        .filter_errors()\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                .vector_size(1536)\n                .collection_name(\"swiftide-multi-vectors\")\n                .with_vector(EmbeddedField::Chunk)\n                .with_vector(EmbeddedField::Metadata(metadata_qa_text::NAME.into()))\n                .with_vector(EmbeddedField::Metadata(metadata_summary::NAME.into()))\n                .with_vector(\n                    VectorConfig::builder()\n                        .embedded_field(EmbeddedField::Metadata(metadata_title::NAME.into()))\n                        .distance(Distance::Manhattan)\n                        .build()?,\n                )\n                .with_vector(EmbeddedField::Metadata(metadata_keywords::NAME.into()))\n                .build()?,\n        )\n        .run()\n        .await?;\n    Ok(())\n}\n"
  },
  {
    "path": "examples/streaming_agents.rs",
    "content": "//! This example demonstrates how to stream responses from an agent\n//!\n//! By default, for convenience the accumulated response is streamed. You can opt-out of this\n//! behaviour and only receive the delta as well (only with OpenAI-like providers).\nuse anyhow::Result;\nuse swiftide::agents;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embeddings-3-small\")\n        .default_prompt_model(\"gpt-4o-mini\")\n        // Only streams the delta, leave this out to stream the full response\n        .stream_full(false)\n        .build()?;\n\n    // let anthropic = swiftide::integrations::anthropic::Anthropic::builder()\n    //     .default_prompt_model(\"claude-3-7-sonnet-latest\")\n    //     .build()?;\n\n    agents::Agent::builder()\n        .llm(&openai)\n        .on_stream(|_agent, response| {\n            // We print the message chunk if it exists. Streamed responses also include\n            // the full response (without tool calls) in `message` and an `id` to map them to\n            // previous chunks for convenience.\n            //\n            // The agent uses the full assembled response at the end of the stream.\n            if let Some(delta) = &response.delta {\n                print!(\n                    \"{}\",\n                    delta\n                        .message_chunk\n                        .as_deref()\n                        .map(str::to_string)\n                        .unwrap_or_default()\n                );\n            };\n            // If `stream_full` is disabled, response.message() will be the accumulated response\n            // response.message()\n\n            Box::pin(async move { Ok(()) })\n        })\n        // Every message added by the agent will be printed to stdout\n        .on_new_message(move |_, msg| {\n            let msg = msg.to_string();\n            Box::pin(async move {\n                
println!(\"\\n---\\nFinal message:\\n {msg}\");\n                Ok(())\n            })\n        })\n        .limit(5)\n        .build()?\n        .query(\"Why is the rust programming language so good?\")\n        .await?;\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/structured_prompt.rs",
    "content": "use anyhow::Result;\nuse schemars::JsonSchema;\nuse serde::{Deserialize, Serialize};\nuse swiftide::{integrations, traits::DynStructuredPrompt, traits::StructuredPrompt as _};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let client = integrations::openai::OpenAI::builder()\n        .default_prompt_model(\"gpt-5-mini\")\n        .build()?;\n\n    // Note that deny unknown fields is required. If you get an error on 'additionalProperties' to\n    // be required, and false, this is what is missing.\n    #[derive(Deserialize, JsonSchema, Serialize, Debug)]\n    #[serde(deny_unknown_fields)]\n    struct MyResponse {\n        questions: Vec<String>,\n    }\n\n    let response = client\n        .structured_prompt::<MyResponse>(\n            \"List three interesting questions about the Rust programming language.\".into(),\n        )\n        .await?;\n\n    println!(\"Response: {:?}\", response.questions);\n\n    // Because we use generics, structured_prompt is not dyn safe. However, there is an\n    // alternative:\n\n    let client: Box<dyn DynStructuredPrompt> = Box::new(client);\n\n    let response: serde_json::Value = client\n        .structured_prompt_dyn(\n            \"List three interesting questions about the Rust programming language.\".into(),\n            schemars::schema_for!(MyResponse),\n        )\n        .await?;\n\n    let parsed: MyResponse = serde_json::from_value(response)?;\n\n    println!(\"Response: {:?}\", parsed);\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/tasks.rs",
    "content": "//! This example illustrates how to set up a basic tasks\n//!\n//! Tasks follow  a graph model where each output of a node must match the input of the next node.\n//!\n//! To set up a task, you register nodes that implement the `TaskNode` trait. Most swiftide\n//! primiteves implement this trait, including agents, prompts, and closures.\n//!\n//! Then each node can be connected to the next node using the `register_transition` method. There\n//! is also a `register_transition_async` method that allows you to register an async transition.\n//!\n//! Since running an autonomous agent in a task is subject to taste, there is a basic\n//! `TaskAgent` that wraps it in an `Arc<Mutex>`, but your own implementation might want to toy\n//! with the state instead of the task instead.\n//!\n//! The API for closures as task nodes is still a bit clunky and subject to change.\nuse anyhow::Result;\nuse swiftide::{\n    agents::{\n        self,\n        tasks::{closures::SyncFn, impls::TaskAgent, task::Task},\n    },\n    prompt::Prompt,\n};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    println!(\"Hello, agents!\");\n\n    let openai = swiftide::integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embeddings-3-small\")\n        .default_prompt_model(\"gpt-4o-mini\")\n        .build()?;\n\n    let agent = agents::Agent::builder().llm(&openai).build()?;\n\n    let mut task: Task<Prompt, ()> = Task::new();\n\n    let agent_id = task.register_node(TaskAgent::from(agent));\n\n    let hello_id = task.register_node(SyncFn::new(move |_context: &()| {\n        println!(\"Hello from a task!\");\n\n        Ok(())\n    }));\n\n    task.starts_with(agent_id);\n\n    // Async is also supported\n    task.register_transition_async(agent_id, move |context| {\n        Box::pin(async move { hello_id.transitions_with(context) })\n    })?;\n    task.register_transition(hello_id, task.transitions_to_done())?;\n\n    task.run(\"Hello there!\").await?;\n\n    
Ok(())\n}\n"
  },
  {
    "path": "examples/tool_custom_schema.rs",
    "content": "use std::borrow::Cow;\n\nuse anyhow::Result;\nuse schemars::{JsonSchema, Schema, schema_for};\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\nuse swiftide::chat_completion::{Tool, ToolCall, ToolOutput, ToolSpec, errors::ToolError};\nuse swiftide::traits::AgentContext;\n\n#[derive(Clone)]\nstruct WorkflowTool;\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[schemars(\n    title = \"WorkflowInstruction\",\n    description = \"Choose a workflow action and optional payload\",\n    deny_unknown_fields\n)]\nstruct WorkflowInstruction {\n    #[schemars(description = \"Which workflow action to execute\")]\n    action: WorkflowAction,\n    #[schemars(description = \"Optional payload forwarded to the workflow engine\")]\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    payload: Option<Value>,\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]\n#[serde(rename_all = \"lowercase\")]\nenum WorkflowAction {\n    Start,\n    Stop,\n    Status,\n}\n\n#[swiftide::reexports::async_trait::async_trait]\nimpl Tool for WorkflowTool {\n    async fn invoke(\n        &self,\n        _agent_context: &dyn AgentContext,\n        _tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(ToolOutput::text(\n            \"Workflow execution not implemented in this example\",\n        ))\n    }\n\n    fn name<'tool>(&'tool self) -> Cow<'tool, str> {\n        Cow::Borrowed(\"workflow_tool\")\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        ToolSpec::builder()\n            .name(\"workflow_tool\")\n            .description(\"Executes a workflow action with strict input choices\")\n            .parameters_schema(workflow_schema())\n            .build()\n            .expect(\"tool spec should be valid\")\n    }\n}\n\nfn workflow_schema() -> Schema {\n    schema_for!(WorkflowInstruction)\n}\n\nfn main() -> Result<()> {\n    let tool = WorkflowTool;\n    let spec = tool.tool_spec();\n\n    
println!(\n        \"{}\",\n        serde_json::to_string_pretty(&spec).expect(\"tool spec should serialize\"),\n    );\n\n    Ok(())\n}\n"
  },
  {
    "path": "examples/usage_metrics.rs",
    "content": "//! Swiftide can emit usage metrics using `metrics-rs`.\n//!\n//! For metrics to be emitted, the `metrics` feature must be enabled.\n//!\n//! `metrics-rs` is a flexibly rust library that allows you to collect and publish metrics\n//! anywhere. From the user side, you need to provide a recorder and handles. The library itself\n//! provides several built-in for these, i.e. prometheus.\n//!\n//! In this example, we're indexing markdown and logging the usage metrics to stdout. For the\n//! recording we're using the examples from metric-rs.\n//!\n//! Usage metrics are emitted embedding, prompt requests, and chat completions. They always include\n//! the model used as metadata\n\nuse swiftide::{\n    indexing::{\n        self,\n        loaders::FileLoader,\n        transformers::{ChunkMarkdown, Embed},\n    },\n    integrations::{self, qdrant::Qdrant},\n};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n    // tracing_subscriber::fmt::init();\n    init_print_logger();\n\n    let metric_metadata = HashMap::from([(\"example\".to_string(), \"metadata\".to_string())]);\n    let openai_client = integrations::openai::OpenAI::builder()\n        .default_embed_model(\"text-embedding-3-small\")\n        .default_prompt_model(\"gpt-4.1-nano\")\n        // Metadata will be added to every metric\n        .metric_metadata(metric_metadata)\n        .build()?;\n\n    indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"md\"]))\n        .then_chunk(ChunkMarkdown::from_chunk_range(10..512))\n        .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n        .then_store_with(\n            Qdrant::builder()\n                .batch_size(50)\n                .vector_size(1536)\n                .collection_name(\"swiftide-examples-metrics\")\n                .build()?,\n        )\n        .run()\n        .await?;\n    Ok(())\n\n    // (counter) registered key swiftide.usage.prompt_tokens with unit None 
and description \"token\n    // usage for the prompt\" (counter) registered key swiftide.usage.completion_tokens with unit\n    // None and description \"token usage for the completion\" (counter) registered key\n    // swiftide.usage.total_tokens with unit None and description \"total token usage\"\n    // counter increment for 'Key(swiftide.usage.prompt_tokens, [example = metadata, model =\n    // text-embedding-3-small])': 356 counter increment for\n    // 'Key(swiftide.usage.completion_tokens, [example = metadata, model =\n    // text-embedding-3-small])': 0 counter increment for 'Key(swiftide.usage.total_tokens,\n    // [example = metadata, model = text-embedding-3-small])': 356 counter increment for\n    // 'Key(swiftide.usage.prompt_tokens, [example = metadata, model = text-embedding-3-small])':\n    // 336 counter increment for 'Key(swiftide.usage.completion_tokens, [example = metadata,\n    // model = text-embedding-3-small])': 0 counter increment for\n    // 'Key(swiftide.usage.total_tokens, [example = metadata, model = text-embedding-3-small])': 336\n    // counter increment for 'Key(swiftide.usage.prompt_tokens, [example = metadata, model =\n    // text-embedding-3-small])': 251 counter increment for\n    // 'Key(swiftide.usage.completion_tokens, [example = metadata, model =\n    // text-embedding-3-small])': 0 counter increment for 'Key(swiftide.usage.total_tokens,\n    // [example = metadata, model = text-embedding-3-small])': 251 counter increment for\n    // 'Key(swiftide.usage.prompt_tokens, [example = metadata, model = text-embedding-3-small])':\n    // 404 counter increment for 'Key(swiftide.usage.completion_tokens, [example = metadata,\n    // model = text-embedding-3-small])': 0 counter increment for\n    // 'Key(swiftide.usage.total_tokens, [example = metadata, model = text-embedding-3-small])': 404\n    // counter increment for 'Key(swiftide.usage.prompt_tokens, [example = metadata, model =\n    // text-embedding-3-small])': 329 counter 
increment for\n    // 'Key(swiftide.usage.completion_tokens, [example = metadata, model =\n    // text-embedding-3-small])': 0 counter increment for 'Key(swiftide.usage.total_tokens,\n    // [example = metadata, model = text-embedding-3-small])': 329\n}\n\n// --- Copied from https://github.com/metrics-rs/metrics/blob/main/metrics/examples/basic.rs\nuse std::{collections::HashMap, sync::Arc};\n\nuse metrics::{Counter, CounterFn, Gauge, GaugeFn, Histogram, HistogramFn, Key, Recorder, Unit};\nuse metrics::{KeyName, Metadata, SharedString};\n\n#[derive(Clone, Debug)]\nstruct PrintHandle(Key);\n\nimpl CounterFn for PrintHandle {\n    fn increment(&self, value: u64) {\n        println!(\"counter increment for '{}': {}\", self.0, value);\n    }\n\n    fn absolute(&self, value: u64) {\n        println!(\"counter absolute for '{}': {}\", self.0, value);\n    }\n}\n\nimpl GaugeFn for PrintHandle {\n    fn increment(&self, value: f64) {\n        println!(\"gauge increment for '{}': {}\", self.0, value);\n    }\n\n    fn decrement(&self, value: f64) {\n        println!(\"gauge decrement for '{}': {}\", self.0, value);\n    }\n\n    fn set(&self, value: f64) {\n        println!(\"gauge set for '{}': {}\", self.0, value);\n    }\n}\n\nimpl HistogramFn for PrintHandle {\n    fn record(&self, value: f64) {\n        println!(\"histogram record for '{}': {}\", self.0, value);\n    }\n}\n\n#[derive(Debug)]\nstruct PrintRecorder;\n\nimpl Recorder for PrintRecorder {\n    fn describe_counter(&self, key_name: KeyName, unit: Option<Unit>, description: SharedString) {\n        println!(\n            \"(counter) registered key {} with unit {:?} and description {:?}\",\n            key_name.as_str(),\n            unit,\n            description\n        );\n    }\n\n    fn describe_gauge(&self, key_name: KeyName, unit: Option<Unit>, description: SharedString) {\n        println!(\n            \"(gauge) registered key {} with unit {:?} and description {:?}\",\n            key_name.as_str(),\n 
           unit,\n            description\n        );\n    }\n\n    fn describe_histogram(&self, key_name: KeyName, unit: Option<Unit>, description: SharedString) {\n        println!(\n            \"(histogram) registered key {} with unit {:?} and description {:?}\",\n            key_name.as_str(),\n            unit,\n            description\n        );\n    }\n\n    fn register_counter(&self, key: &Key, _metadata: &Metadata<'_>) -> Counter {\n        Counter::from_arc(Arc::new(PrintHandle(key.clone())))\n    }\n\n    fn register_gauge(&self, key: &Key, _metadata: &Metadata<'_>) -> Gauge {\n        Gauge::from_arc(Arc::new(PrintHandle(key.clone())))\n    }\n\n    fn register_histogram(&self, key: &Key, _metadata: &Metadata<'_>) -> Histogram {\n        Histogram::from_arc(Arc::new(PrintHandle(key.clone())))\n    }\n}\n\nfn init_print_logger() {\n    metrics::set_global_recorder(PrintRecorder).unwrap()\n}\n"
  },
  {
    "path": "release-plz.toml",
    "content": "[workspace]\nchangelog_path = \"./CHANGELOG.md\"\n#changelog_config = \"cliff.toml\"\ngit_tag_name = \"v{{ version }}\"\nchangelog_update = false\ngit_tag_enable = false\ngit_release_enable = false\nrepo_url = \"https://github.com/bosun-ai/swiftide\"\n\n[[package]]\nname = \"swiftide-macros\"\npublish_no_verify = true\n[[package]]\n# Only release the main package on github\nname = \"swiftide\"\ngit_tag_name = \"v{{ version }}\"\ngit_tag_enable = true\ngit_release_enable = true\nchangelog_include = [\n  \"swiftide-core\",\n  \"swiftide-indexing\",\n  \"swiftide-integrations\",\n  \"swiftide-query\",\n  \"swiftide-test-utils\",\n  \"swiftide-agents\",\n  \"swiftide-macros\",\n]\nchangelog_update = true\n\n[changelog]\ncommit_parsers = [\n  { message = \"^feat*\", group = \"<!-- 0 -->New features\" },\n  { message = \"^fix*\", group = \"<!-- 1 -->Bug fixes\" },\n  { message = \"^perf*\", group = \"<!-- 2 -->Performance\" },\n  { message = \"^chore*\", group = \"<!-- 3 -->Miscellaneous\" },\n  { message = \"^refactor*\", group = \"<!-- 3 -->Miscellaneous\" },\n]\n\n# changelog header\nheader = \"\"\"\n# Changelog\n\nAll notable changes to this project will be documented in this file.\n\"\"\"\nbody = \"\"\"\n{%- if not version %}\n## [unreleased]\n{% else -%}\n## [{{ version }}]({{ release_link }}) - {{ timestamp | date(format=\"%Y-%m-%d\") }}\n{% endif -%}\n\n{% macro commit(commit) -%}\n{% if commit.id -%}\n- [{{ commit.id | truncate(length=7, end=\"\") }}]({{ \"https://github.com/bosun-ai/swiftide/commit/\" ~ commit.id }}) \\\n{% endif -%}\n{% if commit.scope %}*({{commit.scope | default(value = \"uncategorized\") | lower }})* {% endif %}\\\n{%- if commit.breaking %} [**breaking**]{% endif %} \\\n{{ commit.message | upper_first | trim }}\\\n{%- if commit.links %} \\\n   in {% for link in commit.links %}[{{link.text}}]({{link.href}}) {% endfor -%}\\\n{% endif %}\n{%- if commit.breaking_description %}\n\n**BREAKING CHANGE**: {{ 
commit.breaking_description }}\n\n{%- endif %}\n{% endmacro -%}\n\n{% for group, commits in commits | group_by(attribute=\"group\") %}\n### {{ group | striptags | trim | upper_first }}\n{% for commit in commits | filter(attribute=\"scope\") | sort(attribute=\"scope\") %}\n{{ self::commit(commit=commit) }}\n{%- endfor -%}\n{% for commit in commits %}\n{%- if not commit.scope %}\n{{ self::commit(commit=commit) }}\n{%- endif -%}\n{%- endfor -%}\n{%- endfor %}\n\n{%- if github.contributors -%}\n{% if github.contributors | filter(attribute=\"is_first_time\", value=true) | length != 0 %}\n### New Contributors\n{%- endif %}\\\n{% for contributor in github.contributors | filter(attribute=\"is_first_time\", value=true) %}\n* @{{ contributor.username }} made their first contribution\n{%- if contributor.pr_number %} in \\\n[#{{ contributor.pr_number }}]({{ self::remote_url() }}/pull/{{ contributor.pr_number }}) \\\n{%- endif %}\n{%- endfor -%}\n{% endif -%}\n\n{% if version %}\n{% if previous.version %}\n**Full Changelog**: {{ self::remote_url() }}/compare/{{ previous.version }}...{{ version }}\n{% endif %}\n{% else -%}\n  {% raw %}\\n{% endraw %}\n{% endif %}\n\n{%- macro remote_url() -%}\n{%- if remote.github -%}\nhttps://github.com/{{ remote.github.owner }}/{{ remote.github.repo }}\\\n{% else -%}\nhttps://github.com/bosun-ai/swiftide\n{%- endif -%}\n{% endmacro %}\n\"\"\" # template for the changelog body\n# https://keats.github.io/tera/docs/#introduction\n# Note that the - before / after the % controls whether whitespace is rendered between each line.\n# Getting this right so that the markdown renders with the correct number of lines between headings,\n# code fences and list items is pretty finicky. Note also that the 4 backticks in the commit macro\n# are intentional, as they escape any backticks in the commit body.\n\n\n# whether to trim leading and trailing whitespace from the template\ntrim = false\n# changelog footer\n"
  },
  {
    "path": "renovate.json",
    "content": "{\n  \"$schema\": \"https://docs.renovatebot.com/renovate-schema.json\",\n  \"extends\": [\n    \"config:recommended\"\n  ]\n}\n"
  },
  {
    "path": "rust-toolchain.toml",
    "content": "[toolchain]\nchannel = \"stable\"\n"
  },
  {
    "path": "rustfmt.toml",
    "content": "# docs: https://rust-lang.github.io/rustfmt/\n\n# Unstable options - to run these, use `cargo +nightly fmt`\nwrap_comments = true\ncomment_width = 100\nnormalize_comments = true\n"
  },
  {
    "path": "swiftide/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\ninclude = [\n  \"build.rs\",\n  \"../README.md\",\n  \"../images/\",\n  \"src/\",\n  \"../examples\",\n  \"tests/\",\n]\n\n[badges]\n\n[dependencies]\ndocument-features = { workspace = true }\n\n# Local dependencies\nswiftide-core = { path = \"../swiftide-core\", version = \"0.32\" }\nswiftide-integrations = { path = \"../swiftide-integrations\", version = \"0.32\" }\nswiftide-indexing = { path = \"../swiftide-indexing\", version = \"0.32\" }\nswiftide-query = { path = \"../swiftide-query\", version = \"0.32\" }\nswiftide-agents = { path = \"../swiftide-agents\", version = \"0.32\", optional = true }\nswiftide-macros = { path = \"../swiftide-macros\", version = \"0.32\", optional = true }\nswiftide-langfuse = { path = \"../swiftide-langfuse\", version = \"0.32\", optional = true }\n\n# Re-exports for macros and ease of use\nanyhow.workspace = true\nasync-trait.workspace = true\nserde.workspace = true\nserde_json.workspace = true\nschemars = { workspace = true, features = [\"derive\"] }\n\n[features]\n## By default only macros are enabled\ndefault = [\"macros\"]\n\nmacros = [\"dep:swiftide-macros\"]\n\n\nall = [\n  \"qdrant\",\n  \"redis\",\n  \"tree-sitter\",\n  \"openai\",\n  \"fastembed\",\n  \"scraping\",\n  \"aws-bedrock\",\n  \"groq\",\n  \"ollama\",\n  \"pgvector\",\n]\n\n#! 
### Integrations\n\n## Enables Qdrant for storage and retrieval\nqdrant = [\"swiftide-integrations/qdrant\", \"swiftide-core/qdrant\"]\n\n## Enables PgVector for storage and retrieval\npgvector = [\"swiftide-integrations/pgvector\"]\n\n## Enables Redis as an indexing cache and storage\nredis = [\"swiftide-integrations/redis\"]\n\n## Tree-sitter for various code transformers\ntree-sitter = [\n  \"swiftide-integrations/tree-sitter\",\n  \"swiftide-indexing/tree-sitter\",\n]\n\n## OpenAI\nopenai = [\"swiftide-integrations/openai\"]\n\n## Groq\ngroq = [\"swiftide-integrations/groq\"]\n\n## Google Gemini\ngemini = [\"swiftide-integrations/gemini\"]\n\n## Dashscope prompting\ndashscope = [\"swiftide-integrations/dashscope\"]\n\n## OpenRouter prompting\nopen-router = [\"swiftide-integrations/open-router\"]\n\n## Ollama prompting\nollama = [\"swiftide-integrations/ollama\"]\n\n## Anthropic\nanthropic = [\"swiftide-integrations/anthropic\"]\n\n## FastEmbed (by Qdrant) for fast, local, sparse and dense embeddings\nfastembed = [\"swiftide-integrations/fastembed\"]\n\n## Scraping via spider as a loader and an HTML-to-markdown transformer\nscraping = [\"swiftide-integrations/scraping\"]\n\n## AWS Bedrock for prompting\naws-bedrock = [\"swiftide-integrations/aws-bedrock\"]\n\n## LanceDB for persistence and querying\nlancedb = [\"swiftide-integrations/lancedb\"]\n\n## Fluvio loader\nfluvio = [\"swiftide-integrations/fluvio\"]\n\n## Kafka loader\nkafka = [\"swiftide-integrations/kafka\"]\n\n## Parquet loader\nparquet = [\"swiftide-integrations/parquet\"]\n\n## Redb embeddable node cache\nredb = [\"swiftide-integrations/redb\"]\n\n## DuckDB; supports Persist, Retrieve and NodeCache\nduckdb = [\"swiftide-integrations/duckdb\"]\n\n#! 
### Other \n\n## MCP tool support for agents (tools only)\nmcp = [\"swiftide-agents\", \"swiftide-agents/mcp\"]\n\n## Metrics for usage, pipeline and agent performance\nmetrics = [\"swiftide-integrations/metrics\", \"swiftide-core/metrics\"]\n\n## Various mocking and testing utilities\ntest-utils = [\"swiftide-core/test-utils\"]\n\n## Json schema for various types\njson-schema = [\"swiftide-core/json-schema\", \"swiftide-agents/json-schema\"]\n\n## Estimate token counts using tiktoken\ntiktoken = [\"swiftide-integrations/tiktoken\"]\n\n#! ### Experimental\n## GenAI agents and tools\nswiftide-agents = [\"dep:swiftide-agents\"]\n## Langfuse tracing and observability\nlangfuse = [\n  \"swiftide-integrations/langfuse\",\n  \"swiftide-agents/langfuse\",\n  \"dep:swiftide-langfuse\",\n]\n\n\n[dev-dependencies]\nswiftide-core = { path = \"../swiftide-core\", features = [\"test-utils\"] }\nswiftide-test-utils = { path = \"../swiftide-test-utils\" }\n\nasync-openai = { workspace = true }\nqdrant-client = { workspace = true, default-features = false, features = [\n  \"serde\",\n] }\n\nanyhow = { workspace = true }\ntest-log = { workspace = true }\ntestcontainers = { workspace = true }\nmockall = { workspace = true }\ntemp-dir = { workspace = true }\nwiremock = { workspace = true }\nserde = { workspace = true }\nserde_json = { workspace = true }\ntokio = { workspace = true }\narrow-array = { workspace = true }\nsqlx = { workspace = true }\nlancedb = { workspace = true }\n\n[lints]\nworkspace = true\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide/build.rs",
    "content": "use std::{fs, path::Path};\n\nfn main() {\n    let readme_path = Path::new(\"README.md\");\n    let out_dir = std::env::var(\"OUT_DIR\").unwrap();\n    let out_readme = Path::new(&out_dir).join(\"README.docs.md\");\n\n    // Read README.md\n    let contents = fs::read_to_string(readme_path).expect(\"Failed to read README.md\");\n\n    // Replace ```rust with ```ignore\n    let patched = contents.replace(\"```rust\", \"```ignore\");\n\n    // Write the modified README to OUT_DIR\n    fs::write(&out_readme, patched).expect(\"Failed to write patched README\");\n\n    // Tell Cargo to re-run build.rs if README changes\n    println!(\"cargo:rerun-if-changed=README.md\");\n\n    // Export the path so we can include it in lib.rs\n    println!(\"cargo:rustc-env=DOC_README={}\", out_readme.display());\n}\n"
  },
  {
    "path": "swiftide/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n#![allow(unused_imports, reason = \"that is what we do here\")]\n#![allow(clippy::doc_markdown, reason = \"the readme is invalid and that is ok\")]\n#![doc = include_str!(env!(\"DOC_README\"))]\n#![doc = document_features::document_features!()]\n\n#[doc(inline)]\npub use swiftide_core::prompt;\n#[doc(inline)]\npub use swiftide_core::type_aliases::*;\n\n#[cfg(feature = \"swiftide-agents\")]\n#[doc(inline)]\npub use swiftide_agents as agents;\n\n/// Common traits for common behaviour, re-exported from indexing and query\npub mod traits {\n    #[doc(inline)]\n    pub use swiftide_core::agent_traits::*;\n    #[doc(inline)]\n    pub use swiftide_core::chat_completion::traits::*;\n    #[doc(inline)]\n    pub use swiftide_core::indexing_traits::*;\n    #[doc(inline)]\n    pub use swiftide_core::query_traits::*;\n    #[doc(inline)]\n    pub use swiftide_core::token_estimation::{Estimatable, EstimateTokens};\n}\n\n#[doc(inline)]\npub use swiftide_core::token_estimation::CharEstimator;\n\n/// Abstractions for chat completions and LLM interactions.\n#[doc(inline)]\npub use swiftide_core::chat_completion;\n\n/// Integrations with various platforms and external services.\npub mod integrations {\n    #[doc(inline)]\n    pub use swiftide_integrations::*;\n}\n\n/// This module serves as the main entry point for indexing in Swiftide.\n///\n/// The indexing system in Swiftide is designed to handle the asynchronous processing of large\n/// volumes of data, including loading, transforming, and storing data chunks.\npub mod indexing {\n    #[doc(inline)]\n    pub use swiftide_core::indexing::*;\n    #[doc(inline)]\n    pub use swiftide_indexing::*;\n\n    
pub mod transformers {\n        #[cfg(feature = \"tree-sitter\")]\n        #[doc(inline)]\n        pub use swiftide_integrations::treesitter::transformers::*;\n\n        pub use swiftide_indexing::transformers::*;\n    }\n\n    /// Pipeline statistics collection for monitoring and observability\n    pub mod statistics {\n        #[doc(inline)]\n        pub use swiftide_core::statistics::*;\n    }\n}\n\n#[cfg(feature = \"macros\")]\n#[doc(inline)]\npub use swiftide_macros::*;\n/// # Querying pipelines\n///\n/// Swiftide allows you to define sophisticated query pipelines.\n///\n/// Consider the following code that uses Swiftide to load some markdown text, chunk it, embed it,\n/// and store it in a Qdrant index:\n///\n/// ```no_run\n/// # #[cfg(all(feature = \"openai\", feature = \"qdrant\"))]\n/// # {\n/// use swiftide::{\n///     indexing::{\n///         self,\n///         loaders::FileLoader,\n///         transformers::{ChunkMarkdown, Embed, MetadataQAText},\n///     },\n///     integrations::{self, qdrant::Qdrant},\n///     integrations::openai::OpenAI,\n///     query::{self, answers, query_transformers, response_transformers},\n/// };\n///\n/// async fn index() -> Result<(), Box<dyn std::error::Error>> {\n///   let openai_client = OpenAI::builder()\n///       .default_embed_model(\"text-embedding-3-large\")\n///       .default_prompt_model(\"gpt-4o\")\n///       .build()?;\n///\n///   let qdrant = Qdrant::builder()\n///       .batch_size(50)\n///       .vector_size(3072)\n///       .collection_name(\"swiftide-examples\")\n///       .build()?;\n///\n///   indexing::Pipeline::from_loader(FileLoader::new(\"README.md\"))\n///       .then_chunk(ChunkMarkdown::from_chunk_range(10..2048))\n///       .then(MetadataQAText::new(openai_client.clone()))\n///       .then_in_batch(Embed::new(openai_client.clone()).with_batch_size(10))\n///       .then_store_with(qdrant.clone())\n///       .run()\n///       .await?;\n///\n///   Ok(())\n/// }\n/// # }\n/// ```\n///\n/// We could 
then define a query pipeline that uses the Qdrant index to answer questions:\n///\n/// ```no_run\n/// # #[cfg(all(feature = \"openai\", feature = \"qdrant\"))]\n/// # {\n/// # use swiftide::{\n/// #     indexing::{\n/// #         self,\n/// #         loaders::FileLoader,\n/// #         transformers::{ChunkMarkdown, Embed, MetadataQAText},\n/// #     },\n/// #     integrations::{self, qdrant::Qdrant},\n/// #     query::{self, answers, query_transformers, response_transformers},\n/// #     integrations::openai::OpenAI,\n/// # };\n/// # async fn query() -> Result<(), Box<dyn std::error::Error>> {\n/// #  let openai_client = OpenAI::builder()\n/// #      .default_embed_model(\"text-embedding-3-large\")\n/// #      .default_prompt_model(\"gpt-4o\")\n/// #      .build()?;\n/// #  let qdrant = Qdrant::builder()\n/// #      .batch_size(50)\n/// #      .vector_size(3072)\n/// #      .collection_name(\"swiftide-examples\")\n/// #      .build()?;\n/// // By default the search strategy is SimilaritySingleEmbedding\n/// // which takes the latest query, embeds it, and does a similarity search\n/// let pipeline = query::Pipeline::default()\n///     .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n///         openai_client.clone(),\n///     ))\n///     .then_transform_query(query_transformers::Embed::from_client(\n///         openai_client.clone(),\n///     ))\n///     .then_retrieve(qdrant.clone())\n///     .then_transform_response(response_transformers::Summary::from_client(\n///         openai_client.clone(),\n///     ))\n///     .then_answer(answers::Simple::from_client(openai_client.clone()));\n///\n/// let result = pipeline\n///     .query(\"What is swiftide? Please provide an elaborate explanation\")\n///     .await?;\n///\n/// println!(\"{:?}\", result.answer());\n/// # Ok(())\n/// # }\n/// # }\n/// ```\n///\n/// By using a query pipeline to transform queries, we can improve the quality of the answers we get\n/// from our index. 
In this example, we used an LLM to generate subquestions, embedding those and\n/// then using them to search the index. Finally, we summarize the results and combine them together\n/// into a single answer.\npub mod query {\n    #[doc(inline)]\n    pub use swiftide_core::querying::*;\n    #[doc(inline)]\n    pub use swiftide_query::*;\n}\n\n#[cfg(feature = \"langfuse\")]\n#[doc(inline)]\npub use swiftide_langfuse as langfuse;\n\n/// Re-exports for macros\n#[doc(hidden)]\npub mod reexports {\n    pub use ::anyhow;\n    pub use ::async_trait;\n    pub use ::schemars;\n    pub use ::serde;\n    pub use ::serde_json;\n}\n"
  },
  {
    "path": "swiftide/src/test_utils.rs",
    "content": "\n"
  },
  {
    "path": "swiftide/tests/dyn_traits.rs",
    "content": "//! Tests for dyn trait objects\n#![cfg(all(\n    feature = \"openai\",\n    feature = \"qdrant\",\n    feature = \"redis\",\n    feature = \"fastembed\",\n    feature = \"tree-sitter\"\n))]\n\nuse swiftide::{indexing::transformers::ChunkCode, integrations};\nuse swiftide_core::{\n    BatchableTransformer, ChunkerTransformer, EmbeddingModel, Loader, NodeCache, Persist,\n    SimplePrompt, Transformer,\n};\nuse swiftide_indexing::{loaders, transformers};\nuse swiftide_integrations::fastembed::FastEmbed;\n\n#[test_log::test(tokio::test)]\nasync fn test_name_on_dyn() {\n    let fastembed: Box<dyn EmbeddingModel> = Box::new(FastEmbed::try_default().unwrap());\n\n    assert_eq!(fastembed.name(), \"FastEmbed\");\n\n    let chunk_code: Box<dyn ChunkerTransformer<Input = String, Output = String>> =\n        Box::new(ChunkCode::try_for_language(\"rust\").unwrap());\n    assert_eq!(chunk_code.name(), \"ChunkCode\");\n\n    let transformer: Box<dyn Transformer<Input = String, Output = String>> =\n        Box::new(transformers::MetadataQAText::default());\n    assert_eq!(transformer.name(), \"MetadataQAText\");\n\n    let redis: Box<dyn NodeCache<Input = String>> = Box::new(\n        integrations::redis::Redis::try_from_url(\"redis://localhost:6379\", \"prefix\").unwrap(),\n    );\n    assert_eq!(redis.name(), \"Redis\");\n\n    let embed: Box<dyn BatchableTransformer<Input = String, Output = String>> =\n        Box::new(transformers::Embed::new(fastembed).with_batch_size(10));\n    assert_eq!(embed.name(), \"Embed\");\n\n    let qdrant: Box<dyn Persist<Input = String, Output = String>> = Box::new(\n        integrations::qdrant::Qdrant::try_from_url(\"http://localhost:6333\")\n            .unwrap()\n            .vector_size(1536)\n            .build()\n            .unwrap(),\n    );\n    assert_eq!(qdrant.name(), \"Qdrant\");\n\n    let openai_client: Box<dyn SimplePrompt> = Box::new(\n        integrations::openai::OpenAI::builder()\n            
.default_embed_model(\"text-embedding-3-small\")\n            .default_prompt_model(\"gpt-3.5-turbo\")\n            .build()\n            .unwrap(),\n    );\n    assert_eq!(openai_client.name(), \"GenericOpenAI\");\n\n    let loader: Box<dyn Loader<Output = String>> =\n        Box::new(loaders::FileLoader::new(\".\").with_extensions(&[\"rs\"]));\n    assert_eq!(loader.name(), \"FileLoader\");\n}\n"
  },
  {
    "path": "swiftide/tests/indexing_pipeline.rs",
    "content": "//! This module contains tests for the indexing pipeline in the Swiftide project.\n//! The tests validate the functionality of the pipeline, ensuring it processes data correctly\n//! from a temporary file, simulates API responses, and stores data accurately in the Qdrant vector\n//! database.\n\n#![cfg(all(\n    feature = \"openai\",\n    feature = \"qdrant\",\n    feature = \"redis\",\n    feature = \"tree-sitter\"\n))]\n\nuse qdrant_client::qdrant::vectors_output::VectorsOptions;\nuse qdrant_client::qdrant::{ScrollPointsBuilder, SearchPointsBuilder, Value};\nuse swiftide::indexing::*;\nuse swiftide::integrations;\nuse swiftide_test_utils::*;\nuse temp_dir::TempDir;\nuse wiremock::MockServer;\n\n/// Tests the indexing pipeline end to end, with the LLM APIs mocked.\n///\n/// This test sets up a temporary directory and file, simulates API responses using mock servers,\n/// configures an `OpenAI` client, and runs the indexing pipeline. It then validates that the data\n/// is correctly stored in the Qdrant vector database.\n///\n/// # Panics\n/// Panics if any of the setup steps fail, such as creating the temporary directory or file,\n/// starting the mock server, or configuring the `OpenAI` client.\n///\n/// # Errors\n/// If the indexing pipeline encounters an error, the test will print the received requests\n/// for debugging purposes.\n#[test_log::test(tokio::test)]\nasync fn test_indexing_pipeline() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    std::fs::write(&codefile, \"fn main() { println!(\\\"Hello, World!\\\"); }\").unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n\n    mock_chat_completions(&mock_server).await;\n\n    mock_embeddings(&mock_server, 1).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let (_redis, redis_url) = 
start_redis().await;\n\n    let (qdrant_container, qdrant_url) = start_qdrant().await;\n\n    // Coverage CI runs in container, just accept the double qdrant and use the service instead\n    let qdrant_url = std::env::var(\"QDRANT_URL\").unwrap_or(qdrant_url);\n\n    println!(\"Qdrant URL: {qdrant_url}\");\n\n    let result =\n        Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n            .with_default_llm_client(openai_client.clone())\n            .then_chunk(transformers::ChunkCode::try_for_language(\"rust\").unwrap())\n            .then(transformers::MetadataQACode::default())\n            .filter_cached(integrations::redis::Redis::try_from_url(&redis_url, \"prefix\").unwrap())\n            .then_in_batch(transformers::Embed::new(openai_client.clone()).with_batch_size(1))\n            .log_nodes()\n            .then_store_with(\n                integrations::qdrant::Qdrant::try_from_url(&qdrant_url)\n                    .unwrap()\n                    .vector_size(1536)\n                    .collection_name(\"swiftide-test\".to_string())\n                    .build()\n                    .unwrap(),\n            )\n            .run()\n            .await;\n\n    if result.is_err() {\n        println!(\"\\n Received the following requests: \\n\");\n        // Just some serde magic to pretty print requests on failure\n        let received_requests = mock_server\n            .received_requests()\n            .await\n            .unwrap_or_default()\n            .into_iter()\n            .map(|req| {\n                format!(\n                    \"- {} {}\\n{}\",\n                    req.method,\n                    req.url,\n                    serde_json::to_string_pretty(\n                        &serde_json::from_slice::<Value>(&req.body).unwrap()\n                    )\n                    .unwrap()\n                )\n            })\n            .collect::<Vec<String>>()\n            .join(\"\\n---\\n\");\n        
println!(\"{received_requests}\");\n    }\n\n    result.expect(\"Indexing pipeline failed\");\n\n    let qdrant_client = qdrant_client::Qdrant::from_url(&qdrant_url)\n        .build()\n        .unwrap();\n\n    let stored_node = qdrant_client\n        .scroll(\n            ScrollPointsBuilder::new(\"swiftide-test\")\n                .limit(1)\n                .with_payload(true)\n                .with_vectors(true),\n        )\n        .await\n        .unwrap();\n\n    dbg!(\n        std::str::from_utf8(&qdrant_container.stdout_to_vec().await.unwrap())\n            .unwrap()\n            .split('\\n')\n            .collect::<Vec<_>>()\n    );\n    dbg!(\n        std::str::from_utf8(&qdrant_container.stderr_to_vec().await.unwrap())\n            .unwrap()\n            .split('\\n')\n            .collect::<Vec<_>>()\n    );\n    dbg!(stored_node);\n\n    let search_request =\n        SearchPointsBuilder::new(\"swiftide-test\", vec![0_f32; 1536], 10).with_payload(true);\n\n    let search_response = qdrant_client.search_points(search_request).await.unwrap();\n\n    dbg!(&search_response);\n\n    let first = search_response.result.first().unwrap();\n\n    dbg!(first);\n    assert!(\n        first\n            .payload\n            .get(\"path\")\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .ends_with(\"main.rs\")\n    );\n    assert_eq!(\n        first.payload.get(\"content\").unwrap().as_str().unwrap(),\n        \"fn main() { println!(\\\"Hello, World!\\\"); }\"\n    );\n    assert_eq!(\n        first\n            .payload\n            .get(\"Questions and Answers (code)\")\n            .unwrap()\n            .as_str()\n            .unwrap(),\n        \"\\n\\nHello there, how may I assist you today?\"\n    );\n}\n\n#[test_log::test(tokio::test)]\nasync fn test_named_vectors() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    
std::fs::write(&codefile, \"fn main() { println!(\\\"Hello, World!\\\"); }\").unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n\n    mock_chat_completions(&mock_server).await;\n\n    mock_embeddings(&mock_server, 2).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let (_redis, redis_url) = start_redis().await;\n\n    let (_qdrant, qdrant_url) = start_qdrant().await;\n\n    // Coverage CI runs in container, just accept the double qdrant and use the service instead\n    let qdrant_url = std::env::var(\"QDRANT_URL\").unwrap_or(qdrant_url);\n\n    println!(\"Qdrant URL: {qdrant_url}\");\n\n    let result =\n        Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n            .with_embed_mode(EmbedMode::PerField)\n            .then_chunk(transformers::ChunkCode::try_for_language(\"rust\").unwrap())\n            .then(transformers::MetadataQACode::new(openai_client.clone()))\n            .filter_cached(integrations::redis::Redis::try_from_url(&redis_url, \"prefix\").unwrap())\n            .then_in_batch(transformers::Embed::new(openai_client.clone()).with_batch_size(10))\n            .then_store_with(\n                integrations::qdrant::Qdrant::try_from_url(&qdrant_url)\n                    .unwrap()\n                    .vector_size(1536)\n                    .collection_name(\"named-vectors-test\".to_string())\n                    .with_vector(EmbeddedField::Chunk)\n                    .with_vector(EmbeddedField::Metadata(\n                        transformers::metadata_qa_code::NAME.into(),\n                    ))\n                    .build()\n                    .unwrap(),\n            )\n            .run()\n            .await;\n\n    result.expect(\"Named vectors test indexing pipeline failed\");\n\n    let qdrant_client = qdrant_client::Qdrant::from_url(&qdrant_url)\n        .build()\n   
     .unwrap();\n\n    let search_request = SearchPointsBuilder::new(\"named-vectors-test\", vec![0_f32; 1536], 10)\n        .vector_name(\n            EmbeddedField::Metadata(transformers::metadata_qa_code::NAME.into()).to_string(),\n        )\n        .with_payload(true)\n        .with_vectors(true);\n\n    let search_response = qdrant_client.search_points(search_request).await.unwrap();\n\n    let first = search_response.result.into_iter().next().unwrap();\n\n    assert!(\n        first\n            .payload\n            .get(\"path\")\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .ends_with(\"main.rs\")\n    );\n    assert_eq!(\n        first.payload.get(\"content\").unwrap().as_str().unwrap(),\n        \"fn main() { println!(\\\"Hello, World!\\\"); }\"\n    );\n    assert_eq!(\n        first\n            .payload\n            .get(\"Questions and Answers (code)\")\n            .unwrap()\n            .as_str()\n            .unwrap(),\n        \"\\n\\nHello there, how may I assist you today?\"\n    );\n\n    let vectors = first.vectors.expect(\"Response has vectors\");\n    let VectorsOptions::Vectors(named_vectors) = vectors\n        .vectors_options\n        .expect(\"Response has vector options\")\n    else {\n        panic!(\"Expected named vectors\");\n    };\n    let vectors = named_vectors.vectors;\n\n    assert_eq!(vectors.len(), 2);\n    assert!(vectors.contains_key(&EmbeddedField::Chunk.to_string()));\n    assert!(vectors.contains_key(\n        &EmbeddedField::Metadata(transformers::metadata_qa_code::NAME.into()).to_string()\n    ));\n}\n"
  },
  {
    "path": "swiftide/tests/lancedb.rs",
    "content": "#![cfg(all(\n    feature = \"openai\",\n    feature = \"lancedb\",\n    feature = \"fastembed\",\n    feature = \"tree-sitter\"\n))]\n\nuse anyhow::Context;\nuse lancedb::query::{self as lance_query_builder, QueryBase};\nuse swiftide::indexing::{self, TextNode};\nuse swiftide::indexing::{\n    EmbeddedField,\n    transformers::{ChunkCode, MetadataQACode, metadata_qa_code::NAME as METADATA_QA_CODE_NAME},\n};\nuse swiftide::query::{self as swift_query_pipeline, Query, states};\nuse swiftide_indexing::{Pipeline, loaders, transformers};\nuse swiftide_integrations::{\n    fastembed::FastEmbed,\n    lancedb::{self as lance_integration, LanceDB},\n};\nuse swiftide_query::{answers, query_transformers, response_transformers};\nuse swiftide_test_utils::{mock_chat_completions, openai_client};\nuse temp_dir::TempDir;\nuse wiremock::MockServer;\n\n#[test_log::test(tokio::test)]\nasync fn test_lancedb() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    let code = \"fn main() { println!(\\\"Hello, World!\\\"); }\";\n    std::fs::write(&codefile, code).unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n    mock_chat_completions(&mock_server).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let fastembed = FastEmbed::try_default().unwrap();\n\n    let lancedb = LanceDB::builder()\n        .uri(tempdir.child(\"lancedb\").to_str().unwrap())\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(METADATA_QA_CODE_NAME)\n        .with_metadata(\"filter\")\n        .with_metadata(\"path\")\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n        
.then_chunk(ChunkCode::try_for_language(\"rust\").unwrap())\n        .then(MetadataQACode::new(openai_client.clone()))\n        .then(|mut node: TextNode| {\n            // Add path to metadata, by default, storage will store all metadata fields\n            node.metadata\n                .insert(\"path\", node.path.display().to_string());\n            node.metadata.insert(\"filter\", \"true\");\n            Ok(node)\n        })\n        .then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(20))\n        .log_nodes()\n        .then_store_with(lancedb.clone())\n        .run()\n        .await\n        .unwrap();\n\n    let strategy = swift_query_pipeline::search_strategies::SimilaritySingleEmbedding::from_filter(\n        \"filter = \\\"true\\\"\".to_string(),\n    );\n\n    let query_pipeline = swift_query_pipeline::Pipeline::from_search_strategy(strategy)\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n        .then_retrieve(lancedb.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            openai_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result: Query<states::Answered> = query_pipeline.query(\"What is swiftide?\").await.unwrap();\n\n    dbg!(&result);\n\n    assert_eq!(\n        result.answer(),\n        \"\\n\\nHello there, how may I assist you today?\"\n    );\n\n    let retrieved_document = result.documents().first().unwrap();\n    assert_eq!(retrieved_document.content(), code);\n\n    assert_eq!(\n        retrieved_document.metadata().get(\"path\").unwrap(),\n        codefile.to_str().unwrap()\n    );\n}\n\n#[test_log::test(tokio::test)]\nasync fn test_lancedb_retrieve_dynamic_search() {\n    // Setup temporary directory and file for testing\n    let 
tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    let code = \"fn main() { println!(\\\"Hello, World!\\\"); }\";\n    std::fs::write(&codefile, code).unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n    mock_chat_completions(&mock_server).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let fastembed = FastEmbed::try_default().unwrap();\n\n    let lancedb = LanceDB::builder()\n        .uri(tempdir.child(\"lancedb\").to_str().unwrap())\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(METADATA_QA_CODE_NAME)\n        .with_metadata(\"filter\")\n        .with_metadata(\"path\")\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n        .then_chunk(ChunkCode::try_for_language(\"rust\").unwrap())\n        .then(MetadataQACode::new(openai_client.clone()))\n        .then(|mut node: indexing::TextNode| {\n            // Add path to metadata, by default, storage will store all metadata fields\n            node.metadata\n                .insert(\"path\", node.path.display().to_string());\n            node.metadata\n                .insert(\"filter\".to_string(), \"true\".to_string());\n            Ok(node)\n        })\n        .then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(20))\n        .log_nodes()\n        .then_store_with(lancedb.clone())\n        .run()\n        .await\n        .unwrap();\n\n    // Create the custom query strategy for vector similarity search\n    let create_vector_search_strategy =\n        |lancedb: &LanceDB,\n         table_name: String|\n         -> swift_query_pipeline::search_strategies::CustomStrategy<\n            lance_query_builder::VectorQuery,\n        > {\n            let 
table_name = table_name.clone();\n            let lancedb = lancedb.clone();\n\n            swift_query_pipeline::search_strategies::CustomStrategy::from_async_query(\n                move |query_node| {\n                    // Create owned copies for the async block\n                    let table_name = table_name.clone();\n                    let lancedb = lancedb.clone();\n\n                    let embedding = if let Some(embedding) = &query_node.embedding {\n                        embedding.clone()\n                    } else {\n                        panic!(\"Query embedding not found\");\n                    };\n\n                    // Return a Future using async block syntax\n                    Box::pin(async move {\n                        // Create a new connection for each query execution\n                        let connection = lancedb.get_connection().await?;\n\n                        // Open the table within the query execution context\n                        let vector_table = connection\n                            .open_table(&table_name)\n                            .execute()\n                            .await\n                            .context(\"Failed to open vector search table\")?;\n\n                        let vector_field =\n                            lance_integration::VectorConfig::from(EmbeddedField::Combined)\n                                .field_name();\n\n                        // Build and return the query\n                        let query_builder = vector_table\n                            .query()\n                            .nearest_to(embedding.as_slice())?\n                            .column(&vector_field)\n                            .limit(20);\n\n                        Ok(query_builder)\n                        // Connection is dropped here when query_builder is executed\n                    })\n                },\n            )\n        };\n\n    let vector_search_strategy =\n        
create_vector_search_strategy(&lancedb, \"swiftide_test\".to_string());\n\n    let query_pipeline =\n        swift_query_pipeline::Pipeline::from_search_strategy(vector_search_strategy)\n            .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n                openai_client.clone(),\n            ))\n            .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n            .then_retrieve(lancedb.clone())\n            .then_transform_response(response_transformers::Summary::from_client(\n                openai_client.clone(),\n            ))\n            .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result: Query<states::Answered> = query_pipeline.query(\"What is swiftide?\").await.unwrap();\n\n    dbg!(&result);\n\n    assert_eq!(\n        result.answer(),\n        \"\\n\\nHello there, how may I assist you today?\"\n    );\n\n    let retrieved_document = result.documents().first().unwrap();\n    assert_eq!(retrieved_document.content(), code);\n\n    assert_eq!(\n        retrieved_document.metadata().get(\"path\").unwrap(),\n        codefile.to_str().unwrap()\n    );\n}\n"
  },
  {
    "path": "swiftide/tests/pgvector.rs",
    "content": "//! This module contains tests for the `PgVector` indexing pipeline in the Swiftide project.\n//! The tests validate the functionality of the pipeline, ensuring that data is correctly indexed\n//! and processed from temporary files, database configurations, and simulated environments.\n\n#![cfg(all(\n    feature = \"openai\",\n    feature = \"pgvector\",\n    feature = \"fastembed\",\n    feature = \"tree-sitter\"\n))]\n\nuse swiftide_core::document::Document;\nuse swiftide_integrations::treesitter::metadata_qa_code;\nuse temp_dir::TempDir;\n\nuse anyhow::{Result, anyhow};\nuse sqlx::{prelude::FromRow, types::Uuid};\nuse swiftide::{\n    indexing::{\n        self, EmbeddedField, Pipeline, loaders,\n        transformers::{\n            self, ChunkCode, MetadataQACode, metadata_qa_code::NAME as METADATA_QA_CODE_NAME,\n        },\n    },\n    integrations::{\n        self,\n        pgvector::{FieldConfig, PgVector, PgVectorBuilder, VectorConfig},\n    },\n    query::{self, Query, answers, query_transformers, response_transformers, states},\n};\nuse swiftide_test_utils::{mock_chat_completions, openai_client};\nuse wiremock::MockServer;\n\n#[allow(dead_code)]\n#[derive(Debug, Clone, FromRow)]\nstruct VectorSearchResult {\n    id: Uuid,\n    chunk: String,\n}\n\n/// Test case for verifying the `PgVector` indexing pipeline functionality.\n///\n/// This test:\n/// - Sets up a temporary file and Postgres database for testing.\n/// - Configures a `PgVector` instance with a vector size of 384.\n/// - Executes an indexing pipeline for Rust code chunks with embedded vector metadata.\n/// - Performs a similarity-based vector search on the database and validates the retrieved results.\n///\n/// Ensures correctness of end-to-end data flow, including table management, vector storage, and\n/// query execution.\n#[test_log::test(tokio::test)]\nasync fn test_pgvector_indexing() {\n    // Setup temporary directory and file for testing\n    let tempdir = 
TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    let code = \"fn main() { println!(\\\"Hello, World!\\\"); }\";\n    std::fs::write(&codefile, code).unwrap();\n\n    let (_pgv_db_container, pgv_db_url) = swiftide_test_utils::start_postgres().await;\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n    mock_chat_completions(&mock_server).await;\n\n    // Configure Pgvector with a default vector size and a single combined\n    // embedding; this test does not store any extra metadata fields\n    let pgv_storage = PgVector::builder()\n        .db_url(pgv_db_url)\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    // Drop the existing test table before running the test\n    println!(\"Dropping existing test table & index if it exists\");\n    let drop_table_sql = \"DROP TABLE IF EXISTS swiftide_test\";\n    let drop_index_sql = \"DROP INDEX IF EXISTS swiftide_test_embedding_idx\";\n\n    if let Ok(pool) = pgv_storage.get_pool().await {\n        sqlx::query(drop_table_sql)\n            .execute(pool)\n            .await\n            .expect(\"Failed to execute SQL query for dropping the table\");\n        sqlx::query(drop_index_sql)\n            .execute(pool)\n            .await\n            .expect(\"Failed to execute SQL query for dropping the index\");\n    } else {\n        panic!(\"Unable to acquire database connection pool\");\n    }\n\n    let result =\n        Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n            .then_chunk(ChunkCode::try_for_language(\"rust\").unwrap())\n            .then(|mut node: indexing::TextNode| {\n                node.with_vectors([(EmbeddedField::Combined, vec![1.0; 384])]);\n                Ok(node)\n            })\n            .then_store_with(pgv_storage.clone())\n            .run()\n 
           .await;\n\n    result.expect(\"PgVector Named vectors test indexing pipeline failed\");\n\n    let pool = pgv_storage\n        .get_pool()\n        .await\n        .expect(\"Unable to acquire database connection pool\");\n\n    // Start building the SQL query\n    let sql_vector_query =\n        \"SELECT id, chunk FROM swiftide_test ORDER BY vector_combined <=> $1::VECTOR LIMIT $2\";\n\n    println!(\"Running retrieve with SQL: {sql_vector_query}\");\n\n    let top_k: i32 = 10;\n    let embedding = vec![1.0; 384];\n\n    let data: Vec<VectorSearchResult> = sqlx::query_as(sql_vector_query)\n        .bind(embedding)\n        .bind(top_k)\n        .fetch_all(pool)\n        .await\n        .expect(\"Sql named vector query failed\");\n\n    let docs: Vec<_> = data.into_iter().map(|r| r.chunk).collect();\n\n    println!(\"Retrieved documents for debugging: {docs:#?}\");\n\n    assert_eq!(docs[0], \"fn main() { println!(\\\"Hello, World!\\\"); }\");\n}\n\n/// Test the retrieval functionality of `PgVector` integration.\n///\n/// This test verifies that a Rust code snippet can be embedded,\n/// stored in a `PostgreSQL` database using `PgVector`, and accurately\n/// retrieved using a single similarity-based query pipeline. 
It sets up\n/// a mock `OpenAI` client, configures `PgVector`, and executes a query\n/// to ensure the pipeline retrieves the correct data and generates\n/// an expected response.\n#[test_log::test(tokio::test)]\nasync fn test_pgvector_retrieve() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    let code = \"fn main() { println!(\\\"Hello, World!\\\"); }\";\n    std::fs::write(&codefile, code).unwrap();\n\n    let (_pgv_db_container, pgv_db_url) = swiftide_test_utils::start_postgres().await;\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n    mock_chat_completions(&mock_server).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let fastembed =\n        integrations::fastembed::FastEmbed::try_default().expect(\"Could not create FastEmbed\");\n\n    // Configure Pgvector with a default vector size, a single embedding\n    // and in addition to embedding the text metadata, also store it in a field\n    let pgv_storage = PgVector::builder()\n        .db_url(pgv_db_url)\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(METADATA_QA_CODE_NAME)\n        .with_metadata(\"filter\")\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    // Drop the existing test table before running the test\n    println!(\"Dropping existing test table & index if it exists\");\n    let drop_table_sql = \"DROP TABLE IF EXISTS swiftide_test\";\n    let drop_index_sql = \"DROP INDEX IF EXISTS swiftide_test_embedding_idx\";\n\n    if let Ok(pool) = pgv_storage.get_pool().await {\n        sqlx::query(drop_table_sql)\n            .execute(pool)\n            .await\n            .expect(\"Failed to execute SQL query for dropping the table\");\n        sqlx::query(drop_index_sql)\n            
.execute(pool)\n            .await\n            .expect(\"Failed to execute SQL query for dropping the index\");\n    } else {\n        panic!(\"Unable to acquire database connection pool\");\n    }\n\n    Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n        .then_chunk(ChunkCode::try_for_language(\"rust\").unwrap())\n        .then(MetadataQACode::new(openai_client.clone()))\n        .then(|mut node: indexing::TextNode| {\n            node.metadata\n                .insert(\"filter\".to_string(), \"true\".to_string());\n            Ok(node)\n        })\n        .then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(20))\n        .log_nodes()\n        .then_store_with(pgv_storage.clone())\n        .run()\n        .await\n        .unwrap();\n\n    let strategy = query::search_strategies::SimilaritySingleEmbedding::from_filter(\n        \"filter = \\\"true\\\"\".to_string(),\n    );\n\n    let query_pipeline = query::Pipeline::from_search_strategy(strategy)\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n        .then_retrieve(pgv_storage.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            openai_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result: Query<states::Answered> = query_pipeline.query(\"What is swiftide?\").await.unwrap();\n\n    assert_eq!(\n        result.answer(),\n        \"\\n\\nHello there, how may I assist you today?\"\n    );\n\n    let first_document = result.documents().first().unwrap();\n\n    let expected = Document::builder()\n        .content(\"fn main() { println!(\\\"Hello, World!\\\"); }\")\n        .metadata([\n            (\n                metadata_qa_code::NAME,\n                
\"\\n\\nHello there, how may I assist you today?\",\n            ),\n            (\"filter\", \"true\"),\n        ])\n        .build()\n        .unwrap();\n    assert_eq!(first_document, &expected);\n}\n\n/// Tests the dynamic vector similarity search functionality using `PostgreSQL`.\n///\n/// This integration test verifies the complete workflow of vector similarity search:\n/// 1. Sets up a temporary test environment with a sample Rust code file\n/// 2. Configures `PostgreSQL` with pgvector extension for vector operations\n/// 3. Creates and populates test data using a processing pipeline:\n///    - Loads source code files\n///    - Chunks code into processable segments\n///    - Generates metadata using `OpenAI`\n///    - Embeds text using `FastEmbed`\n///    - Stores processed data in `PostgreSQL`\n/// 4. Implements a custom search strategy that:\n///    - Filters results based on metadata\n///    - Orders results by vector similarity\n///    - Limits the number of returned results\n/// 5. 
Executes a query pipeline that:\n///    - Generates and embeds the search query\n///    - Retrieves similar documents\n///    - Transforms results into a meaningful summary\n///    - Produces a final answer\n///\n/// # Configuration Pattern\n/// The test demonstrates the recommended configuration approach:\n/// - Define search parameters as constants in the implementation scope\n/// - Pass configuration through the query generator closure\n/// - Keep the strategy struct minimal and focused on query generation\n#[test_log::test(tokio::test)]\nasync fn test_pgvector_retrieve_dynamic_search() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    let code = \"fn main() { println!(\\\"Hello, World!\\\"); }\";\n    std::fs::write(&codefile, code).unwrap();\n\n    let (_pgv_db_container, pgv_db_url) = swiftide_test_utils::start_postgres().await;\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n    mock_chat_completions(&mock_server).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let fastembed =\n        integrations::fastembed::FastEmbed::try_default().expect(\"Could not create FastEmbed\");\n\n    // Configure Pgvector with a default vector size, a single embedding\n    // and in addition to embedding the text metadata, also store it in a field\n    let pgv_storage = PgVector::builder()\n        .db_url(pgv_db_url)\n        .vector_size(384)\n        .with_vector(EmbeddedField::Combined)\n        .with_metadata(METADATA_QA_CODE_NAME)\n        .with_metadata(\"filter\")\n        .table_name(\"swiftide_test\")\n        .build()\n        .unwrap();\n\n    // Drop the existing test table before running the test\n    println!(\"Dropping existing test table & index if it exists\");\n    let drop_table_sql = \"DROP TABLE IF EXISTS swiftide_test\";\n    let 
drop_index_sql = \"DROP INDEX IF EXISTS swiftide_test_embedding_idx\";\n\n    if let Ok(pool) = pgv_storage.get_pool().await {\n        sqlx::query(drop_table_sql)\n            .execute(pool)\n            .await\n            .expect(\"Failed to execute SQL query for dropping the table\");\n        sqlx::query(drop_index_sql)\n            .execute(pool)\n            .await\n            .expect(\"Failed to execute SQL query for dropping the index\");\n    } else {\n        panic!(\"Unable to acquire database connection pool\");\n    }\n\n    Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n        .then_chunk(ChunkCode::try_for_language(\"rust\").unwrap())\n        .then(MetadataQACode::new(openai_client.clone()))\n        .then(|mut node: indexing::TextNode| {\n            node.metadata\n                .insert(\"filter\".to_string(), \"true\".to_string());\n            Ok(node)\n        })\n        .then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(20))\n        .log_nodes()\n        .then_store_with(pgv_storage.clone())\n        .run()\n        .await\n        .unwrap();\n\n    // First, we'll clone pgv_storage before using it in the closure\n    let pgv_storage_for_closure = pgv_storage.clone();\n\n    // Configure search strategy\n    // Create a custom query generator with metadata filtering\n    let custom_strategy = query::search_strategies::CustomStrategy::from_query(\n        move |query_node| -> Result<sqlx::QueryBuilder<'static, sqlx::Postgres>> {\n            const CUSTOM_STRATEGY_MAX_RESULTS: i64 = 5;\n            let mut builder = sqlx::QueryBuilder::new(\"\");\n            let table: &str = pgv_storage_for_closure.get_table_name();\n\n            // Get column definitions\n            let default_fields: Vec<_> = PgVectorBuilder::default_fields();\n            let default_columns: Vec<&str> =\n                default_fields.iter().map(FieldConfig::field_name).collect();\n\n            
// Start building the query properly\n            builder.push(\"SELECT \");\n            builder.push(default_columns.join(\", \"));\n            builder.push(\" FROM \");\n            builder.push(table);\n\n            // Add metadata filter\n            builder.push(\" WHERE meta_\");\n            builder.push(PgVector::normalize_field_name(\"filter\"));\n            builder.push(\" @> \");\n            builder.push(\"'{\\\"filter\\\": \\\"true\\\"}'::jsonb\");\n\n            // Add vector similarity ordering\n            let vector_field = VectorConfig::from(EmbeddedField::Combined).field;\n            builder.push(\" ORDER BY \");\n            builder.push(vector_field);\n            builder.push(\" <=> \");\n            // Let QueryBuilder handle the parameter placeholders\n            builder.push_bind(\n                query_node\n                    .embedding\n                    .as_ref()\n                    .ok_or_else(|| anyhow!(\"Missing embedding in query state\"))?\n                    .clone(),\n            );\n            builder.push(\"::vector\");\n\n            // Add LIMIT clause\n            builder.push(\" LIMIT \");\n\n            builder.push_bind(CUSTOM_STRATEGY_MAX_RESULTS);\n\n            Ok(builder)\n        },\n    );\n\n    let query_pipeline = query::Pipeline::from_search_strategy(custom_strategy)\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n        .then_retrieve(pgv_storage.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            openai_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result: Query<states::Answered> = query_pipeline.query(\"What is swiftide?\").await.unwrap();\n\n    assert_eq!(\n        result.answer(),\n        \"\\n\\nHello 
there, how may I assist you today?\"\n    );\n\n    let first_document = result.documents().first().unwrap();\n\n    // The custom query explicitly skipped metadata\n    let expected = Document::builder()\n        .content(\"fn main() { println!(\\\"Hello, World!\\\"); }\")\n        .build()\n        .unwrap();\n    assert_eq!(first_document, &expected);\n}\n"
  },
  {
    "path": "swiftide/tests/query_pipeline.rs",
    "content": "#![cfg(all(\n    feature = \"openai\",\n    feature = \"qdrant\",\n    feature = \"fastembed\",\n    feature = \"tree-sitter\"\n))]\n\nuse swiftide::indexing::{self, *};\nuse swiftide::query::search_strategies::HybridSearch;\nuse swiftide::{integrations, query};\nuse swiftide_integrations::fastembed::FastEmbed;\nuse swiftide_query::{answers, query_transformers, response_transformers};\nuse swiftide_test_utils::*;\nuse temp_dir::TempDir;\nuse wiremock::MockServer;\n\n#[test_log::test(tokio::test)]\nasync fn test_query_pipeline() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    std::fs::write(&codefile, \"fn main() { println!(\\\"Hello, World!\\\"); }\").unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n\n    mock_chat_completions(&mock_server).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let (_qdrant, qdrant_url) = start_qdrant().await;\n\n    let qdrant_client = integrations::qdrant::Qdrant::try_from_url(&qdrant_url)\n        .unwrap()\n        .vector_size(384)\n        .collection_name(\"swiftide-test\".to_string())\n        .build()\n        .unwrap();\n\n    let fastembed = integrations::fastembed::FastEmbed::try_default().unwrap();\n\n    println!(\"Qdrant URL: {qdrant_url}\");\n\n    indexing::Pipeline::from_loader(\n        loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]),\n    )\n    .then_chunk(transformers::ChunkCode::try_for_language(\"rust\").unwrap())\n    .then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(1))\n    .then_store_with(qdrant_client.clone())\n    .run()\n    .await\n    .unwrap();\n\n    let query_pipeline = query::Pipeline::default()\n        .then_transform_query(query_transformers::GenerateSubquestions::from_client(\n            
openai_client.clone(),\n        ))\n        .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n        .then_retrieve(qdrant_client.clone())\n        .then_transform_response(response_transformers::Summary::from_client(\n            openai_client.clone(),\n        ))\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result = query_pipeline.query(\"What is swiftide?\").await.unwrap();\n\n    assert!(result.embedding.is_some());\n    assert!(!result.answer().is_empty());\n}\n\n#[test_log::test(tokio::test)]\nasync fn test_hybrid_search_qdrant() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    std::fs::write(&codefile, \"fn main() { println!(\\\"Hello, World!\\\"); }\").unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n\n    mock_chat_completions(&mock_server).await;\n\n    let openai_client = openai_client(&mock_server.uri(), \"text-embedding-3-small\", \"gpt-4o\");\n\n    let (_qdrant, qdrant_url) = start_qdrant().await;\n\n    let batch_size = 10;\n\n    let qdrant_client = integrations::qdrant::Qdrant::try_from_url(&qdrant_url)\n        .unwrap()\n        .vector_size(384)\n        .batch_size(batch_size)\n        .with_vector(EmbeddedField::Combined)\n        .with_sparse_vector(EmbeddedField::Combined)\n        .collection_name(\"swiftide-hybrid\")\n        .build()\n        .unwrap();\n\n    let fastembed_sparse = FastEmbed::try_default_sparse().unwrap().clone();\n    let fastembed = FastEmbed::try_default().unwrap().clone();\n\n    println!(\"Qdrant URL: {qdrant_url}\");\n\n    indexing::Pipeline::from_loader(\n        loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]),\n    )\n    .then_chunk(transformers::ChunkCode::try_for_language(\"rust\").unwrap())\n    
.then_in_batch(transformers::Embed::new(fastembed.clone()).with_batch_size(batch_size))\n    .then_in_batch(\n        transformers::SparseEmbed::new(fastembed_sparse.clone()).with_batch_size(batch_size),\n    )\n    .then_store_with(qdrant_client.clone())\n    .run()\n    .await\n    .unwrap();\n\n    let collection = qdrant_client\n        .client()\n        .collection_info(\"swiftide-hybrid\")\n        .await\n        .unwrap();\n\n    dbg!(collection);\n\n    let query_pipeline = query::Pipeline::from_search_strategy(HybridSearch::default())\n        .then_transform_query(query_transformers::Embed::from_client(fastembed.clone()))\n        .then_transform_query(query_transformers::SparseEmbed::from_client(\n            fastembed_sparse.clone(),\n        ))\n        .then_retrieve(qdrant_client.clone())\n        .then_answer(answers::Simple::from_client(openai_client.clone()));\n\n    let result = query_pipeline.query(\"What is swiftide?\").await.unwrap();\n\n    assert!(result.embedding.is_some());\n    assert!(!result.answer().is_empty());\n}\n"
  },
  {
    "path": "swiftide/tests/sparse_embeddings_and_hybrid_search.rs",
    "content": "//! This module contains tests for the indexing pipeline in the Swiftide project.\n//! The tests validate the functionality of the pipeline, ensuring it processes data correctly\n//! from a temporary file, simulates API responses, and stores data accurately in the Qdrant vector\n//! database.\n\n#![cfg(all(feature = \"qdrant\", feature = \"fastembed\", feature = \"tree-sitter\"))]\n\nuse qdrant_client::qdrant::{\n    Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder, ScrollPointsBuilder,\n    SearchPointsBuilder, VectorInput,\n};\nuse swiftide::indexing::*;\nuse swiftide::integrations;\nuse swiftide_integrations::fastembed::FastEmbed;\nuse swiftide_test_utils::*;\nuse temp_dir::TempDir;\nuse wiremock::MockServer;\n\n/// Tests the indexing pipeline without any mocks.\n///\n/// This test sets up a temporary directory and file, simulates API responses using mock servers,\n/// configures an `OpenAI` client, and runs the indexing pipeline. It then validates that the data\n/// is correctly stored in the Qdrant vector database.\n///\n/// # Panics\n/// Panics if any of the setup steps fail, such as creating the temporary directory or file,\n/// starting the mock server, or configuring the `OpenAI` client.\n///\n/// # Errors\n/// If the indexing pipeline encounters an error, the test will print the received requests\n/// for debugging purposes.\n#[test_log::test(tokio::test)]\nasync fn test_sparse_indexing_pipeline() {\n    // Setup temporary directory and file for testing\n    let tempdir = TempDir::new().unwrap();\n    let codefile = tempdir.child(\"main.rs\");\n    std::fs::write(&codefile, \"fn main() { println!(\\\"Hello, World!\\\"); }\").unwrap();\n\n    // Setup mock servers to simulate API responses\n    let mock_server = MockServer::start().await;\n\n    mock_embeddings(&mock_server, 1).await;\n\n    let (qdrant_container, qdrant_url) = start_qdrant().await;\n    let fastembed_sparse = FastEmbed::try_default_sparse().unwrap();\n    let 
fastembed = FastEmbed::try_default().unwrap();\n    let memory_storage = persist::MemoryStorage::default();\n\n    println!(\"Qdrant URL: {qdrant_url}\");\n\n    let result =\n        Pipeline::from_loader(loaders::FileLoader::new(tempdir.path()).with_extensions(&[\"rs\"]))\n            .then_chunk(transformers::ChunkCode::try_for_language(\"rust\").unwrap())\n            .then_in_batch(transformers::SparseEmbed::new(fastembed_sparse).with_batch_size(20))\n            .then_in_batch(transformers::Embed::new(fastembed).with_batch_size(20))\n            .log_nodes()\n            .then_store_with(\n                integrations::qdrant::Qdrant::try_from_url(&qdrant_url)\n                    .unwrap()\n                    .vector_size(384)\n                    .with_vector(EmbeddedField::Combined)\n                    .with_sparse_vector(EmbeddedField::Combined)\n                    .collection_name(\"swiftide-test\".to_string())\n                    .build()\n                    .unwrap(),\n            )\n            .then_store_with(memory_storage.clone())\n            .run()\n            .await;\n\n    let node = memory_storage\n        .get_all_values()\n        .await\n        .first()\n        .unwrap()\n        .clone();\n\n    result.expect(\"Indexing pipeline failed\");\n\n    let qdrant_client = qdrant_client::Qdrant::from_url(&qdrant_url)\n        .build()\n        .unwrap();\n\n    let stored_node = qdrant_client\n        .scroll(\n            ScrollPointsBuilder::new(\"swiftide-test\")\n                .limit(1)\n                .with_payload(true)\n                .with_vectors(true),\n        )\n        .await\n        .unwrap();\n\n    dbg!(stored_node);\n    dbg!(\n        std::str::from_utf8(&qdrant_container.stdout_to_vec().await.unwrap())\n            .unwrap()\n            .split('\\n')\n            .collect::<Vec<_>>()\n    );\n\n    // Search using the dense vector\n    let dense = node\n        .vectors\n        .unwrap()\n        
.into_values()\n        .collect::<Vec<_>>()\n        .first()\n        .cloned()\n        .unwrap();\n    let search_request = SearchPointsBuilder::new(\"swiftide-test\", dense.as_slice(), 10)\n        .with_payload(true)\n        .vector_name(EmbeddedField::Combined);\n\n    let search_response = qdrant_client.search_points(search_request).await.unwrap();\n    let first = search_response.result.first().unwrap();\n\n    assert!(\n        first\n            .payload\n            .get(\"path\")\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .ends_with(\"main.rs\")\n    );\n    assert_eq!(\n        first.payload.get(\"content\").unwrap().as_str().unwrap(),\n        \"fn main() { println!(\\\"Hello, World!\\\"); }\"\n    );\n\n    // Search using the sparse vector\n    let sparse = node\n        .sparse_vectors\n        .unwrap()\n        .into_values()\n        .collect::<Vec<_>>()\n        .first()\n        .cloned()\n        .unwrap();\n\n    // Search sparse\n    let search_request = SearchPointsBuilder::new(\"swiftide-test\", sparse.values.as_slice(), 10)\n        .sparse_indices(sparse.indices.clone())\n        .vector_name(format!(\"{}_sparse\", EmbeddedField::Combined))\n        .with_payload(true);\n\n    let search_response = qdrant_client.search_points(search_request).await.unwrap();\n    let first = search_response.result.first().unwrap();\n\n    assert!(\n        first\n            .payload\n            .get(\"path\")\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .ends_with(\"main.rs\")\n    );\n    assert_eq!(\n        first.payload.get(\"content\").unwrap().as_str().unwrap(),\n        \"fn main() { println!(\\\"Hello, World!\\\"); }\"\n    );\n\n    // Search hybrid\n    let search_response = qdrant_client\n        .query(\n            QueryPointsBuilder::new(\"swiftide-test\")\n                .with_payload(true)\n                .add_prefetch(\n                    
PrefetchQueryBuilder::default()\n                        .query(Query::new_nearest(VectorInput::new_sparse(\n                            sparse.indices,\n                            sparse.values,\n                        )))\n                        .using(\"Combined_sparse\")\n                        .limit(20u64),\n                )\n                .add_prefetch(\n                    PrefetchQueryBuilder::default()\n                        .query(Query::new_nearest(dense))\n                        .using(\"Combined\")\n                        .limit(20u64),\n                )\n                .query(Query::new_fusion(Fusion::Rrf)),\n        )\n        .await\n        .unwrap();\n\n    let first = search_response.result.first().unwrap();\n\n    assert!(\n        first\n            .payload\n            .get(\"path\")\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .ends_with(\"main.rs\")\n    );\n    assert_eq!(\n        first.payload.get(\"content\").unwrap().as_str().unwrap(),\n        \"fn main() { println!(\\\"Hello, World!\\\"); }\"\n    );\n}\n"
  },
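The hybrid query in the test above prefetches dense and sparse candidates and merges them with `Query::new_fusion(Fusion::Rrf)`. The fusion itself runs server-side in Qdrant; as a rough, std-only sketch of what Reciprocal Rank Fusion computes (the `rrf_fuse` function and the conventional constant `k = 60` are illustrative, not Qdrant's implementation):

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: score(id) = sum over result lists of 1 / (k + rank),
/// where rank starts at 1. Because it is rank-based, dense and sparse scores
/// never need to be normalized against each other.
fn rrf_fuse(lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in lists {
        for (rank, id) in list.iter().enumerate() {
            *scores.entry((*id).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    // Highest fused score first; scores are finite, so partial_cmp is safe here.
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // Document ids as ranked by a dense search and a sparse search.
    let dense = vec!["a", "b", "c"];
    let sparse = vec!["b", "d", "a"];
    let fused = rrf_fuse(&[dense, sparse], 60.0);
    // "b" wins: it ranks near the top of both lists.
    assert_eq!(fused[0].0, "b");
    println!("{fused:?}");
}
```

This is why the test can fuse a dense and a sparse prefetch without any score calibration: RRF only looks at each candidate's rank within its own list.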
  {
    "path": "swiftide-agents/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-agents\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\nswiftide-core = { path = \"../swiftide-core\", version = \"0.32\" }\nswiftide-indexing = { path = \"../swiftide-indexing\", version = \"0.32\" }\nanyhow.workspace = true\nasync-trait.workspace = true\ndyn-clone.workspace = true\nderive_builder.workspace = true\nindoc.workspace = true\ntracing.workspace = true\ntokio.workspace = true\n# pretty_assertions.workspace = true\nstrum.workspace = true\nstrum_macros.workspace = true\nserde.workspace = true\nserde_json.workspace = true\nfs-err = { workspace = true, features = [\"tokio\"] }\nthiserror.workspace = true\nfutures-util.workspace = true\ntokio-stream.workspace = true\ntokio-util = { workspace = true, features = [\"rt\"] }\nconvert_case.workspace = true\nschemars = { workspace = true, features = [\"derive\"] }\n\n# Mcp\nrmcp = { workspace = true, optional = true, default-features = false, features = [\n  \"base64\",\n  \"client\",\n  \"macros\",\n  \"server\",\n] }\n\n[dev-dependencies]\nswiftide-core = { path = \"../swiftide-core\", features = [\"test-utils\"] }\nmockall.workspace = true\ntest-log.workspace = true\ntemp-dir.workspace = true\ninsta.workspace = true\nrmcp = { workspace = true, features = [\"server\"] }\nschemars = { workspace = true }\n\n[lints]\nworkspace = true\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n\n[features]\nmcp = [\"dep:rmcp\"]\njson-schema = [\"swiftide-core/json-schema\"]\nlangfuse = []\n"
  },
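The `[features]` table above gates the optional integrations (`mcp`, `json-schema`, `langfuse`). As a hedged sketch, a downstream crate might opt in like this (the version matches the workspace's `0.32`, but the fragment is illustrative, not taken from the repository):

```toml
# Illustrative downstream Cargo.toml fragment: opting into optional
# features of swiftide-agents, named in the [features] table above.
[dependencies]
swiftide-agents = { version = "0.32", features = ["mcp", "langfuse"] }
```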
  {
    "path": "swiftide-agents/src/agent.rs",
    "content": "#![allow(dead_code)]\nuse crate::{\n    default_context::DefaultContext,\n    errors::AgentError,\n    hooks::{\n        AfterCompletionFn, AfterEachFn, AfterToolFn, BeforeAllFn, BeforeCompletionFn, BeforeToolFn,\n        Hook, HookTypes, MessageHookFn, OnStartFn, OnStopFn, OnStreamFn,\n    },\n    invoke_hooks,\n    state::{self, StopReason},\n    system_prompt::SystemPrompt,\n    tools::{arg_preprocessor::ArgPreprocessor, control::Stop},\n};\nuse std::{\n    collections::{HashMap, HashSet, VecDeque},\n    hash::{DefaultHasher, Hash as _, Hasher as _},\n    sync::Arc,\n};\n\nuse derive_builder::Builder;\nuse futures_util::stream::StreamExt;\nuse swiftide_core::{\n    AgentContext, ToolBox,\n    chat_completion::{\n        ChatCompletion, ChatCompletionRequest, ChatMessage, Tool, ToolCall, ToolOutput,\n    },\n    prompt::Prompt,\n};\nuse tracing::{Instrument, debug};\n\n/// Agents are the main interface for building agentic systems.\n///\n/// Construct agents by calling the builder, setting an llm, configure hooks, tools and other\n/// customizations.\n///\n/// # Important defaults\n///\n/// - The default context is the `DefaultContext`, executing tools locally with the `LocalExecutor`.\n/// - A default `stop` tool is provided for agents to explicitly stop if needed\n/// - The default `SystemPrompt` instructs the agent with chain of thought and some common\n///   safeguards, but is otherwise quite bare. In a lot of cases this can be sufficient.\n///\n///   Agents are *not* cheap to clone. 
However, if an agent gets cloned, it will operate on the\n///   same context.\n#[derive(Builder)]\npub struct Agent {\n    /// Hooks are functions that are called at specific points in the agent's lifecycle.\n    #[builder(default, setter(into))]\n    pub(crate) hooks: Vec<Hook>,\n    /// The context in which the agent operates, by default this is the `DefaultContext`.\n    #[builder(\n        setter(custom),\n        default = Arc::new(DefaultContext::default()) as Arc<dyn AgentContext>\n    )]\n    pub(crate) context: Arc<dyn AgentContext>,\n    /// Tools the agent can use\n    #[builder(default = Agent::default_tools(), setter(custom))]\n    pub(crate) tools: HashSet<Box<dyn Tool>>,\n\n    /// Toolboxes are collections of tools that can be added to the agent.\n    ///\n    /// Toolboxes make their tools available to the agent at runtime.\n    #[builder(default)]\n    pub(crate) toolboxes: Vec<Box<dyn ToolBox>>,\n\n    /// The language model that the agent uses for completion.\n    #[builder(setter(custom))]\n    pub(crate) llm: Box<dyn ChatCompletion>,\n\n    /// System prompt for the agent when it starts\n    ///\n    /// Some agents profit significantly from a tailored prompt. 
But it is not always needed.\n    ///\n    /// See [`SystemPrompt`] for an opinionated, customizable system prompt.\n    ///\n    /// Swiftide provides a default system prompt for all agents.\n    ///\n    /// Alternatively you can also provide a `Prompt` directly, or disable the system prompt.\n    ///\n    /// # Example\n    ///\n    /// ```no_run\n    /// # use swiftide_agents::system_prompt::SystemPrompt;\n    /// # use swiftide_agents::Agent;\n    /// Agent::builder()\n    ///     .system_prompt(\n    ///         SystemPrompt::builder().role(\"You are an expert engineer\")\n    ///         .build().unwrap())\n    ///     .build().unwrap();\n    /// ```\n    #[builder(setter(into, strip_option), default = Some(SystemPrompt::default()))]\n    pub(crate) system_prompt: Option<SystemPrompt>,\n\n    /// Initial state of the agent\n    #[builder(private, default = state::State::default())]\n    pub(crate) state: state::State,\n\n    /// Optional limit on the number of loops the agent can run.\n    /// The counter is reset when the agent is stopped.\n    #[builder(default, setter(strip_option))]\n    pub(crate) limit: Option<usize>,\n\n    /// The maximum number of times the failed output of a tool will be sent\n    /// to an LLM before the agent stops. Defaults to 3.\n    ///\n    /// LLMs sometimes send missing arguments, or a tool might actually fail, but retrying could be\n    /// worthwhile. If the limit is not reached, the agent will send the formatted error back to\n    /// the LLM.\n    ///\n    /// The limit is keyed by a hash of the tool call name and arguments, so the limit is per tool\n    /// call.\n    ///\n    /// This limit is _not_ reset when the agent is stopped.\n    #[builder(default = 3)]\n    pub(crate) tool_retry_limit: usize,\n\n    /// Enables streaming the chat completion responses for the agent.\n    #[builder(default)]\n    pub(crate) streaming: bool,\n\n    /// When set to true, any tools in `Agent::default_tools` will be omitted. 
Only works if you\n    /// add at least one tool of your own.\n    #[builder(private, default)]\n    pub(crate) clear_default_tools: bool,\n\n    /// Internally tracks the number of times a tool has been retried. The key is a hash based on\n    /// the name and args of the tool.\n    #[builder(private, default)]\n    pub(crate) tool_retries_counter: HashMap<u64, usize>,\n\n    /// The name of the agent; optional\n    #[builder(default = \"unnamed_agent\".into(), setter(into))]\n    pub(crate) name: String,\n\n    /// User messages waiting for any pending tool calls to complete.\n    #[builder(private, default)]\n    pub(crate) pending_user_messages: VecDeque<String>,\n}\n\nimpl Clone for Agent {\n    fn clone(&self) -> Self {\n        Agent {\n            hooks: self.hooks.clone(),\n            context: Arc::new(self.context.clone()),\n            tools: self.tools.clone(),\n            toolboxes: self.toolboxes.clone(),\n            llm: self.llm.clone(),\n            system_prompt: self.system_prompt.clone(),\n            state: self.state.clone(),\n            limit: self.limit,\n            tool_retry_limit: self.tool_retry_limit,\n            tool_retries_counter: HashMap::new(),\n            streaming: self.streaming,\n            name: self.name.clone(),\n            clear_default_tools: self.clear_default_tools,\n            pending_user_messages: VecDeque::new(),\n        }\n    }\n}\n\nimpl std::fmt::Debug for Agent {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Agent\")\n            .field(\"name\", &self.name)\n            // display hooks as a list of type: number of hooks\n            .field(\n                \"hooks\",\n                &self\n                    .hooks\n                    .iter()\n                    .map(std::string::ToString::to_string)\n                    .collect::<Vec<_>>(),\n            )\n            .field(\n                \"tools\",\n                &self\n             
       .tools\n                    .iter()\n                    .map(swiftide_core::Tool::name)\n                    .collect::<Vec<_>>(),\n            )\n            .field(\"llm\", &\"Box<dyn ChatCompletion>\")\n            .field(\"state\", &self.state)\n            .finish()\n    }\n}\n\nimpl AgentBuilder {\n    /// The context in which the agent operates, by default this is the `DefaultContext`.\n    pub fn context(&mut self, context: impl AgentContext + 'static) -> &mut AgentBuilder\n    where\n        Self: Clone,\n    {\n        self.context = Some(Arc::new(context) as Arc<dyn AgentContext>);\n        self\n    }\n\n    /// Returns a mutable reference to the system prompt, if it is set.\n    pub fn system_prompt_mut(&mut self) -> Option<&mut SystemPrompt> {\n        self.system_prompt.as_mut().and_then(Option::as_mut)\n    }\n\n    /// Disable the system prompt.\n    pub fn no_system_prompt(&mut self) -> &mut Self {\n        self.system_prompt = Some(None);\n\n        self\n    }\n\n    /// Add a hook to the agent.\n    pub fn add_hook(&mut self, hook: Hook) -> &mut Self {\n        let hooks = self.hooks.get_or_insert_with(Vec::new);\n        hooks.push(hook);\n\n        self\n    }\n\n    /// Adds a tool to the agent\n    pub fn add_tool(&mut self, tool: impl Tool + 'static) -> &mut Self {\n        let tools = self.tools.get_or_insert_with(HashSet::new);\n        if let Some(tool) = tools.replace(tool.boxed()) {\n            tracing::debug!(\"Tool {} already exists, replacing\", tool.name());\n        }\n\n        self\n    }\n\n    /// Add a hook that runs once, before all completions. Even if the agent is paused and resumed,\n    /// before all will not trigger again.\n    pub fn before_all(&mut self, hook: impl BeforeAllFn + 'static) -> &mut Self {\n        self.add_hook(Hook::BeforeAll(Box::new(hook)))\n    }\n\n    /// Add a hook that runs once, when the agent starts. This hook also runs if the agent stopped\n    /// and then starts again. 
The hook runs after any `before_all` hooks and before the\n    /// `before_completion` hooks.\n    pub fn on_start(&mut self, hook: impl OnStartFn + 'static) -> &mut Self {\n        self.add_hook(Hook::OnStart(Box::new(hook)))\n    }\n\n    /// Add a hook that runs when the agent receives a streaming response\n    ///\n    /// The response will always include both the current accumulated message and the delta\n    ///\n    /// This will set `self.streaming` to true; there is no need to set it manually for the default\n    /// behaviour.\n    pub fn on_stream(&mut self, hook: impl OnStreamFn + 'static) -> &mut Self {\n        self.streaming = Some(true);\n        self.add_hook(Hook::OnStream(Box::new(hook)))\n    }\n\n    /// Add a hook that runs before each completion.\n    pub fn before_completion(&mut self, hook: impl BeforeCompletionFn + 'static) -> &mut Self {\n        self.add_hook(Hook::BeforeCompletion(Box::new(hook)))\n    }\n\n    /// Add a hook that runs after each tool. The `Result<ToolOutput, ToolError>` is provided\n    /// as mut, so the tool output can be fully modified.\n    ///\n    /// The `ToolOutput` also references the original `ToolCall`, allowing you to match at runtime\n    /// what tool to interact with.\n    pub fn after_tool(&mut self, hook: impl AfterToolFn + 'static) -> &mut Self {\n        self.add_hook(Hook::AfterTool(Box::new(hook)))\n    }\n\n    /// Add a hook that runs before each tool. 
Yields an immutable reference to the `ToolCall`.\n    pub fn before_tool(&mut self, hook: impl BeforeToolFn + 'static) -> &mut Self {\n        self.add_hook(Hook::BeforeTool(Box::new(hook)))\n    }\n\n    /// Add a hook that runs after each completion, before tool invocation and/or new messages.\n    pub fn after_completion(&mut self, hook: impl AfterCompletionFn + 'static) -> &mut Self {\n        self.add_hook(Hook::AfterCompletion(Box::new(hook)))\n    }\n\n    /// Add a hook that runs after each completion, after tool invocations, right before a new loop\n    /// might start\n    pub fn after_each(&mut self, hook: impl AfterEachFn + 'static) -> &mut Self {\n        self.add_hook(Hook::AfterEach(Box::new(hook)))\n    }\n\n    /// Add a hook that runs when a new message is added to the context. Note that each tool adds a\n    /// separate message.\n    pub fn on_new_message(&mut self, hook: impl MessageHookFn + 'static) -> &mut Self {\n        self.add_hook(Hook::OnNewMessage(Box::new(hook)))\n    }\n\n    pub fn on_stop(&mut self, hook: impl OnStopFn + 'static) -> &mut Self {\n        self.add_hook(Hook::OnStop(Box::new(hook)))\n    }\n\n    /// Set the LLM for the agent. An LLM must implement the `ChatCompletion` trait.\n    pub fn llm<LLM: ChatCompletion + Clone + 'static>(&mut self, llm: &LLM) -> &mut Self {\n        let boxed: Box<dyn ChatCompletion> = Box::new(llm.clone()) as Box<dyn ChatCompletion>;\n\n        self.llm = Some(boxed);\n        self\n    }\n\n    /// Removes the default `stop` tool from the agent. 
This allows you to add your own or use\n    /// other methods to stop the agent.\n    ///\n    /// Note that you can also just override the tool if the name of the tool is `stop`.\n    pub fn without_default_stop_tool(&mut self) -> &mut Self {\n        self.clear_default_tools = Some(true);\n        self\n    }\n\n    fn builder_default_tools(&self) -> HashSet<Box<dyn Tool>> {\n        if self.clear_default_tools.is_some_and(|b| b) {\n            HashSet::new()\n        } else {\n            Agent::default_tools()\n        }\n    }\n\n    /// Define the available tools for the agent. Tools must implement the `Tool` trait.\n    ///\n    /// See the [tool attribute macro](`swiftide_macros::tool`) and the [tool derive\n    /// macro](`swiftide_macros::Tool`) for easy ways to create (many) tools.\n    pub fn tools<TOOL, I: IntoIterator<Item = TOOL>>(&mut self, tools: I) -> &mut Self\n    where\n        TOOL: Into<Box<dyn Tool>>,\n    {\n        self.tools = Some(\n            self.builder_default_tools()\n                .into_iter()\n                .chain(tools.into_iter().map(Into::into))\n                .collect(),\n        );\n        self\n    }\n\n    /// Add a toolbox to the agent. Toolboxes are collections of tools that can be added to\n    /// the agent. 
Available tools are evaluated at runtime, when the agent starts for the first\n    /// time.\n    ///\n    /// Agents can have many toolboxes.\n    pub fn add_toolbox(&mut self, toolbox: impl ToolBox + 'static) -> &mut Self {\n        let toolboxes = self.toolboxes.get_or_insert_with(Vec::new);\n        toolboxes.push(Box::new(toolbox));\n\n        self\n    }\n}\n\nimpl Agent {\n    /// Build a new agent\n    pub fn builder() -> AgentBuilder {\n        AgentBuilder::default()\n            .tools(Agent::default_tools())\n            .to_owned()\n    }\n\n    /// The name of the agent\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    /// Default tools for the agent that it always includes\n    /// Right now this is the `stop` tool, which allows the agent to stop itself.\n    pub fn default_tools() -> HashSet<Box<dyn Tool>> {\n        HashSet::from([Stop::default().boxed()])\n    }\n\n    /// Run the agent with a user message. The agent will loop completions, make tool calls, until\n    /// no new messages are available.\n    ///\n    /// # Errors\n    ///\n    /// Errors if anything goes wrong, see `AgentError` for more details.\n    #[tracing::instrument(skip_all, name = \"agent.query\", err)]\n    pub async fn query(&mut self, query: impl Into<Prompt>) -> Result<(), AgentError> {\n        let query = query\n            .into()\n            .render()\n            .map_err(AgentError::FailedToRenderPrompt)?;\n        self.run_agent(Some(query), false).await\n    }\n\n    /// Adds a tool to an agent at run time\n    pub fn add_tool(&mut self, tool: Box<dyn Tool>) {\n        if let Some(tool) = self.tools.replace(tool) {\n            tracing::debug!(\"Tool {} already exists, replacing\", tool.name());\n        }\n    }\n\n    /// Modify the tools of the agent at runtime\n    ///\n    /// Note that any mcp tools are added to the agent after the first start, and will only then\n    /// also be available here.\n    pub fn tools_mut(&mut self) -> &mut 
HashSet<Box<dyn Tool>> {\n        &mut self.tools\n    }\n\n    /// Run the agent with a user message once.\n    ///\n    /// # Errors\n    ///\n    /// Errors if anything goes wrong, see `AgentError` for more details.\n    #[tracing::instrument(skip_all, name = \"agent.query_once\", err)]\n    pub async fn query_once(&mut self, query: impl Into<Prompt>) -> Result<(), AgentError> {\n        self.run_agent(Some(query), true).await\n    }\n\n    /// Run the agent without a user message. The agent will loop completions, make tool calls,\n    /// until no new messages are available.\n    ///\n    /// # Errors\n    ///\n    /// Errors if anything goes wrong, see `AgentError` for more details.\n    #[tracing::instrument(skip_all, name = \"agent.run\", err)]\n    pub async fn run(&mut self) -> Result<(), AgentError> {\n        self.run_agent(None::<Prompt>, false).await\n    }\n\n    /// Run the agent without a user message, once.\n    ///\n    /// # Errors\n    ///\n    /// Errors if anything goes wrong, see `AgentError` for more details.\n    #[tracing::instrument(skip_all, name = \"agent.run_once\", err)]\n    pub async fn run_once(&mut self) -> Result<(), AgentError> {\n        self.run_agent(None::<Prompt>, true).await\n    }\n\n    /// Retrieve the message history of the agent\n    ///\n    /// # Errors\n    ///\n    /// Errors if the message history cannot be retrieved, e.g. 
if the context is not set up or a\n    /// connection fails\n    pub async fn history(&self) -> Result<Vec<ChatMessage>, AgentError> {\n        self.context\n            .history()\n            .await\n            .map_err(AgentError::MessageHistoryError)\n    }\n\n    pub(crate) async fn run_agent(\n        &mut self,\n        maybe_query: Option<impl Into<Prompt>>,\n        just_once: bool,\n    ) -> Result<(), AgentError> {\n        let maybe_query = maybe_query\n            .map(|q| q.into().render())\n            .transpose()\n            .map_err(AgentError::FailedToRenderPrompt)?;\n        if self.state.is_running() {\n            return Err(AgentError::AlreadyRunning);\n        }\n\n        if self.state.is_pending() {\n            if let Some(system_prompt) = &self.system_prompt {\n                self.context\n                    .add_messages(vec![ChatMessage::System(\n                        system_prompt\n                            .to_prompt()\n                            .render()\n                            .map_err(AgentError::FailedToRenderSystemPrompt)?,\n                    )])\n                    .await\n                    .map_err(AgentError::MessageHistoryError)?;\n            }\n\n            invoke_hooks!(BeforeAll, self);\n\n            self.load_toolboxes().await?;\n        }\n\n        if let Some(query) = maybe_query {\n            if cfg!(feature = \"langfuse\") {\n                debug!(langfuse.input = query);\n            }\n            tracing::debug!(\"Queueing user message until tool outputs are recorded\");\n            self.pending_user_messages.push_back(query);\n        }\n\n        self.invoke_pending_tool_calls().await?;\n\n        if self.has_unfulfilled_tool_calls().await? 
{\n            tracing::warn!(\n                \"Unfulfilled tool calls remain after invocation; agent/tool configuration is invalid\"\n            );\n            return Err(AgentError::UnfulfilledToolCalls);\n        }\n\n        self.flush_pending_user_messages().await?;\n\n        invoke_hooks!(OnStart, self);\n\n        self.state = state::State::Running;\n\n        let mut loop_counter = 0;\n\n        while let Some(messages) = self\n            .context\n            .next_completion()\n            .await\n            .map_err(AgentError::MessageHistoryError)?\n        {\n            if let Some(limit) = self.limit\n                && loop_counter >= limit\n            {\n                tracing::warn!(\"Agent loop limit reached\");\n                break;\n            }\n\n            // If the last message contains tool calls that have not been completed,\n            // run the tools first\n            if let Some(ChatMessage::Assistant(_, tool_calls)) =\n                maybe_tool_call_without_output(&messages)\n                && tool_calls\n                    .as_ref()\n                    .is_some_and(|tool_calls| !tool_calls.is_empty())\n            {\n                tracing::debug!(\"Uncompleted tool calls found; invoking tools\");\n                if let Some(tool_calls) = tool_calls.as_ref() {\n                    self.invoke_tools(tool_calls).await?;\n                }\n                // Move on to the next tick, so that the tool outputs are included in the next\n                // completion\n                continue;\n            }\n\n            let result = self.step(&messages, loop_counter).await;\n\n            if let Err(err) = result {\n                self.stop_with_error(&err).await;\n                tracing::error!(error = ?err, \"Agent stopped with error {err}\");\n                return Err(err);\n            }\n\n            if just_once || self.state.is_stopped() {\n                break;\n            }\n            loop_counter += 1;\n        }\n\n        // If there are no new messages, ensure 
we update our state\n        self.stop(StopReason::NoNewMessages).await;\n\n        Ok(())\n    }\n\n    #[tracing::instrument(skip(self, messages), err, fields(otel.name))]\n    async fn step(\n        &mut self,\n        messages: &[ChatMessage],\n        step_count: usize,\n    ) -> Result<(), AgentError> {\n        tracing::Span::current().record(\"otel.name\", format!(\"step-{step_count}\"));\n\n        debug!(\n            tools = ?self\n                .tools\n                .iter()\n                .map(|t| t.name())\n                .collect::<Vec<_>>()\n                ,\n            \"Running completion for agent with {} new messages\",\n            messages.len()\n        );\n\n        let mut chat_completion_request = ChatCompletionRequest::builder()\n            .messages(messages)\n            .tool_specs(self.tools.iter().map(swiftide_core::Tool::tool_spec))\n            .build()\n            .map_err(AgentError::FailedToBuildRequest)?;\n\n        invoke_hooks!(BeforeCompletion, self, &mut chat_completion_request);\n\n        debug!(\n            \"Calling LLM with the following new messages:\\n {}\",\n            self.context\n                .current_new_messages()\n                .await\n                .map_err(AgentError::MessageHistoryError)?\n                .iter()\n                .map(ToString::to_string)\n                .collect::<Vec<_>>()\n                .join(\",\\n\")\n        );\n\n        let mut response = if self.streaming {\n            let mut last_response = None;\n            let mut stream = self.llm.complete_stream(&chat_completion_request).await;\n\n            while let Some(response) = stream.next().await {\n                let response = response.map_err(AgentError::CompletionsFailed)?;\n                invoke_hooks!(OnStream, self, &response);\n                last_response = Some(response);\n            }\n            tracing::trace!(?last_response, \"Streaming completed\");\n            
last_response.ok_or(AgentError::EmptyStream)\n        } else {\n            self.llm\n                .complete(&chat_completion_request)\n                .await\n                .map_err(AgentError::CompletionsFailed)\n        }?;\n\n        // The arg preprocessor helps avoid common llm errors.\n        // This must happen as early as possible\n        response\n            .tool_calls\n            .as_deref_mut()\n            .map(ArgPreprocessor::preprocess_tool_calls);\n\n        invoke_hooks!(AfterCompletion, self, &mut response);\n\n        let assistant_content = response.message.take();\n        let assistant_tool_calls = response.tool_calls.clone();\n\n        let has_assistant_message = assistant_content.is_some()\n            || assistant_tool_calls\n                .as_ref()\n                .is_some_and(|calls| !calls.is_empty());\n\n        if let Some(reasoning_items) = response.reasoning.take() {\n            if has_assistant_message {\n                for item in reasoning_items {\n                    self.add_message(ChatMessage::Reasoning(item)).await?;\n                }\n            } else {\n                tracing::debug!(\n                    \"Skipping reasoning items because no assistant message or tool call was produced\"\n                );\n            }\n        }\n\n        if has_assistant_message {\n            self.add_message(ChatMessage::Assistant(\n                assistant_content,\n                assistant_tool_calls,\n            ))\n            .await?;\n        }\n\n        if let Some(tool_calls) = response.tool_calls {\n            self.invoke_tools(&tool_calls).await?;\n        }\n\n        invoke_hooks!(AfterEach, self);\n\n        Ok(())\n    }\n\n    async fn invoke_tools(&mut self, tool_calls: &[ToolCall]) -> Result<(), AgentError> {\n        tracing::debug!(\"LLM returned tool calls: {:?}\", tool_calls);\n\n        let mut handles = vec![];\n        for tool_call in tool_calls {\n            let Some(tool) = 
self.find_tool_by_name(tool_call.name()) else {\n                tracing::warn!(\"Tool {} not found\", tool_call.name());\n                continue;\n            };\n            tracing::info!(\"Calling tool `{}`\", tool_call.name());\n\n            let context: Arc<dyn AgentContext> = Arc::clone(&self.context);\n\n            invoke_hooks!(BeforeTool, self, &tool_call);\n\n            let tool_span = tracing::info_span!(\n                \"tool\",\n                \"otel.name\" = format!(\"tool.{}\", tool.name().as_ref()),\n            );\n\n            let handle_tool_call = tool_call.clone();\n            let handle = tokio::spawn(\n                async move {\n                    let output = tool.invoke(&*context, &handle_tool_call).await?;\n\n                    if cfg!(feature = \"langfuse\") {\n                        tracing::debug!(\n                            langfuse.output = %output,\n                            langfuse.input = handle_tool_call.args(),\n                            tool_name = tool.name().as_ref(),\n                        );\n                    } else {\n                        tracing::debug!(output = output.to_string(), args = ?handle_tool_call.args(), tool_name = tool.name().as_ref(), \"Completed tool call\");\n                    }\n\n                    Ok(output)\n                }\n                .instrument(tool_span.or_current()),\n            );\n\n            handles.push((handle, tool_call));\n        }\n\n        for (handle, tool_call) in handles {\n            let mut output = handle\n                .await\n                .map_err(|err| AgentError::ToolFailedToJoin(tool_call.name().to_string(), err))?;\n\n            invoke_hooks!(AfterTool, self, &tool_call, &mut output);\n\n            if let Err(error) = output {\n                let stop = self.tool_calls_over_limit(tool_call);\n                if stop {\n                    
tracing::error!(\n                        ?error,\n                        \"Tool call failed, retry limit reached, stopping agent: {error}\",\n                    );\n                } else {\n                    tracing::warn!(\n                        ?error,\n                        tool_call = ?tool_call,\n                        \"Tool call failed, retrying\",\n                    );\n                }\n                self.add_message(ChatMessage::ToolOutput(\n                    tool_call.clone(),\n                    ToolOutput::fail(error.to_string()),\n                ))\n                .await?;\n                if stop {\n                    self.stop(StopReason::ToolCallsOverLimit(tool_call.to_owned()))\n                        .await;\n                    return Err(error.into());\n                }\n                continue;\n            }\n\n            let output = output?;\n            self.handle_control_tools(tool_call, &output).await;\n\n            // Feedback required leaves the tool call open\n            //\n            // It assumes a follow up invocation of the agent will have the feedback approved\n            if !output.is_feedback_required() {\n                self.add_message(ChatMessage::ToolOutput(tool_call.to_owned(), output))\n                    .await?;\n            }\n        }\n\n        Ok(())\n    }\n\n    fn hooks_by_type(&self, hook_type: HookTypes) -> Vec<&Hook> {\n        self.hooks\n            .iter()\n            .filter(|h| hook_type == (*h).into())\n            .collect()\n    }\n\n    fn find_tool_by_name(&self, tool_name: &str) -> Option<Box<dyn Tool>> {\n        self.tools\n            .iter()\n            .find(|tool| tool.name() == tool_name)\n            .cloned()\n    }\n\n    // Handle any tool specific output (e.g. 
stop)\n    async fn handle_control_tools(&mut self, tool_call: &ToolCall, output: &ToolOutput) {\n        match output {\n            ToolOutput::Stop(maybe_message) => {\n                tracing::warn!(\"Stop tool called, stopping agent\");\n                self.stop(StopReason::RequestedByTool(\n                    tool_call.clone(),\n                    maybe_message.clone(),\n                ))\n                .await;\n            }\n            ToolOutput::FeedbackRequired(maybe_payload) => {\n                tracing::warn!(\"Feedback required, stopping agent\");\n                self.stop(StopReason::FeedbackRequired {\n                    tool_call: tool_call.clone(),\n                    payload: maybe_payload.clone(),\n                })\n                .await;\n            }\n            ToolOutput::AgentFailed(output) => {\n                tracing::warn!(\"Agent failed, stopping agent\");\n                self.stop(StopReason::AgentFailed(output.clone())).await;\n            }\n            _ => (),\n        }\n    }\n\n    /// Retrieve the system prompt, if it is set.\n    pub fn system_prompt(&self) -> Option<&SystemPrompt> {\n        self.system_prompt.as_ref()\n    }\n\n    /// Retrieve a mutable reference to the system prompt, if it is set.\n    ///\n    /// Note that the system prompt is rendered only once, when the agent starts for the first time\n    pub fn system_prompt_mut(&mut self) -> Option<&mut SystemPrompt> {\n        self.system_prompt.as_mut()\n    }\n\n    fn tool_calls_over_limit(&mut self, tool_call: &ToolCall) -> bool {\n        let mut s = DefaultHasher::new();\n        tool_call.hash(&mut s);\n        let hash = s.finish();\n\n        if let Some(retries) = self.tool_retries_counter.get_mut(&hash) {\n            let val = *retries >= self.tool_retry_limit;\n            *retries += 1;\n            val\n        } else {\n            self.tool_retries_counter.insert(hash, 1);\n            false\n        }\n    }\n\n    /// Add a 
message to the agent's context\n    ///\n    /// This will trigger an `OnNewMessage` hook if it's present.\n    ///\n    /// If you want to add a message without triggering the hook, use the context directly.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the message cannot be added to the context. With the default in-memory context\n    /// that is not supposed to happen.\n    #[tracing::instrument(skip_all, fields(message = message.to_string()))]\n    pub async fn add_message(&self, mut message: ChatMessage) -> Result<(), AgentError> {\n        invoke_hooks!(OnNewMessage, self, &mut message);\n\n        self.context\n            .add_message(message)\n            .await\n            .map_err(AgentError::MessageHistoryError)?;\n        Ok(())\n    }\n\n    /// Tell the agent to stop. It will finish its current loop and then stop.\n    pub async fn stop(&mut self, reason: impl Into<StopReason>) {\n        if self.state.is_stopped() {\n            return;\n        }\n\n        let reason = reason.into();\n        invoke_hooks!(OnStop, self, reason.clone(), None);\n\n        if cfg!(feature = \"langfuse\") {\n            debug!(langfuse.output = serde_json::to_string_pretty(&reason).ok());\n        }\n\n        self.state = state::State::Stopped(reason);\n    }\n\n    pub async fn stop_with_error(&mut self, error: &AgentError) {\n        if self.state.is_stopped() {\n            return;\n        }\n        invoke_hooks!(OnStop, self, StopReason::Error, Some(error));\n\n        self.state = state::State::Stopped(StopReason::Error);\n    }\n\n    /// Access the agent's context\n    pub fn context(&self) -> &dyn AgentContext {\n        &self.context\n    }\n\n    /// The agent is still running\n    pub fn is_running(&self) -> bool {\n        self.state.is_running()\n    }\n\n    /// The agent stopped\n    pub fn is_stopped(&self) -> bool {\n        self.state.is_stopped()\n    }\n\n    /// The agent has not (ever) started\n    pub fn is_pending(&self) -> bool {\n 
       self.state.is_pending()\n    }\n\n    /// Get a list of tools available to the agent\n    pub fn tools(&self) -> &HashSet<Box<dyn Tool>> {\n        &self.tools\n    }\n\n    pub fn state(&self) -> &state::State {\n        &self.state\n    }\n\n    pub fn stop_reason(&self) -> Option<&StopReason> {\n        self.state.stop_reason()\n    }\n\n    async fn has_unfulfilled_tool_calls(&self) -> Result<bool, AgentError> {\n        let history = self\n            .context\n            .history()\n            .await\n            .map_err(AgentError::MessageHistoryError)?;\n        Ok(maybe_tool_call_without_output(&history).is_some())\n    }\n\n    async fn invoke_pending_tool_calls(&mut self) -> Result<(), AgentError> {\n        let history = self\n            .context\n            .history()\n            .await\n            .map_err(AgentError::MessageHistoryError)?;\n\n        if let Some(ChatMessage::Assistant(_, tool_calls)) =\n            maybe_tool_call_without_output(&history)\n            && tool_calls\n                .as_ref()\n                .is_some_and(|tool_calls| !tool_calls.is_empty())\n            && let Some(tool_calls) = tool_calls.as_ref()\n        {\n            self.invoke_tools(tool_calls).await?;\n        }\n\n        Ok(())\n    }\n\n    async fn flush_pending_user_messages(&mut self) -> Result<(), AgentError> {\n        if self.pending_user_messages.is_empty() {\n            return Ok(());\n        }\n\n        let messages = self\n            .pending_user_messages\n            .drain(..)\n            .map(ChatMessage::new_user)\n            .collect();\n\n        self.context\n            .add_messages(messages)\n            .await\n            .map_err(AgentError::MessageHistoryError)?;\n        Ok(())\n    }\n\n    async fn load_toolboxes(&mut self) -> Result<(), AgentError> {\n        for toolbox in &self.toolboxes {\n            let tools = toolbox\n                .available_tools()\n                .await\n                
.map_err(AgentError::ToolBoxFailedToLoad)?;\n            self.tools.extend(tools);\n        }\n\n        Ok(())\n    }\n}\n\n/// Reverse searches through messages, if it encounters a tool call before encountering an output,\n/// it will return the chat message with the tool calls, otherwise it returns None\nfn maybe_tool_call_without_output(messages: &[ChatMessage]) -> Option<&ChatMessage> {\n    for message in messages.iter().rev() {\n        if let ChatMessage::ToolOutput(..) = message {\n            return None;\n        }\n\n        if let ChatMessage::Assistant(_, tool_calls) = message\n            && tool_calls\n                .as_ref()\n                .is_some_and(|tool_calls| !tool_calls.is_empty())\n        {\n            return Some(message);\n        }\n    }\n\n    None\n}\n\n#[cfg(test)]\nmod tests {\n\n    use serde::ser::Error;\n    use swiftide_core::ToolFeedback;\n    use swiftide_core::chat_completion::errors::ToolError;\n    use swiftide_core::chat_completion::{ChatCompletionResponse, ToolCall};\n    use swiftide_core::test_utils::MockChatCompletion;\n\n    use super::*;\n    use crate::{\n        State, assistant, chat_request, chat_response, summary, system, tool_failed, tool_output,\n        user,\n    };\n\n    use crate::test_utils::{MockHook, MockTool};\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_builder_defaults() {\n        // Create a prompt\n        let mock_llm = MockChatCompletion::new();\n\n        // Build the agent\n        let agent = Agent::builder().llm(&mock_llm).build().unwrap();\n\n        // Check that the context is the default context\n\n        // Check that the default tools are added\n        assert!(agent.find_tool_by_name(\"stop\").is_some());\n\n        // Check it does not allow duplicates\n        let agent = Agent::builder()\n            .tools([Stop::default(), Stop::default()])\n            .llm(&mock_llm)\n            .build()\n            .unwrap();\n\n        assert_eq!(agent.tools.len(), 
1);\n\n        // It should include the default tool if a different tool is provided\n        let agent = Agent::builder()\n            .tools([MockTool::new(\"mock_tool\")])\n            .llm(&mock_llm)\n            .build()\n            .unwrap();\n\n        assert_eq!(agent.tools.len(), 2);\n        assert!(agent.find_tool_by_name(\"mock_tool\").is_some());\n        assert!(agent.find_tool_by_name(\"stop\").is_some());\n\n        assert!(agent.context().history().await.unwrap().is_empty());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_tool_calling_loop() {\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::new(\"mock_tool\");\n\n        let chat_request = chat_request! {\n            user!(\"Write a poem\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = [\"mock_tool\"]\n\n        };\n\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n\n        let chat_request = chat_request! {\n            user!(\"Write a poem\"),\n            assistant!(\"Roses are red\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"Great!\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let stop_response = chat_response! 
{\n            \"Roses are red\";\n            tool_calls = [\"stop\"]\n        };\n\n        mock_llm.expect_complete(chat_request, Ok(stop_response));\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        agent.query(prompt).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_tool_run_once() {\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::default();\n\n        let chat_request = chat_request! {\n            system!(\"My system prompt\"),\n            user!(\"Write a poem\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = [\"mock_tool\"]\n\n        };\n\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool])\n            .system_prompt(\"My system prompt\")\n            .llm(&mock_llm)\n            .build()\n            .unwrap();\n\n        agent.query_once(prompt).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_tool_via_toolbox_run_once() {\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::default();\n\n        let chat_request = chat_request! {\n            system!(\"My system prompt\"),\n            user!(\"Write a poem\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let mock_tool_response = chat_response! 
{\n            \"Roses are red\";\n            tool_calls = [\"mock_tool\"]\n\n        };\n\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n\n        let mut agent = Agent::builder()\n            .add_toolbox(vec![mock_tool.boxed()])\n            .system_prompt(\"My system prompt\")\n            .llm(&mock_llm)\n            .build()\n            .unwrap();\n\n        agent.query_once(prompt).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_multiple_tool_calls() {\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::new(\"mock_tool1\");\n        let mock_tool2 = MockTool::new(\"mock_tool2\");\n\n        let chat_request = chat_request! {\n            system!(\"My system prompt\"),\n            user!(\"Write a poem\");\n\n\n\n            tools = [mock_tool.clone(), mock_tool2.clone()]\n        };\n\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n\n            tool_calls = [\"mock_tool1\", \"mock_tool2\"]\n\n        };\n\n        dbg!(&chat_request);\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n        mock_tool2.expect_invoke_ok(\"Great!\".into(), None);\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n\n        let chat_request = chat_request! {\n            system!(\"My system prompt\"),\n            user!(\"Write a poem\"),\n            assistant!(\"Roses are red\", [\"mock_tool1\", \"mock_tool2\"]),\n            tool_output!(\"mock_tool1\", \"Great!\"),\n            tool_output!(\"mock_tool2\", \"Great!\");\n\n            tools = [mock_tool.clone(), mock_tool2.clone()]\n        };\n\n        let mock_tool_response = chat_response! 
{\n            \"Ok!\";\n\n            tool_calls = [\"stop\"]\n\n        };\n\n        mock_llm.expect_complete(chat_request, Ok(mock_tool_response));\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool, mock_tool2])\n            .system_prompt(\"My system prompt\")\n            .llm(&mock_llm)\n            .build()\n            .unwrap();\n\n        agent.query(prompt).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_state_machine() {\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n\n        let chat_request = chat_request! {\n            user!(\"Write a poem\");\n            tools = []\n        };\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = []\n        };\n\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n        let mut agent = Agent::builder()\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        // Agent has never run and is pending\n        assert!(agent.state.is_pending());\n        agent.query_once(prompt).await.unwrap();\n\n        // Agent is stopped, there might be more messages\n        assert!(agent.state.is_stopped());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_summary() {\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = []\n\n        };\n\n        let expected_chat_request = chat_request! 
{\n            system!(\"My system prompt\"),\n            user!(\"Write a poem\");\n\n            tools = []\n        };\n\n        mock_llm.expect_complete(expected_chat_request, Ok(mock_tool_response.clone()));\n\n        let mut agent = Agent::builder()\n            .system_prompt(\"My system prompt\")\n            .llm(&mock_llm)\n            .build()\n            .unwrap();\n\n        agent.query_once(prompt).await.unwrap();\n\n        agent\n            .context\n            .add_message(ChatMessage::new_summary(\"Summary\"))\n            .await\n            .unwrap();\n\n        let expected_chat_request = chat_request! {\n            system!(\"My system prompt\"),\n            summary!(\"Summary\"),\n            user!(\"Write another poem\");\n            tools = []\n        };\n        mock_llm.expect_complete(expected_chat_request, Ok(mock_tool_response.clone()));\n\n        agent.query_once(\"Write another poem\").await.unwrap();\n\n        agent\n            .context\n            .add_message(ChatMessage::new_summary(\"Summary 2\"))\n            .await\n            .unwrap();\n\n        let expected_chat_request = chat_request! 
{\n            system!(\"My system prompt\"),\n            summary!(\"Summary 2\"),\n            user!(\"Write a third poem\");\n            tools = []\n        };\n        mock_llm.expect_complete(expected_chat_request, Ok(mock_tool_response));\n\n        agent.query_once(\"Write a third poem\").await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_hooks() {\n        let mock_before_all = MockHook::new(\"before_all\").expect_calls(1).to_owned();\n        let mock_on_start_fn = MockHook::new(\"on_start\").expect_calls(1).to_owned();\n        let mock_before_completion = MockHook::new(\"before_completion\")\n            .expect_calls(2)\n            .to_owned();\n        let mock_after_completion = MockHook::new(\"after_completion\").expect_calls(2).to_owned();\n        let mock_after_each = MockHook::new(\"after_each\").expect_calls(2).to_owned();\n        let mock_on_message = MockHook::new(\"on_message\").expect_calls(4).to_owned();\n        let mock_on_stop = MockHook::new(\"on_stop\").expect_calls(1).to_owned();\n\n        // Once for mock tool and once for stop\n        let mock_before_tool = MockHook::new(\"before_tool\").expect_calls(2).to_owned();\n        let mock_after_tool = MockHook::new(\"after_tool\").expect_calls(2).to_owned();\n\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::default();\n\n        let chat_request = chat_request! {\n            user!(\"Write a poem\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = [\"mock_tool\"]\n\n        };\n\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n\n        let chat_request = chat_request! 
{\n            user!(\"Write a poem\"),\n            assistant!(\"Roses are red\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"Great!\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let stop_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = [\"stop\"]\n        };\n\n        mock_llm.expect_complete(chat_request, Ok(stop_response));\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .before_all(mock_before_all.hook_fn())\n            .on_start(mock_on_start_fn.on_start_fn())\n            .before_completion(mock_before_completion.before_completion_fn())\n            .before_tool(mock_before_tool.before_tool_fn())\n            .after_completion(mock_after_completion.after_completion_fn())\n            .after_tool(mock_after_tool.after_tool_fn())\n            .after_each(mock_after_each.hook_fn())\n            .on_new_message(mock_on_message.message_hook_fn())\n            .on_stop(mock_on_stop.stop_hook_fn())\n            .build()\n            .unwrap();\n\n        agent.query(prompt).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_loop_limit() {\n        let prompt = \"Generate content\"; // Example prompt\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::new(\"mock_tool\");\n\n        let chat_request = chat_request! {\n            user!(prompt);\n            tools = [mock_tool.clone()]\n        };\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n\n        let mock_tool_response = chat_response! 
{\n            \"Some response\";\n            tool_calls = [\"mock_tool\"]\n        };\n\n        // Set expectations for the mock LLM responses\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response.clone()));\n\n        // Response for terminating the loop\n        let stop_response = chat_response! {\n            \"Final response\";\n            tool_calls = [\"stop\"]\n        };\n\n        mock_llm.expect_complete(chat_request, Ok(stop_response));\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .limit(1) // Setting the loop limit to 1\n            .build()\n            .unwrap();\n\n        // Run the agent\n        agent.query(prompt).await.unwrap();\n\n        // Assert that the remaining message is still in the queue\n        let remaining = mock_llm.expectations.lock().unwrap().pop();\n        assert!(remaining.is_some());\n\n        // Assert that the agent is stopped after reaching the loop limit\n        assert!(agent.is_stopped());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_tool_retry_mechanism() {\n        let prompt = \"Execute my tool\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::new(\"retry_tool\");\n\n        // Configure mock tool to fail twice. First time is fed back to the LLM, second time is an\n        // error\n        mock_tool.expect_invoke(\n            Err(ToolError::WrongArguments(serde_json::Error::custom(\n                \"missing `query`\",\n            ))),\n            None,\n        );\n        mock_tool.expect_invoke(\n            Err(ToolError::WrongArguments(serde_json::Error::custom(\n                \"missing `query`\",\n            ))),\n            None,\n        );\n\n        let chat_request = chat_request! 
{\n            user!(prompt);\n            tools = [mock_tool.clone()]\n        };\n        let retry_response = chat_response! {\n            \"First failing attempt\";\n            tool_calls = [\"retry_tool\"]\n        };\n        mock_llm.expect_complete(chat_request.clone(), Ok(retry_response));\n\n        let chat_request = chat_request! {\n            user!(prompt),\n            assistant!(\"First failing attempt\", [\"retry_tool\"]),\n            tool_failed!(\"retry_tool\", \"arguments for tool failed to parse: missing `query`\");\n\n            tools = [mock_tool.clone()]\n        };\n        let will_fail_response = chat_response! {\n            \"Finished execution\";\n            tool_calls = [\"retry_tool\"]\n        };\n        mock_llm.expect_complete(chat_request.clone(), Ok(will_fail_response));\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .tool_retry_limit(1) // With a retry limit of 1, the second failure stops the agent.\n            .build()\n            .unwrap();\n\n        // Run the agent\n        let result = agent.query(prompt).await;\n\n        assert!(result.is_err());\n        assert!(result.unwrap_err().to_string().contains(\"missing `query`\"));\n        assert!(agent.is_stopped());\n    }\n\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_streaming() {\n        let prompt = \"Generate content\"; // Example prompt\n        let mock_llm = MockChatCompletion::new();\n        let on_stream_fn = MockHook::new(\"on_stream\").expect_calls(3).to_owned();\n\n        let chat_request = chat_request! {\n            user!(prompt);\n\n            tools = []\n        };\n\n        let response = chat_response! 
{\n            \"one two three\";\n            tool_calls = [\"stop\"]\n        };\n\n        // Set expectations for the mock LLM responses\n        mock_llm.expect_complete(chat_request, Ok(response));\n\n        let mut agent = Agent::builder()\n            .llm(&mock_llm)\n            .on_stream(on_stream_fn.on_stream_fn())\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        // Run the agent\n        agent.query(prompt).await.unwrap();\n\n        tracing::debug!(\"Agent finished running\");\n\n        // Assert that the agent is stopped after reaching the loop limit\n        assert!(agent.is_stopped());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_recovering_agent_existing_history() {\n        // First, let's run an agent\n        let prompt = \"Write a poem\";\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::new(\"mock_tool\");\n\n        let chat_request = chat_request! {\n            user!(\"Write a poem\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let mock_tool_response = chat_response! {\n            \"Roses are red\";\n            tool_calls = [\"mock_tool\"]\n\n        };\n\n        mock_llm.expect_complete(chat_request.clone(), Ok(mock_tool_response));\n\n        let chat_request = chat_request! {\n            user!(\"Write a poem\"),\n            assistant!(\"Roses are red\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"Great!\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let stop_response = chat_response! 
{\n            \"Roses are red\";\n            tool_calls = [\"stop\"]\n        };\n\n        mock_llm.expect_complete(chat_request, Ok(stop_response));\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n\n        let mut agent = Agent::builder()\n            .tools([mock_tool.clone()])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        agent.query(prompt).await.unwrap();\n\n        // Let's retrieve the history of the agent\n        let history = agent.history().await.unwrap();\n\n        // Store it as a string somewhere\n        let serialized = serde_json::to_string(&history).unwrap();\n\n        // Retrieve it\n        let history: Vec<ChatMessage> =\n            serde_json::from_str::<Vec<swiftide_core::chat_completion::ChatMessage>>(&serialized)\n                .unwrap()\n                .into_iter()\n                .map(|message| message.to_owned())\n                .collect();\n\n        // Build a context from the history\n        let context = DefaultContext::default()\n            .with_existing_messages(history)\n            .await\n            .unwrap()\n            .to_owned();\n\n        let stop_output = ToolOutput::stop();\n        let expected_chat_request = chat_request! {\n            user!(\"Write a poem\"),\n            assistant!(\"Roses are red\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"Great!\"),\n            assistant!(\"Roses are red\", [\"stop\"]),\n            tool_output!(\"stop\", stop_output),\n            user!(\"Try again!\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let stop_response = chat_response! 
{\n            \"Really stopping now\";\n            tool_calls = [\"stop\"]\n        };\n\n        mock_llm.expect_complete(expected_chat_request, Ok(stop_response));\n\n        let mut agent = Agent::builder()\n            .context(context)\n            .tools([mock_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        agent.query_once(\"Try again!\").await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_with_approval_required_tool() {\n        use super::*;\n        use crate::tools::control::ApprovalRequired;\n        use crate::{assistant, chat_request, chat_response, user};\n        use swiftide_core::chat_completion::ToolCall;\n\n        // Step 1: Build a tool that needs approval.\n        let mock_tool = MockTool::default();\n        mock_tool.expect_invoke_ok(\"Great!\".into(), None);\n\n        let approval_tool = ApprovalRequired(mock_tool.boxed());\n\n        // Step 2: Set up the mock LLM.\n        let mock_llm = MockChatCompletion::new();\n\n        let chat_req1 = chat_request! {\n            user!(\"Request with approval\");\n            tools = [approval_tool.clone()]\n        };\n        let chat_resp1 = chat_response! {\n            \"Completion message\";\n            tool_calls = [\"mock_tool\"]\n        };\n        mock_llm.expect_complete(chat_req1.clone(), Ok(chat_resp1));\n\n        // The response will include the previous request, but no tool output\n        // from the required tool\n        let chat_req2 = chat_request! {\n            user!(\"Request with approval\"),\n            assistant!(\"Completion message\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"Great!\");\n            // Simulate feedback required output\n            tools = [approval_tool.clone()]\n        };\n        let chat_resp2 = chat_response! 
{\n            \"Post-feedback message\";\n            tool_calls = [\"stop\"]\n        };\n        mock_llm.expect_complete(chat_req2.clone(), Ok(chat_resp2));\n\n        // Step 3: Wire up the agent.\n        let mut agent = Agent::builder()\n            .tools([approval_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        // Step 4: Run agent to trigger approval.\n        agent.query_once(\"Request with approval\").await.unwrap();\n\n        assert!(matches!(\n            agent.state,\n            crate::state::State::Stopped(crate::state::StopReason::FeedbackRequired { .. })\n        ));\n\n        let State::Stopped(StopReason::FeedbackRequired { tool_call, .. }) = agent.state.clone()\n        else {\n            panic!(\"Expected feedback required\");\n        };\n\n        // Step 5: Simulate feedback, run again and assert finish.\n        agent\n            .context\n            .feedback_received(&tool_call, &ToolFeedback::approved())\n            .await\n            .unwrap();\n\n        tracing::debug!(\"running after approval\");\n        agent.run_once().await.unwrap();\n        assert!(agent.is_stopped());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_agent_with_approval_required_tool_denied() {\n        use super::*;\n        use crate::tools::control::ApprovalRequired;\n        use crate::{assistant, chat_request, chat_response, user};\n        use swiftide_core::chat_completion::ToolCall;\n\n        // Step 1: Build a tool that needs approval.\n        let mock_tool = MockTool::default();\n\n        let approval_tool = ApprovalRequired(mock_tool.boxed());\n\n        // Step 2: Set up the mock LLM.\n        let mock_llm = MockChatCompletion::new();\n\n        let chat_req1 = chat_request! {\n            user!(\"Request with approval\");\n            tools = [approval_tool.clone()]\n        };\n        let chat_resp1 = chat_response! 
{\n            \"Completion message\";\n            tool_calls = [\"mock_tool\"]\n        };\n        mock_llm.expect_complete(chat_req1.clone(), Ok(chat_resp1));\n\n        // The response will include the previous request, but no tool output\n        // from the required tool\n        let chat_req2 = chat_request! {\n            user!(\"Request with approval\"),\n            assistant!(\"Completion message\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"This tool call was refused\");\n            // Simulate feedback required output\n            tools = [approval_tool.clone()]\n        };\n        let chat_resp2 = chat_response! {\n            \"Post-feedback message\";\n            tool_calls = [\"stop\"]\n        };\n        mock_llm.expect_complete(chat_req2.clone(), Ok(chat_resp2));\n\n        // Step 3: Wire up the agent.\n        let mut agent = Agent::builder()\n            .tools([approval_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        // Step 4: Run agent to trigger approval.\n        agent.query_once(\"Request with approval\").await.unwrap();\n\n        assert!(matches!(\n            agent.state,\n            crate::state::State::Stopped(crate::state::StopReason::FeedbackRequired { .. })\n        ));\n\n        let State::Stopped(StopReason::FeedbackRequired { tool_call, .. 
}) = agent.state.clone()\n        else {\n            panic!(\"Expected feedback required\");\n        };\n\n        // Step 5: Simulate feedback, run again and assert finish.\n        agent\n            .context\n            .feedback_received(&tool_call, &ToolFeedback::refused())\n            .await\n            .unwrap();\n\n        tracing::debug!(\"running after approval\");\n        agent.run_once().await.unwrap();\n\n        let history = agent.context().history().await.unwrap();\n        history\n            .iter()\n            .rfind(|m| {\n                let ChatMessage::ToolOutput(.., ToolOutput::Text(msg)) = m else {\n                    return false;\n                };\n                msg.contains(\"refused\")\n            })\n            .expect(\"Could not find refusal message\");\n\n        assert!(agent.is_stopped());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_defers_user_message_until_pending_tool_calls_complete() {\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::default();\n        mock_tool.expect_invoke_ok(\"Tool done\".into(), None);\n\n        let context = DefaultContext::default()\n            .with_existing_messages(vec![user!(\"Hello\"), assistant!(\"Need tool\", [\"mock_tool\"])])\n            .await\n            .unwrap()\n            .to_owned();\n\n        let expected_request = chat_request! {\n            user!(\"Hello\"),\n            assistant!(\"Need tool\", [\"mock_tool\"]),\n            tool_output!(\"mock_tool\", \"Tool done\"),\n            user!(\"Next\");\n\n            tools = [mock_tool.clone()]\n        };\n\n        let response = chat_response! 
{\n            \"All set\";\n            tool_calls = [\"stop\"]\n        };\n        mock_llm.expect_complete(expected_request, Ok(response));\n\n        let mut agent = Agent::builder()\n            .context(context)\n            .tools([mock_tool])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        agent.query_once(\"Next\").await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_removing_default_stop_tool() {\n        let mock_llm = MockChatCompletion::new();\n        let mock_tool = MockTool::new(\"mock_tool\");\n\n        // Build agent with without_default_stop_tool\n        let agent = Agent::builder()\n            .without_default_stop_tool()\n            .tools([mock_tool.clone()])\n            .llm(&mock_llm)\n            .no_system_prompt()\n            .build()\n            .unwrap();\n\n        // Check that \"stop\" tool is NOT included\n        assert!(agent.find_tool_by_name(\"stop\").is_none());\n        // Check that our provided tool is still present\n        assert!(agent.find_tool_by_name(\"mock_tool\").is_some());\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/default_context.rs",
    "content": "//! Manages agent history and provides an interface for the external world\n//!\n//! This is the default for agents. It is fully async and shareable between agents.\n//!\n//! By default uses the `LocalExecutor` for tool execution.\n//!\n//! If chat messages include a `ChatMessage::Summary`, all previous messages are ignored except the\n//! system prompt. This is useful for maintaining focus in long conversations or managing token\n//! limits.\nuse std::{\n    collections::HashMap,\n    sync::{\n        Arc, Mutex,\n        atomic::{AtomicUsize, Ordering},\n    },\n};\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{\n    AgentContext, Command, CommandError, CommandOutput, MessageHistory, ToolExecutor,\n};\nuse swiftide_core::{\n    ToolFeedback,\n    chat_completion::{ChatMessage, ToolCall},\n};\n\nuse crate::tools::local_executor::LocalExecutor;\n\n// TODO: Remove unit as executor and implement a local executor instead\n#[derive(Clone)]\npub struct DefaultContext {\n    /// Responsible for managing the conversation history\n    ///\n    /// By default, this is a `Arc<Mutex<Vec<ChatMessage>>>`.\n    message_history: Arc<dyn MessageHistory>,\n    /// Index in the conversation history where the next completion will start\n    completions_ptr: Arc<AtomicUsize>,\n\n    /// Index in the conversation history where the current completion started\n    /// Allows for retrieving only new messages since the last completion\n    current_completions_ptr: Arc<AtomicUsize>,\n\n    /// The executor used to run tools. I.e. 
local, remote, docker\n    tool_executor: Arc<dyn ToolExecutor>,\n\n    /// Stop if last message is from the assistant\n    stop_on_assistant: bool,\n\n    feedback_received: Arc<Mutex<HashMap<ToolCall, ToolFeedback>>>,\n}\n\nimpl Default for DefaultContext {\n    fn default() -> Self {\n        DefaultContext {\n            message_history: Arc::new(Mutex::new(Vec::new())),\n            completions_ptr: Arc::new(AtomicUsize::new(0)),\n            current_completions_ptr: Arc::new(AtomicUsize::new(0)),\n            tool_executor: Arc::new(LocalExecutor::default()) as Arc<dyn ToolExecutor>,\n            stop_on_assistant: true,\n            feedback_received: Arc::new(Mutex::new(HashMap::new())),\n        }\n    }\n}\n\nimpl std::fmt::Debug for DefaultContext {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"DefaultContext\")\n            .field(\"completion_history\", &self.message_history)\n            .field(\"completions_ptr\", &self.completions_ptr)\n            .field(\"current_completions_ptr\", &self.current_completions_ptr)\n            .field(\"tool_executor\", &\"Arc<dyn ToolExecutor>\")\n            .field(\"stop_on_assistant\", &self.stop_on_assistant)\n            .finish()\n    }\n}\n\nimpl DefaultContext {\n    /// Create a new context with a custom executor\n    pub fn from_executor<T: Into<Arc<dyn ToolExecutor>>>(executor: T) -> DefaultContext {\n        DefaultContext {\n            tool_executor: executor.into(),\n            ..Default::default()\n        }\n    }\n\n    /// If set to true, the agent will stop if the last message is from the assistant (i.e. 
no new\n    /// tool calls, summaries or user messages)\n    pub fn with_stop_on_assistant(&mut self, stop: bool) -> &mut Self {\n        self.stop_on_assistant = stop;\n        self\n    }\n\n    pub fn with_message_history(&mut self, backend: impl MessageHistory + 'static) -> &mut Self {\n        self.message_history = Arc::new(backend) as Arc<dyn MessageHistory>;\n        self\n    }\n\n    /// Build a context from an existing message history\n    ///\n    /// # Errors\n    ///\n    /// Errors if the message history cannot be extended\n    ///\n    /// # Panics\n    ///\n    /// Panics if the inner mutex is poisoned\n    pub async fn with_existing_messages<I: IntoIterator<Item = ChatMessage>>(\n        &mut self,\n        message_history: I,\n    ) -> Result<&mut Self> {\n        self.message_history\n            .overwrite(message_history.into_iter().collect())\n            .await?;\n\n        Ok(self)\n    }\n\n    /// Add existing tool feedback to the context\n    ///\n    /// # Panics\n    ///\n    /// Panics if the inner mutex is poisoned\n    pub fn with_tool_feedback(&mut self, feedback: impl Into<HashMap<ToolCall, ToolFeedback>>) {\n        self.feedback_received\n            .lock()\n            .unwrap()\n            .extend(feedback.into());\n    }\n}\n#[async_trait]\nimpl AgentContext for DefaultContext {\n    /// Retrieve messages for the next completion\n    async fn next_completion(&self) -> Result<Option<Vec<ChatMessage>>> {\n        let history = self.message_history.history().await?;\n\n        let mut current = self.completions_ptr.load(Ordering::SeqCst);\n\n        // handle out of bounds; if current > length, reset current to 0\n        // if length is 0, return None\n        if history.is_empty() {\n            tracing::debug!(\"No messages in history for completion\");\n            return Ok(None);\n        }\n\n        if current > history.len() {\n            tracing::warn!(\n                current,\n                len = 
history.len(),\n                \"Completions index was higher than history length, resetting to 0; this might be a bug\"\n            );\n            self.completions_ptr.store(0, Ordering::SeqCst);\n            self.current_completions_ptr.store(0, Ordering::SeqCst);\n\n            current = 0;\n        }\n\n        if history[current..].is_empty()\n            || (self.stop_on_assistant\n                && matches!(history.last(), Some(ChatMessage::Assistant(..)))\n                && self.feedback_received.lock().unwrap().is_empty())\n        {\n            tracing::debug!(?history, \"No new messages for completion\");\n            Ok(None)\n        } else {\n            let previous = self.completions_ptr.swap(history.len(), Ordering::SeqCst);\n            self.current_completions_ptr\n                .store(previous, Ordering::SeqCst);\n\n            Ok(Some(filter_messages_since_summary(history)))\n        }\n    }\n\n    /// Returns the messages the agent is currently completing on\n    async fn current_new_messages(&self) -> Result<Vec<ChatMessage>> {\n        let current = self.current_completions_ptr.load(Ordering::SeqCst);\n        let end = self.completions_ptr.load(Ordering::SeqCst);\n\n        let history = self.message_history.history().await?;\n\n        Ok(filter_messages_since_summary(\n            history[current..end].to_vec(),\n        ))\n    }\n\n    /// Retrieve all messages in the conversation history\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        self.message_history.history().await\n    }\n\n    /// Add multiple messages to the conversation history\n    async fn add_messages(&self, messages: Vec<ChatMessage>) -> Result<()> {\n        self.message_history.extend_owned(messages).await\n    }\n\n    /// Add a single message to the conversation history\n    async fn add_message(&self, item: ChatMessage) -> Result<()> {\n        self.message_history.push_owned(item).await\n    }\n\n    /// Execute a command in the tool 
executor\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        self.tool_executor.exec_cmd(cmd).await\n    }\n\n    fn executor(&self) -> &Arc<dyn ToolExecutor> {\n        &self.tool_executor\n    }\n\n    /// Pops the last messages up until the previous completion\n    ///\n    /// LLMs failing completion for various reasons is unfortunately a common occurrence.\n    /// This gives a way to redrive the last completion in a generic way\n    async fn redrive(&self) -> Result<()> {\n        let mut history = self.message_history.history().await?;\n        let previous = self.current_completions_ptr.load(Ordering::SeqCst);\n        let redrive_ptr = self.completions_ptr.swap(previous, Ordering::SeqCst);\n\n        // delete everything after the last completion\n        history.truncate(redrive_ptr);\n\n        self.message_history.overwrite(history).await?;\n\n        Ok(())\n    }\n\n    async fn has_received_feedback(&self, tool_call: &ToolCall) -> Option<ToolFeedback> {\n        // If feedback is present for this tool call, remove it and return it;\n        // otherwise return None\n        let mut lock = self.feedback_received.lock().unwrap();\n        lock.remove(tool_call)\n    }\n\n    async fn feedback_received(&self, tool_call: &ToolCall, feedback: &ToolFeedback) -> Result<()> {\n        let mut lock = self.feedback_received.lock().unwrap();\n        // Set the message counter one back so that on a next try, the agent can resume by\n        // trying the tool calls first. 
Only does this if there are no other approvals\n        if lock.is_empty() {\n            let previous = self.current_completions_ptr.load(Ordering::SeqCst);\n            self.completions_ptr.swap(previous, Ordering::SeqCst);\n        }\n        tracing::debug!(?tool_call, context = ?self, \"feedback received\");\n        lock.insert(tool_call.clone(), feedback.clone());\n\n        Ok(())\n    }\n\n    /// Replace the entire conversation history\n    async fn replace_history(&self, items: Vec<ChatMessage>) -> Result<()> {\n        self.message_history.overwrite(items).await?;\n        self.completions_ptr.store(0, Ordering::SeqCst);\n        self.current_completions_ptr.store(0, Ordering::SeqCst);\n        Ok(())\n    }\n}\n\nfn filter_messages_since_summary(messages: Vec<ChatMessage>) -> Vec<ChatMessage> {\n    let mut summary_found = false;\n    let mut messages = messages\n        .into_iter()\n        .rev()\n        .filter(|m| {\n            if summary_found {\n                return matches!(m, ChatMessage::System(_));\n            }\n            if let ChatMessage::Summary(_) = m {\n                summary_found = true;\n            }\n            true\n        })\n        .collect::<Vec<_>>();\n\n    messages.reverse();\n\n    messages\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::{assistant, tool_output, user};\n\n    use super::*;\n    use swiftide_core::chat_completion::{ChatMessage, ToolCall};\n\n    #[tokio::test]\n    async fn test_iteration_tracking() {\n        let mut context = DefaultContext::default();\n\n        // Record initial chat messages\n        context\n            .add_messages(vec![\n                ChatMessage::System(\"You are awesome\".into()),\n                ChatMessage::User(\"Hello\".into()),\n            ])\n            .await\n            .unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 2);\n        
assert!(context.next_completion().await.unwrap().is_none());\n\n        context\n            .add_messages(vec![assistant!(\"Hey?\"), user!(\"How are you?\")])\n            .await\n            .unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 4);\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        // If the last message is from the assistant, we should not get any more completions\n        context\n            .add_messages(vec![assistant!(\"I am fine\")])\n            .await\n            .unwrap();\n\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        context.with_stop_on_assistant(false);\n\n        assert!(context.next_completion().await.unwrap().is_some());\n    }\n\n    #[tokio::test]\n    async fn test_should_complete_after_tool_call() {\n        let context = DefaultContext::default();\n        // Record initial chat messages\n        context\n            .add_messages(vec![\n                ChatMessage::System(\"You are awesome\".into()),\n                ChatMessage::User(\"Hello\".into()),\n            ])\n            .await\n            .unwrap();\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 2);\n        assert_eq!(context.current_new_messages().await.unwrap().len(), 2);\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        context\n            .add_messages(vec![\n                assistant!(\"Hey?\", [\"test\"]),\n                tool_output!(\"test\", \"Hoi\"),\n            ])\n            .await\n            .unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(context.current_new_messages().await.unwrap().len(), 2);\n        assert_eq!(messages.len(), 4);\n\n        assert!(context.next_completion().await.unwrap().is_none());\n    }\n\n    #[tokio::test]\n    async fn 
test_filters_messages_before_summary() {\n        let messages = vec![\n            ChatMessage::System(\"System message\".into()),\n            ChatMessage::User(\"Hello\".into()),\n            ChatMessage::new_assistant(Some(\"Hello there\"), None),\n            ChatMessage::Summary(\"Summary message\".into()),\n            ChatMessage::User(\"This should be ignored\".into()),\n        ];\n        let context = DefaultContext::default();\n        // Record initial chat messages\n        context.add_messages(messages).await.unwrap();\n\n        let new_messages = context.next_completion().await.unwrap().unwrap();\n\n        assert_eq!(new_messages.len(), 3);\n        assert!(matches!(new_messages[0], ChatMessage::System(_)));\n        assert!(matches!(new_messages[1], ChatMessage::Summary(_)));\n        assert!(matches!(new_messages[2], ChatMessage::User(_)));\n\n        let current_new_messages = context.current_new_messages().await.unwrap();\n        assert_eq!(current_new_messages.len(), 3);\n        assert!(matches!(current_new_messages[0], ChatMessage::System(_)));\n        assert!(matches!(current_new_messages[1], ChatMessage::Summary(_)));\n        assert!(matches!(current_new_messages[2], ChatMessage::User(_)));\n\n        assert!(context.next_completion().await.unwrap().is_none());\n    }\n\n    #[tokio::test]\n    async fn test_filters_messages_before_summary_with_assistant_last() {\n        let messages = vec![\n            ChatMessage::System(\"System message\".into()),\n            ChatMessage::User(\"Hello\".into()),\n            ChatMessage::new_assistant(Some(\"Hello there\"), None),\n        ];\n        let mut context = DefaultContext::default();\n        context.with_stop_on_assistant(false);\n        // Record initial chat messages\n        context.add_messages(messages).await.unwrap();\n\n        let new_messages = context.next_completion().await.unwrap().unwrap();\n\n        assert_eq!(new_messages.len(), 3);\n        
assert!(matches!(new_messages[0], ChatMessage::System(_)));\n        assert!(matches!(new_messages[1], ChatMessage::User(_)));\n        assert!(matches!(new_messages[2], ChatMessage::Assistant(..)));\n\n        context\n            .add_message(ChatMessage::Summary(\"Summary message 1\".into()))\n            .await\n            .unwrap();\n\n        let new_messages = context.next_completion().await.unwrap().unwrap();\n        dbg!(&new_messages);\n        assert_eq!(new_messages.len(), 2);\n        assert!(matches!(new_messages[0], ChatMessage::System(_)));\n        assert_eq!(\n            new_messages[1],\n            ChatMessage::Summary(\"Summary message 1\".into())\n        );\n\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        let messages = vec![\n            ChatMessage::User(\"Hello again\".into()),\n            ChatMessage::new_assistant(Some(\"Hello there again\"), None),\n        ];\n\n        context.add_messages(messages).await.unwrap();\n\n        let new_messages = context.next_completion().await.unwrap().unwrap();\n\n        assert!(matches!(new_messages[0], ChatMessage::System(_)));\n        assert_eq!(\n            new_messages[1],\n            ChatMessage::Summary(\"Summary message 1\".into())\n        );\n        assert_eq!(new_messages[2], ChatMessage::User(\"Hello again\".into()));\n        assert_eq!(\n            new_messages[3],\n            ChatMessage::new_assistant(Some(\"Hello there again\".to_string()), None)\n        );\n\n        context\n            .add_message(ChatMessage::Summary(\"Summary message 2\".into()))\n            .await\n            .unwrap();\n\n        let new_messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(new_messages.len(), 2);\n\n        assert!(matches!(new_messages[0], ChatMessage::System(_)));\n        assert_eq!(\n            new_messages[1],\n            ChatMessage::Summary(\"Summary message 2\".into())\n        );\n    }\n\n    
#[tokio::test]\n    async fn test_redrive() {\n        let context = DefaultContext::default();\n\n        // Record initial chat messages\n        context\n            .add_messages(vec![\n                ChatMessage::System(\"System message\".into()),\n                ChatMessage::User(\"Hello\".into()),\n            ])\n            .await\n            .unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 2);\n        assert!(context.next_completion().await.unwrap().is_none());\n        context.redrive().await.unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 2);\n\n        context\n            .add_messages(vec![ChatMessage::User(\"Hey?\".into())])\n            .await\n            .unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 3);\n        assert!(context.next_completion().await.unwrap().is_none());\n        context.redrive().await.unwrap();\n\n        // Add more messages\n        context\n            .add_messages(vec![ChatMessage::User(\"How are you?\".into())])\n            .await\n            .unwrap();\n\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 4);\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        // Redrive should remove the last set of messages\n        dbg!(&context);\n        context.redrive().await.unwrap();\n        dbg!(&context);\n\n        // We just redrove with the same messages\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 4);\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        // Add more messages\n        context\n            .add_messages(vec![\n                ChatMessage::User(\"How are you really?\".into()),\n                
ChatMessage::User(\"How are you really?\".into()),\n            ])\n            .await\n            .unwrap();\n\n        // This should remove any additional messages\n        context.redrive().await.unwrap();\n\n        // We just redrove with the same messages\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 4);\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        // Redrive again\n        context.redrive().await.unwrap();\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 4);\n        assert!(context.next_completion().await.unwrap().is_none());\n    }\n\n    #[tokio::test]\n    async fn test_next_completion_empty_history() {\n        let context = DefaultContext::default();\n        let next = context.next_completion().await;\n        assert!(next.unwrap().is_none());\n    }\n\n    #[tokio::test]\n    async fn test_next_completion_out_of_bounds_ptr() {\n        let context = DefaultContext::default();\n        context\n            .add_messages(vec![\n                ChatMessage::System(\"System\".into()),\n                ChatMessage::User(\"Hi\".into()),\n            ])\n            .await\n            .unwrap();\n\n        // Set completions_ptr beyond the length of messages\n        context\n            .completions_ptr\n            .store(10, std::sync::atomic::Ordering::SeqCst);\n\n        // Should reset the pointer and return the full messages\n        let messages = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(messages.len(), 2);\n\n        // Second call should be empty again\n        assert!(context.next_completion().await.unwrap().is_none());\n    }\n\n    #[tokio::test]\n    async fn test_replace_history_replaces_and_resets_pointers() {\n        let mut context = DefaultContext::default();\n        context.with_stop_on_assistant(false);\n\n        // Add some initial 
messages\n        context\n            .add_messages(vec![\n                ChatMessage::System(\"Initial\".into()),\n                ChatMessage::User(\"Hello\".into()),\n                ChatMessage::new_assistant(Some(\"Hi.\"), None),\n            ])\n            .await\n            .unwrap();\n\n        // Consume the messages so pointers are moved\n        let orig = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(orig.len(), 3);\n        assert!(context.next_completion().await.unwrap().is_none());\n\n        // Replace history with a new set\n        let new_msgs = vec![\n            ChatMessage::System(\"System2\".into()),\n            ChatMessage::User(\"User2\".into()),\n        ];\n        context.replace_history(new_msgs.clone()).await.unwrap();\n\n        // After replacement, next_completion should return only the new messages\n        let replaced = context.next_completion().await.unwrap().unwrap();\n        assert_eq!(replaced, new_msgs);\n\n        // Next call should yield None again\n        assert!(context.next_completion().await.unwrap().is_none());\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/errors.rs",
    "content": "use swiftide_core::chat_completion::{\n    ChatCompletionRequestBuilderError,\n    errors::{LanguageModelError, ToolError},\n};\nuse thiserror::Error;\nuse tokio::task::JoinError;\n\n#[derive(Error, Debug)]\npub enum AgentError {\n    #[error(\"Agent is already running\")]\n    AlreadyRunning,\n\n    #[error(\"Failed to render system prompt {0:#}\")]\n    FailedToRenderSystemPrompt(anyhow::Error),\n\n    #[error(\"Failed to build chat completion request {0:#}\")]\n    FailedToBuildRequest(ChatCompletionRequestBuilderError),\n\n    #[error(\"Error from LLM when running completions {0:#}\")]\n    CompletionsFailed(LanguageModelError),\n\n    #[error(transparent)]\n    ToolError(#[from] ToolError),\n\n    #[error(\"Failed waiting for tool to finish {0:?}\")]\n    ToolFailedToJoin(String, JoinError),\n\n    #[error(\"Failed to load tools from toolbox {0:#}\")]\n    ToolBoxFailedToLoad(anyhow::Error),\n\n    #[error(\"Chat completion stream was empty\")]\n    EmptyStream,\n\n    #[error(\"Failed to render prompt {0:#}\")]\n    FailedToRenderPrompt(anyhow::Error),\n\n    #[error(\"Error with message history {0:#}\")]\n    MessageHistoryError(anyhow::Error),\n\n    #[error(\"Unfulfilled tool calls remain after invocation\")]\n    UnfulfilledToolCalls,\n}\n"
  },
  {
    "path": "swiftide-agents/src/hooks.rs",
    "content": "//! Hooks are functions that are called at specific points in the agent lifecycle.\n//!\n//!\n//! Since rust does not have async closures, hooks have to return a boxed, pinned async block\n//! themselves.\n//!\n//! # Example\n//!\n//! ```no_run\n//! # use swiftide_core::{AgentContext, chat_completion::ChatMessage};\n//! # use swiftide_agents::Agent;\n//! # fn test() {\n//! # let mut agent = swiftide_agents::Agent::builder();\n//! agent.before_all(move |agent: &Agent| {\n//!     Box::pin(async move {\n//!         agent.context().add_message(ChatMessage::new_user(\"Hello, world\")).await;\n//!         Ok(())\n//!     })\n//! });\n//! # }\n//! ```\n//! Rust has a long outstanding issue where it captures outer lifetimes when returning an impl\n//! that also has lifetimes, see [this issue](https://github.com/rust-lang/rust/issues/42940)\n//!\n//! This can happen if you write a method like `fn return_hook(&self) -> impl HookFn`, where the\n//! owner also has a lifetime.\n//! The trick is to set an explicit lifetime on self, and hook, where self must outlive the hook.\n//!\n//! # Example\n//!\n//! ```no_run\n//! # use swiftide_core::{AgentContext};\n//! # use swiftide_agents::hooks::BeforeAllFn;\n//! # use swiftide_agents::Agent;\n//! struct SomeHook<'thing> {\n//!    thing: &'thing str\n//! }\n//!\n//! impl<'thing> SomeHook<'thing> {\n//!    fn return_hook<'tool>(&'thing self) -> impl BeforeAllFn + 'tool where 'thing: 'tool {\n//!     move |_: &Agent| {\n//!      Box::pin(async move {{ Ok(())}})\n//!     }\n//!   }\n//! 
}\n//! ```\nuse anyhow::Result;\nuse std::{future::Future, pin::Pin};\n\nuse dyn_clone::DynClone;\nuse swiftide_core::chat_completion::{\n    ChatCompletionRequest, ChatCompletionResponse, ChatMessage, ToolCall, ToolOutput,\n    errors::ToolError,\n};\n\nuse crate::{Agent, errors::AgentError, state::StopReason};\n\npub trait BeforeAllFn:\n    for<'a> Fn(&'a Agent) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(BeforeAllFn);\n\npub trait AfterEachFn:\n    for<'a> Fn(&'a Agent) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(AfterEachFn);\n\npub trait BeforeCompletionFn:\n    for<'a> Fn(\n        &'a Agent,\n        &mut ChatCompletionRequest<'_>,\n    ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(BeforeCompletionFn);\n\npub trait AfterCompletionFn:\n    for<'a> Fn(\n        &'a Agent,\n        &mut ChatCompletionResponse,\n    ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(AfterCompletionFn);\n\n/// Hooks that are called after each tool\npub trait AfterToolFn:\n    for<'tool> Fn(\n        &'tool Agent,\n        &ToolCall,\n        &'tool mut Result<ToolOutput, ToolError>,\n    ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'tool>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(AfterToolFn);\n\n/// Hooks that are called before each tool\npub trait BeforeToolFn:\n    for<'a> Fn(&'a Agent, &ToolCall) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(BeforeToolFn);\n\n/// Hooks that are called when a new message is added to the `AgentContext`\npub trait MessageHookFn:\n    for<'a> Fn(&'a Agent, 
&mut ChatMessage) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(MessageHookFn);\n\n/// Hooks that are called when the agent starts, either from pending or stopped\npub trait OnStartFn:\n    for<'a> Fn(&'a Agent) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(OnStartFn);\n\n/// Hooks that are called when the agent stops\npub trait OnStopFn:\n    for<'a> Fn(\n        &'a Agent,\n        StopReason,\n        Option<&AgentError>,\n    ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(OnStopFn);\n\npub trait OnStreamFn:\n    for<'a> Fn(\n        &'a Agent,\n        &ChatCompletionResponse,\n    ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n    + Send\n    + Sync\n    + DynClone\n{\n}\n\ndyn_clone::clone_trait_object!(OnStreamFn);\n\n/// Wrapper around the different types of hooks\n#[derive(Clone, strum_macros::EnumDiscriminants, strum_macros::Display)]\n#[strum_discriminants(name(HookTypes), derive(strum_macros::Display))]\npub enum Hook {\n    /// Runs only once for the agent when it starts\n    BeforeAll(Box<dyn BeforeAllFn>),\n    /// Runs before every completion, yielding a mutable reference to the completion request\n    BeforeCompletion(Box<dyn BeforeCompletionFn>),\n    /// Runs after every completion, yielding a mutable reference to the completion response\n    AfterCompletion(Box<dyn AfterCompletionFn>),\n    /// Runs before every tool call, yielding a reference to the tool call\n    BeforeTool(Box<dyn BeforeToolFn>),\n    /// Runs after every tool call, yielding a reference to the tool call and a mutable result\n    AfterTool(Box<dyn AfterToolFn>),\n    /// Runs after all tools have completed and a single completion has been made\n    AfterEach(Box<dyn AfterEachFn>),\n    /// Runs 
when a new message is added to the `AgentContext`, yielding a mutable reference to the\n    /// message. This is only triggered when the message is added by the agent.\n    OnNewMessage(Box<dyn MessageHookFn>),\n    /// Runs when the agent starts, either from pending or stopped\n    OnStart(Box<dyn OnStartFn>),\n    /// Runs when the agent stops\n    OnStop(Box<dyn OnStopFn>),\n    /// Runs when the agent streams a response\n    OnStream(Box<dyn OnStreamFn>),\n}\n\nimpl<F> BeforeAllFn for F where\n    F: for<'a> Fn(&'a Agent) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> AfterEachFn for F where\n    F: for<'a> Fn(&'a Agent) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> BeforeCompletionFn for F where\n    F: for<'a> Fn(\n            &'a Agent,\n            &mut ChatCompletionRequest<'_>,\n        ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> AfterCompletionFn for F where\n    F: for<'a> Fn(\n            &'a Agent,\n            &mut ChatCompletionResponse,\n        ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> BeforeToolFn for F where\n    F: for<'a> Fn(&'a Agent, &ToolCall) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\nimpl<F> AfterToolFn for F where\n    F: for<'tool> Fn(\n            &'tool Agent,\n            &ToolCall,\n            &'tool mut Result<ToolOutput, ToolError>,\n        ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'tool>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> MessageHookFn for F where\n    F: for<'a> Fn(\n            &'a Agent,\n            &mut ChatMessage,\n        ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 
'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> OnStartFn for F where\n    F: for<'a> Fn(&'a Agent) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> OnStopFn for F where\n    F: for<'a> Fn(\n            &'a Agent,\n            StopReason,\n            Option<&AgentError>,\n        ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\nimpl<F> OnStreamFn for F where\n    F: for<'a> Fn(\n            &'a Agent,\n            &ChatCompletionResponse,\n        ) -> Pin<Box<dyn Future<Output = Result<()>> + Send + 'a>>\n        + Send\n        + Sync\n        + DynClone\n{\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::Agent;\n\n    #[test]\n    fn test_hooks_compile_sync_and_async() {\n        Agent::builder()\n            .before_all(|_| Box::pin(async { Ok(()) }))\n            .on_start(|_| Box::pin(async { Ok(()) }))\n            .before_completion(|_, _| Box::pin(async { Ok(()) }))\n            .before_tool(|_, _| Box::pin(async { Ok(()) }))\n            .after_tool(|_, _, _| Box::pin(async { Ok(()) }))\n            .after_completion(|_, _| Box::pin(async { Ok(()) }));\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n\n//! Swiftide agents are a flexible way to build fast and reliable AI agents.\n//!\n//! # Features\n//!\n//! * **Tools**: Tools can be defined as functions using the `#[tool]` attribute macro, the `Tool`\n//!   derive macro, or manually implementing the `Tool` trait.\n//! * **Hooks**: At various stages of the agent lifecycle, hooks can be defined to run custom logic.\n//!   These are defined when building the agent, and each take a closure.\n//! * **Context**: Agents operate in an `AgentContext`, which is a shared state between tools and\n//!   hooks. The context is responsible for managing the completions and interacting with the\n//!   outside world.\n//! * **Tool Execution**: A context takes a tool executor (local by default) to execute its tools\n//!   on. This enables tools to be run i.e. in containers, remote, etc.\n//! * **System prompt defaults**: `SystemPrompt` provides a default, customizable prompt for the\n//!   agent. If you want to provider your own prompt, the builder takes anything that converts into\n//!   a `Prompt`, including strings.\n//! * **Open Telemetry**: Agents are fully instrumented with open telemetry.\n//!\n//! # Example\n//!\n//! ```ignore\n//! # use swiftide_agents::Agent;\n//! # use swiftide_integrations as integrations;\n//! # async fn run() -> Result<(), Box<dyn std::error::Error>> {\n//! let openai = integrations::openai::OpenAI::builder()\n//!     .default_prompt_model(\"gpt-4o-mini\")\n//!     .build()?;\n//!\n//! Agent::builder()\n//!     .llm(&openai)\n//!     .before_completion(move |_,_|\n//!         Box::pin(async move {\n//!                 println!(\"Before each tool\");\n//!                 
Ok(())\n//!             })\n//!     )\n//!     .build()?\n//!     .query(\"What is the meaning of life?\")\n//!     .await?;\n//! # return Ok(());\n//!\n//! # }\n//! ```\n//!\n//! Agents run in a loop as long as they have new messages to process.\nmod agent;\nmod default_context;\npub mod errors;\npub mod hooks;\nmod state;\npub mod system_prompt;\npub mod tasks;\npub mod tools;\nmod util;\n\npub use agent::{Agent, AgentBuilder, AgentBuilderError};\npub use default_context::DefaultContext;\npub use state::{State, StopReason};\n\n#[cfg(any(test, debug_assertions))]\npub mod test_utils;\n"
  },
  {
    "path": "swiftide-agents/src/snapshots/swiftide_agents__system_prompt__tests__customization.snap",
    "content": "---\nsource: swiftide-agents/src/system_prompt.rs\nexpression: rendered\n---\n# Your role\n\nspecial role\n# Guidelines you need to follow\n\n- Try to understand how to complete the task well before completing it.\n- special guideline\n\n\n# Constraints that must be adhered to\n\n- Think step by step\n- Think before you act; respond with your thoughts before calling tools\n- Do not make up any assumptions, use tools to get the information you need\n- Use the provided tools to interact with the system and accomplish the task\n- If you are stuck, or otherwise cannot complete the task, respond with your thoughts and call `stop`.\n- If the task is completed, or otherwise cannot continue, like requiring user feedback, call `stop`.\n- special constraint\n\n\n# Response Format\n\n- Always respond with your thoughts and reasoning for your actions in one or two sentences. Even when calling tools.\n- Once the goal is achieved, call the `stop` tool\n\nsome additional info\n"
  },
  {
    "path": "swiftide-agents/src/snapshots/swiftide_agents__system_prompt__tests__to_prompt.snap",
    "content": "---\nsource: swiftide-agents/src/system_prompt.rs\nexpression: rendered\n---\n# Your role\n\nspecial role\n# Guidelines you need to follow\n\n- Try to understand how to complete the task well before completing it.\n- special guideline\n\n\n# Constraints that must be adhered to\n\n- Think step by step\n- Think before you act; respond with your thoughts before calling tools\n- Do not make up any assumptions, use tools to get the information you need\n- Use the provided tools to interact with the system and accomplish the task\n- If you are stuck, or otherwise cannot complete the task, respond with your thoughts and call `stop`.\n- If the task is completed, or otherwise cannot continue, like requiring user feedback, call `stop`.\n- special constraint\n\n\n# Response Format\n\n- Always respond with your thoughts and reasoning for your actions in one or two sentences. Even when calling tools.\n- Once the goal is achieved, call the `stop` tool\n\nsome additional info\n"
  },
  {
    "path": "swiftide-agents/src/state.rs",
    "content": "//! Internal state of the agent\n\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\nuse swiftide_core::chat_completion::ToolCall;\n\n#[derive(Clone, Debug, Default, strum_macros::EnumDiscriminants, strum_macros::EnumIs)]\npub enum State {\n    #[default]\n    Pending,\n    Running,\n    Stopped(StopReason),\n}\n\nimpl State {\n    pub fn stop_reason(&self) -> Option<&StopReason> {\n        match self {\n            State::Stopped(reason) => Some(reason),\n            _ => None,\n        }\n    }\n}\n\n/// The reason the agent stopped\n///\n/// `StopReason::Other` has some convenience methods to convert from any `AsRef<str>`\n#[non_exhaustive]\n#[derive(Clone, Debug, strum_macros::EnumIs, PartialEq, Serialize, Deserialize)]\npub enum StopReason {\n    /// A tool called stop\n    RequestedByTool(ToolCall, Option<Value>),\n\n    /// Agent failed to complete with optional message\n    AgentFailed(Option<Value>),\n\n    /// A tool repeatedly failed\n    ToolCallsOverLimit(ToolCall),\n\n    /// A tool requires feedback before it will continue\n    FeedbackRequired {\n        tool_call: ToolCall,\n        payload: Option<serde_json::Value>,\n    },\n    /// There was an error\n    Error,\n\n    /// No new messages; stopping completions\n    NoNewMessages,\n\n    Other(String),\n}\n\nimpl StopReason {\n    pub fn as_requested_by_tool(&self) -> Option<(&ToolCall, Option<&Value>)> {\n        if let StopReason::RequestedByTool(t, message) = self {\n            Some((t, message.as_ref()))\n        } else {\n            None\n        }\n    }\n\n    pub fn as_tool_calls_over_limit(&self) -> Option<&ToolCall> {\n        if let StopReason::ToolCallsOverLimit(t) = self {\n            Some(t)\n        } else {\n            None\n        }\n    }\n\n    pub fn as_feedback_required(&self) -> Option<(&ToolCall, Option<&serde_json::Value>)> {\n        if let StopReason::FeedbackRequired { tool_call, payload } = self {\n            Some((tool_call, 
payload.as_ref()))\n        } else {\n            None\n        }\n    }\n\n    pub fn as_error(&self) -> Option<()> {\n        if matches!(self, StopReason::Error) {\n            Some(())\n        } else {\n            None\n        }\n    }\n\n    pub fn as_no_new_messages(&self) -> Option<()> {\n        if matches!(self, StopReason::NoNewMessages) {\n            Some(())\n        } else {\n            None\n        }\n    }\n\n    pub fn as_other(&self) -> Option<&str> {\n        if let StopReason::Other(s) = self {\n            Some(s)\n        } else {\n            None\n        }\n    }\n}\nimpl Default for StopReason {\n    fn default() -> Self {\n        StopReason::Other(\"No reason provided\".into())\n    }\n}\n\nimpl<S: AsRef<str>> From<S> for StopReason {\n    fn from(value: S) -> Self {\n        StopReason::Other(value.as_ref().to_string())\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/system_prompt.rs",
    "content": "//! The system prompt is the initial role and constraint defining message the LLM will receive for\n//! completion.\n//!\n//! By default, the system prompt is setup as a general-purpose chain-of-thought reasoning prompt\n//! with the role, guidelines, and constraints left empty for customization.\n//!\n//! You can override the the template entirely by providing your own `Prompt`. Optionally, you can\n//! still use the builder values by referencing them in your template.\n//!\n//! The builder provides an accessible way to build a system prompt.\n//!\n//! The agent will convert the system prompt into a prompt, adding it to the messages list the\n//! first time it is called.\n//!\n//! For customization, either the builder can be used to profit from defaults, or an override can\n//! be provided on the agent level.\n\nuse derive_builder::Builder;\nuse swiftide_core::prompt::Prompt;\n\n#[derive(Clone, Debug, Builder)]\n#[builder(setter(into, strip_option))]\npub struct SystemPrompt {\n    /// The role the agent is expected to fulfil.\n    #[builder(default)]\n    role: Option<String>,\n\n    /// Additional guidelines for the agent to follow\n    #[builder(default, setter(custom))]\n    guidelines: Vec<String>,\n    /// Additional constraints\n    #[builder(default, setter(custom))]\n    constraints: Vec<String>,\n\n    /// Optional additional raw markdown to append to the prompt\n    ///\n    /// For instance, if you would like to support an AGENTS.md file, add it here.\n    #[builder(default)]\n    additional: Option<String>,\n\n    /// The template to use for the system prompt\n    #[builder(default = default_prompt_template())]\n    template: Prompt,\n}\n\nimpl SystemPrompt {\n    pub fn builder() -> SystemPromptBuilder {\n        SystemPromptBuilder::default()\n    }\n\n    pub fn to_prompt(&self) -> Prompt {\n        self.clone().into()\n    }\n\n    /// Adds a guideline to the guidelines list.\n    pub fn with_added_guideline(&mut self, guideline: 
impl AsRef<str>) -> &mut Self {\n        self.guidelines.push(guideline.as_ref().to_string());\n        self\n    }\n\n    /// Adds a constraint to the constraints list.\n    pub fn with_added_constraint(&mut self, constraint: impl AsRef<str>) -> &mut Self {\n        self.constraints.push(constraint.as_ref().to_string());\n        self\n    }\n\n    /// Overwrites all guidelines.\n    pub fn with_guidelines<T: IntoIterator<Item = S>, S: AsRef<str>>(\n        &mut self,\n        guidelines: T,\n    ) -> &mut Self {\n        self.guidelines = guidelines\n            .into_iter()\n            .map(|s| s.as_ref().to_string())\n            .collect();\n        self\n    }\n\n    /// Overwrites all constraints.\n    pub fn with_constraints<T: IntoIterator<Item = S>, S: AsRef<str>>(\n        &mut self,\n        constraints: T,\n    ) -> &mut Self {\n        self.constraints = constraints\n            .into_iter()\n            .map(|s| s.as_ref().to_string())\n            .collect();\n        self\n    }\n\n    /// Changes the role.\n    pub fn with_role(&mut self, role: impl Into<String>) -> &mut Self {\n        self.role = Some(role.into());\n        self\n    }\n\n    /// Sets the additional markdown field.\n    pub fn with_additional(&mut self, additional: impl Into<String>) -> &mut Self {\n        self.additional = Some(additional.into());\n        self\n    }\n\n    /// Sets the template.\n    pub fn with_template(&mut self, template: impl Into<Prompt>) -> &mut Self {\n        self.template = template.into();\n        self\n    }\n}\n\nimpl From<String> for SystemPrompt {\n    fn from(text: String) -> Self {\n        SystemPrompt {\n            role: None,\n            guidelines: Vec::new(),\n            constraints: Vec::new(),\n            additional: None,\n            template: text.into(),\n        }\n    }\n}\n\nimpl From<&'static str> for SystemPrompt {\n    fn from(text: &'static str) -> Self {\n        SystemPrompt {\n            role: None,\n            
guidelines: Vec::new(),\n            constraints: Vec::new(),\n            additional: None,\n            template: text.into(),\n        }\n    }\n}\n\nimpl From<SystemPrompt> for SystemPromptBuilder {\n    fn from(val: SystemPrompt) -> Self {\n        SystemPromptBuilder {\n            role: Some(val.role),\n            guidelines: Some(val.guidelines),\n            constraints: Some(val.constraints),\n            additional: Some(val.additional),\n            template: Some(val.template),\n        }\n    }\n}\n\nimpl From<Prompt> for SystemPrompt {\n    fn from(prompt: Prompt) -> Self {\n        SystemPrompt {\n            role: None,\n            guidelines: Vec::new(),\n            constraints: Vec::new(),\n            additional: None,\n            template: prompt,\n        }\n    }\n}\n\nimpl Default for SystemPrompt {\n    fn default() -> Self {\n        SystemPrompt {\n            role: None,\n            guidelines: Vec::new(),\n            constraints: Vec::new(),\n            additional: None,\n            template: default_prompt_template(),\n        }\n    }\n}\n\nimpl SystemPromptBuilder {\n    pub fn add_guideline(&mut self, guideline: &str) -> &mut Self {\n        self.guidelines\n            .get_or_insert_with(Vec::new)\n            .push(guideline.to_string());\n        self\n    }\n\n    pub fn add_constraint(&mut self, constraint: &str) -> &mut Self {\n        self.constraints\n            .get_or_insert_with(Vec::new)\n            .push(constraint.to_string());\n        self\n    }\n\n    pub fn guidelines<T: IntoIterator<Item = S>, S: AsRef<str>>(\n        &mut self,\n        guidelines: T,\n    ) -> &mut Self {\n        self.guidelines = Some(\n            guidelines\n                .into_iter()\n                .map(|s| s.as_ref().to_string())\n                .collect(),\n        );\n        self\n    }\n\n    pub fn constraints<T: IntoIterator<Item = S>, S: AsRef<str>>(\n        &mut self,\n        constraints: T,\n    ) -> &mut Self 
{\n        self.constraints = Some(\n            constraints\n                .into_iter()\n                .map(|s| s.as_ref().to_string())\n                .collect(),\n        );\n        self\n    }\n}\n\nfn default_prompt_template() -> Prompt {\n    include_str!(\"system_prompt_template.md\").into()\n}\n\n#[allow(clippy::from_over_into)]\nimpl Into<Prompt> for SystemPrompt {\n    fn into(self) -> Prompt {\n        let SystemPrompt {\n            role,\n            guidelines,\n            constraints,\n            template,\n            additional,\n        } = self;\n\n        template\n            .with_context_value(\"role\", role)\n            .with_context_value(\"guidelines\", guidelines)\n            .with_context_value(\"constraints\", constraints)\n            .with_context_value(\"additional\", additional)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[tokio::test]\n    async fn test_customization() {\n        let prompt = SystemPrompt::builder()\n            .role(\"special role\")\n            .guidelines([\"special guideline\"])\n            .constraints(vec![\"special constraint\".to_string()])\n            .additional(\"some additional info\")\n            .build()\n            .unwrap();\n\n        let prompt: Prompt = prompt.into();\n\n        let rendered = prompt.render().unwrap();\n\n        assert!(rendered.contains(\"special role\"), \"error: {rendered}\");\n        assert!(rendered.contains(\"special guideline\"), \"error: {rendered}\");\n        assert!(rendered.contains(\"special constraint\"), \"error: {rendered}\");\n        assert!(\n            rendered.contains(\"some additional info\"),\n            \"error: {rendered}\"\n        );\n\n        insta::assert_snapshot!(rendered);\n    }\n\n    #[tokio::test]\n    async fn test_to_prompt() {\n        let prompt = SystemPrompt::builder()\n            .role(\"special role\")\n            .guidelines([\"special guideline\"])\n            .constraints(vec![\"special 
constraint\".to_string()])\n            .additional(\"some additional info\")\n            .build()\n            .unwrap();\n\n        let prompt: Prompt = prompt.to_prompt();\n\n        let rendered = prompt.render().unwrap();\n\n        assert!(rendered.contains(\"special role\"), \"error: {rendered}\");\n        assert!(rendered.contains(\"special guideline\"), \"error: {rendered}\");\n        assert!(rendered.contains(\"special constraint\"), \"error: {rendered}\");\n        assert!(\n            rendered.contains(\"some additional info\"),\n            \"error: {rendered}\"\n        );\n\n        insta::assert_snapshot!(rendered);\n    }\n\n    #[tokio::test]\n    async fn test_system_prompt_to_builder() {\n        let sp = SystemPrompt {\n            role: Some(\"Assistant\".to_string()),\n            guidelines: vec![\"Be concise\".to_string()],\n            constraints: vec![\"No personal opinions\".to_string()],\n            additional: None,\n            template: \"Hello, {{role}}! 
Guidelines: {{guidelines}}, Constraints: {{constraints}}\"\n                .into(),\n        };\n\n        let builder = SystemPromptBuilder::from(sp.clone());\n\n        assert_eq!(builder.role, Some(Some(\"Assistant\".to_string())));\n        assert_eq!(builder.guidelines, Some(vec![\"Be concise\".to_string()]));\n        assert_eq!(\n            builder.constraints,\n            Some(vec![\"No personal opinions\".to_string()])\n        );\n        // For template, compare the rendered string\n        assert_eq!(\n            builder.template.as_ref().unwrap().render().unwrap(),\n            sp.template.render().unwrap()\n        );\n    }\n\n    #[test]\n    fn test_with_added_guideline_and_constraint() {\n        let mut sp = SystemPrompt::default();\n        sp.with_added_guideline(\"Stay polite\")\n            .with_added_guideline(\"Use Markdown\")\n            .with_added_constraint(\"No personal info\")\n            .with_added_constraint(\"Short responses\");\n\n        assert_eq!(sp.guidelines, vec![\"Stay polite\", \"Use Markdown\"]);\n        assert_eq!(sp.constraints, vec![\"No personal info\", \"Short responses\"]);\n    }\n\n    #[test]\n    fn test_with_guidelines_and_constraints_overwrites() {\n        let mut sp = SystemPrompt::default();\n        sp.with_guidelines([\"A\", \"B\", \"C\"])\n            .with_constraints(vec![\"X\", \"Y\"]);\n\n        assert_eq!(sp.guidelines, vec![\"A\", \"B\", \"C\"]);\n        assert_eq!(sp.constraints, vec![\"X\", \"Y\"]);\n\n        // Overwrite with different contents\n        sp.with_guidelines(vec![\"Z\"]);\n        sp.with_constraints([\"P\", \"Q\"]);\n        assert_eq!(sp.guidelines, vec![\"Z\"]);\n        assert_eq!(sp.constraints, vec![\"P\", \"Q\"]);\n    }\n\n    #[test]\n    fn test_with_role_and_additional_and_template() {\n        let mut sp = SystemPrompt::default();\n        sp.with_role(\"explainer\")\n            .with_additional(\"AGENTS.md here\")\n            .with_template(\"Template: 
{{role}}\");\n\n        assert_eq!(sp.role.as_deref(), Some(\"explainer\"));\n        assert_eq!(sp.additional.as_deref(), Some(\"AGENTS.md here\"));\n        assert_eq!(sp.template.render().unwrap(), \"Template: {{role}}\");\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/system_prompt_template.md",
    "content": "{% if role -%}\n\n# Your role\n\n{{role}}\n{% endif -%}\n\n# Guidelines you need to follow\n\n{# Guidelines provide soft rules and best practices to complete a task well -#}\n\n- Try to understand how to complete the task well before completing it.\n{% for item in guidelines -%}\n- {{item}}\n{% endfor %}\n\n# Constraints that must be adhered to\n\n{# Constraints are hard limitations that an agent must follow -#}\n\n- Think step by step\n- Think before you act; respond with your thoughts before calling tools\n- Do not make up any assumptions, use tools to get the information you need\n- Use the provided tools to interact with the system and accomplish the task\n- If you are stuck, or otherwise cannot complete the task, respond with your thoughts and call `stop`.\n- If the task is completed, or otherwise cannot continue, like requiring user feedback, call `stop`.\n{% for item in constraints -%}\n- {{item}}\n{% endfor %}\n\n# Response Format\n\n{# Instruct the agent to always respond with their thoughts (chain-of-thought) -#}\n\n- Always respond with your thoughts and reasoning for your actions in one or two sentences. Even when calling tools.\n- Once the goal is achieved, call the `stop` tool\n\n{{additional}}\n"
  },
  {
    "path": "swiftide-agents/src/tasks/closures.rs",
    "content": "use std::pin::Pin;\n\nuse async_trait::async_trait;\n\nuse super::{\n    errors::NodeError,\n    node::{NodeArg, NodeId, TaskNode},\n};\n\n#[derive(Clone)]\npub struct SyncFn<F, I, O>\nwhere\n    F: Fn(&I) -> Result<O, NodeError> + Send + Sync + Clone + 'static,\n{\n    pub f: F,\n    _phantom: std::marker::PhantomData<(I, O)>,\n}\n\n#[derive(Clone)]\npub struct AsyncFn<F, I, O>\nwhere\n    F: for<'a> Fn(&'a I) -> Pin<Box<dyn Future<Output = Result<O, NodeError>> + Send + 'a>>\n        + Send\n        + Sync\n        + Clone\n        + 'static,\n{\n    pub f: F,\n    _phantom: std::marker::PhantomData<(I, O)>,\n}\n\nimpl<F, I, O> SyncFn<F, I, O>\nwhere\n    F: Fn(&I) -> Result<O, NodeError> + Send + Sync + Clone + 'static,\n    I: NodeArg + Clone,\n    O: NodeArg + Clone,\n{\n    pub fn new(f: F) -> Self {\n        SyncFn {\n            f,\n            _phantom: std::marker::PhantomData,\n        }\n    }\n}\n\nimpl<F, I, O> AsyncFn<F, I, O>\nwhere\n    F: for<'a> Fn(&'a I) -> Pin<Box<dyn Future<Output = Result<O, NodeError>> + Send + 'a>>\n        + Send\n        + Sync\n        + Clone\n        + 'static,\n    I: NodeArg + Clone,\n    O: NodeArg + Clone,\n{\n    pub fn new(f: F) -> Self {\n        AsyncFn {\n            f,\n            _phantom: std::marker::PhantomData,\n        }\n    }\n}\n\nimpl<F> From<F> for SyncFn<F, (), ()>\nwhere\n    F: Fn(&()) -> Result<(), NodeError> + Send + Sync + Clone + 'static,\n{\n    fn from(f: F) -> Self {\n        SyncFn::new(f)\n    }\n}\n\nimpl<F> From<F> for AsyncFn<F, (), ()>\nwhere\n    F: for<'a> Fn(&'a ()) -> Pin<Box<dyn Future<Output = Result<(), NodeError>> + Send + 'a>>\n        + Send\n        + Sync\n        + Clone\n        + 'static,\n{\n    fn from(f: F) -> Self {\n        AsyncFn::new(f)\n    }\n}\n\n#[async_trait]\nimpl<F, I, O> TaskNode for SyncFn<F, I, O>\nwhere\n    F: Fn(&I) -> Result<O, NodeError> + Clone + Send + Sync + 'static,\n    I: NodeArg + Clone,\n    O: NodeArg + Clone,\n{\n    
type Input = I;\n    type Output = O;\n    type Error = NodeError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        (self.f)(input)\n    }\n}\n\n#[async_trait]\nimpl<F, I, O> TaskNode for AsyncFn<F, I, O>\nwhere\n    F: for<'a> Fn(&'a I) -> Pin<Box<dyn Future<Output = Result<O, NodeError>> + Send + 'a>>\n        + Clone\n        + Send\n        + Sync\n        + 'static,\n    I: NodeArg + Clone,\n    O: NodeArg + Clone,\n{\n    type Input = I;\n    type Output = O;\n    type Error = NodeError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        (self.f)(input).await\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tasks/errors.rs",
    "content": "use std::{any::Any, sync::Arc};\n\nuse super::transition::TransitionPayload;\n\n#[derive(thiserror::Error, Debug)]\npub enum TaskError {\n    #[error(transparent)]\n    NodeError(#[from] NodeError),\n\n    #[error(\"MissingTransition: {0}\")]\n    MissingTransition(String),\n\n    #[error(\"MissingNode: {0}\")]\n    MissingNode(String),\n\n    #[error(\"Task failed with wrong output\")]\n    TypeError(String),\n\n    #[error(\"MissingInput: {0}\")]\n    MissingInput(String),\n\n    #[error(\"MissingOutput: {0}\")]\n    MissingOutput(String),\n\n    #[error(\"Task is missing steps\")]\n    NoSteps,\n}\n\nimpl TaskError {\n    pub fn missing_transition(node_id: usize) -> Self {\n        TaskError::MissingTransition(format!(\"Node {node_id} is missing a transition\"))\n    }\n\n    pub fn missing_node(node_id: usize) -> Self {\n        TaskError::MissingNode(format!(\"Node {node_id} is missing\"))\n    }\n\n    pub fn missing_input(node_id: usize) -> Self {\n        TaskError::MissingInput(format!(\"Node {node_id} is missing input\"))\n    }\n\n    pub fn missing_output(node_id: usize) -> Self {\n        TaskError::MissingOutput(format!(\"Node {node_id} is missing output\"))\n    }\n\n    pub fn type_error<T: Any + Send>(output: &T) -> Self {\n        let message = format!(\n            \"Expected output of type {}, but got {:?}\",\n            std::any::type_name::<T>(),\n            output.type_id()\n        );\n        TaskError::TypeError(message)\n    }\n}\n\n#[derive(Debug, thiserror::Error)]\npub struct NodeError {\n    pub node_error: Box<dyn std::error::Error + Send + Sync>,\n    pub transition_payload: Option<Arc<TransitionPayload>>,\n    pub node_id: usize,\n}\n\nimpl std::fmt::Display for NodeError {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"Node error in node {}: {:?}\",\n            self.node_id, self.node_error\n        )\n    }\n}\n\nimpl NodeError {\n    
pub fn new(\n        node_error: impl Into<Box<dyn std::error::Error + Send + Sync>>,\n        node_id: usize,\n        transition_payload: Option<TransitionPayload>,\n    ) -> Self {\n        Self {\n            node_error: node_error.into(),\n            transition_payload: transition_payload.map(Arc::new),\n            node_id,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tasks/impls.rs",
    "content": "use std::sync::Arc;\n\nuse async_trait::async_trait;\nuse swiftide_core::{\n    ChatCompletion, Command, CommandError, CommandOutput, SimplePrompt, ToolExecutor,\n    chat_completion::{ChatCompletionRequest, ChatCompletionResponse, errors::LanguageModelError},\n    prompt::Prompt,\n};\nuse tokio::sync::Mutex;\n\nuse crate::{Agent, errors::AgentError};\n\nuse super::node::{NodeArg, NodeId, TaskNode};\n\n/// An example of wrapping an Agent as a `TaskNode`\n///\n/// For more control you can always roll your own\n#[derive(Clone, Debug)]\npub struct TaskAgent(Arc<Mutex<Agent>>);\n\nimpl From<Agent> for TaskAgent {\n    fn from(agent: Agent) -> Self {\n        TaskAgent(Arc::new(Mutex::new(agent)))\n    }\n}\n\n/// A 'default' implementation for an agent where there is no output\n#[async_trait]\nimpl TaskNode for TaskAgent {\n    type Input = Prompt;\n\n    type Output = ();\n\n    type Error = AgentError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        self.0.lock().await.query(input.clone()).await\n    }\n}\n\n#[async_trait]\nimpl TaskNode for Box<dyn SimplePrompt> {\n    type Input = Prompt;\n\n    type Output = String;\n\n    type Error = LanguageModelError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        // TODO: Prompt should be borrowed\n        self.prompt(input.clone()).await\n    }\n}\n\n#[async_trait]\nimpl TaskNode for Arc<dyn SimplePrompt> {\n    type Input = Prompt;\n\n    type Output = String;\n\n    type Error = LanguageModelError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn 
TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        // TODO: Prompt should be borrowed\n        self.prompt(input.clone()).await\n    }\n}\n\n#[async_trait]\nimpl TaskNode for Box<dyn ChatCompletion> {\n    type Input = ChatCompletionRequest<'static>;\n\n    type Output = ChatCompletionResponse;\n\n    type Error = LanguageModelError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        self.complete(input).await\n    }\n}\n\n#[async_trait]\nimpl TaskNode for Arc<dyn ChatCompletion> {\n    type Input = ChatCompletionRequest<'static>;\n\n    type Output = ChatCompletionResponse;\n\n    type Error = LanguageModelError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        self.complete(input).await\n    }\n}\n\n#[async_trait]\nimpl TaskNode for Box<dyn ToolExecutor> {\n    type Input = Command;\n\n    type Output = CommandOutput;\n\n    type Error = CommandError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        self.exec_cmd(input).await\n    }\n}\n\n#[async_trait]\nimpl TaskNode for Arc<dyn ToolExecutor> {\n    type Input = Command;\n\n    type Output = CommandOutput;\n\n    type Error = CommandError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, 
Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        self.exec_cmd(input).await\n    }\n}\n\n// Note: This only works for function pointers, not closures.\n#[async_trait]\nimpl<I: NodeArg, O: NodeArg, E: std::error::Error + Send + Sync + 'static> TaskNode\n    for fn(&I) -> Result<O, E>\n{\n    type Input = I;\n\n    type Output = O;\n\n    type Error = E;\n\n    async fn evaluate(\n        &self,\n        _node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        (self)(input)\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tasks/mod.rs",
    "content": "pub mod closures;\npub mod errors;\npub mod impls;\npub mod node;\npub mod task;\npub mod transition;\n"
  },
  {
    "path": "swiftide-agents/src/tasks/node.rs",
    "content": "use std::any::Any;\n\nuse async_trait::async_trait;\nuse dyn_clone::DynClone;\n\nuse super::{\n    errors::NodeError,\n    transition::{MarkedTransitionPayload, TransitionPayload},\n};\n\npub trait NodeArg: Send + Sync + DynClone + 'static {}\n\nimpl<T: Send + Sync + std::fmt::Debug + 'static + Clone> NodeArg for T {}\n\n#[derive(Debug, Clone)]\npub struct NoopNode<Context: NodeArg> {\n    _marker: std::marker::PhantomData<(Context, Box<dyn std::error::Error + Send + Sync>)>,\n}\n\nimpl<Context> Default for NoopNode<Context>\nwhere\n    Context: NodeArg,\n{\n    fn default() -> Self {\n        NoopNode {\n            _marker: std::marker::PhantomData,\n        }\n    }\n}\n\n#[async_trait]\nimpl<Context: NodeArg + Clone> TaskNode for NoopNode<Context> {\n    type Output = ();\n    type Input = Context;\n    type Error = NodeError;\n\n    async fn evaluate(\n        &self,\n        _node_id: &DynNodeId<Self>,\n        _context: &Context,\n    ) -> Result<Self::Output, Self::Error> {\n        Ok(())\n    }\n}\n\n#[async_trait]\npub trait TaskNode: Send + Sync + DynClone + Any {\n    type Input: NodeArg;\n    type Output: NodeArg;\n    type Error: std::error::Error + Send + Sync + 'static;\n\n    async fn evaluate(\n        &self,\n        node_id: &DynNodeId<Self>,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error>;\n}\n\npub type DynNodeId<T> = NodeId<\n    dyn TaskNode<\n            Input = <T as TaskNode>::Input,\n            Output = <T as TaskNode>::Output,\n            Error = <T as TaskNode>::Error,\n        >,\n>;\n\ndyn_clone::clone_trait_object!(\n    TaskNode<\n        Input = dyn NodeArg,\n        Output = dyn NodeArg,\n        Error = dyn std::error::Error + Send + Sync,\n    >\n);\n\n#[async_trait]\nimpl<Input: NodeArg, Output: NodeArg, Error: std::error::Error + Send + Sync + 'static> TaskNode\n    for Box<dyn TaskNode<Input = Input, Output = Output, Error = Error>>\n{\n    type Input = Input;\n    type Output = 
Output;\n    type Error = Error;\n\n    async fn evaluate(\n        &self,\n        node_id: &NodeId<\n            dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n        >,\n        input: &Self::Input,\n    ) -> Result<Self::Output, Self::Error> {\n        self.as_ref().evaluate(node_id, input).await\n    }\n}\n\ndyn_clone::clone_trait_object!(<Input, Output, Error> TaskNode<Input = Input, Output = Output, Error = Error>);\n\n#[derive(PartialEq, Eq)]\npub struct NodeId<T: TaskNode + ?Sized> {\n    pub id: usize,\n    _marker: std::marker::PhantomData<T>,\n}\n\nimpl<T: TaskNode + ?Sized> std::fmt::Debug for NodeId<T> {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        let type_name = std::any::type_name::<T>();\n\n        write!(f, \"NodeId<{type_name}>({})\", self.id)\n    }\n}\n\npub type AnyNodeId = usize;\n\nimpl<T: TaskNode + ?Sized> NodeId<T> {\n    pub fn id(&self) -> usize {\n        self.id\n    }\n\n    /// Returns a closure that can be used as a transition function\n    pub fn as_transition(&self) -> impl Fn(T::Input) -> MarkedTransitionPayload<T> + 'static {\n        let node_id = *self;\n\n        Box::new(move |context| node_id.transitions_with(context))\n    }\n\n    /// Returns a transition payload suitable for inside a task transition\n    ///\n    /// You can also get the closure version with `as_transition`\n    pub fn transitions_with(&self, context: T::Input) -> MarkedTransitionPayload<T> {\n        MarkedTransitionPayload::new(TransitionPayload::next_node(self, context))\n    }\n}\n\nimpl<T: TaskNode + 'static + ?Sized> NodeId<T> {\n    pub fn new(id: usize, _node: &T) -> Self {\n        NodeId {\n            id,\n            _marker: std::marker::PhantomData,\n        }\n    }\n\n    /// Returns the internal id of the node without the type information.\n    pub fn as_any(&self) -> AnyNodeId {\n        self.id\n    }\n\n    pub fn as_dyn(\n        self,\n    ) -> NodeId<dyn 
TaskNode<Input = T::Input, Output = T::Output, Error = T::Error>> {\n        NodeId {\n            id: self.id,\n            _marker: std::marker::PhantomData,\n        }\n    }\n}\n\nimpl<T: TaskNode + ?Sized> Clone for NodeId<T> {\n    fn clone(&self) -> Self {\n        *self\n    }\n}\nimpl<T: TaskNode + ?Sized> Copy for NodeId<T> {}\n"
  },
  {
    "path": "swiftide-agents/src/tasks/task.rs",
    "content": "//! Tasks enable you to to define a graph of interacting nodes\n//!\n//! The nodes can be any type that implements the `TaskNode` trait, which defines how the node\n//! will be evaluated with its input and output.\n//!\n//! Most swiftide primitives implement `TaskNode`, and it's easy to implement your own. Since how\n//! agents interact is subject to taste, we recommend implementing your own.\n//!\n//! WARN: Here be dragons! This api is not stable yet. We are using it in production, and is\n//! subject to rapid change. However, do not hesitate to open an issue if you find anything.\nuse std::{any::Any, pin::Pin, sync::Arc};\n\nuse tracing::Instrument as _;\n\nuse crate::tasks::{errors::NodeError, transition::TransitionFn};\n\nuse super::{\n    errors::TaskError,\n    node::{NodeArg, NodeId, NoopNode, TaskNode},\n    transition::{AnyNodeTransition, MarkedTransitionPayload, Transition, TransitionPayload},\n};\n\n#[derive(Debug)]\npub struct Task<Input: NodeArg, Output: NodeArg> {\n    nodes: Vec<Box<dyn AnyNodeTransition>>,\n    current_node: usize,\n    start_node: usize,\n    current_context: Option<Arc<dyn Any + Send + Sync>>,\n    _marker: std::marker::PhantomData<(Input, Output)>,\n}\n\nimpl<Input: NodeArg, Output: NodeArg> Clone for Task<Input, Output> {\n    fn clone(&self) -> Self {\n        Self {\n            nodes: self.nodes.clone(),\n            current_node: 0,\n            start_node: self.start_node,\n            current_context: None,\n            _marker: std::marker::PhantomData,\n        }\n    }\n}\n\nimpl<Input: NodeArg + Clone, Output: NodeArg + Clone> Default for Task<Input, Output> {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl<Input: NodeArg + Clone, Output: NodeArg + Clone> Task<Input, Output> {\n    pub fn new() -> Self {\n        let noop = NoopNode::<Output>::default();\n\n        let node_id = NodeId::new(0, &noop).as_dyn();\n\n        let noop_executor = Box::new(Transition {\n            node: 
Box::new(noop),\n            node_id: Box::new(node_id),\n            r#fn: Arc::new(|_output| {\n                Box::pin(async { unreachable!(\"Done node should never be evaluated.\") })\n            }),\n            is_set: false,\n        });\n        Self {\n            nodes: vec![noop_executor],\n            current_node: 0,\n            start_node: 0,\n            current_context: None,\n            _marker: std::marker::PhantomData,\n        }\n    }\n\n    /// Returns the current context as the input type, if it matches\n    pub fn current_input(&self) -> Option<&Input> {\n        let input = self.current_context.as_ref()?;\n\n        input.downcast_ref::<Input>()\n    }\n\n    /// Returns the current context as the output type, if it matches\n    pub fn current_output(&self) -> Option<&Output> {\n        let input = self.current_context.as_ref()?;\n\n        input.downcast_ref::<Output>()\n    }\n\n    /// Returns the `done` node for this task\n    pub fn done(&self) -> NodeId<NoopNode<Output>> {\n        NodeId::new(0, &NoopNode::default())\n    }\n\n    /// Creates a transition to the done node\n    pub fn transitions_to_done(\n        &self,\n    ) -> impl Fn(Output) -> MarkedTransitionPayload<NoopNode<Output>> + Send + Sync + 'static {\n        let done = self.done();\n        move |context| done.transitions_with(context)\n    }\n\n    /// Defines the start node of the task\n    pub fn starts_with<T: TaskNode<Input = Input> + Clone + 'static>(\n        &mut self,\n        node_id: NodeId<T>,\n    ) {\n        self.current_node = node_id.id;\n        self.start_node = node_id.id;\n    }\n\n    /// Validates that all nodes have transitions set\n    ///\n    /// # Errors\n    ///\n    /// Errors if a node is missing a transition\n    pub fn validate_transitions(&self) -> Result<(), TaskError> {\n        // TODO: Validate that the task can complete\n        for node_executor in &self.nodes {\n            // Skip the done node (index 0)\n            if 
node_executor.node_id() == 0 {\n                continue;\n            }\n\n            if !node_executor.transition_is_set() {\n                return Err(TaskError::missing_transition(node_executor.node_id()));\n            }\n        }\n\n        Ok(())\n    }\n\n    /// Runs the task with the given input\n    ///\n    /// # Errors\n    ///\n    /// Errors if the task fails\n    #[tracing::instrument(skip(self, input), name = \"task.run\", err)]\n    pub async fn run(&mut self, input: impl Into<Input>) -> Result<Option<Output>, TaskError> {\n        self.validate_transitions()?;\n\n        self.current_context = Some(Arc::new(input.into()) as Arc<dyn Any + Send + Sync>);\n\n        self.start_task().await\n    }\n\n    /// Resets the task to the start node\n    ///\n    /// WARN: This **will** lead to a type mismatch if the previous context is not the same as the\n    /// input of the start node\n    pub fn reset(&mut self) {\n        self.current_node = self.start_node;\n    }\n\n    /// Resumes the task from the current node\n    ///\n    /// # Errors\n    ///\n    /// Errors if the task fails\n    #[tracing::instrument(skip(self), name = \"task.resume\", err)]\n    pub async fn resume(&mut self) -> Result<Option<Output>, TaskError> {\n        self.start_task().await\n    }\n\n    async fn start_task(&mut self) -> Result<Option<Output>, TaskError> {\n        self.validate_transitions()?;\n\n        let mut span = tracing::info_span!(\"task.step\", node = self.current_node);\n        loop {\n            if self.current_node == 0 {\n                break;\n            }\n            let node_transition = self\n                .nodes\n                .get(self.current_node)\n                .ok_or_else(|| TaskError::missing_node(self.current_node))?;\n\n            let input = self\n                .current_context\n                .clone()\n                .ok_or_else(|| TaskError::missing_input(self.current_node))?;\n\n            tracing::debug!(\"Running node 
{}\", self.current_node);\n\n            let span_id = span.id().clone();\n            let transition_payload = node_transition\n                .evaluate_next(input)\n                .instrument(span.or_current())\n                .await?;\n\n            match transition_payload {\n                TransitionPayload::Pause => {\n                    tracing::info!(\"Task paused at node {}\", self.current_node);\n                    return Ok(None);\n                }\n                TransitionPayload::NextNode(transition_payload) => {\n                    self.current_node = transition_payload.node_id;\n                    self.current_context = Some(transition_payload.context);\n                }\n                TransitionPayload::Error(error) => {\n                    return Err(TaskError::NodeError(NodeError::new(\n                        error,\n                        self.current_node,\n                        None,\n                    )));\n                }\n            }\n            if self.current_node == 0 {\n                tracing::debug!(\"Task completed at node {}\", self.current_node);\n                break;\n            }\n\n            span = tracing::info_span!(\"task.step\", node = self.current_node).or_current();\n            span.follows_from(span_id);\n        }\n\n        let output = self\n            .current_context\n            .clone()\n            .ok_or_else(|| TaskError::missing_output(self.current_node))?;\n        let output = output\n            .downcast::<Output>()\n            .map_err(|e| TaskError::type_error(&e))?\n            .as_ref()\n            .clone();\n\n        Ok(Some(output))\n    }\n\n    /// Gets the current node of the task\n    pub fn current_node<T: TaskNode + 'static>(&self) -> Option<&T> {\n        self.node_at_index(self.current_node)\n    }\n\n    /// Gets the node at the given `NodeId`\n    pub fn node_at<T: TaskNode + 'static>(&self, node_id: NodeId<T>) -> Option<&T> {\n        
self.node_at_index(node_id.id)\n    }\n\n    /// Gets the node at the given index\n    pub fn node_at_index<T: TaskNode + 'static>(&self, index: usize) -> Option<&T> {\n        let transition = self.transition_at_index::<T>(index)?;\n\n        let node = &*transition.node;\n\n        (node as &dyn Any).downcast_ref::<T>()\n    }\n\n    /// Gets the current transition of the task\n    #[allow(dead_code)]\n    fn current_transition<T: TaskNode + 'static>(\n        &self,\n    ) -> Option<&Transition<T::Input, T::Output, T::Error>> {\n        self.transition_at_index::<T>(self.current_node)\n    }\n\n    /// Gets the transition at the given index\n    fn transition_at_index<T: TaskNode + 'static>(\n        &self,\n        index: usize,\n    ) -> Option<&Transition<T::Input, T::Output, T::Error>> {\n        tracing::debug!(\"Getting transition at index {}\", index);\n        let transition = self.nodes.get(index)?;\n\n        (&**transition as &dyn Any).downcast_ref::<Transition<T::Input, T::Output, T::Error>>()\n    }\n\n    /// Registers a new node in the task\n    pub fn register_node<T>(&mut self, node: T) -> NodeId<T>\n    where\n        T: TaskNode + 'static + Clone,\n        <T as TaskNode>::Input: Clone,\n        <T as TaskNode>::Output: Clone,\n    {\n        let id = self.nodes.len();\n        let node_id = NodeId::new(id, &node);\n        let node_executor = Box::new(Transition::<T::Input, T::Output, T::Error> {\n            node_id: Box::new(node_id.as_dyn()),\n            node: Box::new(node),\n            r#fn: Arc::new(move |_output| unreachable!(\"No transition for node {}.\", node_id.id)),\n            is_set: false,\n        });\n        // Log the node id and concrete transition type for debugging\n        tracing::debug!(node_id = ?node_id, type_name = std::any::type_name_of_val(&node_executor), \"Registering node\");\n\n        self.nodes.push(node_executor);\n\n        node_id\n    }\n\n    /// Registers a transition from one node to another\n    ///\n  
  /// Note that there are various helpers and conversions for the `MarkedTransitionPayload`\n    ///\n    /// # Errors\n    ///\n    /// Errors if the node does not exist\n    pub fn register_transition<'a, From, To, F>(\n        &mut self,\n        from: NodeId<From>,\n        transition: F,\n    ) -> Result<(), TaskError>\n    where\n        From: TaskNode + 'static + ?Sized,\n        To: TaskNode<Input = From::Output> + 'a + ?Sized,\n        F: Fn(To::Input) -> MarkedTransitionPayload<To> + Send + Sync + 'static,\n    {\n        let node_executor = self\n            .nodes\n            .get_mut(from.id)\n            .ok_or_else(|| TaskError::missing_node(from.id))?;\n\n        let any_executor: &mut dyn Any = node_executor.as_mut();\n\n        let Some(exec) =\n            any_executor.downcast_mut::<Transition<From::Input, From::Output, From::Error>>()\n        else {\n            let expected =\n                std::any::type_name::<Transition<From::Input, From::Output, From::Error>>();\n            let actual = std::any::type_name_of_val(node_executor);\n\n            unreachable!(\n                \"Transition at index {:?} is not a {expected:?}; Mismatched types, this should never happen. 
Actual: {actual:?}\",\n                from.id\n            );\n        };\n        let transition = Arc::new(transition);\n        let wrapped: Arc<dyn TransitionFn<From::Output>> = Arc::new(move |output: From::Output| {\n            let transition = transition.clone();\n            Box::pin(async move {\n                let output = transition(output);\n                output.into_inner()\n            })\n        });\n\n        exec.r#fn = wrapped;\n        exec.is_set = true;\n\n        Ok(())\n    }\n\n    /// Registers a transition from one node to another asynchronously\n    ///\n    /// Note that there are various helpers and conversions for the `MarkedTransitionPayload`\n    ///\n    /// # Errors\n    ///\n    /// Errors if the node does not exist\n    ///\n    /// NOTE: `AsyncFn` traits' returned futures are not `Send` and the inner type is unstable.\n    /// When they are, we can update `Fn` to `AsyncFn`\n    pub fn register_transition_async<'a, From, To, F>(\n        &mut self,\n        from: NodeId<From>,\n        transition: F,\n    ) -> Result<(), TaskError>\n    where\n        From: TaskNode + 'static + ?Sized,\n        To: TaskNode<Input = From::Output> + 'a + ?Sized,\n        F: Fn(To::Input) -> Pin<Box<dyn Future<Output = MarkedTransitionPayload<To>> + Send>>\n            + Send\n            + Sync\n            + 'static,\n    {\n        let node_executor = self\n            .nodes\n            .get_mut(from.id)\n            .ok_or_else(|| TaskError::missing_node(from.id))?;\n\n        let any_executor: &mut dyn Any = node_executor.as_mut();\n\n        let Some(exec) =\n            any_executor.downcast_mut::<Transition<From::Input, From::Output, From::Error>>()\n        else {\n            let expected =\n                std::any::type_name::<Transition<From::Input, From::Output, From::Error>>();\n            let actual = std::any::type_name_of_val(node_executor);\n\n            unreachable!(\n                
\"Transition at index {:?} is not a {expected:?}; Mismatched types, should not never happen. Actual: {actual:?}\",\n                from.id\n            );\n        };\n        let transition = Arc::new(transition);\n        let wrapped: Arc<dyn TransitionFn<From::Output>> = Arc::new(move |output: From::Output| {\n            let transition = transition.clone();\n\n            Box::pin(async move {\n                let output = transition(output).await;\n                output.into_inner()\n            })\n        });\n\n        exec.r#fn = wrapped;\n        exec.is_set = true;\n        // set function as before\n\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use async_trait::async_trait;\n\n    use super::*;\n\n    #[derive(thiserror::Error, Debug)]\n    struct Error(String);\n\n    impl std::fmt::Display for Error {\n        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n            write!(f, \"{}\", self.0)\n        }\n    }\n\n    #[derive(Clone, Default, Debug)]\n    struct IntNode;\n    #[async_trait]\n    impl TaskNode for IntNode {\n        type Input = i32;\n        type Output = i32;\n        type Error = Error;\n\n        async fn evaluate(\n            &self,\n            _node_id: &NodeId<\n                dyn TaskNode<Input = Self::Input, Output = Self::Output, Error = Self::Error>,\n            >,\n            input: &Self::Input,\n        ) -> Result<Self::Output, Self::Error> {\n            Ok(input + 1)\n        }\n    }\n    // Implement other required traits if necessary...\n\n    #[test_log::test(tokio::test)]\n    async fn sequential_3_node_task_reset_works() {\n        let mut task: Task<i32, i32> = Task::new();\n\n        // Register three nodes\n        let node1 = task.register_node(IntNode);\n        let node2 = task.register_node(IntNode);\n        let node3 = task.register_node(IntNode);\n\n        // Set start node\n        task.starts_with(node1);\n\n        // Register transitions (node1 → node2 → 
node3 → done)\n        task.register_transition::<_, _, _>(node1, move |input| node2.transitions_with(input))\n            .unwrap();\n        task.register_transition::<_, _, _>(node2, move |input| node3.transitions_with(input))\n            .unwrap();\n        task.register_transition::<_, _, _>(node3, task.transitions_to_done())\n            .unwrap();\n\n        // Run the task to completion\n        let res = task.run(1).await.unwrap();\n        assert_eq!(res, Some(4)); // 1 + 1 + 1 + 1\n\n        // Reset the task\n        task.reset();\n\n        // Assert current_node returns the correct node (node1)\n        let n1_transition = task.transition_at_index::<IntNode>(1);\n\n        assert!(n1_transition.is_some());\n\n        let n1_transition = task.current_transition::<IntNode>();\n        assert!(n1_transition.is_some());\n\n        let n1_ref = task.current_node::<IntNode>();\n        assert!(n1_ref.is_some());\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tasks/transition.rs",
    "content": "use std::{any::Any, pin::Pin, sync::Arc};\n\nuse async_trait::async_trait;\nuse dyn_clone::DynClone;\n\nuse super::{\n    errors::NodeError,\n    node::{NodeArg, NodeId, TaskNode},\n};\n\npub trait TransitionFn<Input: Send + Sync>:\n    for<'a> Fn(Input) -> Pin<Box<dyn Future<Output = TransitionPayload> + Send>> + Send + Sync\n{\n}\n\n// dyn_clone::clone_trait_object!(<Input> TransitionFn<Input>);\n\nimpl<Input: Send + Sync, F> TransitionFn<Input> for F where\n    F: for<'a> Fn(Input) -> Pin<Box<dyn Future<Output = TransitionPayload> + Send>> + Send + Sync\n{\n}\n\npub(crate) struct Transition<\n    Input: NodeArg,\n    Output: NodeArg,\n    Error: std::error::Error + Send + Sync + 'static,\n> {\n    pub(crate) node: Box<dyn TaskNode<Input = Input, Output = Output, Error = Error> + Send + Sync>,\n    pub(crate) node_id: Box<NodeId<dyn TaskNode<Input = Input, Output = Output, Error = Error>>>,\n    // pub(crate) r#fn: Arc<dyn Fn(Output) -> TransitionPayload + Send + Sync>,\n    pub(crate) r#fn: Arc<dyn TransitionFn<Output> + Send>,\n    pub(crate) is_set: bool,\n}\n\nimpl<Input, Output, Error> Clone for Transition<Input, Output, Error>\nwhere\n    Input: NodeArg,\n    Output: NodeArg,\n    Error: std::error::Error + Send + Sync + 'static,\n{\n    fn clone(&self) -> Self {\n        Transition {\n            node: self.node.clone(),\n            node_id: self.node_id.clone(),\n            r#fn: self.r#fn.clone(),\n            is_set: self.is_set,\n        }\n    }\n}\n\nimpl<Input: NodeArg, Output: NodeArg, Error: std::error::Error + Send + Sync + 'static>\n    std::fmt::Debug for Transition<Input, Output, Error>\n{\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Transition\")\n            .field(\"node_id\", &self.node_id.id)\n            .field(\"is_set\", &self.is_set)\n            .finish()\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct NextNode {\n    // If we make this an enum instead, we can 
support spawning many nodes as well\n    pub(crate) node_id: usize,\n    pub(crate) context: Arc<dyn Any + Send + Sync>,\n}\n\nimpl NextNode {\n    pub fn new<T: TaskNode + ?Sized>(node_id: NodeId<T>, context: T::Input) -> Self\n    where\n        <T as TaskNode>::Input: 'static,\n    {\n        let context = Arc::new(context) as Arc<dyn Any + Send + Sync>;\n\n        NextNode {\n            node_id: node_id.id,\n            context,\n        }\n    }\n}\n\nimpl From<NextNode> for TransitionPayload {\n    fn from(next_node: NextNode) -> Self {\n        TransitionPayload::NextNode(next_node)\n    }\n}\n\n#[derive(Debug)]\npub enum TransitionPayload {\n    NextNode(NextNode),\n    Pause,\n    Error(Box<dyn std::error::Error + Send + Sync>),\n}\n\nimpl TransitionPayload {\n    pub fn next_node<T: TaskNode + ?Sized>(node_id: &NodeId<T>, context: T::Input) -> Self {\n        NextNode::new(*node_id, context).into()\n    }\n\n    pub fn pause() -> Self {\n        TransitionPayload::Pause\n    }\n\n    pub fn error(error: impl Into<Box<dyn std::error::Error + Send + Sync>>) -> Self {\n        TransitionPayload::Error(error.into())\n    }\n}\n\npub struct MarkedTransitionPayload<To: TaskNode + ?Sized>(\n    TransitionPayload,\n    std::marker::PhantomData<To>,\n);\n\nimpl<To: TaskNode + ?Sized> MarkedTransitionPayload<To> {\n    pub fn new(payload: TransitionPayload) -> Self {\n        MarkedTransitionPayload(payload, std::marker::PhantomData)\n    }\n\n    pub fn into_inner(self) -> TransitionPayload {\n        self.0\n    }\n}\n\nimpl<T: TaskNode> std::ops::Deref for MarkedTransitionPayload<T> {\n    type Target = TransitionPayload;\n\n    fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n\n#[async_trait]\npub(crate) trait AnyNodeTransition: Any + Send + Sync + std::fmt::Debug + DynClone {\n    fn transition_is_set(&self) -> bool;\n\n    async fn evaluate_next(\n        &self,\n        context: Arc<dyn Any + Send + Sync>,\n    ) -> Result<TransitionPayload, 
NodeError>;\n\n    fn node_id(&self) -> usize;\n}\n\ndyn_clone::clone_trait_object!(AnyNodeTransition);\n\n#[async_trait]\nimpl<Input: NodeArg, Output: NodeArg, Error: std::error::Error + Send + Sync + 'static>\n    AnyNodeTransition for Transition<Input, Output, Error>\n{\n    async fn evaluate_next(\n        &self,\n        context: Arc<dyn Any + Send + Sync>,\n    ) -> Result<TransitionPayload, NodeError> {\n        let context = context.downcast::<Input>().unwrap();\n\n        match self.node.evaluate(&self.node_id.as_dyn(), &context).await {\n            Ok(output) => Ok((self.r#fn)(output).await),\n            Err(error) => Err(NodeError::new(error, self.node_id.id, None)), /* node_id will be\n                                                                              * set by caller */\n        }\n    }\n\n    fn transition_is_set(&self) -> bool {\n        self.is_set\n    }\n\n    fn node_id(&self) -> usize {\n        self.node_id.id\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/test_utils.rs",
    "content": "use std::borrow::Cow;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\nuse swiftide_core::chat_completion::ToolCall;\nuse swiftide_core::chat_completion::{Tool, ToolOutput, ToolSpec, errors::ToolError};\n\nuse swiftide_core::AgentContext;\n\nuse crate::Agent;\nuse crate::hooks::{\n    AfterCompletionFn, AfterToolFn, BeforeAllFn, BeforeCompletionFn, BeforeToolFn, MessageHookFn,\n    OnStartFn, OnStopFn, OnStreamFn,\n};\n\n#[macro_export]\nmacro_rules! chat_request {\n    ($($message:expr),+; tools = [$($tool:expr),*]) => {{\n        let mut builder = swiftide_core::chat_completion::ChatCompletionRequest::builder();\n        builder.messages(vec![$($message),*]);\n\n        let mut tool_specs = Vec::new();\n        $(tool_specs.push({\n            let tool = $tool;\n            tool.tool_spec()\n        });)*\n\n        tool_specs.extend(Agent::default_tools().into_iter().map(|tool| tool.tool_spec()));\n\n        builder.tool_specs(tool_specs);\n\n        builder.build().unwrap()\n    }};\n    ($($message:expr),+; tool_specs = [$($tool:expr),*]) => {{\n        let mut builder = swiftide_core::chat_completion::ChatCompletionRequest::builder();\n        builder.messages(vec![$($message),*]);\n\n        let mut tool_specs = Vec::new();\n        $(tool_specs.push($tool);)*\n        tool_specs.extend(Agent::default_tools().into_iter().map(|tool| tool.tool_spec()));\n\n        builder.tool_specs(tool_specs);\n\n        builder.build().unwrap()\n    }}\n}\n\n#[macro_export]\nmacro_rules! user {\n    ($message:expr) => {\n        swiftide_core::chat_completion::ChatMessage::new_user($message)\n    };\n}\n\n#[macro_export]\nmacro_rules! system {\n    ($message:expr) => {\n        swiftide_core::chat_completion::ChatMessage::new_system($message)\n    };\n}\n\n#[macro_export]\nmacro_rules! summary {\n    ($message:expr) => {\n        swiftide_core::chat_completion::ChatMessage::new_summary($message)\n    };\n}\n\n#[macro_export]\nmacro_rules! 
assistant {\n    ($message:expr) => {\n        swiftide_core::chat_completion::ChatMessage::new_assistant(\n            Some($message.to_string()),\n            None,\n        )\n    };\n    ($message:expr, [$($tool_call_name:expr),*]) => {{\n        let tool_calls = vec![\n            $(\n            ToolCall::builder()\n                .name($tool_call_name)\n                .id(\"1\")\n                .build()\n                .unwrap()\n            ),*\n        ];\n\n        ChatMessage::new_assistant(Some($message.to_string()), Some(tool_calls))\n    }};\n}\n\n#[macro_export]\nmacro_rules! tool_output {\n    ($tool_name:expr, $message:expr) => {{\n        ChatMessage::ToolOutput(\n            ToolCall::builder()\n                .name($tool_name)\n                .id(\"1\")\n                .build()\n                .unwrap(),\n            $message.into(),\n        )\n    }};\n}\n\n#[macro_export]\nmacro_rules! tool_failed {\n    ($tool_name:expr, $message:expr) => {{\n        ChatMessage::ToolOutput(\n            ToolCall::builder()\n                .name($tool_name)\n                .id(\"1\")\n                .build()\n                .unwrap(),\n            ToolOutput::fail($message),\n        )\n    }};\n}\n\n#[macro_export]\nmacro_rules! 
chat_response {\n    ($message:expr; tool_calls = [$($tool_name:expr),*]) => {{\n\n        let tool_calls = vec![\n            $(ToolCall::builder().name($tool_name).id(\"1\").build().unwrap()),*\n        ];\n\n        ChatCompletionResponse::builder()\n            .message($message)\n            .tool_calls(tool_calls)\n            .build()\n            .unwrap()\n    }};\n    (tool_calls = [$($tool_name:expr),*]) => {{\n\n        let tool_calls = vec![\n            $(ToolCall::builder().name($tool_name).id(\"1\").build().unwrap()),*\n        ];\n\n        ChatCompletionResponse::builder()\n            .tool_calls(tool_calls)\n            .build()\n            .unwrap()\n    }};\n}\n\ntype Expectations = Arc<Mutex<Vec<(Result<ToolOutput, ToolError>, Option<&'static str>)>>>;\n\n#[derive(Debug, Clone)]\npub struct MockTool {\n    expectations: Expectations,\n    name: &'static str,\n}\n\nimpl MockTool {\n    #[allow(clippy::should_implement_trait)]\n    pub fn default() -> Self {\n        Self::new(\"mock_tool\")\n    }\n    pub fn new(name: &'static str) -> Self {\n        Self {\n            expectations: Arc::new(Mutex::new(Vec::new())),\n            name,\n        }\n    }\n    pub fn expect_invoke_ok(\n        &self,\n        expected_result: ToolOutput,\n        expected_args: Option<&'static str>,\n    ) {\n        self.expect_invoke(Ok(expected_result), expected_args);\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn expect_invoke(\n        &self,\n        expected_result: Result<ToolOutput, ToolError>,\n        expected_args: Option<&'static str>,\n    ) {\n        self.expectations\n            .lock()\n            .unwrap()\n            .push((expected_result, expected_args));\n    }\n}\n\n#[async_trait]\nimpl Tool for MockTool {\n    async fn invoke(\n        &self,\n        _agent_context: &dyn AgentContext,\n        tool_call: &ToolCall,\n    ) -> std::result::Result<ToolOutput, ToolError> {\n        tracing::debug!(\n            
\"[MockTool] Invoked `{}` with args: {:?}\",\n            self.name,\n            tool_call\n        );\n        let expectation = self\n            .expectations\n            .lock()\n            .unwrap()\n            .pop()\n            .unwrap_or_else(|| panic!(\"[MockTool] No expectations left for `{}`\", self.name));\n\n        assert_eq!(expectation.1, tool_call.args());\n\n        expectation.0\n    }\n\n    fn name(&self) -> Cow<'_, str> {\n        self.name.into()\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        ToolSpec::builder()\n            .name(self.name().as_ref())\n            .description(\"A fake tool for testing purposes\")\n            .build()\n            .unwrap()\n    }\n}\n\nimpl From<MockTool> for Box<dyn Tool> {\n    fn from(val: MockTool) -> Self {\n        Box::new(val) as Box<dyn Tool>\n    }\n}\n\nimpl Drop for MockTool {\n    fn drop(&mut self) {\n        // Mock still borrowed elsewhere and expectations still be invoked\n        if Arc::strong_count(&self.expectations) > 1 {\n            return;\n        }\n        if self.expectations.lock().is_err() {\n            return;\n        }\n\n        let name = self.name;\n        if self.expectations.lock().unwrap().is_empty() {\n            tracing::debug!(\"[MockTool] All expectations were met for `{name}`\");\n        } else {\n            panic!(\n                \"[MockTool] Not all expectations were met for `{name}: {:?}\",\n                *self.expectations.lock().unwrap()\n            );\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct MockHook {\n    name: &'static str,\n    called: Arc<Mutex<usize>>,\n    expected_calls: usize,\n}\n\nimpl MockHook {\n    pub fn new(name: &'static str) -> Self {\n        Self {\n            name,\n            called: Arc::new(Mutex::new(0)),\n            expected_calls: 0,\n        }\n    }\n\n    pub fn expect_calls(&mut self, expected_calls: usize) -> &mut Self {\n        self.expected_calls = expected_calls;\n        
self\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn hook_fn(&self) -> impl BeforeAllFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn on_start_fn(&self) -> impl OnStartFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n    #[allow(clippy::missing_panics_doc)]\n    pub fn before_completion_fn(&self) -> impl BeforeCompletionFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn after_completion_fn(&self) -> impl AfterCompletionFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn after_tool_fn(&self) -> impl AfterToolFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = 
called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn before_tool_fn(&self) -> impl BeforeToolFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn message_hook_fn(&self) -> impl MessageHookFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn stop_hook_fn(&self) -> impl OnStopFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn on_stream_fn(&self) -> impl OnStreamFn + use<> {\n        let called = Arc::clone(&self.called);\n        move |_: &Agent, _| {\n            let called = Arc::clone(&called);\n            Box::pin(async move {\n                let mut called = called.lock().unwrap();\n                *called += 1;\n                Ok(())\n            })\n        }\n    }\n}\n\nimpl Drop for MockHook {\n    fn drop(&mut self) {\n        if Arc::strong_count(&self.called) > 1 {\n            return;\n        }\n        let Ok(called) = self.called.lock() else {\n            return;\n        
};\n\n        if *called == self.expected_calls {\n            tracing::debug!(\n                \"[MockHook] `{}` all expectations met; called {} times\",\n                self.name,\n                *called\n            );\n        } else {\n            panic!(\n                \"[MockHook] `{}` was called {} times but expected {}\",\n                self.name, *called, self.expected_calls\n            )\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tools/arg_preprocessor.rs",
    "content": "use std::borrow::Cow;\n\nuse serde_json::{Map, Value};\nuse swiftide_core::chat_completion::ToolCall;\n\n/// Preprocesses arguments for tool calls and tries to fix common errors\n/// This must be infallible and the result is always forwarded to the tool\npub struct ArgPreprocessor;\n\nimpl ArgPreprocessor {\n    pub fn preprocess_tool_calls(tool_calls: &mut [ToolCall]) {\n        for tool_call in tool_calls.iter_mut() {\n            let args = Self::preprocess(tool_call.args());\n\n            if args.as_ref().is_some_and(|a| match a {\n                Cow::Borrowed(_) => false,\n                Cow::Owned(_) => true,\n            }) {\n                tool_call.with_args(args.map(|a| a.to_string()));\n            }\n        }\n    }\n    pub fn preprocess(value: Option<&str>) -> Option<Cow<'_, str>> {\n        Some(take_first_occurrence_in_object(value?))\n    }\n}\n\n/// Strips duplicate keys from JSON objects\nfn take_first_occurrence_in_object(value: &str) -> Cow<'_, str> {\n    let Ok(parsed) = &serde_json::from_str(value) else {\n        return Cow::Borrowed(value);\n    };\n    if let Value::Object(obj) = parsed {\n        let mut new_map = Map::with_capacity(obj.len());\n        for (k, v) in obj {\n            // Only insert if we haven't seen this key yet.\n            new_map.entry(k).or_insert(v.clone());\n        }\n        Cow::Owned(Value::Object(new_map).to_string())\n    } else {\n        // If the top-level isn't even an object, just pass it as is,\n        // or decide how you want to handle that situation.\n        Cow::Borrowed(value)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use serde_json::json;\n\n    #[test]\n    fn test_preprocess_regular_json() {\n        let input = json!({\n            \"key1\": \"value1\",\n            \"key2\": \"value2\"\n        })\n        .to_string();\n        let expected = json!({\n            \"key1\": \"value1\",\n            \"key2\": \"value2\"\n        });\n        let 
result = ArgPreprocessor::preprocess(Some(&input));\n        assert_eq!(result.as_deref(), Some(expected.to_string().as_str()));\n    }\n\n    #[test]\n    fn test_preprocess_json_with_duplicate_keys() {\n        // Use a raw string: the `json!` macro would already collapse\n        // duplicate keys before the input string is produced.\n        let input = r#\"{\"key1\": \"value1\", \"key1\": \"value2\"}\"#;\n        let expected = json!({\n            \"key1\": \"value2\"\n        });\n        let result = ArgPreprocessor::preprocess(Some(input));\n        assert_eq!(result.as_deref(), Some(expected.to_string().as_str()));\n    }\n\n    #[test]\n    fn test_no_preprocess_invalid_json() {\n        let input = \"invalid json\";\n        let result = ArgPreprocessor::preprocess(Some(input));\n        assert_eq!(result.as_deref(), Some(input));\n    }\n\n    #[test]\n    fn test_no_input() {\n        let result = ArgPreprocessor::preprocess(None);\n        assert_eq!(result, None);\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tools/control.rs",
    "content": "//! Control tools manage control flow during agent's lifecycle.\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse schemars::{Schema, schema_for};\nuse std::borrow::Cow;\nuse swiftide_core::{\n    AgentContext, ToolFeedback,\n    chat_completion::{Tool, ToolCall, ToolOutput, ToolSpec, errors::ToolError},\n};\n\n/// `Stop` tool is a default tool used by agents to stop\n#[derive(Clone, Debug, Default)]\npub struct Stop {}\n\n#[async_trait]\nimpl Tool for Stop {\n    async fn invoke(\n        &self,\n        _agent_context: &dyn AgentContext,\n        _tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(ToolOutput::stop())\n    }\n\n    fn name(&self) -> Cow<'_, str> {\n        \"stop\".into()\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        ToolSpec::builder()\n            .name(\"stop\")\n            .description(\"When you have completed, or cannot complete, your task, call this\")\n            .build()\n            .unwrap()\n    }\n}\n\nimpl From<Stop> for Box<dyn Tool> {\n    fn from(val: Stop) -> Self {\n        Box::new(val)\n    }\n}\n\n/// `StopWithArgs` is an alternative stop tool that takes arguments\n#[derive(Clone, Debug)]\npub struct StopWithArgs {\n    parameters_schema: Option<Schema>,\n    expects_output_field: bool,\n}\n\nimpl Default for StopWithArgs {\n    fn default() -> Self {\n        Self {\n            parameters_schema: Some(schema_for!(DefaultStopWithArgsSpec)),\n            expects_output_field: true,\n        }\n    }\n}\n\nimpl StopWithArgs {\n    /// Create a new `StopWithArgs` tool with a custom parameters schema.\n    ///\n    /// When providing a custom schema the full argument payload will be forwarded to the\n    /// stop output without requiring an `output` field wrapper.\n    pub fn with_parameters_schema(schema: Schema) -> Self {\n        Self {\n            parameters_schema: Some(schema),\n            expects_output_field: false,\n        }\n    }\n\n    fn 
parameters_schema(&self) -> Schema {\n        self.parameters_schema\n            .clone()\n            .unwrap_or_else(|| schema_for!(DefaultStopWithArgsSpec))\n    }\n}\n\n#[derive(Clone, Debug, serde::Deserialize, serde::Serialize, schemars::JsonSchema)]\nstruct DefaultStopWithArgsSpec {\n    pub output: String,\n}\n\n#[async_trait]\nimpl Tool for StopWithArgs {\n    async fn invoke(\n        &self,\n        _agent_context: &dyn AgentContext,\n        tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError> {\n        let raw_args = tool_call\n            .args()\n            .ok_or_else(|| ToolError::missing_arguments(\"arguments\"))?;\n\n        let json: serde_json::Value = serde_json::from_str(raw_args)?;\n\n        let output = if self.expects_output_field {\n            json.get(\"output\")\n                .cloned()\n                .ok_or_else(|| ToolError::missing_arguments(\"output\"))?\n        } else {\n            json\n        };\n\n        Ok(ToolOutput::stop_with_args(output))\n    }\n\n    fn name(&self) -> Cow<'_, str> {\n        \"stop\".into()\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        let schema = self.parameters_schema();\n\n        ToolSpec::builder()\n            .name(\"stop\")\n            .description(\"When you have completed your task, call this with your expected output\")\n            .parameters_schema(schema)\n            .build()\n            .unwrap()\n    }\n}\n\nimpl From<StopWithArgs> for Box<dyn Tool> {\n    fn from(val: StopWithArgs) -> Self {\n        Box::new(val)\n    }\n}\n\n#[derive(Clone, Debug, serde::Deserialize, serde::Serialize, schemars::JsonSchema)]\nstruct AgentFailedArgsSpec {\n    pub reason: String,\n}\n\n/// A utility tool that lets an agent decide that it has failed\n///\n/// This will _NOT_ make the agent return an error; instead, inspect the agent's stop reason.\n#[derive(Clone, Debug, serde::Deserialize, serde::Serialize)]\npub struct AgentCanFail {\n    parameters_schema: 
Option<Schema>,\n    expects_reason_field: bool,\n}\n\nimpl Default for AgentCanFail {\n    fn default() -> Self {\n        Self {\n            parameters_schema: Some(schema_for!(AgentFailedArgsSpec)),\n            expects_reason_field: true,\n        }\n    }\n}\n\nimpl AgentCanFail {\n    /// Create a new `AgentCanFail` tool with a custom parameters schema.\n    ///\n    /// When providing a custom schema the full argument payload will be forwarded to the failure\n    /// reason without requiring a `reason` field wrapper.\n    pub fn with_parameters_schema(schema: Schema) -> Self {\n        Self {\n            parameters_schema: Some(schema),\n            expects_reason_field: false,\n        }\n    }\n\n    fn parameters_schema(&self) -> Schema {\n        self.parameters_schema\n            .clone()\n            .unwrap_or_else(|| schema_for!(AgentFailedArgsSpec))\n    }\n}\n\n#[async_trait]\nimpl Tool for AgentCanFail {\n    async fn invoke(\n        &self,\n        _agent_context: &dyn AgentContext,\n        tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError> {\n        let raw_args = tool_call.args().ok_or_else(|| {\n            if self.expects_reason_field {\n                ToolError::missing_arguments(\"reason\")\n            } else {\n                ToolError::missing_arguments(\"arguments\")\n            }\n        })?;\n\n        let reason = if self.expects_reason_field {\n            let args: AgentFailedArgsSpec = serde_json::from_str(raw_args)?;\n            args.reason\n        } else {\n            let json: serde_json::Value = serde_json::from_str(raw_args)?;\n            json.to_string()\n        };\n\n        Ok(ToolOutput::agent_failed(reason))\n    }\n\n    fn name(&self) -> Cow<'_, str> {\n        \"task_failed\".into()\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        let schema = self.parameters_schema();\n\n        ToolSpec::builder()\n            .name(\"task_failed\")\n            .description(\"If you cannot complete 
your task, or have otherwise failed, call this with your reason for failure\")\n            .parameters_schema(schema)\n            .build()\n            .unwrap()\n    }\n}\n\nimpl From<AgentCanFail> for Box<dyn Tool> {\n    fn from(val: AgentCanFail) -> Self {\n        Box::new(val)\n    }\n}\n\n#[derive(Clone)]\n/// Wraps a tool and requires approval before it can be used\npub struct ApprovalRequired(pub Box<dyn Tool>);\n\nimpl ApprovalRequired {\n    /// Creates a new `ApprovalRequired` tool\n    pub fn new(tool: impl Tool + 'static) -> Self {\n        Self(Box::new(tool))\n    }\n}\n\n#[async_trait]\nimpl Tool for ApprovalRequired {\n    async fn invoke(\n        &self,\n        context: &dyn AgentContext,\n        tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError> {\n        if let Some(feedback) = context.has_received_feedback(tool_call).await {\n            match feedback {\n                ToolFeedback::Approved { .. } => return self.0.invoke(context, tool_call).await,\n                ToolFeedback::Refused { .. 
} => {\n                    return Ok(ToolOutput::text(\"This tool call was refused\"));\n                }\n            }\n        }\n\n        Ok(ToolOutput::FeedbackRequired(None))\n    }\n\n    fn name(&self) -> Cow<'_, str> {\n        self.0.name()\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        self.0.tool_spec()\n    }\n}\n\nimpl From<ApprovalRequired> for Box<dyn Tool> {\n    fn from(val: ApprovalRequired) -> Self {\n        Box::new(val)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use schemars::schema_for;\n    use serde_json::json;\n\n    fn dummy_tool_call(name: &str, args: Option<&str>) -> ToolCall {\n        let mut builder = ToolCall::builder().name(name).id(\"1\").to_owned();\n        if let Some(args) = args {\n            builder.args(args.to_string());\n        }\n        builder.build().unwrap()\n    }\n\n    #[tokio::test]\n    async fn test_stop_tool() {\n        let stop = Stop::default();\n        let ctx = ();\n        let tool_call = dummy_tool_call(\"stop\", None);\n        let out = stop.invoke(&ctx, &tool_call).await.unwrap();\n        assert_eq!(out, ToolOutput::stop());\n    }\n\n    #[tokio::test]\n    async fn test_stop_with_args_tool() {\n        let tool = StopWithArgs::default();\n        let ctx = ();\n        let args = r#\"{\"output\":\"expected result\"}\"#;\n        let tool_call = dummy_tool_call(\"stop\", Some(args));\n        let out = tool.invoke(&ctx, &tool_call).await.unwrap();\n        assert_eq!(out, ToolOutput::stop_with_args(\"expected result\"));\n    }\n\n    #[tokio::test]\n    async fn test_agent_can_fail_tool() {\n        let tool = AgentCanFail::default();\n        let ctx = ();\n        let args = r#\"{\"reason\":\"something went wrong\"}\"#;\n        let tool_call = dummy_tool_call(\"task_failed\", Some(args));\n        let out = tool.invoke(&ctx, &tool_call).await.unwrap();\n        assert_eq!(out, ToolOutput::agent_failed(\"something went wrong\"));\n    }\n\n    #[derive(Clone, 
Debug, serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    struct CustomFailArgs {\n        code: i32,\n        message: String,\n    }\n\n    #[test]\n    fn test_agent_can_fail_custom_schema_in_spec() {\n        let schema = schema_for!(CustomFailArgs);\n        let tool = AgentCanFail::with_parameters_schema(schema.clone());\n        let spec = tool.tool_spec();\n        assert_eq!(spec.parameters_schema, Some(schema));\n    }\n\n    #[tokio::test]\n    async fn test_agent_can_fail_custom_schema_forwards_payload() {\n        let schema = schema_for!(CustomFailArgs);\n        let tool = AgentCanFail::with_parameters_schema(schema);\n        let ctx = ();\n        let args = r#\"{\"code\":7,\"message\":\"error\"}\"#;\n        let tool_call = dummy_tool_call(\"task_failed\", Some(args));\n        let out = tool.invoke(&ctx, &tool_call).await.unwrap();\n        assert_eq!(\n            out,\n            ToolOutput::agent_failed(json!({\"code\":7,\"message\":\"error\"}).to_string())\n        );\n    }\n\n    #[test]\n    fn test_agent_can_fail_default_schema_matches_previous() {\n        let tool = AgentCanFail::default();\n        let spec = tool.tool_spec();\n        let expected = schema_for!(AgentFailedArgsSpec);\n        assert_eq!(spec.parameters_schema, Some(expected));\n    }\n\n    #[tokio::test]\n    async fn test_approval_required_feedback_required() {\n        let stop = Stop::default();\n        let tool = ApprovalRequired::new(stop);\n        let ctx = ();\n        let tool_call = dummy_tool_call(\"stop\", None);\n        let out = tool.invoke(&ctx, &tool_call).await.unwrap();\n\n        // On unit; existing feedback is always present\n        assert_eq!(out, ToolOutput::Stop(None));\n    }\n\n    #[derive(Clone, Debug, serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    struct CustomStopArgs {\n        value: i32,\n    }\n\n    #[test]\n    fn test_stop_with_args_custom_schema_in_spec() {\n        let schema = 
schema_for!(CustomStopArgs);\n        let tool = StopWithArgs::with_parameters_schema(schema.clone());\n        let spec = tool.tool_spec();\n        assert_eq!(spec.parameters_schema, Some(schema));\n    }\n\n    #[tokio::test]\n    async fn test_stop_with_args_custom_schema_forwards_payload() {\n        let schema = schema_for!(CustomStopArgs);\n        let tool = StopWithArgs::with_parameters_schema(schema);\n        let ctx = ();\n        let args = r#\"{\"value\":42}\"#;\n        let tool_call = dummy_tool_call(\"stop\", Some(args));\n        let out = tool.invoke(&ctx, &tool_call).await.unwrap();\n        assert_eq!(out, ToolOutput::stop_with_args(json!({\"value\": 42})));\n    }\n\n    #[test]\n    fn test_stop_with_args_default_schema_matches_previous() {\n        let tool = StopWithArgs::default();\n        let spec = tool.tool_spec();\n        let expected = schema_for!(DefaultStopWithArgsSpec);\n        assert_eq!(spec.parameters_schema, Some(expected));\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tools/local_executor.rs",
    "content": "//! Local executor for running tools on the local machine.\n//!\n//! By default will use the current directory as the working directory.\nuse std::{\n    collections::HashMap,\n    path::{Path, PathBuf},\n    process::Stdio,\n    time::Duration,\n};\n\nuse anyhow::{Context as _, Result};\nuse async_trait::async_trait;\nuse derive_builder::Builder;\nuse swiftide_core::{Command, CommandError, CommandOutput, Loader, ToolExecutor};\nuse swiftide_indexing::loaders::FileLoader;\nuse tokio::{\n    io::{AsyncBufReadExt as _, AsyncWriteExt as _},\n    task::JoinHandle,\n    time,\n};\n\n#[derive(Debug, Clone, Builder)]\npub struct LocalExecutor {\n    #[builder(default = \".\".into(), setter(into))]\n    workdir: PathBuf,\n\n    #[builder(default)]\n    default_timeout: Option<Duration>,\n\n    /// Clears env variables before executing commands.\n    #[builder(default)]\n    pub(crate) env_clear: bool,\n    /// Remove these environment variables before executing commands.\n    #[builder(default, setter(into))]\n    pub(crate) env_remove: Vec<String>,\n    ///  Set these environment variables before executing commands.\n    #[builder(default, setter(into))]\n    pub(crate) envs: HashMap<String, String>,\n}\n\nimpl Default for LocalExecutor {\n    fn default() -> Self {\n        LocalExecutor {\n            workdir: \".\".into(),\n            default_timeout: None,\n            env_clear: false,\n            env_remove: Vec::new(),\n            envs: HashMap::new(),\n        }\n    }\n}\n\nimpl LocalExecutor {\n    pub fn new(workdir: impl Into<PathBuf>) -> Self {\n        LocalExecutor {\n            workdir: workdir.into(),\n            default_timeout: None,\n            env_clear: false,\n            env_remove: Vec::new(),\n            envs: HashMap::new(),\n        }\n    }\n\n    pub fn builder() -> LocalExecutorBuilder {\n        LocalExecutorBuilder::default()\n    }\n\n    fn resolve_workdir(&self, cmd: &Command) -> PathBuf {\n        match 
cmd.current_dir_path() {\n            Some(path) if path.is_absolute() => path.to_path_buf(),\n            Some(path) => self.workdir.join(path),\n            None => self.workdir.clone(),\n        }\n    }\n\n    fn resolve_timeout(&self, cmd: &Command) -> Option<Duration> {\n        cmd.timeout_duration().copied().or(self.default_timeout)\n    }\n\n    #[allow(clippy::too_many_lines)]\n    async fn exec_shell(\n        &self,\n        cmd: &str,\n        workdir: &Path,\n        timeout: Option<Duration>,\n    ) -> Result<CommandOutput, CommandError> {\n        let lines: Vec<&str> = cmd.lines().collect();\n        let mut child = if let Some(first_line) = lines.first()\n            && first_line.starts_with(\"#!\")\n        {\n            let interpreter = first_line.trim_start_matches(\"#!/usr/bin/env \").trim();\n            tracing::info!(interpreter, \"detected shebang; running as script\");\n\n            let mut command = tokio::process::Command::new(interpreter);\n\n            if self.env_clear {\n                tracing::info!(\"clearing environment variables\");\n                command.env_clear();\n            }\n\n            for var in &self.env_remove {\n                tracing::info!(var, \"clearing environment variable\");\n                command.env_remove(var);\n            }\n\n            for (key, value) in &self.envs {\n                tracing::info!(key, \"setting environment variable\");\n                command.env(key, value);\n            }\n\n            let mut child = command\n                .current_dir(workdir)\n                .stdin(Stdio::piped())\n                .stdout(Stdio::piped())\n                .stderr(Stdio::piped())\n                .spawn()?;\n\n            if let Some(mut stdin) = child.stdin.take() {\n                let body = lines[1..].join(\"\\n\");\n                stdin.write_all(body.as_bytes()).await?;\n            }\n\n            child\n        } else {\n            tracing::info!(\"no shebang 
detected; running as command\");\n\n            let mut command = tokio::process::Command::new(\"sh\");\n\n            // Treat as shell command\n            command.arg(\"-c\").arg(cmd).current_dir(workdir);\n\n            if self.env_clear {\n                tracing::info!(\"clearing environment variables\");\n                command.env_clear();\n            }\n\n            for var in &self.env_remove {\n                tracing::info!(var, \"clearing environment variable\");\n                command.env_remove(var);\n            }\n\n            for (key, value) in &self.envs {\n                tracing::info!(key, \"setting environment variable\");\n                command.env(key, value);\n            }\n            command\n                .current_dir(workdir)\n                .stdin(Stdio::null())\n                .stdout(Stdio::piped())\n                .stderr(Stdio::piped())\n                .spawn()?\n        };\n\n        let stdout_task = if let Some(stdout) = child.stdout.take() {\n            Some(tokio::spawn(async move {\n                let mut lines = tokio::io::BufReader::new(stdout).lines();\n                let mut out = Vec::new();\n                while let Ok(Some(line)) = lines.next_line().await {\n                    out.push(line);\n                }\n                out\n            }))\n        } else {\n            tracing::warn!(\"Command has no stdout\");\n            None\n        };\n\n        let stderr_task = if let Some(stderr) = child.stderr.take() {\n            Some(tokio::spawn(async move {\n                let mut lines = tokio::io::BufReader::new(stderr).lines();\n                let mut out = Vec::new();\n                while let Ok(Some(line)) = lines.next_line().await {\n                    out.push(line);\n                }\n                out\n            }))\n        } else {\n            tracing::warn!(\"Command has no stderr\");\n            None\n        };\n\n        let status = match timeout {\n            
Some(limit) => {\n                if let Ok(result) = time::timeout(limit, child.wait()).await {\n                    result.map_err(|err| CommandError::ExecutorError(err.into()))?\n                } else {\n                    tracing::warn!(?limit, \"command exceeded timeout; terminating\");\n                    if let Err(err) = child.start_kill() {\n                        tracing::warn!(?err, \"failed to start kill on timed out command\");\n                    }\n                    if let Err(err) = child.wait().await {\n                        tracing::warn!(?err, \"failed to reap command after timeout\");\n                    }\n\n                    let (stdout, stderr) =\n                        Self::collect_process_output(stdout_task, stderr_task).await;\n                    let cmd_output = Self::merge_output(&stdout, &stderr);\n\n                    return Err(CommandError::TimedOut {\n                        timeout: limit,\n                        output: cmd_output,\n                    });\n                }\n            }\n            None => child\n                .wait()\n                .await\n                .map_err(|err| CommandError::ExecutorError(err.into()))?,\n        };\n\n        let (stdout, stderr) = Self::collect_process_output(stdout_task, stderr_task).await;\n        let cmd_output = Self::merge_output(&stdout, &stderr);\n\n        if status.success() {\n            Ok(cmd_output)\n        } else {\n            Err(CommandError::NonZeroExit(cmd_output))\n        }\n    }\n\n    async fn exec_read_file(\n        &self,\n        workdir: &Path,\n        path: &Path,\n        timeout: Option<Duration>,\n    ) -> Result<CommandOutput, CommandError> {\n        let path = if path.is_absolute() {\n            path.to_path_buf()\n        } else {\n            workdir.join(path)\n        };\n        let read_future = fs_err::tokio::read(&path);\n        let output = match timeout {\n            Some(limit) => match time::timeout(limit, 
read_future).await {\n                Ok(result) => result?,\n                Err(_) => {\n                    return Err(CommandError::TimedOut {\n                        timeout: limit,\n                        output: CommandOutput::empty(),\n                    });\n                }\n            },\n            None => read_future.await?,\n        };\n\n        Ok(String::from_utf8(output)\n            .context(\"Failed to parse read file output\")?\n            .into())\n    }\n\n    async fn exec_write_file(\n        &self,\n        workdir: &Path,\n        path: &Path,\n        content: &str,\n        timeout: Option<Duration>,\n    ) -> Result<CommandOutput, CommandError> {\n        let path = if path.is_absolute() {\n            path.to_path_buf()\n        } else {\n            workdir.join(path)\n        };\n        if let Some(parent) = path.parent() {\n            let _ = fs_err::tokio::create_dir_all(parent).await;\n        }\n        let write_future = fs_err::tokio::write(&path, content);\n        match timeout {\n            Some(limit) => match time::timeout(limit, write_future).await {\n                Ok(result) => result?,\n                Err(_) => {\n                    return Err(CommandError::TimedOut {\n                        timeout: limit,\n                        output: CommandOutput::empty(),\n                    });\n                }\n            },\n            None => write_future.await?,\n        }\n\n        Ok(CommandOutput::empty())\n    }\n\n    async fn collect_process_output(\n        stdout_task: Option<JoinHandle<Vec<String>>>,\n        stderr_task: Option<JoinHandle<Vec<String>>>,\n    ) -> (Vec<String>, Vec<String>) {\n        let stdout = match stdout_task {\n            Some(task) => match task.await {\n                Ok(lines) => lines,\n                Err(err) => {\n                    tracing::warn!(?err, \"failed to collect stdout from command\");\n                    Vec::new()\n                }\n            
},\n            None => Vec::new(),\n        };\n\n        let stderr = match stderr_task {\n            Some(task) => match task.await {\n                Ok(lines) => lines,\n                Err(err) => {\n                    tracing::warn!(?err, \"failed to collect stderr from command\");\n                    Vec::new()\n                }\n            },\n            None => Vec::new(),\n        };\n\n        (stdout, stderr)\n    }\n\n    fn merge_output(stdout: &[String], stderr: &[String]) -> CommandOutput {\n        stdout\n            .iter()\n            .chain(stderr.iter())\n            .cloned()\n            .collect::<Vec<_>>()\n            .join(\"\\n\")\n            .into()\n    }\n}\n\n#[async_trait]\nimpl ToolExecutor for LocalExecutor {\n    /// Execute a `Command` on the local machine\n    #[tracing::instrument(skip(self))]\n    async fn exec_cmd(&self, cmd: &Command) -> Result<swiftide_core::CommandOutput, CommandError> {\n        let workdir = self.resolve_workdir(cmd);\n        let timeout = self.resolve_timeout(cmd);\n        match cmd {\n            Command::Shell { command, .. } => self.exec_shell(command, &workdir, timeout).await,\n            Command::ReadFile { path, .. } => self.exec_read_file(&workdir, path, timeout).await,\n            Command::WriteFile { path, content, .. 
} => {\n                self\n                    .exec_write_file(&workdir, path, content, timeout)\n                    .await\n            }\n            _ => unimplemented!(\"Unsupported command: {cmd:?}\"),\n        }\n    }\n\n    async fn stream_files(\n        &self,\n        path: &Path,\n        extensions: Option<Vec<String>>,\n    ) -> Result<swiftide_core::indexing::IndexingStream<String>> {\n        let mut loader = FileLoader::new(path);\n\n        if let Some(extensions) = extensions {\n            loader = loader.with_extensions(&extensions);\n        }\n\n        Ok(loader.into_stream())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use futures_util::StreamExt as _;\n    use indoc::indoc;\n    use std::{path::Path, sync::Arc, time::Duration};\n    use swiftide_core::{Command, ExecutorExt, ToolExecutor};\n    use temp_dir::TempDir;\n\n    #[tokio::test]\n    async fn test_local_executor_write_and_read_file() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        // Define the file path and content\n        let file_path = temp_path.join(\"test_file.txt\");\n        let file_content = \"Hello, world!\";\n\n        // Write a shell command to create a file with the specified content\n        let write_cmd =\n            Command::shell(format!(\"echo '{}' > {}\", file_content, file_path.display()));\n\n        // Execute the write command\n        executor.exec_cmd(&write_cmd).await?;\n\n        // Verify that the file was created successfully\n        assert!(file_path.exists());\n\n        // Write a shell command to read the file's content\n        let read_cmd = Command::shell(format!(\"cat {}\", 
file_path.display()));\n\n        // Execute the read command\n        let output = executor.exec_cmd(&read_cmd).await?;\n\n        // Verify that the content read from the file matches the expected content\n        assert_eq!(output.to_string(), format!(\"{file_content}\"));\n\n        let output = executor\n            .exec_cmd(&Command::read_file(&file_path))\n            .await\n            .unwrap();\n        assert_eq!(output.to_string(), format!(\"{file_content}\\n\"));\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_echo_hello_world() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        // Define the echo command\n        let echo_cmd = Command::shell(\"echo 'hello world'\");\n\n        // Execute the echo command\n        let output = executor.exec_cmd(&echo_cmd).await?;\n\n        // Verify that the output matches the expected content\n        assert_eq!(output.to_string().trim(), \"hello world\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_shell_timeout() -> anyhow::Result<()> {\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        let mut cmd = Command::shell(\"echo ready && sleep 1 && echo done\");\n        cmd.timeout(Duration::from_millis(100));\n\n        match executor.exec_cmd(&cmd).await {\n            Err(CommandError::TimedOut { timeout, output }) => {\n                assert_eq!(timeout, Duration::from_millis(100));\n                assert!(output.to_string().contains(\"ready\"));\n       
     }\n            other => anyhow::bail!(\"expected timeout error, got {other:?}\"),\n        }\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_default_timeout_applies() -> anyhow::Result<()> {\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        let executor = LocalExecutorBuilder::default()\n            .workdir(temp_path.to_path_buf())\n            .default_timeout(Some(Duration::from_millis(100)))\n            .build()?;\n\n        match executor.exec_cmd(&Command::shell(\"sleep 1\")).await {\n            Err(CommandError::TimedOut { timeout, output }) => {\n                assert_eq!(timeout, Duration::from_millis(100));\n                assert!(output.to_string().is_empty());\n            }\n            other => anyhow::bail!(\"expected default timeout, got {other:?}\"),\n        }\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_clear_env() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            env_clear: true,\n            ..Default::default()\n        };\n\n        // Define the printenv command\n        let printenv_cmd = Command::shell(\"printenv\");\n\n        // Execute the printenv command\n        let output = executor.exec_cmd(&printenv_cmd).await?.to_string();\n\n        // Verify that the inherited environment was cleared\n        assert!(!output.contains(\"CARGO_PKG_VERSION\"), \"{output}\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_add_env() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n 
       // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            envs: HashMap::from([(\"TEST_ENV\".to_string(), \"HELLO\".to_string())]),\n            ..Default::default()\n        };\n\n        // Define the printenv command\n        let printenv_cmd = Command::shell(\"printenv\");\n\n        // Execute the printenv command\n        let output = executor.exec_cmd(&printenv_cmd).await?.to_string();\n\n        // Verify that the added variable is present\n        assert!(output.contains(\"TEST_ENV=HELLO\"), \"{output}\");\n        // Double-check that the inherited environment is still included by default\n        assert!(output.contains(\"CARGO_PKG_VERSION\"), \"{output}\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_env_remove() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            env_remove: vec![\"CARGO_PKG_VERSION\".to_string()],\n            ..Default::default()\n        };\n\n        // Define the printenv command\n        let printenv_cmd = Command::shell(\"printenv\");\n\n        // Execute the printenv command\n        let output = executor.exec_cmd(&printenv_cmd).await?.to_string();\n\n        // Verify that the removed variable is absent\n        assert!(!output.contains(\"CARGO_PKG_VERSION=\"), \"{output}\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_run_shebang() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate 
LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        let script = r#\"#!/usr/bin/env python3\nprint(\"hello from python\")\nprint(1 + 2)\"#;\n\n        // Execute the script; the shebang line selects the interpreter\n        let output = executor\n            .exec_cmd(&Command::shell(script))\n            .await?\n            .to_string();\n\n        // Verify that the output matches the expected content\n        assert!(output.contains(\"hello from python\"));\n        assert!(output.contains('3'));\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_multiline_with_quotes() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        // Define the file path and content\n        let file_path = \"test_file2.txt\";\n        let file_content = indoc! 
{r#\"\n            fn main() {\n                println!(\"Hello, world!\");\n            }\n        \"#};\n\n        // Write a shell command to create a file with the specified content\n        let write_cmd = Command::shell(format!(\"echo '{file_content}' > {file_path}\"));\n\n        // Execute the write command\n        executor.exec_cmd(&write_cmd).await?;\n\n        // Write a shell command to read the file's content\n        let read_cmd = Command::shell(format!(\"cat {file_path}\"));\n\n        // Execute the read command\n        let output = executor.exec_cmd(&read_cmd).await?;\n\n        // Verify that the content read from the file matches the expected content\n        assert_eq!(output.to_string(), format!(\"{file_content}\"));\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_write_and_read_file_commands() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        // Define the file path and content\n        let file_path = temp_path.join(\"test_file.txt\");\n        let file_content = \"Hello, world!\";\n\n        // Assert that the file does not exist and it gives the correct error\n        let cmd = Command::read_file(file_path.clone());\n        let result = executor.exec_cmd(&cmd).await;\n\n        if let Err(err) = result {\n            assert!(matches!(err, CommandError::NonZeroExit(..)));\n        } else {\n            panic!(\"Expected error but got {result:?}\");\n        }\n\n        // Create a write command\n        let write_cmd = Command::write_file(file_path.clone(), file_content.to_string());\n\n        // Execute the write command\n        executor.exec_cmd(&write_cmd).await?;\n\n        // 
Verify that the file was created successfully\n        assert!(file_path.exists());\n\n        // Create a read command\n        let read_cmd = Command::read_file(file_path.clone());\n\n        // Execute the read command\n        let output = executor.exec_cmd(&read_cmd).await?.output;\n\n        // Verify that the content read from the file matches the expected content\n        assert_eq!(output, file_content);\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_stream_files() -> anyhow::Result<()> {\n        // Create a temporary directory\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        // Create some test files in the temporary directory\n        fs_err::write(temp_path.join(\"file1.txt\"), \"Content of file 1\")?;\n        fs_err::write(temp_path.join(\"file2.txt\"), \"Content of file 2\")?;\n        fs_err::write(temp_path.join(\"file3.rs\"), \"Content of file 3\")?;\n\n        // Instantiate LocalExecutor with the temporary directory as workdir\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        // Stream files with no extensions filter\n        let stream = executor.stream_files(temp_path, None).await?;\n        let files: Vec<_> = stream.collect().await;\n\n        assert_eq!(files.len(), 3);\n\n        // Stream files with a specific extension filter\n        let stream = executor\n            .stream_files(temp_path, Some(vec![\"txt\".to_string()]))\n            .await?;\n        let txt_files: Vec<_> = stream.collect().await;\n\n        assert_eq!(txt_files.len(), 2);\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_honors_workdir() -> anyhow::Result<()> {\n        use std::fs;\n        use temp_dir::TempDir;\n\n        // 1. 
Create a temp dir and instantiate executor\n        let temp_dir = TempDir::new()?;\n        let temp_path = temp_dir.path();\n\n        let executor = LocalExecutor {\n            workdir: temp_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        // 2. Run a shell command in workdir and check output is workdir\n        let pwd_cmd = Command::shell(\"pwd\");\n        let pwd_output = executor.exec_cmd(&pwd_cmd).await?.to_string();\n        let pwd_path = std::fs::canonicalize(pwd_output.trim())?;\n        let temp_path = std::fs::canonicalize(temp_path)?;\n        assert_eq!(pwd_path, temp_path);\n\n        // 3. Write a file using WriteFile (should land in workdir)\n        let fname = \"workdir_check.txt\";\n        let write_cmd = Command::write_file(fname, \"test123\");\n        executor.exec_cmd(&write_cmd).await?;\n\n        // 4. Assert file exists in workdir, not current dir\n        let expected_path = temp_path.join(fname);\n        assert!(expected_path.exists());\n        assert!(!Path::new(fname).exists());\n\n        // 5. Write/read using ReadFile\n        let read_cmd = Command::read_file(fname);\n        let read_output = executor.exec_cmd(&read_cmd).await?.to_string();\n        assert_eq!(read_output.trim(), \"test123\");\n\n        // 6. 
Clean up\n        fs::remove_file(&expected_path)?;\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_command_current_dir() -> anyhow::Result<()> {\n        use std::fs;\n        use temp_dir::TempDir;\n\n        let temp_dir = TempDir::new()?;\n        let base_path = temp_dir.path();\n\n        let executor = LocalExecutor {\n            workdir: base_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        let nested_dir = base_path.join(\"nested\");\n        fs::create_dir_all(&nested_dir)?;\n\n        let mut pwd_cmd = Command::shell(\"pwd\");\n        pwd_cmd.current_dir(Path::new(\"nested\"));\n        let pwd_output = executor.exec_cmd(&pwd_cmd).await?.to_string();\n        let pwd_path = std::fs::canonicalize(pwd_output.trim())?;\n        assert_eq!(pwd_path, std::fs::canonicalize(&nested_dir)?);\n\n        let mut write_cmd = Command::write_file(\"file.txt\", \"hello\");\n        write_cmd.current_dir(Path::new(\"nested\"));\n        executor.exec_cmd(&write_cmd).await?;\n\n        assert!(!base_path.join(\"file.txt\").exists());\n        assert!(nested_dir.join(\"file.txt\").exists());\n\n        let mut read_cmd = Command::read_file(\"file.txt\");\n        read_cmd.current_dir(Path::new(\"nested\"));\n        let read_output = executor.exec_cmd(&read_cmd).await?.to_string();\n        assert_eq!(read_output.trim(), \"hello\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_current_dir() -> anyhow::Result<()> {\n        let temp_dir = TempDir::new()?;\n        let base_path = temp_dir.path();\n\n        let executor = LocalExecutor {\n            workdir: base_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        let nested = executor.scoped(\"nested\");\n        nested\n            .exec_cmd(&Command::write_file(\"file.txt\", \"hello\"))\n            .await?;\n\n        assert!(!base_path.join(\"file.txt\").exists());\n        
assert!(base_path.join(\"nested\").join(\"file.txt\").exists());\n        assert_eq!(executor.workdir, base_path);\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_local_executor_current_dir_dyn() -> anyhow::Result<()> {\n        let temp_dir = TempDir::new()?;\n        let base_path = temp_dir.path();\n\n        let executor = LocalExecutor {\n            workdir: base_path.to_path_buf(),\n            ..Default::default()\n        };\n\n        let dyn_exec: Arc<dyn swiftide_core::ToolExecutor> = Arc::new(executor.clone());\n        let nested = dyn_exec.scoped(\"nested\");\n\n        nested\n            .exec_cmd(&Command::write_file(\"nested_file.txt\", \"hello\"))\n            .await?;\n\n        assert!(base_path.join(\"nested\").join(\"nested_file.txt\").exists());\n        assert!(!base_path.join(\"nested_file.txt\").exists());\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tools/mcp.rs",
    "content": "//! Add tools provided by an MCP server to an agent\n//!\n//! Uses the `rmcp` crate to connect to an MCP server, list the available tools, and invoke them\n//!\n//! Supports any transport that the `rmcp` crate supports\nuse std::borrow::Cow;\nuse std::sync::Arc;\n\nuse anyhow::{Context as _, Result};\nuse async_trait::async_trait;\nuse rmcp::RoleClient;\nuse rmcp::ServiceExt;\nuse rmcp::model::{CallToolRequestParams, ClientInfo, Implementation, InitializeRequestParams};\nuse rmcp::service::RunningService;\nuse rmcp::transport::IntoTransport;\nuse schemars::Schema;\nuse serde::{Deserialize, Serialize};\nuse swiftide_core::CommandError;\nuse swiftide_core::chat_completion::ToolCall;\nuse swiftide_core::{\n    Tool, ToolBox,\n    chat_completion::{ToolSpec, errors::ToolError},\n};\nuse tokio::sync::RwLock;\n\n/// A filter to apply to the available tools\n#[derive(Clone, Debug, Serialize, Deserialize)]\npub enum ToolFilter {\n    Blacklist(Vec<String>),\n    Whitelist(Vec<String>),\n}\n\n/// Connects to an MCP server and provides tools at runtime to the agent.\n///\n/// WARN: The rmcp crate has a quirky feature to serve from `()`. 
This does not work; serve from\n/// `ClientInfo` instead, or from the transport, and `Swiftide` will handle the rest.\n#[derive(Clone)]\npub struct McpToolbox {\n    service: Arc<RwLock<Option<RunningService<RoleClient, InitializeRequestParams>>>>,\n\n    /// Optional human-readable name for the toolbox\n    name: Option<String>,\n\n    filter: Arc<Option<ToolFilter>>,\n}\n\nimpl McpToolbox {\n    /// Blacklist tools by name; the agent will not be able to use these tools\n    pub fn with_blacklist<ITEM: Into<String>, I: IntoIterator<Item = ITEM>>(\n        &mut self,\n        blacklist: I,\n    ) -> &mut Self {\n        let list = blacklist.into_iter().map(Into::into).collect::<Vec<_>>();\n        self.filter = Some(ToolFilter::Blacklist(list)).into();\n        self\n    }\n\n    /// Whitelist tools by name; the agent will only be able to use these tools\n    pub fn with_whitelist<ITEM: Into<String>, I: IntoIterator<Item = ITEM>>(\n        &mut self,\n        whitelist: I,\n    ) -> &mut Self {\n        let list = whitelist.into_iter().map(Into::into).collect::<Vec<_>>();\n        self.filter = Some(ToolFilter::Whitelist(list)).into();\n        self\n    }\n\n    /// Apply a custom filter to the tools\n    pub fn with_filter(&mut self, filter: ToolFilter) -> &mut Self {\n        self.filter = Some(filter).into();\n        self\n    }\n\n    /// Apply an optional name to the toolbox\n    pub fn with_name(&mut self, name: impl Into<String>) -> &mut Self {\n        self.name = Some(name.into());\n        self\n    }\n\n    pub fn name(&self) -> &str {\n        self.name.as_deref().unwrap_or(\"MCP Toolbox\")\n    }\n\n    /// Create a new toolbox from a transport\n    ///\n    /// # Errors\n    ///\n    /// Errors if the transport fails to connect\n    pub async fn try_from_transport<\n        E: std::error::Error + From<std::io::Error> + Send + Sync + 'static,\n        A,\n    >(\n        transport: impl IntoTransport<RoleClient, E, A>,\n    ) -> Result<Self> {\n       
 let info = Self::default_client_info();\n        let service = Arc::new(RwLock::new(Some(info.serve(transport).await?)));\n\n        Ok(Self {\n            service,\n            filter: None.into(),\n            name: None,\n        })\n    }\n\n    /// Create a new toolbox from a running service\n    pub fn from_running_service(\n        service: RunningService<RoleClient, InitializeRequestParams>,\n    ) -> Self {\n        Self {\n            service: Arc::new(RwLock::new(Some(service))),\n            filter: None.into(),\n            name: None,\n        }\n    }\n\n    fn default_client_info() -> ClientInfo {\n        ClientInfo {\n            client_info: Implementation {\n                name: \"swiftide\".into(),\n                version: env!(\"CARGO_PKG_VERSION\").into(),\n                title: None,\n                description: None,\n                icons: None,\n                website_url: None,\n            },\n            ..Default::default()\n        }\n    }\n\n    /// Disconnects from the MCP server if it is running\n    ///\n    /// If it is not running, an Ok is returned and it logs a tracing message\n    ///\n    /// # Errors\n    ///\n    /// Errors if the service is running but cannot be stopped\n    pub async fn cancel(&mut self) -> Result<()> {\n        let mut lock = self.service.write().await;\n        let Some(service) = std::mem::take(&mut *lock) else {\n            tracing::warn!(\"mcp server is not running\");\n            return Ok(());\n        };\n\n        tracing::debug!(name = self.name(), \"Stopping mcp server\");\n\n        service\n            .cancel()\n            .await\n            .context(\"failed to stop mcp server\")?;\n\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl ToolBox for McpToolbox {\n    #[tracing::instrument(skip_all)]\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        let Some(service) = &*self.service.read().await else {\n            anyhow::bail!(\"No service 
available\");\n        };\n        tracing::debug!(name = self.name(), \"Connecting to mcp server\");\n        let peer_info = service.peer_info();\n        tracing::debug!(?peer_info, name = self.name(), \"Connected to mcp server\");\n\n        tracing::debug!(name = self.name(), \"Listing tools from mcp server\");\n        let tools = service\n            .list_all_tools()\n            .await\n            .context(\"Failed to list tools\")?;\n\n        let filter = self.filter.as_ref();\n        let mut server_name = peer_info\n            .map_or(\"mcp\", |info| info.server_info.name.as_str())\n            .trim()\n            .to_owned();\n        if server_name.is_empty() {\n            server_name = \"mcp\".into();\n        }\n\n        let tools = tools\n            .into_iter()\n            .filter(|tool| match &filter {\n                Some(ToolFilter::Blacklist(blacklist)) => {\n                    !blacklist.iter().any(|blocked| blocked == &tool.name)\n                }\n                Some(ToolFilter::Whitelist(whitelist)) => {\n                    whitelist.iter().any(|allowed| allowed == &tool.name)\n                }\n                None => true,\n            })\n            .map(|tool| {\n                let schema_value = tool.schema_as_json_value();\n                tracing::trace!(\n                    schema = ?schema_value,\n                    \"Parsing tool input schema for {}\",\n                    tool.name\n                );\n\n                let mut tool_spec_builder = ToolSpec::builder();\n\n                // Preallocate to avoid repeated string growth.\n                let mut registered_name =\n                    String::with_capacity(server_name.len() + tool.name.len() + 1);\n                registered_name.push_str(&server_name);\n                registered_name.push(':');\n                registered_name.push_str(&tool.name);\n\n                tool_spec_builder.name(registered_name.clone());\n                if let 
Some(description) = tool.description {\n                    tool_spec_builder.description(description);\n                }\n\n                match schema_value {\n                    serde_json::Value::Null => {}\n                    value => {\n                        let schema: Schema = serde_json::from_value(value)\n                            .context(\"Failed to parse tool input schema\")?;\n                        tool_spec_builder.parameters_schema(schema);\n                    }\n                }\n\n                let tool_spec = tool_spec_builder\n                    .build()\n                    .context(\"Failed to build tool spec\")?;\n                Ok(Box::new(McpTool {\n                    client: Arc::clone(&self.service),\n                    registered_name,\n                    server_tool_name: tool.name.into(),\n                    tool_spec,\n                }) as Box<dyn Tool>)\n            })\n            .collect::<Result<Vec<_>>>()\n            .context(\"Failed to build mcp tool specs\")?;\n        Ok(tools)\n    }\n\n    fn name(&self) -> Cow<'_, str> {\n        self.name().into()\n    }\n}\n\n#[derive(Clone)]\nstruct McpTool {\n    client: Arc<RwLock<Option<RunningService<RoleClient, InitializeRequestParams>>>>,\n    registered_name: String,\n    server_tool_name: String,\n    tool_spec: ToolSpec,\n}\n\n#[async_trait]\nimpl Tool for McpTool {\n    async fn invoke(\n        &self,\n        _agent_context: &dyn swiftide_core::AgentContext,\n        tool_call: &ToolCall,\n    ) -> Result<\n        swiftide_core::chat_completion::ToolOutput,\n        swiftide_core::chat_completion::errors::ToolError,\n    > {\n        let args = match tool_call.args() {\n            Some(args) => Some(serde_json::from_str(args).map_err(ToolError::WrongArguments)?),\n            None => None,\n        };\n\n        let request = CallToolRequestParams {\n            meta: None,\n            name: self.server_tool_name.clone().into(),\n            
arguments: args,\n            task: None,\n        };\n\n        let Some(service) = &*self.client.read().await else {\n            return Err(\n                CommandError::ExecutorError(anyhow::anyhow!(\"mcp server is not running\")).into(),\n            );\n        };\n\n        tracing::debug!(request = ?request, tool = self.name().as_ref(), \"Invoking mcp tool\");\n        let response = service\n            .call_tool(request)\n            .await\n            .context(\"Failed to call tool\")?;\n\n        tracing::debug!(response = ?response, tool = self.name().as_ref(), \"Received response from mcp tool\");\n        let rmcp::model::CallToolResult {\n            content,\n            structured_content,\n            is_error,\n            ..\n        } = response;\n\n        let content = if content.is_empty() {\n            structured_content.map(|structured| structured.to_string())\n        } else {\n            let mut iter = content.into_iter().filter_map(|c| match c.raw {\n                rmcp::model::RawContent::Text(rmcp::model::RawTextContent { text, .. }) => {\n                    Some(text)\n                }\n                _ => None,\n            });\n            iter.next().map(|first| {\n                let mut joined = first;\n                for part in iter {\n                    joined.push('\\n');\n                    joined.push_str(&part);\n                }\n                joined\n            })\n        };\n\n        if is_error.unwrap_or(false) {\n            let content = content.unwrap_or_else(|| \"Unknown error\".to_string());\n            return Err(ToolError::Unknown(anyhow::anyhow!(\n                \"Failed to execute mcp tool: {content}\"\n            )));\n        }\n\n        match content {\n            Some(content) => Ok(content.into()),\n            // Some MCP tools may legitimately return no textual or structured content\n            // while still being successful (e.g. 
optional echo with null input).\n            None => Ok(\"Tool executed successfully\".into()),\n        }\n    }\n\n    fn name(&self) -> std::borrow::Cow<'_, str> {\n        self.registered_name.as_str().into()\n    }\n\n    fn tool_spec(&self) -> ToolSpec {\n        self.tool_spec.clone()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use copied_from_rmcp::Calculator;\n    use rmcp::serve_server;\n    use tokio::net::{UnixListener, UnixStream};\n\n    const SOCKET_PATH: &str = \"/tmp/swiftide-mcp.sock\";\n    const EXPECTED_PREFIX: &str = \"rmcp\";\n\n    #[allow(clippy::similar_names)]\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_socket() {\n        let _ = std::fs::remove_file(SOCKET_PATH);\n\n        match UnixListener::bind(SOCKET_PATH) {\n            Ok(unix_listener) => {\n                println!(\"Server successfully listening on {SOCKET_PATH}\");\n                tokio::spawn(server(unix_listener));\n            }\n            Err(e) => {\n                println!(\"Unable to bind to {SOCKET_PATH}: {e}\");\n            }\n        }\n\n        let client = client().await.unwrap();\n\n        let t = client.available_tools().await.unwrap();\n        assert_eq!(t.len(), 3);\n\n        let mut names = t.iter().map(|t| t.name().into_owned()).collect::<Vec<_>>();\n        names.sort();\n        assert_eq!(\n            names,\n            [\n                format!(\"{EXPECTED_PREFIX}:optional\"),\n                format!(\"{EXPECTED_PREFIX}:sub\"),\n                format!(\"{EXPECTED_PREFIX}:sum\")\n            ]\n        );\n\n        let sum_name = format!(\"{EXPECTED_PREFIX}:sum\");\n        let sum_tool = t.iter().find(|t| t.name().as_ref() == sum_name).unwrap();\n        let mut builder = ToolCall::builder()\n            .id(\"some\")\n            .args(r#\"{\"b\": \"hello\"}\"#)\n            .name(\"test\")\n            
.to_owned();\n\n        assert_eq!(sum_tool.tool_spec().name, sum_name);\n\n        let tool_call = builder.args(r#\"{\"a\": 10, \"b\": 20}\"#).build().unwrap();\n\n        let result = sum_tool\n            .invoke(&(), &tool_call)\n            .await\n            .unwrap()\n            .content()\n            .unwrap()\n            .to_string();\n        assert_eq!(result, \"30\");\n\n        let sub_name = format!(\"{EXPECTED_PREFIX}:sub\");\n        let sub_tool = t.iter().find(|t| t.name().as_ref() == sub_name).unwrap();\n        assert_eq!(sub_tool.tool_spec().name, sub_name);\n\n        let tool_call = builder.args(r#\"{\"a\": 10, \"b\": 20}\"#).build().unwrap();\n\n        let result = sub_tool\n            .invoke(&(), &tool_call)\n            .await\n            .unwrap()\n            .content()\n            .unwrap()\n            .to_string();\n        assert_eq!(result, \"-10\");\n\n        // The input schema type for the input param is string with null allowed\n        let optional_name = format!(\"{EXPECTED_PREFIX}:optional\");\n        let optional_tool = t\n            .iter()\n            .find(|t| t.name().as_ref() == optional_name)\n            .unwrap();\n        assert_eq!(optional_tool.tool_spec().name, optional_name);\n        let spec = optional_tool.tool_spec();\n        let schema = spec\n            .parameters_schema\n            .expect(\"optional tool should expose a schema\");\n        let schema_json = serde_json::to_value(schema).unwrap();\n        let _text_prop = schema_json\n            .get(\"properties\")\n            .and_then(|props| props.get(\"text\"))\n            .expect(\"optional tool schema must include `text`\");\n\n        let tool_call = builder.args(r#\"{\"text\": \"hello\"}\"#).build().unwrap();\n\n        let result = optional_tool\n            .invoke(&(), &tool_call)\n            .await\n            .unwrap()\n            .content()\n            .unwrap()\n            .to_string();\n        assert_eq!(result, 
\"hello\");\n\n        let tool_call = builder.args(r#\"{\"text\": null}\"#).build().unwrap();\n        let result = optional_tool\n            .invoke(&(), &tool_call)\n            .await\n            .unwrap()\n            .content()\n            .unwrap()\n            .to_string();\n        assert_eq!(result, \"\");\n\n        // Clean up socket file\n        let _ = std::fs::remove_file(SOCKET_PATH);\n    }\n\n    async fn server(unix_listener: UnixListener) -> anyhow::Result<()> {\n        while let Ok((stream, addr)) = unix_listener.accept().await {\n            println!(\"Client connected: {addr:?}\");\n            tokio::spawn(async move {\n                match serve_server(Calculator::new(), stream).await {\n                    Ok(server) => {\n                        println!(\"Server initialized successfully\");\n                        if let Err(e) = server.waiting().await {\n                            println!(\"Error while server waiting: {e:?}\");\n                        }\n                    }\n                    Err(e) => println!(\"Server initialization failed: {e:?}\"),\n                }\n\n                anyhow::Ok(())\n            });\n        }\n        Ok(())\n    }\n\n    async fn client() -> anyhow::Result<McpToolbox> {\n        println!(\"Client connecting to {SOCKET_PATH}\");\n        let stream = UnixStream::connect(SOCKET_PATH).await?;\n\n        // let client = serve_client((), stream).await?;\n        let client = McpToolbox::try_from_transport(stream).await?;\n        println!(\"Client connected and initialized successfully\");\n\n        Ok(client)\n    }\n\n    #[allow(clippy::unused_self)]\n    mod copied_from_rmcp {\n        use rmcp::{\n            ErrorData as McpError, ServerHandler,\n            handler::server::{tool::ToolRouter, wrapper::Parameters},\n            model::{CallToolResult, Content, ServerCapabilities, ServerInfo},\n            schemars, tool, tool_handler,\n        };\n\n        #[derive(Debug, 
serde::Deserialize, schemars::JsonSchema)]\n        pub struct Request {\n            pub a: i32,\n            pub b: i32,\n        }\n\n        #[derive(Debug, serde::Deserialize, schemars::JsonSchema)]\n        pub struct OptRequest {\n            pub text: Option<String>,\n        }\n\n        #[derive(Debug, Clone)]\n        pub struct Calculator {\n            tool_router: ToolRouter<Self>,\n        }\n\n        #[rmcp::tool_router]\n        impl Calculator {\n            pub fn new() -> Self {\n                Self {\n                    tool_router: Self::tool_router(),\n                }\n            }\n\n            #[allow(clippy::unnecessary_wraps)]\n            #[tool(description = \"Calculate the sum of two numbers\")]\n            fn sum(\n                &self,\n                Parameters(Request { a, b }): Parameters<Request>,\n            ) -> Result<CallToolResult, McpError> {\n                Ok(CallToolResult::success(vec![Content::text(\n                    (a + b).to_string(),\n                )]))\n            }\n\n            #[allow(clippy::unnecessary_wraps)]\n            #[tool(description = \"Calculate the difference of two numbers\")]\n            fn sub(\n                &self,\n                Parameters(Request { a, b }): Parameters<Request>,\n            ) -> Result<CallToolResult, McpError> {\n                Ok(CallToolResult::success(vec![Content::text(\n                    (a - b).to_string(),\n                )]))\n            }\n\n            #[allow(clippy::unnecessary_wraps)]\n            #[tool(description = \"Optional echo\")]\n            fn optional(\n                &self,\n                Parameters(OptRequest { text }): Parameters<OptRequest>,\n            ) -> Result<CallToolResult, McpError> {\n                Ok(CallToolResult::success(vec![Content::text(\n                    text.unwrap_or_default(),\n                )]))\n            }\n        }\n\n        #[tool_handler]\n        impl ServerHandler for Calculator {\n  
          fn get_info(&self) -> ServerInfo {\n                ServerInfo {\n                    instructions: Some(\"A simple calculator\".into()),\n                    capabilities: ServerCapabilities::builder().enable_tools().build(),\n                    ..Default::default()\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-agents/src/tools/mod.rs",
    "content": "//! Default tools and executor for agents\npub mod arg_preprocessor;\npub mod control;\npub mod local_executor;\n\n/// Add tools from a Model Context Protocol endpoint\n#[cfg(feature = \"mcp\")]\npub mod mcp;\n"
  },
  {
    "path": "swiftide-agents/src/util.rs",
    "content": "//! Internal utility functions and macros for anything agent\n\n/// Simple macro to consistently call hooks and clean up the code\n#[macro_export]\nmacro_rules! invoke_hooks {\n    (OnStream, $self_expr:expr $(, $arg:expr)* ) => {{\n        // For streaming we log less and only on the trace level\n        for hook in $self_expr.hooks_by_type(HookTypes::OnStream) {\n            // Downcast to the correct closure variant\n            if let Hook::OnStream(hook_fn) = hook {\n                // Create a tracing span for instrumentation\n                let span = tracing::trace_span!(\n                    \"hook\",\n                    \"otel.name\" = format!(\"hook.{:?}\", HookTypes::OnStream)\n                );\n\n                // Call the hook, instrument, and log on failure\n                if let Err(err) = hook_fn($self_expr $(, $arg)*)\n                    .instrument(span.or_current())\n                    .await\n                {\n                    tracing::error!(\n                        \"Error in {hooktype} hook: {err}\",\n                        hooktype = HookTypes::OnStream,\n                    );\n                }\n            }\n        }\n    }};\n    ($hook_type:ident, $self_expr:expr $(, $arg:expr)* ) => {{\n        // Iterate through every hook matching `HookTypes::$hook_type`\n        for hook in $self_expr.hooks_by_type(HookTypes::$hook_type) {\n            // Downcast to the correct closure variant\n            if let Hook::$hook_type(hook_fn) = hook {\n                // Create a tracing span for instrumentation\n                let span = tracing::info_span!(\n                    \"hook\",\n                    \"otel.name\" = format!(\"hook.{:?}\", HookTypes::$hook_type)\n                );\n                tracing::debug!(\"Calling {} hook\", HookTypes::$hook_type);\n\n                // Call the hook, instrument, and log on failure\n                if let Err(err) = hook_fn($self_expr $(, $arg)*)\n                    
.instrument(span.or_current())\n                    .await\n                {\n                    tracing::error!(\n                        \"Error in {hooktype} hook: {err}\",\n                        hooktype = HookTypes::$hook_type,\n                    );\n                }\n            }\n        }\n    }};\n}\n"
  },
  {
    "path": "swiftide-core/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-core\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\ntracing = { workspace = true }\nasync-trait = { workspace = true }\nfutures-util = { workspace = true }\ntokio-stream = { workspace = true }\nitertools = { workspace = true }\nserde = { workspace = true }\nserde_json = { workspace = true }\nstrum = { workspace = true }\nstrum_macros = { workspace = true }\nmockall = { workspace = true, optional = true }\nlazy_static = { workspace = true }\nderive_builder = { workspace = true }\ndyn-clone = { workspace = true }\npin-project = { workspace = true }\nthiserror = { workspace = true }\nmetrics = { workspace = true, optional = true }\nschemars = { workspace = true, features = [\"derive\"] }\nasync-openai = { workspace = true, optional = true, features = [\"chat-completion-types\", \"embedding-types\", \"response-types\"] }\n\ntera = { workspace = true }\nuuid = { workspace = true, features = [\"v4\", \"v3\"] }\n\npretty_assertions = { workspace = true, optional = true }\n\n# Integrations\nqdrant-client = { workspace = true, optional = true }\nbackoff = { version = \"0.4.0\", features = [\"futures\", \"tokio\"] }\n\n[dev-dependencies]\ntest-case = { workspace = true }\ntest-log = { workspace = true }\ntokio = { workspace = true, features = [\"time\", \"test-util\"] }\ntokio-stream = { workspace = true }\n\n[features]\ndefaults = [\"truncate-debug\"]\ntest-utils = [\"dep:mockall\", \"dep:pretty_assertions\"]\nqdrant = [\"dep:qdrant-client\"]\n# Truncates large debug outputs on pipeline nodes\ntruncate-debug = []\nmetrics = [\"dep:metrics\"]\njson-schema = []\nopenai = 
[\"dep:async-openai\"]\n\n[lints]\nworkspace = true\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide-core/README.md",
    "content": "# Swiftide-core\n\nCore crate includes low level types and traits for swiftide that are used by other crates.\n"
  },
  {
    "path": "swiftide-core/src/agent_traits.rs",
    "content": "use std::{\n    borrow::Cow,\n    path::{Path, PathBuf},\n    sync::{Arc, Mutex},\n    time::Duration,\n};\n\nuse crate::{\n    chat_completion::{ChatMessage, ToolCall},\n    indexing::IndexingStream,\n};\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse dyn_clone::DynClone;\nuse serde::{Deserialize, Serialize};\nuse thiserror::Error;\n\n/// A `ToolExecutor` provides an interface for agents to interact with a system\n/// in an isolated context.\n///\n/// When starting up an agent, it's context expects an executor. For example,\n/// you might want your coding agent to work with a fresh, isolated set of files,\n/// separated from the rest of the system.\n///\n/// See `swiftide-docker-executor` for an executor that uses Docker. By default\n/// the executor is a local executor.\n///\n/// Additionally, the executor can be used stream files files for indexing.\n#[async_trait]\npub trait ToolExecutor: Send + Sync + DynClone {\n    /// Execute a command in the executor\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError>;\n\n    /// Stream files from the executor\n    async fn stream_files(\n        &self,\n        path: &Path,\n        extensions: Option<Vec<String>>,\n    ) -> Result<IndexingStream<String>>;\n}\n\ndyn_clone::clone_trait_object!(ToolExecutor);\n\n/// Lightweight executor wrapper that applies a default working directory to forwarded commands.\n///\n/// Most callers should construct this via [`ExecutorExt::scoped`], which borrows the underlying\n/// executor and only clones commands/paths when the scope actually changes their resolution.\n#[derive(Debug, Clone)]\npub struct ScopedExecutor<E> {\n    executor: E,\n    scope: PathBuf,\n}\n\nimpl<E> ScopedExecutor<E> {\n    /// Build a new wrapper around `executor` that prefixes relative paths with `scope`.\n    pub fn new(executor: E, scope: impl Into<PathBuf>) -> Self {\n        Self {\n            executor,\n            scope: scope.into(),\n        }\n   
 }\n\n    /// Returns either the original command or a scoped clone depending on the current directory.\n    fn apply_scope<'a>(&'a self, cmd: &'a Command) -> Cow<'a, Command> {\n        match cmd.current_dir_path() {\n            Some(path) if path.is_absolute() || self.scope.as_os_str().is_empty() => {\n                Cow::Borrowed(cmd)\n            }\n            Some(path) => {\n                let mut scoped = cmd.clone();\n                scoped.current_dir(self.scope.join(path));\n                Cow::Owned(scoped)\n            }\n            None if self.scope.as_os_str().is_empty() => Cow::Borrowed(cmd),\n            None => {\n                let mut scoped = cmd.clone();\n                scoped.current_dir(self.scope.clone());\n                Cow::Owned(scoped)\n            }\n        }\n    }\n\n    /// Returns a path adjusted for the scope when the provided path is relative.\n    fn scoped_path<'a>(&'a self, path: &'a Path) -> Cow<'a, Path> {\n        if path.is_absolute() || self.scope.as_os_str().is_empty() {\n            Cow::Borrowed(path)\n        } else {\n            Cow::Owned(self.scope.join(path))\n        }\n    }\n\n    /// Access the inner executor.\n    pub fn inner(&self) -> &E {\n        &self.executor\n    }\n\n    /// Expose the scope that will be applied to relative paths.\n    pub fn scope(&self) -> &Path {\n        &self.scope\n    }\n}\n\n#[async_trait]\nimpl<E> ToolExecutor for ScopedExecutor<E>\nwhere\n    E: ToolExecutor + Send + Sync + Clone,\n{\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        let scoped_cmd = self.apply_scope(cmd);\n        self.executor.exec_cmd(scoped_cmd.as_ref()).await\n    }\n\n    async fn stream_files(\n        &self,\n        path: &Path,\n        extensions: Option<Vec<String>>,\n    ) -> Result<IndexingStream<String>> {\n        let scoped_path = self.scoped_path(path);\n        self.executor\n            .stream_files(scoped_path.as_ref(), 
extensions)\n            .await\n    }\n}\n\n/// Convenience methods for scoping executors without cloning them.\npub trait ExecutorExt {\n    /// Borrow `self` and return a wrapper that resolves relative operations inside `path`.\n    fn scoped(&self, path: impl Into<PathBuf>) -> ScopedExecutor<&Self>;\n\n    fn scoped_owned(self, path: impl Into<PathBuf>) -> ScopedExecutor<Self>\n    where\n        Self: Sized;\n}\n\nimpl<T> ExecutorExt for T\nwhere\n    T: ToolExecutor,\n{\n    fn scoped(&self, path: impl Into<PathBuf>) -> ScopedExecutor<&Self> {\n        ScopedExecutor::new(self, path)\n    }\n\n    fn scoped_owned(self, path: impl Into<PathBuf>) -> ScopedExecutor<Self> {\n        ScopedExecutor::new(self, path)\n    }\n}\n\n#[async_trait]\nimpl<T> ToolExecutor for &T\nwhere\n    T: ToolExecutor + ?Sized,\n{\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        (**self).exec_cmd(cmd).await\n    }\n\n    async fn stream_files(\n        &self,\n        path: &Path,\n        extensions: Option<Vec<String>>,\n    ) -> Result<IndexingStream<String>> {\n        (**self).stream_files(path, extensions).await\n    }\n}\n\n#[async_trait]\nimpl ToolExecutor for Arc<dyn ToolExecutor> {\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        self.as_ref().exec_cmd(cmd).await\n    }\n\n    async fn stream_files(\n        &self,\n        path: &Path,\n        extensions: Option<Vec<String>>,\n    ) -> Result<IndexingStream<String>> {\n        self.as_ref().stream_files(path, extensions).await\n    }\n}\n\n#[async_trait]\nimpl ToolExecutor for Box<dyn ToolExecutor> {\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        self.as_ref().exec_cmd(cmd).await\n    }\n\n    async fn stream_files(\n        &self,\n        path: &Path,\n        extensions: Option<Vec<String>>,\n    ) -> Result<IndexingStream<String>> {\n        self.as_ref().stream_files(path, 
extensions).await\n    }\n}\n\n#[derive(Debug, Error)]\npub enum CommandError {\n    /// The executor itself failed\n    #[error(\"executor error: {0:#}\")]\n    ExecutorError(#[from] anyhow::Error),\n\n    /// The command exceeded its allotted time budget\n    #[error(\"command timed out after {timeout:?}: {output}\")]\n    TimedOut {\n        timeout: Duration,\n        output: CommandOutput,\n    },\n\n    /// The command exited with a non-zero status, e.g. failing tests writing to stderr.\n    /// Callers may want to handle this error instead of bailing\n    #[error(\"command failed with NonZeroExit: {0}\")]\n    NonZeroExit(CommandOutput),\n}\n\nimpl From<std::io::Error> for CommandError {\n    fn from(err: std::io::Error) -> Self {\n        CommandError::NonZeroExit(err.to_string().into())\n    }\n}\n\n/// Commands that can be executed by the executor.\n///\n/// Conceptually, `Shell` allows any kind of input, and the other commands enable more optimized\n/// implementations.\n///\n/// There is an ongoing consideration to make this an associated type on the executor\n///\n/// TODO: Should be able to borrow everything?\n///\n/// Use the constructor helpers (e.g. 
[`Command::shell`]) and then chain configuration methods\n/// such as [`Command::with_current_dir`] or [`Command::current_dir`] for builder-style ergonomics.\n#[derive(Debug, Clone)]\n#[non_exhaustive]\npub enum Command {\n    Shell {\n        command: String,\n        current_dir: Option<PathBuf>,\n        timeout: Option<Duration>,\n    },\n    ReadFile {\n        path: PathBuf,\n        current_dir: Option<PathBuf>,\n        timeout: Option<Duration>,\n    },\n    WriteFile {\n        path: PathBuf,\n        content: String,\n        current_dir: Option<PathBuf>,\n        timeout: Option<Duration>,\n    },\n}\n\nimpl Command {\n    pub fn shell<S: Into<String>>(cmd: S) -> Self {\n        Command::Shell {\n            command: cmd.into(),\n            current_dir: None,\n            timeout: None,\n        }\n    }\n\n    pub fn read_file<P: Into<PathBuf>>(path: P) -> Self {\n        Command::ReadFile {\n            path: path.into(),\n            current_dir: None,\n            timeout: None,\n        }\n    }\n\n    pub fn write_file<P: Into<PathBuf>, S: Into<String>>(path: P, content: S) -> Self {\n        Command::WriteFile {\n            path: path.into(),\n            content: content.into(),\n            current_dir: None,\n            timeout: None,\n        }\n    }\n\n    /// Override the working directory used when executing this command.\n    ///\n    /// Executors may interpret relative paths in the context of their own\n    /// working directory.\n    #[must_use]\n    pub fn with_current_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.current_dir(path);\n        self\n    }\n\n    /// Override the working directory using the `std::process::Command`\n    /// builder-lite style API.\n    pub fn current_dir<P: Into<PathBuf>>(&mut self, path: P) -> &mut Self {\n        let dir = Some(path.into());\n        match self {\n            Command::Shell { current_dir, .. }\n            | Command::ReadFile { current_dir, .. 
}\n            | Command::WriteFile { current_dir, .. } => {\n                *current_dir = dir;\n            }\n        }\n        self\n    }\n\n    pub fn clear_current_dir(&mut self) -> &mut Self {\n        match self {\n            Command::Shell { current_dir, .. }\n            | Command::ReadFile { current_dir, .. }\n            | Command::WriteFile { current_dir, .. } => {\n                *current_dir = None;\n            }\n        }\n        self\n    }\n\n    pub fn current_dir_path(&self) -> Option<&Path> {\n        match self {\n            Command::Shell { current_dir, .. }\n            | Command::ReadFile { current_dir, .. }\n            | Command::WriteFile { current_dir, .. } => current_dir.as_deref(),\n        }\n    }\n\n    /// Override the timeout used when executing this command.\n    #[must_use]\n    pub fn with_timeout(mut self, timeout: Duration) -> Self {\n        self.timeout(timeout);\n        self\n    }\n\n    /// Override the timeout using the builder-style API.\n    pub fn timeout(&mut self, timeout: Duration) -> &mut Self {\n        match self {\n            Command::Shell { timeout: slot, .. }\n            | Command::ReadFile { timeout: slot, .. }\n            | Command::WriteFile { timeout: slot, .. } => {\n                *slot = Some(timeout);\n            }\n        }\n        self\n    }\n\n    /// Remove any timeout previously configured on this command.\n    pub fn clear_timeout(&mut self) -> &mut Self {\n        match self {\n            Command::Shell { timeout, .. }\n            | Command::ReadFile { timeout, .. }\n            | Command::WriteFile { timeout, .. } => {\n                *timeout = None;\n            }\n        }\n        self\n    }\n\n    /// Returns the timeout associated with this command, if any.\n    pub fn timeout_duration(&self) -> Option<&Duration> {\n        match self {\n            Command::Shell { timeout, .. }\n            | Command::ReadFile { timeout, .. 
}\n            | Command::WriteFile { timeout, .. } => timeout.as_ref(),\n        }\n    }\n}\n\n/// Output from a `Command`\n#[derive(Debug, Clone)]\npub struct CommandOutput {\n    pub output: String,\n    // status_code: i32,\n    // success: bool,\n}\n\nimpl CommandOutput {\n    pub fn empty() -> Self {\n        CommandOutput {\n            output: String::new(),\n        }\n    }\n\n    pub fn new(output: impl Into<String>) -> Self {\n        CommandOutput {\n            output: output.into(),\n        }\n    }\n    pub fn is_empty(&self) -> bool {\n        self.output.is_empty()\n    }\n}\n\nimpl std::fmt::Display for CommandOutput {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        self.output.fmt(f)\n    }\n}\n\nimpl<T: Into<String>> From<T> for CommandOutput {\n    fn from(value: T) -> Self {\n        CommandOutput {\n            output: value.into(),\n        }\n    }\n}\n\nimpl AsRef<str> for CommandOutput {\n    fn as_ref(&self) -> &str {\n        &self.output\n    }\n}\n\n/// Feedback that can be given on a tool, i.e. with a human in the loop\n#[derive(Debug, Clone, Serialize, Deserialize, strum_macros::EnumIs)]\n#[cfg_attr(feature = \"json-schema\", derive(schemars::JsonSchema))]\npub enum ToolFeedback {\n    Approved { payload: Option<serde_json::Value> },\n    Refused { payload: Option<serde_json::Value> },\n}\n\nimpl ToolFeedback {\n    pub fn approved() -> Self {\n        ToolFeedback::Approved { payload: None }\n    }\n\n    pub fn refused() -> Self {\n        ToolFeedback::Refused { payload: None }\n    }\n\n    pub fn payload(&self) -> Option<&serde_json::Value> {\n        match self {\n            ToolFeedback::Refused { payload } | ToolFeedback::Approved { payload } => {\n                payload.as_ref()\n            }\n        }\n    }\n\n    #[must_use]\n    pub fn with_payload(self, payload: serde_json::Value) -> Self {\n        match self {\n            ToolFeedback::Approved { .. 
} => ToolFeedback::Approved {\n                payload: Some(payload),\n            },\n            ToolFeedback::Refused { .. } => ToolFeedback::Refused {\n                payload: Some(payload),\n            },\n        }\n    }\n}\n\n/// Acts as the interface to the external world and manages messages for completion\n#[async_trait]\npub trait AgentContext: Send + Sync {\n    /// List of all messages for this agent\n    ///\n    /// Used as the main source for the next completion and expects all\n    /// messages to be returned if new messages are present.\n    ///\n    /// Once this method has been called, there should not be new messages\n    ///\n    /// TODO: Figure out a nice way to return a reference instead while still supporting e.g.\n    /// mutexes\n    async fn next_completion(&self) -> Result<Option<Vec<ChatMessage>>>;\n\n    /// Lists only the new messages after calling `next_completion`\n    async fn current_new_messages(&self) -> Result<Vec<ChatMessage>>;\n\n    /// Add messages for the next completion\n    async fn add_messages(&self, item: Vec<ChatMessage>) -> Result<()>;\n\n    /// Add a message for the next completion\n    async fn add_message(&self, item: ChatMessage) -> Result<()>;\n\n    /// Execute a command if the context supports it\n    ///\n    /// Deprecated: use `executor` to access the executor directly\n    #[deprecated(note = \"use executor instead\")]\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError>;\n\n    fn executor(&self) -> &Arc<dyn ToolExecutor>;\n\n    async fn history(&self) -> Result<Vec<ChatMessage>>;\n\n    /// Replace the entire history with the given items\n    async fn replace_history(&self, items: Vec<ChatMessage>) -> Result<()>;\n\n    /// Pops the last messages up until the last completion\n    ///\n    /// LLMs failing completion for various reasons is unfortunately a common occurrence.\n    /// This gives a way to redrive the last completion in a generic way\n    async fn 
redrive(&self) -> Result<()>;\n\n    /// Tools that require feedback or approval (i.e. from a human) can use this to check if the\n    /// feedback is received\n    async fn has_received_feedback(&self, tool_call: &ToolCall) -> Option<ToolFeedback>;\n\n    async fn feedback_received(&self, tool_call: &ToolCall, feedback: &ToolFeedback) -> Result<()>;\n}\n\n#[async_trait]\nimpl AgentContext for Box<dyn AgentContext> {\n    async fn next_completion(&self) -> Result<Option<Vec<ChatMessage>>> {\n        (**self).next_completion().await\n    }\n\n    async fn current_new_messages(&self) -> Result<Vec<ChatMessage>> {\n        (**self).current_new_messages().await\n    }\n\n    async fn add_messages(&self, item: Vec<ChatMessage>) -> Result<()> {\n        (**self).add_messages(item).await\n    }\n\n    async fn add_message(&self, item: ChatMessage) -> Result<()> {\n        (**self).add_message(item).await\n    }\n\n    #[allow(deprecated)]\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        (**self).exec_cmd(cmd).await\n    }\n\n    fn executor(&self) -> &Arc<dyn ToolExecutor> {\n        (**self).executor()\n    }\n\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        (**self).history().await\n    }\n\n    async fn replace_history(&self, items: Vec<ChatMessage>) -> Result<()> {\n        (**self).replace_history(items).await\n    }\n\n    async fn redrive(&self) -> Result<()> {\n        (**self).redrive().await\n    }\n\n    async fn has_received_feedback(&self, tool_call: &ToolCall) -> Option<ToolFeedback> {\n        (**self).has_received_feedback(tool_call).await\n    }\n\n    async fn feedback_received(&self, tool_call: &ToolCall, feedback: &ToolFeedback) -> Result<()> {\n        (**self).feedback_received(tool_call, feedback).await\n    }\n}\n\n#[async_trait]\nimpl AgentContext for Arc<dyn AgentContext> {\n    async fn next_completion(&self) -> Result<Option<Vec<ChatMessage>>> {\n        
(**self).next_completion().await\n    }\n\n    async fn current_new_messages(&self) -> Result<Vec<ChatMessage>> {\n        (**self).current_new_messages().await\n    }\n\n    async fn add_messages(&self, item: Vec<ChatMessage>) -> Result<()> {\n        (**self).add_messages(item).await\n    }\n\n    async fn add_message(&self, item: ChatMessage) -> Result<()> {\n        (**self).add_message(item).await\n    }\n\n    #[allow(deprecated)]\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n        (**self).exec_cmd(cmd).await\n    }\n\n    fn executor(&self) -> &Arc<dyn ToolExecutor> {\n        (**self).executor()\n    }\n\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        (**self).history().await\n    }\n\n    async fn replace_history(&self, items: Vec<ChatMessage>) -> Result<()> {\n        (**self).replace_history(items).await\n    }\n\n    async fn redrive(&self) -> Result<()> {\n        (**self).redrive().await\n    }\n\n    async fn has_received_feedback(&self, tool_call: &ToolCall) -> Option<ToolFeedback> {\n        (**self).has_received_feedback(tool_call).await\n    }\n\n    async fn feedback_received(&self, tool_call: &ToolCall, feedback: &ToolFeedback) -> Result<()> {\n        (**self).feedback_received(tool_call, feedback).await\n    }\n}\n\n#[async_trait]\nimpl AgentContext for &dyn AgentContext {\n    async fn next_completion(&self) -> Result<Option<Vec<ChatMessage>>> {\n        (**self).next_completion().await\n    }\n\n    async fn current_new_messages(&self) -> Result<Vec<ChatMessage>> {\n        (**self).current_new_messages().await\n    }\n\n    async fn add_messages(&self, item: Vec<ChatMessage>) -> Result<()> {\n        (**self).add_messages(item).await\n    }\n\n    async fn add_message(&self, item: ChatMessage) -> Result<()> {\n        (**self).add_message(item).await\n    }\n\n    #[allow(deprecated)]\n    async fn exec_cmd(&self, cmd: &Command) -> Result<CommandOutput, CommandError> {\n       
 (**self).exec_cmd(cmd).await\n    }\n\n    fn executor(&self) -> &Arc<dyn ToolExecutor> {\n        (**self).executor()\n    }\n\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        (**self).history().await\n    }\n\n    async fn replace_history(&self, items: Vec<ChatMessage>) -> Result<()> {\n        (**self).replace_history(items).await\n    }\n\n    async fn redrive(&self) -> Result<()> {\n        (**self).redrive().await\n    }\n\n    async fn has_received_feedback(&self, tool_call: &ToolCall) -> Option<ToolFeedback> {\n        (**self).has_received_feedback(tool_call).await\n    }\n\n    async fn feedback_received(&self, tool_call: &ToolCall, feedback: &ToolFeedback) -> Result<()> {\n        (**self).feedback_received(tool_call, feedback).await\n    }\n}\n\n/// Convenience implementation for empty agent context\n///\n/// Errors if tools attempt to execute commands\n#[async_trait]\nimpl AgentContext for () {\n    async fn next_completion(&self) -> Result<Option<Vec<ChatMessage>>> {\n        Ok(None)\n    }\n\n    async fn current_new_messages(&self) -> Result<Vec<ChatMessage>> {\n        Ok(Vec::new())\n    }\n\n    async fn add_messages(&self, _item: Vec<ChatMessage>) -> Result<()> {\n        Ok(())\n    }\n\n    async fn add_message(&self, _item: ChatMessage) -> Result<()> {\n        Ok(())\n    }\n\n    async fn exec_cmd(&self, _cmd: &Command) -> Result<CommandOutput, CommandError> {\n        Err(CommandError::ExecutorError(anyhow::anyhow!(\n            \"Empty agent context does not have a tool executor\"\n        )))\n    }\n\n    fn executor(&self) -> &Arc<dyn ToolExecutor> {\n        unimplemented!(\"Empty agent context does not have a tool executor\")\n    }\n\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        Ok(Vec::new())\n    }\n\n    async fn replace_history(&self, _items: Vec<ChatMessage>) -> Result<()> {\n        Ok(())\n    }\n\n    async fn redrive(&self) -> Result<()> {\n        Ok(())\n    }\n\n    async fn 
has_received_feedback(&self, _tool_call: &ToolCall) -> Option<ToolFeedback> {\n        Some(ToolFeedback::Approved { payload: None })\n    }\n\n    async fn feedback_received(\n        &self,\n        _tool_call: &ToolCall,\n        _feedback: &ToolFeedback,\n    ) -> Result<()> {\n        Ok(())\n    }\n}\n\n/// A backend for the agent context.\n///\n/// A default is provided for `Arc<Mutex<Vec<ChatMessage>>>`.\n///\n/// If you want to use for instance a database, implement this trait and pass it to the agent\n/// context when creating it.\n#[async_trait]\npub trait MessageHistory: Send + Sync + std::fmt::Debug {\n    /// Returns the history of messages\n    async fn history(&self) -> Result<Vec<ChatMessage>>;\n\n    /// Add a message to the history\n    async fn push_owned(&self, item: ChatMessage) -> Result<()>;\n\n    /// Overwrite the history with the given items\n    async fn overwrite(&self, items: Vec<ChatMessage>) -> Result<()>;\n\n    /// Add a message to the history.\n    async fn push(&self, item: &ChatMessage) -> Result<()> {\n        self.push_owned(item.to_owned()).await\n    }\n\n    /// Extend the history with the given items.\n    async fn extend(&self, items: &[ChatMessage]) -> Result<()> {\n        self.extend_owned(items.iter().map(ChatMessage::to_owned).collect())\n            .await\n    }\n\n    /// Extend the history with the given items, taking ownership of them\n    async fn extend_owned(&self, items: Vec<ChatMessage>) -> Result<()> {\n        for item in items {\n            self.push_owned(item).await?;\n        }\n\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl MessageHistory for Mutex<Vec<ChatMessage>> {\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        Ok(self.lock().unwrap().clone())\n    }\n\n    async fn push_owned(&self, item: ChatMessage) -> Result<()> {\n        self.lock().unwrap().push(item);\n\n        Ok(())\n    }\n\n    async fn overwrite(&self, items: Vec<ChatMessage>) -> Result<()> {\n        let 
mut lock = self.lock().unwrap();\n        *lock = items;\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/chat_completion_request.rs",
    "content": "use std::{borrow::Cow, collections::BTreeSet};\n\nuse derive_builder::Builder;\n\nuse super::{chat_message::ChatMessage, tools::ToolSpec, traits::Tool};\n\n/// A chat completion request represents a series of chat messages and tool interactions that can\n/// be send to any LLM.\n#[derive(Builder, Clone, PartialEq, Debug)]\n#[builder(setter(into, strip_option))]\npub struct ChatCompletionRequest<'a> {\n    pub messages: Cow<'a, [ChatMessage]>,\n    #[builder(default, setter(custom))]\n    pub tools_spec: BTreeSet<ToolSpec>,\n}\n\nimpl<'a> ChatCompletionRequest<'a> {\n    pub fn builder() -> ChatCompletionRequestBuilder<'a> {\n        ChatCompletionRequestBuilder::default()\n    }\n\n    /// Returns the chat messages included in the request.\n    pub fn messages(&self) -> &[ChatMessage] {\n        self.messages.as_ref()\n    }\n\n    /// Returns the tool specifications currently attached to the request.\n    pub fn tools_spec(&self) -> &BTreeSet<ToolSpec> {\n        &self.tools_spec\n    }\n\n    /// Returns an owned request with `'static` data.\n    pub fn to_owned(&self) -> ChatCompletionRequest<'static> {\n        ChatCompletionRequest {\n            messages: Cow::Owned(self.messages.iter().map(ChatMessage::to_owned).collect()),\n            tools_spec: self.tools_spec.clone(),\n        }\n    }\n}\n\nimpl From<Vec<ChatMessage>> for ChatCompletionRequest<'_> {\n    fn from(messages: Vec<ChatMessage>) -> Self {\n        ChatCompletionRequest {\n            messages: Cow::Owned(messages),\n            tools_spec: BTreeSet::new(),\n        }\n    }\n}\n\nimpl<'a> From<&'a [ChatMessage]> for ChatCompletionRequest<'a> {\n    fn from(messages: &'a [ChatMessage]) -> Self {\n        ChatCompletionRequest {\n            messages: Cow::Borrowed(messages),\n            tools_spec: BTreeSet::new(),\n        }\n    }\n}\n\nimpl ChatCompletionRequestBuilder<'_> {\n    #[deprecated(note = \"Use `tools` with real Tool instances instead\")]\n    pub fn 
tools_spec<I>(&mut self, tools_spec: I) -> &mut Self\n    where\n        I: IntoIterator<Item = ToolSpec>,\n    {\n        self.tools_spec = Some(tools_spec.into_iter().collect());\n        self\n    }\n\n    /// Adds multiple tools by deriving their specs from the provided instances.\n    pub fn tools<I, T>(&mut self, tools: I) -> &mut Self\n    where\n        I: IntoIterator<Item = T>,\n        T: Into<Box<dyn Tool>>,\n    {\n        let specs = tools.into_iter().map(|tool| {\n            let boxed: Box<dyn Tool> = tool.into();\n            boxed.tool_spec()\n        });\n        self.tool_specs(specs)\n    }\n\n    /// Adds a single tool instance to the request by deriving its spec.\n    pub fn tool<T>(&mut self, tool: T) -> &mut Self\n    where\n        T: Into<Box<dyn Tool>>,\n    {\n        let boxed: Box<dyn Tool> = tool.into();\n        self.tool_specs(std::iter::once(boxed.tool_spec()))\n    }\n\n    /// Extends the request with additional tool specifications.\n    pub fn tool_specs<I>(&mut self, specs: I) -> &mut Self\n    where\n        I: IntoIterator<Item = ToolSpec>,\n    {\n        let entry = self.tools_spec.get_or_insert_with(BTreeSet::new);\n        entry.extend(specs);\n        self\n    }\n\n    /// Adds a single chat message to the request\n    pub fn message(&mut self, message: impl Into<ChatMessage>) -> &mut Self {\n        let mut messages = self\n            .messages\n            .take()\n            .map(Cow::into_owned)\n            .unwrap_or_default();\n        messages.push(message.into());\n\n        self.messages = Some(Cow::Owned(messages));\n        self\n    }\n\n    /// Extends the request with multiple chat messages.\n    pub fn messages_iter<I>(&mut self, messages: I) -> &mut Self\n    where\n        I: IntoIterator<Item = ChatMessage>,\n    {\n        let mut new_messages = self\n            .messages\n            .take()\n            .map(Cow::into_owned)\n            .unwrap_or_default();\n        
new_messages.extend(messages);\n        self.messages = Some(Cow::Owned(new_messages));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::ChatCompletionRequest;\n    use crate::chat_completion::{ChatMessage, ToolSpec};\n    use schemars::Schema;\n    use serde_json::json;\n\n    #[test]\n    fn tool_specs_are_stored_in_deterministic_order() {\n        let zebra = ToolSpec::builder()\n            .name(\"zebra\")\n            .description(\"later alphabetically\")\n            .parameters_schema(schema_from_json(json!({\n                \"type\": \"object\",\n                \"properties\": {\n                    \"b\": { \"type\": \"string\" },\n                    \"a\": { \"type\": \"string\" }\n                }\n            })))\n            .build()\n            .unwrap();\n\n        let alpha = ToolSpec::builder()\n            .name(\"alpha\")\n            .description(\"earlier alphabetically\")\n            .parameters_schema(schema_from_json(json!({\n                \"properties\": {\n                    \"z\": { \"type\": \"string\" },\n                    \"m\": { \"type\": \"string\" }\n                },\n                \"type\": \"object\"\n            })))\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs([zebra, alpha])\n            .build()\n            .unwrap();\n\n        let names = request\n            .tools_spec()\n            .iter()\n            .map(|spec| spec.name.as_str())\n            .collect::<Vec<_>>();\n\n        assert_eq!(names, vec![\"alpha\", \"zebra\"]);\n    }\n\n    fn schema_from_json(value: serde_json::Value) -> Schema {\n        serde_json::from_value(value).expect(\"valid schema\")\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/chat_completion_response.rs",
    "content": "use std::collections::HashMap;\n\nuse derive_builder::Builder;\nuse serde::{Deserialize, Serialize};\nuse uuid::Uuid;\n\nuse super::{ReasoningItem, ToolCallBuilder, tools::ToolCall};\n\n/// A generic response from chat completions\n///\n/// When streaming, the delta is available. Every response will have the accumulated message if\n/// present. The final message will also have the final tool calls.\n#[derive(Clone, Builder, Debug, Serialize, Deserialize, PartialEq)]\n#[builder(setter(strip_option, into), build_fn(error = anyhow::Error))]\npub struct ChatCompletionResponse {\n    /// An identifier for the response\n    ///\n    /// Useful when streaming to make sure chunks can be mapped to the right response\n    #[builder(private, default = Uuid::new_v4())]\n    pub id: Uuid,\n\n    #[builder(default)]\n    pub message: Option<String>,\n\n    #[builder(default)]\n    pub tool_calls: Option<Vec<ToolCall>>,\n\n    #[builder(default)]\n    pub usage: Option<Usage>,\n\n    #[builder(default)]\n    pub reasoning: Option<Vec<ReasoningItem>>,\n\n    /// Streaming response\n    #[builder(default)]\n    pub delta: Option<ChatCompletionResponseDelta>,\n}\n\nimpl Default for ChatCompletionResponse {\n    fn default() -> Self {\n        Self {\n            id: Uuid::new_v4(),\n            message: None,\n            tool_calls: None,\n            delta: None,\n            usage: None,\n            reasoning: None,\n        }\n    }\n}\n\n/// Usage statistics for a language model response.\n#[derive(Clone, Default, Builder, Debug, Serialize, Deserialize, PartialEq)]\n#[allow(clippy::struct_field_names)]\npub struct Usage {\n    /// Tokens used in the prompt or input.\n    pub prompt_tokens: u32,\n    /// Tokens generated in the completion or output.\n    pub completion_tokens: u32,\n    /// Total tokens used for the request.\n    pub total_tokens: u32,\n    /// Provider-specific usage breakdowns, when available.\n    #[builder(default)]\n    
#[serde(skip_serializing_if = \"Option::is_none\")]\n    pub details: Option<UsageDetails>,\n}\n\nimpl Usage {\n    pub fn builder() -> UsageBuilder {\n        UsageBuilder::default()\n    }\n\n    /// Returns a normalized view of usage details when available.\n    ///\n    /// This keeps the public `Usage` fields intact and derives a consistent input/output breakdown\n    /// across providers (e.g. `OpenAI` chat vs. responses). Missing data is left as `None`.\n    pub fn normalized(&self) -> NormalizedUsage {\n        let details = self.details.as_ref().map(|details| {\n            let input = NormalizedInputUsageDetails {\n                cached_tokens: details\n                    .input_tokens_details\n                    .as_ref()\n                    .and_then(|input| input.cached_tokens)\n                    .or_else(|| {\n                        details\n                            .prompt_tokens_details\n                            .as_ref()\n                            .and_then(|prompt| prompt.cached_tokens)\n                    }),\n                audio_tokens: details\n                    .prompt_tokens_details\n                    .as_ref()\n                    .and_then(|prompt| prompt.audio_tokens),\n            };\n            let output = NormalizedOutputUsageDetails {\n                reasoning_tokens: details\n                    .output_tokens_details\n                    .as_ref()\n                    .and_then(|output| output.reasoning_tokens)\n                    .or_else(|| {\n                        details\n                            .completion_tokens_details\n                            .as_ref()\n                            .and_then(|completion| completion.reasoning_tokens)\n                    }),\n                audio_tokens: details\n                    .completion_tokens_details\n                    .as_ref()\n                    .and_then(|completion| completion.audio_tokens),\n                accepted_prediction_tokens: 
details\n                    .completion_tokens_details\n                    .as_ref()\n                    .and_then(|completion| completion.accepted_prediction_tokens),\n                rejected_prediction_tokens: details\n                    .completion_tokens_details\n                    .as_ref()\n                    .and_then(|completion| completion.rejected_prediction_tokens),\n            };\n\n            if input.is_empty() && output.is_empty() {\n                None\n            } else {\n                Some(NormalizedUsageDetails { input, output })\n            }\n        });\n\n        NormalizedUsage {\n            prompt_tokens: self.prompt_tokens,\n            completion_tokens: self.completion_tokens,\n            total_tokens: self.total_tokens,\n            details: details.flatten(),\n        }\n    }\n}\n\n/// Provider-specific usage breakdowns for a response.\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct UsageDetails {\n    /// Chat-completions style prompt token details.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub prompt_tokens_details: Option<PromptTokensDetails>,\n    /// Chat-completions style completion token details.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub completion_tokens_details: Option<CompletionTokensDetails>,\n    /// Responses-style input token details.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub input_tokens_details: Option<InputTokenDetails>,\n    /// Responses-style output token details.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub output_tokens_details: Option<OutputTokenDetails>,\n}\n\n/// Normalized usage totals with optional normalized details.\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct NormalizedUsage {\n    /// Tokens used in the prompt or input.\n    pub prompt_tokens: u32,\n    /// Tokens generated in the completion or output.\n    pub completion_tokens: 
u32,\n    /// Total tokens used for the request.\n    pub total_tokens: u32,\n    /// Normalized input/output breakdown, when available.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub details: Option<NormalizedUsageDetails>,\n}\n\n/// Normalized input/output usage breakdown.\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct NormalizedUsageDetails {\n    /// Normalized input usage details.\n    pub input: NormalizedInputUsageDetails,\n    /// Normalized output usage details.\n    pub output: NormalizedOutputUsageDetails,\n}\n\n/// Normalized input usage details.\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct NormalizedInputUsageDetails {\n    /// Tokens retrieved from cache, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub cached_tokens: Option<u32>,\n    /// Audio tokens in the input, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub audio_tokens: Option<u32>,\n}\n\nimpl NormalizedInputUsageDetails {\n    fn is_empty(&self) -> bool {\n        self.cached_tokens.is_none() && self.audio_tokens.is_none()\n    }\n}\n\n/// Normalized output usage details.\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct NormalizedOutputUsageDetails {\n    /// Tokens used for reasoning, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub reasoning_tokens: Option<u32>,\n    /// Audio tokens in the output, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub audio_tokens: Option<u32>,\n    /// Accepted prediction tokens, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub accepted_prediction_tokens: Option<u32>,\n    /// Rejected prediction tokens, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub rejected_prediction_tokens: Option<u32>,\n}\n\nimpl NormalizedOutputUsageDetails {\n    fn 
is_empty(&self) -> bool {\n        self.reasoning_tokens.is_none()\n            && self.audio_tokens.is_none()\n            && self.accepted_prediction_tokens.is_none()\n            && self.rejected_prediction_tokens.is_none()\n    }\n}\n\n/// OpenAI-style prompt token details (chat completions).\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct PromptTokensDetails {\n    /// Audio input tokens present in the prompt.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub audio_tokens: Option<u32>,\n    /// Cached tokens present in the prompt.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub cached_tokens: Option<u32>,\n}\n\nimpl PromptTokensDetails {\n    /// Returns true when no prompt token detail values are set.\n    pub fn is_empty(&self) -> bool {\n        self.audio_tokens.is_none() && self.cached_tokens.is_none()\n    }\n}\n\n/// OpenAI-style completion token details (chat completions).\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct CompletionTokensDetails {\n    /// Tokens accepted from predicted output, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub accepted_prediction_tokens: Option<u32>,\n    /// Audio tokens generated by the model, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub audio_tokens: Option<u32>,\n    /// Tokens generated by the model for reasoning, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub reasoning_tokens: Option<u32>,\n    /// Tokens rejected from predicted output, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub rejected_prediction_tokens: Option<u32>,\n}\n\nimpl CompletionTokensDetails {\n    /// Returns true when no completion token detail values are set.\n    pub fn is_empty(&self) -> bool {\n        self.accepted_prediction_tokens.is_none()\n            && self.audio_tokens.is_none()\n            && 
self.reasoning_tokens.is_none()\n            && self.rejected_prediction_tokens.is_none()\n    }\n}\n\n/// OpenAI-style input token details (Responses API).\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct InputTokenDetails {\n    /// Tokens retrieved from cache, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub cached_tokens: Option<u32>,\n}\n\n/// OpenAI-style output token details (Responses API).\n#[derive(Clone, Default, Debug, Serialize, Deserialize, PartialEq)]\npub struct OutputTokenDetails {\n    /// Tokens used for reasoning, when provided.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub reasoning_tokens: Option<u32>,\n}\n\n#[cfg(feature = \"openai\")]\nmod openai_usage {\n    use super::{\n        CompletionTokensDetails, InputTokenDetails, OutputTokenDetails, PromptTokensDetails, Usage,\n        UsageDetails,\n    };\n    use async_openai::types::{\n        chat::CompletionUsage, embeddings::EmbeddingUsage, responses::ResponseUsage,\n    };\n\n    impl From<&CompletionUsage> for Usage {\n        fn from(usage: &CompletionUsage) -> Self {\n            let prompt_details = usage.prompt_tokens_details.as_ref().and_then(|details| {\n                let details = PromptTokensDetails {\n                    audio_tokens: details.audio_tokens,\n                    cached_tokens: details.cached_tokens,\n                };\n                if details.is_empty() {\n                    None\n                } else {\n                    Some(details)\n                }\n            });\n            let completion_details = usage\n                .completion_tokens_details\n                .as_ref()\n                .and_then(|details| {\n                    let details = CompletionTokensDetails {\n                        accepted_prediction_tokens: details.accepted_prediction_tokens,\n                        audio_tokens: details.audio_tokens,\n                        reasoning_tokens: 
details.reasoning_tokens,\n                        rejected_prediction_tokens: details.rejected_prediction_tokens,\n                    };\n                    if details.is_empty() {\n                        None\n                    } else {\n                        Some(details)\n                    }\n                });\n            let details = if prompt_details.is_some() || completion_details.is_some() {\n                Some(UsageDetails {\n                    prompt_tokens_details: prompt_details,\n                    completion_tokens_details: completion_details,\n                    input_tokens_details: None,\n                    output_tokens_details: None,\n                })\n            } else {\n                None\n            };\n\n            Usage {\n                prompt_tokens: usage.prompt_tokens,\n                completion_tokens: usage.completion_tokens,\n                total_tokens: usage.total_tokens,\n                details,\n            }\n        }\n    }\n\n    impl From<&ResponseUsage> for Usage {\n        fn from(usage: &ResponseUsage) -> Self {\n            Usage {\n                prompt_tokens: usage.input_tokens,\n                completion_tokens: usage.output_tokens,\n                total_tokens: usage.total_tokens,\n                details: Some(UsageDetails {\n                    prompt_tokens_details: None,\n                    completion_tokens_details: None,\n                    input_tokens_details: Some(InputTokenDetails {\n                        cached_tokens: Some(usage.input_tokens_details.cached_tokens),\n                    }),\n                    output_tokens_details: Some(OutputTokenDetails {\n                        reasoning_tokens: Some(usage.output_tokens_details.reasoning_tokens),\n                    }),\n                }),\n            }\n        }\n    }\n\n    impl From<&EmbeddingUsage> for Usage {\n        fn from(usage: &EmbeddingUsage) -> Self {\n            Usage {\n                
prompt_tokens: usage.prompt_tokens,\n                completion_tokens: 0,\n                total_tokens: usage.total_tokens,\n                details: None,\n            }\n        }\n    }\n}\n\n#[derive(Clone, Builder, Debug, Serialize, Deserialize, PartialEq)]\npub struct ChatCompletionResponseDelta {\n    #[builder(default)]\n    pub message_chunk: Option<String>,\n\n    #[builder(default)]\n    pub tool_calls_chunk: Option<HashMap<usize, ToolCallAccum>>,\n}\n\n/// Accumulator for streamed tool calls.\n#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]\npub struct ToolCallAccum {\n    pub id: Option<String>,\n    pub name: Option<String>,\n    pub arguments: Option<String>,\n}\n\nimpl ChatCompletionResponse {\n    pub fn builder() -> ChatCompletionResponseBuilder {\n        ChatCompletionResponseBuilder::default()\n    }\n\n    pub fn message(&self) -> Option<&str> {\n        self.message.as_deref()\n    }\n\n    pub fn tool_calls(&self) -> Option<&[ToolCall]> {\n        self.tool_calls.as_deref()\n    }\n\n    /// Appends a streaming chunk to the accumulated message and records it in the delta.\n    pub fn append_message_delta(&mut self, message_delta: Option<&str>) -> &mut Self {\n        let Some(message_delta) = message_delta else {\n            return self;\n        };\n\n        if let Some(delta) = &mut self.delta {\n            delta.message_chunk = Some(message_delta.to_string());\n        } else {\n            self.delta = Some(ChatCompletionResponseDelta {\n                message_chunk: Some(message_delta.to_string()),\n                tool_calls_chunk: None,\n            });\n        }\n\n        self.message\n            .as_mut()\n            .map(|m| {\n                m.push_str(message_delta);\n            })\n            .unwrap_or_else(|| {\n                self.message = Some(message_delta.to_string());\n            });\n        self\n    }\n\n    /// Appends a streaming chunk to the tool calls. If the accumulated chunks form a complete\n    /// tool call it is built; otherwise it remains in the delta and is retried on the next call.\n    pub fn append_tool_call_delta(\n        &mut self,\n        index: usize,\n        id: Option<&str>,\n        name: Option<&str>,\n        arguments: Option<&str>,\n    ) -> &mut Self {\n        if let Some(delta) = &mut self.delta {\n            let map = delta.tool_calls_chunk.get_or_insert_with(HashMap::new);\n            map.entry(index)\n                .and_modify(|v| {\n                    if v.id.is_none() {\n                        v.id = id.map(Into::into);\n                    }\n                    if v.name.is_none() {\n                        v.name = name.map(Into::into);\n                    }\n                    if let Some(v) = v.arguments.as_mut() {\n                        if let Some(arguments) = arguments {\n                            v.push_str(arguments);\n                        }\n                    } else {\n                        v.arguments = arguments.map(Into::into);\n                    }\n                })\n                .or_insert(ToolCallAccum {\n                    id: id.map(Into::into),\n                    name: name.map(Into::into),\n                    arguments: arguments.map(Into::into),\n                });\n        } else {\n            self.delta = Some(ChatCompletionResponseDelta {\n                message_chunk: None,\n                tool_calls_chunk: Some(HashMap::from([(\n                    index,\n                    ToolCallAccum {\n                        id: id.map(Into::into),\n                        name: name.map(Into::into),\n                        arguments: arguments.map(Into::into),\n                    },\n                )])),\n            });\n        }\n\n        // Rebuild every tool call from the accumulated chunks and overwrite the result.\n        // Not the most efficient approach, but in practice there are only a few tool calls.\n        self.finalize_tools_from_stream();\n\n        
self\n    }\n\n    pub fn append_usage_delta(\n        &mut self,\n        prompt_tokens: u32,\n        completion_tokens: u32,\n        total_tokens: u32,\n    ) -> &mut Self {\n        debug_assert!(prompt_tokens + completion_tokens == total_tokens);\n\n        if let Some(usage) = &mut self.usage {\n            usage.prompt_tokens += prompt_tokens;\n            usage.completion_tokens += completion_tokens;\n            usage.total_tokens += total_tokens;\n        } else {\n            self.usage = Some(Usage {\n                prompt_tokens,\n                completion_tokens,\n                total_tokens,\n                details: None,\n            });\n        }\n        self\n    }\n\n    fn finalize_tools_from_stream(&mut self) {\n        if let Some(values) = self\n            .delta\n            .as_ref()\n            .and_then(|d| d.tool_calls_chunk.as_ref().map(|t| t.values()))\n        {\n            let maybe_tool_calls = values\n                .filter_map(|maybe_tool_call| {\n                    ToolCallBuilder::default()\n                        .maybe_id(maybe_tool_call.id.clone())\n                        .maybe_name(maybe_tool_call.name.clone())\n                        .maybe_args(maybe_tool_call.arguments.clone())\n                        .build()\n                        .ok()\n                })\n                .collect::<Vec<_>>();\n\n            if !maybe_tool_calls.is_empty() {\n                self.tool_calls = Some(maybe_tool_calls);\n            }\n        }\n    }\n}\n\nimpl ChatCompletionResponseBuilder {\n    pub fn maybe_message<T: Into<Option<String>>>(&mut self, message: T) -> &mut Self {\n        self.message = Some(message.into());\n        self\n    }\n\n    pub fn maybe_tool_calls<T: Into<Option<Vec<ToolCall>>>>(&mut self, tool_calls: T) -> &mut Self {\n        self.tool_calls = Some(tool_calls.into());\n        self\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/chat_message.rs",
    "content": "use std::borrow::Cow;\n\nuse serde::{Deserialize, Serialize};\n\nuse super::tools::{ToolCall, ToolOutput};\n\n/// Reasoning items returned by chat providers that expose chain-of-thought metadata.\n#[derive(Clone, PartialEq, Debug, Serialize, Deserialize, Default)]\npub struct ReasoningItem {\n    /// Unique identifier for this reasoning item\n    pub id: String,\n    /// Reasoning summary content\n    pub summary: Vec<String>,\n    /// Reasoning text content\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    pub content: Option<Vec<String>>,\n    #[serde(default, skip_serializing_if = \"Option::is_none\")]\n    pub encrypted_content: Option<String>,\n    /// The status of the item. One of `in_progress`, `completed`, or `incomplete`.\n    /// Populated when items are returned via API.\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    pub status: Option<ReasoningStatus>,\n}\n\n#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq)]\n#[serde(rename_all = \"snake_case\")]\npub enum ReasoningStatus {\n    InProgress,\n    Completed,\n    Incomplete,\n}\n\n#[derive(Clone, PartialEq, Serialize, Deserialize)]\n#[serde(tag = \"type\", rename_all = \"snake_case\")]\npub enum ChatMessageContentSource {\n    Url {\n        url: String,\n    },\n    Bytes {\n        data: Vec<u8>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        media_type: Option<String>,\n    },\n    S3 {\n        uri: String,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        bucket_owner: Option<String>,\n    },\n    FileId {\n        file_id: String,\n    },\n}\n\nimpl ChatMessageContentSource {\n    pub fn url(url: impl Into<String>) -> Self {\n        Self::Url { url: url.into() }\n    }\n\n    pub fn bytes<M>(data: impl Into<Vec<u8>>, media_type: Option<M>) -> Self\n    where\n        M: Into<String>,\n    {\n        Self::Bytes {\n            data: data.into(),\n            media_type: 
media_type.map(Into::into),\n        }\n    }\n\n    pub fn s3<O>(uri: impl Into<String>, bucket_owner: Option<O>) -> Self\n    where\n        O: Into<String>,\n    {\n        Self::S3 {\n            uri: uri.into(),\n            bucket_owner: bucket_owner.map(Into::into),\n        }\n    }\n\n    pub fn file_id(file_id: impl Into<String>) -> Self {\n        Self::FileId {\n            file_id: file_id.into(),\n        }\n    }\n}\n\nimpl From<String> for ChatMessageContentSource {\n    fn from(value: String) -> Self {\n        Self::Url { url: value }\n    }\n}\n\nimpl From<&str> for ChatMessageContentSource {\n    fn from(value: &str) -> Self {\n        Self::Url {\n            url: value.to_owned(),\n        }\n    }\n}\n\nimpl From<Vec<u8>> for ChatMessageContentSource {\n    fn from(value: Vec<u8>) -> Self {\n        Self::Bytes {\n            data: value,\n            media_type: None,\n        }\n    }\n}\n\nimpl From<&[u8]> for ChatMessageContentSource {\n    fn from(value: &[u8]) -> Self {\n        Self::Bytes {\n            data: value.to_vec(),\n            media_type: None,\n        }\n    }\n}\n\nimpl std::fmt::Debug for ChatMessageContentSource {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            ChatMessageContentSource::Url { url } => f\n                .debug_struct(\"Url\")\n                .field(\"url\", &truncate_data_url(url))\n                .finish(),\n            ChatMessageContentSource::Bytes { data, media_type } => f\n                .debug_struct(\"Bytes\")\n                .field(\"len\", &data.len())\n                .field(\"media_type\", media_type)\n                .finish(),\n            ChatMessageContentSource::S3 { uri, bucket_owner } => f\n                .debug_struct(\"S3\")\n                .field(\"uri\", uri)\n                .field(\"bucket_owner\", bucket_owner)\n                .finish(),\n            ChatMessageContentSource::FileId { file_id } => {\n          
      f.debug_struct(\"FileId\").field(\"file_id\", file_id).finish()\n            }\n        }\n    }\n}\n\n#[derive(Clone, PartialEq, Serialize, Deserialize)]\n#[serde(tag = \"type\", rename_all = \"snake_case\")]\npub enum ChatMessageContentPart {\n    Text {\n        text: String,\n    },\n    Image {\n        source: ChatMessageContentSource,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        format: Option<String>,\n    },\n    Document {\n        source: ChatMessageContentSource,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        format: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        name: Option<String>,\n    },\n    Audio {\n        source: ChatMessageContentSource,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        format: Option<String>,\n    },\n    Video {\n        source: ChatMessageContentSource,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        format: Option<String>,\n    },\n}\n\nimpl ChatMessageContentPart {\n    pub fn text(text: impl Into<String>) -> Self {\n        Self::Text { text: text.into() }\n    }\n\n    pub fn image(source: impl Into<ChatMessageContentSource>) -> Self {\n        Self::Image {\n            source: source.into(),\n            format: None,\n        }\n    }\n\n    pub fn image_with_format(\n        source: impl Into<ChatMessageContentSource>,\n        format: impl Into<String>,\n    ) -> Self {\n        Self::Image {\n            source: source.into(),\n            format: Some(format.into()),\n        }\n    }\n\n    pub fn document(source: impl Into<ChatMessageContentSource>) -> Self {\n        Self::Document {\n            source: source.into(),\n            format: None,\n            name: None,\n        }\n    }\n\n    pub fn document_with_name(\n        source: impl Into<ChatMessageContentSource>,\n        name: impl Into<String>,\n    ) -> Self {\n        
Self::Document {\n            source: source.into(),\n            format: None,\n            name: Some(name.into()),\n        }\n    }\n\n    pub fn audio(source: impl Into<ChatMessageContentSource>) -> Self {\n        Self::Audio {\n            source: source.into(),\n            format: None,\n        }\n    }\n\n    pub fn video(source: impl Into<ChatMessageContentSource>) -> Self {\n        Self::Video {\n            source: source.into(),\n            format: None,\n        }\n    }\n}\n\nimpl std::fmt::Debug for ChatMessageContentPart {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            ChatMessageContentPart::Text { text } => {\n                f.debug_struct(\"Text\").field(\"text\", text).finish()\n            }\n            ChatMessageContentPart::Image { source, format } => f\n                .debug_struct(\"Image\")\n                .field(\"source\", source)\n                .field(\"format\", format)\n                .finish(),\n            ChatMessageContentPart::Document {\n                source,\n                format,\n                name,\n            } => f\n                .debug_struct(\"Document\")\n                .field(\"source\", source)\n                .field(\"format\", format)\n                .field(\"name\", name)\n                .finish(),\n            ChatMessageContentPart::Audio { source, format } => f\n                .debug_struct(\"Audio\")\n                .field(\"source\", source)\n                .field(\"format\", format)\n                .finish(),\n            ChatMessageContentPart::Video { source, format } => f\n                .debug_struct(\"Video\")\n                .field(\"source\", source)\n                .field(\"format\", format)\n                .finish(),\n        }\n    }\n}\n\n#[derive(Clone, strum_macros::EnumIs, PartialEq, Debug, Serialize, Deserialize)]\npub enum ChatMessage {\n    System(String),\n    User(String),\n    
UserWithParts(Vec<ChatMessageContentPart>),\n    Assistant(Option<String>, Option<Vec<ToolCall>>),\n    ToolOutput(ToolCall, ToolOutput),\n    Reasoning(ReasoningItem),\n\n    /// A summary of the chat. If encountered, all previous messages are ignored except the\n    /// system prompt.\n    Summary(String),\n}\n\nimpl std::fmt::Display for ChatMessage {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Self::System(s) => write!(f, \"System: \\\"{s}\\\"\"),\n            Self::User(s) => write!(f, \"User: \\\"{s}\\\"\"),\n            Self::UserWithParts(parts) => {\n                let (text, attachments) = summarize_user_parts(parts);\n                if attachments == 0 {\n                    write!(f, \"User: \\\"{text}\\\"\")\n                } else {\n                    write!(f, \"User: \\\"{text}\\\", attachments: {attachments}\")\n                }\n            }\n            Self::Assistant(content, tool_calls) => write!(\n                f,\n                \"Assistant: \\\"{}\\\", tools: {}\",\n                content.as_deref().unwrap_or(\"None\"),\n                tool_calls.as_deref().map_or(\"None\".to_string(), |tc| {\n                    tc.iter()\n                        .map(ToString::to_string)\n                        .collect::<Vec<_>>()\n                        .join(\", \")\n                })\n            ),\n            Self::ToolOutput(tc, to) => write!(f, \"ToolOutput: \\\"{tc}\\\": \\\"{to}\\\"\"),\n            Self::Reasoning(item) => write!(\n                f,\n                \"Reasoning: \\\"{}\\\", encrypted: {}\",\n                item.summary.join(\"\\n\"),\n                item.encrypted_content.is_some()\n            ),\n            Self::Summary(s) => write!(f, \"Summary: \\\"{s}\\\"\"),\n        }\n    }\n}\n\nimpl ChatMessage {\n    pub fn new_system(message: impl Into<String>) -> Self {\n        Self::System(message.into())\n    }\n\n    pub fn new_user(message: impl Into<String>) -> Self {\n        Self::User(message.into())\n    }\n\n    pub fn new_user_with_parts(parts: impl Into<Vec<ChatMessageContentPart>>) -> Self {\n        Self::UserWithParts(parts.into())\n    }\n\n    pub fn new_assistant(\n        message: Option<impl Into<String>>,\n        tool_calls: Option<Vec<ToolCall>>,\n    ) -> Self {\n        Self::Assistant(message.map(Into::into), tool_calls)\n    }\n\n    pub fn new_tool_output(tool_call: impl Into<ToolCall>, output: impl Into<ToolOutput>) -> Self {\n        Self::ToolOutput(tool_call.into(), output.into())\n    }\n\n    pub fn new_reasoning(message: ReasoningItem) -> Self {\n        Self::Reasoning(message)\n    }\n\n    pub fn new_summary(message: impl Into<String>) -> Self {\n        Self::Summary(message.into())\n    }\n\n    #[must_use]\n    pub fn to_owned(&self) -> Self {\n        self.clone()\n    }\n}\n\n/// Returns the content of the message as a string slice.\n///\n/// Note that this omits the tool calls from the assistant message.\n///\n/// If used for estimating tokens, consider this a very rough estimate.\nimpl AsRef<str> for ChatMessage {\n    fn as_ref(&self) -> &str {\n        match self {\n            Self::System(s) | Self::User(s) | Self::Summary(s) => s,\n            Self::UserWithParts(parts) => match parts.as_slice() {\n                [ChatMessageContentPart::Text { text }] => text.as_ref(),\n                _ => \"\",\n            },\n            Self::Assistant(message, _) => message.as_deref().unwrap_or(\"\"),\n            Self::ToolOutput(_, output) => output.content().unwrap_or(\"\"),\n            Self::Reasoning(_) => \"\",\n        }\n    }\n}\n\nfn summarize_user_parts(parts: &[ChatMessageContentPart]) -> (String, usize) {\n    let mut text_parts = Vec::new();\n    let mut attachments = 0;\n    for part in parts {\n        match part {\n            ChatMessageContentPart::Text { text } => text_parts.push(text.as_str()),\n            ChatMessageContentPart::Image { .. 
}\n            | ChatMessageContentPart::Document { .. }\n            | ChatMessageContentPart::Audio { .. }\n            | ChatMessageContentPart::Video { .. } => attachments += 1,\n        }\n    }\n    (text_parts.join(\" \"), attachments)\n}\n\nfn truncate_data_url(url: &str) -> Cow<'_, str> {\n    const MAX_DATA_PREVIEW: usize = 32;\n\n    if !url.starts_with(\"data:\") {\n        return Cow::Borrowed(url);\n    }\n\n    let Some((prefix, data)) = url.split_once(',') else {\n        return Cow::Borrowed(url);\n    };\n\n    if data.len() <= MAX_DATA_PREVIEW {\n        return Cow::Borrowed(url);\n    }\n\n    let preview = &data[..MAX_DATA_PREVIEW];\n    let truncated = data.len() - MAX_DATA_PREVIEW;\n\n    Cow::Owned(format!(\n        \"{prefix},{preview}...[truncated {truncated} chars]\"\n    ))\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/errors.rs",
"content": "use std::borrow::Cow;\n\nuse thiserror::Error;\n\nuse crate::CommandError;\n\nuse super::ChatCompletionStream;\n\n/// A `ToolError` is an error that occurs when a tool is invoked.\n///\n/// Depending on the agent configuration, the tool might be retried with feedback to the LLM, up to\n/// a limit.\n#[derive(Error, Debug)]\npub enum ToolError {\n    /// E.g. the LLM calls the tool with the wrong arguments\n    #[error(\"arguments for tool failed to parse: {0:#}\")]\n    WrongArguments(#[from] serde_json::Error),\n\n    /// Tool requires arguments but none were provided\n    #[error(\"arguments missing for tool {0:#}\")]\n    MissingArguments(Cow<'static, str>),\n\n    /// Tool execution failed\n    #[error(\"tool execution failed: {0:#}\")]\n    ExecutionFailed(#[from] CommandError),\n\n    #[error(transparent)]\n    Unknown(#[from] anyhow::Error),\n}\n\nimpl ToolError {\n    /// Tool received arguments that it could not parse\n    pub fn wrong_arguments(e: impl Into<serde_json::Error>) -> Self {\n        ToolError::WrongArguments(e.into())\n    }\n\n    /// Tool is missing required arguments\n    pub fn missing_arguments(tool_name: impl Into<Cow<'static, str>>) -> Self {\n        ToolError::MissingArguments(tool_name.into())\n    }\n\n    /// Tool execution failed\n    pub fn execution_failed(e: impl Into<CommandError>) -> Self {\n        ToolError::ExecutionFailed(e.into())\n    }\n\n    /// Tool failed with an unknown error\n    pub fn unknown(e: impl Into<anyhow::Error>) -> Self {\n        ToolError::Unknown(e.into())\n    }\n}\n\ntype BoxedError = Box<dyn std::error::Error + Send + Sync>;\n\n#[derive(Error, Debug)]\npub enum LanguageModelError {\n    #[error(\"Context length exceeded: {0:#}\")]\n    ContextLengthExceeded(BoxedError),\n    #[error(\"Permanent error: {0:#}\")]\n    PermanentError(BoxedError),\n    #[error(\"Transient error: {0:#}\")]\n    TransientError(BoxedError),\n}\n\nimpl LanguageModelError {\n    pub fn permanent(e: impl 
Into<BoxedError>) -> Self {\n        LanguageModelError::PermanentError(e.into())\n    }\n\n    pub fn transient(e: impl Into<BoxedError>) -> Self {\n        LanguageModelError::TransientError(e.into())\n    }\n\n    pub fn context_length_exceeded(e: impl Into<BoxedError>) -> Self {\n        LanguageModelError::ContextLengthExceeded(e.into())\n    }\n}\n\nimpl From<BoxedError> for LanguageModelError {\n    fn from(e: BoxedError) -> Self {\n        LanguageModelError::PermanentError(e)\n    }\n}\n\nimpl From<anyhow::Error> for LanguageModelError {\n    fn from(e: anyhow::Error) -> Self {\n        LanguageModelError::PermanentError(e.into())\n    }\n}\n\n// Make it easier to use the error in streaming functions\n\nimpl From<LanguageModelError> for ChatCompletionStream {\n    fn from(val: LanguageModelError) -> Self {\n        Box::pin(futures_util::stream::once(async move { Err(val) }))\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/mod.rs",
"content": "//! This module enables the implementation of chat completion on LLM providers\n//!\n//! The main trait to implement is `ChatCompletion`, which takes a `ChatCompletionRequest` and\n//! returns a `ChatCompletionResponse`.\n//!\n//! A chat completion request comprises a list of `ChatMessage` to complete, optionally with\n//! tool specifications. The builder accepts either owned or borrowed messages and\n//! provides `tools(...)` while still exposing `tool_specs` for compatibility.\nmod chat_completion_request;\nmod chat_completion_response;\nmod chat_message;\npub mod errors;\nmod tool_schema;\nmod tools;\n\n// Re-exported in the root per convention\npub mod traits;\n\npub use chat_completion_request::*;\npub use chat_completion_response::*;\npub use chat_message::*;\npub use tools::*;\npub use traits::*;\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/tool_schema.rs",
    "content": "use schemars::Schema;\nuse serde_json::{Map, Value, json};\nuse thiserror::Error;\n\n#[derive(Clone, Debug, PartialEq)]\npub struct StrictToolParametersSchema {\n    document: Value,\n}\n\n#[derive(Debug, Error)]\npub enum ToolSchemaError {\n    #[error(\"failed to serialize tool schema\")]\n    SerializeSchema(#[from] serde_json::Error),\n    #[error(\"tool schema must be a JSON object\")]\n    RootMustBeObject,\n    #[error(\"tool schema node at {path} must be a JSON object\")]\n    NodeMustBeObject { path: String },\n    #[error(\"tool schema map at {path} must be a JSON object\")]\n    NodeMapMustBeObject { path: String },\n    #[error(\"tool schema required must be an array at {path}\")]\n    RequiredMustBeArray { path: String },\n    #[error(\n        \"strict tool schemas do not support patternProperties at {path}; define explicit properties instead\"\n    )]\n    PatternPropertiesUnsupported { path: String },\n    #[error(\n        \"strict tool schemas do not support propertyNames at {path}; define explicit properties instead\"\n    )]\n    PropertyNamesUnsupported { path: String },\n    #[error(\n        \"strict tool schemas do not support open object schemas at {path}; define explicit properties instead\"\n    )]\n    OpenObjectUnsupported { path: String },\n    #[error(\n        \"strict tool schemas do not support schema-valued additionalProperties at {path}; define explicit properties instead\"\n    )]\n    SchemaValuedAdditionalPropertiesUnsupported { path: String },\n    #[error(\"strict tool schemas do not support {kind}-valued additionalProperties at {path}\")]\n    InvalidAdditionalProperties { path: String, kind: &'static str },\n    #[error(\"strict tool schemas do not support $ref siblings {keywords} at {path}\")]\n    UnsupportedRefSiblingKeywords { path: String, keywords: String },\n}\n\nimpl StrictToolParametersSchema {\n    pub(super) fn try_from_raw(schema: Option<&Schema>) -> Result<Self, ToolSchemaError> {\n        let 
raw = match schema {\n            Some(schema) => serde_json::to_value(schema)?,\n            None => json!({}),\n        };\n\n        let root = raw.as_object().ok_or(ToolSchemaError::RootMustBeObject)?;\n\n        Ok(Self {\n            document: Value::Object(parse_schema_object(root, &SchemaPath::root(), true)?),\n        })\n    }\n\n    pub fn into_json(self) -> Value {\n        self.document\n    }\n\n    pub fn as_json(&self) -> &Value {\n        &self.document\n    }\n}\n\nfn parse_schema_value(value: &Value, path: &SchemaPath) -> Result<Value, ToolSchemaError> {\n    let object = value\n        .as_object()\n        .ok_or_else(|| ToolSchemaError::NodeMustBeObject {\n            path: path.to_string(),\n        })?;\n\n    Ok(Value::Object(parse_schema_object(object, path, false)?))\n}\n\nfn parse_schema_object(\n    schema: &Map<String, Value>,\n    path: &SchemaPath,\n    force_object: bool,\n) -> Result<Map<String, Value>, ToolSchemaError> {\n    let schema = normalize_schema_object(schema, path)?;\n\n    if force_object || schema_is_object(&schema) {\n        parse_object_schema(&schema, path)\n    } else {\n        parse_non_object_schema(&schema, path)\n    }\n}\n\nfn normalize_schema_object(\n    schema: &Map<String, Value>,\n    path: &SchemaPath,\n) -> Result<Map<String, Value>, ToolSchemaError> {\n    let mut normalized = schema.clone();\n    rewrite_nullable_type_union(&mut normalized);\n    rewrite_nullable_one_of(&mut normalized);\n    strip_ref_annotation_siblings(&mut normalized, path)?;\n    Ok(normalized)\n}\n\nfn rewrite_nullable_type_union(schema: &mut Map<String, Value>) {\n    let Some(entries) = schema.get(\"type\").and_then(Value::as_array) else {\n        return;\n    };\n\n    let Some(non_null_type) = nullable_type_union(entries).map(str::to_owned) else {\n        return;\n    };\n\n    let mut non_null_branch = schema.clone();\n    non_null_branch.insert(\"type\".to_string(), Value::String(non_null_type));\n    let annotations 
= extract_schema_annotations(schema);\n\n    for key in schema_annotation_keys() {\n        non_null_branch.remove(*key);\n    }\n\n    schema.clear();\n    schema.extend(annotations);\n    schema.insert(\n        \"anyOf\".to_string(),\n        Value::Array(vec![\n            Value::Object(non_null_branch),\n            json!({ \"type\": \"null\" }),\n        ]),\n    );\n}\n\nfn rewrite_nullable_one_of(schema: &mut Map<String, Value>) {\n    let Some(entries) = schema.get(\"oneOf\").and_then(Value::as_array).cloned() else {\n        return;\n    };\n\n    if is_nullable_union(&entries) {\n        schema.remove(\"oneOf\");\n        schema.insert(\"anyOf\".to_string(), Value::Array(entries));\n    }\n}\n\nfn is_nullable_union(entries: &[Value]) -> bool {\n    entries.len() == 2 && entries.iter().any(is_null_schema)\n}\n\nfn nullable_type_union(entries: &[Value]) -> Option<&str> {\n    if entries.len() != 2 {\n        return None;\n    }\n\n    let mut non_null = None;\n\n    for entry in entries {\n        let kind = entry.as_str()?;\n        if kind == \"null\" {\n            continue;\n        }\n\n        if non_null.is_some() {\n            return None;\n        }\n\n        non_null = Some(kind);\n    }\n\n    non_null\n}\n\nfn is_null_schema(value: &Value) -> bool {\n    matches!(\n        value,\n        Value::Object(object) if matches!(object.get(\"type\"), Some(Value::String(kind)) if kind == \"null\")\n    )\n}\n\nfn extract_schema_annotations(schema: &Map<String, Value>) -> Map<String, Value> {\n    schema_annotation_keys()\n        .iter()\n        .filter_map(|key| {\n            schema\n                .get(*key)\n                .cloned()\n                .map(|value| ((*key).to_string(), value))\n        })\n        .collect()\n}\n\nfn schema_annotation_keys() -> &'static [&'static str] {\n    &[\n        \"description\",\n        \"title\",\n        \"default\",\n        \"examples\",\n        \"deprecated\",\n        \"readOnly\",\n        
\"writeOnly\",\n    ]\n}\n\nfn strip_ref_annotation_siblings(\n    schema: &mut Map<String, Value>,\n    path: &SchemaPath,\n) -> Result<(), ToolSchemaError> {\n    const SAFE_REF_ANNOTATIONS: &[&str] = &[\n        \"description\",\n        \"title\",\n        \"default\",\n        \"examples\",\n        \"deprecated\",\n        \"readOnly\",\n        \"writeOnly\",\n    ];\n\n    if !schema.contains_key(\"$ref\") {\n        return Ok(());\n    }\n\n    let mut unsupported = Vec::new();\n    let sibling_keys = schema\n        .keys()\n        .filter(|key| key.as_str() != \"$ref\")\n        .cloned()\n        .collect::<Vec<_>>();\n\n    for key in sibling_keys {\n        if SAFE_REF_ANNOTATIONS.contains(&key.as_str()) {\n            schema.remove(&key);\n        } else {\n            unsupported.push(key);\n        }\n    }\n\n    if unsupported.is_empty() {\n        Ok(())\n    } else {\n        Err(ToolSchemaError::UnsupportedRefSiblingKeywords {\n            path: path.to_string(),\n            keywords: unsupported.join(\", \"),\n        })\n    }\n}\n\nfn parse_object_schema(\n    schema: &Map<String, Value>,\n    path: &SchemaPath,\n) -> Result<Map<String, Value>, ToolSchemaError> {\n    if schema.get(\"patternProperties\").is_some() {\n        return Err(ToolSchemaError::PatternPropertiesUnsupported {\n            path: path.to_string(),\n        });\n    }\n\n    if schema.get(\"propertyNames\").is_some() {\n        return Err(ToolSchemaError::PropertyNamesUnsupported {\n            path: path.to_string(),\n        });\n    }\n\n    match schema.get(\"additionalProperties\") {\n        Some(Value::Bool(true)) => {\n            return Err(ToolSchemaError::OpenObjectUnsupported {\n                path: path.to_string(),\n            });\n        }\n        Some(Value::Object(_)) => {\n            return Err(\n                ToolSchemaError::SchemaValuedAdditionalPropertiesUnsupported {\n                    path: path.to_string(),\n                },\n       
     );\n        }\n        Some(Value::Array(_)) => {\n            return Err(ToolSchemaError::InvalidAdditionalProperties {\n                path: path.to_string(),\n                kind: \"array\",\n            });\n        }\n        Some(Value::Null) => {\n            return Err(ToolSchemaError::InvalidAdditionalProperties {\n                path: path.to_string(),\n                kind: \"null\",\n            });\n        }\n        Some(Value::String(_) | Value::Number(_)) => {\n            return Err(ToolSchemaError::InvalidAdditionalProperties {\n                path: path.to_string(),\n                kind: \"scalar\",\n            });\n        }\n        Some(Value::Bool(false)) | None => {}\n    }\n\n    let mut parsed = schema.clone();\n    parsed.insert(\"type\".to_string(), Value::String(\"object\".to_string()));\n    parsed.insert(\"additionalProperties\".to_string(), Value::Bool(false));\n    parsed.insert(\n        \"properties\".to_string(),\n        Value::Object(parse_schema_map(\n            schema.get(\"properties\"),\n            &path.with_key(\"properties\"),\n        )?),\n    );\n\n    if let Some(required) = schema.get(\"required\")\n        && !required.is_array()\n    {\n        return Err(ToolSchemaError::RequiredMustBeArray {\n            path: path.with_key(\"required\").to_string(),\n        });\n    }\n\n    recurse_schema_children(schema, &mut parsed, path)?;\n    Ok(parsed)\n}\n\nfn parse_non_object_schema(\n    schema: &Map<String, Value>,\n    path: &SchemaPath,\n) -> Result<Map<String, Value>, ToolSchemaError> {\n    let mut parsed = schema.clone();\n    recurse_schema_children(schema, &mut parsed, path)?;\n    Ok(parsed)\n}\n\nfn recurse_schema_children(\n    source: &Map<String, Value>,\n    target: &mut Map<String, Value>,\n    path: &SchemaPath,\n) -> Result<(), ToolSchemaError> {\n    for key in [\"items\", \"contains\", \"if\", \"then\", \"else\", \"not\"] {\n        if let Some(schema) = source.get(key) {\n            
target.insert(\n                key.to_string(),\n                parse_schema_value(schema, &path.with_key(key))?,\n            );\n        }\n    }\n\n    for key in [\"anyOf\", \"oneOf\", \"allOf\", \"prefixItems\"] {\n        if let Some(entries) = source.get(key).and_then(Value::as_array) {\n            target.insert(\n                key.to_string(),\n                Value::Array(\n                    entries\n                        .iter()\n                        .enumerate()\n                        .map(|(index, schema)| {\n                            parse_schema_value(schema, &path.with_index(key, index))\n                        })\n                        .collect::<Result<Vec<_>, _>>()?,\n                ),\n            );\n        }\n    }\n\n    for key in [\"properties\", \"$defs\", \"definitions\", \"dependentSchemas\"] {\n        if let Some(entries) = source.get(key) {\n            target.insert(\n                key.to_string(),\n                Value::Object(parse_schema_map(Some(entries), &path.with_key(key))?),\n            );\n        }\n    }\n\n    Ok(())\n}\n\nfn parse_schema_map(\n    value: Option<&Value>,\n    path: &SchemaPath,\n) -> Result<Map<String, Value>, ToolSchemaError> {\n    let Some(value) = value else {\n        return Ok(Map::new());\n    };\n\n    let entries = value\n        .as_object()\n        .ok_or_else(|| ToolSchemaError::NodeMapMustBeObject {\n            path: path.to_string(),\n        })?;\n\n    let mut parsed = Map::new();\n    for (key, schema) in entries {\n        parsed.insert(\n            key.clone(),\n            parse_schema_value(schema, &path.with_key(key))?,\n        );\n    }\n\n    Ok(parsed)\n}\n\nfn schema_is_object(schema: &Map<String, Value>) -> bool {\n    type_includes_object(schema.get(\"type\"))\n        || schema.contains_key(\"properties\")\n        || schema.contains_key(\"additionalProperties\")\n        || schema.contains_key(\"patternProperties\")\n        || 
schema.contains_key(\"propertyNames\")\n}\n\nfn type_includes_object(value: Option<&Value>) -> bool {\n    match value {\n        Some(Value::String(kind)) => kind == \"object\",\n        Some(Value::Array(kinds)) => kinds\n            .iter()\n            .filter_map(Value::as_str)\n            .any(|kind| kind == \"object\"),\n        _ => false,\n    }\n}\n\n#[derive(Clone, Debug)]\npub(super) struct SchemaPath(Vec<String>);\n\nimpl SchemaPath {\n    fn root() -> Self {\n        Self(vec![\"$\".to_string()])\n    }\n\n    fn with_key(&self, key: impl Into<String>) -> Self {\n        let mut path = self.0.clone();\n        path.push(key.into());\n        Self(path)\n    }\n\n    fn with_index(&self, key: impl Into<String>, index: usize) -> Self {\n        let mut path = self.0.clone();\n        path.push(key.into());\n        path.push(index.to_string());\n        Self(path)\n    }\n}\n\nimpl std::fmt::Display for SchemaPath {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}\", self.0.join(\".\"))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    
#[derive(serde::Serialize, serde::Deserialize)]\n    #[serde(transparent)]\n    struct FreeformObject(serde_json::Map<String, Value>);\n\n    impl schemars::JsonSchema for FreeformObject {\n        fn schema_name() -> std::borrow::Cow<'static, str> {\n            \"FreeformObject\".into()\n        }\n\n        fn json_schema(_generator: &mut schemars::SchemaGenerator) -> Schema {\n            serde_json::from_value(json!({\n                \"type\": \"object\",\n                \"additionalProperties\": true\n            }))\n            .expect(\"freeform object schema should serialize\")\n        }\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct CreateViewArgs {\n        request: CreateViewRequest,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct CreateViewRequest {\n        body: FreeformObject,\n    }\n\n    #[test]\n    fn strict_tool_schema_rejects_nested_freeform_object_wrappers() {\n        let error =\n            StrictToolParametersSchema::try_from_raw(Some(&schemars::schema_for!(CreateViewArgs)))\n                .expect_err(\"freeform object should be rejected in strict mode\");\n\n        let message = error.to_string();\n        assert!(message.contains(\"strict tool schemas do not support open object schemas\"));\n        assert!(message.contains(\"FreeformObject\"));\n    }\n\n    #[test]\n    fn strict_tool_schema_rewrites_nullable_type_unions_to_any_of() {\n        let schema: Schema = serde_json::from_value(json!({\n            \"type\": \"object\",\n            \"properties\": {\n                \"body\": {\n                    \"type\": [\"string\", \"null\"]\n                }\n            },\n            \"required\": [\"body\"]\n        }))\n        .expect(\"schema should deserialize\");\n\n        let rendered = StrictToolParametersSchema::try_from_raw(Some(&schema))\n         
   .unwrap()\n            .into_json();\n\n        let body = &rendered[\"properties\"][\"body\"];\n        assert!(body.get(\"type\").is_none());\n        assert!(body.get(\"oneOf\").is_none());\n        assert_eq!(\n            body[\"anyOf\"],\n            Value::Array(vec![json!({ \"type\": \"string\" }), json!({ \"type\": \"null\" })])\n        );\n    }\n\n    #[test]\n    fn strict_tool_schema_rewrites_nullable_one_of_to_any_of() {\n        let schema: Schema = serde_json::from_value(json!({\n            \"type\": \"object\",\n            \"properties\": {\n                \"body\": {\n                    \"oneOf\": [\n                        { \"type\": \"string\" },\n                        { \"type\": \"null\" }\n                    ]\n                }\n            },\n            \"required\": [\"body\"]\n        }))\n        .expect(\"schema should deserialize\");\n\n        let rendered = StrictToolParametersSchema::try_from_raw(Some(&schema))\n            .unwrap()\n            .into_json();\n\n        let body = &rendered[\"properties\"][\"body\"];\n        assert!(body.get(\"type\").is_none());\n        assert!(body.get(\"oneOf\").is_none());\n        assert_eq!(\n            body[\"anyOf\"],\n            Value::Array(vec![json!({ \"type\": \"string\" }), json!({ \"type\": \"null\" })])\n        );\n    }\n\n    #[test]\n    fn strict_tool_schema_strips_ref_annotation_siblings() {\n        let schema: Schema = serde_json::from_value(json!({\n            \"type\": \"object\",\n            \"properties\": {\n                \"request\": {\n                    \"$ref\": \"#/$defs/NestedCommentRequest\",\n                    \"description\": \"A nested payload\"\n                }\n            },\n            \"required\": [\"request\"],\n            \"$defs\": {\n                \"NestedCommentRequest\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"body\": { \"type\": \"string\" }\n     
               },\n                    \"required\": [\"body\"]\n                }\n            }\n        }))\n        .expect(\"schema should deserialize\");\n\n        let rendered = StrictToolParametersSchema::try_from_raw(Some(&schema))\n            .unwrap()\n            .into_json();\n\n        assert_eq!(\n            rendered[\"properties\"][\"request\"],\n            json!({ \"$ref\": \"#/$defs/NestedCommentRequest\" })\n        );\n    }\n\n    #[test]\n    fn strict_tool_schema_preserves_nullable_numeric_constraints_on_the_non_null_branch() {\n        let schema: Schema = serde_json::from_value(json!({\n            \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n            \"type\": \"object\",\n            \"properties\": {\n                \"page_size\": {\n                    \"type\": [\"integer\", \"null\"],\n                    \"format\": \"uint\",\n                    \"minimum\": 0\n                }\n            },\n            \"required\": [\"page_size\"]\n        }))\n        .expect(\"schema should deserialize\");\n\n        let rendered = StrictToolParametersSchema::try_from_raw(Some(&schema))\n            .unwrap()\n            .into_json();\n\n        assert_eq!(\n            rendered.get(\"$schema\"),\n            Some(&json!(\"https://json-schema.org/draft/2020-12/schema\"))\n        );\n        let page_size = &rendered[\"properties\"][\"page_size\"];\n        assert!(page_size.get(\"format\").is_none());\n        assert!(page_size.get(\"minimum\").is_none());\n        assert_eq!(\n            page_size[\"anyOf\"],\n            Value::Array(vec![\n                json!({ \"type\": \"integer\", \"format\": \"uint\", \"minimum\": 0 }),\n                json!({ \"type\": \"null\" })\n            ])\n        );\n    }\n\n    #[test]\n    fn strict_tool_schema_moves_nullable_array_constraints_into_the_array_branch() {\n        let schema: Schema = serde_json::from_value(json!({\n            \"type\": \"object\",\n         
   \"properties\": {\n                \"children\": {\n                    \"type\": [\"array\", \"null\"],\n                    \"items\": { \"type\": \"string\" }\n                }\n            },\n            \"required\": [\"children\"]\n        }))\n        .expect(\"schema should deserialize\");\n\n        let rendered = StrictToolParametersSchema::try_from_raw(Some(&schema))\n            .unwrap()\n            .into_json();\n\n        let children = &rendered[\"properties\"][\"children\"];\n        assert!(children.get(\"items\").is_none());\n        assert_eq!(\n            children[\"anyOf\"],\n            Value::Array(vec![\n                json!({ \"type\": \"array\", \"items\": { \"type\": \"string\" } }),\n                json!({ \"type\": \"null\" })\n            ])\n        );\n    }\n\n    #[test]\n    fn strict_tool_schema_preserves_optional_nested_fields_before_provider_shaping() {\n        let schema = StrictToolParametersSchema::try_from_raw(Some(&schemars::schema_for!(\n            NestedCommentArgs\n        )))\n        .unwrap();\n\n        let rendered = schema.into_json();\n        let nested_ref = rendered[\"properties\"][\"request\"][\"$ref\"]\n            .as_str()\n            .expect(\"nested request should be referenced\");\n        let nested_name = nested_ref\n            .rsplit('/')\n            .next()\n            .expect(\"nested request ref name\");\n\n        assert_eq!(rendered[\"additionalProperties\"], Value::Bool(false));\n        assert!(\n            rendered[\"$defs\"][nested_name].get(\"required\").is_none(),\n            \"provider-neutral parsing should not force optional nested fields into required\"\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/tools.rs",
"content": "use std::cmp::Ordering;\n\nuse derive_builder::Builder;\nuse schemars::Schema;\nuse serde::{Deserialize, Serialize};\nuse serde_json::{Map as JsonMap, Value as JsonValue};\nuse thiserror::Error;\n\npub use super::tool_schema::{StrictToolParametersSchema, ToolSchemaError};\n\n/// Output of a `ToolCall` which will be added as a message for the agent to use.\n#[derive(Debug, Clone, PartialEq, Serialize, Deserialize, strum_macros::EnumIs)]\n#[non_exhaustive]\npub enum ToolOutput {\n    /// Adds the result of the tool call to the messages\n    Text(String),\n\n    /// Indicates that the tool call requires feedback, e.g. in a human-in-the-loop setting\n    FeedbackRequired(Option<serde_json::Value>),\n\n    /// Indicates that the tool call failed, but can be handled by the LLM\n    Fail(String),\n\n    /// Stops an agent with an optional message\n    Stop(Option<serde_json::Value>),\n\n    /// Indicates that the agent failed and should stop\n    AgentFailed(Option<serde_json::Value>),\n}\n\nimpl ToolOutput {\n    pub fn text(text: impl Into<String>) -> Self {\n        ToolOutput::Text(text.into())\n    }\n\n    pub fn feedback_required(feedback: Option<serde_json::Value>) -> Self {\n        ToolOutput::FeedbackRequired(feedback)\n    }\n\n    pub fn stop() -> Self {\n        ToolOutput::Stop(None)\n    }\n\n    pub fn stop_with_args(output: impl Into<serde_json::Value>) -> Self {\n        ToolOutput::Stop(Some(output.into()))\n    }\n\n    pub fn agent_failed(output: impl Into<serde_json::Value>) -> Self {\n        ToolOutput::AgentFailed(Some(output.into()))\n    }\n\n    pub fn fail(text: impl Into<String>) -> Self {\n        ToolOutput::Fail(text.into())\n    }\n\n    pub fn content(&self) -> Option<&str> {\n        match self {\n            ToolOutput::Fail(s) | ToolOutput::Text(s) => Some(s),\n            _ => None,\n        }\n    }\n\n    /// Get the inner text if the output is a `Text` variant.\n    pub fn as_text(&self) -> Option<&str> {\n        match self {\n 
           ToolOutput::Text(s) => Some(s),\n            _ => None,\n        }\n    }\n\n    /// Get the inner text if the output is a `Fail` variant.\n    pub fn as_fail(&self) -> Option<&str> {\n        match self {\n            ToolOutput::Fail(s) => Some(s),\n            _ => None,\n        }\n    }\n\n    /// Get the inner text if the output is a `Stop` variant.\n    pub fn as_stop(&self) -> Option<&serde_json::Value> {\n        match self {\n            ToolOutput::Stop(args) => args.as_ref(),\n            _ => None,\n        }\n    }\n\n    /// Get the inner text if the output is an `AgentFailed` variant.\n    pub fn as_agent_failed(&self) -> Option<&serde_json::Value> {\n        match self {\n            ToolOutput::AgentFailed(args) => args.as_ref(),\n            _ => None,\n        }\n    }\n\n    /// Get the inner feedback if the output is a `FeedbackRequired` variant.\n    pub fn as_feedback_required(&self) -> Option<&serde_json::Value> {\n        match self {\n            ToolOutput::FeedbackRequired(args) => args.as_ref(),\n            _ => None,\n        }\n    }\n}\n\nimpl<S: AsRef<str>> From<S> for ToolOutput {\n    fn from(value: S) -> Self {\n        ToolOutput::Text(value.as_ref().to_string())\n    }\n}\nimpl std::fmt::Display for ToolOutput {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            ToolOutput::Text(value) => write!(f, \"{value}\"),\n            ToolOutput::Fail(value) => write!(f, \"Tool call failed: {value}\"),\n            ToolOutput::Stop(args) => {\n                if let Some(value) = args {\n                    write!(f, \"Stop {value}\")\n                } else {\n                    write!(f, \"Stop\")\n                }\n            }\n            ToolOutput::FeedbackRequired(_) => {\n                write!(f, \"Feedback required\")\n            }\n            ToolOutput::AgentFailed(args) => write!(\n                f,\n                \"Agent failed with output: 
{}\",\n                args.as_ref().unwrap_or_default()\n            ),\n        }\n    }\n}\n\n/// A tool call that can be executed by the executor\n#[derive(Clone, Debug, Builder, PartialEq, Serialize, Deserialize, Eq)]\n#[cfg_attr(feature = \"json-schema\", derive(schemars::JsonSchema))]\n#[builder(setter(into, strip_option))]\npub struct ToolCall {\n    id: String,\n    name: String,\n    #[builder(default)]\n    args: Option<String>,\n}\n\n/// Hash is used for finding tool calls that have been retried by agents\nimpl std::hash::Hash for ToolCall {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name.hash(state);\n        self.args.hash(state);\n    }\n}\n\nimpl std::fmt::Display for ToolCall {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"{id}#{name} {args}\",\n            id = self.id,\n            name = self.name,\n            args = self.args.as_deref().unwrap_or(\"\")\n        )\n    }\n}\n\nimpl ToolCall {\n    pub fn builder() -> ToolCallBuilder {\n        ToolCallBuilder::default()\n    }\n\n    pub fn id(&self) -> &str {\n        &self.id\n    }\n\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    pub fn args(&self) -> Option<&str> {\n        self.args.as_deref()\n    }\n\n    pub fn with_args(&mut self, args: Option<String>) {\n        self.args = args;\n    }\n}\n\nimpl ToolCallBuilder {\n    pub fn maybe_args<T: Into<Option<String>>>(&mut self, args: T) -> &mut Self {\n        self.args = Some(args.into());\n        self\n    }\n\n    pub fn maybe_id<T: Into<Option<String>>>(&mut self, id: T) -> &mut Self {\n        self.id = id.into();\n        self\n    }\n\n    pub fn maybe_name<T: Into<Option<String>>>(&mut self, name: T) -> &mut Self {\n        self.name = name.into();\n        self\n    }\n}\n\n/// A typed tool specification intended to be usable for multiple LLMs\n///\n/// i.e. 
the json spec `OpenAI` uses to define their tools\n#[derive(Clone, Debug, Serialize, Deserialize, Builder, Default)]\n#[builder(setter(into), derive(Debug, Serialize, Deserialize), build_fn(skip))]\n#[cfg_attr(feature = \"json-schema\", derive(schemars::JsonSchema))]\n#[serde(deny_unknown_fields)]\npub struct ToolSpec {\n    /// Name of the tool\n    pub name: String,\n    /// Description passed to the LLM for the tool\n    pub description: String,\n\n    #[builder(default, setter(strip_option))]\n    #[serde(skip_serializing_if = \"Option::is_none\")]\n    /// Optional JSON schema describing the tool arguments\n    pub parameters_schema: Option<Schema>,\n}\n\n#[derive(Debug, Error)]\npub enum ToolSpecError {\n    #[error(transparent)]\n    InvalidParametersSchema(#[from] ToolSchemaError),\n}\n\n#[derive(Debug, Error)]\npub enum ToolSpecBuildError {\n    #[error(\"missing required field `{field}`\")]\n    MissingField { field: &'static str },\n    #[error(transparent)]\n    InvalidParametersSchema(#[from] ToolSchemaError),\n}\n\nimpl ToolSpec {\n    pub fn builder() -> ToolSpecBuilder {\n        ToolSpecBuilder::default()\n    }\n\n    /// Returns the provider-neutral strict parameters schema for this tool.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error when the configured parameters schema is not compatible\n    /// with Swiftide's strict tool-schema contract.\n    pub fn strict_parameters_schema(&self) -> Result<StrictToolParametersSchema, ToolSpecError> {\n        Ok(StrictToolParametersSchema::try_from_raw(\n            self.parameters_schema.as_ref(),\n        )?)\n    }\n\n    /// Returns the provider-neutral strict parameters schema with deterministic JSON key ordering.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error when the configured parameters schema is not compatible\n    /// with Swiftide's strict tool-schema contract.\n    pub fn canonical_parameters_schema_json(&self) -> Result<JsonValue, ToolSpecError> {\n        
Ok(canonicalize_json(\n            self.strict_parameters_schema()?.into_json(),\n        ))\n    }\n}\n\nimpl ToolSpecBuilder {\n    /// Builds a tool specification and validates its parameters schema.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error when a required field is missing or when the provided\n    /// parameters schema is not compatible with Swiftide's strict tool-schema\n    /// contract.\n    pub fn build(&self) -> Result<ToolSpec, ToolSpecBuildError> {\n        let name = self\n            .name\n            .clone()\n            .ok_or(ToolSpecBuildError::MissingField { field: \"name\" })?;\n        let description = self\n            .description\n            .clone()\n            .ok_or(ToolSpecBuildError::MissingField {\n                field: \"description\",\n            })?;\n        let parameters_schema = self.parameters_schema.clone().unwrap_or(None);\n\n        StrictToolParametersSchema::try_from_raw(parameters_schema.as_ref())?;\n\n        Ok(ToolSpec {\n            name,\n            description,\n            parameters_schema,\n        })\n    }\n}\n\nimpl PartialEq for ToolSpec {\n    fn eq(&self, other: &Self) -> bool {\n        self.name == other.name\n            && self.description == other.description\n            && tool_spec_schema_key(self) == tool_spec_schema_key(other)\n    }\n}\n\nimpl Eq for ToolSpec {}\n\nimpl PartialOrd for ToolSpec {\n    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {\n        Some(self.cmp(other))\n    }\n}\n\nimpl Ord for ToolSpec {\n    fn cmp(&self, other: &Self) -> Ordering {\n        self.name\n            .cmp(&other.name)\n            .then_with(|| self.description.cmp(&other.description))\n            .then_with(|| tool_spec_schema_key(self).cmp(&tool_spec_schema_key(other)))\n    }\n}\n\nimpl std::hash::Hash for ToolSpec {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name.hash(state);\n        self.description.hash(state);\n        
tool_spec_schema_key(self).hash(state);\n    }\n}\n\nfn tool_spec_schema_key(spec: &ToolSpec) -> String {\n    spec.canonical_parameters_schema_json()\n        .ok()\n        .or_else(|| {\n            spec.parameters_schema\n                .as_ref()\n                .and_then(|schema| serde_json::to_value(schema).ok())\n                .map(canonicalize_json)\n        })\n        .and_then(|schema| serde_json::to_string(&schema).ok())\n        .unwrap_or_default()\n}\n\npub fn canonicalize_json(value: JsonValue) -> JsonValue {\n    match value {\n        JsonValue::Object(object) => {\n            let mut keys = object.keys().cloned().collect::<Vec<_>>();\n            keys.sort();\n\n            let mut sorted = JsonMap::with_capacity(object.len());\n            for key in keys {\n                if let Some(child) = object.get(&key) {\n                    sorted.insert(key, canonicalize_json(child.clone()));\n                }\n            }\n\n            JsonValue::Object(sorted)\n        }\n        JsonValue::Array(values) => {\n            JsonValue::Array(values.into_iter().map(canonicalize_json).collect())\n        }\n        scalar => scalar,\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use serde_json::{Value, json};\n    use std::collections::{BTreeSet, HashSet};\n    use std::hash::{DefaultHasher, Hash, Hasher};\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    struct ExampleArgs {\n        value: String,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: 
Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize)]\n    #[serde(transparent)]\n    struct FreeformObject(serde_json::Map<String, Value>);\n\n    impl schemars::JsonSchema for FreeformObject {\n        fn schema_name() -> std::borrow::Cow<'static, str> {\n            \"FreeformObject\".into()\n        }\n\n        fn json_schema(_generator: &mut schemars::SchemaGenerator) -> Schema {\n            serde_json::from_value(json!({\n                \"type\": \"object\",\n                \"additionalProperties\": true\n            }))\n            .expect(\"freeform object schema should serialize\")\n        }\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct CreateViewArgs {\n        request: CreateViewRequest,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct CreateViewRequest {\n        body: FreeformObject,\n    }\n\n    #[test]\n    fn tool_spec_serializes_schema() {\n        let schema = schemars::schema_for!(ExampleArgs);\n\n        let spec = ToolSpec::builder()\n            .name(\"example\")\n            .description(\"An example tool\")\n            .parameters_schema(schema)\n            .build()\n            .unwrap();\n\n        let json = serde_json::to_value(&spec).unwrap();\n        assert_eq!(json.get(\"name\").and_then(|v| v.as_str()), Some(\"example\"));\n        assert!(json.get(\"parameters_schema\").is_some());\n    }\n\n    #[test]\n    fn tool_spec_is_hashable() {\n        let schema = schemars::schema_for!(ExampleArgs);\n        let spec = 
ToolSpec::builder()\n            .name(\"example\")\n            .description(\"An example tool\")\n            .parameters_schema(schema)\n            .build()\n            .unwrap();\n\n        let mut set = HashSet::new();\n        set.insert(spec.clone());\n\n        assert!(set.contains(&spec));\n    }\n\n    #[test]\n    fn tool_spec_hash_is_stable_across_schema_key_order() {\n        let first = ToolSpec::builder()\n            .name(\"create_view\")\n            .description(\"Create a view\")\n            .parameters_schema(\n                serde_json::from_value::<Schema>(json!({\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"body\": { \"type\": \"string\" },\n                        \"name\": { \"type\": \"string\" }\n                    }\n                }))\n                .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let second = ToolSpec::builder()\n            .name(\"create_view\")\n            .description(\"Create a view\")\n            .parameters_schema(\n                serde_json::from_value::<Schema>(json!({\n                    \"properties\": {\n                        \"name\": { \"type\": \"string\" },\n                        \"body\": { \"type\": \"string\" }\n                    },\n                    \"type\": \"object\"\n                }))\n                .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let mut first_hasher = DefaultHasher::new();\n        first.hash(&mut first_hasher);\n\n        let mut second_hasher = DefaultHasher::new();\n        second.hash(&mut second_hasher);\n\n        assert_eq!(first_hasher.finish(), second_hasher.finish());\n    }\n\n    #[test]\n    fn tool_spec_order_is_stable_across_schema_key_order() {\n        let first = ToolSpec::builder()\n            .name(\"create_view\")\n            .description(\"Create a view\")\n            .parameters_schema(\n         
       serde_json::from_value::<Schema>(json!({\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"body\": { \"type\": \"string\" },\n                        \"name\": { \"type\": \"string\" }\n                    }\n                }))\n                .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let second = ToolSpec::builder()\n            .name(\"create_view\")\n            .description(\"Create a view\")\n            .parameters_schema(\n                serde_json::from_value::<Schema>(json!({\n                    \"properties\": {\n                        \"name\": { \"type\": \"string\" },\n                        \"body\": { \"type\": \"string\" }\n                    },\n                    \"type\": \"object\"\n                }))\n                .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let set = BTreeSet::from([first, second]);\n\n        assert_eq!(set.len(), 1);\n    }\n\n    #[test]\n    fn strict_parameters_schema_returns_canonical_nested_schema() {\n        let spec = ToolSpec::builder()\n            .name(\"comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(schemars::schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let normalized = spec.strict_parameters_schema().unwrap().into_json();\n\n        assert_eq!(normalized[\"type\"], Value::String(\"object\".into()));\n        assert_eq!(normalized[\"additionalProperties\"], Value::Bool(false));\n        assert_eq!(\n            normalized[\"required\"],\n            Value::Array(vec![Value::String(\"request\".into())])\n        );\n\n        let nested_ref = normalized[\"properties\"][\"request\"][\"$ref\"]\n            .as_str()\n            .expect(\"nested request should be referenced\");\n        let nested_name = nested_ref\n            .rsplit('/')\n            .next()\n            
.expect(\"nested request ref name\");\n        assert!(\n            normalized[\"$defs\"][nested_name].get(\"required\").is_none(),\n            \"strict schema parsing should preserve optional nested fields before provider shaping\"\n        );\n    }\n\n    #[test]\n    fn strict_parameters_schema_sets_additional_properties_false_on_nested_typed_objects() {\n        let spec = ToolSpec::builder()\n            .name(\"comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(schemars::schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let normalized = spec.strict_parameters_schema().unwrap().into_json();\n\n        let nested_ref = normalized[\"properties\"][\"request\"][\"$ref\"]\n            .as_str()\n            .expect(\"nested request should be referenced\");\n        let nested_name = nested_ref\n            .rsplit('/')\n            .next()\n            .expect(\"nested request ref name\");\n\n        assert_eq!(\n            normalized[\"$defs\"][nested_name][\"additionalProperties\"],\n            Value::Bool(false)\n        );\n    }\n\n    #[test]\n    fn tool_spec_builder_rejects_nested_freeform_objects_in_strict_mode() {\n        let error = ToolSpec::builder()\n            .name(\"create_view\")\n            .description(\"Create a view\")\n            .parameters_schema(schemars::schema_for!(CreateViewArgs))\n            .build()\n            .expect_err(\"freeform object should be rejected in strict mode\");\n\n        let message = error.to_string();\n        assert!(message.contains(\"strict tool schemas do not support open object schemas\"));\n        assert!(message.contains(\"FreeformObject\"));\n    }\n\n    #[test]\n    fn strict_parameters_schema_preserves_optional_nested_fields() {\n        let spec = ToolSpec::builder()\n            .name(\"comment\")\n            .description(\"Create a comment\")\n            
.parameters_schema(schemars::schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let normalized = spec.strict_parameters_schema().unwrap().into_json();\n\n        assert_eq!(normalized[\"type\"], Value::String(\"object\".into()));\n        assert_eq!(normalized[\"additionalProperties\"], Value::Bool(false));\n        assert_eq!(\n            normalized[\"required\"],\n            Value::Array(vec![Value::String(\"request\".into())])\n        );\n\n        let nested_ref = normalized[\"properties\"][\"request\"][\"$ref\"]\n            .as_str()\n            .expect(\"nested request should be referenced\");\n        let nested_name = nested_ref\n            .rsplit('/')\n            .next()\n            .expect(\"nested request ref name\");\n        assert_eq!(\n            normalized[\"$defs\"][nested_name][\"additionalProperties\"],\n            Value::Bool(false)\n        );\n        assert!(normalized[\"$defs\"][nested_name].get(\"required\").is_none());\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/chat_completion/traits.rs",
    "content": "use anyhow::Result;\nuse async_trait::async_trait;\nuse dyn_clone::DynClone;\nuse futures_util::Stream;\nuse std::{borrow::Cow, pin::Pin, sync::Arc};\n\nuse crate::AgentContext;\n\nuse super::{\n    ToolCall, ToolOutput, ToolSpec,\n    chat_completion_request::ChatCompletionRequest,\n    chat_completion_response::ChatCompletionResponse,\n    errors::{LanguageModelError, ToolError},\n};\n\npub type ChatCompletionStream =\n    Pin<Box<dyn Stream<Item = Result<ChatCompletionResponse, LanguageModelError>> + Send>>;\n#[async_trait]\npub trait ChatCompletion: Send + Sync + DynClone {\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError>;\n\n    /// Stream the completion response. If it's not supported, it will return a single\n    /// response\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        Box::pin(tokio_stream::iter(vec![self.complete(request).await]))\n    }\n}\n\n#[async_trait]\nimpl ChatCompletion for Box<dyn ChatCompletion> {\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        (**self).complete(request).await\n    }\n\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        (**self).complete_stream(request).await\n    }\n}\n\n#[async_trait]\nimpl ChatCompletion for &dyn ChatCompletion {\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        (**self).complete(request).await\n    }\n\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        (**self).complete_stream(request).await\n    }\n}\n\n#[async_trait]\nimpl<T> ChatCompletion for &T\nwhere\n    T: ChatCompletion + Clone + 'static,\n{\n    
async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        (**self).complete(request).await\n    }\n\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        (**self).complete_stream(request).await\n    }\n}\n\nimpl<LLM> From<&LLM> for Box<dyn ChatCompletion>\nwhere\n    LLM: ChatCompletion + Clone + 'static,\n{\n    fn from(llm: &LLM) -> Self {\n        Box::new(llm.clone()) as Box<dyn ChatCompletion>\n    }\n}\n\ndyn_clone::clone_trait_object!(ChatCompletion);\n\n/// The `Tool` trait is the main interface for chat completion and agent tools.\n///\n/// `swiftide-macros` provides a set of macros to generate implementations of this trait. If you\n/// need more control over the implementation, you can implement the trait manually.\n///\n/// The `ToolSpec` is what will end up with the LLM. A builder is provided. The `name` is expected\n/// to be unique, and is used to identify the tool. 
It should be the same as the name in the\n/// `ToolSpec`.\n#[async_trait]\npub trait Tool: Send + Sync + DynClone {\n    // tbd\n    async fn invoke(\n        &self,\n        agent_context: &dyn AgentContext,\n        tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError>;\n\n    fn name(&self) -> Cow<'_, str>;\n\n    fn tool_spec(&self) -> ToolSpec;\n\n    fn boxed<'a>(self) -> Box<dyn Tool + 'a>\n    where\n        Self: Sized + 'a,\n    {\n        Box::new(self) as Box<dyn Tool>\n    }\n}\n\n/// A toolbox is a collection of tools\n///\n/// It can be a list, an mcp client, or anything else we can think of.\n///\n/// This allows agents to not know their tools when they are created, and to get them at runtime.\n///\n/// It also allows for tools to be dynamically loaded and unloaded, etc.\n#[async_trait]\npub trait ToolBox: Send + Sync + DynClone {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>>;\n\n    fn name(&self) -> Cow<'_, str> {\n        Cow::Borrowed(\"Unnamed ToolBox\")\n    }\n\n    fn boxed<'a>(self) -> Box<dyn ToolBox + 'a>\n    where\n        Self: Sized + 'a,\n    {\n        Box::new(self) as Box<dyn ToolBox>\n    }\n}\n\n#[async_trait]\nimpl ToolBox for Vec<Box<dyn Tool>> {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        Ok(self.clone())\n    }\n}\n\n#[async_trait]\nimpl ToolBox for Box<dyn ToolBox> {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        (**self).available_tools().await\n    }\n}\n\n#[async_trait]\nimpl ToolBox for Arc<dyn ToolBox> {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        (**self).available_tools().await\n    }\n}\n\n#[async_trait]\nimpl ToolBox for &dyn ToolBox {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        (**self).available_tools().await\n    }\n}\n\n#[async_trait]\nimpl ToolBox for &[Box<dyn Tool>] {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        
Ok(self.to_vec())\n    }\n}\n\n#[async_trait]\nimpl ToolBox for [Box<dyn Tool>] {\n    async fn available_tools(&self) -> Result<Vec<Box<dyn Tool>>> {\n        Ok(self.to_vec())\n    }\n}\n\ndyn_clone::clone_trait_object!(ToolBox);\n\n#[async_trait]\nimpl Tool for Box<dyn Tool> {\n    async fn invoke(\n        &self,\n        agent_context: &dyn AgentContext,\n        tool_call: &ToolCall,\n    ) -> Result<ToolOutput, ToolError> {\n        (**self).invoke(agent_context, tool_call).await\n    }\n    fn name(&self) -> Cow<'_, str> {\n        (**self).name()\n    }\n    fn tool_spec(&self) -> ToolSpec {\n        (**self).tool_spec()\n    }\n}\n\ndyn_clone::clone_trait_object!(Tool);\n\n/// Tools are identified and unique by name\n/// These allow comparison and lookups\nimpl PartialEq for Box<dyn Tool> {\n    fn eq(&self, other: &Self) -> bool {\n        self.name() == other.name()\n    }\n}\nimpl Eq for Box<dyn Tool> {}\nimpl std::hash::Hash for Box<dyn Tool> {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name().hash(state);\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/document.rs",
    "content": "//! Documents are the main data structure that is retrieved via the query pipeline\n//!\n//! Retrievers are expected to eagerly set any configured metadata on the document, with the same\n//! field name used during indexing if applicable.\nuse std::fmt;\n\nuse derive_builder::Builder;\nuse serde::{Deserialize, Serialize};\n\nuse crate::{metadata::Metadata, util::debug_long_utf8};\n\n/// A document represents a single unit of retrieved text\n#[derive(Clone, PartialEq, Eq, Serialize, Deserialize, Builder)]\n#[builder(setter(into))]\npub struct Document {\n    #[builder(default)]\n    metadata: Metadata,\n    content: String,\n}\n\nimpl From<Document> for serde_json::Value {\n    fn from(document: Document) -> Self {\n        serde_json::json!({\n            \"metadata\": document.metadata,\n            \"content\": document.content,\n        })\n    }\n}\n\nimpl From<&Document> for serde_json::Value {\n    fn from(document: &Document) -> Self {\n        serde_json::json!({\n            \"metadata\": document.metadata,\n            \"content\": document.content,\n        })\n    }\n}\n\nimpl PartialOrd for Document {\n    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {\n        Some(self.cmp(other))\n    }\n}\n\nimpl Ord for Document {\n    fn cmp(&self, other: &Self) -> std::cmp::Ordering {\n        self.content.cmp(&other.content)\n    }\n}\n\nimpl fmt::Debug for Document {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.debug_struct(\"Document\")\n            .field(\"metadata\", &self.metadata)\n            .field(\"content\", &debug_long_utf8(&self.content, 100))\n            .finish()\n    }\n}\n\nimpl<T: AsRef<str>> From<T> for Document {\n    fn from(value: T) -> Self {\n        Document::new(value.as_ref(), None)\n    }\n}\n\nimpl Document {\n    pub fn new(content: impl Into<String>, metadata: Option<Metadata>) -> Self {\n        Self {\n            metadata: metadata.unwrap_or_default(),\n       
     content: content.into(),\n        }\n    }\n\n    pub fn builder() -> DocumentBuilder {\n        DocumentBuilder::default()\n    }\n\n    pub fn content(&self) -> &str {\n        &self.content\n    }\n\n    pub fn metadata(&self) -> &Metadata {\n        &self.metadata\n    }\n\n    pub fn bytes(&self) -> &[u8] {\n        self.content.as_bytes()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::metadata::Metadata;\n\n    #[test]\n    fn test_document_creation() {\n        let content = \"Test content\";\n        let metadata = Metadata::from([(\"some\", \"metadata\")]);\n        let document = Document::new(content, Some(metadata.clone()));\n\n        assert_eq!(document.content(), content);\n        assert_eq!(document.metadata(), &metadata);\n    }\n\n    #[test]\n    fn test_document_default_metadata() {\n        let content = \"Test content\";\n        let document = Document::new(content, None);\n\n        assert_eq!(document.content(), content);\n        assert_eq!(document.metadata(), &Metadata::default());\n    }\n\n    #[test]\n    fn test_document_from_str() {\n        let content = \"Test content\";\n        let document: Document = content.into();\n\n        assert_eq!(document.content(), content);\n        assert_eq!(document.metadata(), &Metadata::default());\n    }\n\n    #[test]\n    fn test_document_partial_ord() {\n        let doc1 = Document::new(\"A\", None);\n        let doc2 = Document::new(\"B\", None);\n\n        assert!(doc1 < doc2);\n    }\n\n    #[test]\n    fn test_document_ord() {\n        let doc1 = Document::new(\"A\", None);\n        let doc2 = Document::new(\"B\", None);\n\n        assert!(doc1.cmp(&doc2) == std::cmp::Ordering::Less);\n    }\n\n    #[test]\n    fn test_document_debug() {\n        let content = \"Test content\";\n        let document = Document::new(content, None);\n        let debug_str = format!(\"{document:?}\");\n\n        assert!(debug_str.contains(\"Document\"));\n        
assert!(debug_str.contains(\"metadata\"));\n        assert!(debug_str.contains(\"content\"));\n    }\n\n    #[test]\n    fn test_document_to_json() {\n        let content = \"Test content\";\n        let metadata = Metadata::from([(\"some\", \"metadata\")]);\n        let document = Document::new(content, Some(metadata.clone()));\n        let json_value: serde_json::Value = document.into();\n\n        assert_eq!(json_value[\"content\"], content);\n        assert_eq!(json_value[\"metadata\"], serde_json::json!(metadata));\n    }\n\n    #[test]\n    fn test_document_ref_to_json() {\n        let content = \"Test content\";\n        let metadata = Metadata::from([(\"some\", \"metadata\")]);\n        let document = Document::new(content, Some(metadata.clone()));\n        let json_value: serde_json::Value = (&document).into();\n\n        assert_eq!(json_value[\"content\"], content);\n        assert_eq!(json_value[\"metadata\"], serde_json::json!(metadata));\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/indexing_decorators.rs",
    "content": "use std::fmt::Debug;\n\nuse crate::chat_completion::{ChatCompletionRequest, ChatCompletionResponse};\nuse crate::stream_backoff::{StreamBackoff, TokioSleeper};\nuse crate::{ChatCompletion, ChatCompletionStream};\nuse crate::{EmbeddingModel, Embeddings, SimplePrompt, prompt::Prompt};\n\nuse crate::chat_completion::errors::LanguageModelError;\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse futures_util::{StreamExt as _, TryStreamExt as _};\nuse std::time::Duration;\n\n/// Backoff configuration for api calls.\n/// Each time an api call fails backoff will wait an increasing period of time for each subsequent\n/// retry attempt. see <https://docs.rs/backoff/latest/backoff/> for more details.\n#[derive(Debug, Clone, Copy)]\npub struct BackoffConfiguration {\n    /// Initial interval in seconds between retries\n    pub initial_interval_sec: u64,\n    /// The factor by which the interval is multiplied on each retry attempt\n    pub multiplier: f64,\n    /// Introduces randomness to avoid retry storms\n    pub randomization_factor: f64,\n    /// Total time all attempts are allowed in seconds. 
Once a retry must wait longer than this,\n    /// the request is considered to have failed.\n    pub max_elapsed_time_sec: u64,\n}\n\nimpl Default for BackoffConfiguration {\n    fn default() -> Self {\n        Self {\n            initial_interval_sec: 1,\n            multiplier: 2.0,\n            randomization_factor: 0.5,\n            max_elapsed_time_sec: 60,\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct LanguageModelWithBackOff<P: Clone> {\n    pub(crate) inner: P,\n    config: BackoffConfiguration,\n}\n\nimpl<P: Clone> LanguageModelWithBackOff<P> {\n    pub fn new(client: P, config: BackoffConfiguration) -> Self {\n        Self {\n            inner: client,\n            config,\n        }\n    }\n\n    pub(crate) fn strategy(&self) -> backoff::ExponentialBackoff {\n        backoff::ExponentialBackoffBuilder::default()\n            .with_initial_interval(Duration::from_secs(self.config.initial_interval_sec))\n            .with_multiplier(self.config.multiplier)\n            .with_max_elapsed_time(Some(Duration::from_secs(self.config.max_elapsed_time_sec)))\n            .with_randomization_factor(self.config.randomization_factor)\n            .build()\n    }\n}\n\n#[async_trait]\nimpl<P: SimplePrompt + Clone> SimplePrompt for LanguageModelWithBackOff<P> {\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        let strategy = self.strategy();\n\n        let op = || {\n            let prompt = prompt.clone();\n            async {\n                self.inner.prompt(prompt).await.map_err(|e| match e {\n                    LanguageModelError::ContextLengthExceeded(e) => {\n                        backoff::Error::Permanent(LanguageModelError::ContextLengthExceeded(e))\n                    }\n                    LanguageModelError::PermanentError(e) => {\n                        backoff::Error::Permanent(LanguageModelError::PermanentError(e))\n                    }\n                    
LanguageModelError::TransientError(e) => {\n                        backoff::Error::transient(LanguageModelError::TransientError(e))\n                    }\n                })\n            }\n        };\n\n        backoff::future::retry(strategy, op).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.inner.name()\n    }\n}\n\n#[async_trait]\nimpl<P: EmbeddingModel + Clone> EmbeddingModel for LanguageModelWithBackOff<P> {\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError> {\n        self.inner.embed(input).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.inner.name()\n    }\n}\n\n#[async_trait]\nimpl<LLM: ChatCompletion + Clone> ChatCompletion for LanguageModelWithBackOff<LLM> {\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        let strategy = self.strategy();\n\n        let op = || async move {\n            self.inner.complete(request).await.map_err(|e| match e {\n                LanguageModelError::ContextLengthExceeded(e) => {\n                    backoff::Error::Permanent(LanguageModelError::ContextLengthExceeded(e))\n                }\n                LanguageModelError::PermanentError(e) => {\n                    backoff::Error::Permanent(LanguageModelError::PermanentError(e))\n                }\n                LanguageModelError::TransientError(e) => {\n                    backoff::Error::transient(LanguageModelError::TransientError(e))\n                }\n            })\n        };\n\n        backoff::future::retry(strategy, op).await\n    }\n\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        let strategy = self.strategy();\n\n        let stream = self.inner.complete_stream(request).await;\n        let stream = stream\n            .map_err(|e| match e {\n                LanguageModelError::ContextLengthExceeded(e) => {\n    
                backoff::Error::Permanent(LanguageModelError::ContextLengthExceeded(e))\n                }\n                LanguageModelError::PermanentError(e) => {\n                    backoff::Error::Permanent(LanguageModelError::PermanentError(e))\n                }\n                LanguageModelError::TransientError(e) => {\n                    backoff::Error::transient(LanguageModelError::TransientError(e))\n                }\n            })\n            .boxed();\n        StreamBackoff::new(stream, strategy, TokioSleeper)\n            .map_err(|e| match e {\n                backoff::Error::Permanent(e) => e,\n                backoff::Error::Transient { err, .. } => err,\n            })\n            .boxed()\n    }\n}\n#[cfg(test)]\nmod tests {\n\n    use uuid::Uuid;\n\n    use super::*;\n    use std::sync::Arc;\n    use std::sync::atomic::{AtomicUsize, Ordering};\n\n    #[derive(Debug, Clone)]\n    struct MockSimplePrompt {\n        call_count: Arc<AtomicUsize>,\n        should_fail_count: usize,\n        error_type: MockErrorType,\n    }\n\n    #[derive(Debug, Clone, Copy)]\n    enum MockErrorType {\n        Transient,\n        Permanent,\n        ContextLengthExceeded,\n    }\n\n    #[derive(Clone)]\n    struct MockChatCompletion {\n        call_count: Arc<AtomicUsize>,\n        should_fail_count: usize,\n        error_type: MockErrorType,\n    }\n\n    #[async_trait]\n    impl ChatCompletion for MockChatCompletion {\n        async fn complete(\n            &self,\n            _request: &ChatCompletionRequest<'_>,\n        ) -> Result<ChatCompletionResponse, LanguageModelError> {\n            let count = self.call_count.fetch_add(1, Ordering::SeqCst);\n\n            if count < self.should_fail_count {\n                match self.error_type {\n                    MockErrorType::Transient => Err(LanguageModelError::TransientError(Box::new(\n                        std::io::Error::new(std::io::ErrorKind::ConnectionReset, \"Transient error\"),\n               
     ))),\n                    MockErrorType::Permanent => Err(LanguageModelError::PermanentError(Box::new(\n                        std::io::Error::new(std::io::ErrorKind::InvalidData, \"Permanent error\"),\n                    ))),\n                    MockErrorType::ContextLengthExceeded => Err(\n                        LanguageModelError::ContextLengthExceeded(Box::new(std::io::Error::new(\n                            std::io::ErrorKind::InvalidInput,\n                            \"Context length exceeded\",\n                        ))),\n                    ),\n                }\n            } else {\n                Ok(ChatCompletionResponse {\n                    id: Uuid::new_v4(),\n                    message: Some(\"Success response\".to_string()),\n                    tool_calls: None,\n                    delta: None,\n                    usage: None,\n                    reasoning: None,\n                })\n            }\n        }\n    }\n    #[async_trait]\n    impl SimplePrompt for MockSimplePrompt {\n        async fn prompt(&self, _prompt: Prompt) -> Result<String, LanguageModelError> {\n            let count = self.call_count.fetch_add(1, Ordering::SeqCst);\n\n            if count < self.should_fail_count {\n                match self.error_type {\n                    MockErrorType::Transient => Err(LanguageModelError::TransientError(Box::new(\n                        std::io::Error::new(std::io::ErrorKind::ConnectionReset, \"Transient error\"),\n                    ))),\n                    MockErrorType::Permanent => Err(LanguageModelError::PermanentError(Box::new(\n                        std::io::Error::new(std::io::ErrorKind::InvalidData, \"Permanent error\"),\n                    ))),\n                    MockErrorType::ContextLengthExceeded => Err(\n                        LanguageModelError::ContextLengthExceeded(Box::new(std::io::Error::new(\n                            std::io::ErrorKind::InvalidInput,\n                            
\"Context length exceeded\",\n                        ))),\n                    ),\n                }\n            } else {\n                Ok(\"Success response\".to_string())\n            }\n        }\n\n        fn name(&self) -> &'static str {\n            \"MockSimplePrompt\"\n        }\n    }\n\n    #[tokio::test]\n    async fn test_language_model_with_backoff_retries_transient_errors() {\n        let call_count = Arc::new(AtomicUsize::new(0));\n        let mock_prompt = MockSimplePrompt {\n            call_count: call_count.clone(),\n            should_fail_count: 2, // Fail twice, succeed on third attempt\n            error_type: MockErrorType::Transient,\n        };\n\n        let config = BackoffConfiguration {\n            initial_interval_sec: 1,\n            max_elapsed_time_sec: 10,\n            multiplier: 1.5,\n            randomization_factor: 0.5,\n        };\n\n        let model_with_backoff = LanguageModelWithBackOff::new(mock_prompt, config);\n\n        let result = model_with_backoff.prompt(Prompt::from(\"Test prompt\")).await;\n\n        assert!(result.is_ok());\n        assert_eq!(call_count.load(Ordering::SeqCst), 3);\n        assert_eq!(result.unwrap(), \"Success response\");\n    }\n\n    #[tokio::test]\n    async fn test_language_model_with_backoff_does_not_retry_permanent_errors() {\n        let call_count = Arc::new(AtomicUsize::new(0));\n        let mock_prompt = MockSimplePrompt {\n            call_count: call_count.clone(),\n            should_fail_count: 1,\n            error_type: MockErrorType::Permanent,\n        };\n\n        let config = BackoffConfiguration {\n            initial_interval_sec: 1,\n            max_elapsed_time_sec: 10,\n            multiplier: 1.5,\n            randomization_factor: 0.5,\n        };\n\n        let model_with_backoff = LanguageModelWithBackOff::new(mock_prompt, config);\n\n        let result = model_with_backoff.prompt(Prompt::from(\"Test prompt\")).await;\n\n        assert!(result.is_err());\n 
       assert_eq!(call_count.load(Ordering::SeqCst), 1);\n\n        match result {\n            Err(LanguageModelError::PermanentError(_)) => {} // Expected\n            _ => panic!(\"Expected PermanentError\"),\n        }\n    }\n\n    #[tokio::test]\n    async fn test_language_model_with_backoff_does_not_retry_context_length_errors() {\n        let call_count = Arc::new(AtomicUsize::new(0));\n        let mock_prompt = MockSimplePrompt {\n            call_count: call_count.clone(),\n            should_fail_count: 1,\n            error_type: MockErrorType::ContextLengthExceeded,\n        };\n\n        let config = BackoffConfiguration {\n            initial_interval_sec: 1,\n            max_elapsed_time_sec: 10,\n            multiplier: 1.5,\n            randomization_factor: 0.5,\n        };\n\n        let model_with_backoff = LanguageModelWithBackOff::new(mock_prompt, config);\n\n        let result = model_with_backoff.prompt(Prompt::from(\"Test prompt\")).await;\n\n        assert!(result.is_err());\n        assert_eq!(call_count.load(Ordering::SeqCst), 1);\n\n        match result {\n            Err(LanguageModelError::ContextLengthExceeded(_)) => {} // Expected\n            _ => panic!(\"Expected ContextLengthExceeded\"),\n        }\n    }\n\n    #[tokio::test]\n    async fn test_language_model_with_backoff_retries_chat_completion_transient_errors() {\n        let call_count = Arc::new(AtomicUsize::new(0));\n        let mock_chat = MockChatCompletion {\n            call_count: call_count.clone(),\n            should_fail_count: 2, // Fail twice, succeed on third attempt\n            error_type: MockErrorType::Transient,\n        };\n\n        let config = BackoffConfiguration {\n            initial_interval_sec: 1,\n            max_elapsed_time_sec: 10,\n            multiplier: 1.5,\n            randomization_factor: 0.5,\n        };\n\n        let model_with_backoff = LanguageModelWithBackOff::new(mock_chat, config);\n\n        let request: 
ChatCompletionRequest<'static> = Vec::new().into();\n\n        let result = model_with_backoff.complete(&request).await;\n\n        assert!(result.is_ok());\n        assert_eq!(call_count.load(Ordering::SeqCst), 3);\n        assert_eq!(\n            result.unwrap().message,\n            Some(\"Success response\".to_string())\n        );\n    }\n\n    #[tokio::test]\n    async fn test_language_model_with_backoff_does_not_retry_chat_completion_permanent_errors() {\n        let call_count = Arc::new(AtomicUsize::new(0));\n        let mock_chat = MockChatCompletion {\n            call_count: call_count.clone(),\n            should_fail_count: 2, // Would fail twice if retried\n            error_type: MockErrorType::Permanent,\n        };\n\n        let config = BackoffConfiguration {\n            initial_interval_sec: 1,\n            max_elapsed_time_sec: 10,\n            multiplier: 1.5,\n            randomization_factor: 0.5,\n        };\n\n        let model_with_backoff = LanguageModelWithBackOff::new(mock_chat, config);\n\n        let request: ChatCompletionRequest<'static> = Vec::new().into();\n\n        let result = model_with_backoff.complete(&request).await;\n\n        assert!(result.is_err());\n        assert_eq!(call_count.load(Ordering::SeqCst), 1); // Should only be called once\n\n        match result {\n            Err(LanguageModelError::PermanentError(_)) => {} // Expected\n            _ => panic!(\"Expected PermanentError, got {result:?}\"),\n        }\n    }\n\n    #[tokio::test]\n    async fn test_language_model_with_backoff_does_not_retry_chat_completion_context_length_errors()\n    {\n        let call_count = Arc::new(AtomicUsize::new(0));\n        let mock_chat = MockChatCompletion {\n            call_count: call_count.clone(),\n            should_fail_count: 2, // Would fail twice if retried\n            error_type: MockErrorType::ContextLengthExceeded,\n        };\n\n        let config = BackoffConfiguration {\n            initial_interval_sec: 
1,\n            max_elapsed_time_sec: 10,\n            multiplier: 1.5,\n            randomization_factor: 0.5,\n        };\n\n        let model_with_backoff = LanguageModelWithBackOff::new(mock_chat, config);\n\n        let request: ChatCompletionRequest<'static> = Vec::new().into();\n\n        let result = model_with_backoff.complete(&request).await;\n\n        assert!(result.is_err());\n        assert_eq!(call_count.load(Ordering::SeqCst), 1); // Should only be called once\n\n        match result {\n            Err(LanguageModelError::ContextLengthExceeded(_)) => {} // Expected\n            _ => panic!(\"Expected ContextLengthExceeded, got {result:?}\"),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/indexing_defaults.rs",
    "content": "use std::sync::Arc;\n\nuse crate::SimplePrompt;\n\n#[derive(Debug, Default, Clone)]\npub struct IndexingDefaults(Arc<IndexingDefaultsInner>);\n\n#[derive(Debug, Default)]\npub struct IndexingDefaultsInner {\n    simple_prompt: Option<Box<dyn SimplePrompt>>,\n}\n\nimpl IndexingDefaults {\n    pub fn simple_prompt(&self) -> Option<&dyn SimplePrompt> {\n        self.0.simple_prompt.as_deref()\n    }\n\n    pub fn from_simple_prompt(simple_prompt: Box<dyn SimplePrompt>) -> Self {\n        Self(Arc::new(IndexingDefaultsInner {\n            simple_prompt: Some(simple_prompt),\n        }))\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/indexing_stream.rs",
    "content": "#![allow(clippy::from_over_into)]\n\n//! This module defines the `IndexingStream` type, which is used internally by a pipeline  for\n//! handling asynchronous streams of `Node<T>` items in the indexing pipeline.\n\nuse crate::node::{Chunk, Node};\nuse anyhow::Result;\nuse futures_util::stream::{self, Stream};\nuse std::pin::Pin;\nuse tokio::sync::mpsc::Receiver;\n\npub use futures_util::StreamExt;\n\n// We need to inform the compiler that `inner` is pinned as well\n/// An asynchronous stream of `Node<T>` items.\n///\n/// Wraps an internal stream of `Result<Node<T>>` items.\n///\n/// Streams, iterators and vectors of `Result<Node<T>>` can be converted into an `IndexingStream`.\n#[pin_project::pin_project]\npub struct IndexingStream<T: Chunk> {\n    #[pin]\n    pub(crate) inner: Pin<Box<dyn Stream<Item = Result<Node<T>>> + Send>>,\n}\n\nimpl<T: Chunk> Stream for IndexingStream<T> {\n    type Item = Result<Node<T>>;\n\n    fn poll_next(\n        self: Pin<&mut Self>,\n        cx: &mut std::task::Context<'_>,\n    ) -> std::task::Poll<Option<Self::Item>> {\n        let this = self.project();\n        this.inner.poll_next(cx)\n    }\n}\n\nimpl<T: Chunk> Into<IndexingStream<T>> for Vec<Result<Node<T>>> {\n    fn into(self) -> IndexingStream<T> {\n        IndexingStream::iter(self)\n    }\n}\n\nimpl<T: Chunk> Into<IndexingStream<T>> for Vec<Node<T>> {\n    fn into(self) -> IndexingStream<T> {\n        IndexingStream::from_nodes(self)\n    }\n}\n\n// impl Into<IndexingStream> for anyhow::Error {\n//     fn into(self) -> IndexingStream {\n//         IndexingStream::iter(vec![Err(self)])\n//     }\n// }\n\nimpl<T: Chunk> Into<IndexingStream<T>> for Result<Vec<Node<T>>> {\n    fn into(self) -> IndexingStream<T> {\n        match self {\n            Ok(nodes) => IndexingStream::iter(nodes.into_iter().map(Ok)),\n            Err(err) => IndexingStream::iter(vec![Err(err)]),\n        }\n    }\n}\n\nimpl<T: Chunk> Into<IndexingStream<T>> for Pin<Box<dyn Stream<Item 
= Result<Node<T>>> + Send>> {\n    fn into(self) -> IndexingStream<T> {\n        IndexingStream { inner: self }\n    }\n}\n\nimpl<T: Chunk> Into<IndexingStream<T>> for Receiver<Result<Node<T>>> {\n    fn into(self) -> IndexingStream<T> {\n        IndexingStream {\n            inner: tokio_stream::wrappers::ReceiverStream::new(self).boxed(),\n        }\n    }\n}\n\nimpl<T: Chunk> From<anyhow::Error> for IndexingStream<T> {\n    fn from(err: anyhow::Error) -> Self {\n        IndexingStream::iter(vec![Err(err)])\n    }\n}\n\nimpl<T: Chunk> IndexingStream<T> {\n    pub fn empty() -> Self {\n        IndexingStream {\n            inner: stream::empty().boxed(),\n        }\n    }\n\n    /// Creates an `IndexingStream` from an iterator of `Result<Node<T>>`.\n    ///\n    /// WARN: Also works with Err items directly, which will result\n    /// in an _incorrect_ stream\n    pub fn iter<I>(iter: I) -> Self\n    where\n        I: IntoIterator<Item = Result<Node<T>>> + Send + 'static,\n        <I as IntoIterator>::IntoIter: Send,\n    {\n        IndexingStream {\n            inner: stream::iter(iter).boxed(),\n        }\n    }\n\n    pub fn from_nodes(nodes: Vec<Node<T>>) -> Self {\n        IndexingStream::iter(nodes.into_iter().map(Ok))\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/indexing_traits.rs",
    "content": "//! Traits in Swiftide allow for easy extendability\n//!\n//! All steps defined in the indexing pipeline and the generic transformers can also take a\n//! trait. To bring your own transformers, models and loaders, all you need to do is implement the\n//! trait and it should work out of the box.\nuse crate::Embeddings;\nuse crate::node::{Chunk, Node};\nuse crate::{\n    SparseEmbeddings, indexing_defaults::IndexingDefaults, indexing_stream::IndexingStream,\n};\nuse std::fmt::Debug;\nuse std::sync::Arc;\n\nuse crate::chat_completion::errors::LanguageModelError;\nuse crate::prompt::Prompt;\nuse anyhow::Result;\nuse async_trait::async_trait;\n\npub use dyn_clone::DynClone;\n/// All traits are easily mockable under tests\n#[cfg(feature = \"test-utils\")]\n#[doc(hidden)]\nuse mockall::{mock, predicate::str};\nuse schemars::{JsonSchema, schema_for};\nuse serde::de::DeserializeOwned;\n\n#[async_trait]\n/// Transforms single nodes into single nodes\npub trait Transformer: Send + Sync + DynClone {\n    type Input: Chunk;\n    type Output: Chunk;\n\n    async fn transform_node(&self, node: Node<Self::Input>) -> Result<Node<Self::Output>>;\n\n    /// Overrides the default concurrency of the pipeline\n    fn concurrency(&self) -> Option<usize> {\n        None\n    }\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(<I, O> Transformer<Input = I, Output = O>);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub Transformer {}\n\n    #[async_trait]\n    impl Transformer for Transformer {\n        type Input = String;\n        type Output = String;\n\n        async fn transform_node(&self, node: Node<String>) -> Result<Node<String>>;\n        fn concurrency(&self) -> Option<usize>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for Transformer {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> Transformer for Box<dyn Transformer<Input = I, Output = O>> {\n    type Input = I;\n    type Output = O;\n\n    async fn transform_node(&self, node: Node<Self::Input>) -> Result<Node<Self::Output>> {\n        self.as_ref().transform_node(node).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        self.as_ref().concurrency()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> Transformer for Arc<dyn Transformer<Input = I, Output = O>> {\n    type Input = I;\n    type Output = O;\n\n    async fn transform_node(&self, node: Node<Self::Input>) -> Result<Node<Self::Output>> {\n        self.as_ref().transform_node(node).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        self.as_ref().concurrency()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> Transformer for &dyn Transformer<Input = I, Output = O> {\n    type Input = I;\n    type Output = O;\n\n    async fn transform_node(&self, node: Node<Self::Input>) -> Result<Node<Self::Output>> {\n        (*self).transform_node(node).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        (*self).concurrency()\n    }\n}\n\n#[async_trait]\n/// Use a closure as a transformer\n// TODO: Find a way to make this work with full generics\nimpl<F> Transformer for F\nwhere\n    F: Fn(Node<String>) -> Result<Node<String>> + Send + Sync + Clone,\n{\n    type Input = String;\n 
   type Output = String;\n\n    async fn transform_node(&self, node: Node<Self::Input>) -> Result<Node<Self::Output>> {\n        self(node)\n    }\n}\n\n#[async_trait]\n/// Transforms batched single nodes into streams of nodes\npub trait BatchableTransformer: Send + Sync + DynClone {\n    type Input: Chunk;\n    type Output: Chunk;\n\n    /// Transforms a batch of nodes into a stream of nodes\n    async fn batch_transform(&self, nodes: Vec<Node<Self::Input>>) -> IndexingStream<Self::Output>;\n\n    /// Overrides the default concurrency of the pipeline\n    fn concurrency(&self) -> Option<usize> {\n        None\n    }\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n\n    /// Overrides the default batch size of the pipeline\n    fn batch_size(&self) -> Option<usize> {\n        None\n    }\n}\n\ndyn_clone::clone_trait_object!(<I, O> BatchableTransformer<Input = I, Output = O>);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub BatchableTransformer {}\n\n    #[async_trait]\n    impl BatchableTransformer for BatchableTransformer {\n        type Input = String;\n        type Output = String;\n\n        async fn batch_transform(&self, nodes: Vec<Node<String>>) -> IndexingStream<String>;\n        fn name(&self) -> &'static str;\n        fn batch_size(&self) -> Option<usize>;\n        fn concurrency(&self) -> Option<usize>;\n    }\n\n    impl Clone for BatchableTransformer {\n        fn clone(&self) -> Self;\n    }\n}\n#[async_trait]\n/// Use a closure as a batchable transformer\nimpl<F> BatchableTransformer for F\nwhere\n    F: Fn(Vec<Node<String>>) -> IndexingStream<String> + Send + Sync + Clone,\n{\n    type Input = String;\n    type Output = String;\n\n    async fn batch_transform(&self, nodes: Vec<Node<String>>) -> IndexingStream<String> {\n        self(nodes)\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> BatchableTransformer\n    for Box<dyn BatchableTransformer<Input = I, Output = O>>\n{\n    type Input = I;\n    type Output = O;\n\n    async fn batch_transform(&self, nodes: Vec<Node<Self::Input>>) -> IndexingStream<Self::Output> {\n        self.as_ref().batch_transform(nodes).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        self.as_ref().concurrency()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> BatchableTransformer\n    for Arc<dyn BatchableTransformer<Input = I, Output = O>>\n{\n    type Input = I;\n    type Output = O;\n\n    async fn batch_transform(&self, nodes: Vec<Node<Self::Input>>) -> IndexingStream<Self::Output> {\n        self.as_ref().batch_transform(nodes).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        self.as_ref().concurrency()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> BatchableTransformer for &dyn 
BatchableTransformer<Input = I, Output = O> {\n    type Input = I;\n    type Output = O;\n\n    async fn batch_transform(&self, nodes: Vec<Node<Self::Input>>) -> IndexingStream<Self::Output> {\n        (*self).batch_transform(nodes).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        (*self).concurrency()\n    }\n}\n\n/// Starting point of a stream\npub trait Loader: DynClone + Send + Sync {\n    type Output: Chunk;\n\n    fn into_stream(self) -> IndexingStream<Self::Output>;\n\n    /// Intended for use with Box<dyn Loader>\n    ///\n    /// Only needed if you use trait objects (Box<dyn Loader>)\n    ///\n    /// # Example\n    ///\n    /// ```ignore\n    /// fn into_stream_boxed(self: Box<Self>) -> IndexingStream {\n    ///    self.into_stream()\n    ///  }\n    /// ```\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<Self::Output> {\n        unimplemented!(\n            \"Please implement into_stream_boxed for your loader, it needs to be implemented on the concrete type\"\n        )\n    }\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(<O> Loader<Output = O>);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub Loader {}\n\n    #[async_trait]\n    impl Loader for Loader {\n        type Output = String;\n\n        fn into_stream(self) -> IndexingStream<String>;\n        fn into_stream_boxed(self: Box<Self>) -> IndexingStream<String>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for Loader {\n        fn clone(&self) -> Self;\n    }\n}\n\nimpl<O: Chunk> Loader for Box<dyn Loader<Output = O>> {\n    type Output = O;\n\n    fn into_stream(self) -> IndexingStream<Self::Output> {\n        Loader::into_stream_boxed(self)\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<Self::Output> {\n        Loader::into_stream(*self)\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\nimpl<O: Chunk> Loader for &dyn Loader<Output = O> {\n    type Output = O;\n\n    fn into_stream(self) -> IndexingStream<Self::Output> {\n        Loader::into_stream_boxed(Box::new(self))\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<Self::Output> {\n        Loader::into_stream(*self)\n    }\n}\n\n#[async_trait]\n/// Turns one node into many nodes\npub trait ChunkerTransformer: Send + Sync + DynClone {\n    type Input: Chunk;\n    type Output: Chunk;\n\n    async fn transform_node(&self, node: Node<Self::Input>) -> IndexingStream<Self::Output>;\n\n    /// Overrides the default concurrency of the pipeline\n    fn concurrency(&self) -> Option<usize> {\n        None\n    }\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(<I, O> ChunkerTransformer<Input = I, Output = O>);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub ChunkerTransformer {}\n\n    #[async_trait]\n    impl ChunkerTransformer for ChunkerTransformer {\n        type Input = String;\n        type Output = String;\n\n    async fn transform_node(&self, node: Node<String>) -> IndexingStream<String>;\n        fn name(&self) -> &'static str;\n        fn concurrency(&self) -> Option<usize>;\n    }\n\n    impl Clone for ChunkerTransformer {\n        fn clone(&self) -> Self;\n    }\n}\n#[async_trait]\nimpl<I: Chunk, O: Chunk> ChunkerTransformer for Box<dyn ChunkerTransformer<Input = I, Output = O>> {\n    type Input = I;\n    type Output = O;\n\n    async fn transform_node(&self, node: Node<I>) -> IndexingStream<O> {\n        self.as_ref().transform_node(node).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        self.as_ref().concurrency()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> ChunkerTransformer for Arc<dyn ChunkerTransformer<Input = I, Output = O>> {\n    type Input = I;\n    type Output = O;\n\n    async fn transform_node(&self, node: Node<I>) -> IndexingStream<O> {\n        self.as_ref().transform_node(node).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        self.as_ref().concurrency()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> ChunkerTransformer for &dyn ChunkerTransformer<Input = I, Output = O> {\n    type Input = I;\n    type Output = O;\n\n    async fn transform_node(&self, node: Node<I>) -> IndexingStream<O> {\n        (*self).transform_node(node).await\n    }\n    fn concurrency(&self) -> Option<usize> {\n        (*self).concurrency()\n    }\n}\n\n#[async_trait]\nimpl<F> ChunkerTransformer for F\nwhere\n    F: Fn(Node<String>) -> IndexingStream<String> + Send + Sync + Clone,\n{\n    async fn transform_node(&self, node: Node<String>) -> IndexingStream<String> {\n        
self(node)\n    }\n\n    type Input = String;\n\n    type Output = String;\n}\n\n#[async_trait]\n/// Caches nodes, typically by their path and hash\n/// Recommended to namespace on the storage\n///\n/// For now just bool return value for easy filter\npub trait NodeCache: Send + Sync + Debug + DynClone {\n    type Input: Chunk;\n\n    async fn get(&self, node: &Node<Self::Input>) -> bool;\n    async fn set(&self, node: &Node<Self::Input>);\n\n    /// Optionally provide a method to clear the cache\n    async fn clear(&self) -> Result<()> {\n        unimplemented!(\"Clear not implemented\")\n    }\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(<T> NodeCache<Input = T>);\n\n#[cfg(feature = \"test-utils\")]\nmock! {\n    #[derive(Debug)]\n    pub NodeCache {}\n\n    #[async_trait]\n    impl NodeCache for NodeCache {\n        type Input = String;\n        async fn get(&self, node: &Node<String>) -> bool;\n        async fn set(&self, node: &Node<String>);\n        async fn clear(&self) -> Result<()>;\n        fn name(&self) -> &'static str;\n\n    }\n\n    impl Clone for NodeCache {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk> NodeCache for Box<dyn NodeCache<Input = T>> {\n    type Input = T;\n\n    async fn get(&self, node: &Node<T>) -> bool {\n        self.as_ref().get(node).await\n    }\n    async fn set(&self, node: &Node<T>) {\n        self.as_ref().set(node).await;\n    }\n    async fn clear(&self) -> Result<()> {\n        self.as_ref().clear().await\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk> NodeCache for Arc<dyn NodeCache<Input = T>> {\n    type Input = T;\n    async fn get(&self, node: &Node<T>) -> bool {\n        self.as_ref().get(node).await\n    }\n    async fn set(&self, node: &Node<T>) {\n        
self.as_ref().set(node).await;\n    }\n    async fn clear(&self) -> Result<()> {\n        self.as_ref().clear().await\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk> NodeCache for &dyn NodeCache<Input = T> {\n    type Input = T;\n    async fn get(&self, node: &Node<T>) -> bool {\n        (*self).get(node).await\n    }\n    async fn set(&self, node: &Node<T>) {\n        (*self).set(node).await;\n    }\n    async fn clear(&self) -> Result<()> {\n        (*self).clear().await\n    }\n}\n\n#[async_trait]\n/// Embeds a list of strings and returns its embeddings.\n/// Assumes the strings will be moved.\npub trait EmbeddingModel: Send + Sync + Debug + DynClone {\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(EmbeddingModel);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub EmbeddingModel {}\n\n    #[async_trait]\n    impl EmbeddingModel for EmbeddingModel {\n        async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for EmbeddingModel {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl EmbeddingModel for Box<dyn EmbeddingModel> {\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError> {\n        self.as_ref().embed(input).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl EmbeddingModel for Arc<dyn EmbeddingModel> {\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError> {\n        self.as_ref().embed(input).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl EmbeddingModel for &dyn EmbeddingModel {\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError> {\n        (*self).embed(input).await\n    }\n}\n\n#[async_trait]\n/// Embeds a list of strings and returns its embeddings.\n/// Assumes the strings will be moved.\npub trait SparseEmbeddingModel: Send + Sync + Debug + DynClone {\n    async fn sparse_embed(\n        &self,\n        input: Vec<String>,\n    ) -> Result<SparseEmbeddings, LanguageModelError>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(SparseEmbeddingModel);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub SparseEmbeddingModel {}\n\n    #[async_trait]\n    impl SparseEmbeddingModel for SparseEmbeddingModel {\n        async fn sparse_embed(&self, input: Vec<String>) -> Result<SparseEmbeddings, LanguageModelError>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for SparseEmbeddingModel {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl SparseEmbeddingModel for Box<dyn SparseEmbeddingModel> {\n    async fn sparse_embed(\n        &self,\n        input: Vec<String>,\n    ) -> Result<SparseEmbeddings, LanguageModelError> {\n        self.as_ref().sparse_embed(input).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl SparseEmbeddingModel for Arc<dyn SparseEmbeddingModel> {\n    async fn sparse_embed(\n        &self,\n        input: Vec<String>,\n    ) -> Result<SparseEmbeddings, LanguageModelError> {\n        self.as_ref().sparse_embed(input).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl SparseEmbeddingModel for &dyn SparseEmbeddingModel {\n    async fn sparse_embed(\n        &self,\n        input: Vec<String>,\n    ) -> Result<SparseEmbeddings, LanguageModelError> {\n        (*self).sparse_embed(input).await\n    }\n}\n\n#[async_trait]\n/// Given a string prompt, queries an LLM\npub trait SimplePrompt: Debug + Send + Sync + DynClone {\n    // Takes a simple prompt, prompts the llm and returns the response\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(SimplePrompt);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub SimplePrompt {}\n\n    #[async_trait]\n    impl SimplePrompt for SimplePrompt {\n        async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for SimplePrompt {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl SimplePrompt for Box<dyn SimplePrompt> {\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        self.as_ref().prompt(prompt).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl SimplePrompt for Arc<dyn SimplePrompt> {\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        self.as_ref().prompt(prompt).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl SimplePrompt for &dyn SimplePrompt {\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        (*self).prompt(prompt).await\n    }\n}\n\n#[async_trait]\n/// Persists nodes\npub trait Persist: Debug + Send + Sync + DynClone {\n    type Input: Chunk;\n    type Output: Chunk;\n\n    async fn setup(&self) -> Result<()>;\n    async fn store(&self, node: Node<Self::Input>) -> Result<Node<Self::Output>>;\n    async fn batch_store(&self, nodes: Vec<Node<Self::Input>>) -> IndexingStream<Self::Output>;\n    fn batch_size(&self) -> Option<usize> {\n        None\n    }\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(<I, O> Persist<Input = I, Output = O>);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub Persist {}\n\n    #[async_trait]\n    impl Persist for Persist {\n        type Input = String;\n        type Output = String;\n\n        async fn setup(&self) -> Result<()>;\n        async fn store(&self, node: Node<String>) -> Result<Node<String>>;\n        async fn batch_store(&self, nodes: Vec<Node<String>>) -> IndexingStream<String>;\n        fn batch_size(&self) -> Option<usize>;\n\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for Persist {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> Persist for Box<dyn Persist<Input = I, Output = O>> {\n    type Input = I;\n    type Output = O;\n\n    async fn setup(&self) -> Result<()> {\n        self.as_ref().setup().await\n    }\n    async fn store(&self, node: Node<I>) -> Result<Node<O>> {\n        self.as_ref().store(node).await\n    }\n    async fn batch_store(&self, nodes: Vec<Node<I>>) -> IndexingStream<O> {\n        self.as_ref().batch_store(nodes).await\n    }\n    fn batch_size(&self) -> Option<usize> {\n        self.as_ref().batch_size()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> Persist for Arc<dyn Persist<Input = I, Output = O>> {\n    type Input = I;\n    type Output = O;\n\n    async fn setup(&self) -> Result<()> {\n        self.as_ref().setup().await\n    }\n    async fn store(&self, node: Node<I>) -> Result<Node<O>> {\n        self.as_ref().store(node).await\n    }\n    async fn batch_store(&self, nodes: Vec<Node<I>>) -> IndexingStream<O> {\n        self.as_ref().batch_store(nodes).await\n    }\n    fn batch_size(&self) -> Option<usize> {\n        self.as_ref().batch_size()\n    }\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<I: Chunk, O: Chunk> Persist for &dyn Persist<Input = I, Output = O> {\n    type Input = I;\n    type Output = O;\n\n    async fn setup(&self) -> 
Result<()> {\n        (*self).setup().await\n    }\n    async fn store(&self, node: Node<I>) -> Result<Node<O>> {\n        (*self).store(node).await\n    }\n    async fn batch_store(&self, nodes: Vec<Node<I>>) -> IndexingStream<O> {\n        (*self).batch_store(nodes).await\n    }\n    fn batch_size(&self) -> Option<usize> {\n        (*self).batch_size()\n    }\n}\n\n/// Allows for passing defaults from the pipeline to the transformer\n/// Required for batch transformers, at least as a marker; an implementation is not required\npub trait WithIndexingDefaults {\n    fn with_indexing_defaults(&mut self, _indexing_defaults: IndexingDefaults) {}\n}\n\n/// Allows for passing defaults from the pipeline to the batch transformer\n/// Required for batch transformers, at least as a marker; an implementation is not required\npub trait WithBatchIndexingDefaults {\n    fn with_indexing_defaults(&mut self, _indexing_defaults: IndexingDefaults) {}\n}\n\nimpl<I, O> WithIndexingDefaults for dyn Transformer<Input = I, Output = O> {}\nimpl<I, O> WithIndexingDefaults for Box<dyn Transformer<Input = I, Output = O>> {\n    fn with_indexing_defaults(&mut self, indexing_defaults: IndexingDefaults) {\n        self.as_mut().with_indexing_defaults(indexing_defaults);\n    }\n}\nimpl<I, O> WithBatchIndexingDefaults for dyn BatchableTransformer<Input = I, Output = O> {}\nimpl<I, O> WithBatchIndexingDefaults for Box<dyn BatchableTransformer<Input = I, Output = O>> {\n    fn with_indexing_defaults(&mut self, indexing_defaults: IndexingDefaults) {\n        self.as_mut().with_indexing_defaults(indexing_defaults);\n    }\n}\n\nimpl<F> WithIndexingDefaults for F where F: Fn(Node<String>) -> Result<Node<String>> {}\nimpl<F> WithBatchIndexingDefaults for F where F: Fn(Vec<Node<String>>) -> IndexingStream<String> {}\n\n#[cfg(feature = \"test-utils\")]\nimpl WithIndexingDefaults for MockTransformer {}\n#[cfg(feature = \"test-utils\")]\nimpl WithBatchIndexingDefaults for MockBatchableTransformer 
{}\n\n#[async_trait]\n/// Given a string prompt, queries an LLM to return structured data\npub trait StructuredPrompt: Debug + Send + Sync + DynClone {\n    async fn structured_prompt<T: DeserializeOwned + JsonSchema>(\n        &self,\n        prompt: Prompt,\n    ) -> Result<T, LanguageModelError>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\n/// Helper trait object to call structured prompt with dynamic dispatch\n///\n/// Internally Swiftide only implements this trait, as implementing `DynStructuredPrompt` gives\n/// `StructuredPrompt` for free\n#[async_trait]\npub trait DynStructuredPrompt: Debug + Send + Sync + DynClone {\n    async fn structured_prompt_dyn(\n        &self,\n        prompt: Prompt,\n        schema: schemars::Schema,\n    ) -> Result<serde_json::Value, LanguageModelError>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(DynStructuredPrompt);\n\n#[async_trait]\nimpl<C> StructuredPrompt for C\nwhere\n    C: DynStructuredPrompt + Debug + Send + Sync + DynClone,\n{\n    async fn structured_prompt<T: DeserializeOwned + JsonSchema>(\n        &self,\n        prompt: Prompt,\n    ) -> Result<T, LanguageModelError> {\n        // Call with T = serde_json::Value\n        let schema = schema_for!(T);\n        let val = self.structured_prompt_dyn(prompt, schema).await?;\n\n        let parsed = serde_json::from_value(val).map_err(LanguageModelError::permanent)?;\n\n        Ok(parsed)\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n#![cfg_attr(coverage_nightly, feature(coverage_attribute))]\n\npub mod agent_traits;\npub mod chat_completion;\npub mod indexing_decorators;\nmod indexing_defaults;\nmod indexing_stream;\npub mod indexing_traits;\nmod node;\nmod query;\nmod query_stream;\npub mod query_traits;\nmod search_strategies;\nmod stream_backoff;\npub mod token_estimation;\npub mod type_aliases;\n\npub mod document;\npub mod prompt;\npub use type_aliases::*;\n\nmod metadata;\nmod query_evaluation;\n\n/// All traits are available from the root\npub use crate::agent_traits::*;\npub use crate::chat_completion::traits::*;\npub use crate::indexing_traits::*;\npub use crate::query_traits::*;\npub use crate::token_estimation::*;\n\n// Decorators are available from the root\npub use crate::indexing_decorators::*;\n\npub mod indexing {\n    pub use crate::indexing_decorators::*;\n    pub use crate::indexing_defaults::*;\n    pub use crate::indexing_stream::IndexingStream;\n    pub use crate::indexing_traits::*;\n    pub use crate::metadata::*;\n    pub use crate::node::*;\n}\n\npub mod querying {\n    pub use crate::document::*;\n    pub use crate::query::*;\n    pub use crate::query_evaluation::*;\n    pub use crate::query_stream::*;\n    pub use crate::query_traits::*;\n    pub mod search_strategies {\n        pub use crate::search_strategies::*;\n    }\n}\n\n/// Re-export of commonly used dependencies.\npub mod prelude;\n\n#[cfg(feature = \"test-utils\")]\npub mod test_utils;\n\npub mod util;\n\n#[cfg(feature = \"metrics\")]\npub mod metrics;\n\n/// Pipeline statistics collection for monitoring and observability\npub mod statistics;\n"
  },
  {
    "path": "swiftide-core/src/metadata.rs",
    "content": "//! Metadata is a key-value store for indexation nodes\n//!\n//! Typically metadata is used to extract or generate additional information about the node\n//!\n//! Internally it uses a `BTreeMap` to store the key-value pairs, to ensure the data is sorted.\nuse std::collections::{BTreeMap, btree_map::IntoValues};\n\nuse serde::Deserializer;\n\nuse crate::util::debug_long_utf8;\n\n#[derive(Clone, Default, PartialEq, Eq)]\npub struct Metadata {\n    inner: BTreeMap<String, serde_json::Value>,\n}\n\nimpl std::fmt::Debug for Metadata {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_map()\n            .entries(\n                self.inner\n                    .iter()\n                    .map(|(k, v): (&String, &serde_json::Value)| {\n                        let fvalue = v.as_str().map_or_else(\n                            || debug_long_utf8(v.to_string(), 100),\n                            ToString::to_string,\n                        );\n\n                        (k, fvalue)\n                    }),\n            )\n            .finish()\n    }\n}\n\nimpl Metadata {\n    pub fn iter(&self) -> impl Iterator<Item = (&String, &serde_json::Value)> {\n        self.inner.iter()\n    }\n\n    pub fn insert<K, V>(&mut self, key: K, value: V)\n    where\n        K: Into<String>,\n        V: Into<serde_json::Value>,\n    {\n        self.inner.insert(key.into(), value.into());\n    }\n\n    pub fn get(&self, key: impl AsRef<str>) -> Option<&serde_json::Value> {\n        self.inner.get(key.as_ref())\n    }\n\n    pub fn into_values(self) -> IntoValues<String, serde_json::Value> {\n        self.inner.into_values()\n    }\n\n    pub fn keys(&self) -> impl Iterator<Item = &str> {\n        self.inner.keys().map(String::as_str)\n    }\n\n    pub fn values(&self) -> impl Iterator<Item = &serde_json::Value> {\n        self.inner.values()\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.inner.is_empty()\n    
}\n}\n\nimpl<K, V> Extend<(K, V)> for Metadata\nwhere\n    K: Into<String>,\n    V: Into<serde_json::Value>,\n{\n    fn extend<T: IntoIterator<Item = (K, V)>>(&mut self, iter: T) {\n        self.inner\n            .extend(iter.into_iter().map(|(k, v)| (k.into(), v.into())));\n    }\n}\n\nimpl<K, V> From<Vec<(K, V)>> for Metadata\nwhere\n    K: Into<String>,\n    V: Into<serde_json::Value>,\n{\n    fn from(items: Vec<(K, V)>) -> Self {\n        let inner = items\n            .into_iter()\n            .map(|(k, v)| (k.into(), v.into()))\n            .collect();\n        Metadata { inner }\n    }\n}\n\nimpl<K, V> From<(K, V)> for Metadata\nwhere\n    K: Into<String>,\n    V: Into<serde_json::Value>,\n{\n    fn from(items: (K, V)) -> Self {\n        let sliced: [(K, V); 1] = [items];\n        let inner = sliced\n            .into_iter()\n            .map(|(k, v)| (k.into(), v.into()))\n            .collect();\n        Metadata { inner }\n    }\n}\n\nimpl<'a, K, V> From<&'a [(K, V)]> for Metadata\nwhere\n    K: Into<String> + Clone,\n    V: Into<serde_json::Value> + Clone,\n{\n    fn from(items: &'a [(K, V)]) -> Self {\n        let inner = items\n            .iter()\n            .cloned()\n            .map(|(k, v)| (k.into(), v.into()))\n            .collect();\n        Metadata { inner }\n    }\n}\n\nimpl<K: Ord, V, const N: usize> From<[(K, V); N]> for Metadata\nwhere\n    K: Ord + Into<String>,\n    V: Into<serde_json::Value>,\n{\n    fn from(mut arr: [(K, V); N]) -> Self {\n        if N == 0 {\n            return Metadata {\n                inner: BTreeMap::new(),\n            };\n        }\n        arr.sort_by(|a, b| a.0.cmp(&b.0));\n        let inner: BTreeMap<String, serde_json::Value> =\n            arr.into_iter().map(|(k, v)| (k.into(), v.into())).collect();\n        Metadata { inner }\n    }\n}\n\nimpl IntoIterator for Metadata {\n    type Item = (String, serde_json::Value);\n    type IntoIter = std::collections::btree_map::IntoIter<String, 
serde_json::Value>;\n    fn into_iter(self) -> Self::IntoIter {\n        self.inner.into_iter()\n    }\n}\n\nimpl<'iter> IntoIterator for &'iter Metadata {\n    type Item = (&'iter String, &'iter serde_json::Value);\n    type IntoIter = std::collections::btree_map::Iter<'iter, String, serde_json::Value>;\n    fn into_iter(self) -> Self::IntoIter {\n        self.inner.iter()\n    }\n}\n\nimpl<'de> serde::Deserialize<'de> for Metadata {\n    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {\n        BTreeMap::deserialize(deserializer).map(|inner| Metadata { inner })\n    }\n}\n\nimpl serde::Serialize for Metadata {\n    fn serialize<S: serde::Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {\n        self.inner.serialize(serializer)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use serde_json::json;\n\n    #[test]\n    fn test_insert_and_get() {\n        let mut metadata = Metadata::default();\n        let key = \"key\";\n        let value = \"value\";\n        metadata.insert(key, \"value\");\n\n        assert_eq!(metadata.get(key).unwrap().as_str(), Some(value));\n    }\n\n    #[test]\n    fn test_iter() {\n        let mut metadata = Metadata::default();\n        metadata.insert(\"key1\", json!(\"value1\"));\n        metadata.insert(\"key2\", json!(\"value2\"));\n\n        let mut iter = metadata.iter();\n        assert_eq!(iter.next(), Some((&\"key1\".to_string(), &json!(\"value1\"))));\n        assert_eq!(iter.next(), Some((&\"key2\".to_string(), &json!(\"value2\"))));\n        assert_eq!(iter.next(), None);\n    }\n\n    #[test]\n    fn test_extend() {\n        let mut metadata = Metadata::default();\n        metadata.extend(vec![(\"key1\", json!(\"value1\")), (\"key2\", json!(\"value2\"))]);\n\n        assert_eq!(metadata.get(\"key1\"), Some(&json!(\"value1\")));\n        assert_eq!(metadata.get(\"key2\"), Some(&json!(\"value2\")));\n    }\n\n    #[test]\n    fn test_from_vec() {\n        let 
metadata = Metadata::from(vec![(\"key1\", json!(\"value1\")), (\"key2\", json!(\"value2\"))]);\n\n        assert_eq!(metadata.get(\"key1\"), Some(&json!(\"value1\")));\n        assert_eq!(metadata.get(\"key2\"), Some(&json!(\"value2\")));\n    }\n\n    #[test]\n    fn test_into_values() {\n        let mut metadata = Metadata::default();\n        metadata.insert(\"key1\", json!(\"value1\"));\n        metadata.insert(\"key2\", json!(\"value2\"));\n\n        let values: Vec<_> = metadata.into_values().collect();\n        assert_eq!(values, vec![json!(\"value1\"), json!(\"value2\")]);\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/metrics.rs",
    "content": "use std::sync::OnceLock;\n\nuse metrics::{IntoLabels, Label, counter, describe_counter};\n\nstatic METRICS_INIT: OnceLock<bool> = OnceLock::new();\n\n/// Lazily describes all the metrics used in this module once\npub fn lazy_init() {\n    METRICS_INIT.get_or_init(|| {\n        describe_counter!(\"swiftide.usage.prompt_tokens\", \"token usage for the prompt\");\n        describe_counter!(\n            \"swiftide.usage.completion_tokens\",\n            \"token usage for the completion\"\n        );\n        describe_counter!(\"swiftide.usage.total_tokens\", \"total token usage\");\n        true\n    });\n}\n\n/// Emits usage metrics for a language model\npub fn emit_usage(\n    model: &str,\n    prompt_tokens: u64,\n    completion_tokens: u64,\n    total_tokens: u64,\n    custom_metadata: Option<impl IntoLabels>,\n) {\n    let model = model.to_string();\n    let mut metadata = vec![];\n\n    if let Some(custom_metadata) = custom_metadata {\n        metadata.extend(custom_metadata.into_labels());\n    }\n    metadata.push(Label::new(\"model\", model));\n\n    lazy_init();\n    counter!(\"swiftide.usage.prompt_tokens\", metadata.iter()).increment(prompt_tokens);\n    counter!(\"swiftide.usage.completion_tokens\", metadata.iter()).increment(completion_tokens);\n    counter!(\"swiftide.usage.total_tokens\", metadata.iter()).increment(total_tokens);\n}\n"
  },
  {
    "path": "swiftide-core/src/node.rs",
    "content": "//! This module defines the `Node` struct and its associated methods.\n//!\n//! `Node` represents a unit of data in the indexing process, containing metadata,\n//! the data chunk itself, and an optional vector representation.\n//!\n//! # Overview\n//!\n//! The `Node` struct is designed to encapsulate all necessary information for a single\n//! unit of data being processed in the indexing pipeline. It includes fields for an identifier,\n//! file path, data chunk, optional vector representation, and metadata.\n//!\n//! The struct provides methods to convert the node into an embeddable string format and to\n//! calculate a hash value for the node based on its path and chunk.\n//!\n//! # Usage\n//!\n//! The `Node` struct is used throughout the indexing pipeline to represent and process\n//! individual units of data. It is particularly useful in scenarios where metadata and data chunks\n//! need to be processed together.\nuse std::{\n    collections::HashMap,\n    fmt::Debug,\n    hash::{Hash, Hasher},\n    os::unix::ffi::OsStrExt,\n    path::PathBuf,\n};\n\nuse derive_builder::Builder;\nuse itertools::Itertools;\nuse serde::{Deserialize, Serialize};\n\nuse crate::{Embedding, SparseEmbedding, metadata::Metadata};\n\n/// Helper trait for types that can be used as data chunks in a `Node`.\n/// For now always expects an owned value\n///\n/// A chunk must be able to yield its bytes, be cloned (not while streaming), and be sent across\n/// threads.\npub trait Chunk: Clone + Send + Sync + Debug + AsRef<[u8]> + 'static {}\nimpl<T> Chunk for T where T: Clone + Send + Sync + Debug + AsRef<[u8]> + 'static {}\n\n/// Represents a unit of data in the indexing process.\n///\n/// `Node` encapsulates all necessary information for a single unit of data being processed\n/// in the indexing pipeline. 
It includes fields for an identifier, file path, data chunk, optional\n/// vector representation, and metadata.\n#[derive(Default, Clone, Serialize, Deserialize, PartialEq, Builder)]\n#[builder(setter(into, strip_option), build_fn(error = \"anyhow::Error\"))]\npub struct Node<T: Chunk> {\n    /// File path associated with the node.\n    #[builder(default)]\n    pub path: PathBuf,\n    /// Data chunk contained in the node.\n    pub chunk: T,\n    /// Optional vector representation of embedded data.\n    #[builder(default)]\n    pub vectors: Option<HashMap<EmbeddedField, Embedding>>,\n    /// Optional sparse vector representation of embedded data.\n    #[builder(default)]\n    pub sparse_vectors: Option<HashMap<EmbeddedField, SparseEmbedding>>,\n    /// Metadata associated with the node.\n    #[builder(default)]\n    pub metadata: Metadata,\n    /// Mode of embedding data Chunk and Metadata\n    #[builder(default)]\n    pub embed_mode: EmbedMode,\n    /// Size of the input this node was originally derived from in bytes\n    #[builder(default)]\n    pub original_size: usize,\n    /// Offset of the chunk relative to the start of the input this node was originally derived\n    /// from in bytes\n    #[builder(default)]\n    pub offset: usize,\n}\n\npub type TextNode = Node<String>;\n\nimpl<T: Chunk> NodeBuilder<T> {\n    pub fn maybe_sparse_vectors(\n        &mut self,\n        sparse_vectors: Option<HashMap<EmbeddedField, SparseEmbedding>>,\n    ) -> &mut Self {\n        self.sparse_vectors = Some(sparse_vectors);\n        self\n    }\n\n    pub fn maybe_vectors(\n        &mut self,\n        vectors: Option<HashMap<EmbeddedField, Embedding>>,\n    ) -> &mut Self {\n        self.vectors = Some(vectors);\n        self\n    }\n}\n\nimpl<T: Chunk> Debug for Node<T> {\n    /// Formats the node for debugging purposes.\n    ///\n    /// This method is used to provide a human-readable representation of the node when debugging.\n    /// The vector field is displayed as the 
number of elements in the vector if present.\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Node\")\n            .field(\"id\", &self.id())\n            .field(\"path\", &self.path)\n            .field(\"chunk\", &self.chunk)\n            .field(\"metadata\", &self.metadata)\n            .field(\n                \"vectors\",\n                &self\n                    .vectors\n                    .iter()\n                    .flat_map(HashMap::iter)\n                    .map(|(embed_type, vec)| format!(\"'{embed_type}': {}\", vec.len()))\n                    .join(\",\"),\n            )\n            .field(\n                \"sparse_vectors\",\n                &self\n                    .sparse_vectors\n                    .iter()\n                    .flat_map(HashMap::iter)\n                    .map(|(embed_type, vec)| {\n                        format!(\n                            \"'{embed_type}': indices({}), values({})\",\n                            vec.indices.len(),\n                            vec.values.len()\n                        )\n                    })\n                    .join(\",\"),\n            )\n            .field(\"embed_mode\", &self.embed_mode)\n            .finish()\n    }\n}\n\nimpl<T: Chunk> Node<T> {\n    /// Builds a new instance of `Node`, returning a `NodeBuilder`. 
Copies\n    /// over the fields from the provided `Node`.\n    pub fn build_from_other(node: &Node<T>) -> NodeBuilder<T> {\n        NodeBuilder::default()\n            .path(node.path.clone())\n            .chunk(node.chunk.clone())\n            .metadata(node.metadata.clone())\n            .maybe_vectors(node.vectors.clone())\n            .maybe_sparse_vectors(node.sparse_vectors.clone())\n            .embed_mode(node.embed_mode)\n            .original_size(node.original_size)\n            .offset(node.offset)\n            .to_owned()\n    }\n\n    /// Creates a new instance of `NodeBuilder.`\n    pub fn builder<VALUE: Chunk + Clone>() -> NodeBuilder<VALUE> {\n        NodeBuilder::default()\n    }\n\n    /// Creates a new instance of `Node` with the specified data chunk.\n    ///\n    /// The other fields are set to their default values.\n    pub fn new(chunk: impl Into<String>) -> Node<String> {\n        let chunk = chunk.into();\n        let original_size = chunk.len();\n        Node {\n            chunk,\n            original_size,\n            ..Default::default()\n        }\n    }\n\n    pub fn with_metadata(&mut self, metadata: impl Into<Metadata>) -> &mut Self {\n        self.metadata = metadata.into();\n        self\n    }\n\n    pub fn with_vectors(\n        &mut self,\n        vectors: impl Into<HashMap<EmbeddedField, Embedding>>,\n    ) -> &mut Self {\n        self.vectors = Some(vectors.into());\n        self\n    }\n\n    pub fn with_sparse_vectors(\n        &mut self,\n        sparse_vectors: impl Into<HashMap<EmbeddedField, SparseEmbedding>>,\n    ) -> &mut Self {\n        self.sparse_vectors = Some(sparse_vectors.into());\n        self\n    }\n\n    /// Retrieve the identifier of the node.\n    ///\n    /// Calculates the identifier of the node based on its path and chunk as bytes, returning a\n    /// UUID (v3).\n    ///\n    /// WARN: Does not memoize the id. 
Use sparingly.\n    pub fn id(&self) -> uuid::Uuid {\n        // Calculate the identifier based on the path and chunk as bytes\n        let bytes = [self.path.as_os_str().as_bytes(), self.chunk.as_ref()].concat();\n\n        uuid::Uuid::new_v3(&uuid::Uuid::NAMESPACE_OID, &bytes)\n    }\n}\n\nimpl Node<String> {\n    /// Creates embeddable data depending on chosen `EmbedMode`.\n    ///\n    /// # Returns\n    ///\n    /// Embeddable data mapped to their `EmbeddedField`.\n    pub fn as_embeddables(&self) -> Vec<(EmbeddedField, String)> {\n        // TODO: Cow and borrow the inner data + generic\n        let mut embeddables = Vec::new();\n\n        if self.embed_mode == EmbedMode::SingleWithMetadata || self.embed_mode == EmbedMode::Both {\n            embeddables.push((EmbeddedField::Combined, self.combine_chunk_with_metadata()));\n        }\n\n        if self.embed_mode == EmbedMode::PerField || self.embed_mode == EmbedMode::Both {\n            embeddables.push((EmbeddedField::Chunk, self.chunk.clone()));\n            for (name, value) in &self.metadata {\n                let value = value\n                    .as_str()\n                    .map_or_else(|| value.to_string(), ToString::to_string);\n                embeddables.push((EmbeddedField::Metadata(name.clone()), value));\n            }\n        }\n\n        embeddables\n    }\n\n    /// Converts the node into an [`self::EmbeddedField::Combined`] type of embeddable.\n    ///\n    /// This embeddable format consists of the metadata formatted as key-value pairs, each on a new\n    /// line, followed by the data chunk.\n    ///\n    /// # Returns\n    ///\n    /// A string representing the embeddable format of the node.\n    fn combine_chunk_with_metadata(&self) -> String {\n        // Metadata formatted by newlines joined with the chunk\n        let metadata = self\n            .metadata\n            .iter()\n            .map(|(k, v)| {\n                let v = v\n                    .as_str()\n                   
 .map_or_else(|| v.to_string(), ToString::to_string);\n\n                format!(\"{k}: {v}\")\n            })\n            .collect::<Vec<String>>()\n            .join(\"\\n\");\n\n        format!(\"{}\\n{}\", metadata, self.chunk)\n    }\n}\n\nimpl Hash for Node<String> {\n    /// Hashes the node based on its path and chunk.\n    ///\n    /// This method is used by the `calculate_hash` method to generate a hash value for the node.\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.path.hash(state);\n        self.chunk.hash(state);\n    }\n}\n\nimpl<T: Into<String>> From<T> for Node<String> {\n    fn from(value: T) -> Self {\n        let value: String = value.into();\n        Node::<String>::new(value)\n    }\n}\n\n/// Embed mode of the pipeline.\n#[derive(Copy, Debug, Default, Clone, Serialize, Deserialize, PartialEq)]\npub enum EmbedMode {\n    #[default]\n    /// Embedding Chunk of data combined with Metadata.\n    SingleWithMetadata,\n    /// Embedding Chunk of data and every Metadata separately.\n    PerField,\n    /// Embedding Chunk of data and every Metadata separately and Chunk of data combined with\n    /// Metadata.\n    Both,\n}\n\n/// Type of Embeddable stored in model.\n#[derive(\n    Clone, Default, Serialize, Deserialize, PartialEq, Eq, Hash, strum_macros::Display, Debug,\n)]\npub enum EmbeddedField {\n    #[default]\n    /// Embeddable created from Chunk of data combined with Metadata.\n    Combined,\n    /// Embeddable created from Chunk of data only.\n    Chunk,\n    /// Embeddable created from Metadata.\n    /// String stores Metadata name.\n    #[strum(to_string = \"Metadata: {0}\")]\n    Metadata(String),\n}\n\nimpl EmbeddedField {\n    /// Returns the name of the field when it would be a sparse vector\n    pub fn sparse_field_name(&self) -> String {\n        format!(\"{self}_sparse\")\n    }\n\n    /// Returns the name of the field when it would be a dense vector\n    pub fn field_name(&self) -> String {\n        
format!(\"{self}\")\n    }\n}\n\n#[allow(clippy::from_over_into)]\nimpl Into<String> for EmbeddedField {\n    fn into(self) -> String {\n        self.to_string()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use test_case::test_case;\n\n    #[test_case(&EmbeddedField::Combined, [\"Combined\", \"Combined_sparse\"])]\n    #[test_case(&EmbeddedField::Chunk, [\"Chunk\", \"Chunk_sparse\"])]\n    #[test_case(&EmbeddedField::Metadata(\"test\".into()), [\"Metadata: test\", \"Metadata: test_sparse\"])]\n    fn field_name_tests(embedded_field: &EmbeddedField, expected: [&str; 2]) {\n        assert_eq!(embedded_field.field_name(), expected[0]);\n        assert_eq!(embedded_field.sparse_field_name(), expected[1]);\n    }\n\n    #[test]\n    fn test_debugging_node_with_utf8_char_boundary() {\n        let node = Node::from(\"🦀\".repeat(101));\n        // Single char\n        let _ = format!(\"{node:?}\");\n\n        // With invalid char boundary\n        let node = Node::from(\"Jürgen\".repeat(100));\n        let _ = format!(\"{node:?}\");\n    }\n\n    #[test]\n    fn test_build_from_other_without_vectors() {\n        let original_node = Node::from(\"test_chunk\")\n            .with_metadata(Metadata::default())\n            .with_vectors(HashMap::new())\n            .with_sparse_vectors(HashMap::new())\n            .to_owned();\n\n        let builder = Node::build_from_other(&original_node);\n        let new_node = builder.build().unwrap();\n\n        assert_eq!(original_node, new_node);\n    }\n\n    #[test]\n    fn test_build_from_other_with_vectors() {\n        let mut vectors = HashMap::new();\n        vectors.insert(EmbeddedField::Chunk, Embedding::default());\n\n        let mut sparse_vectors = HashMap::new();\n        sparse_vectors.insert(\n            EmbeddedField::Chunk,\n            SparseEmbedding {\n                indices: vec![],\n                values: vec![],\n            },\n        );\n\n        let original_node = 
Node::from(\"test_chunk\")\n            .with_metadata(Metadata::default())\n            .with_vectors(vectors.clone())\n            .with_sparse_vectors(sparse_vectors.clone())\n            .to_owned();\n\n        let builder = Node::build_from_other(&original_node);\n        let new_node = builder.build().unwrap();\n\n        assert_eq!(original_node, new_node);\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/prelude.rs",
    "content": "pub use anyhow::{Context as _, Result};\npub use async_trait::async_trait;\npub use derive_builder::Builder;\npub use futures_util::{StreamExt, TryStreamExt};\npub use std::sync::Arc;\npub use tracing::Instrument;\n\n#[cfg(feature = \"test-utils\")]\npub use crate::assert_default_prompt_snapshot;\n"
  },
  {
    "path": "swiftide-core/src/prompt.rs",
    "content": "//! Prompts templating and management\n//!\n//! Prompts are first class citizens in Swiftide and use [tera] under the hood. tera\n//! uses jinja style templates which allows for a lot of flexibility.\n//!\n//! Conceptually, a [Prompt] is something you send to i.e.\n//! [`SimplePrompt`][crate::SimplePrompt]. A prompt can have\n//! added context for substitution and other templating features.\n//!\n//! Transformers in Swiftide come with default prompts, and they can be customized or replaced as\n//! needed.\n//!\n//! [`Template`] can be added with [`Template::try_compiled_from_str`]. Prompts can also be\n//! created on the fly from anything that implements [`Into<String>`]. Compiled prompts are stored\n//! in an internal repository.\n//!\n//! Additionally, `Template::String` and `Template::Static` can be used to create\n//! templates on the fly as well.\n//!\n//! It's recommended to precompile your templates.\n//!\n//! # Example\n//!\n//! ```\n//! #[tokio::main]\n//! # async fn main() {\n//! # use swiftide_core::prompt::Prompt;\n//! let prompt = Prompt::from(\"hello {{world}}\").with_context_value(\"world\", \"swiftide\");\n//!\n//! assert_eq!(prompt.render().unwrap(), \"hello swiftide\");\n//! # }\n//! 
```\nuse std::{\n    borrow::Cow,\n    sync::{LazyLock, RwLock},\n};\n\nuse anyhow::{Context as _, Result};\nuse tera::Tera;\n\nuse crate::node::TextNode;\n\n/// A Prompt that can be rendered and sent to a large language model.\n#[derive(Clone, Debug)]\npub struct Prompt {\n    template_ref: TemplateRef,\n    context: Option<tera::Context>,\n}\n\n/// References a template to be rendered,\n/// either a one-off template or a tera template\n#[derive(Clone, Debug)]\nenum TemplateRef {\n    OneOff(Cow<'static, str>),\n    Tera(Cow<'static, str>),\n}\n\npub static SWIFTIDE_TERA: LazyLock<RwLock<Tera>> = LazyLock::new(|| RwLock::new(Tera::default()));\n\nimpl Prompt {\n    /// Extend the swiftide repository with another Tera instance.\n    ///\n    /// You can use this to add your own templates, functions and partials.\n    ///\n    /// # Panics\n    ///\n    /// Panics if the `RWLock` is poisoned.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the `Tera` instance cannot be extended.\n    pub fn extend(other: &Tera) -> Result<()> {\n        let mut swiftide_tera = SWIFTIDE_TERA.write().unwrap();\n        swiftide_tera.extend(other)?;\n        Ok(())\n    }\n\n    /// Create a new prompt from a compiled template that is present in the Tera repository\n    pub fn from_compiled_template(name: impl Into<Cow<'static, str>>) -> Prompt {\n        Prompt {\n            template_ref: TemplateRef::Tera(name.into()),\n            context: None,\n        }\n    }\n\n    /// Adds an `ingestion::Node` to the context of the Prompt\n    #[must_use]\n    pub fn with_node(mut self, node: &TextNode) -> Self {\n        let context = self.context.get_or_insert_with(tera::Context::default);\n        context.insert(\"node\", &node);\n        self\n    }\n\n    /// Adds anything that implements [Into<tera::Context>], like `Serialize`, to the Prompt\n    #[must_use]\n    pub fn with_context(mut self, new_context: impl Into<tera::Context>) -> Self {\n        let context = 
self.context.get_or_insert_with(tera::Context::default);\n        context.extend(new_context.into());\n\n        self\n    }\n\n    /// Adds a key-value pair to the context of the Prompt\n    #[must_use]\n    pub fn with_context_value(mut self, key: &str, value: impl Into<tera::Value>) -> Self {\n        let context = self.context.get_or_insert_with(tera::Context::default);\n        context.insert(key, &value.into());\n        self\n    }\n\n    /// Renders a prompt\n    ///\n    /// If no context is provided, the prompt will be rendered as is.\n    ///\n    /// # Errors\n    ///\n    /// See `Template::render`\n    ///\n    /// # Panics\n    ///\n    /// Panics if the `RwLock` is poisoned.\n    pub fn render(&self) -> Result<String> {\n        if self.context.is_none()\n            && let TemplateRef::OneOff(ref template) = self.template_ref\n        {\n            return Ok(template.to_string());\n        }\n\n        let context: Cow<'_, tera::Context> = self\n            .context\n            .as_ref()\n            .map_or_else(|| Cow::Owned(tera::Context::default()), Cow::Borrowed);\n\n        match &self.template_ref {\n            TemplateRef::OneOff(template) => {\n                tera::Tera::one_off(template.as_ref(), &context, false)\n                    .context(\"Failed to render one-off template\")\n            }\n            TemplateRef::Tera(template) => SWIFTIDE_TERA\n                .read()\n                .unwrap()\n                .render(template.as_ref(), &context)\n                .context(\"Failed to render template\"),\n        }\n    }\n}\n\nimpl From<&'static str> for Prompt {\n    fn from(prompt: &'static str) -> Self {\n        Prompt {\n            template_ref: TemplateRef::OneOff(prompt.into()),\n            context: None,\n        }\n    }\n}\n\nimpl From<String> for Prompt {\n    fn from(prompt: String) -> Self {\n        Prompt {\n            template_ref: TemplateRef::OneOff(prompt.into()),\n            context: None,\n        
}\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use crate::node::Node;\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_prompt() {\n        let prompt: Prompt = \"hello {{world}}\".into();\n        let prompt = prompt.with_context_value(\"world\", \"swiftide\");\n        assert_eq!(prompt.render().unwrap(), \"hello swiftide\");\n    }\n\n    #[tokio::test]\n    async fn test_prompt_with_node() {\n        let prompt: Prompt = \"hello {{node.chunk}}\".into();\n        let node = Node::from(\"test\");\n        let prompt = prompt.with_node(&node);\n        assert_eq!(prompt.render().unwrap(), \"hello test\");\n    }\n\n    #[tokio::test]\n    async fn test_one_off_from_string() {\n        let mut prompt: Prompt = \"hello {{world}}\".into();\n        prompt = prompt.with_context_value(\"world\", \"swiftide\");\n\n        assert_eq!(prompt.render().unwrap(), \"hello swiftide\");\n    }\n\n    #[tokio::test]\n    async fn test_extending_with_custom_repository() {\n        let mut custom_tera = tera::Tera::new(\"**/some/prompts.md\").unwrap();\n\n        custom_tera\n            .add_raw_template(\"hello\", \"hello {{world}}\")\n            .unwrap();\n\n        Prompt::extend(&custom_tera).unwrap();\n\n        let prompt =\n            Prompt::from_compiled_template(\"hello\").with_context_value(\"world\", \"swiftide\");\n\n        assert_eq!(prompt.render().unwrap(), \"hello swiftide\");\n    }\n\n    #[tokio::test]\n    async fn test_coercion_to_prompt() {\n        // str\n        let raw: &str = \"hello {{world}}\";\n\n        let prompt: Prompt = raw.into();\n        assert_eq!(\n            prompt\n                .with_context_value(\"world\", \"swiftide\")\n                .render()\n                .unwrap(),\n            \"hello swiftide\"\n        );\n\n        let prompt: Prompt = raw.to_string().into();\n        assert_eq!(\n            prompt\n                .with_context_value(\"world\", \"swiftide\")\n                .render()\n                
.unwrap(),\n            \"hello swiftide\"\n        );\n    }\n\n    #[tokio::test]\n    async fn test_assume_rendered_unless_context_methods_called() {\n        let prompt = Prompt::from(\"hello {{world}}\");\n\n        assert_eq!(prompt.render().unwrap(), \"hello {{world}}\");\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/query.rs",
    "content": "//! A query is the main object going through a query pipeline\n//!\n//! It acts as a state machine, with the following transitions:\n//!\n//! `states::Pending`: No documents have been retrieved\n//! `states::Retrieved`: Documents have been retrieved\n//! `states::Answered`: The query has been answered\nuse derive_builder::Builder;\n\nuse crate::{Embedding, SparseEmbedding, document::Document, util::debug_long_utf8};\n\n/// A query is the main object going through a query pipeline\n///\n/// It acts as a state machine, with the following transitions:\n///\n/// `states::Pending`: No documents have been retrieved\n/// `states::Retrieved`: Documents have been retrieved\n/// `states::Answered`: The query has been answered\n#[derive(Clone, Default, Builder, PartialEq)]\n#[builder(setter(into))]\npub struct Query<STATE: QueryState> {\n    original: String,\n    #[builder(default = \"self.original.clone().unwrap_or_default()\")]\n    current: String,\n    #[builder(default = STATE::default())]\n    state: STATE,\n    #[builder(default)]\n    transformation_history: Vec<TransformationEvent>,\n\n    // TODO: How would this work when doing a rollup query?\n    #[builder(default)]\n    pub embedding: Option<Embedding>,\n\n    #[builder(default)]\n    pub sparse_embedding: Option<SparseEmbedding>,\n\n    /// Documents the query will operate on\n    ///\n    /// A query can retrieve multiple times, accumulating documents\n    #[builder(default)]\n    pub documents: Vec<Document>,\n}\n\nimpl<STATE: std::fmt::Debug + QueryState> std::fmt::Debug for Query<STATE> {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Query\")\n            .field(\n                \"original\",\n                &debug_long_utf8(&self.original, 100).lines().take(1),\n            )\n            .field(\n                \"current\",\n                &debug_long_utf8(&self.current, 100).lines().take(1),\n            )\n            
.field(\"state\", &self.state)\n            .field(\"num_transformations\", &self.transformation_history.len())\n            .field(\"embedding\", &self.embedding.is_some())\n            .field(\"num_documents\", &self.documents.len())\n            .finish()\n    }\n}\n\nimpl<STATE: Clone + QueryState> Query<STATE> {\n    pub fn builder() -> QueryBuilder<STATE> {\n        QueryBuilder::default()\n    }\n\n    /// Return the query it started with\n    pub fn original(&self) -> &str {\n        &self.original\n    }\n\n    /// Return the current query (or, after retrieval, the current response)\n    pub fn current(&self) -> &str {\n        &self.current\n    }\n\n    fn transition_to<NEWSTATE: QueryState>(self, new_state: NEWSTATE) -> Query<NEWSTATE> {\n        Query {\n            state: new_state,\n            original: self.original,\n            current: self.current,\n            transformation_history: self.transformation_history,\n            embedding: self.embedding,\n            sparse_embedding: self.sparse_embedding,\n            documents: self.documents,\n        }\n    }\n\n    #[allow(dead_code)]\n    pub fn history(&self) -> &Vec<TransformationEvent> {\n        &self.transformation_history\n    }\n\n    /// Returns the current documents that will be used as context for answer generation\n    pub fn documents(&self) -> &[Document] {\n        &self.documents\n    }\n\n    /// Returns the current documents as mutable\n    pub fn documents_mut(&mut self) -> &mut Vec<Document> {\n        &mut self.documents\n    }\n}\n\nimpl<STATE: Clone + CanRetrieve> Query<STATE> {\n    /// Add retrieved documents and transition to `states::Retrieved`\n    pub fn retrieved_documents(mut self, documents: Vec<Document>) -> Query<states::Retrieved> {\n        self.documents.extend(documents.clone());\n        self.transformation_history\n            .push(TransformationEvent::Retrieved {\n                before: self.current.clone(),\n                after: self.current.clone(),\n        
        documents,\n            });\n\n        let state = states::Retrieved;\n\n        self.transition_to(state)\n    }\n}\n\nimpl Query<states::Pending> {\n    pub fn new(query: impl Into<String>) -> Self {\n        Self {\n            original: query.into(),\n            ..Default::default()\n        }\n    }\n\n    /// Transforms the current query\n    pub fn transformed_query(&mut self, new_query: impl Into<String>) {\n        let new_query = new_query.into();\n\n        self.transformation_history\n            .push(TransformationEvent::Transformed {\n                before: self.current.clone(),\n                after: new_query.clone(),\n            });\n\n        self.current = new_query;\n    }\n}\n\nimpl Query<states::Retrieved> {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Transforms the current response\n    pub fn transformed_response(&mut self, new_response: impl Into<String>) {\n        let new_response = new_response.into();\n\n        self.transformation_history\n            .push(TransformationEvent::Transformed {\n                before: self.current.clone(),\n                after: new_response.clone(),\n            });\n\n        self.current = new_response;\n    }\n\n    /// Transition the query to `states::Answered`\n    #[must_use]\n    pub fn answered(mut self, answer: impl Into<String>) -> Query<states::Answered> {\n        self.current = answer.into();\n        let state = states::Answered;\n        self.transition_to(state)\n    }\n}\n\nimpl Query<states::Answered> {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Returns the answer of the query\n    pub fn answer(&self) -> &str {\n        &self.current\n    }\n}\n\n/// Marker trait for query states\npub trait QueryState: Send + Sync + Default {}\n/// Marker trait for query states that can still retrieve\npub trait CanRetrieve: QueryState {}\n\n/// States of a query\npub mod states {\n    use super::{CanRetrieve, QueryState};\n\n    
#[derive(Debug, Default, Clone, PartialEq)]\n    /// The query is pending and has not been used\n    pub struct Pending;\n\n    #[derive(Debug, Default, Clone, PartialEq)]\n    /// Documents have been retrieved\n    pub struct Retrieved;\n\n    #[derive(Debug, Default, Clone, PartialEq)]\n    /// The query has been answered\n    pub struct Answered;\n\n    impl QueryState for Pending {}\n    impl QueryState for Retrieved {}\n    impl QueryState for Answered {}\n\n    impl CanRetrieve for Pending {}\n    impl CanRetrieve for Retrieved {}\n}\n\nimpl<T: AsRef<str>> From<T> for Query<states::Pending> {\n    fn from(original: T) -> Self {\n        Self {\n            original: original.as_ref().to_string(),\n            current: original.as_ref().to_string(),\n            state: states::Pending,\n            ..Default::default()\n        }\n    }\n}\n\n#[derive(Clone, PartialEq)]\n/// Records changes to a query\npub enum TransformationEvent {\n    Transformed {\n        before: String,\n        after: String,\n    },\n    Retrieved {\n        before: String,\n        after: String,\n        documents: Vec<Document>,\n    },\n}\n\nimpl TransformationEvent {\n    /// Returns true if the event is a retrieval\n    pub fn is_retrieval(&self) -> bool {\n        matches!(self, TransformationEvent::Retrieved { .. })\n    }\n\n    /// Returns true if the event is a transformation\n    pub fn is_transformation(&self) -> bool {\n        matches!(self, TransformationEvent::Transformed { .. })\n    }\n\n    /// Returns the query before the transformation/retrieval\n    pub fn before(&self) -> &str {\n        match self {\n            TransformationEvent::Transformed { before, .. }\n            | TransformationEvent::Retrieved { before, .. } => before,\n        }\n    }\n\n    /// Returns the query after the transformation/retrieval\n    pub fn after(&self) -> &str {\n        match self {\n            TransformationEvent::Transformed { after, .. 
}\n            | TransformationEvent::Retrieved { after, .. } => after,\n        }\n    }\n\n    /// Returns the documents retrieved, if any\n    pub fn documents(&self) -> Option<&[Document]> {\n        match self {\n            TransformationEvent::Retrieved { documents, .. } => Some(documents),\n            TransformationEvent::Transformed { .. } => None,\n        }\n    }\n}\n\nimpl std::fmt::Debug for TransformationEvent {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            TransformationEvent::Transformed { before, after } => {\n                write!(\n                    f,\n                    \"Transformed: {} -> {}\",\n                    &debug_long_utf8(before, 100),\n                    &debug_long_utf8(after, 100)\n                )\n            }\n            TransformationEvent::Retrieved {\n                before,\n                after,\n                documents,\n            } => {\n                write!(\n                    f,\n                    \"Retrieved: {} -> {}\\nDocuments: {:?}\",\n                    &debug_long_utf8(before, 100),\n                    &debug_long_utf8(after, 100),\n                    documents.len()\n                )\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_query_initial_state() {\n        let query = Query::<states::Pending>::from(\"test query\");\n        assert_eq!(query.original(), \"test query\");\n        assert_eq!(query.current(), \"test query\");\n        assert_eq!(query.history().len(), 0);\n    }\n\n    #[test]\n    fn test_query_transformed_query() {\n        let mut query = Query::<states::Pending>::from(\"test query\");\n        query.transformed_query(\"new query\");\n        assert_eq!(query.current(), \"new query\");\n        assert_eq!(query.history().len(), 1);\n        if let TransformationEvent::Transformed { before, after } = &query.history()[0] {\n            
assert_eq!(before, \"test query\");\n            assert_eq!(after, \"new query\");\n        } else {\n            panic!(\"Unexpected event in history\");\n        }\n    }\n\n    #[test]\n    fn test_query_retrieved_documents() {\n        let query = Query::<states::Pending>::from(\"test query\");\n        let documents: Vec<Document> = vec![\"doc1\".into(), \"doc2\".into()];\n        let query = query.retrieved_documents(documents.clone());\n        assert_eq!(query.documents(), &documents);\n        assert_eq!(query.history().len(), 1);\n        if let TransformationEvent::Retrieved {\n            before,\n            after,\n            documents: retrieved_docs,\n        } = &query.history()[0]\n        {\n            assert_eq!(before, \"test query\");\n            assert_eq!(after, \"test query\");\n            assert_eq!(retrieved_docs, &documents);\n        } else {\n            panic!(\"Unexpected event in history\");\n        }\n    }\n\n    #[test]\n    fn test_query_transformed_response() {\n        let query = Query::<states::Pending>::from(\"test query\");\n        let documents = vec![\"doc1\".into(), \"doc2\".into()];\n        let mut query = query.retrieved_documents(documents.clone());\n        query.transformed_response(\"new response\");\n\n        assert_eq!(query.current(), \"new response\");\n        assert_eq!(query.history().len(), 2);\n        assert_eq!(query.documents(), &documents);\n        assert_eq!(query.original, \"test query\");\n        if let TransformationEvent::Transformed { before, after } = &query.history()[1] {\n            assert_eq!(before, \"test query\");\n            assert_eq!(after, \"new response\");\n        } else {\n            panic!(\"Unexpected event in history\");\n        }\n    }\n\n    #[test]\n    fn test_query_answered() {\n        let query = Query::<states::Pending>::from(\"test query\");\n        let documents = vec![\"doc1\".into(), \"doc2\".into()];\n        let query = 
query.retrieved_documents(documents);\n        let query = query.answered(\"the answer\");\n\n        assert_eq!(query.answer(), \"the answer\");\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/query_evaluation.rs",
    "content": "use crate::querying::{Query, states};\n\n/// Wraps a query for evaluation. Used by the [`crate::query_traits::EvaluateQuery`] trait.\npub enum QueryEvaluation {\n    /// Retrieve documents\n    RetrieveDocuments(Query<states::Retrieved>),\n    /// Answer the query\n    AnswerQuery(Query<states::Answered>),\n}\n\nimpl std::fmt::Debug for QueryEvaluation {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            QueryEvaluation::RetrieveDocuments(query) => {\n                write!(f, \"RetrieveDocuments({query:?})\")\n            }\n            QueryEvaluation::AnswerQuery(query) => write!(f, \"AnswerQuery({query:?})\"),\n        }\n    }\n}\n\nimpl From<Query<states::Retrieved>> for QueryEvaluation {\n    fn from(val: Query<states::Retrieved>) -> Self {\n        QueryEvaluation::RetrieveDocuments(val)\n    }\n}\n\nimpl From<Query<states::Answered>> for QueryEvaluation {\n    fn from(val: Query<states::Answered>) -> Self {\n        QueryEvaluation::AnswerQuery(val)\n    }\n}\n\n// TODO: must be a nicer way, maybe not needed and full encapsulation is better anyway\nimpl QueryEvaluation {\n    pub fn retrieve_documents_query(self) -> Option<Query<states::Retrieved>> {\n        if let QueryEvaluation::RetrieveDocuments(query) = self {\n            Some(query)\n        } else {\n            None\n        }\n    }\n\n    pub fn answer_query(self) -> Option<Query<states::Answered>> {\n        if let QueryEvaluation::AnswerQuery(query) = self {\n            Some(query)\n        } else {\n            None\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_from_retrieved() {\n        let query = Query::<states::Retrieved>::new(); // Assuming Query has a new() method\n        let evaluation = QueryEvaluation::from(query.clone());\n\n        match evaluation {\n            QueryEvaluation::RetrieveDocuments(q) => assert_eq!(q, query),\n            
QueryEvaluation::AnswerQuery(_) => panic!(\"Unexpected QueryEvaluation variant\"),\n        }\n    }\n\n    #[test]\n    fn test_from_answered() {\n        let query = Query::<states::Answered>::new(); // Assuming Query has a new() method\n        let evaluation = QueryEvaluation::from(query.clone());\n\n        match evaluation {\n            QueryEvaluation::AnswerQuery(q) => assert_eq!(q, query),\n            QueryEvaluation::RetrieveDocuments(_) => panic!(\"Unexpected QueryEvaluation variant\"),\n        }\n    }\n\n    #[test]\n    fn test_retrieve_documents_query() {\n        let query = Query::<states::Retrieved>::new(); // Assuming Query has a new() method\n        let evaluation = QueryEvaluation::RetrieveDocuments(query.clone());\n\n        match evaluation.retrieve_documents_query() {\n            Some(q) => assert_eq!(q, query),\n            None => panic!(\"Expected a query, got None\"),\n        }\n    }\n\n    #[test]\n    fn test_answer_query() {\n        let query = Query::<states::Answered>::new(); // Assuming Query has a new() method\n        let evaluation = QueryEvaluation::AnswerQuery(query.clone());\n\n        match evaluation.answer_query() {\n            Some(q) => assert_eq!(q, query),\n            None => panic!(\"Expected a query, got None\"),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/query_stream.rs",
    "content": "//! Internally used by a query pipeline\n//!\n//! Has a sender and receiver to initialize the stream\nuse anyhow::Result;\nuse std::pin::Pin;\nuse tokio::sync::mpsc::Sender;\nuse tokio_stream::wrappers::ReceiverStream;\n\nuse futures_util::stream::Stream;\npub use futures_util::{StreamExt, TryStreamExt};\n\nuse crate::{query::QueryState, querying::Query};\n\n/// Internally used by a query pipeline\n///\n/// Has a sender and receiver to initialize the stream\n#[pin_project::pin_project]\npub struct QueryStream<'stream, STATE: 'stream + QueryState> {\n    #[pin]\n    pub(crate) inner: Pin<Box<dyn Stream<Item = Result<Query<STATE>>> + Send + 'stream>>,\n\n    #[pin]\n    pub sender: Option<Sender<Result<Query<STATE>>>>,\n}\n\nimpl<'stream, STATE: QueryState + 'stream> Default for QueryStream<'stream, STATE> {\n    fn default() -> Self {\n        let (sender, receiver) = tokio::sync::mpsc::channel(1000);\n\n        Self {\n            inner: ReceiverStream::new(receiver).boxed(),\n            sender: Some(sender),\n        }\n    }\n}\n\nimpl<STATE: QueryState> Stream for QueryStream<'_, STATE> {\n    type Item = Result<Query<STATE>>;\n\n    fn poll_next(\n        self: Pin<&mut Self>,\n        cx: &mut std::task::Context<'_>,\n    ) -> std::task::Poll<Option<Self::Item>> {\n        let this = self.project();\n        this.inner.poll_next(cx)\n    }\n}\n\nimpl<STATE: QueryState> From<Pin<Box<dyn Stream<Item = Result<Query<STATE>>> + Send>>>\n    for QueryStream<'_, STATE>\n{\n    fn from(val: Pin<Box<dyn Stream<Item = Result<Query<STATE>>> + Send>>) -> Self {\n        QueryStream {\n            inner: val,\n            sender: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/query_traits.rs",
    "content": "use std::sync::Arc;\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse dyn_clone::DynClone;\n\nuse crate::{\n    query::{\n        Query,\n        states::{self, Retrieved},\n    },\n    querying::QueryEvaluation,\n};\n\n#[cfg(feature = \"test-utils\")]\nuse mockall::{mock, predicate::str};\n\n/// Can transform queries before retrieval\n#[async_trait]\npub trait TransformQuery: Send + Sync + DynClone {\n    async fn transform_query(\n        &self,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Pending>>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(TransformQuery);\n\n#[cfg(feature = \"test-utils\")]\nmock! {\n    #[derive(Debug)]\n    pub TransformQuery {}\n\n    #[async_trait]\n    impl TransformQuery for TransformQuery {\n        async fn transform_query(\n            &self,\n            query: Query<states::Pending>,\n        ) -> Result<Query<states::Pending>>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for TransformQuery {\n        fn clone(&self) -> Self;\n    }\n}\n\n#[async_trait]\nimpl<F> TransformQuery for F\nwhere\n    F: Fn(Query<states::Pending>) -> Result<Query<states::Pending>> + Send + Sync + Clone,\n{\n    async fn transform_query(\n        &self,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Pending>> {\n        (self)(query)\n    }\n}\n\n#[async_trait]\nimpl TransformQuery for Box<dyn TransformQuery> {\n    async fn transform_query(\n        &self,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Pending>> {\n        self.as_ref().transform_query(query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl TransformQuery for Arc<dyn TransformQuery> {\n    async fn transform_query(\n        &self,\n        query: 
Query<states::Pending>,\n    ) -> Result<Query<states::Pending>> {\n        self.as_ref().transform_query(query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n/// A search strategy for the query pipeline\npub trait SearchStrategy: Clone + Send + Sync + Default {}\n\n/// Can retrieve documents given a `SearchStrategy`\n#[async_trait]\npub trait Retrieve<S: SearchStrategy>: Send + Sync + DynClone {\n    async fn retrieve(\n        &self,\n        search_strategy: &S,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(<S> Retrieve<S>);\n\n#[async_trait]\nimpl<S: SearchStrategy> Retrieve<S> for Box<dyn Retrieve<S>> {\n    async fn retrieve(\n        &self,\n        search_strategy: &S,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        self.as_ref().retrieve(search_strategy, query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<S: SearchStrategy> Retrieve<S> for Arc<dyn Retrieve<S>> {\n    async fn retrieve(\n        &self,\n        search_strategy: &S,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        self.as_ref().retrieve(search_strategy, query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl<S, F> Retrieve<S> for F\nwhere\n    S: SearchStrategy,\n    F: Fn(&S, Query<states::Pending>) -> Result<Query<states::Retrieved>> + Send + Sync + Clone,\n{\n    async fn retrieve(\n        &self,\n        search_strategy: &S,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        (self)(search_strategy, query)\n    }\n}\n\n/// Can transform a response 
after retrieval\n#[async_trait]\npub trait TransformResponse: Send + Sync + DynClone {\n    async fn transform_response(&self, query: Query<Retrieved>)\n    -> Result<Query<states::Retrieved>>;\n\n    fn name(&self) -> &'static str {\n        let name = std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(TransformResponse);\n\n#[cfg(feature = \"test-utils\")]\nmock! {\n    #[derive(Debug)]\n    pub TransformResponse {}\n\n    #[async_trait]\n    impl TransformResponse for TransformResponse {\n        async fn transform_response(&self, query: Query<Retrieved>)\n            -> Result<Query<states::Retrieved>>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for TransformResponse {\n        fn clone(&self) -> Self;\n    }\n}\n#[async_trait]\nimpl<F> TransformResponse for F\nwhere\n    F: Fn(Query<Retrieved>) -> Result<Query<Retrieved>> + Send + Sync + Clone,\n{\n    async fn transform_response(&self, query: Query<Retrieved>) -> Result<Query<Retrieved>> {\n        (self)(query)\n    }\n}\n\n#[async_trait]\nimpl TransformResponse for Box<dyn TransformResponse> {\n    async fn transform_response(&self, query: Query<Retrieved>) -> Result<Query<Retrieved>> {\n        self.as_ref().transform_response(query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl TransformResponse for Arc<dyn TransformResponse> {\n    async fn transform_response(&self, query: Query<Retrieved>) -> Result<Query<Retrieved>> {\n        self.as_ref().transform_response(query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n/// Can answer the original query\n#[async_trait]\npub trait Answer: Send + Sync + DynClone {\n    async fn answer(&self, query: Query<states::Retrieved>) -> Result<Query<states::Answered>>;\n\n    fn name(&self) -> &'static str {\n        let name = 
std::any::type_name::<Self>();\n        name.split(\"::\").last().unwrap_or(name)\n    }\n}\n\ndyn_clone::clone_trait_object!(Answer);\n\n#[cfg(feature = \"test-utils\")]\nmock! {\n    #[derive(Debug)]\n    pub Answer {}\n\n    #[async_trait]\n    impl Answer for Answer {\n        async fn answer(&self, query: Query<states::Retrieved>) -> Result<Query<states::Answered>>;\n        fn name(&self) -> &'static str;\n    }\n\n    impl Clone for Answer {\n        fn clone(&self) -> Self;\n    }\n}\n#[async_trait]\nimpl<F> Answer for F\nwhere\n    F: Fn(Query<Retrieved>) -> Result<Query<states::Answered>> + Send + Sync + Clone,\n{\n    async fn answer(&self, query: Query<Retrieved>) -> Result<Query<states::Answered>> {\n        (self)(query)\n    }\n}\n\n#[async_trait]\nimpl Answer for Box<dyn Answer> {\n    async fn answer(&self, query: Query<Retrieved>) -> Result<Query<states::Answered>> {\n        self.as_ref().answer(query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n#[async_trait]\nimpl Answer for Arc<dyn Answer> {\n    async fn answer(&self, query: Query<Retrieved>) -> Result<Query<states::Answered>> {\n        self.as_ref().answer(query).await\n    }\n\n    fn name(&self) -> &'static str {\n        self.as_ref().name()\n    }\n}\n\n/// Evaluates a query\n///\n/// An evaluator needs to be able to respond to each step in the query pipeline\n#[async_trait]\npub trait EvaluateQuery: Send + Sync + DynClone {\n    async fn evaluate(&self, evaluation: QueryEvaluation) -> Result<()>;\n}\n\ndyn_clone::clone_trait_object!(EvaluateQuery);\n\n#[cfg(feature = \"test-utils\")]\nmock! 
{\n    #[derive(Debug)]\n    pub EvaluateQuery {}\n\n    #[async_trait]\n    impl EvaluateQuery for EvaluateQuery {\n        async fn evaluate(&self, evaluation: QueryEvaluation) -> Result<()>;\n    }\n\n    impl Clone for EvaluateQuery {\n        fn clone(&self) -> Self;\n    }\n}\n#[async_trait]\nimpl EvaluateQuery for Box<dyn EvaluateQuery> {\n    async fn evaluate(&self, evaluation: QueryEvaluation) -> Result<()> {\n        self.as_ref().evaluate(evaluation).await\n    }\n}\n\n#[async_trait]\nimpl EvaluateQuery for Arc<dyn EvaluateQuery> {\n    async fn evaluate(&self, evaluation: QueryEvaluation) -> Result<()> {\n        self.as_ref().evaluate(evaluation).await\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/search_strategies/custom_strategy.rs",
    "content": "//! Implements a flexible vector search strategy framework using closure-based configuration.\n//! Supports both synchronous and asynchronous query generation for different retrieval backends.\n\nuse crate::querying::{self, Query, states};\nuse anyhow::{Result, anyhow};\nuse std::future::Future;\nuse std::marker::PhantomData;\nuse std::pin::Pin;\nuse std::sync::Arc;\n\n// TODO: Should be possible to remove the static bounds and allow Q as borrowed with some fu\n\n// Function type for generating retriever-specific queries\ntype QueryGenerator<Q> = Arc<dyn Fn(&Query<states::Pending>) -> Result<Q> + Send + Sync>;\n\n// Function type for async query generation\ntype AsyncQueryGenerator<Q> = Arc<\n    dyn Fn(&Query<states::Pending>) -> Pin<Box<dyn Future<Output = Result<Q>> + Send>>\n        + Send\n        + Sync,\n>;\n\n/// Implements the strategy pattern for vector similarity search, allowing retrieval backends\n/// to define custom query generation logic through closures.\npub struct CustomStrategy<Q> {\n    query: Option<QueryGenerator<Q>>,\n    async_query: Option<AsyncQueryGenerator<Q>>,\n    _marker: PhantomData<Q>,\n}\n\nimpl<Q: Send + Sync> querying::SearchStrategy for CustomStrategy<Q> {}\n\nimpl<Q> Default for CustomStrategy<Q> {\n    fn default() -> Self {\n        Self {\n            query: None,\n            async_query: None,\n            _marker: PhantomData,\n        }\n    }\n}\n\nimpl<Q> Clone for CustomStrategy<Q> {\n    fn clone(&self) -> Self {\n        Self {\n            query: self.query.clone(),\n            async_query: self.async_query.clone(),\n            _marker: PhantomData,\n        }\n    }\n}\n\nimpl<Q: Send + Sync> CustomStrategy<Q> {\n    /// Creates a new strategy with a synchronous query generator.\n    pub fn from_query(\n        query: impl Fn(&Query<states::Pending>) -> Result<Q> + Send + Sync + 'static,\n    ) -> Self {\n        Self {\n            query: Some(Arc::new(query)),\n            async_query: None,\n 
           _marker: PhantomData,\n        }\n    }\n\n    /// Creates a new strategy with an asynchronous query generator.\n    pub fn from_async_query<F>(\n        query: impl Fn(&Query<states::Pending>) -> F + Send + Sync + 'static,\n    ) -> Self\n    where\n        F: Future<Output = Result<Q>> + Send + 'static,\n    {\n        Self {\n            query: None,\n            async_query: Some(Arc::new(move |q| Box::pin(query(q)))),\n            _marker: PhantomData,\n        }\n    }\n\n    /// Generates a query using either the sync or async generator.\n    /// Returns error if no query generator is set.\n    ///\n    /// # Errors\n    /// Returns an error if:\n    /// * No query generator has been configured\n    /// * The configured query generator fails during query generation\n    pub async fn build_query(&self, query_node: &Query<states::Pending>) -> Result<Q> {\n        match (&self.query, &self.async_query) {\n            (Some(query_fn), _) => query_fn(query_node),\n            (_, Some(async_fn)) => async_fn(query_node).await,\n            _ => Err(anyhow!(\"No query function has been set.\")),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/search_strategies/hybrid_search.rs",
    "content": "use derive_builder::Builder;\n\nuse crate::{indexing::EmbeddedField, querying};\n\nuse super::{DEFAULT_TOP_K, DEFAULT_TOP_N, SearchFilter};\n\n/// A hybrid search strategy that combines a similarity search with a\n/// keyword search / sparse search.\n///\n/// Defaults to a a maximum of 10 documents and `EmbeddedField::Combined` for the field(s).\n#[derive(Debug, Clone, Builder)]\n#[builder(setter(into))]\npub struct HybridSearch<FILTER: SearchFilter = ()> {\n    /// Maximum number of documents to return\n    #[builder(default)]\n    top_k: u64,\n    /// Maximum number of documents to return per query\n    #[builder(default)]\n    top_n: u64,\n\n    /// The field to use for the dense vector\n    #[builder(default)]\n    dense_vector_field: EmbeddedField,\n\n    /// The field to use for the sparse vector\n    /// TODO: I.e. lancedb does not use sparse embeddings for hybrid search\n    #[builder(default)]\n    sparse_vector_field: EmbeddedField,\n\n    #[builder(default)]\n    filter: Option<FILTER>,\n}\n\nimpl<FILTER: SearchFilter> querying::SearchStrategy for HybridSearch<FILTER> {}\n\nimpl<FILTER: SearchFilter> Default for HybridSearch<FILTER> {\n    fn default() -> Self {\n        Self {\n            top_k: DEFAULT_TOP_K,\n            top_n: DEFAULT_TOP_N,\n            dense_vector_field: EmbeddedField::Combined,\n            sparse_vector_field: EmbeddedField::Combined,\n            filter: None,\n        }\n    }\n}\n\nimpl<FILTER: SearchFilter> HybridSearch<FILTER> {\n    /// Creates a new hybrid search strategy that uses the provided filter\n    pub fn from_filter(filter: FILTER) -> Self {\n        Self {\n            filter: Some(filter),\n            ..Default::default()\n        }\n    }\n\n    pub fn with_filter<NEWFILTER: SearchFilter>(\n        self,\n        filter: NEWFILTER,\n    ) -> HybridSearch<NEWFILTER> {\n        HybridSearch {\n            top_k: self.top_k,\n            top_n: self.top_n,\n            dense_vector_field: 
self.dense_vector_field,\n            sparse_vector_field: self.sparse_vector_field,\n            filter: Some(filter),\n        }\n    }\n\n    /// Sets the maximum amount of total documents retrieved\n    pub fn with_top_k(&mut self, top_k: u64) -> &mut Self {\n        self.top_k = top_k;\n        self\n    }\n    /// Returns the maximum amount of total documents to be retrieved\n    pub fn top_k(&self) -> u64 {\n        self.top_k\n    }\n    /// Sets the maximum amount of documents to be retrieved\n    /// per individual query\n    pub fn with_top_n(&mut self, top_n: u64) -> &mut Self {\n        self.top_n = top_n;\n        self\n    }\n    /// Returns the maximum amount of documents per query\n    pub fn top_n(&self) -> u64 {\n        self.top_n\n    }\n    /// Sets the vector field for the dense vector\n    ///\n    /// Defaults to `EmbeddedField::Combined`\n    pub fn with_dense_vector_field(\n        &mut self,\n        dense_vector_field: impl Into<EmbeddedField>,\n    ) -> &mut Self {\n        self.dense_vector_field = dense_vector_field.into();\n        self\n    }\n\n    /// Returns the field for the dense vector\n    pub fn dense_vector_field(&self) -> &EmbeddedField {\n        &self.dense_vector_field\n    }\n    /// Sets the vector field for the sparse vector (if applicable)\n    ///\n    /// Defaults to `EmbeddedField::Combined`\n    pub fn with_sparse_vector_field(\n        &mut self,\n        sparse_vector_field: impl Into<EmbeddedField>,\n    ) -> &mut Self {\n        self.sparse_vector_field = sparse_vector_field.into();\n        self\n    }\n\n    /// Returns the field for the dense vector\n    pub fn sparse_vector_field(&self) -> &EmbeddedField {\n        &self.sparse_vector_field\n    }\n\n    pub fn filter(&self) -> Option<&FILTER> {\n        self.filter.as_ref()\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/search_strategies/mod.rs",
    "content": "//! Search strategies provide a generic way for Retrievers to implement their\n//! search in various ways.\n//!\n//! The strategy is also yielded to the Retriever and can contain addition configuration\n\nmod custom_strategy;\nmod hybrid_search;\nmod similarity_single_embedding;\n\npub(crate) const DEFAULT_TOP_K: u64 = 10;\npub(crate) const DEFAULT_TOP_N: u64 = 10;\n\npub use custom_strategy::*;\npub use hybrid_search::*;\npub use similarity_single_embedding::*;\n\npub trait SearchFilter: Clone + Sync + Send {}\n\n#[cfg(feature = \"qdrant\")]\nimpl SearchFilter for qdrant_client::qdrant::Filter {}\n\n// When no filters are applied\nimpl SearchFilter for () {}\n// Lancedb uses a string filter\nimpl SearchFilter for String {}\n"
  },
  {
    "path": "swiftide-core/src/search_strategies/similarity_single_embedding.rs",
    "content": "use crate::querying;\n\nuse super::{DEFAULT_TOP_K, SearchFilter};\n\n/// A simple, single vector similarity search where it takes the embedding on the current query\n/// and returns `top_k` documents.\n///\n/// Can optionally be used with a filter.\n#[derive(Debug, Clone)]\npub struct SimilaritySingleEmbedding<FILTER: SearchFilter = ()> {\n    /// Maximum number of documents to return\n    top_k: u64,\n\n    filter: Option<FILTER>,\n}\n\nimpl<FILTER: SearchFilter> querying::SearchStrategy for SimilaritySingleEmbedding<FILTER> {}\n\nimpl<FILTER: SearchFilter> Default for SimilaritySingleEmbedding<FILTER> {\n    fn default() -> Self {\n        Self {\n            top_k: DEFAULT_TOP_K,\n            filter: None,\n        }\n    }\n}\n\nimpl SimilaritySingleEmbedding<()> {\n    /// Set an optional filter to be used in the query\n    pub fn into_concrete_filter<FILTER: SearchFilter>(&self) -> SimilaritySingleEmbedding<FILTER> {\n        SimilaritySingleEmbedding::<FILTER> {\n            top_k: self.top_k,\n            filter: None,\n        }\n    }\n}\n\nimpl<FILTER: SearchFilter> SimilaritySingleEmbedding<FILTER> {\n    pub fn from_filter(filter: FILTER) -> Self {\n        Self {\n            filter: Some(filter),\n            ..Default::default()\n        }\n    }\n\n    /// Set the maximum amount of documents to be returned\n    pub fn with_top_k(&mut self, top_k: u64) -> &mut Self {\n        self.top_k = top_k;\n\n        self\n    }\n\n    /// Returns the maximum of documents to be returned\n    pub fn top_k(&self) -> u64 {\n        self.top_k\n    }\n\n    /// Set an optional filter to be used in the query\n    pub fn with_filter<NEWFILTER: SearchFilter>(\n        self,\n        filter: NEWFILTER,\n    ) -> SimilaritySingleEmbedding<NEWFILTER> {\n        SimilaritySingleEmbedding::<NEWFILTER> {\n            top_k: self.top_k,\n            filter: Some(filter),\n        }\n    }\n\n    pub fn filter(&self) -> &Option<FILTER> {\n        
&self.filter\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/statistics.rs",
    "content": "//! Pipeline statistics collection\n//!\n//! This module provides comprehensive monitoring and observability for pipelines,\n//! including node counts, token usage, and timing information.\n//!\n//! # Example\n//!\n//! ```rust,ignore\n//! use swiftide::indexing::Pipeline;\n//!\n//! let pipeline = Pipeline::from_loader(loader)\n//!     .then(transformer)\n//!     .store(storage);\n//!\n//! // Run pipeline\n//! pipeline.run().await?;\n//!\n//! // Get statistics\n//! let stats = pipeline.stats();\n//! println!(\"Processed {} nodes in {:?}\", stats.nodes_processed, stats.duration());\n//! ```\n\nuse std::{\n    collections::HashMap,\n    sync::{\n        Mutex, MutexGuard,\n        atomic::{AtomicU64, Ordering},\n    },\n    time::{Duration, Instant},\n};\n\nconst TWO_POW_32_F64: f64 = 4_294_967_296.0;\n\nfn lock_recover<T>(mutex: &Mutex<T>) -> MutexGuard<'_, T> {\n    mutex\n        .lock()\n        .unwrap_or_else(std::sync::PoisonError::into_inner)\n}\n\nfn u64_to_f64(value: u64) -> f64 {\n    let upper = u32::try_from(value >> 32).expect(\"upper 32 bits always fit in u32\");\n    let lower =\n        u32::try_from(value & u64::from(u32::MAX)).expect(\"lower 32 bits always fit in u32\");\n\n    f64::from(upper) * TWO_POW_32_F64 + f64::from(lower)\n}\n\n/// Statistics for a single model's usage\n#[derive(Debug, Clone, Default, PartialEq)]\npub struct ModelUsage {\n    /// Number of prompt tokens used\n    pub prompt_tokens: u64,\n    /// Number of completion tokens used\n    pub completion_tokens: u64,\n    /// Total tokens used (prompt + completion)\n    pub total_tokens: u64,\n    /// Number of requests made to this model\n    pub request_count: u64,\n}\n\nimpl ModelUsage {\n    /// Creates a new `ModelUsage` with zero counts\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Records token usage for a single request\n    pub fn record(&mut self, prompt_tokens: u64, completion_tokens: u64) {\n        
self.prompt_tokens += prompt_tokens;\n        self.completion_tokens += completion_tokens;\n        self.total_tokens += prompt_tokens + completion_tokens;\n        self.request_count += 1;\n    }\n}\n\n/// A snapshot of pipeline statistics at a specific point in time\n///\n/// This struct contains immutable statistics collected during pipeline execution.\n#[derive(Debug, Clone, Default, PartialEq)]\npub struct PipelineStats {\n    /// Total number of nodes processed\n    pub nodes_processed: u64,\n    /// Total number of nodes that resulted in error\n    pub nodes_failed: u64,\n    /// Total number of nodes persisted to storage\n    pub nodes_stored: u64,\n    /// Total number of transformations applied\n    pub transformations_applied: u64,\n    /// Token usage per model\n    pub token_usage: HashMap<String, ModelUsage>,\n    /// When the pipeline started\n    started_at: Option<Instant>,\n    /// When the pipeline completed\n    completed_at: Option<Instant>,\n}\n\nimpl PipelineStats {\n    /// Creates a new empty `PipelineStats`\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Returns the duration of the pipeline execution\n    ///\n    /// If the pipeline has not started, returns `None`.\n    /// If the pipeline has started but not completed, returns the elapsed time since start.\n    #[must_use]\n    pub fn duration(&self) -> Option<Duration> {\n        match (self.started_at, self.completed_at) {\n            (Some(start), Some(end)) => Some(end.duration_since(start)),\n            (Some(start), None) => Some(start.elapsed()),\n            _ => None,\n        }\n    }\n\n    /// Calculates nodes processed per second\n    ///\n    /// Returns `None` if the pipeline hasn't started or if no nodes have been processed.\n    #[must_use]\n    pub fn nodes_per_second(&self) -> Option<f64> {\n        let duration = self.duration()?;\n        if duration.as_secs_f64() == 0.0 || self.nodes_processed == 0 {\n            return 
None;\n        }\n        Some(u64_to_f64(self.nodes_processed) / duration.as_secs_f64())\n    }\n\n    /// Returns the total number of tokens used across all models\n    #[must_use]\n    pub fn total_tokens(&self) -> u64 {\n        self.token_usage.values().map(|u| u.total_tokens).sum()\n    }\n\n    /// Returns the total number of LLM requests made\n    #[must_use]\n    pub fn total_requests(&self) -> u64 {\n        self.token_usage.values().map(|u| u.request_count).sum()\n    }\n\n    /// Returns the total prompt tokens across all models\n    #[must_use]\n    pub fn total_prompt_tokens(&self) -> u64 {\n        self.token_usage.values().map(|u| u.prompt_tokens).sum()\n    }\n\n    /// Returns the total completion tokens across all models\n    #[must_use]\n    pub fn total_completion_tokens(&self) -> u64 {\n        self.token_usage.values().map(|u| u.completion_tokens).sum()\n    }\n}\n\n/// Thread-safe statistics collector for pipeline execution\n///\n/// This collector uses atomic counters for lock-free updates and can be safely\n/// shared across multiple threads during pipeline processing.\n#[derive(Debug)]\npub struct StatsCollector {\n    nodes_processed: AtomicU64,\n    nodes_failed: AtomicU64,\n    nodes_stored: AtomicU64,\n    transformations_applied: AtomicU64,\n    token_usage: Mutex<HashMap<String, ModelUsage>>,\n    started_at: Mutex<Option<Instant>>,\n    completed_at: Mutex<Option<Instant>>,\n}\n\nimpl Default for StatsCollector {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl StatsCollector {\n    /// Creates a new `StatsCollector`\n    #[must_use]\n    pub fn new() -> Self {\n        Self {\n            nodes_processed: AtomicU64::new(0),\n            nodes_failed: AtomicU64::new(0),\n            nodes_stored: AtomicU64::new(0),\n            transformations_applied: AtomicU64::new(0),\n            token_usage: Mutex::new(HashMap::new()),\n            started_at: Mutex::new(None),\n            completed_at: Mutex::new(None),\n   
     }\n    }\n\n    /// Marks the pipeline as started\n    pub fn start(&self) {\n        let mut started = lock_recover(&self.started_at);\n        *started = Some(Instant::now());\n    }\n\n    /// Marks the pipeline as completed\n    pub fn complete(&self) {\n        let mut completed = lock_recover(&self.completed_at);\n        *completed = Some(Instant::now());\n    }\n\n    /// Increments the count of processed nodes\n    pub fn increment_nodes_processed(&self, count: u64) {\n        self.nodes_processed.fetch_add(count, Ordering::Relaxed);\n    }\n\n    /// Increments the count of failed nodes\n    pub fn increment_nodes_failed(&self, count: u64) {\n        self.nodes_failed.fetch_add(count, Ordering::Relaxed);\n    }\n\n    /// Increments the count of stored nodes\n    pub fn increment_nodes_stored(&self, count: u64) {\n        self.nodes_stored.fetch_add(count, Ordering::Relaxed);\n    }\n\n    /// Increments the count of applied transformations\n    pub fn increment_transformations(&self, count: u64) {\n        self.transformations_applied\n            .fetch_add(count, Ordering::Relaxed);\n    }\n\n    /// Records token usage for a specific model\n    ///\n    /// This method is compatible with OpenTelemetry LLM specification.\n    ///\n    /// # Arguments\n    ///\n    /// * `model` - The name/identifier of the model\n    /// * `prompt_tokens` - Number of tokens in the prompt\n    /// * `completion_tokens` - Number of tokens in the completion\n    pub fn record_token_usage(\n        &self,\n        model: impl AsRef<str>,\n        prompt_tokens: u64,\n        completion_tokens: u64,\n    ) {\n        let mut usage = lock_recover(&self.token_usage);\n        let model_usage = usage.entry(model.as_ref().to_string()).or_default();\n        model_usage.record(prompt_tokens, completion_tokens);\n    }\n\n    /// Returns a snapshot of the current statistics\n    #[must_use]\n    pub fn get_stats(&self) -> PipelineStats {\n        PipelineStats {\n            
nodes_processed: self.nodes_processed.load(Ordering::Relaxed),\n            nodes_failed: self.nodes_failed.load(Ordering::Relaxed),\n            nodes_stored: self.nodes_stored.load(Ordering::Relaxed),\n            transformations_applied: self.transformations_applied.load(Ordering::Relaxed),\n            token_usage: lock_recover(&self.token_usage).clone(),\n            started_at: *lock_recover(&self.started_at),\n            completed_at: *lock_recover(&self.completed_at),\n        }\n    }\n}\n\nimpl Clone for StatsCollector {\n    fn clone(&self) -> Self {\n        Self {\n            nodes_processed: AtomicU64::new(self.nodes_processed.load(Ordering::Relaxed)),\n            nodes_failed: AtomicU64::new(self.nodes_failed.load(Ordering::Relaxed)),\n            nodes_stored: AtomicU64::new(self.nodes_stored.load(Ordering::Relaxed)),\n            transformations_applied: AtomicU64::new(\n                self.transformations_applied.load(Ordering::Relaxed),\n            ),\n            token_usage: Mutex::new(lock_recover(&self.token_usage).clone()),\n            started_at: Mutex::new(*lock_recover(&self.started_at)),\n            completed_at: Mutex::new(*lock_recover(&self.completed_at)),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_stats_collector() {\n        let collector = StatsCollector::new();\n\n        collector.start();\n\n        collector.increment_nodes_processed(10);\n        collector.increment_nodes_failed(2);\n        collector.increment_nodes_stored(8);\n        collector.increment_transformations(15);\n\n        collector.complete();\n\n        let stats = collector.get_stats();\n\n        assert_eq!(stats.nodes_processed, 10);\n        assert_eq!(stats.nodes_failed, 2);\n        assert_eq!(stats.nodes_stored, 8);\n        assert_eq!(stats.transformations_applied, 15);\n        assert!(stats.duration().is_some());\n        assert!(stats.nodes_per_second().is_some());\n    }\n\n    #[test]\n  
  fn test_model_usage() {\n        let mut usage = ModelUsage::new();\n\n        usage.record(100, 50);\n        usage.record(200, 100);\n\n        assert_eq!(usage.prompt_tokens, 300);\n        assert_eq!(usage.completion_tokens, 150);\n        assert_eq!(usage.total_tokens, 450);\n        assert_eq!(usage.request_count, 2);\n    }\n\n    #[test]\n    fn test_record_token_usage() {\n        let collector = StatsCollector::new();\n\n        collector.record_token_usage(\"gpt-4\", 100, 50);\n        collector.record_token_usage(\"gpt-4\", 200, 100);\n        collector.record_token_usage(\"gpt-3.5\", 50, 25);\n\n        let stats = collector.get_stats();\n\n        assert_eq!(stats.token_usage.len(), 2);\n\n        let gpt4_usage = stats.token_usage.get(\"gpt-4\").unwrap();\n        assert_eq!(gpt4_usage.prompt_tokens, 300);\n        assert_eq!(gpt4_usage.completion_tokens, 150);\n        assert_eq!(gpt4_usage.request_count, 2);\n\n        assert_eq!(stats.total_tokens(), 525);\n        assert_eq!(stats.total_requests(), 3);\n    }\n\n    #[test]\n    fn test_empty_stats() {\n        let stats = PipelineStats::new();\n\n        assert_eq!(stats.nodes_processed, 0);\n        assert_eq!(stats.nodes_failed, 0);\n        assert_eq!(stats.total_tokens(), 0);\n        assert!(stats.duration().is_none());\n        assert!(stats.nodes_per_second().is_none());\n    }\n\n    #[test]\n    fn test_stats_collector_clone() {\n        let collector = StatsCollector::new();\n        collector.increment_nodes_processed(5);\n        collector.record_token_usage(\"model-1\", 10, 5);\n\n        let cloned = collector.clone();\n\n        // Modify original\n        collector.increment_nodes_processed(3);\n\n        // Cloned should have original value\n        let cloned_stats = cloned.get_stats();\n        assert_eq!(cloned_stats.nodes_processed, 5);\n\n        // Original should have updated value\n        let original_stats = collector.get_stats();\n        
assert_eq!(original_stats.nodes_processed, 8);\n    }\n\n    #[test]\n    fn test_pipeline_stats_duration_while_running() {\n        let collector = StatsCollector::new();\n        collector.start();\n\n        let stats = collector.get_stats();\n\n        // Should return Some while running\n        assert!(stats.duration().is_some());\n        assert_eq!(stats.completed_at, None);\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/stream_backoff.rs",
    "content": "// Credits go to https://github.com/ihrwein/backoff/pull/50\nuse std::{pin::Pin, task::Poll, time::Duration};\n\nuse backoff::{backoff::Backoff, future::Sleeper};\nuse futures_util::{Stream, TryStream};\nuse pin_project::pin_project;\n\n// /// Applies a [`Backoff`] policy to a [`Stream`]\n// ///\n// /// After any [`Err`] is emitted, the stream is paused for [`Backoff::next_backoff`]. The\n// /// [`Backoff`] is [`reset`](`Backoff::reset`) on any [`Ok`] value.\n// ///\n// /// If [`Backoff::next_backoff`] returns [`None`] then the backing stream is given up on, and\n// closed. pub fn backoff<S: TryStream, B: Backoff>(\n//     stream: S,\n//     backoff: B,\n// ) -> StreamBackoff<S, B, impl Sleeper> {\n//     StreamBackoff::new(stream, backoff, TokioSleeper)\n// }\n\npub(crate) struct TokioSleeper;\nimpl Sleeper for TokioSleeper {\n    type Sleep = ::tokio::time::Sleep;\n    fn sleep(&self, dur: Duration) -> Self::Sleep {\n        ::tokio::time::sleep(dur)\n    }\n}\n\n/// See [`backoff`]\n#[pin_project]\npub struct StreamBackoff<S, B, Sl: Sleeper> {\n    #[pin]\n    stream: S,\n    backoff: B,\n    sleeper: Sl,\n    #[pin]\n    state: State<Sl>,\n}\n\n#[pin_project(project = StateProj)]\nenum State<Sl: Sleeper> {\n    BackingOff {\n        #[pin]\n        backoff_sleep: Sl::Sleep,\n    },\n    GivenUp,\n    Awake,\n}\n\nimpl<S: TryStream, B: Backoff, Sl: Sleeper> StreamBackoff<S, B, Sl> {\n    pub fn new(stream: S, backoff: B, sleeper: Sl) -> Self {\n        Self {\n            stream,\n            backoff,\n            sleeper,\n            state: State::Awake,\n        }\n    }\n}\n\nimpl<S: TryStream, B: Backoff, Sl: Sleeper> Stream for StreamBackoff<S, B, Sl>\nwhere\n    Sl::Sleep: Future,\n{\n    type Item = Result<S::Ok, S::Error>;\n\n    fn poll_next(\n        self: Pin<&mut Self>,\n        cx: &mut std::task::Context<'_>,\n    ) -> Poll<Option<Self::Item>> {\n        let mut this = self.project();\n        match this.state.as_mut().project() 
{\n            StateProj::BackingOff { mut backoff_sleep } => match backoff_sleep.as_mut().poll(cx) {\n                Poll::Ready(()) => {\n                    // tracing::debug!(deadline = ?backoff_sleep.deadline(), \"Backoff complete,\n                    // waking up\");\n                    this.state.set(State::Awake);\n                }\n                Poll::Pending => {\n                    // let deadline = backoff_sleep.deadline();\n                    // tracing::trace!(\n                    //     ?deadline,\n                    //     remaining_duration = ?deadline.saturating_duration_since(Instant::now()),\n                    //     \"Still waiting for backoff sleep to complete\"\n                    // );\n                    return Poll::Pending;\n                }\n            },\n            StateProj::GivenUp => {\n                // tracing::debug!(\"Backoff has given up, stream is closed\");\n                return Poll::Ready(None);\n            }\n            StateProj::Awake => {}\n        }\n\n        let next_item = this.stream.try_poll_next(cx);\n        match &next_item {\n            Poll::Ready(Some(Err(_))) => {\n                if let Some(backoff_duration) = this.backoff.next_backoff() {\n                    let backoff_sleep = this.sleeper.sleep(backoff_duration);\n                    // tracing::debug!(\n                    //     deadline = ?backoff_sleep.deadline(),\n                    //     duration = ?backoff_duration,\n                    //     \"Error received, backing off\"\n                    // );\n                    this.state.set(State::BackingOff { backoff_sleep });\n                } else {\n                    // tracing::debug!(\"Error received, giving up\");\n                    this.state.set(State::GivenUp);\n                }\n            }\n            Poll::Ready(_) => {\n                // tracing::trace!(\"Non-error received, resetting backoff\");\n                this.backoff.reset();\n            
}\n            Poll::Pending => {}\n        }\n        next_item\n    }\n}\n\n// Tokio clock is required to be able to freeze time during marble tests\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    use futures_util::{StreamExt, pin_mut, poll, stream};\n    use std::{task::Poll, time::Duration};\n    use tokio::{self, sync::mpsc};\n\n    #[tokio::test]\n    async fn stream_should_back_off() {\n        tokio::time::pause();\n        let tick = Duration::from_secs(1);\n        let rx = stream::iter([Ok(0), Ok(1), Err(2), Ok(3), Ok(4)]);\n        let rx = StreamBackoff::new(rx, backoff::backoff::Constant::new(tick), TokioSleeper);\n        pin_mut!(rx);\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(0))));\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(1))));\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Err(2))));\n        assert_eq!(poll!(rx.next()), Poll::Pending);\n        tokio::time::advance(tick * 2).await;\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(3))));\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(4))));\n        assert_eq!(poll!(rx.next()), Poll::Ready(None));\n    }\n\n    #[tokio::test]\n    async fn backoff_time_should_update() {\n        tokio::time::pause();\n        let (tx, rx) = mpsc::unbounded_channel();\n        let rx = tokio_stream::wrappers::UnboundedReceiverStream::new(rx);\n        let rx = StreamBackoff::new(rx, LinearBackoff::new(Duration::from_secs(2)), TokioSleeper);\n        pin_mut!(rx);\n        tx.send(Ok(0)).unwrap();\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(0))));\n        tx.send(Ok(1)).unwrap();\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(1))));\n        tx.send(Err(2)).unwrap();\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Err(2))));\n        assert_eq!(poll!(rx.next()), Poll::Pending);\n        tokio::time::advance(Duration::from_secs(3)).await;\n        assert_eq!(poll!(rx.next()), Poll::Pending);\n        
tx.send(Err(3)).unwrap();\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Err(3))));\n        tx.send(Ok(4)).unwrap();\n        assert_eq!(poll!(rx.next()), Poll::Pending);\n        tokio::time::advance(Duration::from_secs(3)).await;\n        assert_eq!(poll!(rx.next()), Poll::Pending);\n        tokio::time::advance(Duration::from_secs(2)).await;\n        assert_eq!(poll!(rx.next()), Poll::Ready(Some(Ok(4))));\n        assert_eq!(poll!(rx.next()), Poll::Pending);\n        drop(tx);\n        assert_eq!(poll!(rx.next()), Poll::Ready(None));\n    }\n\n    #[tokio::test]\n    async fn backoff_should_close_when_requested() {\n        assert_eq!(\n            StreamBackoff::new(\n                stream::iter([Ok(0), Ok(1), Err(2), Ok(3)]),\n                backoff::backoff::Stop {},\n                TokioSleeper\n            )\n            .collect::<Vec<_>>()\n            .await,\n            vec![Ok(0), Ok(1), Err(2)]\n        );\n    }\n\n    /// Dynamic backoff policy that is still deterministic and testable\n    struct LinearBackoff {\n        interval: Duration,\n        current_duration: Duration,\n    }\n\n    impl LinearBackoff {\n        fn new(interval: Duration) -> Self {\n            Self {\n                interval,\n                current_duration: Duration::ZERO,\n            }\n        }\n    }\n\n    impl Backoff for LinearBackoff {\n        fn next_backoff(&mut self) -> Option<Duration> {\n            self.current_duration += self.interval;\n            Some(self.current_duration)\n        }\n\n        fn reset(&mut self) {\n            self.current_duration = Duration::ZERO;\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/test_utils.rs",
    "content": "#![allow(clippy::missing_panics_doc)]\nuse std::fmt::Write as _;\nuse std::sync::{Arc, Mutex};\n\nuse async_trait::async_trait;\n\nuse crate::ChatCompletionStream;\nuse crate::chat_completion::{\n    ChatCompletion, ChatCompletionRequest, ChatCompletionResponse, errors::LanguageModelError,\n};\nuse anyhow::Result;\nuse pretty_assertions::assert_eq;\n\n#[macro_export]\nmacro_rules! assert_default_prompt_snapshot {\n    ($node:expr, $($key:expr => $value:expr),*) => {\n        #[tokio::test]\n        async fn test_default_prompt() {\n        let template = default_prompt();\n        let mut prompt = template.clone().with_node(&TextNode::new($node));\n        $(\n            prompt = prompt.with_context_value($key, $value);\n        )*\n        insta::assert_snapshot!(prompt.render().unwrap());\n        }\n    };\n\n    ($($key:expr => $value:expr),*) => {\n        #[tokio::test]\n        async fn test_default_prompt() {\n            let template = default_prompt();\n            let mut prompt = template;\n            $(\n                prompt = prompt.with_context_value($key, $value);\n            )*\n            insta::assert_snapshot!(prompt.render().unwrap());\n        }\n    };\n}\n\ntype Expectations = Arc<\n    Mutex<\n        Vec<(\n            ChatCompletionRequest<'static>,\n            Result<ChatCompletionResponse>,\n        )>,\n    >,\n>;\n\n#[derive(Clone)]\npub struct MockChatCompletion {\n    pub expectations: Expectations,\n    pub received_expectations: Expectations,\n}\n\nimpl Default for MockChatCompletion {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl MockChatCompletion {\n    pub fn new() -> Self {\n        Self {\n            expectations: Arc::new(Mutex::new(Vec::new())),\n            received_expectations: Arc::new(Mutex::new(Vec::new())),\n        }\n    }\n\n    pub fn expect_complete(\n        &self,\n        request: ChatCompletionRequest<'static>,\n        response: Result<ChatCompletionResponse>,\n 
   ) {\n        let mut mutex = self.expectations.lock().unwrap();\n\n        mutex.insert(0, (request, response));\n    }\n}\n\n#[async_trait]\nimpl ChatCompletion for MockChatCompletion {\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        let request = request.to_owned();\n        let (expected_request, response) =\n            self.expectations.lock().unwrap().pop().unwrap_or_else(|| {\n                panic!(\n                    \"Received completion request, but no expectations are set\\n {}\",\n                    pretty_request(&request)\n                )\n            });\n\n        assert_eq!(\n            &expected_request,\n            &request,\n            \"Unexpected request\\n: {}\\nRemaining expectations:\\n{}\",\n            pretty_request(&request),\n            pretty_expectation(&(expected_request.clone(), response))\n                + \"---\\n\"\n                + &self\n                    .expectations\n                    .lock()\n                    .unwrap()\n                    .iter()\n                    .map(pretty_expectation)\n                    .collect::<Vec<_>>()\n                    .join(\"---\\n\")\n        );\n\n        if let Ok(response) = response {\n            self.received_expectations\n                .lock()\n                .unwrap()\n                .push((expected_request, Ok(response.clone())));\n\n            tracing::debug!(\n                \"[MockChatCompletion] Received request:\\n{}\\nResponse:\\n{}\",\n                pretty_request(&request),\n                pretty_response(&response)\n            );\n            Ok(response)\n        } else {\n            let err = response.unwrap_err();\n            self.received_expectations\n                .lock()\n                .unwrap()\n                .push((expected_request, Err(anyhow::anyhow!(err.to_string()))));\n\n            
Err(LanguageModelError::PermanentError(err.into()))\n        }\n    }\n\n    /// Fakes a stream, first it checks the expectations, then it streams the response\n    /// instantly in small chunks\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        let response = match self.complete(request).await {\n            Ok(response) => response,\n            Err(err) => return err.into(),\n        };\n\n        let (tx, rx) = tokio::sync::mpsc::unbounded_channel::<\n            Result<ChatCompletionResponse, LanguageModelError>,\n        >();\n\n        tokio::spawn(async move {\n            let mut chunk_response = ChatCompletionResponse::builder()\n                .maybe_tool_calls(response.tool_calls.clone())\n                .build()\n                .unwrap();\n\n            for chunk in response.message().unwrap().split_whitespace() {\n                tracing::debug!(\"[MockChatCompletion] Sending chunk: {chunk}\");\n\n                let chunk_response = chunk_response.append_message_delta(Some(chunk)).clone();\n                let _ = tx.send(Ok(chunk_response));\n                tokio::time::sleep(tokio::time::Duration::from_millis(10)).await;\n            }\n        });\n\n        Box::pin(tokio_stream::wrappers::UnboundedReceiverStream::new(rx))\n    }\n}\n\nimpl Drop for MockChatCompletion {\n    fn drop(&mut self) {\n        // We are still cloned, so do not check assertions yet\n        if Arc::strong_count(&self.received_expectations) > 1 {\n            return;\n        }\n        let Ok(expectations) = self.expectations.lock() else {\n            return;\n        };\n        let Ok(received) = self.received_expectations.lock() else {\n            return;\n        };\n\n        if expectations.is_empty() {\n            let num_received = received.len();\n            tracing::debug!(\"[MockChatCompletion] All {num_received} expectations were met\");\n        } else {\n            let received = received\n 
               .iter()\n                .map(pretty_expectation)\n                .collect::<Vec<_>>()\n                .join(\"---\\n\");\n\n            let pending = expectations\n                .iter()\n                .map(pretty_expectation)\n                .collect::<Vec<_>>()\n                .join(\"---\\n\");\n\n            panic!(\n                \"[MockChatCompletion] Not all expectations were met\\n received:\\n{received}\\n\\npending:\\n{pending}\"\n            );\n        }\n    }\n}\n\nfn pretty_expectation(\n    expectation: &(\n        ChatCompletionRequest<'static>,\n        Result<ChatCompletionResponse>,\n    ),\n) -> String {\n    let mut output = String::new();\n\n    let request = &expectation.0;\n    output.push_str(\"Request:\\n\");\n    output.push_str(&pretty_request(request));\n\n    output.push_str(\" =>\\n\");\n\n    let response_result = &expectation.1;\n\n    if let Ok(response) = response_result {\n        output += &pretty_response(response);\n    }\n\n    output\n}\n\nfn pretty_request(request: &ChatCompletionRequest<'_>) -> String {\n    let mut output = String::new();\n    for message in request.messages() {\n        writeln!(output, \" {message}\").unwrap();\n    }\n    output\n}\n\nfn pretty_response(response: &ChatCompletionResponse) -> String {\n    let mut output = String::new();\n    if let Some(message) = response.message() {\n        writeln!(output, \" {message}\").unwrap();\n    }\n    if let Some(tool_calls) = response.tool_calls() {\n        for tool_call in tool_calls {\n            writeln!(output, \" {tool_call}\").unwrap();\n        }\n    }\n    output\n}\n"
  },
  {
    "path": "swiftide-core/src/token_estimation.rs",
    "content": "use std::borrow::Cow;\n\nuse anyhow::Result;\nuse async_trait::async_trait;\n\nuse crate::{chat_completion::ChatMessage, prompt::Prompt};\n\n/// Estimate the number of tokens in a given value.\n///\n/// This trait is intentionally async so implementations can defer to remote or\n/// more expensive estimators without blocking.\n///\n/// # Examples\n///\n/// ```rust\n/// # use swiftide_core::token_estimation::{CharEstimator, EstimateTokens};\n/// # use swiftide_core::chat_completion::ChatMessage;\n/// # #[tokio::main]\n/// # async fn main() -> anyhow::Result<()> {\n/// let estimator = CharEstimator;\n/// let message = ChatMessage::new_user(\"Hello from Swiftide!\");\n/// let tokens = estimator.estimate(&message).await?;\n/// assert!(tokens > 0);\n/// # Ok(())\n/// # }\n/// ```\n#[async_trait]\npub trait EstimateTokens {\n    async fn estimate(&self, value: impl Estimatable) -> Result<usize>;\n}\n\n/// A rough estimator when speed matters more than accuracy.\n///\n/// Divides the number of characters by 4 as recommended by `OpenAI`.\n///\n/// # Examples\n///\n/// ```rust\n/// # use swiftide_core::token_estimation::{CharEstimator, EstimateTokens};\n/// # #[tokio::main]\n/// # async fn main() -> anyhow::Result<()> {\n/// let estimator = CharEstimator;\n/// let tokens = estimator.estimate(\"Roughly four chars per token.\").await?;\n/// assert!(tokens > 0);\n/// # Ok(())\n/// # }\n/// ```\npub struct CharEstimator;\n\n#[async_trait]\nimpl EstimateTokens for CharEstimator {\n    async fn estimate(&self, value: impl Estimatable) -> Result<usize> {\n        let s = value.for_estimate()?;\n        Ok(s.iter().map(|s| s.chars().count()).sum::<usize>() / 4 + value.additional_tokens())\n    }\n}\n\n/// A value that can be estimated for the number of tokens it contains.\n///\n/// # Errors\n///\n/// Errors if the value cannot be presented for estimation.\n///\n/// # Examples\n///\n/// ```rust\n/// # use std::borrow::Cow;\n/// # use anyhow::Result;\n/// # use 
swiftide_core::token_estimation::Estimatable;\n/// struct Snippet {\n///     title: String,\n///     body: String,\n/// }\n///\n/// impl Estimatable for Snippet {\n///     fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>> {\n///         Ok(vec![Cow::Borrowed(&self.title), Cow::Borrowed(&self.body)])\n///     }\n/// }\n/// ```\npub trait Estimatable: Send + Sync {\n    /// A list of string slices used for estimation\n    ///\n    /// # Errors\n    ///\n    /// Some estimatable values may fail to render or prepare for estimation.\n    fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>>;\n\n    /// Optionally return extra tokens that should be added to the estimate.\n    fn additional_tokens(&self) -> usize {\n        0\n    }\n}\n\nimpl Estimatable for &str {\n    fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>> {\n        Ok(vec![Cow::Borrowed(self)])\n    }\n}\n\nimpl Estimatable for String {\n    fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>> {\n        Ok(vec![Cow::Borrowed(self.as_str())])\n    }\n}\n\nimpl Estimatable for &Prompt {\n    fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>> {\n        let rendered = self.render()?;\n        Ok(vec![Cow::Owned(rendered)])\n    }\n}\n\nimpl Estimatable for &ChatMessage {\n    fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>> {\n        Ok(match self {\n            ChatMessage::User(msg) | ChatMessage::Summary(msg) | ChatMessage::System(msg) => {\n                vec![Cow::Borrowed(msg)]\n            }\n            ChatMessage::UserWithParts(parts) => parts\n                .iter()\n                .filter_map(|part| match part {\n                    crate::chat_completion::ChatMessageContentPart::Text { text } => {\n                        Some(Cow::Borrowed(text.as_ref()))\n                    }\n                    crate::chat_completion::ChatMessageContentPart::Image { .. }\n                    | crate::chat_completion::ChatMessageContentPart::Document { .. 
}\n                    | crate::chat_completion::ChatMessageContentPart::Audio { .. }\n                    | crate::chat_completion::ChatMessageContentPart::Video { .. } => None,\n                })\n                .collect(),\n            ChatMessage::Assistant(msg, vec) => {\n                // Note that this is not super accurate.\n                //\n                // It's a bit verbose to avoid unnecessary allocations. Is what it is.\n                let mut tool_calls = vec.as_ref().map(|vec| {\n                    vec.iter()\n                        .filter_map(|c| c.args().map(Cow::Borrowed))\n                        .collect::<Vec<_>>()\n                });\n\n                if let Some(msg) = msg {\n                    if let Some(tool_calls) = tool_calls.as_mut() {\n                        let mut msg = vec![Cow::Borrowed(msg.as_ref())];\n                        msg.append(tool_calls);\n                        msg\n                    } else {\n                        vec![Cow::Borrowed(msg)]\n                    }\n                } else if let Some(tool_calls) = tool_calls {\n                    tool_calls\n                } else {\n                    vec![\"None\".into()]\n                }\n            }\n            ChatMessage::ToolOutput(_tool_call, tool_output) => {\n                let tool_output_content = tool_output.content().unwrap_or_default();\n\n                vec![Cow::Borrowed(tool_output_content)]\n            }\n            ChatMessage::Reasoning(_reasoning_item) => vec![],\n        })\n    }\n\n    // 4 each for the role\n    //\n    // See https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb\n    fn additional_tokens(&self) -> usize {\n        4\n    }\n}\n\nimpl Estimatable for &[ChatMessage] {\n    fn for_estimate(&self) -> Result<Vec<Cow<'_, str>>> {\n        let mut total = Vec::new();\n        for msg in *self {\n            let mut v = msg\n                .for_estimate()?\n 
               .into_iter()\n                .map(Cow::into_owned)\n                .map(Into::into)\n                .collect();\n            total.append(&mut v);\n        }\n\n        Ok(total)\n    }\n\n    // Apparently every reply is primed with a <|start|>assistant<|message|>\n    fn additional_tokens(&self) -> usize {\n        self.iter().map(|m| m.additional_tokens()).sum::<usize>() + 3\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::chat_completion::ToolCall;\n\n    #[tokio::test]\n    async fn estimate_counts_characters_and_additional_tokens() {\n        let estimator = CharEstimator;\n        let tokens = estimator.estimate(\"abcd\").await.unwrap();\n        assert_eq!(tokens, 1);\n    }\n\n    #[tokio::test]\n    async fn estimate_prompt_renders_before_counting() {\n        let estimator = CharEstimator;\n        let prompt = Prompt::from(\"hello {{name}}\").with_context_value(\"name\", \"swiftide\");\n        let tokens = estimator.estimate(&prompt).await.unwrap();\n        assert_eq!(tokens, \"hello swiftide\".chars().count() / 4);\n    }\n\n    #[tokio::test]\n    async fn estimate_chat_message_includes_role_tokens() {\n        let estimator = CharEstimator;\n        let message = ChatMessage::new_user(\"hello\");\n        let tokens = estimator.estimate(&message).await.unwrap();\n        assert_eq!(tokens, \"hello\".chars().count() / 4 + 4);\n    }\n\n    #[tokio::test]\n    async fn estimate_slice_adds_reply_priming_tokens() {\n        let estimator = CharEstimator;\n        let messages = [\n            ChatMessage::new_user(\"hello\"),\n            ChatMessage::new_system(\"world\"),\n        ];\n        let tokens = estimator.estimate(&messages[..]).await.unwrap();\n        let content_tokens = \"helloworld\".chars().count() / 4;\n        let additional_tokens = 4 + 4 + 3;\n        assert_eq!(tokens, content_tokens + additional_tokens);\n    }\n\n    #[tokio::test]\n    async fn 
assistant_tool_calls_are_included_in_estimate() {\n        let estimator = CharEstimator;\n        let tool_call = ToolCall::builder()\n            .id(\"tool-1\")\n            .name(\"search\")\n            .args(\"{\\\"q\\\":\\\"swiftide\\\"}\")\n            .build()\n            .unwrap();\n        let message = ChatMessage::new_assistant(None::<String>, Some(vec![tool_call]));\n        let tokens = estimator.estimate(&message).await.unwrap();\n        let content_tokens = \"{\\\"q\\\":\\\"swiftide\\\"}\".chars().count() / 4;\n        assert_eq!(tokens, content_tokens + 4);\n    }\n\n    #[tokio::test]\n    async fn assistant_without_content_or_tools_uses_none_marker() {\n        let message = ChatMessage::Assistant(None, None);\n        let message_ref = &message;\n        let content = message_ref.for_estimate().unwrap();\n        assert_eq!(content, vec![Cow::Borrowed(\"None\")]);\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/type_aliases.rs",
    "content": "#![cfg_attr(coverage_nightly, coverage(off))]\n\nuse serde::{Deserialize, Serialize};\n\npub type Embedding = Vec<f32>;\npub type Embeddings = Vec<Embedding>;\n\n#[derive(Serialize, Deserialize, Clone, PartialEq)]\npub struct SparseEmbedding {\n    pub indices: Vec<u32>,\n    pub values: Vec<f32>,\n}\npub type SparseEmbeddings = Vec<SparseEmbedding>;\n\nimpl std::fmt::Debug for SparseEmbedding {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"SparseEmbedding\")\n            .field(\"indices\", &self.indices.len())\n            .field(\"values\", &self.values.len())\n            .finish()\n    }\n}\n"
  },
  {
    "path": "swiftide-core/src/util.rs",
    "content": "//! Utility functions for Swiftide\n\n/// Safely truncates a string to a maximum number of characters.\n///\n/// Respects utf8 character boundaries.\npub fn safe_truncate_utf8(s: impl AsRef<str>, max_chars: usize) -> String {\n    s.as_ref().chars().take(max_chars).collect()\n}\n\n/// Debug print a long string by truncating to n characters\n///\n/// Enabled with the `truncate-debug` feature flag, which is enabled by default.\n///\n/// If debugging large outputs is needed, set `swiftide_core` to `no-default-features`\n///\n/// # Example\n///\n/// ```ignore\n/// # use swiftide_core::util::debug_long_utf8;\n/// let s = debug_long_utf8(\"🦀\".repeat(10), 3);\n///\n/// assert_eq!(s, \"🦀🦀🦀 (10)\");\n/// ```\npub fn debug_long_utf8(s: impl AsRef<str>, max_chars: usize) -> String {\n    if cfg!(feature = \"truncate-debug\") {\n        let trunc = safe_truncate_utf8(&s, max_chars);\n\n        format!(\"{} ({})\", trunc, s.as_ref().chars().count())\n    } else {\n        s.as_ref().into()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_safe_truncate_str_with_utf8_char_boundary() {\n        let s = \"🦀\".repeat(101);\n\n        // Single char\n        assert_eq!(safe_truncate_utf8(&s, 100).chars().count(), 100);\n\n        // With invalid char boundary\n        let s = \"Jürgen\".repeat(100);\n        assert_eq!(safe_truncate_utf8(&s, 100).chars().count(), 100);\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/Cargo.toml",
"content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-indexing\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\nswiftide-core = { path = \"../swiftide-core\", version = \"0.32\" }\nswiftide-macros = { path = \"../swiftide-macros\", version = \"0.32\" }\n\nanyhow = { workspace = true }\nasync-trait = { workspace = true }\nderive_builder = { workspace = true }\nfutures-util = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\ntokio-stream = { workspace = true }\nnum_cpus = { workspace = true }\ntracing = { workspace = true }\nitertools = { workspace = true }\nserde = { workspace = true }\nserde_json = { workspace = true }\nstrum = { workspace = true }\nstrum_macros = { workspace = true }\nindoc = { workspace = true }\n\nignore = { workspace = true }\ntext-splitter = { workspace = true, features = [\"markdown\"] }\nfs-err.workspace = true\n\n[dev-dependencies]\nswiftide-core = { path = \"../swiftide-core\", features = [\"test-utils\"] }\ntest-log = { workspace = true }\nmockall = { workspace = true }\ninsta = { workspace = true }\ntest-case = { workspace = true }\ntemp-dir = { workspace = true }\n\n[features]\n# TODO: Should not depend on integrations; transformers that use them should be in integrations instead and re-exported from root for convenience\ntree-sitter = []\n\n[lints]\nworkspace = true\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide-indexing/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n\npub mod loaders;\npub mod persist;\npub mod transformers;\n\nmod pipeline;\npub use pipeline::Pipeline;\n"
  },
  {
    "path": "swiftide-indexing/src/loaders/file_loader.rs",
"content": "//! Load files from a directory\nuse std::{\n    io::Read as _,\n    path::{Path, PathBuf},\n};\n\nuse anyhow::Context as _;\nuse ignore::{DirEntry, Walk};\nuse swiftide_core::{Loader, indexing::IndexingStream, indexing::TextNode};\nuse tracing::{Span, debug_span, instrument};\n\n/// The `FileLoader` struct is responsible for loading files from a specified directory, filtering\n/// them based on their extensions, and creating a stream of these files for further processing.\n///\n/// # Example\n///\n/// Create a pipeline that loads the current directory and indexes all files with the \".rs\"\n/// extension.\n///\n/// ```no_run\n/// # use swiftide_indexing as indexing;\n/// # use swiftide_indexing::loaders::FileLoader;\n/// indexing::Pipeline::from_loader(FileLoader::new(\".\").with_extensions(&[\"rs\"]));\n/// ```\n#[derive(Clone, Debug)]\npub struct FileLoader {\n    pub(crate) root: PathBuf,\n    pub(crate) extensions: Option<Vec<String>>,\n}\n\nimpl FileLoader {\n    /// Creates a new `FileLoader` with the specified path.\n    ///\n    /// # Arguments\n    ///\n    /// - `root`: The root directory to load files from.\n    ///\n    /// # Returns\n    ///\n    /// A new instance of `FileLoader`.\n    pub fn new(root: impl AsRef<Path>) -> Self {\n        Self {\n            root: root.as_ref().to_path_buf(),\n            extensions: None,\n        }\n    }\n\n    /// Adds extensions to the loader.\n    ///\n    /// # Arguments\n    ///\n    /// - `extensions`: A list of extensions to add without the leading dot.\n    ///\n    /// # Returns\n    ///\n    /// The `FileLoader` instance with the added extensions.\n    #[must_use]\n    pub fn with_extensions(mut self, extensions: &[impl AsRef<str>]) -> Self {\n        let existing = self.extensions.get_or_insert_default();\n        existing.extend(extensions.iter().map(|ext| ext.as_ref().to_string()));\n        self\n    }\n\n    /// Lists the nodes (files) that match the specified extensions.\n    ///\n    /// # 
Returns\n    ///\n    /// A vector of `TextNode` representing the matching files.\n    ///\n    /// Files that fail to load are skipped.\n    pub fn list_nodes(&self) -> Vec<TextNode> {\n        self.iter().filter_map(Result::ok).collect()\n    }\n\n    /// Iterates over the files in the directory\n    pub fn iter(&self) -> impl Iterator<Item = anyhow::Result<TextNode>> + use<> {\n        Iter::new(&self.root, self.extensions.clone()).fuse()\n    }\n}\n\n/// An iterator that walks over the files in a directory and loads them.\n///\n/// This is a private struct that is used to implement the `FileLoader` iterator.\nstruct Iter {\n    /// The walk instance that iterates over the files in the directory.\n    walk: Walk,\n    /// The extensions to include.\n    include_extensions: Option<Vec<String>>,\n    /// A span that tracks the current file loader.\n    span: Span,\n}\n\nimpl Iterator for Iter {\n    type Item = anyhow::Result<TextNode>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        let _span = self.span.enter();\n        loop {\n            // stop the iteration if there are no more entries\n            let entry = self.walk.next()?;\n\n            // propagate any errors that occur during the directory traversal\n            let entry = match entry {\n                Ok(entry) => entry,\n                Err(err) => return Some(Err(err.into())),\n            };\n\n            if let Some(node) = self.load(&entry) {\n                return Some(node);\n            }\n        }\n    }\n}\n\nimpl Iter {\n    /// Creates a new `Iter` instance.\n    fn new(root: &Path, include_extensions: Option<Vec<String>>) -> Self {\n        let span = debug_span!(\"file_loader\", root = %root.display());\n        tracing::debug!(parent: &span, extensions = ?include_extensions, \"Loading files\");\n        Self {\n            walk: Walk::new(root),\n            include_extensions,\n            span,\n        }\n    
}\n\n    #[instrument(skip_all, fields(path = %entry.path().display()))]\n    fn load(&self, entry: &DirEntry) -> Option<anyhow::Result<TextNode>> {\n        if entry.file_type().is_some_and(|ft| !ft.is_file()) {\n            // Skip directories and non-files\n            return None;\n        }\n        if let Some(extensions) = &self.include_extensions {\n            let Some(extension) = entry.path().extension() else {\n                tracing::trace!(\"Skipping file without extension\");\n                return None;\n            };\n            let extension = extension.to_string_lossy();\n            if !extensions.iter().any(|ext| ext == &extension) {\n                tracing::trace!(\"Skipping file with extension {extension}\");\n                return None;\n            }\n        }\n        tracing::debug!(\"Loading file\");\n        match read_node(entry) {\n            Ok(node) => {\n                tracing::debug!(node_id = %node.id(), \"Loaded file\");\n                Some(Ok(node))\n            }\n            Err(err) => {\n                tracing::error!(error = %err, \"Failed to load file\");\n                Some(Err(err))\n            }\n        }\n    }\n}\n\nfn read_node(entry: &DirEntry) -> anyhow::Result<TextNode> {\n    // Files might be invalid utf-8, so we need to read them as bytes and convert it lossy, as\n    // Swiftide (currently) works internally with strings.\n    let mut file = fs_err::File::open(entry.path()).context(\"Failed to open file\")?;\n    let mut buf = vec![];\n    file.read_to_end(&mut buf).context(\"Failed to read file\")?;\n    let content = String::from_utf8_lossy(&buf);\n\n    let original_size = content.len();\n\n    TextNode::builder()\n        .path(entry.path())\n        .chunk(content)\n        .original_size(original_size)\n        .build()\n}\n\nimpl Loader for FileLoader {\n    type Output = String;\n\n    /// Converts the `FileLoader` into a stream of `TextNode`.\n    ///\n    /// # Returns\n    ///\n    
/// An `IndexingStream` representing the stream of files.\n    ///\n    /// # Errors\n    /// This method will return an error if it fails to read a file's content.\n    fn into_stream(self) -> IndexingStream<String> {\n        IndexingStream::iter(self.iter())\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<String> {\n        self.into_stream()\n    }\n}\n\n#[cfg(test)]\nmod test {\n\n    use tokio_stream::StreamExt as _;\n\n    use super::*;\n\n    #[test]\n    fn test_with_extensions() {\n        let loader = FileLoader::new(\"/tmp\").with_extensions(&[\"rs\"]);\n        assert_eq!(loader.extensions, Some(vec![\"rs\".to_string()]));\n    }\n\n    #[tokio::test]\n    async fn test_ignores_invalid_utf8() {\n        let tempdir = temp_dir::TempDir::new().unwrap();\n\n        fs_err::write(tempdir.child(\"invalid.txt\"), [0x80, 0x80, 0x80]).unwrap();\n\n        let loader = FileLoader::new(tempdir.path()).with_extensions(&[\"txt\"]);\n        let result = loader.into_stream().collect::<Vec<_>>().await;\n\n        assert_eq!(result.len(), 1);\n\n        let first = result.first().unwrap();\n\n        assert_eq!(first.as_ref().unwrap().chunk, \"���\".to_string());\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/loaders/mod.rs",
    "content": "//! The `loaders` module provides functionality for loading files from a specified directory.\n//! It includes the `FileLoader` struct which is used to filter and stream files based on their\n//! extensions.\n//!\n//! This module is a part of the Swiftide project, designed for asynchronous file indexing and\n//! processing. The `FileLoader` struct is re-exported for ease of use in other parts of the\n//! project.\n\npub mod file_loader;\n\npub use file_loader::FileLoader;\n"
  },
  {
    "path": "swiftide-indexing/src/persist/memory_storage.rs",
"content": "use std::{\n    collections::HashMap,\n    sync::{\n        Arc,\n        atomic::{AtomicUsize, Ordering},\n    },\n};\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse derive_builder::Builder;\nuse tokio::sync::RwLock;\n\nuse swiftide_core::{\n    Persist,\n    indexing::{Chunk, IndexingStream, Node},\n};\n\n#[derive(Debug, Default, Builder, Clone)]\n#[builder(pattern = \"owned\")]\n/// A simple in-memory storage implementation.\n///\n/// Great for experimentation and testing.\n///\n/// The storage uses a zero-indexed, incrementing counter as the key for each node.\npub struct MemoryStorage<T: Chunk = String> {\n    data: Arc<RwLock<HashMap<String, Node<T>>>>,\n    #[builder(default)]\n    batch_size: Option<usize>,\n    #[builder(default = Arc::new(AtomicUsize::new(0)))]\n    node_count: Arc<AtomicUsize>,\n}\n\nimpl<T: Chunk> MemoryStorage<T> {\n    fn key(&self) -> String {\n        self.node_count.fetch_add(1, Ordering::Relaxed).to_string()\n    }\n\n    /// Retrieve a node by its key\n    pub async fn get(&self, key: impl AsRef<str>) -> Option<Node<T>> {\n        self.data.read().await.get(key.as_ref()).cloned()\n    }\n\n    /// Retrieve all nodes in the storage\n    pub async fn get_all_values(&self) -> Vec<Node<T>> {\n        self.data.read().await.values().cloned().collect()\n    }\n\n    /// Retrieve all nodes in the storage with their keys\n    pub async fn get_all(&self) -> Vec<(String, Node<T>)> {\n        self.data\n            .read()\n            .await\n            .iter()\n            .map(|(k, v)| (k.clone(), v.clone()))\n            .collect()\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk> Persist for MemoryStorage<T> {\n    type Input = T;\n    type Output = T;\n    async fn setup(&self) -> Result<()> {\n        Ok(())\n    }\n\n    /// Store a node, keyed by the incrementing counter\n    async fn store(&self, node: Node<T>) -> 
Result<Node<T>> {\n        self.data.write().await.insert(self.key(), node.clone());\n\n        Ok(node)\n    }\n\n    /// Store multiple nodes at once\n    ///\n    /// Each node is keyed by the incrementing counter.\n    async fn batch_store(&self, nodes: Vec<Node<T>>) -> IndexingStream<T> {\n        let mut lock = self.data.write().await;\n\n        for node in &nodes {\n            lock.insert(self.key(), node.clone());\n        }\n\n        IndexingStream::iter(nodes.into_iter().map(Ok))\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        self.batch_size\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use futures_util::TryStreamExt;\n    use swiftide_core::indexing::TextNode;\n\n    #[tokio::test]\n    async fn test_memory_storage() {\n        let storage = MemoryStorage::default();\n        let node = TextNode::default();\n        let node = storage.store(node.clone()).await.unwrap();\n        assert_eq!(storage.get(\"0\").await, Some(node));\n    }\n\n    #[tokio::test]\n    async fn test_inserting_multiple_nodes() {\n        let storage = MemoryStorage::default();\n        let node1 = TextNode::default();\n        let node2 = TextNode::default();\n\n        storage.store(node1.clone()).await.unwrap();\n        storage.store(node2.clone()).await.unwrap();\n\n        dbg!(storage.get_all().await);\n        assert_eq!(storage.get(\"0\").await, Some(node1));\n        assert_eq!(storage.get(\"1\").await, Some(node2));\n    }\n\n    #[tokio::test]\n    async fn test_batch_store() {\n        let storage = MemoryStorage::default();\n        let node1 = TextNode::default();\n        let node2 = TextNode::default();\n\n        let stream = storage\n            .batch_store(vec![node1.clone(), node2.clone()])\n            .await;\n\n        let result: Vec<TextNode> = stream.try_collect().await.unwrap();\n\n        assert_eq!(result.len(), 2);\n        assert_eq!(result[0], node1);\n        assert_eq!(result[1], node2);\n   
 }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/persist/mod.rs",
    "content": "//! Storage implementations for persisting data\n//!\n//! More storage implementations are available as integrations.\nmod memory_storage;\npub use memory_storage::MemoryStorage;\n"
  },
  {
    "path": "swiftide-indexing/src/pipeline.rs",
    "content": "use anyhow::Result;\nuse futures_util::{StreamExt, TryFutureExt, TryStreamExt};\nuse swiftide_core::{\n    BatchableTransformer, ChunkerTransformer, Loader, NodeCache, Persist, SimplePrompt,\n    Transformer, WithBatchIndexingDefaults, WithIndexingDefaults,\n    indexing::{Chunk, IndexingDefaults},\n    statistics::StatsCollector,\n};\nuse tokio::{\n    sync::{Mutex, mpsc},\n    task,\n};\nuse tracing::Instrument;\n\nuse std::{pin::Pin, sync::Arc, time::Duration};\n\nuse swiftide_core::indexing::{EmbedMode, IndexingStream, Node};\n\nmacro_rules! trace_span {\n    ($op:literal, $step:expr) => {\n        tracing::trace_span!($op, \"otel.name\" = format!(\"{}.{}\", $op, $step.name()),)\n    };\n\n    ($op:literal) => {\n        tracing::trace_span!($op, \"otel.name\" = format!(\"{}\", $op),)\n    };\n}\n\nmacro_rules! node_trace_log {\n    ($step:expr, $node:expr, $msg:literal) => {\n        tracing::trace!(\n            node = ?$node,\n            node_id = ?$node.id(),\n            step = $step.name(),\n            $msg\n        )\n    };\n}\n\nmacro_rules! batch_node_trace_log {\n    ($step:expr, $nodes:expr, $msg:literal) => {\n        tracing::trace!(batch_size = $nodes.len(), nodes = ?$nodes, step = $step.name(), $msg)\n    };\n}\n\nmacro_rules! pipeline_with_new_stream {\n    ($pipeline:expr, $stream:expr) => {\n        Pipeline {\n            stream: $stream.into(),\n            storage_setup_fns: $pipeline.storage_setup_fns.clone(),\n            concurrency: $pipeline.concurrency,\n            indexing_defaults: $pipeline.indexing_defaults.clone(),\n            batch_size: $pipeline.batch_size,\n            stats: $pipeline.stats.clone(),\n        }\n    };\n}\n\n/// The default batch size for batch processing.\nconst DEFAULT_BATCH_SIZE: usize = 256;\n\n/// A pipeline for indexing files, adding metadata, chunking, transforming, embedding, and then\n/// storing them.\n///\n/// The `Pipeline` struct orchestrates the entire file indexing process. 
It is designed to be\n/// flexible and performant, allowing for various stages of data transformation and storage to be\n/// configured and executed asynchronously.\n///\n/// # Fields\n///\n/// * `stream` - The stream of `Node` items to be processed.\n/// * `storage_setup_fns` - Setup functions for the configured storage backends.\n/// * `concurrency` - The level of concurrency for processing nodes.\n/// * `indexing_defaults` - Defaults (such as the LLM client) applied to transformers in the pipeline.\n/// * `batch_size` - The default batch size for batch transformers.\n/// * `stats` - Statistics collector for monitoring pipeline execution.\npub struct Pipeline<T: Chunk> {\n    stream: IndexingStream<T>,\n    // storage: Vec<Arc<dyn Persist<Input = T, Output = T>>>,\n    storage_setup_fns: Vec<DynStorageSetupFn>,\n    concurrency: usize,\n    indexing_defaults: IndexingDefaults,\n    batch_size: usize,\n    stats: StatsCollector,\n}\n\ntype DynStorageSetupFn =\n    Arc<dyn Fn() -> Pin<Box<dyn Future<Output = Result<()>> + Send>> + Send + Sync>;\n\nimpl<T: Chunk> Default for Pipeline<T> {\n    /// Creates a default `Pipeline` with an empty stream, no storage, and a concurrency level equal\n    /// to the number of CPUs.\n    fn default() -> Self {\n        Self {\n            stream: IndexingStream::<T>::empty(),\n            storage_setup_fns: Vec::new(),\n            concurrency: num_cpus::get(),\n            indexing_defaults: IndexingDefaults::default(),\n            batch_size: DEFAULT_BATCH_SIZE,\n            stats: StatsCollector::new(),\n        }\n    }\n}\n\nimpl<T: Chunk> Pipeline<T> {\n    /// Creates a `Pipeline` from a given loader.\n    ///\n    /// # Arguments\n    ///\n    /// * `loader` - A loader that implements the `Loader` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` initialized with the provided loader.\n    pub fn from_loader(loader: impl Loader<Output = T> + 'static) -> Self {\n        let stream = loader.into_stream();\n        Self {\n            stream,\n            ..Default::default()\n        }\n    }\n\n    /// Sets the default LLM client to be used for LLM prompts 
for all transformers in the\n    /// pipeline.\n    #[must_use]\n    pub fn with_default_llm_client(mut self, client: impl SimplePrompt + 'static) -> Self {\n        self.indexing_defaults = IndexingDefaults::from_simple_prompt(Box::new(client));\n        self\n    }\n\n    /// Creates a `Pipeline` from a given stream.\n    ///\n    /// # Arguments\n    ///\n    /// * `stream` - An `IndexingStream` containing the nodes to be processed.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` initialized with the provided stream.\n    pub fn from_stream(stream: impl Into<IndexingStream<T>>) -> Self {\n        Self {\n            stream: stream.into(),\n            ..Default::default()\n        }\n    }\n\n    /// Sets the concurrency level for the pipeline. By default the concurrency is set to the\n    /// number of cpus.\n    ///\n    /// # Arguments\n    ///\n    /// * `concurrency` - The desired level of concurrency.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated concurrency level.\n    #[must_use]\n    pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n        self.concurrency = concurrency;\n        self\n    }\n\n    /// Sets the embed mode for the pipeline. 
The embed mode controls which fields of a\n    /// [`Node`] (or combination of fields) are embedded with a vector when transforming with\n    /// [`crate::transformers::Embed`].\n    ///\n    /// See also [`swiftide_core::indexing::EmbedMode`].\n    ///\n    /// # Arguments\n    ///\n    /// * `embed_mode` - The desired embed mode.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated embed mode.\n    #[must_use]\n    pub fn with_embed_mode(mut self, embed_mode: EmbedMode) -> Self {\n        self.stream = self\n            .stream\n            .map_ok(move |mut node| {\n                node.embed_mode = embed_mode;\n                node\n            })\n            .boxed()\n            .into();\n        self\n    }\n\n    /// Filters out cached nodes using the provided cache.\n    ///\n    /// # Arguments\n    ///\n    /// * `cache` - A cache that implements the `NodeCache` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated stream that filters out cached nodes.\n    #[must_use]\n    pub fn filter_cached(mut self, cache: impl NodeCache<Input = T> + 'static) -> Self {\n        let cache = Arc::new(cache);\n        self.stream = self\n            .stream\n            .try_filter_map(move |node| {\n                let cache = Arc::clone(&cache);\n                let span = trace_span!(\"filter_cached\", cache);\n\n                async move {\n                    if cache.get(&node).await {\n                        node_trace_log!(cache, node, \"node in cache, skipping\");\n                        Ok(None)\n                    } else {\n                        node_trace_log!(cache, node, \"node not in cache, processing\");\n                        cache.set(&node).await;\n                        Ok(Some(node))\n                    }\n                }\n                .instrument(span.or_current())\n            })\n            .boxed()\n            .into();\n        self\n    }\n\n    /// Adds a transformer to 
the pipeline.\n    ///\n    /// Closures can also be provided as transformers.\n    ///\n    /// # Arguments\n    ///\n    /// * `transformer` - A transformer that implements the `Transformer` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated stream that applies the transformer to each node.\n    #[must_use]\n    pub fn then<Output: Chunk>(\n        self,\n        mut transformer: impl Transformer<Input = T, Output = Output> + WithIndexingDefaults + 'static,\n    ) -> Pipeline<Output> {\n        let concurrency = transformer.concurrency().unwrap_or(self.concurrency);\n\n        transformer.with_indexing_defaults(self.indexing_defaults.clone());\n\n        let transformer = Arc::new(transformer);\n        let stream = self\n            .stream\n            .map_ok(move |node| {\n                let transformer = transformer.clone();\n                let span = trace_span!(\"then\", transformer);\n\n                task::spawn(\n                    async move {\n                        node_trace_log!(transformer, node, \"Transforming node\");\n                        transformer.transform_node(node).await\n                    }\n                    .instrument(span.or_current()),\n                )\n                .err_into::<anyhow::Error>()\n            })\n            .try_buffer_unordered(concurrency)\n            .map(|x| x.and_then(|x| x));\n\n        pipeline_with_new_stream!(self, stream.boxed())\n    }\n\n    /// Adds a batch transformer to the pipeline.\n    ///\n    /// If the transformer has a batch size set, the batch size from the transformer is used,\n    /// otherwise the pipeline default batch size ([`DEFAULT_BATCH_SIZE`]).\n    ///\n    /// # Arguments\n    ///\n    /// * `transformer` - A transformer that implements the `BatchableTransformer` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated stream that applies the batch transformer to each\n    /// batch 
of nodes.\n    #[must_use]\n    pub fn then_in_batch<Output: Chunk>(\n        self,\n        mut transformer: impl BatchableTransformer<Input = T, Output = Output>\n        + WithBatchIndexingDefaults\n        + 'static,\n    ) -> Pipeline<Output> {\n        let concurrency = transformer.concurrency().unwrap_or(self.concurrency);\n\n        transformer.with_indexing_defaults(self.indexing_defaults.clone());\n\n        let transformer = Arc::new(transformer);\n        let stream = self\n            .stream\n            .try_chunks(transformer.batch_size().unwrap_or(self.batch_size))\n            .map_ok(move |nodes| {\n                let transformer = Arc::clone(&transformer);\n                let span = trace_span!(\"then_in_batch\", transformer);\n\n                tokio::spawn(\n                    async move {\n                        batch_node_trace_log!(transformer, nodes, \"batch transforming nodes\");\n                        transformer.batch_transform(nodes).await\n                    }\n                    .instrument(span.or_current()),\n                )\n                .map_err(anyhow::Error::from)\n            })\n            .err_into::<anyhow::Error>()\n            .try_buffer_unordered(concurrency) // First get the streams from each future\n            .try_flatten_unordered(None) // Then flatten the streams into a single stream\n            .boxed();\n\n        pipeline_with_new_stream!(self, stream)\n    }\n\n    /// Adds a chunker transformer to the pipeline.\n    ///\n    /// # Arguments\n    ///\n    /// * `chunker` - A transformer that implements the `ChunkerTransformer` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated stream that applies the chunker transformer to\n    /// each node.\n    #[must_use]\n    pub fn then_chunk<Output: Chunk>(\n        self,\n        chunker: impl ChunkerTransformer<Input = T, Output = Output> + 'static,\n    ) -> Pipeline<Output> {\n        let chunker = 
Arc::new(chunker);\n        let concurrency = chunker.concurrency().unwrap_or(self.concurrency);\n        let stream = self\n            .stream\n            .map_ok(move |node| {\n                let chunker = Arc::clone(&chunker);\n                let span = trace_span!(\"then_chunk\", chunker);\n\n                tokio::spawn(\n                    async move {\n                        node_trace_log!(chunker, node, \"Chunking node\");\n                        chunker.transform_node(node).await\n                    }\n                    .instrument(span.or_current()),\n                )\n                .map_err(anyhow::Error::from)\n            })\n            .err_into::<anyhow::Error>()\n            .try_buffer_unordered(concurrency)\n            .try_flatten_unordered(None);\n\n        pipeline_with_new_stream!(self, stream.boxed())\n    }\n\n    /// Transforms and expands a single node into many nodes\n    ///\n    /// Semantically identical to `then_chunk`; it repurposes the `ChunkerTransformer` trait.\n    ///\n    /// The real difference is in communicating intent and the trace/span names.\n    ///\n    /// # Arguments\n    ///\n    /// * `transformer` - A transformer that implements the `ChunkerTransformer` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the updated stream that applies the chunker transformer to\n    /// each node.\n    #[must_use]\n    pub fn then_expand<Output: Chunk>(\n        self,\n        transformer: impl ChunkerTransformer<Input = T, Output = Output> + 'static,\n    ) -> Pipeline<Output> {\n        let chunker = Arc::new(transformer);\n        let concurrency = chunker.concurrency().unwrap_or(self.concurrency);\n        let stream = self\n            .stream\n            .map_ok(move |node| {\n                let chunker = Arc::clone(&chunker);\n                let span = trace_span!(\"then_expand\", chunker);\n\n                tokio::spawn(\n                    async move {\n                
        node_trace_log!(chunker, node, \"Expanding node\");\n                        chunker.transform_node(node).await\n                    }\n                    .instrument(span.or_current()),\n                )\n                .map_err(anyhow::Error::from)\n            })\n            .err_into::<anyhow::Error>()\n            .try_buffer_unordered(concurrency)\n            .try_flatten_unordered(None);\n\n        pipeline_with_new_stream!(self, stream.boxed())\n    }\n\n    /// Persists indexing nodes using the provided storage backend.\n    ///\n    /// # Arguments\n    ///\n    /// * `storage` - A storage backend that implements the `Persist` trait.\n    ///\n    /// # Returns\n    ///\n    /// An instance of `Pipeline` with the configured storage backend.\n    #[must_use]\n    pub fn then_store_with<Output: Chunk>(\n        mut self,\n        storage: impl Persist<Input = T, Output = Output> + 'static,\n    ) -> Pipeline<Output> {\n        let storage = Arc::new(storage);\n\n        let storage_closure = storage.clone();\n\n        // Ensure we run the setup function only once.\n        let completed = Arc::new(Mutex::new(false));\n        let setup_fn: DynStorageSetupFn = Arc::new(move || {\n            let completed = Arc::clone(&completed);\n            let storage_closure = Arc::clone(&storage_closure);\n            Box::pin(async move {\n                let mut lock = completed.lock().await;\n                if *lock {\n                    return Ok(());\n                }\n\n                tracing::trace!(?storage_closure, \"Setting up storage\");\n                storage_closure.setup().await?;\n                *lock = true;\n                Ok(())\n            })\n        });\n        self.storage_setup_fns.push(setup_fn);\n\n        // add storage to the stream instead of doing it at the end\n        let stream = 
if let Some(batch_size) = storage.batch_size() {\n            self.stream\n                .try_chunks(batch_size)\n                .map_ok(move |nodes| {\n                    let storage = Arc::clone(&storage);\n                    let span = trace_span!(\"then_store_with_batched\", storage);\n\n                    tokio::spawn(\n                        async move {\n                            batch_node_trace_log!(storage, nodes, \"batch storing nodes\");\n                            storage.batch_store(nodes).await\n                        }\n                        .instrument(span.or_current()),\n                    )\n                    .map_err(anyhow::Error::from)\n                })\n                .err_into::<anyhow::Error>()\n                .try_buffer_unordered(self.concurrency)\n                .try_flatten_unordered(None)\n                .boxed()\n        } else {\n            self.stream\n                .map_ok(move |node| {\n                    let storage = Arc::clone(&storage);\n                    let span = trace_span!(\"then_store_with\", storage);\n\n                    tokio::spawn(\n                        async move {\n                            node_trace_log!(storage, node, \"Storing node\");\n\n                            storage.store(node).await\n                        }\n                        .instrument(span.or_current()),\n                    )\n                    .err_into::<anyhow::Error>()\n                })\n                .try_buffer_unordered(self.concurrency)\n                .map(|x| x.and_then(|x| x))\n                .boxed()\n        };\n\n        pipeline_with_new_stream!(self, stream)\n    }\n\n    /// Splits the stream into two streams based on a predicate.\n    ///\n    /// Note that this is not lazy. 
It will start consuming the stream immediately\n    /// and send each item to the left or right stream based on the predicate.\n    ///\n    /// The resulting streams are buffered, but should be consumed as soon as possible.\n    /// The channels of the resulting streams are bounded and the parent stream will panic\n    /// if sending fails.\n    ///\n    /// They can be run concurrently, alternated between, or merged back together.\n    ///\n    /// # Panics\n    ///\n    /// Panics if a receiving pipeline's buffer is full or unavailable.\n    #[must_use]\n    pub fn split_by<P>(self, predicate: P) -> (Self, Self)\n    where\n        P: Fn(&Result<Node<T>>) -> bool + Send + Sync + 'static,\n    {\n        let predicate = Arc::new(predicate);\n\n        let (left_tx, left_rx) = mpsc::channel(1000);\n        let (right_tx, right_rx) = mpsc::channel(1000);\n\n        let stream = self.stream;\n        let span = trace_span!(\"split_by\");\n        tokio::spawn(\n            async move {\n                stream\n                    .for_each_concurrent(self.concurrency, move |item| {\n                        let predicate = Arc::clone(&predicate);\n                        let left_tx = left_tx.clone();\n                        let right_tx = right_tx.clone();\n                        async move {\n                            if predicate(&item) {\n                                tracing::trace!(?item, \"Sending to left stream\");\n                                left_tx\n                                    .send(item)\n                                    .await\n                                    .expect(\"Failed to send to left stream\");\n                            } else {\n                                tracing::trace!(?item, \"Sending to right stream\");\n                                right_tx\n                                    .send(item)\n                                    .await\n                                    .expect(\"Failed to send to 
right stream\");\n                            }\n                        }\n                    })\n                    .await;\n            }\n            .instrument(span.or_current()),\n        );\n\n        let left_pipeline = pipeline_with_new_stream!(self, left_rx);\n\n        let right_pipeline = pipeline_with_new_stream!(self, right_rx);\n\n        (left_pipeline, right_pipeline)\n    }\n\n    /// Merges two streams into one\n    ///\n    /// This is useful for merging two streams that have been split using the `split_by` method.\n    ///\n    /// The full stream can then be processed using the `run` method.\n    #[must_use]\n    pub fn merge(self, other: Self) -> Self {\n        let stream = tokio_stream::StreamExt::merge(self.stream, other.stream);\n\n        Self {\n            stream: stream.boxed().into(),\n            ..self\n        }\n    }\n\n    /// Throttles the stream of nodes, limiting the rate to 1 per duration.\n    ///\n    /// Useful for rate limiting the indexing pipeline. Uses `tokio_stream::StreamExt::throttle`\n    /// internally which has a granularity of 1ms.\n    #[must_use]\n    pub fn throttle(mut self, duration: impl Into<Duration>) -> Self {\n        self.stream = tokio_stream::StreamExt::throttle(self.stream, duration.into())\n            .boxed()\n            .into();\n        self\n    }\n\n    // Silently filters out errors encountered by the pipeline.\n    //\n    // This method filters out errors encountered by the pipeline, preventing them from bubbling up\n    // and terminating the stream. 
Note that errors are not logged.\n    #[must_use]\n    pub fn filter_errors(mut self) -> Self {\n        self.stream = self\n            .stream\n            .filter_map(|result| async {\n                match result {\n                    Ok(node) => Some(Ok(node)),\n                    Err(_e) => None,\n                }\n            })\n            .boxed()\n            .into();\n        self\n    }\n\n    /// Provide a closure to selectively filter nodes or errors\n    ///\n    /// This allows you to skip specific errors or nodes, or do ad hoc inspection.\n    ///\n    /// If the closure returns true, the result is kept, otherwise it is skipped.\n    #[must_use]\n    pub fn filter<F>(mut self, filter: F) -> Self\n    where\n        F: Fn(&Result<Node<T>>) -> bool + Send + Sync + 'static,\n    {\n        self.stream = self\n            .stream\n            .filter(move |result| {\n                let will_retain = filter(result);\n\n                async move { will_retain }\n            })\n            .boxed()\n            .into();\n        self\n    }\n\n    /// Logs all results processed by the pipeline.\n    ///\n    /// This method logs all results processed by the pipeline at the `DEBUG` level.\n    #[must_use]\n    pub fn log_all(self) -> Self {\n        self.log_errors().log_nodes()\n    }\n\n    /// Returns a snapshot of the current pipeline statistics\n    ///\n    /// This method provides real-time access to pipeline statistics during and after\n    /// execution. 
The returned statistics include node counts, token usage, and timing\n    /// information.\n    ///\n    /// # Example\n    ///\n    /// ```rust,ignore\n    /// let pipeline = Pipeline::from_loader(loader).then(transformer);\n    ///\n    /// // During or after execution\n    /// let stats = pipeline.stats();\n    /// println!(\"Processed {} nodes\", stats.nodes_processed);\n    /// ```\n    #[must_use]\n    pub fn stats(&self) -> swiftide_core::statistics::PipelineStats {\n        self.stats.get_stats()\n    }\n\n    /// Returns a reference to the statistics collector\n    ///\n    /// This provides direct access to the `StatsCollector` for recording additional\n    /// metrics or for use by transformers that need to report their own statistics.\n    #[must_use]\n    pub fn stats_collector(&self) -> &StatsCollector {\n        &self.stats\n    }\n\n    /// Logs all errors encountered by the pipeline.\n    ///\n    /// This method logs all errors encountered by the pipeline at the `ERROR` level.\n    #[must_use]\n    pub fn log_errors(mut self) -> Self {\n        self.stream = self\n            .stream\n            .inspect_err(|e| tracing::error!(?e, \"Error processing node\"))\n            .boxed()\n            .into();\n        self\n    }\n\n    /// Logs all nodes processed by the pipeline.\n    ///\n    /// This method logs all nodes processed by the pipeline at the `DEBUG` level.\n    #[must_use]\n    pub fn log_nodes(mut self) -> Self {\n        self.stream = self\n            .stream\n            .inspect_ok(|node| tracing::debug!(?node, \"Processed node\"))\n            .boxed()\n            .into();\n        self\n    }\n\n    /// Runs the indexing pipeline.\n    ///\n    /// This method processes the stream of nodes, applying all configured transformations and\n    /// storing the results.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` indicating the success or failure of the pipeline execution.\n    ///\n    /// # Errors\n    ///\n  
  /// Returns an error if no storage backend is configured or if any stage of the pipeline fails.\n    #[tracing::instrument(skip_all, fields(total_nodes), name = \"indexing_pipeline.run\")]\n    pub async fn run(mut self) -> Result<()> {\n        self.stats.start();\n\n        tracing::info!(\n            \"Starting indexing pipeline with {} concurrency\",\n            self.concurrency\n        );\n\n        // Ensure all storage backends are set up before processing nodes\n        let setup_futures = self\n            .storage_setup_fns\n            .into_iter()\n            .map(|func| async move { func().await })\n            .collect::<Vec<_>>();\n        futures_util::future::try_join_all(setup_futures).await?;\n\n        let mut total_nodes = 0u64;\n\n        while let Some(_result) = self.stream.try_next().await? {\n            total_nodes += 1;\n            // Count successful nodes as stored (nodes that reach the end of the stream)\n            self.stats.increment_nodes_stored(1);\n        }\n\n        self.stats.increment_nodes_processed(total_nodes);\n        self.stats.complete();\n\n        let stats = self.stats.get_stats();\n        let elapsed = stats.duration();\n\n        if let Some(duration) = elapsed {\n            let elapsed_secs = duration.as_secs_f64();\n            let nodes_per_sec = stats.nodes_per_second().unwrap_or(0.0);\n\n            tracing::info!(\n                nodes_processed = total_nodes,\n                nodes_stored = stats.nodes_stored,\n                total_tokens = stats.total_tokens(),\n                total_requests = stats.total_requests(),\n                elapsed_secs,\n                nodes_per_sec,\n                \"Pipeline completed\"\n            );\n        }\n\n        tracing::Span::current().record(\"total_nodes\", total_nodes);\n\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n\n    use super::*;\n    use crate::persist::MemoryStorage;\n    use mockall::Sequence;\n    use 
swiftide_core::indexing::*;\n\n    /// Tests a simple run of the indexing pipeline.\n    #[test_log::test(tokio::test)]\n    async fn test_simple_run() {\n        let mut loader = MockLoader::new();\n        let mut transformer = MockTransformer::new();\n        let mut batch_transformer = MockBatchableTransformer::new();\n        let mut chunker = MockChunkerTransformer::new();\n        let mut storage = MockPersist::new();\n\n        let mut seq = Sequence::new();\n\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| vec![Ok(Node::default())].into());\n\n        transformer.expect_transform_node().returning(|mut node| {\n            node.chunk = \"transformed\".to_string();\n            Ok(node)\n        });\n        transformer.expect_concurrency().returning(|| None);\n        transformer.expect_name().returning(|| \"transformer\");\n\n        batch_transformer\n            .expect_batch_transform()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|nodes| IndexingStream::iter(nodes.into_iter().map(Ok)));\n        batch_transformer.expect_concurrency().returning(|| None);\n        batch_transformer.expect_name().returning(|| \"transformer\");\n        batch_transformer.expect_batch_size().returning(|| None);\n\n        chunker\n            .expect_transform_node()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|node| {\n                let mut nodes = vec![];\n                for i in 0..3 {\n                    let mut node = node.clone();\n                    node.chunk = format!(\"transformed_chunk_{i}\");\n                    nodes.push(Ok(node));\n                }\n                nodes.into()\n            });\n        chunker.expect_concurrency().returning(|| None);\n        chunker.expect_name().returning(|| \"chunker\");\n\n        storage.expect_setup().returning(|| Ok(()));\n        
storage.expect_batch_size().returning(|| None);\n        storage\n            .expect_store()\n            .times(3)\n            .in_sequence(&mut seq)\n            .withf(|node| node.chunk.starts_with(\"transformed_chunk_\"))\n            .returning(Ok);\n        storage.expect_name().returning(|| \"storage\");\n\n        let pipeline = Pipeline::from_loader(loader)\n            .then(transformer)\n            .then_in_batch(batch_transformer)\n            .then_chunk(chunker)\n            .then_store_with(storage);\n\n        pipeline.run().await.unwrap();\n    }\n\n    #[tokio::test]\n    async fn test_skipping_errors() {\n        let mut loader = MockLoader::new();\n        let mut transformer = MockTransformer::new();\n        let mut storage = MockPersist::new();\n        let mut seq = Sequence::new();\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| vec![Ok(Node::default())].into());\n        transformer\n            .expect_transform_node()\n            .returning(|_node| Err(anyhow::anyhow!(\"Error transforming node\")));\n        transformer.expect_concurrency().returning(|| None);\n        transformer.expect_name().returning(|| \"mock\");\n        storage.expect_setup().returning(|| Ok(()));\n        storage.expect_batch_size().returning(|| None);\n        storage.expect_store().times(0).returning(Ok);\n        let pipeline = Pipeline::from_loader(loader)\n            .then(transformer)\n            .then_store_with(storage)\n            .filter_errors();\n        pipeline.run().await.unwrap();\n    }\n\n    #[tokio::test]\n    async fn test_concurrent_calls_with_simple_transformer() {\n        let mut loader = MockLoader::new();\n        let mut transformer = MockTransformer::new();\n        let mut storage = MockPersist::new();\n        let mut seq = Sequence::new();\n        loader\n            .expect_into_stream()\n            .times(1)\n            
.in_sequence(&mut seq)\n            .returning(|| {\n                vec![\n                    Ok(Node::default()),\n                    Ok(Node::default()),\n                    Ok(Node::default()),\n                ]\n                .into()\n            });\n        transformer\n            .expect_transform_node()\n            .times(3)\n            .in_sequence(&mut seq)\n            .returning(|mut node| {\n                node.chunk = \"transformed\".to_string();\n                Ok(node)\n            });\n        transformer.expect_concurrency().returning(|| Some(3));\n        transformer.expect_name().returning(|| \"transformer\");\n        storage.expect_setup().returning(|| Ok(()));\n        storage.expect_batch_size().returning(|| None);\n        storage.expect_store().times(3).returning(Ok);\n        storage.expect_name().returning(|| \"storage\");\n\n        let pipeline = Pipeline::from_loader(loader)\n            .then(transformer)\n            .then_store_with(storage);\n        pipeline.run().await.unwrap();\n    }\n\n    #[tokio::test]\n    async fn test_arbitrary_closures_as_transformer() {\n        let mut loader = MockLoader::new();\n        let transformer = |node: TextNode| {\n            let mut node = node;\n            node.chunk = \"transformed\".to_string();\n            Ok(node)\n        };\n        let storage = MemoryStorage::default();\n        let mut seq = Sequence::new();\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| vec![Ok(TextNode::default())].into());\n\n        let pipeline = Pipeline::from_loader(loader)\n            .then(transformer)\n            .then_store_with(storage.clone());\n        pipeline.run().await.unwrap();\n\n        dbg!(storage.clone());\n        let processed_node = storage.get(\"0\").await.unwrap();\n        assert_eq!(processed_node.chunk, \"transformed\");\n    }\n\n    #[tokio::test]\n    async fn 
test_arbitrary_closures_as_batch_transformer() {\n        let mut loader = MockLoader::new();\n        let batch_transformer = |nodes: Vec<TextNode>| {\n            IndexingStream::iter(nodes.into_iter().map(|mut node| {\n                node.chunk = \"transformed\".to_string();\n                Ok(node)\n            }))\n        };\n        let storage = MemoryStorage::default();\n        let mut seq = Sequence::new();\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| vec![Ok(TextNode::default())].into());\n\n        let pipeline = Pipeline::from_loader(loader)\n            .then_in_batch(batch_transformer)\n            .then_store_with(storage.clone());\n        pipeline.run().await.unwrap();\n\n        dbg!(storage.clone());\n        let processed_node = storage.get(\"0\").await.unwrap();\n        assert_eq!(processed_node.chunk, \"transformed\");\n    }\n\n    #[tokio::test]\n    async fn test_filter_closure() {\n        let mut loader = MockLoader::new();\n        let storage = MemoryStorage::default();\n        let mut seq = Sequence::new();\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| {\n                vec![\n                    Ok(TextNode::default()),\n                    Ok(TextNode::new(\"skip\")),\n                    Ok(TextNode::default()),\n                ]\n                .into()\n            });\n        let pipeline = Pipeline::from_loader(loader)\n            .filter(|result| {\n                let node = result.as_ref().unwrap();\n                node.chunk != \"skip\"\n            })\n            .then_store_with(storage.clone());\n        pipeline.run().await.unwrap();\n        let nodes = storage.get_all().await;\n        assert_eq!(nodes.len(), 2);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_split_and_merge() {\n        let mut loader = 
MockLoader::new();\n        let storage = MemoryStorage::default();\n        let mut seq = Sequence::new();\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| {\n                vec![\n                    Ok(TextNode::default()),\n                    Ok(TextNode::new(\"will go left\")),\n                    Ok(TextNode::default()),\n                ]\n                .into()\n            });\n\n        let pipeline = Pipeline::from_loader(loader);\n        let (mut left, mut right) = pipeline.split_by(|node| {\n            if let Ok(node) = node {\n                node.chunk.starts_with(\"will go left\")\n            } else {\n                false\n            }\n        });\n\n        // change the chunk to 'left'\n        left = left\n            .then(move |mut node: TextNode| {\n                node.chunk = \"left\".to_string();\n\n                Ok(node)\n            })\n            .log_all();\n\n        right = right.then(move |mut node: TextNode| {\n            node.chunk = \"right\".to_string();\n            Ok(node)\n        });\n\n        left.merge(right)\n            .then_store_with(storage.clone())\n            .run()\n            .await\n            .unwrap();\n        dbg!(storage.clone());\n\n        let all_nodes = storage.get_all_values().await;\n        assert_eq!(\n            all_nodes.iter().filter(|node| node.chunk == \"left\").count(),\n            1\n        );\n        assert_eq!(\n            all_nodes\n                .iter()\n                .filter(|node| node.chunk == \"right\")\n                .count(),\n            2\n        );\n    }\n\n    #[tokio::test]\n    async fn test_all_steps_should_work_as_dyn_box() {\n        let mut loader = MockLoader::new();\n        loader\n            .expect_into_stream_boxed()\n            .returning(|| vec![Ok(TextNode::default())].into());\n\n        let mut transformer = MockTransformer::new();\n        
transformer.expect_transform_node().returning(Ok);\n        transformer.expect_concurrency().returning(|| None);\n        transformer.expect_name().returning(|| \"mock\");\n\n        let mut batch_transformer = MockBatchableTransformer::new();\n        batch_transformer\n            .expect_batch_transform()\n            .returning(std::convert::Into::into);\n        batch_transformer.expect_concurrency().returning(|| None);\n        batch_transformer.expect_name().returning(|| \"mock\");\n        let mut chunker = MockChunkerTransformer::new();\n        chunker\n            .expect_transform_node()\n            .returning(|node| vec![node].into());\n        chunker.expect_concurrency().returning(|| None);\n        chunker.expect_name().returning(|| \"mock\");\n\n        let mut storage = MockPersist::new();\n        storage.expect_setup().returning(|| Ok(()));\n        storage.expect_store().returning(Ok);\n        storage.expect_batch_size().returning(|| None);\n        storage.expect_name().returning(|| \"mock\");\n\n        let pipeline = Pipeline::from_loader(Box::new(loader) as Box<dyn Loader<Output = String>>)\n            .then(Box::new(transformer) as Box<dyn Transformer<Input = String, Output = String>>)\n            .then_in_batch(Box::new(batch_transformer) as Box<dyn BatchableTransformer<Input = String, Output = String>>)\n            .then_chunk(Box::new(chunker) as Box<dyn ChunkerTransformer<Input = String, Output = String>>)\n            .then_store_with(Box::new(storage) as Box<dyn Persist<Input = String, Output = String>>);\n        pipeline.run().await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_pipeline_statistics() {\n        let mut loader = MockLoader::new();\n        let mut storage = MockPersist::new();\n        let mut seq = Sequence::new();\n\n        loader\n            .expect_into_stream()\n            .times(1)\n            .in_sequence(&mut seq)\n            .returning(|| {\n                vec![\n       
             Ok(TextNode::default()),\n                    Ok(TextNode::default()),\n                    Ok(TextNode::default()),\n                ]\n                .into()\n            });\n\n        storage.expect_setup().returning(|| Ok(()));\n        storage.expect_batch_size().returning(|| None);\n        storage.expect_store().times(3).returning(Ok);\n        storage.expect_name().returning(|| \"storage\");\n\n        let pipeline = Pipeline::from_loader(loader).then_store_with(storage);\n\n        // Test that we can access stats before running\n        let initial_stats = pipeline.stats();\n        assert_eq!(initial_stats.nodes_processed, 0);\n        assert_eq!(initial_stats.nodes_stored, 0);\n\n        pipeline.run().await.unwrap();\n\n        // After running, stats should be updated (access via the moved pipeline would not work,\n        // but we verify the internal behavior through the run method's logging)\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_stats_collector_access() {\n        let mut loader = MockLoader::new();\n        let storage = MemoryStorage::default();\n\n        loader\n            .expect_into_stream()\n            .returning(|| vec![Ok(TextNode::default()), Ok(TextNode::default())].into());\n\n        let pipeline = Pipeline::from_loader(loader).then_store_with(storage.clone());\n\n        // Access the stats collector\n        let collector = pipeline.stats_collector();\n\n        // Record some token usage manually (simulating what transformers would do)\n        collector.record_token_usage(\"gpt-4\", 100, 50);\n        collector.record_token_usage(\"gpt-3.5\", 50, 25);\n\n        let stats = collector.get_stats();\n        assert_eq!(stats.total_requests(), 2);\n        assert_eq!(stats.total_tokens(), 225);\n\n        // Run the pipeline\n        pipeline.run().await.unwrap();\n\n        // Verify storage has the nodes\n        let nodes = storage.get_all().await;\n        assert_eq!(nodes.len(), 2);\n    
}\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/chunk_markdown.rs",
    "content": "//! Chunk markdown content into smaller pieces\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse derive_builder::Builder;\nuse swiftide_core::{ChunkerTransformer, indexing::IndexingStream, indexing::TextNode};\nuse text_splitter::{Characters, ChunkConfig, MarkdownSplitter};\n\nconst DEFAULT_MAX_CHAR_SIZE: usize = 2056;\n\n#[derive(Clone, Builder)]\n#[builder(setter(strip_option))]\n/// A transformer that chunks markdown content into smaller pieces.\n///\n/// The transformer will split the markdown content into smaller pieces based on the specified\n/// `max_characters` or `range` of characters.\n///\n/// For further customization, you can use the builder to create a custom splitter.\n///\n/// Technically that might work with every splitter `text_splitter` provides.\npub struct ChunkMarkdown {\n    /// Defaults to `None`. If you use a splitter that is resource heavy, this parameter can be\n    /// tuned.\n    #[builder(default)]\n    concurrency: Option<usize>,\n\n    /// Optional maximum number of characters per chunk.\n    ///\n    /// Defaults to [`DEFAULT_MAX_CHAR_SIZE`].\n    #[builder(default = \"DEFAULT_MAX_CHAR_SIZE\")]\n    max_characters: usize,\n\n    /// A range of minimum and maximum characters per chunk.\n    ///\n    /// Chunks smaller than the range min will be ignored. 
`max_characters` will be ignored if this\n    /// is set.\n    ///\n    /// If you provide a custom chunker with a range, you might want to set the range as well.\n    ///\n    /// Defaults to 0..[`max_characters`]\n    #[builder(default = \"0..DEFAULT_MAX_CHAR_SIZE\")]\n    range: std::ops::Range<usize>,\n\n    /// The markdown splitter from [`text_splitter`]\n    ///\n    /// Defaults to a new [`MarkdownSplitter`] with the specified `max_characters`.\n    #[builder(setter(into), default = \"self.default_client()\")]\n    chunker: Arc<MarkdownSplitter<Characters>>,\n}\n\nimpl std::fmt::Debug for ChunkMarkdown {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"ChunkMarkdown\")\n            .field(\"concurrency\", &self.concurrency)\n            .field(\"max_characters\", &self.max_characters)\n            .field(\"range\", &self.range)\n            .finish()\n    }\n}\n\nimpl Default for ChunkMarkdown {\n    fn default() -> Self {\n        Self::from_max_characters(DEFAULT_MAX_CHAR_SIZE)\n    }\n}\n\nimpl ChunkMarkdown {\n    pub fn builder() -> ChunkMarkdownBuilder {\n        ChunkMarkdownBuilder::default()\n    }\n\n    /// Create a new transformer with a maximum number of characters per chunk.\n    #[allow(clippy::missing_panics_doc)]\n    pub fn from_max_characters(max_characters: usize) -> Self {\n        Self::builder()\n            .max_characters(max_characters)\n            .build()\n            .expect(\"Cannot fail\")\n    }\n\n    /// Create a new transformer with a range of characters per chunk.\n    ///\n    /// Chunks smaller than the range will be ignored.\n    #[allow(clippy::missing_panics_doc)]\n    pub fn from_chunk_range(range: std::ops::Range<usize>) -> Self {\n        Self::builder().range(range).build().expect(\"Cannot fail\")\n    }\n\n    /// Set the number of concurrent chunks to process.\n    #[must_use]\n    pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n        
self.concurrency = Some(concurrency);\n        self\n    }\n\n    fn min_size(&self) -> usize {\n        self.range.start\n    }\n}\n\nimpl ChunkMarkdownBuilder {\n    fn default_client(&self) -> Arc<MarkdownSplitter<Characters>> {\n        let chunk_config: ChunkConfig<Characters> = self\n            .range\n            .clone()\n            .map(ChunkConfig::<Characters>::from)\n            .or_else(|| self.max_characters.map(Into::into))\n            .unwrap_or(DEFAULT_MAX_CHAR_SIZE.into());\n\n        Arc::new(MarkdownSplitter::new(chunk_config))\n    }\n}\n\n#[async_trait]\nimpl ChunkerTransformer for ChunkMarkdown {\n    type Input = String;\n    type Output = String;\n\n    #[tracing::instrument(skip_all)]\n    async fn transform_node(&self, node: TextNode) -> IndexingStream<String> {\n        let chunks = self\n            .chunker\n            .chunks(&node.chunk)\n            .filter_map(|chunk| {\n                let trim = chunk.trim();\n                if trim.is_empty() || trim.len() < self.min_size() {\n                    None\n                } else {\n                    Some(chunk.to_string())\n                }\n            })\n            .collect::<Vec<String>>();\n\n        IndexingStream::iter(\n            chunks\n                .into_iter()\n                .map(move |chunk| TextNode::build_from_other(&node).chunk(chunk).build()),\n        )\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use futures_util::stream::TryStreamExt;\n\n    const MARKDOWN: &str = r\"\n        # Hello, world!\n\n        This is a test markdown document.\n\n        ## Section 1\n\n        This is a paragraph.\n\n        ## Section 2\n\n        This is another paragraph.\n        \";\n\n    #[tokio::test]\n    async fn test_transforming_with_max_characters_and_trimming() {\n        let chunker = ChunkMarkdown::from_max_characters(40);\n\n        let node = 
TextNode::new(MARKDOWN.to_string());\n\n        let nodes: Vec<TextNode> = chunker\n            .transform_node(node)\n            .await\n            .try_collect()\n            .await\n            .unwrap();\n\n        dbg!(&nodes.iter().map(|n| n.chunk.clone()).collect::<Vec<_>>());\n        for line in MARKDOWN.lines().filter(|line| !line.trim().is_empty()) {\n            nodes\n                .iter()\n                .find(|node| node.chunk == line.trim())\n                .unwrap_or_else(|| panic!(\"Line not found: {line}\"));\n        }\n\n        assert_eq!(nodes.len(), 6);\n    }\n\n    #[tokio::test]\n    async fn test_always_within_range() {\n        let ranges = vec![(10..15), (20..25), (30..35), (40..45), (50..55)];\n        for range in ranges {\n            let chunker = ChunkMarkdown::from_chunk_range(range.clone());\n            let node = TextNode::new(MARKDOWN.to_string());\n            let nodes: Vec<TextNode> = chunker\n                .transform_node(node)\n                .await\n                .try_collect()\n                .await\n                .unwrap();\n            // Assert all nodes chunk length within the range\n            assert!(\n                nodes.iter().all(|node| {\n                    let len = node.chunk.len();\n                    range.contains(&len)\n                }),\n                \"{:?}, {:?}\",\n                range,\n                nodes.iter().filter(|node| {\n                    let len = node.chunk.len();\n                    !range.contains(&len)\n                })\n            );\n        }\n    }\n\n    #[test]\n    fn test_builder() {\n        ChunkMarkdown::builder()\n            .chunker(MarkdownSplitter::new(40))\n            .concurrency(10)\n            .range(10..20)\n            .build()\n            .unwrap();\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/chunk_text.rs",
    "content": "//! Chunk text content into smaller pieces\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse derive_builder::Builder;\nuse swiftide_core::{ChunkerTransformer, indexing::IndexingStream, indexing::TextNode};\nuse text_splitter::{Characters, ChunkConfig, TextSplitter};\n\nconst DEFAULT_MAX_CHAR_SIZE: usize = 2056;\n\n#[derive(Debug, Clone, Builder)]\n#[builder(setter(strip_option))]\n/// A transformer that chunks text content into smaller pieces.\n///\n/// The transformer will split the text content into smaller pieces based on the specified\n/// `max_characters` or `range` of characters.\n///\n/// For further customization, you can use the builder to create a custom splitter. Uses\n/// `text_splitter` under the hood.\n///\n/// Technically that might work with every splitter `text_splitter` provides.\npub struct ChunkText {\n    /// The max number of concurrent chunks to process.\n    ///\n    /// Defaults to `None`. If you use a splitter that is resource heavy, this parameter can be\n    /// tuned.\n    #[builder(default)]\n    concurrency: Option<usize>,\n\n    /// Optional maximum number of characters per chunk.\n    ///\n    /// Defaults to [`DEFAULT_MAX_CHAR_SIZE`].\n    #[builder(default = \"DEFAULT_MAX_CHAR_SIZE\")]\n    #[allow(dead_code)]\n    max_characters: usize,\n\n    /// A range of minimum and maximum characters per chunk.\n    ///\n    /// Chunks smaller than the range min will be ignored. 
`max_characters` will be ignored if this\n    /// is set.\n    ///\n    /// If you provide a custom chunker with a range, you might want to set the range as well.\n    ///\n    /// Defaults to 0..[`max_characters`]\n    #[builder(default = \"0..DEFAULT_MAX_CHAR_SIZE\")]\n    range: std::ops::Range<usize>,\n\n    /// The text splitter from [`text_splitter`]\n    ///\n    /// Defaults to a new [`TextSplitter`] with the specified `max_characters`.\n    #[builder(setter(into), default = \"self.default_client()\")]\n    chunker: Arc<TextSplitter<Characters>>,\n}\n\nimpl Default for ChunkText {\n    fn default() -> Self {\n        Self::from_max_characters(DEFAULT_MAX_CHAR_SIZE)\n    }\n}\n\nimpl ChunkText {\n    pub fn builder() -> ChunkTextBuilder {\n        ChunkTextBuilder::default()\n    }\n\n    /// Create a new transformer with a maximum number of characters per chunk.\n    #[allow(clippy::missing_panics_doc)]\n    pub fn from_max_characters(max_characters: usize) -> Self {\n        Self::builder()\n            .max_characters(max_characters)\n            .build()\n            .expect(\"Cannot fail\")\n    }\n\n    /// Create a new transformer with a range of characters per chunk.\n    ///\n    /// Chunks smaller than the range will be ignored.\n    #[allow(clippy::missing_panics_doc)]\n    pub fn from_chunk_range(range: std::ops::Range<usize>) -> Self {\n        Self::builder().range(range).build().expect(\"Cannot fail\")\n    }\n\n    /// Set the number of concurrent chunks to process.\n    #[must_use]\n    pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n        self.concurrency = Some(concurrency);\n        self\n    }\n\n    fn min_size(&self) -> usize {\n        self.range.start\n    }\n}\n\nimpl ChunkTextBuilder {\n    fn default_client(&self) -> Arc<TextSplitter<Characters>> {\n        let chunk_config: ChunkConfig<Characters> = self\n            .range\n            .clone()\n            .map(ChunkConfig::<Characters>::from)\n            
.or_else(|| self.max_characters.map(Into::into))\n            .unwrap_or(DEFAULT_MAX_CHAR_SIZE.into());\n\n        Arc::new(TextSplitter::new(chunk_config))\n    }\n}\n#[async_trait]\nimpl ChunkerTransformer for ChunkText {\n    type Input = String;\n    type Output = String;\n\n    #[tracing::instrument(skip_all, name = \"transformers.chunk_text\")]\n    async fn transform_node(&self, node: TextNode) -> IndexingStream<String> {\n        let chunks = self\n            .chunker\n            .chunks(&node.chunk)\n            .filter_map(|chunk| {\n                let trim = chunk.trim();\n                if trim.is_empty() || trim.len() < self.min_size() {\n                    None\n                } else {\n                    Some(chunk.to_string())\n                }\n            })\n            .collect::<Vec<String>>();\n\n        IndexingStream::iter(\n            chunks\n                .into_iter()\n                .map(move |chunk| TextNode::build_from_other(&node).chunk(chunk).build()),\n        )\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use futures_util::stream::TryStreamExt;\n\n    const TEXT: &str = r\"\n        This is a text.\n\n        This is a paragraph.\n\n        This is another paragraph.\n        \";\n\n    #[tokio::test]\n    async fn test_transforming_with_max_characters_and_trimming() {\n        let chunker = ChunkText::from_max_characters(40);\n\n        let node = TextNode::new(TEXT.to_string());\n\n        let nodes: Vec<TextNode> = chunker\n            .transform_node(node)\n            .await\n            .try_collect()\n            .await\n            .unwrap();\n\n        for line in TEXT.lines().filter(|line| !line.trim().is_empty()) {\n            assert!(nodes.iter().any(|node| node.chunk == line.trim()));\n        }\n\n        assert_eq!(nodes.len(), 3);\n    }\n\n    #[tokio::test]\n    async fn test_always_within_range() {\n      
  let ranges = vec![(10..15), (20..25), (30..35), (40..45), (50..55)];\n        for range in ranges {\n            let chunker = ChunkText::from_chunk_range(range.clone());\n            let node = TextNode::new(TEXT.to_string());\n            let nodes: Vec<TextNode> = chunker\n                .transform_node(node)\n                .await\n                .try_collect()\n                .await\n                .unwrap();\n            // Assert all nodes chunk length within the range\n            assert!(\n                nodes.iter().all(|node| {\n                    let len = node.chunk.len();\n                    range.contains(&len)\n                }),\n                \"{:?}, {:?}\",\n                range,\n                nodes.iter().filter(|node| {\n                    let len = node.chunk.len();\n                    !range.contains(&len)\n                })\n            );\n        }\n    }\n\n    #[test]\n    fn test_builder() {\n        ChunkText::builder()\n            .chunker(text_splitter::TextSplitter::new(40))\n            .concurrency(10)\n            .range(10..20)\n            .build()\n            .unwrap();\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/embed.rs",
    "content": "//! Generic embedding transformer\nuse std::{collections::VecDeque, sync::Arc};\n\nuse anyhow::bail;\nuse async_trait::async_trait;\nuse swiftide_core::{\n    BatchableTransformer, EmbeddingModel, WithBatchIndexingDefaults, WithIndexingDefaults,\n    indexing::{IndexingStream, TextNode},\n};\n\n/// A transformer that can generate embeddings for a `TextNode`\n///\n/// This file defines the `Embed` struct and its implementation of the `BatchableTransformer` trait.\n#[derive(Clone)]\npub struct Embed {\n    model: Arc<dyn EmbeddingModel>,\n    concurrency: Option<usize>,\n    batch_size: Option<usize>,\n}\n\nimpl std::fmt::Debug for Embed {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Embed\")\n            .field(\"concurrency\", &self.concurrency)\n            .field(\"batch_size\", &self.batch_size)\n            .finish()\n    }\n}\n\nimpl Embed {\n    /// Creates a new instance of the `Embed` transformer.\n    ///\n    /// # Parameters\n    ///\n    /// * `model` - An embedding model that implements the `EmbeddingModel` trait.\n    ///\n    /// # Returns\n    ///\n    /// A new instance of `Embed`.\n    pub fn new(model: impl EmbeddingModel + 'static) -> Self {\n        Self {\n            model: Arc::new(model),\n            concurrency: None,\n            batch_size: None,\n        }\n    }\n\n    #[must_use]\n    pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n        self.concurrency = Some(concurrency);\n        self\n    }\n\n    /// Sets the batch size for the transformer.\n    ///\n    /// If the batch size is not set, the transformer will use the default batch size set by the\n    /// pipeline.\n    ///\n    /// # Parameters\n    ///\n    /// * `batch_size` - The batch size to use for the transformer.\n    ///\n    /// # Returns\n    ///\n    /// A new instance of `Embed`.\n    #[must_use]\n    pub fn with_batch_size(mut self, batch_size: usize) -> Self {\n
        self.batch_size = Some(batch_size);\n        self\n    }\n}\n\nimpl WithBatchIndexingDefaults for Embed {}\nimpl WithIndexingDefaults for Embed {}\n\n#[async_trait]\nimpl BatchableTransformer for Embed {\n    type Input = String;\n    type Output = String;\n\n    /// Transforms a batch of `TextNode` objects by generating embeddings for them.\n    ///\n    /// # Parameters\n    ///\n    /// * `nodes` - A vector of `TextNode` objects to be transformed.\n    ///\n    /// # Returns\n    ///\n    /// An `IndexingStream` containing the transformed `TextNode` objects with their embeddings.\n    ///\n    /// # Errors\n    ///\n    /// If the embedding process fails, the function returns a stream with the error.\n    #[tracing::instrument(skip_all, name = \"transformers.embed\")]\n    async fn batch_transform(&self, mut nodes: Vec<TextNode>) -> IndexingStream<String> {\n        // TODO: We should drop chunks that go over the token limit of the EmbedModel\n\n        // EmbeddedFields grouped by node stored in order of processed nodes.\n        let mut embeddings_keys_groups = VecDeque::with_capacity(nodes.len());\n        // Embeddable data of every node stored in order of processed nodes.\n        let embeddables_data = nodes\n            .iter_mut()\n            .fold(Vec::new(), |mut embeddables_data, node| {\n                let embeddables = node.as_embeddables();\n                let mut embeddables_keys = Vec::with_capacity(embeddables.len());\n                for (embeddable_key, embeddable_data) in embeddables {\n                    embeddables_keys.push(embeddable_key);\n                    embeddables_data.push(embeddable_data);\n                }\n                embeddings_keys_groups.push_back(embeddables_keys);\n                embeddables_data\n            });\n\n        // Embeddings vectors of every node stored in order of processed nodes.\n        let mut embeddings = match self.model.embed(embeddables_data).await {\n            Ok(embeddings) => VecDeque::from(embeddings),\n
           Err(err) => return IndexingStream::iter(vec![Err(err.into())]),\n        };\n\n        // Iterator of nodes with embeddings vectors map.\n        let nodes_iter = nodes.into_iter().map(move |mut node| {\n            let Some(embedding_keys) = embeddings_keys_groups.pop_front() else {\n                bail!(\"Missing embedding data\");\n            };\n            node.vectors = embedding_keys\n                .into_iter()\n                .map(|embedded_field| {\n                    embeddings\n                        .pop_front()\n                        .map(|embedding| (embedded_field, embedding))\n                })\n                .collect();\n            Ok(node)\n        });\n\n        IndexingStream::iter(nodes_iter)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        self.batch_size\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use swiftide_core::indexing::{EmbedMode, EmbeddedField, Metadata, TextNode};\n    use swiftide_core::{BatchableTransformer, MockEmbeddingModel};\n\n    use super::Embed;\n\n    use futures_util::StreamExt;\n    use mockall::predicate::*;\n    use test_case::test_case;\n\n    use swiftide_core::chat_completion::errors::LanguageModelError;\n\n    #[derive(Clone)]\n    struct TestData<'a> {\n        pub embed_mode: EmbedMode,\n        pub chunk: &'a str,\n        pub metadata: Metadata,\n        pub expected_embedables: Vec<&'a str>,\n        pub expected_vectors: Vec<(EmbeddedField, Vec<f32>)>,\n    }\n\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::SingleWithMetadata,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_1\", \"prompt_1\")]),\n            expected_embedables: vec![\"meta_1: prompt_1\\nchunk_1\"],\n            expected_vectors: vec![(EmbeddedField::Combined, vec![1f32])]\n        },\n        TestData {\n            embed_mode: EmbedMode::SingleWithMetadata,\n   
         chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_2\", \"prompt_2\")]),\n            expected_embedables: vec![\"meta_2: prompt_2\\nchunk_2\"],\n            expected_vectors: vec![(EmbeddedField::Combined, vec![2f32])]\n        }\n    ]; \"Multiple nodes EmbedMode::SingleWithMetadata with metadata.\")]\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::PerField,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_1\", \"prompt 1\")]),\n            expected_embedables: vec![\"chunk_1\", \"prompt 1\"],\n            expected_vectors: vec![\n                (EmbeddedField::Chunk, vec![10f32]),\n                (EmbeddedField::Metadata(\"meta_1\".into()), vec![11f32])\n            ]\n        },\n        TestData {\n            embed_mode: EmbedMode::PerField,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_2\", \"prompt 2\")]),\n            expected_embedables: vec![\"chunk_2\", \"prompt 2\"],\n            expected_vectors: vec![\n                (EmbeddedField::Chunk, vec![20f32]),\n                (EmbeddedField::Metadata(\"meta_2\".into()), vec![21f32])\n            ]\n        }\n    ]; \"Multiple nodes EmbedMode::PerField with metadata.\")]\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_1\", \"prompt 1\")]),\n            expected_embedables: vec![\"meta_1: prompt 1\\nchunk_1\", \"chunk_1\", \"prompt 1\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![10f32]),\n                (EmbeddedField::Chunk, vec![11f32]),\n                (EmbeddedField::Metadata(\"meta_1\".into()), vec![12f32])\n            ]\n        },\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_2\", \"prompt 2\")]),\n            
expected_embedables: vec![\"meta_2: prompt 2\\nchunk_2\", \"chunk_2\", \"prompt 2\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![20f32]),\n                (EmbeddedField::Chunk, vec![21f32]),\n                (EmbeddedField::Metadata(\"meta_2\".into()), vec![22f32])\n            ]\n        }\n    ]; \"Multiple nodes EmbedMode::Both with metadata.\")]\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_10\", \"prompt 10\"), (\"meta_11\", \"prompt 11\"), (\"meta_12\", \"prompt 12\")]),\n            expected_embedables: vec![\"meta_10: prompt 10\\nmeta_11: prompt 11\\nmeta_12: prompt 12\\nchunk_1\", \"chunk_1\", \"prompt 10\", \"prompt 11\", \"prompt 12\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![10f32]),\n                (EmbeddedField::Chunk, vec![11f32]),\n                (EmbeddedField::Metadata(\"meta_10\".into()), vec![12f32]),\n                (EmbeddedField::Metadata(\"meta_11\".into()), vec![13f32]),\n                (EmbeddedField::Metadata(\"meta_12\".into()), vec![14f32]),\n            ]\n        },\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_20\", \"prompt 20\"), (\"meta_21\", \"prompt 21\"), (\"meta_22\", \"prompt 22\")]),\n            expected_embedables: vec![\"meta_20: prompt 20\\nmeta_21: prompt 21\\nmeta_22: prompt 22\\nchunk_2\", \"chunk_2\", \"prompt 20\", \"prompt 21\", \"prompt 22\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![20f32]),\n                (EmbeddedField::Chunk, vec![21f32]),\n                (EmbeddedField::Metadata(\"meta_20\".into()), vec![22f32]),\n                (EmbeddedField::Metadata(\"meta_21\".into()), vec![23f32]),\n                (EmbeddedField::Metadata(\"meta_22\".into()), 
vec![24f32])\n            ]\n        }\n    ]; \"Multiple nodes EmbedMode::Both with multiple metadata.\")]\n    #[test_case(vec![]; \"No ingestion nodes\")]\n    #[tokio::test]\n    async fn batch_transform(test_data: Vec<TestData<'_>>) {\n        let test_nodes: Vec<TextNode> = test_data\n            .iter()\n            .map(|data| {\n                TextNode::builder()\n                    .chunk(data.chunk)\n                    .metadata(data.metadata.clone())\n                    .embed_mode(data.embed_mode)\n                    .build()\n                    .unwrap()\n            })\n            .collect();\n\n        let expected_nodes: Vec<TextNode> = test_nodes\n            .clone()\n            .into_iter()\n            .zip(test_data.iter())\n            .map(|(mut expected_node, test_data)| {\n                expected_node.vectors = Some(test_data.expected_vectors.iter().cloned().collect());\n                expected_node\n            })\n            .collect();\n\n        let expected_embeddables_batch = test_data\n            .clone()\n            .iter()\n            .flat_map(|d| &d.expected_embedables)\n            .map(ToString::to_string)\n            .collect::<Vec<String>>();\n        let expected_vectors_batch: Vec<Vec<f32>> = test_data\n            .clone()\n            .iter()\n            .flat_map(|d| d.expected_vectors.iter().map(|(_, v)| v).cloned())\n            .collect();\n\n        let mut model_mock = MockEmbeddingModel::new();\n        model_mock\n            .expect_embed()\n            .withf(move |embeddables| expected_embeddables_batch.eq(embeddables))\n            .times(1)\n            .returning_st(move |_| Ok(expected_vectors_batch.clone()));\n\n        let embed = Embed::new(model_mock);\n\n        let mut stream = embed.batch_transform(test_nodes).await;\n\n        for expected_node in expected_nodes {\n            let ingested_node = stream\n                .next()\n                .await\n                
.expect(\"IngestionStream has same length as expected_nodes\")\n                .expect(\"Is OK\");\n            assert_eq!(ingested_node, expected_node);\n        }\n    }\n\n    #[tokio::test]\n    async fn test_returns_error_properly_if_embed_fails() {\n        let test_nodes = vec![TextNode::new(\"chunk\")];\n        let mut model_mock = MockEmbeddingModel::new();\n        model_mock\n            .expect_embed()\n            .times(1)\n            .returning(|_| Err(LanguageModelError::PermanentError(\"error\".into())));\n        let embed = Embed::new(model_mock);\n        let mut stream = embed.batch_transform(test_nodes).await;\n        let error = stream\n            .next()\n            .await\n            .expect(\"IngestionStream has same length as expected_nodes\")\n            .expect_err(\"Is Err\");\n\n        assert_eq!(error.to_string(), \"Permanent error: error\");\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/metadata_keywords.rs",
    "content": "//! Extract keywords from a node and add them as metadata\n//! This module defines the `MetadataKeywords` struct and its associated methods,\n//! which are used for generating metadata in the form of keywords\n//! for a given text. It interacts with a client (e.g., `OpenAI`) to generate\n//! the keywords based on the text chunk in a `TextNode`.\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// `MetadataKeywords` is responsible for generating keywords\n/// for a given text chunk. It uses a templated prompt to interact with a client\n/// that implements the `SimplePrompt` trait.\n#[swiftide_macros::indexing_transformer(\n    default_prompt_file = \"prompts/metadata_keywords.prompt.md\",\n    metadata_field_name = \"Keywords\"\n)]\npub struct MetadataKeywords {}\n\n#[async_trait]\nimpl Transformer for MetadataKeywords {\n    type Input = String;\n    type Output = String;\n\n    /// Transforms a `TextNode` by extracting keywords\n    /// based on the text chunk within the node.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The `TextNode` containing the text chunk to process.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` containing the transformed `TextNode` with added metadata,\n    /// or an error if the transformation fails.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the client fails to generate\n    /// keywords from the provided prompt.\n    #[tracing::instrument(skip_all, name = \"transformers.metadata_keywords\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        let prompt = self.prompt_template.clone().with_node(&node);\n        let response = self.prompt(prompt).await?;\n\n        node.metadata.insert(NAME, response);\n\n        Ok(node)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n
    use swiftide_core::MockSimplePrompt;\n\n    use super::*;\n\n    #[test_log::test(tokio::test)]\n    async fn test_template() {\n        let template = default_prompt();\n\n        let prompt = template.clone().with_node(&TextNode::new(\"test\"));\n        insta::assert_snapshot!(prompt.render().unwrap());\n    }\n\n    #[tokio::test]\n    async fn test_metadata_keywords() {\n        let mut client = MockSimplePrompt::new();\n\n        client\n            .expect_prompt()\n            .returning(|_| Ok(\"important,keywords\".to_string()));\n\n        let transformer = MetadataKeywords::builder().client(client).build().unwrap();\n        let node = TextNode::new(\"Some text\");\n\n        let result = transformer.transform_node(node).await.unwrap();\n\n        assert_eq!(\n            result.metadata.get(\"Keywords\").unwrap(),\n            \"important,keywords\"\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/metadata_qa_text.rs",
    "content": "//! Generates questions and answers from a given text chunk and adds them as metadata.\n//! This module defines the `MetadataQAText` struct and its associated methods,\n//! which are used for generating metadata in the form of questions and answers\n//! from a given text. It interacts with a client (e.g., `OpenAI`) to generate\n//! these questions and answers based on the text chunk in a `TextNode`.\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// `MetadataQAText` is responsible for generating questions and answers\n/// from a given text chunk. It uses a templated prompt to interact with a client\n/// that implements the `SimplePrompt` trait.\n#[swiftide_macros::indexing_transformer(\n    metadata_field_name = \"Questions and Answers (text)\",\n    default_prompt_file = \"prompts/metadata_qa_text.prompt.md\"\n)]\npub struct MetadataQAText {\n    #[builder(default = \"5\")]\n    num_questions: usize,\n}\n\n#[async_trait]\nimpl Transformer for MetadataQAText {\n    type Input = String;\n    type Output = String;\n\n    /// Transforms a `TextNode` by generating questions and answers\n    /// based on the text chunk within the node.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The `TextNode` containing the text chunk to process.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` containing the transformed `TextNode` with added metadata,\n    /// or an error if the transformation fails.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the client fails to generate\n    /// questions and answers from the provided prompt.\n    #[tracing::instrument(skip_all, name = \"transformers.metadata_qa_text\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        let prompt = self\n            .prompt_template\n            .clone()\n            .with_node(&node)\n            .with_context_value(\"questions\", 
self.num_questions);\n\n        let response = self.prompt(prompt).await?;\n\n        node.metadata.insert(NAME, response);\n\n        Ok(node)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::MockSimplePrompt;\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_template() {\n        let template = default_prompt();\n\n        let prompt = template\n            .clone()\n            .with_node(&TextNode::new(\"test\"))\n            .with_context_value(\"questions\", 5);\n        insta::assert_snapshot!(prompt.render().unwrap());\n    }\n\n    #[tokio::test]\n    async fn test_metadata_qa_text() {\n        let mut client = MockSimplePrompt::new();\n\n        client\n            .expect_prompt()\n            .returning(|_| Ok(\"Q1: Hello\\nA1: World\".to_string()));\n\n        let transformer = MetadataQAText::builder().client(client).build().unwrap();\n        let node = TextNode::new(\"Some text\");\n\n        let result = transformer.transform_node(node).await.unwrap();\n\n        assert_eq!(\n            result.metadata.get(\"Questions and Answers (text)\").unwrap(),\n            \"Q1: Hello\\nA1: World\"\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/metadata_summary.rs",
    "content": "//! Generate a summary and adds it as metadata\n//! This module defines the `MetadataSummary` struct and its associated methods,\n//! which are used for generating metadata in the form of a summary\n//! for a given text. It interacts with a client (e.g., `OpenAI`) to generate\n//! the summary based on the text chunk in an `TextNode`.\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// `MetadataSummary` is responsible for generating a summary\n/// for a given text chunk. It uses a templated prompt to interact with a client\n/// that implements the `SimplePrompt` trait.\n#[swiftide_macros::indexing_transformer(\n    metadata_field_name = \"Summary\",\n    default_prompt_file = \"prompts/metadata_summary.prompt.md\"\n)]\npub struct MetadataSummary {}\n\n#[async_trait]\nimpl Transformer for MetadataSummary {\n    type Input = String;\n    type Output = String;\n    /// Transforms an `TextNode` by extracting a summary\n    /// based on the text chunk within the node.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The `TextNode` containing the text chunk to process.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` containing the transformed `TextNode` with added metadata,\n    /// or an error if the transformation fails.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the client fails to generate\n    /// a summary from the provided prompt.\n    #[tracing::instrument(skip_all, name = \"transformers.metadata_summary\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        let prompt = self.prompt_template.clone().with_node(&node);\n\n        let response = self.prompt(prompt).await?;\n\n        node.metadata.insert(NAME, response);\n\n        Ok(node)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use 
swiftide_core::MockSimplePrompt;\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_template() {\n        let template = default_prompt();\n\n        let prompt = template.clone().with_node(&TextNode::new(\"test\"));\n        insta::assert_snapshot!(prompt.render().unwrap());\n    }\n\n    #[tokio::test]\n    async fn test_metadata_summary() {\n        let mut client = MockSimplePrompt::new();\n\n        client\n            .expect_prompt()\n            .returning(|_| Ok(\"A Summary\".to_string()));\n\n        let transformer = MetadataSummary::builder().client(client).build().unwrap();\n        let node = TextNode::new(\"Some text\");\n\n        let result = transformer.transform_node(node).await.unwrap();\n\n        assert_eq!(result.metadata.get(\"Summary\").unwrap(), \"A Summary\");\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/metadata_title.rs",
    "content": "//! Generate a title and adds it as metadata\n//! This module defines the `MetadataTitle` struct and its associated methods,\n//! which are used for generating metadata in the form of a title\n//! for a given text. It interacts with a client (e.g., `OpenAI`) to generate\n//! these questions and answers based on the text chunk in an `TextNode`.\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// `MetadataTitle` is responsible for generating a title\n/// for a given text chunk. It uses a templated prompt to interact with a client\n/// that implements the `SimplePrompt` trait.\n#[swiftide_macros::indexing_transformer(\n    metadata_field_name = \"Title\",\n    default_prompt_file = \"prompts/metadata_title.prompt.md\"\n)]\npub struct MetadataTitle {}\n\n#[async_trait]\nimpl Transformer for MetadataTitle {\n    type Input = String;\n    type Output = String;\n    /// Transforms an `TextNode` by generating questions and answers\n    /// based on the text chunk within the node.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The `TextNode` containing the text chunk to process.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` containing the transformed `TextNode` with added metadata,\n    /// or an error if the transformation fails.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the client fails to generate\n    /// questions and answers from the provided prompt.\n    #[tracing::instrument(skip_all, name = \"transformers.metadata_title\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        let prompt = self.prompt_template.clone().with_node(&node);\n\n        let response = self.prompt(prompt).await?;\n\n        node.metadata.insert(NAME, response);\n\n        Ok(node)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use 
swiftide_core::MockSimplePrompt;\n\n    use super::*;\n\n    #[test_log::test(tokio::test)]\n    async fn test_template() {\n        let template = default_prompt();\n\n        let prompt = template.clone().with_node(&TextNode::new(\"test\"));\n        insta::assert_snapshot!(prompt.render().unwrap());\n    }\n\n    #[tokio::test]\n    async fn test_metadata_title() {\n        let mut client = MockSimplePrompt::new();\n\n        client\n            .expect_prompt()\n            .returning(|_| Ok(\"A Title\".to_string()));\n\n        let transformer = MetadataTitle::builder().client(client).build().unwrap();\n        let node = TextNode::new(\"Some text\");\n\n        let result = transformer.transform_node(node).await.unwrap();\n\n        assert_eq!(result.metadata.get(\"Title\").unwrap(), \"A Title\");\n    }\n}\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/mod.rs",
    "content": "//! Various transformers for chunking, embedding and transforming data\n//!\n//! These transformers are generic over their implementation and many require a\n//! swiftide integration to be configured.\n//!\n//! Transformers that prompt have a default prompt configured. Prompts can be customized\n//! and tailored, supporting Jinja style templating based on [tera](https://docs.rs/tera/latest/tera/).\n//!\n//!  See [`swiftide_core::prompt::Prompt`] and [`swiftide_core::template::Template`]\n\npub mod chunk_markdown;\npub mod chunk_text;\npub mod embed;\npub mod metadata_keywords;\npub mod metadata_qa_text;\npub mod metadata_summary;\npub mod metadata_title;\npub mod sparse_embed;\n\npub use chunk_markdown::ChunkMarkdown;\npub use chunk_text::ChunkText;\npub use embed::Embed;\npub use metadata_keywords::MetadataKeywords;\npub use metadata_qa_text::MetadataQAText;\npub use metadata_summary::MetadataSummary;\npub use metadata_title::MetadataTitle;\npub use sparse_embed::SparseEmbed;\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/prompts/metadata_keywords.prompt.md",
    "content": "# Task\n\nYour task is to generate a descriptive, concise keywords for the given text\n\n# Constraints\n\n- Only respond in the example format\n- Respond with a keywords that are representative of the text\n- Only include keywords that are literally included in the text\n- Respond with a comma-separated list of keywords\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\n<keyword>,<other-keyword>\n```\n\n# Text\n\n```\n{{ node.chunk }}\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/prompts/metadata_qa_text.prompt.md",
    "content": "# Task\n\nYour task is to generate questions and answers for the given text.\n\nGiven that somebody else might ask questions about the text, consider things like:\n\n- What does this text do?\n- What other internal parts does the text use?\n- Does this text have any dependencies?\n- What are some potential use cases for this text?\n- ... and so on\n\n# Constraints\n\n- Generate at most {{questions}} questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the text.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What is the capital of France?\nA1: Paris.\n```\n\n# text\n\n```\n{{node.chunk}}\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/prompts/metadata_summary.prompt.md",
    "content": "# Task\n\nYour task is to generate a descriptive, concise summary for the given text\n\n# Constraints\n\n- Only respond in the example format\n- Respond with a summary that is accurate and descriptive without fluff\n- Only include information that is included in the text\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\n<summary>\n```\n\n# Text\n\n```\n{{node.chunk}}\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/prompts/metadata_title.prompt.md",
    "content": "# Task\n\nYour task is to generate a descriptive, concise title for the given text\n\n# Constraints\n\n- Only respond in the example format\n- Respond with a title that is accurate and descriptive without fluff\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\n<title>\n```\n\n# Text\n\n```\n{{node.chunk}}\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__compress_code_outline__test__compress_code_template.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/compress_code_outline.rs\nexpression: prompt.render().await.unwrap()\n---\n# Filtering Code Outline\nYour task is to filter the given file outline to the code chunk provided. The goal is to provide a context that is still contains the lines needed for understanding the code in the chunk whilst leaving out any irrelevant information.\n\n## Constraints\n  * Only use lines from the provided context, do not add any additional information\n  * Ensure that the selection you make is the most appropriate for the code chunk\n  * Make sure you include any definitions or imports that are used in the code chunk\n  * You do not need to repeat the code chunk in your response, it will be appended directly after your response.\n  * Do not use lines that are present in the code chunk\n\n## Code\n```\nCode using outline\n```\n\n## Outline\n```\nRelevant Outline\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__metadata_keywords__test__template.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/metadata_keywords.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate a descriptive, concise keywords for the given text\n\n# Constraints\n\n- Only respond in the example format\n- Respond with a keywords that are representative of the text\n- Only include keywords that are literally included in the text\n- Respond with a comma-separated list of keywords\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\n<keyword>,<other-keyword>\n```\n\n# Text\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__metadata_qa_code__test__template.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/metadata_qa_code.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate questions and answers for the given code.\n\nGiven that somebody else might ask questions about the code, consider things like:\n\n- What does this code do?\n- What other internal parts does the code use?\n- Does this code have any dependencies?\n- What are some potential use cases for this code?\n- ... and so on\n\n# Constraints\n\n- Generate only 5 questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the code.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What does this code do?\nA1: It transforms strings into integers.\nQ2: What other internal parts does the code use?\nA2: A hasher to hash the strings.\n```\n\n# Code\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__metadata_qa_code__test__template_with_outline.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/metadata_qa_code.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate questions and answers for the given code.\n\nGiven that somebody else might ask questions about the code, consider things like:\n\n- What does this code do?\n- What other internal parts does the code use?\n- Does this code have any dependencies?\n- What are some potential use cases for this code?\n- ... and so on\n\n# Constraints\n\n- Generate only 5 questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the code.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What does this code do?\nA1: It transforms strings into integers.\nQ2: What other internal parts does the code use?\nA2: A hasher to hash the strings.\n```\n\n\n## Outline of the parent file\n```\nTest outline\n```\n\n# Code\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__metadata_qa_text__test__template.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/metadata_qa_text.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate questions and answers for the given text.\n\nGiven that somebody else might ask questions about the text, consider things like:\n\n- What does this text do?\n- What other internal parts does the text use?\n- Does this text have any dependencies?\n- What are some potential use cases for this text?\n- ... and so on\n\n# Constraints\n\n- Generate at most 5 questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the text.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What is the capital of France?\nA1: Paris.\n```\n\n# text\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__metadata_summary__test__template.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/metadata_summary.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate a descriptive, concise summary for the given text\n\n# Constraints\n\n- Only respond in the example format\n- Respond with a summary that is accurate and descriptive without fluff\n- Only include information that is included in the text\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\n<summary>\n```\n\n# Text\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/snapshots/swiftide_indexing__transformers__metadata_title__test__template.snap",
    "content": "---\nsource: swiftide-indexing/src/transformers/metadata_title.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate a descriptive, concise title for the given text\n\n# Constraints\n\n- Only respond in the example format\n- Respond with a title that is accurate and descriptive without fluff\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\n<title>\n```\n\n# Text\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-indexing/src/transformers/sparse_embed.rs",
    "content": "//! Generic embedding transformer\nuse std::{collections::VecDeque, sync::Arc};\n\nuse anyhow::bail;\nuse async_trait::async_trait;\nuse swiftide_core::{\n    BatchableTransformer, SparseEmbeddingModel, WithBatchIndexingDefaults, WithIndexingDefaults,\n    indexing::{IndexingStream, TextNode},\n};\n\n/// A transformer that can generate embeddings for an `TextNode`\n///\n/// This file defines the `SparseEmbed` struct and its implementation of the `BatchableTransformer`\n/// trait.\n#[derive(Clone)]\npub struct SparseEmbed {\n    embed_model: Arc<dyn SparseEmbeddingModel>,\n    concurrency: Option<usize>,\n    batch_size: Option<usize>,\n}\n\nimpl std::fmt::Debug for SparseEmbed {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"SparseEmbed\")\n            .field(\"concurrency\", &self.concurrency)\n            .finish()\n    }\n}\n\nimpl SparseEmbed {\n    /// Creates a new instance of the `SparseEmbed` transformer.\n    ///\n    /// # Parameters\n    ///\n    /// * `model` - An embedding model that implements the `SparseEmbeddingModel` trait.\n    ///\n    /// # Returns\n    ///\n    /// A new instance of `SparseEmbed`.\n    pub fn new(model: impl SparseEmbeddingModel + 'static) -> Self {\n        Self {\n            embed_model: Arc::new(model),\n            concurrency: None,\n            batch_size: None,\n        }\n    }\n\n    #[must_use]\n    pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n        self.concurrency = Some(concurrency);\n        self\n    }\n\n    /// Sets the batch size for the transformer.\n    /// If the batch size is not set, the transformer will use the default batch size set by the\n    /// pipeline # Parameters\n    ///\n    /// * `batch_size` - The batch size to use for the transformer.\n    ///\n    /// # Returns\n    ///\n    /// A new instance of `Embed`.\n    #[must_use]\n    pub fn with_batch_size(mut self, batch_size: usize) -> Self {\n        
self.batch_size = Some(batch_size);\n        self\n    }\n}\n\nimpl WithBatchIndexingDefaults for SparseEmbed {}\nimpl WithIndexingDefaults for SparseEmbed {}\n\n#[async_trait]\nimpl BatchableTransformer for SparseEmbed {\n    type Input = String;\n    type Output = String;\n    /// Transforms a batch of `TextNode` objects by generating embeddings for them.\n    ///\n    /// # Parameters\n    ///\n    /// * `nodes` - A vector of `TextNode` objects to be transformed.\n    ///\n    /// # Returns\n    ///\n    /// An `IndexingStream` containing the transformed `TextNode` objects with their embeddings.\n    ///\n    /// # Errors\n    ///\n    /// If the embedding process fails, the function returns a stream with the error.\n    #[tracing::instrument(skip_all, name = \"transformers.embed\")]\n    async fn batch_transform(&self, mut nodes: Vec<TextNode>) -> IndexingStream<String> {\n        // TODO: We should drop chunks that go over the token limit of the SparseEmbedModel\n\n        // EmbeddedFields grouped by node stored in order of processed nodes.\n        let mut embeddings_keys_groups = VecDeque::with_capacity(nodes.len());\n        // SparseEmbeddable data of every node stored in order of processed nodes.\n        let embeddables_data = nodes\n            .iter_mut()\n            .fold(Vec::new(), |mut embeddables_data, node| {\n                let embeddables = node.as_embeddables();\n                let mut embeddables_keys = Vec::with_capacity(embeddables.len());\n                for (embeddable_key, embeddable_data) in embeddables {\n                    embeddables_keys.push(embeddable_key);\n                    embeddables_data.push(embeddable_data);\n                }\n                embeddings_keys_groups.push_back(embeddables_keys);\n                embeddables_data\n            });\n\n        // SparseEmbeddings vectors of every node stored in order of processed nodes.\n        let mut embeddings = match 
self.embed_model.sparse_embed(embeddables_data).await {\n            Ok(embeddings) => VecDeque::from(embeddings),\n            Err(err) => return IndexingStream::iter(vec![Err(err.into())]),\n        };\n\n        // Iterator of nodes with embeddings vectors map.\n        let nodes_iter = nodes.into_iter().map(move |mut node| {\n            let Some(embedding_keys) = embeddings_keys_groups.pop_front() else {\n                bail!(\"Missing embedding data\");\n            };\n            node.sparse_vectors = embedding_keys\n                .into_iter()\n                .map(|embedded_field| {\n                    embeddings\n                        .pop_front()\n                        .map(|embedding| (embedded_field, embedding))\n                })\n                .collect();\n            Ok(node)\n        });\n\n        IndexingStream::iter(nodes_iter)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        self.batch_size\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use swiftide_core::indexing::{EmbedMode, EmbeddedField, Metadata, TextNode};\n    use swiftide_core::{\n        BatchableTransformer, MockSparseEmbeddingModel, SparseEmbedding, SparseEmbeddings,\n    };\n\n    use super::SparseEmbed;\n\n    use futures_util::StreamExt;\n    use mockall::predicate::*;\n    use test_case::test_case;\n\n    use swiftide_core::chat_completion::errors::LanguageModelError;\n\n    #[derive(Clone)]\n    struct TestData<'a> {\n        pub embed_mode: EmbedMode,\n        pub chunk: &'a str,\n        pub metadata: Metadata,\n        pub expected_embedables: Vec<&'a str>,\n        pub expected_vectors: Vec<(EmbeddedField, Vec<f32>)>,\n    }\n\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::SingleWithMetadata,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_1\", \"prompt_1\")]),\n            expected_embedables: vec![\"meta_1: 
prompt_1\\nchunk_1\"],\n            expected_vectors: vec![(EmbeddedField::Combined, vec![1f32])]\n        },\n        TestData {\n            embed_mode: EmbedMode::SingleWithMetadata,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_2\", \"prompt_2\")]),\n            expected_embedables: vec![\"meta_2: prompt_2\\nchunk_2\"],\n            expected_vectors: vec![(EmbeddedField::Combined, vec![2f32])]\n        }\n    ]; \"Multiple nodes EmbedMode::SingleWithMetadata with metadata.\")]\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::PerField,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_1\", \"prompt 1\")]),\n            expected_embedables: vec![\"chunk_1\", \"prompt 1\"],\n            expected_vectors: vec![\n                (EmbeddedField::Chunk, vec![10f32]),\n                (EmbeddedField::Metadata(\"meta_1\".into()), vec![11f32])\n            ]\n        },\n        TestData {\n            embed_mode: EmbedMode::PerField,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_2\", \"prompt 2\")]),\n            expected_embedables: vec![\"chunk_2\", \"prompt 2\"],\n            expected_vectors: vec![\n                (EmbeddedField::Chunk, vec![20f32]),\n                (EmbeddedField::Metadata(\"meta_2\".into()), vec![21f32])\n            ]\n        }\n    ]; \"Multiple nodes EmbedMode::PerField with metadata.\")]\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_1\", \"prompt 1\")]),\n            expected_embedables: vec![\"meta_1: prompt 1\\nchunk_1\", \"chunk_1\", \"prompt 1\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![10f32]),\n                (EmbeddedField::Chunk, vec![11f32]),\n                (EmbeddedField::Metadata(\"meta_1\".into()), vec![12f32])\n            ]\n        },\n 
       TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_2\", \"prompt 2\")]),\n            expected_embedables: vec![\"meta_2: prompt 2\\nchunk_2\", \"chunk_2\", \"prompt 2\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![20f32]),\n                (EmbeddedField::Chunk, vec![21f32]),\n                (EmbeddedField::Metadata(\"meta_2\".into()), vec![22f32])\n            ]\n        }\n    ]; \"Multiple nodes EmbedMode::Both with metadata.\")]\n    #[test_case(vec![\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_1\",\n            metadata: Metadata::from([(\"meta_10\", \"prompt 10\"), (\"meta_11\", \"prompt 11\"), (\"meta_12\", \"prompt 12\")]),\n            expected_embedables: vec![\"meta_10: prompt 10\\nmeta_11: prompt 11\\nmeta_12: prompt 12\\nchunk_1\", \"chunk_1\", \"prompt 10\", \"prompt 11\", \"prompt 12\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![10f32]),\n                (EmbeddedField::Chunk, vec![11f32]),\n                (EmbeddedField::Metadata(\"meta_10\".into()), vec![12f32]),\n                (EmbeddedField::Metadata(\"meta_11\".into()), vec![13f32]),\n                (EmbeddedField::Metadata(\"meta_12\".into()), vec![14f32]),\n            ]\n        },\n        TestData {\n            embed_mode: EmbedMode::Both,\n            chunk: \"chunk_2\",\n            metadata: Metadata::from([(\"meta_20\", \"prompt 20\"), (\"meta_21\", \"prompt 21\"), (\"meta_22\", \"prompt 22\")]),\n            expected_embedables: vec![\"meta_20: prompt 20\\nmeta_21: prompt 21\\nmeta_22: prompt 22\\nchunk_2\", \"chunk_2\", \"prompt 20\", \"prompt 21\", \"prompt 22\"],\n            expected_vectors: vec![\n                (EmbeddedField::Combined, vec![20f32]),\n                (EmbeddedField::Chunk, vec![21f32]),\n                
(EmbeddedField::Metadata(\"meta_20\".into()), vec![22f32]),\n                (EmbeddedField::Metadata(\"meta_21\".into()), vec![23f32]),\n                (EmbeddedField::Metadata(\"meta_22\".into()), vec![24f32])\n            ]\n        }\n    ]; \"Multiple nodes EmbedMode::Both with multiple metadata.\")]\n    #[test_case(vec![]; \"No ingestion nodes\")]\n    #[tokio::test]\n    async fn batch_transform(test_data: Vec<TestData<'_>>) {\n        let test_nodes: Vec<TextNode> = test_data\n            .iter()\n            .map(|data| {\n                TextNode::builder()\n                    .chunk(data.chunk)\n                    .metadata(data.metadata.clone())\n                    .embed_mode(data.embed_mode)\n                    .build()\n                    .unwrap()\n            })\n            .collect();\n\n        let expected_nodes: Vec<TextNode> = test_nodes\n            .clone()\n            .into_iter()\n            .zip(test_data.iter())\n            .map(|(mut expected_node, test_data)| {\n                expected_node.sparse_vectors = Some(\n                    test_data\n                        .expected_vectors\n                        .iter()\n                        .cloned()\n                        .map(|d| {\n                            (\n                                d.0,\n                                SparseEmbedding {\n                                    indices: vec![0],\n                                    values: d.1,\n                                },\n                            )\n                        })\n                        .collect(),\n                );\n                expected_node\n            })\n            .collect();\n\n        let expected_embeddables_batch = test_data\n            .clone()\n            .iter()\n            .flat_map(|d| &d.expected_embedables)\n            .map(ToString::to_string)\n            .collect::<Vec<String>>();\n\n        let expected_vectors_batch: SparseEmbeddings = test_data\n      
      .clone()\n            .iter()\n            .flat_map(|d| {\n                d.expected_vectors\n                    .iter()\n                    .map(|(_, v)| v)\n                    .cloned()\n                    .map(|v| SparseEmbedding {\n                        indices: vec![0],\n                        values: v,\n                    })\n            })\n            .collect();\n\n        let mut model_mock = MockSparseEmbeddingModel::new();\n        model_mock\n            .expect_sparse_embed()\n            .withf(move |embeddables| expected_embeddables_batch.eq(embeddables))\n            .times(1)\n            .returning_st(move |_| Ok(expected_vectors_batch.clone()));\n\n        let embed = SparseEmbed::new(model_mock);\n\n        let mut stream = embed.batch_transform(test_nodes).await;\n\n        for expected_node in expected_nodes {\n            let ingested_node = stream\n                .next()\n                .await\n                .expect(\"IngestionStream has same length as expected_nodes\")\n                .expect(\"Is OK\");\n\n            assert_eq!(ingested_node, expected_node);\n        }\n    }\n\n    #[tokio::test]\n    async fn test_returns_error_properly_if_sparse_embed_fails() {\n        let test_nodes = vec![TextNode::new(\"chunk\")];\n        let mut model_mock = MockSparseEmbeddingModel::new();\n        model_mock\n            .expect_sparse_embed()\n            .times(1)\n            .returning(|_| Err(LanguageModelError::PermanentError(\"error\".into())));\n        let embed = SparseEmbed::new(model_mock);\n        let mut stream = embed.batch_transform(test_nodes).await;\n        let error = stream\n            .next()\n            .await\n            .expect(\"IngestionStream has same length as expected_nodes\")\n            .expect_err(\"Is Err\");\n\n        assert_eq!(error.to_string(), \"Permanent error: error\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-integrations\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\nswiftide-core = { path = \"../swiftide-core\", version = \"0.32\" }\nswiftide-macros = { path = \"../swiftide-macros\", version = \"0.32\" }\n\nanyhow = { workspace = true }\nasync-trait = { workspace = true }\nderive_builder = { workspace = true }\nserde = { workspace = true }\nserde_json = { workspace = true }\nbase64 = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\ntracing = { workspace = true }\nitertools = { workspace = true }\nchrono = { workspace = true }\nstrum = { workspace = true }\nstrum_macros = { workspace = true }\nthiserror = { workspace = true }\nregex = { workspace = true }\nfutures-util = { workspace = true }\ntera = { workspace = true }\nuuid = { workspace = true }\nmetrics = { workspace = true, optional = true }\ntracing-futures = { version = \"0.2.5\", features = [\"futures-03\"] }\nschemars.workspace = true\n\n# Integrations\nasync-openai = { workspace = true, optional = true, features = [\n  \"rustls\",\n  \"chat-completion\",\n  \"embedding\",\n  \"responses\",\n] }\nasync-anthropic = { workspace = true, optional = true }\nqdrant-client = { workspace = true, optional = true, default-features = false, features = [\n  \"serde\",\n] }\nsqlx = { workspace = true, optional = true, features = [\n  \"any\",\n  \"json\",\n  \"macros\",\n  \"postgres\",\n  \"runtime-tokio\",\n  \"chrono\",\n  \"uuid\",\n] }\npgvector = { workspace = true, optional = true, features = [\"sqlx\"] }\nredis = { workspace = true, features = [\n  \"aio\",\n  \"tokio-comp\",\n  \"connection-manager\",\n  \"tokio-rustls-comp\",\n], optional = true }\ntree-sitter = { workspace = true, optional 
= true }\ntree-sitter-rust = { workspace = true, optional = true }\ntree-sitter-python = { workspace = true, optional = true }\ntree-sitter-ruby = { workspace = true, optional = true }\ntree-sitter-typescript = { workspace = true, optional = true }\ntree-sitter-javascript = { workspace = true, optional = true }\ntree-sitter-java = { workspace = true, optional = true }\ntree-sitter-go = { workspace = true, optional = true }\ntree-sitter-solidity = { workspace = true, optional = true }\ntree-sitter-c = { workspace = true, optional = true }\ntree-sitter-cpp = { workspace = true, optional = true }\ntree-sitter-c-sharp = { workspace = true, optional = true }\ntree-sitter-elixir = { workspace = true, optional = true }\ntree-sitter-html = { workspace = true, optional = true }\ntree-sitter-php = { workspace = true, optional = true }\n\nfastembed = { workspace = true, optional = true }\nspider = { workspace = true, optional = true, default-features = true }\nhtmd = { workspace = true, optional = true }\naws-config = { workspace = true, features = [\n  \"behavior-version-latest\",\n  \"credentials-login\",\n  \"default-https-client\",\n  \"rt-tokio\",\n], optional = true }\naws-credential-types = { workspace = true, features = [\n  \"hardcoded-credentials\",\n], optional = true }\naws-sdk-bedrockruntime = { workspace = true, features = [\n  \"behavior-version-latest\",\n  \"default-https-client\",\n  \"rt-tokio\",\n], optional = true }\naws-smithy-types = { workspace = true, optional = true }\naws-smithy-json = { version = \"0.62.4\", optional = true }\nsecrecy = { workspace = true, optional = true }\nreqwest = { workspace = true, optional = true }\nreqwest-eventsource = { workspace = true, optional = true }\ndeadpool = { workspace = true, features = [\n  \"managed\",\n  \"rt_tokio_1\",\n], optional = true }\nfluvio = { workspace = true, optional = true }\nrdkafka = { workspace = true, optional = true }\narrow-array = { version = \"57.3\", default-features = false, optional 
= true }\nlancedb = { workspace = true, optional = true }\nparquet = { workspace = true, optional = true, features = [\n  \"async\",\n  \"arrow\",\n  \"snap\",\n] }\nredb = { workspace = true, optional = true }\nduckdb = { workspace = true, optional = true }\nlibduckdb-sys = { workspace = true, optional = true }\nfs-err = { workspace = true, features = [\"tokio\"] }\ntiktoken-rs = { workspace = true, optional = true }\n\n[dev-dependencies]\nswiftide-core = { path = \"../swiftide-core\", features = [\"test-utils\"] }\nswiftide-test-utils = { path = \"../swiftide-test-utils\", features = [\n  \"test-utils\",\n] }\nswiftide-macros = { path = \"../swiftide-macros\" }\ntemp-dir = { workspace = true }\npretty_assertions = { workspace = true }\n# arrow = { workspace = true, features = [\"test_utils\"] }\nduckdb = { workspace = true, features = [\"bundled\"] }\nlibduckdb-sys = { workspace = true, features = [\n  \"bundled\",\n  \"vcpkg\",\n  \"pkg-config\",\n] }\n\n# Used for hacking fluv to play nice\nflv-util = { workspace = true }\n\nmockall = { workspace = true }\ntest-log = { workspace = true }\ntestcontainers = { workspace = true }\ntestcontainers-modules = { workspace = true, features = [\"kafka\"] }\ntest-case = { workspace = true }\nindoc = { workspace = true }\ninsta = { workspace = true }\nwiremock = { workspace = true }\ntokio-stream = { workspace = true }\neventsource-stream.workspace = true\naws-smithy-eventstream = \"0.60.19\"\ntracing-subscriber = { workspace = true }\n\n\n[features]\ndefault = [\"rustls\"]\n\nmetrics = [\"dep:metrics\", \"swiftide-core/metrics\"]\n# Ensures rustls is used\nrustls = [\"reqwest?/rustls\", \"fastembed?/hf-hub-native-tls\"]\n# Qdrant for storage\nqdrant = [\"dep:qdrant-client\", \"swiftide-core/qdrant\", \"chrono/now\"]\n# PgVector for storage\npgvector = [\"dep:sqlx\", \"dep:pgvector\"]\n# Redis for caching and storage\nredis = [\"dep:redis\"]\n# Tree-sitter for code operations and chunking\ntree-sitter = [\n  
\"dep:tree-sitter\",\n  \"dep:tree-sitter-rust\",\n  \"dep:tree-sitter-python\",\n  \"dep:tree-sitter-ruby\",\n  \"dep:tree-sitter-typescript\",\n  \"dep:tree-sitter-javascript\",\n  \"dep:tree-sitter-java\",\n  \"dep:tree-sitter-go\",\n  \"dep:tree-sitter-solidity\",\n  \"dep:tree-sitter-c\",\n  \"dep:tree-sitter-cpp\",\n  \"dep:tree-sitter-c-sharp\",\n  \"dep:tree-sitter-elixir\",\n  \"dep:tree-sitter-html\",\n  \"dep:tree-sitter-php\",\n]\n# OpenAI for embedding and prompting\nopenai = [\n  \"dep:async-openai\",\n  \"tiktoken-rs?/async-openai\",\n  \"dep:reqwest-eventsource\",\n  \"dep:reqwest\",\n  \"swiftide-core/openai\",\n]\n# Groq\ngroq = [\"dep:async-openai\", \"dep:secrecy\", \"dep:reqwest\", \"openai\"]\n# Google Gemini\ngemini = [\"dep:async-openai\", \"dep:secrecy\", \"dep:reqwest\", \"openai\"]\n# Ollama prompting, embedding, chat completion\nollama = [\"dep:async-openai\", \"dep:secrecy\", \"dep:reqwest\", \"openai\"]\n# OpenRouter prompting, embedding, chat completion\nopen-router = [\"dep:async-openai\", \"dep:secrecy\", \"dep:reqwest\", \"openai\"]\n# FastEmbed (by qdrant) for fast, local embeddings\nfastembed = [\n  \"dep:fastembed\",\n  \"fastembed/ort-download-binaries\",\n  \"fastembed/hf-hub\",\n]\n# Dashscope prompting\ndashscope = [\"dep:async-openai\", \"dep:secrecy\", \"dep:reqwest\", \"openai\"]\n# Scraping via spider as loader and an HTML to Markdown transformer\nscraping = [\"dep:spider\", \"dep:htmd\"]\n# AWS Bedrock for prompting\naws-bedrock = [\n  \"dep:aws-config\",\n  \"dep:aws-credential-types\",\n  \"dep:aws-sdk-bedrockruntime\",\n  \"dep:aws-smithy-types\",\n  \"dep:aws-smithy-json\",\n]\nlancedb = [\"dep:lancedb\", \"dep:deadpool\", \"dep:arrow-array\"]\n# Fluvio loader\nfluvio = [\"dep:fluvio\"]\n# Kafka loader\nkafka = [\"dep:rdkafka\"]\n# Parquet loader\nparquet = [\"dep:arrow-array\", \"dep:parquet\"]\n# Anthropic for prompting and completions\nanthropic = [\"dep:async-anthropic\"]\n# Duckdb for indexing and retrieval\nduckdb = [\"dep:duckdb\", \"dep:libduckdb-sys\"]\ntiktoken = [\"dep:tiktoken-rs\"]\n\n# Langfuse compatibility\nlangfuse = []\n\n\n[lints]\nworkspace = true\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide-integrations/src/anthropic/chat_completion.rs",
    "content": "use futures_util::{StreamExt as _, TryStreamExt as _, stream};\nuse std::sync::{Arc, Mutex};\n\nuse anyhow::{Context as _, Result};\nuse async_anthropic::types::{\n    CreateMessagesRequestBuilder, Message, MessageBuilder, MessageContent, MessageContentList,\n    MessageRole, MessagesStreamEvent, ToolChoice, ToolResultBuilder, ToolUseBuilder,\n};\nuse async_trait::async_trait;\nuse serde_json::{Value, json};\nuse swiftide_core::{\n    ChatCompletion, ChatCompletionStream,\n    chat_completion::{\n        ChatCompletionRequest, ChatCompletionResponse, ChatMessage, ToolCall, ToolOutput, ToolSpec,\n        Usage, UsageBuilder, errors::LanguageModelError,\n    },\n};\n\nuse super::Anthropic;\nuse super::tool_schema::AnthropicToolSchema;\n\n#[cfg(feature = \"metrics\")]\nuse swiftide_core::metrics::emit_usage;\n\n#[async_trait]\nimpl ChatCompletion for Anthropic {\n    #[tracing::instrument(skip_all, err)]\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        let model = &self.default_options.prompt_model;\n        let request = self\n            .build_request(request)\n            .and_then(|b| b.build().map_err(LanguageModelError::permanent))?;\n\n        tracing::debug!(\n            model = &model,\n            messages = serde_json::to_string_pretty(&request).expect(\"Infallible\"),\n            \"[ChatCompletion] Request to anthropic\"\n        );\n\n        let response = self\n            .client\n            .messages()\n            .create(request)\n            .await\n            .map_err(LanguageModelError::permanent)?;\n\n        tracing::debug!(\n            response = serde_json::to_string_pretty(&response).expect(\"Infallible\"),\n            \"[ChatCompletion] Response from anthropic\"\n        );\n\n        let maybe_tool_calls = response\n            .messages()\n            .iter()\n            .flat_map(Message::tool_uses)\n       
     .map(|atool| {\n                ToolCall::builder()\n                    .id(atool.id)\n                    .name(atool.name)\n                    .args(atool.input.to_string())\n                    .build()\n                    .expect(\"infallible\")\n            })\n            .collect::<Vec<_>>();\n        let maybe_tool_calls = if maybe_tool_calls.is_empty() {\n            None\n        } else {\n            Some(maybe_tool_calls)\n        };\n\n        let mut builder = ChatCompletionResponse::builder()\n            .maybe_message(response.messages().iter().find_map(Message::text))\n            .maybe_tool_calls(maybe_tool_calls)\n            .to_owned();\n\n        if let Some(usage) = &response.usage {\n            let input_tokens = usage.input_tokens.unwrap_or_default();\n            let output_tokens = usage.output_tokens.unwrap_or_default();\n            let total_tokens = input_tokens + output_tokens;\n\n            #[cfg(feature = \"metrics\")]\n            emit_usage(\n                model,\n                input_tokens.into(),\n                output_tokens.into(),\n                total_tokens.into(),\n                self.metric_metadata.as_ref(),\n            );\n\n            let usage = Usage {\n                prompt_tokens: input_tokens,\n                completion_tokens: output_tokens,\n                total_tokens,\n                details: None,\n            };\n            if let Some(callback) = &self.on_usage {\n                callback(&usage).await?;\n            }\n\n            let usage = UsageBuilder::default()\n                .prompt_tokens(input_tokens)\n                .completion_tokens(output_tokens)\n                .total_tokens(total_tokens)\n                .build()\n                .map_err(LanguageModelError::permanent)?;\n\n            builder.usage(usage);\n        }\n        builder.build().map_err(LanguageModelError::from)\n    }\n\n    #[tracing::instrument(skip_all)]\n    async fn complete_stream(&self, 
request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        let model = &self.default_options.prompt_model;\n        let request = match self\n            .build_request(request)\n            .and_then(|b| b.build().map_err(LanguageModelError::permanent))\n        {\n            Ok(request) => request,\n            Err(e) => {\n                return e.into();\n            }\n        };\n\n        tracing::debug!(\n            model = &model,\n            messages = serde_json::to_string_pretty(&request).expect(\"Infallible\"),\n            \"[ChatCompletion] Request to anthropic\"\n        );\n\n        let response = self.client.messages().create_stream(request).await;\n\n        let accumulating_response = Arc::new(Mutex::new(ChatCompletionResponse::default()));\n        let final_response = Arc::clone(&accumulating_response);\n        #[cfg(feature = \"metrics\")]\n        let model = model.clone();\n        #[cfg(feature = \"metrics\")]\n        let metric_metadata = self.metric_metadata.clone();\n\n        let maybe_usage_callback = self.on_usage.clone();\n\n        response\n            .map_ok(move |chunk| {\n                let accumulating_response = Arc::clone(&accumulating_response);\n\n                let mut lock = accumulating_response.lock().unwrap();\n\n                append_delta_from_chunk(&chunk, &mut lock);\n                lock.clone()\n            })\n            .map_err(LanguageModelError::permanent)\n            .chain(\n                stream::iter(vec![final_response]).map(move |final_response| {\n                    if let Some(usage) = final_response.lock().unwrap().usage.as_ref() {\n                        let usage = usage.clone();\n\n                        if let Some(callback) = maybe_usage_callback.as_ref() {\n                            let usage = usage.clone();\n                            let callback = callback.clone();\n\n                            tokio::spawn(async move {\n                                if 
let Err(e) = callback(&usage).await {\n                                    tracing::error!(\"Error in on_usage callback: {}\", e);\n                                }\n                            });\n                        }\n\n                        #[cfg(feature = \"metrics\")]\n                        emit_usage(\n                            &model,\n                            usage.prompt_tokens.into(),\n                            usage.completion_tokens.into(),\n                            usage.total_tokens.into(),\n                            metric_metadata.as_ref(),\n                        );\n                    }\n\n                    Ok(final_response.lock().unwrap().clone())\n                }),\n            )\n            .boxed()\n    }\n}\n\n#[allow(clippy::collapsible_match)]\nfn append_delta_from_chunk(chunk: &MessagesStreamEvent, lock: &mut ChatCompletionResponse) {\n    match chunk {\n        MessagesStreamEvent::ContentBlockStart {\n            index,\n            content_block,\n        } => match content_block {\n            MessageContent::ToolUse(tool_use) => {\n                lock.append_tool_call_delta(*index, Some(&tool_use.id), Some(&tool_use.name), None);\n            }\n            MessageContent::Text(text) => {\n                lock.append_message_delta(Some(&text.text));\n            }\n            MessageContent::ToolResult(_tool_result) => (),\n        },\n        MessagesStreamEvent::ContentBlockDelta { index, delta } => match delta {\n            async_anthropic::types::ContentBlockDelta::TextDelta { text } => {\n                lock.append_message_delta(Some(text));\n            }\n            async_anthropic::types::ContentBlockDelta::InputJsonDelta { partial_json } => {\n                lock.append_tool_call_delta(*index, None, None, Some(partial_json));\n            }\n        },\n        #[allow(clippy::cast_possible_truncation)]\n        MessagesStreamEvent::MessageDelta { usage, .. 
} => {\n            if let Some(usage) = usage {\n                let input_tokens = usage.input_tokens.unwrap_or_default();\n                let output_tokens = usage.output_tokens.unwrap_or_default();\n                let total_tokens = input_tokens + output_tokens;\n                lock.append_usage_delta(input_tokens, output_tokens, total_tokens);\n            }\n        }\n\n        MessagesStreamEvent::MessageStart { message, usage } => {\n            if let Some(usage) = usage {\n                let input_tokens = usage.input_tokens.unwrap_or_default();\n                let output_tokens = usage.output_tokens.unwrap_or_default();\n                let total_tokens = input_tokens + output_tokens;\n                lock.append_usage_delta(input_tokens, output_tokens, total_tokens);\n            }\n            if let Some(message_usage) = &message.usage {\n                let input_tokens = message_usage.input_tokens.unwrap_or_default();\n                let output_tokens = message_usage.output_tokens.unwrap_or_default();\n                let total_tokens = input_tokens + output_tokens;\n                lock.append_usage_delta(input_tokens, output_tokens, total_tokens);\n            }\n        }\n        _ => {}\n    }\n}\n\nimpl Anthropic {\n    fn build_request(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<async_anthropic::types::CreateMessagesRequestBuilder, LanguageModelError> {\n        let model = &self.default_options.prompt_model;\n        let mut messages = request.messages().to_vec();\n\n        let maybe_system = messages\n            .iter()\n            .position(ChatMessage::is_system)\n            .map(|idx| messages.remove(idx));\n\n        let messages = messages_to_antropic(&messages)?;\n\n        let mut anthropic_request = CreateMessagesRequestBuilder::default()\n            .model(model)\n            .messages(messages)\n            .to_owned();\n\n        if let Some(ChatMessage::System(system)) = 
maybe_system {\n            anthropic_request.system(system);\n        }\n\n        if !request.tools_spec().is_empty() {\n            anthropic_request\n                .tools(\n                    request\n                        .tools_spec()\n                        .iter()\n                        .map(tools_to_anthropic)\n                        .collect::<Result<Vec<_>>>()?,\n                )\n                .tool_choice(ToolChoice::Auto);\n        }\n\n        Ok(anthropic_request)\n    }\n}\n\nfn messages_to_antropic(messages: &[ChatMessage]) -> Result<Vec<Message>> {\n    let mut anthropic_messages = Vec::with_capacity(messages.len());\n    let mut messages = messages.iter().peekable();\n\n    while let Some(message) = messages.next() {\n        match message {\n            ChatMessage::ToolOutput(tool_call, tool_output) => {\n                let mut content = vec![tool_result_to_anthropic(tool_call, tool_output)?];\n\n                while let Some(ChatMessage::ToolOutput(tool_call, tool_output)) = messages.peek() {\n                    content.push(tool_result_to_anthropic(tool_call, tool_output)?);\n                    messages.next();\n                }\n\n                anthropic_messages.push(\n                    MessageBuilder::default()\n                        .role(MessageRole::User)\n                        .content(MessageContentList(content))\n                        .build()\n                        .context(\"Failed to build message\")?,\n                );\n            }\n            _ => {\n                if let Some(message) = message_to_antropic(message)? 
{\n                    anthropic_messages.push(message);\n                }\n            }\n        }\n    }\n\n    Ok(anthropic_messages)\n}\n\nfn tool_result_to_anthropic(\n    tool_call: &ToolCall,\n    tool_output: &ToolOutput,\n) -> Result<MessageContent> {\n    Ok(ToolResultBuilder::default()\n        .tool_use_id(tool_call.id())\n        .content(tool_output.content().unwrap_or(\"Success\"))\n        .build()?\n        .into())\n}\n\n#[allow(clippy::items_after_statements)]\nfn message_to_antropic(message: &ChatMessage) -> Result<Option<Message>> {\n    let mut builder = MessageBuilder::default().role(MessageRole::User).to_owned();\n\n    match message {\n        ChatMessage::ToolOutput(tool_call, tool_output) => builder.content(MessageContentList(\n            vec![tool_result_to_anthropic(tool_call, tool_output)?],\n        )),\n        ChatMessage::Summary(msg) | ChatMessage::System(msg) => builder.content(msg.as_str()),\n        ChatMessage::User(content) => builder.content(content.as_str()),\n        ChatMessage::UserWithParts(parts) => {\n            if parts.iter().any(|part| {\n                !matches!(\n                    part,\n                    swiftide_core::chat_completion::ChatMessageContentPart::Text { .. }\n                )\n            }) {\n                anyhow::bail!(\"Anthropic chat completions only support text message parts\");\n            }\n            let text_parts = parts\n                .iter()\n                .filter_map(|part| match part {\n                    swiftide_core::chat_completion::ChatMessageContentPart::Text { text } => {\n                        Some(text.as_ref())\n                    }\n                    swiftide_core::chat_completion::ChatMessageContentPart::Image { .. }\n                    | swiftide_core::chat_completion::ChatMessageContentPart::Document { .. }\n                    | swiftide_core::chat_completion::ChatMessageContentPart::Audio { .. 
}\n                    | swiftide_core::chat_completion::ChatMessageContentPart::Video { .. } => None,\n                })\n                .collect::<Vec<_>>();\n            builder.content(text_parts.join(\" \"))\n        }\n        ChatMessage::Assistant(content, tool_calls) => {\n            builder.role(MessageRole::Assistant);\n\n            let mut content_list: Vec<MessageContent> = Vec::new();\n\n            if let Some(content) = content.as_ref() {\n                content_list.push(content.clone().into());\n            }\n\n            if let Some(tool_calls) = tool_calls.as_ref() {\n                for tool_call in tool_calls {\n                    let tool_call = ToolUseBuilder::default()\n                        .id(tool_call.id())\n                        .name(tool_call.name())\n                        .input(tool_call.args().and_then(|v| v.parse::<Value>().ok()))\n                        .build()?;\n\n                    content_list.push(tool_call.into());\n                }\n            }\n\n            if content_list.is_empty() {\n                return Ok(None);\n            }\n\n            let content_list = MessageContentList(content_list);\n\n            builder.content(content_list)\n        }\n        ChatMessage::Reasoning(_) => return Ok(None),\n    };\n\n    builder.build().context(\"Failed to build message\").map(Some)\n}\n\nfn tools_to_anthropic(\n    spec: &ToolSpec,\n) -> Result<serde_json::value::Map<String, serde_json::Value>> {\n    let mut map = json!({\n        \"name\": &spec.name,\n        \"description\": &spec.description,\n    })\n    .as_object_mut()\n    .context(\"Failed to build tool\")?\n    .to_owned();\n\n    let schema = AnthropicToolSchema::try_from(spec)\n        .context(\"tool schema must be Anthropic compatible\")?\n        .into_value();\n\n    map.insert(\"input_schema\".to_string(), schema);\n\n    Ok(map)\n}\n\n#[cfg(test)]\nmod tests {\n\n    use super::*;\n    use schemars::{JsonSchema, schema_for};\n  
  use swiftide_core::{\n        AgentContext, Tool,\n        chat_completion::{ChatCompletionRequest, ChatMessage},\n    };\n    use wiremock::{\n        Mock, MockServer, ResponseTemplate,\n        matchers::{body_partial_json, method, path},\n    };\n\n    #[derive(Clone)]\n    struct FakeTool();\n\n    #[derive(Clone)]\n    struct AlphaTool();\n\n    #[derive(JsonSchema, serde::Serialize, serde::Deserialize)]\n    struct LocationArgs {\n        location: String,\n    }\n\n    #[derive(JsonSchema, serde::Serialize, serde::Deserialize)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[derive(JsonSchema, serde::Serialize, serde::Deserialize)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    #[async_trait]\n    impl Tool for FakeTool {\n        async fn invoke(\n            &self,\n            _agent_context: &dyn AgentContext,\n            _tool_call: &ToolCall,\n        ) -> std::result::Result<\n            swiftide_core::chat_completion::ToolOutput,\n            swiftide_core::chat_completion::errors::ToolError,\n        > {\n            todo!()\n        }\n\n        fn name(&self) -> std::borrow::Cow<'_, str> {\n            \"get_weather\".into()\n        }\n\n        fn tool_spec(&self) -> ToolSpec {\n            ToolSpec::builder()\n                .description(\"Gets the weather\")\n                .name(\"get_weather\")\n                
.parameters_schema(schema_for!(LocationArgs))\n                .build()\n                .unwrap()\n        }\n    }\n\n    #[async_trait]\n    impl Tool for AlphaTool {\n        async fn invoke(\n            &self,\n            _agent_context: &dyn AgentContext,\n            _tool_call: &ToolCall,\n        ) -> std::result::Result<\n            swiftide_core::chat_completion::ToolOutput,\n            swiftide_core::chat_completion::errors::ToolError,\n        > {\n            todo!()\n        }\n\n        fn name(&self) -> std::borrow::Cow<'_, str> {\n            \"alpha_tool\".into()\n        }\n\n        fn tool_spec(&self) -> ToolSpec {\n            ToolSpec::builder()\n                .name(\"alpha_tool\")\n                .description(\"Alpha tool\")\n                .parameters_schema(schemars::schema_for!(LocationArgs))\n                .build()\n                .unwrap()\n        }\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_without_tools() {\n        // Start a wiremock server\n        let mock_server = MockServer::start().await;\n\n        // Create a mock response\n        let mock_response = ResponseTemplate::new(200).set_body_json(serde_json::json!({\n            \"content\": [{\"type\": \"text\", \"text\": \"mocked response\"}]\n        }));\n\n        // Mock the expected endpoint\n        Mock::given(method(\"POST\"))\n            .and(path(\"/v1/messages\")) // Adjust path to match expected endpoint\n            .respond_with(mock_response)\n            .mount(&mock_server)\n            .await;\n\n        let client = async_anthropic::Client::builder()\n            .base_url(mock_server.uri())\n            .build()\n            .unwrap();\n\n        // Build an Anthropic client with the mock server's URL\n        let mut client_builder = Anthropic::builder();\n        client_builder.client(client);\n        let client = client_builder.build().unwrap();\n\n        // Prepare a sample request\n        let request = 
ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hello\".into())])\n            .build()\n            .unwrap();\n\n        // Call the complete method\n        let result = client.complete(&request).await.unwrap();\n\n        // Assert the result\n        assert_eq!(result.message, Some(\"mocked response\".into()));\n        assert!(result.tool_calls.is_none());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_with_tools() {\n        // Start a wiremock server\n        let mock_server = MockServer::start().await;\n\n        // Create a mock response\n        let mock_response = ResponseTemplate::new(200).set_body_json(serde_json::json!({\n            \"id\": \"msg_016zKNb88WhhgBQXhSaQf1rs\",\n            \"content\": [\n            {\n                \"type\": \"text\",\n                \"text\": \"I'll check the current weather in San Francisco, CA for you.\"\n            },\n            {\n                \"type\": \"tool_use\",\n                \"id\": \"toolu_01E1yxpxXU4hBgCMLzPL1FuR\",\n                \"input\": {\n                \"location\": \"San Francisco, CA\"\n                },\n                \"name\": \"get_weather\"\n            }\n            ],\n            \"model\": \"claude-3-5-sonnet-20241022\",\n            \"stop_reason\": \"tool_use\",\n            \"stop_sequence\": null,\n            \"usage\": {\n            \"input_tokens\": 403,\n            \"output_tokens\": 71\n            }\n        }));\n\n        // Mock the expected endpoint\n        Mock::given(method(\"POST\"))\n            .and(path(\"/v1/messages\")) // Adjust path to match expected endpoint\n            .respond_with(mock_response)\n            .mount(&mock_server)\n            .await;\n\n        let client = async_anthropic::Client::builder()\n            .base_url(mock_server.uri())\n            .build()\n            .unwrap();\n\n        // Build an Anthropic client with the mock server's URL\n        let mut 
client_builder = Anthropic::builder();\n        client_builder.client(client);\n        let client = client_builder.build().unwrap();\n\n        // Prepare a sample request\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hello\".into())])\n            .tool_specs([FakeTool().tool_spec()])\n            .build()\n            .unwrap();\n\n        // Call the complete method\n        let result = client.complete(&request).await.unwrap();\n\n        // Assert the result\n        assert_eq!(\n            result.message,\n            Some(\"I'll check the current weather in San Francisco, CA for you.\".into())\n        );\n        assert!(result.tool_calls.is_some());\n\n        let Some(tool_call) = result.tool_calls.and_then(|f| f.first().cloned()) else {\n            panic!(\"No tool call found\")\n        };\n        assert_eq!(tool_call.name(), \"get_weather\");\n        assert_eq!(\n            tool_call.args(),\n            Some(\n                json!({\"location\": \"San Francisco, CA\"})\n                    .to_string()\n                    .as_str()\n            )\n        );\n    }\n\n    #[test]\n    fn test_build_request_orders_tools_deterministically() {\n        let client = Anthropic::builder().build().unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hello\".into())])\n            .tool_specs([FakeTool().tool_spec(), AlphaTool().tool_spec()])\n            .build()\n            .unwrap();\n\n        let built = client.build_request(&request).unwrap().build().unwrap();\n        let tool_names = built\n            .tools\n            .expect(\"tools present\")\n            .into_iter()\n            .map(|tool| {\n                tool.get(\"name\")\n                    .and_then(serde_json::Value::as_str)\n                    .expect(\"tool name\")\n                    .to_owned()\n            })\n            .collect::<Vec<_>>();\n\n   
     assert_eq!(tool_names, vec![\"alpha_tool\", \"get_weather\"]);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_with_system_prompt() {\n        // Start a wiremock server\n        let mock_server = MockServer::start().await;\n\n        // Create a mock response\n        let mock_response = ResponseTemplate::new(200).set_body_json(serde_json::json!({\n            \"content\": [{\"type\": \"text\", \"text\": \"Response with system prompt\"}],\n            \"usage\": {\n                \"input_tokens\": 19,\n                \"output_tokens\": 10,\n            }\n        }));\n\n        // Mock the expected endpoint\n        Mock::given(method(\"POST\"))\n            .and(path(\"/v1/messages\")) // Adjust path to match expected endpoint\n            .and(body_partial_json(json!({\n                \"system\": \"System message\",\n                \"messages\":[{\"role\":\"user\",\"content\":[{\"type\":\"text\",\"text\":\"Hello\"}]}]\n            })))\n            .respond_with(mock_response)\n            .mount(&mock_server)\n            .await;\n\n        let client = async_anthropic::Client::builder()\n            .base_url(mock_server.uri())\n            .build()\n            .unwrap();\n\n        // Build an Anthropic client with the mock server's URL\n        let mut client_builder = Anthropic::builder();\n        client_builder.client(client);\n        let client = client_builder.build().unwrap();\n\n        // Prepare a sample request with a system message\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![\n                ChatMessage::System(\"System message\".into()),\n                ChatMessage::User(\"Hello\".into()),\n            ])\n            .build()\n            .unwrap();\n\n        // Call the complete method\n        let response = client.complete(&request).await.unwrap();\n\n        // Assert the result\n        assert_eq!(response.message, Some(\"Response with system 
prompt\".into()));\n\n        let usage = response.usage.unwrap();\n        assert_eq!(usage.prompt_tokens, 19);\n        assert_eq!(usage.completion_tokens, 10);\n        assert_eq!(usage.total_tokens, 29);\n    }\n\n    #[test]\n    fn test_tools_to_anthropic() {\n        let tool_spec = ToolSpec::builder()\n            .description(\"Gets the weather\")\n            .name(\"get_weather\")\n            .parameters_schema(schema_for!(LocationArgs))\n            .build()\n            .unwrap();\n\n        let result = tools_to_anthropic(&tool_spec).unwrap();\n        let expected_schema = tool_spec.strict_parameters_schema().unwrap().into_json();\n        let expected = json!({\n            \"name\": \"get_weather\",\n            \"description\": \"Gets the weather\",\n            \"input_schema\": expected_schema,\n        });\n\n        assert_eq!(serde_json::Value::Object(result), expected);\n    }\n\n    #[test]\n    fn test_tools_to_anthropic_preserves_optional_nested_fields() {\n        let tool_spec = ToolSpec::builder()\n            .description(\"Creates a comment\")\n            .name(\"create_comment\")\n            .parameters_schema(schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let result = tools_to_anthropic(&tool_spec).unwrap();\n        let input_schema = result\n            .get(\"input_schema\")\n            .and_then(Value::as_object)\n            .expect(\"anthropic tool should contain input_schema\");\n\n        assert_eq!(\n            input_schema.get(\"type\"),\n            Some(&Value::String(\"object\".into()))\n        );\n        assert_eq!(\n            input_schema.get(\"required\"),\n            Some(&Value::Array(vec![Value::String(\"request\".into())]))\n        );\n\n        let nested_ref = input_schema[\"properties\"][\"request\"][\"$ref\"]\n            .as_str()\n            .expect(\"nested request should be referenced\");\n        let nested_name = nested_ref\n            
.rsplit('/')\n            .next()\n            .expect(\"nested request ref name\");\n        assert!(input_schema[\"$defs\"][nested_name].get(\"required\").is_none());\n    }\n\n    #[test]\n    fn test_build_request_groups_adjacent_tool_outputs() {\n        let first_tool = ToolCall::builder()\n            .id(\"tool_1\")\n            .name(\"shell_command\")\n            .args(\"{\\\"cmd\\\":\\\"pwd\\\"}\")\n            .build()\n            .unwrap();\n        let second_tool = ToolCall::builder()\n            .id(\"tool_2\")\n            .name(\"git\")\n            .args(\"{\\\"command\\\":\\\"status\\\"}\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![\n                ChatMessage::Assistant(None, Some(vec![first_tool.clone(), second_tool.clone()])),\n                ChatMessage::new_tool_output(\n                    first_tool,\n                    ToolOutput::Text(\"pwd output\".to_string()),\n                ),\n                ChatMessage::new_tool_output(\n                    second_tool,\n                    ToolOutput::Text(\"git output\".to_string()),\n                ),\n            ])\n            .build()\n            .unwrap();\n\n        let client = Anthropic::builder().build().unwrap();\n        let built = client.build_request(&request).unwrap().build().unwrap();\n\n        assert_eq!(built.messages.len(), 2);\n        assert_eq!(built.messages[1].content.len(), 2);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/anthropic/mod.rs",
    "content": "use std::{pin::Pin, sync::Arc};\n\nuse derive_builder::Builder;\nuse swiftide_core::chat_completion::Usage;\n\npub mod chat_completion;\npub mod simple_prompt;\nmod tool_schema;\n\n#[derive(Builder, Clone)]\npub struct Anthropic {\n    #[builder(\n        default = Arc::new(async_anthropic::Client::default()),\n        setter(custom)\n    )]\n    client: Arc<async_anthropic::Client>,\n\n    #[builder(default)]\n    default_options: Options,\n\n    #[cfg(feature = \"metrics\")]\n    #[builder(default)]\n    /// Optional metadata to attach to metrics emitted by this client.\n    metric_metadata: Option<std::collections::HashMap<String, String>>,\n\n    /// A callback function that is called when usage information is available.\n    #[builder(default, setter(custom))]\n    #[allow(clippy::type_complexity)]\n    on_usage: Option<\n        Arc<\n            dyn for<'a> Fn(\n                    &'a Usage,\n                ) -> Pin<\n                    Box<dyn std::future::Future<Output = anyhow::Result<()>> + Send + 'a>,\n                > + Send\n                + Sync,\n        >,\n    >,\n}\n\nimpl std::fmt::Debug for Anthropic {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Anthropic\")\n            .field(\"client\", &self.client)\n            .field(\"default_options\", &self.default_options)\n            .finish()\n    }\n}\n\n#[derive(Debug, Clone, Builder)]\n#[builder(setter(into, strip_option))]\npub struct Options {\n    #[builder(default)]\n    pub prompt_model: String,\n}\n\nimpl Default for Options {\n    fn default() -> Self {\n        Self {\n            prompt_model: \"claude-3-5-sonnet-20241022\".to_string(),\n        }\n    }\n}\n\nimpl Anthropic {\n    pub fn builder() -> AnthropicBuilder {\n        AnthropicBuilder::default()\n    }\n}\n\nimpl AnthropicBuilder {\n    /// Adds a callback function that will be called when usage information is available.\n    pub fn on_usage<F>(&mut 
self, func: F) -> &mut Self\n    where\n        F: Fn(&Usage) -> anyhow::Result<()> + Send + Sync + 'static,\n    {\n        let func = Arc::new(func);\n        self.on_usage = Some(Some(Arc::new(move |usage: &Usage| {\n            let func = func.clone();\n            Box::pin(async move { func(usage) })\n        })));\n\n        self\n    }\n\n    /// Adds an asynchronous callback function that will be called when usage information is\n    /// available.\n    pub fn on_usage_async<F>(&mut self, func: F) -> &mut Self\n    where\n        F: for<'a> Fn(\n                &'a Usage,\n            )\n                -> Pin<Box<dyn std::future::Future<Output = anyhow::Result<()>> + Send + 'a>>\n            + Send\n            + Sync\n            + 'static,\n    {\n        let func = Arc::new(func);\n        self.on_usage = Some(Some(Arc::new(move |usage: &Usage| {\n            let func = func.clone();\n            Box::pin(async move { func(usage).await })\n        })));\n\n        self\n    }\n\n    /// Sets the client for the `Anthropic` instance.\n    ///\n    /// See the `async_anthropic::Client` documentation for more information.\n    ///\n    /// # Parameters\n    /// - `client`: The `Anthropic` client to set.\n    ///\n    /// # Returns\n    /// A mutable reference to the `AnthropicBuilder`.\n    pub fn client(&mut self, client: async_anthropic::Client) -> &mut Self {\n        self.client = Some(Arc::new(client));\n        self\n    }\n\n    /// Sets the default prompt model for the `Anthropic` instance.\n    ///\n    /// # Parameters\n    /// - `model`: The prompt model to set.\n    ///\n    /// # Returns\n    /// A mutable reference to the `AnthropicBuilder`.\n    pub fn default_prompt_model(&mut self, model: impl Into<String>) -> &mut Self {\n        if let Some(options) = self.default_options.as_mut() {\n            options.prompt_model = model.into();\n        } else {\n            self.default_options = Some(Options {\n                prompt_model: 
model.into(),\n            });\n        }\n        self\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/anthropic/simple_prompt.rs",
    "content": "use anyhow::Context as _;\nuse async_anthropic::{errors::AnthropicError, types::CreateMessagesRequestBuilder};\nuse async_trait::async_trait;\nuse swiftide_core::{\n    chat_completion::{Usage, errors::LanguageModelError},\n    indexing::SimplePrompt,\n};\n\n#[cfg(feature = \"metrics\")]\nuse swiftide_core::metrics::emit_usage;\n\nuse super::Anthropic;\n\n#[async_trait]\nimpl SimplePrompt for Anthropic {\n    #[tracing::instrument(skip_all, err)]\n    async fn prompt(\n        &self,\n        prompt: swiftide_core::prompt::Prompt,\n    ) -> Result<String, LanguageModelError> {\n        let model = &self.default_options.prompt_model;\n\n        let request = CreateMessagesRequestBuilder::default()\n            .model(model)\n            .messages(vec![prompt.render()?.into()])\n            .build()\n            .map_err(LanguageModelError::permanent)?;\n\n        tracing::debug!(\n            model = &model,\n            messages =\n                serde_json::to_string_pretty(&request).map_err(LanguageModelError::permanent)?,\n            \"[SimplePrompt] Request to anthropic\"\n        );\n\n        let response = self.client.messages().create(request).await.map_err(|e| {\n            match &e {\n                AnthropicError::NetworkError(_) => LanguageModelError::TransientError(e.into()),\n                // TODO: The Rust Anthropic client is not documented well, we should figure out\n                // which of these errors are client errors and which are server errors.\n                // And which would be the ContextLengthExceeded error\n                // For now, we'll just map all of them to client errors so we get feedback.\n                _ => LanguageModelError::PermanentError(e.into()),\n            }\n        })?;\n\n        tracing::debug!(\n            response =\n                serde_json::to_string_pretty(&response).map_err(LanguageModelError::permanent)?,\n            \"[SimplePrompt] Response from anthropic\"\n        );\n\n  
      if let Some(usage) = response.usage.as_ref() {\n            let usage = Usage {\n                prompt_tokens: usage.input_tokens.unwrap_or_default(),\n                completion_tokens: usage.output_tokens.unwrap_or_default(),\n                total_tokens: (usage.input_tokens.unwrap_or_default()\n                    + usage.output_tokens.unwrap_or_default()),\n                details: None,\n            };\n\n            if let Some(callback) = &self.on_usage {\n                callback(&usage).await?;\n            }\n\n            #[cfg(feature = \"metrics\")]\n            {\n                emit_usage(\n                    model,\n                    usage.prompt_tokens.into(),\n                    usage.completion_tokens.into(),\n                    usage.total_tokens.into(),\n                    self.metric_metadata.as_ref(),\n                );\n            }\n        }\n\n        let message = response\n            .messages()\n            .into_iter()\n            .next()\n            .context(\"No messages in response\")\n            .map_err(LanguageModelError::permanent)?;\n\n        message\n            .text()\n            .context(\"No text in response\")\n            .map_err(LanguageModelError::permanent)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use wiremock::{\n        Mock, MockServer, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_simple_prompt_with_mock() {\n        // Start a WireMock server\n        let mock_server = MockServer::start().await;\n\n        // Create a mock response\n        let mock_response = ResponseTemplate::new(200).set_body_json(serde_json::json!({\n            \"content\": [{\"type\": \"text\", \"text\": \"mocked response\"}]\n        }));\n\n        // Mock the expected endpoint\n        Mock::given(method(\"POST\"))\n            .and(path(\"/v1/messages\")) // Adjust path to match expected endpoint\n            
.respond_with(mock_response)\n            .mount(&mock_server)\n            .await;\n\n        let client = async_anthropic::Client::builder()\n            .base_url(mock_server.uri())\n            .build()\n            .unwrap();\n\n        // Build an Anthropic client with the mock server's URL\n        let mut client_builder = Anthropic::builder();\n        client_builder.client(client);\n        let client = client_builder.build().unwrap();\n\n        // Call the prompt method\n        let result = client.prompt(\"hello\".into()).await.unwrap();\n\n        // Assert the result\n        assert_eq!(result, \"mocked response\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/anthropic/tool_schema.rs",
    "content": "use serde_json::Value;\nuse swiftide_core::chat_completion::{ToolSpec, ToolSpecError};\n\npub(super) struct AnthropicToolSchema(Value);\n\nimpl AnthropicToolSchema {\n    pub(super) fn into_value(self) -> Value {\n        self.0\n    }\n}\n\nimpl TryFrom<&ToolSpec> for AnthropicToolSchema {\n    type Error = ToolSpecError;\n\n    fn try_from(spec: &ToolSpec) -> Result<Self, Self::Error> {\n        Ok(Self(spec.canonical_parameters_schema_json()?))\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/aws_bedrock_v2/chat_completion.rs",
    "content": "use std::collections::HashMap;\n\nuse anyhow::Context as _;\nuse async_trait::async_trait;\nuse aws_sdk_bedrockruntime::{\n    operation::converse::ConverseOutput,\n    types::{\n        AudioBlock, AudioFormat, AudioSource, AutoToolChoice, ContentBlock, ContentBlockDelta,\n        ContentBlockStart, ConversationRole, ConverseOutput as ConverseResult,\n        ConverseStreamOutput, DocumentBlock, DocumentFormat, DocumentSource, ImageBlock,\n        ImageFormat, ImageSource, InferenceConfiguration, Message, ReasoningContentBlock,\n        ReasoningContentBlockDelta, ReasoningTextBlock, S3Location, StopReason, SystemContentBlock,\n        Tool, ToolChoice, ToolConfiguration, ToolInputSchema, ToolResultBlock,\n        ToolResultContentBlock, ToolResultStatus, ToolSpecification, ToolUseBlock, VideoBlock,\n        VideoFormat, VideoSource,\n    },\n};\nuse aws_smithy_json::{\n    deserialize::{json_token_iter, token::expect_document},\n    serialize::JsonValueWriter,\n};\nuse aws_smithy_types::{Blob, Document};\nuse base64::Engine as _;\nuse futures_util::stream;\n#[cfg(feature = \"langfuse\")]\nuse serde_json::json;\nuse swiftide_core::{\n    ChatCompletion, ChatCompletionStream,\n    chat_completion::{\n        ChatCompletionRequest, ChatCompletionResponse, ChatMessage, ChatMessageContentPart,\n        ChatMessageContentSource, ReasoningItem, ToolCall, ToolOutput, ToolSpec,\n        errors::LanguageModelError,\n    },\n};\nuse tracing_futures::Instrument;\n\nuse super::tool_schema::AwsBedrockToolSchema;\nuse super::{AwsBedrock, Options};\n\ntype ConverseInputParts = (\n    Vec<Message>,\n    Option<Vec<SystemContentBlock>>,\n    Option<InferenceConfiguration>,\n    Option<ToolConfiguration>,\n);\n\ntype ExtractedMessage = (Option<String>, Option<Vec<ToolCall>>, Vec<ReasoningItem>);\n\n#[async_trait]\nimpl ChatCompletion for AwsBedrock {\n    #[cfg_attr(not(feature = \"langfuse\"), tracing::instrument(skip_all, err))]\n    #[cfg_attr(\n        feature = 
\"langfuse\",\n        tracing::instrument(skip_all, err, fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        let model = self.prompt_model()?;\n        #[cfg(feature = \"langfuse\")]\n        let tracking_request = Some(json!({\n            \"model\": model,\n            \"messages\": request.messages(),\n            \"tools_spec\": request.tools_spec(),\n        }));\n        #[cfg(not(feature = \"langfuse\"))]\n        let tracking_request: Option<serde_json::Value> = None;\n\n        let (messages, system, inference_config, tool_config) =\n            match build_converse_input(request, &self.default_options) {\n                Ok(parts) => parts,\n                Err(error) => {\n                    Self::track_failure(\n                        model,\n                        tracking_request.as_ref(),\n                        None::<&serde_json::Value>,\n                        &error,\n                    );\n                    return Err(error);\n                }\n            };\n        let additional_model_request_fields =\n            match super::additional_model_request_fields_from_options(model, &self.default_options)\n            {\n                Ok(fields) => fields,\n                Err(error) => {\n                    Self::track_failure(\n                        model,\n                        tracking_request.as_ref(),\n                        None::<&serde_json::Value>,\n                        &error,\n                    );\n                    return Err(error);\n                }\n            };\n\n        tracing::debug!(\n            model = model,\n            inference_config = ?inference_config,\n            has_tool_config = tool_config.is_some(),\n            \"[ChatCompletion] Request to bedrock converse\"\n        );\n\n        let response = match self\n            .client\n  
          .converse(\n                model,\n                messages,\n                system,\n                inference_config,\n                tool_config,\n                None,\n                additional_model_request_fields,\n                self.default_options\n                    .additional_model_response_field_paths\n                    .clone(),\n            )\n            .await\n        {\n            Ok(response) => response,\n            Err(error) => {\n                Self::track_failure(\n                    model,\n                    tracking_request.as_ref(),\n                    None::<&serde_json::Value>,\n                    &error,\n                );\n                return Err(error);\n            }\n        };\n\n        tracing::debug!(response = ?response, \"[ChatCompletion] Response from bedrock converse\");\n\n        let completion = match response_to_chat_completion(&response) {\n            Ok(completion) => completion,\n            Err(error) => {\n                Self::track_failure(\n                    model,\n                    tracking_request.as_ref(),\n                    None::<&serde_json::Value>,\n                    &error,\n                );\n                return Err(error);\n            }\n        };\n\n        if let Some(error) = super::context_length_exceeded_if_empty(\n            completion.message.is_some(),\n            completion.tool_calls.is_some(),\n            completion\n                .reasoning\n                .as_ref()\n                .is_some_and(|reasoning| !reasoning.is_empty()),\n            Some(response.stop_reason()),\n        ) {\n            Self::track_failure(model, tracking_request.as_ref(), Some(&completion), &error);\n            return Err(error);\n        }\n\n        self.track_completion(\n            model,\n            completion.usage.as_ref(),\n            tracking_request.as_ref(),\n            Some(&completion),\n        )\n        .await?;\n\n        
Ok(completion)\n    }\n\n    #[cfg_attr(not(feature = \"langfuse\"), tracing::instrument(skip_all))]\n    #[cfg_attr(\n        feature = \"langfuse\",\n        tracing::instrument(skip_all, fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        let model = match self.prompt_model() {\n            Ok(model) => model.to_string(),\n            Err(error) => return error.into(),\n        };\n        #[cfg(feature = \"langfuse\")]\n        let tracking_request = Some(json!({\n            \"model\": model,\n            \"messages\": request.messages(),\n            \"tools_spec\": request.tools_spec(),\n        }));\n        #[cfg(not(feature = \"langfuse\"))]\n        let tracking_request: Option<serde_json::Value> = None;\n\n        let (messages, system, inference_config, tool_config) =\n            match build_converse_input(request, &self.default_options) {\n                Ok(parts) => parts,\n                Err(error) => {\n                    Self::track_failure(\n                        &model,\n                        tracking_request.as_ref(),\n                        None::<&serde_json::Value>,\n                        &error,\n                    );\n                    return error.into();\n                }\n            };\n        let additional_model_request_fields =\n            match super::additional_model_request_fields_from_options(&model, &self.default_options)\n            {\n                Ok(fields) => fields,\n                Err(error) => {\n                    Self::track_failure(\n                        &model,\n                        tracking_request.as_ref(),\n                        None::<&serde_json::Value>,\n                        &error,\n                    );\n                    return error.into();\n                }\n            };\n\n        let stream_output = match self\n            .client\n            
.converse_stream(\n                &model,\n                messages,\n                system,\n                inference_config,\n                tool_config,\n                additional_model_request_fields,\n                self.default_options\n                    .additional_model_response_field_paths\n                    .clone(),\n            )\n            .await\n        {\n            Ok(stream_output) => stream_output,\n            Err(error) => {\n                Self::track_failure(\n                    &model,\n                    tracking_request.as_ref(),\n                    None::<&serde_json::Value>,\n                    &error,\n                );\n                return error.into();\n            }\n        };\n\n        let self_for_stream = self.clone();\n\n        let event_stream = stream_output.stream;\n        let stream = stream::unfold(\n            (\n                event_stream,\n                ChatCompletionResponse::default(),\n                None::<StopReason>,\n                false,\n                model,\n                tracking_request,\n            ),\n            move |(\n                mut event_stream,\n                mut response,\n                mut stop_reason,\n                finished,\n                model,\n                tracking_request,\n            )| {\n                let self_for_stream = self_for_stream.clone();\n                async move {\n                    if finished {\n                        return None;\n                    }\n\n                    match event_stream.recv().await {\n                        Ok(Some(event)) => {\n                            apply_stream_event(&event, &mut response, &mut stop_reason);\n                            Some((\n                                Ok(response.clone()),\n                                (\n                                    event_stream,\n                                    response,\n                                    stop_reason,\n     
                               false,\n                                    model,\n                                    tracking_request,\n                                ),\n                            ))\n                        }\n                        Ok(None) => {\n                            if let Some(error) = super::context_length_exceeded_if_empty(\n                                response.message.is_some(),\n                                response.tool_calls.is_some(),\n                                response\n                                    .reasoning\n                                    .as_ref()\n                                    .is_some_and(|reasoning| !reasoning.is_empty()),\n                                stop_reason.as_ref(),\n                            ) {\n                                Self::track_failure(\n                                    &model,\n                                    tracking_request.as_ref(),\n                                    Some(&response),\n                                    &error,\n                                );\n                                return Some((\n                                    Err(error),\n                                    (\n                                        event_stream,\n                                        response,\n                                        stop_reason,\n                                        true,\n                                        model,\n                                        tracking_request,\n                                    ),\n                                ));\n                            }\n\n                            if let Err(error) = self_for_stream\n                                .track_completion(\n                                    &model,\n                                    response.usage.as_ref(),\n                                    tracking_request.as_ref(),\n                                    Some(&response),\n  
                              )\n                                .await\n                            {\n                                return Some((\n                                    Err(error),\n                                    (\n                                        event_stream,\n                                        response,\n                                        stop_reason,\n                                        true,\n                                        model,\n                                        tracking_request,\n                                    ),\n                                ));\n                            }\n\n                            Some((\n                                Ok(response.clone()),\n                                (\n                                    event_stream,\n                                    response,\n                                    stop_reason,\n                                    true,\n                                    model,\n                                    tracking_request,\n                                ),\n                            ))\n                        }\n                        Err(error) => {\n                            let error =\n                                super::converse_stream_output_error_to_language_model_error(error);\n                            Self::track_failure(\n                                &model,\n                                tracking_request.as_ref(),\n                                Some(&response),\n                                &error,\n                            );\n                            Some((\n                                Err(error),\n                                (\n                                    event_stream,\n                                    response,\n                                    stop_reason,\n                                    true,\n                                    model,\n                  
                  tracking_request,\n                                ),\n                            ))\n                        }\n                    }\n                }\n            },\n        );\n\n        let span = if cfg!(feature = \"langfuse\") {\n            tracing::info_span!(\"stream\", langfuse.type = \"GENERATION\")\n        } else {\n            tracing::info_span!(\"stream\")\n        };\n\n        Box::pin(Instrument::instrument(stream, span))\n    }\n}\n\nfn build_converse_input(\n    request: &ChatCompletionRequest<'_>,\n    options: &Options,\n) -> Result<ConverseInputParts, LanguageModelError> {\n    let source_messages = request.messages();\n    let mut messages = Vec::with_capacity(source_messages.len());\n    let mut system = Vec::new();\n    let mut source_messages = source_messages.iter().peekable();\n\n    while let Some(message) = source_messages.next() {\n        match message {\n            ChatMessage::System(text) => {\n                system.push(SystemContentBlock::Text(text.clone()));\n            }\n            ChatMessage::Summary(text) | ChatMessage::User(text) => {\n                messages.push(user_message_from_text(text.clone())?);\n            }\n            ChatMessage::UserWithParts(parts) => messages.push(user_message_from_parts(parts)?),\n            ChatMessage::Assistant(content, maybe_tool_calls) => {\n                let mut blocks = Vec::with_capacity(\n                    usize::from(content.as_ref().is_some_and(|text| !text.is_empty()))\n                        + maybe_tool_calls.as_ref().map_or(0, Vec::len),\n                );\n\n                if let Some(content) = content.as_ref()\n                    && !content.is_empty()\n                {\n                    blocks.push(ContentBlock::Text(content.clone()));\n                }\n\n                if let Some(tool_calls) = maybe_tool_calls.as_ref() {\n                    for tool_call in tool_calls {\n                        let input =\n               
             tool_call_args_to_document(tool_call.args()).with_context(|| {\n                                format!(\"Invalid JSON args for tool call {}\", tool_call.name())\n                            })?;\n                        let tool_use = ToolUseBlock::builder()\n                            .tool_use_id(tool_call.id())\n                            .name(tool_call.name())\n                            .input(input)\n                            .build()\n                            .map_err(LanguageModelError::permanent)?;\n                        blocks.push(ContentBlock::ToolUse(tool_use));\n                    }\n                }\n\n                if !blocks.is_empty() {\n                    messages.push(message_from_blocks(ConversationRole::Assistant, blocks)?);\n                }\n            }\n            ChatMessage::ToolOutput(tool_call, output) => {\n                let mut blocks = vec![tool_output_to_result_block(tool_call, output)?];\n\n                while let Some(ChatMessage::ToolOutput(tool_call, output)) = source_messages.peek()\n                {\n                    blocks.push(tool_output_to_result_block(tool_call, output)?);\n                    source_messages.next();\n                }\n\n                messages.push(message_from_blocks(ConversationRole::User, blocks)?);\n            }\n            ChatMessage::Reasoning(item) => {\n                if let Some(reasoning_message) = assistant_reasoning_message_from_item(item)? 
{\n                    messages.push(reasoning_message);\n                }\n            }\n        }\n    }\n\n    if messages.is_empty() {\n        return Err(LanguageModelError::permanent(\n            \"Bedrock Converse requires at least one non-system message\",\n        ));\n    }\n\n    Ok((\n        messages,\n        (!system.is_empty()).then_some(system),\n        super::inference_config_from_options(options),\n        tool_config_from_specs(request.tools_spec().iter(), options.tool_strict_enabled())?,\n    ))\n}\n\nfn user_message_from_text(text: String) -> Result<Message, LanguageModelError> {\n    message_from_blocks(ConversationRole::User, vec![ContentBlock::Text(text)])\n}\n\nfn user_message_from_parts(\n    parts: &[ChatMessageContentPart],\n) -> Result<Message, LanguageModelError> {\n    let mut blocks = Vec::with_capacity(parts.len());\n    let mut has_text = false;\n    let mut has_document = false;\n\n    for part in parts {\n        match part {\n            ChatMessageContentPart::Text { text } => {\n                if !text.is_empty() {\n                    blocks.push(ContentBlock::Text(text.clone()));\n                    has_text = true;\n                }\n            }\n            ChatMessageContentPart::Image { source, format } => {\n                blocks.push(ContentBlock::Image(image_block_from_part(\n                    source,\n                    format.as_deref(),\n                )?));\n            }\n            ChatMessageContentPart::Document {\n                source,\n                format,\n                name,\n            } => {\n                blocks.push(ContentBlock::Document(document_block_from_part(\n                    source,\n                    format.as_deref(),\n                    name.as_deref(),\n                )?));\n                has_document = true;\n            }\n            ChatMessageContentPart::Audio { source, format } => {\n                
blocks.push(ContentBlock::Audio(audio_block_from_part(\n                    source,\n                    format.as_deref(),\n                )?));\n            }\n            ChatMessageContentPart::Video { source, format } => {\n                blocks.push(ContentBlock::Video(video_block_from_part(\n                    source,\n                    format.as_deref(),\n                )?));\n            }\n        }\n    }\n\n    if blocks.is_empty() {\n        return Err(LanguageModelError::permanent(\n            \"UserWithParts requires at least one content part\",\n        ));\n    }\n\n    if has_document && !has_text {\n        return Err(LanguageModelError::permanent(\n            \"Bedrock document parts require at least one text part in the same message\",\n        ));\n    }\n\n    message_from_blocks(ConversationRole::User, blocks)\n}\n\nfn image_block_from_part(\n    source: &ChatMessageContentSource,\n    format: Option<&str>,\n) -> Result<ImageBlock, LanguageModelError> {\n    let format = image_format_from_source(format, source)?;\n    let source = image_source_from_content_source(source)?;\n\n    ImageBlock::builder()\n        .format(format)\n        .source(source)\n        .build()\n        .map_err(LanguageModelError::permanent)\n}\n\nfn document_block_from_part(\n    source: &ChatMessageContentSource,\n    format: Option<&str>,\n    name: Option<&str>,\n) -> Result<DocumentBlock, LanguageModelError> {\n    let format = document_format_from_source(format, source)?;\n    let source = document_source_from_content_source(source)?;\n    let name = name.unwrap_or(\"document\");\n\n    DocumentBlock::builder()\n        .format(format)\n        .name(name)\n        .source(source)\n        .build()\n        .map_err(LanguageModelError::permanent)\n}\n\nfn audio_block_from_part(\n    source: &ChatMessageContentSource,\n    format: Option<&str>,\n) -> Result<AudioBlock, LanguageModelError> {\n    let format = audio_format_from_source(format, source)?;\n   
 let source = audio_source_from_content_source(source)?;\n\n    AudioBlock::builder()\n        .format(format)\n        .source(source)\n        .build()\n        .map_err(LanguageModelError::permanent)\n}\n\nfn video_block_from_part(\n    source: &ChatMessageContentSource,\n    format: Option<&str>,\n) -> Result<VideoBlock, LanguageModelError> {\n    let format = video_format_from_source(format, source)?;\n    let source = video_source_from_content_source(source)?;\n\n    VideoBlock::builder()\n        .format(format)\n        .source(source)\n        .build()\n        .map_err(LanguageModelError::permanent)\n}\n\nfn image_source_from_content_source(\n    source: &ChatMessageContentSource,\n) -> Result<ImageSource, LanguageModelError> {\n    source_from_content_source(source, \"image\", ImageSource::Bytes, ImageSource::S3Location)\n}\n\nfn document_source_from_content_source(\n    source: &ChatMessageContentSource,\n) -> Result<DocumentSource, LanguageModelError> {\n    source_from_content_source(\n        source,\n        \"document\",\n        DocumentSource::Bytes,\n        DocumentSource::S3Location,\n    )\n}\n\nfn audio_source_from_content_source(\n    source: &ChatMessageContentSource,\n) -> Result<AudioSource, LanguageModelError> {\n    source_from_content_source(source, \"audio\", AudioSource::Bytes, AudioSource::S3Location)\n}\n\nfn video_source_from_content_source(\n    source: &ChatMessageContentSource,\n) -> Result<VideoSource, LanguageModelError> {\n    source_from_content_source(source, \"video\", VideoSource::Bytes, VideoSource::S3Location)\n}\n\nfn source_from_content_source<T>(\n    source: &ChatMessageContentSource,\n    label: &str,\n    from_bytes: impl Fn(Blob) -> T,\n    from_s3: impl Fn(S3Location) -> T,\n) -> Result<T, LanguageModelError> {\n    match source {\n        ChatMessageContentSource::Bytes { data, .. 
} => Ok(from_bytes(Blob::new(data.clone()))),\n        ChatMessageContentSource::S3 { uri, bucket_owner } => {\n            Ok(from_s3(s3_location(uri, bucket_owner.as_deref())?))\n        }\n        ChatMessageContentSource::Url { url } => {\n            if is_s3_url(url) {\n                Ok(from_s3(s3_location(url, None)?))\n            } else if let Some((_, encoded)) = parse_data_url(url) {\n                Ok(from_bytes(Blob::new(decode_data_url_bytes(encoded)?)))\n            } else {\n                Err(LanguageModelError::permanent(format!(\n                    \"Bedrock {label} source URL must be data: or s3://\"\n                )))\n            }\n        }\n        ChatMessageContentSource::FileId { .. } => Err(LanguageModelError::permanent(format!(\n            \"Bedrock does not support file_id {label} sources\"\n        ))),\n    }\n}\n\nfn image_format_from_source(\n    format: Option<&str>,\n    source: &ChatMessageContentSource,\n) -> Result<ImageFormat, LanguageModelError> {\n    resolve_format(\n        format,\n        source,\n        infer_image_format_from_source,\n        |value| ImageFormat::try_parse(value).ok(),\n        \"image\",\n    )\n}\n\nfn document_format_from_source(\n    format: Option<&str>,\n    source: &ChatMessageContentSource,\n) -> Result<DocumentFormat, LanguageModelError> {\n    resolve_format(\n        format,\n        source,\n        infer_document_format_from_source,\n        |value| DocumentFormat::try_parse(value).ok(),\n        \"document\",\n    )\n}\n\nfn audio_format_from_source(\n    format: Option<&str>,\n    source: &ChatMessageContentSource,\n) -> Result<AudioFormat, LanguageModelError> {\n    resolve_format(\n        format,\n        source,\n        infer_audio_format_from_source,\n        |value| AudioFormat::try_parse(value).ok(),\n        \"audio\",\n    )\n}\n\nfn video_format_from_source(\n    format: Option<&str>,\n    source: &ChatMessageContentSource,\n) -> Result<VideoFormat, 
LanguageModelError> {\n    resolve_format(\n        format,\n        source,\n        infer_video_format_from_source,\n        |value| VideoFormat::try_parse(value).ok(),\n        \"video\",\n    )\n}\n\nfn resolve_format<T>(\n    explicit_format: Option<&str>,\n    source: &ChatMessageContentSource,\n    infer: impl Fn(&ChatMessageContentSource) -> Option<&'static str>,\n    parse: impl Fn(&str) -> Option<T>,\n    label: &str,\n) -> Result<T, LanguageModelError> {\n    let value = explicit_format.or_else(|| infer(source)).ok_or_else(|| {\n        LanguageModelError::permanent(format!(\"Bedrock {label} format is required\"))\n    })?;\n\n    parse(value).ok_or_else(|| {\n        LanguageModelError::permanent(format!(\"Unsupported Bedrock {label} format: {value}\"))\n    })\n}\n\nfn infer_image_format_from_source(source: &ChatMessageContentSource) -> Option<&'static str> {\n    infer_format_from_source(\n        source,\n        IMAGE_MEDIA_TYPE_FORMATS,\n        IMAGE_EXTENSION_FORMATS,\n        None,\n    )\n}\n\nfn infer_document_format_from_source(source: &ChatMessageContentSource) -> Option<&'static str> {\n    infer_format_from_source(\n        source,\n        DOCUMENT_MEDIA_TYPE_FORMATS,\n        DOCUMENT_EXTENSION_FORMATS,\n        Some(\"txt\"),\n    )\n}\n\nfn infer_audio_format_from_source(source: &ChatMessageContentSource) -> Option<&'static str> {\n    infer_format_from_source(\n        source,\n        AUDIO_MEDIA_TYPE_FORMATS,\n        AUDIO_EXTENSION_FORMATS,\n        None,\n    )\n}\n\nfn infer_video_format_from_source(source: &ChatMessageContentSource) -> Option<&'static str> {\n    infer_format_from_source(\n        source,\n        VIDEO_MEDIA_TYPE_FORMATS,\n        VIDEO_EXTENSION_FORMATS,\n        None,\n    )\n}\n\nfn infer_format_from_source(\n    source: &ChatMessageContentSource,\n    media_type_mappings: &[(&'static str, &'static str)],\n    extension_mappings: &[(&'static str, &'static str)],\n    fallback: Option<&'static str>,\n) -> 
Option<&'static str> {\n    match source {\n        ChatMessageContentSource::Bytes { media_type, .. } => media_type\n            .as_deref()\n            .and_then(|media_type| mapped_format(media_type, media_type_mappings))\n            .or(fallback),\n        ChatMessageContentSource::Url { url } => if let Some((media_type, _)) = parse_data_url(url)\n        {\n            mapped_format(media_type, media_type_mappings)\n        } else {\n            extension_from_url(url)\n                .and_then(|extension| mapped_format(extension, extension_mappings))\n        }\n        .or(fallback),\n        ChatMessageContentSource::S3 { uri, .. } => extension_from_url(uri)\n            .and_then(|extension| mapped_format(extension, extension_mappings))\n            .or(fallback),\n        ChatMessageContentSource::FileId { .. } => fallback,\n    }\n}\n\nfn s3_location(uri: &str, bucket_owner: Option<&str>) -> Result<S3Location, LanguageModelError> {\n    let mut builder = S3Location::builder().uri(uri);\n    if let Some(bucket_owner) = bucket_owner {\n        builder = builder.bucket_owner(bucket_owner);\n    }\n\n    builder.build().map_err(LanguageModelError::permanent)\n}\n\nfn is_s3_url(url: &str) -> bool {\n    url.starts_with(\"s3://\")\n}\n\nfn parse_data_url(url: &str) -> Option<(&str, &str)> {\n    let rest = url.strip_prefix(\"data:\")?;\n    let (header, data) = rest.split_once(',')?;\n    let media_type = header.strip_suffix(\";base64\")?;\n    Some((media_type, data))\n}\n\nfn decode_data_url_bytes(encoded: &str) -> Result<Vec<u8>, LanguageModelError> {\n    base64::engine::general_purpose::STANDARD\n        .decode(encoded)\n        .map_err(LanguageModelError::permanent)\n}\n\nfn extension_from_url(url: &str) -> Option<&str> {\n    let without_query = url.split(['?', '#']).next()?;\n    let filename = without_query.rsplit('/').next()?;\n    let (_, extension) = filename.rsplit_once('.')?;\n    Some(extension)\n}\n\nfn mapped_format(value: &str, mappings: 
&[(&'static str, &'static str)]) -> Option<&'static str> {\n    mappings\n        .iter()\n        .find_map(|(input, output)| input.eq_ignore_ascii_case(value).then_some(*output))\n}\n\nfn message_from_blocks(\n    role: ConversationRole,\n    blocks: Vec<ContentBlock>,\n) -> Result<Message, LanguageModelError> {\n    Message::builder()\n        .role(role)\n        .set_content(Some(blocks))\n        .build()\n        .map_err(LanguageModelError::permanent)\n}\n\nfn tool_output_to_result_block(\n    tool_call: &ToolCall,\n    output: &ToolOutput,\n) -> Result<ContentBlock, LanguageModelError> {\n    let status = match output {\n        ToolOutput::Fail(_) => Some(ToolResultStatus::Error),\n        _ => Some(ToolResultStatus::Success),\n    };\n\n    let tool_result = ToolResultBlock::builder()\n        .tool_use_id(tool_call.id())\n        .content(tool_output_to_content_block(output)?)\n        .set_status(status)\n        .build()\n        .map_err(LanguageModelError::permanent)?;\n\n    Ok(ContentBlock::ToolResult(tool_result))\n}\n\nfn tool_output_to_content_block(\n    output: &ToolOutput,\n) -> Result<ToolResultContentBlock, LanguageModelError> {\n    match output {\n        ToolOutput::Text(text) | ToolOutput::Fail(text) => {\n            Ok(ToolResultContentBlock::Text(text.clone()))\n        }\n        ToolOutput::FeedbackRequired(Some(value))\n        | ToolOutput::Stop(Some(value))\n        | ToolOutput::AgentFailed(Some(value)) => {\n            Ok(ToolResultContentBlock::Json(json_value_to_document(value)?))\n        }\n        _ => Ok(ToolResultContentBlock::Text(output.to_string())),\n    }\n}\n\nfn tool_call_args_to_document(args: Option<&str>) -> Result<Document, LanguageModelError> {\n    match args.map(str::trim) {\n        Some(args) if !args.is_empty() => parse_document_json_bytes(args.as_bytes())\n            .with_context(|| format!(\"Failed to parse tool args as JSON: {args}\"))\n            .map_err(LanguageModelError::permanent),\n       
 _ => Ok(Document::Object(HashMap::new())),\n    }\n}\n\nfn tool_config_from_specs<'a>(\n    tool_specs: impl IntoIterator<Item = &'a ToolSpec>,\n    strict: bool,\n) -> Result<Option<ToolConfiguration>, LanguageModelError> {\n    let tools = tool_specs\n        .into_iter()\n        .map(|spec| tool_spec_to_bedrock(spec, strict))\n        .collect::<Result<Vec<_>, _>>()?;\n\n    if tools.is_empty() {\n        return Ok(None);\n    }\n\n    let tool_config = ToolConfiguration::builder()\n        .set_tools(Some(tools))\n        .tool_choice(ToolChoice::Auto(AutoToolChoice::builder().build()))\n        .build()\n        .map_err(LanguageModelError::permanent)?;\n\n    Ok(Some(tool_config))\n}\n\nfn tool_spec_to_bedrock(spec: &ToolSpec, strict: bool) -> Result<Tool, LanguageModelError> {\n    let schema_value = AwsBedrockToolSchema::try_from(spec)\n        .map(AwsBedrockToolSchema::into_value)\n        .map_err(LanguageModelError::permanent)?;\n    let input_schema = ToolInputSchema::Json(json_value_to_document(&schema_value)?);\n\n    let mut builder = ToolSpecification::builder()\n        .name(spec.name.clone())\n        .input_schema(input_schema)\n        .strict(strict);\n\n    if !spec.description.is_empty() {\n        builder = builder.description(spec.description.clone());\n    }\n\n    let tool_spec = builder.build().map_err(LanguageModelError::permanent)?;\n    Ok(Tool::ToolSpec(tool_spec))\n}\n\npub(super) fn response_to_chat_completion(\n    response: &ConverseOutput,\n) -> Result<ChatCompletionResponse, LanguageModelError> {\n    let (message, tool_calls, reasoning) =\n        if let Some(ConverseResult::Message(message)) = response.output() {\n            extract_message_and_tool_calls(message)?\n        } else {\n            (None, None, Vec::new())\n        };\n\n    let mut builder = ChatCompletionResponse::builder()\n        .maybe_message(message)\n        .maybe_tool_calls(tool_calls)\n        .to_owned();\n\n    if !reasoning.is_empty() {\n     
   builder.reasoning(reasoning);\n    }\n\n    if let Some(usage) = response.usage() {\n        builder.usage(super::usage_from_bedrock(usage));\n    }\n\n    builder.build().map_err(LanguageModelError::from)\n}\n\nfn extract_message_and_tool_calls(\n    message: &Message,\n) -> Result<ExtractedMessage, LanguageModelError> {\n    let mut text = String::new();\n    let mut has_text = false;\n    let mut tool_calls = Vec::with_capacity(message.content().len());\n    let mut reasoning = Vec::new();\n\n    for (content_block_index, block) in message.content().iter().enumerate() {\n        match block {\n            ContentBlock::Text(block_text) => {\n                text.push_str(block_text);\n                has_text = true;\n            }\n            ContentBlock::ToolUse(tool_use) => {\n                let args = document_to_json_string(tool_use.input());\n                let tool_call = ToolCall::builder()\n                    .id(tool_use.tool_use_id())\n                    .name(tool_use.name())\n                    .args(args)\n                    .build()\n                    .map_err(LanguageModelError::permanent)?;\n                tool_calls.push(tool_call);\n            }\n            ContentBlock::ReasoningContent(ReasoningContentBlock::ReasoningText(\n                reasoning_text,\n            )) => {\n                reasoning.push(reasoning_item_from_reasoning_text(\n                    content_block_index,\n                    reasoning_text.text(),\n                    reasoning_text.signature(),\n                ));\n            }\n            _ => {}\n        }\n    }\n\n    let message = has_text.then_some(text);\n    let tool_calls = (!tool_calls.is_empty()).then_some(tool_calls);\n\n    Ok((message, tool_calls, reasoning))\n}\n\nfn document_to_json_string(document: &Document) -> String {\n    let mut output = String::new();\n    JsonValueWriter::new(&mut output).document(document);\n    output\n}\n\nfn apply_stream_event(\n    event: 
&ConverseStreamOutput,\n    response: &mut ChatCompletionResponse,\n    stop_reason: &mut Option<StopReason>,\n) {\n    match event {\n        ConverseStreamOutput::ContentBlockStart(event) => {\n            if let (Some(ContentBlockStart::ToolUse(tool_use)), Ok(index)) =\n                (event.start(), usize::try_from(event.content_block_index()))\n            {\n                response.append_tool_call_delta(\n                    index,\n                    Some(tool_use.tool_use_id()),\n                    Some(tool_use.name()),\n                    None,\n                );\n            }\n        }\n        ConverseStreamOutput::ContentBlockDelta(event) => {\n            let Ok(index) = usize::try_from(event.content_block_index()) else {\n                return;\n            };\n\n            let Some(delta) = event.delta() else {\n                return;\n            };\n\n            match delta {\n                ContentBlockDelta::Text(text) => {\n                    response.append_message_delta(Some(text));\n                }\n                ContentBlockDelta::ToolUse(delta) => {\n                    response.append_tool_call_delta(index, None, None, Some(delta.input()));\n                }\n                ContentBlockDelta::ReasoningContent(delta) => {\n                    apply_reasoning_delta(response, index, delta);\n                }\n                _ => {}\n            }\n        }\n        ConverseStreamOutput::MessageStop(event) => {\n            *stop_reason = Some(event.stop_reason().clone());\n        }\n        ConverseStreamOutput::Metadata(event) => {\n            if let Some(usage) = event.usage() {\n                response.usage = Some(super::usage_from_bedrock(usage));\n            }\n        }\n        _ => {}\n    }\n}\n\nfn assistant_reasoning_message_from_item(\n    item: &ReasoningItem,\n) -> Result<Option<Message>, LanguageModelError> {\n    let text = item\n        .content\n        .as_ref()\n        .and_then(|content| 
content.first())\n        .map(String::as_str)\n        .filter(|text| !text.is_empty());\n    let signature = item\n        .encrypted_content\n        .as_deref()\n        .filter(|value| !value.is_empty());\n\n    let (Some(text), Some(signature)) = (text, signature) else {\n        return Ok(None);\n    };\n\n    let reasoning_text_block = ReasoningTextBlock::builder()\n        .text(text)\n        .signature(signature)\n        .build()\n        .map_err(LanguageModelError::permanent)?;\n\n    message_from_blocks(\n        ConversationRole::Assistant,\n        vec![ContentBlock::ReasoningContent(\n            ReasoningContentBlock::ReasoningText(reasoning_text_block),\n        )],\n    )\n    .map(Some)\n}\n\nfn reasoning_item_from_reasoning_text(\n    content_block_index: usize,\n    text: &str,\n    signature: Option<&str>,\n) -> ReasoningItem {\n    ReasoningItem {\n        id: format!(\"bedrock_reasoning_{content_block_index}\"),\n        summary: Vec::new(),\n        content: Some(vec![text.to_string()]),\n        encrypted_content: signature.map(ToString::to_string),\n        status: None,\n    }\n}\n\nfn apply_reasoning_delta(\n    response: &mut ChatCompletionResponse,\n    content_block_index: usize,\n    delta: &ReasoningContentBlockDelta,\n) {\n    let reasoning_item = ensure_reasoning_item(response, content_block_index);\n\n    match delta {\n        ReasoningContentBlockDelta::Text(text) => {\n            let content = reasoning_item\n                .content\n                .get_or_insert_with(|| vec![String::new()]);\n            if content.is_empty() {\n                content.push(String::new());\n            }\n            content[0].push_str(text);\n        }\n        ReasoningContentBlockDelta::Signature(signature) => {\n            reasoning_item.encrypted_content = Some(signature.clone());\n        }\n        _ => {}\n    }\n}\n\nfn ensure_reasoning_item(\n    response: &mut ChatCompletionResponse,\n    content_block_index: usize,\n) -> 
&mut ReasoningItem {\n    let reasoning = response.reasoning.get_or_insert_with(Vec::new);\n    let reasoning_id = format!(\"bedrock_reasoning_{content_block_index}\");\n    if let Some(position) = reasoning.iter().position(|item| item.id == reasoning_id) {\n        return reasoning\n            .get_mut(position)\n            .expect(\"position from iter().position must exist\");\n    }\n\n    reasoning.push(ReasoningItem {\n        id: reasoning_id,\n        summary: Vec::new(),\n        content: None,\n        encrypted_content: None,\n        status: None,\n    });\n\n    reasoning\n        .last_mut()\n        .expect(\"pushed reasoning item must exist\")\n}\n\nfn json_value_to_document(value: &serde_json::Value) -> Result<Document, LanguageModelError> {\n    let bytes = serde_json::to_vec(value).map_err(LanguageModelError::permanent)?;\n    parse_document_json_bytes(&bytes).map_err(LanguageModelError::permanent)\n}\n\nfn parse_document_json_bytes(input: &[u8]) -> anyhow::Result<Document> {\n    let mut tokens = json_token_iter(input).peekable();\n    let document = expect_document(&mut tokens)?;\n\n    if tokens.next().transpose()?.is_some() {\n        anyhow::bail!(\"JSON input must contain exactly one value\");\n    }\n\n    Ok(document)\n}\n\nconst IMAGE_MEDIA_TYPE_FORMATS: &[(&str, &str)] = &[\n    (\"image/gif\", \"gif\"),\n    (\"image/jpeg\", \"jpeg\"),\n    (\"image/jpg\", \"jpeg\"),\n    (\"image/png\", \"png\"),\n    (\"image/webp\", \"webp\"),\n];\n\nconst IMAGE_EXTENSION_FORMATS: &[(&str, &str)] = &[\n    (\"gif\", \"gif\"),\n    (\"jpeg\", \"jpeg\"),\n    (\"jpg\", \"jpeg\"),\n    (\"png\", \"png\"),\n    (\"webp\", \"webp\"),\n];\n\nconst DOCUMENT_MEDIA_TYPE_FORMATS: &[(&str, &str)] = &[\n    (\"text/csv\", \"csv\"),\n    (\"application/msword\", \"doc\"),\n    (\n        \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\",\n        \"docx\",\n    ),\n    (\"text/html\", \"html\"),\n    (\"text/markdown\", \"md\"),\n    
(\"text/x-markdown\", \"md\"),\n    (\"application/pdf\", \"pdf\"),\n    (\"text/plain\", \"txt\"),\n    (\"application/vnd.ms-excel\", \"xls\"),\n    (\n        \"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet\",\n        \"xlsx\",\n    ),\n];\n\nconst DOCUMENT_EXTENSION_FORMATS: &[(&str, &str)] = &[\n    (\"csv\", \"csv\"),\n    (\"doc\", \"doc\"),\n    (\"docx\", \"docx\"),\n    (\"html\", \"html\"),\n    (\"htm\", \"html\"),\n    (\"md\", \"md\"),\n    (\"markdown\", \"md\"),\n    (\"pdf\", \"pdf\"),\n    (\"txt\", \"txt\"),\n    (\"xls\", \"xls\"),\n    (\"xlsx\", \"xlsx\"),\n];\n\nconst AUDIO_MEDIA_TYPE_FORMATS: &[(&str, &str)] = &[\n    (\"audio/aac\", \"aac\"),\n    (\"audio/flac\", \"flac\"),\n    (\"audio/m4a\", \"m4a\"),\n    (\"audio/mka\", \"mka\"),\n    (\"audio/x-matroska\", \"mkv\"),\n    (\"audio/mpeg\", \"mp3\"),\n    (\"audio/mp3\", \"mp3\"),\n    (\"audio/mp4\", \"mp4\"),\n    (\"audio/ogg\", \"ogg\"),\n    (\"audio/opus\", \"opus\"),\n    (\"audio/wav\", \"wav\"),\n    (\"audio/x-wav\", \"wav\"),\n    (\"audio/wave\", \"wav\"),\n    (\"audio/webm\", \"webm\"),\n    (\"audio/x-aac\", \"x-aac\"),\n];\n\nconst AUDIO_EXTENSION_FORMATS: &[(&str, &str)] = &[\n    (\"aac\", \"aac\"),\n    (\"flac\", \"flac\"),\n    (\"m4a\", \"m4a\"),\n    (\"mka\", \"mka\"),\n    (\"mkv\", \"mkv\"),\n    (\"mp3\", \"mp3\"),\n    (\"mp4\", \"mp4\"),\n    (\"mpeg\", \"mpeg\"),\n    (\"mpga\", \"mpga\"),\n    (\"ogg\", \"ogg\"),\n    (\"opus\", \"opus\"),\n    (\"pcm\", \"pcm\"),\n    (\"wav\", \"wav\"),\n    (\"webm\", \"webm\"),\n    (\"x-aac\", \"x-aac\"),\n];\n\nconst VIDEO_MEDIA_TYPE_FORMATS: &[(&str, &str)] = &[\n    (\"video/x-flv\", \"flv\"),\n    (\"video/x-matroska\", \"mkv\"),\n    (\"video/quicktime\", \"mov\"),\n    (\"video/mp4\", \"mp4\"),\n    (\"video/mpeg\", \"mpeg\"),\n    (\"video/3gpp\", \"three_gp\"),\n    (\"video/webm\", \"webm\"),\n    (\"video/x-ms-wmv\", \"wmv\"),\n];\n\nconst VIDEO_EXTENSION_FORMATS: &[(&str, &str)] = &[\n  
  (\"flv\", \"flv\"),\n    (\"mkv\", \"mkv\"),\n    (\"mov\", \"mov\"),\n    (\"mp4\", \"mp4\"),\n    (\"mpeg\", \"mpeg\"),\n    (\"mpg\", \"mpg\"),\n    (\"3gp\", \"three_gp\"),\n    (\"webm\", \"webm\"),\n    (\"wmv\", \"wmv\"),\n];\n\n#[cfg(test)]\nmod tests {\n    use aws_sdk_bedrockruntime::Client;\n    use aws_sdk_bedrockruntime::{\n        operation::converse::ConverseOutput,\n        types::{\n            ContentBlockDeltaEvent, ContentBlockStart, ContentBlockStartEvent,\n            ConverseOutput as ConverseResult, Message, MessageStopEvent, ReasoningContentBlock,\n            ReasoningContentBlockDelta, ReasoningTextBlock, StopReason, TokenUsage,\n            ToolUseBlockDelta, ToolUseBlockStart,\n        },\n    };\n    use futures_util::StreamExt as _;\n    use schemars::{JsonSchema, schema_for};\n    use serde_json::{Value, json};\n    use swiftide_core::chat_completion::{\n        ChatMessage, ChatMessageContentPart, ChatMessageContentSource, ReasoningItem, ToolSpec,\n    };\n    use wiremock::{\n        Mock, MockServer, Request, Respond, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    use super::*;\n    #[cfg(feature = \"langfuse\")]\n    use crate::aws_bedrock_v2::test_utils::run_with_langfuse_event_capture;\n    use crate::aws_bedrock_v2::{\n        AwsBedrock, MockBedrockConverse, ReasoningEffort,\n        test_utils::{TEST_MODEL_ID, bedrock_client_for_mock_server, converse_stream_event},\n    };\n\n    #[derive(Debug, Clone, serde::Serialize, serde::Deserialize, JsonSchema)]\n    struct WeatherArgs {\n        location: String,\n    }\n\n    #[derive(Debug, Clone, serde::Serialize, serde::Deserialize, JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[derive(Debug, Clone, serde::Serialize, serde::Deserialize, JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = 
\"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    fn response_with_text_and_tool_call() -> ConverseOutput {\n        let mut args = HashMap::new();\n        args.insert(\n            \"location\".to_string(),\n            Document::String(\"Amsterdam\".to_string()),\n        );\n\n        ConverseOutput::builder()\n            .output(ConverseResult::Message(\n                Message::builder()\n                    .role(ConversationRole::Assistant)\n                    .content(ContentBlock::Text(\"Working on it\".to_string()))\n                    .content(ContentBlock::ToolUse(\n                        ToolUseBlock::builder()\n                            .tool_use_id(\"call_1\")\n                            .name(\"get_weather\")\n                            .input(Document::Object(args))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .build()\n                    .unwrap(),\n            ))\n            .usage(\n                TokenUsage::builder()\n                    .input_tokens(10)\n                    .output_tokens(8)\n                    .total_tokens(18)\n                    .build()\n                    .unwrap(),\n            )\n            .stop_reason(StopReason::ToolUse)\n            .build()\n            .unwrap()\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_maps_text_and_tool_calls() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            
.once()\n            .withf(\n                |model_id,\n                 messages,\n                 system,\n                 inference_config,\n                 tool_config,\n                 output_config,\n                 _additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n                    model_id == \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n                        && messages.len() == 1\n                        && system.is_none()\n                        && inference_config.is_none()\n                        && tool_config.is_none()\n                        && output_config.is_none()\n                },\n            )\n            .returning(|_, _, _, _, _, _, _, _| Ok(response_with_text_and_tool_call()));\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Check weather\".into())])\n            .build()\n            .unwrap();\n\n        let response = bedrock.complete(&request).await.unwrap();\n\n        assert_eq!(response.message.as_deref(), Some(\"Working on it\"));\n        let tool_call = response\n            .tool_calls\n            .as_ref()\n            .and_then(|calls| calls.first())\n            .expect(\"tool call\");\n        assert_eq!(tool_call.id(), \"call_1\");\n        assert_eq!(tool_call.name(), \"get_weather\");\n        assert_eq!(\n            serde_json::from_str::<serde_json::Value>(tool_call.args().unwrap()).unwrap(),\n            serde_json::json!({\"location\":\"Amsterdam\"})\n        );\n        assert_eq!(response.usage.unwrap().total_tokens, 18);\n    }\n\n    #[cfg(feature = \"langfuse\")]\n    #[test]\n    fn test_complete_tracks_langfuse_failure_metadata_on_converse_error() {\n        let mut bedrock_mock 
= MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Err(LanguageModelError::permanent(\"bedrock request failed\"))\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Trace this failure\".into())])\n            .build()\n            .unwrap();\n\n        let (result, events) =\n            run_with_langfuse_event_capture(|| async { bedrock.complete(&request).await });\n\n        let error = result.expect_err(\"request should fail\");\n        assert!(error.to_string().contains(\"bedrock request failed\"));\n\n        let failure_event = events\n            .iter()\n            .find(|event| event.contains_key(\"langfuse.status_message\"))\n            .expect(\"langfuse failure event\");\n\n        assert_eq!(\n            failure_event\n                .get(\"langfuse.model\")\n                .map(std::string::String::as_str),\n            Some(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n        );\n        assert!(\n            failure_event\n                .get(\"langfuse.input\")\n                .is_some_and(|input| input.contains(\"Trace this failure\"))\n        );\n        assert!(\n            failure_event\n                .get(\"langfuse.status_message\")\n                .is_some_and(|message| message.contains(\"bedrock request failed\"))\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_passes_additional_model_fields() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        let mut thinking = HashMap::new();\n        thinking.insert(\"type\".to_string(), 
Document::String(\"enabled\".to_string()));\n        thinking.insert(\"budget_tokens\".to_string(), Document::from(512_u64));\n        let mut request_fields = HashMap::new();\n        request_fields.insert(\"thinking\".to_string(), Document::Object(thinking));\n        let request_fields = Document::Object(request_fields);\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .withf(\n                |model_id,\n                 _,\n                 _,\n                 _,\n                 _,\n                 _,\n                 additional_model_request_fields,\n                 additional_model_response_field_paths| {\n                    model_id == \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n                        && additional_model_request_fields\n                            .as_ref()\n                            .is_some_and(|fields| {\n                                fields\n                                    .as_object()\n                                    .and_then(|map| map.get(\"thinking\"))\n                                    .and_then(Document::as_object)\n                                    .and_then(|thinking| thinking.get(\"type\"))\n                                    .and_then(Document::as_string)\n                                    == Some(\"enabled\")\n                            })\n                        && additional_model_response_field_paths\n                            .as_ref()\n                            .is_some_and(|paths| paths == &vec![\"/thinking\".to_string()])\n                },\n            )\n            .returning(|_, _, _, _, _, _, _, _| Ok(response_with_text_and_tool_call()));\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .default_options(Options {\n                additional_model_request_fields: Some(request_fields),\n                
additional_model_response_field_paths: Some(vec![\"/thinking\".to_string()]),\n                ..Default::default()\n            })\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Hello\".into())])\n            .build()\n            .unwrap();\n\n        let _ = bedrock.complete(&request).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_passes_reasoning_effort_for_claude_opus_4_5() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        let mut thinking = HashMap::new();\n        thinking.insert(\"type\".to_string(), Document::String(\"enabled\".to_string()));\n        thinking.insert(\"budget_tokens\".to_string(), Document::from(512_u64));\n        let mut request_fields = HashMap::new();\n        request_fields.insert(\"thinking\".to_string(), Document::Object(thinking));\n        let request_fields = Document::Object(request_fields);\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .withf(\n                |model_id,\n                 _,\n                 _,\n                 _,\n                 _,\n                 _,\n                 additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n                    model_id == \"anthropic.claude-opus-4-5-20251101-v1:0\"\n                        && additional_model_request_fields\n                            .as_ref()\n                            .is_some_and(|fields| {\n                                let Some(fields) = fields.as_object() else {\n                                    return false;\n                                };\n\n                                let effort_matches = fields\n                                    .get(\"output_config\")\n                                    .and_then(Document::as_object)\n                                    
.and_then(|output_config| output_config.get(\"effort\"))\n                                    .and_then(Document::as_string)\n                                    == Some(\"medium\");\n                                let thinking_matches = fields\n                                    .get(\"thinking\")\n                                    .and_then(Document::as_object)\n                                    .and_then(|thinking| thinking.get(\"type\"))\n                                    .and_then(Document::as_string)\n                                    == Some(\"enabled\");\n                                let beta_matches = fields\n                                    .get(\"anthropic_beta\")\n                                    .and_then(Document::as_array)\n                                    .is_some_and(|betas| {\n                                        betas.iter().any(|beta| {\n                                            beta.as_string() == Some(\"effort-2025-11-24\")\n                                        })\n                                    });\n\n                                effort_matches && thinking_matches && beta_matches\n                            })\n                },\n            )\n            .returning(|_, _, _, _, _, _, _, _| Ok(response_with_text_and_tool_call()));\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-opus-4-5-20251101-v1:0\")\n            .default_options(Options {\n                reasoning_effort: Some(ReasoningEffort::Medium),\n                additional_model_request_fields: Some(request_fields),\n                ..Default::default()\n            })\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Hello\".into())])\n            .build()\n            .unwrap();\n\n        let _ = bedrock.complete(&request).await.unwrap();\n    
}\n\n    #[test_log::test(tokio::test)]\n    #[allow(deprecated)]\n    async fn test_complete_respects_tool_strict_option() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .withf(\n                |model_id,\n                 _,\n                 _,\n                 _,\n                 tool_config,\n                 output_config,\n                 _additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n                    model_id == \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n                        && output_config.is_none()\n                        && tool_config\n                            .as_ref()\n                            .and_then(|config| config.tools().first())\n                            .is_some_and(|tool| match tool {\n                                Tool::ToolSpec(spec) => spec.strict() == Some(false),\n                                _ => false,\n                            })\n                },\n            )\n            .returning(|_, _, _, _, _, _, _, _| Ok(response_with_text_and_tool_call()));\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .default_options(Options {\n                tool_strict: Some(false),\n                ..Default::default()\n            })\n            .build()\n            .unwrap();\n\n        let tool_spec = ToolSpec::builder()\n            .name(\"get_weather\")\n            .description(\"Get weather\")\n            .build()\n            .unwrap();\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Check weather\".into())])\n            .tools_spec([tool_spec])\n            .build()\n            .unwrap();\n\n        let _ = bedrock.complete(&request).await.unwrap();\n    }\n\n    
#[test_log::test(tokio::test)]\n    async fn test_complete_stream_requires_model() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n        bedrock_mock.expect_converse_stream().never();\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user(\"Hello\")])\n            .build()\n            .unwrap();\n\n        let mut stream = bedrock.complete_stream(&request).await;\n        let first = stream.next().await.expect(\"stream should yield one item\");\n        assert!(matches!(first, Err(LanguageModelError::PermanentError(_))));\n        assert!(stream.next().await.is_none());\n    }\n\n    #[cfg(feature = \"langfuse\")]\n    #[test]\n    fn test_complete_stream_tracks_langfuse_failure_metadata_on_stream_error() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse_stream()\n            .once()\n            .returning(|_, _, _, _, _, _, _| {\n                Err(LanguageModelError::transient(\"bedrock stream failed\"))\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user(\"Stream this failure\")])\n            .build()\n            .unwrap();\n\n        let (first_item, events) = run_with_langfuse_event_capture(|| async {\n            let mut stream = bedrock.complete_stream(&request).await;\n            stream.next().await.expect(\"stream should yield an error\")\n        });\n\n        let error = first_item.expect_err(\"stream should fail\");\n        assert!(error.to_string().contains(\"bedrock stream failed\"));\n\n        
let failure_event = events\n            .iter()\n            .find(|event| event.contains_key(\"langfuse.status_message\"))\n            .expect(\"langfuse failure event\");\n\n        assert!(\n            failure_event\n                .get(\"langfuse.input\")\n                .is_some_and(|input| input.contains(\"Stream this failure\"))\n        );\n        assert!(\n            failure_event\n                .get(\"langfuse.status_message\")\n                .is_some_and(|message| message.contains(\"bedrock stream failed\"))\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_rejects_system_only_messages() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n        bedrock_mock.expect_converse_stream().never();\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_system(\"You are a helper\")])\n            .build()\n            .unwrap();\n\n        let mut stream = bedrock.complete_stream(&request).await;\n        let first = stream.next().await.expect(\"stream should yield one item\");\n        assert!(matches!(first, Err(LanguageModelError::PermanentError(_))));\n        assert!(stream.next().await.is_none());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_returns_upstream_stream_error() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse_stream()\n            .once()\n            .withf(\n                |model_id,\n                 messages,\n                 system,\n                 inference_config,\n                 tool_config,\n                 _additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n        
            model_id == \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n                        && messages.len() == 1\n                        && matches!(messages[0].role(), ConversationRole::User)\n                        && matches!(messages[0].content().first(), Some(ContentBlock::Text(text)) if text == \"Hello\")\n                        && system.is_none()\n                        && inference_config.is_none()\n                        && tool_config.is_none()\n                },\n            )\n            .returning(|_, _, _, _, _, _, _| {\n                Err(LanguageModelError::transient(anyhow::anyhow!(\n                    \"stream init failed\"\n                )))\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user(\"Hello\")])\n            .build()\n            .unwrap();\n\n        let mut stream = bedrock.complete_stream(&request).await;\n        let first = stream.next().await.expect(\"stream should yield one item\");\n        assert!(matches!(first, Err(LanguageModelError::TransientError(_))));\n        assert!(stream.next().await.is_none());\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_green_path_with_wiremock() {\n        struct ValidateConverseRequest;\n\n        impl Respond for ValidateConverseRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let payload: Value = serde_json::from_slice(&request.body).expect(\"request json\");\n\n                assert_eq!(payload[\"messages\"][0][\"role\"], \"user\");\n                assert_eq!(payload[\"messages\"][0][\"content\"][0][\"text\"], \"Hello\");\n\n                ResponseTemplate::new(200).set_body_json(json!({\n                    
\"output\": {\n                        \"message\": {\n                            \"role\": \"assistant\",\n                            \"content\": [\n                                {\"text\": \"Hello from bedrock\"}\n                            ]\n                        }\n                    },\n                    \"stopReason\": \"end_turn\",\n                    \"usage\": {\n                        \"inputTokens\": 2,\n                        \"outputTokens\": 5,\n                        \"totalTokens\": 7\n                    }\n                }))\n            }\n        }\n\n        let mock_server = MockServer::start().await;\n        Mock::given(method(\"POST\"))\n            .and(path(format!(\"/model/{TEST_MODEL_ID}/converse\")))\n            .respond_with(ValidateConverseRequest)\n            .mount(&mock_server)\n            .await;\n\n        let client: Client = bedrock_client_for_mock_server(&mock_server.uri());\n        let bedrock = AwsBedrock::builder()\n            .client(client)\n            .default_prompt_model(TEST_MODEL_ID)\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user(\"Hello\")])\n            .build()\n            .unwrap();\n\n        let response = bedrock.complete(&request).await.unwrap();\n\n        assert_eq!(response.message(), Some(\"Hello from bedrock\"));\n        assert_eq!(\n            response.usage.as_ref().map(|usage| usage.total_tokens),\n            Some(7)\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_green_path_with_wiremock() {\n        struct ValidateConverseStreamRequest {\n            stream_body: Vec<u8>,\n        }\n\n        impl Respond for ValidateConverseStreamRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let payload: Value = serde_json::from_slice(&request.body).expect(\"request json\");\n\n         
       assert_eq!(payload[\"messages\"][0][\"role\"], \"user\");\n                assert_eq!(payload[\"messages\"][0][\"content\"][0][\"text\"], \"Hello\");\n\n                ResponseTemplate::new(200).set_body_raw(\n                    self.stream_body.clone(),\n                    \"application/vnd.amazon.eventstream\",\n                )\n            }\n        }\n\n        let mock_server = MockServer::start().await;\n        let stream_body = [\n            converse_stream_event(\n                \"contentBlockDelta\",\n                &json!({\n                    \"contentBlockIndex\": 0,\n                    \"delta\": {\"text\": \"Hello stream\"}\n                }),\n            ),\n            converse_stream_event(\n                \"metadata\",\n                &json!({\n                    \"usage\": {\n                        \"inputTokens\": 4,\n                        \"outputTokens\": 5,\n                        \"totalTokens\": 9\n                    }\n                }),\n            ),\n            converse_stream_event(\n                \"messageStop\",\n                &json!({\n                    \"stopReason\": \"end_turn\"\n                }),\n            ),\n        ]\n        .concat();\n\n        Mock::given(method(\"POST\"))\n            .and(path(format!(\"/model/{TEST_MODEL_ID}/converse-stream\")))\n            .respond_with(ValidateConverseStreamRequest { stream_body })\n            .mount(&mock_server)\n            .await;\n\n        let client: Client = bedrock_client_for_mock_server(&mock_server.uri());\n        let bedrock = AwsBedrock::builder()\n            .client(client)\n            .default_prompt_model(TEST_MODEL_ID)\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user(\"Hello\")])\n            .build()\n            .unwrap();\n\n        let responses = bedrock\n            .complete_stream(&request)\n            
.await\n            .collect::<Vec<_>>()\n            .await;\n        let last = responses\n            .last()\n            .expect(\"stream should yield\")\n            .as_ref()\n            .expect(\"last response ok\");\n\n        assert_eq!(last.message(), Some(\"Hello stream\"));\n        assert_eq!(last.usage.as_ref().map(|usage| usage.total_tokens), Some(9));\n    }\n\n    #[test]\n    fn test_tool_config_from_specs_builds_schema() {\n        let tool_spec = ToolSpec::builder()\n            .name(\"get_weather\")\n            .description(\"Get weather by location\")\n            .parameters_schema(schema_for!(WeatherArgs))\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs([tool_spec])\n            .build()\n            .unwrap();\n\n        let tool_config = tool_config_from_specs(request.tools_spec().iter(), true)\n            .unwrap()\n            .expect(\"tool config\");\n        assert_eq!(tool_config.tools().len(), 1);\n\n        let Tool::ToolSpec(spec) = &tool_config.tools()[0] else {\n            panic!(\"expected tool spec\");\n        };\n\n        assert_eq!(spec.name(), \"get_weather\");\n        assert_eq!(spec.description(), Some(\"Get weather by location\"));\n        assert_eq!(spec.strict(), Some(true));\n        assert!(matches!(\n            spec.input_schema(),\n            Some(ToolInputSchema::Json(Document::Object(schema)))\n                if schema.get(\"type\") == Some(&Document::String(\"object\".to_string()))\n                    && schema.get(\"additionalProperties\") == Some(&Document::Bool(false))\n        ));\n    }\n\n    #[test]\n    fn test_tool_config_from_specs_can_disable_strict() {\n        let tool_spec = ToolSpec::builder()\n            .name(\"get_weather\")\n            .description(\"Get weather\")\n            .build()\n            .unwrap();\n\n        let request = 
ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs([tool_spec])\n            .build()\n            .unwrap();\n\n        let tool_config = tool_config_from_specs(request.tools_spec().iter(), false)\n            .unwrap()\n            .expect(\"tool config\");\n\n        let Tool::ToolSpec(spec) = &tool_config.tools()[0] else {\n            panic!(\"expected tool spec\");\n        };\n\n        assert_eq!(spec.strict(), Some(false));\n        assert!(matches!(\n            spec.input_schema(),\n            Some(ToolInputSchema::Json(Document::Object(schema)))\n                if schema.get(\"type\") == Some(&Document::String(\"object\".to_string()))\n                    && schema.get(\"additionalProperties\") == Some(&Document::Bool(false))\n        ));\n    }\n\n    #[test]\n    fn test_tool_config_from_specs_does_not_apply_openai_required_workaround() {\n        let tool_spec = ToolSpec::builder()\n            .name(\"create_comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs([tool_spec])\n            .build()\n            .unwrap();\n\n        let tool_config = tool_config_from_specs(request.tools_spec().iter(), true)\n            .unwrap()\n            .expect(\"tool config\");\n\n        let Tool::ToolSpec(spec) = &tool_config.tools()[0] else {\n            panic!(\"expected tool spec\");\n        };\n\n        let Some(ToolInputSchema::Json(Document::Object(schema))) = spec.input_schema() else {\n            panic!(\"expected JSON object schema\");\n        };\n\n        assert_eq!(\n            schema.get(\"type\"),\n            Some(&Document::String(\"object\".to_string()))\n        );\n        assert_eq!(\n            
schema.get(\"additionalProperties\"),\n            Some(&Document::Bool(false))\n        );\n        assert_eq!(\n            schema.get(\"required\"),\n            Some(&Document::Array(vec![Document::String(\n                \"request\".to_string()\n            )]))\n        );\n\n        let Some(Document::Object(properties)) = schema.get(\"properties\") else {\n            panic!(\"expected properties map\");\n        };\n        let Some(Document::String(nested_ref)) = properties\n            .get(\"request\")\n            .and_then(Document::as_object)\n            .and_then(|request| request.get(\"$ref\"))\n        else {\n            panic!(\"expected nested request $ref\");\n        };\n        let nested_name = nested_ref\n            .rsplit('/')\n            .next()\n            .expect(\"nested request ref name\");\n        let Some(Document::Object(defs)) = schema.get(\"$defs\") else {\n            panic!(\"expected defs map\");\n        };\n        let Some(Document::Object(nested_schema)) = defs.get(nested_name) else {\n            panic!(\"expected nested request schema\");\n        };\n        assert!(!nested_schema.contains_key(\"required\"));\n    }\n\n    #[test]\n    fn test_tool_config_from_specs_orders_tools_deterministically() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs([\n                ToolSpec::builder()\n                    .name(\"z_tool\")\n                    .description(\"later\")\n                    .build()\n                    .unwrap(),\n                ToolSpec::builder()\n                    .name(\"a_tool\")\n                    .description(\"earlier\")\n                    .build()\n                    .unwrap(),\n            ])\n            .build()\n            .unwrap();\n\n        let tool_config = tool_config_from_specs(request.tools_spec().iter(), true)\n            .unwrap()\n            .expect(\"tool 
config\");\n\n        let tool_names = tool_config\n            .tools()\n            .iter()\n            .map(|tool| match tool {\n                Tool::ToolSpec(spec) => spec.name(),\n                _ => panic!(\"expected tool spec\"),\n            })\n            .collect::<Vec<_>>();\n\n        assert_eq!(tool_names, vec![\"a_tool\", \"z_tool\"]);\n    }\n\n    #[test]\n    fn test_response_to_chat_completion_maps_reasoning_content() {\n        let response = ConverseOutput::builder()\n            .output(ConverseResult::Message(\n                Message::builder()\n                    .role(ConversationRole::Assistant)\n                    .content(ContentBlock::ReasoningContent(\n                        ReasoningContentBlock::ReasoningText(\n                            ReasoningTextBlock::builder()\n                                .text(\"I should call a weather tool\")\n                                .signature(\"sig_123\")\n                                .build()\n                                .unwrap(),\n                        ),\n                    ))\n                    .content(ContentBlock::Text(\"Working on it\".to_string()))\n                    .build()\n                    .unwrap(),\n            ))\n            .stop_reason(StopReason::EndTurn)\n            .build()\n            .unwrap();\n\n        let completion = response_to_chat_completion(&response).unwrap();\n        assert_eq!(completion.message.as_deref(), Some(\"Working on it\"));\n        let reasoning = completion.reasoning.expect(\"reasoning items\");\n        assert_eq!(reasoning.len(), 1);\n        assert_eq!(reasoning[0].id, \"bedrock_reasoning_0\");\n        assert_eq!(\n            reasoning[0].content.as_ref().and_then(|c| c.first()),\n            Some(&\"I should call a weather tool\".to_string())\n        );\n        assert_eq!(reasoning[0].encrypted_content.as_deref(), Some(\"sig_123\"));\n    }\n\n    #[test]\n    fn 
test_build_converse_input_replays_reasoning_items() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![\n                ChatMessage::Reasoning(ReasoningItem {\n                    id: \"r1\".to_string(),\n                    summary: Vec::new(),\n                    content: Some(vec![\"I should call a weather tool\".to_string()]),\n                    encrypted_content: Some(\"sig_123\".to_string()),\n                    status: None,\n                }),\n                ChatMessage::new_user(\"Use tool\"),\n            ])\n            .build()\n            .unwrap();\n\n        let (messages, _system, _inference, _tool_config) =\n            build_converse_input(&request, &Options::default()).unwrap();\n\n        assert_eq!(messages.len(), 2);\n        assert!(matches!(messages[0].role(), ConversationRole::Assistant));\n        let reasoning = messages[0]\n            .content()\n            .first()\n            .and_then(|content| content.as_reasoning_content().ok())\n            .and_then(|content| content.as_reasoning_text().ok())\n            .expect(\"reasoning content\");\n        assert_eq!(reasoning.text(), \"I should call a weather tool\");\n        assert_eq!(reasoning.signature(), Some(\"sig_123\"));\n    }\n\n    #[test]\n    fn test_build_converse_input_groups_adjacent_tool_outputs() {\n        let first_tool = ToolCall::builder()\n            .id(\"tool_1\")\n            .name(\"shell_command\")\n            .args(\"{\\\"cmd\\\":\\\"pwd\\\"}\")\n            .build()\n            .unwrap();\n        let second_tool = ToolCall::builder()\n            .id(\"tool_2\")\n            .name(\"git\")\n            .args(\"{\\\"command\\\":\\\"status\\\"}\")\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![\n                ChatMessage::Assistant(None, Some(vec![first_tool.clone(), second_tool.clone()])),\n                
ChatMessage::new_tool_output(\n                    first_tool,\n                    ToolOutput::Text(\"pwd output\".to_string()),\n                ),\n                ChatMessage::new_tool_output(\n                    second_tool,\n                    ToolOutput::Text(\"git output\".to_string()),\n                ),\n            ])\n            .build()\n            .unwrap();\n\n        let (messages, _system, _inference, _tool_config) =\n            build_converse_input(&request, &Options::default()).unwrap();\n\n        assert_eq!(messages.len(), 2);\n        assert!(matches!(messages[0].role(), ConversationRole::Assistant));\n        assert!(matches!(messages[1].role(), ConversationRole::User));\n        assert_eq!(messages[1].content().len(), 2);\n\n        let first_result = messages[1]\n            .content()\n            .first()\n            .and_then(|block| block.as_tool_result().ok())\n            .expect(\"first tool result\");\n        let second_result = messages[1]\n            .content()\n            .get(1)\n            .and_then(|block| block.as_tool_result().ok())\n            .expect(\"second tool result\");\n\n        assert_eq!(first_result.tool_use_id(), \"tool_1\");\n        assert_eq!(second_result.tool_use_id(), \"tool_2\");\n    }\n\n    #[test]\n    fn test_build_converse_input_maps_image_part() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user_with_parts(vec![\n                ChatMessageContentPart::text(\"Describe this image\"),\n                ChatMessageContentPart::image(\"data:image/png;base64,AA==\"),\n            ])])\n            .build()\n            .unwrap();\n\n        let (messages, _system, _inference, _tool_config) =\n            build_converse_input(&request, &Options::default()).unwrap();\n        assert_eq!(messages.len(), 1);\n        assert!(matches!(messages[0].role(), ConversationRole::User));\n        assert_eq!(messages[0].content().len(), 2);\n        
let image = messages[0]\n            .content()\n            .get(1)\n            .and_then(|content| content.as_image().ok())\n            .expect(\"image block\");\n        assert!(matches!(image.format(), ImageFormat::Png));\n        assert!(\n            image\n                .source()\n                .is_some_and(aws_sdk_bedrockruntime::types::ImageSource::is_bytes)\n        );\n    }\n\n    #[test]\n    fn test_build_converse_input_maps_audio_part() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user_with_parts(vec![\n                ChatMessageContentPart::text(\"Transcribe this\"),\n                ChatMessageContentPart::audio(ChatMessageContentSource::bytes(\n                    vec![1_u8, 2_u8, 3_u8],\n                    Some(\"audio/mpeg\".to_string()),\n                )),\n            ])])\n            .build()\n            .unwrap();\n\n        let (messages, _system, _inference, _tool_config) =\n            build_converse_input(&request, &Options::default()).unwrap();\n        let audio = messages[0]\n            .content()\n            .get(1)\n            .and_then(|content| content.as_audio().ok())\n            .expect(\"audio block\");\n        assert!(matches!(audio.format(), AudioFormat::Mp3));\n        assert!(\n            audio\n                .source()\n                .is_some_and(aws_sdk_bedrockruntime::types::AudioSource::is_bytes)\n        );\n    }\n\n    #[test]\n    fn test_build_converse_input_maps_video_part() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user_with_parts(vec![\n                ChatMessageContentPart::text(\"Describe this clip\"),\n                ChatMessageContentPart::video(\"s3://bucket/video.mp4\"),\n            ])])\n            .build()\n            .unwrap();\n\n        let (messages, _system, _inference, _tool_config) =\n            build_converse_input(&request, 
&Options::default()).unwrap();\n        let video = messages[0]\n            .content()\n            .get(1)\n            .and_then(|content| content.as_video().ok())\n            .expect(\"video block\");\n        assert!(matches!(video.format(), VideoFormat::Mp4));\n        assert!(\n            video\n                .source()\n                .is_some_and(aws_sdk_bedrockruntime::types::VideoSource::is_s3_location)\n        );\n    }\n\n    #[test]\n    fn test_build_converse_input_rejects_audio_http_url() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user_with_parts(vec![\n                ChatMessageContentPart::text(\"Transcribe this\"),\n                ChatMessageContentPart::audio(\"https://example.com/audio.mp3\"),\n            ])])\n            .build()\n            .unwrap();\n\n        let error = build_converse_input(&request, &Options::default()).unwrap_err();\n        assert!(format!(\"{error}\").contains(\"audio source URL must be data: or s3://\"));\n    }\n\n    #[test]\n    fn test_build_converse_input_rejects_document_without_text() {\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user_with_parts(vec![\n                ChatMessageContentPart::document(ChatMessageContentSource::bytes(\n                    vec![1_u8, 2_u8],\n                    Some(\"text/plain\".to_string()),\n                )),\n            ])])\n            .build()\n            .unwrap();\n\n        let error = build_converse_input(&request, &Options::default()).unwrap_err();\n        assert!(format!(\"{error}\").contains(\"require at least one text part\"));\n    }\n\n    #[test]\n    #[allow(clippy::too_many_lines)]\n    fn test_apply_stream_event_accumulates_deltas() {\n        let mut response = ChatCompletionResponse::default();\n        let mut stop_reason = None;\n\n        apply_stream_event(\n            &ConverseStreamOutput::ContentBlockStart(\n           
     ContentBlockStartEvent::builder()\n                    .content_block_index(0)\n                    .start(ContentBlockStart::ToolUse(\n                        ToolUseBlockStart::builder()\n                            .tool_use_id(\"call_1\")\n                            .name(\"get_weather\")\n                            .build()\n                            .unwrap(),\n                    ))\n                    .build()\n                    .unwrap(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        apply_stream_event(\n            &ConverseStreamOutput::ContentBlockDelta(\n                ContentBlockDeltaEvent::builder()\n                    .content_block_index(0)\n                    .delta(ContentBlockDelta::ToolUse(\n                        ToolUseBlockDelta::builder()\n                            .input(\"{\\\"location\\\":\\\"Amsterdam\\\"}\")\n                            .build()\n                            .unwrap(),\n                    ))\n                    .build()\n                    .unwrap(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        apply_stream_event(\n            &ConverseStreamOutput::ContentBlockDelta(\n                ContentBlockDeltaEvent::builder()\n                    .content_block_index(1)\n                    .delta(ContentBlockDelta::Text(\"Tool call created\".to_string()))\n                    .build()\n                    .unwrap(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        apply_stream_event(\n            &ConverseStreamOutput::ContentBlockDelta(\n                ContentBlockDeltaEvent::builder()\n                    .content_block_index(2)\n                    .delta(ContentBlockDelta::ReasoningContent(\n                        ReasoningContentBlockDelta::Text(\"Thinking...\".to_string()),\n                    ))\n                    .build()\n                   
 .unwrap(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        apply_stream_event(\n            &ConverseStreamOutput::ContentBlockDelta(\n                ContentBlockDeltaEvent::builder()\n                    .content_block_index(2)\n                    .delta(ContentBlockDelta::ReasoningContent(\n                        ReasoningContentBlockDelta::Signature(\"sig_123\".to_string()),\n                    ))\n                    .build()\n                    .unwrap(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        apply_stream_event(\n            &ConverseStreamOutput::Metadata(\n                aws_sdk_bedrockruntime::types::ConverseStreamMetadataEvent::builder()\n                    .usage(\n                        TokenUsage::builder()\n                            .input_tokens(5)\n                            .output_tokens(3)\n                            .total_tokens(8)\n                            .build()\n                            .unwrap(),\n                    )\n                    .build(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        apply_stream_event(\n            &ConverseStreamOutput::MessageStop(\n                MessageStopEvent::builder()\n                    .stop_reason(StopReason::ToolUse)\n                    .build()\n                    .unwrap(),\n            ),\n            &mut response,\n            &mut stop_reason,\n        );\n\n        assert_eq!(response.message.as_deref(), Some(\"Tool call created\"));\n        let tool_call = response\n            .tool_calls\n            .as_ref()\n            .and_then(|calls| calls.first())\n            .expect(\"tool call\");\n        assert_eq!(tool_call.id(), \"call_1\");\n        assert_eq!(tool_call.name(), \"get_weather\");\n        assert_eq!(tool_call.args(), Some(\"{\\\"location\\\":\\\"Amsterdam\\\"}\"));\n        let reasoning = 
response.reasoning.expect(\"reasoning item\");\n        assert_eq!(reasoning.len(), 1);\n        assert_eq!(reasoning[0].id, \"bedrock_reasoning_2\");\n        assert_eq!(\n            reasoning[0].content.as_ref().and_then(|c| c.first()),\n            Some(&\"Thinking...\".to_string())\n        );\n        assert_eq!(reasoning[0].encrypted_content.as_deref(), Some(\"sig_123\"));\n        assert_eq!(response.usage.unwrap().total_tokens, 8);\n        assert!(matches!(stop_reason, Some(StopReason::ToolUse)));\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/aws_bedrock_v2/mod.rs",
    "content": "use std::{pin::Pin, sync::Arc};\n\nuse async_trait::async_trait;\nuse aws_sdk_bedrockruntime::{\n    Client,\n    error::SdkError,\n    operation::{\n        converse::{ConverseError, ConverseOutput},\n        converse_stream::{\n            ConverseStreamError, ConverseStreamOutput as BedrockConverseStreamOutput,\n        },\n    },\n    types::{\n        InferenceConfiguration, Message, OutputConfig, StopReason, SystemContentBlock, TokenUsage,\n        ToolConfiguration, error::ConverseStreamOutputError,\n    },\n};\nuse aws_smithy_types::Document;\nuse derive_builder::Builder;\nuse serde::Serialize;\nuse swiftide_core::chat_completion::{\n    InputTokenDetails, Usage, UsageDetails, errors::LanguageModelError,\n};\nuse tokio::runtime::Handle;\n\n#[cfg(test)]\nuse mockall::automock;\n\nmod chat_completion;\nmod simple_prompt;\nmod structured_prompt;\n#[cfg(test)]\nmod test_utils;\nmod tool_schema;\n\n/// Converse-based integration with AWS Bedrock.\n///\n/// This integration uses Bedrock's unified Converse APIs (`Converse` + `ConverseStream`).\n#[derive(Builder, Clone)]\n#[builder(setter(into, strip_option))]\npub struct AwsBedrock {\n    /// The Bedrock runtime client.\n    #[builder(default = self.default_client(), setter(custom))]\n    client: Arc<dyn BedrockConverse>,\n\n    /// Default options for prompt requests.\n    #[builder(default, setter(custom))]\n    default_options: Options,\n\n    #[cfg(feature = \"metrics\")]\n    #[builder(default)]\n    /// Optional metadata to attach to metrics emitted by this client.\n    metric_metadata: Option<std::collections::HashMap<String, String>>,\n\n    /// A callback function that is called when usage information is available.\n    #[builder(default, setter(custom))]\n    #[allow(clippy::type_complexity)]\n    on_usage: Option<\n        Arc<\n            dyn for<'a> Fn(\n                    &'a Usage,\n                ) -> Pin<\n                    Box<dyn std::future::Future<Output = 
anyhow::Result<()>> + Send + 'a>,\n                > + Send\n                + Sync,\n        >,\n    >,\n}\n\nimpl std::fmt::Debug for AwsBedrock {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"AwsBedrock\")\n            .field(\"client\", &self.client)\n            .field(\"default_options\", &self.default_options)\n            .finish()\n    }\n}\n\n/// Anthropic Claude effort guidance for Bedrock model-specific request fields.\n///\n/// Bedrock currently documents the following support:\n/// - Claude Opus 4.5: `low`, `medium`, `high` via the `effort-2025-11-24` beta header.\n/// - Claude Opus 4.6 adaptive thinking: `low`, `medium`, `high`, `max` with no beta header.\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize)]\n#[serde(rename_all = \"lowercase\")]\npub enum ReasoningEffort {\n    Low,\n    Medium,\n    High,\n    Max,\n}\n\nimpl ReasoningEffort {\n    fn as_str(self) -> &'static str {\n        match self {\n            Self::Low => \"low\",\n            Self::Medium => \"medium\",\n            Self::High => \"high\",\n            Self::Max => \"max\",\n        }\n    }\n}\n\n#[derive(Debug, Clone, Builder, Default)]\n#[builder(setter(strip_option))]\npub struct Options {\n    /// Model ID or ARN used as `modelId` in Converse requests.\n    #[builder(default, setter(into))]\n    pub prompt_model: Option<String>,\n\n    /// Maximum number of tokens in the generated response.\n    #[builder(default)]\n    pub max_tokens: Option<i32>,\n\n    /// Sampling temperature.\n    #[builder(default)]\n    pub temperature: Option<f32>,\n\n    /// Nucleus sampling parameter.\n    #[builder(default)]\n    pub top_p: Option<f32>,\n\n    /// Stop sequences for response generation.\n    #[builder(default, setter(into))]\n    pub stop_sequences: Option<Vec<String>>,\n\n    /// Whether tool calls should enforce strict schema validation.\n    ///\n    /// Defaults to `true` when not set.\n    #[builder(default)]\n    
pub tool_strict: Option<bool>,\n\n    /// Anthropic beta headers forwarded through Bedrock model-specific request fields.\n    ///\n    /// This is useful for Anthropic features on Bedrock that require `anthropic_beta`.\n    #[builder(default, setter(into))]\n    pub anthropic_beta: Option<Vec<String>>,\n\n    /// Anthropic Claude reasoning/token spend guidance forwarded through Bedrock\n    /// `additional_model_request_fields.output_config.effort`.\n    ///\n    /// For Claude Opus 4.5, Bedrock requires the `effort-2025-11-24` beta header. Swiftide adds\n    /// that header automatically when the configured model ID clearly identifies Claude Opus 4.5.\n    /// If you route through an inference profile or ARN, also set `anthropic_beta` explicitly.\n    ///\n    /// For Claude Opus 4.6 adaptive thinking, Bedrock documents `max` in addition to the other\n    /// levels. Use `additional_model_request_fields` to set `thinking.type = \"adaptive\"` when\n    /// needed.\n    #[builder(default)]\n    pub reasoning_effort: Option<ReasoningEffort>,\n\n    /// Provider-specific model request parameters passed to Converse.\n    ///\n    /// This is the Bedrock equivalent of model-specific reasoning controls.\n    #[builder(default)]\n    pub additional_model_request_fields: Option<Document>,\n\n    /// JSON Pointer paths for model-specific response fields.\n    #[builder(default, setter(into))]\n    pub additional_model_response_field_paths: Option<Vec<String>>,\n}\n\nimpl Options {\n    pub fn builder() -> OptionsBuilder {\n        OptionsBuilder::default()\n    }\n\n    pub fn tool_strict_enabled(&self) -> bool {\n        self.tool_strict.unwrap_or(true)\n    }\n\n    pub fn merge(&mut self, other: Options) {\n        if let Some(prompt_model) = other.prompt_model {\n            self.prompt_model = Some(prompt_model);\n        }\n        if let Some(max_tokens) = other.max_tokens {\n            self.max_tokens = Some(max_tokens);\n        }\n        if let 
Some(temperature) = other.temperature {\n            self.temperature = Some(temperature);\n        }\n        if let Some(top_p) = other.top_p {\n            self.top_p = Some(top_p);\n        }\n        if let Some(stop_sequences) = other.stop_sequences {\n            self.stop_sequences = Some(stop_sequences);\n        }\n        if let Some(tool_strict) = other.tool_strict {\n            self.tool_strict = Some(tool_strict);\n        }\n        if let Some(anthropic_beta) = other.anthropic_beta {\n            self.anthropic_beta = Some(anthropic_beta);\n        }\n        if let Some(reasoning_effort) = other.reasoning_effort {\n            self.reasoning_effort = Some(reasoning_effort);\n        }\n        if let Some(additional_model_request_fields) = other.additional_model_request_fields {\n            self.additional_model_request_fields = Some(additional_model_request_fields);\n        }\n        if let Some(additional_model_response_field_paths) =\n            other.additional_model_response_field_paths\n        {\n            self.additional_model_response_field_paths =\n                Some(additional_model_response_field_paths);\n        }\n    }\n}\n\nimpl AwsBedrock {\n    pub fn builder() -> AwsBedrockBuilder {\n        AwsBedrockBuilder::default()\n    }\n\n    /// Retrieve a reference to the default options.\n    pub fn options(&self) -> &Options {\n        &self.default_options\n    }\n\n    /// Retrieve a mutable reference to the default options.\n    pub fn options_mut(&mut self) -> &mut Options {\n        &mut self.default_options\n    }\n\n    fn prompt_model(&self) -> Result<&str, LanguageModelError> {\n        self.default_options\n            .prompt_model\n            .as_deref()\n            .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))\n    }\n\n    async fn report_usage(&self, model: &str, usage: &Usage) -> Result<(), LanguageModelError> {\n        #[cfg(not(feature = \"metrics\"))]\n        let _ = 
model;\n\n        if let Some(callback) = &self.on_usage {\n            callback(usage).await?;\n        }\n\n        #[cfg(feature = \"metrics\")]\n        {\n            swiftide_core::metrics::emit_usage(\n                model,\n                usage.prompt_tokens.into(),\n                usage.completion_tokens.into(),\n                usage.total_tokens.into(),\n                self.metric_metadata.as_ref(),\n            );\n        }\n\n        Ok(())\n    }\n\n    #[allow(unused_variables)]\n    async fn track_completion<R, S>(\n        &self,\n        model: &str,\n        usage: Option<&Usage>,\n        request: Option<&R>,\n        response: Option<&S>,\n    ) -> Result<(), LanguageModelError>\n    where\n        R: Serialize + ?Sized,\n        S: Serialize + ?Sized,\n    {\n        if let Some(usage) = usage {\n            self.report_usage(model, usage).await?;\n        }\n\n        #[cfg(feature = \"langfuse\")]\n        tracing::debug!(\n            langfuse.model = model,\n            langfuse.input = request.and_then(langfuse_json_redacted).unwrap_or_default(),\n            langfuse.output = response.and_then(langfuse_json).unwrap_or_default(),\n            langfuse.usage = usage.and_then(langfuse_json).unwrap_or_default(),\n        );\n\n        Ok(())\n    }\n\n    #[allow(unused_variables)]\n    fn track_failure<R, S>(\n        model: &str,\n        request: Option<&R>,\n        response: Option<&S>,\n        error: &LanguageModelError,\n    ) where\n        R: Serialize + ?Sized,\n        S: Serialize + ?Sized,\n    {\n        #[cfg(feature = \"langfuse\")]\n        tracing::debug!(\n            langfuse.model = model,\n            langfuse.input = request.and_then(langfuse_json_redacted).unwrap_or_default(),\n            langfuse.output = response.and_then(langfuse_json).unwrap_or_default(),\n            langfuse.status_message = error.to_string(),\n        );\n    }\n}\n\nimpl AwsBedrockBuilder {\n    #[allow(clippy::unused_self)]\n    fn 
default_config(&self) -> aws_config::SdkConfig {\n        tokio::task::block_in_place(|| Handle::current().block_on(aws_config::load_from_env()))\n    }\n\n    fn default_client(&self) -> Arc<Client> {\n        Arc::new(Client::new(&self.default_config()))\n    }\n\n    /// Sets the Bedrock runtime client.\n    pub fn client(&mut self, client: Client) -> &mut Self {\n        self.client = Some(Arc::new(client));\n        self\n    }\n\n    /// Sets the default prompt model for Converse requests.\n    pub fn default_prompt_model(&mut self, model: impl Into<String>) -> &mut Self {\n        if let Some(options) = self.default_options.as_mut() {\n            options.prompt_model = Some(model.into());\n        } else {\n            self.default_options = Some(Options {\n                prompt_model: Some(model.into()),\n                ..Default::default()\n            });\n        }\n\n        self\n    }\n\n    /// Sets default options for requests.\n    ///\n    /// Merges with existing options if already set.\n    pub fn default_options(&mut self, options: impl Into<Options>) -> &mut Self {\n        let options = options.into();\n        if let Some(existing_options) = self.default_options.as_mut() {\n            existing_options.merge(options);\n        } else {\n            self.default_options = Some(options);\n        }\n\n        self\n    }\n\n    /// Adds a callback function that will be called when usage information is available.\n    pub fn on_usage<F>(&mut self, func: F) -> &mut Self\n    where\n        F: Fn(&Usage) -> anyhow::Result<()> + Send + Sync + 'static,\n    {\n        let func = Arc::new(func);\n        self.on_usage = Some(Some(Arc::new(move |usage: &Usage| {\n            let func = func.clone();\n            Box::pin(async move { func(usage) })\n        })));\n\n        self\n    }\n\n    /// Adds an asynchronous callback function that will be called when usage information is\n    /// available.\n    pub fn on_usage_async<F>(&mut self, func: 
F) -> &mut Self\n    where\n        F: for<'a> Fn(\n                &'a Usage,\n            )\n                -> Pin<Box<dyn std::future::Future<Output = anyhow::Result<()>> + Send + 'a>>\n            + Send\n            + Sync\n            + 'static,\n    {\n        let func = Arc::new(func);\n        self.on_usage = Some(Some(Arc::new(move |usage: &Usage| {\n            let func = func.clone();\n            Box::pin(async move { func(usage).await })\n        })));\n\n        self\n    }\n\n    #[cfg(test)]\n    #[allow(private_bounds)]\n    pub fn test_client(&mut self, client: impl BedrockConverse + 'static) -> &mut Self {\n        self.client = Some(Arc::new(client));\n        self\n    }\n}\n\n#[cfg_attr(test, automock)]\n#[async_trait]\n#[allow(clippy::too_many_arguments)]\ntrait BedrockConverse: std::fmt::Debug + Send + Sync {\n    async fn converse(\n        &self,\n        model_id: &str,\n        messages: Vec<Message>,\n        system: Option<Vec<SystemContentBlock>>,\n        inference_config: Option<InferenceConfiguration>,\n        tool_config: Option<ToolConfiguration>,\n        output_config: Option<OutputConfig>,\n        additional_model_request_fields: Option<Document>,\n        additional_model_response_field_paths: Option<Vec<String>>,\n    ) -> Result<ConverseOutput, LanguageModelError>;\n\n    async fn converse_stream(\n        &self,\n        model_id: &str,\n        messages: Vec<Message>,\n        system: Option<Vec<SystemContentBlock>>,\n        inference_config: Option<InferenceConfiguration>,\n        tool_config: Option<ToolConfiguration>,\n        additional_model_request_fields: Option<Document>,\n        additional_model_response_field_paths: Option<Vec<String>>,\n    ) -> Result<BedrockConverseStreamOutput, LanguageModelError>;\n}\n\n#[async_trait]\n#[allow(clippy::too_many_arguments)]\nimpl BedrockConverse for Client {\n    async fn converse(\n        &self,\n        model_id: &str,\n        messages: Vec<Message>,\n        
system: Option<Vec<SystemContentBlock>>,\n        inference_config: Option<InferenceConfiguration>,\n        tool_config: Option<ToolConfiguration>,\n        output_config: Option<OutputConfig>,\n        additional_model_request_fields: Option<Document>,\n        additional_model_response_field_paths: Option<Vec<String>>,\n    ) -> Result<ConverseOutput, LanguageModelError> {\n        let mut request = self\n            .converse()\n            .model_id(model_id)\n            .set_messages(Some(messages))\n            .set_system(system)\n            .set_tool_config(tool_config)\n            .set_output_config(output_config)\n            .set_additional_model_request_fields(additional_model_request_fields)\n            .set_additional_model_response_field_paths(additional_model_response_field_paths);\n\n        if let Some(inference_config) = inference_config {\n            request = request.inference_config(inference_config);\n        }\n\n        request\n            .send()\n            .await\n            .map_err(converse_error_to_language_model_error)\n    }\n\n    async fn converse_stream(\n        &self,\n        model_id: &str,\n        messages: Vec<Message>,\n        system: Option<Vec<SystemContentBlock>>,\n        inference_config: Option<InferenceConfiguration>,\n        tool_config: Option<ToolConfiguration>,\n        additional_model_request_fields: Option<Document>,\n        additional_model_response_field_paths: Option<Vec<String>>,\n    ) -> Result<BedrockConverseStreamOutput, LanguageModelError> {\n        let mut request = self\n            .converse_stream()\n            .model_id(model_id)\n            .set_messages(Some(messages))\n            .set_system(system)\n            .set_tool_config(tool_config)\n            .set_additional_model_request_fields(additional_model_request_fields)\n            .set_additional_model_response_field_paths(additional_model_response_field_paths);\n\n        if let Some(inference_config) = inference_config 
{\n            request = request.inference_config(inference_config);\n        }\n\n        request\n            .send()\n            .await\n            .map_err(converse_stream_error_to_language_model_error)\n    }\n}\n\nfn converse_error_to_language_model_error<R>(\n    error: SdkError<ConverseError, R>,\n) -> LanguageModelError\nwhere\n    R: std::fmt::Debug + Send + Sync + 'static,\n{\n    sdk_error_to_language_model_error(error, |service_error| {\n        matches!(\n            service_error,\n            ConverseError::ThrottlingException(_)\n                | ConverseError::ServiceUnavailableException(_)\n                | ConverseError::ModelNotReadyException(_)\n                | ConverseError::ModelTimeoutException(_)\n                | ConverseError::InternalServerException(_)\n        )\n    })\n}\n\nfn converse_stream_error_to_language_model_error<R>(\n    error: SdkError<ConverseStreamError, R>,\n) -> LanguageModelError\nwhere\n    R: std::fmt::Debug + Send + Sync + 'static,\n{\n    sdk_error_to_language_model_error(error, |service_error| {\n        matches!(\n            service_error,\n            ConverseStreamError::ThrottlingException(_)\n                | ConverseStreamError::ServiceUnavailableException(_)\n                | ConverseStreamError::ModelNotReadyException(_)\n                | ConverseStreamError::ModelTimeoutException(_)\n                | ConverseStreamError::InternalServerException(_)\n                | ConverseStreamError::ModelStreamErrorException(_)\n        )\n    })\n}\n\nfn converse_stream_output_error_to_language_model_error<R>(\n    error: SdkError<ConverseStreamOutputError, R>,\n) -> LanguageModelError\nwhere\n    R: std::fmt::Debug + Send + Sync + 'static,\n{\n    sdk_error_to_language_model_error(error, |service_error| {\n        matches!(\n            service_error,\n            ConverseStreamOutputError::ThrottlingException(_)\n                | ConverseStreamOutputError::ServiceUnavailableException(_)\n              
  | ConverseStreamOutputError::InternalServerException(_)\n                | ConverseStreamOutputError::ModelStreamErrorException(_)\n        )\n    })\n}\n\nfn sdk_error_to_language_model_error<E, R>(\n    error: SdkError<E, R>,\n    is_transient_service_error: impl Fn(&E) -> bool,\n) -> LanguageModelError\nwhere\n    E: std::error::Error + Send + Sync + 'static,\n    R: std::fmt::Debug + Send + Sync + 'static,\n{\n    let is_transient = match &error {\n        SdkError::TimeoutError(_) | SdkError::DispatchFailure(_) | SdkError::ResponseError(_) => {\n            true\n        }\n        SdkError::ServiceError(service_error) => is_transient_service_error(service_error.err()),\n        _ => false,\n    };\n    let detailed_error = match error {\n        SdkError::ServiceError(service_error) => anyhow::Error::new(service_error.into_err()),\n        error => anyhow::Error::msg(error_chain_message(&error)),\n    };\n\n    if is_transient {\n        LanguageModelError::transient(detailed_error)\n    } else {\n        LanguageModelError::permanent(detailed_error)\n    }\n}\n\nfn error_chain_message(error: &(dyn std::error::Error + 'static)) -> String {\n    std::iter::successors(Some(error), |err| err.source())\n        .map(std::string::ToString::to_string)\n        .collect::<Vec<_>>()\n        .join(\": \")\n}\n\nfn inference_config_from_options(options: &Options) -> Option<InferenceConfiguration> {\n    let mut builder = InferenceConfiguration::builder();\n    let mut has_any_value = false;\n\n    if let Some(max_tokens) = options.max_tokens {\n        builder = builder.max_tokens(max_tokens);\n        has_any_value = true;\n    }\n\n    if let Some(temperature) = options.temperature {\n        builder = builder.temperature(temperature);\n        has_any_value = true;\n    }\n\n    if let Some(top_p) = options.top_p {\n        builder = builder.top_p(top_p);\n        has_any_value = true;\n    }\n\n    if let Some(stop_sequences) = &options.stop_sequences {\n        
builder = builder.set_stop_sequences(Some(stop_sequences.clone()));\n        has_any_value = true;\n    }\n\n    has_any_value.then(|| builder.build())\n}\n\nfn additional_model_request_fields_from_options(\n    model: &str,\n    options: &Options,\n) -> Result<Option<Document>, LanguageModelError> {\n    if options.reasoning_effort.is_none() && options.anthropic_beta.is_none() {\n        return Ok(options.additional_model_request_fields.clone());\n    }\n\n    let mut fields = match options.additional_model_request_fields.clone() {\n        Some(Document::Object(fields)) => fields,\n        Some(_) => {\n            return Err(LanguageModelError::permanent(\n                \"Bedrock additional_model_request_fields must be an object when using anthropic_beta or reasoning_effort\",\n            ));\n        }\n        None => std::collections::HashMap::new(),\n    };\n\n    if let Some(reasoning_effort) = options.reasoning_effort {\n        let mut output_config = match fields.remove(\"output_config\") {\n            Some(Document::Object(output_config)) => output_config,\n            Some(_) => {\n                return Err(LanguageModelError::permanent(\n                    \"Bedrock additional_model_request_fields.output_config must be an object when using reasoning_effort\",\n                ));\n            }\n            None => std::collections::HashMap::new(),\n        };\n\n        output_config.insert(\n            \"effort\".to_string(),\n            Document::String(reasoning_effort.as_str().to_string()),\n        );\n        fields.insert(\"output_config\".to_string(), Document::Object(output_config));\n    }\n\n    let mut anthropic_beta = match fields.remove(\"anthropic_beta\") {\n        Some(Document::Array(items)) => items\n            .into_iter()\n            .map(document_string)\n            .collect::<Result<Vec<_>, _>>()?,\n        Some(_) => {\n            return Err(LanguageModelError::permanent(\n                \"Bedrock 
additional_model_request_fields.anthropic_beta must be an array of strings\",\n            ));\n        }\n        None => Vec::new(),\n    };\n\n    if let Some(extra_beta_headers) = &options.anthropic_beta {\n        for beta_header in extra_beta_headers {\n            push_unique_string(&mut anthropic_beta, beta_header.clone());\n        }\n    }\n\n    if options.reasoning_effort.is_some() && model.contains(\"claude-opus-4-5\") {\n        push_unique_string(&mut anthropic_beta, \"effort-2025-11-24\".to_string());\n    }\n\n    if !anthropic_beta.is_empty() {\n        fields.insert(\n            \"anthropic_beta\".to_string(),\n            Document::Array(anthropic_beta.into_iter().map(Document::String).collect()),\n        );\n    }\n\n    Ok(Some(Document::Object(fields)))\n}\n\nfn document_string(value: Document) -> Result<String, LanguageModelError> {\n    match value {\n        Document::String(value) => Ok(value),\n        _ => Err(LanguageModelError::permanent(\n            \"Bedrock anthropic_beta entries must be strings\",\n        )),\n    }\n}\n\nfn push_unique_string(values: &mut Vec<String>, value: String) {\n    if !values.iter().any(|existing| existing == &value) {\n        values.push(value);\n    }\n}\n\nfn usage_from_bedrock(usage: &TokenUsage) -> Usage {\n    let cached_tokens = usage\n        .cache_read_input_tokens()\n        .and_then(i32_to_u32)\n        .or_else(|| usage.cache_write_input_tokens().and_then(i32_to_u32));\n\n    let details = cached_tokens.map(|cached_tokens| UsageDetails {\n        input_tokens_details: Some(InputTokenDetails {\n            cached_tokens: Some(cached_tokens),\n        }),\n        ..Default::default()\n    });\n\n    Usage {\n        prompt_tokens: i32_to_u32(usage.input_tokens()).unwrap_or_default(),\n        completion_tokens: i32_to_u32(usage.output_tokens()).unwrap_or_default(),\n        total_tokens: i32_to_u32(usage.total_tokens()).unwrap_or_default(),\n        details,\n    }\n}\n\nfn 
context_length_exceeded_if_empty(\n    has_message: bool,\n    has_tool_calls: bool,\n    has_reasoning: bool,\n    stop_reason: Option<&StopReason>,\n) -> Option<LanguageModelError> {\n    if has_message\n        || has_tool_calls\n        || has_reasoning\n        || !matches!(stop_reason, Some(StopReason::ModelContextWindowExceeded))\n    {\n        return None;\n    }\n\n    Some(LanguageModelError::context_length_exceeded(\n        \"Model context window exceeded\",\n    ))\n}\n\nfn i32_to_u32(value: i32) -> Option<u32> {\n    u32::try_from(value).ok()\n}\n\n#[cfg(feature = \"langfuse\")]\nfn langfuse_json<T: Serialize + ?Sized>(value: &T) -> Option<String> {\n    serde_json::to_string_pretty(value).ok()\n}\n\n#[cfg(feature = \"langfuse\")]\nfn langfuse_json_redacted<T: Serialize + ?Sized>(value: &T) -> Option<String> {\n    let mut value = serde_json::to_value(value).ok()?;\n    redact_sensitive_payloads(&mut value);\n    serde_json::to_string_pretty(&value).ok()\n}\n\n#[cfg(feature = \"langfuse\")]\nfn redact_sensitive_payloads(value: &mut serde_json::Value) {\n    match value {\n        serde_json::Value::Object(map) => {\n            for field in map.values_mut() {\n                redact_sensitive_payloads(field);\n            }\n        }\n        serde_json::Value::Array(items) => {\n            if items.iter().all(|item| item.as_u64().is_some()) && items.len() > 64 {\n                *value = serde_json::Value::String(format!(\"[{} bytes redacted]\", items.len()));\n            } else {\n                for item in items {\n                    redact_sensitive_payloads(item);\n                }\n            }\n        }\n        serde_json::Value::String(text) => {\n            if let Some(truncated) = truncate_data_url(text) {\n                *text = truncated;\n            }\n        }\n        _ => {}\n    }\n}\n\n#[cfg(feature = \"langfuse\")]\nfn truncate_data_url(url: &str) -> Option<String> {\n    const MAX_DATA_PREVIEW: usize = 32;\n\n    if 
!url.starts_with(\"data:\") {\n        return None;\n    }\n\n    let (prefix, data) = url.split_once(',')?;\n    if data.len() <= MAX_DATA_PREVIEW {\n        return None;\n    }\n\n    let preview = &data[..MAX_DATA_PREVIEW];\n    let truncated = data.len() - MAX_DATA_PREVIEW;\n\n    Some(format!(\n        \"{prefix},{preview}...[truncated {truncated} chars]\"\n    ))\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::{\n        Arc,\n        atomic::{AtomicU32, Ordering},\n    };\n\n    use aws_sdk_bedrockruntime::{\n        error::{ConnectorError, SdkError},\n        operation::{converse::ConverseError, converse_stream::ConverseStreamError},\n        types::{\n            StopReason, TokenUsage,\n            error::{\n                ConverseStreamOutputError, InternalServerException, ModelNotReadyException,\n                ModelStreamErrorException, ServiceUnavailableException, ThrottlingException,\n                ValidationException,\n            },\n        },\n    };\n    use swiftide_core::chat_completion::errors::LanguageModelError;\n\n    use super::*;\n\n    fn usage(total_tokens: u32) -> Usage {\n        Usage {\n            prompt_tokens: total_tokens / 2,\n            completion_tokens: total_tokens - (total_tokens / 2),\n            total_tokens,\n            details: None,\n        }\n    }\n\n    #[test]\n    fn test_options_builder_and_merge_only_overrides_present_fields() {\n        let mut base = Options::builder()\n            .prompt_model(\"model-a\")\n            .max_tokens(128)\n            .temperature(0.1)\n            .top_p(0.8)\n            .stop_sequences(vec![\"STOP_A\".to_string()])\n            .tool_strict(false)\n            .build()\n            .unwrap();\n\n        let mut request_fields = std::collections::HashMap::new();\n        request_fields.insert(\"thinking\".to_string(), Document::Bool(true));\n\n        let other = Options {\n            prompt_model: Some(\"model-b\".to_string()),\n            max_tokens: None,\n 
           temperature: Some(0.6),\n            top_p: None,\n            stop_sequences: Some(vec![\"STOP_B\".to_string()]),\n            tool_strict: Some(true),\n            anthropic_beta: Some(vec![\"context-1m-2025-08-07\".to_string()]),\n            reasoning_effort: Some(ReasoningEffort::Medium),\n            additional_model_request_fields: Some(Document::Object(request_fields)),\n            additional_model_response_field_paths: Some(vec![\"/thinking\".to_string()]),\n        };\n\n        base.merge(other);\n\n        assert_eq!(base.prompt_model.as_deref(), Some(\"model-b\"));\n        assert_eq!(base.max_tokens, Some(128));\n        assert_eq!(base.temperature, Some(0.6));\n        assert_eq!(base.top_p, Some(0.8));\n        assert_eq!(\n            base.stop_sequences.as_deref(),\n            Some(&[\"STOP_B\".to_string()][..])\n        );\n        assert_eq!(base.tool_strict, Some(true));\n        assert_eq!(\n            base.anthropic_beta.as_deref(),\n            Some(&[\"context-1m-2025-08-07\".to_string()][..])\n        );\n        assert_eq!(base.reasoning_effort, Some(ReasoningEffort::Medium));\n        assert!(base.additional_model_request_fields.is_some());\n        assert_eq!(\n            base.additional_model_response_field_paths.as_deref(),\n            Some(&[\"/thinking\".to_string()][..])\n        );\n    }\n\n    #[test]\n    fn test_tool_strict_enabled_defaults_to_true() {\n        assert!(Options::default().tool_strict_enabled());\n        assert!(\n            !Options {\n                tool_strict: Some(false),\n                ..Default::default()\n            }\n            .tool_strict_enabled()\n        );\n    }\n\n    #[test]\n    fn test_builder_default_options_and_prompt_model_merge_branches() {\n        let mut builder = AwsBedrock::builder();\n        builder.test_client(MockBedrockConverse::new());\n\n        builder.default_prompt_model(\"model-initial\");\n        builder.default_prompt_model(\"model-final\");\n\n  
      builder.default_options(Options {\n            max_tokens: Some(64),\n            ..Default::default()\n        });\n        builder.default_options(Options {\n            temperature: Some(0.7),\n            ..Default::default()\n        });\n\n        let mut client = builder.build().unwrap();\n        assert_eq!(\n            client.options().prompt_model.as_deref(),\n            Some(\"model-final\")\n        );\n        assert_eq!(client.options().max_tokens, Some(64));\n        assert_eq!(client.options().temperature, Some(0.7));\n\n        client.options_mut().top_p = Some(0.9);\n        assert_eq!(client.options().top_p, Some(0.9));\n        assert!(format!(\"{client:?}\").contains(\"AwsBedrock\"));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_track_completion_invokes_sync_usage_callback() {\n        let observed = Arc::new(AtomicU32::new(0));\n        let observed_for_callback = observed.clone();\n\n        let mut builder = AwsBedrock::builder();\n        builder\n            .test_client(MockBedrockConverse::new())\n            .default_prompt_model(\"model-a\")\n            .on_usage(move |usage| {\n                observed_for_callback.store(usage.total_tokens, Ordering::Relaxed);\n                Ok(())\n            });\n\n        let bedrock = builder.build().unwrap();\n        let req = serde_json::json!({\"request\": \"value\"});\n        let resp = serde_json::json!({\"response\": \"value\"});\n        let usage = usage(42);\n\n        bedrock\n            .track_completion(\"model-a\", Some(&usage), Some(&req), Some(&resp))\n            .await\n            .unwrap();\n\n        assert_eq!(observed.load(Ordering::Relaxed), 42);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_track_completion_invokes_async_usage_callback() {\n        let observed = Arc::new(AtomicU32::new(0));\n        let observed_for_callback = observed.clone();\n\n        let mut builder = AwsBedrock::builder();\n        builder\n         
   .test_client(MockBedrockConverse::new())\n            .default_prompt_model(\"model-a\")\n            .on_usage_async(move |usage| {\n                let observed_for_callback = observed_for_callback.clone();\n                Box::pin(async move {\n                    observed_for_callback.store(usage.total_tokens, Ordering::Relaxed);\n                    Ok(())\n                })\n            });\n\n        let bedrock = builder.build().unwrap();\n        let usage = usage(99);\n\n        bedrock\n            .track_completion(\n                \"model-a\",\n                Some(&usage),\n                None::<&serde_json::Value>,\n                None::<&serde_json::Value>,\n            )\n            .await\n            .unwrap();\n\n        assert_eq!(observed.load(Ordering::Relaxed), 99);\n    }\n\n    #[test]\n    fn test_inference_config_from_options_builds_only_when_values_are_set() {\n        assert!(inference_config_from_options(&Options::default()).is_none());\n\n        let options = Options {\n            max_tokens: Some(256),\n            temperature: Some(0.2),\n            top_p: Some(0.9),\n            stop_sequences: Some(vec![\"DONE\".to_string()]),\n            ..Default::default()\n        };\n\n        let config = inference_config_from_options(&options).expect(\"inference config\");\n        assert_eq!(config.max_tokens(), Some(256));\n        assert_eq!(config.temperature(), Some(0.2));\n        assert_eq!(config.top_p(), Some(0.9));\n        assert_eq!(config.stop_sequences(), [\"DONE\"]);\n    }\n\n    #[test]\n    fn test_additional_model_request_fields_merges_reasoning_effort_and_betas() {\n        let mut thinking = std::collections::HashMap::new();\n        thinking.insert(\"type\".to_string(), Document::String(\"enabled\".to_string()));\n        thinking.insert(\"budget_tokens\".to_string(), Document::from(512_u64));\n\n        let raw_beta_headers = vec![Document::String(\"context-1m-2025-08-07\".to_string())];\n\n        let 
mut additional_fields = std::collections::HashMap::new();\n        additional_fields.insert(\"thinking\".to_string(), Document::Object(thinking));\n        additional_fields.insert(\n            \"anthropic_beta\".to_string(),\n            Document::Array(raw_beta_headers),\n        );\n\n        let options = Options {\n            anthropic_beta: Some(vec![\n                \"interleaved-thinking-2025-05-14\".to_string(),\n                \"effort-2025-11-24\".to_string(),\n            ]),\n            reasoning_effort: Some(ReasoningEffort::Medium),\n            additional_model_request_fields: Some(Document::Object(additional_fields)),\n            ..Default::default()\n        };\n\n        let merged = additional_model_request_fields_from_options(\n            \"anthropic.claude-opus-4-5-20251101-v1:0\",\n            &options,\n        )\n        .unwrap()\n        .expect(\"merged additional fields\");\n\n        let fields = merged.as_object().expect(\"object fields\");\n        let output_config = fields\n            .get(\"output_config\")\n            .and_then(Document::as_object)\n            .expect(\"output_config\");\n        assert_eq!(\n            output_config.get(\"effort\").and_then(Document::as_string),\n            Some(\"medium\")\n        );\n\n        let thinking = fields\n            .get(\"thinking\")\n            .and_then(Document::as_object)\n            .expect(\"thinking\");\n        assert_eq!(\n            thinking.get(\"type\").and_then(Document::as_string),\n            Some(\"enabled\")\n        );\n        assert!(thinking.get(\"budget_tokens\").is_some());\n\n        let anthropic_beta = fields\n            .get(\"anthropic_beta\")\n            .and_then(Document::as_array)\n            .expect(\"anthropic_beta\");\n        let anthropic_beta = anthropic_beta\n            .iter()\n            .map(|value| value.as_string().expect(\"beta header string\"))\n            .collect::<Vec<_>>();\n        assert_eq!(\n            
anthropic_beta,\n            vec![\n                \"context-1m-2025-08-07\",\n                \"interleaved-thinking-2025-05-14\",\n                \"effort-2025-11-24\",\n            ]\n        );\n    }\n\n    #[test]\n    fn test_additional_model_request_fields_requires_object_when_merging_typed_fields() {\n        let options = Options {\n            reasoning_effort: Some(ReasoningEffort::Low),\n            additional_model_request_fields: Some(Document::Bool(true)),\n            ..Default::default()\n        };\n\n        let error = additional_model_request_fields_from_options(\"model\", &options).unwrap_err();\n        assert!(\n            error\n                .to_string()\n                .contains(\"additional_model_request_fields must be an object\")\n        );\n    }\n\n    #[test]\n    fn test_usage_from_bedrock_prefers_cache_read_and_falls_back_to_cache_write() {\n        let read_usage = TokenUsage::builder()\n            .input_tokens(10)\n            .output_tokens(5)\n            .total_tokens(15)\n            .cache_read_input_tokens(3)\n            .cache_write_input_tokens(9)\n            .build()\n            .unwrap();\n        let mapped_read = usage_from_bedrock(&read_usage);\n        assert_eq!(\n            mapped_read\n                .details\n                .as_ref()\n                .and_then(|details| details.input_tokens_details.as_ref())\n                .and_then(|details| details.cached_tokens),\n            Some(3)\n        );\n\n        let write_usage = TokenUsage::builder()\n            .input_tokens(10)\n            .output_tokens(5)\n            .total_tokens(15)\n            .cache_write_input_tokens(7)\n            .build()\n            .unwrap();\n        let mapped_write = usage_from_bedrock(&write_usage);\n        assert_eq!(\n            mapped_write\n                .details\n                .as_ref()\n                .and_then(|details| details.input_tokens_details.as_ref())\n                
.and_then(|details| details.cached_tokens),\n            Some(7)\n        );\n    }\n\n    #[test]\n    fn test_usage_from_bedrock_defaults_negative_counts_to_zero() {\n        let usage = TokenUsage::builder()\n            .input_tokens(-1)\n            .output_tokens(-2)\n            .total_tokens(-3)\n            .build()\n            .unwrap();\n        let mapped = usage_from_bedrock(&usage);\n\n        assert_eq!(mapped.prompt_tokens, 0);\n        assert_eq!(mapped.completion_tokens, 0);\n        assert_eq!(mapped.total_tokens, 0);\n        assert_eq!(i32_to_u32(-1), None);\n        assert_eq!(i32_to_u32(12), Some(12));\n    }\n\n    #[test]\n    fn test_context_length_exceeded_only_when_empty_and_context_limit_hit() {\n        assert!(\n            context_length_exceeded_if_empty(\n                false,\n                false,\n                false,\n                Some(&StopReason::ModelContextWindowExceeded)\n            )\n            .is_some()\n        );\n        assert!(context_length_exceeded_if_empty(true, false, false, None).is_none());\n        assert!(context_length_exceeded_if_empty(false, true, false, None).is_none());\n        assert!(context_length_exceeded_if_empty(false, false, true, None).is_none());\n        assert!(\n            context_length_exceeded_if_empty(false, false, false, Some(&StopReason::EndTurn))\n                .is_none()\n        );\n    }\n\n    #[test]\n    fn test_sdk_error_mapping_classifies_transient_transport_failures() {\n        let timeout = sdk_error_to_language_model_error::<ConverseError, ()>(\n            SdkError::timeout_error(\"timeout\"),\n            |_| false,\n        );\n        assert!(matches!(timeout, LanguageModelError::TransientError(_)));\n\n        let dispatch = sdk_error_to_language_model_error::<ConverseError, ()>(\n            SdkError::dispatch_failure(ConnectorError::other(\"dispatch\".into(), None)),\n            |_| false,\n        );\n        assert!(matches!(dispatch, 
LanguageModelError::TransientError(_)));\n\n        let response = sdk_error_to_language_model_error::<ConverseError, ()>(\n            SdkError::response_error(\"response\", ()),\n            |_| false,\n        );\n        assert!(matches!(response, LanguageModelError::TransientError(_)));\n\n        let construction = sdk_error_to_language_model_error::<ConverseError, ()>(\n            SdkError::construction_failure(\"construction\"),\n            |_| false,\n        );\n        assert!(matches!(\n            construction,\n            LanguageModelError::PermanentError(_)\n        ));\n    }\n\n    #[test]\n    fn test_converse_error_mapping_distinguishes_transient_and_permanent_service_errors() {\n        let throttled = converse_error_to_language_model_error::<()>(SdkError::service_error(\n            ConverseError::ThrottlingException(ThrottlingException::builder().build()),\n            (),\n        ));\n        assert!(matches!(throttled, LanguageModelError::TransientError(_)));\n\n        let validation = converse_error_to_language_model_error::<()>(SdkError::service_error(\n            ConverseError::ValidationException(ValidationException::builder().build()),\n            (),\n        ));\n        assert!(matches!(validation, LanguageModelError::PermanentError(_)));\n    }\n\n    #[test]\n    fn test_converse_stream_error_mapping_distinguishes_transient_and_permanent_service_errors() {\n        let unavailable =\n            converse_stream_error_to_language_model_error::<()>(SdkError::service_error(\n                ConverseStreamError::ServiceUnavailableException(\n                    ServiceUnavailableException::builder().build(),\n                ),\n                (),\n            ));\n        assert!(matches!(unavailable, LanguageModelError::TransientError(_)));\n\n        let validation =\n            converse_stream_error_to_language_model_error::<()>(SdkError::service_error(\n                
ConverseStreamError::ValidationException(ValidationException::builder().build()),\n                (),\n            ));\n        assert!(matches!(validation, LanguageModelError::PermanentError(_)));\n    }\n\n    #[test]\n    fn test_converse_stream_output_error_mapping_distinguishes_transient_and_permanent_service_errors()\n     {\n        let transient =\n            converse_stream_output_error_to_language_model_error::<()>(SdkError::service_error(\n                ConverseStreamOutputError::ModelStreamErrorException(\n                    ModelStreamErrorException::builder().build(),\n                ),\n                (),\n            ));\n        assert!(matches!(transient, LanguageModelError::TransientError(_)));\n\n        let permanent =\n            converse_stream_output_error_to_language_model_error::<()>(SdkError::service_error(\n                ConverseStreamOutputError::ValidationException(\n                    ValidationException::builder().build(),\n                ),\n                (),\n            ));\n        assert!(matches!(permanent, LanguageModelError::PermanentError(_)));\n    }\n\n    #[test]\n    fn test_error_chain_message_collects_nested_sources() {\n        let source = std::io::Error::other(\"inner\");\n        let outer = std::io::Error::other(source);\n        let chain = error_chain_message(&outer);\n\n        assert!(chain.contains(\"inner\"));\n    }\n\n    #[test]\n    fn test_converse_error_mapping_model_not_ready_and_stream_internal_server_are_transient() {\n        let model_not_ready =\n            converse_error_to_language_model_error::<()>(SdkError::service_error(\n                ConverseError::ModelNotReadyException(ModelNotReadyException::builder().build()),\n                (),\n            ));\n        assert!(matches!(\n            model_not_ready,\n            LanguageModelError::TransientError(_)\n        ));\n\n        let stream_internal =\n            
converse_stream_output_error_to_language_model_error::<()>(SdkError::service_error(\n                ConverseStreamOutputError::InternalServerException(\n                    InternalServerException::builder().build(),\n                ),\n                (),\n            ));\n        assert!(matches!(\n            stream_internal,\n            LanguageModelError::TransientError(_)\n        ));\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/aws_bedrock_v2/simple_prompt.rs",
    "content": "use async_trait::async_trait;\nuse swiftide_core::{\n    ChatCompletion,\n    chat_completion::{ChatCompletionRequest, ChatMessage, errors::LanguageModelError},\n    indexing::SimplePrompt,\n    prompt::Prompt,\n};\n\n#[cfg(test)]\nuse crate::aws_bedrock_v2::Options;\n\nuse super::AwsBedrock;\n\n#[async_trait]\nimpl SimplePrompt for AwsBedrock {\n    #[cfg_attr(not(feature = \"langfuse\"), tracing::instrument(skip_all, err))]\n    #[cfg_attr(\n        feature = \"langfuse\",\n        tracing::instrument(skip_all, err, fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        let prompt_text = prompt.render()?;\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::new_user(prompt_text)])\n            .build()\n            .map_err(LanguageModelError::permanent)?;\n\n        let response = self.complete(&request).await?;\n        response\n            .message\n            .ok_or_else(|| LanguageModelError::permanent(\"No text in response\"))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::HashMap;\n    use std::sync::{\n        Arc,\n        atomic::{AtomicU32, Ordering},\n    };\n\n    use aws_sdk_bedrockruntime::Client;\n    use aws_sdk_bedrockruntime::{\n        operation::converse::ConverseOutput,\n        types::{\n            ContentBlock, ConversationRole, ConverseOutput as ConverseResult, Message, StopReason,\n            TokenUsage, ToolUseBlock,\n        },\n    };\n    use aws_smithy_types::Document;\n    use serde_json::{Value, json};\n    use wiremock::{\n        Mock, MockServer, Request, Respond, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    use super::*;\n    use crate::aws_bedrock_v2::{\n        AwsBedrock, MockBedrockConverse, ReasoningEffort,\n        test_utils::{TEST_MODEL_ID, bedrock_client_for_mock_server},\n    };\n\n    fn response_with_text(text: &str) -> 
ConverseOutput {\n        ConverseOutput::builder()\n            .output(ConverseResult::Message(\n                Message::builder()\n                    .role(ConversationRole::Assistant)\n                    .content(ContentBlock::Text(text.to_string()))\n                    .build()\n                    .unwrap(),\n            ))\n            .stop_reason(StopReason::EndTurn)\n            .build()\n            .unwrap()\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_requires_model() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n        bedrock_mock.expect_converse().never();\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .build()\n            .unwrap();\n\n        let error = bedrock.prompt(\"hello\".into()).await.unwrap_err();\n        assert!(matches!(error, LanguageModelError::PermanentError(_)));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_uses_converse_api_and_extracts_text() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .withf(\n                |model_id,\n                 messages,\n                 system,\n                 inference_config,\n                 tool_config,\n                 output_config,\n                 _additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n                    model_id == \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n                        && messages.len() == 1\n                        && matches!(messages[0].role(), ConversationRole::User)\n                        && matches!(messages[0].content().first(), Some(ContentBlock::Text(text)) if text == \"Hello\")\n                        && system.is_none()\n                    && tool_config.is_none()\n                    && output_config.is_none()\n                    && inference_config\n                 
       .as_ref()\n                        .is_some_and(|config| {\n                            config.max_tokens() == Some(256)\n                                && config.temperature() == Some(0.4)\n                                && config.top_p() == Some(0.9)\n                                && config.stop_sequences() == [\"STOP\"]\n                        })\n                },\n            )\n            .returning(|_, _, _, _, _, _, _, _| Ok(response_with_text(\"Hello, world!\")));\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .default_options(Options {\n                max_tokens: Some(256),\n                temperature: Some(0.4),\n                top_p: Some(0.9),\n                stop_sequences: Some(vec![\"STOP\".to_string()]),\n                ..Default::default()\n            })\n            .build()\n            .unwrap();\n\n        let response = bedrock.prompt(\"Hello\".into()).await.unwrap();\n\n        assert_eq!(response, \"Hello, world!\");\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_maps_context_window_stop_reason() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .stop_reason(StopReason::ModelContextWindowExceeded)\n                    .build()\n                    .unwrap())\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let error = bedrock.prompt(\"Hello\".into()).await.unwrap_err();\n\n        assert!(matches!(\n            error,\n            
LanguageModelError::ContextLengthExceeded(_)\n        ));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_invokes_usage_callback() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n                        Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::Text(\"ok\".to_string()))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .usage(\n                        TokenUsage::builder()\n                            .input_tokens(11)\n                            .output_tokens(7)\n                            .total_tokens(18)\n                            .cache_read_input_tokens(5)\n                            .build()\n                            .unwrap(),\n                    )\n                    .stop_reason(StopReason::EndTurn)\n                    .build()\n                    .unwrap())\n            });\n\n        let observed_total = Arc::new(AtomicU32::new(0));\n        let observed_total_for_callback = observed_total.clone();\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .on_usage(move |usage| {\n                observed_total_for_callback.store(usage.total_tokens, Ordering::Relaxed);\n                assert_eq!(usage.prompt_tokens, 11);\n                assert_eq!(usage.completion_tokens, 7);\n                assert_eq!(usage.total_tokens, 18);\n                assert_eq!(\n                    usage\n                        .details\n                        .as_ref()\n                        
.and_then(|details| details.input_tokens_details.as_ref())\n                        .and_then(|details| details.cached_tokens),\n                    Some(5)\n                );\n\n                Ok(())\n            })\n            .build()\n            .unwrap();\n\n        let response = bedrock.prompt(\"Hello\".into()).await.unwrap();\n\n        assert_eq!(response, \"ok\");\n        assert_eq!(observed_total.load(Ordering::Relaxed), 18);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_green_path_with_wiremock() {\n        struct ValidateConverseRequest;\n\n        impl Respond for ValidateConverseRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let payload: Value = serde_json::from_slice(&request.body).expect(\"request json\");\n                assert_eq!(payload[\"messages\"][0][\"role\"], \"user\");\n                assert_eq!(\n                    payload[\"messages\"][0][\"content\"][0][\"text\"],\n                    \"hello from prompt\"\n                );\n\n                ResponseTemplate::new(200).set_body_json(json!({\n                    \"output\": {\n                        \"message\": {\n                            \"role\": \"assistant\",\n                            \"content\": [\n                                {\"text\": \"prompt result\"}\n                            ]\n                        }\n                    },\n                    \"stopReason\": \"end_turn\",\n                    \"usage\": {\n                        \"inputTokens\": 1,\n                        \"outputTokens\": 2,\n                        \"totalTokens\": 3\n                    }\n                }))\n            }\n        }\n\n        let mock_server = MockServer::start().await;\n        Mock::given(method(\"POST\"))\n            .and(path(format!(\"/model/{TEST_MODEL_ID}/converse\")))\n            .respond_with(ValidateConverseRequest)\n            .mount(&mock_server)\n            
.await;\n\n        let client: Client = bedrock_client_for_mock_server(&mock_server.uri());\n        let bedrock = AwsBedrock::builder()\n            .client(client)\n            .default_prompt_model(TEST_MODEL_ID)\n            .build()\n            .unwrap();\n\n        let response = bedrock.prompt(\"hello from prompt\".into()).await.unwrap();\n        assert_eq!(response, \"prompt result\");\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_returns_error_when_completion_has_no_text() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n                        Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::ToolUse(\n                                ToolUseBlock::builder()\n                                    .tool_use_id(\"call_1\")\n                                    .name(\"get_weather\")\n                                    .input(Document::Object(HashMap::new()))\n                                    .build()\n                                    .unwrap(),\n                            ))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .stop_reason(StopReason::ToolUse)\n                    .build()\n                    .unwrap())\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let error = bedrock.prompt(\"Hello\".into()).await.unwrap_err();\n        assert!(matches!(error, LanguageModelError::PermanentError(_)));\n        
assert!(error.to_string().contains(\"No text in response\"));\n    }\n\n    #[ignore = \"requires live AWS Bedrock access and billable model invocation\"]\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn smoke_live_bedrock_reasoning_effort_prompt() {\n        let model = std::env::var(\"SWIFTIDE_AWS_BEDROCK_LIVE_MODEL\")\n            .unwrap_or_else(|_| \"anthropic.claude-opus-4-5-20251101-v1:0\".to_string());\n        let prompt = \"Reply with exactly 'swiftide-bedrock-effort-ok' and nothing else.\";\n\n        let bedrock = AwsBedrock::builder()\n            .default_prompt_model(model.clone())\n            .default_options(\n                Options::builder()\n                    .max_tokens(64)\n                    .reasoning_effort(ReasoningEffort::Low)\n                    .build()\n                    .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let mut attempts = 0;\n        let response = loop {\n            attempts += 1;\n            let attempt = tokio::time::timeout(\n                std::time::Duration::from_secs(60),\n                bedrock.prompt(prompt.into()),\n            )\n            .await\n            .expect(\"live Bedrock prompt timed out\");\n\n            match attempt {\n                Ok(response) => break response,\n                Err(LanguageModelError::TransientError(error)) => {\n                    eprintln!(\"transient Bedrock error during live smoke test: {error}\");\n                    assert!(\n                        attempts < 3,\n                        \"live Bedrock prompt failed after retries: {error}\"\n                    );\n                    tokio::time::sleep(std::time::Duration::from_secs(3)).await;\n                }\n                Err(error) => panic!(\"live Bedrock prompt failed: {error:?}\"),\n            }\n        };\n\n        println!(\"model={model}\");\n        println!(\"response={response}\");\n\n        assert!(\n            
response.contains(\"swiftide-bedrock-effort-ok\"),\n            \"unexpected response: {response}\"\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/aws_bedrock_v2/structured_prompt.rs",
    "content": "use async_trait::async_trait;\nuse aws_sdk_bedrockruntime::types::{\n    ContentBlock, ConversationRole, JsonSchemaDefinition, Message, OutputConfig, OutputFormat,\n    OutputFormatStructure, OutputFormatType,\n};\nuse schemars::Schema;\n#[cfg(feature = \"langfuse\")]\nuse serde_json::json;\nuse swiftide_core::{\n    DynStructuredPrompt, chat_completion::errors::LanguageModelError, prompt::Prompt,\n};\n\nuse super::AwsBedrock;\n\n#[async_trait]\nimpl DynStructuredPrompt for AwsBedrock {\n    #[cfg_attr(not(feature = \"langfuse\"), tracing::instrument(skip_all, err))]\n    #[cfg_attr(\n        feature = \"langfuse\",\n        tracing::instrument(skip_all, err, fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn structured_prompt_dyn(\n        &self,\n        prompt: Prompt,\n        schema: Schema,\n    ) -> Result<serde_json::Value, LanguageModelError> {\n        let prompt_text = prompt.render()?;\n        let model = self.prompt_model()?;\n        let schema_json = serde_json::to_string(&schema).map_err(LanguageModelError::permanent)?;\n        #[cfg(feature = \"langfuse\")]\n        let tracking_request = Some(json!({\n            \"model\": model,\n            \"prompt\": prompt_text.as_str(),\n            \"schema\": schema,\n        }));\n        #[cfg(not(feature = \"langfuse\"))]\n        let tracking_request: Option<serde_json::Value> = None;\n\n        let message = Message::builder()\n            .role(ConversationRole::User)\n            .content(ContentBlock::Text(prompt_text))\n            .build()\n            .map_err(LanguageModelError::permanent)?;\n\n        let output_config = OutputConfig::builder()\n            .text_format(\n                OutputFormat::builder()\n                    .r#type(OutputFormatType::JsonSchema)\n                    .structure(OutputFormatStructure::JsonSchema(\n                        JsonSchemaDefinition::builder()\n                            .schema(schema_json)\n                        
    .name(\"structured_prompt\")\n                            .build()\n                            .map_err(LanguageModelError::permanent)?,\n                    ))\n                    .build()\n                    .map_err(LanguageModelError::permanent)?,\n            )\n            .build();\n        let additional_model_request_fields =\n            super::additional_model_request_fields_from_options(model, &self.default_options)?;\n\n        let response = match self\n            .client\n            .converse(\n                model,\n                vec![message],\n                None,\n                super::inference_config_from_options(&self.default_options),\n                None,\n                Some(output_config),\n                additional_model_request_fields,\n                self.default_options\n                    .additional_model_response_field_paths\n                    .clone(),\n            )\n            .await\n        {\n            Ok(response) => response,\n            Err(error) => {\n                Self::track_failure(\n                    model,\n                    tracking_request.as_ref(),\n                    None::<&serde_json::Value>,\n                    &error,\n                );\n                return Err(error);\n            }\n        };\n\n        let completion = match super::chat_completion::response_to_chat_completion(&response) {\n            Ok(completion) => completion,\n            Err(error) => {\n                Self::track_failure(\n                    model,\n                    tracking_request.as_ref(),\n                    None::<&serde_json::Value>,\n                    &error,\n                );\n                return Err(error);\n            }\n        };\n\n        self.track_completion(\n            model,\n            completion.usage.as_ref(),\n            tracking_request.as_ref(),\n            Some(&completion),\n        )\n        .await?;\n\n        let Some(ref response_text) = 
completion.message else {\n            if let Some(error) = super::context_length_exceeded_if_empty(\n                false,\n                completion.tool_calls.is_some(),\n                completion\n                    .reasoning\n                    .as_ref()\n                    .is_some_and(|reasoning| !reasoning.is_empty()),\n                Some(response.stop_reason()),\n            ) {\n                Self::track_failure(model, tracking_request.as_ref(), Some(&completion), &error);\n                return Err(error);\n            }\n\n            let error = LanguageModelError::permanent(\"No text in response\");\n            Self::track_failure(model, tracking_request.as_ref(), Some(&completion), &error);\n            return Err(error);\n        };\n\n        serde_json::from_str(response_text.trim())\n            .map_err(|error| {\n                LanguageModelError::permanent(anyhow::anyhow!(\n                    \"Failed to parse model response as JSON: {error}\"\n                ))\n            })\n            .inspect_err(|error| {\n                Self::track_failure(model, tracking_request.as_ref(), Some(&completion), error);\n            })\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::HashMap;\n    use std::sync::{\n        Arc,\n        atomic::{AtomicU32, Ordering},\n    };\n\n    use aws_sdk_bedrockruntime::Client;\n    use aws_sdk_bedrockruntime::{\n        operation::converse::ConverseOutput,\n        types::{\n            ContentBlock, ConversationRole, ConverseOutput as ConverseResult, Message, StopReason,\n            TokenUsage, ToolUseBlock,\n        },\n    };\n    use aws_smithy_types::Document;\n    use schemars::{JsonSchema, schema_for};\n    use serde_json::{Value, json};\n    use wiremock::{\n        Mock, MockServer, Request, Respond, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    use super::*;\n    #[cfg(feature = \"langfuse\")]\n    use 
crate::aws_bedrock_v2::test_utils::run_with_langfuse_event_capture;\n    use crate::aws_bedrock_v2::{\n        AwsBedrock, MockBedrockConverse, Options, ReasoningEffort,\n        test_utils::{TEST_MODEL_ID, bedrock_client_for_mock_server},\n    };\n\n    #[derive(Debug, Clone, serde::Serialize, serde::Deserialize, JsonSchema, PartialEq, Eq)]\n    struct StructuredOutput {\n        answer: String,\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_parses_json_response() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .withf(\n                |_,\n                 messages,\n                 _,\n                 _,\n                 _,\n                 output_config,\n                 _additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n                    messages\n                        .first()\n                        .and_then(|message| message.content().first())\n                        .and_then(|content| content.as_text().ok())\n                        .is_some_and(|text| text == \"What is two times twenty one?\")\n                        && output_config\n                            .as_ref()\n                            .and_then(|config| config.text_format())\n                            .is_some_and(|format| {\n                                matches!(format.r#type(), OutputFormatType::JsonSchema)\n                                    && format\n                                        .structure()\n                                        .and_then(|structure| structure.as_json_schema().ok())\n                                        .is_some_and(|schema| {\n                                            schema.schema().contains(\"\\\"answer\\\"\")\n                                        })\n                            })\n                },\n            )\n            .returning(|_, 
_, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n                        Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::Text(\"{\\\"answer\\\":\\\"42\\\"}\".to_string()))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .stop_reason(StopReason::EndTurn)\n                    .build()\n                    .unwrap())\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let value = bedrock\n            .structured_prompt_dyn(\n                \"What is two times twenty one?\".into(),\n                schema_for!(StructuredOutput),\n            )\n            .await\n            .unwrap();\n\n        assert_eq!(\n            serde_json::from_value::<StructuredOutput>(value).unwrap(),\n            StructuredOutput {\n                answer: \"42\".to_string()\n            }\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_passes_reasoning_effort_in_additional_model_request_fields() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .withf(\n                |model_id,\n                 _messages,\n                 _system,\n                 _inference_config,\n                 _tool_config,\n                 output_config,\n                 additional_model_request_fields,\n                 _additional_model_response_field_paths| {\n                    model_id == \"anthropic.claude-opus-4-5-20251101-v1:0\"\n                        && output_config\n                            .as_ref()\n                       
     .and_then(|config| config.text_format())\n                            .is_some()\n                        && additional_model_request_fields\n                            .as_ref()\n                            .is_some_and(|fields| {\n                                let Some(fields) = fields.as_object() else {\n                                    return false;\n                                };\n\n                                let effort_matches = fields\n                                    .get(\"output_config\")\n                                    .and_then(Document::as_object)\n                                    .and_then(|output_config| output_config.get(\"effort\"))\n                                    .and_then(Document::as_string)\n                                    == Some(\"low\");\n                                let beta_matches = fields\n                                    .get(\"anthropic_beta\")\n                                    .and_then(Document::as_array)\n                                    .is_some_and(|betas| {\n                                        betas.iter().any(|beta| {\n                                            beta.as_string() == Some(\"effort-2025-11-24\")\n                                        })\n                                    });\n\n                                effort_matches && beta_matches\n                            })\n                },\n            )\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n                        Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::Text(\"{\\\"answer\\\":\\\"42\\\"}\".to_string()))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .stop_reason(StopReason::EndTurn)\n                    .build()\n            
        .unwrap())\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-opus-4-5-20251101-v1:0\")\n            .default_options(Options {\n                reasoning_effort: Some(ReasoningEffort::Low),\n                ..Default::default()\n            })\n            .build()\n            .unwrap();\n\n        let value = bedrock\n            .structured_prompt_dyn(\n                \"What is two times twenty one?\".into(),\n                schema_for!(StructuredOutput),\n            )\n            .await\n            .unwrap();\n\n        assert_eq!(\n            serde_json::from_value::<StructuredOutput>(value).unwrap(),\n            StructuredOutput {\n                answer: \"42\".to_string()\n            }\n        );\n    }\n\n    #[cfg(feature = \"langfuse\")]\n    #[test]\n    fn test_structured_prompt_tracks_langfuse_failure_metadata_on_converse_error() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Err(LanguageModelError::permanent(\"structured prompt failed\"))\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let (result, events) = run_with_langfuse_event_capture(|| async {\n            bedrock\n                .structured_prompt_dyn(\n                    \"Summarize this failure\".into(),\n                    schema_for!(StructuredOutput),\n                )\n                .await\n        });\n\n        let error = result.expect_err(\"request should fail\");\n        assert!(error.to_string().contains(\"structured prompt failed\"));\n\n        let failure_event = events\n            .iter()\n         
   .find(|event| event.contains_key(\"langfuse.status_message\"))\n            .expect(\"langfuse failure event\");\n\n        assert_eq!(\n            failure_event\n                .get(\"langfuse.model\")\n                .map(std::string::String::as_str),\n            Some(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n        );\n        assert!(\n            failure_event\n                .get(\"langfuse.input\")\n                .is_some_and(|input| input.contains(\"Summarize this failure\"))\n        );\n        assert!(\n            failure_event\n                .get(\"langfuse.status_message\")\n                .is_some_and(|message| message.contains(\"structured prompt failed\"))\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_reports_usage() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n                        Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::Text(\"{\\\"answer\\\":\\\"42\\\"}\".to_string()))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .usage(\n                        TokenUsage::builder()\n                            .input_tokens(9)\n                            .output_tokens(5)\n                            .total_tokens(14)\n                            .build()\n                            .unwrap(),\n                    )\n                    .stop_reason(StopReason::EndTurn)\n                    .build()\n                    .unwrap())\n            });\n\n        let observed_total = Arc::new(AtomicU32::new(0));\n        let observed_total_for_callback = observed_total.clone();\n\n   
     let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .on_usage(move |usage| {\n                observed_total_for_callback.store(usage.total_tokens, Ordering::Relaxed);\n                Ok(())\n            })\n            .build()\n            .unwrap();\n\n        let _ = bedrock\n            .structured_prompt_dyn(\n                \"What is two times twenty one?\".into(),\n                schema_for!(StructuredOutput),\n            )\n            .await\n            .unwrap();\n\n        assert_eq!(observed_total.load(Ordering::Relaxed), 14);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_green_path_with_wiremock() {\n        struct ValidateStructuredConverseRequest;\n\n        impl Respond for ValidateStructuredConverseRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let payload: Value = serde_json::from_slice(&request.body).expect(\"request json\");\n\n                assert_eq!(payload[\"messages\"][0][\"role\"], \"user\");\n                assert_eq!(\n                    payload[\"messages\"][0][\"content\"][0][\"text\"],\n                    \"What is two times twenty one?\"\n                );\n                assert_eq!(payload[\"outputConfig\"][\"textFormat\"][\"type\"], \"json_schema\");\n                assert_eq!(\n                    payload[\"outputConfig\"][\"textFormat\"][\"structure\"][\"jsonSchema\"][\"name\"],\n                    \"structured_prompt\"\n                );\n                let schema =\n                    payload[\"outputConfig\"][\"textFormat\"][\"structure\"][\"jsonSchema\"][\"schema\"]\n                        .as_str()\n                        .expect(\"schema string\");\n                assert!(schema.contains(\"\\\"answer\\\"\"));\n\n                ResponseTemplate::new(200).set_body_json(json!({\n                   
 \"output\": {\n                        \"message\": {\n                            \"role\": \"assistant\",\n                            \"content\": [\n                                {\"text\": \"{\\\"answer\\\":\\\"42\\\"}\"}\n                            ]\n                        }\n                    },\n                    \"stopReason\": \"end_turn\",\n                    \"usage\": {\n                        \"inputTokens\": 2,\n                        \"outputTokens\": 3,\n                        \"totalTokens\": 5\n                    }\n                }))\n            }\n        }\n\n        let mock_server = MockServer::start().await;\n        Mock::given(method(\"POST\"))\n            .and(path(format!(\"/model/{TEST_MODEL_ID}/converse\")))\n            .respond_with(ValidateStructuredConverseRequest)\n            .mount(&mock_server)\n            .await;\n\n        let client: Client = bedrock_client_for_mock_server(&mock_server.uri());\n        let bedrock = AwsBedrock::builder()\n            .client(client)\n            .default_prompt_model(TEST_MODEL_ID)\n            .build()\n            .unwrap();\n\n        let value = bedrock\n            .structured_prompt_dyn(\n                \"What is two times twenty one?\".into(),\n                schema_for!(StructuredOutput),\n            )\n            .await\n            .unwrap();\n\n        assert_eq!(\n            serde_json::from_value::<StructuredOutput>(value).unwrap(),\n            StructuredOutput {\n                answer: \"42\".to_string()\n            }\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_returns_error_when_response_has_no_text() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n              
          Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::ToolUse(\n                                ToolUseBlock::builder()\n                                    .tool_use_id(\"call_1\")\n                                    .name(\"structured_prompt\")\n                                    .input(Document::Object(HashMap::new()))\n                                    .build()\n                                    .unwrap(),\n                            ))\n                            .build()\n                            .unwrap(),\n                    ))\n                    .stop_reason(StopReason::ToolUse)\n                    .build()\n                    .unwrap())\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let error = bedrock\n            .structured_prompt_dyn(\"Prompt\".into(), schema_for!(StructuredOutput))\n            .await\n            .unwrap_err();\n        assert!(matches!(error, LanguageModelError::PermanentError(_)));\n        assert!(error.to_string().contains(\"No text in response\"));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_returns_error_on_invalid_json_payload() {\n        let mut bedrock_mock = MockBedrockConverse::new();\n\n        bedrock_mock\n            .expect_converse()\n            .once()\n            .returning(|_, _, _, _, _, _, _, _| {\n                Ok(ConverseOutput::builder()\n                    .output(ConverseResult::Message(\n                        Message::builder()\n                            .role(ConversationRole::Assistant)\n                            .content(ContentBlock::Text(\"not-json\".to_string()))\n                            .build()\n                            .unwrap(),\n             
       ))\n                    .stop_reason(StopReason::EndTurn)\n                    .build()\n                    .unwrap())\n            });\n\n        let bedrock = AwsBedrock::builder()\n            .test_client(bedrock_mock)\n            .default_prompt_model(\"anthropic.claude-3-5-sonnet-20241022-v2:0\")\n            .build()\n            .unwrap();\n\n        let error = bedrock\n            .structured_prompt_dyn(\"Prompt\".into(), schema_for!(StructuredOutput))\n            .await\n            .unwrap_err();\n        assert!(matches!(error, LanguageModelError::PermanentError(_)));\n        assert!(\n            error\n                .to_string()\n                .contains(\"Failed to parse model response as JSON\")\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/aws_bedrock_v2/test_utils.rs",
    "content": "use aws_credential_types::Credentials;\nuse aws_sdk_bedrockruntime::{Client, Config, config::Region};\nuse aws_smithy_types::event_stream::{Header, HeaderValue, Message};\nuse serde_json::Value;\n\npub(crate) const TEST_MODEL_ID: &str = \"bedrock-test-model\";\n\npub(crate) fn bedrock_client_for_mock_server(endpoint_url: &str) -> Client {\n    let config = Config::builder()\n        .behavior_version_latest()\n        .region(Region::new(\"us-east-1\"))\n        .credentials_provider(Credentials::for_tests())\n        .endpoint_url(endpoint_url)\n        .build();\n\n    Client::from_conf(config)\n}\n\npub(crate) fn converse_stream_event(event_type: &str, payload: &Value) -> Vec<u8> {\n    let message = Message::new_from_parts(\n        vec![\n            Header::new(\":message-type\", HeaderValue::String(\"event\".into())),\n            Header::new(\n                \":event-type\",\n                HeaderValue::String(event_type.to_owned().into()),\n            ),\n            Header::new(\n                \":content-type\",\n                HeaderValue::String(\"application/json\".into()),\n            ),\n        ],\n        serde_json::to_vec(&payload).expect(\"serialize event payload\"),\n    );\n\n    let mut bytes = Vec::new();\n    aws_smithy_eventstream::frame::write_message_to(&message, &mut bytes)\n        .expect(\"encode event stream frame\");\n    bytes\n}\n\n#[cfg(feature = \"langfuse\")]\npub(crate) type RecordedTracingEvent = std::collections::HashMap<String, String>;\n\n#[cfg(feature = \"langfuse\")]\n#[derive(Clone)]\nstruct EventCaptureLayer {\n    events: std::sync::Arc<std::sync::Mutex<Vec<RecordedTracingEvent>>>,\n}\n\n#[cfg(feature = \"langfuse\")]\n#[derive(Default)]\nstruct EventFieldVisitor {\n    fields: RecordedTracingEvent,\n}\n\n#[cfg(feature = \"langfuse\")]\nimpl tracing::field::Visit for EventFieldVisitor {\n    fn record_str(&mut self, field: &tracing::field::Field, value: &str) {\n        self.fields\n            
.insert(field.name().to_string(), value.to_string());\n    }\n\n    fn record_bool(&mut self, field: &tracing::field::Field, value: bool) {\n        self.fields\n            .insert(field.name().to_string(), value.to_string());\n    }\n\n    fn record_i64(&mut self, field: &tracing::field::Field, value: i64) {\n        self.fields\n            .insert(field.name().to_string(), value.to_string());\n    }\n\n    fn record_u64(&mut self, field: &tracing::field::Field, value: u64) {\n        self.fields\n            .insert(field.name().to_string(), value.to_string());\n    }\n\n    fn record_debug(&mut self, field: &tracing::field::Field, value: &dyn std::fmt::Debug) {\n        self.fields\n            .insert(field.name().to_string(), format!(\"{value:?}\"));\n    }\n}\n\n#[cfg(feature = \"langfuse\")]\nimpl<S> tracing_subscriber::Layer<S> for EventCaptureLayer\nwhere\n    S: tracing::Subscriber + for<'span> tracing_subscriber::registry::LookupSpan<'span>,\n{\n    fn on_event(\n        &self,\n        event: &tracing::Event<'_>,\n        _ctx: tracing_subscriber::layer::Context<'_, S>,\n    ) {\n        let mut visitor = EventFieldVisitor::default();\n        event.record(&mut visitor);\n        self.events.lock().unwrap().push(visitor.fields);\n    }\n}\n\n#[cfg(feature = \"langfuse\")]\npub(crate) fn run_with_langfuse_event_capture<F, Fut, T>(\n    future_factory: F,\n) -> (T, Vec<RecordedTracingEvent>)\nwhere\n    F: FnOnce() -> Fut,\n    Fut: std::future::Future<Output = T>,\n{\n    use tracing_subscriber::prelude::*;\n\n    let events = std::sync::Arc::new(std::sync::Mutex::new(Vec::new()));\n    let subscriber = tracing_subscriber::registry().with(EventCaptureLayer {\n        events: events.clone(),\n    });\n    let dispatch = tracing::Dispatch::new(subscriber);\n    let runtime = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        .build()\n        .expect(\"test runtime\");\n\n    let result =\n        
tracing::dispatcher::with_default(&dispatch, || runtime.block_on(future_factory()));\n\n    let recorded_events = events.lock().expect(\"event capture mutex\").clone();\n\n    (result, recorded_events)\n}\n"
  },
  {
    "path": "swiftide-integrations/src/aws_bedrock_v2/tool_schema.rs",
    "content": "use serde_json::Value;\nuse swiftide_core::chat_completion::{ToolSpec, ToolSpecError};\n\npub(super) struct AwsBedrockToolSchema(Value);\n\nimpl AwsBedrockToolSchema {\n    pub(super) fn into_value(self) -> Value {\n        self.0\n    }\n}\n\nimpl TryFrom<&ToolSpec> for AwsBedrockToolSchema {\n    type Error = ToolSpecError;\n\n    fn try_from(spec: &ToolSpec) -> Result<Self, Self::Error> {\n        Ok(Self(spec.canonical_parameters_schema_json()?))\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/dashscope/config.rs",
    "content": "use reqwest::header::{AUTHORIZATION, HeaderMap};\nuse secrecy::{ExposeSecret as _, SecretString};\nuse serde::Deserialize;\n\nconst DASHSCOPE_API_BASE: &str = \"https://dashscope.aliyuncs.com/compatible-mode/v1\";\n\n#[derive(Clone, Debug, Deserialize)]\n#[serde(default)]\npub struct DashscopeConfig {\n    api_base: String,\n    api_key: SecretString,\n}\n\nimpl Default for DashscopeConfig {\n    fn default() -> Self {\n        Self {\n            api_base: DASHSCOPE_API_BASE.to_string(),\n            api_key: get_api_key().into(),\n        }\n    }\n}\n\nfn get_api_key() -> String {\n    std::env::var(\"QWEN_API_KEY\")\n        .unwrap_or_else(|_| std::env::var(\"DASHSCOPE_API_KEY\").unwrap_or_default())\n}\n\nimpl async_openai::config::Config for DashscopeConfig {\n    fn headers(&self) -> HeaderMap {\n        let mut headers = HeaderMap::new();\n\n        headers.insert(\n            AUTHORIZATION,\n            format!(\"Bearer {}\", self.api_key.expose_secret())\n                .as_str()\n                .parse()\n                .unwrap(),\n        );\n\n        headers\n    }\n\n    fn url(&self, path: &str) -> String {\n        format!(\"{}{}\", self.api_base, path)\n    }\n\n    fn api_base(&self) -> &str {\n        &self.api_base\n    }\n\n    fn api_key(&self) -> &SecretString {\n        &self.api_key\n    }\n\n    fn query(&self) -> Vec<(&str, &str)> {\n        vec![]\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/dashscope/mod.rs",
    "content": "use config::DashscopeConfig;\n\nuse crate::openai;\n\nmod config;\n\npub type Dashscope = openai::GenericOpenAI<DashscopeConfig>;\nimpl Dashscope {\n    pub fn builder() -> DashscopeBuilder {\n        DashscopeBuilder::default()\n    }\n}\n\npub type DashscopeBuilder = openai::GenericOpenAIBuilder<DashscopeConfig>;\npub type DashscopeBuilderError = openai::GenericOpenAIBuilderError;\npub use openai::{Options, OptionsBuilder, OptionsBuilderError};\n\nimpl Default for Dashscope {\n    fn default() -> Self {\n        Dashscope::builder().build().unwrap()\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    #[test]\n    fn test_default_prompt_model() {\n        let openai = Dashscope::builder()\n            .default_prompt_model(\"qwen-long\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openai.default_options.prompt_model,\n            Some(\"qwen-long\".to_string())\n        );\n\n        let openai = Dashscope::builder()\n            .default_prompt_model(\"qwen-turbo\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openai.default_options.prompt_model,\n            Some(\"qwen-turbo\".to_string())\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/extensions.sql",
    "content": "INSTALL vss;\nINSTALL fts;\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/hybrid_query.sql",
    "content": "with fts as (\n    select \n        uuid, \n        chunk, \n        path,\n        fts_main_{{table_name}}.match_bm25(\n            uuid,\n            {{query}},\n            fields := chunk\n        ) as score\n    from {{table_name}}\n    limit {{top_n}}\n),\nembd as (\n    select \n        uuid, \n        chunk, \n        path,\n        array_cosine_similarity({{embedding_name}}, cast([{{embedding}}] as float[{{embedding_size}}])) as score\n    from {{table_name}}\n    limit {{top_n}}\n),\nnormalized_scores as (\n    select \n        fts.uuid, \n        fts.chunk, \n        fts.path,\n        fts.score as raw_fts_score, \n        embd.score as raw_embd_score,\n        (fts.score / (select max(score) from fts)) as norm_fts_score,\n        ((embd.score + 1) / (select max(score) + 1 from embd)) as norm_embd_score\n    from \n        fts\n    inner join\n        embd \n    on fts.uuid = embd.uuid\n)\nselect \n    uuid,\n    chunk,\n    path,\n    raw_fts_score, \n    raw_embd_score, \n    norm_fts_score, \n    norm_embd_score, \n    -- (alpha * norm_embd_score + (1-alpha) * norm_fts_score)\n    (0.8*norm_embd_score + 0.2*norm_fts_score) AS score_cc\nfrom \n    normalized_scores\norder by \n    score_cc desc\nlimit {{top_k}};\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/mod.rs",
    "content": "use std::{\n    collections::HashMap,\n    sync::{Arc, Mutex},\n};\n\nuse anyhow::{Context as _, Result};\nuse derive_builder::Builder;\nuse swiftide_core::{\n    indexing::{Chunk, EmbeddedField},\n    querying::search_strategies::HybridSearch,\n};\nuse tera::Context;\nuse tokio::sync::RwLock;\n\npub mod node_cache;\npub mod persist;\npub mod retrieve;\n\nconst DEFAULT_INDEXING_SCHEMA: &str = include_str!(\"schema.sql\");\nconst DEFAULT_UPSERT_QUERY: &str = include_str!(\"upsert.sql\");\nconst DEFAULT_HYBRID_QUERY: &str = include_str!(\"hybrid_query.sql\");\n\n/// Provides `Persist`, `Retrieve`, and `NodeCache` for duckdb\n///\n/// Unfortunately Metadata is not stored.\n///\n/// Supports the following search strategies:\n/// - `SimilaritySingleEmbedding`\n/// - `HybridSearch` (<https://motherduck.com/blog/search-using-duckdb-part-3>/)\n/// - Custom\n///\n/// NOTE: The integration is not optimized for ultra large datasets / load. It might work, if it\n/// doesn't let us know <3.\n#[derive(Clone, Builder)]\n#[builder(setter(into))]\npub struct Duckdb<T: Chunk = String> {\n    /// The connection to the database\n    ///\n    /// Note that this uses the tokio version of a mutex because the duckdb connection contains a\n    /// `RefCell`. This is not ideal, but it is what it is.\n    #[builder(setter(custom))]\n    connection: Arc<Mutex<duckdb::Connection>>,\n\n    /// The name of the table to use for storing nodes. 
Defaults to \"swiftide\".\n    #[builder(default = \"swiftide\".into())]\n    table_name: String,\n\n    /// The schema to use for the table\n    ///\n    /// Note that if you change the schema, you probably also need to change the upsert query.\n    ///\n    /// Additionally, if you intend to use vectors, you must install and load the vss extension.\n    #[builder(default = self.default_schema())]\n    schema: String,\n\n    // The vectors to be stored, field name -> size\n    #[builder(default)]\n    vectors: HashMap<EmbeddedField, usize>,\n\n    /// Batch size for storing nodes\n    #[builder(default = \"256\")]\n    batch_size: usize,\n\n    /// Sql to upsert a node\n    #[builder(private, default = self.default_node_upsert_sql())]\n    node_upsert_sql: String,\n\n    /// Name of the table to use for caching nodes. Defaults to `\"swiftide_cache\"`.\n    #[builder(default = \"swiftide_cache\".into())]\n    cache_table: String,\n\n    /// Tracks if the cache table has been created\n    #[builder(private, default = Arc::new(false.into()))]\n    cache_table_created: Arc<RwLock<bool>>, // note might need a mutex\n\n    /// Prefix to be used for keys stored in the database to avoid collisions. Can be used to\n    /// manually invalidate the cache.\n    #[builder(default = \"String::new()\")]\n    cache_key_prefix: String,\n\n    /// If enabled, vectors will be upserted with an ON CONFLICT DO UPDATE. If disabled, ON\n    /// conflict does nothing. 
Requires `duckdb` >= 1.2.1\n    #[builder(default)]\n    #[allow(dead_code)]\n    upsert_vectors: bool,\n\n    #[builder(default)]\n    chunk_type: std::marker::PhantomData<T>,\n}\n\nimpl<T: Chunk> std::fmt::Debug for Duckdb<T> {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Duckdb\")\n            .field(\"connection\", &\"Arc<Mutex<duckdb::Connection>>\")\n            .field(\"table_name\", &self.table_name)\n            .field(\"batch_size\", &self.batch_size)\n            .finish()\n    }\n}\n\nimpl Duckdb<String> {\n    pub fn builder() -> DuckdbBuilder<String> {\n        DuckdbBuilder::<String>::default()\n    }\n}\n\nimpl<T: Chunk> Duckdb<T> {\n    /// Name of the indexing table\n    pub fn table_name(&self) -> &str {\n        &self.table_name\n    }\n\n    /// Name of the cache table\n    pub fn cache_table(&self) -> &str {\n        &self.cache_table\n    }\n\n    /// Returns the connection to the database\n    pub fn connection(&self) -> &Mutex<duckdb::Connection> {\n        &self.connection\n    }\n\n    /// Creates HNSW indices on the vector fields\n    ///\n    /// These are *not* persisted. 
You must recreate them on startup.\n    ///\n    /// If you want to persist them, refer to the duckdb documentation.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the connection or statement fails\n    ///\n    /// # Panics\n    ///\n    /// If the mutex locking the connection is poisoned\n    pub fn create_vector_indices(&self) -> Result<()> {\n        let table_name = &self.table_name;\n        let mut conn = self.connection.lock().unwrap();\n        let tx = conn.transaction().context(\"Failed to start transaction\")?;\n        {\n            for vector in self.vectors.keys() {\n                tx.execute(\n                    &format!(\n                        \"CREATE INDEX IF NOT EXISTS idx_{vector} ON {table_name} USING hnsw ({vector}) WITH (metric = 'cosine')\",\n                    ),\n                    [],\n                )\n                .context(\"Could not create index\")?;\n            }\n        }\n        tx.commit().context(\"Failed to commit transaction\")?;\n        Ok(())\n    }\n\n    /// Safely creates the cache table if it does not exist. 
Can be used concurrently\n    ///\n    /// # Errors\n    ///\n    /// Errors if the table or index could not be created\n    ///\n    /// # Panics\n    ///\n    /// If the mutex locking the connection is poisoned\n    pub async fn lazy_create_cache(&self) -> anyhow::Result<()> {\n        if !*self.cache_table_created.read().await {\n            let mut lock = self.cache_table_created.write().await;\n            let conn = self.connection.lock().unwrap();\n            conn.execute(\n                &format!(\n                    \"CREATE TABLE IF NOT EXISTS {} (uuid TEXT PRIMARY KEY, path TEXT)\",\n                    self.cache_table\n                ),\n                [],\n            )\n            .context(\"Could not create table\")?;\n            // Create an extra index on path\n            conn.execute(\n                &format!(\n                    \"CREATE INDEX IF NOT EXISTS idx_path ON {} (path)\",\n                    self.cache_table\n                ),\n                [],\n            )\n            .context(\"Could not create index\")?;\n            *lock = true;\n        }\n        Ok(())\n    }\n\n    /// Formats a node key for the cache table\n    pub fn node_key(&self, node: &swiftide_core::indexing::Node<T>) -> String {\n        format!(\"{}.{}\", self.cache_key_prefix, node.id())\n    }\n\n    fn hybrid_query_sql(\n        &self,\n        search_strategy: &HybridSearch,\n        query: &str,\n        embedding: &[f32],\n    ) -> Result<String> {\n        let table_name = &self.table_name;\n\n        // Silently ignores multiple vector fields\n        let (field_name, embedding_size) = self\n            .vectors\n            .iter()\n            .next()\n            .context(\"No vectors configured\")?;\n\n        if self.vectors.len() > 1 {\n            tracing::warn!(\n                \"Multiple vectors configured, but only the first one will be used: {:?}\",\n                self.vectors\n            );\n        }\n\n        let embedding 
= embedding\n            .iter()\n            .map(ToString::to_string)\n            .collect::<Vec<_>>()\n            .join(\",\");\n\n        let context = Context::from_value(serde_json::json!({\n            \"table_name\": table_name,\n            \"top_n\": search_strategy.top_n(),\n            \"top_k\": search_strategy.top_k(),\n            \"embedding_name\": field_name,\n            \"embedding_size\": embedding_size,\n            \"query\": wrap_and_escape(query),\n            \"embedding\": embedding,\n        }))?;\n\n        let rendered = tera::Tera::one_off(DEFAULT_HYBRID_QUERY, &context, false)?;\n        Ok(rendered)\n    }\n}\n\nfn wrap_and_escape(s: &str) -> String {\n    let quote = '\\'';\n    let mut buf = String::new();\n    buf.push(quote);\n    let chars = s.chars();\n    for ch in chars {\n        // escape `quote` by doubling it\n        if ch == quote {\n            buf.push(ch);\n        }\n        buf.push(ch);\n    }\n    buf.push(quote);\n\n    buf\n}\n\nimpl<T: Chunk> DuckdbBuilder<T> {\n    pub fn connection(&mut self, connection: impl Into<duckdb::Connection>) -> &mut Self {\n        self.connection = Some(Arc::new(Mutex::new(connection.into())));\n        self\n    }\n\n    pub fn with_vector(&mut self, field: EmbeddedField, size: usize) -> &mut Self {\n        self.vectors\n            .get_or_insert_with(HashMap::new)\n            .insert(field, size);\n        self\n    }\n\n    fn default_schema(&self) -> String {\n        let mut context = Context::default();\n        context.insert(\"table_name\", &self.table_name);\n        context.insert(\"vectors\", &self.vectors.clone().unwrap_or_default());\n\n        tera::Tera::one_off(DEFAULT_INDEXING_SCHEMA, &context, false)\n            .expect(\"Could not render schema; infallible\")\n    }\n\n    fn default_node_upsert_sql(&self) -> String {\n        let mut context = Context::default();\n        context.insert(\"table_name\", &self.table_name);\n
        context.insert(\"vectors\", &self.vectors.clone().unwrap_or_default());\n        context.insert(\"upsert_vectors\", &self.upsert_vectors);\n\n        context.insert(\n            \"vector_field_names\",\n            &self\n                .vectors\n                .as_ref()\n                .map(|v| v.keys().collect::<Vec<_>>())\n                .unwrap_or_default(),\n        );\n\n        tracing::info!(\"Rendering upsert SQL\");\n        tera::Tera::one_off(DEFAULT_UPSERT_QUERY, &context, false)\n            .expect(\"could not render upsert query; infallible\")\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/node_cache.rs",
"content": "use anyhow::Context as _;\nuse async_trait::async_trait;\nuse swiftide_core::{\n    NodeCache,\n    indexing::{Chunk, Node},\n};\n\nuse super::Duckdb;\n\nmacro_rules! unwrap_or_log {\n    ($result:expr) => {\n        match $result {\n            Ok(value) => value,\n            Err(e) => {\n                tracing::error!(\"Error: {:#}\", e);\n                debug_assert!(\n                    false,\n                    \"Duckdb should not give errors unless in very weird situations; this is a bug: {:#}\",\n                    e\n                );\n                return false;\n            }\n        }\n    };\n}\n\n#[async_trait]\nimpl<T: Chunk> NodeCache for Duckdb<T> {\n    type Input = T;\n\n    async fn get(&self, node: &Node<T>) -> bool {\n        unwrap_or_log!(\n            self.lazy_create_cache()\n                .await\n                .context(\"failed to create cache table\")\n        );\n\n        let sql = format!(\n            \"SELECT EXISTS(SELECT 1 FROM {} WHERE uuid = ?)\",\n            &self.cache_table\n        );\n\n        let lock = self.connection.lock().unwrap();\n        let mut stmt = unwrap_or_log!(\n            lock.prepare(&sql)\n                .context(\"Failed to prepare duckdb statement for cache get\")\n        );\n\n        let present = unwrap_or_log!(\n            stmt.query_map([self.node_key(node)], |row| row.get::<_, bool>(0))\n                .context(\"failed to query for documents\")\n        )\n        .next()\n        .transpose();\n\n        unwrap_or_log!(present).unwrap_or(false)\n    }\n\n    async fn set(&self, node: &Node<T>) {\n        if let Err(err) = self\n            .lazy_create_cache()\n            .await\n            .context(\"failed to create cache table\")\n        {\n            tracing::error!(\"Failed to create cache table: {:#}\", err);\n            return;\n        }\n\n        let sql = format!(\n            \"INSERT INTO {} (uuid, path) VALUES (?, ?) 
ON CONFLICT (uuid) DO NOTHING\",\n            &self.cache_table\n        );\n\n        let lock = self.connection.lock().unwrap();\n        let mut stmt = match lock\n            .prepare(&sql)\n            .context(\"Failed to prepare duckdb statement for cache set\")\n        {\n            Ok(stmt) => stmt,\n            Err(err) => {\n                tracing::error!(\n                    \"Failed to prepare duckdb statement for cache set: {:#}\",\n                    err\n                );\n                return;\n            }\n        };\n\n        if let Err(err) = stmt\n            .execute([self.node_key(node), node.path.to_string_lossy().into()])\n            .context(\"failed to insert into cache table\")\n        {\n            tracing::error!(\"Failed to insert into cache table: {:#}\", err);\n        }\n    }\n\n    async fn clear(&self) -> anyhow::Result<()> {\n        let sql = format!(\"DROP TABLE IF EXISTS {}\", &self.cache_table);\n        let lock = self.connection.lock().unwrap();\n        let mut stmt = lock\n            .prepare(&sql)\n            .context(\"Failed to prepare duckdb statement for cache clear\")?;\n\n        stmt.execute([]).context(\"failed to delete cache table\")?;\n\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use swiftide_core::indexing::TextNode;\n\n    fn setup_duckdb() -> Duckdb {\n        Duckdb::builder()\n            .connection(duckdb::Connection::open_in_memory().unwrap())\n            .build()\n            .unwrap()\n    }\n\n    #[tokio::test]\n    async fn test_get_set() {\n        let duckdb = setup_duckdb();\n        let node = TextNode::new(\"test_get_set\");\n\n        assert!(!duckdb.get(&node).await);\n        duckdb.set(&node).await;\n        assert!(duckdb.get(&node).await);\n    }\n\n    #[tokio::test]\n    async fn test_clear() {\n        let duckdb = setup_duckdb();\n        let node = TextNode::new(\"test_clear\");\n\n        duckdb.set(&node).await;\n        
assert!(duckdb.get(&node).await);\n        duckdb.clear().await.unwrap();\n        assert!(!duckdb.get(&node).await);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/persist.rs",
    "content": "use std::{\n    borrow::Cow,\n    path::Path,\n    sync::{LazyLock, Mutex as StdMutex},\n};\n\nuse anyhow::{Context as _, Result};\nuse async_trait::async_trait;\nuse duckdb::{\n    Statement, ToSql, params, params_from_iter,\n    types::{ToSqlOutput, Value},\n};\nuse swiftide_core::{\n    Persist,\n    indexing::{self, Chunk, Metadata, Node},\n};\nuse uuid::Uuid;\n\nuse super::Duckdb;\n\nstatic DUCKDB_EXTENSION_INSTALL_LOCK: LazyLock<StdMutex<()>> = LazyLock::new(|| StdMutex::new(()));\n\n#[allow(dead_code)]\nenum TextNodeValues<'a> {\n    Uuid(Uuid),\n    Path(&'a Path),\n    Chunk(&'a str),\n    Metadata(&'a Metadata),\n    Embedding(Cow<'a, [f32]>),\n    Null,\n}\n\nimpl ToSql for TextNodeValues<'_> {\n    fn to_sql(&self) -> duckdb::Result<ToSqlOutput<'_>> {\n        match self {\n            TextNodeValues::Uuid(uuid) => Ok(ToSqlOutput::Owned(uuid.to_string().into())),\n            // Should be borrow-able\n            TextNodeValues::Path(path) => Ok(path.to_string_lossy().to_string().into()),\n            TextNodeValues::Chunk(chunk) => chunk.to_sql(),\n            TextNodeValues::Metadata(_metadata) => {\n                unimplemented!(\"maps are not yet implemented for duckdb\");\n                // Casting doesn't work either, the duckdb conversion is also not implemented :(\n            }\n            TextNodeValues::Embedding(vector) => {\n                let array_str = format!(\n                    \"[{}]\",\n                    vector\n                        .iter()\n                        .map(ToString::to_string)\n                        .collect::<Vec<_>>()\n                        .join(\",\")\n                );\n                Ok(ToSqlOutput::Owned(array_str.into()))\n            }\n            TextNodeValues::Null => Ok(ToSqlOutput::Owned(Value::Null)),\n        }\n    }\n}\n\nimpl<T: Chunk + AsRef<str>> Duckdb<T> {\n    #[allow(clippy::unused_self)]\n    fn install_extensions(&self, conn: &duckdb::Connection) -> Result<()> 
{\n        // DuckDB extension install writes to a shared on-disk extension directory.\n        // Serializing installs avoids flaky concurrent install/load behavior in tests/CI.\n        let _lock = DUCKDB_EXTENSION_INSTALL_LOCK.lock().unwrap();\n        conn.execute_batch(include_str!(\"extensions.sql\"))\n            .context(\"Failed to install duckdb extensions (vss, fts)\")?;\n        Ok(())\n    }\n\n    fn store_node_on_stmt(&self, stmt: &mut Statement<'_>, node: &Node<T>) -> Result<()> {\n        let mut values = vec![\n            TextNodeValues::Uuid(node.id()),\n            TextNodeValues::Chunk(node.chunk.as_ref()),\n            TextNodeValues::Path(&node.path),\n        ];\n\n        let Some(node_vectors) = &node.vectors else {\n            anyhow::bail!(\"Expected node to have vectors; cannot store into duckdb\");\n        };\n\n        for field in self.vectors.keys() {\n            let Some(vector) = node_vectors.get(field) else {\n                anyhow::bail!(\"Expected vector for field {field} in node\");\n            };\n\n            values.push(TextNodeValues::Embedding(vector.into()));\n        }\n\n        // TODO: Investigate concurrency in duckdb, maybe optimistic if it works\n        stmt.execute(params_from_iter(values))\n            .context(\"Failed to store node\")?;\n\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk + AsRef<str>> Persist for Duckdb<T> {\n    type Input = T;\n    type Output = T;\n\n    async fn setup(&self) -> Result<()> {\n        tracing::debug!(\"Setting up duckdb schema\");\n\n        {\n            let conn = self.connection.lock().unwrap();\n\n            // Create if not exists does not seem to work with duckdb, so we check first\n            if conn\n                // Duckdb has issues with params it seems.\n                .query_row(&format!(\"SHOW {}\", self.table_name()), params![], |row| {\n                    row.get::<_, String>(0)\n                })\n                .is_ok()\n
{\n                tracing::debug!(\"Indexing table already exists, skipping creation\");\n                return Ok(());\n            }\n\n            // Install extensions before schema loading so LOAD vss/fts in the schema succeeds.\n            self.install_extensions(&conn)?;\n\n            conn.execute_batch(&self.schema)\n                .context(\"Failed to create indexing table\")?;\n\n            tracing::debug!(schema = &self.schema, \"Indexing table created\");\n        }\n\n        tokio::time::sleep(std::time::Duration::from_secs(1)).await;\n\n        {\n            let conn = self.connection.lock().unwrap();\n            // We need to run this separately to ensure the table is created before we create the\n            // index\n            conn.execute_batch(&format!(\n                \"PRAGMA create_fts_index('{}', 'uuid', 'chunk', stemmer = 'porter',\n                 stopwords = 'english', ignore = '(\\\\.|[^a-z])+',\n                 strip_accents = 1, lower = 1, overwrite = 0);\n\",\n                self.table_name\n            ))?;\n        }\n\n        tracing::info!(\"Setup completed\");\n\n        Ok(())\n    }\n\n    async fn store(&self, node: indexing::Node<T>) -> Result<indexing::Node<T>> {\n        let lock = self.connection.lock().unwrap();\n        let mut stmt = lock.prepare(&self.node_upsert_sql)?;\n        self.store_node_on_stmt(&mut stmt, &node)?;\n\n        Ok(node)\n    }\n\n    async fn batch_store(&self, nodes: Vec<indexing::Node<T>>) -> indexing::IndexingStream<T> {\n        // TODO: Must batch\n        let mut new_nodes = Vec::with_capacity(nodes.len());\n\n        tracing::debug!(\"Waiting for transaction\");\n        let mut conn = self.connection.lock().unwrap();\n        tracing::debug!(\"Got transaction\");\n        let tx = match conn.transaction().context(\"Failed to start transaction\") {\n            Ok(tx) => tx,\n            Err(err) => {\n                return Err(err).into();\n            }\n        };\n\n     
   tracing::debug!(\"Starting batch store\");\n        {\n            let mut stmt = match tx\n                .prepare(&self.node_upsert_sql)\n                .context(\"Failed to prepare statement\")\n            {\n                Ok(stmt) => stmt,\n                Err(err) => {\n                    return Err(err).into();\n                }\n            };\n\n            for node in nodes {\n                new_nodes.push(self.store_node_on_stmt(&mut stmt, &node).map(|()| node));\n            }\n        };\n        if let Err(err) = tx.commit().context(\"Failed to commit transaction\") {\n            return Err(err).into();\n        }\n\n        new_nodes.into()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use futures_util::TryStreamExt as _;\n    use indexing::{EmbeddedField, TextNode};\n\n    use super::*;\n
    #[test_log::test(tokio::test)]\n    async fn test_persisting_nodes() {\n        let client = Duckdb::builder()\n            .connection(duckdb::Connection::open_in_memory().unwrap())\n            .table_name(\"test\".to_string())\n            .with_vector(EmbeddedField::Combined, 3)\n            .build()\n            .unwrap();\n\n        let node = TextNode::new(\"Hello duckdb!\")\n            .with_vectors([(EmbeddedField::Combined, vec![1.0, 2.0, 3.0])])\n            .to_owned();\n\n        client.setup().await.unwrap();\n        client.store(node.clone()).await.unwrap();\n\n        tracing::info!(\"Stored node\");\n\n        {\n            let connection = client.connection.lock().unwrap();\n            let mut stmt = connection\n                .prepare(\"SELECT uuid,path,chunk FROM test\")\n                .unwrap();\n            let node_iter = stmt\n                .query_map([], |row| {\n                    Ok((\n                        row.get::<_, String>(0).unwrap(), // uuid\n                        row.get::<_, String>(1).unwrap(), // path\n                        row.get::<_, String>(2).unwrap(), // chunk\n                    ))\n                })\n                .unwrap();\n\n            let retrieved = node_iter.collect::<Result<Vec<_>, _>>().unwrap();\n            assert_eq!(retrieved.len(), 1);\n        }\n\n        tracing::info!(\"Retrieved node\");\n        // Verify the upsert and batch work\n        let new_nodes = vec![node.clone(), node.clone(), node.clone()];\n        let stream_nodes: Vec<TextNode> = client\n            .batch_store(new_nodes)\n            .await\n            .try_collect()\n            .await\n            .unwrap();\n\n        assert_eq!(stream_nodes.len(), 3);\n        assert_eq!(stream_nodes[0], node);\n
        tracing::info!(\"Batch stored nodes 1\");\n        {\n            let connection = client.connection.lock().unwrap();\n            let mut stmt = connection\n                .prepare(\"SELECT uuid,path,chunk FROM test\")\n                .unwrap();\n            let node_iter = stmt\n                .query_map([], |row| {\n                    Ok((\n                        row.get::<_, String>(0).unwrap(), // uuid\n                        row.get::<_, String>(1).unwrap(), // path\n                        row.get::<_, String>(2).unwrap(), // chunk\n                    ))\n                })\n                .unwrap();\n\n            let retrieved = node_iter.collect::<Result<Vec<_>, _>>().unwrap();\n            assert_eq!(retrieved.len(), 1);\n        }\n\n        // Test batch store fully\n        let mut new_node = node.clone();\n        new_node.chunk = \"Something else\".into();\n\n        let new_nodes = vec![node.clone(), new_node.clone(), new_node.clone()];\n        let stream = client.batch_store(new_nodes).await;\n\n        let streamed_nodes: Vec<TextNode> = stream.try_collect().await.unwrap();\n        assert_eq!(streamed_nodes.len(), 3);\n        assert_eq!(streamed_nodes[0], node);\n\n        {\n            let connection = client.connection.lock().unwrap();\n            let mut stmt = connection\n                .prepare(\"SELECT uuid,path,chunk FROM test\")\n                .unwrap();\n\n            let node_iter = stmt\n                .query_map([], |row| {\n                    Ok((\n                        row.get::<_, String>(0).unwrap(), // uuid\n                        row.get::<_, String>(1).unwrap(), // path\n                        row.get::<_, String>(2).unwrap(), // chunk\n                    ))\n                })\n                .unwrap();\n            let retrieved = node_iter.collect::<Result<Vec<_>, _>>().unwrap();\n            assert_eq!(retrieved.len(), 2);\n        }\n    }\n
    #[ignore = \"json types are acting up in duckdb at the moment\"]\n    #[test_log::test(tokio::test)]\n    async fn test_with_metadata() {\n        let client = Duckdb::builder()\n            .connection(duckdb::Connection::open_in_memory().unwrap())\n            .table_name(\"test\".to_string())\n            .with_vector(EmbeddedField::Combined, 3)\n            .build()\n            .unwrap();\n\n        let mut node = TextNode::new(\"Hello duckdb!\")\n            .with_vectors([(EmbeddedField::Combined, vec![1.0, 2.0, 3.0])])\n            .to_owned();\n\n        node.metadata\n            .insert(\"filter\".to_string(), \"true\".to_string());\n\n        client.setup().await.unwrap();\n        client.store(node).await.unwrap();\n\n        tracing::info!(\"Stored node\");\n\n        let connection = client.connection.lock().unwrap();\n        let mut stmt = connection\n            .prepare(\"SELECT uuid,path,chunk FROM test\")\n            .unwrap();\n\n        let node_iter = stmt\n            .query_map([], |row| {\n                Ok((\n                    row.get::<_, String>(0).unwrap(), // uuid\n                    row.get::<_, String>(1).unwrap(), // path\n                    row.get::<_, String>(2).unwrap(), // chunk\n                    row.get::<_, Value>(3).unwrap(),  // metadata\n                                                      // row.get::<_, String>(3).unwrap(), // metadata\n                                                      // row.get::<_, Vec<f32>>(4).unwrap(), // vector\n                ))\n            })\n            .unwrap();\n\n        let retrieved = node_iter.collect::<Result<Vec<_>, _>>().unwrap();\n        dbg!(&retrieved);\n        assert_eq!(retrieved.len(), 1);\n\n        let Value::Map(metadata) = &retrieved[0].3 else {\n            panic!(\"Expected metadata to be a map\");\n        };\n\n        assert_eq!(metadata.keys().count(), 1);\n        assert_eq!(\n            metadata.get(&Value::Text(\"filter\".into())).unwrap(),\n            &Value::Text(\"true\".into())\n        );\n    }\n
    #[test_log::test(tokio::test)]\n    async fn test_running_setup_twice() {\n        let client = Duckdb::builder()\n            .connection(duckdb::Connection::open_in_memory().unwrap())\n            .table_name(\"test\".to_string())\n            .with_vector(EmbeddedField::Combined, 3)\n            .build()\n            .unwrap();\n\n        client.setup().await.unwrap();\n        client.setup().await.unwrap(); // Should not panic or error\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_persisted() {\n        let temp_db_path = temp_dir::TempDir::new().unwrap();\n        let temp_db_path = temp_db_path.path().join(\"test_duckdb.db\");\n\n        let client = Duckdb::builder()\n            .connection(duckdb::Connection::open(temp_db_path).unwrap())\n            .table_name(\"test\".to_string())\n            .with_vector(EmbeddedField::Combined, 3)\n            .build()\n            .unwrap();\n\n        let mut node = TextNode::new(\"Hello duckdb!\")\n            .with_vectors([(EmbeddedField::Combined, vec![1.0, 2.0, 3.0])])\n            .to_owned();\n\n        node.metadata\n            .insert(\"filter\".to_string(), \"true\".to_string());\n\n        client.setup().await.unwrap();\n        client.store(node).await.unwrap();\n\n        tracing::info!(\"Stored node\");\n\n        let connection = client.connection.lock().unwrap();\n        let mut stmt = connection\n            .prepare(\"SELECT uuid,path,chunk FROM test\")\n            .unwrap();\n\n        let node_iter = stmt\n            .query_map([], |row| {\n                Ok((\n                    row.get::<_, String>(0).unwrap(), // uuid\n                    row.get::<_, String>(1).unwrap(), // path\n                    row.get::<_, String>(2).unwrap(), // chunk\n                ))\n            })\n            .unwrap();\n\n        let retrieved = node_iter.collect::<Result<Vec<_>, _>>().unwrap();\n        dbg!(&retrieved);\n        assert_eq!(retrieved.len(), 1);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/retrieve.rs",
    "content": "use anyhow::{Context as _, Result};\nuse async_trait::async_trait;\nuse swiftide_core::{\n    Retrieve,\n    indexing::Chunk,\n    querying::{\n        Document, Query,\n        search_strategies::{CustomStrategy, HybridSearch, SimilaritySingleEmbedding},\n        states,\n    },\n};\n\nuse super::Duckdb;\n\n#[async_trait]\nimpl<T: Chunk> Retrieve<SimilaritySingleEmbedding> for Duckdb<T> {\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let Some(embedding) = query.embedding.as_ref() else {\n            return Err(anyhow::Error::msg(\"Missing embedding in query state\"));\n        };\n\n        let table_name = &self.table_name;\n\n        // Silently ignores multiple vector fields\n        let (field_name, embedding_size) = self\n            .vectors\n            .iter()\n            .next()\n            .context(\"No vectors configured\")?;\n\n        let limit = search_strategy.top_k();\n\n        // Ideally it should be a prepared statement, where only the new parameters lead to extra\n        // allocations. 
This is possible in 1.2.1, but that version is still broken for VSS via\n        // Rust.\n        let sql = format!(\n            \"SELECT uuid, chunk, path FROM {table_name}\\n\n            ORDER BY array_distance({field_name}, ARRAY[{}]::FLOAT[{embedding_size}])\\n\n            LIMIT {limit}\",\n            embedding\n                .iter()\n                .map(ToString::to_string)\n                .collect::<Vec<_>>()\n                .join(\",\")\n        );\n\n        tracing::trace!(\"[duckdb] Executing query: {}\", sql);\n\n        let conn = self.connection().lock().unwrap();\n\n        let mut stmt = conn\n            .prepare(&sql)\n            .context(\"Failed to prepare duckdb statement for retrieve\")?;\n\n        tracing::trace!(\"[duckdb] Retrieving documents\");\n\n        let documents = stmt\n            .query_map([], |row| {\n                Ok(Document::builder()\n                    .metadata([(\"id\", row.get::<_, String>(0)?), (\"path\", row.get(2)?)])\n                    .content(row.get::<_, String>(1)?)\n                    .build()\n                    .expect(\"Failed to build document; should never happen\"))\n            })\n            .context(\"failed to query for documents\")?\n            .collect::<Result<Vec<Document>, _>>()\n            .context(\"failed to build documents\")?;\n\n        tracing::debug!(\"[duckdb] Retrieved documents\");\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk> Retrieve<CustomStrategy<String>> for Duckdb<T> {\n    async fn retrieve(\n        &self,\n        search_strategy: &CustomStrategy<String>,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let sql = search_strategy\n            .build_query(&query)\n            .await\n            .context(\"Failed to build query\")?;\n\n        tracing::debug!(\"[duckdb] Executing query: {}\", sql);\n\n        let conn = self.connection().lock().unwrap();\n        let 
mut stmt = conn\n            .prepare(&sql)\n            .context(\"Failed to prepare duckdb statement for retrieve\")?;\n\n        tracing::debug!(\"[duckdb] Prepared statement\");\n\n        let documents = stmt\n            .query_map([], |row| {\n                Ok(Document::builder()\n                    .metadata([(\"id\", row.get::<_, String>(0)?), (\"path\", row.get(2)?)])\n                    .content(row.get::<_, String>(1)?)\n                    .build()\n                    .expect(\"Failed to build document; should never happen\"))\n            })\n            .context(\"failed to query for documents\")?\n            .collect::<Result<Vec<Document>, _>>()\n            .context(\"failed to build documents\")?;\n\n        tracing::debug!(\"[duckdb] Retrieved documents\");\n\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\n#[async_trait]\nimpl<T: Chunk> Retrieve<HybridSearch> for Duckdb<T> {\n    async fn retrieve(\n        &self,\n        search_strategy: &HybridSearch,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let Some(embedding) = query.embedding.as_ref() else {\n            return Err(anyhow::Error::msg(\"Missing embedding in query state\"));\n        };\n\n        let sql = self\n            .hybrid_query_sql(search_strategy, query.current(), embedding)\n            .context(\"Failed to build query\")?;\n\n        tracing::debug!(\"[duckdb] Executing query: {}\", sql);\n\n        let conn = self.connection().lock().unwrap();\n        let mut stmt = conn\n            .prepare(&sql)\n            .context(\"Failed to prepare duckdb statement for retrieve\")?;\n\n        tracing::debug!(\"[duckdb] Prepared statement\");\n\n        let documents = stmt\n            // DuckDB has issues with using `params!` :(\n            .query_map([], |row| {\n                Ok(Document::builder()\n                    .metadata([(\"id\", row.get::<_, String>(0)?), (\"path\", row.get(2)?)])\n
    .content(row.get::<_, String>(1)?)\n                    .build()\n                    .expect(\"Failed to build document; should never happen\"))\n            })\n            .context(\"failed to query for documents\")?\n            .collect::<Result<Vec<Document>, _>>()\n            .context(\"failed to build documents\")?;\n\n        tracing::debug!(\"[duckdb] Retrieved documents\");\n\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use indexing::{EmbeddedField, TextNode};\n    use swiftide_core::{Persist as _, indexing};\n\n    use super::*;\n\n    #[test_log::test(tokio::test)]\n    async fn test_duckdb_retrieving_documents() {\n        let client = Duckdb::builder()\n            .connection(duckdb::Connection::open_in_memory().unwrap())\n            .table_name(\"test\".to_string())\n            .with_vector(EmbeddedField::Combined, 3)\n            .build()\n            .unwrap();\n\n        let node = TextNode::new(\"Hello duckdb!\")\n            .with_vectors([(EmbeddedField::Combined, vec![1.0, 2.0, 3.0])])\n            .to_owned();\n\n        client.setup().await.unwrap();\n        client.store(node.clone()).await.unwrap();\n\n        tracing::info!(\"Stored node\");\n\n        let query = Query::<states::Pending>::builder()\n            .embedding(vec![1.0, 2.0, 3.0])\n            .original(\"Some query\")\n            .build()\n            .unwrap();\n\n        let result = client\n            .retrieve(&SimilaritySingleEmbedding::default(), query)\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 1);\n        let document = result.documents().first().unwrap();\n\n        assert_eq!(document.content(), \"Hello duckdb!\");\n        assert_eq!(\n            document.metadata().get(\"id\").unwrap().as_str(),\n            Some(node.id().to_string().as_str())\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn 
test_duckdb_retrieving_documents_hybrid() {\n        let client = Duckdb::builder()\n            .connection(duckdb::Connection::open_in_memory().unwrap())\n            .table_name(\"test\".to_string())\n            .with_vector(EmbeddedField::Combined, 3)\n            .build()\n            .unwrap();\n\n        let node = TextNode::new(\"Hello duckdb!\")\n            .with_vectors([(EmbeddedField::Combined, vec![1.0, 2.0, 3.0])])\n            .to_owned();\n\n        client.setup().await.unwrap();\n        client.store(node.clone()).await.unwrap();\n\n        tracing::info!(\"Stored node\");\n\n        let query = Query::<states::Pending>::builder()\n            .embedding(vec![1.0, 2.0, 3.0])\n            .original(\"Some query\")\n            .build()\n            .unwrap();\n\n        let result = client\n            .retrieve(&HybridSearch::default(), query)\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 1);\n        let document = result.documents().first().unwrap();\n\n        assert_eq!(document.content(), \"Hello duckdb!\");\n        assert_eq!(\n            document.metadata().get(\"id\").unwrap().as_str(),\n            Some(node.id().to_string().as_str())\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/schema.sql",
    "content": "LOAD vss;\nLOAD fts;\n\nCREATE TABLE IF NOT EXISTS {{table_name}} (\n  uuid TEXT PRIMARY KEY,\n  chunk TEXT NOT NULL,\n  path TEXT,\n\n  {% for vector, size in vectors %}\n    {{vector}} FLOAT[{{size}}],\n  {% endfor %}\n);\n"
  },
  {
    "path": "swiftide-integrations/src/duckdb/upsert.sql",
    "content": "INSERT INTO {{ table_name }} (uuid, chunk, path,  {{ vector_field_names | join(sep=\", \") }})\nVALUES (?, ?, ?,\n  {% for _ in range(end=vector_field_names | length) %}\n    ?,\n  {% endfor %}\n  )\n{% if upsert_vectors -%}\nON CONFLICT (uuid) DO UPDATE SET\n  chunk = EXCLUDED.chunk,\n  path = EXCLUDED.path,\n  {% for vector in vector_field_names %}\n    {{ vector }} = EXCLUDED.{{ vector }},\n  {% endfor %}\n{% else -%}\nON CONFLICT (uuid) DO NOTHING\n{% endif -%}\n;\n"
  },
  {
    "path": "swiftide-integrations/src/fastembed/embedding_model.rs",
    "content": "use anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{EmbeddingModel, Embeddings, chat_completion::errors::LanguageModelError};\n\nuse super::{EmbeddingModelType, FastEmbed};\n#[async_trait]\nimpl EmbeddingModel for FastEmbed {\n    #[tracing::instrument(skip_all)]\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError> {\n        let mut embedding_model = self.embedding_model.lock().await;\n\n        match &mut *embedding_model {\n            EmbeddingModelType::Dense(model) => model\n                .embed(input, self.batch_size)\n                .map_err(LanguageModelError::permanent),\n            EmbeddingModelType::Sparse(_) => Err(LanguageModelError::PermanentError(\n                \"Expected dense model, got sparse\".into(),\n            )),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/fastembed/mod.rs",
    "content": "//! `FastEmbed` integration for text embedding.\n\nuse std::sync::Arc;\n\nuse anyhow::Result;\nuse derive_builder::Builder;\nuse fastembed::{SparseTextEmbedding, TextEmbedding};\n\npub use swiftide_core::EmbeddingModel as _;\npub use swiftide_core::SparseEmbeddingModel as _;\n\nmod embedding_model;\nmod rerank;\nmod sparse_embedding_model;\n\npub use rerank::Rerank;\n\npub enum EmbeddingModelType {\n    Dense(TextEmbedding),\n    Sparse(SparseTextEmbedding),\n}\n\nimpl From<TextEmbedding> for EmbeddingModelType {\n    fn from(val: TextEmbedding) -> Self {\n        EmbeddingModelType::Dense(val)\n    }\n}\n\nimpl From<SparseTextEmbedding> for EmbeddingModelType {\n    fn from(val: SparseTextEmbedding) -> Self {\n        EmbeddingModelType::Sparse(val)\n    }\n}\n\n/// Default batch size for embedding\n///\n/// Matches the default batch size in [`fastembed`](https://docs.rs/fastembed)\nconst DEFAULT_BATCH_SIZE: usize = 256;\n\n/// A wrapper around the `FastEmbed` library for text embedding.\n///\n/// Supports a variety of fast text embedding models. The default is the `Flag Embedding` model\n/// with a dimension size of 384.\n///\n/// A default can also be used for sparse embeddings, which by default uses Splade. Sparse\n/// embeddings are useful for more exact search in combination with dense vectors.\n///\n/// `Into` is implemented for all available models from fastembed-rs.\n///\n/// See the [FastEmbed documentation](https://docs.rs/fastembed) for more information on usage.\n///\n/// `FastEmbed` can be customized by setting the embedding model via the builder. The batch size can\n/// also be set and is recommended. 
Batch size should match the batch size in the indexing\n/// pipeline.\n///\n/// Note that the embedding vector dimensions need to match the dimensions of the vector database\n/// collection.\n///\n/// Requires the `fastembed` feature to be enabled.\n#[derive(Builder, Clone)]\n#[builder(\n    pattern = "owned",\n    setter(strip_option),\n    build_fn(error = "anyhow::Error")\n)]\npub struct FastEmbed {\n    #[builder(\n        setter(custom),\n        default = "Arc::new(tokio::sync::Mutex::new(TextEmbedding::try_new(Default::default())?.into()))"\n    )]\n    embedding_model: Arc<tokio::sync::Mutex<EmbeddingModelType>>,\n    #[builder(default = "Some(DEFAULT_BATCH_SIZE)")]\n    batch_size: Option<usize>,\n}\n\nimpl std::fmt::Debug for FastEmbed {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct("FastEmbed")\n            .field("batch_size", &self.batch_size)\n            .finish()\n    }\n}\n\nimpl FastEmbed {\n    /// Tries to build a default `FastEmbed` with `Flag Embedding`.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the build fails\n    pub fn try_default() -> Result<Self> {\n        Self::builder().build()\n    }\n\n    /// Tries to build a default `FastEmbed` for sparse embeddings using Splade\n    ///\n    /// # Errors\n    ///\n    /// Errors if the build fails\n    pub fn try_default_sparse() -> Result<Self> {\n        Self::builder()\n            .embedding_model(SparseTextEmbedding::try_new(\n                fastembed::SparseInitOptions::default(),\n            )?)\n            .build()\n    }\n\n    pub fn builder() -> FastEmbedBuilder {\n        FastEmbedBuilder::default()\n    }\n}\n\nimpl FastEmbedBuilder {\n    #[must_use]\n    pub fn embedding_model(mut self, fastembed: impl Into<EmbeddingModelType>) -> Self {\n        self.embedding_model = Some(Arc::new(tokio::sync::Mutex::new(fastembed.into())));\n\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[tokio::test]\n    async fn test_fastembed() {\n        let fastembed = FastEmbed::try_default().unwrap();\n        let embeddings = fastembed.embed(vec!["hello".to_string()]).await.unwrap();\n        assert_eq!(embeddings.len(), 1);\n    }\n\n    #[tokio::test]\n    async fn test_sparse_fastembed() {\n        let fastembed = FastEmbed::try_default_sparse().unwrap();\n        let embeddings = fastembed\n            .sparse_embed(vec!["hello".to_string()])\n            .await\n            .unwrap();\n\n        // Model can vary in size, assert it's small and not the full dictionary (30k+)\n        assert!(embeddings[0].values.len() > 1);\n        assert!(embeddings[0].values.len() < 100);\n        assert_eq!(embeddings[0].indices.len(), embeddings[0].values.len());\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/fastembed/rerank.rs",
    "content": "use anyhow::{Context as _, Result};\nuse itertools::Itertools;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse derive_builder::Builder;\nuse fastembed::{RerankInitOptions, TextRerank};\nuse swiftide_core::{\n    TransformResponse,\n    querying::{Query, states},\n};\n\nconst TOP_K: usize = 10;\n\n// NOTE: If ever more rerank models are added (outside fastembed). This should be refactored to a\n// generic implementation with textrerank behind an interface.\n//\n// NOTE: Additionally, controlling what gets used for reranking from the query side (maybe not just\n// the original?), is also something to be said for. The usecase hasn't popped up yet.\n\n/// Reranking with [`fastembed::TextRerank`] in a query pipeline.\n///\n/// Uses the original user query to compare with the retrieved documents. Then updates the query\n/// with the `TOP_K` documents with the highest rerank score.\n///\n/// Can be customized with any rerank model from `fastembed` and the number of top documents to\n/// return. Optionally you can provide a template to render the document before reranking.\n#[derive(Clone, Builder)]\npub struct Rerank {\n    /// The reranker model from [`Fastembed`]\n    #[builder(\n        default = \"Arc::new(tokio::sync::Mutex::new(TextRerank::try_new(RerankInitOptions::default()).expect(\\\"Failed to build default rerank from Fastembed.rs\\\")))\",\n        setter(into)\n    )]\n    model: Arc<tokio::sync::Mutex<TextRerank>>,\n\n    /// The number of top documents returned by the reranker.\n    #[builder(default = TOP_K)]\n    top_k: usize,\n\n    /// Optionally a template can be provided to render the document\n    /// before reranking. I.e. to include metadata in the reranking.\n    ///\n    /// Available variables are `metadata` and `content`.\n    ///\n    /// Templates are rendered using Tera.\n    #[builder(default = None)]\n    document_template: Option<String>,\n\n    /// The rerank batch size to use. 
Defaults to the `Fastembed` default.\n    #[builder(default = None)]\n    model_batch_size: Option<usize>,\n}\n\nimpl std::fmt::Debug for Rerank {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Rerank\").finish()\n    }\n}\n\nimpl Rerank {\n    pub fn builder() -> RerankBuilder {\n        RerankBuilder::default()\n    }\n}\n\nimpl Default for Rerank {\n    fn default() -> Self {\n        Self {\n            model: Arc::new(tokio::sync::Mutex::new(\n                TextRerank::try_new(RerankInitOptions::default())\n                    .expect(\"Failed to build default rerank from Fastembed.rs\"),\n            )),\n            top_k: TOP_K,\n            document_template: None,\n            model_batch_size: None,\n        }\n    }\n}\n\n#[async_trait]\nimpl TransformResponse for Rerank {\n    async fn transform_response(\n        &self,\n        query: Query<states::Retrieved>,\n    ) -> Result<Query<states::Retrieved>> {\n        let mut query = query;\n\n        let current_documents = std::mem::take(&mut query.documents);\n\n        let docs_for_rerank = if let Some(template) = &self.document_template {\n            current_documents\n                .iter()\n                .map(|doc| {\n                    let context = tera::Context::from_serialize(doc)?;\n                    tera::Tera::one_off(template, &context, false)\n                        .context(\"Failed to render template\")\n                })\n                .collect::<Result<Vec<_>>>()?\n        } else {\n            current_documents\n                .iter()\n                .map(|doc| doc.content().to_string())\n                .collect()\n        };\n\n        let mut model = self.model.lock().await;\n\n        let reranked_documents = model\n            .rerank(\n                query.original(),\n                docs_for_rerank\n                    .iter()\n                    .map(String::as_ref)\n                    
.collect::<Vec<&str>>(),\n                false,\n                self.model_batch_size,\n            )\n            .map_err(|e| anyhow::anyhow!(\"Failed to rerank documents: {e:?}\"))?\n            .iter()\n            .take(self.top_k)\n            .map(|r| current_documents[r.index].clone())\n            .collect_vec();\n\n        query.documents = reranked_documents;\n\n        Ok(query)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use swiftide_core::{document::Document, indexing::Metadata};\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_rerank_transform_response() {\n        // Test reranking without a template\n        let rerank = Rerank::builder().top_k(1).build().unwrap();\n\n        let documents = vec![\"content1\", \"content2\", \"content3\"]\n            .into_iter()\n            .map(Into::into)\n            .collect_vec();\n\n        let query = Query::builder()\n            .original(\"What is the capital of france?\")\n            .state(states::Retrieved)\n            .documents(documents)\n            .build()\n            .unwrap();\n\n        let result = rerank.transform_response(query).await;\n\n        assert!(result.is_ok());\n        let transformed_query = result.unwrap();\n        assert_eq!(transformed_query.documents.len(), 1);\n\n        // Test reranking with a template\n        let rerank = Rerank::builder()\n            .top_k(1)\n            .document_template(Some(\"{{ metadata.title }}\".to_string()))\n            .build()\n            .unwrap();\n\n        let metadata = Metadata::from([(\"title\", \"Title\")]);\n\n        let documents = vec![\"content1\", \"content2\", \"content3\"]\n            .into_iter()\n            .map(|content| Document::new(content, Some(metadata.clone())))\n            .collect_vec();\n\n        let query = Query::builder()\n            .original(\"What is the capital of france?\")\n            .state(states::Retrieved)\n            .documents(documents)\n            .build()\n        
    .unwrap();\n\n        let result = rerank.transform_response(query).await;\n\n        assert!(result.is_ok());\n        let transformed_query = result.unwrap();\n        assert_eq!(transformed_query.documents.len(), 1);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/fastembed/sparse_embedding_model.rs",
    "content": "use async_trait::async_trait;\nuse swiftide_core::chat_completion::errors::LanguageModelError;\nuse swiftide_core::{SparseEmbedding, SparseEmbeddingModel, SparseEmbeddings};\n\nuse super::{EmbeddingModelType, FastEmbed};\n#[async_trait]\nimpl SparseEmbeddingModel for FastEmbed {\n    #[tracing::instrument(skip_all)]\n    async fn sparse_embed(\n        &self,\n        input: Vec<String>,\n    ) -> Result<SparseEmbeddings, LanguageModelError> {\n        let mut embedding_model = self.embedding_model.lock().await;\n\n        match &mut *embedding_model {\n            EmbeddingModelType::Sparse(model) => model\n                .embed(input, self.batch_size)\n                .map_err(LanguageModelError::permanent)\n                .and_then(|embeddings| {\n                    embeddings\n                        .into_iter()\n                        .map(|embedding| {\n                            let indices = embedding\n                                .indices\n                                .iter()\n                                .map(|v| u32::try_from(*v).map_err(LanguageModelError::permanent))\n                                .collect::<Result<Vec<_>, LanguageModelError>>()?;\n\n                            Ok(SparseEmbedding {\n                                indices,\n                                values: embedding.values,\n                            })\n                        })\n                        .collect()\n                }),\n            EmbeddingModelType::Dense(_) => Err(LanguageModelError::PermanentError(\n                \"Expected sparse model, got dense\".into(),\n            )),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/fluvio/loader.rs",
    "content": "use std::string::ToString;\n\nuse anyhow::Context as _;\nuse futures_util::{StreamExt as _, TryStreamExt as _};\nuse swiftide_core::{\n    Loader,\n    indexing::{IndexingStream, TextNode},\n};\nuse tokio::runtime::Handle;\n\nuse super::Fluvio;\n\nimpl Loader for Fluvio {\n    type Output = String;\n\n    #[tracing::instrument]\n    fn into_stream(self) -> IndexingStream<String> {\n        let fluvio_config = self.fluvio_config;\n        let consumer_config = self.consumer_config_ext;\n\n        let stream = tokio::task::block_in_place(|| {\n            Handle::current().block_on(async {\n                let client = if let Some(fluvio_config) = &fluvio_config {\n                    fluvio::Fluvio::connect_with_config(fluvio_config).await\n                } else {\n                    fluvio::Fluvio::connect().await\n                }\n                .context(format!(\"Failed to connect to Fluvio {fluvio_config:?}\"))?;\n                client.consumer_with_config(consumer_config).await\n            })\n        })\n        .expect(\"Failed to connect to Fluvio\");\n\n        let swiftide_stream = stream\n            .map_ok(|f| {\n                let mut node = TextNode::new(f.get_value().to_string());\n                node.metadata\n                    .insert(\"fluvio_key\", f.get_key().map(ToString::to_string));\n\n                node\n            })\n            .map_err(anyhow::Error::from);\n\n        swiftide_stream.boxed().into()\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<String> {\n        self.into_stream()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::pin::Pin;\n\n    use super::*;\n    use anyhow::Result;\n    use fluvio::{\n        RecordKey,\n        consumer::ConsumerConfigExt,\n        metadata::{customspu::CustomSpuSpec, topic::TopicSpec},\n    };\n    use flv_util::socket_helpers::ServerAddress;\n    use futures_util::TryStreamExt;\n    use regex::Regex;\n    use testcontainers::{ContainerAsync, 
GenericImage, ImageExt, runners::AsyncRunner};\n    use tokio::io::{AsyncBufRead, AsyncBufReadExt};\n\n    // NOTE: Move to test-utils / upstream to testcontainers if needed elsewhere\n    struct FluvioCluster {\n        sc: ContainerAsync<GenericImage>,\n        spu: ContainerAsync<GenericImage>,\n\n        partitions: u32,\n        replicas: u32,\n        port: u16,\n        host_spu_port: u16,\n        client: fluvio::Fluvio,\n    }\n\n    impl FluvioCluster {\n        // Starts a fluvio cluster and connects the spu to the sc\n        pub async fn start() -> Result<FluvioCluster> {\n            static SC_PORT: u16 = 9003;\n            static SPU_PORT1: u16 = 9010;\n            static SPU_PORT2: u16 = 9011;\n            static NETWORK_NAME: &str = \"fluvio\";\n            static PARTITIONS: u32 = 1;\n            static REPLICAS: u32 = 1;\n\n            let sc = GenericImage::new(\"infinyon/fluvio\", \"latest\")\n                .with_exposed_port(SC_PORT.into())\n                .with_wait_for(testcontainers::core::WaitFor::message_on_stdout(\n                    \"started successfully\",\n                ))\n                .with_wait_for(testcontainers::core::WaitFor::seconds(1))\n                .with_network(NETWORK_NAME)\n                .with_container_name(\"sc\")\n                .with_cmd(\"./fluvio-run sc --local /fluvio/metadata\".split(' '))\n                .with_env_var(\"RUST_LOG\", \"info\")\n                .start()\n                .await?;\n\n            let spu = GenericImage::new(\"infinyon/fluvio\", \"latest\")\n                .with_exposed_port(SPU_PORT1.into())\n                .with_wait_for(testcontainers::core::WaitFor::message_on_stdout(\n                    \"started successfully\",\n                ))\n                    .with_wait_for(testcontainers::core::WaitFor::seconds(1))\n                .with_network(NETWORK_NAME)\n                .with_container_name(\"spu\")\n                .with_cmd(format!(\"./fluvio-run spu -i 5001 -p 
spu:{SPU_PORT1} -v spu:{SPU_PORT2} --sc-addr sc:9004 --log-base-dir /fluvio/data\").split(' '))\n                .with_env_var(\"RUST_LOG\", \"info\")\n                .start()\n                .await?;\n\n            let host_spu_port_1 = spu.get_host_port_ipv4(SPU_PORT1).await?;\n            let sc_host_port = sc.get_host_port_ipv4(SC_PORT).await?;\n            let endpoint = format!(\"127.0.0.1:{sc_host_port}\");\n            let config = fluvio::FluvioConfig::new(&endpoint);\n            let client = fluvio::Fluvio::connect_with_config(&config).await?;\n\n            let cluster = FluvioCluster {\n                sc,\n                spu,\n                port: sc_host_port,\n                host_spu_port: host_spu_port_1,\n                client,\n                replicas: REPLICAS,\n                partitions: PARTITIONS,\n            };\n\n            cluster.connect_spu_to_sc().await;\n\n            Ok(cluster)\n        }\n\n        async fn connect_spu_to_sc(&self) {\n            let admin = self.client().admin().await;\n\n            let spu_spec = CustomSpuSpec {\n                id: 5001,\n                public_endpoint: ServerAddress::try_from(format!(\"0.0.0.0:{}\", self.host_spu_port))\n                    .unwrap()\n                    .into(),\n                private_endpoint: ServerAddress::try_from(format!(\"spu:{}\", 9011))\n                    .unwrap()\n                    .into(),\n                rack: None,\n                public_endpoint_local: None,\n            };\n\n            admin\n                .create(\"SPU\".to_string(), false, spu_spec)\n                .await\n                .unwrap();\n        }\n\n        pub fn forward_logs_to_tracing(&self) {\n            Self::log_stdout(self.sc.stdout(true));\n            Self::log_stderr(self.sc.stderr(true));\n\n            Self::log_stdout(self.spu.stdout(true));\n            Self::log_stderr(self.spu.stderr(true));\n        }\n\n        pub fn client(&self) -> &fluvio::Fluvio {\n 
           &self.client\n        }\n\n        pub async fn create_topic(&self, topic_name: impl Into<String>) -> Result<()> {\n            let admin = self.client().admin().await;\n            let topic_spec = TopicSpec::new_computed(self.partitions, self.replicas, None);\n\n            admin.create(topic_name.into(), false, topic_spec).await\n        }\n\n        fn log_stdout(reader: Pin<Box<dyn AsyncBufRead + Send>>) {\n            let regex = Self::ansi_regex();\n            tokio::spawn(async move {\n                let mut lines = reader.lines();\n                while let Some(line) = lines.next_line().await.unwrap() {\n                    let line = regex.replace_all(&line, "").to_string();\n                    tracing::info!(line);\n                }\n            });\n        }\n\n        fn log_stderr(reader: Pin<Box<dyn AsyncBufRead + Send>>) {\n            let regex = Self::ansi_regex();\n            tokio::spawn(async move {\n                let mut lines = reader.lines();\n                while let Some(line) = lines.next_line().await.unwrap() {\n                    let line = regex.replace_all(&line, "").to_string();\n                    tracing::error!(line);\n                }\n            });\n        }\n\n        fn ansi_regex() -> Regex {\n            regex::Regex::new(r"\\x1b\\[([\\x30-\\x3f]*[\\x20-\\x2f]*[\\x40-\\x7e])").unwrap()\n        }\n\n        pub fn endpoint(&self) -> String {\n            format!("127.0.0.1:{}", self.port)\n        }\n    }\n\n    #[test_log::test(tokio::test(flavor = "multi_thread"))]\n    async fn test_fluvio_loader() {\n        static TOPIC_NAME: &str = "hello-rust";\n        static PARTITION_NUM: u32 = 0;\n\n        let fluvio_cluster = FluvioCluster::start()\n            .await\n            .expect("Failed to start Fluvio cluster");\n\n        fluvio_cluster.forward_logs_to_tracing();\n        fluvio_cluster.create_topic(TOPIC_NAME).await.unwrap();\n\n        let client = 
fluvio_cluster.client();\n\n        let producer = client.topic_producer(TOPIC_NAME).await.unwrap();\n        producer\n            .send(RecordKey::NULL, \"Hello fluvio\")\n            .await\n            .unwrap();\n        producer.flush().await.unwrap();\n\n        // Consume the topic with the loader\n        let config = fluvio::FluvioConfig::new(fluvio_cluster.endpoint());\n        let loader = Fluvio::builder()\n            .fluvio_config(&config)\n            .consumer_config_ext(\n                ConsumerConfigExt::builder()\n                    .topic(TOPIC_NAME)\n                    .partition(PARTITION_NUM)\n                    .offset_start(fluvio::Offset::from_end(1))\n                    .build()\n                    .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let node: TextNode = loader.into_stream().try_next().await.unwrap().unwrap();\n\n        assert_eq!(node.chunk, \"Hello fluvio\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/fluvio/mod.rs",
    "content": "//! Fluvio is a real-time streaming data transformation platform.\n//!\n//! This module provides a Fluvio loader for Swiftide and allows you to ingest\n//! messages from Fluvio topics and use them for RAG.\n//!\n//! Can be configured with [`ConsumerConfigExt`].\n//!\n//! # Example\n//!\n//! ```no_run\n//! # use swiftide_integrations::fluvio::*;\n//! let loader = Fluvio::builder()\n//!     .consumer_config_ext(\n//!         ConsumerConfigExt::builder()\n//!             .topic(\"Hello Fluvio\")\n//!             .partition(0)\n//!             .offset_start(fluvio::Offset::from_end(1))\n//!             .build().unwrap()\n//!     ).build().unwrap();\n//! ```\n\nuse derive_builder::Builder;\n\nuse fluvio::FluvioConfig;\n/// Re-export the fluvio config builder\npub use fluvio::consumer::{ConsumerConfigExt, ConsumerConfigExtBuilder};\n\nmod loader;\n\n#[derive(Debug, Clone, Builder)]\n#[builder(setter(into, strip_option))]\npub struct Fluvio {\n    /// The Fluvio consumer configuration to use.\n    consumer_config_ext: ConsumerConfigExt,\n\n    #[builder(default, setter(custom))]\n    /// Custom connection configuration\n    fluvio_config: Option<FluvioConfig>,\n}\n\nimpl Fluvio {\n    /// Creates a new Fluvio instance from a consumer extended configuration\n    pub fn from_consumer_config(config: impl Into<ConsumerConfigExt>) -> Fluvio {\n        Fluvio {\n            consumer_config_ext: config.into(),\n            fluvio_config: None,\n        }\n    }\n\n    pub fn builder() -> FluvioBuilder {\n        FluvioBuilder::default()\n    }\n}\n\nimpl FluvioBuilder {\n    pub fn fluvio_config(&mut self, config: &FluvioConfig) -> &mut Self {\n        self.fluvio_config = Some(Some(config.to_owned()));\n\n        self\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/gemini/config.rs",
    "content": "use reqwest::header::{AUTHORIZATION, HeaderMap};\nuse secrecy::{ExposeSecret as _, SecretString};\nuse serde::Deserialize;\n\nconst GEMINI_API_BASE: &str = \"https://generativelanguage.googleapis.com/v1beta/openai\";\n\n#[derive(Clone, Debug, Deserialize)]\n#[serde(default)]\npub struct GeminiConfig {\n    api_base: String,\n    api_key: SecretString,\n}\n\nimpl Default for GeminiConfig {\n    fn default() -> Self {\n        Self {\n            api_base: GEMINI_API_BASE.to_string(),\n            api_key: std::env::var(\"GEMINI_API_KEY\")\n                .unwrap_or_else(|_| String::new())\n                .into(),\n        }\n    }\n}\n\nimpl async_openai::config::Config for GeminiConfig {\n    fn headers(&self) -> HeaderMap {\n        let mut headers = HeaderMap::new();\n\n        headers.insert(\n            AUTHORIZATION,\n            format!(\"Bearer {}\", self.api_key.expose_secret())\n                .as_str()\n                .parse()\n                .unwrap(),\n        );\n\n        headers\n    }\n\n    fn url(&self, path: &str) -> String {\n        format!(\"{}{}\", self.api_base, path)\n    }\n\n    fn api_base(&self) -> &str {\n        &self.api_base\n    }\n\n    fn api_key(&self) -> &SecretString {\n        &self.api_key\n    }\n\n    fn query(&self) -> Vec<(&str, &str)> {\n        vec![]\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/gemini/mod.rs",
    "content": "//! This module provides integration with `Gemini`'s API, enabling the use of language models within\n//! the Swiftide project. It includes the `Gemini` struct for managing API clients and default\n//! options for prompt models. The module is conditionally compiled based on the \"groq\" feature\n//! flag.\n\nuse crate::openai;\n\nuse self::config::GeminiConfig;\n\nmod config;\n\n/// The `Gemini` struct encapsulates a `Gemini` client that implements\n/// [`swiftide_core::SimplePrompt`]\n///\n/// There is also a builder available.\n///\n/// By default it will look for a `GEMINI_API_KEY` environment variable. Note that a model\n/// always needs to be set, either with [`Gemini::with_default_prompt_model`] or via the builder.\n/// You can find available models in the Gemini documentation.\n///\n/// Under the hood it uses [`async_openai`], with the Gemini openai mapping. This means\n/// some features might not work as expected. See the Gemini documentation for details.\npub type Gemini = openai::GenericOpenAI<GeminiConfig>;\npub type GeminiBuilder = openai::GenericOpenAIBuilder<GeminiConfig>;\npub type GeminiBuilderError = openai::GenericOpenAIBuilderError;\npub use openai::{Options, OptionsBuilder, OptionsBuilderError};\n\nimpl Gemini {\n    pub fn builder() -> GeminiBuilder {\n        GeminiBuilder::default()\n    }\n}\n\nimpl Default for Gemini {\n    fn default() -> Self {\n        Self::builder().build().unwrap()\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/groq/config.rs",
    "content": "use reqwest::header::{AUTHORIZATION, HeaderMap};\nuse secrecy::{ExposeSecret as _, SecretString};\nuse serde::Deserialize;\n\nconst GROQ_API_BASE: &str = \"https://api.groq.com/openai/v1\";\n\n#[derive(Clone, Debug, Deserialize)]\n#[serde(default)]\npub struct GroqConfig {\n    api_base: String,\n    api_key: SecretString,\n}\n\nimpl Default for GroqConfig {\n    fn default() -> Self {\n        Self {\n            api_base: GROQ_API_BASE.to_string(),\n            api_key: std::env::var(\"GROQ_API_KEY\")\n                .unwrap_or_else(|_| String::new())\n                .into(),\n        }\n    }\n}\n\nimpl async_openai::config::Config for GroqConfig {\n    fn headers(&self) -> HeaderMap {\n        let mut headers = HeaderMap::new();\n\n        headers.insert(\n            AUTHORIZATION,\n            format!(\"Bearer {}\", self.api_key.expose_secret())\n                .as_str()\n                .parse()\n                .unwrap(),\n        );\n\n        headers\n    }\n\n    fn url(&self, path: &str) -> String {\n        format!(\"{}{}\", self.api_base, path)\n    }\n\n    fn api_base(&self) -> &str {\n        &self.api_base\n    }\n\n    fn api_key(&self) -> &SecretString {\n        &self.api_key\n    }\n\n    fn query(&self) -> Vec<(&str, &str)> {\n        vec![]\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/groq/mod.rs",
    "content": "//! This module provides integration with `Groq`'s API, enabling the use of language models within\n//! the Swiftide project. It includes the `Groq` struct for managing API clients and default options\n//! for prompt models. The module is conditionally compiled based on the \"groq\" feature flag.\n\nuse crate::openai;\n\nuse self::config::GroqConfig;\n\nmod config;\n\n/// The `Groq` struct encapsulates a `Groq` client that implements [`swiftide_core::SimplePrompt`]\n///\n/// There is also a builder available.\n///\n/// By default it will look for a `GROQ_API_KEY` environment variable. Note that a model\n/// always needs to be set, either with [`Groq::with_default_prompt_model`] or via the builder.\n/// You can find available models in the Groq documentation.\n///\n/// Under the hood it uses [`async_openai`], with the Groq openai mapping. This means\n/// some features might not work as expected. See the Groq documentation for details.\npub type Groq = openai::GenericOpenAI<GroqConfig>;\npub type GroqBuilder = openai::GenericOpenAIBuilder<GroqConfig>;\npub type GroqBuilderError = openai::GenericOpenAIBuilderError;\npub use openai::{Options, OptionsBuilder, OptionsBuilderError};\n\nimpl Groq {\n    pub fn builder() -> GroqBuilder {\n        GroqBuilder::default()\n    }\n}\n\nimpl Default for Groq {\n    fn default() -> Self {\n        Self::builder().build().unwrap()\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/kafka/loader.rs",
    "content": "use futures_util::{StreamExt as _, stream};\nuse rdkafka::{\n    Message,\n    consumer::{Consumer, StreamConsumer},\n    message::BorrowedMessage,\n};\nuse swiftide_core::{Loader, indexing::IndexingStream, indexing::Node};\n\nuse super::Kafka;\n\nimpl Loader for Kafka {\n    type Output = String;\n\n    #[tracing::instrument]\n    fn into_stream(self) -> IndexingStream<String> {\n        let client_config = self.client_config;\n        let topic = self.topic.clone();\n\n        let consumer: StreamConsumer = client_config\n            .create()\n            .expect(\"Failed to create Kafka consumer\");\n\n        consumer\n            .subscribe(&[&topic])\n            .expect(\"Failed to subscribe to topic\");\n\n        let swiftide_stream = stream::unfold(consumer, |consumer| async move {\n            loop {\n                match consumer.recv().await {\n                    Ok(message) => {\n                        // only handle Some(Ok(s))\n                        if let Some(Ok(payload)) = message.payload_view::<str>() {\n                            let mut node = Node::<String>::new(payload);\n                            msg_metadata(&mut node, &message);\n                            tracing::trace!(?node, ?payload, \"received message\");\n                            return Some((Ok(node), consumer));\n                        }\n                        // otherwise, like a message with an invalid payload or payload is None\n                        tracing::debug!(\"Skipping message with invalid payload\");\n                    }\n                    Err(e) => return Some((Err(anyhow::Error::from(e)), consumer)),\n                }\n            }\n        });\n\n        swiftide_stream.boxed().into()\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<String> {\n        (*self).into_stream()\n    }\n}\n\nfn msg_metadata(node: &mut Node<String>, message: &BorrowedMessage) {\n    // Add Kafka-specific metadata\n    
node.metadata\n        .insert(\"kafka_topic\", message.topic().to_string());\n\n    node.metadata\n        .insert(\"kafka_partition\", message.partition().to_string());\n    node.metadata\n        .insert(\"kafka_offset\", message.offset().to_string());\n\n    // Add timestamp if present\n    if let Some(timestamp) = message.timestamp().to_millis() {\n        node.metadata\n            .insert(\"kafka_timestamp\", timestamp.to_string());\n    }\n\n    // Add key if present\n    if let Some(Ok(key)) = message.key_view::<str>() {\n        node.metadata.insert(\"kafka_key\", key.to_string());\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::time::Duration;\n\n    use super::*;\n    use crate::kafka::Kafka;\n    use anyhow::Result;\n    use futures_util::TryStreamExt;\n    use rdkafka::{\n        ClientConfig,\n        admin::{AdminClient, AdminOptions, NewTopic, TopicReplication},\n        client::DefaultClientContext,\n        producer::{FutureProducer, FutureRecord, Producer},\n    };\n    use swiftide_core::indexing::TextNode;\n    use testcontainers::{ContainerAsync, runners::AsyncRunner};\n    use testcontainers_modules::kafka::apache::{self};\n\n    struct KafkaBroker {\n        _broker: ContainerAsync<apache::Kafka>,\n        partitions: i32,\n        replicas: i32,\n        client_config: ClientConfig,\n    }\n\n    impl KafkaBroker {\n        pub async fn start() -> Result<Self> {\n            static PARTITIONS: i32 = 1;\n            static REPLICAS: i32 = 1;\n\n            let kafka_node = apache::Kafka::default().start().await?;\n            let bootstrap_servers = format!(\n                \"127.0.0.1:{}\",\n                kafka_node.get_host_port_ipv4(apache::KAFKA_PORT).await?\n            );\n\n            let mut client_config = ClientConfig::new();\n            client_config.set(\"bootstrap.servers\", &bootstrap_servers);\n            client_config.set(\"group.id\", \"group_id\");\n            client_config.set(\"auto.offset.reset\", 
\"earliest\");\n\n            let broker = KafkaBroker {\n                _broker: kafka_node,\n                client_config,\n                partitions: PARTITIONS,\n                replicas: REPLICAS,\n            };\n\n            Ok(broker)\n        }\n\n        pub async fn create_topic(&self, topic: impl AsRef<str>) -> Result<()> {\n            let admin = self.admin_client();\n            admin\n                .create_topics(\n                    &[NewTopic {\n                        name: topic.as_ref(),\n                        num_partitions: self.partitions,\n                        replication: TopicReplication::Fixed(self.replicas),\n                        config: vec![],\n                    }],\n                    &AdminOptions::default(),\n                )\n                .await\n                .expect(\"topic creation failed\");\n            Ok(())\n        }\n\n        fn admin_client(&self) -> AdminClient<DefaultClientContext> {\n            self.client_config.create().unwrap()\n        }\n\n        fn producer(&self) -> FutureProducer {\n            self.client_config.create().unwrap()\n        }\n    }\n\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_kafka_loader() {\n        static TOPIC_NAME: &str = \"topic\";\n        let kafka_broker = KafkaBroker::start().await.unwrap();\n        kafka_broker.create_topic(TOPIC_NAME).await.unwrap();\n\n        let producer = kafka_broker.producer();\n        producer\n            .send(\n                FutureRecord::to(TOPIC_NAME).payload(\"payload\").key(\"key\"),\n                Duration::from_secs(0),\n            )\n            .await\n            .unwrap();\n        producer.flush(Duration::from_secs(0)).unwrap();\n\n        let loader = Kafka::builder()\n            .client_config(kafka_broker.client_config.clone())\n            .topic(TOPIC_NAME)\n            .build()\n            .unwrap();\n\n        let node: TextNode = 
loader.into_stream().try_next().await.unwrap().unwrap();\n        assert_eq!(node.chunk, \"payload\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/kafka/mod.rs",
    "content": "//! Kafka is a distributed streaming platform.\n//!\n//! This module provides a Kafka loader for Swiftide and allows you to ingest\n//! messages from Kafka topics and use them for RAG.\n//!\n//! Can be configured with [`ClientConfig`].\n//!\n//! # Example\n//!\n//! ```no_run\n//! # use swiftide_integrations::kafka::*;\n//! let kafka = Kafka::builder()\n//!     .client_config(ClientConfig::new())\n//!     .topic(\"Hello Kafka\")\n//!     .build().unwrap();\n//! ```\n\nuse anyhow::{Context, Result};\nuse derive_builder::Builder;\nuse rdkafka::{\n    admin::{AdminClient, AdminOptions, NewTopic, TopicReplication},\n    client::DefaultClientContext,\n    consumer::{Consumer, StreamConsumer},\n    producer::FutureProducer,\n};\nuse swiftide_core::indexing::TextNode;\n\npub use rdkafka::config::ClientConfig;\n\nmod loader;\nmod persist;\n\n#[derive(Debug, Clone, Builder)]\n#[builder(setter(into, strip_option))]\npub struct Kafka {\n    client_config: ClientConfig,\n    topic: String,\n    #[builder(default)]\n    /// Customize the key used for persisting nodes\n    persist_key_fn: Option<fn(&TextNode) -> Result<String>>,\n    #[builder(default)]\n    /// Customize the value used for persisting nodes\n    persist_payload_fn: Option<fn(&TextNode) -> Result<String>>,\n    #[builder(default = \"1\")]\n    partition: i32,\n    #[builder(default = \"1\")]\n    factor: i32,\n    #[builder(default)]\n    create_topic_if_not_exists: bool,\n    #[builder(default = \"32\")]\n    batch_size: usize,\n}\n\nimpl Kafka {\n    pub fn from_client_config(config: impl Into<ClientConfig>, topic: impl Into<String>) -> Kafka {\n        Kafka {\n            client_config: config.into(),\n            topic: topic.into(),\n            persist_key_fn: None,\n            persist_payload_fn: None,\n            partition: 1,\n            factor: 1,\n            create_topic_if_not_exists: false,\n            batch_size: 32,\n        }\n    }\n\n    pub fn builder() -> KafkaBuilder {\n  
      KafkaBuilder::default()\n    }\n\n    fn producer(&self) -> Result<FutureProducer<DefaultClientContext>> {\n        self.client_config\n            .create()\n            .context(\"Failed to create producer\")\n    }\n\n    fn topic_exists(&self) -> Result<bool> {\n        let consumer: StreamConsumer = self\n            .client_config\n            .create()\n            .context(\"Failed to create consumer\")?;\n        let metadata = consumer.fetch_metadata(Some(&self.topic), None)?;\n        Ok(!metadata.topics().is_empty())\n    }\n\n    async fn create_topic(&self) -> Result<()> {\n        let admin_client: AdminClient<DefaultClientContext> = self\n            .client_config\n            .create()\n            .context(\"Failed to create admin client\")?;\n        admin_client\n            .create_topics(\n                vec![&NewTopic::new(\n                    &self.topic,\n                    self.partition,\n                    TopicReplication::Fixed(self.factor),\n                )],\n                &AdminOptions::new(),\n            )\n            .await?;\n        Ok(())\n    }\n\n    /// Generates a key for a given node to be persisted in Kafka.\n    fn persist_key_for_node(&self, node: &TextNode) -> Result<String> {\n        if let Some(key_fn) = self.persist_key_fn {\n            key_fn(node)\n        } else {\n            let hash = node.id();\n            Ok(format!(\"{}:{}\", node.path.to_string_lossy(), hash))\n        }\n    }\n\n    /// Generates a value for a given node to be persisted in Kafka.\n    /// If a custom function is provided, it is used to generate the value.\n    /// Otherwise, the node is serialized as JSON.\n    fn persist_value_for_node(&self, node: &TextNode) -> Result<String> {\n        if let Some(value_fn) = self.persist_payload_fn {\n            value_fn(node)\n        } else {\n            Ok(serde_json::to_string(node)?)\n        }\n    }\n\n    fn 
node_to_key_payload(&self, node: &TextNode) -> Result<(String, String)> {\n        let key = self\n            .persist_key_for_node(node)\n            .map_err(|e| anyhow::anyhow!(\"persist_key_for_node failed: {e:?} (node: {node:?})\"))?;\n        let payload = self.persist_value_for_node(node).map_err(|e| {\n            anyhow::anyhow!(\"persist_value_for_node failed: {e:?} (node: {node:?})\")\n        })?;\n\n        Ok((key, payload))\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/kafka/persist.rs",
    "content": "use std::{sync::Arc, time::Duration};\n\nuse anyhow::Result;\nuse async_trait::async_trait;\n\nuse rdkafka::producer::FutureRecord;\nuse swiftide_core::{\n    Persist,\n    indexing::{IndexingStream, TextNode},\n};\n\nuse super::Kafka;\n\n#[async_trait]\nimpl Persist for Kafka {\n    type Input = String;\n    type Output = String;\n\n    async fn setup(&self) -> Result<()> {\n        if self.topic_exists()? {\n            return Ok(());\n        }\n        if !self.create_topic_if_not_exists {\n            return Err(anyhow::anyhow!(\"Topic {} does not exist\", self.topic));\n        }\n        self.create_topic().await?;\n        Ok(())\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        Some(self.batch_size)\n    }\n\n    async fn store(&self, node: TextNode) -> Result<TextNode> {\n        let (key, payload) = self.node_to_key_payload(&node)?;\n        self.producer()?\n            .send(\n                FutureRecord::to(&self.topic).key(&key).payload(&payload),\n                Duration::from_secs(0),\n            )\n            .await\n            .map_err(|(e, _)| anyhow::anyhow!(\"Failed to send node: {e:?}\"))?;\n        Ok(node)\n    }\n\n    async fn batch_store(&self, nodes: Vec<TextNode>) -> IndexingStream<String> {\n        let producer = Arc::new(self.producer().expect(\"Failed to create producer\"));\n\n        for node in &nodes {\n            match self.node_to_key_payload(node) {\n                Ok((key, payload)) => {\n                    if let Err(e) = producer\n                        .send(\n                            FutureRecord::to(&self.topic).payload(&payload).key(&key),\n                            Duration::from_secs(0),\n                        )\n                        .await\n                    {\n                        return vec![Err(anyhow::anyhow!(\"failed to send node: {e:?}\"))].into();\n                    }\n                }\n                Err(e) => {\n                    return 
vec![Err(e)].into();\n                }\n            }\n        }\n\n        IndexingStream::iter(nodes.into_iter().map(Ok))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use futures_util::TryStreamExt;\n    use rdkafka::ClientConfig;\n    use testcontainers::runners::AsyncRunner;\n    use testcontainers_modules::kafka::apache::{self};\n\n    #[test_log::test(tokio::test)]\n    async fn test_kafka_persist() {\n        static TOPIC_NAME: &str = \"topic\";\n\n        let kafka_node = apache::Kafka::default()\n            .start()\n            .await\n            .expect(\"failed to start kafka\");\n        let bootstrap_servers = format!(\n            \"127.0.0.1:{}\",\n            kafka_node\n                .get_host_port_ipv4(apache::KAFKA_PORT)\n                .await\n                .expect(\"failed to get kafka port\")\n        );\n\n        let mut client_config = ClientConfig::new();\n        client_config.set(\"bootstrap.servers\", &bootstrap_servers);\n        let storage = Kafka::builder()\n            .client_config(client_config)\n            .topic(TOPIC_NAME)\n            .build()\n            .unwrap();\n\n        let node = TextNode::new(\"chunk\");\n\n        storage.setup().await.unwrap();\n        storage.store(node.clone()).await.unwrap();\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_kafka_batch_persist() {\n        static TOPIC_NAME: &str = \"topic\";\n\n        let kafka_node = apache::Kafka::default()\n            .start()\n            .await\n            .expect(\"failed to start kafka\");\n        let bootstrap_servers = format!(\n            \"127.0.0.1:{}\",\n            kafka_node\n                .get_host_port_ipv4(apache::KAFKA_PORT)\n                .await\n                .expect(\"failed to get kafka port\")\n        );\n\n        let mut client_config = ClientConfig::new();\n        client_config.set(\"bootstrap.servers\", &bootstrap_servers);\n        let storage = Kafka::builder()\n            
.client_config(client_config)\n            .topic(TOPIC_NAME)\n            .create_topic_if_not_exists(true)\n            .batch_size(2usize)\n            .build()\n            .unwrap();\n\n        let nodes = vec![TextNode::default(); 6];\n\n        storage.setup().await.unwrap();\n\n        let stream = storage.batch_store(nodes.clone()).await;\n\n        let result: Vec<TextNode> = stream.try_collect().await.unwrap();\n\n        assert_eq!(result.len(), 6);\n        assert_eq!(result[0], nodes[0]);\n        assert_eq!(result[1], nodes[1]);\n        assert_eq!(result[2], nodes[2]);\n        assert_eq!(result[3], nodes[3]);\n        assert_eq!(result[4], nodes[4]);\n        assert_eq!(result[5], nodes[5]);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/lancedb/connection_pool.rs",
    "content": "use anyhow::Context as _;\nuse anyhow::Result;\nuse deadpool::managed::Manager;\nuse derive_builder::Builder;\nuse lancedb::connection::ConnectBuilder;\n\n#[derive(Builder, Debug, Clone)]\n#[builder(setter(into), build_fn(error = \"anyhow::Error\"))]\npub struct LanceDBPoolManager {\n    uri: String,\n    #[builder(default)]\n    api_key: Option<String>,\n    #[builder(default)]\n    region: Option<String>,\n    #[builder(default)]\n    storage_options: Vec<(String, String)>,\n}\n\npub type LanceDBConnectionPool = deadpool::managed::Pool<LanceDBPoolManager>;\n\nimpl LanceDBPoolManager {\n    pub fn builder() -> LanceDBPoolManagerBuilder {\n        LanceDBPoolManagerBuilder::default()\n    }\n}\n\nimpl Manager for LanceDBPoolManager {\n    type Type = lancedb::Connection;\n    type Error = anyhow::Error;\n\n    async fn create(&self) -> Result<Self::Type, Self::Error> {\n        let mut builder = ConnectBuilder::new(&self.uri);\n\n        if let Some(api_key) = &self.api_key {\n            builder = builder.api_key(api_key);\n        }\n\n        if let Some(region) = &self.region {\n            builder = builder.region(region);\n        }\n\n        for (key, value) in &self.storage_options {\n            builder = builder.storage_option(key, value);\n        }\n\n        builder\n            .execute()\n            .await\n            .context(\"Failed to create LanceDB connection\")\n    }\n\n    async fn recycle(\n        &self,\n        _obj: &mut Self::Type,\n        _metrics: &deadpool::managed::Metrics,\n    ) -> deadpool::managed::RecycleResult<Self::Error> {\n        // NOTE: Should work fine with drop\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/lancedb/mod.rs",
    "content": "use std::sync::Arc;\n\nuse anyhow::Context as _;\nuse anyhow::Result;\nuse connection_pool::LanceDBConnectionPool;\nuse connection_pool::LanceDBPoolManager;\nuse deadpool::managed::Object;\nuse derive_builder::Builder;\nuse lancedb::arrow::arrow_schema::{DataType, Field, Schema};\nuse swiftide_core::indexing::EmbeddedField;\npub mod connection_pool;\npub mod persist;\npub mod retrieve;\n\n/// `LanceDB` is a columnar database that separates data and compute.\n///\n/// This enables local, embedded databases, or storing data in cloud storage.\n///\n/// See examples for more information.\n///\n/// Implements `Persist` and `Retrieve`.\n///\n/// If you want to store / retrieve metadata in Lance, the columns can be defined with\n/// `with_metadata`.\n///\n/// Note: For querying large tables you need to create an index manually. You can get an\n/// active connection via `get_connection`.\n///\n/// # Example\n///\n/// ```no_run\n/// # use swiftide_integrations::lancedb::{LanceDB};\n/// # use swiftide_core::indexing::EmbeddedField;\n/// LanceDB::builder()\n/// .uri(\"/my/lancedb\")\n/// .vector_size(1536)\n/// .with_vector(EmbeddedField::Combined)\n/// .with_metadata(\"Metadata field to also store\")\n/// .table_name(\"swiftide_test\")\n/// .build()\n/// .unwrap();\n/// ```\n#[derive(Builder, Clone)]\n#[builder(setter(into, strip_option), build_fn(error = \"anyhow::Error\"))]\n#[allow(dead_code)]\npub struct LanceDB {\n    /// Connection pool for `LanceDB`\n    /// By default will use settings provided when creating the instance.\n    #[builder(default = \"self.default_connection_pool()?\")]\n    connection_pool: Arc<LanceDBConnectionPool>,\n\n    /// Set the URI. 
Required unless a connection pool is provided.\n    uri: Option<String>,\n    /// The maximum number of connections, defaults to 10.\n    #[builder(default = \"Some(10)\")]\n    pool_size: Option<usize>,\n\n    /// Optional API key\n    #[builder(default)]\n    api_key: Option<String>,\n    /// Optional Region\n    #[builder(default)]\n    region: Option<String>,\n    /// Storage options\n    #[builder(default)]\n    storage_options: Vec<(String, String)>,\n\n    #[builder(private, default = \"self.default_schema_from_fields()\")]\n    schema: Arc<Schema>,\n\n    /// The name of the table to store the data\n    /// By default will use `swiftide`\n    #[builder(default = \"\\\"swiftide\\\".into()\")]\n    table_name: String,\n\n    /// Default sizes of vectors. Vectors can also be of different\n    /// sizes by specifying the size in the vector configuration.\n    vector_size: Option<i32>,\n\n    /// Batch size for storing nodes in `LanceDB`. Default is 256.\n    #[builder(default = \"256\")]\n    batch_size: usize,\n\n    /// Field configuration for `LanceDB`, will result in the eventual schema.\n    ///\n    /// Supports multiple field types, see [`FieldConfig`] for more details.\n    #[builder(default = \"self.default_fields()\")]\n    fields: Vec<FieldConfig>,\n}\n\nimpl std::fmt::Debug for LanceDB {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        f.debug_struct(\"LanceDB\")\n            .field(\"schema\", &self.schema)\n            .finish()\n    }\n}\n\nimpl LanceDB {\n    pub fn builder() -> LanceDBBuilder {\n        LanceDBBuilder::default()\n    }\n\n    /// Get a connection to `LanceDB` from the connection pool\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the connection cannot be retrieved.\n    pub async fn get_connection(&self) -> Result<Object<LanceDBPoolManager>> {\n        Box::pin(self.connection_pool.get())\n            .await\n            .map_err(|e| anyhow::anyhow!(e))\n    }\n\n    /// Opens the 
lancedb table\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the table cannot be opened or the connection cannot be acquired.\n    pub async fn open_table(&self) -> Result<lancedb::Table> {\n        let conn = self.get_connection().await?;\n        conn.open_table(&self.table_name)\n            .execute()\n            .await\n            .context(\"Failed to open table\")\n    }\n}\n\nimpl LanceDBBuilder {\n    #[allow(clippy::missing_panics_doc)]\n    pub fn with_vector(&mut self, config: impl Into<VectorConfig>) -> &mut Self {\n        if self.fields.is_none() {\n            self.fields(self.default_fields());\n        }\n\n        self.fields\n            .as_mut()\n            .unwrap()\n            .push(FieldConfig::Vector(config.into()));\n\n        self\n    }\n\n    #[allow(clippy::missing_panics_doc)]\n    pub fn with_metadata(&mut self, config: impl Into<MetadataConfig>) -> &mut Self {\n        if self.fields.is_none() {\n            self.fields(self.default_fields());\n        }\n        self.fields\n            .as_mut()\n            .unwrap()\n            .push(FieldConfig::Metadata(config.into()));\n        self\n    }\n\n    #[allow(clippy::unused_self)]\n    fn default_fields(&self) -> Vec<FieldConfig> {\n        vec![FieldConfig::ID, FieldConfig::Chunk]\n    }\n\n    fn default_schema_from_fields(&self) -> Arc<Schema> {\n        let mut fields = Vec::new();\n        let vector_size = self.vector_size;\n\n        for field in self.fields.as_deref().unwrap_or(&self.default_fields()) {\n            match field {\n                FieldConfig::Vector(config) => {\n                    let vector_size = config.vector_size.or(vector_size.flatten()).expect(\n                        \"Vector size should be set either in the field or in the LanceDB builder\",\n                    );\n\n                    fields.push(Field::new(\n                        config.field_name(),\n                        DataType::FixedSizeList(\n                 
           Arc::new(Field::new(\"item\", DataType::Float32, true)),\n                            vector_size,\n                        ),\n                        true,\n                    ));\n                }\n                FieldConfig::Chunk => {\n                    fields.push(Field::new(field.field_name(), DataType::Utf8, false));\n                }\n                FieldConfig::Metadata(_) => {\n                    fields.push(Field::new(field.field_name(), DataType::Utf8, true));\n                }\n                FieldConfig::ID => {\n                    fields.push(Field::new(\n                        field.field_name(),\n                        DataType::FixedSizeList(\n                            Arc::new(Field::new(\"item\", DataType::UInt8, true)),\n                            16,\n                        ),\n                        false,\n                    ));\n                }\n            }\n        }\n        Arc::new(Schema::new(fields))\n    }\n\n    fn default_connection_pool(&self) -> Result<Arc<LanceDBConnectionPool>> {\n        let mgr = LanceDBPoolManager::builder()\n            .uri(self.uri.clone().flatten().context(\"URI should be set\")?)\n            .api_key(self.api_key.clone().flatten())\n            .region(self.region.clone().flatten())\n            .storage_options(self.storage_options.clone().unwrap_or_default())\n            .build()?;\n\n        LanceDBConnectionPool::builder(mgr)\n            .max_size(self.pool_size.flatten().unwrap_or(10))\n            .build()\n            .map(Arc::new)\n            .map_err(Into::into)\n    }\n}\n\n#[derive(Clone)]\npub enum FieldConfig {\n    Vector(VectorConfig),\n    Metadata(MetadataConfig),\n    Chunk,\n    ID,\n}\n\nimpl FieldConfig {\n    pub fn field_name(&self) -> String {\n        match self {\n            FieldConfig::Vector(config) => config.field_name(),\n            FieldConfig::Metadata(config) => config.field.clone(),\n            FieldConfig::Chunk => 
\"chunk\".into(),\n            FieldConfig::ID => \"id\".into(),\n        }\n    }\n}\n\n#[derive(Clone)]\npub struct VectorConfig {\n    embedded_field: EmbeddedField,\n    vector_size: Option<i32>,\n}\n\nimpl VectorConfig {\n    pub fn field_name(&self) -> String {\n        format!(\n            \"vector_{}\",\n            normalize_field_name(&self.embedded_field.to_string())\n        )\n    }\n}\n\nimpl From<EmbeddedField> for VectorConfig {\n    fn from(val: EmbeddedField) -> Self {\n        VectorConfig {\n            embedded_field: val,\n            vector_size: None,\n        }\n    }\n}\n\n#[derive(Clone)]\npub struct MetadataConfig {\n    field: String,\n    original_field: String,\n}\n\nimpl<T: AsRef<str>> From<T> for MetadataConfig {\n    fn from(val: T) -> Self {\n        MetadataConfig {\n            field: normalize_field_name(val.as_ref()),\n            original_field: val.as_ref().to_string(),\n        }\n    }\n}\n\npub(crate) fn normalize_field_name(field: &str) -> String {\n    field\n        .to_lowercase()\n        .replace(|c: char| !c.is_alphanumeric(), \"_\")\n}\n"
  },
  {
    "path": "swiftide-integrations/src/lancedb/persist.rs",
    "content": "use std::sync::Arc;\n\nuse anyhow::Context as _;\nuse anyhow::Result;\nuse arrow_array::Array;\nuse arrow_array::FixedSizeListArray;\nuse arrow_array::GenericByteArray;\nuse arrow_array::RecordBatch;\nuse arrow_array::RecordBatchIterator;\nuse arrow_array::types::Float32Type;\nuse arrow_array::types::UInt8Type;\nuse arrow_array::types::Utf8Type;\nuse async_trait::async_trait;\nuse swiftide_core::Persist;\nuse swiftide_core::indexing::IndexingStream;\nuse swiftide_core::indexing::TextNode;\n\nuse super::FieldConfig;\nuse super::LanceDB;\n\n#[async_trait]\nimpl Persist for LanceDB {\n    type Input = String;\n    type Output = String;\n\n    #[tracing::instrument(skip_all)]\n    async fn setup(&self) -> Result<()> {\n        let conn = self.get_connection().await?;\n        let schema = self.schema.clone();\n\n        if let Err(err) = conn.open_table(&self.table_name).execute().await {\n            if matches!(err, lancedb::Error::TableNotFound { .. }) {\n                conn.create_empty_table(&self.table_name, schema)\n                    .execute()\n                    .await\n                    .map(|_| ())\n                    .map_err(anyhow::Error::from)?;\n            } else {\n                return Err(err.into());\n            }\n        }\n\n        Ok(())\n    }\n\n    #[tracing::instrument(skip_all)]\n    async fn store(&self, node: TextNode) -> Result<TextNode> {\n        let mut nodes = vec![node; 1];\n        self.store_nodes(&nodes).await?;\n\n        let node = nodes.swap_remove(0);\n\n        Ok(node)\n    }\n\n    #[tracing::instrument(skip_all)]\n    async fn batch_store(&self, nodes: Vec<TextNode>) -> IndexingStream<String> {\n        self.store_nodes(&nodes).await.map(|()| nodes).into()\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        Some(self.batch_size)\n    }\n}\n\nimpl LanceDB {\n    async fn store_nodes(&self, nodes: &[TextNode]) -> Result<()> {\n        let schema = self.schema.clone();\n\n        let 
batches = self.extract_arrow_batches_from_nodes(nodes)?;\n\n        let data = RecordBatchIterator::new(\n            vec![\n                RecordBatch::try_new(schema.clone(), batches)\n                    .context(\"Could not create batches\")?,\n            ]\n            .into_iter()\n            .map(Ok),\n            schema.clone(),\n        );\n\n        let conn = self.get_connection().await?;\n        let table = conn.open_table(&self.table_name).execute().await?;\n        let mut merge_insert = table.merge_insert(&[\"id\"]);\n\n        merge_insert\n            .when_matched_update_all(None)\n            .when_not_matched_insert_all();\n\n        merge_insert.execute(Box::new(data)).await?;\n\n        Ok(())\n    }\n\n    fn extract_arrow_batches_from_nodes(\n        &self,\n        nodes: &[TextNode],\n    ) -> core::result::Result<Vec<Arc<dyn Array>>, anyhow::Error> {\n        let fields = self.fields.as_slice();\n        let mut batches: Vec<Arc<dyn Array>> = Vec::with_capacity(fields.len());\n\n        for field in fields {\n            match field {\n                FieldConfig::Vector(config) => {\n                    let mut row = Vec::with_capacity(nodes.len());\n                    let vector_size = config\n                        .vector_size\n                        .or(self.vector_size)\n                        .context(\"Expected vector size to be set for field\")?;\n\n                    for node in nodes {\n                        let data = node\n                            .vectors\n                            .as_ref()\n                            // TODO: verify compiler optimizes the double loops away\n                            .and_then(|v| v.get(&config.embedded_field))\n                            .map(|v| v.iter().map(|f| Some(*f)));\n\n                        row.push(data);\n                    }\n                    batches.push(Arc::new(FixedSizeListArray::from_iter_primitive::<\n                        Float32Type,\n        
                _,\n                        _,\n                    >(row, vector_size)));\n                }\n                FieldConfig::Metadata(config) => {\n                    let mut row = Vec::with_capacity(nodes.len());\n\n                    for node in nodes {\n                        let data = node\n                            .metadata\n                            .get(&config.original_field)\n                            // TODO: Verify this gives the correct data\n                            .and_then(|v| v.as_str());\n\n                        row.push(data);\n                    }\n                    batches.push(Arc::new(GenericByteArray::<Utf8Type>::from_iter(row)));\n                }\n                FieldConfig::Chunk => {\n                    let mut row = Vec::with_capacity(nodes.len());\n\n                    for node in nodes {\n                        let data = Some(node.chunk.as_str());\n                        row.push(data);\n                    }\n                    batches.push(Arc::new(GenericByteArray::<Utf8Type>::from_iter(row)));\n                }\n                FieldConfig::ID => {\n                    let mut row = Vec::with_capacity(nodes.len());\n                    for node in nodes {\n                        let data = Some(node.id().as_bytes().map(Some));\n                        row.push(data);\n                    }\n                    batches.push(Arc::new(FixedSizeListArray::from_iter_primitive::<\n                        UInt8Type,\n                        _,\n                        _,\n                    >(row, 16)));\n                }\n            }\n        }\n        Ok(batches)\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::{Persist as _, indexing::EmbeddedField};\n    use temp_dir::TempDir;\n\n    use super::*;\n\n    async fn setup() -> (TempDir, LanceDB) {\n        let tempdir = TempDir::new().unwrap();\n        let lancedb = LanceDB::builder()\n            
.uri(tempdir.child(\"lancedb\").to_str().unwrap())\n            .vector_size(384)\n            .with_metadata(\"filter\")\n            .with_vector(EmbeddedField::Combined)\n            .table_name(\"swiftide_test\")\n            .build()\n            .unwrap();\n        lancedb.setup().await.unwrap();\n\n        (tempdir, lancedb)\n    }\n\n    #[tokio::test]\n    async fn test_no_error_when_table_exists() {\n        let (_guard, lancedb) = setup().await;\n\n        lancedb\n            .setup()\n            .await\n            .expect(\"Should not error if table exists\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/lancedb/retrieve.rs",
    "content": "use anyhow::Result;\nuse arrow_array::{RecordBatch, StringArray};\nuse async_trait::async_trait;\nuse futures_util::TryStreamExt;\nuse itertools::Itertools;\nuse lancedb::query::{ExecutableQuery, QueryBase};\nuse swiftide_core::{\n    Retrieve,\n    document::Document,\n    indexing::Metadata,\n    querying::{\n        Query,\n        search_strategies::{CustomStrategy, SimilaritySingleEmbedding},\n        states,\n    },\n};\n\nuse super::{FieldConfig, LanceDB};\n\n/// Implement the `Retrieve` trait for `SimilaritySingleEmbedding` search strategy.\n///\n/// Can be used in the query pipeline to retrieve documents from `LanceDB`.\n///\n/// Supports filters as strings. Refer to the `LanceDB` documentation for the format.\n#[async_trait]\nimpl Retrieve<SimilaritySingleEmbedding<String>> for LanceDB {\n    #[tracing::instrument]\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding<String>,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let Some(embedding) = &query.embedding else {\n            anyhow::bail!(\"No embedding for query\")\n        };\n\n        let table = self\n            .get_connection()\n            .await?\n            .open_table(&self.table_name)\n            .execute()\n            .await?;\n\n        let vector_fields = self\n            .fields\n            .iter()\n            .filter(|field| matches!(field, FieldConfig::Vector(_)))\n            .collect_vec();\n\n        if vector_fields.is_empty() || vector_fields.len() > 1 {\n            anyhow::bail!(\"Zero or multiple vector fields configured in schema\")\n        }\n\n        let column_name = vector_fields.first().map(|v| v.field_name()).unwrap();\n\n        let mut query_builder = table\n            .query()\n            .nearest_to(embedding.as_slice())?\n            .column(&column_name)\n            .limit(usize::try_from(search_strategy.top_k())?);\n\n        if let Some(filter) = 
&search_strategy.filter() {\n            query_builder = query_builder.only_if(filter);\n        }\n\n        let batches = query_builder\n            .execute()\n            .await?\n            .try_collect::<Vec<_>>()\n            .await?;\n\n        let documents = Self::retrieve_from_record_batches(batches.as_slice());\n\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\n#[async_trait]\nimpl Retrieve<SimilaritySingleEmbedding> for LanceDB {\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        Retrieve::<SimilaritySingleEmbedding<String>>::retrieve(\n            self,\n            &search_strategy.into_concrete_filter::<String>(),\n            query,\n        )\n        .await\n    }\n}\n\n#[async_trait]\nimpl<Q: ExecutableQuery + Send + Sync + 'static> Retrieve<CustomStrategy<Q>> for LanceDB {\n    /// Implements vector similarity search for `LanceDB` using a custom query strategy.\n    ///\n    /// # Type Parameters\n    /// * `VectorQuery` - `LanceDB`'s query type for vector similarity search\n    async fn retrieve(\n        &self,\n        search_strategy: &CustomStrategy<Q>,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        // Build the custom query using both strategy and query state\n        let query_builder = search_strategy.build_query(&query).await?;\n\n        // Execute the query using the builder's built-in methods\n        let batches = query_builder\n            .execute()\n            .await?\n            .try_collect::<Vec<_>>()\n            .await?;\n\n        let documents = Self::retrieve_from_record_batches(batches.as_slice());\n\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\nimpl LanceDB {\n    /// Retrieves documents from Arrow `RecordBatches` by processing each row and extracting content\n    /// and metadata fields.\n    ///\n    
/// The function expects a \"chunk\" field to contain the main document content, while all other\n    /// string fields are treated as metadata. Non-string fields are currently skipped.\n    fn retrieve_from_record_batches(batches: &[RecordBatch]) -> Vec<Document> {\n        let total_rows: usize = batches.iter().map(RecordBatch::num_rows).sum();\n        let mut documents = Vec::with_capacity(total_rows);\n\n        let process_batch = |batch: &RecordBatch, documents: &mut Vec<Document>| {\n            for row_idx in 0..batch.num_rows() {\n                let schema = batch.schema();\n\n                let (content, metadata): (String, Option<Metadata>) = {\n                    let mut metadata = Metadata::default();\n                    let mut content = String::new();\n\n                    for (col_idx, field) in schema.as_ref().fields().iter().enumerate() {\n                        if let Some(array) =\n                            batch.column(col_idx).as_any().downcast_ref::<StringArray>()\n                        {\n                            let value = array.value(row_idx).to_string();\n\n                            if field.name() == \"chunk\" {\n                                content = value;\n                            } else {\n                                metadata.insert(field.name().clone(), value);\n                            }\n                        } else {\n                            // Handle other array types as necessary\n                            // TODO: Can't we just downcast to serde::Value or fail?\n                        }\n                    }\n\n                    (\n                        content,\n                        if metadata.is_empty() {\n                            None\n                        } else {\n                            Some(metadata)\n                        },\n                    )\n                };\n\n                documents.push(Document::new(content, metadata));\n            }\n       
 };\n\n        for batch in batches {\n            process_batch(batch, &mut documents);\n        }\n\n        documents\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::{\n        Persist as _,\n        indexing::{self, EmbeddedField},\n    };\n    use temp_dir::TempDir;\n\n    use super::*;\n\n    async fn setup() -> (TempDir, LanceDB) {\n        let tempdir = TempDir::new().unwrap();\n        let lancedb = LanceDB::builder()\n            .uri(tempdir.child(\"lancedb\").to_str().unwrap())\n            .vector_size(384)\n            .with_metadata(\"filter\")\n            .with_vector(EmbeddedField::Combined)\n            .table_name(\"swiftide_test\")\n            .build()\n            .unwrap();\n        lancedb.setup().await.unwrap();\n\n        (tempdir, lancedb)\n    }\n\n    #[tokio::test]\n    async fn test_retrieve_multiple_docs_and_filter() {\n        let (_guard, lancedb) = setup().await;\n\n        let nodes = vec![\n            indexing::TextNode::new(\"test_query1\").with_metadata((\"filter\", \"true\")),\n            indexing::TextNode::new(\"test_query2\").with_metadata((\"filter\", \"true\")),\n            indexing::TextNode::new(\"test_query3\").with_metadata((\"filter\", \"false\")),\n        ]\n        .into_iter()\n        .map(|node| {\n            node.with_vectors([(EmbeddedField::Combined, vec![1.0; 384])]);\n            node.to_owned()\n        })\n        .collect();\n\n        lancedb\n            .batch_store(nodes)\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .unwrap();\n\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n        query.embedding = Some(vec![1.0; 384]);\n\n        let search_strategy =\n            SimilaritySingleEmbedding::from_filter(\"filter = \\\"true\\\"\".to_string());\n        let result = lancedb\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        
assert_eq!(result.documents().len(), 2);\n\n        let search_strategy =\n            SimilaritySingleEmbedding::from_filter(\"filter = \\\"banana\\\"\".to_string());\n        let result = lancedb\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 0);\n\n        let search_strategy = SimilaritySingleEmbedding::<()>::default();\n        let result = lancedb\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 3);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n\n//! Integrations with various platforms and external services.\n\n#[cfg(feature = \"anthropic\")]\npub mod anthropic;\n#[cfg(feature = \"aws-bedrock\")]\npub mod aws_bedrock_v2;\n#[cfg(feature = \"dashscope\")]\npub mod dashscope;\n#[cfg(feature = \"duckdb\")]\npub mod duckdb;\n#[cfg(feature = \"fastembed\")]\npub mod fastembed;\n#[cfg(feature = \"fluvio\")]\npub mod fluvio;\n#[cfg(feature = \"gemini\")]\npub mod gemini;\n#[cfg(feature = \"groq\")]\npub mod groq;\n#[cfg(feature = \"kafka\")]\npub mod kafka;\n#[cfg(feature = \"lancedb\")]\npub mod lancedb;\n#[cfg(feature = \"ollama\")]\npub mod ollama;\n#[cfg(feature = \"open-router\")]\npub mod open_router;\n#[cfg(feature = \"openai\")]\npub mod openai;\n#[cfg(feature = \"parquet\")]\npub mod parquet;\n#[cfg(feature = \"pgvector\")]\npub mod pgvector;\n#[cfg(feature = \"qdrant\")]\npub mod qdrant;\n#[cfg(feature = \"redb\")]\npub mod redb;\n#[cfg(feature = \"redis\")]\npub mod redis;\n#[cfg(feature = \"scraping\")]\npub mod scraping;\n#[cfg(feature = \"tiktoken\")]\npub mod tiktoken;\n#[cfg(feature = \"tree-sitter\")]\npub mod treesitter;\n"
  },
  {
    "path": "swiftide-integrations/src/ollama/config.rs",
    "content": "use derive_builder::Builder;\nuse reqwest::header::HeaderMap;\nuse secrecy::SecretString;\nuse serde::Deserialize;\n\nconst OLLAMA_API_BASE: &str = \"http://localhost:11434/v1\";\n\n#[derive(Clone, Debug, Deserialize, Builder)]\n#[serde(default)]\npub struct OllamaConfig {\n    api_base: String,\n    api_key: SecretString,\n}\n\nimpl OllamaConfig {\n    pub fn builder() -> OllamaConfigBuilder {\n        OllamaConfigBuilder::default()\n    }\n\n    pub fn with_api_base(&mut self, api_base: &str) -> &mut Self {\n        self.api_base = api_base.to_string();\n\n        self\n    }\n}\n\nimpl Default for OllamaConfig {\n    fn default() -> Self {\n        Self {\n            api_base: OLLAMA_API_BASE.to_string(),\n            api_key: String::new().into(),\n        }\n    }\n}\n\nimpl async_openai::config::Config for OllamaConfig {\n    fn headers(&self) -> HeaderMap {\n        HeaderMap::new()\n    }\n\n    fn url(&self, path: &str) -> String {\n        format!(\"{}{}\", self.api_base, path)\n    }\n\n    fn api_base(&self) -> &str {\n        &self.api_base\n    }\n\n    fn api_key(&self) -> &SecretString {\n        &self.api_key\n    }\n\n    fn query(&self) -> Vec<(&str, &str)> {\n        vec![]\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/ollama/mod.rs",
    "content": "//! This module provides integration with `Ollama`'s API, enabling the use of language models and\n//! embeddings within the Swiftide project. It includes the `Ollama` struct for managing API clients\n//! and default options for embedding and prompt models. The module is conditionally compiled based\n//! on the \"ollama\" feature flag.\n\nuse config::OllamaConfig;\n\nuse crate::openai;\n\npub mod config;\n\n/// The `Ollama` struct encapsulates an `Ollama` client and default options for embedding and prompt\n/// models. It uses the `Builder` pattern for flexible and customizable instantiation.\n///\n/// By default it will look for a `OLLAMA_API_KEY` environment variable. Note that either a prompt\n/// model or embedding model always need to be set, either with\n/// [`Ollama::with_default_prompt_model`] or [`Ollama::with_default_embed_model`] or via the\n/// builder. You can find available models in the Ollama documentation.\n///\n/// Under the hood it uses [`async_openai`], with the Ollama openai mapping. This means\n/// some features might not work as expected. 
See the Ollama documentation for details.\npub type Ollama = openai::GenericOpenAI<OllamaConfig>;\npub type OllamaBuilder = openai::GenericOpenAIBuilder<OllamaConfig>;\npub type OllamaBuilderError = openai::GenericOpenAIBuilderError;\npub use openai::{Options, OptionsBuilder, OptionsBuilderError};\n\nimpl Ollama {\n    /// Build a new `Ollama` instance\n    pub fn builder() -> OllamaBuilder {\n        OllamaBuilder::default()\n    }\n}\nimpl Default for Ollama {\n    fn default() -> Self {\n        Self::builder().build().unwrap()\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_default_prompt_model() {\n        let openai = Ollama::builder()\n            .default_prompt_model(\"llama3.1\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openai.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_default_embed_model() {\n        let ollama = Ollama::builder()\n            .default_embed_model(\"mxbai-embed-large\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            ollama.default_options.embed_model,\n            Some(\"mxbai-embed-large\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_default_models() {\n        let ollama = Ollama::builder()\n            .default_embed_model(\"mxbai-embed-large\")\n            .default_prompt_model(\"llama3.1\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            ollama.default_options.embed_model,\n            Some(\"mxbai-embed-large\".to_string())\n        );\n        assert_eq!(\n            ollama.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_building_via_default_prompt_model() {\n        let mut client = Ollama::default();\n\n        assert!(client.default_options.prompt_model.is_none());\n\n        client.with_default_prompt_model(\"llama3.1\");\n 
       assert_eq!(\n            client.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_building_via_default_embed_model() {\n        let mut client = Ollama::default();\n\n        assert!(client.default_options.embed_model.is_none());\n\n        client.with_default_embed_model(\"mxbai-embed-large\");\n        assert_eq!(\n            client.default_options.embed_model,\n            Some(\"mxbai-embed-large\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_building_via_default_models() {\n        let mut client = Ollama::default();\n\n        assert!(client.default_options.embed_model.is_none());\n\n        client.with_default_prompt_model(\"llama3.1\");\n        client.with_default_embed_model(\"mxbai-embed-large\");\n        assert_eq!(\n            client.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n        assert_eq!(\n            client.default_options.embed_model,\n            Some(\"mxbai-embed-large\".to_string())\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/open_router/config.rs",
    "content": "use derive_builder::Builder;\nuse reqwest::header::{AUTHORIZATION, HeaderMap};\nuse secrecy::{ExposeSecret as _, SecretString};\nuse serde::Deserialize;\n\nconst OPENROUTER_API_BASE: &str = \"https://openrouter.ai/api/v1\";\n\n#[derive(Clone, Debug, Deserialize, Builder)]\n#[serde(default)]\n#[builder(setter(into, strip_option))]\npub struct OpenRouterConfig {\n    #[builder(default = OPENROUTER_API_BASE.to_string())]\n    api_base: String,\n    api_key: SecretString,\n    /// Sets the HTTP-Referer header (leaderboard)\n    site_url: Option<String>,\n    /// Sets the site name via the X-Title header (leaderboard)\n    site_name: Option<String>,\n}\n\nimpl OpenRouterConfig {\n    pub fn builder() -> OpenRouterConfigBuilder {\n        OpenRouterConfigBuilder::default()\n    }\n    pub fn with_api_base(&mut self, api_base: &str) -> &mut Self {\n        self.api_base = api_base.to_string();\n\n        self\n    }\n\n    pub fn with_api_key(&mut self, api_key: impl Into<SecretString>) -> &mut Self {\n        self.api_key = api_key.into();\n\n        self\n    }\n    pub fn with_site_url(&mut self, site_url: &str) -> &mut Self {\n        self.site_url = Some(site_url.to_string());\n\n        self\n    }\n\n    pub fn with_site_name(&mut self, site_name: &str) -> &mut Self {\n        self.site_name = Some(site_name.to_string());\n\n        self\n    }\n}\n\nimpl Default for OpenRouterConfig {\n    fn default() -> Self {\n        Self {\n            api_base: OPENROUTER_API_BASE.to_string(),\n            api_key: std::env::var(\"OPENROUTER_API_KEY\")\n                .unwrap_or_else(|_| String::new())\n                .into(),\n            site_url: None,\n            site_name: None,\n        }\n    }\n}\n\nimpl async_openai::config::Config for OpenRouterConfig {\n    fn headers(&self) -> HeaderMap {\n        let mut headers = HeaderMap::new();\n\n        let api_key = self.api_key.expose_secret();\n        assert!(!api_key.is_empty(), \"API key for OpenRouter is required\");\n\n        headers.insert(\n            AUTHORIZATION,\n            format!(\"Bearer {}\", self.api_key.expose_secret())\n                .as_str()\n                .parse()\n                .unwrap(),\n        );\n        if let Ok(site_url) = self\n            .site_url\n            .as_deref()\n            .unwrap_or(\"https://github.com/bosun-ai/swiftide\")\n            .parse()\n        {\n            headers.insert(\"HTTP-Referer\", site_url);\n        }\n\n        if let Ok(site_name) = self.site_name.as_deref().unwrap_or(\"Swiftide\").parse() {\n            headers.insert(\"X-Title\", site_name);\n        }\n\n        headers\n    }\n\n    fn url(&self, path: &str) -> String {\n        format!(\"{}{}\", self.api_base, path)\n    }\n\n    fn api_base(&self) -> &str {\n        &self.api_base\n    }\n\n    fn api_key(&self) -> &SecretString {\n        &self.api_key\n    }\n\n    fn query(&self) -> Vec<(&str, &str)> {\n        vec![]\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/open_router/mod.rs",
    "content": "//! This module provides integration with `OpenRouter`'s API, enabling the use of language models\n//! and embeddings within the Swiftide project. It includes the `OpenRouter` struct for managing API\n//! clients and default options for embedding and prompt models. The module is conditionally\n//! compiled based on the \"open-router\" feature flag.\n\nuse config::OpenRouterConfig;\n\nuse crate::openai;\n\npub mod config;\n\n/// The `OpenRouter` struct encapsulates an `OpenRouter` client and default options for embedding\n/// and prompt models. It uses the `Builder` pattern for flexible and customizable instantiation.\n///\n/// By default it will look for an `OPENROUTER_API_KEY` environment variable. Note that either a\n/// prompt model or an embedding model must always be set, either with\n/// [`OpenRouter::with_default_prompt_model`] or [`OpenRouter::with_default_embed_model`] or via the\n/// builder. You can find available models in the `OpenRouter` documentation.\n///\n/// Under the hood it uses [`async_openai`] with the `OpenRouter` OpenAI-compatible API. This means\n/// some features might not work as expected. 
See the `OpenRouter` documentation for details.\npub type OpenRouter = openai::GenericOpenAI<OpenRouterConfig>;\npub type OpenRouterBuilder = openai::GenericOpenAIBuilder<OpenRouterConfig>;\npub type OpenRouterBuilderError = openai::GenericOpenAIBuilderError;\npub use openai::{Options, OptionsBuilder, OptionsBuilderError};\n\nimpl OpenRouter {\n    /// Creates a new `OpenRouterBuilder` for constructing `OpenRouter` instances.\n    pub fn builder() -> OpenRouterBuilder {\n        OpenRouterBuilder::default()\n    }\n}\n\nimpl Default for OpenRouter {\n    fn default() -> Self {\n        Self::builder().build().unwrap()\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_default_prompt_model() {\n        let openai = OpenRouter::builder()\n            .default_prompt_model(\"llama3.1\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openai.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_default_models() {\n        let openrouter = OpenRouter::builder()\n            .default_prompt_model(\"llama3.1\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openrouter.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_building_via_default_prompt_model() {\n        let mut client = OpenRouter::default();\n\n        assert!(client.default_options.prompt_model.is_none());\n\n        client.with_default_prompt_model(\"llama3.1\");\n        assert_eq!(\n            client.default_options.prompt_model,\n            Some(\"llama3.1\".to_string())\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/chat_completion.rs",
    "content": "use anyhow::{Context as _, Result};\nuse async_openai::types::chat::{\n    ChatCompletionMessageToolCall, ChatCompletionMessageToolCalls,\n    ChatCompletionRequestAssistantMessageArgs, ChatCompletionRequestMessageContentPartAudio,\n    ChatCompletionRequestMessageContentPartImage, ChatCompletionRequestMessageContentPartText,\n    ChatCompletionRequestSystemMessageArgs, ChatCompletionRequestToolMessageArgs,\n    ChatCompletionRequestUserMessageArgs, ChatCompletionRequestUserMessageContent,\n    ChatCompletionRequestUserMessageContentPart, ChatCompletionStreamOptions,\n    ChatCompletionToolChoiceOption, ChatCompletionTools, FunctionCall, FunctionObject, ImageUrl,\n    InputAudio, InputAudioFormat, ToolChoiceOptions,\n};\nuse async_trait::async_trait;\nuse base64::Engine as _;\nuse futures_util::StreamExt as _;\nuse futures_util::stream;\nuse itertools::Itertools;\nuse serde::Serialize;\nuse swiftide_core::ChatCompletionStream;\nuse swiftide_core::chat_completion::Usage;\nuse swiftide_core::chat_completion::{\n    ChatCompletion, ChatCompletionRequest, ChatCompletionResponse, ChatMessage,\n    ChatMessageContentPart, ChatMessageContentSource, ToolCall, ToolSpec,\n    errors::LanguageModelError,\n};\n#[cfg(feature = \"metrics\")]\nuse swiftide_core::metrics::emit_usage;\n\nuse super::GenericOpenAI;\nuse super::openai_error_to_language_model_error;\nuse super::responses_api::{\n    build_responses_request_from_chat, response_to_chat_completion, responses_stream_adapter,\n};\nuse super::tool_schema::OpenAiToolSchema;\nuse tracing_futures::Instrument;\n\n#[async_trait]\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> ChatCompletion for GenericOpenAI<C>\n{\n    #[cfg_attr(not(feature = \"langfuse\"), tracing::instrument(skip_all, err))]\n    #[cfg_attr(\n        feature = \"langfuse\",\n        tracing::instrument(skip_all, err, 
fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn complete(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        if self.is_responses_api_enabled() {\n            return self.complete_via_responses_api(request).await;\n        }\n\n        let model = self\n            .default_options\n            .prompt_model\n            .as_ref()\n            .context(\"Model not set\")?;\n\n        let messages = request\n            .messages()\n            .iter()\n            .filter_map(|message| message_to_openai(message).transpose())\n            .collect::<Result<Vec<_>>>()?;\n\n        // Build the request to be sent to the OpenAI API.\n        let mut openai_request = self\n            .chat_completion_request_defaults()\n            .model(model)\n            .messages(messages)\n            .to_owned();\n\n        if !request.tools_spec().is_empty() {\n            openai_request\n                .tools(\n                    request\n                        .tools_spec()\n                        .iter()\n                        .map(tools_to_openai)\n                        .collect::<Result<Vec<_>>>()?,\n                )\n                .tool_choice(ChatCompletionToolChoiceOption::Mode(\n                    ToolChoiceOptions::Auto,\n                ));\n            if let Some(par) = self.default_options.parallel_tool_calls {\n                openai_request.parallel_tool_calls(par);\n            }\n        }\n\n        let openai_request = openai_request\n            .build()\n            .map_err(openai_error_to_language_model_error)?;\n\n        tracing::trace!(model, request = ?request, \"Sending request to OpenAI\");\n\n        let tracking_request = openai_request.clone();\n        let response = self\n            .client\n            .chat()\n            .create(openai_request)\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n     
   tracing::trace!(?response, \"[ChatCompletion] Full response from OpenAI\");\n        // Make sure the debug log is a concise one line\n\n        let mut builder = ChatCompletionResponse::builder()\n            .maybe_message(\n                response\n                    .choices\n                    .first()\n                    .and_then(|choice| choice.message.content.clone()),\n            )\n            .maybe_tool_calls(\n                response\n                    .choices\n                    .first()\n                    .and_then(|choice| choice.message.tool_calls.as_ref())\n                    .map(|tool_calls| {\n                        tool_calls\n                            .iter()\n                            .filter_map(|tool_call| match tool_call {\n                                ChatCompletionMessageToolCalls::Function(call) => Some(\n                                    ToolCall::builder()\n                                        .id(call.id.clone())\n                                        .args(call.function.arguments.clone())\n                                        .name(call.function.name.clone())\n                                        .build()\n                                        .expect(\"infallible\"),\n                                ),\n                                ChatCompletionMessageToolCalls::Custom(_) => None,\n                            })\n                            .collect_vec()\n                    }),\n            )\n            .to_owned();\n\n        if let Some(usage) = &response.usage {\n            builder.usage(Usage::from(usage));\n        }\n\n        let our_response = builder.build().map_err(LanguageModelError::from)?;\n\n        self.track_completion(\n            model,\n            our_response.usage.as_ref(),\n            Some(&tracking_request),\n            Some(&our_response),\n        );\n\n        Ok(our_response)\n    }\n\n    #[tracing::instrument(skip_all)]\n    async fn 
complete_stream(&self, request: &ChatCompletionRequest<'_>) -> ChatCompletionStream {\n        if self.is_responses_api_enabled() {\n            return self.complete_stream_via_responses_api(request).await;\n        }\n\n        let Some(model_name) = self.default_options.prompt_model.clone() else {\n            return LanguageModelError::permanent(\"Model not set\").into();\n        };\n\n        #[cfg(not(any(feature = \"metrics\", feature = \"langfuse\")))]\n        let _ = &model_name;\n\n        let messages = match request\n            .messages()\n            .iter()\n            .filter_map(|message| message_to_openai(message).transpose())\n            .collect::<Result<Vec<_>>>()\n        {\n            Ok(messages) => messages,\n            Err(e) => return LanguageModelError::from(e).into(),\n        };\n\n        // Build the request to be sent to the OpenAI API.\n        let mut openai_request = self\n            .chat_completion_request_defaults()\n            .model(&model_name)\n            .messages(messages)\n            .stream(true)\n            .stream_options(ChatCompletionStreamOptions {\n                include_usage: Some(true),\n                include_obfuscation: None,\n            })\n            .to_owned();\n\n        if !request.tools_spec().is_empty() {\n            openai_request\n                .tools(\n                    match request\n                        .tools_spec()\n                        .iter()\n                        .map(tools_to_openai)\n                        .collect::<Result<Vec<_>>>()\n                    {\n                        Ok(tools) => tools,\n                        Err(e) => {\n                            return LanguageModelError::from(e).into();\n                        }\n                    },\n                )\n                .tool_choice(ChatCompletionToolChoiceOption::Mode(\n                    ToolChoiceOptions::Auto,\n                ));\n            if let Some(par) = 
self.default_options.parallel_tool_calls {\n                openai_request.parallel_tool_calls(par);\n            }\n        }\n\n        let openai_request = match openai_request.build() {\n            Ok(request) => request,\n            Err(e) => {\n                return openai_error_to_language_model_error(e).into();\n            }\n        };\n\n        tracing::trace!(model = %model_name, request = ?request, \"Sending request to OpenAI\");\n\n        let response_stream = match self\n            .client\n            .chat()\n            .create_stream(openai_request.clone())\n            .await\n        {\n            Ok(response) => response,\n            Err(e) => return openai_error_to_language_model_error(e).into(),\n        };\n\n        let stream_full = self.stream_full;\n        let model_name_for_track = model_name.clone();\n        let self_for_stream = self.clone();\n        let tracking_request = openai_request;\n\n        let span = if cfg!(feature = \"langfuse\") {\n            tracing::info_span!(\"stream\", langfuse.type = \"GENERATION\")\n        } else {\n            tracing::info_span!(\"stream\")\n        };\n\n        let stream = stream::unfold(\n            (\n                response_stream,\n                ChatCompletionResponse::default(),\n                tracking_request.clone(),\n                false, // finished\n            ),\n            move |(mut response_stream, mut state, tracking_request, finished)| {\n                let stream_full = stream_full;\n                let self_for_stream = self_for_stream.clone();\n                let model_name_for_track = model_name_for_track.clone();\n                async move {\n                    if finished {\n                        return None;\n                    }\n\n                    match response_stream.next().await {\n                        Some(Ok(chunk)) => {\n                            let delta_message = chunk\n                                .choices\n            
                    .first()\n                                .and_then(|d| d.delta.content.as_deref());\n                            let delta_tool_calls = chunk\n                                .choices\n                                .first()\n                                .and_then(|d| d.delta.tool_calls.as_deref());\n                            let usage = chunk.usage.as_ref();\n\n                            state.append_message_delta(delta_message);\n                            if let Some(delta_tool_calls) = delta_tool_calls {\n                                for tc in delta_tool_calls {\n                                    state.append_tool_call_delta(\n                                        tc.index as usize,\n                                        tc.id.as_deref(),\n                                        tc.function.as_ref().and_then(|f| f.name.as_deref()),\n                                        tc.function.as_ref().and_then(|f| f.arguments.as_deref()),\n                                    );\n                                }\n                            }\n                            if let Some(usage) = usage {\n                                let usage = Usage::from(usage);\n                                state.append_usage_delta(\n                                    usage.prompt_tokens,\n                                    usage.completion_tokens,\n                                    usage.total_tokens,\n                                );\n                            }\n\n                            let snapshot = if stream_full {\n                                state.clone()\n                            } else {\n                                ChatCompletionResponse {\n                                    id: state.id,\n                                    message: None,\n                                    tool_calls: None,\n                                    usage: None,\n                                    reasoning: None,\n             
                       delta: state.delta.clone(),\n                                }\n                            };\n\n                            Some((\n                                Ok(snapshot),\n                                (response_stream, state, tracking_request, false),\n                            ))\n                        }\n                        Some(Err(err)) => Some((\n                            Err(openai_error_to_language_model_error(err)),\n                            (response_stream, state, tracking_request, true),\n                        )),\n                        None => {\n                            // Final emission; track completion with the full state.\n                            self_for_stream.track_completion(\n                                &model_name_for_track,\n                                state.usage.as_ref(),\n                                Some(&tracking_request),\n                                Some(&state),\n                            );\n                            let final_snapshot = state.clone();\n                            Some((\n                                Ok(final_snapshot),\n                                (response_stream, state, tracking_request, true),\n                            ))\n                        }\n                    }\n                }\n            },\n        );\n\n        Box::pin(tracing_futures::Instrument::instrument(stream, span))\n    }\n}\n\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> GenericOpenAI<C>\n{\n    async fn complete_via_responses_api(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> Result<ChatCompletionResponse, LanguageModelError> {\n        let model = self\n            .default_options\n            .prompt_model\n            .as_ref()\n            .context(\"Model not set\")?;\n\n        let 
create_request = build_responses_request_from_chat(self, request)?;\n        let tracking_request = create_request.clone();\n\n        let response = self\n            .client\n            .responses()\n            .create(create_request)\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n        let completion = response_to_chat_completion(&response)?;\n\n        self.track_completion(\n            model,\n            completion.usage.as_ref(),\n            Some(&tracking_request),\n            Some(&completion),\n        );\n\n        Ok(completion)\n    }\n\n    #[allow(clippy::too_many_lines)]\n    async fn complete_stream_via_responses_api(\n        &self,\n        request: &ChatCompletionRequest<'_>,\n    ) -> ChatCompletionStream {\n        #[allow(unused_variables)]\n        let Some(model_name) = self.default_options.prompt_model.clone() else {\n            return LanguageModelError::permanent(\"Model not set\").into();\n        };\n\n        let mut create_request = match build_responses_request_from_chat(self, request) {\n            Ok(req) => req,\n            Err(err) => return err.into(),\n        };\n\n        create_request.stream = Some(true);\n\n        let stream = match self\n            .client\n            .responses()\n            .create_stream(create_request.clone())\n            .await\n        {\n            Ok(stream) => stream,\n            Err(err) => return openai_error_to_language_model_error(err).into(),\n        };\n\n        let stream_full = self.stream_full;\n\n        let span = if cfg!(feature = \"langfuse\") {\n            tracing::info_span!(\"responses_stream\", langfuse.type = \"GENERATION\")\n        } else {\n            tracing::info_span!(\"responses_stream\")\n        };\n\n        let mapped_stream = responses_stream_adapter(stream, stream_full);\n\n        let this = self.clone();\n        let tracked_request = create_request;\n\n        let mapped_stream = mapped_stream.map(move 
|result| match result {\n            Ok(item) => {\n                if item.finished {\n                    this.track_completion(\n                        &model_name,\n                        item.response.usage.as_ref(),\n                        Some(&tracked_request),\n                        Some(&item.response),\n                    );\n                }\n\n                Ok(item.response)\n            }\n            Err(err) => Err(err),\n        });\n\n        Box::pin(Instrument::instrument(mapped_stream, span))\n    }\n    #[allow(unused_variables)]\n    pub(crate) fn track_completion<R, S>(\n        &self,\n        model: &str,\n        usage: Option<&Usage>,\n        request: Option<&R>,\n        response: Option<&S>,\n    ) where\n        R: Serialize + ?Sized,\n        S: Serialize + ?Sized,\n    {\n        if let Some(usage) = usage {\n            let cb_usage = usage.clone();\n            if let Some(callback) = &self.on_usage {\n                let callback = callback.clone();\n                tokio::spawn(async move {\n                    if let Err(err) = callback(&cb_usage).await {\n                        tracing::error!(\"Error in on_usage callback: {err}\");\n                    }\n                });\n            }\n\n            #[cfg(feature = \"metrics\")]\n            emit_usage(\n                model,\n                usage.prompt_tokens.into(),\n                usage.completion_tokens.into(),\n                usage.total_tokens.into(),\n                self.metric_metadata.as_ref(),\n            );\n        }\n\n        #[cfg(feature = \"langfuse\")]\n        tracing::debug!(\n            langfuse.model = model,\n            langfuse.input = request.and_then(langfuse_json_redacted).unwrap_or_default(),\n            langfuse.output = response.and_then(langfuse_json).unwrap_or_default(),\n            langfuse.usage = usage.and_then(langfuse_json).unwrap_or_default(),\n        );\n    }\n}\n\n#[cfg(feature = \"langfuse\")]\npub(crate) 
fn langfuse_json<T: Serialize + ?Sized>(value: &T) -> Option<String> {\n    serde_json::to_string_pretty(value).ok()\n}\n\n#[cfg(feature = \"langfuse\")]\npub(crate) fn langfuse_json_redacted<T: Serialize + ?Sized>(value: &T) -> Option<String> {\n    let mut value = serde_json::to_value(value).ok()?;\n    redact_image_urls(&mut value);\n    serde_json::to_string_pretty(&value).ok()\n}\n\n#[cfg(feature = \"langfuse\")]\nfn redact_image_urls(value: &mut serde_json::Value) {\n    match value {\n        serde_json::Value::Object(map) => {\n            if let Some(image_url) = map.get_mut(\"image_url\")\n                && let serde_json::Value::Object(image_obj) = image_url\n                && let Some(serde_json::Value::String(url)) = image_obj.get_mut(\"url\")\n                && let Some(truncated) = truncate_data_url(url)\n            {\n                *url = truncated;\n            }\n\n            for val in map.values_mut() {\n                redact_image_urls(val);\n            }\n        }\n        serde_json::Value::Array(arr) => {\n            for val in arr {\n                redact_image_urls(val);\n            }\n        }\n        _ => {}\n    }\n}\n\n#[cfg(feature = \"langfuse\")]\nfn truncate_data_url(url: &str) -> Option<String> {\n    const MAX_DATA_PREVIEW: usize = 32;\n\n    if !url.starts_with(\"data:\") {\n        return None;\n    }\n\n    let (prefix, data) = url.split_once(',')?;\n    if data.len() <= MAX_DATA_PREVIEW {\n        return None;\n    }\n\n    let preview = &data[..MAX_DATA_PREVIEW];\n    let truncated = data.len() - MAX_DATA_PREVIEW;\n\n    Some(format!(\n        \"{prefix},{preview}...[truncated {truncated} chars]\"\n    ))\n}\n\n#[cfg(not(feature = \"langfuse\"))]\n#[allow(dead_code)]\npub(crate) fn langfuse_json<T>(_value: &T) -> Option<String> {\n    None\n}\n\nfn tools_to_openai(spec: &ToolSpec) -> Result<ChatCompletionTools> {\n    let parameters = OpenAiToolSchema::try_from(spec)\n        .context(\"tool schema must be 
OpenAI compatible\")?\n        .into_value();\n\n    let function = FunctionObject {\n        name: spec.name.clone(),\n        description: Some(spec.description.clone()),\n        parameters: Some(parameters),\n        strict: Some(true),\n    };\n\n    Ok(ChatCompletionTools::Function(\n        async_openai::types::chat::ChatCompletionTool { function },\n    ))\n}\n\nfn message_to_openai(\n    message: &ChatMessage,\n) -> Result<Option<async_openai::types::chat::ChatCompletionRequestMessage>> {\n    let openai_message = match message {\n        ChatMessage::User(msg) => ChatCompletionRequestUserMessageArgs::default()\n            .content(msg.as_ref())\n            .build()?\n            .into(),\n        ChatMessage::UserWithParts(parts) => ChatCompletionRequestUserMessageArgs::default()\n            .content(user_parts_to_openai(parts)?)\n            .build()?\n            .into(),\n        ChatMessage::System(msg) => ChatCompletionRequestSystemMessageArgs::default()\n            .content(msg.as_ref())\n            .build()?\n            .into(),\n        ChatMessage::Summary(msg) => ChatCompletionRequestAssistantMessageArgs::default()\n            .content(msg.as_ref())\n            .build()?\n            .into(),\n        ChatMessage::ToolOutput(tool_call, tool_output) => {\n            let Some(content) = tool_output.content() else {\n                return Ok(Some(\n                    ChatCompletionRequestToolMessageArgs::default()\n                        .tool_call_id(tool_call.id())\n                        .build()?\n                        .into(),\n                ));\n            };\n\n            ChatCompletionRequestToolMessageArgs::default()\n                .content(content)\n                .tool_call_id(tool_call.id())\n                .build()?\n                .into()\n        }\n        ChatMessage::Assistant(content, tool_calls) => {\n            let mut builder = ChatCompletionRequestAssistantMessageArgs::default();\n\n            let 
has_tool_calls = tool_calls.as_ref().is_some_and(|calls| !calls.is_empty());\n\n            if let Some(content) = content.as_deref() {\n                builder.content(content);\n            }\n\n            if let Some(tool_calls) = tool_calls.as_ref() {\n                let calls = tool_calls\n                    .iter()\n                    .map(|tool_call| {\n                        ChatCompletionMessageToolCalls::Function(ChatCompletionMessageToolCall {\n                            id: tool_call.id().to_string(),\n                            function: FunctionCall {\n                                name: tool_call.name().to_string(),\n                                arguments: tool_call.args().unwrap_or_default().to_string(),\n                            },\n                        })\n                    })\n                    .collect::<Vec<_>>();\n\n                builder.tool_calls(calls);\n            }\n\n            if content.is_none() && !has_tool_calls {\n                return Ok(None);\n            }\n\n            builder.build()?.into()\n        }\n        ChatMessage::Reasoning(_) => return Ok(None),\n    };\n\n    Ok(Some(openai_message))\n}\n\nfn user_parts_to_openai(\n    parts: &[ChatMessageContentPart],\n) -> Result<ChatCompletionRequestUserMessageContent> {\n    let mapped = parts\n        .iter()\n        .map(part_to_openai_user_content_part)\n        .collect::<Result<Vec<_>>>()?;\n\n    Ok(ChatCompletionRequestUserMessageContent::Array(mapped))\n}\n\nfn part_to_openai_user_content_part(\n    part: &ChatMessageContentPart,\n) -> Result<ChatCompletionRequestUserMessageContentPart> {\n    Ok(match part {\n        ChatMessageContentPart::Text { text } => ChatCompletionRequestUserMessageContentPart::from(\n            ChatCompletionRequestMessageContentPartText::from(text.as_ref()),\n        ),\n        ChatMessageContentPart::Image { source, .. 
} => {\n            let image_url = ImageUrl {\n                url: source_to_openai_url(source)?,\n                detail: None,\n            };\n            ChatCompletionRequestUserMessageContentPart::from(\n                ChatCompletionRequestMessageContentPartImage { image_url },\n            )\n        }\n        ChatMessageContentPart::Audio { source, format } => {\n            let ChatMessageContentSource::Bytes { data, .. } = source else {\n                anyhow::bail!(\"OpenAI chat input_audio only supports bytes sources\");\n            };\n\n            let format = match format.as_deref() {\n                Some(\"wav\") => InputAudioFormat::Wav,\n                Some(\"mp3\") | None => InputAudioFormat::Mp3,\n                Some(other) => anyhow::bail!(\"Unsupported OpenAI chat input_audio format: {other}\"),\n            };\n\n            let input_audio = InputAudio {\n                data: base64::engine::general_purpose::STANDARD.encode(data),\n                format,\n            };\n\n            ChatCompletionRequestUserMessageContentPart::from(\n                ChatCompletionRequestMessageContentPartAudio { input_audio },\n            )\n        }\n        ChatMessageContentPart::Document { .. } => {\n            anyhow::bail!(\"OpenAI chat file parts are not supported by async-openai yet\")\n        }\n        ChatMessageContentPart::Video { .. } => {\n            anyhow::bail!(\"OpenAI chat completion does not support video parts\")\n        }\n    })\n}\n\nfn source_to_openai_url(source: &ChatMessageContentSource) -> Result<String> {\n    match source {\n        ChatMessageContentSource::Url { url } => Ok(url.clone()),\n        ChatMessageContentSource::FileId { .. } => {\n            anyhow::bail!(\"OpenAI chat image_url does not accept file_id sources\")\n        }\n        ChatMessageContentSource::S3 { .. 
} => {\n            anyhow::bail!(\"OpenAI chat image_url does not accept s3 sources\")\n        }\n        ChatMessageContentSource::Bytes { data, media_type } => {\n            let media_type = media_type.as_deref().unwrap_or(\"application/octet-stream\");\n            let encoded = base64::engine::general_purpose::STANDARD.encode(data);\n            Ok(format!(\"data:{media_type};base64,{encoded}\"))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::openai::{OpenAI, Options};\n\n    use super::*;\n    use futures_util::StreamExt;\n    use serde_json::json;\n    use std::sync::Arc;\n    use swiftide_core::chat_completion::{ToolCallBuilder, ToolOutput, UsageBuilder};\n    use wiremock::matchers::{method, path};\n    use wiremock::{Mock, MockServer, ResponseTemplate};\n\n    #[allow(dead_code)]\n    #[derive(schemars::JsonSchema)]\n    struct WeatherArgs {\n        _city: String,\n    }\n\n    #[allow(dead_code)]\n    #[derive(schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[allow(dead_code)]\n    #[derive(schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    #[test]\n    fn test_tools_to_openai_sets_additional_properties_false() {\n        let spec = ToolSpec::builder()\n            .name(\"get_weather\")\n            .description(\"Retrieve weather data\")\n            
.parameters_schema(schemars::schema_for!(WeatherArgs))\n            .build()\n            .unwrap();\n\n        let tool = tools_to_openai(&spec).expect(\"tool conversion succeeds\");\n\n        let function = match tool {\n            ChatCompletionTools::Function(ref tool) => &tool.function,\n            ChatCompletionTools::Custom(_) => panic!(\"expected function tool\"),\n        };\n\n        let additional_properties = function\n            .parameters\n            .as_ref()\n            .and_then(serde_json::Value::as_object)\n            .and_then(|obj| obj.get(\"additionalProperties\"))\n            .cloned();\n\n        assert_eq!(\n            additional_properties,\n            Some(serde_json::Value::Bool(false)),\n            \"Chat Completions require additionalProperties=false for tool parameters, got {}\",\n            serde_json::to_string_pretty(&function.parameters).unwrap()\n        );\n    }\n\n    #[test]\n    fn test_tools_to_openai_sets_nested_required_for_typed_request_objects() {\n        let spec = ToolSpec::builder()\n            .name(\"notion_create_comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(schemars::schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let tool = tools_to_openai(&spec).expect(\"tool conversion succeeds\");\n\n        let function = match tool {\n            ChatCompletionTools::Function(ref tool) => &tool.function,\n            ChatCompletionTools::Custom(_) => panic!(\"expected function tool\"),\n        };\n\n        let nested_required = function.parameters.as_ref().and_then(|schema| {\n            let request_schema = schema\n                .get(\"properties\")\n                .and_then(|value| value.get(\"request\"))\n                .and_then(serde_json::Value::as_object)?;\n            let referenced_required = request_schema\n                .get(\"$ref\")\n                .and_then(serde_json::Value::as_str)\n               
 .and_then(|reference| reference.strip_prefix(\"#/$defs/\"))\n                .and_then(|definition_name| {\n                    schema\n                        .get(\"$defs\")\n                        .and_then(|value| value.get(definition_name))\n                })\n                .and_then(|value| value.get(\"required\"))\n                .and_then(serde_json::Value::as_array);\n\n            referenced_required.or_else(|| {\n                request_schema\n                    .get(\"required\")\n                    .and_then(serde_json::Value::as_array)\n            })\n        });\n\n        let nested_required = nested_required.expect(\"nested request should have required\");\n        let names: std::collections::HashSet<_> = nested_required\n            .iter()\n            .filter_map(serde_json::Value::as_str)\n            .collect();\n\n        assert!(names.contains(\"body\"));\n        assert!(names.contains(\"text\"));\n        assert!(names.contains(\"page_id\"));\n        assert!(names.contains(\"block_id\"));\n        assert!(names.contains(\"discussion_id\"));\n    }\n\n    #[test]\n    fn test_message_to_openai_with_image_parts() {\n        let message = ChatMessage::new_user_with_parts(vec![\n            ChatMessageContentPart::text(\"Describe this image.\"),\n            ChatMessageContentPart::image(\"https://example.com/image.png\"),\n        ]);\n\n        let openai_message = message_to_openai(&message)\n            .expect(\"message conversion succeeds\")\n            .expect(\"message present\");\n\n        let value = serde_json::to_value(openai_message).expect(\"serialize message\");\n        let content = value\n            .get(\"content\")\n            .and_then(serde_json::Value::as_array)\n            .expect(\"content array\");\n\n        assert_eq!(content[0][\"type\"], \"text\");\n        assert_eq!(content[0][\"text\"], \"Describe this image.\");\n        assert_eq!(content[1][\"type\"], \"image_url\");\n        assert_eq!(\n   
         content[1][\"image_url\"][\"url\"],\n            \"https://example.com/image.png\"\n        );\n        assert!(content[1][\"image_url\"][\"detail\"].is_null());\n    }\n\n    #[test]\n    fn test_message_to_openai_with_image_bytes_source() {\n        let message = ChatMessage::new_user_with_parts(vec![\n            ChatMessageContentPart::text(\"Describe this image.\"),\n            ChatMessageContentPart::Image {\n                source: ChatMessageContentSource::bytes(\n                    vec![0_u8, 1_u8, 2_u8],\n                    Some(\"image/png\".to_string()),\n                ),\n                format: None,\n            },\n        ]);\n\n        let openai_message = message_to_openai(&message)\n            .expect(\"message conversion succeeds\")\n            .expect(\"message present\");\n\n        let value = serde_json::to_value(openai_message).expect(\"serialize message\");\n        let content = value\n            .get(\"content\")\n            .and_then(serde_json::Value::as_array)\n            .expect(\"content array\");\n\n        let image_url = content[1][\"image_url\"][\"url\"]\n            .as_str()\n            .expect(\"image_url must be string\");\n        assert!(image_url.starts_with(\"data:image/png;base64,\"));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete() {\n        let mock_server = MockServer::start().await;\n\n        // Mock OpenAI API response\n        let response_body = json!({\n          \"id\": \"chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT\",\n          \"object\": \"chat.completion\",\n          \"created\": 123,\n          \"model\": \"gpt-4o\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": \"Hello, world!\",\n                \"refusal\": null,\n                \"annotations\": []\n              },\n              \"logprobs\": null,\n              \"finish_reason\": 
\"stop\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 19,\n            \"completion_tokens\": 10,\n            \"total_tokens\": 29,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\"\n        });\n        Mock::given(method(\"POST\"))\n            .and(path(\"/chat/completions\"))\n            .respond_with(ResponseTemplate::new(200).set_body_json(response_body))\n            .mount(&mock_server)\n            .await;\n\n        // Create a GenericOpenAI instance with the mock server URL\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4o\")\n            .build()\n            .expect(\"Can create OpenAI client.\");\n\n        // Prepare a test request\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Hi\".into())])\n            .build()\n            .unwrap();\n\n        // Call the `complete` method\n        let response = openai.complete(&request).await.unwrap();\n\n        // Assert the response\n        assert_eq!(response.message(), Some(\"Hello, world!\"));\n\n        // Usage\n        let usage = response.usage.unwrap();\n        assert_eq!(usage.prompt_tokens, 19);\n        assert_eq!(usage.completion_tokens, 10);\n        assert_eq!(usage.total_tokens, 29);\n\n        let details = usage.details.as_ref().expect(\"usage details\");\n        assert_eq!(\n            
details\n                .prompt_tokens_details\n                .as_ref()\n                .and_then(|d| d.cached_tokens),\n            Some(0)\n        );\n        assert_eq!(\n            details\n                .completion_tokens_details\n                .as_ref()\n                .and_then(|d| d.reasoning_tokens),\n            Some(0)\n        );\n\n        let normalized = usage.normalized();\n        let normalized_details = normalized.details.expect(\"normalized details\");\n        assert_eq!(normalized_details.input.cached_tokens, Some(0));\n        assert_eq!(normalized_details.output.reasoning_tokens, Some(0));\n    }\n\n    #[test_log::test(tokio::test)]\n    #[allow(clippy::items_after_statements)]\n    async fn test_complete_responses_api() {\n        use serde_json::{Value, json};\n        use wiremock::{Request, Respond};\n\n        let mock_server = MockServer::start().await;\n\n        let response_body = json!({\n            \"created_at\": 123,\n            \"id\": \"resp_123\",\n            \"model\": \"gpt-4.1-mini\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": [\n                {\n                    \"type\": \"message\",\n                    \"id\": \"msg_1\",\n                    \"role\": \"assistant\",\n                    \"status\": \"completed\",\n                    \"content\": [\n                        {\"type\": \"output_text\", \"text\": \"Hello via responses\", \"annotations\": []}\n                    ]\n                }\n            ],\n            \"usage\": {\n                \"input_tokens\": 5,\n                \"input_tokens_details\": {\"cached_tokens\": 0},\n                \"output_tokens\": 3,\n                \"output_tokens_details\": {\"reasoning_tokens\": 0},\n                \"total_tokens\": 8\n            }\n        });\n\n        struct ValidateResponsesRequest {\n            expected_model: &'static str,\n            response: Value,\n        
}\n\n        impl Respond for ValidateResponsesRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let body: Value = serde_json::from_slice(&request.body).unwrap();\n                assert_eq!(body[\"model\"], self.expected_model);\n                let input = body[\"input\"].as_array().expect(\"input array\");\n                assert_eq!(input.len(), 1);\n                assert_eq!(input[0][\"role\"], \"user\");\n                assert_eq!(input[0][\"content\"], \"Hello via prompt\");\n\n                let _: async_openai::types::responses::Response =\n                    serde_json::from_value(self.response.clone()).unwrap();\n\n                ResponseTemplate::new(200).set_body_json(self.response.clone())\n            }\n        }\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/responses\"))\n            .respond_with(ValidateResponsesRequest {\n                expected_model: \"gpt-4.1-mini\",\n                response: response_body,\n            })\n            .mount(&mock_server)\n            .await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4.1-mini\")\n            .use_responses_api(true)\n            .build()\n            .expect(\"Can create OpenAI client.\");\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Hello via prompt\".into())])\n            .build()\n            .unwrap();\n\n        let response = openai.complete(&request).await.unwrap();\n\n        assert_eq!(response.message(), Some(\"Hello via responses\"));\n\n        let usage = response.usage.expect(\"usage present\");\n        assert_eq!(usage.prompt_tokens, 5);\n        assert_eq!(usage.completion_tokens, 3);\n        
assert_eq!(usage.total_tokens, 8);\n\n        let details = usage.details.as_ref().expect(\"usage details\");\n        assert_eq!(\n            details\n                .input_tokens_details\n                .as_ref()\n                .and_then(|d| d.cached_tokens),\n            Some(0)\n        );\n        assert_eq!(\n            details\n                .output_tokens_details\n                .as_ref()\n                .and_then(|d| d.reasoning_tokens),\n            Some(0)\n        );\n\n        let normalized = usage.normalized();\n        let normalized_details = normalized.details.expect(\"normalized details\");\n        assert_eq!(normalized_details.input.cached_tokens, Some(0));\n        assert_eq!(normalized_details.output.reasoning_tokens, Some(0));\n    }\n\n    #[test_log::test(tokio::test)]\n    #[allow(clippy::items_after_statements)]\n    async fn test_complete_with_all_default_settings() {\n        use serde_json::Value;\n        use wiremock::{Request, Respond, ResponseTemplate};\n\n        let mock_server = wiremock::MockServer::start().await;\n\n        // Custom matcher to validate all settings in the incoming request\n        struct ValidateAllSettings;\n\n        impl Respond for ValidateAllSettings {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let v: Value = serde_json::from_slice(&request.body).unwrap();\n\n                // Validate required fields\n                assert_eq!(v[\"model\"], \"gpt-4-turbo\");\n                let arr = v[\"messages\"].as_array().unwrap();\n                assert_eq!(arr.len(), 1);\n                assert_eq!(arr[0][\"content\"], \"Test\");\n\n                assert_eq!(v[\"parallel_tool_calls\"], true);\n                assert_eq!(v[\"max_completion_tokens\"], 77);\n                assert!((v[\"temperature\"].as_f64().unwrap() - 0.42).abs() < 1e-5);\n                assert_eq!(v[\"reasoning_effort\"], serde_json::Value::Null);\n                
assert_eq!(v[\"seed\"], 42);\n                assert!((v[\"presence_penalty\"].as_f64().unwrap() - 1.1).abs() < 1e-5);\n\n                // Metadata as JSON object and user string\n                assert_eq!(v[\"metadata\"], serde_json::json!({\"key\": \"value\"}));\n                assert_eq!(v[\"user\"], \"test-user\");\n                ResponseTemplate::new(200).set_body_json(serde_json::json!({\n                \"id\": \"chatcmpl-xxx\",\n                \"object\": \"chat.completion\",\n                \"created\": 123,\n                \"model\": \"gpt-4-turbo\",\n                \"choices\": [{\n                    \"index\": 0,\n                    \"message\": {\n                        \"role\": \"assistant\",\n                        \"content\": \"All settings validated\",\n                        \"refusal\": null,\n                        \"annotations\": []\n                    },\n                    \"logprobs\": null,\n                    \"finish_reason\": \"stop\"\n                }],\n                \"usage\": {\n                    \"prompt_tokens\": 19,\n                    \"completion_tokens\": 10,\n                    \"total_tokens\": 29,\n                    \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0},\n                    \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}\n                },\n                \"service_tier\": \"default\"\n            }))\n            }\n        }\n\n        wiremock::Mock::given(wiremock::matchers::method(\"POST\"))\n            .and(wiremock::matchers::path(\"/chat/completions\"))\n            .respond_with(ValidateAllSettings)\n            .mount(&mock_server)\n            .await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = async_openai::Client::with_config(config);\n\n        let openai = 
crate::openai::OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4-turbo\")\n            .default_embed_model(\"not-used\")\n            .parallel_tool_calls(Some(true))\n            .default_options(\n                Options::builder()\n                    .max_completion_tokens(77)\n                    .temperature(0.42)\n                    .reasoning_effort(async_openai::types::responses::ReasoningEffort::Low)\n                    .seed(42)\n                    .presence_penalty(1.1)\n                    .metadata(serde_json::json!({\"key\": \"value\"}))\n                    .user(\"test-user\"),\n            )\n            .build()\n            .expect(\"Can create OpenAI client.\");\n\n        let request = swiftide_core::chat_completion::ChatCompletionRequest::builder()\n            .messages(vec![swiftide_core::chat_completion::ChatMessage::User(\n                \"Test\".into(),\n            )])\n            .build()\n            .unwrap();\n\n        let response = openai.complete(&request).await.unwrap();\n\n        assert_eq!(response.message(), Some(\"All settings validated\"));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_with_tools_sets_auto_choice_and_parallel_calls() {\n        use serde_json::Value;\n        use wiremock::{Request, Respond, ResponseTemplate};\n\n        #[derive(schemars::JsonSchema)]\n        struct WeatherArgs {\n            _city: String,\n        }\n\n        let weather_tool = ToolSpec::builder()\n            .name(\"get_weather\")\n            .description(\"weather\")\n            .parameters_schema(schemars::schema_for!(WeatherArgs))\n            .build()\n            .unwrap();\n\n        let alpha_tool = ToolSpec::builder()\n            .name(\"alpha_tool\")\n            .description(\"alpha\")\n            .parameters_schema(schemars::schema_for!(WeatherArgs))\n            .build()\n            .unwrap();\n\n        let mock_server = 
MockServer::start().await;\n\n        let response_body = json!({\n          \"id\": \"chatcmpl-xyz\",\n          \"object\": \"chat.completion\",\n          \"created\": 1,\n          \"model\": \"gpt-4o\",\n          \"choices\": [{\n            \"index\": 0,\n            \"message\": {\n              \"role\": \"assistant\",\n              \"content\": \"Here\",\n              \"refusal\": null,\n              \"annotations\": []\n            },\n            \"finish_reason\": \"stop\"\n          }],\n          \"usage\": {\n            \"prompt_tokens\": 2,\n            \"completion_tokens\": 3,\n            \"total_tokens\": 5,\n            \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0},\n            \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}\n          }\n        });\n\n        #[allow(clippy::items_after_statements)]\n        struct Validate(Value);\n        #[allow(clippy::items_after_statements)]\n        impl Respond for Validate {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let v: Value = serde_json::from_slice(&request.body).unwrap();\n                assert_eq!(v[\"model\"], \"gpt-4o\");\n                assert_eq!(v[\"parallel_tool_calls\"], true);\n                assert_eq!(v[\"tool_choice\"], \"auto\");\n                let tools = v[\"tools\"].as_array().unwrap();\n                assert_eq!(tools.len(), 2);\n                let tool_names = tools\n                    .iter()\n                    .map(|tool| tool[\"function\"][\"name\"].as_str().unwrap())\n                    .collect::<Vec<_>>();\n                assert_eq!(tool_names, vec![\"alpha_tool\", \"get_weather\"]);\n                ResponseTemplate::new(200)\n                    .insert_header(\"content-type\", \"application/json\")\n                    .set_body_json(self.0.clone())\n            }\n        }\n\n    
    Mock::given(method(\"POST\"))\n            .and(path(\"/chat/completions\"))\n            .respond_with(Validate(response_body.clone()))\n            .mount(&mock_server)\n            .await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4o\")\n            .parallel_tool_calls(Some(true))\n            .build()\n            .unwrap();\n\n        let req = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs([weather_tool, alpha_tool])\n            .build()\n            .unwrap();\n\n        let resp = openai.complete(&req).await.unwrap();\n        assert_eq!(resp.message(), Some(\"Here\"));\n        assert_eq!(resp.usage.unwrap().total_tokens, 5);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_happy_path() {\n        let mock_server = MockServer::start().await;\n\n        let sse_body = \"\\\ndata: {\\\"id\\\":\\\"chatcmpl-123\\\",\\\"created\\\":1,\\\"object\\\":\\\"chat.completion.chunk\\\",\\\"model\\\":\\\"gpt-4o-mini\\\",\\\"choices\\\":[{\\\"index\\\":0,\\\"delta\\\":{\\\"content\\\":\\\"Hi\\\"},\\\"finish_reason\\\":null}]}\\n\\\n\\n\\\ndata: {\\\"id\\\":\\\"chatcmpl-123\\\",\\\"created\\\":1,\\\"object\\\":\\\"chat.completion.chunk\\\",\\\"model\\\":\\\"gpt-4o-mini\\\",\\\"choices\\\":[{\\\"index\\\":0,\\\"delta\\\":{},\\\"finish_reason\\\":\\\"stop\\\"}],\\\"usage\\\":{\\\"prompt_tokens\\\":1,\\\"completion_tokens\\\":2,\\\"total_tokens\\\":3}}\\n\\\n\\n\\\ndata: [DONE]\\n\\n\";\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/chat/completions\"))\n            .respond_with(ResponseTemplate::new(200).set_body_raw(sse_body, \"text/event-stream\"))\n            .mount(&mock_server)\n            
.await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4o-mini\")\n            .build()\n            .unwrap();\n\n        let req = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Hello\".into())])\n            .build()\n            .unwrap();\n\n        let results: Vec<_> = openai.complete_stream(&req).await.collect().await;\n        let last = results.last().unwrap().as_ref().unwrap();\n        assert_eq!(last.message(), Some(\"Hi\"));\n        assert_eq!(last.usage.as_ref().map(|u| u.total_tokens), Some(3));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_delta_only_mode() {\n        let mock_server = MockServer::start().await;\n\n        let sse_body = \"\\\ndata: {\\\"id\\\":\\\"chatcmpl-123\\\",\\\"created\\\":1,\\\"object\\\":\\\"chat.completion.chunk\\\",\\\"model\\\":\\\"gpt-4o-mini\\\",\\\"choices\\\":[{\\\"index\\\":0,\\\"delta\\\":{\\\"content\\\":\\\"Hi\\\"},\\\"finish_reason\\\":null}]}\\n\\\n\\n\\\ndata: {\\\"id\\\":\\\"chatcmpl-123\\\",\\\"created\\\":1,\\\"object\\\":\\\"chat.completion.chunk\\\",\\\"model\\\":\\\"gpt-4o-mini\\\",\\\"choices\\\":[{\\\"index\\\":0,\\\"delta\\\":{},\\\"finish_reason\\\":\\\"stop\\\"}],\\\"usage\\\":{\\\"prompt_tokens\\\":1,\\\"completion_tokens\\\":2,\\\"total_tokens\\\":3}}\\n\\\n\\n\\\ndata: [DONE]\\n\\n\";\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/chat/completions\"))\n            .respond_with(ResponseTemplate::new(200).set_body_raw(sse_body, \"text/event-stream\"))\n            .mount(&mock_server)\n            .await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = 
async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4o-mini\")\n            .stream_full(false)\n            .build()\n            .unwrap();\n\n        let req = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"Hello\".into())])\n            .build()\n            .unwrap();\n\n        let mut stream = openai.complete_stream(&req).await;\n        let first = stream.next().await.unwrap().unwrap();\n        assert!(first.message.is_none());\n        assert!(first.usage.is_none());\n        assert!(\n            first.delta.is_some(),\n            \"delta-only mode should emit delta snapshots\"\n        );\n\n        // A final snapshot should still arrive in delta-only mode; the unwrap\n        // chain asserts its arrival. Whether usage is attached to it is not\n        // guaranteed, so no assertion is made on that field.\n        let _final_snapshot = stream.next().await.unwrap().unwrap();\n        while let Some(item) = stream.next().await {\n            item.expect(\"stream should not error\");\n        }\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_invalid_tool_schema_errors() {\n        let invalid_schema = schemars::Schema::from(true);\n\n        let err = ToolSpec::builder()\n            .name(\"bad\")\n            .description(\"bad schema\")\n            .parameters_schema(invalid_schema)\n            .build()\n            .expect_err(\"invalid tool schemas should be rejected at build time\");\n\n        assert!(\n            err.to_string()\n                .contains(\"tool schema must be a JSON object\")\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_invalid_tool_schema_errors() {\n        let invalid_schema = schemars::Schema::from(true);\n\n        let err = ToolSpec::builder()\n            .name(\"bad\")\n            .description(\"bad schema\")\n            .parameters_schema(invalid_schema)\n            .build()\n  
          .expect_err(\"invalid tool schemas should be rejected at build time\");\n\n        assert!(\n            err.to_string()\n                .contains(\"tool schema must be a JSON object\")\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_rate_limit_transient_error() {\n        let mock_server = MockServer::start().await;\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/chat/completions\"))\n            .respond_with(ResponseTemplate::new(429).set_body_string(\"rate limit\"))\n            .mount(&mock_server)\n            .await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let async_openai = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(async_openai)\n            .default_prompt_model(\"gpt-4o-mini\")\n            .build()\n            .unwrap();\n\n        let req = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .build()\n            .unwrap();\n\n        let mut stream = openai.complete_stream(&req).await;\n        let first = stream.next().await.expect(\"stream yields one item\");\n        assert!(matches!(first, Err(LanguageModelError::TransientError(_))));\n        assert!(stream.next().await.is_none());\n    }\n\n    #[test]\n    fn test_message_to_openai_tool_output_without_content() {\n        let tool_call = ToolCallBuilder::default()\n            .id(\"call_1\")\n            .name(\"noop\")\n            .build()\n            .unwrap();\n        let msg = ChatMessage::ToolOutput(tool_call, ToolOutput::stop());\n\n        let converted = message_to_openai(&msg)\n            .expect(\"conversion succeeds\")\n            .expect(\"message is not filtered\");\n        match converted {\n            async_openai::types::chat::ChatCompletionRequestMessage::Tool(m) => {\n                assert_eq!(m.tool_call_id, 
\"call_1\");\n                assert_eq!(\n                    m.content,\n                    async_openai::types::chat::ChatCompletionRequestToolMessageContent::Text(\n                        String::new()\n                    )\n                );\n            }\n            other => panic!(\"expected tool message, got {other:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_message_to_openai_assistant_with_tool_calls_and_text() {\n        let tool_call = ToolCallBuilder::default()\n            .id(\"call_2\")\n            .name(\"math\")\n            .args(\"{\\\"x\\\":1}\")\n            .build()\n            .unwrap();\n\n        let msg = ChatMessage::new_assistant(Some(\"pending\"), Some(vec![tool_call.clone()]));\n        let converted = message_to_openai(&msg)\n            .expect(\"conversion succeeds\")\n            .expect(\"message is not filtered\");\n        match converted {\n            async_openai::types::chat::ChatCompletionRequestMessage::Assistant(m) => {\n                assert_eq!(m.content.unwrap(), \"pending\".into());\n                let calls = m.tool_calls.unwrap();\n                assert_eq!(calls.len(), 1);\n                let async_openai::types::chat::ChatCompletionMessageToolCalls::Function(call) =\n                    &calls[0]\n                else {\n                    panic!(\"expected function tool call\");\n                };\n                assert_eq!(call.id, \"call_2\");\n                assert_eq!(call.function.name, \"math\");\n                assert_eq!(call.function.arguments, \"{\\\"x\\\":1}\");\n            }\n            other => panic!(\"expected assistant message, got {other:?}\"),\n        }\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_complete_stream_model_missing_errors_immediately() {\n        let openai = OpenAI::builder()\n            .default_embed_model(\"unused\")\n            .build()\n            .expect(\"builder without prompt model still constructs\");\n\n        let 
request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .build()\n            .unwrap();\n\n        let mut stream = openai.complete_stream(&request).await;\n        let first = stream.next().await.expect(\"stream yields one item\");\n        assert!(\n            matches!(first, Err(LanguageModelError::PermanentError(msg)) if msg.to_string().contains(\"Model not set\"))\n        );\n        assert!(stream.next().await.is_none(), \"stream ends after error\");\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_track_completion_invokes_on_usage_callback() {\n        use std::sync::atomic::{AtomicUsize, Ordering};\n\n        let hits = Arc::new(AtomicUsize::new(0));\n        let hits_clone = hits.clone();\n        let openai = OpenAI::builder()\n            .default_prompt_model(\"gpt-4o\")\n            .on_usage(move |_usage| {\n                hits_clone.fetch_add(1, Ordering::SeqCst);\n                Ok(())\n            })\n            .build()\n            .unwrap();\n\n        let usage = UsageBuilder::default()\n            .prompt_tokens(1)\n            .completion_tokens(1)\n            .total_tokens(2)\n            .build()\n            .unwrap();\n\n        openai.track_completion(\n            \"gpt-4o\",\n            Some(&usage),\n            Option::<&()>::None,\n            Option::<&()>::None,\n        );\n\n        // give spawned task a tick\n        tokio::time::sleep(std::time::Duration::from_millis(10)).await;\n        assert_eq!(hits.load(Ordering::SeqCst), 1);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/embed.rs",
    "content": "use async_openai::types::embeddings::{CreateEmbeddingRequest, CreateEmbeddingResponse};\nuse async_trait::async_trait;\n\nuse swiftide_core::{\n    EmbeddingModel, Embeddings,\n    chat_completion::{Usage, errors::LanguageModelError},\n};\n\nuse super::GenericOpenAI;\nuse crate::openai::openai_error_to_language_model_error;\n\n#[async_trait]\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> EmbeddingModel for GenericOpenAI<C>\n{\n    async fn embed(&self, input: Vec<String>) -> Result<Embeddings, LanguageModelError> {\n        let model = self\n            .default_options\n            .embed_model\n            .as_ref()\n            .ok_or(LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n        let request = self\n            .embed_request_defaults()\n            .model(model)\n            .input(&input)\n            .build()\n            .map_err(LanguageModelError::permanent)?;\n\n        tracing::debug!(\n            num_chunks = input.len(),\n            model = &model,\n            \"[Embed] Request to openai\"\n        );\n        let response = self\n            .client\n            .embeddings()\n            .create(request)\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n        let usage = Usage::from(&response.usage);\n\n        // Only track usage for embedding calls, as requests and responses are extremely verbose\n        self.track_completion(\n            model,\n            Some(&usage),\n            None::<&CreateEmbeddingRequest>,\n            None::<&CreateEmbeddingResponse>,\n        );\n\n        let num_embeddings = response.data.len();\n        tracing::debug!(num_embeddings = num_embeddings, \"[Embed] Response openai\");\n\n        // WARN: Naively assumes that the order is preserved. 
Might not always be the case.\n        Ok(response.data.into_iter().map(|d| d.embedding).collect())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::openai::OpenAI;\n    use serde_json::json;\n    use wiremock::{\n        Mock, MockServer, Request, Respond, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    #[test_log::test(tokio::test)]\n    async fn test_embed_returns_error_when_model_missing() {\n        let openai = OpenAI::builder().build().unwrap();\n        let err = openai.embed(vec![\"text\".into()]).await.unwrap_err();\n        assert!(matches!(err, LanguageModelError::PermanentError(_)));\n    }\n\n    #[allow(clippy::items_after_statements)]\n    #[test_log::test(tokio::test)]\n    async fn test_embed_success() {\n        let mock_server = MockServer::start().await;\n\n        let response_body = json!({\n            \"data\": [{\n                \"embedding\": [0.1, 0.2],\n                \"index\": 0,\n                \"object\": \"embedding\"\n            }],\n            \"model\": \"text-embedding-3-small\",\n            \"object\": \"list\",\n            \"usage\": {\"prompt_tokens\": 5, \"total_tokens\": 5}\n        });\n\n        struct ValidateEmbeddingRequest(serde_json::Value);\n\n        impl Respond for ValidateEmbeddingRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let body: serde_json::Value = serde_json::from_slice(&request.body).unwrap();\n                assert_eq!(body[\"model\"], \"text-embedding-3-small\");\n                assert!(body[\"input\"].is_array());\n                ResponseTemplate::new(200).set_body_json(self.0.clone())\n            }\n        }\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/embeddings\"))\n            .respond_with(ValidateEmbeddingRequest(response_body))\n            .mount(&mock_server)\n            .await;\n\n        let config = 
async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let client = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(client)\n            .default_embed_model(\"text-embedding-3-small\")\n            .build()\n            .unwrap();\n\n        let embeddings = openai\n            .embed(vec![\"Hello\".into(), \"World\".into()])\n            .await\n            .unwrap();\n\n        assert_eq!(embeddings.len(), 1);\n        assert_eq!(embeddings[0], vec![0.1, 0.2]);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/mod.rs",
    "content": "//! This module provides integration with `OpenAI`'s API, enabling the use of language models and\n//! embeddings within the Swiftide project. It includes the `OpenAI` struct for managing API clients\n//! and default options for embedding and prompt models. The module is conditionally compiled based\n//! on the \"openai\" feature flag.\n\nuse async_openai::error::{OpenAIError, StreamError};\nuse async_openai::types::chat::CreateChatCompletionRequestArgs;\nuse async_openai::types::embeddings::CreateEmbeddingRequestArgs;\nuse derive_builder::Builder;\nuse reqwest::StatusCode;\nuse reqwest_eventsource::Error as EventSourceError;\nuse std::pin::Pin;\nuse std::sync::Arc;\nuse swiftide_core::chat_completion::Usage;\nuse swiftide_core::chat_completion::errors::LanguageModelError;\n\nmod chat_completion;\nmod embed;\nmod responses_api;\nmod simple_prompt;\nmod structured_prompt;\nmod tool_schema;\n\n// expose type aliases to simplify downstream use of the open ai builder invocations\npub use async_openai::config::AzureConfig;\npub use async_openai::config::OpenAIConfig;\npub use async_openai::types::responses::ReasoningEffort;\n\n#[cfg(feature = \"tiktoken\")]\nuse crate::tiktoken::TikToken;\n#[cfg(feature = \"tiktoken\")]\nuse anyhow::Result;\n#[cfg(feature = \"tiktoken\")]\nuse swiftide_core::Estimatable;\n#[cfg(feature = \"tiktoken\")]\nuse swiftide_core::EstimateTokens;\n\n/// The `OpenAI` struct encapsulates an `OpenAI` client and default options for embedding and prompt\n/// models. It uses the `Builder` pattern for flexible and customizable instantiation.\n///\n/// # Example\n///\n/// ```no_run\n/// # use swiftide_integrations::openai::{OpenAI, Options};\n/// # use swiftide_integrations::openai::OpenAIConfig;\n///\n/// // Create an OpenAI client with default options. 
The client will use the OPENAI_API_KEY environment variable.\n/// let openai = OpenAI::builder()\n///     .default_embed_model(\"text-embedding-3-small\")\n///     .default_prompt_model(\"gpt-4\")\n///     .build().unwrap();\n///\n/// // Create an OpenAI client with a custom api key.\n/// let openai = OpenAI::builder()\n///     .default_embed_model(\"text-embedding-3-small\")\n///     .default_prompt_model(\"gpt-4\")\n///     .client(async_openai::Client::with_config(async_openai::config::OpenAIConfig::default().with_api_key(\"my-api-key\")))\n///     .build().unwrap();\n///\n/// // Create an OpenAI client with custom options\n/// let openai = OpenAI::builder()\n///     .default_embed_model(\"text-embedding-3-small\")\n///     .default_prompt_model(\"gpt-4\")\n///     .default_options(\n///         Options::builder()\n///           .temperature(1.0)\n///           .parallel_tool_calls(false)\n///           .user(\"MyUserId\")\n///     )\n///     .build().unwrap();\n/// ```\npub type OpenAI = GenericOpenAI<OpenAIConfig>;\npub type OpenAIBuilder = GenericOpenAIBuilder<OpenAIConfig>;\n\n#[derive(Builder, Clone)]\n#[builder(setter(into, strip_option))]\n/// Generic client for `OpenAI` APIs.\npub struct GenericOpenAI<\n    C: async_openai::config::Config + Default = async_openai::config::OpenAIConfig,\n> {\n    /// The `OpenAI` client, wrapped in an `Arc` for thread-safe reference counting.\n    /// Defaults to a new instance of `async_openai::Client`.\n    #[builder(\n        default = \"Arc::new(async_openai::Client::<C>::default())\",\n        setter(custom)\n    )]\n    client: Arc<async_openai::Client<C>>,\n\n    /// Default options for embedding and prompt models.\n    #[builder(default, setter(custom))]\n    pub(crate) default_options: Options,\n\n    #[cfg(feature = \"tiktoken\")]\n    #[cfg_attr(feature = \"tiktoken\", builder(default))]\n    pub(crate) tiktoken: TikToken,\n\n    /// Convenience option to stream the full response. 
Defaults to true, because nobody has time\n    /// to reconstruct the delta. Disabling this will make the streamed content only return the\n    /// delta, for when performance matters. This only has effect when streaming is enabled.\n    #[builder(default = true)]\n    pub stream_full: bool,\n\n    #[cfg(feature = \"metrics\")]\n    #[builder(default)]\n    /// Optional metadata to attach to metrics emitted by this client.\n    metric_metadata: Option<std::collections::HashMap<String, String>>,\n\n    /// Opt-in flag to use `OpenAI`'s Responses API instead of the legacy Chat Completions API.\n    #[builder(default)]\n    pub(crate) use_responses_api: bool,\n\n    /// A callback function that is called when usage information is available.\n    #[builder(default, setter(custom))]\n    #[allow(clippy::type_complexity)]\n    on_usage: Option<\n        Arc<\n            dyn for<'a> Fn(\n                    &'a Usage,\n                ) -> Pin<\n                    Box<dyn std::future::Future<Output = anyhow::Result<()>> + Send + 'a>,\n                > + Send\n                + Sync,\n        >,\n    >,\n}\n\nimpl<C: async_openai::config::Config + Default + std::fmt::Debug> std::fmt::Debug\n    for GenericOpenAI<C>\n{\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"GenericOpenAI\")\n            .field(\"client\", &self.client)\n            .field(\"default_options\", &self.default_options)\n            .field(\"stream_full\", &self.stream_full)\n            .field(\"use_responses_api\", &self.use_responses_api)\n            .finish_non_exhaustive()\n    }\n}\n\n/// The `Options` struct holds configuration options for the `OpenAI` client.\n/// It includes optional fields for specifying the embedding and prompt models.\n#[derive(Debug, Clone, Builder, Default)]\n#[builder(setter(strip_option))]\npub struct Options {\n    /// The default embedding model to use, if specified.\n    #[builder(default, setter(into))]\n    pub 
embed_model: Option<String>,\n    /// The default prompt model to use, if specified.\n    #[builder(default, setter(into))]\n    pub prompt_model: Option<String>,\n\n    #[builder(default)]\n    /// Option to enable or disable parallel tool calls for completions.\n    ///\n    /// At this moment, o1 and o3-mini do not support it and should be set to `None`.\n    pub parallel_tool_calls: Option<bool>,\n\n    /// Maximum number of tokens to generate in the completion.\n    ///\n    /// By default, the limit is disabled\n    #[builder(default)]\n    pub max_completion_tokens: Option<u32>,\n\n    /// Temperature setting for the model.\n    #[builder(default)]\n    pub temperature: Option<f32>,\n\n    /// Reasoning effort for reasoning models.\n    #[builder(default, setter(into))]\n    pub reasoning_effort: Option<ReasoningEffort>,\n\n    /// Enable reasoning summary/encrypted content handling for the Responses API.\n    ///\n    /// This is enabled by default, but only takes effect when `reasoning_effort` is set.\n    /// Disable it with `reasoning_features(false)` if you do not want summaries or encrypted\n    /// reasoning stored and replayed.\n    ///\n    /// Note: reasoning summaries/encrypted content require an `OpenAI` organization that is\n    /// verified for reasoning access; unverified orgs may receive no summaries.\n    #[builder(default, setter(into))]\n    pub reasoning_features: Option<bool>,\n\n    /// This feature is in Beta. If specified, our system will make a best effort to sample\n    /// deterministically, such that repeated requests with the same seed and parameters should\n    /// return the same result. Determinism is not guaranteed, and you should refer to the\n    /// `system_fingerprint` response parameter to monitor changes in the backend.\n    #[builder(default)]\n    pub seed: Option<i64>,\n\n    /// Number between -2.0 and 2.0. 
Positive values penalize new tokens based on whether they\n    /// appear in the text so far, increasing the model’s likelihood to talk about new topics.\n    #[builder(default)]\n    pub presence_penalty: Option<f32>,\n\n    /// Developer-defined tags and values used for filtering completions in the dashboard.\n    #[builder(default, setter(into))]\n    pub metadata: Option<serde_json::Value>,\n\n    /// A unique identifier representing your end-user, which can help `OpenAI` to monitor and\n    /// detect abuse.\n    #[builder(default, setter(into))]\n    pub user: Option<String>,\n\n    #[builder(default)]\n    /// The number of dimensions the resulting output embeddings should have. Only supported in\n    /// text-embedding-3 and later models.\n    pub dimensions: Option<u32>,\n}\n\nimpl Options {\n    /// Creates a new `OptionsBuilder` for constructing `Options` instances.\n    pub fn builder() -> OptionsBuilder {\n        OptionsBuilder::default()\n    }\n\n    /// Extends options with other options\n    pub fn merge(&mut self, other: &Options) {\n        if let Some(embed_model) = &other.embed_model {\n            self.embed_model = Some(embed_model.clone());\n        }\n        if let Some(prompt_model) = &other.prompt_model {\n            self.prompt_model = Some(prompt_model.clone());\n        }\n        if let Some(parallel_tool_calls) = other.parallel_tool_calls {\n            self.parallel_tool_calls = Some(parallel_tool_calls);\n        }\n        if let Some(max_completion_tokens) = other.max_completion_tokens {\n            self.max_completion_tokens = Some(max_completion_tokens);\n        }\n        if let Some(temperature) = other.temperature {\n            self.temperature = Some(temperature);\n        }\n        if let Some(reasoning_effort) = &other.reasoning_effort {\n            self.reasoning_effort = Some(reasoning_effort.clone());\n        }\n        if let Some(reasoning_features) = other.reasoning_features {\n            
self.reasoning_features = Some(reasoning_features);\n        }\n        if let Some(seed) = other.seed {\n            self.seed = Some(seed);\n        }\n        if let Some(presence_penalty) = other.presence_penalty {\n            self.presence_penalty = Some(presence_penalty);\n        }\n        if let Some(metadata) = &other.metadata {\n            self.metadata = Some(metadata.clone());\n        }\n        if let Some(user) = &other.user {\n            self.user = Some(user.clone());\n        }\n        if let Some(dimensions) = other.dimensions {\n            self.dimensions = Some(dimensions);\n        }\n    }\n}\n\nimpl From<OptionsBuilder> for Options {\n    fn from(value: OptionsBuilder) -> Self {\n        Self {\n            embed_model: value.embed_model.flatten(),\n            prompt_model: value.prompt_model.flatten(),\n            parallel_tool_calls: value.parallel_tool_calls.flatten(),\n            max_completion_tokens: value.max_completion_tokens.flatten(),\n            temperature: value.temperature.flatten(),\n            reasoning_effort: value.reasoning_effort.flatten(),\n            reasoning_features: value.reasoning_features.flatten(),\n            presence_penalty: value.presence_penalty.flatten(),\n            seed: value.seed.flatten(),\n            metadata: value.metadata.flatten(),\n            user: value.user.flatten(),\n            dimensions: value.dimensions.flatten(),\n        }\n    }\n}\n\nimpl From<&mut OptionsBuilder> for Options {\n    fn from(value: &mut OptionsBuilder) -> Self {\n        let value = value.clone();\n        Self {\n            embed_model: value.embed_model.flatten(),\n            prompt_model: value.prompt_model.flatten(),\n            parallel_tool_calls: value.parallel_tool_calls.flatten(),\n            max_completion_tokens: value.max_completion_tokens.flatten(),\n            temperature: value.temperature.flatten(),\n            reasoning_effort: value.reasoning_effort.flatten(),\n            
reasoning_features: value.reasoning_features.flatten(),\n            presence_penalty: value.presence_penalty.flatten(),\n            seed: value.seed.flatten(),\n            metadata: value.metadata.flatten(),\n            user: value.user.flatten(),\n            dimensions: value.dimensions.flatten(),\n        }\n    }\n}\n\nimpl OpenAI {\n    /// Creates a new `OpenAIBuilder` for constructing `OpenAI` instances.\n    pub fn builder() -> OpenAIBuilder {\n        let mut builder = OpenAIBuilder::default();\n        builder.default_options(Options {\n            reasoning_features: Some(true),\n            ..Default::default()\n        });\n        builder\n    }\n}\n\nimpl<C: async_openai::config::Config + Default + Sync + Send + std::fmt::Debug>\n    GenericOpenAIBuilder<C>\n{\n    /// Adds a callback function that will be called when usage information is available.\n    pub fn on_usage<F>(&mut self, func: F) -> &mut Self\n    where\n        F: Fn(&Usage) -> anyhow::Result<()> + Send + Sync + 'static,\n    {\n        let func = Arc::new(func);\n        self.on_usage = Some(Some(Arc::new(move |usage: &Usage| {\n            let func = func.clone();\n            Box::pin(async move { func(usage) })\n        })));\n\n        self\n    }\n\n    /// Adds an asynchronous callback function that will be called when usage information is\n    /// available.\n    pub fn on_usage_async<F>(&mut self, func: F) -> &mut Self\n    where\n        F: for<'a> Fn(\n                &'a Usage,\n            )\n                -> Pin<Box<dyn std::future::Future<Output = anyhow::Result<()>> + Send + 'a>>\n            + Send\n            + Sync\n            + 'static,\n    {\n        let func = Arc::new(func);\n        self.on_usage = Some(Some(Arc::new(move |usage: &Usage| {\n            let func = func.clone();\n            Box::pin(async move { func(usage).await })\n        })));\n\n        self\n    }\n    /// Sets the `OpenAI` client for the `OpenAI` instance.\n    ///\n    /// # 
Parameters\n    /// - `client`: The `OpenAI` client to set.\n    ///\n    /// # Returns\n    /// A mutable reference to the `OpenAIBuilder`.\n    pub fn client(&mut self, client: async_openai::Client<C>) -> &mut Self {\n        self.client = Some(Arc::new(client));\n        self\n    }\n\n    /// Sets the default embedding model for the `OpenAI` instance.\n    ///\n    /// # Parameters\n    /// - `model`: The embedding model to set.\n    ///\n    /// # Returns\n    /// A mutable reference to the `OpenAIBuilder`.\n    pub fn default_embed_model(&mut self, model: impl Into<String>) -> &mut Self {\n        if let Some(options) = self.default_options.as_mut() {\n            options.embed_model = Some(model.into());\n        } else {\n            self.default_options = Some(Options {\n                embed_model: Some(model.into()),\n                ..Default::default()\n            });\n        }\n        self\n    }\n\n    /// Sets the `user` field used by `OpenAI` to monitor and detect usage and abuse.\n    pub fn for_end_user(&mut self, user: impl Into<String>) -> &mut Self {\n        if let Some(options) = self.default_options.as_mut() {\n            options.user = Some(user.into());\n        } else {\n            self.default_options = Some(Options {\n                user: Some(user.into()),\n                ..Default::default()\n            });\n        }\n        self\n    }\n\n    /// Enable or disable parallel tool calls for completions.\n    ///\n    /// Note that currently reasoning models do not support parallel tool calls\n    ///\n    /// Defaults to `true`\n    pub fn parallel_tool_calls(&mut self, parallel_tool_calls: Option<bool>) -> &mut Self {\n        if let Some(options) = self.default_options.as_mut() {\n            options.parallel_tool_calls = parallel_tool_calls;\n        } else {\n            self.default_options = Some(Options {\n                parallel_tool_calls,\n                ..Default::default()\n            });\n        }\n        
self\n    }\n\n    /// Sets the default prompt model for the `OpenAI` instance.\n    ///\n    /// # Parameters\n    /// - `model`: The prompt model to set.\n    ///\n    /// # Returns\n    /// A mutable reference to the `OpenAIBuilder`.\n    pub fn default_prompt_model(&mut self, model: impl Into<String>) -> &mut Self {\n        if let Some(options) = self.default_options.as_mut() {\n            options.prompt_model = Some(model.into());\n        } else {\n            self.default_options = Some(Options {\n                prompt_model: Some(model.into()),\n                ..Default::default()\n            });\n        }\n        self\n    }\n\n    /// Sets the default options to use for requests to the `OpenAI` API.\n    ///\n    /// Merges with any existing options\n    pub fn default_options(&mut self, options: impl Into<Options>) -> &mut Self {\n        if let Some(existing_options) = self.default_options.as_mut() {\n            existing_options.merge(&options.into());\n        } else {\n            self.default_options = Some(options.into());\n        }\n        self\n    }\n}\n\nimpl<C: async_openai::config::Config + Default> GenericOpenAI<C> {\n    /// Estimates the number of tokens for implementors of the `Estimatable` trait.\n    ///\n    /// I.e. 
`String`, `ChatMessage`, etc.\n    ///\n    /// # Errors\n    ///\n    /// Errors if tokenization fails in any way\n    #[cfg(feature = \"tiktoken\")]\n    pub async fn estimate_tokens(&self, value: impl Estimatable) -> Result<usize> {\n        self.tiktoken.estimate(value).await\n    }\n\n    pub fn with_default_prompt_model(&mut self, model: impl Into<String>) -> &mut Self {\n        self.default_options = Options {\n            prompt_model: Some(model.into()),\n            ..self.default_options.clone()\n        };\n        self\n    }\n\n    pub fn with_default_embed_model(&mut self, model: impl Into<String>) -> &mut Self {\n        self.default_options = Options {\n            embed_model: Some(model.into()),\n            ..self.default_options.clone()\n        };\n        self\n    }\n\n    /// Retrieve a reference to the inner `OpenAI` client.\n    pub fn client(&self) -> &Arc<async_openai::Client<C>> {\n        &self.client\n    }\n\n    /// Retrieve a reference to the default options for the `OpenAI` instance.\n    pub fn options(&self) -> &Options {\n        &self.default_options\n    }\n\n    /// Retrieve a mutable reference to the default options for the `OpenAI` instance.\n    pub fn options_mut(&mut self) -> &mut Options {\n        &mut self.default_options\n    }\n\n    /// Returns whether the Responses API is enabled for this client.\n    pub fn is_responses_api_enabled(&self) -> bool {\n        self.use_responses_api\n    }\n\n    fn chat_completion_request_defaults(&self) -> CreateChatCompletionRequestArgs {\n        let mut args = CreateChatCompletionRequestArgs::default();\n\n        let options = &self.default_options;\n\n        if let Some(parallel_tool_calls) = options.parallel_tool_calls {\n            args.parallel_tool_calls(parallel_tool_calls);\n        }\n\n        if let Some(max_tokens) = options.max_completion_tokens {\n            args.max_completion_tokens(max_tokens);\n        }\n\n        if let Some(temperature) = 
options.temperature {\n            args.temperature(temperature);\n        }\n\n        if let Some(seed) = options.seed {\n            args.seed(seed);\n        }\n\n        if let Some(presence_penalty) = options.presence_penalty {\n            args.presence_penalty(presence_penalty);\n        }\n\n        if let Some(metadata) = &options.metadata {\n            args.metadata(metadata.clone());\n        }\n\n        if let Some(user) = &options.user {\n            args.user(user.clone());\n        }\n\n        args\n    }\n\n    fn embed_request_defaults(&self) -> CreateEmbeddingRequestArgs {\n        let mut args = CreateEmbeddingRequestArgs::default();\n\n        let options = &self.default_options;\n\n        if let Some(user) = &options.user {\n            args.user(user.clone());\n        }\n\n        if let Some(dimensions) = options.dimensions {\n            args.dimensions(dimensions);\n        }\n\n        args\n    }\n}\n\npub fn openai_error_to_language_model_error(e: OpenAIError) -> LanguageModelError {\n    match e {\n        OpenAIError::ApiError(api_error) => {\n            // If the response is an ApiError, it could be a context length exceeded error\n            if api_error.code == Some(\"context_length_exceeded\".to_string()) {\n                LanguageModelError::context_length_exceeded(OpenAIError::ApiError(api_error))\n            } else {\n                LanguageModelError::permanent(OpenAIError::ApiError(api_error))\n            }\n        }\n        OpenAIError::Reqwest(e) => {\n            // async_openai passes any network errors as reqwest errors, so we just assume they are\n            // recoverable\n            LanguageModelError::transient(e)\n        }\n        OpenAIError::JSONDeserialize(_, _) => {\n            // OpenAI generated a non-json response, probably a temporary problem on their side\n            // (i.e. 
reverse proxy can't find an available backend)\n            LanguageModelError::transient(e)\n        }\n        OpenAIError::StreamError(stream_error) => {\n            // Note that this will _retry_ the stream. We have to assume that the stream just\n            // started if a 429 happens. For future readers, internally the streaming crate\n            // (eventsource) already applies backoff.\n            if is_rate_limited_stream_error(&stream_error) {\n                LanguageModelError::transient(OpenAIError::StreamError(stream_error))\n            } else {\n                LanguageModelError::permanent(OpenAIError::StreamError(stream_error))\n            }\n        }\n        OpenAIError::FileSaveError(_)\n        | OpenAIError::FileReadError(_)\n        | OpenAIError::InvalidArgument(_) => LanguageModelError::permanent(e),\n    }\n}\n\nfn is_rate_limited_stream_error(error: &StreamError) -> bool {\n    match error {\n        StreamError::ReqwestEventSource(inner) => match inner {\n            EventSourceError::InvalidStatusCode(status, _) => {\n                *status == StatusCode::TOO_MANY_REQUESTS\n            }\n            EventSourceError::Transport(source) => {\n                source.status() == Some(StatusCode::TOO_MANY_REQUESTS)\n            }\n            _ => false,\n        },\n        StreamError::UnknownEvent(_) | StreamError::EventStream(_) => false,\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use async_openai::error::{ApiError, OpenAIError, StreamError};\n    use eventsource_stream::Event;\n\n    /// test default embed model\n    #[test]\n    fn test_default_embed_and_prompt_model() {\n        let openai: OpenAI = OpenAI::builder()\n            .default_embed_model(\"gpt-3\")\n            .default_prompt_model(\"gpt-4\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openai.default_options.embed_model,\n            Some(\"gpt-3\".to_string())\n        );\n        assert_eq!(\n            
openai.default_options.prompt_model,\n            Some(\"gpt-4\".to_string())\n        );\n\n        let openai: OpenAI = OpenAI::builder()\n            .default_prompt_model(\"gpt-4\")\n            .default_embed_model(\"gpt-3\")\n            .build()\n            .unwrap();\n        assert_eq!(\n            openai.default_options.prompt_model,\n            Some(\"gpt-4\".to_string())\n        );\n        assert_eq!(\n            openai.default_options.embed_model,\n            Some(\"gpt-3\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_use_responses_api_flag() {\n        let openai: OpenAI = OpenAI::builder().use_responses_api(true).build().unwrap();\n\n        assert!(openai.is_responses_api_enabled());\n    }\n\n    #[test]\n    fn test_context_length_exceeded_error() {\n        // Create an API error with the context_length_exceeded code\n        let api_error = ApiError {\n            message: \"This model's maximum context length is 8192 tokens\".to_string(),\n            r#type: Some(\"invalid_request_error\".to_string()),\n            param: Some(\"messages\".to_string()),\n            code: Some(\"context_length_exceeded\".to_string()),\n        };\n\n        let openai_error = OpenAIError::ApiError(api_error);\n        let result = openai_error_to_language_model_error(openai_error);\n\n        // Verify it's categorized as ContextLengthExceeded\n        match result {\n            LanguageModelError::ContextLengthExceeded(_) => {} // Expected\n            _ => panic!(\"Expected ContextLengthExceeded error, got {result:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_api_error_permanent() {\n        // Create a generic API error (not context length exceeded)\n        let api_error = ApiError {\n            message: \"Invalid API key\".to_string(),\n            r#type: Some(\"invalid_request_error\".to_string()),\n            param: Some(\"api_key\".to_string()),\n            code: Some(\"invalid_api_key\".to_string()),\n        };\n\n   
     let openai_error = OpenAIError::ApiError(api_error);\n        let result = openai_error_to_language_model_error(openai_error);\n\n        // Verify it's categorized as PermanentError\n        match result {\n            LanguageModelError::PermanentError(_) => {} // Expected\n            _ => panic!(\"Expected PermanentError, got {result:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_file_save_error_is_permanent() {\n        // Create a file save error\n        let openai_error = OpenAIError::FileSaveError(\"Failed to save file\".to_string());\n        let result = openai_error_to_language_model_error(openai_error);\n\n        // Verify it's categorized as PermanentError\n        match result {\n            LanguageModelError::PermanentError(_) => {} // Expected\n            _ => panic!(\"Expected PermanentError, got {result:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_file_read_error_is_permanent() {\n        // Create a file read error\n        let openai_error = OpenAIError::FileReadError(\"Failed to read file\".to_string());\n        let result = openai_error_to_language_model_error(openai_error);\n\n        // Verify it's categorized as PermanentError\n        match result {\n            LanguageModelError::PermanentError(_) => {} // Expected\n            _ => panic!(\"Expected PermanentError, got {result:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_stream_error_is_permanent() {\n        // Create a stream error\n        let openai_error =\n            OpenAIError::StreamError(Box::new(StreamError::UnknownEvent(Event::default())));\n        let result = openai_error_to_language_model_error(openai_error);\n\n        // Verify it's categorized as PermanentError\n        match result {\n            LanguageModelError::PermanentError(_) => {} // Expected\n            _ => panic!(\"Expected PermanentError, got {result:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_invalid_argument_is_permanent() {\n        // Create an invalid 
argument error\n        let openai_error = OpenAIError::InvalidArgument(\"Invalid argument\".to_string());\n        let result = openai_error_to_language_model_error(openai_error);\n\n        // Verify it's categorized as PermanentError\n        match result {\n            LanguageModelError::PermanentError(_) => {} // Expected\n            _ => panic!(\"Expected PermanentError, got {result:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_options_merge_overrides_set_fields() {\n        let mut base = Options::builder()\n            .prompt_model(\"a\")\n            .temperature(0.1)\n            .build()\n            .unwrap();\n\n        let overlay = Options::builder()\n            .prompt_model(\"b\")\n            .presence_penalty(0.2)\n            .build()\n            .unwrap();\n\n        base.merge(&overlay);\n\n        assert_eq!(base.prompt_model.as_deref(), Some(\"b\"));\n        assert_eq!(base.temperature, Some(0.1));\n        assert_eq!(base.presence_penalty, Some(0.2));\n    }\n\n    #[test]\n    #[allow(deprecated)]\n    fn test_chat_completion_request_defaults_omits_reasoning_effort() {\n        let openai: OpenAI = OpenAI::builder()\n            .default_options(\n                Options::builder()\n                    .parallel_tool_calls(true)\n                    .max_completion_tokens(42)\n                    .temperature(0.3)\n                    .reasoning_effort(ReasoningEffort::Low)\n                    .seed(7)\n                    .presence_penalty(1.1)\n                    .metadata(serde_json::json!({\"tag\": \"demo\"}))\n                    .user(\"user-1\"),\n            )\n            .build()\n            .unwrap();\n\n        let built = openai\n            .chat_completion_request_defaults()\n            .messages(Vec::new())\n            .model(\"gpt-4o\")\n            .build()\n            .unwrap();\n\n        assert_eq!(built.parallel_tool_calls, Some(true));\n        assert_eq!(built.max_completion_tokens, Some(42));\n   
     assert_eq!(built.temperature, Some(0.3));\n        assert_eq!(built.reasoning_effort, None);\n        assert_eq!(built.seed, Some(7));\n        assert_eq!(built.presence_penalty, Some(1.1));\n        assert_eq!(\n            built.metadata,\n            Some(async_openai::types::Metadata::from(\n                serde_json::json!({\"tag\": \"demo\"})\n            ))\n        );\n        assert_eq!(built.user, Some(\"user-1\".to_string()));\n    }\n\n    #[test]\n    #[allow(deprecated)]\n    fn test_embed_request_defaults_sets_user_and_dimensions() {\n        let openai: OpenAI = OpenAI::builder()\n            .default_options(Options::builder().user(\"end-user\").dimensions(128))\n            .build()\n            .unwrap();\n\n        let built = openai\n            .embed_request_defaults()\n            .model(\"text-embedding-3-small\")\n            .input(\"hello\")\n            .build()\n            .unwrap();\n        assert_eq!(built.user, Some(\"end-user\".to_string()));\n        assert_eq!(built.dimensions, Some(128));\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/responses_api.rs",
    "content": "use std::collections::HashMap;\nuse std::pin::Pin;\nuse std::task::{Context, Poll};\n\nuse anyhow::{Context as _, Result};\nuse async_openai::types::responses::{\n    CreateResponse, CreateResponseArgs, EasyInputContent, EasyInputMessageArgs, FunctionCallOutput,\n    FunctionCallOutputItemParam, FunctionTool, FunctionToolCall, ImageDetail, IncludeEnum,\n    InputContent, InputFileArgs, InputImageContent, InputItem, InputParam, InputTextContent,\n    MessageType, OutputContent, OutputItem, OutputMessage, OutputMessageContent, OutputStatus,\n    ReasoningArgs, ReasoningSummary, Response, ResponseFormatJsonSchema, ResponseStream,\n    ResponseStreamEvent, ResponseTextParam, Role, Status, TextResponseFormatConfiguration, Tool,\n    ToolChoiceOptions, ToolChoiceParam,\n};\nuse base64::Engine as _;\nuse futures_util::Stream;\nuse swiftide_core::chat_completion::{\n    ChatCompletionRequest, ChatCompletionResponse, ChatMessage, ChatMessageContentPart,\n    ChatMessageContentSource, ReasoningItem, ToolCall, ToolOutput, ToolSpec, Usage,\n};\n\nuse super::tool_schema::OpenAiToolSchema;\nuse super::{GenericOpenAI, openai_error_to_language_model_error};\nuse crate::openai::LanguageModelError;\n\ntype LmResult<T> = Result<T, LanguageModelError>;\n\npub(super) fn build_responses_request_from_chat<C>(\n    client: &GenericOpenAI<C>,\n    request: &ChatCompletionRequest<'_>,\n) -> LmResult<CreateResponse>\nwhere\n    C: async_openai::config::Config + Clone + Default,\n{\n    let model = client\n        .options()\n        .prompt_model\n        .as_ref()\n        .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n    let mut args = base_request_args(client, model)?;\n\n    let options = client.options();\n    let include_reasoning = options.reasoning_effort.is_some();\n    let input_items = chat_messages_to_input_items(request.messages(), include_reasoning)?;\n    args.input(InputParam::Items(input_items));\n\n    if 
!request.tools_spec().is_empty() {\n        let tools = request\n            .tools_spec()\n            .iter()\n            .map(tool_spec_to_responses_tool)\n            .collect::<Result<Vec<_>>>()\n            .map_err(LanguageModelError::permanent)?;\n\n        args.tools(tools);\n        if client.options().parallel_tool_calls.unwrap_or(true) {\n            args.tool_choice(ToolChoiceParam::Mode(ToolChoiceOptions::Auto));\n        }\n    }\n\n    args.build().map_err(openai_error_to_language_model_error)\n}\n\nfn base_request_args<C>(client: &GenericOpenAI<C>, model: &str) -> LmResult<CreateResponseArgs>\nwhere\n    C: async_openai::config::Config + Clone + Default,\n{\n    let mut args = CreateResponseArgs::default();\n    args.model(model);\n\n    let options = client.options();\n\n    if let Some(parallel_tool_calls) = options.parallel_tool_calls {\n        args.parallel_tool_calls(parallel_tool_calls);\n    }\n\n    if let Some(max_tokens) = options.max_completion_tokens {\n        args.max_output_tokens(max_tokens);\n    }\n\n    if let Some(temperature) = options.temperature {\n        args.temperature(temperature);\n    }\n\n    if let Some(reasoning_effort) = options.reasoning_effort.clone() {\n        let mut reasoning = ReasoningArgs::default();\n        reasoning.effort(reasoning_effort);\n\n        if options.reasoning_features.unwrap_or(true) {\n            reasoning.summary(ReasoningSummary::Auto);\n            args.include(vec![IncludeEnum::ReasoningEncryptedContent]);\n        }\n\n        let reasoning = reasoning.build().map_err(LanguageModelError::permanent)?;\n        args.reasoning(reasoning);\n\n        // Reasoning models should always be stateless in Responses API usage.\n        args.store(false);\n    }\n\n    if let Some(seed) = options.seed {\n        tracing::warn!(\n            seed,\n            \"`seed` is not supported by the Responses API; ignoring\"\n        );\n    }\n\n    if let Some(presence_penalty) = 
options.presence_penalty {\n        tracing::warn!(\n            presence_penalty,\n            \"`presence_penalty` is not supported by the Responses API; ignoring\"\n        );\n    }\n\n    if let Some(metadata) = options.metadata.as_ref() {\n        if let Some(converted) = convert_metadata(metadata) {\n            args.metadata(converted);\n        } else {\n            tracing::warn!(\"Responses metadata must be a flat map of string values; skipping\");\n        }\n    }\n\n    Ok(args)\n}\n\nfn convert_metadata(value: &serde_json::Value) -> Option<HashMap<String, String>> {\n    match value {\n        serde_json::Value::Object(map) => {\n            let mut out = HashMap::with_capacity(map.len());\n            for (key, val) in map {\n                if let Some(s) = val.as_str() {\n                    out.insert(key.clone(), s.to_owned());\n                } else {\n                    return None;\n                }\n            }\n            Some(out)\n        }\n        _ => None,\n    }\n}\n\nfn tool_spec_to_responses_tool(spec: &ToolSpec) -> Result<Tool> {\n    let parameters = OpenAiToolSchema::try_from(spec)\n        .context(\"tool schema must be OpenAI compatible\")?\n        .into_value();\n\n    let function = FunctionTool {\n        name: spec.name.clone(),\n        parameters: Some(parameters),\n        strict: Some(true),\n        description: Some(spec.description.clone()),\n    };\n\n    Ok(Tool::Function(function))\n}\n\nfn chat_messages_to_input_items(\n    messages: &[ChatMessage],\n    include_reasoning: bool,\n) -> LmResult<Vec<InputItem>> {\n    let mut items = Vec::with_capacity(messages.len());\n\n    for message in messages {\n        match message {\n            ChatMessage::System(content) => {\n                items.push(message_item(Role::System, content.clone())?);\n            }\n            ChatMessage::User(content) => {\n                items.push(message_item(Role::User, content.clone())?);\n            }\n            
ChatMessage::UserWithParts(parts) => {\n                let content = user_parts_to_easy_input_content(parts)?;\n                items.push(message_item_with_content(Role::User, content)?);\n            }\n            ChatMessage::Assistant(content, tool_calls) => {\n                if let Some(text) = content.as_ref() {\n                    items.push(message_item(Role::Assistant, text.clone())?);\n                }\n\n                if let Some(tool_calls) = tool_calls.as_ref() {\n                    for tool_call in tool_calls {\n                        let call_id = normalize_responses_function_call_id(tool_call.id());\n                        let arguments = tool_call.args().unwrap_or_default().to_owned();\n\n                        let function_call = FunctionToolCall {\n                            arguments,\n                            call_id: call_id.clone(),\n                            name: tool_call.name().to_owned(),\n                            id: None,\n                            status: Some(OutputStatus::InProgress),\n                        };\n\n                        items.push(InputItem::Item(\n                            async_openai::types::responses::Item::FunctionCall(function_call),\n                        ));\n                    }\n                }\n            }\n            ChatMessage::ToolOutput(tool_call, tool_output) => {\n                let output = match tool_output {\n                    ToolOutput::FeedbackRequired(value)\n                    | ToolOutput::Stop(value)\n                    | ToolOutput::AgentFailed(value) => FunctionCallOutput::Text(\n                        value\n                            .as_ref()\n                            .map_or_else(String::new, serde_json::Value::to_string),\n                    ),\n                    ToolOutput::Text(text) | ToolOutput::Fail(text) => {\n                        FunctionCallOutput::Text(text.clone())\n                    }\n                    _ => 
FunctionCallOutput::Text(String::new()),\n                };\n\n                let function_output = FunctionCallOutputItemParam {\n                    call_id: normalize_responses_function_call_id(tool_call.id()),\n                    output,\n                    id: None,\n                    status: Some(OutputStatus::Completed),\n                };\n\n                items.push(InputItem::Item(\n                    async_openai::types::responses::Item::FunctionCallOutput(function_output),\n                ));\n            }\n            ChatMessage::Reasoning(item) => {\n                if !include_reasoning\n                    || item.encrypted_content.is_none()\n                    || item\n                        .encrypted_content\n                        .as_ref()\n                        .is_some_and(String::is_empty)\n                {\n                    continue;\n                }\n\n                let reasoning_item = async_openai::types::responses::ReasoningItem {\n                    id: item.id.clone(),\n                    summary: Vec::new(),\n                    content: None,\n                    encrypted_content: item.encrypted_content.clone(),\n                    status: None,\n                };\n\n                items.push(InputItem::Item(\n                    async_openai::types::responses::Item::Reasoning(reasoning_item),\n                ));\n            }\n            ChatMessage::Summary(content) => {\n                items.push(message_item(Role::Assistant, content.clone())?);\n            }\n        }\n    }\n\n    Ok(items)\n}\n\nfn message_item(role: Role, content: String) -> LmResult<InputItem> {\n    message_item_with_content(role, EasyInputContent::Text(content))\n}\n\nfn message_item_with_content(role: Role, content: EasyInputContent) -> LmResult<InputItem> {\n    Ok(InputItem::EasyMessage(\n        EasyInputMessageArgs::default()\n            .r#type(MessageType::Message)\n            .role(role)\n            
.content(content)\n            .build()\n            .map_err(LanguageModelError::permanent)?,\n    ))\n}\n\nfn user_parts_to_easy_input_content(\n    parts: &[ChatMessageContentPart],\n) -> LmResult<EasyInputContent> {\n    let mapped = parts\n        .iter()\n        .map(part_to_input_content)\n        .collect::<LmResult<Vec<_>>>()?;\n    Ok(EasyInputContent::ContentList(mapped))\n}\n\nfn part_to_input_content(part: &ChatMessageContentPart) -> LmResult<InputContent> {\n    Ok(match part {\n        ChatMessageContentPart::Text { text } => {\n            InputContent::from(InputTextContent::from(text.as_str()))\n        }\n        ChatMessageContentPart::Image { source, .. } => {\n            let image = match source {\n                ChatMessageContentSource::Url { url } => InputImageContent {\n                    detail: ImageDetail::default(),\n                    file_id: None,\n                    image_url: Some(url.clone()),\n                },\n                ChatMessageContentSource::FileId { file_id } => InputImageContent {\n                    detail: ImageDetail::default(),\n                    file_id: Some(file_id.clone()),\n                    image_url: None,\n                },\n                ChatMessageContentSource::Bytes { data, media_type } => {\n                    let media_type = media_type.as_deref().unwrap_or(\"application/octet-stream\");\n                    let encoded = base64::engine::general_purpose::STANDARD.encode(data);\n                    InputImageContent {\n                        detail: ImageDetail::default(),\n                        file_id: None,\n                        image_url: Some(format!(\"data:{media_type};base64,{encoded}\")),\n                    }\n                }\n                ChatMessageContentSource::S3 { .. 
} => {\n                    return Err(LanguageModelError::permanent(\n                        \"OpenAI responses input_image does not support s3 sources\",\n                    ));\n                }\n            };\n            InputContent::from(image)\n        }\n        ChatMessageContentPart::Document {\n            source,\n            format,\n            name,\n        } => {\n            let mut builder = InputFileArgs::default();\n            let filename = name\n                .as_deref()\n                .map(str::to_owned)\n                .or_else(|| format.as_ref().map(|ext| format!(\"document.{ext}\")))\n                .unwrap_or_else(|| \"document\".to_string());\n\n            match source {\n                ChatMessageContentSource::Url { url } => {\n                    builder.file_url(url.as_str());\n                }\n                ChatMessageContentSource::FileId { file_id } => {\n                    builder.file_id(file_id.as_str());\n                }\n                ChatMessageContentSource::Bytes { data, .. } => {\n                    let encoded = base64::engine::general_purpose::STANDARD.encode(data);\n                    builder.file_data(encoded).filename(filename);\n                }\n                ChatMessageContentSource::S3 { .. } => {\n                    return Err(LanguageModelError::permanent(\n                        \"OpenAI responses input_file does not support s3 sources\",\n                    ));\n                }\n            }\n\n            InputContent::from(builder.build().map_err(LanguageModelError::permanent)?)\n        }\n        ChatMessageContentPart::Audio { .. } => {\n            return Err(LanguageModelError::permanent(\n                \"OpenAI responses API does not support audio parts in chat conversion\",\n            ));\n        }\n        ChatMessageContentPart::Video { .. 
} => {\n            return Err(LanguageModelError::permanent(\n                \"OpenAI responses API does not support video parts in chat conversion\",\n            ));\n        }\n    })\n}\n\nfn normalize_responses_function_call_id(id: &str) -> String {\n    if id.starts_with(\"fc_\") {\n        id.to_owned()\n    } else if let Some(stripped) = id.strip_prefix(\"call_\") {\n        format!(\"fc_{stripped}\")\n    } else {\n        id.to_owned()\n    }\n}\n\n#[derive(Default)]\npub(super) struct ResponsesStreamState {\n    response: ChatCompletionResponse,\n    finished: bool,\n}\n\n#[derive(Debug, Clone)]\npub(super) struct ResponsesStreamItem {\n    pub response: ChatCompletionResponse,\n    pub finished: bool,\n}\n\nimpl ResponsesStreamState {\n    #[allow(clippy::too_many_lines)]\n    fn apply_event(\n        &mut self,\n        event: ResponseStreamEvent,\n        stream_full: bool,\n    ) -> LmResult<Option<ResponsesStreamItem>> {\n        if self.finished {\n            return Ok(None);\n        }\n\n        let maybe_item = match event {\n            ResponseStreamEvent::ResponseOutputTextDelta(delta) => {\n                self.response\n                    .append_message_delta(Some(delta.delta.as_str()));\n                Some(self.emit(stream_full, false))\n            }\n            ResponseStreamEvent::ResponseContentPartAdded(part) => match &part.part {\n                OutputContent::OutputText(text) => {\n                    self.response.append_message_delta(Some(text.text.as_str()));\n                    Some(self.emit(stream_full, false))\n                }\n                _ => None,\n            },\n            ResponseStreamEvent::ResponseOutputItemAdded(event) => match event.item {\n                OutputItem::FunctionCall(function_call) => {\n                    let index = event.output_index as usize;\n                    let id = function_call_identifier(&function_call);\n                    let arguments = 
(!function_call.arguments.is_empty())\n                        .then_some(function_call.arguments.as_str());\n                    self.response.append_tool_call_delta(\n                        index,\n                        Some(id),\n                        Some(function_call.name.as_str()),\n                        arguments,\n                    );\n                    Some(self.emit(stream_full, false))\n                }\n                OutputItem::Message(message) => {\n                    collect_message_text_from_message(&message).map(|text| {\n                        self.response.append_message_delta(Some(text.as_str()));\n                        self.emit(stream_full, false)\n                    })\n                }\n                _ => None,\n            },\n            ResponseStreamEvent::ResponseOutputItemDone(event) => {\n                if let OutputItem::FunctionCall(function_call) = event.item {\n                    let index = event.output_index as usize;\n                    let id = function_call_identifier(&function_call);\n                    self.response.append_tool_call_delta(\n                        index,\n                        Some(id),\n                        Some(function_call.name.as_str()),\n                        None,\n                    );\n                    Some(self.emit(stream_full, false))\n                } else {\n                    None\n                }\n            }\n            ResponseStreamEvent::ResponseFunctionCallArgumentsDelta(delta) => {\n                let index = delta.output_index as usize;\n                self.response\n                    .append_tool_call_delta(index, None, None, Some(delta.delta.as_str()));\n                Some(self.emit(stream_full, false))\n            }\n            ResponseStreamEvent::ResponseFunctionCallArgumentsDone(done) => {\n                let index = done.output_index as usize;\n\n                let name = done.name.as_deref().filter(|n| !n.is_empty());\n\n  
              let mut arguments = None;\n                if !done.arguments.is_empty() {\n                    let new_args = done.arguments.as_str();\n                    let duplicate = self\n                        .response\n                        .tool_calls\n                        .as_ref()\n                        .and_then(|calls| calls.get(index))\n                        .and_then(|tc| tc.args())\n                        .is_some_and(|existing| existing == new_args);\n                    if !duplicate {\n                        arguments = Some(new_args);\n                    }\n                }\n\n                if name.is_some() || arguments.is_some() {\n                    self.response\n                        .append_tool_call_delta(index, None, name, arguments);\n                    Some(self.emit(stream_full, false))\n                } else {\n                    None\n                }\n            }\n            ResponseStreamEvent::ResponseCompleted(completed) => {\n                metadata_to_chat_completion(&completed.response, &mut self.response)?;\n                self.response.delta = None;\n                self.finished = true;\n                Some(self.emit(stream_full, true))\n            }\n            ResponseStreamEvent::ResponseIncomplete(incomplete) => {\n                metadata_to_chat_completion(&incomplete.response, &mut self.response)?;\n                self.response.delta = None;\n                self.finished = true;\n                Some(self.emit(stream_full, true))\n            }\n            ResponseStreamEvent::ResponseFailed(failed) => {\n                self.finished = true;\n                let message = failed.response.error.as_ref().map_or_else(\n                    || \"Responses API stream failed\".to_string(),\n                    |err| format!(\"{}: {}\", err.code, err.message),\n                );\n                return Err(LanguageModelError::permanent(message));\n            }\n            
ResponseStreamEvent::ResponseError(error) => {\n                self.finished = true;\n                return Err(LanguageModelError::permanent(error.message));\n            }\n            _ => None,\n        };\n\n        Ok(maybe_item)\n    }\n\n    fn emit(&mut self, stream_full: bool, finished: bool) -> ResponsesStreamItem {\n        let response = if finished {\n            // Stream is complete; move the accumulated response out of state.\n            let mut response = std::mem::take(&mut self.response);\n            response.delta = None;\n            response\n        } else if stream_full {\n            self.response.clone()\n        } else {\n            ChatCompletionResponse {\n                id: self.response.id,\n                message: None,\n                tool_calls: None,\n                usage: None,\n                reasoning: None,\n                delta: self.response.delta.clone(),\n            }\n        };\n\n        ResponsesStreamItem { response, finished }\n    }\n\n    fn take_final(&mut self, stream_full: bool) -> Option<ResponsesStreamItem> {\n        if self.finished {\n            None\n        } else {\n            self.finished = true;\n            Some(self.emit(stream_full, true))\n        }\n    }\n}\n\npub(super) fn responses_stream_adapter(\n    stream: ResponseStream,\n    stream_full: bool,\n) -> ResponsesStreamAdapter {\n    ResponsesStreamAdapter::new(stream, stream_full)\n}\n\npub(super) struct ResponsesStreamAdapter {\n    inner: ResponseStream,\n    state: ResponsesStreamState,\n    stream_full: bool,\n    finished: bool,\n}\n\nimpl ResponsesStreamAdapter {\n    fn new(stream: ResponseStream, stream_full: bool) -> Self {\n        Self {\n            inner: stream,\n            state: ResponsesStreamState::default(),\n            stream_full,\n            finished: false,\n        }\n    }\n}\n\nimpl Stream for ResponsesStreamAdapter {\n    type Item = LmResult<ResponsesStreamItem>;\n\n    fn poll_next(self: 
Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        let this = self.get_mut();\n\n        if this.finished {\n            return Poll::Ready(None);\n        }\n\n        loop {\n            match this.inner.as_mut().poll_next(cx) {\n                Poll::Ready(Some(result)) => {\n                    let event = match result {\n                        Ok(event) => event,\n                        Err(err) => {\n                            this.finished = true;\n                            return Poll::Ready(Some(Err(openai_error_to_language_model_error(\n                                err,\n                            ))));\n                        }\n                    };\n\n                    match this.state.apply_event(event, this.stream_full) {\n                        Ok(Some(item)) => {\n                            if item.finished {\n                                this.finished = true;\n                            }\n                            return Poll::Ready(Some(Ok(item)));\n                        }\n                        Ok(None) => {}\n                        Err(err) => {\n                            this.finished = true;\n                            return Poll::Ready(Some(Err(err)));\n                        }\n                    }\n                }\n                Poll::Ready(None) => {\n                    this.finished = true;\n                    if let Some(item) = this.state.take_final(this.stream_full) {\n                        return Poll::Ready(Some(Ok(item)));\n                    }\n                    return Poll::Ready(None);\n                }\n                Poll::Pending => return Poll::Pending,\n            }\n        }\n    }\n}\n\npub(super) fn response_to_chat_completion(response: &Response) -> LmResult<ChatCompletionResponse> {\n    if matches!(response.status, Status::Failed) {\n        let error = response.error.as_ref().map_or_else(\n            || \"OpenAI Responses API returned 
failure\".to_string(),\n            |err| format!(\"{}: {}\", err.code, err.message),\n        );\n        return Err(LanguageModelError::permanent(error));\n    }\n\n    let mut builder = ChatCompletionResponse::builder();\n\n    let reasoning_items = collect_reasoning_items_from_items(&response.output);\n    if !reasoning_items.is_empty() {\n        builder.reasoning(reasoning_items);\n    }\n\n    if let Some(text) = response.output_text().filter(|s| !s.is_empty()) {\n        builder.message(text);\n    } else if let Some(text) = collect_message_text_from_items(&response.output) {\n        builder.message(text);\n    }\n\n    let tool_calls = collect_tool_calls_from_items(&response.output)?;\n    if !tool_calls.is_empty() {\n        builder.tool_calls(tool_calls);\n    }\n\n    if let Some(usage) = response.usage.as_ref() {\n        builder.usage(Usage::from(usage));\n    }\n\n    builder.build().map_err(LanguageModelError::from)\n}\n\npub(super) fn metadata_to_chat_completion(\n    metadata: &Response,\n    accumulator: &mut ChatCompletionResponse,\n) -> LmResult<()> {\n    if let Some(usage) = metadata.usage.as_ref() {\n        accumulator.usage = Some(Usage::from(usage));\n    }\n\n    if accumulator.message.is_none()\n        && let Some(text) = collect_message_text_from_items(&metadata.output)\n    {\n        accumulator.message = Some(text);\n    }\n\n    if accumulator.tool_calls.is_none() {\n        let tool_calls = collect_tool_calls_from_items(&metadata.output)?;\n        if !tool_calls.is_empty() {\n            accumulator.tool_calls = Some(tool_calls);\n        }\n    }\n\n    if accumulator.reasoning.is_none() {\n        let reasoning_items = collect_reasoning_items_from_items(&metadata.output);\n        if !reasoning_items.is_empty() {\n            accumulator.reasoning = Some(reasoning_items);\n        }\n    }\n\n    Ok(())\n}\n\nfn collect_message_text_from_items(output: &[OutputItem]) -> Option<String> {\n    let mut buffer = String::new();\n\n 
   for item in output {\n        if let OutputItem::Message(OutputMessage { content, .. }) = item {\n            for part in content {\n                if let OutputMessageContent::OutputText(text) = part {\n                    if !buffer.is_empty() {\n                        buffer.push('\\n');\n                    }\n                    buffer.push_str(&text.text);\n                }\n            }\n        }\n    }\n\n    if buffer.is_empty() {\n        None\n    } else {\n        Some(buffer)\n    }\n}\n\nfn collect_message_text_from_message(message: &OutputMessage) -> Option<String> {\n    let mut buffer = String::new();\n\n    for part in &message.content {\n        if let OutputMessageContent::OutputText(text) = part {\n            if !buffer.is_empty() {\n                buffer.push('\\n');\n            }\n            buffer.push_str(&text.text);\n        }\n    }\n\n    if buffer.is_empty() {\n        None\n    } else {\n        Some(buffer)\n    }\n}\n\nfn collect_tool_calls_from_items(output: &[OutputItem]) -> LmResult<Vec<ToolCall>> {\n    let calls = output.iter().filter_map(|item| match item {\n        OutputItem::FunctionCall(function_call) => Some(function_call),\n        _ => None,\n    });\n\n    tool_calls_from_iter(calls)\n}\n\nfn collect_reasoning_items_from_items(output: &[OutputItem]) -> Vec<ReasoningItem> {\n    output\n        .iter()\n        .filter_map(|item| match item {\n            OutputItem::Reasoning(reasoning) => Some(ReasoningItem {\n                id: reasoning.id.clone(),\n                summary: reasoning\n                    .summary\n                    .iter()\n                    .map(|part| match part {\n                        async_openai::types::responses::SummaryPart::SummaryText(summary) => {\n                            summary.text.clone()\n                        }\n                    })\n                    .collect(),\n                content: reasoning\n                    .content\n                    
.as_ref()\n                    .map(|content| content.iter().map(|part| part.text.clone()).collect()),\n                status: reasoning.status.as_ref().map(|status| match status {\n                    OutputStatus::Completed => {\n                        swiftide_core::chat_completion::ReasoningStatus::Completed\n                    }\n                    OutputStatus::InProgress => {\n                        swiftide_core::chat_completion::ReasoningStatus::InProgress\n                    }\n                    OutputStatus::Incomplete => {\n                        swiftide_core::chat_completion::ReasoningStatus::Incomplete\n                    }\n                }),\n                encrypted_content: reasoning.encrypted_content.clone(),\n            }),\n            _ => None,\n        })\n        .collect()\n}\n\n/// Builds a `ToolCall` from a function call item, preferring `call_id` and\n/// falling back to the item `id` when `call_id` is empty.\nfn tool_call_from_function_call(function_call: &FunctionToolCall) -> LmResult<ToolCall> {\n    let id = if function_call.call_id.is_empty() {\n        function_call.id.as_deref().unwrap_or_default().to_string()\n    } else {\n        function_call.call_id.clone()\n    };\n\n    let mut builder = ToolCall::builder();\n    builder.id(id);\n    builder.name(function_call.name.clone());\n    if !function_call.arguments.is_empty() {\n        builder.maybe_args(Some(function_call.arguments.clone()));\n    }\n    builder\n        .build()\n        .context(\"Failed to build tool call\")\n        .map_err(LanguageModelError::permanent)\n}\n\nfn tool_calls_from_iter<'a, I>(calls: I) -> LmResult<Vec<ToolCall>>\nwhere\n    I: IntoIterator<Item = &'a FunctionToolCall>,\n{\n    calls\n        .into_iter()\n        .map(tool_call_from_function_call)\n        .collect::<Result<Vec<_>, _>>()\n}\n\nfn function_call_identifier(function_call: 
&FunctionToolCall) -> &str {\n    if function_call.call_id.is_empty() {\n        function_call\n            .id\n            .as_deref()\n            .unwrap_or(function_call.call_id.as_str())\n    } else {\n        function_call.call_id.as_str()\n    }\n}\n\npub(super) fn build_responses_request_from_prompt<C>(\n    client: &GenericOpenAI<C>,\n    prompt_text: String,\n) -> LmResult<CreateResponse>\nwhere\n    C: async_openai::config::Config + Clone + Default,\n{\n    let model = client\n        .options()\n        .prompt_model\n        .as_ref()\n        .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n    let mut args = base_request_args(client, model)?;\n    args.input(InputParam::Items(vec![InputItem::EasyMessage(\n        EasyInputMessageArgs::default()\n            .r#type(MessageType::Message)\n            .role(Role::User)\n            .content(EasyInputContent::Text(prompt_text))\n            .build()\n            .map_err(LanguageModelError::permanent)?,\n    )]));\n\n    args.build().map_err(openai_error_to_language_model_error)\n}\n\npub(super) fn build_responses_request_from_prompt_with_schema<C>(\n    client: &GenericOpenAI<C>,\n    prompt_text: String,\n    schema: serde_json::Value,\n) -> LmResult<CreateResponse>\nwhere\n    C: async_openai::config::Config + Clone + Default,\n{\n    let model = client\n        .options()\n        .prompt_model\n        .as_ref()\n        .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n    let mut args = base_request_args(client, model)?;\n    args.input(InputParam::Items(vec![InputItem::EasyMessage(\n        EasyInputMessageArgs::default()\n            .r#type(MessageType::Message)\n            .role(Role::User)\n            .content(EasyInputContent::Text(prompt_text))\n            .build()\n            .map_err(LanguageModelError::permanent)?,\n    )]));\n\n    args.text(ResponseTextParam {\n        format: 
TextResponseFormatConfiguration::JsonSchema(ResponseFormatJsonSchema {\n            description: None,\n            name: \"swiftide_structured_output\".into(),\n            schema: Some(schema),\n            strict: Some(true),\n        }),\n        verbosity: None,\n    });\n\n    args.build().map_err(openai_error_to_language_model_error)\n}\n\n#[allow(clippy::items_after_statements)]\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use async_openai::types::responses::{\n        AssistantRole, FunctionToolCall, IncludeEnum, InputTokenDetails, OutputItem, OutputMessage,\n        OutputMessageContent, OutputStatus, OutputTextContent, OutputTokenDetails, ReasoningEffort,\n        ReasoningSummary, ResponseCompletedEvent, ResponseErrorEvent, ResponseFailedEvent,\n        ResponseFunctionCallArgumentsDeltaEvent, ResponseFunctionCallArgumentsDoneEvent,\n        ResponseOutputItemAddedEvent, ResponseOutputItemDoneEvent, ResponseStreamEvent,\n        ResponseTextDeltaEvent, ResponseUsage as ResponsesUsage, Tool,\n    };\n    use serde_json::{json, to_value};\n    use std::collections::HashSet;\n    use swiftide_core::chat_completion::{\n        ChatCompletionRequest, ChatCompletionResponse, ChatMessage, ChatMessageContentPart,\n        ReasoningItem, ToolCall, ToolSpec, Usage,\n    };\n\n    use crate::openai::{OpenAI, Options};\n\n    fn expect_emit(\n        state: &mut ResponsesStreamState,\n        event: ResponseStreamEvent,\n        stream_full: bool,\n    ) -> ResponsesStreamItem {\n        state\n            .apply_event(event, stream_full)\n            .unwrap()\n            .expect(\"expected emission\")\n    }\n\n    fn expect_no_emit(\n        state: &mut ResponsesStreamState,\n        event: ResponseStreamEvent,\n        stream_full: bool,\n    ) {\n        assert!(\n            state.apply_event(event, stream_full).unwrap().is_none(),\n            \"expected no emission\"\n        );\n    }\n\n    fn sample_usage() -> ResponsesUsage {\n        
ResponsesUsage {\n            input_tokens: 5,\n            input_tokens_details: InputTokenDetails { cached_tokens: 0 },\n            output_tokens: 3,\n            output_tokens_details: OutputTokenDetails {\n                reasoning_tokens: 0,\n            },\n            total_tokens: 8,\n        }\n    }\n\n    #[allow(dead_code)]\n    #[derive(schemars::JsonSchema)]\n    struct WeatherArgs {\n        _city: String,\n    }\n\n    #[allow(dead_code)]\n    #[derive(schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[allow(dead_code)]\n    #[derive(schemars::JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    fn sample_tool_spec() -> ToolSpec {\n        ToolSpec::builder()\n            .name(\"get_weather\")\n            .description(\"Retrieve weather data\")\n            .parameters_schema(schemars::schema_for!(WeatherArgs))\n            .build()\n            .unwrap()\n    }\n\n    fn sample_tool_spec_named(name: &str) -> ToolSpec {\n        ToolSpec::builder()\n            .name(name)\n            .description(format!(\"{name} description\"))\n            .parameters_schema(schemars::schema_for!(WeatherArgs))\n            .build()\n            .unwrap()\n    }\n\n    #[test]\n    fn test_user_parts_to_easy_input_content_with_image() {\n        let parts = vec![\n            
ChatMessageContentPart::text(\"Describe this image.\"),\n            ChatMessageContentPart::image(\"https://example.com/image.png\"),\n        ];\n\n        let easy = user_parts_to_easy_input_content(&parts).expect(\"map user parts\");\n        let value = to_value(easy).expect(\"serialize easy content\");\n        let parts = value.as_array().expect(\"expected content list array\");\n\n        assert_eq!(parts[0][\"type\"], \"input_text\");\n        assert_eq!(parts[0][\"text\"], \"Describe this image.\");\n        assert_eq!(parts[1][\"type\"], \"input_image\");\n        assert_eq!(parts[1][\"image_url\"], \"https://example.com/image.png\");\n        assert_eq!(parts[1][\"detail\"], \"auto\");\n    }\n\n    fn output_message(id: &str, parts: &[&str]) -> OutputMessage {\n        OutputMessage {\n            content: parts\n                .iter()\n                .map(|text| {\n                    OutputMessageContent::OutputText(OutputTextContent {\n                        annotations: Vec::new(),\n                        logprobs: None,\n                        text: (*text).to_string(),\n                    })\n                })\n                .collect(),\n            id: id.to_string(),\n            role: AssistantRole::Assistant,\n            status: OutputStatus::Completed,\n        }\n    }\n\n    fn response_with_message_tool_reasoning(message: &str) -> Response {\n        let output_message = OutputItem::Message(output_message(\"msg\", &[message]));\n        let output = vec![\n            serde_json::to_value(output_message).expect(\"output message serializes\"),\n            json!({\n                \"type\": \"function_call\",\n                \"id\": \"call\",\n                \"call_id\": \"call\",\n                \"name\": \"metadata_tool\",\n                \"arguments\": \"{\\\"ok\\\":true}\",\n                \"status\": \"completed\"\n            }),\n            json!({\n                \"type\": \"reasoning\",\n                \"id\": 
\"reasoning_meta\",\n                \"summary\": [\n                    {\"type\": \"summary_text\", \"text\": \"metadata summary\"}\n                ]\n            }),\n        ];\n\n        serde_json::from_value(json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": output,\n            \"usage\": sample_usage(),\n        }))\n        .expect(\"valid response json\")\n    }\n\n    #[test]\n    fn test_build_responses_request_includes_tools_and_options() {\n        let openai = OpenAI::builder()\n            .default_prompt_model(\"gpt-4.1\")\n            .parallel_tool_calls(Some(true))\n            .default_options(\n                Options::builder()\n                    .metadata(json!({\"tag\": \"demo\"}))\n                    .user(\"tester\")\n                    .temperature(0.2),\n            )\n            .build()\n            .unwrap();\n\n        let mut tools = HashSet::new();\n        tools.insert(sample_tool_spec_named(\"z_tool\"));\n        tools.insert(sample_tool_spec_named(\"a_tool\"));\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs(tools)\n            .build()\n            .unwrap();\n\n        let create = build_responses_request_from_chat(&openai, &request).unwrap();\n\n        assert_eq!(create.model.as_deref(), Some(\"gpt-4.1\"));\n        assert_eq!(create.temperature, Some(0.2));\n        assert_eq!(create.parallel_tool_calls, Some(true));\n        assert_eq!(\n            create\n                .metadata\n                .as_ref()\n                .and_then(|m| m.get(\"tag\"))\n                .map(String::as_str),\n            Some(\"demo\"),\n        );\n\n        let InputParam::Items(items) = &create.input else {\n            panic!(\"expected items input\");\n        };\n       
 assert_eq!(items.len(), 1);\n\n        let tools = create.tools.expect(\"tools present\");\n        assert_eq!(tools.len(), 2);\n        let tool_names = tools\n            .iter()\n            .map(|tool| match tool {\n                Tool::Function(function) => function.name.as_str(),\n                _ => panic!(\"expected function tool\"),\n            })\n            .collect::<Vec<_>>();\n        assert_eq!(tool_names, vec![\"a_tool\", \"z_tool\"]);\n        assert_eq!(\n            create.tool_choice,\n            Some(ToolChoiceParam::Mode(ToolChoiceOptions::Auto))\n        );\n    }\n\n    #[test]\n    fn test_build_responses_request_sets_additional_properties_false_for_custom_tool_schema() {\n        let openai = OpenAI::builder()\n            .default_prompt_model(\"gpt-4.1\")\n            .build()\n            .unwrap();\n\n        let mut tools = HashSet::new();\n        tools.insert(sample_tool_spec());\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs(tools)\n            .build()\n            .unwrap();\n\n        let create = build_responses_request_from_chat(&openai, &request).unwrap();\n\n        let tools = create.tools.expect(\"tools present\");\n        assert_eq!(tools.len(), 1);\n\n        let Tool::Function(function) = &tools[0] else {\n            panic!(\"expected function tool\");\n        };\n\n        let additional_properties = function\n            .parameters\n            .as_ref()\n            .and_then(|params| params.get(\"additionalProperties\").cloned());\n\n        #[allow(dead_code)]\n        #[derive(schemars::JsonSchema)]\n        #[serde(deny_unknown_fields)]\n        #[schemars(title = \"WeatherArgs\")]\n        struct WeatherArgsCorrect {\n            _city: String,\n        }\n\n        let expected_parameters = serde_json::json!({\n            \"type\": \"object\",\n            \"title\": \"WeatherArgs\",\n            
\"properties\": {\n                \"_city\": {\n                    \"type\": \"string\"\n                }\n            },\n            \"required\": [\"_city\"],\n            \"additionalProperties\": false\n        });\n\n        assert_eq!(\n            additional_properties,\n            Some(serde_json::Value::Bool(false)),\n            \"OpenAI requires additionalProperties to be set to false for tool parameters, got {}\",\n            serde_json::to_string_pretty(&function.parameters).unwrap()\n        );\n\n        assert_eq!(function.parameters, Some(expected_parameters));\n    }\n\n    #[test]\n    fn test_build_responses_request_sets_nested_required_for_typed_request_objects() {\n        let openai = OpenAI::builder()\n            .default_prompt_model(\"gpt-4.1\")\n            .build()\n            .unwrap();\n\n        let mut tools = HashSet::new();\n        tools.insert(\n            ToolSpec::builder()\n                .name(\"notion_create_comment\")\n                .description(\"Create a comment\")\n                .parameters_schema(schemars::schema_for!(NestedCommentArgs))\n                .build()\n                .unwrap(),\n        );\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .tool_specs(tools)\n            .build()\n            .unwrap();\n\n        let create = build_responses_request_from_chat(&openai, &request).unwrap();\n        let tools = create.tools.expect(\"tools present\");\n        let Tool::Function(function) = &tools[0] else {\n            panic!(\"expected function tool\");\n        };\n\n        let nested_required = function.parameters.as_ref().and_then(|schema| {\n            let request_schema = schema\n                .get(\"properties\")\n                .and_then(|value| value.get(\"request\"))\n                .and_then(serde_json::Value::as_object)?;\n            let referenced_required = request_schema\n                
.get(\"$ref\")\n                .and_then(serde_json::Value::as_str)\n                .and_then(|reference| reference.strip_prefix(\"#/$defs/\"))\n                .and_then(|definition_name| {\n                    schema\n                        .get(\"$defs\")\n                        .and_then(|value| value.get(definition_name))\n                })\n                .and_then(|value| value.get(\"required\"))\n                .and_then(serde_json::Value::as_array);\n\n            referenced_required.or_else(|| {\n                request_schema\n                    .get(\"required\")\n                    .and_then(serde_json::Value::as_array)\n            })\n        });\n\n        let nested_required = nested_required.expect(\"nested request should have required\");\n        let names: std::collections::HashSet<_> = nested_required\n            .iter()\n            .filter_map(serde_json::Value::as_str)\n            .collect();\n\n        assert!(names.contains(\"body\"));\n        assert!(names.contains(\"text\"));\n        assert!(names.contains(\"page_id\"));\n        assert!(names.contains(\"block_id\"));\n        assert!(names.contains(\"discussion_id\"));\n    }\n\n    #[test]\n    fn test_build_responses_request_reasoning_is_stateless_with_summary_and_encrypted_content() {\n        let openai = OpenAI::builder()\n            .default_prompt_model(\"gpt-4.1\")\n            .default_options(Options::builder().reasoning_effort(ReasoningEffort::Low))\n            .build()\n            .unwrap();\n\n        let request = ChatCompletionRequest::builder()\n            .messages(vec![ChatMessage::User(\"hi\".into())])\n            .build()\n            .unwrap();\n\n        let create = build_responses_request_from_chat(&openai, &request).unwrap();\n\n        assert_eq!(create.store, Some(false));\n        assert_eq!(\n            create.reasoning.as_ref().and_then(|r| r.summary),\n            Some(ReasoningSummary::Auto)\n        );\n        assert!(\n            
create\n                .include\n                .as_ref()\n                .is_some_and(|items| items.contains(&IncludeEnum::ReasoningEncryptedContent))\n        );\n    }\n\n    #[test]\n    fn test_chat_messages_to_input_items_keeps_tool_calls_without_content() {\n        let tool_call = ToolCall::builder()\n            .id(\"call_123\")\n            .name(\"lookup\")\n            .maybe_args(Some(\"{\\\"q\\\":\\\"rust\\\"}\".to_string()))\n            .build()\n            .unwrap();\n\n        let message = ChatMessage::Assistant(None, Some(vec![tool_call]));\n\n        let items = chat_messages_to_input_items(&[message], true).expect(\"conversion succeeds\");\n        assert_eq!(items.len(), 1);\n\n        let InputItem::Item(async_openai::types::responses::Item::FunctionCall(function_call)) =\n            &items[0]\n        else {\n            panic!(\"expected function call item\");\n        };\n\n        // The conversion passes the tool call id through as the call_id.\n        assert_eq!(function_call.call_id, \"call_123\");\n        assert_eq!(function_call.name, \"lookup\");\n        assert_eq!(function_call.arguments, \"{\\\"q\\\":\\\"rust\\\"}\");\n        assert_eq!(function_call.status, Some(OutputStatus::InProgress));\n    }\n\n    #[test]\n    fn test_chat_messages_to_input_items_includes_reasoning_with_encrypted_content() {\n        let message = ChatMessage::Reasoning(ReasoningItem {\n            id: \"reasoning_1\".to_string(),\n            summary: vec![\"First\".to_string(), \"Second\".to_string()],\n            encrypted_content: Some(\"encrypted\".to_string()),\n            ..Default::default()\n        });\n\n        let items = chat_messages_to_input_items(&[message], true).expect(\"conversion succeeds\");\n        assert_eq!(items.len(), 1);\n\n        let InputItem::Item(async_openai::types::responses::Item::Reasoning(reasoning_item)) =\n            &items[0]\n        else {\n            panic!(\"expected reasoning item\");\n        };\n\n        assert_eq!(reasoning_item.id, \"reasoning_1\");\n        
assert!(reasoning_item.summary.is_empty());\n        assert_eq!(\n            reasoning_item.encrypted_content.as_deref(),\n            Some(\"encrypted\")\n        );\n    }\n\n    #[test]\n    fn test_chat_messages_to_input_items_ignores_empty_assistant() {\n        let message = ChatMessage::Assistant(None, None);\n\n        let items = chat_messages_to_input_items(&[message], true).expect(\"conversion succeeds\");\n        assert!(items.is_empty());\n    }\n\n    #[test]\n    fn test_tool_call_from_function_call_uses_id_when_call_id_missing() {\n        let function_call = FunctionToolCall {\n            arguments: String::new(),\n            call_id: String::new(),\n            name: \"lookup\".to_string(),\n            id: Some(\"call_456\".to_string()),\n            status: Some(OutputStatus::Completed),\n        };\n\n        let tool_call = tool_call_from_function_call(&function_call).expect(\"tool call\");\n        assert_eq!(tool_call.id(), \"call_456\");\n        assert_eq!(tool_call.name(), \"lookup\");\n        assert!(tool_call.args().is_none());\n    }\n\n    #[test]\n    fn test_collect_message_text_helpers_join_parts() {\n        let output = vec![\n            OutputItem::Message(output_message(\"msg_1\", &[\"First\", \"Second\"])),\n            OutputItem::FunctionCall(FunctionToolCall {\n                arguments: \"{}\".to_string(),\n                call_id: \"call\".to_string(),\n                name: \"noop\".to_string(),\n                id: None,\n                status: Some(OutputStatus::Completed),\n            }),\n            OutputItem::Message(output_message(\"msg_2\", &[\"Third\"])),\n        ];\n\n        let collected = collect_message_text_from_items(&output).expect(\"text present\");\n        assert_eq!(collected, \"First\\nSecond\\nThird\");\n\n        let message = output_message(\"msg_single\", &[\"Line one\", \"Line two\"]);\n        let collected_message =\n            
collect_message_text_from_message(&message).expect(\"message text present\");\n        assert_eq!(collected_message, \"Line one\\nLine two\");\n    }\n\n    #[test]\n    fn test_metadata_to_chat_completion_respects_existing_fields() {\n        let metadata = response_with_message_tool_reasoning(\"metadata message\");\n\n        let mut empty = ChatCompletionResponse::default();\n        metadata_to_chat_completion(&metadata, &mut empty).expect(\"metadata applies\");\n        assert_eq!(empty.message.as_deref(), Some(\"metadata message\"));\n        assert!(empty.tool_calls.is_some());\n        assert!(empty.reasoning.is_some());\n        assert!(empty.usage.is_some());\n\n        let existing_tool = ToolCall::builder()\n            .id(\"existing\")\n            .name(\"existing_tool\")\n            .maybe_args(Some(\"{\\\"keep\\\":true}\".to_string()))\n            .build()\n            .unwrap();\n\n        let existing_reasoning = ReasoningItem {\n            id: \"existing_reasoning\".to_string(),\n            summary: vec![\"keep\".to_string()],\n            encrypted_content: None,\n            ..Default::default()\n        };\n\n        let existing_usage = Usage {\n            prompt_tokens: 1,\n            completion_tokens: 1,\n            total_tokens: 2,\n            details: None,\n        };\n\n        let mut existing = ChatCompletionResponse::builder()\n            .message(\"existing message\")\n            .tool_calls(vec![existing_tool.clone()])\n            .reasoning(vec![existing_reasoning.clone()])\n            .usage(existing_usage)\n            .build()\n            .unwrap();\n\n        metadata_to_chat_completion(&metadata, &mut existing).expect(\"metadata applies\");\n        assert_eq!(existing.message.as_deref(), Some(\"existing message\"));\n        assert_eq!(\n            existing\n                .tool_calls\n                .as_ref()\n                .and_then(|calls| calls.first())\n                .map(ToolCall::id),\n           
 Some(\"existing\")\n        );\n        assert_eq!(\n            existing\n                .reasoning\n                .as_ref()\n                .and_then(|items| items.first())\n                .map(|item| item.id.as_str()),\n            Some(\"existing_reasoning\")\n        );\n        assert_eq!(\n            existing.usage.as_ref().map(|usage| usage.total_tokens),\n            Some(sample_usage().total_tokens)\n        );\n    }\n\n    #[test]\n    fn test_tool_output_preserves_structured_values() {\n        let tool_call = ToolCall::builder()\n            .id(\"fc_test\")\n            .name(\"demo\")\n            .maybe_args(Some(\"{\\\"ok\\\":true}\".to_owned()))\n            .build()\n            .unwrap();\n\n        let messages = vec![\n            ChatMessage::ToolOutput(\n                tool_call.clone(),\n                ToolOutput::Stop(Some(json!({\"foo\": \"bar\"}))),\n            ),\n            ChatMessage::ToolOutput(\n                tool_call.clone(),\n                ToolOutput::FeedbackRequired(Some(json!({\"nested\": {\"a\": 1}}))),\n            ),\n            ChatMessage::ToolOutput(\n                tool_call.clone(),\n                ToolOutput::AgentFailed(Some(json!([1, 2, 3]))),\n            ),\n        ];\n\n        let items = chat_messages_to_input_items(&messages, true).expect(\"conversion succeeds\");\n        assert_eq!(items.len(), 3);\n\n        for (item, expected) in\n            items\n                .iter()\n                .zip([r#\"{\"foo\":\"bar\"}\"#, r#\"{\"nested\":{\"a\":1}}\"#, r\"[1,2,3]\"])\n        {\n            let InputItem::Item(async_openai::types::responses::Item::FunctionCallOutput(\n                function_output,\n            )) = item\n            else {\n                panic!(\"expected function call output item\");\n            };\n\n            assert_eq!(function_output.call_id, \"fc_test\");\n            assert_eq!(\n                function_output.output,\n                
FunctionCallOutput::Text(expected.to_string())\n            );\n        }\n    }\n\n    #[test]\n    fn test_response_to_chat_completion_maps_outputs() {\n        let usage = sample_usage();\n        let response: Response = serde_json::from_value(json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": [\n                {\n                    \"type\": \"message\",\n                    \"id\": \"msg\",\n                    \"role\": \"assistant\",\n                    \"status\": \"completed\",\n                    \"content\": [\n                        {\"type\": \"output_text\", \"text\": \"Assistant reply\", \"annotations\": []}\n                    ]\n                },\n                {\n                    \"type\": \"function_call\",\n                    \"id\": \"tool\",\n                    \"call_id\": \"tool\",\n                    \"name\": \"get_weather\",\n                    \"arguments\": \"{\\\"city\\\":\\\"Oslo\\\"}\",\n                    \"status\": \"completed\"\n                }\n            ],\n            \"usage\": usage,\n        }))\n        .expect(\"valid response json\");\n\n        let completion = response_to_chat_completion(&response).unwrap();\n        assert_eq!(completion.message(), Some(\"Assistant reply\"));\n\n        let tool_calls = completion.tool_calls().expect(\"tool calls present\");\n        assert_eq!(tool_calls.len(), 1);\n        assert_eq!(tool_calls[0].name(), \"get_weather\");\n        assert_eq!(tool_calls[0].args(), Some(\"{\\\"city\\\":\\\"Oslo\\\"}\"));\n\n        let usage = completion.usage.expect(\"usage\");\n        assert_eq!(usage.prompt_tokens, 5);\n        assert_eq!(usage.completion_tokens, 3);\n        assert_eq!(usage.total_tokens, 8);\n    }\n\n    #[test]\n    fn 
test_response_to_chat_completion_collects_reasoning_summary_and_encrypted_content() {\n        let usage = sample_usage();\n        let response: Response = serde_json::from_value(json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": [\n                {\n                    \"type\": \"reasoning\",\n                    \"id\": \"reasoning_1\",\n                    \"summary\": [\n                        {\"type\": \"summary_text\", \"text\": \"First\"},\n                        {\"type\": \"summary_text\", \"text\": \"Second\"}\n                    ],\n                    \"encrypted_content\": \"encrypted\"\n                }\n            ],\n            \"usage\": usage,\n        }))\n        .expect(\"valid response json\");\n\n        let completion = response_to_chat_completion(&response).unwrap();\n        let reasoning = completion.reasoning.expect(\"reasoning items present\");\n\n        assert_eq!(reasoning.len(), 1);\n        assert_eq!(reasoning[0].id, \"reasoning_1\");\n        assert_eq!(\n            reasoning[0].summary,\n            vec![\"First\".to_string(), \"Second\".to_string()]\n        );\n        assert_eq!(reasoning[0].encrypted_content.as_deref(), Some(\"encrypted\"));\n    }\n\n    #[test]\n    fn test_stream_accumulator_handles_text_and_tool_events() {\n        let mut state = ResponsesStreamState::default();\n\n        let delta: ResponseTextDeltaEvent = serde_json::from_value(json!({\n            \"sequence_number\": 0,\n            \"item_id\": \"msg_1\",\n            \"output_index\": 0,\n            \"content_index\": 0,\n            \"delta\": \"Hello\"\n        }))\n        .unwrap();\n\n        let chunk = expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputTextDelta(delta),\n            false,\n        );\n\n        assert_eq!(\n           
 chunk\n                .response\n                .delta\n                .as_ref()\n                .and_then(|d| d.message_chunk.as_deref()),\n            Some(\"Hello\")\n        );\n\n        let item_added: ResponseOutputItemAddedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 1,\n            \"output_index\": 0,\n            \"item\": {\n                \"type\": \"function_call\",\n                \"id\": \"call\",\n                \"call_id\": \"call\",\n                \"name\": \"lookup\",\n                \"arguments\": \"\",\n                \"status\": \"in_progress\"\n            }\n        }))\n        .unwrap();\n\n        expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputItemAdded(item_added),\n            false,\n        );\n\n        let args_delta: ResponseFunctionCallArgumentsDeltaEvent = serde_json::from_value(json!({\n            \"sequence_number\": 2,\n            \"item_id\": \"call\",\n            \"output_index\": 0,\n            \"delta\": \"{\\\"q\\\":\\\"rust\\\"}\"\n        }))\n        .unwrap();\n\n        expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseFunctionCallArgumentsDelta(args_delta),\n            false,\n        );\n\n        let args_done: ResponseFunctionCallArgumentsDoneEvent = serde_json::from_value(json!({\n            \"sequence_number\": 3,\n            \"item_id\": \"call\",\n            \"output_index\": 0,\n            \"name\": \"lookup\",\n            \"arguments\": \"{\\\"q\\\":\\\"rust\\\"}\"\n        }))\n        .unwrap();\n\n        expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseFunctionCallArgumentsDone(args_done),\n            false,\n        );\n\n        let usage = sample_usage();\n        let completed: ResponseCompletedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 4,\n            \"response\": {\n                \"id\": \"resp\",\n                
\"object\": \"response\",\n                \"created_at\": 0,\n                \"status\": \"completed\",\n                \"model\": \"gpt-4.1\",\n                \"output\": [],\n                \"usage\": to_value(&usage).unwrap()\n            }\n        }))\n        .unwrap();\n\n        let final_chunk = expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseCompleted(completed),\n            false,\n        );\n        assert!(final_chunk.finished);\n\n        assert_eq!(final_chunk.response.message(), Some(\"Hello\"));\n\n        let tool_calls = final_chunk\n            .response\n            .tool_calls()\n            .expect(\"tool calls present\");\n        assert_eq!(tool_calls[0].name(), \"lookup\");\n        assert_eq!(tool_calls[0].args(), Some(\"{\\\"q\\\":\\\"rust\\\"}\"));\n\n        let usage = final_chunk.response.usage.expect(\"usage\");\n        assert_eq!(usage.total_tokens, 8);\n    }\n\n    #[test]\n    fn test_stream_state_take_final_only_once() {\n        let mut state = ResponsesStreamState::default();\n        assert!(state.take_final(true).is_some());\n        assert!(state.take_final(true).is_none());\n    }\n\n    #[test]\n    fn test_stream_state_ignores_events_after_completion() {\n        let mut state = ResponsesStreamState::default();\n\n        let usage = sample_usage();\n        let completed: ResponseCompletedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 0,\n            \"response\": {\n                \"id\": \"resp\",\n                \"object\": \"response\",\n                \"created_at\": 0,\n                \"status\": \"completed\",\n                \"model\": \"gpt-4.1\",\n                \"output\": [],\n                \"usage\": to_value(&usage).unwrap()\n            }\n        }))\n        .unwrap();\n\n        let finished = expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseCompleted(completed),\n            false,\n        );\n   
     assert!(finished.finished);\n\n        let delta: ResponseTextDeltaEvent = serde_json::from_value(json!({\n            \"sequence_number\": 1,\n            \"item_id\": \"msg_1\",\n            \"output_index\": 0,\n            \"content_index\": 0,\n            \"delta\": \"ignored\"\n        }))\n        .unwrap();\n\n        expect_no_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputTextDelta(delta),\n            false,\n        );\n    }\n\n    #[test]\n    fn test_stream_state_message_item_added_collects_text() {\n        let mut state = ResponsesStreamState::default();\n\n        let item_added: ResponseOutputItemAddedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 0,\n            \"output_index\": 0,\n            \"item\": {\n                \"type\": \"message\",\n                \"id\": \"msg\",\n                \"role\": \"assistant\",\n                \"status\": \"completed\",\n                \"content\": [\n                    {\"type\": \"output_text\", \"text\": \"Hello\", \"annotations\": []},\n                    {\"type\": \"output_text\", \"text\": \"World\", \"annotations\": []}\n                ]\n            }\n        }))\n        .unwrap();\n\n        let chunk = expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputItemAdded(item_added),\n            true,\n        );\n\n        assert_eq!(chunk.response.message(), Some(\"Hello\\nWorld\"));\n    }\n\n    #[test]\n    fn test_stream_state_output_item_done_emits_tool_call() {\n        let mut state = ResponsesStreamState::default();\n\n        let item_added: ResponseOutputItemAddedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 0,\n            \"output_index\": 0,\n            \"item\": {\n                \"type\": \"function_call\",\n                \"id\": \"call\",\n                \"call_id\": \"call\",\n                \"name\": \"lookup\",\n                \"arguments\": 
\"\",\n                \"status\": \"in_progress\"\n            }\n        }))\n        .unwrap();\n\n        expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputItemAdded(item_added),\n            true,\n        );\n\n        let done: ResponseOutputItemDoneEvent = serde_json::from_value(json!({\n            \"sequence_number\": 1,\n            \"output_index\": 0,\n            \"item\": {\n                \"type\": \"function_call\",\n                \"id\": \"call-id\",\n                \"call_id\": \"\",\n                \"name\": \"lookup\",\n                \"arguments\": \"\",\n                \"status\": \"completed\"\n            }\n        }))\n        .unwrap();\n\n        let chunk = expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputItemDone(done),\n            true,\n        );\n\n        let calls = chunk.response.tool_calls().expect(\"tool calls present\");\n        assert_eq!(calls[0].id(), \"call\");\n        assert_eq!(calls[0].name(), \"lookup\");\n    }\n\n    #[test]\n    fn test_stream_state_duplicate_arguments_done_no_emit() {\n        let mut state = ResponsesStreamState::default();\n\n        let item_added: ResponseOutputItemAddedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 0,\n            \"output_index\": 0,\n            \"item\": {\n                \"type\": \"function_call\",\n                \"id\": \"call\",\n                \"call_id\": \"call\",\n                \"name\": \"lookup\",\n                \"arguments\": \"\",\n                \"status\": \"in_progress\"\n            }\n        }))\n        .unwrap();\n        expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseOutputItemAdded(item_added),\n            false,\n        );\n\n        let args_delta: ResponseFunctionCallArgumentsDeltaEvent = serde_json::from_value(json!({\n            \"sequence_number\": 1,\n            \"item_id\": \"call\",\n      
      \"output_index\": 0,\n            \"delta\": \"{\\\"q\\\":1}\"\n        }))\n        .unwrap();\n        expect_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseFunctionCallArgumentsDelta(args_delta),\n            false,\n        );\n\n        let args_done: ResponseFunctionCallArgumentsDoneEvent = serde_json::from_value(json!({\n            \"sequence_number\": 2,\n            \"item_id\": \"call\",\n            \"output_index\": 0,\n            \"arguments\": \"{\\\"q\\\":1}\",\n            \"name\": \"\"\n        }))\n        .unwrap();\n\n        expect_no_emit(\n            &mut state,\n            ResponseStreamEvent::ResponseFunctionCallArgumentsDone(args_done),\n            false,\n        );\n    }\n\n    #[test]\n    fn test_stream_state_response_failed_and_error() {\n        let mut state = ResponsesStreamState::default();\n\n        let failed: ResponseFailedEvent = serde_json::from_value(json!({\n            \"sequence_number\": 0,\n            \"response\": {\n                \"id\": \"resp\",\n                \"object\": \"response\",\n                \"created_at\": 0,\n                \"status\": \"failed\",\n                \"model\": \"gpt-4.1\",\n                \"output\": [],\n                \"error\": {\"code\": \"oops\", \"message\": \"boom\"}\n            }\n        }))\n        .unwrap();\n\n        let err = state\n            .apply_event(ResponseStreamEvent::ResponseFailed(failed), false)\n            .unwrap_err();\n        assert!(\n            matches!(err, LanguageModelError::PermanentError(msg) if msg.to_string().contains(\"oops\"))\n        );\n\n        let mut state = ResponsesStreamState::default();\n        let err_event: ResponseErrorEvent = serde_json::from_value(json!({\n            \"sequence_number\": 1,\n            \"message\": \"bad things\"\n        }))\n        .unwrap();\n        let err = state\n            .apply_event(ResponseStreamEvent::ResponseError(err_event), false)\n           
 .unwrap_err();\n        assert!(\n            matches!(err, LanguageModelError::PermanentError(msg) if msg.to_string().contains(\"bad things\"))\n        );\n    }\n\n    #[test]\n    fn test_response_to_chat_completion_failed_status_errors() {\n        let response: Response = serde_json::from_value(json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1\",\n            \"object\": \"response\",\n            \"status\": \"failed\",\n            \"error\": {\"code\": \"oops\", \"message\": \"boom\"},\n            \"output\": []\n        }))\n        .unwrap();\n\n        let err = response_to_chat_completion(&response).unwrap_err();\n        assert!(\n            matches!(err, LanguageModelError::PermanentError(msg) if msg.to_string().contains(\"oops\"))\n        );\n    }\n\n    #[test]\n    fn test_convert_metadata_rejects_non_string_values() {\n        let metadata = json!({\"tag\": 123});\n        assert!(convert_metadata(&metadata).is_none());\n    }\n\n    #[test]\n    fn test_base_request_args_runs_with_seed_and_presence_penalty() {\n        let openai = OpenAI::builder()\n            .default_prompt_model(\"gpt-4.1\")\n            .default_options(\n                Options::builder()\n                    .seed(7)\n                    .presence_penalty(0.4)\n                    .temperature(0.1),\n            )\n            .build()\n            .unwrap();\n\n        assert!(base_request_args(&openai, \"gpt-4.1\").is_ok());\n    }\n\n    #[test]\n    fn test_normalize_responses_function_call_id() {\n        assert_eq!(\n            normalize_responses_function_call_id(\"call_12345\"),\n            \"fc_12345\"\n        );\n        assert_eq!(normalize_responses_function_call_id(\"fc_abc\"), \"fc_abc\");\n        assert_eq!(normalize_responses_function_call_id(\"custom\"), \"custom\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/simple_prompt.rs",
    "content": "//! This module provides an implementation of the `SimplePrompt` trait for the `OpenAI` struct.\n//! It defines an asynchronous function to interact with the `OpenAI` API, allowing prompt\n//! processing and generating responses as part of the Swiftide system.\n\nuse async_openai::types::chat::ChatCompletionRequestUserMessageArgs;\nuse async_trait::async_trait;\nuse swiftide_core::{\n    SimplePrompt,\n    chat_completion::{Usage, errors::LanguageModelError},\n    prompt::Prompt,\n    util::debug_long_utf8,\n};\n\nuse super::responses_api::{build_responses_request_from_prompt, response_to_chat_completion};\nuse crate::openai::openai_error_to_language_model_error;\n\nuse super::GenericOpenAI;\nuse anyhow::Result;\n\n/// The `SimplePrompt` trait defines a method for sending a prompt to an AI model and receiving a\n/// response.\n#[async_trait]\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> SimplePrompt for GenericOpenAI<C>\n{\n    /// Sends a prompt to the `OpenAI` API and returns the response content.\n    ///\n    /// # Parameters\n    /// - `prompt`: A string slice that holds the prompt to be sent to the `OpenAI` API.\n    ///\n    /// # Returns\n    /// - `Result<String>`: On success, returns the content of the response as a `String`. 
On\n    ///   failure, returns an error wrapped in a `Result`.\n    ///\n    /// # Errors\n    /// - Returns an error if the model is not set in the default options.\n    /// - Returns an error if the request to the `OpenAI` API fails.\n    /// - Returns an error if the response does not contain the expected content.\n    #[cfg_attr(not(feature = \"langfuse\"), tracing::instrument(skip_all, err))]\n    #[cfg_attr(\n        feature = \"langfuse\",\n        tracing::instrument(skip_all, err, fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn prompt(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        if self.is_responses_api_enabled() {\n            return self.prompt_via_responses_api(prompt).await;\n        }\n\n        // Retrieve the model from the default options, returning an error if not set.\n        let model = self\n            .default_options\n            .prompt_model\n            .as_ref()\n            .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n        // Build the request to be sent to the OpenAI API.\n        let request = self\n            .chat_completion_request_defaults()\n            .model(model)\n            .messages(vec![\n                ChatCompletionRequestUserMessageArgs::default()\n                    .content(prompt.render()?)\n                    .build()\n                    .map_err(LanguageModelError::permanent)?\n                    .into(),\n            ])\n            .build()\n            .map_err(LanguageModelError::permanent)?;\n\n        // Log the request for debugging purposes.\n        tracing::trace!(\n            model = &model,\n            messages = debug_long_utf8(\n                serde_json::to_string_pretty(&request.messages.last())\n                    .map_err(LanguageModelError::permanent)?,\n                100\n            ),\n            \"[SimplePrompt] Request to openai\"\n        );\n\n        // Send the request to the OpenAI API and 
await the response.\n        // Move the request; we logged key fields above if needed.\n        let tracking_request = request.clone();\n        let response = self\n            .client\n            .chat()\n            .create(request)\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n        let message = response\n            .choices\n            .first()\n            .and_then(|choice| choice.message.content.clone())\n            .ok_or_else(|| {\n                LanguageModelError::PermanentError(\"Expected content in response\".into())\n            })?;\n\n        let usage = response.usage.as_ref().map(Usage::from);\n\n        self.track_completion(\n            model,\n            usage.as_ref(),\n            Some(&tracking_request),\n            Some(&response),\n        );\n\n        Ok(message)\n    }\n}\n\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> GenericOpenAI<C>\n{\n    async fn prompt_via_responses_api(&self, prompt: Prompt) -> Result<String, LanguageModelError> {\n        let prompt_text = prompt.render().map_err(LanguageModelError::permanent)?;\n        let model = self\n            .default_options\n            .prompt_model\n            .as_ref()\n            .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n        let create_request = build_responses_request_from_prompt(self, prompt_text.clone())?;\n\n        let response = self\n            .client\n            .responses()\n            .create(create_request.clone())\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n        let completion = response_to_chat_completion(&response)?;\n\n        let message = completion.message.clone().ok_or_else(|| {\n            LanguageModelError::PermanentError(\"Expected content in response\".into())\n        })?;\n\n        
self.track_completion(\n            model,\n            completion.usage.as_ref(),\n            Some(&create_request),\n            Some(&completion),\n        );\n\n        Ok(message)\n    }\n}\n\n#[allow(clippy::items_after_statements)]\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::openai::OpenAI;\n    use serde_json::Value;\n    use wiremock::{\n        Mock, MockServer, Request, Respond, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_errors_when_model_missing() {\n        let openai = OpenAI::builder().build().unwrap();\n        let result = openai.prompt(\"hello\".into()).await;\n        assert!(matches!(result, Err(LanguageModelError::PermanentError(_))));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_via_responses_api_returns_message() {\n        let mock_server = MockServer::start().await;\n\n        let response_body = serde_json::json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1-mini\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": [\n                {\n                    \"type\": \"message\",\n                    \"id\": \"msg\",\n                    \"role\": \"assistant\",\n                    \"status\": \"completed\",\n                    \"content\": [\n                        {\"type\": \"output_text\", \"text\": \"Hello world\", \"annotations\": []}\n                    ]\n                }\n            ],\n            \"usage\": {\n                \"input_tokens\": 4,\n                \"input_tokens_details\": {\"cached_tokens\": 0},\n                \"output_tokens\": 2,\n                \"output_tokens_details\": {\"reasoning_tokens\": 0},\n                \"total_tokens\": 6\n            }\n        });\n\n        struct ValidatePromptRequest {\n            response: Value,\n        }\n\n        impl Respond for 
ValidatePromptRequest {\n            fn respond(&self, request: &Request) -> ResponseTemplate {\n                let payload: Value = serde_json::from_slice(&request.body).unwrap();\n                assert_eq!(payload[\"model\"], self.response[\"model\"]);\n                let items = payload[\"input\"].as_array().expect(\"array input\");\n                assert_eq!(items.len(), 1);\n                assert_eq!(items[0][\"type\"], \"message\");\n                ResponseTemplate::new(200).set_body_json(self.response.clone())\n            }\n        }\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/responses\"))\n            .respond_with(ValidatePromptRequest {\n                response: response_body,\n            })\n            .mount(&mock_server)\n            .await;\n\n        let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let client = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(client)\n            .default_prompt_model(\"gpt-4.1-mini\")\n            .use_responses_api(true)\n            .build()\n            .unwrap();\n\n        let result = openai.prompt(\"Say hi\".into()).await.unwrap();\n        assert_eq!(result, \"Hello world\");\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_prompt_via_responses_api_missing_output_errors() {\n        let mock_server = MockServer::start().await;\n        let empty_response = serde_json::json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1-mini\",\n            \"object\": \"response\",\n            \"output\": [],\n            \"status\": \"completed\"\n        });\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/responses\"))\n            .respond_with(ResponseTemplate::new(200).set_body_json(empty_response))\n            .mount(&mock_server)\n            .await;\n\n        let config = 
async_openai::config::OpenAIConfig::new().with_api_base(mock_server.uri());\n        let client = async_openai::Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(client)\n            .default_prompt_model(\"gpt-4.1-mini\")\n            .use_responses_api(true)\n            .build()\n            .unwrap();\n\n        let err = openai.prompt(\"test\".into()).await.unwrap_err();\n        assert!(matches!(err, LanguageModelError::PermanentError(_)));\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/structured_prompt.rs",
    "content": "//! This module provides an implementation of the `StructuredPrompt` trait for the `OpenAI` struct.\n//!\n//! Unlike the other traits, `StructuredPrompt` is *not* dyn safe.\n//!\n//! Use `DynStructuredPrompt` if you need dyn dispatch. For custom implementations, if you\n//! implement `DynStructuredPrompt`, you get `StructuredPrompt` for free.\n\nuse async_openai::types::{\n    chat::ChatCompletionRequestUserMessageArgs,\n    responses::{ResponseFormat, ResponseFormatJsonSchema},\n};\nuse async_trait::async_trait;\nuse schemars::Schema;\nuse swiftide_core::{\n    DynStructuredPrompt,\n    chat_completion::{Usage, errors::LanguageModelError},\n    prompt::Prompt,\n    util::debug_long_utf8,\n};\n\nuse super::responses_api::{\n    build_responses_request_from_prompt_with_schema, response_to_chat_completion,\n};\nuse crate::openai::openai_error_to_language_model_error;\n\nuse super::GenericOpenAI;\nuse anyhow::{Context as _, Result};\n\n/// The `StructuredPrompt` trait defines a method for sending a prompt to an AI model and receiving\n/// a response.\n#[async_trait]\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> DynStructuredPrompt for GenericOpenAI<C>\n{\n    /// Sends a prompt to the `OpenAI` API and returns the response content.\n    ///\n    /// # Parameters\n    /// - `prompt`: A string slice that holds the prompt to be sent to the `OpenAI` API.\n    ///\n    /// # Returns\n    /// - `Result<String>`: On success, returns the content of the response as a `String`. 
On\n    ///   failure, returns an error wrapped in a `Result`.\n    ///\n    /// # Errors\n    /// - Returns an error if the model is not set in the default options.\n    /// - Returns an error if the request to the `OpenAI` API fails.\n    /// - Returns an error if the response does not contain the expected content.\n    #[tracing::instrument(skip_all, err)]\n    #[cfg_attr(\n        feature = \"langfuse\",\n        tracing::instrument(skip_all, err, fields(langfuse.type = \"GENERATION\"))\n    )]\n    async fn structured_prompt_dyn(\n        &self,\n        prompt: Prompt,\n        schema: Schema,\n    ) -> Result<serde_json::Value, LanguageModelError> {\n        if self.is_responses_api_enabled() {\n            return self\n                .structured_prompt_via_responses_api(prompt, schema)\n                .await;\n        }\n\n        // Retrieve the model from the default options, returning an error if not set.\n        let model = self\n            .default_options\n            .prompt_model\n            .as_ref()\n            .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n        let schema_value =\n            serde_json::to_value(&schema).context(\"Failed to get schema as value\")?;\n        let response_format = ResponseFormat::JsonSchema {\n            json_schema: ResponseFormatJsonSchema {\n                description: None,\n                name: \"structured_prompt\".into(),\n                schema: Some(schema_value),\n                strict: Some(true),\n            },\n        };\n\n        // Build the request to be sent to the OpenAI API.\n        let request = self\n            .chat_completion_request_defaults()\n            .model(model)\n            .response_format(response_format)\n            .messages(vec![\n                ChatCompletionRequestUserMessageArgs::default()\n                    .content(prompt.render()?)\n                    .build()\n                    
.map_err(LanguageModelError::permanent)?\n                    .into(),\n            ])\n            .build()\n            .map_err(LanguageModelError::permanent)?;\n\n        // Log the request for debugging purposes.\n        tracing::trace!(\n            model = &model,\n            messages = debug_long_utf8(\n                serde_json::to_string_pretty(&request.messages.last())\n                    .map_err(LanguageModelError::permanent)?,\n                100\n            ),\n            \"[StructuredPrompt] Request to openai\"\n        );\n\n        // Send the request to the OpenAI API and await the response.\n        let response = self\n            .client\n            .chat()\n            .create(request.clone())\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n        let message = response\n            .choices\n            .first()\n            .and_then(|choice| choice.message.content.clone())\n            .ok_or_else(|| {\n                LanguageModelError::PermanentError(\"Expected content in response\".into())\n            })?;\n\n        let usage = response.usage.as_ref().map(Usage::from);\n\n        self.track_completion(model, usage.as_ref(), Some(&request), Some(&response));\n\n        let parsed = serde_json::from_str(&message)\n            .with_context(|| format!(\"Failed to parse response\\n {message}\"))?;\n\n        // Extract and return the content of the response, returning an error if not found.\n        Ok(parsed)\n    }\n}\n\nimpl<\n    C: async_openai::config::Config\n        + std::default::Default\n        + Sync\n        + Send\n        + std::fmt::Debug\n        + Clone\n        + 'static,\n> GenericOpenAI<C>\n{\n    async fn structured_prompt_via_responses_api(\n        &self,\n        prompt: Prompt,\n        schema: Schema,\n    ) -> Result<serde_json::Value, LanguageModelError> {\n        let prompt_text = prompt.render().map_err(LanguageModelError::permanent)?;\n        let model = 
self\n            .default_options\n            .prompt_model\n            .as_ref()\n            .ok_or_else(|| LanguageModelError::PermanentError(\"Model not set\".into()))?;\n\n        let schema_value = serde_json::to_value(&schema)\n            .context(\"Failed to get schema as value\")\n            .map_err(LanguageModelError::permanent)?;\n\n        let create_request = build_responses_request_from_prompt_with_schema(\n            self,\n            prompt_text.clone(),\n            schema_value,\n        )?;\n        let tracking_request = create_request.clone();\n\n        let response = self\n            .client\n            .responses()\n            .create(create_request)\n            .await\n            .map_err(openai_error_to_language_model_error)?;\n\n        let completion = response_to_chat_completion(&response)?;\n\n        let message = completion.message.clone().ok_or_else(|| {\n            LanguageModelError::PermanentError(\"Expected content in response\".into())\n        })?;\n\n        self.track_completion(\n            model,\n            completion.usage.as_ref(),\n            Some(&tracking_request),\n            Some(&completion),\n        );\n\n        let parsed = serde_json::from_str(&message)\n            .with_context(|| format!(\"Failed to parse response\\n {message}\"))\n            .map_err(LanguageModelError::permanent)?;\n\n        Ok(parsed)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::openai::{self, OpenAI};\n    use swiftide_core::StructuredPrompt;\n\n    use super::*;\n    use async_openai::Client;\n    use async_openai::config::OpenAIConfig;\n    use schemars::{JsonSchema, schema_for};\n    use serde::{Deserialize, Serialize};\n    use serde_json::json;\n    use wiremock::{\n        Mock, MockServer, ResponseTemplate,\n        matchers::{method, path},\n    };\n\n    #[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, PartialEq, Eq)]\n    struct SimpleOutput {\n        answer: String,\n    }\n\n    
async fn setup_client() -> (MockServer, OpenAI) {\n        // Start the Wiremock server\n        let mock_server = MockServer::start().await;\n\n        // Prepare the response the mock should return\n        let assistant_msg = serde_json::json!({\n            \"role\": \"assistant\",\n            \"content\": serde_json::to_string(&SimpleOutput {\n                answer: \"42\".to_owned()\n            }).unwrap(),\n        });\n\n        let body = serde_json::json!({\n          \"id\": \"chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT\",\n          \"object\": \"chat.completion\",\n          \"created\": 123,\n          \"model\": \"gpt-4.1-2025-04-14\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": assistant_msg,\n              \"logprobs\": null,\n              \"finish_reason\": \"stop\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 19,\n            \"completion_tokens\": 10,\n            \"total_tokens\": 29,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\"\n        });\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/chat/completions\"))\n            .respond_with(ResponseTemplate::new(200).set_body_json(body))\n            .mount(&mock_server)\n            .await;\n\n        // Point our client at the mock server\n        let config = OpenAIConfig::new().with_api_base(mock_server.uri());\n        let client = Client::with_config(config);\n\n        // Construct the GenericOpenAI instance\n        let opts = openai::Options {\n            prompt_model: Some(\"gpt-4\".to_string()),\n            
..openai::Options::default()\n        };\n        (\n            mock_server,\n            OpenAI::builder()\n                .client(client)\n                .default_options(opts)\n                .build()\n                .unwrap(),\n        )\n    }\n\n    #[tokio::test]\n    async fn test_structured_prompt_with_wiremock() {\n        let (_guard, ai) = setup_client().await;\n        // Call structured_prompt\n        let result: serde_json::Value = ai.structured_prompt(\"test\".into()).await.unwrap();\n        dbg!(&result);\n\n        // Assert\n        assert_eq!(\n            serde_json::from_value::<SimpleOutput>(result).unwrap(),\n            SimpleOutput {\n                answer: \"42\".into()\n            }\n        );\n    }\n\n    #[tokio::test]\n    async fn test_structured_prompt_with_wiremock_as_box() {\n        let (_guard, ai) = setup_client().await;\n        // Call structured_prompt\n        let ai: Box<dyn DynStructuredPrompt> = Box::new(ai);\n        let result: serde_json::Value = ai\n            .structured_prompt_dyn(\"test\".into(), schema_for!(SimpleOutput))\n            .await\n            .unwrap();\n        dbg!(&result);\n\n        // Assert\n        assert_eq!(\n            serde_json::from_value::<SimpleOutput>(result).unwrap(),\n            SimpleOutput {\n                answer: \"42\".into()\n            }\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_via_responses_api() {\n        let mock_server = MockServer::start().await;\n\n        let response_body = json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1-mini\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": [\n                {\n                    \"type\": \"message\",\n                    \"id\": \"msg\",\n                    \"role\": \"assistant\",\n                    \"status\": \"completed\",\n                   
 \"content\": [\n                        {\"type\": \"output_text\", \"text\": serde_json::to_string(&SimpleOutput { answer: \"structured\".into() }).unwrap(), \"annotations\": []}\n                    ]\n                }\n            ],\n            \"usage\": {\n                \"input_tokens\": 10,\n                \"input_tokens_details\": {\"cached_tokens\": 0},\n                \"output_tokens\": 4,\n                \"output_tokens_details\": {\"reasoning_tokens\": 0},\n                \"total_tokens\": 14\n            }\n        });\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/responses\"))\n            .respond_with(ResponseTemplate::new(200).set_body_json(response_body))\n            .mount(&mock_server)\n            .await;\n\n        let config = OpenAIConfig::new().with_api_base(mock_server.uri());\n        let client = Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(client)\n            .default_prompt_model(\"gpt-4.1-mini\")\n            .use_responses_api(true)\n            .build()\n            .unwrap();\n\n        let schema = schema_for!(SimpleOutput);\n        let result = openai\n            .structured_prompt_dyn(\"Render\".into(), schema)\n            .await\n            .unwrap();\n\n        assert_eq!(\n            serde_json::from_value::<SimpleOutput>(result).unwrap(),\n            SimpleOutput {\n                answer: \"structured\".into(),\n            }\n        );\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_structured_prompt_via_responses_api_invalid_json_errors() {\n        let mock_server = MockServer::start().await;\n\n        let bad_response = json!({\n            \"created_at\": 0,\n            \"id\": \"resp\",\n            \"model\": \"gpt-4.1-mini\",\n            \"object\": \"response\",\n            \"status\": \"completed\",\n            \"output\": [\n                {\n                    \"type\": \"message\",\n                    
\"id\": \"msg\",\n                    \"role\": \"assistant\",\n                    \"status\": \"completed\",\n                    \"content\": [\n                        {\"type\": \"output_text\", \"text\": \"not json\", \"annotations\": []}\n                    ]\n                }\n            ]\n        });\n\n        Mock::given(method(\"POST\"))\n            .and(path(\"/responses\"))\n            .respond_with(ResponseTemplate::new(200).set_body_json(bad_response))\n            .mount(&mock_server)\n            .await;\n\n        let config = OpenAIConfig::new().with_api_base(mock_server.uri());\n        let client = Client::with_config(config);\n\n        let openai = OpenAI::builder()\n            .client(client)\n            .default_prompt_model(\"gpt-4.1-mini\")\n            .use_responses_api(true)\n            .build()\n            .unwrap();\n\n        let schema = schema_for!(SimpleOutput);\n        let err = openai\n            .structured_prompt_dyn(\"Render\".into(), schema)\n            .await\n            .unwrap_err();\n\n        assert!(matches!(err, LanguageModelError::PermanentError(_)));\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/openai/tool_schema.rs",
    "content": "use serde_json::{Map, Value};\nuse swiftide_core::chat_completion::{ToolSpec, ToolSpecError};\nuse thiserror::Error;\n\ntype SchemaNormalizer = fn(&mut Value) -> Result<(), OpenAiToolSchemaError>;\ntype SchemaValidator = fn(&Value) -> Result<(), OpenAiToolSchemaError>;\n\n#[derive(Debug)]\npub(super) struct OpenAiToolSchema(Value);\n\nimpl OpenAiToolSchema {\n    pub(super) fn into_value(self) -> Value {\n        self.0\n    }\n}\n\nimpl TryFrom<&ToolSpec> for OpenAiToolSchema {\n    type Error = OpenAiToolSchemaError;\n\n    fn try_from(spec: &ToolSpec) -> Result<Self, Self::Error> {\n        let value = OpenAiSchemaPipeline::apply(spec.canonical_parameters_schema_json()?)?;\n        Ok(Self(value))\n    }\n}\n\n#[derive(Debug, Error)]\npub(super) enum OpenAiToolSchemaError {\n    #[error(\"{0}\")]\n    InvalidParametersSchema(String),\n    #[error(\"OpenAI strict tool schemas do not support `{keyword}` at {path}\")]\n    UnsupportedKeyword { path: String, keyword: &'static str },\n    #[error(\"OpenAI strict tool schemas do not support array-valued `type` at {path}\")]\n    UnsupportedTypeUnion { path: String },\n}\n\nimpl From<ToolSpecError> for OpenAiToolSchemaError {\n    fn from(value: ToolSpecError) -> Self {\n        Self::InvalidParametersSchema(value.to_string())\n    }\n}\n\nstruct OpenAiSchemaPipeline;\n\nimpl OpenAiSchemaPipeline {\n    fn apply(mut schema: Value) -> Result<Value, OpenAiToolSchemaError> {\n        for normalizer in [\n            strip_schema_metadata as SchemaNormalizer,\n            strip_rust_numeric_formats,\n            complete_required_arrays,\n        ] {\n            normalizer(&mut schema)?;\n        }\n\n        {\n            let validator = validate_openai_compatibility as SchemaValidator;\n            validator(&schema)?;\n        }\n\n        Ok(schema)\n    }\n}\n\nfn strip_schema_metadata(schema: &mut Value) -> Result<(), OpenAiToolSchemaError> {\n    walk_schema_mut(schema, &SchemaPath::root(), &mut 
|node, _| {\n        node.remove(\"$schema\");\n        Ok(())\n    })\n}\n\nfn strip_rust_numeric_formats(schema: &mut Value) -> Result<(), OpenAiToolSchemaError> {\n    walk_schema_mut(schema, &SchemaPath::root(), &mut |node, _| {\n        let should_strip = node\n            .get(\"format\")\n            .and_then(Value::as_str)\n            .is_some_and(is_rust_numeric_format);\n\n        if should_strip {\n            node.remove(\"format\");\n        }\n\n        Ok(())\n    })\n}\n\nfn complete_required_arrays(schema: &mut Value) -> Result<(), OpenAiToolSchemaError> {\n    walk_schema_mut(schema, &SchemaPath::root(), &mut |node, _| {\n        let Some(properties) = node.get(\"properties\").and_then(Value::as_object) else {\n            return Ok(());\n        };\n\n        node.insert(\n            \"required\".to_string(),\n            Value::Array(properties.keys().cloned().map(Value::String).collect()),\n        );\n\n        Ok(())\n    })\n}\n\nfn validate_openai_compatibility(schema: &Value) -> Result<(), OpenAiToolSchemaError> {\n    walk_schema(schema, &SchemaPath::root(), &mut |node, path| {\n        if node.contains_key(\"oneOf\") {\n            return Err(OpenAiToolSchemaError::UnsupportedKeyword {\n                path: path.to_string(),\n                keyword: \"oneOf\",\n            });\n        }\n\n        if matches!(node.get(\"type\"), Some(Value::Array(_))) {\n            return Err(OpenAiToolSchemaError::UnsupportedTypeUnion {\n                path: path.to_string(),\n            });\n        }\n\n        Ok(())\n    })\n}\n\nfn is_rust_numeric_format(format: &str) -> bool {\n    matches!(\n        format,\n        \"int8\"\n            | \"int16\"\n            | \"int32\"\n            | \"int64\"\n            | \"int128\"\n            | \"isize\"\n            | \"uint\"\n            | \"uint8\"\n            | \"uint16\"\n            | \"uint32\"\n            | \"uint64\"\n            | \"uint128\"\n            | \"usize\"\n    
)\n}\n\nfn walk_schema_mut(\n    value: &mut Value,\n    path: &SchemaPath,\n    visitor: &mut impl FnMut(&mut Map<String, Value>, &SchemaPath) -> Result<(), OpenAiToolSchemaError>,\n) -> Result<(), OpenAiToolSchemaError> {\n    let Value::Object(node) = value else {\n        return Ok(());\n    };\n\n    visitor(node, path)?;\n    walk_schema_children_mut(node, path, visitor)\n}\n\nfn walk_schema_children_mut(\n    node: &mut Map<String, Value>,\n    path: &SchemaPath,\n    visitor: &mut impl FnMut(&mut Map<String, Value>, &SchemaPath) -> Result<(), OpenAiToolSchemaError>,\n) -> Result<(), OpenAiToolSchemaError> {\n    for key in [\"items\", \"contains\", \"if\", \"then\", \"else\", \"not\"] {\n        if let Some(child) = node.get_mut(key) {\n            walk_schema_mut(child, &path.with_key(key), visitor)?;\n        }\n    }\n\n    for key in [\"anyOf\", \"oneOf\", \"allOf\", \"prefixItems\"] {\n        let Some(entries) = node.get_mut(key).and_then(Value::as_array_mut) else {\n            continue;\n        };\n\n        for (index, child) in entries.iter_mut().enumerate() {\n            walk_schema_mut(child, &path.with_index(key, index), visitor)?;\n        }\n    }\n\n    for key in [\"properties\", \"$defs\", \"definitions\", \"dependentSchemas\"] {\n        let Some(entries) = node.get_mut(key).and_then(Value::as_object_mut) else {\n            continue;\n        };\n\n        for (entry_key, child) in entries.iter_mut() {\n            walk_schema_mut(child, &path.with_key(key).with_key(entry_key), visitor)?;\n        }\n    }\n\n    Ok(())\n}\n\nfn walk_schema(\n    value: &Value,\n    path: &SchemaPath,\n    visitor: &mut impl FnMut(&Map<String, Value>, &SchemaPath) -> Result<(), OpenAiToolSchemaError>,\n) -> Result<(), OpenAiToolSchemaError> {\n    let Value::Object(node) = value else {\n        return Ok(());\n    };\n\n    visitor(node, path)?;\n    walk_schema_children(node, path, visitor)\n}\n\nfn walk_schema_children(\n    node: &Map<String, 
Value>,\n    path: &SchemaPath,\n    visitor: &mut impl FnMut(&Map<String, Value>, &SchemaPath) -> Result<(), OpenAiToolSchemaError>,\n) -> Result<(), OpenAiToolSchemaError> {\n    for key in [\"items\", \"contains\", \"if\", \"then\", \"else\", \"not\"] {\n        if let Some(child) = node.get(key) {\n            walk_schema(child, &path.with_key(key), visitor)?;\n        }\n    }\n\n    for key in [\"anyOf\", \"oneOf\", \"allOf\", \"prefixItems\"] {\n        let Some(entries) = node.get(key).and_then(Value::as_array) else {\n            continue;\n        };\n\n        for (index, child) in entries.iter().enumerate() {\n            walk_schema(child, &path.with_index(key, index), visitor)?;\n        }\n    }\n\n    for key in [\"properties\", \"$defs\", \"definitions\", \"dependentSchemas\"] {\n        let Some(entries) = node.get(key).and_then(Value::as_object) else {\n            continue;\n        };\n\n        for (entry_key, child) in entries {\n            walk_schema(child, &path.with_key(key).with_key(entry_key), visitor)?;\n        }\n    }\n\n    Ok(())\n}\n\n#[derive(Clone, Debug)]\nstruct SchemaPath(Vec<String>);\n\nimpl SchemaPath {\n    fn root() -> Self {\n        Self(vec![\"$\".to_string()])\n    }\n\n    fn with_key(&self, key: impl Into<String>) -> Self {\n        let mut path = self.0.clone();\n        path.push(key.into());\n        Self(path)\n    }\n\n    fn with_index(&self, key: impl Into<String>, index: usize) -> Self {\n        let mut path = self.0.clone();\n        path.push(key.into());\n        path.push(index.to_string());\n        Self(path)\n    }\n}\n\nimpl std::fmt::Display for SchemaPath {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}\", self.0.join(\".\"))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use schemars::JsonSchema;\n    use serde_json::json;\n    use swiftide_core::chat_completion::ToolSpec;\n\n    use super::OpenAiToolSchema;\n\n    #[derive(serde::Serialize, 
serde::Deserialize, JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentArgs {\n        request: NestedCommentRequest,\n    }\n\n    #[derive(serde::Serialize, serde::Deserialize, JsonSchema)]\n    #[serde(deny_unknown_fields)]\n    struct NestedCommentRequest {\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        body: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        text: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        page_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        block_id: Option<String>,\n        #[serde(default, skip_serializing_if = \"Option::is_none\")]\n        discussion_id: Option<String>,\n    }\n\n    #[test]\n    fn openai_tool_schema_strips_schema_metadata_and_rust_formats() {\n        let spec = ToolSpec::builder()\n            .name(\"comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(\n                serde_json::from_value::<schemars::Schema>(json!({\n                    \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"page_size\": {\n                            \"type\": [\"integer\", \"null\"],\n                            \"format\": \"uint\",\n                            \"minimum\": 0\n                        }\n                    }\n                }))\n                .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let schema = OpenAiToolSchema::try_from(&spec).unwrap().into_value();\n\n        assert!(schema.get(\"$schema\").is_none());\n        assert_eq!(\n            schema[\"properties\"][\"page_size\"][\"anyOf\"],\n            json!([\n                { \"type\": \"integer\", \"minimum\": 0 },\n                { \"type\": \"null\" }\n            
])\n        );\n    }\n\n    #[test]\n    fn openai_tool_schema_adds_recursive_required_arrays() {\n        let spec = ToolSpec::builder()\n            .name(\"comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(schemars::schema_for!(NestedCommentArgs))\n            .build()\n            .unwrap();\n\n        let schema = OpenAiToolSchema::try_from(&spec).unwrap().into_value();\n        let nested_ref = schema[\"properties\"][\"request\"][\"$ref\"]\n            .as_str()\n            .expect(\"nested request should be referenced\");\n        let nested_name = nested_ref\n            .rsplit('/')\n            .next()\n            .expect(\"nested request ref name\");\n\n        assert_eq!(\n            schema[\"$defs\"][nested_name][\"required\"],\n            json!([\"block_id\", \"body\", \"discussion_id\", \"page_id\", \"text\"])\n        );\n    }\n\n    #[test]\n    fn openai_tool_schema_rejects_non_nullable_one_of() {\n        let spec = ToolSpec::builder()\n            .name(\"comment\")\n            .description(\"Create a comment\")\n            .parameters_schema(\n                serde_json::from_value::<schemars::Schema>(json!({\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"content\": {\n                            \"oneOf\": [\n                                { \"type\": \"string\" },\n                                { \"type\": \"integer\" }\n                            ]\n                        }\n                    }\n                }))\n                .unwrap(),\n            )\n            .build()\n            .unwrap();\n\n        let error = OpenAiToolSchema::try_from(&spec).expect_err(\"oneOf should be rejected\");\n        assert!(error.to_string().contains(\"`oneOf`\"));\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/parquet/loader.rs",
    "content": "use anyhow::{Context as _, Result};\nuse arrow_array::{LargeStringArray, StringArray, StringViewArray};\nuse fs_err::tokio::File;\nuse futures_util::StreamExt as _;\nuse parquet::arrow::{ParquetRecordBatchStreamBuilder, ProjectionMask};\nuse swiftide_core::{\n    Loader,\n    indexing::{IndexingStream, TextNode},\n};\nuse tokio::runtime::Handle;\n\nuse super::Parquet;\n\nimpl Loader for Parquet {\n    type Output = String;\n\n    fn into_stream(self) -> IndexingStream<String> {\n        let mut builder = tokio::task::block_in_place(|| {\n            Handle::current().block_on(async {\n                let file = File::open(self.path).await.expect(\"Failed to open file\");\n\n                ParquetRecordBatchStreamBuilder::new(file)\n                    .await\n                    .context(\"Failed to load builder\")\n                    .unwrap()\n                    .with_batch_size(self.batch_size)\n            })\n        });\n\n        let file_metadata = builder.metadata().file_metadata().clone();\n        dbg!(file_metadata.schema_descr().columns());\n        let column_idx = file_metadata\n            .schema()\n            .get_fields()\n            .iter()\n            .enumerate()\n            .find_map(|(pos, column)| {\n                if self.column_name == column.name() {\n                    Some(pos)\n                } else {\n                    None\n                }\n            })\n            .unwrap_or_else(|| panic!(\"Column {} not found in dataset\", &self.column_name));\n\n        let mask = ProjectionMask::roots(file_metadata.schema_descr(), [column_idx]);\n        builder = builder.with_projection(mask);\n\n        let stream = builder.build().expect(\"Failed to build parquet builder\");\n\n        let swiftide_stream = stream.flat_map_unordered(None, move |result_batch| {\n            let Ok(batch) = result_batch else {\n                let new_result: Result<TextNode> = 
Err(anyhow::anyhow!(result_batch.unwrap_err()));\n\n                return vec![new_result].into();\n            };\n            assert!(batch.num_columns() == 1, \"Number of columns _must_ be 1\");\n\n            let column = batch.column(0); // Should only have one column at this point\n            let node_values = if let Some(values) = column.as_any().downcast_ref::<StringArray>() {\n                values\n                    .iter()\n                    .flatten()\n                    .map(TextNode::from)\n                    .map(Ok)\n                    .collect::<Vec<_>>()\n            } else if let Some(values) = column.as_any().downcast_ref::<LargeStringArray>() {\n                values\n                    .iter()\n                    .flatten()\n                    .map(TextNode::from)\n                    .map(Ok)\n                    .collect::<Vec<_>>()\n            } else if let Some(values) = column.as_any().downcast_ref::<StringViewArray>() {\n                values\n                    .iter()\n                    .flatten()\n                    .map(TextNode::from)\n                    .map(Ok)\n                    .collect::<Vec<_>>()\n            } else {\n                let new_result: Result<TextNode> = Err(anyhow::anyhow!(\n                    \"Parquet column is not a string array (got {:?})\",\n                    column.data_type()\n                ));\n\n                return vec![new_result].into();\n            };\n\n            IndexingStream::iter(node_values)\n        });\n\n        swiftide_stream.boxed().into()\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<String> {\n        self.into_stream()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::path::PathBuf;\n\n    use futures_util::TryStreamExt as _;\n\n    use super::*;\n\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_parquet_loader() {\n        let mut path = 
PathBuf::from(env!(\"CARGO_MANIFEST_DIR\"));\n        path.push(\"src/parquet/test.parquet\");\n\n        let loader = Parquet::builder()\n            .path(path)\n            .column_name(\"chunk\")\n            .build()\n            .unwrap();\n\n        let result = loader.into_stream().try_collect::<Vec<_>>().await.unwrap();\n\n        let expected = [TextNode::new(\"hello\"), TextNode::new(\"world\")];\n        assert_eq!(result, expected);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/parquet/mod.rs",
    "content": "//! Stream data from parquet files\nuse std::path::PathBuf;\n\nuse derive_builder::Builder;\n\npub mod loader;\n\n/// Stream data from parquet files on a single column\n///\n/// Provide a path, column and optional batch size. The column must be of type `StringArray`. Then\n/// the column is loaded into the chunks of the Node.\n///\n/// # Panics\n///\n/// The loader can panic during initialization if anything with parquet or arrow fails before\n/// starting the stream.\n#[derive(Debug, Clone, Builder)]\n#[builder(setter(into, strip_option))]\npub struct Parquet {\n    path: PathBuf,\n    column_name: String,\n    #[builder(default = \"1024\")]\n    batch_size: usize,\n}\n\nimpl Parquet {\n    pub fn builder() -> ParquetBuilder {\n        ParquetBuilder::default()\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/pgvector/fixtures.rs",
    "content": "//! Test fixtures and utilities for pgvector integration testing.\n//!\n//! Provides test infrastructure and helper types to verify vector storage and retrieval:\n//! - Mock data generation for different embedding modes\n//! - Test containers for `PostgreSQL` with pgvector extension\n//! - Common test scenarios and assertions\n//!\n//! # Examples\n//!\n//! ```rust\n//! use swiftide_integrations::pgvector::fixtures::{TestContext, PgVectorTestData};\n//! use swiftide_core::indexing::{EmbedMode, EmbeddedField};\n//!\n//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {\n//! // Initialize test context with PostgreSQL container\n//! let context = TestContext::setup_with_cfg(\n//!     Some(vec![\"category\", \"priority\"]),\n//!     vec![EmbeddedField::Combined].into_iter().collect()\n//! ).await?;\n//!\n//! // Create test data for different embedding modes\n//! let test_data = PgVectorTestData {\n//!     embed_mode: EmbedMode::SingleWithMetadata,\n//!     chunk: \"test content\",\n//!     metadata: None,\n//!     vectors: vec![PgVectorTestData::create_test_vector(\n//!         EmbeddedField::Combined,\n//!         1.0\n//!     )],\n//! };\n//! # Ok(())\n//! # }\n//! ```\n//!\n//! The module supports testing for:\n//! - Single embedding with/without metadata\n//! - Per-field embeddings\n//! - Combined embedding modes\n//! - Different vector configurations\n//! 
- Various metadata scenarios\nuse crate::pgvector::PgVector;\nuse std::collections::HashSet;\nuse swiftide_core::{\n    Persist,\n    indexing::{self, EmbeddedField},\n};\nuse testcontainers::{ContainerAsync, GenericImage};\n\n/// Test data structure for pgvector integration testing.\n///\n/// Provides a flexible structure to test different embedding modes and configurations,\n/// including metadata handling and vector generation.\n///\n/// # Examples\n///\n/// ```rust\n/// use swiftide_integrations::pgvector::fixtures::PgVectorTestData;\n/// use swiftide_core::indexing::{EmbedMode, EmbeddedField};\n///\n/// let test_data = PgVectorTestData {\n///     embed_mode: EmbedMode::SingleWithMetadata,\n///     chunk: \"test content\",\n///     metadata: None,\n///     vectors: vec![PgVectorTestData::create_test_vector(\n///         EmbeddedField::Combined,\n///         1.0\n///     )],\n///     expected_in_results: true,\n/// };\n/// ```\n#[derive(Clone)]\npub(crate) struct PgVectorTestData<'a> {\n    /// Embedding mode for the test case\n    pub embed_mode: indexing::EmbedMode,\n    /// Test content chunk\n    pub chunk: &'a str,\n    /// Optional metadata for testing metadata handling\n    pub metadata: Option<indexing::Metadata>,\n    /// Vector embeddings with their corresponding fields\n    pub vectors: Vec<(indexing::EmbeddedField, Vec<f32>)>,\n    /// Whether the entry is expected to appear in query results\n    pub expected_in_results: bool,\n}\n\nimpl PgVectorTestData<'_> {\n    pub(crate) fn to_node(&self) -> indexing::TextNode {\n        // Create the initial builder\n        let mut base_builder = indexing::TextNode::builder();\n\n        // Set the required fields\n        let mut builder = base_builder.chunk(self.chunk).embed_mode(self.embed_mode);\n\n        // Add metadata if it exists\n        if let Some(metadata) = &self.metadata {\n            builder = builder.metadata(metadata.clone());\n        }\n\n        // Build the node and add vectors\n        let mut node = builder.build().unwrap();\n        node.vectors = 
Some(self.vectors.clone().into_iter().collect());\n        node\n    }\n\n    pub(crate) fn create_test_vector(\n        field: EmbeddedField,\n        base_value: f32,\n    ) -> (EmbeddedField, Vec<f32>) {\n        (field, vec![base_value; 384])\n    }\n}\n\n/// Test context managing `PostgreSQL` container and pgvector storage.\n///\n/// Handles the lifecycle of test containers and provides configured storage\n/// instances for testing.\n///\n/// # Examples\n///\n/// ```rust\n/// # use swiftide_integrations::pgvector::fixtures::TestContext;\n/// # use swiftide_core::indexing::EmbeddedField;\n/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {\n/// // Setup test context with specific configuration\n/// let context = TestContext::setup_with_cfg(\n///     Some(vec![\"category\"]),\n///     vec![EmbeddedField::Combined].into_iter().collect()\n/// ).await?;\n///\n/// // Use context for testing\n/// context.pgv_storage.setup().await?;\n/// # Ok(())\n/// # }\n/// ```\npub(crate) struct TestContext {\n    /// Configured pgvector storage instance\n    pub(crate) pgv_storage: PgVector,\n    /// Container instance running `PostgreSQL` with pgvector\n    _pgv_db_container: ContainerAsync<GenericImage>,\n}\n\nimpl TestContext {\n    /// Set up the test context, initializing `PostgreSQL` and `PgVector` storage\n    /// with configurable metadata fields\n    pub(crate) async fn setup_with_cfg(\n        metadata_fields: Option<Vec<&str>>,\n        vector_fields: HashSet<EmbeddedField>,\n    ) -> Result<Self, Box<dyn std::error::Error>> {\n        // Start `PostgreSQL` container and obtain the connection URL\n        let (pgv_db_container, pgv_db_url) = swiftide_test_utils::start_postgres().await;\n        tracing::info!(\"Postgres database URL: {:#?}\", pgv_db_url);\n\n        // Initialize the connection pool outside of the builder chain\n        let mut connection_pool = PgVector::builder();\n\n        // Configure PgVector storage\n        let mut builder = 
connection_pool\n            .db_url(pgv_db_url)\n            .vector_size(384)\n            .table_name(\"swiftide_pgvector_test\".to_string());\n\n        // Add all vector fields\n        for vector_field in vector_fields {\n            builder = builder.with_vector(vector_field);\n        }\n\n        // Add all metadata fields\n        if let Some(metadata_fields_inner) = metadata_fields {\n            for field in metadata_fields_inner {\n                builder = builder.with_metadata(field);\n            }\n        }\n\n        let pgv_storage = builder.build().map_err(|err| {\n            tracing::error!(\"Failed to build PgVector: {}\", err);\n            err\n        })?;\n\n        // Set up PgVector storage (create the table if not exists)\n        pgv_storage.setup().await.map_err(|err| {\n            tracing::error!(\"PgVector setup failed: {}\", err);\n            err\n        })?;\n\n        Ok(Self {\n            pgv_storage,\n            _pgv_db_container: pgv_db_container,\n        })\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/pgvector/mod.rs",
    "content": "//! Integration module for `PostgreSQL` vector database (pgvector) operations.\n//!\n//! This module provides a client interface for vector similarity search operations using pgvector,\n//! supporting:\n//! - Vector collection management with configurable schemas\n//! - Efficient vector storage and indexing\n//! - Connection pooling with automatic retries\n//! - Batch operations for optimized performance\n//! - Metadata included in retrieval\n//!\n//! The functionality is primarily used through the [`PgVector`] client, which implements\n//! the [`Persist`] trait for seamless integration with indexing and query pipelines.\n//!\n//! # Example\n//! ```rust\n//! # use swiftide_integrations::pgvector::PgVector;\n//! # async fn example() -> anyhow::Result<()> {\n//! let client = PgVector::builder()\n//!     .db_url(\"postgresql://localhost:5432/vectors\")\n//!     .vector_size(384)\n//!     .build()?;\n//!\n//! # Ok(())\n//! # }\n//! ```\n#[cfg(test)]\nmod fixtures;\n\nmod persist;\nmod pgv_table_types;\nmod retrieve;\nuse anyhow::Result;\nuse derive_builder::Builder;\nuse sqlx::PgPool;\nuse std::fmt;\nuse std::sync::Arc;\nuse std::sync::OnceLock;\nuse tokio::time::Duration;\n\npub use pgv_table_types::{FieldConfig, MetadataConfig, VectorConfig};\n\n/// Default maximum connections for the database connection pool.\nconst DB_POOL_CONN_MAX: u32 = 10;\n\n/// Default maximum retries for database connection attempts.\nconst DB_POOL_CONN_RETRY_MAX: u32 = 3;\n\n/// Delay between connection retry attempts, in seconds.\nconst DB_POOL_CONN_RETRY_DELAY_SECS: u64 = 3;\n\n/// Default batch size for storing nodes.\nconst BATCH_SIZE: usize = 50;\n\n/// Represents a Pgvector client with configuration options.\n///\n/// This struct is used to interact with the Pgvector vector database, providing methods to manage\n/// vector collections, store data, and ensure efficient searches. 
The client can be cloned with low\n/// cost as it shares connections.\n#[derive(Builder, Clone)]\n#[builder(setter(into, strip_option), build_fn(error = \"anyhow::Error\"))]\npub struct PgVector {\n    /// Name of the table to store vectors.\n    #[builder(default = \"String::from(\\\"swiftide_pgv_store\\\")\")]\n    table_name: String,\n\n    /// Default vector size; can be customized per configuration.\n    vector_size: i32,\n\n    /// Batch size for storing nodes.\n    #[builder(default = \"BATCH_SIZE\")]\n    batch_size: usize,\n\n    /// Field configurations for the `PgVector` table schema.\n    ///\n    /// Supports multiple field types (see [`FieldConfig`]).\n    #[builder(default)]\n    fields: Vec<FieldConfig>,\n\n    /// Database connection URL.\n    db_url: String,\n\n    /// Maximum connections allowed in the connection pool.\n    #[builder(default = \"DB_POOL_CONN_MAX\")]\n    db_max_connections: u32,\n\n    /// Maximum retry attempts for establishing a database connection.\n    #[builder(default = \"DB_POOL_CONN_RETRY_MAX\")]\n    db_max_retry: u32,\n\n    /// Delay between retry attempts for database connections.\n    #[builder(default = \"Duration::from_secs(DB_POOL_CONN_RETRY_DELAY_SECS)\")]\n    db_conn_retry_delay: Duration,\n\n    /// Lazy-initialized database connection pool.\n    #[builder(default = \"Arc::new(OnceLock::new())\")]\n    connection_pool: Arc<OnceLock<PgPool>>,\n\n    /// SQL statement used for executing bulk insert.\n    #[builder(default = \"Arc::new(OnceLock::new())\")]\n    sql_stmt_bulk_insert: Arc<OnceLock<String>>,\n}\n\nimpl fmt::Debug for PgVector {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.debug_struct(\"PgVector\")\n            .field(\"table_name\", &self.table_name)\n            .field(\"vector_size\", &self.vector_size)\n            .field(\"batch_size\", &self.batch_size)\n            .finish()\n    }\n}\n\nimpl PgVector {\n    /// Creates a new instance of `PgVectorBuilder` with 
default settings.\n    ///\n    /// # Returns\n    ///\n    /// A new `PgVectorBuilder`.\n    pub fn builder() -> PgVectorBuilder {\n        PgVectorBuilder::default()\n    }\n\n    /// Retrieves a connection pool for `PostgreSQL`.\n    ///\n    /// This function returns the connection pool used for interacting with the `PostgreSQL`\n    /// database. It fetches the pool from the `PgDBConnectionPool` struct.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` that, on success, contains the `PgPool` representing the database connection\n    /// pool. On failure, an error is returned.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if it fails to retrieve the connection pool, which could\n    /// occur if the underlying connection to `PostgreSQL` has not been properly established.\n    pub async fn get_pool(&self) -> Result<&PgPool> {\n        self.pool_get_or_initialize().await\n    }\n\n    pub fn get_table_name(&self) -> &str {\n        &self.table_name\n    }\n}\n\nimpl PgVectorBuilder {\n    /// Adds a vector configuration to the builder.\n    ///\n    /// # Arguments\n    ///\n    /// * `config` - The vector configuration to add, which can be converted into a `VectorConfig`.\n    ///\n    /// # Returns\n    ///\n    /// A mutable reference to the builder with the new vector configuration added.\n    pub fn with_vector(&mut self, config: impl Into<VectorConfig>) -> &mut Self {\n        // Use `get_or_insert_with` to initialize `fields` if it's `None`\n        self.fields\n            .get_or_insert_with(Self::default_fields)\n            .push(FieldConfig::Vector(config.into()));\n\n        self\n    }\n\n    /// Sets the metadata configuration for the vector similarity search.\n    ///\n    /// This method allows you to specify metadata configurations for vector similarity search using\n    /// `MetadataConfig`. 
The provided configuration will be added as a new field in the\n    /// builder.\n    ///\n    /// # Arguments\n    ///\n    /// * `config` - The metadata configuration to use.\n    ///\n    /// # Returns\n    ///\n    /// * Returns a mutable reference to `self` for method chaining.\n    pub fn with_metadata(&mut self, config: impl Into<MetadataConfig>) -> &mut Self {\n        // Use `get_or_insert_with` to initialize `fields` if it's `None`\n        self.fields\n            .get_or_insert_with(Self::default_fields)\n            .push(FieldConfig::Metadata(config.into()));\n\n        self\n    }\n\n    pub fn default_fields() -> Vec<FieldConfig> {\n        vec![FieldConfig::ID, FieldConfig::Chunk]\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::pgvector::fixtures::{PgVectorTestData, TestContext};\n    use futures_util::TryStreamExt;\n    use std::collections::HashSet;\n    use swiftide_core::{\n        Persist, Retrieve,\n        document::Document,\n        indexing::{self, EmbedMode, EmbeddedField},\n        querying::{Query, search_strategies::SimilaritySingleEmbedding, states},\n    };\n    use test_case::test_case;\n\n    #[test_log::test(tokio::test)]\n    async fn test_metadata_filter_with_vector_search() {\n        let test_context = TestContext::setup_with_cfg(\n            vec![\"category\", \"priority\"].into(),\n            HashSet::from([EmbeddedField::Combined]),\n        )\n        .await\n        .expect(\"Test setup failed\");\n\n        // Create nodes with different metadata and vectors\n        let nodes = vec![\n            indexing::TextNode::new(\"content1\")\n                .with_vectors([(EmbeddedField::Combined, vec![1.0; 384])])\n                .with_metadata(vec![(\"category\", \"A\"), (\"priority\", \"1\")]),\n            indexing::TextNode::new(\"content2\")\n                .with_vectors([(EmbeddedField::Combined, vec![1.1; 384])])\n                .with_metadata(vec![(\"category\", \"A\"), (\"priority\", \"2\")]),\n          
  indexing::TextNode::new(\"content3\")\n                .with_vectors([(EmbeddedField::Combined, vec![1.2; 384])])\n                .with_metadata(vec![(\"category\", \"B\"), (\"priority\", \"1\")]),\n        ]\n        .into_iter()\n        .map(|node| node.to_owned())\n        .collect();\n\n        // Store all nodes\n        test_context\n            .pgv_storage\n            .batch_store(nodes)\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .unwrap();\n\n        // Test combined metadata and vector search\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n        query.embedding = Some(vec![1.0; 384]);\n\n        let search_strategy =\n            SimilaritySingleEmbedding::from_filter(\"category = \\\"A\\\"\".to_string());\n\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 2);\n\n        let contents = result\n            .documents()\n            .iter()\n            .map(Document::content)\n            .collect::<Vec<_>>();\n        assert!(contents.contains(&\"content1\"));\n        assert!(contents.contains(&\"content2\"));\n\n        // Additional test with priority filter\n        let search_strategy =\n            SimilaritySingleEmbedding::from_filter(\"priority = \\\"1\\\"\".to_string());\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query)\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 2);\n        let contents = result\n            .documents()\n            .iter()\n            .map(Document::content)\n            .collect::<Vec<_>>();\n        assert!(contents.contains(&\"content1\"));\n        assert!(contents.contains(&\"content3\"));\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn 
test_vector_similarity_search_accuracy() {\n        let test_context = TestContext::setup_with_cfg(\n            vec![\"category\", \"priority\"].into(),\n            HashSet::from([EmbeddedField::Combined]),\n        )\n        .await\n        .expect(\"Test setup failed\");\n\n        // Create nodes with known vector relationships\n        let base_vector = vec![1.0; 384];\n        let similar_vector = base_vector.iter().map(|x| x + 0.1).collect::<Vec<_>>();\n        let dissimilar_vector = vec![-1.0; 384];\n\n        let nodes = vec![\n            indexing::TextNode::new(\"base_content\")\n                .with_vectors([(EmbeddedField::Combined, base_vector)])\n                .with_metadata(vec![(\"category\", \"A\"), (\"priority\", \"1\")]),\n            indexing::TextNode::new(\"similar_content\")\n                .with_vectors([(EmbeddedField::Combined, similar_vector)])\n                .with_metadata(vec![(\"category\", \"A\"), (\"priority\", \"2\")]),\n            indexing::TextNode::new(\"dissimilar_content\")\n                .with_vectors([(EmbeddedField::Combined, dissimilar_vector)])\n                .with_metadata(vec![(\"category\", \"B\"), (\"priority\", \"1\")]),\n        ]\n        .into_iter()\n        .map(|node| node.to_owned())\n        .collect();\n\n        // Store all nodes\n        test_context\n            .pgv_storage\n            .batch_store(nodes)\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .unwrap();\n\n        // Search with base vector\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n        query.embedding = Some(vec![1.0; 384]);\n\n        let mut search_strategy = SimilaritySingleEmbedding::<()>::default();\n        search_strategy.with_top_k(2);\n\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query)\n            .await\n            .unwrap();\n\n        // Verify that similar vectors are retrieved 
first\n        assert_eq!(result.documents().len(), 2);\n        let contents = result\n            .documents()\n            .iter()\n            .map(Document::content)\n            .collect::<Vec<_>>();\n        assert!(contents.contains(&\"base_content\"));\n        assert!(contents.contains(&\"similar_content\"));\n    }\n\n    #[test_case(\n        // SingleWithMetadata - No Metadata\n        vec![\n            PgVectorTestData {\n                embed_mode: EmbedMode::SingleWithMetadata,\n                chunk: \"single_no_meta_1\",\n                metadata: None,\n                vectors: vec![PgVectorTestData::create_test_vector(EmbeddedField::Combined, 1.0)],\n                expected_in_results: true,\n            },\n            PgVectorTestData {\n                embed_mode: EmbedMode::SingleWithMetadata,\n                chunk: \"single_no_meta_2\",\n                metadata: None,\n                vectors: vec![PgVectorTestData::create_test_vector(EmbeddedField::Combined, 1.1)],\n                expected_in_results: true,\n            }\n        ],\n        HashSet::from([EmbeddedField::Combined])\n        ; \"SingleWithMetadata mode without metadata\")]\n    #[test_case(\n        // SingleWithMetadata - With Metadata\n        vec![\n            PgVectorTestData {\n                embed_mode: EmbedMode::SingleWithMetadata,\n                chunk: \"single_with_meta_1\",\n                metadata: Some(vec![\n                    (\"category\", \"A\"),\n                    (\"priority\", \"high\")\n                ].into()),\n                vectors: vec![PgVectorTestData::create_test_vector(EmbeddedField::Combined, 1.2)],\n                expected_in_results: true,\n            },\n            PgVectorTestData {\n                embed_mode: EmbedMode::SingleWithMetadata,\n                chunk: \"single_with_meta_2\",\n                metadata: Some(vec![\n                    (\"category\", \"B\"),\n                    (\"priority\", \"low\")\n       
         ].into()),\n                vectors: vec![PgVectorTestData::create_test_vector(EmbeddedField::Combined, 1.3)],\n                expected_in_results: true,\n            }\n        ],\n        HashSet::from([EmbeddedField::Combined])\n        ; \"SingleWithMetadata mode with metadata\")]\n    #[test_log::test(tokio::test)]\n    async fn test_persist_nodes(\n        test_cases: Vec<PgVectorTestData<'_>>,\n        vector_fields: HashSet<EmbeddedField>,\n    ) {\n        // Extract all possible metadata fields from test cases\n        let metadata_fields: Vec<&str> = test_cases\n            .iter()\n            .filter_map(|case| case.metadata.as_ref())\n            .flat_map(|metadata| metadata.iter().map(|(key, _)| key.as_str()))\n            .collect::<std::collections::HashSet<_>>()\n            .into_iter()\n            .collect();\n\n        // Initialize test context with all required metadata fields\n        let test_context = TestContext::setup_with_cfg(Some(metadata_fields), vector_fields)\n            .await\n            .expect(\"Test setup failed\");\n\n        // Convert test cases to nodes and store them\n        let nodes: Vec<indexing::TextNode> =\n            test_cases.iter().map(PgVectorTestData::to_node).collect();\n\n        // Test batch storage\n        let stored_nodes = test_context\n            .pgv_storage\n            .batch_store(nodes.clone())\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .expect(\"Failed to store nodes\");\n\n        assert_eq!(\n            stored_nodes.len(),\n            nodes.len(),\n            \"All nodes should be stored\"\n        );\n\n        // Verify storage and retrieval for each test case\n        for (test_case, stored_node) in test_cases.iter().zip(stored_nodes.iter()) {\n            // 1. 
Verify basic node properties\n            assert_eq!(\n                stored_node.chunk, test_case.chunk,\n                \"Stored chunk should match\"\n            );\n            assert_eq!(\n                stored_node.embed_mode, test_case.embed_mode,\n                \"Embed mode should match\"\n            );\n\n            // 2. Verify vectors were stored correctly\n            let stored_vectors = stored_node\n                .vectors\n                .as_ref()\n                .expect(\"Vectors should be present\");\n            assert_eq!(\n                stored_vectors.len(),\n                test_case.vectors.len(),\n                \"Vector count should match\"\n            );\n\n            // 3. Test vector similarity search\n            for (field, vector) in &test_case.vectors {\n                let mut query = Query::<states::Pending>::new(\"test_query\");\n                query.embedding = Some(vector.clone());\n\n                let mut search_strategy = SimilaritySingleEmbedding::<()>::default();\n                search_strategy.with_top_k(nodes.len() as u64);\n\n                let result = test_context\n                    .pgv_storage\n                    .retrieve(&search_strategy, query)\n                    .await\n                    .expect(\"Retrieval should succeed\");\n\n                if test_case.expected_in_results {\n                    assert!(\n                        result\n                            .documents()\n                            .iter()\n                            .map(Document::content)\n                            .collect::<Vec<_>>()\n                            .contains(&test_case.chunk),\n                        \"Document should be found in results for field {field}\",\n                    );\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/pgvector/persist.rs",
    "content": "//! Storage persistence implementation for vector embeddings.\n//!\n//! Implements the [`Persist`] trait for [`PgVector`], providing vector storage capabilities:\n//! - Database schema initialization and setup\n//! - Single-node storage operations\n//! - Optimized batch storage with configurable batch sizes\n//!\n//! Metadata fields configured on the table schema are persisted alongside the\n//! vector embeddings as JSONB columns.\n//!\n//! The implementation ensures thread-safe concurrent access and handles\n//! connection management automatically.\nuse crate::pgvector::PgVector;\nuse anyhow::{Result, anyhow};\nuse async_trait::async_trait;\nuse swiftide_core::{\n    Persist,\n    indexing::{IndexingStream, TextNode},\n};\n\n#[async_trait]\nimpl Persist for PgVector {\n    type Input = String;\n    type Output = String;\n    #[tracing::instrument(skip_all)]\n    async fn setup(&self) -> Result<()> {\n        // Get or initialize the connection pool\n        let pool = self.pool_get_or_initialize().await?;\n\n        if self.sql_stmt_bulk_insert.get().is_none() {\n            let sql = self.generate_unnest_upsert_sql()?;\n\n            self.sql_stmt_bulk_insert\n                .set(sql)\n                .map_err(|_| anyhow!(\"SQL bulk store statement is already set\"))?;\n        }\n\n        let mut tx = pool.begin().await?;\n\n        // Create extension\n        let sql = \"CREATE EXTENSION IF NOT EXISTS vector\";\n        sqlx::query(sql).execute(&mut *tx).await?;\n\n        // Create table\n        let create_table_sql = self.generate_create_table_sql()?;\n        sqlx::query(&create_table_sql).execute(&mut *tx).await?;\n\n        // Create HNSW index\n        let index_sql = self.create_index_sql()?;\n        sqlx::query(&index_sql).execute(&mut *tx).await?;\n\n        tx.commit().await?;\n\n        Ok(())\n    }\n\n    #[tracing::instrument(skip_all)]\n    async fn store(&self, node: TextNode) -> Result<TextNode> {\n        let mut nodes = vec![node];\n        
self.store_nodes(&nodes).await?;\n\n        let node = nodes.swap_remove(0);\n\n        Ok(node)\n    }\n\n    #[tracing::instrument(skip_all)]\n    async fn batch_store(&self, nodes: Vec<TextNode>) -> IndexingStream<String> {\n        self.store_nodes(&nodes).await.map(|()| nodes).into()\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        Some(self.batch_size)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::pgvector::fixtures::TestContext;\n    use std::collections::HashSet;\n    use swiftide_core::{Persist, indexing::EmbeddedField};\n\n    #[test_log::test(tokio::test)]\n    async fn test_persist_setup_no_error_when_table_exists() {\n        let test_context = TestContext::setup_with_cfg(\n            vec![\"filter\"].into(),\n            HashSet::from([EmbeddedField::Combined]),\n        )\n        .await\n        .expect(\"Test setup failed\");\n\n        test_context\n            .pgv_storage\n            .setup()\n            .await\n            .expect(\"PgVector setup should not fail when the table already exists\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/pgvector/pgv_table_types.rs",
    "content": "//! `PostgreSQL` table schema and type conversion utilities for vector storage.\n//!\n//! Provides schema configuration and data type conversion functionality:\n//! - Table schema generation with vector and metadata columns\n//! - Field configuration for different vector embedding types\n//! - HNSW index creation for similarity search optimization\n//! - Bulk data preparation and SQL query generation\nuse crate::pgvector::PgVector;\nuse anyhow::{Result, anyhow};\nuse pgvector as ExtPgVector;\nuse regex::Regex;\nuse sqlx::PgPool;\nuse sqlx::postgres::PgArguments;\nuse sqlx::postgres::PgPoolOptions;\nuse std::collections::BTreeMap;\nuse swiftide_core::indexing::{EmbeddedField, TextNode};\nuse tokio::time::sleep;\n\n/// Configuration for vector embedding columns in the `PostgreSQL` table.\n///\n/// This struct defines how vector embeddings are stored and managed in the database,\n/// mapping Swiftide's embedded fields to `PostgreSQL` vector columns.\n#[derive(Clone, Debug)]\npub struct VectorConfig {\n    embedded_field: EmbeddedField,\n    pub field: String,\n}\n\nimpl VectorConfig {\n    pub fn new(embedded_field: &EmbeddedField) -> Self {\n        Self {\n            embedded_field: embedded_field.clone(),\n            field: format!(\n                \"vector_{}\",\n                PgVector::normalize_field_name(&embedded_field.to_string()),\n            ),\n        }\n    }\n}\n\nimpl From<EmbeddedField> for VectorConfig {\n    fn from(val: EmbeddedField) -> Self {\n        Self::new(&val)\n    }\n}\n\n/// Configuration for metadata fields in the `PostgreSQL` table.\n///\n/// Handles the mapping and storage of metadata fields, ensuring proper column naming\n/// and type conversion for `PostgreSQL` compatibility.\n#[derive(Clone, Debug)]\npub struct MetadataConfig {\n    field: String,\n    original_field: String,\n}\n\nimpl MetadataConfig {\n    pub fn new<T: Into<String>>(original_field: T) -> Self {\n        let original: String = 
original_field.into();\n        Self {\n            field: format!(\"meta_{}\", PgVector::normalize_field_name(&original)),\n            original_field: original,\n        }\n    }\n}\n\nimpl<T: AsRef<str>> From<T> for MetadataConfig {\n    fn from(val: T) -> Self {\n        Self::new(val.as_ref())\n    }\n}\n\n/// Field configuration types supported in the `PostgreSQL` table schema.\n///\n/// Represents different field types that can be configured in the table schema,\n/// including vector embeddings, metadata, and system fields.\n#[derive(Clone, Debug)]\npub enum FieldConfig {\n    /// `Vector` - Vector embedding field configuration\n    Vector(VectorConfig),\n    /// `Metadata` - Metadata field configuration\n    Metadata(MetadataConfig),\n    /// `Chunk` - Text content storage field\n    Chunk,\n    /// `ID` - Primary key field\n    ID,\n}\n\nimpl FieldConfig {\n    pub fn field_name(&self) -> &str {\n        match self {\n            FieldConfig::Vector(config) => &config.field,\n            FieldConfig::Metadata(config) => &config.field,\n            FieldConfig::Chunk => \"chunk\",\n            FieldConfig::ID => \"id\",\n        }\n    }\n}\n\n/// Internal structure for managing bulk upsert operations.\n///\n/// Collects and organizes data for efficient bulk insertions and updates,\n/// grouping related fields for UNNEST-based operations.\nstruct BulkUpsertData<'a> {\n    ids: Vec<sqlx::types::Uuid>,\n    chunks: Vec<&'a str>,\n    metadata_fields: Vec<Vec<serde_json::Value>>,\n    vector_fields: Vec<Vec<ExtPgVector::Vector>>,\n    field_mapping: FieldMapping<'a>,\n}\n\nstruct FieldMapping<'a> {\n    metadata_names: Vec<&'a str>,\n    vector_names: Vec<&'a str>,\n}\n\nimpl<'a> BulkUpsertData<'a> {\n    fn new(fields: &'a [FieldConfig], size: usize) -> Self {\n        let (metadata_names, vector_names): (Vec<&str>, Vec<&str>) = (\n            fields\n                .iter()\n                .filter_map(|field| match field {\n                    
FieldConfig::Metadata(config) => Some(config.field.as_str()),\n                    _ => None,\n                })\n                .collect(),\n            fields\n                .iter()\n                .filter_map(|field| match field {\n                    FieldConfig::Vector(config) => Some(config.field.as_str()),\n                    _ => None,\n                })\n                .collect(),\n        );\n\n        Self {\n            ids: Vec::with_capacity(size),\n            chunks: Vec::with_capacity(size),\n            metadata_fields: vec![Vec::with_capacity(size); metadata_names.len()],\n            vector_fields: vec![Vec::with_capacity(size); vector_names.len()],\n            field_mapping: FieldMapping {\n                metadata_names,\n                vector_names,\n            },\n        }\n    }\n\n    fn get_metadata_index(&self, field: &str) -> Option<usize> {\n        self.field_mapping\n            .metadata_names\n            .iter()\n            .position(|&name| name == field)\n    }\n\n    fn get_vector_index(&self, field: &str) -> Option<usize> {\n        self.field_mapping\n            .vector_names\n            .iter()\n            .position(|&name| name == field)\n    }\n}\n\nimpl PgVector {\n    /// Generates a SQL statement to create a table for storing vector embeddings.\n    ///\n    /// The table will include columns for an ID, chunk data, metadata, and a vector embedding.\n    ///\n    /// # Returns\n    ///\n    /// * The generated SQL statement.\n    ///\n    /// # Errors\n    ///\n    /// * Returns an error if the table name is invalid or if `vector_size` is not configured.\n    pub fn generate_create_table_sql(&self) -> Result<String> {\n        // Validate table_name and field_name (e.g., check against allowed patterns)\n        if !Self::is_valid_identifier(&self.table_name) {\n            return Err(anyhow::anyhow!(\"Invalid table name\"));\n        }\n\n        let columns: Vec<String> = self\n            .fields\n      
      .iter()\n            .map(|field| match field {\n                FieldConfig::ID => \"id UUID NOT NULL\".to_string(),\n                FieldConfig::Chunk => format!(\"{} TEXT NOT NULL\", field.field_name()),\n                FieldConfig::Metadata(_) => format!(\"{} JSONB\", field.field_name()),\n                FieldConfig::Vector(_) => {\n                    format!(\"{} VECTOR({})\", field.field_name(), self.vector_size)\n                }\n            })\n            .chain(std::iter::once(\"PRIMARY KEY (id)\".to_string()))\n            .collect();\n\n        let sql = format!(\n            \"CREATE TABLE IF NOT EXISTS {} (\\n  {}\\n)\",\n            self.table_name,\n            columns.join(\",\\n  \")\n        );\n\n        Ok(sql)\n    }\n\n    /// Generates the SQL statement to create an HNSW index on the vector column.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if:\n    /// - No vector field is found in the table configuration.\n    /// - The table name or field name is invalid.\n    pub fn create_index_sql(&self) -> Result<String> {\n        let index_name = format!(\"{}_embedding_idx\", self.table_name);\n        let vector_field = self\n            .fields\n            .iter()\n            .find(|f| matches!(f, FieldConfig::Vector(_)))\n            .ok_or_else(|| anyhow::anyhow!(\"No vector field found in configuration\"))?\n            .field_name();\n\n        // Validate table_name and field_name (e.g., check against allowed patterns)\n        if !Self::is_valid_identifier(&self.table_name)\n            || !Self::is_valid_identifier(&index_name)\n            || !Self::is_valid_identifier(vector_field)\n        {\n            return Err(anyhow::anyhow!(\"Invalid table or field name\"));\n        }\n\n        Ok(format!(\n            \"CREATE INDEX IF NOT EXISTS {} ON {} USING hnsw ({} vector_cosine_ops)\",\n            index_name, &self.table_name, vector_field\n        ))\n    }\n\n    /// Stores a list of nodes in the 
database using an upsert operation.\n    ///\n    /// # Arguments\n    ///\n    /// * `nodes` - A slice of `TextNode` objects to be stored.\n    ///\n    /// # Returns\n    ///\n    /// * `Result<()>` - `Ok` if all nodes are successfully stored, `Err` otherwise.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if:\n    /// - The database connection pool is not established.\n    /// - Any of the SQL queries fail to execute due to schema mismatch, constraint violations, or\n    ///   connectivity issues.\n    /// - Committing the transaction fails.\n    pub async fn store_nodes(&self, nodes: &[TextNode]) -> Result<()> {\n        let pool = self.pool_get_or_initialize().await?;\n\n        let mut tx = pool.begin().await?;\n        let bulk_data = self.prepare_bulk_data(nodes)?;\n\n        let sql = self\n            .sql_stmt_bulk_insert\n            .get()\n            .ok_or_else(|| anyhow!(\"SQL bulk insert statement not set\"))?;\n\n        let query = self.bind_bulk_data_to_query(sqlx::query(sql), &bulk_data)?;\n\n        query\n            .execute(&mut *tx)\n            .await\n            .map_err(|e| anyhow!(\"Failed to store nodes: {e:?}\"))?;\n\n        tx.commit()\n            .await\n            .map_err(|e| anyhow!(\"Failed to commit transaction: {e:?}\"))\n    }\n\n    /// Prepares data from nodes into vectors for bulk processing.\n    #[allow(clippy::implicit_clone)]\n    fn prepare_bulk_data<'a>(&'a self, nodes: &'a [TextNode]) -> Result<BulkUpsertData<'a>> {\n        let mut bulk_data = BulkUpsertData::new(&self.fields, nodes.len());\n\n        for node in nodes {\n            bulk_data.ids.push(node.id());\n            bulk_data.chunks.push(node.chunk.as_str());\n\n            for field in &self.fields {\n                match field {\n                    FieldConfig::Metadata(config) => {\n                        let idx = bulk_data\n                            .get_metadata_index(config.field.as_str())\n              
              .ok_or_else(|| anyhow!(\"Invalid metadata field\"))?;\n\n                        let value = node\n                            .metadata\n                            .get(&config.original_field)\n                            .ok_or_else(|| anyhow!(\"Missing metadata field\"))?;\n\n                        let mut metadata_map = BTreeMap::new();\n                        metadata_map.insert(config.original_field.clone(), value.clone());\n\n                        bulk_data.metadata_fields[idx].push(serde_json::to_value(metadata_map)?);\n                    }\n                    FieldConfig::Vector(config) => {\n                        let idx = bulk_data\n                            .get_vector_index(config.field.as_str())\n                            .ok_or_else(|| anyhow!(\"Invalid vector field\"))?;\n\n                        let data = node\n                            .vectors\n                            .as_ref()\n                            .and_then(|v| v.get(&config.embedded_field))\n                            .map(|v| v.to_vec())\n                            .unwrap_or_default();\n\n                        bulk_data.vector_fields[idx].push(ExtPgVector::Vector::from(data));\n                    }\n                    _ => (),\n                }\n            }\n        }\n\n        Ok(bulk_data)\n    }\n\n    /// Generates SQL for UNNEST-based bulk upsert.\n    ///\n    /// # Returns\n    ///\n    /// * `Result<String>` - The generated SQL statement or an error if fields are empty.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if `self.fields` is empty, as no valid SQL can be generated.\n    pub(crate) fn generate_unnest_upsert_sql(&self) -> Result<String> {\n        if self.fields.is_empty() {\n            return Err(anyhow!(\"Cannot generate upsert SQL with empty fields\"));\n        }\n\n        let mut columns = Vec::new();\n        let mut unnest_params = Vec::new();\n        let mut param_counter = 1;\n\n        for field 
in &self.fields {\n            let name = field.field_name();\n            columns.push(name.to_string());\n\n            unnest_params.push(format!(\n                \"${param_counter}::{}\",\n                match field {\n                    FieldConfig::ID => \"UUID[]\",\n                    FieldConfig::Chunk => \"TEXT[]\",\n                    FieldConfig::Metadata(_) => \"JSONB[]\",\n                    FieldConfig::Vector(_) => \"VECTOR[]\",\n                }\n            ));\n\n            param_counter += 1;\n        }\n\n        let update_columns = self\n            .fields\n            .iter()\n            .filter(|field| !matches!(field, FieldConfig::ID)) // Skip ID field in updates\n            .map(|field| {\n                let name = field.field_name();\n                format!(\"{name} = EXCLUDED.{name}\")\n            })\n            .collect::<Vec<_>>()\n            .join(\", \");\n\n        Ok(format!(\n            r\"\n            INSERT INTO {} ({})\n            SELECT {}\n            FROM UNNEST({}) AS t({})\n            ON CONFLICT (id) DO UPDATE SET {}\",\n            self.table_name,\n            columns.join(\", \"),\n            columns.join(\", \"),\n            unnest_params.join(\", \"),\n            columns.join(\", \"),\n            update_columns\n        ))\n    }\n\n    /// Binds bulk data to the SQL query, ensuring data arrays are matched to corresponding fields.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if any metadata or vector field is missing from the bulk data.\n    #[allow(clippy::implicit_clone)]\n    fn bind_bulk_data_to_query<'a>(\n        &self,\n        mut query: sqlx::query::Query<'a, sqlx::Postgres, PgArguments>,\n        bulk_data: &'a BulkUpsertData,\n    ) -> Result<sqlx::query::Query<'a, sqlx::Postgres, PgArguments>> {\n        for field in &self.fields {\n            query = match field {\n                FieldConfig::ID => query.bind(&bulk_data.ids),\n                FieldConfig::Chunk 
=> query.bind(&bulk_data.chunks),\n                FieldConfig::Vector(config) => {\n                    let idx = bulk_data\n                        .get_vector_index(config.field.as_str())\n                        .ok_or_else(|| {\n                            anyhow!(\"Vector field {} not found in bulk data\", config.field)\n                        })?;\n                    query.bind(&bulk_data.vector_fields[idx])\n                }\n                FieldConfig::Metadata(config) => {\n                    let idx = bulk_data\n                        .get_metadata_index(config.field.as_str())\n                        .ok_or_else(|| {\n                            anyhow!(\"Metadata field {} not found in bulk data\", config.field)\n                        })?;\n                    query.bind(&bulk_data.metadata_fields[idx])\n                }\n            };\n        }\n        Ok(query)\n    }\n\n    /// Retrieves the name of the vector column configured in the schema.\n    ///\n    /// # Returns\n    /// * `Ok(String)` - The name of the vector column if exactly one is configured.\n    /// # Errors\n    /// * `Error::NoEmbedding` - If no vector field is configured in the schema.\n    /// * `Error::MultipleEmbeddings` - If multiple vector fields are configured in the schema.\n    pub fn get_vector_column_name(&self) -> Result<String> {\n        let vector_fields: Vec<_> = self\n            .fields\n            .iter()\n            .filter(|field| matches!(field, FieldConfig::Vector(_)))\n            .collect();\n\n        match vector_fields.as_slice() {\n            [field] => Ok(field.field_name().to_string()),\n            [] => Err(anyhow!(\"No vector field configured in schema\")),\n            _ => Err(anyhow!(\n                \"Search strategy for multiple vector fields in the schema is not yet implemented\"\n            )),\n        }\n    }\n}\n\nimpl PgVector {\n    pub fn normalize_field_name(field: &str) -> String {\n        // Define the special 
characters as an array\n        let special_chars: [char; 4] = ['(', '[', '{', '<'];\n\n        // First split by special characters and take the first part\n        let base_text = field\n            .split(|c| special_chars.contains(&c))\n            .next()\n            .unwrap_or(field)\n            .trim();\n\n        // Split by whitespace, take up to 3 words, convert to lowercase\n        let normalized = base_text\n            .split_whitespace()\n            .take(3)\n            .collect::<Vec<&str>>()\n            .join(\"_\")\n            .to_lowercase();\n\n        // Ensure the result only contains alphanumeric chars and underscores\n        normalized\n            .chars()\n            .filter(|c| c.is_alphanumeric() || *c == '_')\n            .collect()\n    }\n\n    pub(crate) fn is_valid_identifier(identifier: &str) -> bool {\n        // PostgreSQL identifier rules:\n        // 1. Must start with a letter (a-z) or underscore\n        // 2. Subsequent characters can be letters, underscores, digits (0-9), or dollar signs\n        // 3. Maximum length is 63 bytes\n        // 4. Cannot be a reserved keyword\n\n        // Check length\n        if identifier.is_empty() || identifier.len() > 63 {\n            return false;\n        }\n\n        // Use a regular expression to check the pattern\n        let identifier_regex = Regex::new(r\"^[a-zA-Z_][a-zA-Z0-9_$]*$\").unwrap();\n        if !identifier_regex.is_match(identifier) {\n            return false;\n        }\n\n        // Check if it's not a reserved keyword\n        !Self::is_reserved_keyword(identifier)\n    }\n\n    pub(crate) fn is_reserved_keyword(word: &str) -> bool {\n        // This list is not exhaustive. 
You may want to expand it based on\n        // the PostgreSQL version you're using.\n        const RESERVED_KEYWORDS: &[&str] = &[\n            \"SELECT\", \"FROM\", \"WHERE\", \"INSERT\", \"UPDATE\", \"DELETE\", \"DROP\", \"CREATE\", \"TABLE\",\n            \"INDEX\", \"ALTER\", \"ADD\", \"COLUMN\", \"AND\", \"OR\", \"NOT\", \"NULL\", \"TRUE\",\n            \"FALSE\",\n            // Add more keywords as needed\n        ];\n\n        RESERVED_KEYWORDS.contains(&word.to_uppercase().as_str())\n    }\n}\n\nimpl PgVector {\n    async fn create_pool(&self) -> Result<PgPool> {\n        let pool_options = PgPoolOptions::new().max_connections(self.db_max_connections);\n\n        for attempt in 1..=self.db_max_retry {\n            match pool_options.clone().connect(self.db_url.as_ref()).await {\n                Ok(pool) => {\n                    tracing::info!(\"Successfully established database connection\");\n                    return Ok(pool);\n                }\n                Err(err) if attempt < self.db_max_retry => {\n                    tracing::warn!(\n                        error = %err,\n                        attempt = attempt,\n                        max_retries = self.db_max_retry,\n                        \"Database connection attempt failed, retrying...\"\n                    );\n                    sleep(self.db_conn_retry_delay).await;\n                }\n                Err(err) => {\n                    return Err(anyhow!(err).context(\"Failed to establish database connection\"));\n                }\n            }\n        }\n\n        Err(anyhow!(\n            \"Max connection retries ({}) exceeded\",\n            self.db_max_retry\n        ))\n    }\n\n    /// Returns a reference to the `PgPool` if it is already initialized,\n    /// or creates and initializes it if it is not.\n    ///\n    /// # Errors\n    /// This function will return an error if pool creation fails.\n    pub async fn pool_get_or_initialize(&self) -> Result<&PgPool> {\n       
 if let Some(pool) = self.connection_pool.get() {\n            return Ok(pool);\n        }\n\n        let pool = self.create_pool().await?;\n        self.connection_pool\n            .set(pool)\n            .map_err(|_| anyhow!(\"Pool already initialized\"))?;\n\n        // Re-check if the pool was set successfully, otherwise return an error\n        self.connection_pool\n            .get()\n            .ok_or_else(|| anyhow!(\"Failed to retrieve connection pool after setting it\"))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_valid_identifiers() {\n        assert!(PgVector::is_valid_identifier(\"valid_name\"));\n        assert!(PgVector::is_valid_identifier(\"_valid_name\"));\n        assert!(PgVector::is_valid_identifier(\"valid_name_123\"));\n        assert!(PgVector::is_valid_identifier(\"validName\"));\n    }\n\n    #[test]\n    fn test_invalid_identifiers() {\n        assert!(!PgVector::is_valid_identifier(\"\")); // Empty string\n        assert!(!PgVector::is_valid_identifier(&\"a\".repeat(64))); // Too long\n        assert!(!PgVector::is_valid_identifier(\"123_invalid\")); // Starts with a number\n        assert!(!PgVector::is_valid_identifier(\"invalid-name\")); // Contains hyphen\n        assert!(!PgVector::is_valid_identifier(\"select\")); // Reserved keyword\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/pgvector/retrieve.rs",
    "content": "use crate::pgvector::{FieldConfig, PgVector, PgVectorBuilder};\nuse anyhow::{Result, anyhow};\nuse async_trait::async_trait;\nuse pgvector::Vector;\nuse sqlx::{Column, Row, prelude::FromRow, types::Uuid};\nuse std::fmt::Write as _;\nuse swiftide_core::{\n    Retrieve,\n    document::Document,\n    indexing::Metadata,\n    querying::{\n        Query,\n        search_strategies::{CustomStrategy, SimilaritySingleEmbedding},\n        states,\n    },\n};\n\n#[allow(dead_code)]\n#[derive(Debug, Clone)]\nstruct VectorSearchResult {\n    id: Uuid,\n    chunk: String,\n    metadata: Metadata,\n}\n\nimpl From<VectorSearchResult> for Document {\n    fn from(val: VectorSearchResult) -> Self {\n        Document::new(val.chunk, Some(val.metadata))\n    }\n}\n\nimpl FromRow<'_, sqlx::postgres::PgRow> for VectorSearchResult {\n    fn from_row(row: &sqlx::postgres::PgRow) -> Result<Self, sqlx::Error> {\n        let mut metadata = Metadata::default();\n\n        // Metadata fields are stored each as prefixed meta_ fields. 
Perhaps we should add a single\n        // metadata field instead of multiple fields.\n        for column in row.columns() {\n            if column.name().starts_with(\"meta_\") {\n                row.try_get::<serde_json::Value, _>(column.name())?\n                    .as_object()\n                    .and_then(|object| {\n                        object.keys().collect::<Vec<_>>().first().map(|key| {\n                            metadata.insert(\n                                key.to_owned(),\n                                object.get(key.as_str()).expect(\"infallible\").clone(),\n                            );\n                        })\n                    });\n            }\n        }\n\n        Ok(VectorSearchResult {\n            id: row.try_get(\"id\")?,\n            chunk: row.try_get(\"chunk\")?,\n            metadata,\n        })\n    }\n}\n\n#[allow(clippy::redundant_closure_for_method_calls)]\n#[async_trait]\nimpl Retrieve<SimilaritySingleEmbedding<String>> for PgVector {\n    #[tracing::instrument]\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding<String>,\n        query_state: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let embedding = if let Some(embedding) = query_state.embedding.as_ref() {\n            Vector::from(embedding.clone())\n        } else {\n            return Err(anyhow::Error::msg(\"Missing embedding in query state\"));\n        };\n\n        let vector_column_name = self.get_vector_column_name()?;\n\n        let pool = self.pool_get_or_initialize().await?;\n\n        let default_columns: Vec<_> = PgVectorBuilder::default_fields()\n            .iter()\n            .map(|f| f.field_name().to_string())\n            .chain(\n                self.fields\n                    .iter()\n                    .filter(|f| matches!(f, FieldConfig::Metadata(_)))\n                    .map(|f| f.field_name().to_string()),\n            )\n            .collect();\n\n        
// Start building the SQL query\n        let mut sql = format!(\n            \"SELECT {} FROM {}\",\n            default_columns.join(\", \"),\n            self.table_name\n        );\n\n        if let Some(filter) = search_strategy.filter() {\n            let filter_parts: Vec<&str> = filter.split('=').collect();\n            if filter_parts.len() == 2 {\n                let key = filter_parts[0].trim();\n                let value = filter_parts[1].trim().trim_matches('\"');\n                tracing::debug!(\n                    \"Filter being applied: key = {:#?}, value = {:#?}\",\n                    key,\n                    value\n                );\n\n                let sql_filter = format!(\n                    \" WHERE meta_{}->>'{}' = '{}'\",\n                    PgVector::normalize_field_name(key),\n                    key,\n                    value\n                );\n                sql.push_str(&sql_filter);\n            } else {\n                return Err(anyhow!(\"Invalid filter format\"));\n            }\n        }\n\n        // Add the ORDER BY clause for vector similarity search\n        write!(sql, \" ORDER BY {vector_column_name} <=> $1 LIMIT $2\")?;\n\n        tracing::debug!(\"Running retrieve with SQL: {}\", sql);\n\n        let top_k = i32::try_from(search_strategy.top_k())\n            .map_err(|_| anyhow!(\"Failed to convert top_k to i32\"))?;\n\n        let data: Vec<VectorSearchResult> = sqlx::query_as(&sql)\n            .bind(embedding)\n            .bind(top_k)\n            .fetch_all(pool)\n            .await?;\n\n        let docs = data.into_iter().map(Into::into).collect();\n\n        Ok(query_state.retrieved_documents(docs))\n    }\n}\n\n#[async_trait]\nimpl Retrieve<SimilaritySingleEmbedding> for PgVector {\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        
Retrieve::<SimilaritySingleEmbedding<String>>::retrieve(\n            self,\n            &search_strategy.into_concrete_filter::<String>(),\n            query,\n        )\n        .await\n    }\n}\n\n#[async_trait]\nimpl Retrieve<CustomStrategy<sqlx::QueryBuilder<'static, sqlx::Postgres>>> for PgVector {\n    async fn retrieve(\n        &self,\n        search_strategy: &CustomStrategy<sqlx::QueryBuilder<'static, sqlx::Postgres>>,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        // Get the database pool\n        let pool = self.get_pool().await?;\n\n        // Build the custom query using both strategy and query state\n        let mut query_builder = search_strategy.build_query(&query).await?;\n\n        // Execute the query using the builder's built-in methods\n        let results = query_builder\n            .build_query_as::<VectorSearchResult>() // Convert to a typed query\n            .fetch_all(pool) // Execute and get all results\n            .await\n            .map_err(|e| anyhow!(\"Failed to execute search query: {e}\"))?;\n\n        // Transform results into documents\n        let documents = results.into_iter().map(Into::into).collect();\n\n        // Update query state with retrieved documents\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::pgvector::fixtures::TestContext;\n    use futures_util::TryStreamExt;\n    use std::collections::HashSet;\n    use swiftide_core::{Persist, indexing, indexing::EmbeddedField};\n    use swiftide_core::{\n        Retrieve,\n        querying::{Query, search_strategies::SimilaritySingleEmbedding, states},\n    };\n\n    #[test_log::test(tokio::test)]\n    async fn test_retrieve_multiple_docs_and_filter() {\n        let test_context = TestContext::setup_with_cfg(\n            vec![\"filter\"].into(),\n            HashSet::from([EmbeddedField::Combined]),\n        )\n        .await\n        .expect(\"Test setup 
failed\");\n\n        let nodes = vec![\n            indexing::TextNode::new(\"test_query1\").with_metadata((\"filter\", \"true\")),\n            indexing::TextNode::new(\"test_query2\").with_metadata((\"filter\", \"true\")),\n            indexing::TextNode::new(\"test_query3\").with_metadata((\"filter\", \"false\")),\n        ]\n        .into_iter()\n        .map(|node| {\n            node.with_vectors([(EmbeddedField::Combined, vec![1.0; 384])]);\n            node.to_owned()\n        })\n        .collect();\n\n        test_context\n            .pgv_storage\n            .batch_store(nodes)\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .unwrap();\n\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n        query.embedding = Some(vec![1.0; 384]);\n\n        let search_strategy = SimilaritySingleEmbedding::<()>::default();\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 3);\n\n        let search_strategy =\n            SimilaritySingleEmbedding::from_filter(\"filter = \\\"true\\\"\".to_string());\n\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 2);\n\n        let search_strategy =\n            SimilaritySingleEmbedding::from_filter(\"filter = \\\"banana\\\"\".to_string());\n\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 0);\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_retrieve_docs_with_metadata() {\n        let test_context = TestContext::setup_with_cfg(\n            vec![\"other\", \"text\"].into(),\n      
      HashSet::from([EmbeddedField::Combined]),\n        )\n        .await\n        .expect(\"Test setup failed\");\n\n        let nodes = vec![\n            indexing::TextNode::new(\"test_query1\")\n                .with_metadata([\n                    (\"other\", serde_json::Value::from(10)),\n                    (\"text\", serde_json::Value::from(\"some text\")),\n                ])\n                .with_vectors([(EmbeddedField::Combined, vec![1.0; 384])])\n                .to_owned(),\n        ];\n\n        test_context\n            .pgv_storage\n            .batch_store(nodes)\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .unwrap();\n\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n        query.embedding = Some(vec![1.0; 384]);\n\n        let search_strategy = SimilaritySingleEmbedding::<()>::default();\n        let result = test_context\n            .pgv_storage\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n\n        assert_eq!(result.documents().len(), 1);\n\n        let doc = result.documents().first().unwrap();\n        assert_eq!(\n            doc.metadata().get(\"other\"),\n            Some(&serde_json::Value::from(10))\n        );\n        assert_eq!(\n            doc.metadata().get(\"text\"),\n            Some(&serde_json::Value::from(\"some text\"))\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/qdrant/indexing_node.rs",
"content": "//! This module provides functionality to convert a `Node` into a `qdrant::PointStruct`.\n//! The conversion is essential for storing data in the Qdrant vector database, which is used\n//! for efficient vector similarity search. The module handles metadata augmentation and ensures\n//! data compatibility with Qdrant's required format.\n\nuse anyhow::{Result, bail};\nuse std::{\n    collections::{HashMap, HashSet},\n    string::ToString,\n};\n\nuse qdrant_client::{\n    Payload,\n    qdrant::{self, Value},\n};\nuse swiftide_core::{Embedding, SparseEmbedding, indexing::EmbeddedField};\n\nuse super::NodeWithVectors;\n\n/// Implements the `TryInto` trait to convert a `NodeWithVectors` into a `qdrant::PointStruct`.\n/// This conversion is necessary for storing the node in the Qdrant vector database.\nimpl TryInto<qdrant::PointStruct> for NodeWithVectors<'_> {\n    type Error = anyhow::Error;\n\n    /// Converts the `Node` into a `qdrant::PointStruct`.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the vector is not set in the `Node`.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` which is `Ok` if the conversion is successful, containing the\n    /// `qdrant::PointStruct`. If the conversion fails, it returns an `anyhow::Error`.\n    fn try_into(self) -> Result<qdrant::PointStruct> {\n        let node = self.node;\n        // Calculate a unique identifier for the node.\n        let id = node.id();\n\n        // Extend the metadata with additional information.\n        // TODO: The node is already cloned in the `NodeWithVectors` constructor.\n        // Then additional data is added to the metadata, including the full chunk.\n        // Data is then taken as ref and reassigned. 
Seems like a lot of needless allocations\n\n        // Create a payload compatible with Qdrant's API.\n        let mut payload: Payload = node\n            .metadata\n            .iter()\n            .map(|(k, v)| (k.clone(), Value::from(v.clone())))\n            .collect::<HashMap<String, Value>>()\n            .into();\n\n        payload.insert(\"path\", node.path.to_string_lossy().to_string());\n        payload.insert(\"content\", node.chunk.clone());\n        payload.insert(\n            \"last_updated_at\",\n            Value::from(chrono::Utc::now().to_rfc3339()),\n        );\n\n        let Some(vectors) = node.vectors.clone() else {\n            bail!(\"Node without vectors\")\n        };\n        let vectors =\n            try_create_vectors(&self.vector_fields, vectors, node.sparse_vectors.clone())?;\n\n        // Construct the `qdrant::PointStruct` and return it.\n        Ok(qdrant::PointStruct::new(id.to_string(), vectors, payload))\n    }\n}\n\nfn try_create_vectors(\n    vector_fields: &HashSet<&EmbeddedField>,\n    vectors: HashMap<EmbeddedField, Embedding>,\n    sparse_vectors: Option<HashMap<EmbeddedField, SparseEmbedding>>,\n) -> Result<qdrant::Vectors> {\n    if vectors.is_empty() {\n        bail!(\"Node with empty vectors\")\n    } else if vectors.len() == 1 && sparse_vectors.is_none() {\n        let Some(vector) = vectors.into_values().next() else {\n            bail!(\"Node has no vector entry\")\n        };\n        return Ok(vector.into());\n    }\n    let mut qdrant_vectors = qdrant::NamedVectors::default();\n\n    for (field, vector) in vectors {\n        if !vector_fields.contains(&field) {\n            continue;\n        }\n        qdrant_vectors = qdrant_vectors.add_vector(field.to_string(), vector);\n    }\n\n    if let Some(sparse_vectors) = sparse_vectors {\n        for (field, sparse_vector) in sparse_vectors {\n            if !vector_fields.contains(&field) {\n                continue;\n            }\n\n            qdrant_vectors = 
qdrant_vectors.add_vector(\n                format!(\"{field}_sparse\"),\n                qdrant::Vector::new_sparse(\n                    sparse_vector.indices.into_iter().collect::<Vec<_>>(),\n                    sparse_vector.values,\n                ),\n            );\n        }\n    }\n\n    Ok(qdrant_vectors.into())\n}\n\n#[cfg(test)]\nmod tests {\n    use std::collections::{HashMap, HashSet};\n\n    use qdrant_client::qdrant::PointStruct;\n    use swiftide_core::indexing::{EmbeddedField, TextNode};\n    use test_case::test_case;\n\n    use crate::qdrant::indexing_node::NodeWithVectors;\n    use pretty_assertions::assert_eq;\n\n    static EXPECTED_UUID: &str = \"d42d252d-671d-37ef-a157-8e85d0710610\";\n\n    #[test_case(\n        TextNode::builder()\n            .path(\"/path\")\n            .chunk(\"data\")\n            .vectors([(EmbeddedField::Chunk, vec![1.0])])\n            .metadata([(\"m1\", \"mv1\")])\n            .embed_mode(swiftide_core::indexing::EmbedMode::SingleWithMetadata)\n            .build().unwrap()\n        ,\n        HashSet::from([EmbeddedField::Combined]),\n        PointStruct::new(EXPECTED_UUID, vec![1.0], HashMap::from([\n            (\"content\", \"data\".into()),\n            (\"path\", \"/path\".into()),\n            (\"m1\", \"mv1\".into())])\n        );\n        \"Node with single vector creates struct with unnamed vector\"\n    )]\n    #[test_case(\n        TextNode::builder()\n            .path(\"/path\")\n            .chunk(\"data\")\n            .vectors([\n                (EmbeddedField::Chunk, vec![1.0]),\n                (EmbeddedField::Metadata(\"m1\".into()), vec![2.0])\n            ])\n            .metadata([(\"m1\", \"mv1\")])\n            .embed_mode(swiftide_core::indexing::EmbedMode::PerField)\n            .build().unwrap(),\n        HashSet::from([EmbeddedField::Chunk, EmbeddedField::Metadata(\"m1\".into())]),\n        PointStruct::new(EXPECTED_UUID, HashMap::from([\n                (\"Chunk\".to_string(), 
vec![1.0]),\n                (\"Metadata: m1\".to_string(), vec![2.0])\n            ]),\n            HashMap::from([\n                (\"content\", \"data\".into()),\n                (\"path\", \"/path\".into()),\n                (\"m1\", \"mv1\".into())])\n        );\n        \"Node with multiple vectors creates struct with named vectors\"\n    )]\n    #[test_case(\n        TextNode::builder()\n            .path(\"/path\")\n            .chunk(\"data\")\n            .vectors([\n                (EmbeddedField::Chunk, vec![1.0]),\n                (EmbeddedField::Combined, vec![1.0]),\n                (EmbeddedField::Metadata(\"m1\".into()), vec![1.0]),\n                (EmbeddedField::Metadata(\"m2\".into()), vec![2.0])\n            ])\n            .metadata([(\"m1\", \"mv1\"), (\"m2\", \"mv2\")])\n            .embed_mode(swiftide_core::indexing::EmbedMode::Both)\n            .build().unwrap(),\n        HashSet::from([EmbeddedField::Combined]),\n        PointStruct::new(EXPECTED_UUID,\n            HashMap::from([\n                (\"Combined\".to_string(), vec![1.0]),\n            ]),\n            HashMap::from([\n                (\"content\", \"data\".into()),\n                (\"path\", \"/path\".into()),\n                (\"m1\", \"mv1\".into()),\n                (\"m2\", \"mv2\".into())])\n        );\n        \"Storing only `Combined` vector. 
Skipping other vectors.\"\n    )]\n    #[allow(clippy::needless_pass_by_value)]\n    fn try_into_point_struct_test(\n        node: TextNode,\n        vector_fields: HashSet<EmbeddedField>,\n        mut expected_point: PointStruct,\n    ) {\n        let node = NodeWithVectors::new(&node, vector_fields.iter().collect());\n        let point: PointStruct = node.try_into().expect(\"Can create PointStruct\");\n\n        // patch last_update_at field to avoid test failure because of time difference\n        let last_updated_at_key = \"last_updated_at\";\n        let last_updated_at = point\n            .payload\n            .get(last_updated_at_key)\n            .expect(\"Has autogenerated `last_updated_at` field.\");\n        expected_point\n            .payload\n            .insert(last_updated_at_key.into(), last_updated_at.clone());\n\n        assert_eq!(point.id, expected_point.id);\n        assert_eq!(point.payload, expected_point.payload);\n        assert_eq!(point.vectors, expected_point.vectors);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/qdrant/mod.rs",
"content": "//! This module provides integration with the Qdrant vector database.\n//! It includes functionalities to interact with Qdrant, such as creating and managing vector\n//! collections, storing data, and ensuring proper indexing for efficient searches.\n//!\n//! Qdrant can be used both in `indexing::Pipeline` and `query::Pipeline`.\n\nmod indexing_node;\nmod persist;\nmod retrieve;\nuse std::collections::{HashMap, HashSet};\n\nuse std::sync::Arc;\n\nuse anyhow::{Context as _, Result, bail};\nuse derive_builder::Builder;\npub use qdrant_client;\nuse qdrant_client::qdrant::{self, SparseVectorParamsBuilder, SparseVectorsConfigBuilder};\n\nuse swiftide_core::indexing::{EmbeddedField, TextNode};\n\nconst DEFAULT_COLLECTION_NAME: &str = \"swiftide\";\nconst DEFAULT_QDRANT_URL: &str = \"http://localhost:6334\";\nconst DEFAULT_BATCH_SIZE: usize = 50;\n\n/// A struct representing a Qdrant client with configuration options.\n///\n/// This struct is used to interact with the Qdrant vector database, providing methods to create and\n/// manage vector collections, store data, and ensure proper indexing for efficient searches.\n///\n/// Can be cloned at relatively low cost as the client is shared.\n#[derive(Builder, Clone)]\n#[builder(\n    pattern = \"owned\",\n    setter(strip_option),\n    build_fn(error = \"anyhow::Error\")\n)]\npub struct Qdrant {\n    /// The Qdrant client used to interact with the Qdrant vector database.\n    ///\n    /// By default the client will be built from `QDRANT_URL` and the optional `QDRANT_API_KEY`.\n    /// It will fall back to `http://localhost:6334` if `QDRANT_URL` is not set.\n    #[builder(setter(into), default = \"self.default_client()?\")]\n    #[allow(clippy::missing_fields_in_debug)]\n    client: Arc<qdrant_client::Qdrant>,\n    /// The name of the collection to be used in Qdrant. 
Defaults to \"swiftide\".\n    #[builder(default = \"DEFAULT_COLLECTION_NAME.to_string()\")]\n    #[builder(setter(into))]\n    collection_name: String,\n    /// The default size of the vectors to be stored in the collection.\n    vector_size: u64,\n    #[builder(default = \"Distance::Cosine\")]\n    /// The default distance of the vectors to be stored in the collection.\n    vector_distance: Distance,\n    /// The batch size for operations. Optional.\n    #[builder(default = \"Some(DEFAULT_BATCH_SIZE)\")]\n    batch_size: Option<usize>,\n    #[builder(private, default = \"Self::default_vectors()\")]\n    pub(crate) vectors: HashMap<EmbeddedField, VectorConfig>,\n    #[builder(private, default)]\n    pub(crate) sparse_vectors: HashMap<EmbeddedField, SparseVectorConfig>,\n}\n\nimpl Qdrant {\n    /// Returns a new `QdrantBuilder` for constructing a `Qdrant` instance.\n    pub fn builder() -> QdrantBuilder {\n        QdrantBuilder::default()\n    }\n\n    /// Tries to create a `QdrantBuilder` from a given URL. Will use the API key in `QDRANT_API_KEY`\n    /// if present.\n    ///\n    /// # Arguments\n    ///\n    /// * `url` - A string slice that holds the URL for the Qdrant client.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` containing the `QdrantBuilder` if successful, or an error otherwise.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the client fails to build.\n    pub fn try_from_url(url: impl AsRef<str>) -> Result<QdrantBuilder> {\n        Ok(QdrantBuilder::default().client(\n            qdrant_client::Qdrant::from_url(url.as_ref())\n                .api_key(std::env::var(\"QDRANT_API_KEY\"))\n                .build()?,\n        ))\n    }\n\n    /// Creates an index in the Qdrant collection if it does not already exist.\n    ///\n    /// This method checks if the specified collection exists in Qdrant. 
If it does not exist, it\n    /// creates a new collection with the configured vector size and distance metric (cosine by\n    /// default).\n    ///\n    /// # Returns\n    ///\n    /// A `Result` indicating success or failure.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the collection check or creation fails.\n    pub async fn create_index_if_not_exists(&self) -> Result<()> {\n        let collection_name = &self.collection_name;\n\n        tracing::debug!(\"Checking if collection {collection_name} exists\");\n        if self.client.collection_exists(collection_name).await? {\n            tracing::warn!(\n                \"Collection {collection_name} exists, skipping collection creation; if vector configurations have not changed, you can ignore this message\"\n            );\n            return Ok(());\n        }\n\n        let vectors_config = self.create_vectors_config()?;\n        tracing::debug!(?vectors_config, \"Adding vectors config\");\n\n        let mut collection =\n            qdrant::CreateCollectionBuilder::new(collection_name).vectors_config(vectors_config);\n\n        if let Some(sparse_vectors_config) = self.create_sparse_vectors_config() {\n            tracing::debug!(?sparse_vectors_config, \"Adding sparse vectors config\");\n            collection = collection.sparse_vectors_config(sparse_vectors_config);\n        }\n\n        tracing::info!(\"Creating collection {collection_name}\");\n        self.client.create_collection(collection).await?;\n        Ok(())\n    }\n\n    fn create_vectors_config(&self) -> Result<qdrant_client::qdrant::vectors_config::Config> {\n        if self.vectors.is_empty() {\n            bail!(\"No configured vectors\");\n        } else if self.vectors.len() == 1 && self.sparse_vectors.is_empty() {\n            let config = self\n                .vectors\n                .values()\n                .next()\n                .context(\"Has one vector config\")?;\n            let vector_params = self.create_vector_params(config);\n            return 
Ok(qdrant::vectors_config::Config::Params(vector_params));\n        }\n        let mut map = HashMap::<String, qdrant::VectorParams>::default();\n        for (embedded_field, config) in &self.vectors {\n            let vector_name = embedded_field.to_string();\n            let vector_params = self.create_vector_params(config);\n\n            map.insert(vector_name, vector_params);\n        }\n\n        Ok(qdrant::vectors_config::Config::ParamsMap(\n            qdrant::VectorParamsMap { map },\n        ))\n    }\n\n    fn create_sparse_vectors_config(&self) -> Option<qdrant::SparseVectorConfig> {\n        if self.sparse_vectors.is_empty() {\n            return None;\n        }\n        let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();\n        for embedded_field in self.sparse_vectors.keys() {\n            let vector_name = format!(\"{embedded_field}_sparse\");\n            let vector_params = SparseVectorParamsBuilder::default();\n            sparse_vectors_config.add_named_vector_params(vector_name, vector_params);\n        }\n\n        Some(sparse_vectors_config.into())\n    }\n\n    fn create_vector_params(&self, config: &VectorConfig) -> qdrant::VectorParams {\n        let size = config.vector_size.unwrap_or(self.vector_size);\n        let distance = config.distance.unwrap_or(self.vector_distance);\n\n        tracing::debug!(\n            \"Creating vector params: size={}, distance={:?}\",\n            size,\n            distance\n        );\n        qdrant::VectorParamsBuilder::new(size, distance).build()\n    }\n\n    /// Returns the inner client for custom operations\n    pub fn client(&self) -> &Arc<qdrant_client::Qdrant> {\n        &self.client\n    }\n}\n\nimpl QdrantBuilder {\n    #[allow(clippy::unused_self)]\n    fn default_client(&self) -> Result<Arc<qdrant_client::Qdrant>> {\n        let client = qdrant_client::Qdrant::from_url(\n            &std::env::var(\"QDRANT_URL\").unwrap_or(DEFAULT_QDRANT_URL.to_string()),\n        )\n    
    .api_key(std::env::var(\"QDRANT_API_KEY\"))\n        .build()\n        .context(\"Could not build default qdrant client\")?;\n\n        Ok(Arc::new(client))\n    }\n\n    /// Configures a dense vector on the collection\n    ///\n    /// When not configured Pipeline by default configures vector only for\n    /// [`EmbeddedField::Combined`] Default config is enough when\n    /// `indexing::Pipeline::with_embed_mode` is not set or when the value is set to\n    /// [`swiftide_core::indexing::EmbedMode::SingleWithMetadata`].\n    #[must_use]\n    pub fn with_vector(mut self, vector: impl Into<VectorConfig>) -> QdrantBuilder {\n        if self.vectors.is_none() {\n            self = self.vectors(HashMap::default());\n        }\n        let vector = vector.into();\n        if let Some(vectors) = self.vectors.as_mut()\n            && let Some(overridden_vector) = vectors.insert(vector.embedded_field.clone(), vector)\n        {\n            tracing::warn!(\n                \"Overriding named vector config: {}\",\n                overridden_vector.embedded_field\n            );\n        }\n        self\n    }\n\n    /// Configures a sparse vector on the collection\n    #[must_use]\n    pub fn with_sparse_vector(mut self, vector: impl Into<SparseVectorConfig>) -> QdrantBuilder {\n        if self.sparse_vectors.is_none() {\n            self = self.sparse_vectors(HashMap::default());\n        }\n        let vector = vector.into();\n        if let Some(vectors) = self.sparse_vectors.as_mut()\n            && let Some(overridden_vector) = vectors.insert(vector.embedded_field.clone(), vector)\n        {\n            tracing::warn!(\n                \"Overriding named vector config: {}\",\n                overridden_vector.embedded_field\n            );\n        }\n        self\n    }\n\n    fn default_vectors() -> HashMap<EmbeddedField, VectorConfig> {\n        HashMap::from([(EmbeddedField::default(), VectorConfig::default())])\n    
}\n}\n\n#[allow(clippy::missing_fields_in_debug)]\nimpl std::fmt::Debug for Qdrant {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Qdrant\")\n            .field(\"collection_name\", &self.collection_name)\n            .field(\"vector_size\", &self.vector_size)\n            .field(\"batch_size\", &self.batch_size)\n            .finish()\n    }\n}\n\n/// Vector config\n///\n/// See also [`QdrantBuilder::with_vector`]\n#[derive(Clone, Builder, Default)]\npub struct VectorConfig {\n    /// A type of the embeddable of the stored vector.\n    #[builder(default)]\n    pub(super) embedded_field: EmbeddedField,\n    /// A size of the vector to be stored in the collection.\n    ///\n    /// Overrides default set in [`QdrantBuilder::vector_size`]\n    #[builder(setter(into, strip_option), default)]\n    vector_size: Option<u64>,\n    /// A distance of the vector to be stored in the collection.\n    ///\n    /// Overrides default set in [`QdrantBuilder::vector_distance`]\n    #[builder(setter(into, strip_option), default)]\n    distance: Option<qdrant::Distance>,\n}\n\nimpl VectorConfig {\n    pub fn builder() -> VectorConfigBuilder {\n        VectorConfigBuilder::default()\n    }\n}\n\nimpl From<EmbeddedField> for VectorConfig {\n    fn from(value: EmbeddedField) -> Self {\n        Self {\n            embedded_field: value,\n            ..Default::default()\n        }\n    }\n}\n\n/// Sparse Vector config\n#[derive(Clone, Builder, Default)]\npub struct SparseVectorConfig {\n    embedded_field: EmbeddedField,\n}\n\nimpl From<EmbeddedField> for SparseVectorConfig {\n    fn from(value: EmbeddedField) -> Self {\n        Self {\n            embedded_field: value,\n        }\n    }\n}\n\npub type Distance = qdrant::Distance;\n\n/// Utility struct combining `TextNode` with `EmbeddedField`s of configured _Qdrant_ vectors.\nstruct NodeWithVectors<'a> {\n    node: &'a TextNode,\n    vector_fields: HashSet<&'a 
EmbeddedField>,\n}\n\nimpl<'a> NodeWithVectors<'a> {\n    pub fn new(node: &'a TextNode, vector_fields: HashSet<&'a EmbeddedField>) -> Self {\n        Self {\n            node,\n            vector_fields,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/qdrant/persist.rs",
    "content": "//! This module provides an implementation of the `Storage` trait for the `Qdrant` struct.\n//! It includes methods for setting up the storage, storing a single node, and storing a batch of\n//! nodes. This integration allows the Swiftide project to use Qdrant as a storage backend.\n\nuse std::collections::HashSet;\nuse swiftide_core::{\n    indexing::{EmbeddedField, IndexingStream, Persist, TextNode},\n    prelude::*,\n};\n\nuse qdrant_client::qdrant::UpsertPointsBuilder;\n\nuse super::{NodeWithVectors, Qdrant};\n\n#[async_trait]\nimpl Persist for Qdrant {\n    type Input = String;\n    type Output = String;\n\n    /// Returns the batch size for the Qdrant storage.\n    ///\n    /// # Returns\n    ///\n    /// An `Option<usize>` representing the batch size if set, otherwise `None`.\n    fn batch_size(&self) -> Option<usize> {\n        self.batch_size\n    }\n\n    /// Sets up the Qdrant storage by creating the necessary index if it does not exist.\n    ///\n    /// # Returns\n    ///\n    /// A `Result<()>` which is `Ok` if the setup is successful, otherwise an error.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the index creation fails.\n    #[tracing::instrument(skip_all, err)]\n    async fn setup(&self) -> Result<()> {\n        tracing::debug!(\"Setting up Qdrant storage\");\n        self.create_index_if_not_exists().await\n    }\n\n    /// Stores a single indexing node in the Qdrant storage.\n    ///\n    /// WARN: If running debug builds, the store is blocking and will impact performance\n    ///\n    /// # Parameters\n    ///\n    /// - `node`: The `TextNode` to be stored.\n    ///\n    /// # Returns\n    ///\n    /// A `Result<()>` which is `Ok` if the storage is successful, otherwise an error.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the node conversion or storage operation fails.\n    #[tracing::instrument(skip_all, err, name = \"storage.qdrant.store\")]\n    
async fn store(&self, node: TextNode) -> Result<TextNode> {\n        let node_with_vectors = NodeWithVectors::new(&node, self.vector_fields());\n        let point = node_with_vectors.try_into()?;\n\n        tracing::debug!(\"Storing node\");\n\n        self.client\n            .upsert_points(\n                UpsertPointsBuilder::new(self.collection_name.clone(), vec![point])\n                    .wait(cfg!(debug_assertions)),\n            )\n            .await?;\n        Ok(node)\n    }\n\n    /// Stores a batch of indexing nodes in the Qdrant storage.\n    ///\n    /// # Parameters\n    ///\n    /// - `nodes`: A vector of `TextNode` to be stored.\n    ///\n    /// # Returns\n    ///\n    /// An `IndexingStream<String>` that yields the stored nodes, or the error if conversion or\n    /// the upsert fails.\n    ///\n    /// # Errors\n    ///\n    /// The returned stream yields an error if any node conversion or the storage operation fails.\n    #[tracing::instrument(skip_all, name = \"storage.qdrant.batch_store\")]\n    async fn batch_store(&self, nodes: Vec<TextNode>) -> IndexingStream<String> {\n        let points = nodes\n            .iter()\n            .map(|node| NodeWithVectors::new(node, self.vector_fields()))\n            .map(NodeWithVectors::try_into)\n            .collect::<Result<Vec<_>>>();\n\n        // A `let else` here would move `points` before the error could be read, so match\n        // explicitly\n        let points = match points {\n            Ok(points) => points,\n            Err(err) => return vec![Err(err)].into(),\n        };\n\n        tracing::debug!(\"Storing batch of {} nodes\", points.len());\n\n        match self\n            .client\n            .upsert_points(\n                UpsertPointsBuilder::new(self.collection_name.clone(), points)\n                    .wait(cfg!(debug_assertions)),\n            )\n            .await\n        {\n            Ok(_) => IndexingStream::iter(nodes.into_iter().map(Ok)),\n            Err(err) => vec![Err(err.into())].into(),\n        }\n    }\n}\n\nimpl Qdrant {\n    fn vector_fields(&self) -> HashSet<&EmbeddedField> {\n        self.vectors.keys().collect::<HashSet<_>>()\n   
 }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/qdrant/retrieve.rs",
    "content": "use qdrant_client::qdrant::{self, PrefetchQueryBuilder, ScoredPoint, SearchPointsBuilder};\nuse swiftide_core::{\n    Retrieve,\n    document::Document,\n    indexing::{EmbeddedField, Metadata},\n    prelude::{Result, *},\n    querying::{\n        Query,\n        search_strategies::{HybridSearch, SimilaritySingleEmbedding},\n        states,\n    },\n};\n\nuse super::Qdrant;\n\n/// Implement the `Retrieve` trait for `SimilaritySingleEmbedding` search strategy.\n///\n/// Can be used in the query pipeline to retrieve documents from Qdrant.\n///\n/// Supports filters via the `qdrant_client::qdrant::Filter` type.\n#[async_trait]\nimpl Retrieve<SimilaritySingleEmbedding<qdrant::Filter>> for Qdrant {\n    #[tracing::instrument]\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding<qdrant::Filter>,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let Some(embedding) = &query.embedding else {\n            anyhow::bail!(\"No embedding for query\")\n        };\n        let mut query_builder = SearchPointsBuilder::new(\n            &self.collection_name,\n            embedding.to_owned(),\n            search_strategy.top_k(),\n        )\n        .with_payload(true);\n\n        if let Some(filter) = &search_strategy.filter() {\n            query_builder = query_builder.filter(filter.to_owned());\n        }\n\n        if self.vectors.len() > 1 || !self.sparse_vectors.is_empty() {\n            // TODO: Make this configurable\n            // It will break if there are multiple vectors and no combined vector\n            query_builder = query_builder.vector_name(EmbeddedField::Combined.field_name());\n        }\n\n        let result = self\n            .client\n            .search_points(query_builder.build())\n            .await\n            .context(\"Failed to retrieve from qdrant\")?\n            .result;\n\n        let documents = result\n            .into_iter()\n            
.map(scored_point_into_document)\n            .collect::<Result<Vec<_>>>()?;\n\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\n/// Ensures that the `SimilaritySingleEmbedding` search strategy can be used when no filter is set.\n#[async_trait]\nimpl Retrieve<SimilaritySingleEmbedding> for Qdrant {\n    async fn retrieve(\n        &self,\n        search_strategy: &SimilaritySingleEmbedding,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        Retrieve::<SimilaritySingleEmbedding<qdrant::Filter>>::retrieve(\n            self,\n            &search_strategy.into_concrete_filter::<qdrant::Filter>(),\n            query,\n        )\n        .await\n    }\n}\n\n/// Implement the `Retrieve` trait for `HybridSearch` search strategy.\n///\n/// Can be used in the query pipeline to retrieve documents from Qdrant.\n///\n/// Expects both a dense and sparse embedding to be set on the query.\n#[async_trait]\nimpl Retrieve<HybridSearch<qdrant::Filter>> for Qdrant {\n    #[tracing::instrument]\n    async fn retrieve(\n        &self,\n        search_strategy: &HybridSearch<qdrant::Filter>,\n        query: Query<states::Pending>,\n    ) -> Result<Query<states::Retrieved>> {\n        let Some(dense) = &query.embedding else {\n            anyhow::bail!(\"No embedding for query\")\n        };\n\n        let Some(sparse) = &query.sparse_embedding else {\n            anyhow::bail!(\"No sparse embedding for query\")\n        };\n\n        let mut sparse_prefetch = PrefetchQueryBuilder::default()\n            .query(qdrant::Query::new_nearest(qdrant::VectorInput::new_sparse(\n                sparse.indices.clone(),\n                sparse.values.clone(),\n            )))\n            .using(search_strategy.sparse_vector_field().sparse_field_name())\n            .limit(search_strategy.top_n());\n\n        let mut dense_prefetch = PrefetchQueryBuilder::default()\n            .query(qdrant::Query::new_nearest(dense.clone()))\n            
.using(search_strategy.dense_vector_field().field_name())\n            .limit(search_strategy.top_n());\n\n        if let Some(filter) = search_strategy.filter() {\n            sparse_prefetch = sparse_prefetch.filter(filter.clone());\n            dense_prefetch = dense_prefetch.filter(filter.clone());\n        }\n\n        let query_points = qdrant::QueryPointsBuilder::new(&self.collection_name)\n            .with_payload(true)\n            .add_prefetch(sparse_prefetch)\n            .add_prefetch(dense_prefetch)\n            .query(qdrant::Query::new_fusion(qdrant::Fusion::Rrf))\n            .limit(search_strategy.top_k());\n\n        // NOTE: Potential improvement to consume the vectors instead of cloning\n        let result = self.client.query(query_points).await?.result;\n\n        let documents = result\n            .into_iter()\n            .map(scored_point_into_document)\n            .collect::<Result<Vec<_>>>()?;\n\n        Ok(query.retrieved_documents(documents))\n    }\n}\n\nfn scored_point_into_document(scored_point: ScoredPoint) -> Result<Document> {\n    let content = scored_point\n        .payload\n        .get(\"content\")\n        .context(\"Expected document in qdrant payload\")?\n        .to_string();\n\n    let metadata: Metadata = scored_point\n        .payload\n        .into_iter()\n        .filter(|(k, _)| *k != \"content\")\n        .collect::<Vec<(_, _)>>()\n        .into();\n\n    Ok(Document::new(content, Some(metadata)))\n}\n\n#[cfg(test)]\nmod tests {\n    use itertools::Itertools as _;\n    use swiftide_core::{\n        Persist as _,\n        indexing::{self, EmbeddedField},\n    };\n\n    use super::*;\n\n    async fn setup() -> (\n        testcontainers::ContainerAsync<testcontainers::GenericImage>,\n        Qdrant,\n    ) {\n        let (guard, qdrant_url) = swiftide_test_utils::start_qdrant().await;\n\n        let qdrant_client = Qdrant::try_from_url(qdrant_url)\n            .unwrap()\n            .vector_size(384)\n            
.with_vector(EmbeddedField::Combined)\n            .with_sparse_vector(EmbeddedField::Combined)\n            .build()\n            .unwrap();\n\n        qdrant_client.setup().await.unwrap();\n\n        let nodes = vec![\n            indexing::TextNode::new(\"test_query1\").with_metadata((\"filter\", \"true\")),\n            indexing::TextNode::new(\"test_query2\").with_metadata((\"filter\", \"true\")),\n            indexing::TextNode::new(\"test_query3\").with_metadata((\"filter\", \"false\")),\n        ]\n        .into_iter()\n        .map(|node| {\n            node.with_vectors([(EmbeddedField::Combined, vec![1.0; 384])]);\n            node.with_sparse_vectors([(\n                EmbeddedField::Combined,\n                swiftide_core::SparseEmbedding {\n                    indices: vec![0, 1],\n                    values: vec![1.0, 1.0],\n                },\n            )]);\n            node.to_owned()\n        })\n        .collect();\n\n        qdrant_client\n            .batch_store(nodes)\n            .await\n            .try_collect::<Vec<_>>()\n            .await\n            .unwrap();\n\n        (guard, qdrant_client)\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_retrieve_multiple_docs_and_filter() {\n        let (_guard, qdrant_client) = setup().await;\n\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n        query.embedding = Some(vec![1.0; 384]);\n\n        let search_strategy = SimilaritySingleEmbedding::<()>::default();\n        let result = qdrant_client\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 3);\n        assert_eq!(\n            result\n                .documents()\n                .iter()\n                .sorted()\n                .map(Document::content)\n                .collect_vec(),\n            // FIXME: The extra quotes should be removed by serde (via qdrant::Value), but they are\n            
// not\n            [\"\\\"test_query1\\\"\", \"\\\"test_query2\\\"\", \"\\\"test_query3\\\"\"]\n                .into_iter()\n                .sorted()\n                .collect_vec()\n        );\n\n        let search_strategy = SimilaritySingleEmbedding::from_filter(qdrant::Filter::must([\n            qdrant::Condition::matches(\"filter\", \"true\".to_string()),\n        ]));\n        let result = qdrant_client\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 2);\n        assert_eq!(\n            result\n                .documents()\n                .iter()\n                .sorted()\n                .map(Document::content)\n                .collect_vec(),\n            [\"\\\"test_query1\\\"\", \"\\\"test_query2\\\"\"]\n                .into_iter()\n                .sorted()\n                .collect_vec()\n        );\n\n        let search_strategy = SimilaritySingleEmbedding::from_filter(qdrant::Filter::must([\n            qdrant::Condition::matches(\"filter\", \"banana\".to_string()),\n        ]));\n        let result = qdrant_client\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 0);\n    }\n\n    #[tokio::test]\n    async fn test_hybrid_search() {\n        let (_guard, qdrant_client) = setup().await;\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n\n        query.embedding = Some(vec![1.0; 384]);\n        query.sparse_embedding = Some(swiftide_core::SparseEmbedding {\n            indices: vec![0, 1],\n            values: vec![1.0, 1.0],\n        });\n        let search_strategy = HybridSearch::default();\n        let result = qdrant_client\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 3);\n    }\n\n    #[tokio::test]\n    async fn 
test_hybrid_search_with_filter() {\n        let (_guard, qdrant_client) = setup().await;\n        let mut query = Query::<states::Pending>::new(\"test_query\");\n\n        query.embedding = Some(vec![1.0; 384]);\n        query.sparse_embedding = Some(swiftide_core::SparseEmbedding {\n            indices: vec![0, 1],\n            values: vec![1.0, 1.0],\n        });\n        let search_strategy =\n            HybridSearch::from_filter(qdrant::Filter::must([qdrant::Condition::matches(\n                \"filter\",\n                \"true\".to_string(),\n            )]));\n        let result = qdrant_client\n            .retrieve(&search_strategy, query.clone())\n            .await\n            .unwrap();\n        assert_eq!(result.documents().len(), 2);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/redb/mod.rs",
    "content": "//! Redb is a simple, portable, high-performance, ACID, embedded key-value store.\n//!\n//! Redb can be used as a fast, embedded node cache, without the need for external services.\n\nuse anyhow::Result;\nuse std::{path::PathBuf, sync::Arc};\n\nuse derive_builder::Builder;\n\nmod node_cache;\n\n/// `Redb` provides a caching filter for indexing nodes using Redb.\n///\n/// Redb is a simple, portable, high-performance, ACID, embedded key-value store.\n/// It enables using a local file based cache without the need for external services.\n///\n/// # Example\n///\n/// ```no_run\n/// # use swiftide_integrations::redb::{Redb};\n/// Redb::builder()\n///     .database_path(\"/my/redb\")\n///     .table_name(\"swiftide_test\")\n///     .cache_key_prefix(\"my_cache\")\n///     .build().unwrap();\n/// ```\n#[derive(Clone, Builder)]\n#[builder(build_fn(error = \"anyhow::Error\"), setter(into))]\npub struct Redb {\n    /// The database to use for caching nodes. Allows overwriting the default database created from\n    /// `database_path`.\n    #[builder(setter(into), default = \"Arc::new(self.default_database()?)\")]\n    database: Arc<redb::Database>,\n\n    /// Path to the database, required if no database override is provided. This is the recommended\n    /// usage.\n    #[builder(setter(into, strip_option))]\n    database_path: Option<PathBuf>,\n    /// The name of the table to use for caching nodes. Defaults to \"swiftide\".\n    #[builder(default = \"\\\"swiftide\\\".to_string()\")]\n    table_name: String,\n    /// Prefix to be used for keys stored in the database to avoid collisions. 
Can be used to\n    /// manually invalidate the cache.\n    #[builder(default = \"String::new()\")]\n    cache_key_prefix: String,\n}\n\nimpl std::fmt::Debug for Redb {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Redb\")\n            .field(\"database\", &self.database)\n            .field(\"database_path\", &self.database_path)\n            .field(\"table_name\", &self.table_name)\n            .field(\"cache_key_prefix\", &self.cache_key_prefix)\n            .finish()\n    }\n}\n\nimpl RedbBuilder {\n    fn default_database(&self) -> Result<redb::Database> {\n        let db = redb::Database::create(\n            self.database_path\n                .clone()\n                .flatten()\n                .ok_or(anyhow::anyhow!(\"Expected database path\"))?,\n        )?;\n\n        Ok(db)\n    }\n}\n\nimpl Redb {\n    pub fn builder() -> RedbBuilder {\n        RedbBuilder::default()\n    }\n    pub fn node_key(&self, node: &swiftide_core::indexing::TextNode) -> String {\n        format!(\"{}.{}\", self.cache_key_prefix, node.id())\n    }\n\n    pub fn table_definition(&self) -> redb::TableDefinition<'_, String, bool> {\n        redb::TableDefinition::<String, bool>::new(&self.table_name)\n    }\n\n    pub fn database(&self) -> &redb::Database {\n        &self.database\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/redb/node_cache.rs",
    "content": "use anyhow::Result;\nuse async_trait::async_trait;\nuse redb::ReadableDatabase;\nuse swiftide_core::{NodeCache, indexing::TextNode};\n\nuse super::Redb;\n\n// Simple proc macro that gets the ok value of a result or logs the error and returns false (not\n// cached)\n//\n// The underlying issue is that redb can be fickly if panics happened. We just want to make sure it\n// does not become worse. There probably is a better solution.\nmacro_rules! unwrap_or_log {\n    ($result:expr) => {\n        match $result {\n            Ok(value) => value,\n            Err(e) => {\n                tracing::error!(\"Error: {:#}\", e);\n                debug_assert!(\n                    true,\n                    \"Redb should not give errors unless in very weird situations; this is a bug: {:#}\",\n                    e\n                );\n                return false;\n            }\n        }\n    };\n}\n#[async_trait]\nimpl NodeCache for Redb {\n    type Input = String;\n\n    async fn get(&self, node: &TextNode) -> bool {\n        let table_definition = self.table_definition();\n        let read_txn = unwrap_or_log!(self.database.begin_read());\n\n        let result = read_txn.open_table(table_definition);\n\n        let table = match result {\n            Ok(table) => table,\n            Err(redb::TableError::TableDoesNotExist { .. 
}) => {\n                // Create the table\n                {\n                    let write_txn = unwrap_or_log!(self.database.begin_write());\n\n                    unwrap_or_log!(write_txn.open_table(table_definition));\n                    unwrap_or_log!(write_txn.commit());\n                }\n\n                let read_tx = unwrap_or_log!(self.database.begin_read());\n                unwrap_or_log!(read_tx.open_table(table_definition))\n            }\n            Err(e) => {\n                tracing::error!(\"Failed to open table: {e:#}\");\n                return false;\n            }\n        };\n\n        match unwrap_or_log!(table.get(self.node_key(node))) {\n            Some(access_guard) => access_guard.value(),\n            None => false,\n        }\n    }\n\n    async fn set(&self, node: &TextNode) {\n        let write_txn = self.database.begin_write().unwrap();\n\n        {\n            let mut table = write_txn.open_table(self.table_definition()).unwrap();\n\n            table.insert(self.node_key(node), true).unwrap();\n        }\n        write_txn.commit().unwrap();\n    }\n\n    /// Deletes the full cache table from the database.\n    async fn clear(&self) -> Result<()> {\n        let write_txn = self.database.begin_write().unwrap();\n        let _ = write_txn.delete_table(self.table_definition());\n\n        write_txn.commit().unwrap();\n\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use swiftide_core::indexing::TextNode;\n    use temp_dir::TempDir;\n\n    fn setup_redb() -> Redb {\n        let tempdir = TempDir::new().unwrap();\n        Redb::builder()\n            .database_path(tempdir.child(\"test_clear\"))\n            .build()\n            .unwrap()\n    }\n\n    #[tokio::test]\n    async fn test_get_set() {\n        let redb = setup_redb();\n        let node = TextNode::new(\"test_get_set\");\n        assert!(!redb.get(&node).await);\n        redb.set(&node).await;\n        assert!(redb.get(&node).await);\n    
 }\n\n    #[tokio::test]\n    async fn test_clear() {\n        let redb = setup_redb();\n        let node = TextNode::new(\"test_clear\");\n        redb.set(&node).await;\n        assert!(redb.get(&node).await);\n        redb.clear().await.unwrap();\n        assert!(!redb.get(&node).await);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/redis/message_history.rs",
    "content": "use anyhow::{Context as _, Result};\nuse async_trait::async_trait;\nuse swiftide_core::{MessageHistory, chat_completion::ChatMessage, indexing::Chunk};\n\nuse super::Redis;\n\n#[async_trait]\nimpl<T: Chunk> MessageHistory for Redis<T> {\n    async fn history(&self) -> Result<Vec<ChatMessage>> {\n        if let Some(mut cm) = self.lazy_connect().await {\n            let messages: Vec<String> = redis::cmd(\"LRANGE\")\n                .arg(&self.message_history_key)\n                .arg(0)\n                .arg(-1)\n                .query_async(&mut cm)\n                .await\n                .context(\"Error fetching message history\")?;\n            messages\n                .into_iter()\n                .map(|msg| {\n                    serde_json::from_str::<ChatMessage>(&msg).context(\"Error deserializing message\")\n                })\n                .collect()\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\")\n        }\n    }\n\n    async fn push_owned(&self, item: ChatMessage) -> Result<()> {\n        if let Some(mut cm) = self.lazy_connect().await {\n            redis::cmd(\"RPUSH\")\n                .arg(&self.message_history_key)\n                .arg(serde_json::to_string(&item)?)\n                .query_async::<()>(&mut cm)\n                .await\n                .context(\"Error pushing to message history\")?;\n            Ok(())\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\")\n        }\n    }\n\n    async fn extend_owned(&self, items: Vec<ChatMessage>) -> Result<()> {\n        if let Some(mut cm) = self.lazy_connect().await {\n            if items.is_empty() {\n                return Ok(());\n            }\n\n            redis::cmd(\"RPUSH\")\n                .arg(&self.message_history_key)\n                .arg(serialize_messages(items)?)\n                .query_async::<()>(&mut cm)\n                .await\n                .context(\"Error pushing to message history\")?;\n    
        Ok(())\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\")\n        }\n    }\n\n    async fn overwrite(&self, items: Vec<ChatMessage>) -> Result<()> {\n        if let Some(mut cm) = self.lazy_connect().await {\n            // If it does not exist yet, we can just push the items\n            let _ = redis::cmd(\"DEL\")\n                .arg(&self.message_history_key)\n                .query_async::<()>(&mut cm)\n                .await;\n\n            if items.is_empty() {\n                // If we are overwriting with an empty history, we can just return\n                return Ok(());\n            }\n\n            redis::cmd(\"RPUSH\")\n                .arg(&self.message_history_key)\n                .arg(serialize_messages(items)?)\n                .query_async::<()>(&mut cm)\n                .await\n                .context(\"Error pushing to message history\")?;\n            Ok(())\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\")\n        }\n    }\n}\n\nfn serialize_messages(items: Vec<ChatMessage>) -> Result<Vec<String>> {\n    items\n        .into_iter()\n        .map(|item| serde_json::to_string(&item).context(\"Error serializing message\"))\n        .collect()\n}\n\n#[cfg(test)]\nmod tests {\n    use testcontainers::{ContainerAsync, GenericImage, runners::AsyncRunner as _};\n\n    use super::*;\n\n    async fn start_redis() -> (String, ContainerAsync<GenericImage>) {\n        let redis_container = testcontainers::GenericImage::new(\"redis\", \"7.2.4\")\n            .with_exposed_port(6379.into())\n            .with_wait_for(testcontainers::core::WaitFor::message_on_stdout(\n                \"Ready to accept connections\",\n            ))\n            .start()\n            .await\n            .expect(\"Redis started\");\n\n        let host = redis_container.get_host().await.unwrap();\n        let port = redis_container.get_host_port_ipv4(6379).await.unwrap();\n\n        let url = 
format!(\"redis://{host}:{port}/\");\n\n        (url, redis_container)\n    }\n\n    #[tokio::test]\n    async fn test_no_messages_yet() {\n        let (url, _container) = start_redis().await;\n        let redis = Redis::try_from_url(url, \"tests\").unwrap();\n\n        let messages = redis.history().await.unwrap();\n        assert!(\n            messages.is_empty(),\n            \"Expected history to be empty for new Redis key\"\n        );\n    }\n\n    #[tokio::test]\n    async fn test_adding_and_next_completions() {\n        let (url, _container) = start_redis().await;\n        let redis = Redis::try_from_url(url, \"tests\").unwrap();\n\n        let m1 = ChatMessage::new_system(\"System test\");\n        let m2 = ChatMessage::User(\"User test\".into());\n\n        redis.push_owned(m1.clone()).await.unwrap();\n        redis.push_owned(m2.clone()).await.unwrap();\n\n        let hist = redis.history().await.unwrap();\n        assert_eq!(\n            hist,\n            vec![m1.clone(), m2.clone()],\n            \"History should match what's pushed\"\n        );\n\n        let hist2 = redis.history().await.unwrap();\n        assert_eq!(\n            hist2,\n            vec![m1, m2],\n            \"History should be unchanged on repeated call\"\n        );\n    }\n\n    #[tokio::test]\n    async fn test_overwrite_history() {\n        let (url, _container) = start_redis().await;\n        let redis = Redis::try_from_url(url, \"tests\").unwrap();\n\n        // Check that overwrite on empty also works\n        redis.overwrite(vec![]).await.unwrap();\n\n        let m1 = ChatMessage::new_system(\"First\");\n        let m2 = ChatMessage::User(\"Second\".into());\n        redis.push_owned(m1.clone()).await.unwrap();\n        redis.push_owned(m2.clone()).await.unwrap();\n\n        let m3 = ChatMessage::new_assistant(Some(\"Overwritten\".to_string()), None);\n        redis.overwrite(vec![m3.clone()]).await.unwrap();\n\n        let hist = redis.history().await.unwrap();\n      
  assert_eq!(\n            hist,\n            vec![m3],\n            \"History should only contain the overwritten message\"\n        );\n    }\n\n    #[tokio::test]\n    async fn test_extend() {\n        let (url, _container) = start_redis().await;\n        let redis = Redis::try_from_url(url, \"tests\").unwrap();\n\n        let m1 = ChatMessage::new_system(\"First\");\n        let m2 = ChatMessage::User(\"Second\".into());\n        redis.push_owned(m1.clone()).await.unwrap();\n\n        let m3 = ChatMessage::new_assistant(Some(\"Third\".to_string()), None);\n        redis\n            .extend_owned(vec![m2.clone(), m3.clone()])\n            .await\n            .unwrap();\n\n        let hist = redis.history().await.unwrap();\n        assert_eq!(hist, vec![m1, m2, m3], \"History should append on extend\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/redis/mod.rs",
    "content": "//! This module provides the integration with Redis for caching nodes in the Swiftide system.\n//!\n//! The primary component of this module is the `Redis`, which is re-exported for use\n//! in other parts of the system. The `Redis` struct is responsible for managing and\n//! caching nodes during the indexing process, leveraging Redis for efficient storage and retrieval.\n//!\n//! # Overview\n//!\n//! Redis implements the following `Swiftide` traits:\n//! - `Node<T>Cache`\n//! - `Persist`\n//! - `MessageHistory`\n//!\n//! Additionally it provides various helper and utility functions for managing the Redis connection\n//! and key management. The connection is managed using a connection manager. When\n//! cloned, the connection manager is shared across all instances.\n\nuse std::sync::Arc;\n\nuse anyhow::{Context as _, Result};\nuse derive_builder::Builder;\nuse serde::Serialize;\nuse tokio::sync::RwLock;\n\nuse swiftide_core::indexing::{Chunk, Node};\n\nmod message_history;\nmod node_cache;\nmod persist;\n\n/// `Redis` provides a caching mechanism for nodes using Redis.\n/// It helps in optimizing the indexing process by skipping nodes that have already been processed.\n///\n/// # Fields\n///\n/// * `client` - The Redis client used to interact with the Redis server.\n/// * `connection_manager` - Manages the Redis connections asynchronously.\n/// * `key_prefix` - A prefix used for keys stored in Redis to avoid collisions.\n#[allow(clippy::type_complexity)]\n#[derive(Builder, Clone)]\n#[builder(pattern = \"owned\", setter(strip_option))]\npub struct Redis<T: Chunk = String> {\n    #[builder(setter(into))]\n    client: Arc<redis::Client>,\n    #[builder(default, setter(skip))]\n    connection_manager: Arc<RwLock<Option<redis::aio::ConnectionManager>>>,\n    #[builder(default, setter(into))]\n    cache_key_prefix: Arc<String>,\n    #[builder(default = \"10\")]\n    /// The batch size used for persisting nodes. 
Defaults to a safe 10.\n    batch_size: usize,\n    #[builder(default)]\n    /// Customize the key used for persisting nodes\n    persist_key_fn: Option<fn(&Node<T>) -> Result<String>>,\n    #[builder(default)]\n    /// Customize the value used for persisting nodes\n    persist_value_fn: Option<fn(&Node<T>) -> Result<String>>,\n    #[builder(default = \"\\\"message_history\\\".to_string().into()\", setter(into))]\n    message_history_key: Arc<String>,\n}\n\nimpl Redis<String> {\n    /// Creates a new `Redis` instance from a given Redis URL and key prefix.\n    ///\n    /// # Parameters\n    ///\n    /// * `url` - The URL of the Redis server.\n    /// * `prefix` - The prefix to be used for keys stored in Redis.\n    ///\n    /// # Returns\n    ///\n    /// A `Result` containing the `Redis` instance or an error if the client could not be created.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the Redis client cannot be opened.\n    pub fn try_from_url(url: impl AsRef<str>, prefix: impl AsRef<str>) -> Result<Redis<String>> {\n        let client = redis::Client::open(url.as_ref()).context(\"Failed to open redis client\")?;\n        Ok(Redis::<String> {\n            client: client.into(),\n            connection_manager: Arc::new(RwLock::new(None)),\n            cache_key_prefix: prefix.as_ref().to_string().into(),\n            batch_size: 10,\n            persist_key_fn: None,\n            persist_value_fn: None,\n            message_history_key: format!(\"{}:message_history\", prefix.as_ref()).into(),\n        })\n    }\n}\n\nimpl<T: Chunk> Redis<T> {\n    /// # Errors\n    ///\n    /// Returns an error if the Redis client cannot be opened.\n    pub fn try_build_from_url(url: impl AsRef<str>) -> Result<RedisBuilder<T>> {\n        Ok(RedisBuilder::default()\n            .client(redis::Client::open(url.as_ref()).context(\"Failed to open redis client\")?))\n    }\n\n    /// Returns a `RedisBuilder` for constructing a `Redis` instance.\n    pub fn builder() -> RedisBuilder<T> 
{\n        RedisBuilder::default()\n    }\n\n    /// Set the key to be used for the message history\n    pub fn with_message_history_key(&mut self, prefix: impl Into<String>) -> &mut Self {\n        self.message_history_key = Arc::new(prefix.into());\n        self\n    }\n\n    /// Lazily connects to the Redis server and returns the connection manager.\n    ///\n    /// # Returns\n    ///\n    /// An `Option` containing the `ConnectionManager` if the connection is successful, or `None` if\n    /// it fails.\n    ///\n    /// # Errors\n    ///\n    /// Logs an error and returns `None` if the connection manager cannot be obtained.\n    async fn lazy_connect(&self) -> Option<redis::aio::ConnectionManager> {\n        if self.connection_manager.read().await.is_none() {\n            let result = self.client.get_connection_manager().await;\n            if let Err(e) = result {\n                tracing::error!(\"Failed to get connection manager: {}\", e);\n                return None;\n            }\n            let mut cm = self.connection_manager.write().await;\n            *cm = result.ok();\n        }\n\n        self.connection_manager.read().await.clone()\n    }\n\n    /// Generates a Redis key for a given node using the key prefix and the node's hash.\n    ///\n    /// # Parameters\n    ///\n    /// * `node` - The node for which the key is to be generated.\n    ///\n    /// # Returns\n    ///\n    /// A `String` representing the Redis key for the node.\n    fn cache_key_for_node(&self, node: &Node<T>) -> String {\n        format!(\"{}:{}\", self.cache_key_prefix, node.id())\n    }\n\n    /// Generates a key for a given node to be persisted in Redis.\n    fn persist_key_for_node(&self, node: &Node<T>) -> Result<String> {\n        if let Some(key_fn) = self.persist_key_fn {\n            key_fn(node)\n        } else {\n            let hash = node.id();\n            Ok(format!(\"{}:{}\", node.path.to_string_lossy(), hash))\n        }\n    }\n\n    /// Resets the cache by 
deleting all keys with the specified prefix.\n    /// This function is intended for testing purposes and is inefficient for production use.\n    ///\n    /// # Panics\n    ///\n    /// Panics if the keys cannot be retrieved or deleted.\n    #[allow(dead_code)]\n    async fn reset_cache(&self) {\n        if let Some(mut cm) = self.lazy_connect().await {\n            let keys: Vec<String> = redis::cmd(\"KEYS\")\n                .arg(format!(\"{}:*\", self.cache_key_prefix))\n                .query_async(&mut cm)\n                .await\n                .expect(\"Could not get keys\");\n\n            for key in &keys {\n                let _: usize = redis::cmd(\"DEL\")\n                    .arg(key)\n                    .query_async(&mut cm)\n                    .await\n                    .expect(\"Failed to reset cache\");\n            }\n        }\n    }\n\n    /// Gets a node persisted in Redis using the GET command.\n    /// Takes a node and returns a `Result<Option<String>>`.\n    #[allow(dead_code)]\n    async fn get_node(&self, node: &Node<T>) -> Result<Option<String>> {\n        if let Some(mut cm) = self.lazy_connect().await {\n            let key = self.persist_key_for_node(node)?;\n            let result: Option<String> = redis::cmd(\"GET\")\n                .arg(key)\n                .query_async(&mut cm)\n                .await\n                .context(\"Error getting from redis\")?;\n            Ok(result)\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\")\n        }\n    }\n}\n\nimpl<T: Chunk + Serialize> Redis<T> {\n    /// Generates a value for a given node to be persisted in Redis.\n    /// If a custom function is provided, it is used to generate the value.\n    /// Otherwise, the node is serialized as JSON.\n    fn persist_value_for_node(&self, node: &Node<T>) -> Result<String> {\n        if let Some(value_fn) = self.persist_value_fn {\n            value_fn(node)\n        
} else {\n            Ok(serde_json::to_string(node)?)\n        }\n    }\n}\n\n// Redis CM does not implement debug\n#[allow(clippy::missing_fields_in_debug)]\nimpl<T: Chunk> std::fmt::Debug for Redis<T> {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Redis\")\n            .field(\"client\", &self.client)\n            .finish()\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/redis/node_cache.rs",
    "content": "use anyhow::Result;\nuse async_trait::async_trait;\n\nuse swiftide_core::indexing::{Chunk, Node, NodeCache};\n\nuse super::Redis;\n\n#[allow(dependency_on_unit_never_type_fallback)]\n#[async_trait]\nimpl<T: Chunk> NodeCache for Redis<T> {\n    type Input = T;\n    /// Checks if a node is present in the cache.\n    ///\n    /// # Parameters\n    ///\n    /// * `node` - The node to be checked in the cache.\n    ///\n    /// # Returns\n    ///\n    /// `true` if the node is present in the cache, `false` otherwise.\n    ///\n    /// # Errors\n    ///\n    /// Logs an error and returns `false` if the cache check fails.\n    #[tracing::instrument(skip_all, fields(hit), level = \"trace\")]\n    async fn get(&self, node: &Node<T>) -> bool {\n        let cache_result = if let Some(mut cm) = self.lazy_connect().await {\n            let result = redis::cmd(\"EXISTS\")\n                .arg(self.cache_key_for_node(node))\n                .query_async(&mut cm)\n                .await;\n\n            match result {\n                Ok(1) => true,\n                Ok(0) => false,\n                Err(e) => {\n                    tracing::error!(\"Failed to check node cache: {}\", e);\n                    false\n                }\n                _ => {\n                    tracing::error!(\"Unexpected response from redis\");\n                    false\n                }\n            }\n        } else {\n            false\n        };\n\n        tracing::Span::current().record(\"hit\", cache_result);\n\n        cache_result\n    }\n\n    /// Sets a node in the cache.\n    ///\n    /// # Parameters\n    ///\n    /// * `node` - The node to be set in the cache.\n    ///\n    /// # Errors\n    ///\n    /// Logs an error if the node cannot be set in the cache.\n    #[tracing::instrument(skip_all, level = \"trace\")]\n    async fn set(&self, node: &Node<T>) {\n        if let Some(mut cm) = self.lazy_connect().await {\n            let result: Result<(), redis::RedisError> 
= redis::cmd(\"SET\")\n                .arg(self.cache_key_for_node(node))\n                .arg(1)\n                .query_async(&mut cm)\n                .await;\n\n            if let Err(e) = result {\n                tracing::error!(\"Failed to set node cache: {}\", e);\n            }\n        }\n    }\n\n    async fn clear(&self) -> Result<()> {\n        if self.cache_key_prefix.is_empty() {\n            return Err(anyhow::anyhow!(\n                \"No cache key prefix set; not flushing cache\"\n            ));\n        }\n\n        if let Some(mut cm) = self.lazy_connect().await {\n            // `DEL` takes literal keys and does not support glob patterns; look up\n            // the matching keys first\n            let keys: Vec<String> = redis::cmd(\"KEYS\")\n                .arg(format!(\"{}*\", self.cache_key_prefix))\n                .query_async(&mut cm)\n                .await?;\n\n            if !keys.is_empty() {\n                redis::cmd(\"DEL\")\n                    .arg(keys)\n                    .query_async::<()>(&mut cm)\n                    .await?;\n            }\n\n            Ok(())\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\");\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    use swiftide_core::indexing::TextNode;\n    use testcontainers::runners::AsyncRunner;\n\n    /// Tests the `RedisNodeCache` implementation.\n    #[test_log::test(tokio::test)]\n    async fn test_redis_cache() {\n        let redis = testcontainers::GenericImage::new(\"redis\", \"7.2.4\")\n            .with_exposed_port(6379.into())\n            .with_wait_for(testcontainers::core::WaitFor::message_on_stdout(\n                \"Ready to accept connections\",\n            ))\n            .start()\n            .await\n            .expect(\"Redis started\");\n\n        let host = redis.get_host().await.unwrap();\n        let port = redis.get_host_port_ipv4(6379).await.unwrap();\n        let cache = Redis::try_from_url(format!(\"redis://{host}:{port}\"), \"test\")\n            .expect(\"Could not build redis client\");\n        cache.reset_cache().await;\n\n        let node = TextNode::new(\"chunk\");\n\n        let before_cache = cache.get(&node).await;\n        assert!(!before_cache);\n\n        cache.set(&node).await;\n        let after_cache = 
cache.get(&node).await;\n        assert!(after_cache);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/redis/persist.rs",
    "content": "use anyhow::{Context as _, Result};\nuse async_trait::async_trait;\n\nuse serde::Serialize;\nuse swiftide_core::{\n    Persist,\n    indexing::{Chunk, IndexingStream, Node},\n};\n\nuse super::Redis;\n\n#[async_trait]\n#[allow(dependency_on_unit_never_type_fallback)]\nimpl<T: Chunk + Serialize> Persist for Redis<T> {\n    type Input = T;\n    type Output = T;\n    async fn setup(&self) -> Result<()> {\n        Ok(())\n    }\n\n    fn batch_size(&self) -> Option<usize> {\n        Some(self.batch_size)\n    }\n\n    /// Stores a node in Redis using the SET command.\n    ///\n    /// By default nodes are stored with the path and hash as key and the node serialized as JSON as\n    /// value.\n    ///\n    /// You can customize the key and value used for storing nodes by setting the `persist_key_fn`\n    /// and `persist_value_fn` fields.\n    async fn store(&self, node: Node<T>) -> Result<Node<T>> {\n        if let Some(mut cm) = self.lazy_connect().await {\n            redis::cmd(\"SET\")\n                .arg(self.persist_key_for_node(&node)?)\n                .arg(self.persist_value_for_node(&node)?)\n                .query_async::<()>(&mut cm)\n                .await\n                .context(\"Error persisting to redis\")?;\n\n            Ok(node)\n        } else {\n            anyhow::bail!(\"Failed to connect to Redis\")\n        }\n    }\n\n    /// Stores a batch of nodes in Redis using the MSET command.\n    ///\n    /// By default nodes are stored with the path and hash as key and the node serialized as JSON as\n    /// value.\n    ///\n    /// You can customize the key and value used for storing nodes by setting the `persist_key_fn`\n    /// and `persist_value_fn` fields.\n    async fn batch_store(&self, nodes: Vec<Node<T>>) -> IndexingStream<T> {\n        // use mset for batch store\n        if let Some(mut cm) = self.lazy_connect().await {\n            let args = match nodes\n                .iter()\n                .map(|node| -> 
Result<Vec<String>> {\n                    let key = self.persist_key_for_node(node)?;\n                    let value = self.persist_value_for_node(node)?;\n\n                    Ok(vec![key, value])\n                })\n                .collect::<Result<Vec<_>>>()\n            {\n                Ok(args) => args,\n                Err(err) => return vec![Err(err)].into(),\n            };\n\n            let result: Result<()> = redis::cmd(\"MSET\")\n                .arg(args)\n                .query_async(&mut cm)\n                .await\n                .context(\"Error persisting to redis\");\n\n            if let Err(e) = result {\n                IndexingStream::iter([Err(e)])\n            } else {\n                IndexingStream::iter(nodes.into_iter().map(Ok))\n            }\n        } else {\n            IndexingStream::iter([Err(anyhow::anyhow!(\"Failed to connect to Redis\"))])\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use futures_util::TryStreamExt;\n    use swiftide_core::indexing::TextNode;\n    use testcontainers::{ContainerAsync, GenericImage, runners::AsyncRunner};\n\n    async fn start_redis() -> ContainerAsync<GenericImage> {\n        testcontainers::GenericImage::new(\"redis\", \"7.2.4\")\n            .with_exposed_port(6379.into())\n            .with_wait_for(testcontainers::core::WaitFor::message_on_stdout(\n                \"Ready to accept connections\",\n            ))\n            .start()\n            .await\n            .expect(\"Redis started\")\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_redis_persist() {\n        let redis_container = start_redis().await;\n\n        let host = redis_container.get_host().await.unwrap();\n        let port = redis_container.get_host_port_ipv4(6379).await.unwrap();\n        let redis = Redis::try_build_from_url(format!(\"redis://{host}:{port}\"))\n            .unwrap()\n            .build()\n            .unwrap();\n\n        let node = 
TextNode::new(\"chunk\");\n\n        redis.store(node.clone()).await.unwrap();\n        let stored_node = serde_json::from_str(&redis.get_node(&node).await.unwrap().unwrap());\n\n        assert_eq!(node, stored_node.unwrap());\n    }\n\n    // test batch store\n    #[test_log::test(tokio::test)]\n    async fn test_redis_batch_persist() {\n        let redis_container = start_redis().await;\n        let host = redis_container.get_host().await.unwrap();\n        let port = redis_container.get_host_port_ipv4(6379).await.unwrap();\n        let redis = Redis::try_build_from_url(format!(\"redis://{host}:{port}\"))\n            .unwrap()\n            .batch_size(20)\n            .build()\n            .unwrap();\n        let nodes = vec![TextNode::new(\"test\"), TextNode::new(\"other\")];\n\n        let stream = redis.batch_store(nodes).await;\n        let streamed_nodes: Vec<TextNode> = stream.try_collect().await.unwrap();\n\n        assert_eq!(streamed_nodes.len(), 2);\n\n        for node in streamed_nodes {\n            let stored_node = serde_json::from_str(&redis.get_node(&node).await.unwrap().unwrap());\n            assert_eq!(node, stored_node.unwrap());\n        }\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn test_redis_custom_persist() {\n        let redis_container = start_redis().await;\n        let host = redis_container.get_host().await.unwrap();\n        let port = redis_container.get_host_port_ipv4(6379).await.unwrap();\n        let redis = Redis::<String>::try_build_from_url(format!(\"redis://{host}:{port}\"))\n            .unwrap()\n            .persist_key_fn(|_node| Ok(\"test\".to_string()))\n            .persist_value_fn(|_node| Ok(\"hello world\".to_string()))\n            .build()\n            .unwrap();\n        let node = Node::default();\n\n        redis.store(node.clone()).await.unwrap();\n        let stored_node = redis.get_node(&node).await.unwrap();\n\n        assert_eq!(stored_node.unwrap(), \"hello world\");\n        assert_eq!(\n 
           redis.persist_key_for_node(&node).unwrap(),\n            \"test\".to_string()\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/scraping/html_to_markdown_transformer.rs",
    "content": "use std::sync::Arc;\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse htmd::HtmlToMarkdown;\n\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// Transforms HTML content into markdown.\n///\n/// Useful for converting scraping results into markdown.\n#[swiftide_macros::indexing_transformer(derive(skip_default, skip_debug))]\npub struct HtmlToMarkdownTransformer {\n    /// The `HtmlToMarkdown` instance used to convert HTML to markdown.\n    ///\n    /// Sets a sane default, but can be customized.\n    htmd: Arc<HtmlToMarkdown>,\n}\n\nimpl Default for HtmlToMarkdownTransformer {\n    fn default() -> Self {\n        Self {\n            htmd: HtmlToMarkdown::builder()\n                .skip_tags(vec![\"script\", \"style\"])\n                .build()\n                .into(),\n            concurrency: None,\n            client: None,\n            indexing_defaults: None,\n        }\n    }\n}\n\nimpl std::fmt::Debug for HtmlToMarkdownTransformer {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"HtmlToMarkdownTransformer\").finish()\n    }\n}\n\n#[async_trait]\nimpl Transformer for HtmlToMarkdownTransformer {\n    type Input = String;\n    type Output = String;\n    /// Converts the HTML content in the `TextNode` to markdown.\n    ///\n    /// Returns an `Err` for the node if the conversion fails.\n    #[tracing::instrument(skip_all, name = \"transformer.html_to_markdown\")]\n    async fn transform_node(&self, node: TextNode) -> Result<TextNode> {\n        let chunk = self.htmd.convert(&node.chunk)?;\n\n        TextNode::build_from_other(&node).chunk(chunk).build()\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[tokio::test]\n    async fn test_html_to_markdown() {\n        let node = TextNode::new(\"<h1>Hello, World!</h1>\");\n        let transformer = HtmlToMarkdownTransformer::default();\n        let 
transformed_node = transformer.transform_node(node).await.unwrap();\n        assert_eq!(transformed_node.chunk, \"# Hello, World!\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/scraping/loader.rs",
    "content": "use derive_builder::Builder;\nuse spider::website::Website;\n\nuse swiftide_core::{\n    Loader,\n    indexing::{IndexingStream, TextNode},\n};\n\n#[derive(Debug, Builder, Clone)]\n#[builder(pattern = \"owned\")]\n/// Scrapes a given website\n///\n/// Under the hood uses the `spider` crate to scrape the website.\n/// For more configuration options see their documentation.\npub struct ScrapingLoader {\n    spider_website: Website,\n}\n\nimpl ScrapingLoader {\n    pub fn builder() -> ScrapingLoaderBuilder {\n        ScrapingLoaderBuilder::default()\n    }\n\n    /// Constructs a `ScrapingLoader` from a `spider::Website` configuration\n    #[allow(dead_code)]\n    pub fn from_spider(spider_website: Website) -> Self {\n        Self { spider_website }\n    }\n\n    /// Constructs a `ScrapingLoader` from a given URL\n    pub fn from_url(url: impl AsRef<str>) -> Self {\n        Self::from_spider(Website::new(url.as_ref()))\n    }\n}\n\nimpl Loader for ScrapingLoader {\n    type Output = String;\n\n    fn into_stream(mut self) -> IndexingStream<String> {\n        let (tx, rx) = tokio::sync::mpsc::channel(1000);\n        let mut spider_rx = self\n            .spider_website\n            .subscribe(0)\n            .expect(\"Failed to subscribe to spider\");\n        tracing::info!(\"Subscribed to spider\");\n\n        let _recv_thread = tokio::spawn(async move {\n            while let Ok(res) = spider_rx.recv().await {\n                let html = res.get_html();\n                let original_size = html.len();\n\n                let node = TextNode::builder()\n                    .chunk(html)\n                    .original_size(original_size)\n                    .path(res.get_url())\n                    .build();\n\n                tracing::debug!(?node, \"[Spider] Received node from spider\");\n\n                if let Err(error) = tx.send(node).await {\n                    tracing::error!(?error, \"[Spider] Failed to send node to stream\");\n                    
break;\n                }\n            }\n        });\n\n        let mut spider_website = self.spider_website;\n\n        let _scrape_thread = tokio::spawn(async move {\n            tracing::info!(\"[Spider] Starting scrape loop\");\n            // TODO: It would be much nicer if this used `scrape` instead, as it is supposedly\n            // more concurrent\n            spider_website.crawl().await;\n            tracing::info!(\"[Spider] Scrape loop finished\");\n        });\n\n        // NOTE: Handles should stay alive because of rx, but feels a bit fishy\n        rx.into()\n    }\n\n    fn into_stream_boxed(self: Box<Self>) -> IndexingStream<String> {\n        self.into_stream()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use anyhow::Result;\n    use futures_util::StreamExt;\n    use swiftide_core::indexing::Loader;\n    use wiremock::matchers::{method, path};\n    use wiremock::{Mock, MockServer, Request, ResponseTemplate};\n\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_scraping_loader_with_wiremock() {\n        // Set up the wiremock server to simulate the remote web server\n        let mock_server = MockServer::start().await;\n\n        // Mocked response for the page we will scrape\n        let body = \"<html><body><h1>Test Page</h1></body></html>\";\n        Mock::given(method(\"GET\"))\n            .and(path(\"/\"))\n            .respond_with(ResponseTemplate::new(200).set_body_string(body))\n            .mount(&mock_server)\n            .await;\n\n        // Create an instance of ScrapingLoader using the mock server's URL\n        let loader = ScrapingLoader::from_url(mock_server.uri());\n\n        // Execute the into_stream method\n        let stream = loader.into_stream();\n\n        // Process the stream to check if we get the expected result\n        let nodes = stream.collect::<Vec<Result<TextNode>>>().await;\n\n        assert_eq!(nodes.len(), 1);\n\n        let first_node = 
nodes.first().unwrap().as_ref().unwrap();\n\n        assert_eq!(first_node.chunk, body);\n    }\n\n    #[test_log::test(tokio::test(flavor = \"multi_thread\"))]\n    async fn test_scraping_loader_multiple_pages() {\n        // Set up the wiremock server to simulate the remote web server\n        let mock_server = MockServer::start().await;\n\n        // Mocked response for the page we will scrape\n        let body = \"<html><body><h1>Test Page</h1><a href=\\\"/other\\\">link</a></body></html>\";\n        Mock::given(method(\"GET\"))\n            .and(path(\"/\"))\n            .respond_with(ResponseTemplate::new(200).set_body_string(body))\n            .mount(&mock_server)\n            .await;\n\n        let body2 = \"<html><body><h1>Test Page 2</h1></body></html>\";\n        Mock::given(method(\"GET\"))\n            .and(path(\"/other\"))\n            .respond_with(move |_req: &Request| {\n                std::thread::sleep(std::time::Duration::from_secs(1));\n                ResponseTemplate::new(200).set_body_string(body2)\n            })\n            .mount(&mock_server)\n            .await;\n\n        // Create an instance of ScrapingLoader using the mock server's URL\n        let loader = ScrapingLoader::from_url(mock_server.uri());\n\n        // Execute the into_stream method\n        let stream = loader.into_stream();\n\n        // Process the stream to check if we get the expected result\n        let mut nodes = stream.collect::<Vec<Result<TextNode>>>().await;\n\n        assert_eq!(nodes.len(), 2);\n\n        let first_node = nodes.pop().unwrap().unwrap();\n\n        assert_eq!(first_node.chunk, body2);\n\n        let second_node = nodes.pop().unwrap().unwrap();\n\n        assert_eq!(second_node.chunk, body);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/scraping/mod.rs",
    "content": "//! Scraping loader and an HTML to markdown transformer\nmod html_to_markdown_transformer;\nmod loader;\n\npub use html_to_markdown_transformer::HtmlToMarkdownTransformer;\npub use loader::ScrapingLoader;\n"
  },
  {
    "path": "swiftide-integrations/src/tiktoken/mod.rs",
    "content": "//! Use tiktoken-rs to estimate token count on various common Swiftide types\n//!\n//! Intended to be used for openai models.\n//!\n//! Note that the library is heavy on the unwraps.\n\nuse std::sync::Arc;\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::token_estimation::{Estimatable, EstimateTokens};\nuse tiktoken_rs::{CoreBPE, get_bpe_from_model, get_bpe_from_tokenizer, tokenizer::Tokenizer};\n\n/// A tiktoken-based tokenizer for openai models. Can also be used for other models.\n///\n/// Implements `EstimateTokens` for various swiftide types (prompts, chat messages, lists of chat\n/// messages) and regular strings.\n///\n/// Estimates are estimates; not exact counts.\n///\n/// # Example\n///\n/// ```no_run\n/// # use swiftide_core::token_estimation::EstimateTokens;\n/// # use swiftide_integrations::tiktoken::TikToken;\n///\n/// # async fn test() {\n/// let tokenizer = TikToken::try_from_model(\"gpt-4-0314\").unwrap();\n/// let estimate = tokenizer.estimate(\"hello {{world}}\").await.unwrap();\n///\n/// assert_eq!(estimate, 4);\n/// # }\n/// ```\n#[derive(Clone)]\npub struct TikToken {\n    /// The tiktoken model to use\n    bpe: Arc<CoreBPE>,\n}\n\nimpl std::fmt::Debug for TikToken {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"TikToken\").finish()\n    }\n}\n\nimpl Default for TikToken {\n    fn default() -> Self {\n        Self::try_from_model(\"gpt-4o\")\n            .expect(\"infallible; gpt-4o should be valid model for tiktoken\")\n    }\n}\n\nimpl TikToken {\n    /// Build a `TikToken` from an openai model name\n    ///\n    /// # Errors\n    ///\n    /// Errors if the tokenizer cannot be found from the model or it cannot be built\n    pub fn try_from_model(model: impl AsRef<str>) -> Result<Self> {\n        let bpe = get_bpe_from_model(model.as_ref())?;\n        Ok(Self { bpe: Arc::new(bpe) })\n    }\n\n    /// Build a `TikToken` from a 
`tiktoken_rs::tokenizer::Tokenizer`\n    ///\n    /// # Errors\n    ///\n    /// Errors if the tokenizer cannot be built\n    pub fn try_from_tokenizer(tokenizer: Tokenizer) -> Result<Self> {\n        let bpe = get_bpe_from_tokenizer(tokenizer)?;\n        Ok(Self { bpe: Arc::new(bpe) })\n    }\n}\n\n#[async_trait]\nimpl EstimateTokens for TikToken {\n    async fn estimate(&self, value: impl Estimatable) -> Result<usize> {\n        let mut total = 0;\n        for text in value.for_estimate()? {\n            total += self.bpe.encode_with_special_tokens(text.as_ref()).len();\n        }\n\n        Ok(total + value.additional_tokens())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use swiftide_core::{chat_completion::ChatMessage, prompt::Prompt};\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_estimate_tokens() {\n        let tokenizer = TikToken::try_from_model(\"gpt-4-0314\").unwrap();\n        let prompt = Prompt::from(\"hello {{world}}\");\n        let tokens = tokenizer.estimate(&prompt).await.unwrap();\n        assert_eq!(tokens, 4);\n    }\n\n    #[tokio::test]\n    async fn test_estimate_tokens_from_tokenizer() {\n        let tokenizer = TikToken::try_from_tokenizer(Tokenizer::O200kBase).unwrap();\n        let prompt = \"hello {{world}}\";\n        let tokens = tokenizer.estimate(prompt).await.unwrap();\n        assert_eq!(tokens, 4);\n    }\n\n    #[tokio::test]\n    async fn test_estimate_chat_messages() {\n        let messages = vec![\n            ChatMessage::new_user(\"hello \".repeat(10)),\n            ChatMessage::new_system(\"world\"),\n        ];\n\n        // 11x hello + 1x world + 2x 4 per message + 1x 3 for full + 2 whatever = 23\n\n        let tokenizer = TikToken::try_from_model(\"gpt-4-0314\").unwrap();\n        dbg!(messages.as_slice().for_estimate().unwrap());\n\n        assert_eq!(tokenizer.estimate(messages.as_slice()).await.unwrap(), 23);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/chunk_code.rs",
    "content": "//! Chunk code using tree-sitter\nuse anyhow::{Context as _, Result};\nuse async_trait::async_trait;\nuse derive_builder::Builder;\n\nuse crate::treesitter::{ChunkSize, CodeSplitter, SupportedLanguages};\nuse swiftide_core::{\n    ChunkerTransformer,\n    indexing::{IndexingStream, TextNode},\n};\n\n/// The `ChunkCode` struct is responsible for chunking code into smaller pieces\n/// based on the specified language and chunk size.\n///\n/// It uses tree-sitter under the hood, and tries to split the code into smaller, meaningful\n/// chunks.\n///\n/// # Example\n///\n/// ```no_run\n/// # use swiftide_integrations::treesitter::transformers::ChunkCode;\n/// # use swiftide_integrations::treesitter::SupportedLanguages;\n/// // Chunk rust code with a maximum chunk size of 1000 bytes.\n/// ChunkCode::try_for_language_and_chunk_size(SupportedLanguages::Rust, 1000);\n///\n/// // Chunk python code with a minimum chunk size of 500 bytes and maximum chunk size of 2048.\n/// // Smaller chunks than 500 bytes will be discarded.\n/// ChunkCode::try_for_language_and_chunk_size(SupportedLanguages::Python, 500..2048);\n/// ```\n#[derive(Debug, Clone, Builder)]\n#[builder(pattern = \"owned\", setter(into, strip_option))]\npub struct ChunkCode {\n    chunker: CodeSplitter,\n    #[builder(default)]\n    concurrency: Option<usize>,\n}\n\nimpl ChunkCode {\n    pub fn builder() -> ChunkCodeBuilder {\n        ChunkCodeBuilder::default()\n    }\n\n    /// Tries to create a `ChunkCode` instance for a given programming language.\n    ///\n    /// # Parameters\n    /// - `lang`: The programming language to be used for chunking. 
It should implement\n    ///   `TryInto<SupportedLanguages>`.\n    ///\n    /// # Returns\n    /// - `Result<Self>`: Returns an instance of `ChunkCode` if successful, otherwise returns an\n    ///   error.\n    ///\n    /// # Errors\n    /// - Returns an error if the language is not supported or if the `CodeSplitter` fails to build.\n    pub fn try_for_language(lang: impl TryInto<SupportedLanguages>) -> Result<Self> {\n        Ok(Self {\n            chunker: CodeSplitter::builder().try_language(lang)?.build()?,\n            concurrency: None,\n        })\n    }\n\n    /// Tries to create a `ChunkCode` instance for a given programming language and chunk size.\n    ///\n    /// # Parameters\n    /// - `lang`: The programming language to be used for chunking. It should implement\n    ///   `TryInto<SupportedLanguages>`.\n    /// - `chunk_size`: The size of the chunks. It should implement `Into<ChunkSize>`.\n    ///\n    /// # Returns\n    /// - `Result<Self>`: Returns an instance of `ChunkCode` if successful, otherwise returns an\n    ///   error.\n    ///\n    /// # Errors\n    /// - Returns an error if the language is not supported, if the chunk size is invalid, or if the\n    ///   `CodeSplitter` fails to build.\n    pub fn try_for_language_and_chunk_size(\n        lang: impl TryInto<SupportedLanguages>,\n        chunk_size: impl Into<ChunkSize>,\n    ) -> Result<Self> {\n        Ok(Self {\n            chunker: CodeSplitter::builder()\n                .try_language(lang)?\n                .chunk_size(chunk_size)\n                .build()?,\n            concurrency: None,\n        })\n    }\n\n    #[must_use]\n    pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n        self.concurrency = Some(concurrency);\n        self\n    }\n}\n\n#[async_trait]\nimpl ChunkerTransformer for ChunkCode {\n    type Input = String;\n    type Output = String;\n    /// Transforms a `TextNode` by splitting its code chunk into smaller pieces.\n    ///\n    /// # 
Parameters\n    /// - `node`: The `TextNode` containing the code chunk to be split.\n    ///\n    /// # Returns\n    /// - `IndexingStream`: A stream of `TextNode` instances, each containing a smaller chunk of\n    ///   code.\n    ///\n    /// # Errors\n    /// - If the code splitting fails, an error is sent downstream.\n    #[tracing::instrument(skip_all, name = \"transformers.chunk_code\")]\n    async fn transform_node(&self, node: TextNode) -> IndexingStream<String> {\n        let split_result = self.chunker.split(&node.chunk);\n\n        if let Ok(split) = split_result {\n            let mut offset = 0;\n\n            IndexingStream::iter(split.into_iter().map(move |chunk| {\n                let chunk_size = chunk.len();\n\n                let node = TextNode::build_from_other(&node)\n                    .chunk(chunk)\n                    .offset(offset)\n                    .build();\n\n                offset += chunk_size;\n\n                node\n            }))\n        } else {\n            // Send the error downstream\n            IndexingStream::iter(vec![Err(split_result\n                .with_context(|| format!(\"Failed to chunk {}\", node.path.display()))\n                .unwrap_err())])\n        }\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/code_tree.rs",
    "content": "//! Code parsing\n//!\n//! Extracts typed semantics from code.\n#![allow(dead_code)]\nuse itertools::Itertools;\nuse tree_sitter::{Parser, Query, QueryCursor, StreamingIterator as _, Tree};\n\nuse anyhow::{Context as _, Result};\nuse std::collections::HashSet;\n\nuse crate::treesitter::queries::{\n    csharp, go, java, javascript, python, ruby, rust, solidity, typescript,\n};\n\nuse super::SupportedLanguages;\n\n#[derive(Debug)]\npub struct CodeParser {\n    language: SupportedLanguages,\n}\n\nimpl CodeParser {\n    pub fn from_language(language: SupportedLanguages) -> Self {\n        Self { language }\n    }\n\n    /// Parses code and returns a `CodeTree`\n    ///\n    /// Tree-sitter is pretty lenient and will parse invalid code. I.e. if the code is invalid,\n    /// queries might fail and return no results.\n    ///\n    /// This is good as it makes this safe to use for chunked code as well.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the language is not support or if the tree cannot be parsed\n    pub fn parse<'a>(&self, code: &'a str) -> Result<CodeTree<'a>> {\n        let mut parser = Parser::new();\n        parser.set_language(&self.language.into())?;\n        let ts_tree = parser.parse(code, None).context(\"No nodes found\")?;\n\n        Ok(CodeTree {\n            ts_tree,\n            code,\n            language: self.language,\n        })\n    }\n}\n\n/// A code tree is a queryable representation of code\npub struct CodeTree<'a> {\n    ts_tree: Tree,\n    code: &'a str,\n    language: SupportedLanguages,\n}\n\npub struct ReferencesAndDefinitions {\n    pub references: Vec<String>,\n    pub definitions: Vec<String>,\n}\n\nimpl CodeTree<'_> {\n    /// Queries for references and definitions in the code. 
It returns a unique list of non-local\n    /// references, and local definitions.\n    ///\n    /// # Errors\n    ///\n    /// Errors if the query is invalid or fails\n    pub fn references_and_definitions(&self) -> Result<ReferencesAndDefinitions> {\n        let (defs, refs) = ts_queries_for_language(self.language);\n\n        let defs_query = Query::new(&self.language.into(), defs)?;\n        let refs_query = Query::new(&self.language.into(), refs)?;\n\n        let defs = self.ts_query_for_matches(&defs_query)?;\n        let refs = self.ts_query_for_matches(&refs_query)?;\n\n        Ok(ReferencesAndDefinitions {\n            // Remove any self references\n            references: refs\n                .into_iter()\n                .filter(|r| !defs.contains(r))\n                .sorted()\n                .collect(),\n            definitions: defs.into_iter().sorted().collect(),\n        })\n    }\n\n    /// Given a `tree-sitter` query, searches the code and returns a list of matching symbols\n    fn ts_query_for_matches(&self, query: &Query) -> Result<HashSet<String>> {\n        let mut cursor = QueryCursor::new();\n\n        cursor\n            .matches(query, self.ts_tree.root_node(), self.code.as_bytes())\n            .map_deref(|m| {\n                m.captures\n                    .iter()\n                    .map(|c| {\n                        Ok(c.node\n                            .utf8_text(self.code.as_bytes())\n                            .context(\"Failed to parse node\")?\n                            .to_string())\n                    })\n                    .collect::<Result<Vec<_>>>()\n                    .map(|s| s.join(\"\"))\n            })\n            .collect::<Result<HashSet<_>>>()\n    }\n}\n\nfn ts_queries_for_language(language: SupportedLanguages) -> (&'static str, &'static str) {\n    use SupportedLanguages::{\n        C, CSharp, Cpp, Elixir, Go, HTML, Java, Javascript, PHP, Python, Ruby, Rust, Solidity,\n        Typescript,\n    };\n\n    
match language {\n        Rust => (rust::DEFS, rust::REFS),\n        Python => (python::DEFS, python::REFS),\n        // The univocal proof that TS is just a linter\n        Typescript => (typescript::DEFS, typescript::REFS),\n        Javascript => (javascript::DEFS, javascript::REFS),\n        Ruby => (ruby::DEFS, ruby::REFS),\n        Java => (java::DEFS, java::REFS),\n        Go => (go::DEFS, go::REFS),\n        CSharp => (csharp::DEFS, csharp::REFS),\n        Solidity => (solidity::DEFS, solidity::REFS),\n        C | Cpp | Elixir | PHP | HTML => unimplemented!(),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_parsing_on_rust() {\n        let parser = CodeParser::from_language(SupportedLanguages::Rust);\n        let code = r#\"\n        use std::io;\n\n        fn main() {\n            println!(\"Hello, world!\");\n        }\n        \"#;\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.references, vec![\"println\"]);\n\n        assert_eq!(result.definitions, vec![\"main\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_solidity() {\n        let parser = CodeParser::from_language(SupportedLanguages::Solidity);\n        let code = r\"\n        pragma solidity ^0.8.0;\n\n        contract MyContract {\n            function myFunction() public {\n                emit MyEvent();\n            }\n        }\n        \";\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.references, vec![\"MyEvent\"]);\n        assert_eq!(result.definitions, vec![\"MyContract\", \"myFunction\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_ruby() {\n        let parser = CodeParser::from_language(SupportedLanguages::Ruby);\n        let code = r#\"\n        class A < Inheritance\n          include ActuallyAlsoInheritance\n\n          def a\n            puts \"A\"\n     
     end\n        end\n        \"#;\n\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(\n            result.references,\n            [\"ActuallyAlsoInheritance\", \"Inheritance\", \"include\", \"puts\",]\n        );\n\n        assert_eq!(result.definitions, [\"A\", \"a\"]);\n    }\n\n    #[test]\n    fn test_parsing_python() {\n        // test with a python class and list comprehension\n        let parser = CodeParser::from_language(SupportedLanguages::Python);\n        let code = r#\"\n        class A:\n            def __init__(self):\n                self.a = [x for x in range(10)]\n\n        def hello_world():\n            print(\"Hello, world!\")\n        \"#;\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.references, [\"print\", \"range\"]);\n        assert_eq!(result.definitions, vec![\"A\", \"hello_world\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_c_sharp() {\n        let parser = CodeParser::from_language(SupportedLanguages::CSharp);\n        let code = r#\"\n        public class Greeter\n        {\n            public void SayHello()\n            {\n                System.Console.WriteLine(\"Hello, world!\");\n            }\n        }\n        \"#;\n\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n\n        assert_eq!(result.references, vec![\"WriteLine\"]);\n        assert_eq!(result.definitions, vec![\"Greeter\", \"SayHello\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_typescript() {\n        let parser = CodeParser::from_language(SupportedLanguages::Typescript);\n        let code = r#\"\n        function Test() {\n            console.log(\"Hello, TypeScript!\");\n            otherThing();\n        }\n\n        class MyClass {\n            constructor() {\n                let local = 5;\n            
    this.myMethod();\n            }\n\n            myMethod() {\n                console.log(\"Hello, TypeScript!\");\n            }\n        }\n        \"#;\n\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.definitions, vec![\"MyClass\", \"Test\", \"myMethod\"]);\n        assert_eq!(result.references, vec![\"log\", \"otherThing\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_javascript() {\n        let parser = CodeParser::from_language(SupportedLanguages::Javascript);\n        let code = r#\"\n        function Test() {\n            console.log(\"Hello, JavaScript!\");\n            otherThing();\n        }\n        class MyClass {\n            constructor() {\n                let local = 5;\n                this.myMethod();\n            }\n            myMethod() {\n                console.log(\"Hello, JavaScript!\");\n            }\n        }\n        \"#;\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.definitions, vec![\"MyClass\", \"Test\", \"myMethod\"]);\n        assert_eq!(result.references, vec![\"log\", \"otherThing\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_java() {\n        let parser = CodeParser::from_language(SupportedLanguages::Java);\n        let code = r#\"\n        public class Hello {\n            public static void main(String[] args) {\n                System.out.printf(\"Hello %s!%n\", args[0]);\n            }\n        }\n        \"#;\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.definitions, vec![\"Hello\", \"main\"]);\n        assert_eq!(result.references, vec![\"printf\"]);\n    }\n\n    #[test]\n    fn test_parsing_on_java_enum() {\n        let parser = CodeParser::from_language(SupportedLanguages::Java);\n        let code = r\"\n        enum 
Material {\n            DENIM,\n            CANVAS,\n            SPANDEX_3_PERCENT\n        }\n\n        class Person {\n\n\n          Person(string name) {\n            this.name = name;\n\n            this.pants = new Pants<Pocket>();\n          }\n\n          String getName() {\n            a = this.name;\n            b = new one.two.Three();\n            c = Material.DENIM;\n          }\n        }\n        \";\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.definitions, vec![\"Material\", \"Person\", \"getName\"]);\n        assert!(result.references.is_empty());\n    }\n\n    #[test]\n    fn test_parsing_go() {\n        let parser = CodeParser::from_language(SupportedLanguages::Go);\n        // hello world go with struct\n        let code = r\"\n        package main\n\n        type Person struct {\n            name string\n            age int\n        }\n\n        func main() {\n            p := Person{name: 'John', age: 30}\n            fmt.Println(p)\n        }\n        \";\n\n        let tree = parser.parse(code).unwrap();\n        let result = tree.references_and_definitions().unwrap();\n        assert_eq!(result.references, vec![\"Println\", \"int\", \"string\"]);\n        assert_eq!(result.definitions, vec![\"Person\", \"main\"]);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/compress_code_outline.rs",
    "content": "//! `CompressCodeOutline` is a transformer that reduces the size of the outline of a the parent file\n//! of a chunk to make it more relevant to the chunk.\nuse std::sync::OnceLock;\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// `CompressCodeChunk` rewrites the \"Outline\" metadata field of a chunk to\n/// condense it and make it more relevant to the chunk in question. It is useful as a\n/// step after chunking a file that has had outline generated for it with `FileToOutlineTreeSitter`.\n#[swiftide_macros::indexing_transformer(\n    metadata_field_name = \"Outline\",\n    default_prompt_file = \"prompts/compress_code_outline.prompt.md\"\n)]\npub struct CompressCodeOutline {}\n\nfn extract_markdown_codeblock(text: String) -> String {\n    static REGEX: OnceLock<regex::Regex> = OnceLock::new();\n\n    let re = REGEX.get_or_init(|| regex::Regex::new(r\"(?sm)```\\w*\\n(.*?)```\").unwrap());\n    let captures = re.captures(text.as_str());\n    captures\n        .map(|c| c.get(1).unwrap().as_str().to_string())\n        .unwrap_or(text)\n}\n\n#[async_trait]\nimpl Transformer for CompressCodeOutline {\n    type Input = String;\n    type Output = String;\n    /// Asynchronously transforms an `TextNode` by reducing the size of the outline to make it more\n    /// relevant to the chunk.\n    ///\n    /// This method uses the `SimplePrompt` client to compress the outline of the `TextNode` and\n    /// updates the `TextNode` with the compressed outline.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The `TextNode` to be transformed.\n    ///\n    /// # Returns\n    ///\n    /// A result containing the transformed `TextNode` or an error if the transformation fails.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the `SimplePrompt` client fails to generate a\n    /// response.\n    #[tracing::instrument(skip_all, name = 
\"transformers.compress_code_outline\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        if node.metadata.get(NAME).is_none() {\n            return Ok(node);\n        }\n\n        let prompt = self.prompt_template.clone().with_node(&node);\n\n        let response = extract_markdown_codeblock(self.prompt(prompt).await?);\n\n        node.metadata.insert(NAME, response);\n\n        Ok(node)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::MockSimplePrompt;\n\n    use super::*;\n\n    #[test_log::test(tokio::test)]\n    async fn test_compress_code_template() {\n        let template = default_prompt();\n\n        let outline = \"Relevant Outline\";\n        let code = \"Code using outline\";\n        let mut node = TextNode::new(code);\n        node.metadata.insert(\"Outline\", outline);\n\n        let prompt = template.clone().with_node(&node);\n\n        insta::assert_snapshot!(prompt.render().unwrap());\n    }\n\n    #[tokio::test]\n    async fn test_compress_code_outline() {\n        let mut client = MockSimplePrompt::new();\n\n        client\n            .expect_prompt()\n            .returning(|_| Ok(\"RelevantOutline\".to_string()));\n\n        let transformer = CompressCodeOutline::builder()\n            .client(client)\n            .build()\n            .unwrap();\n        let mut node = TextNode::new(\"Some text\");\n        node.offset = 0;\n        node.original_size = 100;\n\n        node.metadata\n            .insert(\"Outline\".to_string(), \"Some outline\".to_string());\n\n        let result = transformer.transform_node(node).await.unwrap();\n\n        assert_eq!(result.chunk, \"Some text\");\n        assert_eq!(result.metadata.get(\"Outline\").unwrap(), \"RelevantOutline\");\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/metadata_qa_code.rs",
    "content": "//! Generate questions and answers based on code chunks and add them as metadata\n\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse swiftide_core::{Transformer, indexing::TextNode};\n\n/// `MetadataQACode` is responsible for generating questions and answers based on code chunks.\n/// This struct integrates with the indexing pipeline to enhance the metadata of each code chunk\n/// by adding relevant questions and answers.\n#[swiftide_macros::indexing_transformer(\n    metadata_field_name = \"Questions and Answers (code)\",\n    default_prompt_file = \"prompts/metadata_qa_code.prompt.md\"\n)]\npub struct MetadataQACode {\n    #[builder(default = \"5\")]\n    num_questions: usize,\n}\n\n#[async_trait]\nimpl Transformer for MetadataQACode {\n    type Input = String;\n    type Output = String;\n    /// Asynchronously transforms a `TextNode` by generating questions and answers for its code\n    /// chunk.\n    ///\n    /// This method uses the `SimplePrompt` client to generate questions and answers based on the\n    /// code chunk and adds this information to the node's metadata.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The `TextNode` to be transformed.\n    ///\n    /// # Returns\n    ///\n    /// A result containing the transformed `TextNode` or an error if the transformation fails.\n    ///\n    /// # Errors\n    ///\n    /// This function will return an error if the `SimplePrompt` client fails to generate a\n    /// response.\n    #[tracing::instrument(skip_all, name = \"transformers.metadata_qa_code\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        let mut prompt = self\n            .prompt_template\n            .clone()\n            .with_node(&node)\n            .with_context_value(\"questions\", self.num_questions);\n\n        if let Some(outline) = node.metadata.get(\"Outline\") {\n            prompt = prompt.with_context_value(\"outline\", outline.as_str());\n        }\n\n        
let response = self.prompt(prompt).await?;\n\n        node.metadata.insert(NAME, response);\n\n        Ok(node)\n    }\n\n    fn concurrency(&self) -> Option<usize> {\n        self.concurrency\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::{MockSimplePrompt, assert_default_prompt_snapshot};\n\n    use super::*;\n\n    assert_default_prompt_snapshot!(\"test\", \"questions\" => 5);\n\n    #[tokio::test]\n    async fn test_template_with_outline() {\n        let template = default_prompt();\n\n        let prompt = template\n            .clone()\n            .with_node(&TextNode::new(\"test\"))\n            .with_context_value(\"questions\", 5)\n            .with_context_value(\"outline\", \"Test outline\");\n        insta::assert_snapshot!(prompt.render().unwrap());\n    }\n\n    #[tokio::test]\n    async fn test_metadata_qacode() {\n        let mut client = MockSimplePrompt::new();\n\n        client\n            .expect_prompt()\n            .returning(|_| Ok(\"Q1: Hello\\nA1: World\".to_string()));\n\n        let transformer = MetadataQACode::builder().client(client).build().unwrap();\n        let node = TextNode::new(\"Some text\");\n\n        let result = transformer.transform_node(node).await.unwrap();\n\n        assert_eq!(\n            result.metadata.get(\"Questions and Answers (code)\").unwrap(),\n            \"Q1: Hello\\nA1: World\"\n        );\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/metadata_refs_defs_code.rs",
    "content": "//! Adds references and definitions found in code as metadata to chunks\n//!\n//! Uses tree-sitter to do the extractions. It tries to only get unique definitions and references,\n//! and only references that are not local.\n//!\n//! See the [`crate::treesitter::CodeParser`] tests for some examples.\n//!\n//! # Example\n//!\n//! ```no_run\n//! # use swiftide_core::indexing::TextNode;\n//! # use swiftide_integrations::treesitter::transformers::metadata_refs_defs_code::*;\n//! # use swiftide_core::Transformer;\n//! # #[tokio::main]\n//! # async fn main() -> Result<(), Box<dyn std::error::Error>> {\n//! let transformer = MetadataRefsDefsCode::try_from_language(\"rust\").unwrap();\n//! let code = r#\"\n//!   fn main() {\n//!     println!(\"Hello, World!\");\n//!   }\n//! \"#;\n//! let mut node = TextNode::new(code.to_string());\n//!\n//! node = transformer.transform_node(node).await.unwrap();\n//!\n//! assert_eq!(\n//!     node.metadata.get(NAME_REFERENCES).unwrap().as_str().unwrap(),\n//!     \"println\"\n//! );\n//! assert_eq!(\n//!     node.metadata.get(NAME_DEFINITIONS).unwrap().as_str().unwrap(),\n//!     \"main\"\n//! );\n//! # Ok(())\n//! # }\n//! 
```\nuse std::sync::Arc;\n\nuse swiftide_core::{Transformer, indexing::TextNode};\n\nuse crate::treesitter::{CodeParser, SupportedLanguages};\nuse anyhow::{Context as _, Result};\nuse async_trait::async_trait;\n\npub const NAME_REFERENCES: &str = \"References (code)\";\npub const NAME_DEFINITIONS: &str = \"Definitions (code)\";\n\n/// `MetadataRefsDefsCode` is responsible for extracting references and definitions.\n#[swiftide_macros::indexing_transformer(derive(skip_default))]\npub struct MetadataRefsDefsCode {\n    code_parser: Arc<CodeParser>,\n}\n\nimpl MetadataRefsDefsCode {\n    /// Tries to build a new `MetadataRefsDefsCode` transformer\n    ///\n    /// # Errors\n    ///\n    /// Language is not supported by tree-sitter\n    pub fn try_from_language(language: impl TryInto<SupportedLanguages>) -> Result<Self> {\n        let language: SupportedLanguages = language\n            .try_into()\n            .ok()\n            .context(\"Treesitter language not supported\")?;\n\n        MetadataRefsDefsCode::builder()\n            .code_parser(CodeParser::from_language(language))\n            .build()\n    }\n}\n\n#[async_trait]\nimpl Transformer for MetadataRefsDefsCode {\n    type Input = String;\n    type Output = String;\n    /// Extracts references and definitions from code and\n    /// adds them as metadata to the node if present\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        let refs_defs = self\n            .code_parser\n            .parse(&node.chunk)?\n            .references_and_definitions()?;\n\n        if !refs_defs.references.is_empty() {\n            node.metadata\n                .insert(NAME_REFERENCES.to_string(), refs_defs.references.join(\",\"));\n        }\n\n        if !refs_defs.definitions.is_empty() {\n            node.metadata.insert(\n                NAME_DEFINITIONS.to_string(),\n                refs_defs.definitions.join(\",\"),\n            );\n        }\n        Ok(node)\n    
}\n}\n\n#[cfg(test)]\nmod test {\n\n    use super::*;\n    use test_case::test_case;\n\n    #[test_case(\"rust\", \"fn main() { println!(\\\"Hello, World!\\\"); }\", \"println\", \"main\"; \"rust\")]\n    #[test_case(\"ruby\", \"def main; puts 'Hello, World!'; end\", \"puts\", \"main\"; \"ruby\")]\n    #[test_case(\"python\", \"def main(): print('Hello, World!')\", \"print\", \"main\"; \"python\")]\n    #[test_case(\"javascript\", \"function main() { console.log('Hello, World!'); }\", \"log\", \"main\"; \"javascript\")]\n    #[test_case(\"typescript\", \"function main() { console.log('Hello, World!'); }\", \"log\", \"main\"; \"typescript\")]\n    #[test_case(\"java\", \"public class Main { public static void main(String[] args) { System.out.println(\\\"Hello, World!\\\"); } }\", \"println\", \"Main,main\"; \"java\")]\n    #[test_case(\"c-sharp\", \"public class Program { public static void Main(string[] args) { System.Console.WriteLine(\\\"Hello, World!\\\"); } }\", \"WriteLine\", \"Main,Program\"; \"c-sharp\")]\n    #[tokio::test]\n    async fn assert_refs_defs_from_code(\n        lang: &str,\n        code: &str,\n        expected_references: &str,\n        expected_definitions: &str,\n    ) {\n        let transformer = MetadataRefsDefsCode::try_from_language(lang).unwrap();\n        let node = TextNode::new(code);\n\n        let node = transformer.transform_node(node).await.unwrap();\n\n        let references = node\n            .metadata\n            .get(NAME_REFERENCES)\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .to_string();\n        let definitions = node\n            .metadata\n            .get(NAME_DEFINITIONS)\n            .unwrap()\n            .as_str()\n            .unwrap()\n            .to_string();\n\n        assert_eq!(references, expected_references);\n        assert_eq!(definitions, expected_definitions);\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/mod.rs",
    "content": "//! Chunking code with tree-sitter and various tools\nmod code_tree;\nmod outliner;\nmod queries;\nmod splitter;\nmod supported_languages;\n\npub use code_tree::{CodeParser, CodeTree, ReferencesAndDefinitions};\npub use outliner::{CodeOutliner, CodeOutlinerBuilder};\npub use splitter::{ChunkSize, CodeSplitter, CodeSplitterBuilder};\npub use supported_languages::SupportedLanguages;\n\npub mod chunk_code;\npub mod compress_code_outline;\npub mod metadata_qa_code;\npub mod metadata_refs_defs_code;\npub mod outline_code_tree_sitter;\n\npub mod transformers {\n    pub use super::chunk_code::{self, ChunkCode};\n    pub use super::compress_code_outline::{self, CompressCodeOutline};\n    pub use super::metadata_qa_code::{self, MetadataQACode};\n    pub use super::metadata_refs_defs_code::{self, MetadataRefsDefsCode};\n    pub use super::outline_code_tree_sitter::{self, OutlineCodeTreeSitter};\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/outline_code_tree_sitter.rs",
    "content": "//! Add the outline of the code in the given file to the metadata of a node, using tree-sitter.\nuse anyhow::Result;\nuse async_trait::async_trait;\n\nuse swiftide_core::Transformer;\nuse swiftide_core::indexing::TextNode;\n\nuse crate::treesitter::{CodeOutliner, SupportedLanguages};\n\n/// `OutlineCodeTreeSitter` adds a \"Outline\" field to the metadata of a node that contains\n/// a summary of the code in the node. It uses the tree-sitter parser to parse the code and\n/// remove any information that is less relevant for tasks that consider the file as a whole.\n#[swiftide_macros::indexing_transformer(metadata_field_name = \"Outline\", derive(skip_default))]\npub struct OutlineCodeTreeSitter {\n    outliner: CodeOutliner,\n    minimum_file_size: Option<usize>,\n}\n\nimpl OutlineCodeTreeSitter {\n    /// Tries to create a `OutlineCodeTreeSitter` instance for a given programming language.\n    ///\n    /// # Parameters\n    /// - `lang`: The programming language to be used to parse the code. 
It should implement\n    ///   `TryInto<SupportedLanguages>`.\n    ///\n    /// # Returns\n    /// - `Result<Self>`: Returns an instance of `OutlineCodeTreeSitter` if successful, otherwise\n    ///   returns an error.\n    ///\n    /// # Errors\n    /// - Returns an error if the language is not supported or if the `CodeOutliner` fails to build.\n    pub fn try_for_language(\n        lang: impl TryInto<SupportedLanguages>,\n        minimum_file_size: Option<usize>,\n    ) -> Result<Self> {\n        Ok(Self {\n            outliner: CodeOutliner::builder().try_language(lang)?.build()?,\n            minimum_file_size,\n            client: None,\n            concurrency: None,\n            indexing_defaults: None,\n        })\n    }\n}\n\n#[async_trait]\nimpl Transformer for OutlineCodeTreeSitter {\n    type Input = String;\n    type Output = String;\n    /// Adds context to the metadata of a `TextNode` containing code in the \"Outline\" field.\n    ///\n    /// It uses the `CodeOutliner` to generate the context.\n    ///\n    /// # Parameters\n    /// - `node`: The `TextNode` containing the code of which the context is to be generated.\n    ///\n    /// # Returns\n    /// - `TextNode`: The same `TextNode` instances, with the metadata updated to include the\n    ///   generated context.\n    ///\n    /// # Errors\n    /// - If the code outlining fails, an error is sent downstream.\n    #[tracing::instrument(skip_all, name = \"transformers.outline_code_tree_sitter\")]\n    async fn transform_node(&self, mut node: TextNode) -> Result<TextNode> {\n        if let Some(minimum_file_size) = self.minimum_file_size\n            && node.chunk.len() < minimum_file_size\n        {\n            return Ok(node);\n        }\n\n        let outline_result = self.outliner.outline(&node.chunk)?;\n        node.metadata.insert(NAME, outline_result);\n        Ok(node)\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/outliner.rs",
    "content": "use anyhow::{Context as _, Result};\nuse tree_sitter::{Node, Parser, TreeCursor};\n\nuse derive_builder::Builder;\n\nuse super::supported_languages::SupportedLanguages;\n\n#[derive(Debug, Builder, Clone)]\n/// Generates a summary of a code file.\n///\n/// It does so by parsing the code file and removing function bodies, leaving only the function\n/// signatures and other top-level declarations along with any comments.\n///\n/// The resulting summary can be used as a context when considering subsets of the code file, or for\n/// determining relevance of the code file to a given task.\n#[builder(setter(into), build_fn(error = \"anyhow::Error\"))]\npub struct CodeOutliner {\n    #[builder(setter(custom))]\n    language: SupportedLanguages,\n}\n\nimpl CodeOutlinerBuilder {\n    /// Attempts to set the language for the `CodeOutliner`.\n    ///\n    /// # Arguments\n    ///\n    /// * `language` - A value that can be converted into `SupportedLanguages`.\n    ///\n    /// # Returns\n    ///\n    /// * `Result<Self>` - The builder instance with the language set, or an error if the language\n    ///   is not supported.\n    ///\n    /// # Errors\n    /// * If the language is not supported, an error is returned.\n    pub fn try_language(mut self, language: impl TryInto<SupportedLanguages>) -> Result<Self> {\n        self.language = Some(\n            language\n                .try_into()\n                .ok()\n                .context(\"Treesitter language not supported\")?,\n        );\n        Ok(self)\n    }\n}\n\nimpl CodeOutliner {\n    /// Creates a new `CodeOutliner` with the specified language\n    ///\n    /// # Arguments\n    ///\n    /// * `language` - The programming language for which the code will be outlined.\n    ///\n    /// # Returns\n    ///\n    /// * `Self` - A new instance of `CodeOutliner`.\n    pub fn new(language: SupportedLanguages) -> Self {\n        Self { language }\n    }\n\n    /// Creates a new builder for `CodeOutliner`.\n    
///\n    /// # Returns\n    ///\n    /// * `CodeOutlinerBuilder` - A new builder instance for `CodeOutliner`.\n    pub fn builder() -> CodeOutlinerBuilder {\n        CodeOutlinerBuilder::default()\n    }\n\n    /// outlines a code file.\n    ///\n    /// # Arguments\n    ///\n    /// * `code` - The source code to be split.\n    ///\n    /// # Returns\n    ///\n    /// * `Result<String>` - A result containing a string, or an error if the code could not be\n    ///   parsed.\n    ///\n    /// # Errors\n    /// * If the code could not be parsed, an error is returned.\n    pub fn outline(&self, code: &str) -> Result<String> {\n        let mut parser = Parser::new();\n        parser.set_language(&self.language.into())?;\n        let tree = parser.parse(code, None).context(\"No nodes found\")?;\n        let root_node = tree.root_node();\n\n        if root_node.has_error() {\n            anyhow::bail!(\"Root node has invalid syntax\");\n        }\n\n        let mut cursor = root_node.walk();\n        let mut summary = String::with_capacity(code.len());\n        let mut last_end = 0;\n        self.outline_node(&mut cursor, code, &mut summary, &mut last_end);\n        Ok(summary)\n    }\n\n    fn is_unneeded_node(&self, node: Node) -> bool {\n        match self.language {\n            SupportedLanguages::Rust | SupportedLanguages::Java | SupportedLanguages::CSharp => {\n                matches!(node.kind(), \"block\")\n            }\n            SupportedLanguages::Typescript | SupportedLanguages::Javascript => {\n                matches!(node.kind(), \"statement_block\")\n            }\n            SupportedLanguages::Python => match node.kind() {\n                \"block\" => {\n                    let parent = node.parent().expect(\"Python block node has no parent\");\n                    parent.kind() == \"function_definition\"\n                }\n                _ => false,\n            },\n            SupportedLanguages::Ruby => match node.kind() {\n                
\"body_statement\" => {\n                    let parent = node\n                        .parent()\n                        .expect(\"Ruby body_statement node has no parent\");\n                    parent.kind() == \"method\"\n                }\n                _ => false,\n            },\n            SupportedLanguages::Go => unimplemented!(),\n            SupportedLanguages::Solidity => unimplemented!(),\n            SupportedLanguages::C => unimplemented!(),\n            SupportedLanguages::Cpp => unimplemented!(),\n            SupportedLanguages::Elixir => unimplemented!(),\n            SupportedLanguages::HTML => unimplemented!(),\n            SupportedLanguages::PHP => unimplemented!(),\n        }\n    }\n\n    /// outlines a syntax node\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The syntax node to be chunked.\n    /// * `source` - The source code as a string.\n    /// * `last_end` - The end byte of the last chunk.\n    ///\n    /// # Returns\n    ///\n    /// * `String` - A summary of the syntax node.\n    fn outline_node(\n        &self,\n        cursor: &mut TreeCursor,\n        source: &str,\n        summary: &mut String,\n        last_end: &mut usize,\n    ) {\n        let node = cursor.node();\n        // If the node is not needed in the summary, skip it and go to the next sibling\n        if self.is_unneeded_node(node) {\n            summary.push_str(&source[*last_end..node.start_byte()]);\n            *last_end = node.end_byte();\n            if cursor.goto_next_sibling() {\n                self.outline_node(cursor, source, summary, last_end);\n            }\n            return;\n        }\n\n        let mut next_cursor = cursor.clone();\n\n        // If the node is a non-leaf, recursively outline its children\n        if next_cursor.goto_first_child() {\n            self.outline_node(&mut next_cursor, source, summary, last_end);\n        // If the node is a leaf, add the text to the summary\n        } else {\n            
summary.push_str(&source[*last_end..node.end_byte()]);\n            *last_end = node.end_byte();\n        }\n\n        if cursor.goto_next_sibling() {\n            self.outline_node(cursor, source, summary, last_end);\n        } else {\n            // Done with this node\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    // Test every supported language.\n    // We should strip away all code blocks and leave only imports, comments, function signatures,\n    // class, interface and structure definitions and definitions of constants, variables and other\n    // members.\n    #[test]\n    fn test_outline_rust() {\n        let code = r#\"\nuse anyhow::{Context as _, Result};\n// This is a comment\nfn main(a: usize, b: usize) -> usize {\n    println!(\"Hello, world!\");\n}\n\npub struct Bla {\n    a: usize\n}\n\nimpl Bla {\n    fn ok(&mut self) {\n        self.a = 1;\n    }\n}\"#;\n        let outliner = CodeOutliner::new(SupportedLanguages::Rust);\n        let summary = outliner.outline(code).unwrap();\n        assert_eq!(\n            summary,\n            \"\\nuse anyhow::{Context as _, Result};\\n// This is a comment\\nfn main(a: usize, b: usize) -> usize \\n\\npub struct Bla {\\n    a: usize\\n}\\n\\nimpl Bla {\\n    fn ok(&mut self) \\n}\"\n        );\n    }\n\n    #[test]\n    fn test_outline_typescript() {\n        let code = r#\"\nimport { Context as _, Result } from 'anyhow';\n// This is a comment\nfunction main(a: number, b: number): number {\n    console.log(\"Hello, world!\");\n}\n\nexport class Bla {\n    a: number;\n}\n\nexport interface Bla {\n    ok(): void;\n}\"#;\n        let outliner = CodeOutliner::new(SupportedLanguages::Typescript);\n        let summary = outliner.outline(code).unwrap();\n        assert_eq!(\n            summary,\n            \"\\nimport { Context as _, Result } from 'anyhow';\\n// This is a comment\\nfunction main(a: number, b: number): number \\n\\nexport class Bla {\\n    a: number;\\n}\\n\\nexport 
interface Bla {\\n    ok(): void;\\n}\"\n        );\n    }\n\n    #[test]\n    fn test_outline_python() {\n        let code = r#\"\nimport sys\n# This is a comment\ndef main(a: int, b: int) -> int:\n    print(\"Hello, world!\")\n\nclass Bla:\n    def __init__(self):\n        self.a = 1\n\n    def ok(self):\n        self.a = 1\n\"#;\n        let outliner = CodeOutliner::new(SupportedLanguages::Python);\n        let summary = outliner.outline(code).unwrap();\n        assert_eq!(\n            summary,\n            \"\\nimport sys\\n# This is a comment\\ndef main(a: int, b: int) -> int:\\n    \\n\\nclass Bla:\\n    def __init__(self):\\n        \\n\\n    def ok(self):\\n        \"\n        );\n    }\n\n    #[test]\n    fn test_outline_ruby() {\n        let code = r#\"\nrequire 'anyhow'\n# This is a comment\ndef main(a, b)\n    puts \"Hello, world!\"\nend\n\nclass Bla\n    def ok\n        @a = 1\n    end\nend\n\"#;\n        let outliner = CodeOutliner::new(SupportedLanguages::Ruby);\n        let summary = outliner.outline(code).unwrap();\n        assert_eq!(\n            summary,\n            \"\\nrequire 'anyhow'\\n# This is a comment\\ndef main(a, b)\\n    \\nend\\n\\nclass Bla\\n    def ok\\n        \\n    end\\nend\"\n        );\n    }\n\n    #[test]\n    fn test_outline_javascript() {\n        let code = r#\"\nimport { Context as _, Result } from 'anyhow';\n// This is a comment\nfunction main(a, b) {\n    console.log(\"Hello, world!\");\n}\n\nclass Bla {\n    constructor() {\n        this.a = 1;\n    }\n\n    ok() {\n        this.a = 1;\n    }\n}\n\"#;\n        let outliner = CodeOutliner::new(SupportedLanguages::Javascript);\n        let summary = outliner.outline(code).unwrap();\n        assert_eq!(\n            summary,\n            \"\\nimport { Context as _, Result } from 'anyhow';\\n// This is a comment\\nfunction main(a, b) \\n\\nclass Bla {\\n    constructor() \\n\\n    ok() \\n}\"\n        );\n    }\n\n    #[test]\n    fn test_outline_java() {\n        let 
code = r#\"\nimport java.io.PrintStream;\nimport java.util.Scanner;\n\npublic class HelloWorld {\n    // This is a comment\n    public static void main(String[] args) {\n        PrintStream out = System.out;\n\n        out.println(\"Hello, World!\");\n    }\n}\n\"#;\n        let outliner = CodeOutliner::new(SupportedLanguages::Java);\n        let summary = outliner.outline(code).unwrap();\n        println!(\"{summary}\");\n        assert_eq!(\n            summary,\n            \"\\nimport java.io.PrintStream;\\nimport java.util.Scanner;\\n\\npublic class HelloWorld {\\n    // This is a comment\\n    public static void main(String[] args) \\n}\"\n        );\n    }\n}\n"
  },
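`CodeOutliner::outline_node` above elides unneeded nodes by copying source text up to the node's start byte and then advancing `last_end` past its end byte, so the node's body never reaches the summary. A minimal, stdlib-only sketch of that skip-and-splice technique over a hypothetical toy node type (`ToyNode` and its `kind` strings are illustrative, not part of swiftide or tree-sitter):

```rust
/// A hypothetical, simplified stand-in for a tree-sitter node.
struct ToyNode {
    kind: &'static str,
    start: usize,
    end: usize,
    children: Vec<ToyNode>,
}

/// Copy everything except "block" nodes, tracking the last emitted byte,
/// mirroring the skip logic in `CodeOutliner::outline_node`.
fn outline(node: &ToyNode, source: &str, summary: &mut String, last_end: &mut usize) {
    if node.kind == "block" {
        // Keep the text before the block, then skip the block's byte range.
        summary.push_str(&source[*last_end..node.start]);
        *last_end = node.end;
        return;
    }
    if node.children.is_empty() {
        // Leaf: emit its text up to its end byte.
        summary.push_str(&source[*last_end..node.end]);
        *last_end = node.end;
    } else {
        for child in &node.children {
            outline(child, source, summary, last_end);
        }
    }
}

fn demo() -> String {
    let source = "fn main() { body(); }";
    let root = ToyNode {
        kind: "function_item",
        start: 0,
        end: source.len(),
        children: vec![
            ToyNode { kind: "signature", start: 0, end: 10, children: vec![] },
            ToyNode { kind: "block", start: 10, end: source.len(), children: vec![] },
        ],
    };
    let mut summary = String::new();
    let mut last_end = 0;
    outline(&root, source, &mut summary, &mut last_end);
    summary
}

fn main() {
    // The function body is dropped, leaving only the signature,
    // matching the behavior asserted in test_outline_rust.
    assert_eq!(demo(), "fn main() ");
}
```

The real implementation drives a `TreeCursor` instead of owning the tree, but the `last_end` bookkeeping is the same.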
  {
    "path": "swiftide-integrations/src/treesitter/prompts/compress_code_outline.prompt.md",
    "content": "# Filtering Code Outline\n\nYour task is to filter the given file outline to the code chunk provided. The goal is to provide a context that still contains the lines needed for understanding the code in the chunk whilst leaving out any irrelevant information.\n\n## Constraints\n\n- Only use lines from the provided context, do not add any additional information\n- Ensure that the selection you make is the most appropriate for the code chunk\n- Make sure you include any definitions or imports that are used in the code chunk\n- You do not need to repeat the code chunk in your response, it will be appended directly after your response.\n- Do not use lines that are present in the code chunk\n\n## Code\n\n```\n{{ node.chunk }}\n```\n\n## Outline\n\n```\n{{ node.metadata[\"Outline\"] }}\n```\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/prompts/metadata_qa_code.prompt.md",
    "content": "# Task\n\nYour task is to generate questions and answers for the given code.\n\nGiven that somebody else might ask questions about the code, consider things like:\n\n- What does this code do?\n- What other internal parts does the code use?\n- Does this code have any dependencies?\n- What are some potential use cases for this code?\n- ... and so on\n\n# Constraints\n\n- Generate only {{questions}} questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the code.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What does this code do?\nA1: It transforms strings into integers.\nQ2: What other internal parts does the code use?\nA2: A hasher to hash the strings.\n```\n\n{% if outline %}\n\n## Outline of the parent file\n\n```\n{{ outline }}\n```\n\n{% endif %}\n\n# Code\n\n```\n{{ node.chunk }}\n```\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/queries.rs",
    "content": "// https://github.com/tree-sitter/tree-sitter-ruby/blob/master/queries/tags.scm\npub mod ruby {\n    pub const DEFS: &str = r\"\n(\n  [\n    (method\n      name: (_) @name)\n    (singleton_method\n      name: (_) @name)\n  ]\n)\n\n(alias\n  name: (_) @name)\n\n(setter\n  (identifier) @ignore)\n\n(\n  [\n    (class\n      name: [\n        (constant) @name\n        (scope_resolution\n          name: (_) @name)\n      ]) \n    (singleton_class\n      value: [\n        (constant) @name\n        (scope_resolution\n          name: (_) @name)\n      ])\n  ]\n)\n\n(\n  (module\n    name: [\n      (constant) @name\n      (scope_resolution\n        name: (_) @name)\n    ])\n)\n\";\n\n    pub const REFS: &str = r#\"\n(call method: (identifier) @name)\n\n(\n  [(identifier) (constant)] @name\n  (#is-not? local)\n  (#not-match? @name \"^(lambda|load|require|require_relative|__FILE__|__LINE__)$\")\n)\n\"#;\n}\n\n// https://github.com/tree-sitter/tree-sitter-python/blob/master/queries/tags.scm\npub mod python {\n    pub const DEFS: &str = r#\"\n            (class_definition\n                name: (identifier) @name)\n\n            (\n            (function_definition\n                name: (identifier) @name)\n            (#not-eq? 
@name \"__init__\")\n            )\n\n        \"#;\n\n    pub const REFS: &str = \"\n\n            (call\n            function: [\n                (identifier) @name\n                (attribute\n                    attribute: (identifier))\n            ])\n        \";\n}\n\n// https://github.com/tree-sitter/tree-sitter-typescript/blob/master/queries/tags.scm\npub mod typescript {\n    pub const DEFS: &str = r#\"\n            (function_signature\n                name: (identifier) @name)\n\n            (method_signature\n                name: (property_identifier) @name)\n\n            (abstract_method_signature\n                name: (property_identifier) @name)\n\n            (abstract_class_declaration\n                name: (type_identifier) @name)\n\n            (module\n                name: (identifier) @name)\n\n            (interface_declaration\n                name: (type_identifier) @name)\n\n            (\n            (method_definition\n                name: (property_identifier) @name)\n            (#not-eq? 
@name \"constructor\")\n            )\n\n            (\n            [\n                (class\n                name: (_) @name)\n                (class_declaration\n                name: (_) @name)\n            ] \n            )\n\n            (\n            [\n                (function_expression\n                name: (identifier) @name)\n                (function_declaration\n                name: (identifier) @name)\n                (generator_function\n                name: (identifier) @name)\n                (generator_function_declaration\n                name: (identifier) @name)\n            ] \n            )\n\n            (\n            (lexical_declaration\n                (variable_declarator\n                name: (identifier) @name\n                value: [(arrow_function) (function_expression)]))\n            )\n\n            (\n            (variable_declaration\n                (variable_declarator\n                name: (identifier) @name\n                value: [(arrow_function) (function_expression)]))\n            )\n        \"#;\n\n    pub const REFS: &str = r#\"\n            (type_annotation\n                (type_identifier) @name)\n\n            (new_expression\n                constructor: (identifier) @name)\n            (\n            (call_expression\n                function: (identifier) @name) \n            (#not-match? @name \"^(require)$\")\n            )\n\n            (call_expression\n            function: (member_expression\n                property: (property_identifier) @name)\n            arguments: (_))\n        \"#;\n}\n\n// https://github.com/tree-sitter/tree-sitter-javascript/blob/master/queries/tags.scm\npub mod javascript {\n    pub const DEFS: &str = r#\"\n        (\n        (method_definition\n            name: (property_identifier) @name)\n        (#not-eq? 
@name \"constructor\")\n        )\n\n        (\n        [\n            (class\n            name: (_) @name)\n            (class_declaration\n            name: (_) @name)\n        ] \n        )\n\n        (\n        [\n            (function_expression\n            name: (identifier) @name)\n            (function_declaration\n            name: (identifier) @name)\n            (generator_function\n            name: (identifier) @name)\n            (generator_function_declaration\n            name: (identifier) @name)\n        ] \n        )\n\n        (\n        (lexical_declaration\n            (variable_declarator\n            name: (identifier) @name\n            value: [(arrow_function) (function_expression)]) @definition.function)\n        )\n\n        (\n        (variable_declaration\n            (variable_declarator\n            name: (identifier) @name\n            value: [(arrow_function) (function_expression)]) @definition.function)\n        )\n\n        (assignment_expression\n        left: [\n            (identifier) @name\n            (member_expression\n            property: (property_identifier) @name)\n        ]\n        right: [(arrow_function) (function_expression)]\n        ) \n\n        (pair\n        key: (property_identifier) @name\n        value: [(arrow_function) (function_expression)])\n\n        \"#;\n\n    pub const REFS: &str = r#\"\n        (\n        (call_expression\n            function: (identifier) @name) \n        (#not-match? 
@name \"^(require)$\")\n        )\n\n        (call_expression\n        function: (member_expression\n            property: (property_identifier) @name)\n        arguments: (_))\n\n        (new_expression\n        constructor: (_) @name)\n\n        (export_statement value: (assignment_expression left: (identifier) @name right: ([\n        (number)\n        (string)\n        (identifier)\n        (undefined)\n        (null)\n        (new_expression)\n        (binary_expression)\n        (call_expression)\n        ]))) \n    \"#;\n}\n\n// https://github.com/tree-sitter/tree-sitter-rust/blob/master/queries/tags.scm\npub mod rust {\n    pub const DEFS: &str = \"\n            (struct_item\n                name: (type_identifier) @name)\n\n            (enum_item\n                name: (type_identifier) @name)\n\n            (union_item\n                name: (type_identifier) @name)\n\n            (type_item\n                name: (type_identifier) @name)\n\n            (declaration_list\n                (function_item\n                    name: (identifier) @name))\n\n            (function_item\n                name: (identifier) @name)\n\n            (trait_item\n                name: (type_identifier) @name)\n\n            (mod_item\n                name: (identifier) @name)\n\n            (macro_definition\n                name: (identifier) @name)\n        \";\n\n    pub const REFS: &str = \"\n            (call_expression\n                function: (identifier) @name)\n\n            (call_expression\n                function: (field_expression\n                    field: (field_identifier) @name))\n\n            (macro_invocation\n                macro: (identifier) @name)\n        \";\n}\n\n// https://github.com/tree-sitter/tree-sitter-java/blob/master/queries/tags.scm\npub mod java {\n    pub const DEFS: &str = \"\n           (class_declaration\n                name: (identifier) @name)\n\n           (enum_declaration\n                name: (identifier) @name)\n\n  
          (method_declaration\n                name: (identifier) @name)\n\n            (interface_declaration\n                name: (identifier) @name)\n\n            (type_list\n                (type_identifier) @name)\n\n            (superclass (type_identifier) @name)\";\n    pub const REFS: &str = \"\n            (method_invocation\n                name: (identifier) @name\n                arguments: (argument_list))\n\n            (object_creation_expression\n                type: (type_identifier) @name)\";\n}\n\npub mod go {\n    pub const DEFS: &str = r\"\n    (function_declaration\n    name: (identifier) @name)\n\n    (method_declaration\n    name: (field_identifier) @name)\n\n    (type_declaration (type_spec name: (type_identifier) @name type: (interface_type)))\n\n    (type_declaration (type_spec name: (type_identifier) @name type: (struct_type)))\n\n    (import_declaration (import_spec) @name)\n\n    (var_declaration (var_spec name: (identifier) @name))\n\n    (const_declaration (const_spec name: (identifier) @name))\n\n            \";\n\n    pub const REFS: &str = r#\"\n    (call_expression\n    function: [\n        (identifier) @name\n        (parenthesized_expression (identifier) @name)\n        (selector_expression field: (field_identifier) @name)\n        (parenthesized_expression (selector_expression field: (field_identifier) @name))\n    ])\n\n    (type_spec\n    name: (type_identifier) @name) \n\n    (package_clause \"package\" (package_identifier) @name)\n    (type_identifier) @name \n            \"#;\n}\n\npub mod solidity {\n    pub const DEFS: &str = r\"\n    (function_definition\n    name: (identifier) @name)\n\n    (source_file\n        (function_definition\n            name: (identifier) @name))\n\n    (contract_declaration\n    name: (identifier) @name) \n\n    (interface_declaration\n    name: (identifier) @name)\n\n    (library_declaration\n    name: (identifier) @name)\n\n    (struct_declaration name: (identifier) @name)\n    
(enum_declaration name: (identifier) @name)\n    (event_definition name: (identifier) @name)\n    \";\n\n    pub const REFS: &str = r\"\n    (call_expression (expression (identifier)) @name )\n\n    (call_expression\n        (expression (member_expression\n            property: (_) @name )))\n\n    (emit_statement name: (_) @name)\n\n\n    (inheritance_specifier\n        ancestor: (user_defined_type (_) @name . ))\n\n\n    (import_directive\n    import_name: (_) @name )\n    \";\n}\n// https://github.com/tree-sitter/tree-sitter-c-sharp/blob/master/queries/tags.scm\npub mod csharp {\n    pub const DEFS: &str = r\"\n        (class_declaration\n            name: (identifier) @name)\n\n        (interface_declaration\n            name: (identifier) @name)\n\n        (method_declaration\n            name: (identifier) @name)\n\n        (namespace_declaration\n            name: (identifier) @name)\n    \";\n\n    pub const REFS: &str = r\"\n        (class_declaration\n            (base_list (_) @name))\n\n        (interface_declaration\n            (base_list (_) @name))\n\n        (object_creation_expression\n            type: (identifier) @name)\n\n        (type_parameter_constraints_clause\n            (identifier) @name)\n\n        (type_parameter_constraint\n            (type type: (identifier) @name))\n\n        (variable_declaration\n            type: (identifier) @name)\n\n        (invocation_expression\n            function: (member_access_expression\n                name: (identifier) @name))\n    \";\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/snapshots/swiftide_integrations__treesitter__compress_code_outline__test__compress_code_template.snap",
    "content": "---\nsource: swiftide-integrations/src/treesitter/compress_code_outline.rs\nexpression: prompt.render().await.unwrap()\n---\n# Filtering Code Outline\n\nYour task is to filter the given file outline to the code chunk provided. The goal is to provide a context that still contains the lines needed for understanding the code in the chunk whilst leaving out any irrelevant information.\n\n## Constraints\n\n- Only use lines from the provided context, do not add any additional information\n- Ensure that the selection you make is the most appropriate for the code chunk\n- Make sure you include any definitions or imports that are used in the code chunk\n- You do not need to repeat the code chunk in your response, it will be appended directly after your response.\n- Do not use lines that are present in the code chunk\n\n## Code\n\n```\nCode using outline\n```\n\n## Outline\n\n```\nRelevant Outline\n```\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/snapshots/swiftide_integrations__treesitter__metadata_qa_code__test__default_prompt.snap",
    "content": "---\nsource: swiftide-integrations/src/treesitter/metadata_qa_code.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate questions and answers for the given code.\n\nGiven that somebody else might ask questions about the code, consider things like:\n\n- What does this code do?\n- What other internal parts does the code use?\n- Does this code have any dependencies?\n- What are some potential use cases for this code?\n- ... and so on\n\n# Constraints\n\n- Generate only 5 questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the code.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What does this code do?\nA1: It transforms strings into integers.\nQ2: What other internal parts does the code use?\nA2: A hasher to hash the strings.\n```\n\n\n\n# Code\n\n```\ntest\n```\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/snapshots/swiftide_integrations__treesitter__metadata_qa_code__test__template_with_outline.snap",
    "content": "---\nsource: swiftide-integrations/src/treesitter/metadata_qa_code.rs\nexpression: prompt.render().await.unwrap()\n---\n# Task\n\nYour task is to generate questions and answers for the given code.\n\nGiven that somebody else might ask questions about the code, consider things like:\n\n- What does this code do?\n- What other internal parts does the code use?\n- Does this code have any dependencies?\n- What are some potential use cases for this code?\n- ... and so on\n\n# Constraints\n\n- Generate only 5 questions and answers.\n- Only respond in the example format\n- Only respond with questions and answers that can be derived from the code.\n\n# Example\n\nRespond in the following example format and do not include anything else:\n\n```\nQ1: What does this code do?\nA1: It transforms strings into integers.\nQ2: What other internal parts does the code use?\nA2: A hasher to hash the strings.\n```\n\n\n\n## Outline of the parent file\n\n```\nTest outline\n```\n\n\n\n# Code\n\n```\ntest\n```\n"
  },
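The greedy policy in `CodeSplitter::chunk_node` below (accumulate children into the current chunk, flush when the next piece would reach the maximum, and discard flushed chunks at or below the minimum) can be sketched on pre-split string pieces. `pack` here is a hypothetical, stdlib-only helper that assumes each piece individually fits within `max`; the real implementation recurses into oversized children instead:

```rust
/// Greedily pack pieces into chunks of fewer than `max` bytes, discarding
/// chunks of `min` bytes or fewer, mirroring the ChunkSize::Range semantics
/// of CodeSplitter::chunk_node.
fn pack(pieces: &[&str], min: usize, max: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for piece in pieces {
        if current.len() + piece.len() >= max {
            if current.len() > min {
                // Flush the current chunk and start a new one.
                chunks.push(std::mem::take(&mut current));
            } else {
                // Too small to keep: the undersized chunk is dropped.
                current.clear();
            }
        }
        current.push_str(piece);
    }
    if current.len() > min {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // "aaaa" + "bbbb" is 8 bytes; adding "cccc" would reach 12 >= 10,
    // so the first chunk is flushed before "cccc" starts a new one.
    let chunks = pack(&["aaaa", "bbbb", "cccc"], 2, 10);
    assert_eq!(chunks, vec!["aaaabbbb".to_string(), "cccc".to_string()]);
}
```

Keeping a minimum bound avoids emitting tiny fragments (a stray `}` line, for example) as standalone chunks, at the cost of occasionally dropping them, as the NOTE in `chunk_node` points out.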
  {
    "path": "swiftide-integrations/src/treesitter/splitter.rs",
    "content": "use anyhow::{Context as _, Result};\nuse std::ops::Range;\nuse tree_sitter::{Node, Parser};\n\nuse derive_builder::Builder;\n\nuse super::supported_languages::SupportedLanguages;\n\n// TODO: Instead of counting bytes, count tokens with tiktoken\nconst DEFAULT_MAX_BYTES: usize = 1500;\n\n#[derive(Debug, Builder, Clone)]\n/// Splits code files into meaningful chunks\n///\n/// Supports splitting code files into chunks based on a maximum size or a range of bytes.\n#[builder(setter(into), build_fn(error = \"anyhow::Error\"))]\npub struct CodeSplitter {\n    /// Maximum size of a chunk in bytes or a range of bytes\n    #[builder(default, setter(into))]\n    chunk_size: ChunkSize,\n    #[builder(setter(custom))]\n    language: SupportedLanguages,\n}\n\nimpl CodeSplitterBuilder {\n    /// Attempts to set the language for the `CodeSplitter`.\n    ///\n    /// # Arguments\n    ///\n    /// * `language` - A value that can be converted into `SupportedLanguages`.\n    ///\n    /// # Returns\n    ///\n    /// * `Result<Self>` - The builder instance with the language set, or an error if the language\n    ///   is not supported.\n    ///\n    /// # Errors\n    ///\n    /// Errors if language is not supported\n    pub fn try_language(mut self, language: impl TryInto<SupportedLanguages>) -> Result<Self> {\n        self.language = Some(\n            language\n                .try_into()\n                .ok()\n                .context(\"Treesitter language not supported\")?,\n        );\n        Ok(self)\n    }\n}\n\n#[derive(Debug, Clone)]\n/// Represents the size of a chunk, either as a fixed number of bytes or a range of bytes.\npub enum ChunkSize {\n    Bytes(usize),\n    Range(Range<usize>),\n}\n\nimpl From<usize> for ChunkSize {\n    /// Converts a `usize` into a `ChunkSize::Bytes` variant.\n    fn from(size: usize) -> Self {\n        ChunkSize::Bytes(size)\n    }\n}\n\nimpl From<Range<usize>> for ChunkSize {\n    /// Converts a `Range<usize>` into a 
`ChunkSize::Range` variant.\n    fn from(range: Range<usize>) -> Self {\n        ChunkSize::Range(range)\n    }\n}\n\nimpl Default for ChunkSize {\n    /// Provides a default value for `ChunkSize`, which is `ChunkSize::Bytes(DEFAULT_MAX_BYTES)`.\n    fn default() -> Self {\n        ChunkSize::Bytes(DEFAULT_MAX_BYTES)\n    }\n}\n\nimpl CodeSplitter {\n    /// Creates a new `CodeSplitter` with the specified language and default chunk size.\n    ///\n    /// # Arguments\n    ///\n    /// * `language` - The programming language for which the code will be split.\n    ///\n    /// # Returns\n    ///\n    /// * `Self` - A new instance of `CodeSplitter`.\n    pub fn new(language: SupportedLanguages) -> Self {\n        Self {\n            chunk_size: ChunkSize::default(),\n            language,\n        }\n    }\n\n    /// Creates a new builder for `CodeSplitter`.\n    ///\n    /// # Returns\n    ///\n    /// * `CodeSplitterBuilder` - A new builder instance for `CodeSplitter`.\n    pub fn builder() -> CodeSplitterBuilder {\n        CodeSplitterBuilder::default()\n    }\n\n    /// Recursively chunks a syntax node into smaller pieces based on the chunk size.\n    ///\n    /// # Arguments\n    ///\n    /// * `node` - The syntax node to be chunked.\n    /// * `source` - The source code as a string.\n    /// * `last_end` - The end byte of the last chunk.\n    ///\n    /// # Returns\n    ///\n    /// * `Vec<String>` - A vector of code chunks as strings.\n    fn chunk_node(\n        &self,\n        node: Node,\n        source: &str,\n        mut last_end: usize,\n        current_chunk: Option<String>,\n    ) -> Vec<String> {\n        let mut new_chunks: Vec<String> = Vec::new();\n        let mut current_chunk = current_chunk.unwrap_or_default();\n\n        for child in node.children(&mut node.walk()) {\n            debug_assert!(\n                current_chunk.len() <= self.max_bytes(),\n                \"Chunk too big: {} > {}\",\n                current_chunk.len(),\n            
    self.max_bytes()\n            );\n\n            // if the next child will make the chunk too big then there are two options:\n            // 1. if the next child is too big to fit in a whole chunk, then recursively chunk it\n            //    one level down\n            // 2. if the next child is small enough to fit in a chunk, then add the current chunk to\n            //    the list and start a new chunk\n\n            let next_child_size = child.end_byte() - last_end;\n            if current_chunk.len() + next_child_size >= self.max_bytes() {\n                if next_child_size > self.max_bytes() {\n                    let mut sub_chunks =\n                        self.chunk_node(child, source, last_end, Some(current_chunk));\n                    current_chunk = sub_chunks.pop().unwrap_or_default();\n                    new_chunks.extend(sub_chunks);\n                } else {\n                    // NOTE: if the current chunk was smaller than the min_bytes, then it is\n                    // discarded here\n                    if !current_chunk.is_empty() && current_chunk.len() > self.min_bytes() {\n                        new_chunks.push(current_chunk);\n                    }\n                    current_chunk = source[last_end..child.end_byte()].to_string();\n                }\n            } else {\n                current_chunk += &source[last_end..child.end_byte()];\n            }\n\n            last_end = child.end_byte();\n        }\n\n        if !current_chunk.is_empty() && current_chunk.len() > self.min_bytes() {\n            new_chunks.push(current_chunk);\n        }\n\n        new_chunks\n    }\n\n    /// Splits the given code into chunks based on the chunk size.\n    ///\n    /// # Arguments\n    ///\n    /// * `code` - The source code to be split.\n    ///\n    /// # Returns\n    ///\n    /// * `Result<Vec<String>>` - A result containing a vector of code chunks as strings, or an\n    ///   error if the code could not be parsed.\n    ///\n    
/// # Errors\n    ///\n    /// Returns an error if the node cannot be found or fails to parse\n    pub fn split(&self, code: &str) -> Result<Vec<String>> {\n        let mut parser = Parser::new();\n        parser.set_language(&self.language.into())?;\n        let tree = parser.parse(code, None).context(\"No nodes found\")?;\n        let root_node = tree.root_node();\n\n        if root_node.has_error() {\n            tracing::warn!(\"Syntax error parsing code: {code:?}\");\n            return Ok(vec![code.to_string()]);\n        }\n\n        Ok(self.chunk_node(root_node, code, 0, None))\n    }\n\n    /// Returns the maximum number of bytes allowed in a chunk.\n    ///\n    /// # Returns\n    ///\n    /// * `usize` - The maximum number of bytes in a chunk.\n    fn max_bytes(&self) -> usize {\n        match &self.chunk_size {\n            ChunkSize::Bytes(size) => *size,\n            ChunkSize::Range(range) => range.end,\n        }\n    }\n\n    /// Returns the minimum number of bytes allowed in a chunk.\n    ///\n    /// # Returns\n    ///\n    /// * `usize` - The minimum number of bytes in a chunk.\n    fn min_bytes(&self) -> usize {\n        if let ChunkSize::Range(range) = &self.chunk_size {\n            range.start\n        } else {\n            0\n        }\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use indoc::indoc;\n\n    #[test]\n    fn test_split_single_chunk() {\n        let code = \"fn hello_world() {}\";\n\n        let splitter = CodeSplitter::new(SupportedLanguages::Rust);\n\n        let chunks = splitter.split(code);\n\n        assert_eq!(chunks.unwrap(), vec![\"fn hello_world() {}\"]);\n    }\n\n    #[test]\n    fn test_chunk_lines() {\n        let splitter = CodeSplitter::new(SupportedLanguages::Rust);\n\n        let text = indoc! 
{r#\"\n            fn main() {\n                println!(\"Hello\");\n                println!(\"World\");\n                println!(\"!\");\n            }\n        \"#};\n\n        let chunks = splitter.split(text).unwrap();\n\n        dbg!(&chunks);\n        assert_eq!(chunks.len(), 1);\n        assert_eq!(\n            chunks[0],\n            \"fn main() {\\n    println!(\\\"Hello\\\");\\n    println!(\\\"World\\\");\\n    println!(\\\"!\\\");\\n}\"\n        );\n    }\n\n    #[test]\n    fn test_max_bytes_limit() {\n        let splitter = CodeSplitter::builder()\n            .try_language(SupportedLanguages::Rust)\n            .unwrap()\n            .chunk_size(50)\n            .build()\n            .unwrap();\n\n        let text = indoc! {r#\"\n            fn main() {\n                println!(\"Hello, World!\");\n                println!(\"Goodbye, World!\");\n            }\n        \"#};\n        let chunks = splitter.split(text).unwrap();\n\n        assert!(chunks.iter().all(|chunk| chunk.len() <= 50));\n        assert!(\n            chunks\n                .windows(2)\n                .all(|pair| pair.iter().map(String::len).sum::<usize>() >= 50)\n        );\n\n        assert_eq!(\n            chunks,\n            vec![\n                \"fn main() {\\n    println!(\\\"Hello, World!\\\");\",\n                \"\\n    println!(\\\"Goodbye, World!\\\");\\n}\",\n            ]\n        );\n    }\n\n    #[test]\n    fn test_empty_text() {\n        let splitter = CodeSplitter::builder()\n            .try_language(SupportedLanguages::Rust)\n            .unwrap()\n            .chunk_size(50)\n            .build()\n            .unwrap();\n\n        let text = \"\";\n        let chunks = splitter.split(text).unwrap();\n\n        dbg!(&chunks);\n        assert_eq!(chunks.len(), 0);\n    }\n\n    #[test]\n    fn test_range_max() {\n        let splitter = CodeSplitter::builder()\n            .try_language(SupportedLanguages::Rust)\n            .unwrap()\n            
.chunk_size(0..50)\n            .build()\n            .unwrap();\n\n        let text = indoc! {r#\"\n            fn main() {\n                println!(\"Hello, World!\");\n                println!(\"Goodbye, World!\");\n            }\n        \"#};\n        let chunks = splitter.split(text).unwrap();\n        assert_eq!(\n            chunks,\n            vec![\n                \"fn main() {\\n    println!(\\\"Hello, World!\\\");\",\n                \"\\n    println!(\\\"Goodbye, World!\\\");\\n}\",\n            ]\n        );\n    }\n\n    #[test]\n    fn test_range_min_and_max() {\n        let splitter = CodeSplitter::builder()\n            .try_language(SupportedLanguages::Rust)\n            .unwrap()\n            .chunk_size(20..50)\n            .build()\n            .unwrap();\n        let text = indoc! {r#\"\n            fn main() {\n                println!(\"Hello, World!\");\n                println!(\"Goodbye, World!\");\n            }\n        \"#};\n        let chunks = splitter.split(text).unwrap();\n\n        assert!(chunks.iter().all(|chunk| chunk.len() <= 50));\n        assert!(\n            chunks\n                .windows(2)\n                .all(|pair| pair.iter().map(String::len).sum::<usize>() > 50)\n        );\n        assert!(chunks.iter().all(|chunk| chunk.len() >= 20));\n\n        assert_eq!(\n            chunks,\n            vec![\n                \"fn main() {\\n    println!(\\\"Hello, World!\\\");\",\n                \"\\n    println!(\\\"Goodbye, World!\\\");\\n}\"\n            ]\n        );\n    }\n\n    #[test]\n    fn test_on_self() {\n        // read the current file\n        let code = include_str!(\"splitter.rs\");\n        // try chunking with varying ranges of bytes, give me ten with different min and max\n        let ranges = vec![\n            10..200,\n            50..100,\n            100..150,\n            150..200,\n            200..250,\n            250..300,\n            300..350,\n            350..400,\n            
400..450,\n            450..500,\n        ];\n\n        for range in ranges {\n            let min = range.start;\n            let max = range.end;\n            let splitter = CodeSplitter::builder()\n                .try_language(\"rust\")\n                .unwrap()\n                .chunk_size(range)\n                .build()\n                .unwrap();\n\n            assert_eq!(splitter.min_bytes(), min);\n            assert_eq!(splitter.max_bytes(), max);\n\n            let chunks = splitter.split(code).unwrap();\n\n            // No chunk may exceed the maximum size.\n            assert!(\n                chunks.iter().all(|chunk| chunk.len() <= max),\n                \"max = {}, oversized chunks = {:?}\",\n                max,\n                chunks\n                    .iter()\n                    .filter(|chunk| chunk.len() > max)\n                    .collect::<Vec<_>>()\n            );\n\n            // No chunk may be smaller than the minimum size.\n            assert!(\n                chunks.iter().all(|chunk| chunk.len() >= min),\n                \"min = {}, undersized chunks = {:?}\",\n                min,\n                chunks\n                    .iter()\n                    .filter(|chunk| chunk.len() < min)\n                    .collect::<Vec<_>>()\n            );\n\n            // Adjacent chunks should never fit together within the maximum size,\n            // i.e. the splitter packs chunks as densely as possible.\n            let mergeable_pairs: Vec<_> = chunks\n                .windows(2)\n                .filter(|pair| pair.iter().map(String::len).sum::<usize>() < max)\n                .collect();\n            assert!(\n                mergeable_pairs.is_empty(),\n                \"max: {max}, mergeable adjacent chunks: {mergeable_pairs:?}\"\n            );\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-integrations/src/treesitter/supported_languages.rs",
"content": "//! This module defines the supported programming languages for the Swiftide project and provides\n//! utility functions for mapping these languages to their respective file extensions and\n//! tree-sitter language objects.\n//!\n//! The primary purpose of this module is to facilitate the recognition and handling of different\n//! programming languages by mapping file extensions and converting language enums to tree-sitter\n//! language objects for accurate parsing and syntax analysis.\n//!\n//! # Supported Languages\n//! - Rust\n//! - Typescript\n//! - Python\n//! - Ruby\n//! - Javascript\n//! - Java\n//! - Go\n//! - C#\n//! - Solidity\n//! - C\n//! - C++\n//! - Elixir\n//! - HTML\n//! - PHP\n\nuse std::hash::Hash;\n#[allow(unused_imports)]\npub use std::str::FromStr as _;\n\nuse serde::{Deserialize, Serialize};\n\n/// Enum representing the supported programming languages in the Swiftide project.\n///\n/// This enum is used to map programming languages to their respective file extensions and\n/// tree-sitter language objects. The `EnumString` and `Display` macros from the `strum_macros`\n/// crate are used to provide string conversion capabilities. 
The `ascii_case_insensitive` attribute\n/// allows for case-insensitive string matching.\n#[derive(\n    Debug,\n    PartialEq,\n    Eq,\n    Clone,\n    Copy,\n    Deserialize,\n    Serialize,\n    strum_macros::EnumString,\n    strum_macros::Display,\n    strum_macros::EnumIter,\n    strum_macros::AsRefStr,\n)]\n#[strum(ascii_case_insensitive)]\n#[non_exhaustive]\npub enum SupportedLanguages {\n    #[serde(alias = \"rust\")]\n    Rust,\n    #[serde(alias = \"typescript\")]\n    Typescript,\n    #[serde(alias = \"python\")]\n    Python,\n    #[serde(alias = \"ruby\")]\n    Ruby,\n    #[serde(alias = \"javascript\")]\n    Javascript,\n    #[serde(alias = \"java\")]\n    Java,\n    #[serde(alias = \"go\")]\n    Go,\n    #[serde(rename = \"c-sharp\", alias = \"csharp\", alias = \"c#\", alias = \"C#\")]\n    #[strum(\n        serialize = \"csharp\",\n        serialize = \"c-sharp\",\n        serialize = \"c#\",\n        serialize = \"C#\",\n        to_string = \"c-sharp\"\n    )]\n    CSharp,\n    #[serde(alias = \"solidity\")]\n    Solidity,\n    #[serde(alias = \"c\")]\n    C,\n    #[serde(alias = \"cpp\", alias = \"c++\", alias = \"C++\", rename = \"C++\")]\n    #[strum(\n        serialize = \"c++\",\n        serialize = \"cpp\",\n        serialize = \"Cpp\",\n        to_string = \"C++\"\n    )]\n    Cpp,\n    #[serde(alias = \"elixir\")]\n    Elixir,\n    #[serde(alias = \"html\", alias = \"Html\")]\n    HTML,\n    #[serde(alias = \"php\", alias = \"PHP\", alias = \"Php\")]\n    PHP,\n}\n\nimpl Hash for SupportedLanguages {\n    /// Implements the `Hash` trait for `SupportedLanguages`.\n    ///\n    /// This allows instances of `SupportedLanguages` to be used as keys in hash maps and sets.\n    ///\n    /// # Parameters\n    /// - `state`: The mutable state to which the hash is added.\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.as_ref().hash(state);\n    }\n}\n\n/// Static array of file extensions for Rust files.\nstatic 
RUST_EXTENSIONS: &[&str] = &[\"rs\"];\n\n/// Static array of file extensions for Typescript files.\nstatic TYPESCRIPT_EXTENSIONS: &[&str] = &[\"ts\", \"tsx\", \"js\", \"jsx\"];\n\n/// Static array of file extensions for Python files.\nstatic PYTHON_EXTENSIONS: &[&str] = &[\"py\"];\n\n/// Static array of file extensions for Ruby files.\nstatic RUBY_EXTENSIONS: &[&str] = &[\"rb\"];\n\n/// Static array of file extensions for Javascript files.\nstatic JAVASCRIPT_EXTENSIONS: &[&str] = &[\"js\", \"jsx\"];\n\n/// Static array of file extensions for Java files.\nstatic JAVA_EXTENSIONS: &[&str] = &[\"java\"];\n\n/// Static array of file extensions for Go files.\nstatic GO_EXTENSIONS: &[&str] = &[\"go\"];\n\n/// Static array of file extensions for C# files.\nstatic C_SHARP_EXTENSIONS: &[&str] = &[\"cs\", \"csx\"];\n\n/// Static array of file extensions for Solidity files.\nstatic SOLIDITY_EXTENSIONS: &[&str] = &[\"sol\"];\n\n/// Static array of file extensions for C files.\nstatic C_EXTENSIONS: &[&str] = &[\"c\", \"h\"];\n\n/// Static array of file extensions for C++ files.\nstatic CPP_EXTENSIONS: &[&str] = &[\"c\", \"h\", \"cc\", \"cpp\"];\n\n/// Static array of file extensions for Elixir files.\nstatic ELIXIR_EXTENSIONS: &[&str] = &[\"ex\", \"exs\"];\n\n/// Static array of file extensions for HTML files.\nstatic HTML_EXTENSIONS: &[&str] = &[\"html\", \"htm\", \"xhtml\"];\n\n/// Static array of file extensions for PHP files.\nstatic PHP_EXTENSIONS: &[&str] = &[\"php\"];\n\nimpl SupportedLanguages {\n    /// Returns the file extensions associated with the supported language.\n    ///\n    /// # Returns\n    /// A static slice of string slices representing the file extensions.\n    pub fn file_extensions(&self) -> &[&str] {\n        match self {\n            SupportedLanguages::Rust => RUST_EXTENSIONS,\n            SupportedLanguages::Typescript => TYPESCRIPT_EXTENSIONS,\n            SupportedLanguages::Python => PYTHON_EXTENSIONS,\n            SupportedLanguages::Ruby => RUBY_EXTENSIONS,\n            SupportedLanguages::Javascript => JAVASCRIPT_EXTENSIONS,\n            SupportedLanguages::Java => JAVA_EXTENSIONS,\n    
        SupportedLanguages::Go => GO_EXTENSIONS,\n            SupportedLanguages::CSharp => C_SHARP_EXTENSIONS,\n            SupportedLanguages::Solidity => SOLIDITY_EXTENSIONS,\n            SupportedLanguages::C => C_EXTENSIONS,\n            SupportedLanguages::Cpp => CPP_EXTENSIONS,\n            SupportedLanguages::Elixir => ELIXIR_EXTENSIONS,\n            SupportedLanguages::HTML => HTML_EXTENSIONS,\n            SupportedLanguages::PHP => PHP_EXTENSIONS,\n        }\n    }\n}\n\nimpl From<SupportedLanguages> for tree_sitter::Language {\n    /// Converts a `SupportedLanguages` enum to a `tree_sitter::Language` object.\n    ///\n    /// This implementation allows for the conversion of the supported languages to their respective\n    /// tree-sitter language objects, enabling accurate parsing and syntax analysis.\n    ///\n    /// # Parameters\n    /// - `val`: The `SupportedLanguages` enum value to be converted.\n    ///\n    /// # Returns\n    /// A `tree_sitter::Language` object corresponding to the provided `SupportedLanguages` enum\n    /// value.\n    fn from(val: SupportedLanguages) -> Self {\n        match val {\n            SupportedLanguages::Rust => tree_sitter_rust::LANGUAGE,\n            SupportedLanguages::Python => tree_sitter_python::LANGUAGE,\n            SupportedLanguages::Typescript => tree_sitter_typescript::LANGUAGE_TYPESCRIPT,\n            SupportedLanguages::Javascript => tree_sitter_javascript::LANGUAGE,\n            SupportedLanguages::Ruby => tree_sitter_ruby::LANGUAGE,\n            SupportedLanguages::Java => tree_sitter_java::LANGUAGE,\n            SupportedLanguages::Go => tree_sitter_go::LANGUAGE,\n            SupportedLanguages::CSharp => tree_sitter_c_sharp::LANGUAGE,\n            SupportedLanguages::Solidity => tree_sitter_solidity::LANGUAGE,\n            SupportedLanguages::C => tree_sitter_c::LANGUAGE,\n            SupportedLanguages::Cpp => tree_sitter_cpp::LANGUAGE,\n            SupportedLanguages::Elixir => 
tree_sitter_elixir::LANGUAGE,\n            SupportedLanguages::HTML => tree_sitter_html::LANGUAGE,\n            SupportedLanguages::PHP => tree_sitter_php::LANGUAGE_PHP,\n        }\n        .into()\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    pub use strum::IntoEnumIterator as _;\n\n    /// Tests the case-insensitive string conversion for `SupportedLanguages`.\n    #[test]\n    fn test_supported_languages_from_str() {\n        assert_eq!(\n            SupportedLanguages::from_str(\"rust\"),\n            Ok(SupportedLanguages::Rust)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"typescript\"),\n            Ok(SupportedLanguages::Typescript)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"java\"),\n            Ok(SupportedLanguages::Java)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"c-sharp\"),\n            Ok(SupportedLanguages::CSharp)\n        );\n    }\n\n    /// Tests the case-insensitive string conversion for `SupportedLanguages` with different casing.\n    #[test]\n    fn test_supported_languages_from_str_case_insensitive() {\n        assert_eq!(\n            SupportedLanguages::from_str(\"Rust\"),\n            Ok(SupportedLanguages::Rust)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"TypeScript\"),\n            Ok(SupportedLanguages::Typescript)\n        );\n\n        assert_eq!(\n            SupportedLanguages::from_str(\"Java\"),\n            Ok(SupportedLanguages::Java)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"C-Sharp\"),\n            Ok(SupportedLanguages::CSharp)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"C++\"),\n            Ok(SupportedLanguages::Cpp)\n        );\n        assert_eq!(\n            SupportedLanguages::from_str(\"cpp\"),\n            Ok(SupportedLanguages::Cpp)\n        );\n\n        assert_eq!(\n            
SupportedLanguages::from_str(\"elixir\"),\n            Ok(SupportedLanguages::Elixir)\n        );\n    }\n\n    #[test]\n    fn test_serialize_and_deserialize_for_supported_languages() {\n        for lang in SupportedLanguages::iter() {\n            let val = serde_json::to_string(&lang).unwrap();\n\n            assert_eq!(\n                serde_json::to_string(&lang).unwrap(),\n                format!(\"\\\"{lang}\\\"\"),\n                \"Failed to serialize {lang}\"\n            );\n            assert_eq!(\n                serde_json::from_str::<SupportedLanguages>(&val).unwrap(),\n                lang,\n                \"Failed to deserialize {lang}\"\n            );\n            assert_eq!(\n                serde_json::from_str::<SupportedLanguages>(&val.to_lowercase()).unwrap(),\n                lang,\n                \"Failed to deserialize lowercase {lang}\"\n            );\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-langfuse\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\n# TODO: Go over these\nserde.workspace = true\nserde_with = { version = \"^3.8\", default-features = false, features = [\n  \"base64\",\n  \"std\",\n  \"macros\",\n] }\nserde_json.workspace = true\nserde_repr = \"^0.1\"\nchrono = { workspace = true, features = [\"now\"] }\ntokio.workspace = true\nuuid.workspace = true\ntracing.workspace = true\ntracing-subscriber.workspace = true\nfutures = \"^0.3\"\nurl = \"^2.5\"\nreqwest = { version = \"^0.13\", default-features = false, features = [\n  \"json\",\n  \"multipart\",\n] }\nanyhow.workspace = true\nasync-trait.workspace = true\ndyn-clone.workspace = true\nswiftide-core = { path = \"../swiftide-core\", version = \"0.32\" }\n\n[dev-dependencies]\nwiremock.workspace = true\ntest-log.workspace = true\ninsta.workspace = true\ntracing-appender = \"0.2.3\"\n\n\n# We need some custom stuff because of the codegen and me being lazy\n[lints.rust]\ndead_code = \"warn\"\n\n[lints.clippy]\ncargo = { level = \"warn\", priority = -1 }\n# pedantic = { level = \"warn\", priority = -1 }\nblocks_in_conditions = \"allow\"\nmust_use_candidate = \"allow\"\nmodule_name_repetitions = \"allow\"\nmissing_fields_in_debug = \"allow\"\nmultiple_crate_versions = \"allow\"\noption_option = \"allow\"\n"
  },
  {
    "path": "swiftide-langfuse/src/apis/configuration.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\n#[derive(Debug, Clone)]\npub struct Configuration {\n    pub base_path: String,\n    pub user_agent: Option<String>,\n    pub client: reqwest::Client,\n    pub basic_auth: Option<BasicAuth>,\n    pub oauth_access_token: Option<String>,\n    pub bearer_access_token: Option<String>,\n    pub api_key: Option<ApiKey>,\n}\n\npub type BasicAuth = (String, Option<String>);\n\n#[derive(Debug, Clone)]\npub struct ApiKey {\n    pub prefix: Option<String>,\n    pub key: String,\n}\n\nimpl Configuration {\n    pub fn new() -> Configuration {\n        Configuration::default()\n    }\n}\n\nimpl Default for Configuration {\n    fn default() -> Self {\n        Configuration {\n            base_path: \"http://localhost\".to_owned(),\n            user_agent: Some(\"OpenAPI-Generator//rust\".to_owned()),\n            client: reqwest::Client::new(),\n            basic_auth: None,\n            oauth_access_token: None,\n            bearer_access_token: None,\n            api_key: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/apis/ingestion_api.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse super::{ContentType, Error, configuration};\nuse crate::{apis::ResponseContent, models};\nuse reqwest;\nuse serde::{Deserialize, Serialize, de::Error as _};\n\n/// struct for typed errors of method [`ingestion_batch`]\n#[derive(Debug, Clone, Serialize, Deserialize)]\n#[serde(untagged)]\npub enum IngestionBatchError {\n    Status400(serde_json::Value),\n    Status401(serde_json::Value),\n    Status403(serde_json::Value),\n    Status404(serde_json::Value),\n    Status405(serde_json::Value),\n    UnknownValue(serde_json::Value),\n}\n\n/// Batched ingestion for Langfuse Tracing. If you want to use tracing via the API, such as to build your own Langfuse client implementation, this is the only API route you need to implement.  Within each batch, there can be multiple events. Each event has a type, an id, a timestamp, metadata and a body. Internally, we refer to this as the \\\"event envelope\\\" as it tells us something about the event but not the trace. We use the event id within this envelope to deduplicate messages to avoid processing the same event twice, i.e. the event id should be unique per request. The event.body.id is the ID of the actual trace and will be used for updates and will be visible within the Langfuse App. I.e. if you want to update a trace, you'd use the same body id, but separate event IDs.  Notes: - Introduction to data model: <https://langfuse.com/docs/tracing-data-model> - Batch sizes are limited to 3.5 MB in total. 
You need to adjust the number of events per batch accordingly. - The API does not return a 4xx status code for input errors. Instead, it responds with a 207 status code, which includes a list of the encountered errors.\npub async fn ingestion_batch(\n    configuration: &configuration::Configuration,\n    ingestion_batch_request: &models::IngestionBatchRequest,\n) -> Result<models::IngestionResponse, Error<IngestionBatchError>> {\n    // add a prefix to parameters to efficiently prevent name collisions\n    let p_ingestion_batch_request = ingestion_batch_request;\n\n    let uri_str = format!(\"{}/api/public/ingestion\", configuration.base_path);\n    let mut req_builder = configuration\n        .client\n        .request(reqwest::Method::POST, &uri_str);\n\n    if let Some(ref user_agent) = configuration.user_agent {\n        req_builder = req_builder.header(reqwest::header::USER_AGENT, user_agent.clone());\n    }\n    if let Some(ref auth_conf) = configuration.basic_auth {\n        req_builder = req_builder.basic_auth(auth_conf.0.clone(), auth_conf.1.clone());\n    }\n    req_builder = req_builder.json(&p_ingestion_batch_request);\n\n    let req = req_builder.build()?;\n    let resp = configuration.client.execute(req).await?;\n\n    let status = resp.status();\n    let content_type = resp\n        .headers()\n        .get(\"content-type\")\n        .and_then(|v| v.to_str().ok())\n        .unwrap_or(\"application/octet-stream\");\n    let content_type = super::ContentType::from(content_type);\n\n    if !status.is_client_error() && !status.is_server_error() {\n        let content = resp.text().await?;\n        match content_type {\n            ContentType::Json => serde_json::from_str(&content).map_err(Error::from),\n            ContentType::Text => Err(Error::from(serde_json::Error::custom(\n                \"Received `text/plain` content type response that cannot be converted to `models::IngestionResponse`\",\n            ))),\n            
ContentType::Unsupported(unknown_type) => {\n                Err(Error::from(serde_json::Error::custom(format!(\n                    \"Received `{unknown_type}` content type response that cannot be converted to `models::IngestionResponse`\"\n                ))))\n            }\n        }\n    } else {\n        let content = resp.text().await?;\n        let entity: Option<IngestionBatchError> = serde_json::from_str(&content).ok();\n        Err(Error::ResponseError(ResponseContent {\n            status,\n            content,\n            entity,\n        }))\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/apis/mod.rs",
    "content": "use std::error;\nuse std::fmt;\n\n#[derive(Debug, Clone)]\n#[allow(dead_code)]\npub struct ResponseContent<T> {\n    pub status: reqwest::StatusCode,\n    pub content: String,\n    pub entity: Option<T>,\n}\n\n#[derive(Debug)]\npub enum Error<T> {\n    Reqwest(reqwest::Error),\n    Serde(serde_json::Error),\n    Io(std::io::Error),\n    #[allow(clippy::enum_variant_names)]\n    ResponseError(ResponseContent<T>),\n}\n\nimpl<T> fmt::Display for Error<T> {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        let (module, e) = match self {\n            Error::Reqwest(e) => (\"reqwest\", e.to_string()),\n            Error::Serde(e) => (\"serde\", e.to_string()),\n            Error::Io(e) => (\"IO\", e.to_string()),\n            Error::ResponseError(e) => (\"response\", format!(\"status code {}\", e.status)),\n        };\n        write!(f, \"error in {module}: {e}\")\n    }\n}\n\nimpl<T: fmt::Debug> error::Error for Error<T> {\n    fn source(&self) -> Option<&(dyn error::Error + 'static)> {\n        Some(match self {\n            Error::Reqwest(e) => e,\n            Error::Serde(e) => e,\n            Error::Io(e) => e,\n            Error::ResponseError(_) => return None,\n        })\n    }\n}\n\nimpl<T> From<reqwest::Error> for Error<T> {\n    fn from(e: reqwest::Error) -> Self {\n        Error::Reqwest(e)\n    }\n}\n\nimpl<T> From<serde_json::Error> for Error<T> {\n    fn from(e: serde_json::Error) -> Self {\n        Error::Serde(e)\n    }\n}\n\nimpl<T> From<std::io::Error> for Error<T> {\n    fn from(e: std::io::Error) -> Self {\n        Error::Io(e)\n    }\n}\n\n/// Internal use only\n/// A content type supported by this client.\n#[allow(dead_code)]\nenum ContentType {\n    Json,\n    Text,\n    Unsupported(String),\n}\n\nimpl From<&str> for ContentType {\n    fn from(content_type: &str) -> Self {\n        if content_type.starts_with(\"application\") && content_type.contains(\"json\") {\n            Self::Json\n        } else if 
content_type.starts_with(\"text/plain\") {\n            Self::Text\n        } else {\n            Self::Unsupported(content_type.to_string())\n        }\n    }\n}\n\npub mod configuration;\npub mod ingestion_api;\n"
  },
  {
    "path": "swiftide-langfuse/src/langfuse_batch_manager.rs",
    "content": "use crate::apis::configuration::Configuration;\nuse crate::apis::ingestion_api::ingestion_batch;\nuse crate::models::{IngestionBatchRequest, IngestionEvent};\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse dyn_clone::DynClone;\nuse std::sync::Arc;\nuse std::sync::atomic::{AtomicBool, Ordering};\nuse std::time::Duration;\nuse tokio::sync::Mutex;\n\n#[derive(Debug, Default, Clone)]\npub struct LangfuseBatchManager {\n    config: Arc<Configuration>,\n    pub batch: Arc<Mutex<Vec<IngestionEvent>>>,\n    dropped: Arc<AtomicBool>,\n}\n\n#[async_trait]\npub trait BatchManagerTrait: Send + Sync + DynClone {\n    async fn add_event(&self, event: IngestionEvent);\n    async fn flush(&self) -> anyhow::Result<()>;\n    fn boxed(&self) -> Box<dyn BatchManagerTrait + Send + Sync>;\n}\n\ndyn_clone::clone_trait_object!(BatchManagerTrait);\n\nimpl LangfuseBatchManager {\n    pub fn new(config: Configuration) -> Self {\n        Self {\n            config: Arc::new(config),\n            batch: Arc::new(Mutex::new(Vec::new())),\n\n            // Locally track if the manager has been dropped to avoid spawning tasks after drop\n            dropped: Arc::new(AtomicBool::new(false)),\n        }\n    }\n\n    pub fn spawn(self) {\n        if self.dropped.load(Ordering::Relaxed) {\n            tracing::trace!(\"LangfuseBatchManager has been dropped, not spawning sender task\");\n            return;\n        }\n\n        const BATCH_INTERVAL: Duration = Duration::from_secs(5);\n\n        tokio::spawn(async move {\n            loop {\n                tokio::time::sleep(BATCH_INTERVAL).await;\n                if let Err(e) = self.send_async().await {\n                    tracing::error!(\n                        error.msg = %e,\n                        error.type = %std::any::type_name_of_val(&e),\n                        \"Failed to send batch to Langfuse\"\n                    );\n                }\n            }\n        });\n    }\n\n    pub async fn flush(&self) 
-> Result<()> {\n        let lock = self.batch.lock().await;\n        if !lock.is_empty() {\n            drop(lock);\n            self.send_async().await?;\n        }\n        Ok(())\n    }\n\n    pub async fn send_async(&self) -> Result<()> {\n        tracing::trace!(\"Sending batch to Langfuse\");\n        if self.dropped.load(Ordering::Relaxed) {\n            tracing::error!(\"LangfuseBatchManager has been dropped, not sending batch\");\n            return Ok(());\n        }\n        let mut batch_guard = self.batch.lock().await;\n        if batch_guard.is_empty() {\n            return Ok(());\n        }\n\n        let batch = std::mem::take(&mut *batch_guard);\n        let mut payload = IngestionBatchRequest {\n            batch,\n            metadata: None, // Optional metadata can be added here if needed\n        };\n\n        drop(batch_guard); // Release the lock before making the network call\n\n        let response = ingestion_batch(&self.config, &payload).await?;\n\n        for error in &response.errors {\n            // Any errors we log and ignore, no retry\n            tracing::error!(\n                id = %error.id,\n                status = error.status,\n                message = error.message.as_ref().unwrap_or(&None).as_deref().unwrap_or(\"No message\"),\n                error = ?error.error,\n                \"Partial failure in batch ingestion\"\n            );\n        }\n\n        if response.successes.is_empty() {\n            tracing::error!(\"All items in the batch failed, retrying all items\");\n\n            let mut batch_guard = self.batch.lock().await;\n            batch_guard.append(&mut payload.batch);\n        }\n\n        if response.successes.is_empty() && !response.errors.is_empty() {\n            anyhow::bail!(\"Langfuse ingestion failed for all items\");\n        } else {\n            Ok(())\n        }\n    }\n\n    pub async fn add_event(&self, event: IngestionEvent) {\n        self.batch.lock().await.push(event);\n    
}\n}\n\n#[async_trait]\nimpl BatchManagerTrait for LangfuseBatchManager {\n    async fn add_event(&self, event: IngestionEvent) {\n        self.add_event(event).await;\n    }\n\n    async fn flush(&self) -> anyhow::Result<()> {\n        self.flush().await\n    }\n\n    fn boxed(&self) -> Box<dyn BatchManagerTrait + Send + Sync> {\n        Box::new(self.clone())\n    }\n}\n\nimpl Drop for LangfuseBatchManager {\n    fn drop(&mut self) {\n        if Arc::strong_count(&self.dropped) > 1 {\n            // There are other references to this manager, don't flush yet\n            return;\n        }\n        if self.dropped.swap(true, Ordering::SeqCst) {\n            // Already dropped\n            return;\n        }\n        let this = self.clone();\n\n        tokio::task::spawn_blocking(move || {\n            let handle = tokio::runtime::Handle::current();\n            if let Err(e) = handle.block_on(async move { this.flush().await }) {\n                tracing::error!(\"Error flushing LangfuseBatchManager on drop: {:?}\", e);\n            }\n        });\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/lib.rs",
    "content": "//! Provides a Langfuse integration for Swiftide\n//!\n//! Agents and completion traits will report their input, output, and usage to langfuse.\n//!\n//! The `LangfuseLayer` needs to be set up like any other tracing layer.\n//!\n//! By default, it requires the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables\n//! to be set. You can also provide a custom Langfuse URL via the LANGFUSE_URL environment\n//! variable.\n//!\n//! All `Langfuse` data is on the `debug` level. Make sure your tracing setup captures that level.\n//!\n//! # Example\n//! ```no_run\n//! # use swiftide_langfuse::LangfuseLayer;\n//! # use tracing::metadata::LevelFilter;\n//! # use tracing_subscriber::prelude::*;\n//!\n//! // Assuming you have other layers\n//! let mut layers = Vec::new();\n//! layers.push(LangfuseLayer::default().with_filter(LevelFilter::DEBUG).boxed());\n//!\n//! let registry = tracing_subscriber::registry()\n//!     .with(layers);\n//!\n//! registry.init();\n//! ```\n//!\n//! For more advanced usage, refer to the `LangfuseLayer` documentation.\n//!\n//! Refer to the [Langfuse documentation](https://langfuse.com/docs/) for more details on how to setup Langfuse itself.\nmod apis;\nmod langfuse_batch_manager;\nmod models;\nmod tracing_layer;\n\nconst DEFAULT_LANGFUSE_URL: &str = \"http://localhost:3000\";\n\npub use crate::apis::configuration::Configuration;\npub use crate::langfuse_batch_manager::LangfuseBatchManager;\npub use crate::tracing_layer::LangfuseLayer;\n"
  },
  {
    "path": "swiftide-langfuse/src/models/create_event_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct CreateEventBody {\n    #[serde(\n        rename = \"id\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub id: Option<Option<String>>,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: 
Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    #[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n\nimpl CreateEventBody {\n    pub fn new() -> CreateEventBody {\n        CreateEventBody {\n            id: None,\n            trace_id: None,\n            name: None,\n            start_time: None,\n            metadata: None,\n            input: None,\n            output: None,\n            level: None,\n            status_message: None,\n            parent_observation_id: None,\n            version: None,\n            environment: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/create_generation_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct CreateGenerationBody {\n    #[serde(\n        rename = \"completionStartTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub completion_start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"model\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub model: Option<Option<String>>,\n    #[serde(\n        rename = \"modelParameters\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub model_parameters: Option<Option<std::collections::HashMap<String, models::MapValue>>>,\n    #[serde(rename = \"usage\", skip_serializing_if = \"Option::is_none\")]\n    pub usage: Option<Box<models::IngestionUsage>>,\n    #[serde(rename = \"usageDetails\", skip_serializing_if = \"Option::is_none\")]\n    pub usage_details: Option<Box<models::UsageDetails>>,\n    #[serde(\n        rename = \"costDetails\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub cost_details: Option<Option<std::collections::HashMap<String, f64>>>,\n    #[serde(\n        rename = 
\"promptName\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub prompt_name: Option<Option<String>>,\n    #[serde(\n        rename = \"promptVersion\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub prompt_version: Option<Option<i32>>,\n    #[serde(\n        rename = \"endTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub end_time: Option<Option<String>>,\n    #[serde(\n        rename = \"id\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub id: Option<Option<String>>,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = 
\"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    #[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n\nimpl CreateGenerationBody {\n    pub fn new() -> CreateGenerationBody {\n        CreateGenerationBody {\n            completion_start_time: None,\n            model: None,\n            model_parameters: None,\n            usage: None,\n            usage_details: None,\n            cost_details: None,\n            prompt_name: None,\n            prompt_version: None,\n            end_time: None,\n            id: None,\n            trace_id: None,\n            name: None,\n            start_time: None,\n            metadata: None,\n            input: None,\n            output: None,\n            level: None,\n            status_message: None,\n            parent_observation_id: None,\n            version: None,\n            environment: 
None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/create_score_value.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n/// `CreateScoreValue` : The value of the score. Must be passed as string for categorical scores,\n/// and numeric for boolean and numeric scores\n#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]\n#[serde(untagged)]\npub enum CreateScoreValue {\n    Number(f64),\n    String(String),\n}\n\nimpl Default for CreateScoreValue {\n    fn default() -> Self {\n        Self::Number(Default::default())\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/create_span_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct CreateSpanBody {\n    #[serde(\n        rename = \"endTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub end_time: Option<Option<String>>,\n    #[serde(\n        rename = \"id\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub id: Option<Option<String>>,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: 
Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    #[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n\nimpl CreateSpanBody {\n    pub fn new() -> CreateSpanBody {\n        CreateSpanBody {\n            end_time: None,\n            id: None,\n            trace_id: None,\n            name: None,\n            start_time: None,\n            metadata: None,\n            input: None,\n            output: None,\n            level: None,\n            status_message: None,\n            parent_observation_id: None,\n            version: None,\n            
environment: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_batch_request.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct IngestionBatchRequest {\n    /// Batch of tracing events to be ingested. Discriminated by attribute `type`.\n    #[serde(rename = \"batch\")]\n    pub batch: Vec<models::IngestionEvent>,\n    /// Optional. Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_error.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct IngestionError {\n    #[serde(rename = \"id\")]\n    pub id: String,\n    #[serde(rename = \"status\")]\n    pub status: i32,\n    #[serde(\n        rename = \"message\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub message: Option<Option<String>>,\n    #[serde(\n        rename = \"error\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub error: Option<Option<serde_json::Value>>,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse chrono::Utc;\nuse serde::{Deserialize, Serialize};\nuse uuid::Uuid;\n\n#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]\n#[serde(untagged)]\npub enum IngestionEvent {\n    TraceCreate(Box<models::TraceCreate>),\n    ScoreCreate(Box<models::ScoreCreate>),\n    SpanCreate(Box<models::SpanCreate>),\n    SpanUpdate(Box<models::SpanUpdate>),\n    GenerationCreate(Box<models::GenerationCreate>),\n    GenerationUpdate(Box<models::GenerationUpdate>),\n    EventCreate(Box<models::EventCreate>),\n    SdkLog(Box<models::SdkLog>),\n    ObservationCreate(Box<models::ObservationCreate>),\n    ObservationUpdate(Box<models::ObservationUpdate>),\n}\n\nimpl Default for IngestionEvent {\n    fn default() -> Self {\n        Self::TraceCreate(Default::default())\n    }\n}\n\nimpl IngestionEvent {\n    pub fn new_trace_create(body: models::TraceBody) -> Self {\n        IngestionEvent::TraceCreate(Box::new(models::TraceCreate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of::Type::TraceCreate,\n        )))\n    }\n\n    pub fn new_score_create(body: models::ScoreBody) -> Self {\n        IngestionEvent::ScoreCreate(Box::new(models::ScoreCreate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_1::Type::ScoreCreate,\n        )))\n    }\n\n    
pub fn new_span_create(body: models::CreateSpanBody) -> Self {\n        IngestionEvent::SpanCreate(Box::new(models::SpanCreate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_2::Type::SpanCreate,\n        )))\n    }\n\n    pub fn new_span_update(body: models::UpdateSpanBody) -> Self {\n        IngestionEvent::SpanUpdate(Box::new(models::SpanUpdate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_3::Type::SpanUpdate,\n        )))\n    }\n\n    pub fn new_generation_create(body: models::CreateGenerationBody) -> Self {\n        IngestionEvent::GenerationCreate(Box::new(models::GenerationCreate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_4::Type::GenerationCreate,\n        )))\n    }\n\n    pub fn new_generation_update(body: models::UpdateGenerationBody) -> Self {\n        IngestionEvent::GenerationUpdate(Box::new(models::GenerationUpdate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_5::Type::GenerationUpdate,\n        )))\n    }\n\n    pub fn new_event_create(body: models::CreateEventBody) -> Self {\n        IngestionEvent::EventCreate(Box::new(models::EventCreate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_6::Type::EventCreate,\n        )))\n    }\n\n    pub fn new_sdk_log(body: models::SdkLogBody) -> Self {\n        IngestionEvent::SdkLog(Box::new(models::SdkLog::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_7::Type::SdkLog,\n        )))\n    }\n\n    pub fn 
new_observation_create(body: models::ObservationBody) -> Self {\n        IngestionEvent::ObservationCreate(Box::new(models::ObservationCreate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_8::Type::ObservationCreate,\n        )))\n    }\n\n    pub fn new_observation_update(body: models::ObservationBody) -> Self {\n        IngestionEvent::ObservationUpdate(Box::new(models::ObservationUpdate::new(\n            body,\n            Uuid::new_v4().to_string(),\n            Utc::now().to_rfc3339(),\n            models::ingestion_event_one_of_9::Type::ObservationUpdate,\n        )))\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct TraceCreate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::TraceBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl TraceCreate {\n    pub fn new(\n        body: models::TraceBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> TraceCreate {\n        TraceCreate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"trace-create\")]\n    #[default]\n    TraceCreate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_1.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct ScoreCreate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::ScoreBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl ScoreCreate {\n    pub fn new(\n        body: models::ScoreBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> ScoreCreate {\n        ScoreCreate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"score-create\")]\n    #[default]\n    ScoreCreate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_2.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct SpanCreate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::CreateSpanBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl SpanCreate {\n    pub fn new(\n        body: models::CreateSpanBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> SpanCreate {\n        SpanCreate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"span-create\")]\n    #[default]\n    SpanCreate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_3.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct SpanUpdate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::UpdateSpanBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl SpanUpdate {\n    pub fn new(\n        body: models::UpdateSpanBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> SpanUpdate {\n        SpanUpdate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"span-update\")]\n    #[default]\n    SpanUpdate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_4.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct GenerationCreate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::CreateGenerationBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl GenerationCreate {\n    pub fn new(\n        body: models::CreateGenerationBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> GenerationCreate {\n        GenerationCreate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"generation-create\")]\n    #[default]\n    GenerationCreate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_5.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct GenerationUpdate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::UpdateGenerationBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl GenerationUpdate {\n    pub fn new(\n        body: models::UpdateGenerationBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> GenerationUpdate {\n        GenerationUpdate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"generation-update\")]\n    #[default]\n    GenerationUpdate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_6.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct EventCreate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::CreateEventBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl EventCreate {\n    pub fn new(\n        body: models::CreateEventBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> EventCreate {\n        EventCreate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"event-create\")]\n    #[default]\n    EventCreate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_7.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct SdkLog {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::SdkLogBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl SdkLog {\n    pub fn new(body: models::SdkLogBody, id: String, timestamp: String, r#type: Type) -> SdkLog {\n        SdkLog {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"sdk-log\")]\n    #[default]\n    SdkLog,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_8.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct ObservationCreate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::ObservationBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl ObservationCreate {\n    pub fn new(\n        body: models::ObservationBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> ObservationCreate {\n        ObservationCreate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"observation-create\")]\n    #[default]\n    ObservationCreate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_event_one_of_9.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct ObservationUpdate {\n    #[serde(rename = \"body\")]\n    pub body: Box<models::ObservationBody>,\n    /// UUID v4 that identifies the event\n    #[serde(rename = \"id\")]\n    pub id: String,\n    /// Datetime (ISO 8601) of event creation in client. Should be as close to actual event\n    /// creation in client as possible, this timestamp will be used for ordering of events in\n    /// future release. Resolution: milliseconds (required), microseconds (optimal).\n    #[serde(rename = \"timestamp\")]\n    pub timestamp: String,\n    /// Optional. 
Metadata field used by the Langfuse SDKs for debugging.\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"type\")]\n    pub r#type: Type,\n}\n\nimpl ObservationUpdate {\n    pub fn new(\n        body: models::ObservationBody,\n        id: String,\n        timestamp: String,\n        r#type: Type,\n    ) -> ObservationUpdate {\n        ObservationUpdate {\n            body: Box::new(body),\n            id,\n            timestamp,\n            metadata: None,\n            r#type,\n        }\n    }\n}\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum Type {\n    #[serde(rename = \"observation-update\")]\n    #[default]\n    ObservationUpdate,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_response.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct IngestionResponse {\n    #[serde(rename = \"successes\")]\n    pub successes: Vec<models::IngestionSuccess>,\n    #[serde(rename = \"errors\")]\n    pub errors: Vec<models::IngestionError>,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_success.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct IngestionSuccess {\n    #[serde(rename = \"id\")]\n    pub id: String,\n    #[serde(rename = \"status\")]\n    pub status: i32,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/ingestion_usage.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]\n#[serde(untagged)]\npub enum IngestionUsage {\n    Usage(Box<models::Usage>),\n    OpenAiUsage(Box<models::OpenAiUsage>),\n}\n\nimpl Default for IngestionUsage {\n    fn default() -> Self {\n        Self::Usage(Default::default())\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/map_value.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]\n#[serde(untagged)]\npub enum MapValue {\n    String(String),\n    Integer(i32),\n    Boolean(bool),\n    Array(Vec<String>),\n}\n\nimpl Default for MapValue {\n    fn default() -> Self {\n        Self::String(Default::default())\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/mod.rs",
    "content": "// pub mod annotation_queue;\n// pub use self::annotation_queue::AnnotationQueue;\n// pub mod annotation_queue_assignment_request;\n// pub use self::annotation_queue_assignment_request::AnnotationQueueAssignmentRequest;\n// pub mod annotation_queue_item;\n// pub use self::annotation_queue_item::AnnotationQueueItem;\n// pub mod annotation_queue_object_type;\n// pub use self::annotation_queue_object_type::AnnotationQueueObjectType;\n// pub mod annotation_queue_status;\n// pub use self::annotation_queue_status::AnnotationQueueStatus;\n// pub mod api_key_deletion_response;\n// pub use self::api_key_deletion_response::ApiKeyDeletionResponse;\n// pub mod api_key_list;\n// pub use self::api_key_list::ApiKeyList;\n// pub mod api_key_response;\n// pub use self::api_key_response::ApiKeyResponse;\n// pub mod api_key_summary;\n// pub use self::api_key_summary::ApiKeySummary;\n// pub mod authentication_scheme;\n// pub use self::authentication_scheme::AuthenticationScheme;\n// pub mod base_event;\n// pub use self::base_event::BaseEvent;\n// pub mod base_prompt;\n// pub use self::base_prompt::BasePrompt;\n// pub mod base_score;\n// pub use self::base_score::BaseScore;\n// pub mod base_score_v1;\n// pub use self::base_score_v1::BaseScoreV1;\n// pub mod boolean_score;\n// pub use self::boolean_score::BooleanScore;\n// pub mod boolean_score_v1;\n// pub use self::boolean_score_v1::BooleanScoreV1;\n// pub mod bulk_config;\n// pub use self::bulk_config::BulkConfig;\n// pub mod categorical_score;\n// pub use self::categorical_score::CategoricalScore;\n// pub mod categorical_score_v1;\n// pub use self::categorical_score_v1::CategoricalScoreV1;\n// pub mod chat_message;\n// pub use self::chat_message::ChatMessage;\n// pub mod chat_message_with_placeholders;\n// pub use self::chat_message_with_placeholders::ChatMessageWithPlaceholders;\n// pub mod chat_message_with_placeholders_one_of;\n// pub use 
self::chat_message_with_placeholders_one_of::ChatMessageWithPlaceholdersOneOf;\n// pub mod chat_message_with_placeholders_one_of_1;\n// pub use self::chat_message_with_placeholders_one_of_1::ChatMessageWithPlaceholdersOneOf1;\n// pub mod chat_prompt;\n// pub use self::chat_prompt::ChatPrompt;\n// pub mod comment;\n// pub use self::comment::Comment;\n// pub mod comment_object_type;\n// pub use self::comment_object_type::CommentObjectType;\n// pub mod config_category;\n// pub use self::config_category::ConfigCategory;\n// pub mod create_annotation_queue_assignment_response;\n// pub use self::create_annotation_queue_assignment_response::CreateAnnotationQueueAssignmentResponse;\n// pub mod create_annotation_queue_item_request;\n// pub use self::create_annotation_queue_item_request::CreateAnnotationQueueItemRequest;\n// pub mod create_annotation_queue_request;\n// pub use self::create_annotation_queue_request::CreateAnnotationQueueRequest;\n// pub mod create_chat_prompt_request;\n// pub use self::create_chat_prompt_request::CreateChatPromptRequest;\n// pub mod create_comment_request;\n// pub use self::create_comment_request::CreateCommentRequest;\n// pub mod create_comment_response;\n// pub use self::create_comment_response::CreateCommentResponse;\n// pub mod create_dataset_item_request;\n// pub use self::create_dataset_item_request::CreateDatasetItemRequest;\n// pub mod create_dataset_request;\n// pub use self::create_dataset_request::CreateDatasetRequest;\n// pub mod create_dataset_run_item_request;\n// pub use self::create_dataset_run_item_request::CreateDatasetRunItemRequest;\npub mod create_event_body;\npub use self::create_event_body::CreateEventBody;\n// pub mod create_event_event;\n// pub use self::create_event_event::CreateEventEvent;\npub mod create_generation_body;\npub use self::create_generation_body::CreateGenerationBody;\n// pub mod create_generation_event;\n// pub use self::create_generation_event::CreateGenerationEvent;\n// pub mod 
create_model_request;\n// pub use self::create_model_request::CreateModelRequest;\n// pub mod create_observation_event;\n// pub use self::create_observation_event::CreateObservationEvent;\n// pub mod create_prompt_request;\n// pub use self::create_prompt_request::CreatePromptRequest;\n// pub mod create_prompt_request_one_of;\n// pub use self::create_prompt_request_one_of::CreatePromptRequestOneOf;\n// pub mod create_prompt_request_one_of_1;\n// pub use self::create_prompt_request_one_of_1::CreatePromptRequestOneOf1;\n// pub mod create_score_config_request;\n// pub use self::create_score_config_request::CreateScoreConfigRequest;\n// pub mod create_score_request;\n// pub use self::create_score_request::CreateScoreRequest;\n// pub mod create_score_response;\n// pub use self::create_score_response::CreateScoreResponse;\npub mod create_score_value;\npub use self::create_score_value::CreateScoreValue;\npub mod create_span_body;\npub use self::create_span_body::CreateSpanBody;\n// pub mod create_span_event;\n// pub use self::create_span_event::CreateSpanEvent;\n// pub mod create_text_prompt_request;\n// pub use self::create_text_prompt_request::CreateTextPromptRequest;\n// pub mod dataset;\n// pub use self::dataset::Dataset;\n// pub mod dataset_item;\n// pub use self::dataset_item::DatasetItem;\n// pub mod dataset_run;\n// pub use self::dataset_run::DatasetRun;\n// pub mod dataset_run_item;\n// pub use self::dataset_run_item::DatasetRunItem;\n// pub mod dataset_run_with_items;\n// pub use self::dataset_run_with_items::DatasetRunWithItems;\n// pub mod dataset_status;\n// pub use self::dataset_status::DatasetStatus;\n// pub mod delete_annotation_queue_assignment_response;\n// pub use self::delete_annotation_queue_assignment_response::DeleteAnnotationQueueAssignmentResponse;\n// pub mod delete_annotation_queue_item_response;\n// pub use self::delete_annotation_queue_item_response::DeleteAnnotationQueueItemResponse;\n// pub mod delete_dataset_item_response;\n// pub use 
self::delete_dataset_item_response::DeleteDatasetItemResponse;\n// pub mod delete_dataset_run_response;\n// pub use self::delete_dataset_run_response::DeleteDatasetRunResponse;\n// pub mod delete_membership_request;\n// pub use self::delete_membership_request::DeleteMembershipRequest;\n// pub mod delete_trace_response;\n// pub use self::delete_trace_response::DeleteTraceResponse;\n// pub mod filter_config;\n// pub use self::filter_config::FilterConfig;\n// pub mod get_comments_response;\n// pub use self::get_comments_response::GetCommentsResponse;\n// pub mod get_media_response;\n// pub use self::get_media_response::GetMediaResponse;\n// pub mod get_media_upload_url_request;\n// pub use self::get_media_upload_url_request::GetMediaUploadUrlRequest;\n// pub mod get_media_upload_url_response;\n// pub use self::get_media_upload_url_response::GetMediaUploadUrlResponse;\n// pub mod get_scores_response;\n// pub use self::get_scores_response::GetScoresResponse;\n// pub mod get_scores_response_data;\n// pub use self::get_scores_response_data::GetScoresResponseData;\n// pub mod get_scores_response_data_boolean;\n// pub use self::get_scores_response_data_boolean::GetScoresResponseDataBoolean;\n// pub mod get_scores_response_data_categorical;\n// pub use self::get_scores_response_data_categorical::GetScoresResponseDataCategorical;\n// pub mod get_scores_response_data_numeric;\n// pub use self::get_scores_response_data_numeric::GetScoresResponseDataNumeric;\n// pub mod get_scores_response_data_one_of;\n// pub use self::get_scores_response_data_one_of::GetScoresResponseDataOneOf;\n// pub mod get_scores_response_data_one_of_1;\n// pub use self::get_scores_response_data_one_of_1::GetScoresResponseDataOneOf1;\n// pub mod get_scores_response_data_one_of_2;\n// pub use self::get_scores_response_data_one_of_2::GetScoresResponseDataOneOf2;\n// pub mod get_scores_response_trace_data;\n// pub use self::get_scores_response_trace_data::GetScoresResponseTraceData;\n// pub mod 
health_response;\n// pub use self::health_response::HealthResponse;\npub mod ingestion_batch_request;\npub use self::ingestion_batch_request::IngestionBatchRequest;\npub mod ingestion_error;\npub use self::ingestion_error::IngestionError;\npub mod ingestion_event;\npub use self::ingestion_event::IngestionEvent;\npub mod ingestion_event_one_of;\npub use self::ingestion_event_one_of::TraceCreate;\npub mod ingestion_event_one_of_1;\npub use self::ingestion_event_one_of_1::ScoreCreate;\npub mod ingestion_event_one_of_2;\npub use self::ingestion_event_one_of_2::SpanCreate;\npub mod ingestion_event_one_of_3;\npub use self::ingestion_event_one_of_3::SpanUpdate;\npub mod ingestion_event_one_of_4;\npub use self::ingestion_event_one_of_4::GenerationCreate;\npub mod ingestion_event_one_of_5;\npub use self::ingestion_event_one_of_5::GenerationUpdate;\npub mod ingestion_event_one_of_6;\npub use self::ingestion_event_one_of_6::EventCreate;\npub mod ingestion_event_one_of_7;\npub use self::ingestion_event_one_of_7::SdkLog;\npub mod ingestion_event_one_of_8;\npub use self::ingestion_event_one_of_8::ObservationCreate;\npub mod ingestion_event_one_of_9;\npub use self::ingestion_event_one_of_9::ObservationUpdate;\npub mod ingestion_response;\npub use self::ingestion_response::IngestionResponse;\npub mod ingestion_success;\npub use self::ingestion_success::IngestionSuccess;\npub mod ingestion_usage;\npub use self::ingestion_usage::IngestionUsage;\n// pub mod llm_adapter;\n// pub use self::llm_adapter::LlmAdapter;\n// pub mod llm_connection;\n// pub use self::llm_connection::LlmConnection;\npub mod map_value;\npub use self::map_value::MapValue;\n// pub mod media_content_type;\n// pub use self::media_content_type::MediaContentType;\n// pub mod membership_deletion_response;\n// pub use self::membership_deletion_response::MembershipDeletionResponse;\n// pub mod membership_request;\n// pub use self::membership_request::MembershipRequest;\n// pub mod membership_response;\n// pub use 
self::membership_response::MembershipResponse;\n// pub mod membership_role;\n// pub use self::membership_role::MembershipRole;\n// pub mod memberships_response;\n// pub use self::memberships_response::MembershipsResponse;\n// pub mod metrics_response;\n// pub use self::metrics_response::MetricsResponse;\n// pub mod model;\n// pub use self::model::Model;\n// pub mod model_price;\n// pub use self::model_price::ModelPrice;\npub mod model_usage_unit;\npub use self::model_usage_unit::ModelUsageUnit;\n// pub mod numeric_score;\n// pub use self::numeric_score::NumericScore;\n// pub mod numeric_score_v1;\n// pub use self::numeric_score_v1::NumericScoreV1;\n// pub mod observation;\n// pub use self::observation::Observation;\npub mod observation_body;\npub use self::observation_body::ObservationBody;\npub mod observation_level;\npub use self::observation_level::ObservationLevel;\npub mod observation_type;\npub use self::observation_type::ObservationType;\n// pub mod observations;\n// pub use self::observations::Observations;\n// pub mod observations_view;\n// pub use self::observations_view::ObservationsView;\n// pub mod observations_views;\n// pub use self::observations_views::ObservationsViews;\npub mod open_ai_completion_usage_schema;\npub use self::open_ai_completion_usage_schema::OpenAiCompletionUsageSchema;\npub mod open_ai_response_usage_schema;\npub use self::open_ai_response_usage_schema::OpenAiResponseUsageSchema;\npub mod open_ai_usage;\npub use self::open_ai_usage::OpenAiUsage;\npub mod optional_observation_body;\n// pub mod organization_project;\n// pub use self::organization_project::OrganizationProject;\n// pub mod organization_projects_response;\n// pub use self::organization_projects_response::OrganizationProjectsResponse;\n// pub mod paginated_annotation_queue_items;\n// pub use self::paginated_annotation_queue_items::PaginatedAnnotationQueueItems;\n// pub mod paginated_annotation_queues;\n// pub use 
self::paginated_annotation_queues::PaginatedAnnotationQueues;\n// pub mod paginated_dataset_items;\n// pub use self::paginated_dataset_items::PaginatedDatasetItems;\n// pub mod paginated_dataset_run_items;\n// pub use self::paginated_dataset_run_items::PaginatedDatasetRunItems;\n// pub mod paginated_dataset_runs;\n// pub use self::paginated_dataset_runs::PaginatedDatasetRuns;\n// pub mod paginated_datasets;\n// pub use self::paginated_datasets::PaginatedDatasets;\n// pub mod paginated_llm_connections;\n// pub use self::paginated_llm_connections::PaginatedLlmConnections;\n// pub mod paginated_models;\n// pub use self::paginated_models::PaginatedModels;\n// pub mod paginated_sessions;\n// pub use self::paginated_sessions::PaginatedSessions;\n// pub mod patch_media_body;\n// pub use self::patch_media_body::PatchMediaBody;\n// pub mod placeholder_message;\n// pub use self::placeholder_message::PlaceholderMessage;\n// pub mod project;\n// pub use self::project::Project;\n// pub mod project_deletion_response;\n// pub use self::project_deletion_response::ProjectDeletionResponse;\n// pub mod projects;\n// pub use self::projects::Projects;\n// pub mod projects_create_api_key_request;\n// pub use self::projects_create_api_key_request::ProjectsCreateApiKeyRequest;\n// pub mod projects_create_request;\n// pub use self::projects_create_request::ProjectsCreateRequest;\n// pub mod prompt;\n// pub use self::prompt::Prompt;\n// pub mod prompt_meta;\n// pub use self::prompt_meta::PromptMeta;\n// pub mod prompt_meta_list_response;\n// pub use self::prompt_meta_list_response::PromptMetaListResponse;\n// pub mod prompt_one_of;\n// pub use self::prompt_one_of::PromptOneOf;\n// pub mod prompt_one_of_1;\n// pub use self::prompt_one_of_1::PromptOneOf1;\n// pub mod prompt_version_update_request;\n// pub use self::prompt_version_update_request::PromptVersionUpdateRequest;\n// pub mod resource_meta;\n// pub use self::resource_meta::ResourceMeta;\n// pub mod resource_type;\n// pub use 
self::resource_type::ResourceType;\n// pub mod resource_types_response;\n// pub use self::resource_types_response::ResourceTypesResponse;\n// pub mod schema_extension;\n// pub use self::schema_extension::SchemaExtension;\n// pub mod schema_resource;\n// pub use self::schema_resource::SchemaResource;\n// pub mod schemas_response;\n// pub use self::schemas_response::SchemasResponse;\n// pub mod scim_create_user_request;\n// pub use self::scim_create_user_request::ScimCreateUserRequest;\n// pub mod scim_email;\n// pub use self::scim_email::ScimEmail;\n// pub mod scim_feature_support;\n// pub use self::scim_feature_support::ScimFeatureSupport;\n// pub mod scim_name;\n// pub use self::scim_name::ScimName;\n// pub mod scim_user;\n// pub use self::scim_user::ScimUser;\n// pub mod scim_users_list_response;\n// pub use self::scim_users_list_response::ScimUsersListResponse;\n// pub mod score;\n// pub use self::score::Score;\npub mod score_body;\npub use self::score_body::ScoreBody;\n// pub mod score_config;\n// pub use self::score_config::ScoreConfig;\n// pub mod score_configs;\n// pub use self::score_configs::ScoreConfigs;\npub mod score_data_type;\npub use self::score_data_type::ScoreDataType;\n// pub mod score_event;\n// pub use self::score_event::ScoreEvent;\n// pub mod score_one_of;\n// pub use self::score_one_of::ScoreOneOf;\n// pub mod score_one_of_1;\n// pub use self::score_one_of_1::ScoreOneOf1;\n// pub mod score_one_of_2;\n// pub use self::score_one_of_2::ScoreOneOf2;\n// pub mod score_source;\n// pub use self::score_source::ScoreSource;\n// pub mod score_v1;\n// pub use self::score_v1::ScoreV1;\n// pub mod score_v1_one_of;\n// pub use self::score_v1_one_of::ScoreV1OneOf;\n// pub mod score_v1_one_of_1;\n// pub use self::score_v1_one_of_1::ScoreV1OneOf1;\n// pub mod score_v1_one_of_2;\n// pub use self::score_v1_one_of_2::ScoreV1OneOf2;\npub mod sdk_log_body;\npub use self::sdk_log_body::SdkLogBody;\n// pub mod sdk_log_event;\n// pub use 
self::sdk_log_event::SdkLogEvent;\n// pub mod service_provider_config;\n// pub use self::service_provider_config::ServiceProviderConfig;\n// pub mod session;\n// pub use self::session::Session;\n// pub mod session_with_traces;\n// pub use self::session_with_traces::SessionWithTraces;\n// pub mod sort;\n// pub use self::sort::Sort;\n// pub mod text_prompt;\n// pub use self::text_prompt::TextPrompt;\n// pub mod trace;\n// pub use self::trace::Trace;\npub mod trace_body;\npub use self::trace_body::TraceBody;\n// pub mod trace_delete_multiple_request;\n// pub use self::trace_delete_multiple_request::TraceDeleteMultipleRequest;\n// pub mod trace_event;\n// pub use self::trace_event::TraceEvent;\n// pub mod trace_with_details;\n// pub use self::trace_with_details::TraceWithDetails;\n// pub mod trace_with_full_details;\n// pub use self::trace_with_full_details::TraceWithFullDetails;\n// pub mod traces;\n// pub use self::traces::Traces;\n// pub mod update_annotation_queue_item_request;\n// pub use self::update_annotation_queue_item_request::UpdateAnnotationQueueItemRequest;\n// pub mod update_event_body;\n// pub use self::update_event_body::UpdateEventBody;\npub mod update_generation_body;\npub use self::update_generation_body::UpdateGenerationBody;\n// pub mod update_generation_event;\n// pub use self::update_generation_event::UpdateGenerationEvent;\n// pub mod update_observation_event;\n// pub use self::update_observation_event::UpdateObservationEvent;\npub mod update_span_body;\npub use self::update_span_body::UpdateSpanBody;\n// pub mod update_span_event;\n// pub use self::update_span_event::UpdateSpanEvent;\n// pub mod upsert_llm_connection_request;\n// pub use self::upsert_llm_connection_request::UpsertLlmConnectionRequest;\npub mod usage;\npub use self::usage::Usage;\npub mod usage_details;\npub use self::usage_details::UsageDetails;\n// pub mod user_meta;\n// pub use self::user_meta::UserMeta;\n// pub mod utils_meta_response;\n// pub use 
self::utils_meta_response::UtilsMetaResponse;\n"
  },
  {
    "path": "swiftide-langfuse/src/models/model_usage_unit.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n/// `ModelUsageUnit` : Unit of usage in Langfuse\n/// Unit of usage in Langfuse\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum ModelUsageUnit {\n    #[serde(rename = \"CHARACTERS\")]\n    #[default]\n    Characters,\n    #[serde(rename = \"TOKENS\")]\n    Tokens,\n    #[serde(rename = \"MILLISECONDS\")]\n    Milliseconds,\n    #[serde(rename = \"SECONDS\")]\n    Seconds,\n    #[serde(rename = \"IMAGES\")]\n    Images,\n    #[serde(rename = \"REQUESTS\")]\n    Requests,\n}\n\nimpl std::fmt::Display for ModelUsageUnit {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match self {\n            Self::Characters => write!(f, \"CHARACTERS\"),\n            Self::Tokens => write!(f, \"TOKENS\"),\n            Self::Milliseconds => write!(f, \"MILLISECONDS\"),\n            Self::Seconds => write!(f, \"SECONDS\"),\n            Self::Images => write!(f, \"IMAGES\"),\n            Self::Requests => write!(f, \"REQUESTS\"),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/observation_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct ObservationBody {\n    #[serde(\n        rename = \"id\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub id: Option<Option<String>>,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(rename = \"type\")]\n    pub r#type: models::ObservationType,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"endTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub end_time: Option<Option<String>>,\n    #[serde(\n        rename = \"completionStartTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        
skip_serializing_if = \"Option::is_none\"\n    )]\n    pub completion_start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"model\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub model: Option<Option<String>>,\n    #[serde(\n        rename = \"modelParameters\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub model_parameters: Option<Option<std::collections::HashMap<String, models::MapValue>>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"usage\", skip_serializing_if = \"Option::is_none\")]\n    pub usage: Option<Box<models::Usage>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    
#[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n\nimpl ObservationBody {\n    pub fn new(r#type: models::ObservationType) -> ObservationBody {\n        ObservationBody {\n            id: None,\n            trace_id: None,\n            r#type,\n            name: None,\n            start_time: None,\n            end_time: None,\n            completion_start_time: None,\n            model: None,\n            model_parameters: None,\n            input: None,\n            version: None,\n            metadata: None,\n            output: None,\n            usage: None,\n            level: None,\n            status_message: None,\n            parent_observation_id: None,\n            environment: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/observation_level.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum ObservationLevel {\n    #[serde(rename = \"DEBUG\")]\n    #[default]\n    Debug,\n    #[serde(rename = \"DEFAULT\")]\n    Default,\n    #[serde(rename = \"WARNING\")]\n    Warning,\n    #[serde(rename = \"ERROR\")]\n    Error,\n}\n\nimpl std::fmt::Display for ObservationLevel {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match self {\n            Self::Debug => write!(f, \"DEBUG\"),\n            Self::Default => write!(f, \"DEFAULT\"),\n            Self::Warning => write!(f, \"WARNING\"),\n            Self::Error => write!(f, \"ERROR\"),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/observation_type.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum ObservationType {\n    #[serde(rename = \"SPAN\")]\n    #[default]\n    Span,\n    #[serde(rename = \"GENERATION\")]\n    Generation,\n    #[serde(rename = \"EVENT\")]\n    Event,\n    #[serde(rename = \"AGENT\")]\n    Agent,\n    #[serde(rename = \"TOOL\")]\n    Tool,\n    #[serde(rename = \"CHAIN\")]\n    Chain,\n    #[serde(rename = \"RETRIEVER\")]\n    Retriever,\n    #[serde(rename = \"EVALUATOR\")]\n    Evaluator,\n    #[serde(rename = \"EMBEDDING\")]\n    Embedding,\n    #[serde(rename = \"GUARDRAIL\")]\n    Guardrail,\n}\n\nimpl std::fmt::Display for ObservationType {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match self {\n            Self::Span => write!(f, \"SPAN\"),\n            Self::Generation => write!(f, \"GENERATION\"),\n            Self::Event => write!(f, \"EVENT\"),\n            Self::Agent => write!(f, \"AGENT\"),\n            Self::Tool => write!(f, \"TOOL\"),\n            Self::Chain => write!(f, \"CHAIN\"),\n            Self::Retriever => write!(f, \"RETRIEVER\"),\n            Self::Evaluator => write!(f, \"EVALUATOR\"),\n            Self::Embedding => write!(f, \"EMBEDDING\"),\n            Self::Guardrail => write!(f, \"GUARDRAIL\"),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/open_ai_completion_usage_schema.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n/// `OpenAiCompletionUsageSchema` : `OpenAI` Usage schema from (Chat-)Completion APIs\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct OpenAiCompletionUsageSchema {\n    #[serde(rename = \"prompt_tokens\")]\n    pub prompt_tokens: i32,\n    #[serde(rename = \"completion_tokens\")]\n    pub completion_tokens: i32,\n    #[serde(rename = \"total_tokens\")]\n    pub total_tokens: i32,\n    #[serde(\n        rename = \"prompt_tokens_details\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub prompt_tokens_details: Option<Option<std::collections::HashMap<String, i32>>>,\n    #[serde(\n        rename = \"completion_tokens_details\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub completion_tokens_details: Option<Option<std::collections::HashMap<String, i32>>>,\n}\n\nimpl OpenAiCompletionUsageSchema {\n    /// `OpenAI` Usage schema from (Chat-)Completion APIs\n    pub fn new(\n        prompt_tokens: i32,\n        completion_tokens: i32,\n        total_tokens: i32,\n    ) -> OpenAiCompletionUsageSchema {\n        OpenAiCompletionUsageSchema {\n            prompt_tokens,\n            completion_tokens,\n            total_tokens,\n            prompt_tokens_details: None,\n            
completion_tokens_details: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/open_ai_response_usage_schema.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n/// `OpenAiResponseUsageSchema` : `OpenAI` Usage schema from Response API\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct OpenAiResponseUsageSchema {\n    #[serde(rename = \"input_tokens\")]\n    pub input_tokens: i32,\n    #[serde(rename = \"output_tokens\")]\n    pub output_tokens: i32,\n    #[serde(rename = \"total_tokens\")]\n    pub total_tokens: i32,\n    #[serde(\n        rename = \"input_tokens_details\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input_tokens_details: Option<Option<std::collections::HashMap<String, i32>>>,\n    #[serde(\n        rename = \"output_tokens_details\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output_tokens_details: Option<Option<std::collections::HashMap<String, i32>>>,\n}\n\nimpl OpenAiResponseUsageSchema {\n    /// `OpenAI` Usage schema from Response API\n    pub fn new(\n        input_tokens: i32,\n        output_tokens: i32,\n        total_tokens: i32,\n    ) -> OpenAiResponseUsageSchema {\n        OpenAiResponseUsageSchema {\n            input_tokens,\n            output_tokens,\n            total_tokens,\n            input_tokens_details: None,\n            output_tokens_details: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/open_ai_usage.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n/// `OpenAiUsage` : Usage interface of `OpenAI` for improved compatibility.\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct OpenAiUsage {\n    #[serde(\n        rename = \"promptTokens\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub prompt_tokens: Option<Option<i32>>,\n    #[serde(\n        rename = \"completionTokens\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub completion_tokens: Option<Option<i32>>,\n    #[serde(\n        rename = \"totalTokens\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub total_tokens: Option<Option<i32>>,\n}\n\nimpl OpenAiUsage {\n    /// Usage interface of `OpenAI` for improved compatibility.\n    pub fn new() -> OpenAiUsage {\n        OpenAiUsage {\n            prompt_tokens: None,\n            completion_tokens: None,\n            total_tokens: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/optional_observation_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[allow(dead_code)]\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct OptionalObservationBody {\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = 
\"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    #[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/score_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct ScoreBody {\n    #[serde(\n        rename = \"id\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub id: Option<Option<String>>,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"sessionId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub session_id: Option<Option<String>>,\n    #[serde(\n        rename = \"observationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"datasetRunId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub dataset_run_id: Option<Option<String>>,\n    #[serde(rename = \"name\")]\n    pub name: String,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        
skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n    #[serde(rename = \"value\")]\n    pub value: Box<models::CreateScoreValue>,\n    #[serde(\n        rename = \"comment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub comment: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"dataType\", skip_serializing_if = \"Option::is_none\")]\n    pub data_type: Option<models::ScoreDataType>,\n    /// Reference a score config on a score. When set, the score name must equal the config name\n    /// and scores must comply with the config's range and data type. For categorical scores, the\n    /// value must map to a config category. Numeric scores might be constrained by the score\n    /// config's max and min values\n    #[serde(\n        rename = \"configId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub config_id: Option<Option<String>>,\n}\n\nimpl ScoreBody {\n    pub fn new(name: String, value: models::CreateScoreValue) -> ScoreBody {\n        ScoreBody {\n            id: None,\n            trace_id: None,\n            session_id: None,\n            observation_id: None,\n            dataset_run_id: None,\n            name,\n            environment: None,\n            value: Box::new(value),\n            comment: None,\n            metadata: None,\n            data_type: None,\n            config_id: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/score_data_type.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(\n    Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, Default,\n)]\npub enum ScoreDataType {\n    #[serde(rename = \"NUMERIC\")]\n    #[default]\n    Numeric,\n    #[serde(rename = \"BOOLEAN\")]\n    Boolean,\n    #[serde(rename = \"CATEGORICAL\")]\n    Categorical,\n}\n\nimpl std::fmt::Display for ScoreDataType {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match self {\n            Self::Numeric => write!(f, \"NUMERIC\"),\n            Self::Boolean => write!(f, \"BOOLEAN\"),\n            Self::Categorical => write!(f, \"CATEGORICAL\"),\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/sdk_log_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct SdkLogBody {\n    #[serde(rename = \"log\", deserialize_with = \"Option::deserialize\")]\n    pub log: Option<serde_json::Value>,\n}\n\nimpl SdkLogBody {\n    pub fn new(log: Option<serde_json::Value>) -> SdkLogBody {\n        SdkLogBody { log }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/trace_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct TraceBody {\n    #[serde(\n        rename = \"id\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub id: Option<Option<String>>,\n    #[serde(\n        rename = \"timestamp\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub timestamp: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"userId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub user_id: Option<Option<String>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(\n       
 rename = \"sessionId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub session_id: Option<Option<String>>,\n    #[serde(\n        rename = \"release\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub release: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"tags\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub tags: Option<Option<Vec<String>>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n    /// Make trace publicly accessible via url\n    #[serde(\n        rename = \"public\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub public: Option<Option<bool>>,\n}\n\nimpl TraceBody {\n    pub fn new() -> TraceBody {\n        TraceBody {\n            id: None,\n            timestamp: None,\n            name: None,\n            user_id: None,\n            input: None,\n            output: None,\n            session_id: None,\n            release: None,\n            version: None,\n            metadata: None,\n            tags: None,\n            environment: None,\n          
  public: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/update_generation_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct UpdateGenerationBody {\n    #[serde(\n        rename = \"completionStartTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub completion_start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"model\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub model: Option<Option<String>>,\n    #[serde(\n        rename = \"modelParameters\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub model_parameters: Option<Option<std::collections::HashMap<String, models::MapValue>>>,\n    #[serde(rename = \"usage\", skip_serializing_if = \"Option::is_none\")]\n    pub usage: Option<Box<models::IngestionUsage>>,\n    #[serde(\n        rename = \"promptName\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub prompt_name: Option<Option<String>>,\n    #[serde(rename = \"usageDetails\", skip_serializing_if = \"Option::is_none\")]\n    pub usage_details: Option<Box<models::UsageDetails>>,\n    #[serde(\n        rename = \"costDetails\",\n        
default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub cost_details: Option<Option<std::collections::HashMap<String, f64>>>,\n    #[serde(\n        rename = \"promptVersion\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub prompt_version: Option<Option<i32>>,\n    #[serde(\n        rename = \"endTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub end_time: Option<Option<String>>,\n    #[serde(rename = \"id\")]\n    pub id: String,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    
pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    #[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n\nimpl UpdateGenerationBody {\n    pub fn new(id: String) -> UpdateGenerationBody {\n        UpdateGenerationBody {\n            completion_start_time: None,\n            model: None,\n            model_parameters: None,\n            usage: None,\n            prompt_name: None,\n            usage_details: None,\n            cost_details: None,\n            prompt_version: None,\n            end_time: None,\n            id,\n            trace_id: None,\n            name: None,\n            start_time: None,\n            metadata: None,\n            input: None,\n            output: None,\n            level: None,\n            status_message: None,\n            parent_observation_id: None,\n            version: None,\n            environment: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/update_span_body.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct UpdateSpanBody {\n    #[serde(\n        rename = \"endTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub end_time: Option<Option<String>>,\n    #[serde(rename = \"id\")]\n    pub id: String,\n    #[serde(\n        rename = \"traceId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub trace_id: Option<Option<String>>,\n    #[serde(\n        rename = \"name\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub name: Option<Option<String>>,\n    #[serde(\n        rename = \"startTime\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub start_time: Option<Option<String>>,\n    #[serde(\n        rename = \"metadata\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub metadata: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = 
\"Option::is_none\"\n    )]\n    pub input: Option<Option<serde_json::Value>>,\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<serde_json::Value>>,\n    #[serde(rename = \"level\", skip_serializing_if = \"Option::is_none\")]\n    pub level: Option<models::ObservationLevel>,\n    #[serde(\n        rename = \"statusMessage\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub status_message: Option<Option<String>>,\n    #[serde(\n        rename = \"parentObservationId\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub parent_observation_id: Option<Option<String>>,\n    #[serde(\n        rename = \"version\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub version: Option<Option<String>>,\n    #[serde(\n        rename = \"environment\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub environment: Option<Option<String>>,\n}\n\nimpl UpdateSpanBody {\n    pub fn new(id: String) -> UpdateSpanBody {\n        UpdateSpanBody {\n            end_time: None,\n            id,\n            trace_id: None,\n            name: None,\n            start_time: None,\n            metadata: None,\n            input: None,\n            output: None,\n            level: None,\n            status_message: None,\n            parent_observation_id: None,\n            version: None,\n            environment: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/usage.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n/// Usage : (Deprecated. Use usageDetails and costDetails instead.) Standard interface for usage and\n/// cost\n#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]\npub struct Usage {\n    /// Number of input units (e.g. tokens)\n    #[serde(\n        rename = \"input\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input: Option<Option<i32>>,\n    /// Number of output units (e.g. 
tokens)\n    #[serde(\n        rename = \"output\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output: Option<Option<i32>>,\n    /// Defaults to input+output if not set\n    #[serde(\n        rename = \"total\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub total: Option<Option<i32>>,\n    #[serde(rename = \"unit\", skip_serializing_if = \"Option::is_none\")]\n    pub unit: Option<models::ModelUsageUnit>,\n    /// USD input cost\n    #[serde(\n        rename = \"inputCost\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub input_cost: Option<Option<f64>>,\n    /// USD output cost\n    #[serde(\n        rename = \"outputCost\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub output_cost: Option<Option<f64>>,\n    /// USD total cost, defaults to input+output\n    #[serde(\n        rename = \"totalCost\",\n        default,\n        with = \"::serde_with::rust::double_option\",\n        skip_serializing_if = \"Option::is_none\"\n    )]\n    pub total_cost: Option<Option<f64>>,\n}\n\nimpl Usage {\n    /// (Deprecated. Use usageDetails and costDetails instead.) 
Standard interface for usage and\n    /// cost\n    pub fn new() -> Usage {\n        Usage {\n            input: None,\n            output: None,\n            total: None,\n            unit: None,\n            input_cost: None,\n            output_cost: None,\n            total_cost: None,\n        }\n    }\n}\n\nimpl From<swiftide_core::chat_completion::Usage> for Usage {\n    fn from(value: swiftide_core::chat_completion::Usage) -> Self {\n        Usage {\n            input: Some(Some(value.prompt_tokens as i32)),\n            output: Some(Some(value.completion_tokens as i32)),\n            total: Some(Some(value.total_tokens as i32)),\n            unit: Some(models::ModelUsageUnit::Tokens),\n            input_cost: None,\n            output_cost: None,\n            total_cost: None,\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/models/usage_details.rs",
    "content": "// langfuse\n//\n// ## Authentication  Authenticate with the API using [Basic Auth](https://en.wikipedia.org/wiki/Basic_access_authentication), get API keys in the project settings:  - username: Langfuse Public Key - password: Langfuse Secret Key  ## Exports  - OpenAPI spec: https://cloud.langfuse.com/generated/api/openapi.yml - Postman collection: https://cloud.langfuse.com/generated/postman/collection.json\n//\n// The version of the OpenAPI document:\n//\n// Generated by: https://openapi-generator.tech\n\nuse crate::models;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]\n#[serde(untagged)]\npub enum UsageDetails {\n    Object(std::collections::HashMap<String, i32>),\n    OpenAiCompletionUsageSchema(Box<models::OpenAiCompletionUsageSchema>),\n    OpenAiResponseUsageSchema(Box<models::OpenAiResponseUsageSchema>),\n}\n\nimpl Default for UsageDetails {\n    fn default() -> Self {\n        Self::Object(Default::default())\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/src/tracing_layer.rs",
    "content": "use anyhow::Context as _;\nuse chrono::Utc;\nuse reqwest::Client;\nuse serde_json::Value;\nuse std::collections::HashMap;\nuse std::str::FromStr as _;\nuse std::sync::Arc;\nuse std::{env, fmt};\nuse tokio::sync::Mutex;\nuse tracing::field::{Field, Visit};\nuse tracing::{Event, Id, Level, Metadata, Subscriber, span};\nuse tracing_subscriber::Layer;\nuse tracing_subscriber::layer::Context;\nuse tracing_subscriber::registry::LookupSpan;\nuse uuid::Uuid;\n\nuse crate::langfuse_batch_manager::{BatchManagerTrait, LangfuseBatchManager};\nuse crate::models::{\n    IngestionEvent, ObservationBody, ObservationLevel, ObservationType, TraceBody,\n};\nuse crate::{Configuration, DEFAULT_LANGFUSE_URL};\n\n#[derive(Default, Debug, Clone)]\npub struct SpanData {\n    pub observation_id: String, // Langfuse requires ids to be UUID v4 strings\n    pub name: String,\n    pub start_time: String,\n    pub level: ObservationLevel,\n    pub metadata: serde_json::Map<String, Value>,\n    pub parent_span_id: Option<u64>,\n}\n\nimpl SpanData {\n    pub fn get<T>(&self, key: &str) -> Option<T>\n    where\n        T: serde::de::DeserializeOwned,\n    {\n        if let Some(value) = self.metadata.get(key) {\n            let parsed = serde_json::from_value(value.clone());\n            if let Err(e) = &parsed {\n                tracing::warn!(\n                    error.msg = %e,\n                    error.type = %std::any::type_name_of_val(e),\n                    key = %key,\n                    value = %value,\n                    \"[Langfuse] Failed to parse metadata field\"\n                );\n            }\n\n            return parsed.ok();\n        }\n        None\n    }\n\n    /// Returns metadata with all keys that do not start with \"langfuse.\"\n    #[must_use]\n    pub fn remaining_metadata(&self) -> Option<serde_json::Map<String, Value>> {\n        let mut metadata = self.metadata.clone();\n        metadata.retain(|k, _| !k.starts_with(\"langfuse.\"));\n\n        if 
metadata.is_empty() {\n            None\n        } else {\n            Some(metadata)\n        }\n    }\n}\n\nimpl From<serde_json::Map<String, Value>> for SpanData {\n    fn from(metadata: serde_json::Map<String, Value>) -> Self {\n        SpanData {\n            metadata,\n            ..Default::default()\n        }\n    }\n}\n\npub fn map_level(level: &Level) -> ObservationLevel {\n    use ObservationLevel::{Debug, Default, Error, Warning};\n    match *level {\n        Level::ERROR => Error,\n        Level::WARN => Warning,\n        Level::INFO => Default,\n        Level::DEBUG => Debug,\n        Level::TRACE => Debug,\n    }\n}\n\n#[derive(Debug)]\npub struct SpanTracker {\n    active_spans: HashMap<u64, (String, ObservationType)>,\n    current_trace_id: Option<String>,\n}\n\nimpl Default for SpanTracker {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl SpanTracker {\n    pub fn new() -> Self {\n        Self {\n            active_spans: HashMap::new(),\n            current_trace_id: None,\n        }\n    }\n\n    pub fn add_span(&mut self, span_id: u64, observation_id: String, ty: ObservationType) {\n        self.active_spans.insert(span_id, (observation_id, ty));\n    }\n\n    pub fn get_span(&self, span_id: u64) -> Option<&(String, ObservationType)> {\n        self.active_spans.get(&span_id)\n    }\n\n    pub fn remove_span(&mut self, span_id: u64) -> Option<(String, ObservationType)> {\n        self.active_spans.remove(&span_id)\n    }\n}\n\n#[derive(Clone)]\npub struct LangfuseLayer {\n    pub batch_manager: Box<dyn BatchManagerTrait>,\n    pub span_tracker: Arc<Mutex<SpanTracker>>,\n}\n\nfn observation_create_from(\n    trace_id: &str,\n    observation_id: &str,\n    span_data: &mut SpanData,\n    parent_observation_id: Option<String>,\n) -> IngestionEvent {\n    // Expect all langfuse values to be prefixed by \"langfuse.\"\n    // Extract the fields from the metadata\n\n    // Metadata is all values without a langfuse prefix\n    let 
metadata = span_data.remaining_metadata().map(Into::into);\n\n    let start_time = span_data\n        .get(\"langfuse.start_time\")\n        .unwrap_or(span_data.start_time.clone());\n\n    let name = span_data.get(\"otel.name\").unwrap_or(span_data.name.clone());\n    let swiftide_usage = span_data.get::<swiftide_core::chat_completion::Usage>(\"langfuse.usage\");\n\n    IngestionEvent::new_observation_create(ObservationBody {\n        id: Some(Some(observation_id.to_string())),\n        trace_id: Some(Some(trace_id.to_string())),\n        r#type: span_data\n            .get(\"langfuse.type\")\n            .unwrap_or(ObservationType::Span),\n        name: Some(Some(name)),\n        start_time: Some(Some(start_time)),\n        level: Some(span_data.level),\n        parent_observation_id: Some(parent_observation_id),\n        metadata: Some(metadata),\n        model: Some(span_data.get(\"langfuse.model\")),\n        model_parameters: Some(span_data.get(\"langfuse.model_parameters\")),\n        input: Some(span_data.get(\"langfuse.input\")),\n        version: Some(span_data.get(\"langfuse.version\")),\n        output: Some(span_data.get(\"langfuse.output\")),\n        usage: swiftide_usage.map(|u| Box::new(u.into())),\n        status_message: Some(span_data.get(\"langfuse.status_message\")),\n        environment: Some(span_data.get(\"langfuse.environment\")),\n\n        completion_start_time: None,\n        end_time: None,\n    })\n}\n\nimpl Default for LangfuseLayer {\n    fn default() -> Self {\n        let public_key = env::var(\"LANGFUSE_PUBLIC_KEY\")\n            .or_else(|_| env::var(\"LANGFUSE_INIT_PROJECT_PUBLIC_KEY\"))\n            .unwrap_or_default();\n\n        let secret_key = env::var(\"LANGFUSE_SECRET_KEY\")\n            .or_else(|_| env::var(\"LANGFUSE_INIT_PROJECT_SECRET_KEY\"))\n            .unwrap_or_default();\n\n        if public_key.is_empty() || secret_key.is_empty() {\n            panic!(\n                \"Public key or secret key not set. 
Please set LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables.\"\n            );\n        }\n\n        let base_url =\n            env::var(\"LANGFUSE_URL\").unwrap_or_else(|_| DEFAULT_LANGFUSE_URL.to_string());\n\n        let config = Configuration {\n            base_path: base_url.clone(),\n            user_agent: Some(\"swiftide\".to_string()),\n            client: Client::new(),\n            basic_auth: Some((public_key.clone(), Some(secret_key.clone()))),\n            ..Default::default()\n        };\n\n        let batch_manager = LangfuseBatchManager::new(config);\n\n        batch_manager.clone().spawn();\n\n        LangfuseLayer {\n            batch_manager: batch_manager.boxed(),\n            span_tracker: Arc::new(Mutex::new(SpanTracker::new())),\n        }\n    }\n}\nimpl LangfuseLayer {\n    // Builds the layer from an existing configuration\n    pub fn from_config(config: Configuration) -> Self {\n        let batch_manager = LangfuseBatchManager::new(config);\n\n        batch_manager.clone().spawn();\n\n        let span_tracker = Arc::new(Mutex::new(SpanTracker::new()));\n\n        Self {\n            batch_manager: batch_manager.boxed(),\n            span_tracker,\n        }\n    }\n    // Start the layer with a batch manager\n    //\n    // Note that the batch manager _must_ be started before using this layer.\n    pub fn from_batch_manager(batch_manager: &LangfuseBatchManager) -> Self {\n        let span_tracker = Arc::new(Mutex::new(SpanTracker::new()));\n\n        Self {\n            batch_manager: batch_manager.boxed(),\n            span_tracker,\n        }\n    }\n    pub async fn flush(&self) -> anyhow::Result<()> {\n        self.batch_manager\n            .flush()\n            .await\n            .context(\"Failed to flush\")?;\n\n        Ok(())\n    }\n\n    pub async fn handle_span(&self, span_id: u64, mut span_data: SpanData) {\n        let observation_id = span_data.observation_id.clone();\n\n        let langfuse_ty = 
span_data\n            .get(\"langfuse.type\")\n            .unwrap_or(ObservationType::Span);\n\n        {\n            let mut spans = self.span_tracker.lock().await;\n            spans.add_span(span_id, observation_id.clone(), langfuse_ty);\n        }\n\n        // Get parent ID if it exists\n        let parent_id = if let Some(parent_span_id) = span_data.parent_span_id {\n            let spans = self.span_tracker.lock().await;\n            spans.get_span(parent_span_id).cloned().map(|(id, _)| id)\n        } else {\n            None\n        };\n\n        let trace_id = self.ensure_trace_id().await;\n\n        // Create the span observation\n        let event = observation_create_from(&trace_id, &observation_id, &mut span_data, parent_id);\n\n        self.batch_manager.add_event(event).await;\n    }\n\n    pub async fn handle_span_close(&self, span_id: u64) {\n        let Some((observation_id, langfuse_type)) =\n            self.span_tracker.lock().await.remove_span(span_id)\n        else {\n            return;\n        };\n\n        let trace_id = self.ensure_trace_id().await;\n\n        let event = IngestionEvent::new_observation_update(ObservationBody {\n            id: Some(Some(observation_id.clone())),\n            r#type: langfuse_type,\n            trace_id: Some(Some(trace_id.clone())),\n            end_time: Some(Some(Utc::now().to_rfc3339())),\n            ..Default::default()\n        });\n        self.batch_manager.add_event(event).await;\n    }\n\n    pub async fn ensure_trace_id(&self) -> String {\n        let mut spans = self.span_tracker.lock().await;\n        if let Some(id) = spans.current_trace_id.clone() {\n            return id;\n        }\n\n        let trace_id = Uuid::new_v4().to_string();\n        spans.current_trace_id = Some(trace_id.clone());\n\n        let event = IngestionEvent::new_trace_create(TraceBody {\n            id: Some(Some(trace_id.clone())),\n            name: Some(Some(Utc::now().timestamp().to_string())),\n            
timestamp: Some(Some(Utc::now().to_rfc3339())),\n            public: Some(Some(false)),\n            ..Default::default()\n        });\n        self.batch_manager.add_event(event).await;\n\n        trace_id\n    }\n\n    pub async fn handle_record(&self, span_id: u64, metadata: serde_json::Map<String, Value>) {\n        let Some((observation_id, langfuse_type)) =\n            self.span_tracker.lock().await.get_span(span_id).cloned()\n        else {\n            return;\n        };\n\n        let trace_id = self.ensure_trace_id().await;\n        let metadata = SpanData::from(metadata);\n        let remaining = metadata.remaining_metadata().map(Into::into);\n        let swiftide_usage =\n            metadata.get::<swiftide_core::chat_completion::Usage>(\"langfuse.usage\");\n        let event = IngestionEvent::new_observation_update(ObservationBody {\n            id: Some(Some(observation_id.clone())),\n            trace_id: Some(Some(trace_id.clone())),\n            r#type: langfuse_type,\n            metadata: Some(remaining),\n            input: Some(metadata.get(\"langfuse.input\")),\n            output: Some(metadata.get(\"langfuse.output\")),\n            model: Some(metadata.get(\"langfuse.model\")),\n            model_parameters: Some(metadata.get(\"langfuse.model_parameters\")),\n            version: Some(metadata.get(\"langfuse.version\")),\n            usage: swiftide_usage.map(|u| Box::new(u.into())),\n            status_message: Some(metadata.get(\"langfuse.status_message\")),\n            environment: Some(metadata.get(\"langfuse.environment\")),\n            ..Default::default()\n        });\n\n        self.batch_manager.add_event(event).await;\n    }\n}\n\nimpl<S> Layer<S> for LangfuseLayer\nwhere\n    S: Subscriber + for<'a> LookupSpan<'a>,\n{\n    fn enabled(&self, _metadata: &Metadata<'_>, _ctx: Context<'_, S>) -> bool {\n        // Enable this layer for all spans and events\n        true\n    }\n\n    fn on_new_span(&self, attrs: 
&span::Attributes<'_>, id: &span::Id, ctx: Context<'_, S>) {\n        let span_id = id.into_u64();\n\n        let parent_span_id = ctx\n            .span_scope(id)\n            .and_then(|mut scope| scope.nth(1))\n            .map(|parent| parent.id().into_u64());\n\n        let mut visitor = JsonVisitor::new();\n        attrs.record(&mut visitor);\n\n        let span_data = SpanData {\n            observation_id: Uuid::new_v4().to_string(),\n            name: attrs.metadata().name().to_string(),\n            start_time: Utc::now().to_rfc3339(),\n            level: map_level(attrs.metadata().level()),\n            metadata: visitor.recorded_fields,\n            parent_span_id,\n        };\n\n        let layer = self.clone();\n        tokio::spawn(async move { layer.handle_span(span_id, span_data).await });\n    }\n\n    fn on_close(&self, id: Id, _ctx: Context<'_, S>) {\n        let span_id = id.into_u64();\n        let layer = self.clone();\n\n        tokio::spawn(async move { layer.handle_span_close(span_id).await });\n    }\n\n    fn on_record(&self, span: &Id, values: &span::Record<'_>, _ctx: Context<'_, S>) {\n        let span_id = span.into_u64();\n        let mut visitor = JsonVisitor::new();\n        values.record(&mut visitor);\n        let metadata = visitor.recorded_fields;\n\n        if !metadata.is_empty() {\n            let layer = self.clone();\n            tokio::spawn(async move { layer.handle_record(span_id, metadata).await });\n        }\n    }\n\n    fn on_event(&self, event: &Event<'_>, ctx: Context<'_, S>) {\n        let mut visitor = JsonVisitor::new();\n        event.record(&mut visitor);\n        let metadata = visitor.recorded_fields;\n\n        if let Some(span_id) = ctx.lookup_current().map(|span| span.id().into_u64()) {\n            let layer = self.clone();\n            tokio::spawn(async move { layer.handle_record(span_id, metadata).await });\n        }\n    }\n}\n\n#[derive(Debug)]\nstruct JsonVisitor {\n    recorded_fields: 
serde_json::Map<String, Value>,\n}\n\nimpl JsonVisitor {\n    fn new() -> Self {\n        Self {\n            recorded_fields: serde_json::Map::new(),\n        }\n    }\n\n    fn insert_value(&mut self, field: &Field, value: Value) {\n        self.recorded_fields.insert(field.name().to_string(), value);\n    }\n}\n\nmacro_rules! record_field {\n    ($fn_name:ident, $type:ty) => {\n        fn $fn_name(&mut self, field: &Field, value: $type) {\n            self.insert_value(field, Value::from(value));\n        }\n    };\n}\n\nimpl Visit for JsonVisitor {\n    record_field!(record_i64, i64);\n    record_field!(record_u64, u64);\n    record_field!(record_bool, bool);\n\n    fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {\n        self.insert_value(field, Value::String(format!(\"{value:?}\")));\n    }\n\n    fn record_str(&mut self, field: &Field, value: &str) {\n        let value = Value::from_str(value).unwrap_or_else(|_| Value::String(value.to_string()));\n        self.insert_value(field, value);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use tokio::sync::Mutex;\n    use tracing::{Level, subscriber::set_global_default};\n    use tracing_subscriber::prelude::*;\n\n    #[derive(Clone)]\n    struct InMemoryBatchManager {\n        pub events: Arc<Mutex<Vec<crate::models::ingestion_event::IngestionEvent>>>,\n    }\n    #[async_trait::async_trait]\n    impl crate::langfuse_batch_manager::BatchManagerTrait for InMemoryBatchManager {\n        async fn add_event(&self, event: crate::models::ingestion_event::IngestionEvent) {\n            self.events.lock().await.push(event);\n        }\n        async fn flush(&self) -> anyhow::Result<()> {\n            Ok(())\n        }\n        fn boxed(&self) -> Box<dyn crate::langfuse_batch_manager::BatchManagerTrait + Send + Sync> {\n            Box::new(Self {\n                events: Arc::clone(&self.events),\n            })\n        }\n    }\n\n    #[test_log::test(tokio::test)]\n    async fn 
test_generation_span_fields_are_correct_and_single_observation_created() {\n        let events = Arc::new(Mutex::new(Vec::new()));\n        let batch_mgr = InMemoryBatchManager {\n            events: Arc::clone(&events),\n        };\n        let langfuse_layer = LangfuseLayer {\n            batch_manager: batch_mgr.boxed(),\n            span_tracker: Arc::new(Mutex::new(SpanTracker::new())),\n        };\n\n        let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::sink());\n        let subscriber = tracing_subscriber::Registry::default()\n            .with(langfuse_layer)\n            .with(\n                tracing_subscriber::fmt::layer()\n                    .with_writer(non_blocking)\n                    .with_test_writer(),\n            );\n\n        set_global_default(subscriber).unwrap();\n\n        let usage = swiftide_core::chat_completion::Usage {\n            prompt_tokens: 5,\n            completion_tokens: 9,\n            total_tokens: 14,\n            details: None,\n        };\n\n        // Start a GENERATION span, record fields, and drop/end.\n        {\n            let span = tracing::span!(\n                Level::INFO,\n                \"prompt\",\n                langfuse.type = \"GENERATION\",\n                langfuse.input = \"sample-in\",\n                langfuse.output = \"sample-out\",\n                langfuse.usage = serde_json::to_string(&usage).unwrap()\n\n            );\n            let _enter = span.enter();\n            // Span ends here (dropped)\n        }\n\n        // Allow async processing to complete\n        tokio::time::sleep(std::time::Duration::from_millis(200)).await;\n\n        let events = events.lock().await;\n        // There should be one observation create (and likely one trace, but we check for GENERATION\n        // only)\n        let generation_events: Vec<_> = events\n            .iter()\n            .filter(|e| {\n                matches!(\n                    e,\n                    
crate::models::ingestion_event::IngestionEvent::ObservationCreate(_)\n                )\n            })\n            .collect();\n\n        assert_eq!(generation_events.len(), 1);\n\n        if let crate::models::ingestion_event::IngestionEvent::ObservationCreate(obs) =\n            &generation_events[0]\n        {\n            let body = &obs.body;\n            assert_eq!(body.r#type, crate::models::ObservationType::Generation);\n            assert_eq!(body.input, Some(Some(\"sample-in\".into())));\n            assert_eq!(body.output, Some(Some(\"sample-out\".into())));\n            assert_eq!(\n                body.usage\n                    .as_ref()\n                    .map(|b| serde_json::to_value(&**b).unwrap()),\n                Some(serde_json::json!({\"input\": 5, \"output\": 9, \"total\": 14, \"unit\": \"TOKENS\"}))\n            );\n        } else {\n            panic!(\"Did not capture a GENERATION observation as expected\");\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-langfuse/tests/full_flow.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse reqwest::Client;\nuse swiftide_langfuse::{Configuration, LangfuseBatchManager, LangfuseLayer};\nuse tokio::task::yield_now;\nuse tracing::{Level, info, span};\nuse tracing_subscriber::{Registry, layer::SubscriberExt};\nuse wiremock::{\n    Mock, MockServer, ResponseTemplate,\n    matchers::{method, path},\n};\n\n#[test_log::test(tokio::test)]\nasync fn integration_tracing_layer_sends_to_langfuse() {\n    // Start Wiremock server\n    let mock_server = MockServer::start().await;\n\n    // Mock a successful ingestion response\n    let response = ResponseTemplate::new(200).set_body_raw(\n        r#\"{\"successes\":[{\"id\":\"abc\",\"status\":200}],\"errors\":[]}\"#,\n        \"application/json\",\n    );\n\n    let body = Arc::new(Mutex::new(None));\n    let body_clone = body.clone();\n\n    Mock::given(method(\"POST\"))\n        .and(path(\"/api/public/ingestion\"))\n        .respond_with(move |req: &wiremock::Request| {\n            let body_clone = body_clone.clone();\n            let body_str = String::from_utf8_lossy(&req.body).to_string();\n            let mut lock = body_clone.lock().unwrap();\n            *lock = Some(body_str);\n            response.clone()\n        })\n        .expect(1)\n        .mount(&mock_server)\n        .await;\n\n    // Prepare Langfuse config to point to the mock server\n    let config = Configuration {\n        base_path: mock_server.uri(),\n        user_agent: Some(\"integration-test\".into()),\n        client: Client::new(),\n        basic_auth: Some((\"PUBLIC\".into(), Some(\"SECRET\".into()))),\n        ..Default::default()\n    };\n\n    // Set up tracing layer\n    let batch_manager = LangfuseBatchManager::new(config);\n    let layer = LangfuseLayer::from_batch_manager(&batch_manager);\n\n    batch_manager.clone().spawn();\n\n    // Install subscriber and layer\n    let subscriber = Registry::default().with(layer);\n    tracing::subscriber::with_default(subscriber, || {\n     
   let span = span!(\n            Level::INFO,\n            \"test_span\",\n            \"langfuse.input\" = \"LANGFUSE INPUT\",\n            \"langfuse.output\" = \"LANGFUSE OUTPUT\",\n            \"langfuse.model\" = \"LANGFUSE MODEL\",\n            \"otel.name\" = \"OTEL.OVERWRITE\",\n            foo = 42\n        );\n        let _enter = span.enter();\n        info!(bar = \"baz\", \"Hello from integration test\");\n    });\n\n    // Give some time for the async tasks to run\n    yield_now().await;\n    // Force the flush as the batch manager is not dropped yet\n    batch_manager.flush().await.unwrap();\n\n    // Assert request received\n    mock_server.verify().await;\n\n    insta::with_settings!({\n        filters => vec![\n        // UUID v4/v5 pattern\n        (r#\"\"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\"\"#, r#\"\"<UUID>\"\"#),\n        // Improved ISO8601 datetime filter, matching both Z and offsets\n        (r#\"\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?(?:Z|[+-]\\d{2}:\\d{2})\"\"#, r#\"\"<TIMESTAMP>\"\"#),\n        // Unix timestamp (with optional ms)\n        (r#\"\"\\d{10,13}\"\"#, r#\"\"<UNIX_TIMESTAMP>\"\"#),\n        ]\n    }, {\n        insta::assert_snapshot!(body.lock().unwrap().as_ref().unwrap())\n    });\n}\n"
  },
  {
    "path": "swiftide-langfuse/tests/snapshots/full_flow__integration_tracing_layer_sends_to_langfuse.snap",
    "content": "---\nsource: swiftide-langfuse/tests/full_flow.rs\nexpression: body.lock().unwrap().as_ref().unwrap()\n---\n{\"batch\":[{\"body\":{\"id\":\"<UUID>\",\"timestamp\":\"<TIMESTAMP>\",\"name\":\"<UNIX_TIMESTAMP>\",\"public\":false},\"id\":\"<UUID>\",\"timestamp\":\"<TIMESTAMP>\",\"type\":\"trace-create\"},{\"body\":{\"id\":\"<UUID>\",\"traceId\":\"<UUID>\",\"type\":\"SPAN\",\"name\":\"OTEL.OVERWRITE\",\"startTime\":\"<TIMESTAMP>\",\"model\":\"LANGFUSE MODEL\",\"modelParameters\":null,\"input\":\"LANGFUSE INPUT\",\"version\":null,\"metadata\":{\"foo\":42,\"otel.name\":\"OTEL.OVERWRITE\"},\"output\":\"LANGFUSE OUTPUT\",\"level\":\"DEFAULT\",\"statusMessage\":null,\"parentObservationId\":null,\"environment\":null},\"id\":\"<UUID>\",\"timestamp\":\"<TIMESTAMP>\",\"type\":\"observation-create\"},{\"body\":{\"id\":\"<UUID>\",\"traceId\":\"<UUID>\",\"type\":\"SPAN\",\"model\":null,\"modelParameters\":null,\"input\":null,\"version\":null,\"metadata\":{\"bar\":\"baz\",\"message\":\"Hello from integration test\"},\"output\":null,\"statusMessage\":null,\"environment\":null},\"id\":\"<UUID>\",\"timestamp\":\"<TIMESTAMP>\",\"type\":\"observation-update\"},{\"body\":{\"id\":\"<UUID>\",\"traceId\":\"<UUID>\",\"type\":\"SPAN\",\"endTime\":\"<TIMESTAMP>\"},\"id\":\"<UUID>\",\"timestamp\":\"<TIMESTAMP>\",\"type\":\"observation-update\"}]}\n"
  },
  {
    "path": "swiftide-macros/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-macros\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[lib]\nproc-macro = true\n\n[dependencies]\nquote = { workspace = true }\nsyn = { workspace = true }\ndarling = { workspace = true }\nproc-macro2 = { workspace = true }\nconvert_case = { workspace = true }\n\n# Macro dependencies\nanyhow.workspace = true\nasync-trait.workspace = true\nserde = { workspace = true, optional = true }\nserde_json = { workspace = true, optional = true }\nschemars = { workspace = true, features = [\"derive\"] }\n\n[dev-dependencies]\npretty_assertions.workspace = true\nrustversion = \"1.0.18\"\ntrybuild = \"1.0\"\nprettyplease = \"0.2.25\"\ninsta.workspace = true\nswiftide = { path = \"../swiftide/\" }\nswiftide-core = { path = \"../swiftide-core/\" }\ntokio = { workspace = true, features = [\"full\"] }\n\n[lints]\nworkspace = true\n\n[features]\n# TODO: Clean up feature flag\ndefault = [\"swiftide-agents\"]\nswiftide-agents = [\"dep:serde\", \"dep:serde_json\"]\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide-macros/src/indexing_transformer.rs",
    "content": "use darling::{Error, FromMeta, ast::NestedMeta};\nuse proc_macro2::TokenStream;\nuse quote::quote;\nuse syn::{Fields, Ident, ItemStruct};\n\n#[derive(FromMeta, Default)]\n#[darling(default)]\nstruct TransformerArgs {\n    metadata_field_name: Option<String>,\n    default_prompt_file: Option<String>,\n\n    derive: DeriveOptions,\n}\n\n#[derive(FromMeta, Debug, Default)]\n#[darling(default)]\nstruct DeriveOptions {\n    skip_debug: bool,\n    skip_clone: bool,\n    skip_default: bool,\n}\n\n#[allow(clippy::too_many_lines)]\npub(crate) fn indexing_transformer_impl(args: TokenStream, input: ItemStruct) -> TokenStream {\n    let args = match parse_args(args) {\n        Ok(args) => args,\n        Err(e) => return e.write_errors(),\n    };\n\n    let struct_name = &input.ident;\n    let builder_name = Ident::new(\n        &format!(\"{struct_name}Builder\"),\n        proc_macro2::Span::call_site(),\n    );\n    let vis = &input.vis;\n    let attrs = &input.attrs;\n    let existing_fields =\n        extract_existing_fields(input.fields).collect::<Vec<proc_macro2::TokenStream>>();\n\n    let metadata_field_name = match args.metadata_field_name {\n        Some(name) => quote! { pub const NAME: &str = #name; },\n        None => quote! {},\n    };\n\n    let prompt_template_struct_attr = match &args.default_prompt_file {\n        Some(_file) => quote! {\n            #[builder(default = \"default_prompt()\")]\n            prompt_template: hidden::Prompt,\n        },\n        None => quote! {},\n    };\n\n    let default_prompt_fn = match &args.default_prompt_file {\n        Some(file) => quote! {\n            fn default_prompt() -> hidden::Prompt {\n                include_str!(#file).into()\n            }\n        },\n        None => quote! {},\n    };\n\n    let derive = {\n        let mut tokens = vec![quote! { hidden::Builder}];\n        if !args.derive.skip_debug {\n            tokens.push(quote! 
{ Debug });\n        }\n        if !args.derive.skip_clone {\n            tokens.push(quote! { Clone });\n        }\n\n        quote! { #[derive(#(#tokens),*)] }\n    };\n\n    let default_impl = if args.derive.skip_default {\n        quote! {}\n    } else {\n        quote! {\n            impl Default for #struct_name {\n                fn default() -> Self {\n                    #builder_name::default().build().unwrap()\n                }\n            }\n        }\n    };\n\n    quote! {\n        mod hidden {\n            pub use std::sync::Arc;\n            pub use anyhow::Result;\n            pub use derive_builder::Builder;\n            pub use swiftide_core::{\n                indexing::{IndexingDefaults},\n                prompt::Prompt,\n                chat_completion::errors::LanguageModelError,\n                SimplePrompt, Transformer, WithIndexingDefaults\n            };\n\n        }\n\n        #metadata_field_name\n\n        #derive\n        #[builder(setter(into, strip_option), build_fn(error = \"anyhow::Error\"))]\n        #(#attrs)*\n        #vis struct #struct_name {\n            #(#existing_fields)*\n            #[builder(setter(custom), default)]\n            client: Option<hidden::Arc<dyn hidden::SimplePrompt>>,\n\n            #prompt_template_struct_attr\n\n            #[builder(default)]\n            concurrency: Option<usize>,\n            #[builder(private, default)]\n            indexing_defaults: Option<hidden::IndexingDefaults>,\n        }\n\n        #default_impl\n\n        impl #struct_name {\n            /// Creates a new builder for the transformer\n            pub fn builder() -> #builder_name {\n                #builder_name::default()\n            }\n\n            /// Build a new transformer from a client\n            pub fn from_client(client: impl hidden::SimplePrompt + 'static) -> #builder_name {\n                #builder_name::default().client(client).to_owned()\n            }\n\n            /// Create a new transformer from a 
client\n            pub fn new(client: impl hidden::SimplePrompt + 'static) -> Self {\n                #builder_name::default().client(client).build().unwrap()\n            }\n\n            /// Set the concurrency level for the transformer\n            #[must_use]\n            pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n                self.concurrency = Some(concurrency);\n                self\n            }\n\n\n            /// Prompts either the client provided to the transformer or a default client\n            /// provided on the indexing pipeline\n            ///\n            /// # Errors\n            ///\n            /// Gives an error if no (default) client is provided\n            async fn prompt(&self, prompt: hidden::Prompt) -> hidden::Result<String, hidden::LanguageModelError> {\n                if let Some(client) = &self.client {\n                    return client.prompt(prompt).await\n                };\n\n                let Some(defaults) = &self.indexing_defaults.as_ref() else {\n                    return Err(hidden::LanguageModelError::PermanentError(\"No client provided\".into()))\n                };\n\n                let Some(client) = defaults.simple_prompt() else {\n                    return Err(hidden::LanguageModelError::PermanentError(\"No client provided\".into()))\n                };\n                client.prompt(prompt).await\n            }\n        }\n\n        impl #builder_name {\n            pub fn client(&mut self, client: impl hidden::SimplePrompt + 'static) -> &mut Self {\n                self.client = Some(Some(hidden::Arc::new(client) as hidden::Arc<dyn hidden::SimplePrompt>));\n                self\n            }\n        }\n\n        impl hidden::WithIndexingDefaults for #struct_name {\n            fn with_indexing_defaults(&mut self, defaults: hidden::IndexingDefaults) {\n                self.indexing_defaults = Some(defaults);\n            }\n        }\n\n        #default_prompt_fn\n    }\n}\n\nfn 
parse_args(args: TokenStream) -> Result<TransformerArgs, Error> {\n    let attr_args = NestedMeta::parse_meta_list(args)?;\n\n    TransformerArgs::from_list(&attr_args)\n}\n\nfn extract_existing_fields(fields: Fields) -> impl Iterator<Item = proc_macro2::TokenStream> {\n    fields.into_iter().map(|field| {\n        let field_name = &field.ident;\n        let field_type = &field.ty;\n        let field_vis = &field.vis;\n        let field_attrs = &field.attrs;\n\n        quote! {\n            #(#field_attrs)*\n            #field_vis #field_name: #field_type,\n        }\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use quote::quote;\n    use syn::{ItemStruct, parse_quote};\n\n    #[test]\n    fn test_includes_doc_comments() {\n        let input: ItemStruct = parse_quote! {\n            /// This is a test struct\n            pub struct TestStruct {\n                /// This is a test field\n                pub test_field: String,\n            }\n        };\n\n        let args: TokenStream = quote!();\n        let output = indexing_transformer_impl(args, input);\n\n        let expected_output = quote! 
{\n            mod hidden {\n                pub use std::sync::Arc;\n                pub use anyhow::Result;\n                pub use derive_builder::Builder;\n                pub use swiftide_core::{\n                    indexing::{IndexingDefaults},\n                    prompt::Prompt,\n                    chat_completion::errors::LanguageModelError,\n                    SimplePrompt, Transformer, WithIndexingDefaults\n                };\n            }\n\n            #[derive(hidden::Builder, Debug, Clone)]\n            #[builder(setter(into, strip_option), build_fn(error = \"anyhow::Error\"))]\n            /// This is a test struct\n            pub struct TestStruct {\n                /// This is a test field\n                pub test_field: String,\n                #[builder(setter(custom), default)]\n                client: Option<hidden::Arc<dyn hidden::SimplePrompt>>,\n                #[builder(default)]\n                concurrency: Option<usize>,\n                #[builder(private, default)]\n                indexing_defaults: Option<hidden::IndexingDefaults>,\n            }\n\n            impl Default for TestStruct {\n                fn default() -> Self {\n                    TestStructBuilder::default().build().unwrap()\n                }\n            }\n\n            impl TestStruct {\n                /// Creates a new builder for the transformer\n                pub fn builder() -> TestStructBuilder {\n                    TestStructBuilder::default()\n                }\n\n                /// Build a new transformer from a client\n                pub fn from_client(client: impl hidden::SimplePrompt + 'static) -> TestStructBuilder {\n                    TestStructBuilder::default().client(client).to_owned()\n                }\n\n                /// Create a new transformer from a client\n                pub fn new(client: impl hidden::SimplePrompt + 'static) -> Self {\n                    TestStructBuilder::default().client(client).build().unwrap()\n  
              }\n\n                /// Set the concurrency level for the transformer\n                #[must_use]\n                pub fn with_concurrency(mut self, concurrency: usize) -> Self {\n                    self.concurrency = Some(concurrency);\n                    self\n                }\n\n                /// Prompts either the client provided to the transformer or a default client\n                /// provided on the indexing pipeline\n                ///\n                /// # Errors\n                ///\n                /// Gives an error if no (default) client is provided\n                async fn prompt(&self, prompt: hidden::Prompt) -> hidden::Result<String, hidden::LanguageModelError> {\n                    if let Some(client) = &self.client {\n                        return client.prompt(prompt).await\n                    };\n\n                    let Some(defaults) = &self.indexing_defaults.as_ref() else {\n                        return Err(hidden::LanguageModelError::PermanentError(\"No client provided\".into()))\n                    };\n\n                    let Some(client) = defaults.simple_prompt() else {\n                        return Err(hidden::LanguageModelError::PermanentError(\"No client provided\".into()))\n                    };\n                    client.prompt(prompt).await\n                }\n            }\n\n            impl TestStructBuilder {\n                pub fn client(&mut self, client: impl hidden::SimplePrompt + 'static) -> &mut Self {\n                    self.client = Some(Some(hidden::Arc::new(client) as hidden::Arc<dyn hidden::SimplePrompt>));\n                    self\n                }\n            }\n\n            impl hidden::WithIndexingDefaults for TestStruct {\n                fn with_indexing_defaults(&mut self, defaults: hidden::IndexingDefaults) {\n                    self.indexing_defaults = Some(defaults);\n                }\n            }\n        };\n\n        assert_eq!(output.to_string(), 
expected_output.to_string());\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n\n//! This crate provides macros for generating boilerplate code\n//! for indexing transformers\nuse proc_macro::TokenStream;\n\nmod indexing_transformer;\n#[cfg(test)]\nmod test_utils;\nmod tool;\nuse indexing_transformer::indexing_transformer_impl;\nuse syn::{DeriveInput, ItemFn, ItemStruct, parse_macro_input};\nuse tool::{tool_attribute_impl, tool_derive_impl};\n\n/// Generates boilerplate for an indexing transformer.\n#[proc_macro_attribute]\npub fn indexing_transformer(args: TokenStream, input: TokenStream) -> TokenStream {\n    let input = parse_macro_input!(input as ItemStruct);\n    indexing_transformer_impl(args.into(), input).into()\n}\n\n#[proc_macro_attribute]\n/// Creates a `Tool` from an async function.\n///\n/// # Example\n/// ```ignore\n/// #[tool(description = \"Searches code\", param(name = \"code_query\", description = \"The code query\"))]\n/// pub async fn search_code(context: &dyn AgentContext, code_query: &str) -> Result<ToolOutput,\n/// ToolError> {\n///    Ok(\"hello\".into())\n/// }\n///\n/// // The tool can then be used with agents:\n/// Agent::builder().tools([search_code()])\n///\n/// // Or\n///\n/// Agent::builder().tools([SearchCode::default()])\n/// ```\npub fn tool(args: TokenStream, input: TokenStream) -> TokenStream {\n    let input = parse_macro_input!(input as ItemFn);\n    tool_attribute_impl(&args.into(), &input).into()\n}\n\n/// Derive `Tool` on a struct.\n///\n/// Useful if your structs have internal state and you want to use it in your tool.\n///\n/// # Example\n/// ```ignore\n/// #[derive(Clone, Tool)]\n/// #[tool(description = \"Searches code\", param(name = \"code_query\", description = \"The 
code query\"))]\n/// pub struct SearchCode {\n///   search_command: String\n/// }\n///\n/// impl SearchCode {\n///   pub async fn search_code(&self, context: &dyn AgentContext, code_query: &str) -> Result<ToolOutput, ToolError> {\n///     context.exec_cmd(&self.search_command.into()).await.map(Into::into)\n///   }\n/// }\n/// ```\n#[proc_macro_derive(Tool, attributes(tool))]\npub fn derive_tool(input: TokenStream) -> TokenStream {\n    let input = parse_macro_input!(input as DeriveInput);\n\n    match tool_derive_impl(&input) {\n        Ok(tokens) => tokens.into(),\n        Err(err) => err.into_compile_error().into(),\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/test_utils.rs",
    "content": "pub fn pretty_macro_output(item: &proc_macro2::TokenStream) -> String {\n    let file = syn::parse_file(&item.to_string())\n        .unwrap_or_else(|_| panic!(\"Failed to parse token stream: {}\", &item.to_string()));\n    prettyplease::unparse(&file)\n}\n\n// Add a macro that pretty compares two token streams using the above called `assert_ts_eq!`\n#[macro_export]\nmacro_rules! assert_ts_eq {\n    ($left:expr, $right:expr) => {{\n        let left_pretty = $crate::test_utils::pretty_macro_output(&$left);\n        let right_pretty = $crate::test_utils::pretty_macro_output(&$right);\n        pretty_assertions::assert_eq!(left_pretty, right_pretty);\n    }};\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/args.rs",
    "content": "use convert_case::{Case, Casing as _};\nuse darling::{Error, FromMeta, ast::NestedMeta};\nuse proc_macro2::TokenStream;\nuse quote::{ToTokens as _, quote};\nuse syn::{FnArg, Ident, ItemFn, Pat, PatType, parse_quote};\n\n#[derive(FromMeta, Default, Debug)]\npub struct ToolArgs {\n    #[darling(default)]\n    /// Name of the tool\n    /// Defaults to the underscored version of the function name or struct\n    name: String,\n\n    /// Name of the function to call\n    /// Defaults to the underscored version of the function name or struct\n    #[darling(default)]\n    fn_name: String,\n\n    /// Description of the tool\n    description: Description,\n\n    /// Parameters the tool can take\n    #[darling(multiple, rename = \"param\")]\n    params: Vec<ParamOptions>,\n}\n\n#[derive(FromMeta, Debug, Default)]\n#[darling(default)]\npub struct ParamOptions {\n    pub name: String,\n    pub description: String,\n\n    /// Backwards compatibility: optional JSON type hint (string based)\n    pub json_type: Option<String>,\n\n    /// Explicit rust type override parsed from the attribute\n    pub rust_type: Option<syn::Type>,\n\n    pub required: Option<bool>,\n\n    #[darling(skip)]\n    pub resolved_type: Option<syn::Type>,\n}\n\n#[derive(Debug)]\npub enum Description {\n    Literal(String),\n    Path(syn::Path),\n}\n\nimpl Default for Description {\n    fn default() -> Self {\n        Description::Literal(String::new())\n    }\n}\n\nimpl FromMeta for Description {\n    fn from_expr(expr: &syn::Expr) -> darling::Result<Self> {\n        match expr {\n            syn::Expr::Lit(lit) => {\n                if let syn::Lit::Str(s) = &lit.lit {\n                    Ok(Description::Literal(s.value()))\n                } else {\n                    Err(Error::unsupported_format(\n                        \"expected a string literal or a const\",\n                    ))\n                }\n            }\n            syn::Expr::Path(path) => 
Ok(Description::Path(path.path.clone())),\n            _ => Err(Error::unsupported_format(\n                \"expected a string literal or a const\",\n            )),\n        }\n    }\n}\n\nimpl ToolArgs {\n    pub fn try_from_attribute_input(input: &ItemFn, args: TokenStream) -> Result<Self, Error> {\n        validate_first_argument_is_agent_context(input)?;\n\n        let attr_args = NestedMeta::parse_meta_list(args)?;\n\n        let mut args = ToolArgs::from_list(&attr_args)?;\n        for arg in input.sig.inputs.iter().skip(1) {\n            if let FnArg::Typed(PatType { pat, ty, .. }) = arg\n                && let Pat::Ident(ident) = &**pat\n            {\n                let ty = as_owned_ty(ty);\n\n                if let Some(param) = args.params.iter_mut().find(|p| ident.ident == p.name) {\n                    param.rust_type = Some(ty);\n                }\n            }\n        }\n        args.infer_param_types()?;\n\n        validate_spec_and_fn_args_match(&args, input)?;\n\n        args.with_name_from_ident(&input.sig.ident);\n\n        Ok(args)\n    }\n\n    pub fn infer_param_types(&mut self) -> Result<(), Error> {\n        for param in &mut self.params {\n            let mut ty = if let Some(ty) = param.rust_type.clone() {\n                ty\n            } else if let Some(json_type) = &param.json_type {\n                json_type_to_rust_type(json_type)\n            } else {\n                syn::parse_quote! 
{ String }\n            };\n\n            let is_option = is_option_type(&ty);\n\n            match param.required {\n                Some(true) if is_option => {\n                    return Err(Error::custom(format!(\n                        \"The parameter {} is marked as required but has an optional type\",\n                        param.name\n                    )));\n                }\n                Some(false) if !is_option => {\n                    ty = wrap_type_in_option(ty);\n                }\n                None if is_option => {\n                    param.required = Some(false);\n                }\n                None => {\n                    param.required = Some(true);\n                }\n                _ => {}\n            }\n\n            param.resolved_type = Some(ty);\n        }\n        Ok(())\n    }\n\n    pub fn with_name_from_ident(&mut self, ident: &syn::Ident) {\n        if self.name.is_empty() {\n            self.name = ident.to_string().to_case(Case::Snake);\n        }\n\n        if self.fn_name.is_empty() {\n            self.fn_name = ident.to_string().to_case(Case::Snake);\n        }\n    }\n\n    pub fn tool_name(&self) -> &str {\n        &self.name\n    }\n\n    pub fn fn_name(&self) -> &str {\n        &self.fn_name\n    }\n\n    pub fn tool_description(&self) -> &Description {\n        &self.description\n    }\n\n    pub fn tool_params(&self) -> &[ParamOptions] {\n        &self.params\n    }\n\n    pub fn derive_invoke_args(&self) -> Vec<TokenStream> {\n        self.params\n            .iter()\n            .map(|param| {\n                let ident = syn::Ident::new(&param.name, proc_macro2::Span::call_site());\n                if param.should_pass_owned() {\n                    quote! { args.#ident }\n                } else {\n                    quote! 
{ &args.#ident }\n                }\n            })\n            .collect()\n    }\n\n    pub fn args_struct(&self) -> TokenStream {\n        if self.params.is_empty() {\n            return quote! {};\n        }\n\n        let mut fields = Vec::new();\n\n        for param in &self.params {\n            let ty = param\n                .resolved_type\n                .as_ref()\n                .expect(\"parameter types should be resolved\");\n            let ident = syn::Ident::new(&param.name, proc_macro2::Span::call_site());\n            fields.push(quote! { pub #ident: #ty });\n        }\n\n        let args_struct_ident = self.args_struct_ident();\n        quote! {\n            #[derive(\n                ::swiftide::reexports::serde::Serialize,\n                ::swiftide::reexports::serde::Deserialize,\n                ::swiftide::reexports::schemars::JsonSchema,\n                Debug\n            )]\n            #[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\n            pub struct #args_struct_ident {\n                #(#fields),*\n            }\n        }\n    }\n\n    pub fn args_struct_ident(&self) -> Ident {\n        syn::Ident::new(\n            &format!(\"{}Args\", self.name.to_case(Case::Pascal)),\n            proc_macro2::Span::call_site(),\n        )\n    }\n}\n\nfn validate_spec_and_fn_args_match(tool_args: &ToolArgs, item_fn: &ItemFn) -> Result<(), Error> {\n    let mut found_spec_arg_names = tool_args\n        .params\n        .iter()\n        .map(|param| param.name.clone())\n        .collect::<Vec<_>>();\n    found_spec_arg_names.sort();\n\n    let mut seen_arg_names = vec![];\n\n    item_fn.sig.inputs.iter().skip(1).for_each(|arg| {\n        if let FnArg::Typed(PatType { pat, .. 
}) = arg\n            && let Pat::Ident(ident) = &**pat\n        {\n            seen_arg_names.push(ident.ident.to_string());\n        }\n    });\n    seen_arg_names.sort();\n\n    let mut errors = Error::accumulator();\n    if found_spec_arg_names != seen_arg_names {\n        let missing_args = found_spec_arg_names\n            .iter()\n            .filter(|name| !seen_arg_names.contains(name))\n            .collect::<Vec<_>>();\n\n        let missing_params = seen_arg_names\n            .iter()\n            .filter(|name| !found_spec_arg_names.contains(name))\n            .collect::<Vec<_>>();\n\n        if !missing_args.is_empty() {\n            errors.push(Error::custom(format!(\n                \"The following parameters are missing from the function signature: {missing_args:?}\"\n            )));\n        }\n\n        if !missing_params.is_empty() {\n            errors.push(Error::custom(format!(\n                \"The following parameters are missing from the spec: {missing_params:?}\"\n            )));\n        }\n    }\n\n    errors.finish()?;\n    Ok(())\n}\n\nfn json_type_to_rust_type(json_type: &str) -> syn::Type {\n    match json_type.to_ascii_lowercase().as_str() {\n        \"number\" => syn::parse_quote! { usize },\n        \"boolean\" => syn::parse_quote! { bool },\n        \"array\" => syn::parse_quote! { Vec<String> },\n        \"object\" => syn::parse_quote! { ::serde_json::Value },\n        // default to string if nothing is specified\n        _ => syn::parse_quote! 
{ String },\n    }\n}\n\nfn is_option_type(ty: &syn::Type) -> bool {\n    if let syn::Type::Path(type_path) = ty {\n        if type_path.qself.is_some() {\n            return false;\n        }\n\n        return type_path\n            .path\n            .segments\n            .last()\n            .is_some_and(|segment| segment.ident == \"Option\");\n    }\n\n    false\n}\n\nfn wrap_type_in_option(ty: syn::Type) -> syn::Type {\n    if is_option_type(&ty) {\n        ty\n    } else {\n        syn::parse_quote! { Option<#ty> }\n    }\n}\n\nfn as_owned_ty(ty: &syn::Type) -> syn::Type {\n    if let syn::Type::Reference(r) = ty {\n        if let syn::Type::Path(p) = &*r.elem {\n            if p.path.is_ident(\"str\") {\n                return parse_quote!(String);\n            }\n\n            // Map `&Vec<T>` to an owned `Vec` with its element type owned as well\n            if p.path.is_ident(\"Vec\")\n                && let syn::PathArguments::AngleBracketed(args) = &p.path.segments[0].arguments\n                && let syn::GenericArgument::Type(ty) = args.args.first().unwrap()\n            {\n                let inner = as_owned_ty(ty);\n                return parse_quote!(Vec<#inner>);\n            }\n\n            if let Some(last_segment) = p.path.segments.last()\n                && last_segment.ident.to_string().as_str() == \"Option\"\n                && let syn::PathArguments::AngleBracketed(generics) = &last_segment.arguments\n                && let Some(syn::GenericArgument::Type(inner_ty)) = generics.args.first()\n            {\n                let inner_ty = as_owned_ty(inner_ty);\n                return parse_quote!(Option<#inner_ty>);\n            }\n\n            return parse_quote!(String);\n        }\n        if let syn::Type::Slice(slice_type) = &*r.elem {\n            // slice_type.elem is T. 
We'll replace with Vec<T>.\n            let elem = &slice_type.elem;\n            return parse_quote!(Vec<#elem>);\n        }\n        panic!(\"Unsupported reference type\");\n    } else {\n        ty.to_owned()\n    }\n}\n\nfn is_vec_type(ty: &syn::Type) -> bool {\n    if let syn::Type::Path(type_path) = ty {\n        if type_path.qself.is_some() {\n            return false;\n        }\n\n        return type_path\n            .path\n            .segments\n            .last()\n            .is_some_and(|segment| segment.ident == \"Vec\");\n    }\n\n    false\n}\n\nimpl ParamOptions {\n    fn should_pass_owned(&self) -> bool {\n        self.resolved_type.as_ref().is_some_and(is_vec_type)\n    }\n}\n\nfn validate_first_argument_is_agent_context(input_fn: &ItemFn) -> Result<(), Error> {\n    let expected_first_arg = quote! { &dyn AgentContext };\n    let error_msg = \"The first argument must be `&dyn AgentContext`\";\n\n    if let Some(FnArg::Typed(first_arg)) = input_fn.sig.inputs.first() {\n        if first_arg.ty.to_token_stream().to_string() != expected_first_arg.to_string() {\n            return Err(Error::custom(error_msg).with_span(&first_arg.ty));\n        }\n    } else {\n        return Err(Error::custom(error_msg).with_span(&input_fn.sig));\n    }\n\n    Ok(())\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/mod.rs",
    "content": "#![allow(clippy::used_underscore_binding)]\n#![allow(clippy::needless_continue)]\n\nuse args::ToolArgs;\nuse darling::{Error, FromDeriveInput};\nuse proc_macro2::TokenStream;\nuse quote::quote;\nuse syn::{DeriveInput, FnArg, ItemFn, Pat, PatType, parse_quote};\n\nmod args;\nmod tool_spec;\nmod wrapped;\n\n#[allow(clippy::too_many_lines)]\npub(crate) fn tool_attribute_impl(input_args: &TokenStream, input: &ItemFn) -> TokenStream {\n    let tool_args = match ToolArgs::try_from_attribute_input(input, input_args.clone()) {\n        Ok(args) => args,\n        Err(e) => return e.write_errors(),\n    };\n\n    let fn_name = &input.sig.ident;\n\n    let args_struct = tool_args.args_struct();\n    let args_struct_ident = tool_args.args_struct_ident();\n    let arg_names = input\n        .sig\n        .inputs\n        .iter()\n        .skip(1)\n        .filter_map(|arg| {\n            if let FnArg::Typed(PatType { pat, ty, .. }) = arg {\n                if let Pat::Ident(ident) = &**pat {\n                    // If the argument is a reference, we need to reference the quote as well\n                    if let syn::Type::Reference(_) = &**ty {\n                        Some(quote! { &args.#ident })\n                    } else {\n                        Some(quote! { args.#ident })\n                    }\n                } else {\n                    None\n                }\n            } else {\n                None\n            }\n        })\n        .collect::<Vec<_>>();\n    let tool_name = tool_args.tool_name();\n\n    let tool_struct = wrapped::struct_name(input);\n\n    let wrapped_fn = wrapped::wrap_tool_fn(input);\n\n    let tool_spec = tool_spec::tool_spec(&tool_args);\n\n    let invoke_body = if arg_names.is_empty() {\n        quote! {\n            return self.#fn_name(agent_context).await;\n        }\n    } else {\n        quote! 
{\n            let Some(args) = tool_call.args()\n            else { return Err(::swiftide::chat_completion::errors::ToolError::MissingArguments(format!(\"No arguments provided for {}\", #tool_name).into())) };\n\n            let args: #args_struct_ident = ::swiftide::reexports::serde_json::from_str(&args)?;\n            return self.#fn_name(agent_context, #(#arg_names),*).await;\n        }\n    };\n\n    let boxed_from = boxed_from(&tool_struct, &parse_quote!());\n\n    quote! {\n        #args_struct\n\n        #wrapped_fn\n\n        #[::swiftide::reexports::async_trait::async_trait]\n        impl ::swiftide::chat_completion::Tool for #tool_struct {\n            async fn invoke(&self, agent_context: &dyn ::swiftide::traits::AgentContext, tool_call: &swiftide::chat_completion::ToolCall) -> ::std::result::Result<::swiftide::chat_completion::ToolOutput, ::swiftide::chat_completion::errors::ToolError> {\n                #invoke_body\n            }\n\n            fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n                #tool_name.into()\n            }\n\n            fn tool_spec(&self) -> ::swiftide::chat_completion::ToolSpec {\n                #tool_spec\n            }\n        }\n\n        #boxed_from\n    }\n}\n\n#[allow(clippy::needless_continue)]\n#[derive(FromDeriveInput)]\n#[darling(attributes(tool), supports(struct_any), and_then = ToolDerive::update_defaults, forward_attrs(allow, doc, cfg))]\nstruct ToolDerive {\n    ident: syn::Ident,\n    #[allow(dead_code)]\n    attrs: Vec<syn::Attribute>,\n    #[darling(flatten)]\n    tool: ToolArgs,\n}\n\nimpl ToolDerive {\n    pub fn update_defaults(mut self) -> Result<Self, Error> {\n        self.tool.with_name_from_ident(&self.ident);\n        self.tool.infer_param_types()?;\n        Ok(self)\n    }\n}\n\npub(crate) fn tool_derive_impl(input: &DeriveInput) -> syn::Result<TokenStream> {\n    let parsed: ToolDerive = ToolDerive::from_derive_input(input)?;\n    let struct_ident = &parsed.ident;\n\n   
 let expected_fn_name = parsed.tool.fn_name();\n    let expected_fn_ident = syn::Ident::new(expected_fn_name, struct_ident.span());\n\n    let invoke_tool_args = parsed.tool.derive_invoke_args();\n    let args_struct_ident = parsed.tool.args_struct_ident();\n    let args_struct = parsed.tool.args_struct();\n\n    let invoke_body = if invoke_tool_args.is_empty() {\n        quote! { return self.#expected_fn_ident(agent_context).await }\n    } else {\n        quote! {\n            let Some(args) = tool_call.args()\n            else { return Err(::swiftide::chat_completion::errors::ToolError::MissingArguments(format!(\"No arguments provided for {}\", #expected_fn_name).into())) };\n\n            let args: #args_struct_ident = ::swiftide::reexports::serde_json::from_str(&args)?;\n            return self.#expected_fn_ident(agent_context, #(#invoke_tool_args),*).await;\n        }\n    };\n\n    let tool_spec = tool_spec::tool_spec(&parsed.tool);\n\n    let (impl_generics, ty_generics, where_clause) = input.generics.split_for_impl();\n\n    // Generate `From<Struct> for Box<dyn Tool>`; skipped when the struct has generics\n    let boxed_from = boxed_from(struct_ident, &input.generics);\n    Ok(quote! 
{\n        #args_struct\n\n\n        #[async_trait::async_trait]\n        impl #impl_generics swiftide::chat_completion::Tool for #struct_ident #ty_generics #where_clause {\n            async fn invoke(&self, agent_context: &dyn swiftide::traits::AgentContext, tool_call: &swiftide::chat_completion::ToolCall) -> std::result::Result<swiftide::chat_completion::ToolOutput, ::swiftide::chat_completion::errors::ToolError> {\n                #invoke_body\n            }\n\n            fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n                #expected_fn_name.into()\n            }\n\n            fn tool_spec(&self) -> swiftide::chat_completion::ToolSpec {\n                #tool_spec\n            }\n        }\n\n        #boxed_from\n    })\n}\n\nfn boxed_from(struct_ident: &syn::Ident, generics: &syn::Generics) -> TokenStream {\n    if !generics.params.is_empty() {\n        return quote!();\n    }\n    let (impl_generics, ty_generics, where_clause) = generics.split_for_impl();\n\n    let lt_ident = if let Some(other_lifetime) = generics.lifetimes().next() {\n        let lifetime = &other_lifetime.lifetime;\n        quote!(+ #lifetime)\n    } else {\n        quote!()\n    };\n\n    quote! {\n        impl #impl_generics From<#struct_ident #ty_generics> for Box<dyn ::swiftide::chat_completion::Tool #lt_ident> #where_clause {\n            fn from(val: #struct_ident) -> Self {\n                Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use quote::quote;\n    use syn::{ItemFn, parse_quote};\n\n    #[test]\n    fn test_snapshot_single_arg() {\n        let args = quote! {\n            description = \"Hello world tool\",\n            param(\n                name = \"code_query\",\n                description = \"my param description\"\n            )\n        };\n        let input: ItemFn = parse_quote! 
{\n            pub async fn search_code(context: &dyn AgentContext, code_query: &str) -> Result<ToolOutput, ToolError> {\n                return Ok(\"hello\".into())\n            }\n        };\n\n        let output = tool_attribute_impl(&args, &input);\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_single_arg_option() {\n        let args = quote! {\n            description = \"Hello world tool\",\n            param(\n                name = \"code_query\",\n                description = \"my param description\"\n            )\n        };\n        let input: ItemFn = parse_quote! {\n            pub async fn search_code(context: &dyn AgentContext, code_query: &Option<String>) -> Result<ToolOutput, ToolError> {\n                return Ok(\"hello\".into())\n            }\n        };\n\n        let output = tool_attribute_impl(&args, &input);\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_multiple_args() {\n        let args = quote! {\n            description = \"Hello world tool\",\n            param(\n                name = \"code_query\",\n                description = \"my param description\"\n            ),\n            param(\n                name = \"other\",\n                description = \"my param description\"\n            )\n        };\n        let input: ItemFn = parse_quote! {\n            pub async fn search_code(context: &dyn AgentContext, code_query: &str, other: &str) -> Result<ToolOutput> {\n                return Ok(\"hello\".into())\n            }\n        };\n\n        let output = tool_attribute_impl(&args, &input);\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_derive() {\n        let input: DeriveInput = parse_quote! 
{\n            #[tool(description=\"Hello derive\")]\n            pub struct HelloDerive {\n                my_thing: String\n            }\n        };\n\n        let output = tool_derive_impl(&input).unwrap();\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_derive_with_args() {\n        let input: DeriveInput = parse_quote! {\n            #[tool(description=\"Hello derive\", param(name=\"test\", description=\"test param\"))]\n            pub struct HelloDerive {\n                my_thing: String\n            }\n        };\n\n        let output = tool_derive_impl(&input).unwrap();\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_derive_with_option() {\n        let input: DeriveInput = parse_quote! {\n            #[tool(description=\"Hello derive\", param(name=\"test\", description=\"test param\", required = false))]\n            pub struct HelloDerive {\n                my_thing: String\n            }\n        };\n\n        let output = tool_derive_impl(&input).unwrap();\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_derive_with_lifetime() {\n        let input: DeriveInput = parse_quote! {\n            #[tool(description=\"Hello derive\", param(name=\"test\", description=\"test param\"))]\n            pub struct HelloDerive<'a> {\n                my_thing: &'a str,\n            }\n        };\n\n        let output = tool_derive_impl(&input).unwrap();\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n\n    #[test]\n    fn test_snapshot_derive_with_generics() {\n        let input: DeriveInput = parse_quote! 
{\n            #[tool(description=\"Hello derive\", param(name=\"test\", description=\"test param\"))]\n            pub struct HelloDerive<S: Send + Sync + Clone> {\n                my_thing: S,\n            }\n        };\n\n        let output = tool_derive_impl(&input).unwrap();\n\n        insta::assert_snapshot!(crate::test_utils::pretty_macro_output(&output));\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__simple_tool.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\nmod hidden {\n    pub use swiftide_agents::{Tool, AgentContext};\n    pub use anyhow::{bail, Result};\n    pub use swiftide_core::chat_completion::{JsonSpec, ToolOutput};\n    pub use async_trait::async_trait;\n}\n#[derive(serde::Serialize, serde::Deserialize)]\nstruct SearchCodeArgs<'a> {\n    pub code_query: &'a str,\n}\n#[derive(Clone)]\nstruct SearchCode {}\npub fn search_code() -> SearchCode {\n    SearchCode {}\n}\nimpl SearchCode {\n    pub async fn search_code(\n        &self,\n        context: &dyn AgentContext,\n        code_query: &str,\n    ) -> Result<ToolOutput> {\n        return Ok(\"hello\".into());\n    }\n}\n#[hidden::async_trait]\nimpl hidden::Tool for SearchCode {\n    async fn invoke(\n        &self,\n        agent_context: &dyn hidden::AgentContext,\n        raw_args: Option<&str>,\n    ) -> hidden::Result<hidden::ToolOutput> {\n        let Some(args) = raw_args else {\n            hidden::bail!(\"No arguments provided for {}\", \"search_code\")\n        };\n        let args: SearchCodeArgs = serde_json::from_str(&args)?;\n        return self.search_code(agent_context, args.code_query).await;\n    }\n    fn name(&self) -> &'static str {\n        \"search_code\"\n    }\n    fn json_spec(&self) -> hidden::JsonSpec {\n        \"{\\n  \\\"description\\\": \\\"Hello world tool\\\",\\n  \\\"name\\\": \\\"search_code\\\",\\n  \\\"parameters\\\": {\\n    \\\"my param\\\": {\\n      \\\"description\\\": \\\"my param description\\\",\\n      \\\"type\\\": \\\"string\\\"\\n    }\\n  }\\n}\"\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_derive.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[async_trait::async_trait]\nimpl swiftide::chat_completion::Tool for HelloDerive {\n    async fn invoke(\n        &self,\n        agent_context: &dyn swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> std::result::Result<\n        swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        return self.hello_derive(agent_context).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"hello_derive\".into()\n    }\n    fn tool_spec(&self) -> swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n            .name(\"hello_derive\")\n            .description(\"Hello derive\")\n            .build()\n            .unwrap()\n    }\n}\nimpl From<HelloDerive> for Box<dyn ::swiftide::chat_completion::Tool> {\n    fn from(val: HelloDerive) -> Self {\n        Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_derive_with_args.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct HelloDeriveArgs {\n    pub test: String,\n}\n#[async_trait::async_trait]\nimpl swiftide::chat_completion::Tool for HelloDerive {\n    async fn invoke(\n        &self,\n        agent_context: &dyn swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> std::result::Result<\n        swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"hello_derive\").into(),\n                ),\n            )\n        };\n        let args: HelloDeriveArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.hello_derive(agent_context, &args.test).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"hello_derive\".into()\n    }\n    fn tool_spec(&self) -> swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n            .name(\"hello_derive\")\n            .description(\"Hello derive\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(HelloDeriveArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\nimpl From<HelloDerive> for Box<dyn ::swiftide::chat_completion::Tool> {\n    fn from(val: HelloDerive) -> Self {\n        Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_derive_with_generics.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct HelloDeriveArgs {\n    pub test: String,\n}\n#[async_trait::async_trait]\nimpl<S: Send + Sync + Clone> swiftide::chat_completion::Tool for HelloDerive<S> {\n    async fn invoke(\n        &self,\n        agent_context: &dyn swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> std::result::Result<\n        swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"hello_derive\").into(),\n                ),\n            )\n        };\n        let args: HelloDeriveArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.hello_derive(agent_context, &args.test).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"hello_derive\".into()\n    }\n    fn tool_spec(&self) -> swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n            .name(\"hello_derive\")\n            .description(\"Hello derive\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(HelloDeriveArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_derive_with_lifetime.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct HelloDeriveArgs {\n    pub test: String,\n}\n#[async_trait::async_trait]\nimpl<'a> swiftide::chat_completion::Tool for HelloDerive<'a> {\n    async fn invoke(\n        &self,\n        agent_context: &dyn swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> std::result::Result<\n        swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"hello_derive\").into(),\n                ),\n            )\n        };\n        let args: HelloDeriveArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.hello_derive(agent_context, &args.test).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"hello_derive\".into()\n    }\n    fn tool_spec(&self) -> swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n            .name(\"hello_derive\")\n            .description(\"Hello derive\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(HelloDeriveArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_derive_with_option.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct HelloDeriveArgs {\n    pub test: Option<String>,\n}\n#[async_trait::async_trait]\nimpl swiftide::chat_completion::Tool for HelloDerive {\n    async fn invoke(\n        &self,\n        agent_context: &dyn swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> std::result::Result<\n        swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"hello_derive\").into(),\n                ),\n            )\n        };\n        let args: HelloDeriveArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.hello_derive(agent_context, &args.test).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"hello_derive\".into()\n    }\n    fn tool_spec(&self) -> swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n            .name(\"hello_derive\")\n            .description(\"Hello derive\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(HelloDeriveArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\nimpl From<HelloDerive> for Box<dyn ::swiftide::chat_completion::Tool> {\n    fn from(val: HelloDerive) -> Self {\n        Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_multiple_args.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct SearchCodeArgs {\n    pub code_query: String,\n    pub other: String,\n}\n#[derive(Clone, Default)]\npub struct SearchCode {}\npub fn search_code() -> Box<dyn ::swiftide::chat_completion::Tool> {\n    Box::new(SearchCode {}) as Box<dyn ::swiftide::chat_completion::Tool>\n}\nimpl SearchCode {\n    pub async fn search_code(\n        &self,\n        context: &dyn AgentContext,\n        code_query: &str,\n        other: &str,\n    ) -> Result<ToolOutput> {\n        return Ok(\"hello\".into());\n    }\n}\n#[::swiftide::reexports::async_trait::async_trait]\nimpl ::swiftide::chat_completion::Tool for SearchCode {\n    async fn invoke(\n        &self,\n        agent_context: &dyn ::swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> ::std::result::Result<\n        ::swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"search_code\").into(),\n                ),\n            )\n        };\n        let args: SearchCodeArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.search_code(agent_context, &args.code_query, &args.other).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"search_code\".into()\n    }\n    fn tool_spec(&self) -> ::swiftide::chat_completion::ToolSpec {\n        
swiftide::chat_completion::ToolSpec::builder()\n            .name(\"search_code\")\n            .description(\"Hello world tool\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(SearchCodeArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\nimpl From<SearchCode> for Box<dyn ::swiftide::chat_completion::Tool> {\n    fn from(val: SearchCode) -> Self {\n        Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_single_arg.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct SearchCodeArgs {\n    pub code_query: String,\n}\n#[derive(Clone, Default)]\npub struct SearchCode {}\npub fn search_code() -> Box<dyn ::swiftide::chat_completion::Tool> {\n    Box::new(SearchCode {}) as Box<dyn ::swiftide::chat_completion::Tool>\n}\nimpl SearchCode {\n    pub async fn search_code(\n        &self,\n        context: &dyn AgentContext,\n        code_query: &str,\n    ) -> Result<ToolOutput, ToolError> {\n        return Ok(\"hello\".into());\n    }\n}\n#[::swiftide::reexports::async_trait::async_trait]\nimpl ::swiftide::chat_completion::Tool for SearchCode {\n    async fn invoke(\n        &self,\n        agent_context: &dyn ::swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> ::std::result::Result<\n        ::swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"search_code\").into(),\n                ),\n            )\n        };\n        let args: SearchCodeArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.search_code(agent_context, &args.code_query).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"search_code\".into()\n    }\n    fn tool_spec(&self) -> ::swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n            
.name(\"search_code\")\n            .description(\"Hello world tool\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(SearchCodeArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\nimpl From<SearchCode> for Box<dyn ::swiftide::chat_completion::Tool> {\n    fn from(val: SearchCode) -> Self {\n        Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/snapshots/swiftide_macros__tool__tests__snapshot_single_arg_option.snap",
    "content": "---\nsource: swiftide-macros/src/tool/mod.rs\nexpression: \"crate::test_utils::pretty_macro_output(&output)\"\n---\n#[derive(\n    ::swiftide::reexports::serde::Serialize,\n    ::swiftide::reexports::serde::Deserialize,\n    ::swiftide::reexports::schemars::JsonSchema,\n    Debug\n)]\n#[schemars(crate = \"::swiftide::reexports::schemars\", deny_unknown_fields)]\npub struct SearchCodeArgs {\n    pub code_query: Option<String>,\n}\n#[derive(Clone, Default)]\npub struct SearchCode {}\npub fn search_code() -> Box<dyn ::swiftide::chat_completion::Tool> {\n    Box::new(SearchCode {}) as Box<dyn ::swiftide::chat_completion::Tool>\n}\nimpl SearchCode {\n    pub async fn search_code(\n        &self,\n        context: &dyn AgentContext,\n        code_query: &Option<String>,\n    ) -> Result<ToolOutput, ToolError> {\n        return Ok(\"hello\".into());\n    }\n}\n#[::swiftide::reexports::async_trait::async_trait]\nimpl ::swiftide::chat_completion::Tool for SearchCode {\n    async fn invoke(\n        &self,\n        agent_context: &dyn ::swiftide::traits::AgentContext,\n        tool_call: &swiftide::chat_completion::ToolCall,\n    ) -> ::std::result::Result<\n        ::swiftide::chat_completion::ToolOutput,\n        ::swiftide::chat_completion::errors::ToolError,\n    > {\n        let Some(args) = tool_call.args() else {\n            return Err(\n                ::swiftide::chat_completion::errors::ToolError::MissingArguments(\n                    format!(\"No arguments provided for {}\", \"search_code\").into(),\n                ),\n            )\n        };\n        let args: SearchCodeArgs = ::swiftide::reexports::serde_json::from_str(&args)?;\n        return self.search_code(agent_context, &args.code_query).await;\n    }\n    fn name<'TOOL>(&'TOOL self) -> std::borrow::Cow<'TOOL, str> {\n        \"search_code\".into()\n    }\n    fn tool_spec(&self) -> ::swiftide::chat_completion::ToolSpec {\n        swiftide::chat_completion::ToolSpec::builder()\n         
   .name(\"search_code\")\n            .description(\"Hello world tool\")\n            .parameters_schema(\n                ::swiftide::reexports::schemars::schema_for!(SearchCodeArgs),\n            )\n            .build()\n            .unwrap()\n    }\n}\nimpl From<SearchCode> for Box<dyn ::swiftide::chat_completion::Tool> {\n    fn from(val: SearchCode) -> Self {\n        Box::new(val) as Box<dyn ::swiftide::chat_completion::Tool>\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/tool_spec.rs",
    "content": "use proc_macro2::TokenStream;\nuse quote::quote;\n\nuse super::args::{Description, ToolArgs};\n\npub fn tool_spec(args: &ToolArgs) -> TokenStream {\n    let tool_name = args.tool_name();\n    let description = match &args.tool_description() {\n        Description::Literal(description) => quote! { #description },\n        Description::Path(path) => quote! { #path },\n    };\n\n    let builder = quote! {\n        swiftide::chat_completion::ToolSpec::builder()\n            .name(#tool_name)\n            .description(#description)\n    };\n\n    if args.tool_params().is_empty() {\n        quote! { #builder.build().unwrap() }\n    } else {\n        let args_struct_ident = args.args_struct_ident();\n        quote! {\n            #builder\n                .parameters_schema(::swiftide::reexports::schemars::schema_for!(#args_struct_ident))\n                .build()\n                .unwrap()\n        }\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/src/tool/wrapped.rs",
    "content": "use proc_macro2::TokenStream;\nuse quote::quote;\nuse syn::{Ident, ItemFn};\n\npub(crate) fn struct_name(input: &ItemFn) -> Ident {\n    let struct_name_str = input\n        .sig\n        .ident\n        .to_string()\n        .split('_') // Split by underscores\n        .map(|s| {\n            let mut chars = s.chars();\n            chars\n                .next()\n                .map(|c| c.to_ascii_uppercase())\n                .into_iter()\n                .collect::<String>()\n                + chars.as_str()\n        })\n        .collect::<String>();\n    Ident::new(&struct_name_str, input.sig.ident.span())\n}\n\npub(crate) fn wrap_tool_fn(input: &ItemFn) -> TokenStream {\n    let fn_name = &input.sig.ident;\n    let fn_args = &input.sig.inputs;\n    let fn_body = &input.block;\n    let fn_output = &input.sig.output;\n\n    let struct_name = struct_name(input);\n\n    let fn_args = fn_args.iter();\n\n    quote! {\n        #[derive(Clone, Default)]\n        pub struct #struct_name {}\n\n        pub fn #fn_name() -> Box<dyn ::swiftide::chat_completion::Tool> {\n            Box::new(#struct_name {}) as Box<dyn ::swiftide::chat_completion::Tool>\n        }\n\n        impl #struct_name {\n            pub async fn #fn_name(&self, #(#fn_args),*) #fn_output #fn_body\n        }\n\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::assert_ts_eq;\n\n    use super::*;\n    use quote::quote;\n    use syn::{ItemFn, parse_quote};\n\n    #[test]\n    fn test_wrap_tool_fn() {\n        let input: ItemFn = parse_quote! {\n            pub async fn search_code(context: &dyn swiftide::traits::AgentContext, code_query: &str) -> std::result::Result<swiftide::chat_completion::ToolOutput, ::swiftide::chat_completion::errors::ToolError> {\n                return Ok(\"hello\".into())\n            }\n        };\n\n        let output = wrap_tool_fn(&input);\n\n        let expected = quote! 
{\n            #[derive(Clone, Default)]\n            pub struct SearchCode {}\n\n            pub fn search_code() -> Box<dyn ::swiftide::chat_completion::Tool> {\n                Box::new(SearchCode {}) as Box<dyn ::swiftide::chat_completion::Tool>\n            }\n\n            impl SearchCode {\n                pub async fn search_code(&self, context: &dyn swiftide::traits::AgentContext, code_query: &str) -> std::result::Result<swiftide::chat_completion::ToolOutput, ::swiftide::chat_completion::errors::ToolError> {\n                    return Ok(\"hello\".into())\n                }\n\n            }\n        };\n\n        assert_ts_eq!(&output, &expected);\n    }\n\n    #[test]\n    fn test_wrap_multiple_args() {\n        let input: ItemFn = parse_quote! {\n            pub async fn search_code(context: &dyn swiftide::traits::AgentContext, code_query: &str, other_arg: &str) -> std::result::Result<swiftide::chat_completion::ToolOutput, ::swiftide::chat_completion::errors::ToolError> {\n                return Ok(\"hello\".into())\n            }\n        };\n\n        let output = wrap_tool_fn(&input);\n\n        let expected = quote! {\n            #[derive(Clone, Default)]\n            pub struct SearchCode {}\n\n            pub fn search_code() -> Box<dyn ::swiftide::chat_completion::Tool> {\n                Box::new(SearchCode {}) as Box<dyn ::swiftide::chat_completion::Tool>\n            }\n\n            impl SearchCode {\n                pub async fn search_code(&self, context: &dyn swiftide::traits::AgentContext, code_query: &str, other_arg: &str) -> std::result::Result<swiftide::chat_completion::ToolOutput, ::swiftide::chat_completion::errors::ToolError> {\n                    return Ok(\"hello\".into())\n                }\n\n            }\n        };\n\n        assert_ts_eq!(&output, &expected);\n    }\n}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_derive_missing_description.rs",
    "content": "use swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\nuse swiftide_macros::Tool;\n\n#[derive(Clone, Tool)]\nstruct MyToolNoArgs {\n    test: String,\n}\n\nimpl MyToolNoArgs {\n    async fn my_tool_no_args(\n        &self,\n        _agent_context: &dyn AgentContext,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_derive_missing_description.stderr",
    "content": "error: Missing field `description`\n --> tests/tool/tool_derive_missing_description.rs:5:17\n  |\n5 | #[derive(Clone, Tool)]\n  |                 ^^^^\n  |\n  = note: this error originates in the derive macro `Tool` (in Nightly builds, run with -Z macro-backtrace for more info)\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_derive_pass.rs",
    "content": "#![allow(unused_variables)]\nuse swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\nuse swiftide_macros::Tool;\n\n#[derive(Clone, Tool)]\n#[tool(\n    description = \"Hello tool\",\n    param(name = \"test\", description = \"My param\")\n)]\nstruct MyTool {\n    test: String,\n}\n\nimpl MyTool {\n    async fn my_tool(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &str,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello {test}\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(\n    description = \"Hello tool\",\n    param(name = \"test\", description = \"My param\"),\n    param(name = \"other\", description = \"My other param\")\n)]\nstruct MyToolMultiParams {}\n\nimpl MyToolMultiParams {\n    async fn my_tool_multi_params(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &str,\n        other: &str,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello {test} {other}\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = \"Hello tool\")]\nstruct MyToolNoArgs {\n    test: String,\n}\n\nimpl MyToolNoArgs {\n    async fn my_tool_no_args(\n        &self,\n        agent_context: &dyn AgentContext,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = \"Hello tool\")]\nstruct MyToolLifetime<'a> {\n    test: &'a str,\n}\n\nimpl MyToolLifetime<'_> {\n    async fn my_tool_lifetime(\n        &self,\n        agent_context: &dyn AgentContext,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\nconst DESCRIPTION: &str = \"Hello tool\";\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION)]\nstruct MyToolConst<'a> {\n    test: &'a str,\n}\n\nimpl MyToolConst<'_> {\n    async fn my_tool_const(\n        &self,\n        agent_context: &dyn AgentContext,\n    ) -> 
Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION,\n    param(name = \"test\", description = \"My param\", json_type = \"number\")\n)]\nstruct MyToolNumber;\n\nimpl MyToolNumber {\n    async fn my_tool_number(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &usize,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION,\n    param(name = \"test\", description = \"My param\", rust_type = \"usize\")\n)]\nstruct MyToolNumber2;\n\nimpl MyToolNumber2 {\n    async fn my_tool_number_2(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &usize,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION,\n    name = \"my_very_renamed_tool\",\n    fn_name = \"my_very_renamed_tool\",\n    param(name = \"test\", description = \"My param\", rust_type = \"usize\")\n)]\nstruct MyRenamedTool;\n\nimpl MyRenamedTool {\n    async fn my_very_renamed_tool(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &usize,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION,\n    param(name = \"test\", description = \"My param\", required = false)\n)]\nstruct MyOptionalTool;\n\nimpl MyOptionalTool {\n    async fn my_optional_tool(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &Option<String>,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION,\n    param(name = \"test\", description = \"My param\", rust_type = \"Option<usize>\")\n)]\nstruct MyOptionalTool2;\n\nimpl MyOptionalTool2 {\n    async fn 
my_optional_tool_2(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &Option<usize>,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(description = DESCRIPTION,\n    param(name = \"test\", description = \"My param\")\n)]\nstruct MyGenericTool<S: Send + Sync + Clone> {\n    thing: S,\n}\n\nimpl<S: Send + Sync + Clone> MyGenericTool<S> {\n    async fn my_generic_tool(\n        &self,\n        agent_context: &dyn AgentContext,\n        test: &str,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Hello world\").into())\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_derive_vec_argument_pass.rs",
    "content": "#![allow(unused_variables)]\nuse swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\nuse swiftide_macros::Tool;\n\n#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, swiftide::reexports::schemars::JsonSchema)]\nstruct CustomType {\n    value: String,\n}\n\n#[derive(Clone, Tool)]\n#[tool(\n    description = \"Tool that takes a Vec<CustomType>\",\n    param(name = \"items\", description = \"items\", rust_type = \"Vec<CustomType>\")\n)]\nstruct VecTool;\n\nimpl VecTool {\n    async fn vec_tool(\n        &self,\n        agent_context: &dyn AgentContext,\n        items: Vec<CustomType>,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Received {} items\", items.len()).into())\n    }\n}\n\n#[derive(Clone, Tool)]\n#[tool(\n    description = \"Tool that takes nested Vec<CustomType>\",\n    param(name = \"items\", description = \"nested items\", rust_type = \"Vec<Vec<CustomType>>\")\n)]\nstruct NestedVecTool;\n\nimpl NestedVecTool {\n    async fn nested_vec_tool(\n        &self,\n        agent_context: &dyn AgentContext,\n        items: Vec<Vec<CustomType>>,\n    ) -> Result<ToolOutput, ToolError> {\n        Ok(format!(\"Received {} groups\", items.len()).into())\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_missing_arg_fail.rs",
    "content": "use swiftide::chat_completion::errors::ToolError;\nuse swiftide::chat_completion::ToolOutput;\nuse swiftide::traits::AgentContext;\n\n#[swiftide_macros::tool(\n    description = \"My first tool\",\n    param(name = \"msg\", description = \"A message for testing\")\n)]\nasync fn basic_tool(\n    _agent_context: &dyn AgentContext,\n    msg: &str,\n    other: &str,\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\nconst READ_FILE: &str = \"Read a file\";\n\n#[swiftide_macros::tool(\n    description = READ_FILE,\n    param(name = \"number\", description = \"Number to guess\")\n)]\nasync fn guess_a_number(\n    _context: &dyn AgentContext,\n    number: usize,\n) -> Result<ToolOutput, ToolError> {\n    let actual_number = 42;\n\n    if number == actual_number {\n        Ok(\"You guessed it!\".into())\n    } else {\n        Ok(\"Try again!\".into())\n    }\n}\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_missing_arg_fail.stderr",
    "content": "error: The following parameters are missing from the spec: [\"other\"]\n --> tests/tool/tool_missing_arg_fail.rs:5:1\n  |\n5 | / #[swiftide_macros::tool(\n6 | |     description = \"My first tool\",\n7 | |     param(name = \"msg\", description = \"A message for testing\")\n8 | | )]\n  | |__^\n  |\n  = note: this error originates in the attribute macro `swiftide_macros::tool` (in Nightly builds, run with -Z macro-backtrace for more info)\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_missing_parameter_fail.rs",
    "content": "#[swiftide_macros::tool(\n    description = \"My first tool\",\n    param(name = \"Message\", description = \"A message for testing\")\n)]\nasync fn basic_tool(_agent_context: &dyn AgentContext, msg: &str) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_missing_parameter_fail.stderr",
    "content": "error: The following parameters are missing from the function signature: [\"Message\"]\n --> tests/tool/tool_missing_parameter_fail.rs:1:1\n  |\n1 | / #[swiftide_macros::tool(\n2 | |     description = \"My first tool\",\n3 | |     param(name = \"Message\", description = \"A message for testing\")\n4 | | )]\n  | |__^\n  |\n  = note: this error originates in the attribute macro `swiftide_macros::tool` (in Nightly builds, run with -Z macro-backtrace for more info)\n\nerror: The following parameters are missing from the spec: [\"msg\"]\n --> tests/tool/tool_missing_parameter_fail.rs:1:1\n  |\n1 | / #[swiftide_macros::tool(\n2 | |     description = \"My first tool\",\n3 | |     param(name = \"Message\", description = \"A message for testing\")\n4 | | )]\n  | |__^\n  |\n  = note: this error originates in the attribute macro `swiftide_macros::tool` (in Nightly builds, run with -Z macro-backtrace for more info)\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_multiple_arguments_pass.rs",
    "content": "use swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\n\n#[swiftide_macros::tool(\n    description = \"My first tool\",\n    param(name = \"msg\", description = \"A message for testing\"),\n    param(name = \"other\", description = \"A message for testing\")\n)]\nasync fn basic_tool(\n    _agent_context: &dyn AgentContext,\n    msg: &str,\n    other: &str,\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_no_argument_pass.rs",
    "content": "use swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\n\n#[swiftide_macros::tool(description = \"My first tool\")]\nasync fn basic_tool(_agent_context: &dyn AgentContext) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello tool\").into())\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_object_argument_pass.rs",
    "content": "use std::collections::BTreeMap;\n\nuse serde_json::Value;\nuse swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\n\n#[swiftide_macros::tool(\n    description = \"Tool that accepts object payloads\",\n    param(name = \"payload\", description = \"Arbitrary JSON object\")\n)]\nasync fn object_tool(\n    _ctx: &dyn AgentContext,\n    payload: BTreeMap<String, Value>,\n) -> Result<ToolOutput, ToolError> {\n    Ok(ToolOutput::text(format!(\"keys={}\", payload.len())))\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool/tool_single_argument_pass.rs",
    "content": "use swiftide::chat_completion::{errors::ToolError, ToolOutput};\nuse swiftide::traits::AgentContext;\n\n#[swiftide_macros::tool(\n    description = \"My first tool\",\n    param(name = \"msg\", description = \"A message for testing\")\n)]\nasync fn basic_tool(_agent_context: &dyn AgentContext, msg: &str) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\n#[swiftide_macros::tool(\n    description = \"My first num tool\",\n    param(\n        name = \"msg\",\n        description = \"A message for testing\",\n        json_type = \"number\"\n    )\n)]\nasync fn basic_tool_num(\n    _agent_context: &dyn AgentContext,\n    msg: i32,\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\n#[swiftide_macros::tool(\n    description = \"My first num tool\",\n    param(name = \"msg\", description = \"A message for testing\")\n)]\nasync fn basic_tool_num_no_type(\n    _agent_context: &dyn AgentContext,\n    msg: i32,\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\n#[swiftide_macros::tool(\n    description = \"My first array tool\",\n    param(\n        name = \"msg\",\n        description = \"A message for testing\",\n        json_type = \"array\"\n    )\n)]\nasync fn basic_tool_vec(\n    _agent_context: &dyn AgentContext,\n    msg: Vec<String>,\n) -> Result<ToolOutput, ToolError> {\n    let msg = msg.join(\", \");\n    Ok(format!(\"Hello {msg}\").into())\n}\n\n#[swiftide_macros::tool(\n    description = \"My first bool tool\",\n    param(\n        name = \"msg\",\n        description = \"A message for testing\",\n        json_type = \"boolean\"\n    )\n)]\nasync fn basic_tool_bool(\n    _agent_context: &dyn AgentContext,\n    msg: bool,\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg}\").into())\n}\n\n#[swiftide_macros::tool(\n    description = \"My first num slice tool\",\n    param(\n        name = \"msg\",\n        description = \"A message 
for testing\",\n        json_type = \"array\"\n    )\n)]\nasync fn basic_tool_num_slice(\n    _agent_context: &dyn AgentContext,\n    msg: &[i32],\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg:?}\").into())\n}\n\n#[swiftide_macros::tool(\n    description = \"My first num slice tool\",\n    param(name = \"msg\", description = \"A message for testing\")\n)]\nasync fn basic_tool_num_optional(\n    _agent_context: &dyn AgentContext,\n    msg: Option<i32>,\n) -> Result<ToolOutput, ToolError> {\n    Ok(format!(\"Hello {msg:?}\").into())\n}\n\nfn main() {}\n"
  },
  {
    "path": "swiftide-macros/tests/tool.rs",
    "content": "#[rustversion::attr(nightly, ignore = \"nightly has different output\")]\n#[test]\nfn test_tool() {\n    let t = trybuild::TestCases::new();\n    t.pass(\"tests/tool/tool_single_argument_pass.rs\");\n    t.pass(\"tests/tool/tool_no_argument_pass.rs\");\n    t.pass(\"tests/tool/tool_multiple_arguments_pass.rs\");\n    t.pass(\"tests/tool/tool_object_argument_pass.rs\");\n    t.compile_fail(\"tests/tool/tool_missing_arg_fail.rs\");\n    t.compile_fail(\"tests/tool/tool_missing_parameter_fail.rs\");\n}\n\n#[rustversion::attr(nightly, ignore = \"nightly has different output\")]\n#[test]\nfn test_tool_derive() {\n    let t = trybuild::TestCases::new();\n    t.pass(\"tests/tool/tool_derive_pass.rs\");\n    t.pass(\"tests/tool/tool_derive_vec_argument_pass.rs\");\n    t.compile_fail(\"tests/tool/tool_derive_missing_description.rs\");\n}\n"
  },
  {
    "path": "swiftide-query/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-query\"\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\nasync-trait = { workspace = true }\nderive_builder = { workspace = true }\nfutures-util = { workspace = true }\ntokio = { workspace = true }\nnum_cpus = { workspace = true }\ntracing = { workspace = true }\nindoc = { workspace = true }\nserde = { workspace = true }\nserde_json = { workspace = true }\ntera = { workspace = true }\n\n# Internal\nswiftide-core = { path = \"../swiftide-core\", version = \"0.32.1\" }\n\n[dev-dependencies]\nswiftide-core = { path = \"../swiftide-core\", features = [\"test-utils\"] }\n\ninsta = { workspace = true }\n\n\n[lints]\nworkspace = true\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide-query/src/answers/mod.rs",
    "content": "//! Given a query, generate an answer\n\nmod simple;\n\npub use simple::*;\n"
  },
  {
    "path": "swiftide-query/src/answers/simple.rs",
    "content": "//! Generate an answer based on the current query\nuse std::sync::Arc;\nuse swiftide_core::{\n    Answer,\n    document::Document,\n    indexing::SimplePrompt,\n    prelude::*,\n    prompt::Prompt,\n    querying::{Query, states},\n};\n\n/// Generate an answer based on the current query\n///\n/// For most general purposes, this transformer should provide a sensible default. It takes either\n/// a transformation that has already been applied to the documents (in `Query::current`), or the\n/// documents themselves, and will then feed them as context with the _original_ question to an llm\n/// to generate an answer.\n///\n/// For the template context, the following variables are available:\n/// - **question**: The original question asked by the user\n/// - **original**: Alias for `question`\n/// - **current**: The current transformed query\n/// - **documents**: The documents to use as context\n///\n/// Optionally, a custom document template can be provided to render the documents in a specific\n/// way.\n#[derive(Debug, Clone, Builder)]\npub struct Simple {\n    #[builder(setter(custom))]\n    client: Arc<dyn SimplePrompt>,\n    #[builder(default = \"default_prompt()\")]\n    prompt_template: Prompt,\n    #[builder(default, setter(into, strip_option))]\n    document_template: Option<Prompt>,\n}\n\nimpl Simple {\n    pub fn builder() -> SimpleBuilder {\n        SimpleBuilder::default()\n    }\n\n    /// Builds a new simple answer generator from a client that implements [`SimplePrompt`].\n    ///\n    /// # Panics\n    ///\n    /// Panics if the build failed\n    pub fn from_client(client: impl SimplePrompt + 'static) -> Simple {\n        SimpleBuilder::default()\n            .client(client)\n            .to_owned()\n            .build()\n            .expect(\"Failed to build Simple\")\n    }\n}\n\nimpl SimpleBuilder {\n    pub fn client(&mut self, client: impl SimplePrompt + 'static) -> &mut Self {\n        self.client = Some(Arc::new(client) as Arc<dyn 
SimplePrompt>);\n        self\n    }\n}\n\nfn default_prompt() -> Prompt {\n    indoc::indoc! {\"\n    Answer the following question based on the context provided:\n    {{ question }}\n\n    ## Constraints\n    * Do not include any information that is not in the provided context.\n    * If the question cannot be answered by the provided context, state that it cannot be answered.\n    * Answer the question completely and format it as markdown.\n\n    ## Context\n\n    ---\n    {{ documents }}\n    ---\n    \"}\n    .into()\n}\n\n#[async_trait]\nimpl Answer for Simple {\n    #[tracing::instrument(skip_all)]\n    async fn answer(&self, query: Query<states::Retrieved>) -> Result<Query<states::Answered>> {\n        let mut context = tera::Context::new();\n\n        context.insert(\"question\", query.original());\n        context.insert(\"original\", query.original());\n        context.insert(\"current\", query.current());\n\n        // If there is a current transformation that is different from the original (transformed)\n        // query, use those as documents (i.e. 
a summary)\n        let documents = if !query.current().is_empty()\n            && query\n                .history()\n                .iter()\n                .rfind(|e| e.is_retrieval())\n                .is_some_and(|h| h.before() != query.current())\n        {\n            query.current().to_string()\n        } else if let Some(template) = &self.document_template {\n            let mut rendered_documents = Vec::new();\n            for document in query.documents() {\n                let rendered = template\n                    .clone()\n                    .with_context(tera::Context::from_serialize(document)?)\n                    .render()?;\n                rendered_documents.push(rendered);\n            }\n\n            rendered_documents.join(\"\\n---\\n\")\n        } else {\n            query\n                .documents()\n                .iter()\n                .map(Document::content)\n                .collect::<Vec<_>>()\n                .join(\"\\n---\\n\")\n        };\n        context.insert(\"documents\", &documents);\n\n        let answer = self\n            .client\n            .prompt(self.prompt_template.clone().with_context(context))\n            .await?;\n\n        Ok(query.answered(answer))\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use std::sync::Mutex;\n\n    use insta::assert_snapshot;\n    use swiftide_core::{MockSimplePrompt, indexing::Metadata, querying::TransformationEvent};\n\n    use super::*;\n\n    assert_default_prompt_snapshot!(\"question\" => \"What is love?\", \"documents\" => \"My context\");\n\n    #[tokio::test]\n    async fn test_uses_current_if_present() {\n        let mut mock_client = MockSimplePrompt::new();\n\n        // I'll buy a beer for the first person who can think of a less insane way to do this\n        let received_prompt = Arc::new(Mutex::new(None));\n        let cloned = received_prompt.clone();\n        mock_client\n            .expect_prompt()\n            .withf(move |prompt| {\n                
cloned.lock().unwrap().replace(prompt.clone());\n                true\n            })\n            .once()\n            .returning(|_| Ok(String::default()));\n\n        let documents = vec![\n            Document::new(\"First document\", Some(Metadata::from((\"some\", \"metadata\")))),\n            Document::new(\n                \"Second document\",\n                Some(Metadata::from((\"other\", \"metadata\"))),\n            ),\n        ];\n        let query: Query<states::Retrieved> = Query::builder()\n            .original(\"original\")\n            .current(\"A fictional generated summary\")\n            .state(states::Retrieved)\n            .transformation_history(vec![TransformationEvent::Retrieved {\n                before: \"abc\".to_string(),\n                after: \"abc\".to_string(),\n                documents: documents.clone(),\n            }])\n            .documents(documents)\n            .build()\n            .unwrap();\n\n        let transformer = Simple::builder().client(mock_client).build().unwrap();\n\n        transformer.answer(query).await.unwrap();\n\n        let received_prompt = received_prompt.lock().unwrap().take().unwrap();\n        let rendered = received_prompt.render().unwrap();\n        assert_snapshot!(rendered);\n    }\n\n    #[tokio::test]\n    async fn test_custom_document_template() {\n        let mut mock_client = MockSimplePrompt::new();\n\n        // I'll buy a beer for the first person who can think of a less insane way to do this\n        let received_prompt = Arc::new(Mutex::new(None));\n        let cloned = received_prompt.clone();\n        mock_client\n            .expect_prompt()\n            .withf(move |prompt| {\n                cloned.lock().unwrap().replace(prompt.clone());\n                true\n            })\n            .once()\n            .returning(|_| Ok(String::default()));\n\n        let documents = vec![\n            Document::new(\"First document\", Some(Metadata::from((\"some\", 
\"metadata\")))),\n            Document::new(\n                \"Second document\",\n                Some(Metadata::from((\"other\", \"metadata\"))),\n            ),\n        ];\n        let query: Query<states::Retrieved> = Query::builder()\n            .original(\"original\")\n            .current(String::default())\n            .state(states::Retrieved)\n            .transformation_history(vec![TransformationEvent::Retrieved {\n                before: \"abc\".to_string(),\n                after: \"abc\".to_string(),\n                documents: documents.clone(),\n            }])\n            .documents(documents)\n            .build()\n            .unwrap();\n\n        let transformer = Simple::builder()\n            .client(mock_client)\n            .document_template(indoc::indoc! {\"\n                {% for key, value in metadata -%}\n                    {{ key }}: {{ value }}\n                {% endfor -%}\n\n                {{ content }}\"})\n            .build()\n            .unwrap();\n\n        transformer.answer(query).await.unwrap();\n\n        let received_prompt = received_prompt.lock().unwrap().take().unwrap();\n        let rendered = received_prompt.render().unwrap();\n        assert_snapshot!(rendered);\n    }\n}\n"
  },
  {
    "path": "swiftide-query/src/answers/snapshots/swiftide_query__answers__simple__test__custom_document_template.snap",
    "content": "---\nsource: swiftide-query/src/answers/simple.rs\nexpression: rendered\n---\nAnswer the following question based on the context provided:\noriginal\n\n## Constraints\n* Do not include any information that is not in the provided context.\n* If the question cannot be answered by the provided context, state that it cannot be answered.\n* Answer the question completely and format it as markdown.\n\n## Context\n\n---\nsome: metadata\nFirst document\n---\nother: metadata\nSecond document\n---\n"
  },
  {
    "path": "swiftide-query/src/answers/snapshots/swiftide_query__answers__simple__test__default_prompt.snap",
    "content": "---\nsource: swiftide-query/src/answers/simple.rs\nexpression: prompt.render().await.unwrap()\n---\nAnswer the following question based on the context provided:\nWhat is love?\n\n## Constraints\n* Do not include any information that is not in the provided context.\n* If the question cannot be answered by the provided context, state that it cannot be answered.\n* Answer the question completely and format it as markdown.\n\n## Context\n\n---\nMy context\n---\n"
  },
  {
    "path": "swiftide-query/src/answers/snapshots/swiftide_query__answers__simple__test__uses_current_if_present.snap",
    "content": "---\nsource: swiftide-query/src/answers/simple.rs\nexpression: rendered\n---\nAnswer the following question based on the context provided:\noriginal\n\n## Constraints\n* Do not include any information that is not in the provided context.\n* If the question cannot be answered by the provided context, state that it cannot be answered.\n* Answer the question completely and format it as markdown.\n\n## Context\n\n---\nA fictional generated summary\n---\n"
  },
  {
    "path": "swiftide-query/src/evaluators/mod.rs",
    "content": "//! This module contains evaluators for evaluating the quality of a pipeline.\npub mod ragas;\n"
  },
  {
    "path": "swiftide-query/src/evaluators/ragas.rs",
    "content": "//! The Ragas evaluator allows you to export a RAGAS compatible JSON dataset.\n//!\n//! RAGAS requires a ground truth to compare to. You can either record the answers for an initial\n//! dataset, or provide the ground truth yourself.\n//!\n//! Refer to the RAGAS documentation on how to use the dataset or take a look at a more involved\n//! example at [swiftide-tutorials](https://github.com/bosun-ai/swiftide-tutorial).\n//!\n//! # Example\n//!\n//! ```ignore\n//! # use swiftide_query::*;\n//! # use anyhow::{Result, Context};\n//! # #[tokio::main]\n//! # async fn main() -> anyhow::Result<()> {\n//!\n//! let openai = swiftide::integrations::openai::OpenAi::default();\n//! let qdrant = swiftide::integrations::qdrant::Qdrant::default();\n//!\n//! let ragas = evaluators::ragas::Ragas::from_prepared_questions(questions);\n//!\n//! let pipeline = query::Pipeline::default()\n//!     .evaluate_with(ragas.clone())\n//!     .then_transform_query(query_transformers::GenerateSubquestions::from_client(openai.clone()))\n//!     .then_transform_query(query_transformers::Embed::from_client(\n//!         openai.clone(),\n//!     ))\n//!     .then_retrieve(qdrant.clone())\n//!     .then_answer(answers::Simple::from_client(openai.clone()));\n//!\n//! pipeline.query_all(ragas.questions().await).await.unwrap();\n//!\n//! std::fs::write(\"output.json\", ragas.to_json().await).unwrap();\n//! # Ok(())\n//! 
# }\n//! ```\nuse anyhow::Result;\nuse async_trait::async_trait;\nuse serde::{Deserialize, Serialize};\nuse serde_json::json;\nuse std::{collections::HashMap, str::FromStr, sync::Arc};\nuse tokio::sync::RwLock;\n\nuse swiftide_core::{\n    EvaluateQuery,\n    querying::{Query, QueryEvaluation, states},\n};\n\n/// Ragas evaluator to be used in a pipeline\n#[derive(Debug, Clone)]\npub struct Ragas {\n    dataset: Arc<RwLock<EvaluationDataSet>>,\n}\n\n/// Row structure for RAGAS compatible JSON\n#[derive(Debug, Clone, Default, Serialize, Deserialize)]\npub struct EvaluationData {\n    question: String,\n    answer: String,\n    contexts: Vec<String>,\n    ground_truth: String,\n}\n\n/// Dataset for RAGAS compatible JSON, indexed by question\n#[derive(Debug, Clone)]\npub struct EvaluationDataSet(HashMap<String, EvaluationData>);\n\nimpl Ragas {\n    /// Builds a new Ragas evaluator from a list of questions or a list of tuples with questions and\n    /// ground truths. You can also call `parse` to load a dataset from a JSON string.\n    pub fn from_prepared_questions(questions: impl Into<EvaluationDataSet>) -> Self {\n        Ragas {\n            dataset: Arc::new(RwLock::new(questions.into())),\n        }\n    }\n\n    pub async fn questions(&self) -> Vec<Query<states::Pending>> {\n        self.dataset.read().await.0.keys().map(Into::into).collect()\n    }\n\n    /// Records the current answers as ground truths in the dataset\n    pub async fn record_answers_as_ground_truth(&self) {\n        self.dataset.write().await.record_answers_as_ground_truth();\n    }\n\n    /// Outputs the dataset as a JSON string compatible with RAGAS\n    pub async fn to_json(&self) -> String {\n        self.dataset.read().await.to_json()\n    }\n}\n\n#[async_trait]\nimpl EvaluateQuery for Ragas {\n    #[tracing::instrument(skip_all)]\n    async fn evaluate(&self, query: QueryEvaluation) -> Result<()> {\n        let mut dataset = self.dataset.write().await;\n        dataset.upsert_evaluation(&query)\n 
   }\n}\n\nimpl EvaluationDataSet {\n    pub(crate) fn record_answers_as_ground_truth(&mut self) {\n        for data in self.0.values_mut() {\n            data.ground_truth.clone_from(&data.answer);\n        }\n    }\n\n    pub(crate) fn upsert_evaluation(&mut self, query: &QueryEvaluation) -> Result<()> {\n        match query {\n            QueryEvaluation::RetrieveDocuments(query) => self.upsert_retrieved_documents(query),\n            QueryEvaluation::AnswerQuery(query) => self.upsert_answer(query),\n        }\n    }\n\n    // For each upsert, check if the entry exists and update it, or return an error\n    fn upsert_retrieved_documents(&mut self, query: &Query<states::Retrieved>) -> Result<()> {\n        let question = query.original();\n        let data = self\n            .0\n            .get_mut(question)\n            .ok_or_else(|| anyhow::anyhow!(\"Question not found\"))?;\n\n        data.contexts = query\n            .documents()\n            .iter()\n            .map(|d| d.content().to_string())\n            .collect::<Vec<_>>();\n        Ok(())\n    }\n\n    fn upsert_answer(&mut self, query: &Query<states::Answered>) -> Result<()> {\n        let question = query.original();\n        let data = self\n            .0\n            .get_mut(question)\n            .ok_or_else(|| anyhow::anyhow!(\"Question not found\"))?;\n\n        data.answer = query.answer().to_string();\n\n        Ok(())\n    }\n\n    /// Outputs JSON for RAGAS\n    ///\n    /// # Format\n    ///\n    /// ```json\n    /// [\n    ///   {\n    ///   \"question\": \"What is the capital of France?\",\n    ///   \"answer\": \"Paris\",\n    ///   \"contexts\": [\"Paris is the capital of France\"],\n    ///   \"ground_truth\": \"Paris\"\n    ///   },\n    ///   {\n    ///   \"question\": \"What is the capital of France?\",\n    ///   \"answer\": \"Paris\",\n    ///   \"contexts\": [\"Paris is the capital of France\"],\n    ///   \"ground_truth\": \"Paris\"\n    ///   }\n    /// ]\n    /// ```\n    
pub(crate) fn to_json(&self) -> String {\n        let json_value = json!(self.0.values().collect::<Vec<_>>());\n        serde_json::to_string_pretty(&json_value).unwrap_or_else(|_| json_value.to_string())\n    }\n}\n\n// Can just do a list of questions leaving ground truth, answers, contexts empty\nimpl From<Vec<String>> for EvaluationDataSet {\n    fn from(val: Vec<String>) -> Self {\n        EvaluationDataSet(\n            val.into_iter()\n                .map(|question| {\n                    (\n                        question.clone(),\n                        EvaluationData {\n                            question,\n                            ..EvaluationData::default()\n                        },\n                    )\n                })\n                .collect(),\n        )\n    }\n}\n\nimpl From<&[String]> for EvaluationDataSet {\n    fn from(val: &[String]) -> Self {\n        EvaluationDataSet(\n            val.iter()\n                .map(|question| {\n                    (\n                        question.clone(),\n                        EvaluationData {\n                            question: question.clone(),\n                            ..EvaluationData::default()\n                        },\n                    )\n                })\n                .collect(),\n        )\n    }\n}\n\n// Can take a list of tuples for questions and ground truths\nimpl From<Vec<(String, String)>> for EvaluationDataSet {\n    fn from(val: Vec<(String, String)>) -> Self {\n        EvaluationDataSet(\n            val.into_iter()\n                .map(|(question, ground_truth)| {\n                    (\n                        question.clone(),\n                        EvaluationData {\n                            question,\n                            ground_truth,\n                            ..EvaluationData::default()\n                        },\n                    )\n                })\n                .collect(),\n        )\n    }\n}\n\n/// Parse an existing 
dataset from a JSON string\nimpl FromStr for EvaluationDataSet {\n    type Err = serde_json::Error;\n\n    fn from_str(val: &str) -> std::prelude::v1::Result<Self, Self::Err> {\n        let data: Vec<EvaluationData> = serde_json::from_str(val)?;\n        Ok(EvaluationDataSet(\n            data.into_iter()\n                .map(|data| (data.question.clone(), data))\n                .collect(),\n        ))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::sync::Arc;\n    use swiftide_core::querying::{Query, QueryEvaluation};\n    use tokio::sync::RwLock;\n\n    #[tokio::test]\n    async fn test_ragas_from_prepared_questions() {\n        let questions = vec![\"What is Rust?\".to_string(), \"What is Tokio?\".to_string()];\n        let ragas = Ragas::from_prepared_questions(questions.clone());\n\n        let stored_questions = ragas.questions().await;\n        assert_eq!(stored_questions.len(), questions.len());\n\n        for question in questions {\n            assert!(stored_questions.iter().any(|q| q.original() == question));\n        }\n    }\n\n    #[tokio::test]\n    async fn test_ragas_record_answers_as_ground_truth() {\n        let dataset = Arc::new(RwLock::new(EvaluationDataSet::from(vec![(\n            \"What is Rust?\".to_string(),\n            \"A programming language\".to_string(),\n        )])));\n        let ragas = Ragas {\n            dataset: dataset.clone(),\n        };\n\n        {\n            let mut lock = dataset.write().await;\n            let data = lock.0.get_mut(\"What is Rust?\").unwrap();\n            data.answer = \"A systems programming language\".to_string();\n        }\n\n        ragas.record_answers_as_ground_truth().await;\n\n        let updated_data = ragas.dataset.read().await;\n        let data = updated_data.0.get(\"What is Rust?\").unwrap();\n        assert_eq!(data.ground_truth, \"A systems programming language\");\n    }\n\n    #[tokio::test]\n    async fn test_ragas_to_json() {\n        let dataset = 
EvaluationDataSet::from(vec![(\n            \"What is Rust?\".to_string(),\n            \"A programming language\".to_string(),\n        )]);\n        let ragas = Ragas {\n            dataset: Arc::new(RwLock::new(dataset)),\n        };\n\n        let json_output = ragas.to_json().await;\n        let expected_json = \"[\\n  {\\n    \\\"answer\\\": \\\"\\\",\\n    \\\"contexts\\\": [],\\n    \\\"ground_truth\\\": \\\"A programming language\\\",\\n    \\\"question\\\": \\\"What is Rust?\\\"\\n  }\\n]\";\n        assert_eq!(json_output, expected_json);\n    }\n\n    #[tokio::test]\n    async fn test_evaluate_query_upsert_retrieved_documents() {\n        let dataset = EvaluationDataSet::from(vec![\"What is Rust?\".to_string()]);\n        let ragas = Ragas {\n            dataset: Arc::new(RwLock::new(dataset.clone())),\n        };\n\n        let query = Query::builder()\n            .original(\"What is Rust?\")\n            .documents(vec![\"Rust is a language\".into()])\n            .build()\n            .unwrap();\n        let evaluation = QueryEvaluation::RetrieveDocuments(query.clone());\n\n        ragas.evaluate(evaluation).await.unwrap();\n\n        let updated_data = ragas.dataset.read().await;\n        let data = updated_data.0.get(\"What is Rust?\").unwrap();\n        assert_eq!(data.contexts, vec![\"Rust is a language\"]);\n    }\n\n    #[tokio::test]\n    async fn test_evaluate_query_upsert_answer() {\n        let dataset = EvaluationDataSet::from(vec![\"What is Rust?\".to_string()]);\n        let ragas = Ragas {\n            dataset: Arc::new(RwLock::new(dataset.clone())),\n        };\n\n        let query = Query::builder()\n            .original(\"What is Rust?\")\n            .current(\"A systems programming language\")\n            .build()\n            .unwrap();\n        let evaluation = QueryEvaluation::AnswerQuery(query.clone());\n\n        ragas.evaluate(evaluation).await.unwrap();\n\n        let updated_data = ragas.dataset.read().await;\n        
let data = updated_data.0.get(\"What is Rust?\").unwrap();\n        assert_eq!(data.answer, \"A systems programming language\");\n    }\n\n    #[tokio::test]\n    async fn test_evaluation_dataset_record_answers_as_ground_truth() {\n        let mut dataset = EvaluationDataSet::from(vec![\"What is Rust?\".to_string()]);\n        let data = dataset.0.get_mut(\"What is Rust?\").unwrap();\n        data.answer = \"A programming language\".to_string();\n\n        dataset.record_answers_as_ground_truth();\n\n        let data = dataset.0.get(\"What is Rust?\").unwrap();\n        assert_eq!(data.ground_truth, \"A programming language\");\n    }\n\n    #[tokio::test]\n    async fn test_evaluation_dataset_to_json() {\n        let dataset = EvaluationDataSet::from(vec![(\n            \"What is Rust?\".to_string(),\n            \"A programming language\".to_string(),\n        )]);\n\n        let json_output = dataset.to_json();\n        let expected_json = \"[\\n  {\\n    \\\"answer\\\": \\\"\\\",\\n    \\\"contexts\\\": [],\\n    \\\"ground_truth\\\": \\\"A programming language\\\",\\n    \\\"question\\\": \\\"What is Rust?\\\"\\n  }\\n]\";\n        assert_eq!(json_output, expected_json);\n    }\n\n    #[tokio::test]\n    async fn test_evaluation_dataset_upsert_retrieved_documents() {\n        let mut dataset = EvaluationDataSet::from(vec![\"What is Rust?\".to_string()]);\n\n        let query = Query::builder()\n            .original(\"What is Rust?\")\n            .documents(vec![\"Rust is a language\".into()])\n            .build()\n            .unwrap();\n        dataset\n            .upsert_evaluation(&QueryEvaluation::RetrieveDocuments(query.clone()))\n            .unwrap();\n\n        let data = dataset.0.get(\"What is Rust?\").unwrap();\n        assert_eq!(data.contexts, vec![\"Rust is a language\"]);\n    }\n\n    #[tokio::test]\n    async fn test_evaluation_dataset_upsert_answer() {\n        let mut dataset = EvaluationDataSet::from(vec![\"What is 
Rust?\".to_string()]);\n\n        let query = Query::builder()\n            .original(\"What is Rust?\")\n            .current(\"A systems programming language\")\n            .build()\n            .unwrap();\n        dataset\n            .upsert_evaluation(&QueryEvaluation::AnswerQuery(query.clone()))\n            .unwrap();\n\n        let data = dataset.0.get(\"What is Rust?\").unwrap();\n        assert_eq!(data.answer, \"A systems programming language\");\n    }\n}\n"
  },
  {
    "path": "swiftide-query/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n\npub mod answers;\nmod query;\npub mod query_transformers;\npub mod response_transformers;\n\npub use query::*;\npub mod evaluators;\n"
  },
  {
    "path": "swiftide-query/src/query/mod.rs",
    "content": "mod pipeline;\n\npub use pipeline::Pipeline;\n"
  },
  {
    "path": "swiftide-query/src/query/pipeline.rs",
    "content": "//! A query pipeline can be used to answer a user query\n//!\n//! The pipeline has a sequence of steps:\n//!     1. Transform the query (i.e. Generating subquestions, embeddings)\n//!     2. Retrieve documents from storage\n//!     3. Transform these documents into a suitable context for answering\n//!     4. Answering the query\n//!\n//! WARN: The query pipeline is in a very early stage!\n//!\n//! Under the hood, it uses a [`SearchStrategy`] that an implementor of [`Retrieve`] (i.e. Qdrant)\n//! must implement.\n//!\n//! A query pipeline is lazy and only runs when query is called.\n\nuse futures_util::TryFutureExt as _;\nuse std::sync::Arc;\nuse swiftide_core::{\n    EvaluateQuery,\n    prelude::*,\n    querying::{\n        Answer, Query, QueryState, QueryStream, Retrieve, SearchStrategy, TransformQuery,\n        TransformResponse, search_strategies::SimilaritySingleEmbedding, states,\n    },\n};\nuse tokio::sync::mpsc::Sender;\n\n/// The starting point of a query pipeline\npub struct Pipeline<\n    'stream,\n    STRATEGY: SearchStrategy = SimilaritySingleEmbedding,\n    STATE: QueryState = states::Pending,\n> {\n    search_strategy: STRATEGY,\n    stream: QueryStream<'stream, STATE>,\n    query_sender: Sender<Result<Query<states::Pending>>>,\n    evaluator: Option<Arc<Box<dyn EvaluateQuery>>>,\n    default_concurrency: usize,\n}\n\n/// By default the [`SearchStrategy`] is [`SimilaritySingleEmbedding`], which embed the current\n/// query and returns a collection of documents.\nimpl Default for Pipeline<'_, SimilaritySingleEmbedding> {\n    fn default() -> Self {\n        let stream = QueryStream::default();\n        Self {\n            search_strategy: SimilaritySingleEmbedding::default(),\n            query_sender: stream\n                .sender\n                .clone()\n                .expect(\"Pipeline received stream without query entrypoint\"),\n            stream,\n            evaluator: None,\n            default_concurrency: 
num_cpus::get(),\n        }\n    }\n}\n\nimpl<'a, STRATEGY: SearchStrategy> Pipeline<'a, STRATEGY> {\n    /// Create a query pipeline from a [`SearchStrategy`]\n    ///\n    /// # Panics\n    ///\n    /// Panics if the inner stream fails to build\n    #[must_use]\n    pub fn from_search_strategy(strategy: STRATEGY) -> Pipeline<'a, STRATEGY> {\n        let stream = QueryStream::default();\n\n        Pipeline {\n            search_strategy: strategy,\n            query_sender: stream\n                .sender\n                .clone()\n                .expect(\"Pipeline received stream without query entrypoint\"),\n            stream,\n            evaluator: None,\n            default_concurrency: num_cpus::get(),\n        }\n    }\n}\n\nimpl<'stream: 'static, STRATEGY> Pipeline<'stream, STRATEGY, states::Pending>\nwhere\n    STRATEGY: SearchStrategy,\n{\n    /// Evaluate queries with an evaluator\n    #[must_use]\n    pub fn evaluate_with<T: EvaluateQuery + 'stream>(mut self, evaluator: T) -> Self {\n        self.evaluator = Some(Arc::new(Box::new(evaluator)));\n\n        self\n    }\n\n    /// Transform a query into something else, see [`crate::query_transformers`]\n    #[must_use]\n    pub fn then_transform_query<T: TransformQuery + 'stream>(\n        self,\n        transformer: T,\n    ) -> Pipeline<'stream, STRATEGY, states::Pending> {\n        let transformer = Arc::new(transformer);\n\n        let Pipeline {\n            stream,\n            query_sender,\n            search_strategy,\n            evaluator,\n            default_concurrency,\n        } = self;\n\n        let new_stream = stream\n            .map_ok(move |query| {\n                let transformer = Arc::clone(&transformer);\n                let span = tracing::info_span!(\"then_transform_query\", query = ?query);\n\n                tokio::spawn(\n                    async move {\n                        let transformed_query = transformer.transform_query(query).await?;\n                        
tracing::debug!(\n                            transformed_query = transformed_query.current(),\n                            query_transformer = transformer.name(),\n                            \"Transformed query\"\n                        );\n\n                        Ok(transformed_query)\n                    }\n                    .instrument(span.or_current()),\n                )\n                .err_into::<anyhow::Error>()\n            })\n            .try_buffer_unordered(default_concurrency)\n            .map(|x| x.and_then(|x| x));\n\n        Pipeline {\n            stream: new_stream.boxed().into(),\n            search_strategy,\n            query_sender,\n            evaluator,\n            default_concurrency,\n        }\n    }\n}\n\nimpl<'stream: 'static, STRATEGY: SearchStrategy + 'stream>\n    Pipeline<'stream, STRATEGY, states::Pending>\n{\n    /// Executes the query based on a search query with a retriever\n    #[must_use]\n    pub fn then_retrieve<T: ToOwned<Owned = impl Retrieve<STRATEGY> + 'stream>>(\n        self,\n        retriever: T,\n    ) -> Pipeline<'stream, STRATEGY, states::Retrieved> {\n        let retriever = Arc::new(retriever.to_owned());\n        let Pipeline {\n            stream,\n            query_sender,\n            search_strategy,\n            evaluator,\n            default_concurrency,\n        } = self;\n\n        let strategy_for_stream = search_strategy.clone();\n        let evaluator_for_stream = evaluator.clone();\n\n        let new_stream = stream\n            .map_ok(move |query| {\n                let search_strategy = strategy_for_stream.clone();\n                let retriever = Arc::clone(&retriever);\n                let span = tracing::info_span!(\"then_retrieve\", query = ?query);\n                let evaluator_for_stream = evaluator_for_stream.clone();\n\n                tokio::spawn(\n                    async move {\n                        let result = retriever.retrieve(&search_strategy, 
query).await?;\n\n                        tracing::debug!(\n                            num_documents = result.documents().len(),\n                            total_bytes = result\n                                .documents()\n                                .iter()\n                                .map(|d| d.bytes().len())\n                                .sum::<usize>(),\n                            \"Retrieved documents\"\n                        );\n\n                        if let Some(evaluator) = evaluator_for_stream.as_ref() {\n                            evaluator.evaluate(result.clone().into()).await?;\n                            Ok(result)\n                        } else {\n                            Ok(result)\n                        }\n                    }\n                    .instrument(span.or_current()),\n                )\n                .err_into::<anyhow::Error>()\n            })\n            .try_buffer_unordered(default_concurrency)\n            .map(|x| x.and_then(|x| x));\n\n        Pipeline {\n            stream: new_stream.boxed().into(),\n            search_strategy: search_strategy.clone(),\n            query_sender,\n            evaluator,\n            default_concurrency,\n        }\n    }\n}\n\nimpl<'stream: 'static, STRATEGY: SearchStrategy> Pipeline<'stream, STRATEGY, states::Retrieved> {\n    /// Transforms a retrieved query into something else\n    #[must_use]\n    pub fn then_transform_response<T: TransformResponse + 'stream>(\n        self,\n        transformer: T,\n    ) -> Pipeline<'stream, STRATEGY, states::Retrieved> {\n        let transformer = Arc::new(transformer);\n        let Pipeline {\n            stream,\n            query_sender,\n            search_strategy,\n            evaluator,\n            default_concurrency,\n        } = self;\n\n        let new_stream = stream\n            .map_ok(move |query| {\n                let transformer = Arc::clone(&transformer);\n                let span = 
tracing::info_span!(\"then_transform_response\", query = ?query);\n                tokio::spawn(\n                    async move {\n                        let transformed_query = transformer.transform_response(query).await?;\n                        tracing::debug!(\n                            transformed_query = transformed_query.current(),\n                            response_transformer = transformer.name(),\n                            \"Transformed response\"\n                        );\n\n                        Ok(transformed_query)\n                    }\n                    .instrument(span.or_current()),\n                )\n                .err_into::<anyhow::Error>()\n            })\n            .try_buffer_unordered(default_concurrency)\n            .map(|x| x.and_then(|x| x));\n\n        Pipeline {\n            stream: new_stream.boxed().into(),\n            search_strategy,\n            query_sender,\n            evaluator,\n            default_concurrency,\n        }\n    }\n}\n\nimpl<'stream: 'static, STRATEGY: SearchStrategy> Pipeline<'stream, STRATEGY, states::Retrieved> {\n    /// Generates an answer based on previous transformations\n    #[must_use]\n    pub fn then_answer<T: Answer + 'stream>(\n        self,\n        answerer: T,\n    ) -> Pipeline<'stream, STRATEGY, states::Answered> {\n        let answerer = Arc::new(answerer);\n        let Pipeline {\n            stream,\n            query_sender,\n            search_strategy,\n            evaluator,\n            default_concurrency,\n        } = self;\n        let evaluator_for_stream = evaluator.clone();\n\n        let new_stream = stream\n            .map_ok(move |query: Query<states::Retrieved>| {\n                let answerer = Arc::clone(&answerer);\n                let span = tracing::info_span!(\"then_answer\", query = ?query);\n                let evaluator_for_stream = evaluator_for_stream.clone();\n\n                tokio::spawn(\n                    async move {\n              
          tracing::debug!(answerer = answerer.name(), \"Answering query\");\n                        let result = answerer.answer(query).await?;\n\n                        if let Some(evaluator) = evaluator_for_stream.as_ref() {\n                            evaluator.evaluate(result.clone().into()).await?;\n                            Ok(result)\n                        } else {\n                            Ok(result)\n                        }\n                    }\n                    .instrument(span.or_current()),\n                )\n                .err_into::<anyhow::Error>()\n            })\n            .try_buffer_unordered(default_concurrency)\n            .map(|x| x.and_then(|x| x));\n\n        Pipeline {\n            stream: new_stream.boxed().into(),\n            search_strategy,\n            query_sender,\n            evaluator,\n            default_concurrency,\n        }\n    }\n}\n\nimpl<STRATEGY: SearchStrategy> Pipeline<'_, STRATEGY, states::Answered> {\n    /// Runs the pipeline with a user query, accepts `&str` as well.\n    ///\n    /// # Errors\n    ///\n    /// Errors if any of the transformations failed or no response was found\n    #[tracing::instrument(skip_all, name = \"query_pipeline.query\")]\n    pub async fn query(\n        mut self,\n        query: impl Into<Query<states::Pending>>,\n    ) -> Result<Query<states::Answered>> {\n        tracing::debug!(\"Sending query\");\n        let now = std::time::Instant::now();\n\n        self.query_sender.send(Ok(query.into())).await?;\n\n        let answer = self.stream.try_next().await?.ok_or_else(|| {\n            anyhow::anyhow!(\"Pipeline did not receive a response from the query stream\")\n        });\n\n        let elapsed_in_seconds = now.elapsed().as_secs();\n        tracing::warn!(\n            elapsed_in_seconds,\n            \"Answered query in {} seconds\",\n            elapsed_in_seconds\n        );\n\n        answer\n    }\n\n    /// Runs the pipeline with a user query, accepts 
`&str` as well.\n    ///\n    /// Does not consume the pipeline and requires a mutable reference. This allows\n    /// the pipeline to be reused.\n    ///\n    /// # Errors\n    ///\n    /// Errors if any of the transformations failed or no response was found\n    #[tracing::instrument(skip_all, name = \"query_pipeline.query_mut\")]\n    pub async fn query_mut(\n        &mut self,\n        query: impl Into<Query<states::Pending>>,\n    ) -> Result<Query<states::Answered>> {\n        tracing::warn!(\"Sending query\");\n        let now = std::time::Instant::now();\n\n        self.query_sender.send(Ok(query.into())).await?;\n\n        let answer = self\n            .stream\n            .by_ref()\n            .take(1)\n            .try_next()\n            .await?\n            .ok_or_else(|| {\n                anyhow::anyhow!(\"Pipeline did not receive a response from the query stream\")\n            });\n\n        tracing::debug!(?answer, \"Received an answer\");\n\n        let elapsed_in_seconds = now.elapsed().as_secs();\n        tracing::warn!(\n            elapsed_in_seconds,\n            \"Answered query in {} seconds\",\n            elapsed_in_seconds\n        );\n\n        answer\n    }\n\n    /// Runs the pipeline with multiple queries\n    ///\n    /// # Errors\n    ///\n    /// Errors if any of the transformations failed, no response was found, or the stream was\n    /// closed.\n    #[tracing::instrument(skip_all, name = \"query_pipeline.query_all\")]\n    pub async fn query_all(\n        self,\n        queries: Vec<impl Into<Query<states::Pending>> + Clone>,\n    ) -> Result<Vec<Query<states::Answered>>> {\n        tracing::warn!(\"Sending queries\");\n        let now = std::time::Instant::now();\n\n        let Pipeline {\n            query_sender,\n            mut stream,\n            ..\n        } = self;\n\n        for query in &queries {\n            query_sender.send(Ok(query.clone().into())).await?;\n        }\n        tracing::info!(\"All queries 
sent\");\n\n        let mut results = vec![];\n        while let Some(result) = stream.try_next().await? {\n            tracing::debug!(?result, \"Received an answer\");\n            results.push(result);\n            if results.len() == queries.len() {\n                break;\n            }\n        }\n\n        let elapsed_in_seconds = now.elapsed().as_secs();\n        tracing::warn!(\n            num_queries = queries.len(),\n            elapsed_in_seconds,\n            \"Answered all queries in {} seconds\",\n            elapsed_in_seconds\n        );\n        Ok(results)\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::{\n        MockAnswer, MockTransformQuery, MockTransformResponse, querying::search_strategies,\n    };\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_closures_in_each_step() {\n        let pipeline = Pipeline::default()\n            .then_transform_query(move |query: Query<states::Pending>| Ok(query))\n            .then_retrieve(\n                move |_: &search_strategies::SimilaritySingleEmbedding,\n                      query: Query<states::Pending>| {\n                    Ok(query.retrieved_documents(vec![]))\n                },\n            )\n            .then_transform_response(Ok)\n            .then_answer(move |query: Query<states::Retrieved>| Ok(query.answered(\"Ok\")));\n        let response = pipeline.query(\"What\").await.unwrap();\n        assert_eq!(response.answer(), \"Ok\");\n    }\n\n    #[tokio::test]\n    async fn test_all_steps_should_accept_dyn_box() {\n        let mut query_transformer = MockTransformQuery::new();\n        query_transformer.expect_transform_query().returning(Ok);\n\n        let mut response_transformer = MockTransformResponse::new();\n        response_transformer\n            .expect_transform_response()\n            .returning(Ok);\n        let mut answer_transformer = MockAnswer::new();\n        answer_transformer\n            .expect_answer()\n            .returning(|query| 
Ok(query.answered(\"OK\")));\n\n        let pipeline = Pipeline::default()\n            .then_transform_query(Box::new(query_transformer) as Box<dyn TransformQuery>)\n            .then_retrieve(\n                |_: &search_strategies::SimilaritySingleEmbedding,\n                 query: Query<states::Pending>| {\n                    Ok(query.retrieved_documents(vec![]))\n                },\n            )\n            .then_transform_response(Box::new(response_transformer) as Box<dyn TransformResponse>)\n            .then_answer(Box::new(answer_transformer) as Box<dyn Answer>);\n        let response = pipeline.query(\"What\").await.unwrap();\n        assert_eq!(response.answer(), \"OK\");\n    }\n\n    #[tokio::test]\n    async fn test_reuse_with_query_mut() {\n        let mut pipeline = Pipeline::default()\n            .then_transform_query(move |query: Query<states::Pending>| Ok(query))\n            .then_retrieve(\n                move |_: &search_strategies::SimilaritySingleEmbedding,\n                      query: Query<states::Pending>| {\n                    Ok(query.retrieved_documents(vec![]))\n                },\n            )\n            .then_transform_response(Ok)\n            .then_answer(move |query: Query<states::Retrieved>| Ok(query.answered(\"Ok\")));\n\n        let response = pipeline.query_mut(\"What\").await.unwrap();\n        assert_eq!(response.answer(), \"Ok\");\n        let response = pipeline.query_mut(\"What\").await.unwrap();\n        assert_eq!(response.answer(), \"Ok\");\n    }\n}\n"
  },
  {
    "path": "swiftide-query/src/query_transformers/embed.rs",
    "content": "use std::sync::Arc;\n\nuse swiftide_core::{\n    indexing::EmbeddingModel,\n    prelude::*,\n    querying::{Query, TransformQuery, states},\n};\n\n#[derive(Debug, Clone)]\npub struct Embed {\n    embed_model: Arc<dyn EmbeddingModel>,\n}\n\nimpl Embed {\n    pub fn from_client(client: impl EmbeddingModel + 'static) -> Embed {\n        Embed {\n            embed_model: Arc::new(client),\n        }\n    }\n}\n\n#[async_trait]\nimpl TransformQuery for Embed {\n    #[tracing::instrument(skip_all)]\n    async fn transform_query(\n        &self,\n        mut query: Query<states::Pending>,\n    ) -> Result<Query<states::Pending>> {\n        let Some(embedding) = self\n            .embed_model\n            .embed(vec![query.current().to_string()])\n            .await?\n            .pop()\n        else {\n            anyhow::bail!(\"Failed to embed query\")\n        };\n\n        query.embedding = Some(embedding);\n\n        Ok(query)\n    }\n}\n"
  },
  {
    "path": "swiftide-query/src/query_transformers/generate_subquestions.rs",
    "content": "//! Generate subquestions for a query\n//!\n//! Useful for similarity search where you want a wider vector coverage\nuse std::sync::Arc;\nuse swiftide_core::{\n    indexing::SimplePrompt,\n    prelude::*,\n    prompt::Prompt,\n    querying::{Query, TransformQuery, states},\n};\n\n#[derive(Debug, Clone, Builder)]\npub struct GenerateSubquestions {\n    #[builder(setter(custom))]\n    client: Arc<dyn SimplePrompt>,\n    #[builder(default = \"default_prompt()\")]\n    prompt_template: Prompt,\n    #[builder(default = \"5\")]\n    num_questions: usize,\n}\n\nimpl GenerateSubquestions {\n    pub fn builder() -> GenerateSubquestionsBuilder {\n        GenerateSubquestionsBuilder::default()\n    }\n\n    /// Builds a new subquestions generator from a client that implements [`SimplePrompt`]\n    ///\n    /// # Panics\n    ///\n    /// Panics if the build failed\n    pub fn from_client(client: impl SimplePrompt + 'static) -> GenerateSubquestions {\n        GenerateSubquestionsBuilder::default()\n            .client(client)\n            .to_owned()\n            .build()\n            .expect(\"Failed to build GenerateSubquestions\")\n    }\n}\n\nimpl GenerateSubquestionsBuilder {\n    pub fn client(&mut self, client: impl SimplePrompt + 'static) -> &mut Self {\n        self.client = Some(Arc::new(client) as Arc<dyn SimplePrompt>);\n        self\n    }\n}\n\nfn default_prompt() -> Prompt {\n    indoc::indoc!(\"\n    Your job is to help a query tool find the right context.\n\n    Given the following question:\n    {{question}}\n\n    Please think of {{num_questions}}  additional questions that can help answering the original question.\n\n    Especially consider what might be relevant to answer the question, like dependencies, usage and structure of the code.\n\n    Please respond with the original question and the additional questions only.\n\n    ## Example\n\n    - {{question}}\n    - Additional question 1\n    - Additional question 2\n    - Additional question 
3\n    - Additional question 4\n    - Additional question 5\n    \").into()\n}\n\n#[async_trait]\nimpl TransformQuery for GenerateSubquestions {\n    #[tracing::instrument(skip_self)]\n    async fn transform_query(\n        &self,\n        mut query: Query<states::Pending>,\n    ) -> Result<Query<states::Pending>> {\n        let new_query = self\n            .client\n            .prompt(\n                self.prompt_template\n                    .clone()\n                    .with_context_value(\"question\", query.current())\n                    .with_context_value(\"num_questions\", self.num_questions),\n            )\n            .await?;\n        query.transformed_query(new_query);\n\n        Ok(query)\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    assert_default_prompt_snapshot!(\"question\" => \"What is love?\", \"num_questions\" => 5);\n}\n"
  },
  {
    "path": "swiftide-query/src/query_transformers/mod.rs",
    "content": "//! Transform queries that are yet to be made\nmod generate_subquestions;\npub use generate_subquestions::GenerateSubquestions;\n\nmod embed;\nmod sparse_embed;\npub use embed::Embed;\npub use sparse_embed::SparseEmbed;\n"
  },
  {
    "path": "swiftide-query/src/query_transformers/snapshots/swiftide_query__query_transformers__generate_subquestions__test__default_prompt.snap",
    "content": "---\nsource: swiftide-query/src/query_transformers/generate_subquestions.rs\nexpression: prompt.render().await.unwrap()\n---\nYour job is to help a query tool find the right context.\n\nGiven the following question:\nWhat is love?\n\nPlease think of 5  additional questions that can help answering the original question.\n\nEspecially consider what might be relevant to answer the question, like dependencies, usage and structure of the code.\n\nPlease respond with the original question and the additional questions only.\n\n## Example\n\n- What is love?\n- Additional question 1\n- Additional question 2\n- Additional question 3\n- Additional question 4\n- Additional question 5\n"
  },
  {
    "path": "swiftide-query/src/query_transformers/sparse_embed.rs",
    "content": "use std::sync::Arc;\n\nuse swiftide_core::{\n    SparseEmbeddingModel,\n    prelude::*,\n    querying::{Query, TransformQuery, states},\n};\n\n/// Embed a query with a sparse embedding.\n#[derive(Debug, Clone)]\npub struct SparseEmbed {\n    embed_model: Arc<dyn SparseEmbeddingModel>,\n}\n\nimpl SparseEmbed {\n    pub fn from_client(client: impl SparseEmbeddingModel + 'static) -> SparseEmbed {\n        SparseEmbed {\n            embed_model: Arc::new(client),\n        }\n    }\n}\n\n#[async_trait]\nimpl TransformQuery for SparseEmbed {\n    #[tracing::instrument(skip_all)]\n    async fn transform_query(\n        &self,\n        mut query: Query<states::Pending>,\n    ) -> Result<Query<states::Pending>> {\n        let Some(embedding) = self\n            .embed_model\n            .sparse_embed(vec![query.current().to_string()])\n            .await?\n            .pop()\n        else {\n            anyhow::bail!(\"Failed to embed query\")\n        };\n\n        query.sparse_embedding = Some(embedding);\n\n        Ok(query)\n    }\n}\n"
  },
  {
    "path": "swiftide-query/src/response_transformers/mod.rs",
    "content": "//! Transform retrieved queries\nmod summary;\n\npub use summary::*;\n"
  },
  {
    "path": "swiftide-query/src/response_transformers/snapshots/swiftide_query__response_transformers__summary__test__default_prompt.snap",
    "content": "---\nsource: swiftide-query/src/response_transformers/summary.rs\nexpression: prompt.render().await.unwrap()\n---\nYour job is to help a query tool find the right context.\n\nSummarize the following documents.\n\n## Constraints\n* Do not add any information that is not available in the documents.\n* Summarize comprehensively and ensure no data that might be important is left out.\n* Summarize as a single markdown document\n\n## Documents\n\n---\nFirst document\n---\n---\nSecond Document\n---\n"
  },
  {
    "path": "swiftide-query/src/response_transformers/summary.rs",
    "content": "use std::sync::Arc;\nuse swiftide_core::{\n    TransformResponse,\n    indexing::SimplePrompt,\n    prelude::*,\n    prompt::Prompt,\n    querying::{Query, states},\n};\n\n#[derive(Debug, Clone, Builder)]\npub struct Summary {\n    #[builder(setter(custom))]\n    client: Arc<dyn SimplePrompt>,\n    #[builder(default = \"default_prompt()\")]\n    prompt_template: Prompt,\n}\n\nimpl Summary {\n    pub fn builder() -> SummaryBuilder {\n        SummaryBuilder::default()\n    }\n\n    /// Builds a new summary generator from a client that implements [`SimplePrompt`].\n    ///\n    /// Will try to summarize documents using an llm, instructed to preserve as much information as\n    /// possible.\n    ///\n    /// # Panics\n    ///\n    /// Panics if the build failed\n    pub fn from_client(client: impl SimplePrompt + 'static) -> Summary {\n        SummaryBuilder::default()\n            .client(client)\n            .to_owned()\n            .build()\n            .expect(\"Failed to build Summary\")\n    }\n}\n\nimpl SummaryBuilder {\n    pub fn client(&mut self, client: impl SimplePrompt + 'static) -> &mut Self {\n        self.client = Some(Arc::new(client) as Arc<dyn SimplePrompt>);\n        self\n    }\n}\n\nfn default_prompt() -> Prompt {\n    indoc::indoc!(\n        \"\n    Your job is to help a query tool find the right context.\n\n    Summarize the following documents.\n\n    ## Constraints\n    * Do not add any information that is not available in the documents.\n    * Summarize comprehensively and ensure no data that might be important is left out.\n    * Summarize as a single markdown document\n\n    ## Documents\n\n    {% for document in documents -%}\n    ---\n    {{ document.content }}\n    ---\n    {% endfor -%}\n    \"\n    )\n    .into()\n}\n\n#[async_trait]\nimpl TransformResponse for Summary {\n    #[tracing::instrument(skip_all)]\n    async fn transform_response(\n        &self,\n        mut query: Query<states::Retrieved>,\n    ) -> 
Result<Query<states::Retrieved>> {\n        let new_response = self\n            .client\n            .prompt(\n                self.prompt_template\n                    .clone()\n                    .with_context_value(\"documents\", query.documents()),\n            )\n            .await?;\n        query.transformed_response(new_response);\n\n        Ok(query)\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use swiftide_core::document::Document;\n\n    use super::*;\n\n    assert_default_prompt_snapshot!(\"documents\" => vec![Document::from(\"First document\"), Document::from(\"Second Document\")]);\n}\n"
  },
  {
    "path": "swiftide-test-utils/Cargo.toml",
    "content": "cargo-features = [\"edition2024\"]\n\n[package]\nname = \"swiftide-test-utils\"\npublish = false\nversion.workspace = true\nedition.workspace = true\nlicense.workspace = true\nreadme.workspace = true\nkeywords.workspace = true\ndescription.workspace = true\ncategories.workspace = true\nrepository.workspace = true\nhomepage.workspace = true\n\n[dependencies]\nswiftide-integrations = { path = \"../swiftide-integrations\", features = [\n  \"openai\",\n] }\n\nserde = { workspace = true }\nserde_json = { workspace = true }\nasync-openai = { workspace = true }\ntestcontainers = { workspace = true }\nwiremock = { workspace = true }\n\n[features]\ndefault = [\"test-utils\"]\ntest-utils = []\n\n[package.metadata.docs.rs]\nall-features = true\ncargo-args = [\"-Zunstable-options\", \"-Zrustdoc-scrape-examples\"]\nrustdoc-args = [\"--cfg\", \"docsrs\"]\n"
  },
  {
    "path": "swiftide-test-utils/src/lib.rs",
    "content": "// show feature flags in the generated documentation\n// https://doc.rust-lang.org/rustdoc/unstable-features.html#extensions-to-the-doc-attribute\n#![cfg_attr(docsrs, feature(doc_cfg))]\n#![cfg_attr(docsrs, doc(auto_cfg))]\n#![doc(html_logo_url = \"https://github.com/bosun-ai/swiftide/raw/master/images/logo.png\")]\n\n#[cfg(feature = \"test-utils\")]\nmod test_utils;\n\n#[cfg(feature = \"test-utils\")]\npub use test_utils::*;\n"
  },
  {
    "path": "swiftide-test-utils/src/test_utils.rs",
    "content": "#![allow(missing_docs)]\n#![allow(clippy::missing_panics_doc)]\n\nuse serde_json::json;\nuse testcontainers::{\n    ContainerAsync, GenericImage, ImageExt,\n    core::{IntoContainerPort, WaitFor, wait::HttpWaitStrategy},\n    runners::AsyncRunner,\n};\nuse wiremock::matchers::{method, path};\nuse wiremock::{Mock, MockServer, ResponseTemplate};\n\nuse swiftide_integrations as integrations;\n\npub fn openai_client(\n    mock_server_uri: &str,\n    embed_model: &str,\n    prompt_model: &str,\n) -> integrations::openai::OpenAI {\n    let config = async_openai::config::OpenAIConfig::new().with_api_base(mock_server_uri);\n    let async_openai = async_openai::Client::with_config(config);\n    integrations::openai::OpenAI::builder()\n        .client(async_openai)\n        .default_options(\n            integrations::openai::Options::builder()\n                .embed_model(embed_model)\n                .prompt_model(prompt_model)\n                .build()\n                .unwrap(),\n        )\n        .build()\n        .expect(\"Can create OpenAI client.\")\n}\n\n/// Setup Qdrant container.\n/// Returns container server and `server_url`.\npub async fn start_qdrant() -> (ContainerAsync<GenericImage>, String) {\n    let qdrant = testcontainers::GenericImage::new(\"qdrant/qdrant\", \"v1.13.4\")\n        .with_exposed_port(6334.into())\n        .with_exposed_port(6333.into())\n        .with_wait_for(testcontainers::core::WaitFor::http(\n            HttpWaitStrategy::new(\"/readyz\")\n                .with_port(6333.into())\n                .with_expected_status_code(200_u16),\n        ))\n        .start()\n        .await\n        .expect(\"Qdrant started\");\n    let qdrant_url = format!(\n        \"http://127.0.0.1:{port}\",\n        port = qdrant.get_host_port_ipv4(6334).await.unwrap()\n    );\n    (qdrant, qdrant_url)\n}\n\n/// Setup Redis container for caching in the test.\n/// Returns container server and `server_url`.\npub async fn start_redis() -> 
(ContainerAsync<GenericImage>, String) {\n    let redis = testcontainers::GenericImage::new(\"redis\", \"7-alpine\")\n        .with_exposed_port(6379.into())\n        .with_wait_for(testcontainers::core::WaitFor::message_on_stdout(\n            \"Ready to accept connections\",\n        ))\n        .start()\n        .await\n        .expect(\"Redis started\");\n    let redis_url = format!(\n        \"redis://{host}:{port}\",\n        host = redis.get_host().await.unwrap(),\n        port = redis.get_host_port_ipv4(6379).await.unwrap()\n    );\n    (redis, redis_url)\n}\n\n/// Setup Postgres container.\n/// Returns container server and `server_url`.\npub async fn start_postgres() -> (ContainerAsync<GenericImage>, String) {\n    let postgres = testcontainers::GenericImage::new(\"pgvector/pgvector\", \"pg17\")\n        .with_wait_for(WaitFor::message_on_stdout(\n            \"database system is ready to accept connections\",\n        ))\n        .with_exposed_port(5432.tcp())\n        .with_env_var(\"POSTGRES_USER\", \"myuser\")\n        .with_env_var(\"POSTGRES_PASSWORD\", \"mypassword\")\n        .with_env_var(\"POSTGRES_DB\", \"mydatabase\")\n        .start()\n        .await\n        .expect(\"Failed to start Postgres container\");\n\n    // Construct the connection URL using the dynamically assigned port\n    let host_port = postgres.get_host_port_ipv4(5432).await.unwrap();\n    let postgres_url = format!(\"postgresql://myuser:mypassword@127.0.0.1:{host_port}/mydatabase\");\n\n    (postgres, postgres_url)\n}\n\n/// Mock embeddings creation endpoint.\n/// `embeddings_count` controls number of returned embedding vectors.\npub async fn mock_embeddings(mock_server: &MockServer, embeddings_count: u8) {\n    let data = (0..embeddings_count)\n        .map(|i| {\n            json!( {\n              \"object\": \"embedding\",\n              \"embedding\": vec![0; 1536],\n              \"index\": i\n            })\n        })\n        .collect::<Vec<serde_json::Value>>();\n    
let data: serde_json::Value = serde_json::Value::Array(data);\n    Mock::given(method(\"POST\"))\n        .and(path(\"/embeddings\"))\n        .respond_with(ResponseTemplate::new(200).set_body_json(json!({\n          \"object\": \"list\",\n          \"data\": data,\n          \"model\": \"text-embedding-ada-002\",\n          \"usage\": {\n            \"prompt_tokens\": 8,\n            \"total_tokens\": 8\n        }\n        })))\n        .mount(mock_server)\n        .await;\n}\n\npub async fn mock_chat_completions(mock_server: &MockServer) {\n    Mock::given(method(\"POST\"))\n        .and(path(\"/chat/completions\"))\n        .respond_with(ResponseTemplate::new(200).set_body_json(json!({\n            \"id\": \"chatcmpl-123\",\n            \"object\": \"chat.completion\",\n            \"created\": 1_677_652_288,\n            \"model\": \"gpt-3.5-turbo-0125\",\n            \"system_fingerprint\": \"fp_44709d6fcb\",\n            \"choices\": [{\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": \"\\n\\nHello there, how may I assist you today?\",\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"stop\"\n            }],\n            \"usage\": {\n              \"prompt_tokens\": 9,\n              \"completion_tokens\": 12,\n              \"total_tokens\": 21\n            }\n        })))\n        .mount(mock_server)\n        .await;\n}\n"
  },
  {
    "path": "typos.toml",
    "content": "[files]\n# Autogenerated\nextend-exclude = [\"CHANGELOG.md\", \"cliff.toml\"]\n"
  }
]