[
  {
    "path": ".github/FUNDING.yml",
    "content": "github: brianheineman\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: \"\\U0001F41E Bug Report\"\nabout: \"If something isn't working as expected \\U0001F914.\"\ntitle: ''\nlabels: 'i: bug, i: needs triage'\nassignees: ''\n\n---\n\n**What steps will reproduce the bug? (please provide code snippet if relevant)**\n\n1. step 1\n2. step 2\n3. ...\n\n**What happens?**\n\n...\n\n**What did you expect to happen instead?**\n\n...\n\n### Information about your environment\n\n* postgresql_embedded version: [REQUIRED] (e.g. \"0.14.2\")\n* Database version: [REQUIRED] (e.g. \"16.4.0\")\n* Operating system: [REQUIRED]\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: false\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: \"\\U00002728 Feature Request\"\nabout: \"I have a suggestion (and may want to implement it \\U0001F642)!\"\ntitle: ''\nlabels: 'i: enhancement, i: needs triage'\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context about the feature request here.\n"
  },
  {
    "path": ".github/codecov.yml",
    "content": "coverage:\n  status:\n    patch:\n      default:\n        threshold: 0.05%\n    project:\n      default:\n        threshold: 0.05%\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\nupdates:\n  - package-ecosystem: \"cargo\"\n    directory: \"/\"\n    schedule:\n      interval: \"monthly\"\n"
  },
  {
    "path": ".github/workflows/benchmarks.yml",
    "content": "name: Benchmarks\n\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    types: [ opened, reopened, synchronize ]\n\npermissions:\n  contents: read\n\njobs:\n  benchmark:\n    name: Run Benchmarks\n    runs-on: ubuntu-latest\n    permissions:\n      pull-requests: write\n    steps:\n      - name: Checkout source code\n        uses: actions/checkout@v4\n\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          components: 'llvm-tools-preview'\n          toolchain: stable\n\n      - name: Install benchmarking tools\n        uses: bencherdev/bencher@main\n\n      - name: Run benchmarks\n        if: ${{ github.event_name == 'pull_request' }}\n        env:\n          BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}\n          BENCHER_PROJECT: theseus-rs-postgresql-embedded\n          BENCHER_ADAPTER: rust_criterion\n        run: |\n          bencher run \\\n            --branch $GITHUB_HEAD_REF \\\n            --ci-number \"${{ github.event.number }}\" \\\n            --github-actions \"${{ secrets.GITHUB_TOKEN }}\" \\\n            --err \\\n            \"cargo bench --features blocking\"\n\n      - name: Run benchmarks\n        if: ${{ github.event_name != 'pull_request' }}\n        env:\n          BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}\n          BENCHER_PROJECT: theseus-rs-postgresql-embedded\n          BENCHER_ADAPTER: rust_criterion\n        run: |\n          bencher run \"cargo bench --features blocking\"\n"
  },
  {
    "path": ".github/workflows/checks.yml",
    "content": "name: Fast checks\n\nenv:\n  CARGO_TERM_COLOR: always\n  RUSTFLAGS: \"-D warnings\"\n\non:\n  workflow_call:\n\npermissions:\n  contents: read\n\njobs:\n  audit:\n    runs-on: ubuntu-24.04\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n      - name: Install cargo audit\n        run: cargo install cargo-audit\n      - name: Audit dependencies\n        run: cargo audit\n\n  check:\n    runs-on: ubuntu-24.04\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n      - name: Check the project\n        run: |\n          cargo check --workspace --all-targets --features blocking\n          cargo check --workspace --all-targets --features bundled\n          cargo check --workspace --all-targets --features tokio\n          cargo check --workspace --all-targets --all-features\n\n  clippy:\n    runs-on: ubuntu-24.04\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n          components: clippy\n      - name: Check lints\n        env:\n          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}\n        run: |\n          cargo clippy --all-targets --all-features --examples --tests\n\n  deny:\n    runs-on: ubuntu-24.04\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n      - name: Install cargo deny\n        run: cargo install cargo-deny\n      - name: Check licenses\n        run: cargo deny check --allow duplicate\n\n  doc:\n    runs-on: ubuntu-24.04\n    steps:\n      - name: 
Checkout repository\n        uses: actions/checkout@v4\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n      - name: Check documentation\n        env:\n          RUSTDOCFLAGS: -D warnings\n        run: cargo doc --workspace --no-deps --document-private-items --all-features\n\n  fmt:\n    runs-on: ubuntu-24.04\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n          components: rustfmt\n      - name: Check formatting\n        run: cargo fmt --all --check\n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: ci\n\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    branches:\n      - main\n\npermissions:\n  contents: read\n\njobs:\n  checks:\n    name: Checks\n    uses: ./.github/workflows/checks.yml\n\n  build:\n    name: ${{ matrix.platform }}\n    needs: [ checks ]\n    runs-on: ${{ matrix.os }}\n    strategy:\n      fail-fast: false\n      matrix:\n        platform:\n          - linux-arm\n          - linux-x64\n          - macos-arm\n          - macos-x64\n          - windows-x64\n\n        include:\n          - platform: linux-arm\n            os: ubuntu-24.04-arm\n          - platform: linux-x64\n            os: ubuntu-latest\n          - platform: macos-arm\n            os: macos-15\n          - platform: macos-x64\n            os: macos-15-intel\n          - platform: windows-x64\n            os: windows-2022\n\n    steps:\n      - name: Checkout source code\n        uses: actions/checkout@v4\n\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          toolchain: stable\n\n      - name: Install cargo-llvm-cov\n        uses: taiki-e/install-action@main\n        with:\n          tool: cargo-llvm-cov\n\n      - name: Tests\n        if: ${{ matrix.platform != 'linux-x64' }}\n        env:\n          CARGO_TERM_COLOR: always\n          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}\n          RUST_LOG: \"info,postgresql_archive=debug,postgresql_commands=debug,postgresql_embedded=debug\"\n          RUST_LOG_SPAN_EVENTS: full\n        run: |\n          cargo test\n\n      - name: Tests\n        if: ${{ matrix.platform == 'linux-x64' }}\n        env:\n          CARGO_TERM_COLOR: always\n          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}\n          RUST_LOG: \"info,postgresql_archive=debug,postgresql_commands=debug,postgresql_embedded=debug\"\n          RUST_LOG_SPAN_EVENTS: full\n        run: |\n          cargo llvm-cov --all-features --workspace --lcov --output-path lcov.info\n\n      - name: Upload to 
codecov.io\n        if: ${{ matrix.platform == 'linux-x64' }}\n        uses: codecov/codecov-action@v4\n        with:\n          files: lcov.info\n          fail_ci_if_error: true\n          verbose: true\n        env:\n          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/pr-benchmarks.yml",
    "content": "name: Benchmarks\n\non:\n  pull_request:\n    types: [ opened, reopened, synchronize ]\n\npermissions:\n  contents: read\n\njobs:\n  benchmark:\n    name: Run Benchmarks\n    runs-on: ubuntu-22.04\n    permissions:\n      pull-requests: write\n    steps:\n      - name: Checkout source code\n        uses: actions/checkout@v4\n\n      - name: Install Rust\n        uses: dtolnay/rust-toolchain@master\n        with:\n          components: 'llvm-tools-preview'\n          toolchain: stable\n\n      - name: Install benchmarking tools\n        uses: bencherdev/bencher@main\n\n      - name: Run benchmarks\n        env:\n          BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}\n          BENCHER_PROJECT: theseus-rs-postgresql-embedded\n          BENCHER_ADAPTER: rust_criterion\n        run: |\n          bencher run \\\n            --branch $GITHUB_HEAD_REF \\\n            --ci-number \"${{ github.event.number }}\" \\\n            --github-actions \"${{ secrets.GITHUB_TOKEN }}\" \\\n            --err \\\n            \"cargo bench --features blocking\"\n"
  },
  {
    "path": ".github/workflows/release-plz.yml",
    "content": "name: Release-plz\n\npermissions:\n  pull-requests: write\n  contents: write\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n\n  # Release unpublished packages.\n  release-plz-release:\n    name: Release-plz release\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n        with:\n          fetch-depth: 0\n      - name: Install Rust toolchain\n        uses: dtolnay/rust-toolchain@stable\n      - name: Run release-plz\n        uses: release-plz/action@v0.5\n        with:\n          command: release\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}\n\n  # Create a PR with the new versions and changelog, preparing the next release.\n  release-plz-pr:\n    name: Release-plz PR\n    runs-on: ubuntu-latest\n    concurrency:\n      group: release-plz-${{ github.ref }}\n      cancel-in-progress: false\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n        with:\n          fetch-depth: 0\n      - name: Install Rust toolchain\n        uses: dtolnay/rust-toolchain@stable\n      - name: Run release-plz\n        uses: release-plz/action@v0.5\n        with:\n          command: release-pr\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "/target\n\n# Rust Rover\n/.idea\n"
  },
  {
    "path": ".rustfmt.toml",
    "content": "newline_style = \"Unix\"\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),\nand this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).\n\n## [Unreleased]\n\n## `postgresql_extensions` - [0.20.2](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.20.1...postgresql_extensions-v0.20.2) - 2026-02-22\n\n### Other\n- remove num-format dependency\n\n## `postgresql_embedded` - [0.20.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.20.1...v0.20.2) - 2026-02-22\n\n### Added\n- add unix socket support\n\n### Other\n- remove num-format dependency\n\n## `postgresql_commands` - [0.20.2](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.20.1...postgresql_commands-v0.20.2) - 2026-02-22\n\n### Added\n- add unix socket support\n\n## `postgresql_archive` - [0.20.2](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.20.1...postgresql_archive-v0.20.2) - 2026-02-22\n\n### Added\n- add unix socket support\n\n### Other\n- remove num-format dependency\n\n## `postgresql_extensions` - [0.20.1](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.20.0...postgresql_extensions-v0.20.1) - 2026-02-08\n\n### Other\n- update rust to 1.92.0\n- reduce map_err by adding some From<Error> implementations\n\n## `postgresql_embedded` - [0.20.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.20.0...v0.20.1) - 2026-02-08\n\n### Added\n- add postgresql v18 support\n\n### Fixed\n- update to support all targets\n\n### Other\n- Merge branch 'main' into caching_builds\n- Target\n- Cache archives\n- update rust to 1.92.0\n- reduce map_err by adding some From<Error> implementations\n\n## `postgresql_commands` - 
[0.20.1](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.20.0...postgresql_commands-v0.20.1) - 2026-02-08\n\n### Other\n- update rust to 1.92.0\n\n## `postgresql_archive` - [0.20.1](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.20.0...postgresql_archive-v0.20.1) - 2026-02-08\n\n### Other\n- update rust to 1.92.0\n- reduce map_err by adding some From<Error> implementations\n\n## `postgresql_extensions` - [0.20.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.19.0...postgresql_extensions-v0.20.0) - 2025-08-31\n\n### Fixed\n- always use the build version of postgresql when the bundled feature is enabled to avoid network access\n\n### Other\n- remove devcontainer support\n\n## `postgresql_embedded` - [0.20.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.19.0...v0.20.0) - 2025-08-31\n\n### Fixed\n- always use the build version of postgresql when the bundled feature is enabled to avoid network access\n- [**breaking**] rename pg_dump compression argument to compress\n\n### Other\n- minor doc updates\n- remove devcontainer support\n- correct lint errors\n- update to Rust 1.89.0\n\n## `postgresql_commands` - [0.20.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.19.0...postgresql_commands-v0.20.0) - 2025-08-31\n\n### Fixed\n- [**breaking**] rename pg_dump compression argument to compress\n\n### Other\n- remove devcontainer support\n\n## `postgresql_archive` - [0.20.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.19.0...postgresql_archive-v0.20.0) - 2025-08-31\n\n### Other\n- minor doc updates\n- remove devcontainer support\n\n## `postgresql_embedded` - [0.19.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.7...v0.19.0) - 2025-06-24\n\n### Added\n- allow skipping the installation step during setup\n\n### Other\n- correct typo in variable name\n- update extractor feature 
documentation\n\n## `postgresql_archive` - [0.19.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.7...postgresql_archive-v0.19.0) - 2025-06-24\n\n### Other\n- update extractor feature documentation\n\n## `postgresql_embedded` - [0.18.7](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.6...v0.18.7) - 2025-06-20\n\n### Fixed\n- set CREATE_NO_WINDOW creation flag on Windows\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_commands` - [0.18.7](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.18.6...postgresql_commands-v0.18.7) - 2025-06-20\n\n### Fixed\n- set CREATE_NO_WINDOW creation flag on Windows\n\n## `postgresql_archive` - [0.18.7](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.6...postgresql_archive-v0.18.7) - 2025-06-20\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_extensions` - [0.18.6](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.18.5...postgresql_extensions-v0.18.6) - 2025-06-17\n\n### Added\n\n- add extractor feature flags\n\n### Other\n\n- correct lint errors\n\n## `postgresql_embedded` - [0.18.6](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.5...v0.18.6) - 2025-06-17\n\n### Added\n\n- add extractor feature flags\n\n### Other\n\n- make liblzma an optional dependency\n- add documentation for bundled feature flag\n- correct lint errors\n\n## `postgresql_archive` - [0.18.6](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.5...postgresql_archive-v0.18.6) - 2025-06-17\n\n### Added\n\n- add extractor feature flags\n\n### Other\n\n- make liblzma an optional dependency\n- correct lint errors\n\n## `postgresql_extensions` - [0.18.5](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.18.4...postgresql_extensions-v0.18.5) - 2025-05-28\n\n### Other\n- update Cargo.toml dependencies\n\n## 
`postgresql_embedded` - [0.18.5](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.4...v0.18.5) - 2025-05-28\n\n### Fixed\n- correct theseus build bundle\n- revert SupportFn type change\n- custom release url not working and compilation failure\n\n### Other\n- Merge branch 'main' into main\n- update to criterion=0.6.0, pgvector=0.4.1, reqwest=0.12.18, sqlx=0.8.6, tokio=1.45.1, zip=4.0.0\n- minor syntax change\n- update Cargo.toml dependencies\n\n## `postgresql_commands` - [0.18.5](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.18.4...postgresql_commands-v0.18.5) - 2025-05-28\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_archive` - [0.18.5](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.4...postgresql_archive-v0.18.5) - 2025-05-28\n\n### Fixed\n- correct theseus build bundle\n- revert SupportFn type change\n- custom release url not working and compilation failure\n\n### Other\n- update to criterion=0.6.0, pgvector=0.4.1, reqwest=0.12.18, sqlx=0.8.6, tokio=1.45.1, zip=4.0.0\n- minor syntax change\n\n## `postgresql_extensions` - [0.18.4](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.18.3...postgresql_extensions-v0.18.4) - 2025-05-15\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_embedded` - [0.18.4](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.3...v0.18.4) - 2025-05-15\n\n### Other\n- update to Rust 1.87.0\n- update dependencies\n- update Cargo.toml dependencies\n\n## `postgresql_commands` - [0.18.4](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.18.3...postgresql_commands-v0.18.4) - 2025-05-15\n\n### Other\n- update to Rust 1.87.0\n\n## `postgresql_archive` - [0.18.4](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.3...postgresql_archive-v0.18.4) - 2025-05-15\n\n### Other\n- update to Rust 1.87.0\n- update dependencies\n\n## 
`postgresql_extensions` - [0.18.3](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.18.2...postgresql_extensions-v0.18.3) - 2025-04-03\n\n### Other\n- update to Rust 1.86.0\n\n## `postgresql_embedded` - [0.18.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.2...v0.18.3) - 2025-04-03\n\n### Other\n- update Cargo.toml dependencies\n- update to Rust 1.86.0\n\n## `postgresql_archive` - [0.18.3](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.2...postgresql_archive-v0.18.3) - 2025-04-03\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_extensions` - [0.18.2](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.18.1...postgresql_extensions-v0.18.2) - 2025-03-21\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_embedded` - [0.18.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.1...v0.18.2) - 2025-03-21\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_commands` - [0.18.2](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.18.1...postgresql_commands-v0.18.2) - 2025-03-21\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_archive` - [0.18.2](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.18.1...postgresql_archive-v0.18.2) - 2025-03-21\n\n### Other\n- update Cargo.toml dependencies\n\n## `postgresql_embedded` - [0.18.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.18.0...v0.18.1) - 2025-02-26\n\n### Fix\n- Check for existing installations in children before installing\n\n## `postgresql_extensions` - [0.18.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.17.5...postgresql_extensions-v0.18.0) - 2025-02-20\n\n### Added\n- update to Rust 2024 edition\n\n### Other\n- [**breaking**] rename feature rustls-tls to rustls\n\n## `postgresql_commands` - 
[0.18.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.17.5...postgresql_commands-v0.18.0) - 2025-02-20\n\n### Added\n- update to Rust 2024 edition\n\n## `postgresql_embedded` - [0.18.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.17.5...v0.18.0) - 2025-02-20\n\n### Added\n- update to Rust 2024 edition\n\n### Other\n- update dependencies\n- [**breaking**] rename feature rustls-tls to rustls\n\n## `postgresql_archive` - [0.18.0](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.17.5...postgresql_archive-v0.18.0) - 2025-02-20\n\n### Added\n- update to Rust 2024 edition\n\n### Other\n- [**breaking**] rename feature rustls-tls to rustls\n\n## `postgresql_extensions` - [0.17.5](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_extensions-v0.17.4...postgresql_extensions-v0.17.5) - 2025-01-25\n\n### Other\n- replace regex with regex-lite to reduce dependencies\n- update ci configuration\n\n## `postgresql_commands` - [0.17.5](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_commands-v0.17.4...postgresql_commands-v0.17.5) - 2025-01-25\n\n### Other\n- remove anyhow and human_bytes dependencies\n\n## `postgresql_embedded` - [0.17.5](https://github.com/theseus-rs/postgresql-embedded/compare/v0.17.4...v0.17.5) - 2025-01-25\n\n### Other\n- make tracing-indicatif optional\n- remove anyhow and human_bytes dependencies\n- replace regex with regex-lite to reduce dependencies\n- remove http dependency\n- update ci configuration\n\n## `postgresql_archive` - [0.17.5](https://github.com/theseus-rs/postgresql-embedded/compare/postgresql_archive-v0.17.4...postgresql_archive-v0.17.5) - 2025-01-25\n\n### Other\n- replace regex with regex-lite to reduce dependencies\n- remove http dependency\n- make tracing-indicatif optional\n- remove anyhow and human_bytes dependencies\n\n## `postgresql_embedded` - 
[v0.17.4](https://github.com/theseus-rs/postgresql-embedded/compare/v0.17.3...v0.17.4) - 2025-01-17\n\n### Chore\n\n- update to Rust 1.83\n- update to Rust 1.84\n\n### Fix\n\n- correct deny.toml\n- use tokio::process::spawn() for pg_ctl on Windows\n\n## `postgresql_embedded` - [v0.17.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.17.2...v0.17.3) - 2024-11-12\n\n### Build\n\n- update codecov action to version 4\n- update code coverage generation\n- update to Rust 1.82.0\n\n### Chore\n\n- add FUNDING.yml\n- add FUNDING.yml\n- correct new linting errors\n- update dependencies\n- add Unicode-3.0 as an allowed license\n\n### Fix\n\n- correct zonky extractor\n\n## `postgresql_embedded` - [v0.17.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.17.1...v0.17.2) - 2024-10-01\n\n### Build\n\n- correct documentation build\n\n## `postgresql_embedded` - [v0.17.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.17.0...v0.17.1) - 2024-10-01\n\n### Build\n\n- correct documentation build\n- update dependencies\n\n## `postgresql_embedded` - [v0.17.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.16.3...v0.17.0) - 2024-09-28\n\n### Chore\n\n- update dependencies\n- add issue templates\n- forbid clippy allow attributes\n- add rust-toolchain.toml\n- updates for clippy lints\n\n### Deprecated\n\n- [**breaking**] remove version 12 and deprecate version 13\n\n### Fix\n\n- allow archives to be bundled from alternate github repositories\n\n### Test\n\n- update extension test to run with specific postgresql version\n\n## `postgresql_embedded` - [v0.16.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.16.2...v0.16.3) - 2024-09-04\n\n### Chore\n\n- switch from xz2 to liblzma\n- ignore .idea directory\n\n## `postgresql_embedded` - [v0.16.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.16.1...v0.16.2) - 2024-08-24\n\n### Build\n\n- update audit and deny checks\n\n### Docs\n\n- split axum and progress bar 
examples\n- minor doc correction\n\n### Fix\n\n- update dependencies to address [RUSTSEC-2024-0363](https://rustsec.org/advisories/RUSTSEC-2024-0363.html)\n\n### Refactor\n\n- rename embedded_async_diesel_r2d2 to diesel_embedded\n\n## `postgresql_embedded` - [v0.16.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.16.0...v0.16.1) - 2024-08-13\n\n### Build\n\n- remove unused dependencies\n\n### Docs\n\n- add axum example\n- add indicatif to axum example\n\n### Feat\n\n- add archive tracing progress bar status\n\n### Fix\n\n- update maven status to set progress bar position\n\n### Test\n\n- update version of postgresql used for testing from 16.3.0 to 16.4.0\n- update windows test assertion\n\n## `postgresql_embedded` - [v0.16.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.15.0...v0.16.0) - 2024-08-04\n\n### Build\n\n- sort dependencies\n- update dependencies\n- address lint error\n\n### Docs\n\n- add PortalCorp example for pgvector\n\n### Feat\n\n- add portal corp extensions\n\n### Fix\n\n- correct steampipe extension url resolution\n- add .dll support\n- update steampipe to use detected OS if not on macos\n- correct extension regex to match file extensions\n\n### Refactor\n\n- [**breaking**] refactor extension matchers\n- [**breaking**] return list of files from archive extract function\n- [**breaking**] refactor archive extract directories\n- refactor zonky extractor to use generic tar_xz_extractor\n\n### Test\n\n- update portal corp test so that it does not run on macos x64\n- add tests for extension matchers\n- update archive test assertions to be platform specific\n- update expected files extracted\n- improve matcher error tests\n- enable portal corp test for all platforms\n\n## `postgresql_embedded` - [v0.15.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.14.2...v0.15.0) - 2024-08-01\n\n### Build\n\n- update tls features\n- add github feature to steampipe and tensor-chord\n\n### Docs\n\n- correct doc errors\n- 
correct doc errors\n- add vector extension example\n- update vector_extension example to run queries\n\n### Feat\n\n- [**breaking**] initial postgresql_extensions crate\n\n### Fix\n\n- registered github archive repositories for extensions\n- correct steampipe install matcher\n- [**breaking**] update extension matchers to use postgresql major version\n- correct cargo check failure\n- correct serialization error writing configuration\n- correct vector example error\n- linting error\n\n### Refactor\n\n- deduplicate steampipe matcher logic\n\n### Test\n\n- add version tests\n- remove unused extension model\n- update lifecycle test to run on linux only\n- update steampipe test to run on macos\n- disable steampipe test on macos\n- update steampipe matcher test\n- improve model test coverage\n\n## `postgresql_embedded` - [v0.14.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.14.1...v0.14.2) - 2024-07-17\n\n### Build\n\n- remove clear caches action\n\n### Docs\n\n- add version optimization documentation\n\n### Fix\n\n- updated PgConfigBuilder interface to align with pg_config executable\n- improve commands on windows to return stdout and stderr\n- correct linting errors\n\n### Test\n\n- correct windows test failure\n\n## `postgresql_embedded` - [v0.14.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.14.0...v0.14.1) - 2024-07-06\n\n### Build\n\n- change default from rustls-tls to native-tls\n- suppress lint warning\n- correct lint error\n- correct formatting\n- update non-windows build configuration\n- update non-windows build tests\n\n### Docs\n\n- update docs for new features\n\n### Fix\n\n- correct bug where commands hang on windows when retrieving stdout/stderr\n- correct hang when tokio is not used\n- update command tests to work on Windows\n\n### Test\n\n- correct linux/macos tests\n- increase timeout to 30 seconds\n- increase timeout to 30 seconds\n- revert timeout to 5 seconds\n\n## `postgresql_embedded` - 
[v0.14.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.13.0...v0.14.0) - 2024-07-03\n\n### Feat\n\n- [**breaking**] add feature flags to enable zonky\n\n### Test\n\n- correct extract test implementations\n\n## `postgresql_embedded` - [v0.13.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.12.0...v0.13.0) - 2024-07-01\n\n### Build\n\n- pin dependencies\n- update use definitions when blocking feature enabled\n- unpin dependencies\n- correct url dependency definition\n- correct documentation link error\n- print target triple during build\n- remove build caching\n- correct lint error\n- update license rules\n- correct formatting error\n\n### Docs\n\n- update README.md\n- simplify documentation\n- remove reference to Bytes\n- update documentation\n- update readmes\n\n### Feat\n\n- [**breaking**] add semantic versioning support and configurable repositories\n- add matcher registry\n- [**breaking**] add configurable hashers\n- add sha2-512 support\n- add blake2 and sha3 hash support\n- add hasher and matcher supports function\n- [**breaking**] add configurable extractors\n- add support for installing binaries from the zonky project\n- add SHA1 hash support for older Maven repositories\n- utilize sqlx for database management to support PostgreSQL installations that do not bundle psql\n- update hasher registry to work with Maven central and add MD5 hash\n\n### Fix\n\n- correct asset hash logic\n- convert possible panics to errors\n\n### Refactor\n\n- [**breaking**] rename ReleaseNotFound to VersionNotFound\n- [**breaking**] remove bytes dependency\n- [**breaking**] remove bytes dependency\n- remove default registry values\n\n### Test\n\n- remove extraneous tests\n- add tests to improve test coverage\n- correct test_blake2b_512\n- improve test coverage\n- add zonky archive integration test\n- correct hash test\n\n## `postgresql_embedded` - [v0.12.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.11.0...v0.12.0) - 2024-06-21\n\n### 
Refactor\n\n- [**breaking**] move version from PostgreSQL::new() to Settings\n\n## `postgresql_embedded` - [v0.11.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.10.2...v0.11.0) - 2024-06-20\n\n### Build\n\n- Enable pedantic lints\n\n### Docs\n\n- update documentation\n- updated archive documentation examples\n\n### Feat\n\n- [**breaking**] allow releases URL to be configured\n- allow releases url to be specified at build time when the bundled flag is set with the POSTGRESQL_RELEASES_URL\n  environment variable\n- export Version to improve dx\n\n### Fix\n\n- reference settings directly instead of via function call\n- update examples\n- pass settings release_url when bundled flag is set\n\n### Test\n\n- add missing command error tests and clean up lint directives\n\n## `postgresql_embedded` - [v0.10.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.10.1...v0.10.2) - 2024-06-18\n\n### Fix\n\n- correct errors when PGDATABASE envar is set\n\n## `postgresql_embedded` - [v0.10.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.9.5...v0.10.1) - 2024-06-14\n\n### Build\n\n- allow Unicode-3.0 license\n\n### Feat\n\n- [**breaking**] add ability to specify multiple pg_ctl options and define server configuration in Settings\n\n## `postgresql_embedded` - [v0.9.5](https://github.com/theseus-rs/postgresql-embedded/compare/v0.9.4...v0.9.5) - 2024-06-03\n\n### Build\n\n- address pedantic clippy warnings\n\n### Fix\n\n- don't require rustls for the build script. only enable by default.\n\n## `postgresql_embedded` - [v0.9.4](https://github.com/theseus-rs/postgresql-embedded/compare/v0.9.3...v0.9.4) - 2024-05-31\n\n### Feat\n\n- add native-tls support\n\n## `postgresql_embedded` - [v0.9.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.9.2...v0.9.3) - 2024-05-21\n\n### PostgreSQL\n\n- don't trace self, and when tracing commands only trace the base name. 
Makes the traces less enormous and also avoids\n  dumping passwords into traces.\n\n## `postgresql_embedded` - [v0.9.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.9.1...v0.9.2) - 2024-05-19\n\n### Build\n\n- correct lint warnings\n- update dependencies\n\n### Chore\n\n- update dependencies\n\n### Fix\n\n- correct debug message\n\n### Test\n\n- add authentication tests\n- improve test coverage\n\n## `postgresql_embedded` - [v0.9.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.9.0...v0.9.1) - 2024-05-01\n\n### Fix\n\n- create extract_dir on same filesystem as out_dir\n\n## `postgresql_embedded` - [v0.9.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.8.3...v0.9.0) - 2024-04-26\n\n### Fix\n\n- [**breaking**] define bootstrap superuser as postgres\n\n## `postgresql_embedded` - [v0.8.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.8.2...v0.8.3) - 2024-04-21\n\n### Build\n\n- add CODECOV_TOKEN to code coverage build step\n\n### Chore\n\n- update dependencies\n- update reqwest libraries\n- address format error\n\n## `postgresql_embedded` - [v0.8.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.8.1...v0.8.2) - 2024-04-05\n\n### Fix\n\n- suppress bytes parameter in tracing instrumentation\n\n## `postgresql_embedded` - [v0.8.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.8.0...v0.8.1) - 2024-04-03\n\n### Build\n\n- update build dependencies to address audit check\n\n### Test\n\n- add command integration test\n\n## `postgresql_embedded` - [v0.8.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.7.3...v0.8.0) - 2024-04-03\n\n### Build\n\n- update dependencies\n- correct linting errors\n\n### Refactor\n\n- [**breaking**] move commands into postgresql_commands crate\n\n## `postgresql_embedded` - [v0.7.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.7.2...v0.7.3) - 2024-03-25\n\n### Chore\n\n- 
remove scorecard.yml\n\n### Feat\n\n- add ability to create settings from a url\n\n### Refactor\n\n- remove use of embedded=true parameter\n\n## `postgresql_embedded` - [v0.7.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.7.1...v0.7.2) - 2024-03-16\n\n### Chore\n\n- add Debug trait to CommandBuilder\n\n### Feat\n\n- add tracing instrumentation\n\n## `postgresql_embedded` - [v0.7.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.7.0...v0.7.1) - 2024-03-15\n\n### Fix\n\n- correct parallel archive extract failures\n\n## `postgresql_embedded` - [v0.7.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.6.2...v0.7.0) - 2024-03-15\n\n### Docs\n\n- update vscode development container instructions\n\n### Fix\n\n- [**breaking**] correct parallel archive extract failures\n\n## `postgresql_embedded` - [v0.6.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.6.1...v0.6.2) - 2024-03-07\n\n### Chore\n\n- correct lint error\n\n### Feat\n\n- add reqwest backoff/retry logic and tracing support\n\n## `postgresql_embedded` - [v0.6.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.6.0...v0.6.1) - 2024-03-06\n\n### Chore\n\n- update use of settings of postgres connection and correct typo in output\n- update dependencies\n- remove use of copyleft license MPL-2.0 from dependencies\n\n### Fix\n\n- update dependencies to address RUSTSEC-2024-0020\n\n## `postgresql_embedded` - [v0.6.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.5.0...v0.6.0) - 2024-02-24\n\n### Chore\n\n- correct formatting\n- correct linting error\n\n### Fix\n\n- [**breaking**] remove bundled as a default feature and corrected bug when the bundled feature is not used\n\n## `postgresql_embedded` - [v0.5.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.4.1...v0.5.0) - 2024-02-22\n\n### Chore\n\n- remove unnecessary use of command pipes\n- update action permissions to reduce write privilege scope\n- ignore RUSTSEC-2023-0071 as 
it is only used in sqlx example code\n- correct linting errors\n\n### Ci\n\n- run all benchmarks from workspace at once instead of individually from crates\n\n### Docs\n\n- add SECURITY.md\n- add postgres driver and sqlx examples\n- add documentation explaining why RUSTSEC-2023-0071 is ignored\n\n### Refactor\n\n- [**breaking**] refactor status to check on demand instead of attempting to track the state dynamically\n\n### Test\n\n- remove unused code\n\n## `postgresql_embedded` - [v0.4.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.4.0...v0.4.1) - 2024-02-18\n\n### Chore\n\n- Add initial dev container support\n- update windows to use UTF8 to align with other operating systems and utilize capabilities of the newer releases\n  from https://github.com/theseus-rs/postgresql-binaries\n- add code coverage configuration\n- remove extraneous line in Cargo.toml\n- update release drafter to version 6 to address node 16 deprecation warning\n- update pr-benchmarks name\n\n### Ci\n\n- update build to run benchmarks\n- add BENCHER_API_TOKEN to benchmark action\n- remove build.yml and move jobs into ci.yml\n- split benchmark runs\n- update build to run benchmarks\n- add benchmark pull request integration\n- update approach for setting ci-number\n- add pull-requests: write permission\n- remove conditional pr benchmark statements\n\n### Docs\n\n- add cargo keywords\n- update docs for new functions\n- add bencher badges\n\n### Feat\n\n- add devcontainer support\n\n### Refactor\n\n- update psql to manage setting the PGPASSWORD environment variable when pg_password is set\n- refactor the way benchmarks run on the main branch vs a PR\n\n### Test\n\n- add benchmarks\n- add CommandBuilder test coverage\n- correct the embedded lifecycle benchmark name\n- reduce archive benchmark sample size to 10\n- update benchmark configuration\n- remove bencher conditional arguments\n- combine benchmark runs into one step\n- remove all bencher options\n- reduce embedded sample size to 
10 to reduce benchmark runtime\n- update benchmark pull request configuration\n\n## `postgresql_embedded` - [v0.4.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.3.2...v0.4.0) - 2024-02-13\n\n### Docs\n\n- add postgres to keywords\n\n### Refactor\n\n- [**breaking**] remove archive hash from the public interface and always calculate/verify the hash when requesting an\n  archive\n- simplified installation logic and improved code coverage\n\n### Test\n\n- improve lifecycle test coverage\n- update elapsed error test to sleep longer to prevent intermittent test failure\n\n## `postgresql_embedded` - [v0.3.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.3.1...v0.3.2) - 2024-02-13\n\n### Bug\n\n- correct bug where serialization fails when there is a draft release of the PostgreSQL binaries\n\n### Chore\n\n- add examples\n- add missing license definitions\n\n### Test\n\n- update test code coverage\n- add tests for examples\n\n## `postgresql_embedded` - [v0.3.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.3.0...v0.3.1) - 2024-02-12\n\n### Chore\n\n- address linting error\n- change tracing levels from info to debug\n\n### Ci\n\n- add pull request labeler\n\n### Docs\n\n- update cargo description\n\n### Refactor\n\n- update postgresql_embedded::ArchiveError argument\n\n## `postgresql_embedded` - [v0.3.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.2.3...v0.3.0) - 2024-02-11\n\n### Ci\n\n- add release drafter\n\n### Refactor\n\n- [**breaking**] rename ArchiveError to postgresql_archive::Error and EmbeddedError to postgresql_embedded::Error\n\n## `postgresql_embedded` - [v0.2.3](https://github.com/theseus-rs/postgresql-embedded/compare/v0.2.2...v0.2.3) - 2024-02-11\n\n### Ci\n\n- add scheduled action to clear github caches\n\n## `postgresql_embedded` - 
[v0.2.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.2.1...v0.2.2) - 2024-02-11\n\n### Bug\n\n- warn when a release tag name does not match the expected version pattern\n\n### Chore\n\n- remove default feature test\n- update release to 0.2.2\n\n### Docs\n\n- wrap synchronous API docs in feature blocks\n- remove ci badge from rust docs\n- update examples in documentation to remove unnecessary use of .unwrap()\n\n### Feat\n\n- enable code coverage\n- add code coverage badges\n\n### Test\n\n- add tests to improve code coverage\n- updated valid initial statuses\n\n## `postgresql_embedded` - [v0.2.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.2.0...v0.2.1) - 2024-02-10\n\n### Chore\n\n- update release to 0.2.1\n\n### Docs\n\n- enable documentation features\n\n## `postgresql_embedded` - [v0.2.0](https://github.com/theseus-rs/postgresql-embedded/compare/v0.1.2...v0.2.0) - 2024-02-10\n\n### Chore\n\n- update release to 0.2.0\n\n### Docs\n\n- updated examples to use no_run to prevent documentation build failures\n\n## `postgresql_embedded` - [v0.1.2](https://github.com/theseus-rs/postgresql-embedded/compare/v0.1.1...v0.1.2) - 2024-02-10\n\n### Chore\n\n- remove cargo vet check\n- remove unused cargo dist configuration\n- update release to 0.1.2\n\n### Docs\n\n- update badges for release\n- correct crate repository urls\n- add documentation for CommandExecutor\n- remove note regarding tokio usage for the example\n- added documentation for POSTGRESQL_VERSION and GITHUB_TOKEN usage\n\n## `postgresql_embedded` - [v0.1.1](https://github.com/theseus-rs/postgresql-embedded/compare/v0.1.0...v0.1.1) - 2024-02-10\n\n### Docs\n\n- mark docs as ignored to prevent doc release failures\n\n## `postgresql_embedded` - [v0.1.0](https://github.com/theseus-rs/postgresql-embedded/compare/bd97bf1b5b53beb503034d499a0186c75ba6271e...v0.1.0) - 2024-02-10\n\n### Bug\n\n- corrected unused import and unused variable errors when building on windows\n- update 
postgresql_embedded to enable \"bundled\" as a default feature\n- correct doc lint\n- correct command test failures on windows\n- correct command builder test failures on windows\n- correct command builder test bugs on windows\n- update archive extract to support symlinks\n- corrected extract bug on MacOS caused by a directory being treated as a file\n- set encoding to SQL_ASCII for windows until binary is built with UTF8 support; use -o instead of --option when\n  attempting to start the server\n- remove failing code coverage actions\n\n### Build\n\n- *(deps)* bump tempfile from 3.9.0 to 3.10.0\n\n### Chore\n\n- initial CI configuration\n- updated tempfile config for cargo vet\n- reduce test execution and setup code coverage\n- enable rust / cargo caching for ci\n- enable caching to ci checks\n- update vet check for hermit-abi\n- update cargo vet config\n- add GITHUB_TOKEN to clippy and tests to address rate limiting\n- disable windows build\n- add author and release metadata\n- add missing crate descriptions\n- update release metadata\n\n### Docs\n\n- update MIT License header\n- update ci status badge\n- disable blocking rust doc examples\n\n### Feat\n\n- add ability to embed PostgreSQL installation in a Rust executable\n- add GITHUB_TOKEN as a Bearer token when calling the GitHub API in order to increase the rate limit\n- added initial tracing support\n\n### Refactor\n\n- update the name of the postgresql binaries repository\n\n### Test\n\n- refactor version constant tests so that they can be run in parallel to speed up builds\n- corrected pg_ctl test\n\n"
  },
  {
    "path": "Cargo.toml",
    "content": "[workspace]\ndefault-members = [\n    \"postgresql_archive\",\n    \"postgresql_commands\",\n    \"postgresql_embedded\",\n    \"postgresql_extensions\",\n]\nmembers = [\n    \"examples/*\",\n    \"postgresql_archive\",\n    \"postgresql_commands\",\n    \"postgresql_embedded\",\n    \"postgresql_extensions\",\n]\nresolver = \"3\"\n\n[workspace.package]\nauthors = [\"Brian Heineman <brian.heineman@gmail.com>\"]\ncategories = [\"database\"]\nedition = \"2024\"\nkeywords = [\"postgresql\", \"postgres\", \"embedded\", \"database\", \"server\"]\nlicense = \"(Apache-2.0 OR MIT) AND PostgreSQL\"\nrepository = \"https://github.com/theseus-rs/postgresql-embedded\"\nrust-version = \"1.92.0\"\nversion = \"0.20.2\"\n\n[workspace.dependencies]\nanyhow = \"1.0.102\"\nasync-trait = \"0.1.89\"\naxum = \"0.8.8\"\ncriterion = \"0.8.2\"\ndiesel = \"2.3.6\"\ndiesel_migrations = \"2.3.1\"\nflate2 = \"1.1.9\"\nfutures-util = \"0.3.32\"\nhex = \"0.4.3\"\nindicatif = \"0.18.4\"\nindoc = \"2.0.7\"\nliblzma = \"0.4.6\"\nmd-5 = \"0.10.6\"\npgvector = \"0.4.1\"\npostgres = \"0.19.12\"\nquick-xml = \"0.39.2\"\nr2d2_postgres = \"0.18.2\"\nrand = \"0.10.0\"\nregex-lite = \"0.1.9\"\nreqwest = { version = \"0.13.2\", default-features = false }\nreqwest-middleware = \"0.5.1\"\nreqwest-retry = \"0.9.1\"\nreqwest-tracing = \"0.7.0\"\nsemver = \"1.0.27\"\nserde = \"1.0.228\"\nserde_json = \"1.0.149\"\nsha1 = \"0.10.6\"\nsha2 = \"0.10.9\"\nsqlx = { version = \"0.8.6\", default-features = false, features = [\"postgres\"] }\ntar = \"0.4.44\"\ntarget-triple = \"1.0.0\"\ntempfile = \"3.25.0\"\ntest-log = \"0.2.19\"\nthiserror = \"2.0.18\"\ntokio = \"1.49.0\"\ntracing = \"0.1.44\"\ntracing-indicatif = \"0.3.14\"\ntracing-subscriber = \"0.3.22\"\nurl = \"2.5.8\"\nzip = { version = \"8.1.0\", default-features = false, features = [\"deflate\"] }\n\n[workspace.lints.rust]\ndead_code = \"allow\"\nmissing_debug_implementations = \"deny\"\nunsafe_code = \"deny\"\nwarnings = 
\"deny\"\n\n[workspace.lints.clippy]\npedantic = { level = \"deny\", priority = -1 }\nallow_attributes = \"deny\"\nfallible_impl_from = \"deny\"\nunwrap_used = \"deny\"\n\n[workspace.metadata.release]\nshared-version = true\ndependent-version = \"upgrade\"\ntag-name = \"v{{version}}\"\n"
  },
  {
    "path": "LICENSE-APACHE",
    "content": "                              Apache License\n                        Version 2.0, January 2004\n                     http://www.apache.org/licenses/\n\nTERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n1. Definitions.\n\n   \"License\" shall mean the terms and conditions for use, reproduction,\n   and distribution as defined by Sections 1 through 9 of this document.\n\n   \"Licensor\" shall mean the copyright owner or entity authorized by\n   the copyright owner that is granting the License.\n\n   \"Legal Entity\" shall mean the union of the acting entity and all\n   other entities that control, are controlled by, or are under common\n   control with that entity. For the purposes of this definition,\n   \"control\" means (i) the power, direct or indirect, to cause the\n   direction or management of such entity, whether by contract or\n   otherwise, or (ii) ownership of fifty percent (50%) or more of the\n   outstanding shares, or (iii) beneficial ownership of such entity.\n\n   \"You\" (or \"Your\") shall mean an individual or Legal Entity\n   exercising permissions granted by this License.\n\n   \"Source\" form shall mean the preferred form for making modifications,\n   including but not limited to software source code, documentation\n   source, and configuration files.\n\n   \"Object\" form shall mean any form resulting from mechanical\n   transformation or translation of a Source form, including but\n   not limited to compiled object code, generated documentation,\n   and conversions to other media types.\n\n   \"Work\" shall mean the work of authorship, whether in Source or\n   Object form, made available under the License, as indicated by a\n   copyright notice that is included in or attached to the work\n   (an example is provided in the Appendix below).\n\n   \"Derivative Works\" shall mean any work, whether in Source or Object\n   form, that is based on (or derived from) the Work and for which the\n   editorial revisions, 
annotations, elaborations, or other modifications\n   represent, as a whole, an original work of authorship. For the purposes\n   of this License, Derivative Works shall not include works that remain\n   separable from, or merely link (or bind by name) to the interfaces of,\n   the Work and Derivative Works thereof.\n\n   \"Contribution\" shall mean any work of authorship, including\n   the original version of the Work and any modifications or additions\n   to that Work or Derivative Works thereof, that is intentionally\n   submitted to Licensor for inclusion in the Work by the copyright owner\n   or by an individual or Legal Entity authorized to submit on behalf of\n   the copyright owner. For the purposes of this definition, \"submitted\"\n   means any form of electronic, verbal, or written communication sent\n   to the Licensor or its representatives, including but not limited to\n   communication on electronic mailing lists, source code control systems,\n   and issue tracking systems that are managed by, or on behalf of, the\n   Licensor for the purpose of discussing and improving the Work, but\n   excluding communication that is conspicuously marked or otherwise\n   designated in writing by the copyright owner as \"Not a Contribution.\"\n\n   \"Contributor\" shall mean Licensor and any individual or Legal Entity\n   on behalf of whom a Contribution has been received by Licensor and\n   subsequently incorporated within the Work.\n\n2. Grant of Copyright License. Subject to the terms and conditions of\n   this License, each Contributor hereby grants to You a perpetual,\n   worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n   copyright license to reproduce, prepare Derivative Works of,\n   publicly display, publicly perform, sublicense, and distribute the\n   Work and such Derivative Works in Source or Object form.\n\n3. Grant of Patent License. 
Subject to the terms and conditions of\n   this License, each Contributor hereby grants to You a perpetual,\n   worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n   (except as stated in this section) patent license to make, have made,\n   use, offer to sell, sell, import, and otherwise transfer the Work,\n   where such license applies only to those patent claims licensable\n   by such Contributor that are necessarily infringed by their\n   Contribution(s) alone or by combination of their Contribution(s)\n   with the Work to which such Contribution(s) was submitted. If You\n   institute patent litigation against any entity (including a\n   cross-claim or counterclaim in a lawsuit) alleging that the Work\n   or a Contribution incorporated within the Work constitutes direct\n   or contributory patent infringement, then any patent licenses\n   granted to You under this License for that Work shall terminate\n   as of the date such litigation is filed.\n\n4. Redistribution. You may reproduce and distribute copies of the\n   Work or Derivative Works thereof in any medium, with or without\n   modifications, and in Source or Object form, provided that You\n   meet the following conditions:\n\n   (a) You must give any other recipients of the Work or\n       Derivative Works a copy of this License; and\n\n   (b) You must cause any modified files to carry prominent notices\n       stating that You changed the files; and\n\n   (c) You must retain, in the Source form of any Derivative Works\n       that You distribute, all copyright, patent, trademark, and\n       attribution notices from the Source form of the Work,\n       excluding those notices that do not pertain to any part of\n       the Derivative Works; and\n\n   (d) If the Work includes a \"NOTICE\" text file as part of its\n       distribution, then any Derivative Works that You distribute must\n       include a readable copy of the attribution notices contained\n       within such NOTICE file, excluding 
those notices that do not\n       pertain to any part of the Derivative Works, in at least one\n       of the following places: within a NOTICE text file distributed\n       as part of the Derivative Works; within the Source form or\n       documentation, if provided along with the Derivative Works; or,\n       within a display generated by the Derivative Works, if and\n       wherever such third-party notices normally appear. The contents\n       of the NOTICE file are for informational purposes only and\n       do not modify the License. You may add Your own attribution\n       notices within Derivative Works that You distribute, alongside\n       or as an addendum to the NOTICE text from the Work, provided\n       that such additional attribution notices cannot be construed\n       as modifying the License.\n\n   You may add Your own copyright statement to Your modifications and\n   may provide additional or different license terms and conditions\n   for use, reproduction, or distribution of Your modifications, or\n   for any such Derivative Works as a whole, provided Your use,\n   reproduction, and distribution of the Work otherwise complies with\n   the conditions stated in this License.\n\n5. Submission of Contributions. Unless You explicitly state otherwise,\n   any Contribution intentionally submitted for inclusion in the Work\n   by You to the Licensor shall be under the terms and conditions of\n   this License, without any additional terms or conditions.\n   Notwithstanding the above, nothing herein shall supersede or modify\n   the terms of any separate license agreement you may have executed\n   with Licensor regarding such Contributions.\n\n6. Trademarks. This License does not grant permission to use the trade\n   names, trademarks, service marks, or product names of the Licensor,\n   except as required for reasonable and customary use in describing the\n   origin of the Work and reproducing the content of the NOTICE file.\n\n7. Disclaimer of Warranty. 
Unless required by applicable law or\n   agreed to in writing, Licensor provides the Work (and each\n   Contributor provides its Contributions) on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n   implied, including, without limitation, any warranties or conditions\n   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n   PARTICULAR PURPOSE. You are solely responsible for determining the\n   appropriateness of using or redistributing the Work and assume any\n   risks associated with Your exercise of permissions under this License.\n\n8. Limitation of Liability. In no event and under no legal theory,\n   whether in tort (including negligence), contract, or otherwise,\n   unless required by applicable law (such as deliberate and grossly\n   negligent acts) or agreed to in writing, shall any Contributor be\n   liable to You for damages, including any direct, indirect, special,\n   incidental, or consequential damages of any character arising as a\n   result of this License or out of the use or inability to use the\n   Work (including but not limited to damages for loss of goodwill,\n   work stoppage, computer failure or malfunction, or any and all\n   other commercial damages or losses), even if such Contributor\n   has been advised of the possibility of such damages.\n\n9. Accepting Warranty or Additional Liability. While redistributing\n   the Work or Derivative Works thereof, You may choose to offer,\n   and charge a fee for, acceptance of support, warranty, indemnity,\n   or other liability obligations and/or rights consistent with this\n   License. 
However, in accepting such obligations, You may act only\n   on Your own behalf and on Your sole responsibility, not on behalf\n   of any other Contributor, and only if You agree to indemnify,\n   defend, and hold each Contributor harmless for any liability\n   incurred by, or claims asserted against, such Contributor by reason\n   of your accepting any such warranty or additional liability.\n\nEND OF TERMS AND CONDITIONS\n"
  },
  {
    "path": "LICENSE-MIT",
    "content": "MIT License\n\nCopyright (c) 2023 Theseus contributors\n\nPermission is hereby granted, free of charge, to any\nperson obtaining a copy of this software and associated\ndocumentation files (the \"Software\"), to deal in the\nSoftware without restriction, including without\nlimitation the rights to use, copy, modify, merge,\npublish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software\nis furnished to do so, subject to the following\nconditions:\n\nThe above copyright notice and this permission notice\nshall be included in all copies or substantial portions\nof the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF\nANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED\nTO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\nPARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\nSHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\nCLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\nOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR\nIN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\"><img width=\"250\" height=\"250\" src=\"images/logo.png\"></p>\n\n# PostgreSQL Embedded\n\n[![ci](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml)\n[![Documentation](https://docs.rs/postgresql_embedded/badge.svg)](https://docs.rs/postgresql_embedded)\n[![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n[![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n[![Latest version](https://img.shields.io/crates/v/postgresql_embedded.svg)](https://crates.io/crates/postgresql_embedded)\n[![License](https://img.shields.io/crates/l/postgresql_embedded)](https://github.com/theseus-rs/postgresql-embedded#license)\n[![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n\nInstall and run a PostgreSQL database locally on Linux, MacOS or Windows. PostgreSQL can be\nbundled with your application, or downloaded on demand.\n\nThis library provides an embedded-like experience for PostgreSQL similar to what you would have with\nSQLite. This is accomplished by downloading and installing PostgreSQL during runtime. 
There is\nalso a \"bundled\" feature that, when enabled, downloads the PostgreSQL installation archive at\ncompile time, includes it in your binary, and installs from the bundled archive at runtime.\nIn either case, PostgreSQL will run in a separate process space.\n\n## Features\n\n- installing and running PostgreSQL\n- running PostgreSQL on ephemeral ports\n- Unix socket support\n- async and blocking API\n- bundling the PostgreSQL archive in an executable\n- semantic version resolution\n- ability to configure PostgreSQL startup options\n- settings builder for fluent configuration\n- URL based configuration\n- choice of native-tls or rustls\n- support for installing PostgreSQL extensions\n\n## Getting Started\n\n### Example\n\n```rust\nuse postgresql_embedded::{PostgreSQL, Result};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await\n}\n```\n\n## Notes\n\nSupports using PostgreSQL binaries from:\n\n* [theseus-rs/postgresql-binaries](https://github.com/theseus-rs/postgresql-binaries) (default)\n* [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries)\n\n## Safety\n\nThese crates use `#![forbid(unsafe_code)]` to ensure everything is implemented in 100% safe Rust.\n\n## License\n\nLicensed under either of\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)\n\nat your option.\n\nPostgreSQL is covered under [The PostgreSQL License](https://opensource.org/licenses/postgresql).\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution 
intentionally submitted for inclusion in the work by you, as\ndefined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n\n## Prior Art\n\nProjects that inspired this one:\n\n* [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries)\n* [faokunega/pg-embed](https://github.com/faokunega/pg-embed)\n"
  },
  {
    "path": "SECURITY.md",
    "content": "# Security Policy\n\n## Supported Versions\n\nOnly the latest version of this crate is supported.\n\n## Reporting a Vulnerability\n\nTo report a security vulnerability, please use the form\nat https://github.com/theseus-rs/postgresql-embedded/security/advisories/new\n"
  },
  {
    "path": "clippy.toml",
    "content": "allow-unwrap-in-tests = true\n"
  },
  {
    "path": "deny.toml",
    "content": "# Documentation for this configuration file can be found here\n# https://embarkstudios.github.io/cargo-deny/checks/cfg.html\n\n[graph]\ntargets = [\n    { triple = \"aarch64-unknown-linux-gnu\" },\n    { triple = \"aarch64-unknown-linux-musl\" },\n    { triple = \"aarch64-apple-darwin\" },\n    { triple = \"x86_64-apple-darwin\" },\n    { triple = \"x86_64-pc-windows-msvc\" },\n    { triple = \"x86_64-unknown-linux-gnu\" },\n    { triple = \"x86_64-unknown-linux-musl\" },\n]\n\n# https://embarkstudios.github.io/cargo-deny/checks/licenses/cfg.html\n[licenses]\nallow = [\n    \"Apache-2.0\",\n    \"BSD-2-Clause\",\n    \"BSD-3-Clause\",\n    \"BSL-1.0\",\n    \"MIT\",\n    \"PostgreSQL\",\n    \"Unicode-3.0\",\n    \"Zlib\",\n]\n\n# https://embarkstudios.github.io/cargo-deny/checks/advisories/cfg.html\n[advisories]\nignore = [\n]\n\n# https://embarkstudios.github.io/cargo-deny/checks/bans/cfg.html\n[bans]\nmultiple-versions = \"deny\"\nwildcards = \"allow\"\ndeny = []\n\n[[licenses.clarify]]\nname = \"ring\"\nexpression = \"MIT AND ISC AND OpenSSL\"\nlicense-files = [\n    { path = \"LICENSE\", hash = 0xbd0eed23 }\n]\n"
  },
  {
    "path": "examples/archive_async/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"archive_async\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\npostgresql_archive = { path = \"../../postgresql_archive\" }\ntempfile = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/archive_async/src/main.rs",
    "content": "#![forbid(unsafe_code)]\n#![forbid(clippy::allow_attributes)]\n#![deny(clippy::pedantic)]\n\nuse postgresql_archive::configuration::theseus;\nuse postgresql_archive::{Result, VersionReq, extract, get_archive};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let url = theseus::URL;\n    let version_req = VersionReq::STAR;\n    let (archive_version, archive) = get_archive(url, &version_req).await?;\n    let out_dir = tempfile::tempdir()?.keep();\n    extract(url, &archive, &out_dir).await?;\n    println!(\n        \"PostgreSQL {} extracted to {}\",\n        archive_version,\n        out_dir.to_string_lossy()\n    );\n    Ok(())\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_archive_async_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/archive_sync/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"archive_sync\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\npostgresql_archive = { path = \"../../postgresql_archive\", features = [\"blocking\"] }\ntempfile = { workspace = true }\n"
  },
  {
    "path": "examples/archive_sync/src/main.rs",
    "content": "use postgresql_archive::blocking::{extract, get_archive};\nuse postgresql_archive::configuration::theseus;\nuse postgresql_archive::{Result, VersionReq};\n\nfn main() -> Result<()> {\n    let url = theseus::URL;\n    let version_req = VersionReq::STAR;\n    let (archive_version, archive) = get_archive(url, &version_req)?;\n    let out_dir = tempfile::tempdir()?.keep();\n    extract(url, &archive, &out_dir)?;\n    println!(\n        \"PostgreSQL {} extracted to {}\",\n        archive_version,\n        out_dir.to_string_lossy()\n    );\n    Ok(())\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_archive_sync_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/axum_embedded/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"axum_embedded\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\naxum = { workspace = true }\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\npostgresql_extensions = { path = \"../../postgresql_extensions\" }\nsqlx = { workspace = true, features = [\"runtime-tokio\"] }\ntracing = { workspace = true }\ntracing-subscriber = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/axum_embedded/src/main.rs",
    "content": "use anyhow::Result;\nuse axum::extract::State;\nuse axum::{Json, Router, http::StatusCode, routing::get};\nuse postgresql_embedded::{PostgreSQL, Settings, VersionReq};\nuse sqlx::PgPool;\nuse sqlx::postgres::PgPoolOptions;\nuse std::env;\nuse std::time::Duration;\nuse tokio::net::TcpListener;\nuse tracing::info;\n\n/// Example of how to use postgresql embedded with axum.\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt().compact().init();\n\n    let db_url =\n        env::var(\"DATABASE_URL\").unwrap_or_else(|_| \"postgresql://postgres@localhost\".to_string());\n    info!(\"Installing PostgreSQL\");\n    let settings = Settings::from_url(&db_url)?;\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n\n    info!(\"Installing the vector extension from PortalCorp\");\n    postgresql_extensions::install(\n        postgresql.settings(),\n        \"portal-corp\",\n        \"pgvector_compiled\",\n        &VersionReq::parse(\"=0.16.12\")?,\n    )\n    .await?;\n\n    info!(\"Starting PostgreSQL\");\n    postgresql.start().await?;\n\n    let database_name = \"axum-test\";\n    info!(\"Creating database {database_name}\");\n    postgresql.create_database(database_name).await?;\n\n    info!(\"Configuring extension\");\n    let settings = postgresql.settings().clone();\n    let database_url = settings.url(database_name);\n    let pool = PgPool::connect(database_url.as_str()).await?;\n    pool.close().await;\n\n    info!(\"Restarting database\");\n    postgresql.stop().await?;\n    postgresql.start().await?;\n\n    info!(\"Setup connection pool\");\n    let pool = PgPoolOptions::new()\n        .max_connections(5)\n        .acquire_timeout(Duration::from_secs(3))\n        .connect(&database_url)\n        .await?;\n\n    info!(\"Enabling extension\");\n    enable_extension(&pool).await?;\n\n    info!(\"Start application\");\n    let app = Router::new().route(\"/\", 
get(extensions)).with_state(pool);\n\n    let listener = TcpListener::bind(\"0.0.0.0:3000\").await?;\n    info!(\"Listening on {}\", listener.local_addr()?);\n    axum::serve(listener, app).await?;\n\n    Ok(())\n}\n\nasync fn enable_extension(pool: &PgPool) -> Result<()> {\n    sqlx::query(\"CREATE EXTENSION IF NOT EXISTS vector\")\n        .execute(pool)\n        .await?;\n    Ok(())\n}\n\nasync fn extensions(State(pool): State<PgPool>) -> Result<Json<Vec<String>>, (StatusCode, String)> {\n    sqlx::query_scalar(\"SELECT name FROM pg_available_extensions ORDER BY name\")\n        .fetch_all(&pool)\n        .await\n        .map(Json)\n        .map_err(internal_error)\n}\n\nfn internal_error<E: std::error::Error>(err: E) -> (StatusCode, String) {\n    (StatusCode::INTERNAL_SERVER_ERROR, err.to_string())\n}\n"
  },
  {
    "path": "examples/diesel_embedded/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"diesel_embedded\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\ndiesel = { workspace = true, features = [\"postgres\", \"r2d2\"] }\ndiesel_migrations = { workspace = true, features = [\"postgres\"] }\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/diesel_embedded/README.md",
    "content": "This example is taken from [Getting Started with Diesel](https://diesel.rs/guides/getting-started)\nand modified to work with an embedded database.\n"
  },
  {
    "path": "examples/diesel_embedded/diesel.toml",
    "content": "# For documentation on how to configure this file,\n# see https://diesel.rs/guides/configuring-diesel-cli\n\n[print_schema]\nfile = \"src/schema.rs\"\ncustom_type_derives = [\"diesel::query_builder::QueryId\", \"Clone\"]\n\n[migrations_directory]\ndir = \"./migrations\"\n"
  },
  {
    "path": "examples/diesel_embedded/migrations/.keep",
    "content": ""
  },
  {
    "path": "examples/diesel_embedded/migrations/2024-08-17-200823_create_posts/down.sql",
    "content": "DROP TABLE posts\n"
  },
  {
    "path": "examples/diesel_embedded/migrations/2024-08-17-200823_create_posts/up.sql",
    "content": "CREATE TABLE posts\n(\n    id        SERIAL PRIMARY KEY,\n    title     VARCHAR NOT NULL,\n    body      TEXT    NOT NULL,\n    published BOOLEAN NOT NULL DEFAULT FALSE\n)\n"
  },
  {
    "path": "examples/diesel_embedded/src/main.rs",
    "content": "use crate::models::{NewPost, Post};\nuse diesel::r2d2::{ConnectionManager, Pool};\nuse diesel::{PgConnection, RunQueryDsl, SelectableHelper};\nuse diesel_migrations::{EmbeddedMigrations, MigrationHarness, embed_migrations};\nuse postgresql_embedded::{PostgreSQL, Result, Settings, VersionReq};\n\nmod models;\npub mod schema;\n\nconst MIGRATIONS: EmbeddedMigrations = embed_migrations!(\"./migrations/\");\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let settings = Settings {\n        version: VersionReq::parse(\"=16.4.0\")?,\n        username: \"postgres\".to_string(),\n        password: \"postgres\".to_string(),\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"diesel_demo\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n\n    {\n        let database_url = postgresql.settings().url(database_name);\n        let manager = ConnectionManager::<PgConnection>::new(database_url);\n        let pool = Pool::builder()\n            .test_on_check_out(true)\n            .build(manager)\n            .expect(\"Could not build connection pool\");\n        let mut mig_run = pool.get().unwrap();\n        mig_run.run_pending_migrations(MIGRATIONS).unwrap();\n\n        let post = create_post(\n            &mut pool.get().unwrap(),\n            \"My First Post\",\n            \"This is my first post\",\n        );\n        println!(\"Post '{}' created\", post.title);\n    }\n\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await\n}\n\n/// Create a new post\n///\n/// # Panics\n/// if the post cannot be saved\npub fn create_post(conn: &mut PgConnection, title: &str, body: &str) -> Post {\n    use crate::schema::posts;\n\n    let new_post = NewPost { title, body };\n\n    diesel::insert_into(posts::table)\n        .values(&new_post)\n        .returning(Post::as_returning())\n        .get_result(conn)\n        .expect(\"Error saving new post\")\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_diesel_embedded_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/diesel_embedded/src/models.rs",
    "content": "use diesel::prelude::*;\n\n#[derive(Queryable, Selectable)]\n#[diesel(table_name = crate::schema::posts)]\n#[diesel(check_for_backend(diesel::pg::Pg))]\npub struct Post {\n    pub id: i32,\n    pub title: String,\n    pub body: String,\n    pub published: bool,\n}\n\n#[derive(Insertable)]\n#[diesel(table_name = crate::schema::posts)]\npub struct NewPost<'a> {\n    pub title: &'a str,\n    pub body: &'a str,\n}\n"
  },
  {
    "path": "examples/diesel_embedded/src/schema.rs",
    "content": "diesel::table! {\n    posts (id) {\n        id -> Int4,\n        title -> Varchar,\n        body -> Text,\n        published -> Bool,\n    }\n}\n"
  },
  {
    "path": "examples/download_progress_bar/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"download_progress_bar\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\nindicatif = { workspace = true }\npostgresql_embedded = { path = \"../../postgresql_embedded\", features = [\"indicatif\"] }\ntracing-indicatif = { workspace = true }\ntracing-subscriber = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/download_progress_bar/src/main.rs",
    "content": "use anyhow::Result;\nuse indicatif::ProgressStyle;\nuse postgresql_embedded::{PostgreSQL, Settings, VersionReq};\nuse tracing_indicatif::IndicatifLayer;\nuse tracing_subscriber::filter::LevelFilter;\nuse tracing_subscriber::prelude::*;\nuse tracing_subscriber::{Registry, fmt};\n\n/// Example of how to display a progress bar for the postgresql embedded archive download\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let progress_style = ProgressStyle::with_template(\"{span_child_prefix}{spinner} {span_name} [{elapsed_precise}] [{wide_bar:.green.bold}] {bytes}/{total_bytes} ({bytes_per_sec}, {eta})\")?\n        .progress_chars(\"=> \");\n    let indicatif_layer = IndicatifLayer::new().with_progress_style(progress_style);\n    let subscriber = Registry::default()\n        .with(fmt::Layer::default().with_filter(LevelFilter::INFO))\n        .with(indicatif_layer);\n    subscriber.init();\n\n    let settings = Settings {\n        version: VersionReq::parse(\"=16.4.0\")?,\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await?;\n    Ok(())\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_download_progress_bar_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/embedded_async/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"embedded_async\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/embedded_async/src/main.rs",
    "content": "use postgresql_embedded::{PostgreSQL, Result, Settings, VersionReq};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let settings = Settings {\n        version: VersionReq::parse(\"=16.4.0\")?,\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_embedded_async_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/embedded_sync/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"embedded_sync\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\npostgresql_embedded = { path = \"../../postgresql_embedded\", features = [\"blocking\"] }\n"
  },
  {
    "path": "examples/embedded_sync/src/main.rs",
    "content": "use postgresql_embedded::Result;\nuse postgresql_embedded::blocking::PostgreSQL;\n\nfn main() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup()?;\n    postgresql.start()?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name)?;\n    postgresql.database_exists(database_name)?;\n    postgresql.drop_database(database_name)?;\n\n    postgresql.stop()\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_embedded_sync_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/portal_corp_extension/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"portal_corp_extension\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\nindoc = { workspace = true }\npgvector = { workspace = true, features = [\"sqlx\"] }\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\npostgresql_extensions = { path = \"../../postgresql_extensions\" }\nsqlx = { workspace = true, features = [\"runtime-tokio\"] }\ntracing = { workspace = true }\ntracing-subscriber = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/portal_corp_extension/src/main.rs",
    "content": "use anyhow::Result;\nuse indoc::indoc;\nuse pgvector::Vector;\nuse sqlx::{PgPool, Row};\nuse tracing::info;\n\nuse postgresql_embedded::{PostgreSQL, Settings, VersionReq};\n\n/// Example of how to install and configure the `PortalCorp` pgvector extension.\n///\n/// See: <https://github.com/pgvector/pgvector?tab=readme-ov-file#getting-started>\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt().compact().init();\n\n    info!(\"Installing PostgreSQL\");\n    let postgresql_version = VersionReq::parse(\"=16.4.0\")?;\n    let settings = Settings {\n        version: postgresql_version.clone(),\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n\n    let settings = postgresql.settings();\n    // Skip the test if the PostgreSQL version does not match; when testing with the 'bundled'\n    // feature, the version may vary and the test will fail.\n    if settings.version != postgresql_version {\n        eprintln!(\"Postgresql version does not match\");\n        return Ok(());\n    }\n\n    info!(\"Installing the vector extension from PortalCorp\");\n    postgresql_extensions::install(\n        postgresql.settings(),\n        \"portal-corp\",\n        \"pgvector_compiled\",\n        &VersionReq::parse(\"=0.16.12\")?,\n    )\n    .await?;\n\n    info!(\"Starting PostgreSQL\");\n    postgresql.start().await?;\n\n    let database_name = \"vector-example\";\n    info!(\"Creating database {database_name}\");\n    postgresql.create_database(database_name).await?;\n\n    info!(\"Configuring extension\");\n    let settings = postgresql.settings();\n    let database_url = settings.url(database_name);\n    let pool = PgPool::connect(database_url.as_str()).await?;\n    pool.close().await;\n\n    info!(\"Restarting database\");\n    postgresql.stop().await?;\n    postgresql.start().await?;\n\n    info!(\"Enabling extension\");\n    let pool = 
PgPool::connect(database_url.as_str()).await?;\n    enable_extension(&pool).await?;\n\n    info!(\"Creating table\");\n    create_table(&pool).await?;\n\n    info!(\"Creating data\");\n    create_data(&pool).await?;\n\n    info!(\"Get the nearest neighbors by L2 distance\");\n    execute_query(\n        &pool,\n        \"SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5\",\n    )\n    .await?;\n\n    info!(\"Stopping database\");\n    postgresql.stop().await?;\n    Ok(())\n}\n\nasync fn enable_extension(pool: &PgPool) -> Result<()> {\n    sqlx::query(\"DROP EXTENSION IF EXISTS vector\")\n        .execute(pool)\n        .await?;\n    sqlx::query(\"CREATE EXTENSION IF NOT EXISTS vector\")\n        .execute(pool)\n        .await?;\n    Ok(())\n}\n\nasync fn create_table(pool: &PgPool) -> Result<()> {\n    sqlx::query(indoc! {\"\n        CREATE TABLE IF NOT EXISTS items (\n            id bigserial PRIMARY KEY,\n            embedding vector(3) NOT NULL\n        )\n    \"})\n    .execute(pool)\n    .await?;\n    Ok(())\n}\n\nasync fn create_data(pool: &PgPool) -> Result<()> {\n    sqlx::query(indoc! {\"\n        INSERT INTO items (embedding)\n        VALUES\n            ('[1,2,3]'),\n            ('[4,5,6]')\n    \"})\n    .execute(pool)\n    .await?;\n    Ok(())\n}\n\nasync fn execute_query(pool: &PgPool, query: &str) -> Result<()> {\n    info!(\"Query: {query}\");\n    let rows = sqlx::query(query).fetch_all(pool).await?;\n    for row in rows {\n        let id: i64 = row.try_get(\"id\")?;\n        let embedding: Vector = row.try_get(\"embedding\")?;\n        info!(\"ID: {id}, Embedding: {embedding:?}\");\n    }\n    Ok(())\n}\n\n#[cfg(test)]\nmod test {\n    #[cfg(not(all(target_os = \"linux\", target_arch = \"x86_64\")))]\n    use super::*;\n\n    #[cfg(not(all(target_os = \"linux\", target_arch = \"x86_64\")))]\n    #[test]\n    fn test_portal_corp_extension_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/postgres_embedded/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"postgres_embedded\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\npostgres = { workspace = true }\npostgresql_embedded = { path = \"../../postgresql_embedded\", features = [\"blocking\"] }\n"
  },
  {
    "path": "examples/postgres_embedded/README.md",
    "content": "This example is based on [sqlx/examples/postgres/todos](https://github.com/launchbadge/sqlx/tree/main/examples/postgres/todos)\nand modified to work with the postgres driver.\n"
  },
  {
    "path": "examples/postgres_embedded/src/main.rs",
    "content": "use anyhow::Result;\nuse postgres::{Client, NoTls};\nuse postgresql_embedded::blocking::PostgreSQL;\n\nfn main() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup()?;\n    postgresql.start()?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name)?;\n    let settings = postgresql.settings();\n    let mut client = Client::connect(\n        format!(\n            \"host={host} port={port} user={username} password={password}\",\n            host = settings.host,\n            port = settings.port,\n            username = settings.username,\n            password = settings.password\n        )\n        .as_str(),\n        NoTls,\n    )?;\n\n    println!(\"Creating table 'todos'\");\n    create_table_todo(&mut client)?;\n\n    let description = \"Implement embedded database with postgres\";\n    println!(\"Adding new todo with description '{description}'\");\n    let todo_id = add_todo(&mut client, description)?;\n    println!(\"Added new todo with id {todo_id}\");\n\n    println!(\"Marking todo {todo_id} as done\");\n    if complete_todo(&mut client, todo_id)? 
{\n        println!(\"Todo {todo_id} is marked as done\");\n    }\n\n    println!(\"Printing list of all todos\");\n    list_todos(&mut client)?;\n\n    Ok(())\n}\n\nfn create_table_todo(client: &mut Client) -> Result<()> {\n    let _ = client.execute(\n        \"CREATE TABLE IF NOT EXISTS todos (id BIGSERIAL PRIMARY KEY, description TEXT NOT NULL, done BOOLEAN NOT NULL DEFAULT FALSE);\",\n        &[],\n    )?;\n\n    Ok(())\n}\n\nfn add_todo(client: &mut Client, description: &str) -> Result<i64> {\n    let row = client.query_one(\n        \"INSERT INTO todos (description) VALUES ($1) RETURNING id\",\n        &[&description],\n    )?;\n\n    let id: i64 = row.get(0);\n    Ok(id)\n}\n\nfn complete_todo(client: &mut Client, id: i64) -> Result<bool> {\n    let rows_affected = client.execute(\"UPDATE todos SET done = TRUE WHERE id = $1\", &[&id])?;\n\n    Ok(rows_affected > 0)\n}\n\nfn list_todos(client: &mut Client) -> Result<()> {\n    let rows = client.query(\"SELECT id, description, done FROM todos ORDER BY id\", &[])?;\n\n    for rec in rows {\n        let id: i64 = rec.get(\"id\");\n        let description: String = rec.get(\"description\");\n        let done: bool = rec.get(\"done\");\n        println!(\n            \"- [{}] {}: {}\",\n            if done { \"x\" } else { \" \" },\n            id,\n            &description,\n        );\n    }\n\n    Ok(())\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_postgres_embedded_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/sqlx_embedded/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"sqlx_embedded\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\nsqlx = { workspace = true, features = [\"runtime-tokio\"] }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/sqlx_embedded/README.md",
    "content": "This example is taken from [sqlx/examples/postgres/todos](https://github.com/launchbadge/sqlx/tree/main/examples/postgres/todos)\nand modified to work with an embedded database.\n"
  },
  {
    "path": "examples/sqlx_embedded/src/main.rs",
    "content": "use anyhow::Result;\nuse postgresql_embedded::PostgreSQL;\nuse sqlx::Row;\nuse sqlx::postgres::PgPool;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    let settings = postgresql.settings();\n    let database_url = settings.url(database_name);\n\n    let pool = PgPool::connect(database_url.as_str()).await?;\n\n    println!(\"Creating table 'todos'\");\n    create_table_todo(&pool).await?;\n\n    let description = \"Implement embedded database with sqlx\";\n    println!(\"Adding new todo with description '{description}'\");\n    let todo_id = add_todo(&pool, description).await?;\n    println!(\"Added new todo with id {todo_id}\");\n\n    println!(\"Marking todo {todo_id} as done\");\n    if complete_todo(&pool, todo_id).await? {\n        println!(\"Todo {todo_id} is marked as done\");\n    }\n\n    println!(\"Printing list of all todos\");\n    list_todos(&pool).await?;\n\n    Ok(())\n}\n\nasync fn create_table_todo(pool: &PgPool) -> Result<()> {\n    sqlx::query(\n        \"CREATE TABLE IF NOT EXISTS todos(id BIGSERIAL PRIMARY KEY, description TEXT NOT NULL, done BOOLEAN NOT NULL DEFAULT FALSE);\"\n    ).execute(pool).await?;\n\n    Ok(())\n}\n\nasync fn add_todo(pool: &PgPool, description: &str) -> Result<i64> {\n    let rec = sqlx::query(\"INSERT INTO todos (description) VALUES ($1) RETURNING id\")\n        .bind(description)\n        .fetch_one(pool)\n        .await?;\n\n    let id: i64 = rec.get(\"id\");\n    Ok(id)\n}\n\nasync fn complete_todo(pool: &PgPool, id: i64) -> Result<bool> {\n    let rows_affected = sqlx::query(\"UPDATE todos SET done = TRUE WHERE id = $1\")\n        .bind(id)\n        .execute(pool)\n        .await?\n        .rows_affected();\n\n    Ok(rows_affected > 0)\n}\n\nasync fn list_todos(pool: &PgPool) -> Result<()> 
{\n    let recs = sqlx::query(\"SELECT id, description, done FROM todos ORDER BY id\")\n        .fetch_all(pool)\n        .await?;\n\n    for rec in recs {\n        let id: i64 = rec.get(\"id\");\n        let description: String = rec.get(\"description\");\n        let done: bool = rec.get(\"done\");\n        println!(\n            \"- [{}] {}: {}\",\n            if done { \"x\" } else { \" \" },\n            id,\n            &description,\n        );\n    }\n\n    Ok(())\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_sqlx_embedded_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/tensor_chord_extension/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"tensor_chord_extension\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\nanyhow = { workspace = true }\nindoc = { workspace = true }\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\npostgresql_extensions = { path = \"../../postgresql_extensions\" }\nsqlx = { workspace = true, features = [\"runtime-tokio\"] }\ntracing = { workspace = true }\ntracing-subscriber = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/tensor_chord_extension/src/main.rs",
    "content": "use anyhow::Result;\nuse indoc::indoc;\nuse sqlx::{PgPool, Row};\nuse tracing::info;\n\nuse postgresql_embedded::{PostgreSQL, Settings, VersionReq};\n\n/// Example of how to install and configure the `TensorChord` vector extension.\n///\n/// See: <https://github.com/tensorchord/pgvecto.rs/?tab=readme-ov-file#quick-start>\n#[tokio::main]\nasync fn main() -> Result<()> {\n    tracing_subscriber::fmt().compact().init();\n\n    info!(\"Installing PostgreSQL\");\n    let settings = Settings {\n        version: VersionReq::parse(\"=16.4.0\")?,\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n\n    info!(\"Installing the vector extension from TensorChord\");\n    postgresql_extensions::install(\n        postgresql.settings(),\n        \"tensor-chord\",\n        \"pgvecto.rs\",\n        &VersionReq::parse(\"=0.4.0\")?,\n    )\n    .await?;\n\n    info!(\"Starting PostgreSQL\");\n    postgresql.start().await?;\n\n    let database_name = \"vector-example\";\n    info!(\"Creating database {database_name}\");\n    postgresql.create_database(database_name).await?;\n\n    info!(\"Configuring extension\");\n    let settings = postgresql.settings();\n    let database_url = settings.url(database_name);\n    let pool = PgPool::connect(database_url.as_str()).await?;\n    configure_extension(&pool).await?;\n    pool.close().await;\n\n    info!(\"Restarting database\");\n    postgresql.stop().await?;\n    postgresql.start().await?;\n\n    info!(\"Enabling extension\");\n    let pool = PgPool::connect(database_url.as_str()).await?;\n    enable_extension(&pool).await?;\n\n    info!(\"Creating table\");\n    create_table(&pool).await?;\n\n    info!(\"Creating data\");\n    create_data(&pool).await?;\n\n    info!(\"Squared Euclidean Distance\");\n    execute_query(\n        &pool,\n        \"SELECT '[1, 2, 3]'::vector <-> '[3, 2, 1]'::vector AS value\",\n    )\n    .await?;\n\n    info!(\"Negative 
Dot Product\");\n    execute_query(\n        &pool,\n        \"SELECT '[1, 2, 3]'::vector <#> '[3, 2, 1]'::vector AS value\",\n    )\n    .await?;\n\n    info!(\"Cosine Distance\");\n    execute_query(\n        &pool,\n        \"SELECT '[1, 2, 3]'::vector <=> '[3, 2, 1]'::vector AS value\",\n    )\n    .await?;\n\n    info!(\"Stopping database\");\n    postgresql.stop().await?;\n    Ok(())\n}\n\nasync fn configure_extension(pool: &PgPool) -> Result<()> {\n    sqlx::query(\"ALTER SYSTEM SET shared_preload_libraries = \\\"vectors.so\\\"\")\n        .execute(pool)\n        .await?;\n    sqlx::query(\"ALTER SYSTEM SET search_path = \\\"$user\\\", public, vectors\")\n        .execute(pool)\n        .await?;\n    Ok(())\n}\n\nasync fn enable_extension(pool: &PgPool) -> Result<()> {\n    sqlx::query(\"DROP EXTENSION IF EXISTS vectors\")\n        .execute(pool)\n        .await?;\n    sqlx::query(\"CREATE EXTENSION IF NOT EXISTS vectors\")\n        .execute(pool)\n        .await?;\n    Ok(())\n}\n\nasync fn create_table(pool: &PgPool) -> Result<()> {\n    sqlx::query(indoc! {\"\n        CREATE TABLE IF NOT EXISTS items (\n            id bigserial PRIMARY KEY,\n            embedding vector(3) NOT NULL\n        )\n    \"})\n    .execute(pool)\n    .await?;\n    Ok(())\n}\n\nasync fn create_data(pool: &PgPool) -> Result<()> {\n    sqlx::query(indoc! {\"\n        INSERT INTO items (embedding)\n        VALUES\n            ('[1,2,3]'),\n            ('[4,5,6]')\n    \"})\n    .execute(pool)\n    .await?;\n    sqlx::query(indoc! 
{\"\n        INSERT INTO items (embedding)\n        VALUES\n            (ARRAY[1, 2, 3]::real[]),\n            (ARRAY[4, 5, 6]::real[])\n    \"})\n    .execute(pool)\n    .await?;\n    Ok(())\n}\n\nasync fn execute_query(pool: &PgPool, query: &str) -> Result<()> {\n    let row = sqlx::query(query).fetch_one(pool).await?;\n    let value: f32 = row.try_get(\"value\")?;\n    info!(\"{}: {}\", query, value);\n    Ok(())\n}\n\n// #[cfg(test)]\n// mod test {\n//     use super::*;\n//\n//     #[test]\n//     #[ignore = \"this extension has been deprecated\"]\n//     fn test_tensor_chord_extension_main() -> Result<()> {\n//         main()\n//     }\n// }\n"
  },
  {
    "path": "examples/unix_socket/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"unix_socket\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\npostgresql_embedded = { path = \"../../postgresql_embedded\" }\ntempfile = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/unix_socket/src/main.rs",
    "content": "use postgresql_embedded::{PostgreSQL, Result, SettingsBuilder};\n\n#[cfg(unix)]\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let socket_dir = tempfile::tempdir().expect(\"failed to create temp dir for socket\");\n\n    let settings = SettingsBuilder::new()\n        .socket_dir(socket_dir.path().to_path_buf())\n        .build();\n\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let port = postgresql.settings().port;\n    let socket_file = socket_dir.path().join(format!(\".s.PGSQL.{port}\"));\n    println!(\"PostgreSQL is listening on Unix socket: {socket_file:?}\");\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    println!(\"Created database '{database_name}'\");\n\n    let exists = postgresql.database_exists(database_name).await?;\n    println!(\"Database '{database_name}' exists: {exists}\");\n\n    postgresql.drop_database(database_name).await?;\n    println!(\"Dropped database '{database_name}'\");\n\n    postgresql.stop().await?;\n    println!(\"PostgreSQL stopped\");\n\n    Ok(())\n}\n\n#[cfg(not(unix))]\nfn main() {\n    eprintln!(\"Unix socket support is only available on Unix platforms\");\n}\n\n#[cfg(test)]\n#[cfg(unix)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_unix_socket_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "examples/zonky/Cargo.toml",
    "content": "[package]\nedition.workspace = true\nname = \"zonky\"\npublish = false\nlicense.workspace = true\nversion.workspace = true\n\n[dependencies]\npostgresql_archive = { path = \"../../postgresql_archive\" }\npostgresql_embedded = { path = \"../../postgresql_embedded\", default-features = false, features = [\"zonky\"] }\ntokio = { workspace = true, features = [\"full\"] }\n"
  },
  {
    "path": "examples/zonky/src/main.rs",
    "content": "use postgresql_archive::VersionReq;\nuse postgresql_archive::configuration::zonky;\nuse postgresql_embedded::{PostgreSQL, Result, Settings};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let settings = Settings {\n        releases_url: zonky::URL.to_string(),\n        version: VersionReq::parse(\"=16.3.0\")?,\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await\n}\n\n#[cfg(test)]\nmod test {\n    #[cfg(not(all(target_os = \"linux\", target_arch = \"x86_64\")))]\n    use super::*;\n\n    #[cfg(not(all(target_os = \"linux\", target_arch = \"x86_64\")))]\n    #[test]\n    fn test_zonky_main() -> Result<()> {\n        main()\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/Cargo.toml",
    "content": "[package]\nauthors.workspace = true\ncategories.workspace = true\ndescription = \"A library for downloading and extracting PostgreSQL archives\"\nedition.workspace = true\nkeywords.workspace = true\nlicense.workspace = true\nname = \"postgresql_archive\"\nrepository = \"https://github.com/theseus-rs/postgresql-embedded\"\nrust-version.workspace = true\nversion.workspace = true\n\n[dependencies]\nasync-trait = { workspace = true }\nflate2 = { workspace = true, optional = true }\nfutures-util = { workspace = true }\nhex = { workspace = true }\nliblzma = { workspace = true, optional = true }\nmd-5 = { workspace = true, optional = true }\nquick-xml = { workspace = true, features = [\"serialize\"], optional = true }\nregex-lite = { workspace = true }\nreqwest = { workspace = true, default-features = false, features = [\"http2\", \"json\", \"query\", \"stream\"] }\nreqwest-middleware = { workspace = true, features = [\"query\"] }\nreqwest-retry = { workspace = true }\nreqwest-tracing = { workspace = true }\nsemver = { workspace = true }\nserde = { workspace = true, features = [\"derive\"] }\nserde_json = { workspace = true, optional = true }\nsha1 = { workspace = true, optional = true }\nsha2 = { workspace = true, optional = true }\ntar = { workspace = true, optional = true }\ntarget-triple = { workspace = true }\ntempfile = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"full\"], optional = true }\ntracing = { workspace = true, features = [\"log\"] }\ntracing-indicatif = { workspace = true, optional = true }\nurl = { workspace = true }\nzip = { workspace = true, optional = true }\n\n[dev-dependencies]\nanyhow = { workspace = true }\ncriterion = { workspace = true }\nhex = { workspace = true }\ntest-log = { workspace = true }\ntokio = { workspace = true }\n\n[features]\ndefault = [\n    \"native-tls\",\n    \"theseus\"\n]\nblocking = [\"dep:tokio\"]\ngithub = [\n    \"dep:serde_json\",\n]\nindicatif = [\n  
  \"dep:tracing-indicatif\"\n]\nmaven = [\n    \"dep:quick-xml\",\n    \"md5\",\n    \"sha1\",\n    \"sha2\",\n]\nmd5 = [\"dep:md-5\"]\nnative-tls = [\"reqwest/native-tls\"]\nrustls = [\"reqwest/rustls\"]\nsha1 = [\"dep:sha1\"]\nsha2 = [\"dep:sha2\"]\ntar-gz = [\n    \"dep:flate2\",\n    \"dep:tar\",\n]\ntar-xz = [\n    \"dep:liblzma\",\n    \"dep:tar\",\n]\ntheseus = [\n    \"github\",\n    \"sha2\",\n    \"tar-gz\",\n]\nzip = [\n    \"dep:zip\",\n]\nzonky = [\n    \"maven\",\n    \"tar-xz\",\n    \"zip\",\n]\n\n[package.metadata.docs.rs]\nfeatures = [\"blocking\"]\ntargets = [\"x86_64-unknown-linux-gnu\"]\n\n[[bench]]\nharness = false\nname = \"archive\"\n\n[package.metadata.cargo-machete]\nignored = [\n    \"md-5\",\n    \"serde_json\",\n]\n"
  },
  {
    "path": "postgresql_archive/README.md",
    "content": "# PostgreSQL Archive\n\n[![ci](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml)\n[![Documentation](https://docs.rs/postgresql_archive/badge.svg)](https://docs.rs/postgresql_archive)\n[![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n[![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n[![Latest version](https://img.shields.io/crates/v/postgresql_archive.svg)](https://crates.io/crates/postgresql_archive)\n[![License](https://img.shields.io/crates/l/postgresql_archive?)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_archive#license)\n[![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n\nA configurable library for downloading and extracting PostgreSQL archives.\n\n## Examples\n\n### Asynchronous API\n\n```rust\nuse postgresql_archive::{extract, get_archive, Result, VersionReq};\nuse postgresql_archive::configuration::theseus;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let url = theseus::URL;\n    let (archive_version, archive) = get_archive(url, &VersionReq::STAR).await?;\n    let out_dir = std::env::temp_dir();\n    extract(url, &archive, &out_dir).await\n}\n```\n\n### Synchronous API\n\n```rust\nuse postgresql_archive::configuration::theseus;\nuse postgresql_archive::{Result, VersionReq};\nuse postgresql_archive::blocking::{extract, get_archive};\n\nfn main() -> Result<()> {\n    let url = theseus::URL;\n    let (archive_version, archive) = get_archive(url, &VersionReq::STAR)?;\n    let out_dir = std::env::temp_dir();\n    extract(url, &archive, &out_dir)\n}\n```\n\n## Feature flags\n\npostgresql_archive uses [feature 
flags](https://doc.rust-lang.org/cargo/reference/features.html) to reduce compile time and binary size.\n\nThe following features are available:\n\n| Name         | Description                       | Default? |\n|--------------|-----------------------------------|----------|\n| `blocking`   | Enables the blocking API          | No       |\n| `indicatif`  | Enables tracing-indicatif support | No       |\n| `native-tls` | Enables native-tls support        | Yes      |\n| `rustls`     | Enables rustls support            | No       |\n\n### Configurations\n\n| Name      | Description                         | Default? |\n|-----------|-------------------------------------|----------|\n| `theseus` | Enables theseus PostgreSQL binaries | Yes      |\n| `zonky`   | Enables zonky PostgreSQL binaries   | No       |\n\n### Extractors\n\n| Name     | Description              | Default? |\n|----------|--------------------------|----------|\n| `tar-gz` | Enables tar gz extractor | Yes      |\n| `tar-xz` | Enables tar xz extractor | No       |\n| `zip`    | Enables zip extractor    | No       |\n\n### Hashers\n\n| Name   | Description          | Default? |\n|--------|----------------------|----------|\n| `md5`  | Enables md5 hashers  | No       |\n| `sha1` | Enables sha1 hashers | No       |\n| `sha2` | Enables sha2 hashers | Yes¹     |\n\n¹ enabled by the `theseus` feature flag.\n\n### Repositories\n\n| Name     | Description               | Default? 
|\n|----------|---------------------------|----------|\n| `github` | Enables github repository | Yes¹     |\n| `maven`  | Enables maven repository  | No       |\n\n¹ enabled by the `theseus` feature flag.\n\n## Supported platforms\n\n`postgresql_archive` provides implementations for the following:\n\n* [theseus-rs/postgresql-binaries](https://github.com/theseus-rs/postgresql-binaries)\n* [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries)\n\n## License\n\nLicensed under either of\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as\ndefined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n"
  },
  {
    "path": "postgresql_archive/benches/archive.rs",
    "content": "use criterion::{Criterion, criterion_group, criterion_main};\nuse postgresql_archive::blocking::{extract, get_archive};\nuse postgresql_archive::configuration::theseus;\nuse postgresql_archive::{Result, VersionReq};\nuse std::fs::{create_dir_all, remove_dir_all};\nuse std::time::Duration;\n\nfn benchmarks(criterion: &mut Criterion) {\n    bench_extract(criterion).ok();\n}\n\nfn bench_extract(criterion: &mut Criterion) -> Result<()> {\n    let version_req = VersionReq::STAR;\n    let (_archive_version, archive) = get_archive(theseus::URL, &version_req)?;\n\n    criterion.bench_function(\"extract\", |bencher| {\n        bencher.iter(|| {\n            extract_archive(&archive).ok();\n        });\n    });\n\n    Ok(())\n}\n\nfn extract_archive(archive: &Vec<u8>) -> Result<()> {\n    let out_dir = tempfile::tempdir()?.path().to_path_buf();\n    create_dir_all(&out_dir)?;\n    extract(theseus::URL, archive, &out_dir)?;\n    remove_dir_all(&out_dir)?;\n    Ok(())\n}\n\ncriterion_group!(\n    name = benches;\n    config = Criterion::default()\n        .measurement_time(Duration::from_secs(30))\n        .sample_size(10);\n    targets = benchmarks\n);\ncriterion_main!(benches);\n"
  },
  {
    "path": "postgresql_archive/src/archive.rs",
    "content": "//! Manage PostgreSQL archives\n#![allow(dead_code)]\n\nuse crate::error::Result;\nuse crate::{extractor, repository};\nuse regex_lite::Regex;\nuse semver::{Version, VersionReq};\nuse std::path::{Path, PathBuf};\nuse tracing::instrument;\n\n/// Gets the version for the specified [version requirement](VersionReq). If a version for the\n/// [version requirement](VersionReq) is not found, then an error is returned.\n///\n/// # Errors\n/// * If the version is not found.\n#[instrument(level = \"debug\")]\npub async fn get_version(url: &str, version_req: &VersionReq) -> Result<Version> {\n    let repository = repository::registry::get(url)?;\n    let version = repository.get_version(version_req).await?;\n    Ok(version)\n}\n\n/// Gets the archive for a given [version requirement](VersionReq) that passes the default\n/// matcher. If no archive is found for the [version requirement](VersionReq) and matcher then\n/// an [error](crate::error::Error) is returned.\n///\n/// # Errors\n/// * If the archive is not found.\n/// * If the archive cannot be downloaded.\n#[instrument]\npub async fn get_archive(url: &str, version_req: &VersionReq) -> Result<(Version, Vec<u8>)> {\n    let repository = repository::registry::get(url)?;\n    let archive = repository.get_archive(version_req).await?;\n    let version = archive.version().clone();\n    let bytes = archive.bytes().to_vec();\n    Ok((version, bytes))\n}\n\n/// Extracts the compressed tar `bytes` to the [out_dir](Path).\n///\n/// # Errors\n/// Returns an error if the extraction fails.\n#[instrument(skip(bytes))]\npub async fn extract(url: &str, bytes: &Vec<u8>, out_dir: &Path) -> Result<Vec<PathBuf>> {\n    let extractor_fn = extractor::registry::get(url)?;\n    let mut extract_directories = extractor::ExtractDirectories::default();\n    extract_directories.add_mapping(Regex::new(\".*\")?, out_dir.to_path_buf());\n    extractor_fn(bytes, &extract_directories)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    
use crate::configuration::theseus::URL;\n\n    #[tokio::test]\n    async fn test_get_version() -> Result<()> {\n        let version_req = VersionReq::parse(\"=16.4.0\")?;\n        let version = get_version(URL, &version_req).await?;\n        assert_eq!(Version::new(16, 4, 0), version);\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_archive() -> Result<()> {\n        let version_req = VersionReq::parse(\"=16.4.0\")?;\n        let (version, bytes) = get_archive(URL, &version_req).await?;\n        assert_eq!(Version::new(16, 4, 0), version);\n        assert!(!bytes.is_empty());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/blocking/archive.rs",
    "content": "use crate::{Version, VersionReq};\nuse std::path::{Path, PathBuf};\nuse std::sync::LazyLock;\nuse tokio::runtime::Runtime;\n\nstatic RUNTIME: LazyLock<Runtime> = LazyLock::new(|| Runtime::new().unwrap());\n\n/// Gets the version for the specified [version requirement](VersionReq). If a version for the\n/// [version requirement](VersionReq) is not found, then an error is returned.\n///\n/// # Errors\n/// * If the version is not found.\npub fn get_version(url: &str, version_req: &VersionReq) -> crate::Result<Version> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::get_version(url, version_req).await })\n}\n\n/// Gets the archive for a given [version requirement](VersionReq) that passes the default\n/// matcher.\n///\n/// If no archive is found for the [version requirement](VersionReq) and matcher then\n/// an [error](crate::error::Error) is returned.\n///\n/// # Errors\n/// * If the archive is not found.\n/// * If the archive cannot be downloaded.\npub fn get_archive(url: &str, version_req: &VersionReq) -> crate::Result<(Version, Vec<u8>)> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::get_archive(url, version_req).await })\n}\n\n/// Extracts the compressed tar `bytes` to the [out_dir](Path).\n///\n/// # Errors\n/// Returns an error if the extraction fails.\npub fn extract(url: &str, bytes: &Vec<u8>, out_dir: &Path) -> crate::Result<Vec<PathBuf>> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::extract(url, bytes, out_dir).await })\n}\n"
  },
  {
    "path": "postgresql_archive/src/blocking/mod.rs",
    "content": "mod archive;\n\npub use archive::{extract, get_archive, get_version};\n"
  },
  {
    "path": "postgresql_archive/src/configuration/custom/matcher.rs",
    "content": "use semver::Version;\n\n/// Matcher for PostgreSQL binaries from custom GitHub release repositories following the same\n/// pattern as <https://github.com/theseus-rs/postgresql-binaries>\n///\n/// # Errors\n/// * If the asset matcher fails.\npub fn matcher(_url: &str, name: &str, version: &Version) -> crate::Result<bool> {\n    let target = target_triple::TARGET;\n    // TODO: consider relaxing the version format to allow for more flexibility in where the version\n    //       and target appear in the filename.\n    let expected_name = format!(\"postgresql-{version}-{target}.tar.gz\");\n    Ok(name == expected_name)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::{Result, matcher};\n\n    const TEST_URL: &str = \"https://github.com/owner/repo\";\n\n    #[test]\n    fn test_register_custom_repo() -> Result<()> {\n        #[expect(clippy::unnecessary_wraps)]\n        fn supports_fn(url: &str) -> Result<bool> {\n            Ok(url == TEST_URL)\n        }\n        matcher::registry::register(supports_fn, matcher)?;\n\n        let matcher = matcher::registry::get(TEST_URL)?;\n        let version = Version::new(16, 3, 0);\n        let expected_name = format!(\"postgresql-{}-{}.tar.gz\", version, target_triple::TARGET);\n        assert!(matcher(\"\", &expected_name, &version)?);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/configuration/custom/mod.rs",
    "content": "pub mod matcher;\n\npub use matcher::matcher;\n"
  },
  {
    "path": "postgresql_archive/src/configuration/mod.rs",
    "content": "pub mod custom;\n#[cfg(feature = \"theseus\")]\npub mod theseus;\n#[cfg(feature = \"zonky\")]\npub mod zonky;\n"
  },
  {
    "path": "postgresql_archive/src/configuration/theseus/extractor.rs",
    "content": "use crate::Error::Unexpected;\nuse crate::Result;\nuse crate::extractor::{ExtractDirectories, tar_gz_extract};\nuse regex_lite::Regex;\nuse std::fs::{create_dir_all, remove_dir_all, remove_file, rename};\nuse std::path::{Path, PathBuf};\nuse std::thread::sleep;\nuse std::time::Duration;\nuse tracing::{debug, instrument, warn};\n\n/// Extracts the compressed tar `bytes` to the [out_dir](Path).\n///\n/// # Errors\n/// Returns an error if the extraction fails.\n#[instrument(skip(bytes))]\npub fn extract(bytes: &Vec<u8>, extract_directories: &ExtractDirectories) -> Result<Vec<PathBuf>> {\n    let out_dir = extract_directories.get_path(\".\")?;\n\n    let parent_dir = if let Some(parent) = out_dir.parent() {\n        parent\n    } else {\n        debug!(\"No parent directory for {}\", out_dir.to_string_lossy());\n        out_dir.as_path()\n    };\n\n    create_dir_all(parent_dir)?;\n\n    let lock_file = acquire_lock(parent_dir)?;\n    // If the directory already exists, then the archive has already been\n    // extracted by another process.\n    if out_dir.exists() {\n        debug!(\n            \"Directory already exists {}; skipping extraction: \",\n            out_dir.to_string_lossy()\n        );\n        remove_file(&lock_file)?;\n        return Ok(Vec::new());\n    }\n\n    let extract_dir = tempfile::tempdir_in(parent_dir)?.keep();\n    debug!(\"Extracting archive to {}\", extract_dir.to_string_lossy());\n    let mut archive_extract_directories = ExtractDirectories::default();\n    archive_extract_directories.add_mapping(Regex::new(\".*\")?, extract_dir.clone());\n    let files = tar_gz_extract(bytes, &archive_extract_directories)?;\n\n    if out_dir.exists() {\n        debug!(\n            \"Directory already exists {}; skipping rename and removing extraction directory: {}\",\n            out_dir.to_string_lossy(),\n            extract_dir.to_string_lossy()\n        );\n        remove_dir_all(&extract_dir)?;\n    } else {\n        debug!(\n     
       \"Renaming {} to {}\",\n            extract_dir.to_string_lossy(),\n            out_dir.to_string_lossy()\n        );\n        rename(extract_dir, out_dir)?;\n    }\n\n    if lock_file.is_file() {\n        debug!(\"Removing lock file: {}\", lock_file.to_string_lossy());\n        remove_file(lock_file)?;\n    }\n\n    Ok(files)\n}\n\n/// Acquires a lock file in the [out_dir](Path) to prevent multiple processes from extracting the\n/// archive at the same time.\n///\n/// # Errors\n/// * If the lock file cannot be acquired.\n#[instrument(level = \"debug\")]\nfn acquire_lock(out_dir: &Path) -> Result<PathBuf> {\n    let lock_file = out_dir.join(\"postgresql-archive.lock\");\n\n    if lock_file.is_file() {\n        let metadata = lock_file.metadata()?;\n        let created = metadata.created()?;\n\n        if created.elapsed()?.as_secs() > 300 {\n            warn!(\n                \"Stale lock file detected; removing file to attempt process recovery: {}\",\n                lock_file.to_string_lossy()\n            );\n            remove_file(&lock_file)?;\n        }\n    }\n\n    debug!(\n        \"Attempting to acquire lock: {}\",\n        lock_file.to_string_lossy()\n    );\n\n    for _ in 0..30 {\n        let lock = std::fs::OpenOptions::new()\n            .create(true)\n            .truncate(true)\n            .write(true)\n            .open(&lock_file);\n\n        match lock {\n            Ok(_) => {\n                debug!(\"Lock acquired: {}\", lock_file.to_string_lossy());\n                return Ok(lock_file);\n            }\n            Err(error) => {\n                warn!(\"unable to acquire lock: {error}\");\n                sleep(Duration::from_secs(1));\n            }\n        }\n    }\n\n    Err(Unexpected(\"Failed to acquire lock\".to_string()))\n}\n"
  },
  {
    "path": "postgresql_archive/src/configuration/theseus/matcher.rs",
    "content": "use semver::Version;\n\n/// Matcher for PostgreSQL binaries from <https://github.com/theseus-rs/postgresql-binaries>\n///\n/// # Errors\n/// * If the asset matcher fails.\npub fn matcher(_url: &str, name: &str, version: &Version) -> crate::Result<bool> {\n    let target = target_triple::TARGET;\n    let expected_name = format!(\"postgresql-{version}-{target}.tar.gz\");\n    Ok(name == expected_name)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::Result;\n\n    #[test]\n    fn test_asset_match_success() -> Result<()> {\n        let url = \"\";\n        let version = Version::parse(\"16.4.0\")?;\n        let target = target_triple::TARGET;\n        let name = format!(\"postgresql-{version}-{target}.tar.gz\");\n\n        assert!(matcher(url, name.as_str(), &version)?, \"{}\", name);\n        Ok(())\n    }\n\n    #[test]\n    fn test_asset_match_errors() -> Result<()> {\n        let url = \"\";\n        let version = Version::parse(\"16.4.0\")?;\n        let target = target_triple::TARGET;\n        let names = vec![\n            format!(\"foo-{version}-{target}.tar.gz\"),\n            format!(\"postgresql-{target}.tar.gz\"),\n            format!(\"postgresql-{version}.tar.gz\"),\n            format!(\"postgresql-{version}-{target}.tar\"),\n            format!(\"postgresql-{version}-{target}\"),\n        ];\n\n        for name in names {\n            assert!(!matcher(url, name.as_str(), &version)?, \"{}\", name);\n        }\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/configuration/theseus/mod.rs",
    "content": "mod extractor;\nmod matcher;\n\npub const URL: &str = \"https://github.com/theseus-rs/postgresql-binaries\";\n\npub use extractor::extract;\npub use matcher::matcher;\n"
  },
  {
    "path": "postgresql_archive/src/configuration/zonky/extractor.rs",
    "content": "use crate::Error::Unexpected;\nuse crate::Result;\nuse crate::extractor::{ExtractDirectories, tar_xz_extract};\nuse regex_lite::Regex;\nuse std::fs::{create_dir_all, remove_dir_all, remove_file, rename};\nuse std::io::Cursor;\nuse std::path::{Path, PathBuf};\nuse std::thread::sleep;\nuse std::time::Duration;\nuse tracing::{debug, instrument, warn};\nuse zip::ZipArchive;\n\n/// Extracts the compressed tar `bytes` to the [out_dir](Path).\n///\n/// # Errors\n/// Returns an error if the extraction fails.\n#[expect(clippy::case_sensitive_file_extension_comparisons)]\n#[instrument(skip(bytes))]\npub fn extract(bytes: &Vec<u8>, extract_directories: &ExtractDirectories) -> Result<Vec<PathBuf>> {\n    let out_dir = extract_directories.get_path(\".\")?;\n    let parent_dir = if let Some(parent) = out_dir.parent() {\n        parent\n    } else {\n        debug!(\"No parent directory for {}\", out_dir.to_string_lossy());\n        out_dir.as_path()\n    };\n\n    create_dir_all(parent_dir)?;\n\n    let lock_file = acquire_lock(parent_dir)?;\n    // If the directory already exists, then the archive has already been\n    // extracted by another process.\n    if out_dir.exists() {\n        debug!(\n            \"Directory already exists {}; skipping extraction: \",\n            out_dir.to_string_lossy()\n        );\n        remove_file(&lock_file)?;\n        return Ok(Vec::new());\n    }\n\n    let extract_dir = tempfile::tempdir_in(parent_dir)?.keep();\n    debug!(\"Extracting archive to {}\", extract_dir.to_string_lossy());\n\n    let reader = Cursor::new(bytes);\n    let mut archive = ZipArchive::new(reader)?;\n    let mut archive_bytes = Vec::new();\n    for i in 0..archive.len() {\n        let mut file = archive.by_index(i)?;\n        let file_name = file.name().to_string();\n        if file_name.ends_with(\".txz\") {\n            debug!(\"Found archive file: {file_name}\");\n            std::io::copy(&mut file, &mut archive_bytes)?;\n            break;\n      
  }\n    }\n\n    if archive_bytes.is_empty() {\n        return Err(Unexpected(\"Failed to find archive file\".to_string()));\n    }\n\n    let mut archive_extract_directories = ExtractDirectories::default();\n    archive_extract_directories.add_mapping(Regex::new(\".*\")?, extract_dir.clone());\n    let files = tar_xz_extract(&archive_bytes, &archive_extract_directories)?;\n\n    if out_dir.exists() {\n        debug!(\n            \"Directory already exists {}; skipping rename and removing extraction directory: {}\",\n            out_dir.to_string_lossy(),\n            extract_dir.to_string_lossy()\n        );\n        remove_dir_all(&extract_dir)?;\n    } else {\n        debug!(\n            \"Renaming {} to {}\",\n            extract_dir.to_string_lossy(),\n            out_dir.to_string_lossy()\n        );\n        rename(extract_dir, out_dir)?;\n    }\n\n    if lock_file.is_file() {\n        debug!(\"Removing lock file: {}\", lock_file.to_string_lossy());\n        remove_file(lock_file)?;\n    }\n\n    Ok(files)\n}\n\n/// Acquires a lock file in the [out_dir](Path) to prevent multiple processes from extracting the\n/// archive at the same time.\n///\n/// # Errors\n/// * If the lock file cannot be acquired.\n#[instrument(level = \"debug\")]\nfn acquire_lock(out_dir: &Path) -> crate::Result<PathBuf> {\n    let lock_file = out_dir.join(\"postgresql-archive.lock\");\n\n    if lock_file.is_file() {\n        let metadata = lock_file.metadata()?;\n        let created = metadata.created()?;\n\n        if created.elapsed()?.as_secs() > 300 {\n            warn!(\n                \"Stale lock file detected; removing file to attempt process recovery: {}\",\n                lock_file.to_string_lossy()\n            );\n            remove_file(&lock_file)?;\n        }\n    }\n\n    debug!(\n        \"Attempting to acquire lock: {}\",\n        lock_file.to_string_lossy()\n    );\n\n    for _ in 0..30 {\n        let lock = std::fs::OpenOptions::new()\n            
.create(true)\n            .truncate(true)\n            .write(true)\n            .open(&lock_file);\n\n        match lock {\n            Ok(_) => {\n                debug!(\"Lock acquired: {}\", lock_file.to_string_lossy());\n                return Ok(lock_file);\n            }\n            Err(error) => {\n                warn!(\"unable to acquire lock: {error}\");\n                sleep(Duration::from_secs(1));\n            }\n        }\n    }\n\n    Err(Unexpected(\"Failed to acquire lock\".to_string()))\n}\n"
  },
  {
    "path": "postgresql_archive/src/configuration/zonky/matcher.rs",
    "content": "use crate::Result;\nuse semver::Version;\nuse std::env;\n\n/// Matcher for PostgreSQL binaries from <https://github.com/zonkyio/embedded-postgres-binaries>\n///\n/// # Errors\n/// * If the asset matcher fails.\npub fn matcher(_url: &str, name: &str, version: &Version) -> Result<bool> {\n    let os = get_os();\n    let arch = get_arch();\n    let expected_name = format!(\"embedded-postgres-binaries-{os}-{arch}-{version}.jar\");\n    Ok(name == expected_name)\n}\n\n/// Returns the operating system of the current system.\npub(crate) fn get_os() -> &'static str {\n    match env::consts::OS {\n        \"macos\" => \"darwin\",\n        os => os,\n    }\n}\n\n/// Returns the architecture of the current system.\npub(crate) fn get_arch() -> &'static str {\n    match env::consts::ARCH {\n        \"arm\" => \"arm32v7\",\n        \"x86_64\" => \"amd64\",\n        \"aarch64\" => \"arm64v8\",\n        \"powerpc64\" => \"ppc64le\",\n        \"x86\" => \"i386\",\n        arch => arch,\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::Result;\n\n    #[test]\n    fn test_asset_match_success() -> Result<()> {\n        let url = \"\";\n        let os = get_os();\n        let arch = get_arch();\n        let version = Version::parse(\"16.4.0\")?;\n        let name = format!(\"embedded-postgres-binaries-{os}-{arch}-{version}.jar\");\n\n        assert!(matcher(url, name.as_str(), &version)?, \"{}\", name);\n        Ok(())\n    }\n\n    #[test]\n    fn test_asset_match_errors() -> Result<()> {\n        let url = \"\";\n        let os = get_os();\n        let arch = get_arch();\n        let version = Version::parse(\"16.4.0\")?;\n        let names = vec![\n            format!(\"foo-{os}-{arch}-{version}.jar\"),\n            format!(\"embedded-postgres-binaries-{arch}-{version}.jar\"),\n            format!(\"embedded-postgres-binaries-{os}-{version}.jar\"),\n            format!(\"embedded-postgres-binaries-{os}-{arch}.jar\"),\n            
format!(\"embedded-postgres-binaries-{os}-{arch}-{version}.zip\"),\n        ];\n\n        for name in names {\n            assert!(!matcher(url, name.as_str(), &version)?, \"{}\", name);\n        }\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/configuration/zonky/mod.rs",
    "content": "mod extractor;\nmod matcher;\nmod repository;\n\npub const URL: &str = \"https://github.com/zonkyio/embedded-postgres-binaries\";\n\npub use extractor::extract;\npub use matcher::matcher;\npub use repository::Zonky;\n"
  },
  {
    "path": "postgresql_archive/src/configuration/zonky/repository.rs",
    "content": "use crate::Result;\nuse crate::configuration::zonky::matcher::{get_arch, get_os};\nuse crate::repository::Archive;\nuse crate::repository::maven::repository::Maven;\nuse crate::repository::model::Repository;\nuse async_trait::async_trait;\nuse semver::{Version, VersionReq};\nuse tracing::instrument;\n\n/// Zonky repository.\n///\n/// This repository is used to interact with Zonky Maven repositories\n/// (e.g. <https://repo1.maven.org/maven2/io/zonky/test/postgres\">).\n#[derive(Debug)]\npub struct Zonky {\n    maven: Box<dyn Repository>,\n}\n\nconst MAVEN_URL: &str = \"https://repo1.maven.org/maven2/io/zonky/test/postgres\";\n\nimpl Zonky {\n    /// Creates a new Zonky repository from the specified URL in the format\n    /// <https://github.com/zonkyio/embedded-postgres-binaries>\n    ///\n    /// # Errors\n    /// * If the URL is invalid.\n    #[expect(clippy::new_ret_no_self)]\n    pub fn new(_url: &str) -> Result<Box<dyn Repository>> {\n        let os = get_os();\n        let arch = get_arch();\n        let archive = format!(\"embedded-postgres-binaries-{os}-{arch}\");\n        let url = format!(\"{MAVEN_URL}/{archive}\");\n        let maven = Maven::new(url.as_str())?;\n        Ok(Box::new(Zonky { maven }))\n    }\n}\n\n#[async_trait]\nimpl Repository for Zonky {\n    #[instrument(level = \"debug\")]\n    fn name(&self) -> &str {\n        \"Zonky\"\n    }\n\n    #[instrument(level = \"debug\")]\n    async fn get_version(&self, version_req: &VersionReq) -> Result<Version> {\n        self.maven.get_version(version_req).await\n    }\n\n    #[instrument]\n    async fn get_archive(&self, version_req: &VersionReq) -> Result<Archive> {\n        self.maven.get_archive(version_req).await\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::configuration::zonky;\n\n    #[test]\n    fn test_name() {\n        let zonky = Zonky::new(zonky::URL).unwrap();\n        assert_eq!(\"Zonky\", zonky.name());\n    }\n\n    //\n    // get_version 
tests\n    //\n\n    #[tokio::test]\n    async fn test_get_version() -> Result<()> {\n        let maven = Zonky::new(zonky::URL)?;\n        let version_req = VersionReq::STAR;\n        let version = maven.get_version(&version_req).await?;\n        assert!(version > Version::new(0, 0, 0));\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_specific_version() -> Result<()> {\n        let zonky = Zonky::new(zonky::URL)?;\n        let version_req = VersionReq::parse(\"=16.2.0\")?;\n        let version = zonky.get_version(&version_req).await?;\n        assert_eq!(Version::new(16, 2, 0), version);\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_specific_not_found() -> Result<()> {\n        let zonky = Zonky::new(zonky::URL)?;\n        let version_req = VersionReq::parse(\"=0.0.0\")?;\n        let error = zonky.get_version(&version_req).await.unwrap_err();\n        assert_eq!(\"version not found for '=0.0.0'\", error.to_string());\n        Ok(())\n    }\n\n    //\n    // get_archive tests\n    //\n\n    #[tokio::test]\n    async fn test_get_archive() -> Result<()> {\n        let zonky = Zonky::new(zonky::URL)?;\n        let os = get_os();\n        let arch = get_arch();\n        let version = Version::new(16, 2, 0);\n        let version_req = VersionReq::parse(format!(\"={version}\").as_str())?;\n        let archive = zonky.get_archive(&version_req).await?;\n        assert_eq!(\n            format!(\"embedded-postgres-binaries-{os}-{arch}-{version}.jar\"),\n            archive.name()\n        );\n        assert_eq!(&version, archive.version());\n        assert!(!archive.bytes().is_empty());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/error.rs",
    "content": "use std::sync::PoisonError;\n\n/// PostgreSQL archive result type\npub type Result<T, E = Error> = core::result::Result<T, E>;\n\n/// PostgreSQL archive errors\n#[derive(Debug, thiserror::Error)]\npub enum Error {\n    /// Asset not found\n    #[error(\"asset not found\")]\n    AssetNotFound,\n    /// Asset hash not found\n    #[error(\"asset hash not found for asset '{0}'\")]\n    AssetHashNotFound(String),\n    /// Error when the hash of the archive does not match the expected hash\n    #[error(\"Archive hash [{archive_hash}] does not match expected hash [{hash}]\")]\n    ArchiveHashMismatch { archive_hash: String, hash: String },\n    /// Invalid version\n    #[error(\"version '{0}' is invalid\")]\n    InvalidVersion(String),\n    /// IO error\n    #[error(\"{0}\")]\n    IoError(String),\n    /// Parse error\n    #[error(\"{0}\")]\n    ParseError(String),\n    /// Poisoned lock\n    #[error(\"poisoned lock '{0}'\")]\n    PoisonedLock(String),\n    /// Repository failure\n    #[error(\"{0}\")]\n    RepositoryFailure(String),\n    /// Unexpected error\n    #[error(\"{0}\")]\n    Unexpected(String),\n    /// Unsupported extractor\n    #[error(\"unsupported extractor for '{0}'\")]\n    UnsupportedExtractor(String),\n    /// Unsupported hasher\n    #[error(\"unsupported hasher for '{0}'\")]\n    UnsupportedHasher(String),\n    /// Unsupported hasher\n    #[error(\"unsupported matcher for '{0}'\")]\n    UnsupportedMatcher(String),\n    /// Unsupported repository\n    #[error(\"unsupported repository for '{0}'\")]\n    UnsupportedRepository(String),\n    /// Version not found\n    #[error(\"version not found for '{0}'\")]\n    VersionNotFound(String),\n}\n\n/// Converts a [`regex_lite::Error`] into an [`ParseError`](Error::ParseError)\nimpl From<regex_lite::Error> for Error {\n    fn from(error: regex_lite::Error) -> Self {\n        Error::ParseError(error.to_string())\n    }\n}\n\n/// Converts a [`reqwest::Error`] into an 
[`IoError`](Error::IoError)\nimpl From<reqwest::Error> for Error {\n    fn from(error: reqwest::Error) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n/// Converts a [`reqwest_middleware::Error`] into an [`IoError`](Error::IoError)\nimpl From<reqwest_middleware::Error> for Error {\n    fn from(error: reqwest_middleware::Error) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n/// Converts a [`std::io::Error`] into an [`IoError`](Error::IoError)\nimpl From<std::io::Error> for Error {\n    fn from(error: std::io::Error) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n/// Converts a [`std::time::SystemTimeError`] into an [`IoError`](Error::IoError)\nimpl From<std::time::SystemTimeError> for Error {\n    fn from(error: std::time::SystemTimeError) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n/// Converts a [`std::num::ParseIntError`] into a [`ParseError`](Error::ParseError)\nimpl From<std::num::ParseIntError> for Error {\n    fn from(error: std::num::ParseIntError) -> Self {\n        Error::ParseError(error.to_string())\n    }\n}\n\n/// Converts a [`semver::Error`] into a [`ParseError`](Error::ParseError)\nimpl From<semver::Error> for Error {\n    fn from(error: semver::Error) -> Self {\n        Error::ParseError(error.to_string())\n    }\n}\n\n/// Converts a [`std::path::StripPrefixError`] into a [`ParseError`](Error::ParseError)\nimpl From<std::path::StripPrefixError> for Error {\n    fn from(error: std::path::StripPrefixError) -> Self {\n        Error::ParseError(error.to_string())\n    }\n}\n\n/// Converts a [`url::ParseError`] into a [`ParseError`](Error::ParseError)\nimpl From<url::ParseError> for Error {\n    fn from(error: url::ParseError) -> Self {\n        Error::ParseError(error.to_string())\n    }\n}\n\n#[cfg(feature = \"maven\")]\n/// Converts a [`quick_xml::DeError`] into a [`ParseError`](Error::ParseError)\nimpl From<quick_xml::DeError> for Error {\n    fn from(error: 
quick_xml::DeError) -> Self {\n        Error::ParseError(error.to_string())\n    }\n}\n\n#[cfg(feature = \"zip\")]\n/// Converts a [`zip::result::ZipError`] into an [`Unexpected`](Error::Unexpected)\nimpl From<zip::result::ZipError> for Error {\n    fn from(error: zip::result::ZipError) -> Self {\n        Error::Unexpected(error.to_string())\n    }\n}\n\n/// Converts a [`std::sync::PoisonError<T>`] into a [`PoisonedLock`](Error::PoisonedLock)\nimpl<T> From<PoisonError<T>> for Error {\n    fn from(value: PoisonError<T>) -> Self {\n        Error::PoisonedLock(value.to_string())\n    }\n}\n\n/// These are relatively low value tests; they are here to reduce the coverage gap and\n/// ensure that the error conversions are working as expected.\n#[cfg(test)]\nmod test {\n    use super::*;\n    use anyhow::anyhow;\n    use semver::VersionReq;\n    use std::ops::Add;\n    use std::path::PathBuf;\n    use std::str::FromStr;\n    use std::time::{Duration, SystemTime};\n\n    #[test]\n    fn test_from_regex_error() {\n        let regex_error = regex_lite::Regex::new(\"(?=a)\").expect_err(\"regex error\");\n        let error = Error::from(regex_error);\n        assert_eq!(error.to_string(), \"look-around is not supported\");\n    }\n\n    #[tokio::test]\n    async fn test_from_reqwest_error() {\n        let result = reqwest::get(\"https://a.com\").await;\n        assert!(result.is_err());\n        if let Err(error) = result {\n            let error = Error::from(error);\n            assert!(error.to_string().contains(\"error sending request\"));\n        }\n    }\n\n    #[tokio::test]\n    async fn test_from_reqwest_middleware_error() {\n        let reqwest_middleware_error =\n            reqwest_middleware::Error::Middleware(anyhow!(\"middleware error: test\"));\n        let error = Error::from(reqwest_middleware_error);\n        assert!(error.to_string().contains(\"middleware error: test\"));\n    }\n\n    #[test]\n    fn test_from_io_error() {\n        let io_error = 
std::io::Error::new(std::io::ErrorKind::NotFound, \"test\");\n        let error = Error::from(io_error);\n        assert_eq!(error.to_string(), \"test\");\n    }\n\n    #[test]\n    fn test_from_parse_int_error() {\n        let parse_int_error = u64::from_str(\"test\").expect_err(\"parse int error\");\n        let error = Error::from(parse_int_error);\n        assert_eq!(error.to_string(), \"invalid digit found in string\");\n    }\n\n    #[test]\n    fn test_from_semver_error() {\n        let semver_error = VersionReq::parse(\"foo\").expect_err(\"semver error\");\n        let error = Error::from(semver_error);\n        assert_eq!(\n            error.to_string(),\n            \"unexpected character 'f' while parsing major version number\"\n        );\n    }\n\n    #[test]\n    fn test_from_strip_prefix_error() {\n        let path = PathBuf::from(\"test\");\n        let strip_prefix_error = path.strip_prefix(\"foo\").expect_err(\"strip prefix error\");\n        let error = Error::from(strip_prefix_error);\n        assert_eq!(error.to_string(), \"prefix not found\");\n    }\n\n    #[test]\n    fn test_from_system_time_error() {\n        let future_time = SystemTime::now().add(Duration::from_secs(300));\n        let system_time_error = SystemTime::now()\n            .duration_since(future_time)\n            .expect_err(\"system time error\");\n        let error = Error::from(system_time_error);\n        assert_eq!(\n            error.to_string(),\n            \"second time provided was later than self\"\n        );\n    }\n\n    #[test]\n    fn test_from_url_parse_error() {\n        let parse_error = url::ParseError::EmptyHost;\n        let error = Error::from(parse_error);\n        assert_eq!(error.to_string(), \"empty host\");\n    }\n\n    #[cfg(feature = \"maven\")]\n    #[test]\n    fn test_from_quick_xml_error() {\n        let xml = \"<invalid>\";\n        let quick_xml_error = quick_xml::de::from_str::<String>(xml).expect_err(\"quick_xml error\");\n        let 
error = Error::from(quick_xml_error);\n        assert!(matches!(error, Error::ParseError(_)));\n    }\n\n    #[cfg(feature = \"zip\")]\n    #[test]\n    fn test_from_zip_error() {\n        let zip_error = zip::result::ZipError::FileNotFound;\n        let error = Error::from(zip_error);\n        assert!(matches!(error, Error::Unexpected(_)));\n        assert!(\n            error\n                .to_string()\n                .contains(\"specified file not found in archive\")\n        );\n    }\n\n    #[test]\n    fn test_from_poisoned_lock() {\n        let error = Error::from(std::sync::PoisonError::new(()));\n        assert!(matches!(error, Error::PoisonedLock(_)));\n        assert!(error.to_string().contains(\"poisoned lock\"));\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/extractor/mod.rs",
    "content": "mod model;\npub mod registry;\n#[cfg(feature = \"tar-gz\")]\nmod tar_gz_extractor;\n#[cfg(feature = \"tar-xz\")]\nmod tar_xz_extractor;\n#[cfg(feature = \"zip\")]\nmod zip_extractor;\n\npub use model::ExtractDirectories;\n#[cfg(feature = \"tar-gz\")]\npub use tar_gz_extractor::extract as tar_gz_extract;\n#[cfg(feature = \"tar-xz\")]\npub use tar_xz_extractor::extract as tar_xz_extract;\n#[cfg(feature = \"zip\")]\npub use zip_extractor::extract as zip_extract;\n"
  },
  {
    "path": "postgresql_archive/src/extractor/model.rs",
    "content": "use crate::{Error, Result};\nuse regex_lite::Regex;\nuse std::fmt::Display;\nuse std::path::PathBuf;\n\n/// Extract directories manage the directories to extract a file in an archive to based upon the\n/// associated regex matching the file path.\n#[derive(Debug)]\npub struct ExtractDirectories {\n    mappings: Vec<(Regex, PathBuf)>,\n}\n\nimpl ExtractDirectories {\n    /// Creates a new ExtractDirectories instance.\n    #[must_use]\n    pub fn new(mappings: Vec<(Regex, PathBuf)>) -> Self {\n        Self { mappings }\n    }\n\n    /// Adds a new mapping to the ExtractDirectories instance.\n    pub fn add_mapping(&mut self, regex: Regex, path: PathBuf) {\n        self.mappings.push((regex, path));\n    }\n\n    /// Returns the path associated with the first regex that matches the file path.\n    /// If no regex matches, then the file path is returned.\n    ///\n    /// # Errors\n    /// Returns an error if the file path cannot be converted to a string.\n    pub fn get_path(&self, file_path: &str) -> Result<PathBuf> {\n        for (regex, path) in &self.mappings {\n            if regex.is_match(file_path) {\n                return Ok(path.clone());\n            }\n        }\n        Err(Error::Unexpected(format!(\n            \"No regex matched the file path: {file_path}\"\n        )))\n    }\n}\n\n/// Default implementation for ExtractDirectories.\nimpl Default for ExtractDirectories {\n    /// Creates a new ExtractDirectories instance with an empty mappings vector.\n    fn default() -> Self {\n        ExtractDirectories::new(Vec::new())\n    }\n}\n\n/// Display implementation for ExtractDirectories.\nimpl Display for ExtractDirectories {\n    /// Formats the ExtractDirectories instance.\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        for (regex, path) in &self.mappings {\n            writeln!(f, \"{} -> {}\", regex, path.display())?;\n        }\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use 
super::*;\n\n    #[test]\n    fn test_new() -> Result<()> {\n        let mappings = vec![(Regex::new(\".*\")?, PathBuf::from(\"test\"))];\n        let extract_directories = ExtractDirectories::new(mappings);\n        let path = extract_directories.get_path(\"foo\")?;\n        assert_eq!(\"test\", path.to_string_lossy());\n        Ok(())\n    }\n\n    #[test]\n    fn test_default() {\n        let extract_directories = ExtractDirectories::default();\n        let result = extract_directories.get_path(\"foo\");\n        assert!(result.is_err());\n    }\n\n    #[test]\n    fn test_add_mapping() -> Result<()> {\n        let mut extract_directories = ExtractDirectories::default();\n        extract_directories.add_mapping(Regex::new(\".*\")?, PathBuf::from(\"test\"));\n        let path = extract_directories.get_path(\"foo\")?;\n        assert_eq!(\"test\", path.to_string_lossy());\n        Ok(())\n    }\n\n    #[test]\n    fn test_get_path() -> Result<()> {\n        let mappings = vec![\n            (Regex::new(\"test\")?, PathBuf::from(\"test\")),\n            (Regex::new(\"foo\")?, PathBuf::from(\"bar\")),\n        ];\n        let extract_directories = ExtractDirectories::new(mappings);\n        let path = extract_directories.get_path(\"foo\")?;\n        assert_eq!(\"bar\", path.to_string_lossy());\n        Ok(())\n    }\n\n    #[test]\n    fn test_display() -> Result<()> {\n        let mappings = vec![\n            (Regex::new(\"test\")?, PathBuf::from(\"test\")),\n            (Regex::new(\"foo\")?, PathBuf::from(\"bar\")),\n        ];\n        let extract_directories = ExtractDirectories::new(mappings);\n        let display = extract_directories.to_string();\n        assert_eq!(\"test -> test\\nfoo -> bar\\n\", display);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/extractor/registry.rs",
    "content": "use crate::Error::UnsupportedExtractor;\nuse crate::Result;\n#[cfg(feature = \"theseus\")]\nuse crate::configuration::theseus;\n#[cfg(feature = \"zonky\")]\nuse crate::configuration::zonky;\nuse crate::extractor::ExtractDirectories;\nuse std::path::PathBuf;\nuse std::sync::{Arc, LazyLock, Mutex, RwLock};\n\nstatic REGISTRY: LazyLock<Arc<Mutex<RepositoryRegistry>>> =\n    LazyLock::new(|| Arc::new(Mutex::new(RepositoryRegistry::default())));\n\ntype SupportsFn = fn(&str) -> Result<bool>;\ntype ExtractFn = fn(&Vec<u8>, &ExtractDirectories) -> Result<Vec<PathBuf>>;\n\n/// Singleton struct to store extractors\n#[expect(clippy::type_complexity)]\nstruct RepositoryRegistry {\n    extractors: Vec<(Arc<RwLock<SupportsFn>>, Arc<RwLock<ExtractFn>>)>,\n}\n\nimpl RepositoryRegistry {\n    /// Creates a new extractor registry.\n    fn new() -> Self {\n        Self {\n            extractors: Vec::new(),\n        }\n    }\n\n    /// Registers an extractor. Newly registered extractors take precedence over existing ones.\n    fn register(&mut self, supports_fn: SupportsFn, extract_fn: ExtractFn) {\n        self.extractors.insert(\n            0,\n            (\n                Arc::new(RwLock::new(supports_fn)),\n                Arc::new(RwLock::new(extract_fn)),\n            ),\n        );\n    }\n\n    /// Gets an extractor that supports the specified URL\n    ///\n    /// # Errors\n    /// * If the URL is not supported.\n    fn get(&self, url: &str) -> Result<ExtractFn> {\n        for (supports_fn, extractor_fn) in &self.extractors {\n            let supports_function = supports_fn.read()?;\n\n            if supports_function(url)? 
{\n                let extractor_function = extractor_fn.read()?;\n                return Ok(*extractor_function);\n            }\n        }\n\n        Err(UnsupportedExtractor(url.to_string()))\n    }\n}\n\nimpl Default for RepositoryRegistry {\n    /// Creates a new repository registry with the default repositories registered.\n    fn default() -> Self {\n        let mut registry = Self::new();\n        #[cfg(feature = \"theseus\")]\n        registry.register(|url| Ok(url.starts_with(theseus::URL)), theseus::extract);\n        #[cfg(feature = \"zonky\")]\n        registry.register(|url| Ok(url.starts_with(zonky::URL)), zonky::extract);\n        registry\n    }\n}\n\n/// Registers an extractor. Newly registered extractors take precedence over existing ones.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn register(supports_fn: SupportsFn, extractor_fn: ExtractFn) -> Result<()> {\n    REGISTRY.lock()?.register(supports_fn, extractor_fn);\n    Ok(())\n}\n\n/// Gets an extractor that supports the specified URL\n///\n/// # Errors\n/// * If the URL is not supported.\npub fn get(url: &str) -> Result<ExtractFn> {\n    REGISTRY.lock()?.get(url)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use regex_lite::Regex;\n\n    #[test]\n    fn test_register() -> Result<()> {\n        register(|url| Ok(url == \"https://foo.com\"), |_, _| Ok(Vec::new()))?;\n        let url = \"https://foo.com\";\n        let extractor = get(url)?;\n        let mut extract_directories = ExtractDirectories::default();\n        extract_directories.add_mapping(Regex::new(\".*\")?, PathBuf::from(\"test\"));\n        assert!(extractor(&Vec::new(), &extract_directories).is_ok());\n        Ok(())\n    }\n\n    #[test]\n    fn test_get_error() {\n        let error = get(\"foo\").unwrap_err();\n        assert_eq!(\"unsupported extractor for 'foo'\", error.to_string());\n    }\n\n    #[test]\n    #[cfg(feature = \"theseus\")]\n    fn test_get_theseus_postgresql_binaries() {\n        
assert!(get(theseus::URL).is_ok());\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/extractor/tar_gz_extractor.rs",
    "content": "use crate::Error::Unexpected;\nuse crate::Result;\nuse crate::extractor::ExtractDirectories;\nuse flate2::bufread::GzDecoder;\nuse std::fs::{File, create_dir_all};\nuse std::io::{BufReader, Cursor, copy};\nuse std::path::PathBuf;\nuse tar::Archive;\nuse tracing::{debug, instrument, warn};\n\n/// Extracts the compressed tar `bytes` to paths defined in `extract_directories`.\n///\n/// # Errors\n/// Returns an error if the extraction fails.\n#[instrument(skip(bytes))]\npub fn extract(bytes: &Vec<u8>, extract_directories: &ExtractDirectories) -> Result<Vec<PathBuf>> {\n    let mut files = Vec::new();\n    let input = BufReader::new(Cursor::new(bytes));\n    let decoder = GzDecoder::new(input);\n    let mut archive = Archive::new(decoder);\n    let mut extracted_bytes = 0;\n\n    for archive_entry in archive.entries()? {\n        let mut entry = archive_entry?;\n        let entry_header = entry.header();\n        let entry_type = entry_header.entry_type();\n        let entry_size = entry_header.size()?;\n        #[cfg(unix)]\n        let file_mode = entry_header.mode()?;\n\n        let entry_header_path = entry_header.path()?.to_path_buf();\n        let prefix = match entry_header_path.components().next() {\n            Some(component) => component.as_os_str().to_str().unwrap_or_default(),\n            None => {\n                return Err(Unexpected(\n                    \"Failed to get file header path prefix\".to_string(),\n                ));\n            }\n        };\n        let stripped_entry_header_path = entry_header_path.strip_prefix(prefix)?.to_path_buf();\n        let Ok(extract_dir) = extract_directories.get_path(prefix) else {\n            continue;\n        };\n        let mut entry_name = extract_dir.clone();\n        entry_name.push(stripped_entry_header_path);\n\n        if entry_type.is_dir() || entry_name.is_dir() {\n            create_dir_all(&entry_name)?;\n        } else if entry_type.is_file() {\n            let mut output_file = 
File::create(&entry_name)?;\n            copy(&mut entry, &mut output_file)?;\n            extracted_bytes += entry_size;\n\n            #[cfg(unix)]\n            {\n                use std::os::unix::fs::PermissionsExt;\n                output_file.set_permissions(std::fs::Permissions::from_mode(file_mode))?;\n            }\n            files.push(entry_name);\n        } else if entry_type.is_symlink() {\n            #[cfg(unix)]\n            if let Some(symlink_target) = entry.link_name()? {\n                let symlink_path = entry_name.clone();\n                std::os::unix::fs::symlink(symlink_target.as_ref(), symlink_path)?;\n                files.push(entry_name);\n            }\n        }\n    }\n\n    let number_of_files = files.len();\n    debug!(\"Extracted {number_of_files} files totalling {extracted_bytes} bytes\");\n\n    Ok(files)\n}\n"
  },
  {
    "path": "postgresql_archive/src/extractor/tar_xz_extractor.rs",
    "content": "use crate::Error::Unexpected;\nuse crate::Result;\nuse crate::extractor::ExtractDirectories;\nuse liblzma::bufread::XzDecoder;\nuse std::fs::{File, create_dir_all};\nuse std::io::{BufReader, Cursor, copy};\nuse std::path::PathBuf;\nuse tar::Archive;\nuse tracing::{debug, instrument, warn};\n\n/// Extracts the compressed tar `bytes` to paths defined in `extract_directories`.\n///\n/// # Errors\n/// Returns an error if the extraction fails.\n#[instrument(skip(bytes))]\npub fn extract(bytes: &Vec<u8>, extract_directories: &ExtractDirectories) -> Result<Vec<PathBuf>> {\n    let mut files = Vec::new();\n    let input = BufReader::new(Cursor::new(bytes));\n    let decoder = XzDecoder::new(input);\n    let mut archive = Archive::new(decoder);\n    let mut extracted_bytes = 0;\n\n    for archive_entry in archive.entries()? {\n        let mut entry = archive_entry?;\n        let entry_header = entry.header();\n        let entry_type = entry_header.entry_type();\n        let entry_size = entry_header.size()?;\n        #[cfg(unix)]\n        let file_mode = entry_header.mode()?;\n\n        let entry_header_path = entry_header.path()?.to_path_buf();\n        let prefix = match entry_header_path.components().next() {\n            Some(component) => component.as_os_str().to_str().unwrap_or_default(),\n            None => {\n                return Err(Unexpected(\n                    \"Failed to get file header path prefix\".to_string(),\n                ));\n            }\n        };\n        let Ok(extract_dir) = extract_directories.get_path(prefix) else {\n            continue;\n        };\n        let mut entry_name = extract_dir.clone();\n        entry_name.push(entry_header_path);\n\n        if entry_type.is_dir() || entry_name.is_dir() {\n            create_dir_all(&entry_name)?;\n        } else if entry_type.is_file() {\n            if let Some(parent) = entry_name.parent() {\n                create_dir_all(parent)?;\n            }\n            let mut 
output_file = File::create(&entry_name)?;\n            copy(&mut entry, &mut output_file)?;\n            extracted_bytes += entry_size;\n\n            #[cfg(unix)]\n            {\n                use std::os::unix::fs::PermissionsExt;\n                output_file.set_permissions(std::fs::Permissions::from_mode(file_mode))?;\n            }\n            files.push(entry_name);\n        } else if entry_type.is_symlink() {\n            #[cfg(unix)]\n            if let Some(symlink_target) = entry.link_name()? {\n                let symlink_path = entry_name.clone();\n                std::os::unix::fs::symlink(symlink_target.as_ref(), symlink_path)?;\n                files.push(entry_name);\n            }\n        }\n    }\n\n    let number_of_files = files.len();\n    debug!(\"Extracted {number_of_files} files totalling {extracted_bytes} bytes\");\n\n    Ok(files)\n}\n"
  },
  {
    "path": "postgresql_archive/src/extractor/zip_extractor.rs",
    "content": "use crate::Result;\nuse crate::extractor::ExtractDirectories;\nuse std::fs::create_dir_all;\nuse std::io::Cursor;\nuse std::path::PathBuf;\nuse std::{fs, io};\nuse tracing::{debug, instrument, warn};\nuse zip::ZipArchive;\n\n/// Extracts the compressed tar `bytes` to paths defined in `extract_directories`.\n///\n/// # Errors\n/// Returns an error if the extraction fails.\n#[instrument(skip(bytes))]\npub fn extract(bytes: &Vec<u8>, extract_directories: &ExtractDirectories) -> Result<Vec<PathBuf>> {\n    let mut files = Vec::new();\n    let reader = Cursor::new(bytes);\n    let mut archive = ZipArchive::new(reader)?;\n    let mut extracted_bytes = 0;\n\n    for i in 0..archive.len() {\n        let mut file = archive.by_index(i)?;\n        let file_path = PathBuf::from(file.name());\n        let file_path = PathBuf::from(file_path.file_name().unwrap_or_default());\n        let file_name = file_path.to_string_lossy();\n\n        let Ok(extract_dir) = extract_directories.get_path(&file_name) else {\n            continue;\n        };\n        create_dir_all(&extract_dir)?;\n\n        let mut out = Vec::new();\n        io::copy(&mut file, &mut out)?;\n        extracted_bytes += out.len() as u64;\n        let path = PathBuf::from(&extract_dir).join(file_path);\n        fs::write(&path, out)?;\n        files.push(path);\n    }\n\n    let number_of_files = files.len();\n    debug!(\"Extracted {number_of_files} files totalling {extracted_bytes}\");\n\n    Ok(files)\n}\n"
  },
  {
    "path": "postgresql_archive/src/hasher/md5.rs",
    "content": "use crate::Result;\nuse md5::{Digest, Md5};\n\n/// Hashes the data using MD5.\n///\n/// # Errors\n/// * If the data cannot be hashed.\npub fn hash(data: &Vec<u8>) -> Result<String> {\n    let mut hasher = Md5::new();\n    hasher.update(data);\n    let hash = hex::encode(hasher.finalize());\n    Ok(hash)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_hash() -> Result<()> {\n        let data = vec![4, 2];\n        let hash = hash(&data)?;\n        assert_eq!(\"21fb3d1d1a91a7e80dff456205f3380b\", hash);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/hasher/mod.rs",
    "content": "#[cfg(feature = \"md5\")]\npub mod md5;\npub mod registry;\n#[cfg(feature = \"sha1\")]\npub mod sha1;\n#[cfg(feature = \"sha2\")]\npub mod sha2_256;\n#[cfg(feature = \"sha2\")]\npub mod sha2_512;\n"
  },
  {
    "path": "postgresql_archive/src/hasher/registry.rs",
    "content": "use crate::Error::UnsupportedHasher;\nuse crate::Result;\n#[cfg(feature = \"theseus\")]\nuse crate::configuration::theseus;\n#[cfg(feature = \"md5\")]\nuse crate::hasher::md5;\n#[cfg(feature = \"sha1\")]\nuse crate::hasher::sha1;\n#[cfg(feature = \"sha2\")]\nuse crate::hasher::sha2_256;\n#[cfg(all(feature = \"sha2\", feature = \"maven\"))]\nuse crate::hasher::sha2_512;\n#[cfg(feature = \"maven\")]\nuse crate::repository::maven;\nuse std::sync::{Arc, LazyLock, Mutex, RwLock};\n\nstatic REGISTRY: LazyLock<Arc<Mutex<HasherRegistry>>> =\n    LazyLock::new(|| Arc::new(Mutex::new(HasherRegistry::default())));\n\npub type SupportsFn = fn(&str, &str) -> Result<bool>;\npub type HasherFn = fn(&Vec<u8>) -> Result<String>;\n\n/// Singleton struct to store hashers\n#[expect(clippy::type_complexity)]\nstruct HasherRegistry {\n    hashers: Vec<(Arc<RwLock<SupportsFn>>, Arc<RwLock<HasherFn>>)>,\n}\n\nimpl HasherRegistry {\n    /// Creates a new hasher registry.\n    fn new() -> Self {\n        Self {\n            hashers: Vec::new(),\n        }\n    }\n\n    /// Registers a hasher for a supports function. Newly registered hashers will take precedence\n    /// over existing ones.\n    fn register(&mut self, supports_fn: SupportsFn, hasher_fn: HasherFn) {\n        self.hashers.insert(\n            0,\n            (\n                Arc::new(RwLock::new(supports_fn)),\n                Arc::new(RwLock::new(hasher_fn)),\n            ),\n        );\n    }\n\n    /// Get a hasher for the specified url and extension.\n    ///\n    /// # Errors\n    /// * If the registry is poisoned.\n    fn get<S: AsRef<str>>(&self, url: S, extension: S) -> Result<HasherFn> {\n        let url = url.as_ref();\n        let extension = extension.as_ref();\n        for (supports_fn, hasher_fn) in &self.hashers {\n            let supports_function = supports_fn.read()?;\n            if supports_function(url, extension)? 
{\n                let hasher_function = hasher_fn.read()?;\n                return Ok(*hasher_function);\n            }\n        }\n\n        Err(UnsupportedHasher(url.to_string()))\n    }\n}\n\nimpl Default for HasherRegistry {\n    /// Creates a new hasher registry with the default hashers registered.\n    fn default() -> Self {\n        let mut registry = Self::new();\n        #[cfg(feature = \"theseus\")]\n        registry.register(\n            |url, extension| Ok(url.starts_with(theseus::URL) && extension == \"sha256\"),\n            sha2_256::hash,\n        );\n        // Register the Maven hashers: https://maven.apache.org/resolver/about-checksums.html#implemented-checksum-algorithms\n        #[cfg(feature = \"maven\")]\n        registry.register(\n            |url, extension| Ok(url.starts_with(maven::URL) && extension == \"md5\"),\n            md5::hash,\n        );\n        #[cfg(feature = \"maven\")]\n        registry.register(\n            |url, extension| Ok(url.starts_with(maven::URL) && extension == \"sha1\"),\n            sha1::hash,\n        );\n        #[cfg(feature = \"maven\")]\n        registry.register(\n            |url, extension| Ok(url.starts_with(maven::URL) && extension == \"sha256\"),\n            sha2_256::hash,\n        );\n        #[cfg(feature = \"maven\")]\n        registry.register(\n            |url, extension| Ok(url.starts_with(maven::URL) && extension == \"sha512\"),\n            sha2_512::hash,\n        );\n        registry\n    }\n}\n\n/// Registers a hasher for a supports function. 
Newly registered hashers will take precedence\n/// over existing ones.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn register(supports_fn: SupportsFn, hasher_fn: HasherFn) -> Result<()> {\n    REGISTRY.lock()?.register(supports_fn, hasher_fn);\n    Ok(())\n}\n\n/// Gets a hasher for the specified URL and extension.\n///\n/// # Errors\n/// * If the registry is poisoned or no registered hasher supports the URL and extension.\npub fn get<S: AsRef<str>>(url: S, extension: S) -> Result<HasherFn> {\n    REGISTRY.lock()?.get(url, extension)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    fn test_hasher(extension: &str, expected: &str) -> Result<()> {\n        let hasher = get(\"https://foo.com\", extension)?;\n        let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 0];\n        let hash = hasher(&data)?;\n        assert_eq!(expected, hash);\n        Ok(())\n    }\n\n    #[test]\n    fn test_register() -> Result<()> {\n        register(\n            |_, extension| Ok(extension == \"test\"),\n            |_| Ok(\"42\".to_string()),\n        )?;\n        test_hasher(\"test\", \"42\")\n    }\n\n    #[test]\n    fn test_get_invalid_url_error() {\n        let error = get(\"https://foo.com\", \"foo\").unwrap_err();\n        assert_eq!(\n            \"unsupported hasher for 'https://foo.com'\",\n            error.to_string()\n        );\n    }\n\n    #[test]\n    #[cfg(feature = \"theseus\")]\n    fn test_get_invalid_extension_error() {\n        let error = get(theseus::URL, \"foo\").unwrap_err();\n        assert_eq!(\n            format!(\"unsupported hasher for '{}'\", theseus::URL),\n            error.to_string()\n        );\n    }\n\n    #[test]\n    #[cfg(feature = \"theseus\")]\n    fn test_get_theseus_postgresql_binaries() {\n        assert!(get(theseus::URL, \"sha256\").is_ok());\n    }\n\n    #[test]\n    #[cfg(feature = \"maven\")]\n    fn test_get_zonky_postgresql_binaries() {\n        assert!(get(maven::URL, \"sha512\").is_ok());\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/hasher/sha1.rs",
    "content": "use crate::Result;\nuse sha1::{Digest, Sha1};\n\n/// Hashes the data using SHA1.\n///\n/// # Errors\n/// * If the data cannot be hashed.\npub fn hash(data: &Vec<u8>) -> Result<String> {\n    let mut hasher = Sha1::new();\n    hasher.update(data);\n    let hash = hex::encode(hasher.finalize());\n    Ok(hash)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_hash() -> Result<()> {\n        let data = vec![4, 2];\n        let hash = hash(&data)?;\n        assert_eq!(\"1f3e1678e699640dfa5173d3a52b004f5e164d87\", hash);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/hasher/sha2_256.rs",
    "content": "use crate::Result;\nuse sha2::{Digest, Sha256};\n\n/// Hashes the data using SHA2-256.\n///\n/// # Errors\n/// * If the data cannot be hashed.\npub fn hash(data: &Vec<u8>) -> Result<String> {\n    let mut hasher = Sha256::new();\n    hasher.update(data);\n    let hash = hex::encode(hasher.finalize());\n    Ok(hash)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_hash() -> Result<()> {\n        let data = vec![4, 2];\n        let hash = hash(&data)?;\n        assert_eq!(\n            \"b7586d310e5efb1b7d10a917ba5af403adbf54f4f77fe7fdcb4880a95dac7e7e\",\n            hash\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/hasher/sha2_512.rs",
    "content": "use crate::Result;\nuse sha2::{Digest, Sha512};\n\n/// Hashes the data using SHA2-512.\n///\n/// # Errors\n/// * If the data cannot be hashed.\npub fn hash(data: &Vec<u8>) -> Result<String> {\n    let mut hasher = Sha512::new();\n    hasher.update(data);\n    let hash = hex::encode(hasher.finalize());\n    Ok(hash)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_hash() -> Result<()> {\n        let data = vec![4, 2];\n        let hash = hash(&data)?;\n        assert_eq!(\n            \"7df6418d1791a6fe80e726319f16f107534a663346f99e0d155e359a54f6c74391e2f3be19c995c3c903926d348bd86c339bd982e10f09aa776e4ff85d36387a\",\n            hash\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/lib.rs",
    "content": "//! # postgresql_archive\n//!\n//! [![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n//! [![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n//! [![License](https://img.shields.io/crates/l/postgresql_archive?)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_archive#license)\n//! [![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n//!\n//! Retrieve and extract PostgreSQL on Linux, MacOS or Windows.\n//!\n//! ## Table of contents\n//!\n//! - [Examples](#examples)\n//! - [Feature flags](#feature-flags)\n//! - [Supported platforms](#supported-platforms)\n//! - [Safety](#safety)\n//! - [License](#license)\n//! - [Notes](#notes)\n//!\n//! ## Examples\n//!\n//! ### Asynchronous API\n//!\n//! ```no_run\n//! use postgresql_archive::{extract, get_archive, Result, VersionReq };\n//! use postgresql_archive::configuration::theseus;\n//!\n//! #[tokio::main]\n//! async fn main() -> Result<()> {\n//!     let url = theseus::URL;\n//!     let (archive_version, archive) = get_archive(url, &VersionReq::STAR).await?;\n//!     let out_dir = std::env::temp_dir();\n//!     let files = extract(url, &archive, &out_dir).await?;\n//!     Ok(())\n//! }\n//! ```\n//!\n//! ### Synchronous API\n//! ```no_run\n//! #[cfg(feature = \"blocking\")] {\n//! use postgresql_archive::configuration::theseus;\n//! use postgresql_archive::VersionReq;\n//! use postgresql_archive::blocking::{extract, get_archive};\n//!\n//! let url = theseus::URL;\n//! let (archive_version, archive) = get_archive(url, &VersionReq::STAR).unwrap();\n//! let out_dir = std::env::temp_dir();\n//! let result = extract(url, &archive, &out_dir).unwrap();\n//! }\n//! ```\n//!\n//! ## Feature flags\n//!\n//! 
postgresql_archive uses [feature flags](https://doc.rust-lang.org/cargo/reference/features.html) to address compile time and binary size\n//! concerns.\n//!\n//! The following features are available:\n//!\n//! | Name         | Description                | Default? |\n//! |--------------|----------------------------|----------|\n//! | `blocking`   | Enables the blocking API   | No       |\n//! | `native-tls` | Enables native-tls support | Yes      |\n//! | `rustls`     | Enables rustls support     | No       |\n//!\n//! ### Configurations\n//!\n//! | Name      | Description                         | Default? |\n//! |-----------|-------------------------------------|----------|\n//! | `theseus` | Enables theseus PostgreSQL binaries | Yes      |\n//! | `zonky`   | Enables zonky PostgreSQL binaries   | No       |\n//!\n//! ### Extractors\n//!\n//! | Name     | Description              | Default? |\n//! |----------|--------------------------|----------|\n//! | `tar-gz` | Enables tar gz extractor | Yes      |\n//! | `tar-xz` | Enables tar xz extractor | No       |\n//! | `zip`    | Enables zip extractor    | No       |\n//!\n//! ### Hashers\n//!\n//! | Name   | Description          | Default? |\n//! |--------|----------------------|----------|\n//! | `md5`  | Enables md5 hashers  | No       |\n//! | `sha1` | Enables sha1 hashers | No       |\n//! | `sha2` | Enables sha2 hashers | Yes¹     |\n//!\n//! ¹ enabled by the `theseus` feature flag.\n//!\n//! ### Repositories\n//!\n//! | Name     | Description               | Default? |\n//! |----------|---------------------------|----------|\n//! | `github` | Enables github repository | Yes¹     |\n//! | `maven`  | Enables maven repository  | No       |\n//!\n//! ¹ enabled by the `theseus` feature flag.\n//!\n//! ## Supported platforms\n//!\n//! `postgresql_archive` provides implementations for the following:\n//!\n//! * [theseus-rs/postgresql-binaries](https://github.com/theseus-rs/postgresql-binaries)\n//! 
* [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries)\n//!\n//! ## Safety\n//!\n//! This crate uses `#![forbid(unsafe_code)]` to ensure everything is implemented in 100% safe Rust.\n//!\n//! ## License\n//!\n//! Licensed under either of\n//!\n//! * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or <https://www.apache.org/licenses/LICENSE-2.0>)\n//! * MIT license ([LICENSE-MIT](LICENSE-MIT) or <https://opensource.org/licenses/MIT>)\n//!\n//! at your option.\n//!\n//! PostgreSQL is covered under [The PostgreSQL License](https://opensource.org/licenses/postgresql).\n\nmod archive;\n#[cfg(feature = \"blocking\")]\npub mod blocking;\npub mod configuration;\nmod error;\npub mod extractor;\npub mod hasher;\npub mod matcher;\npub mod repository;\nmod version;\n\npub use archive::{extract, get_archive, get_version};\npub use error::{Error, Result};\npub use semver::{Version, VersionReq};\npub use version::{ExactVersion, ExactVersionReq};\n"
  },
  {
    "path": "postgresql_archive/src/matcher/mod.rs",
    "content": "pub mod registry;\n"
  },
  {
    "path": "postgresql_archive/src/matcher/registry.rs",
    "content": "use crate::Error::UnsupportedMatcher;\nuse crate::Result;\n#[cfg(feature = \"theseus\")]\nuse crate::configuration::theseus;\n#[cfg(feature = \"zonky\")]\nuse crate::configuration::zonky;\nuse semver::Version;\nuse std::sync::{Arc, LazyLock, Mutex, RwLock};\n\nstatic REGISTRY: LazyLock<Arc<Mutex<MatchersRegistry>>> =\n    LazyLock::new(|| Arc::new(Mutex::new(MatchersRegistry::default())));\n\npub type SupportsFn = fn(&str) -> Result<bool>;\npub type MatcherFn = fn(&str, &str, &Version) -> Result<bool>;\n\n/// Singleton struct to store matchers\n#[expect(clippy::type_complexity)]\nstruct MatchersRegistry {\n    matchers: Vec<(Arc<RwLock<SupportsFn>>, Arc<RwLock<MatcherFn>>)>,\n}\n\nimpl MatchersRegistry {\n    /// Creates a new matcher registry.\n    fn new() -> Self {\n        Self {\n            matchers: Vec::new(),\n        }\n    }\n\n    /// Registers a matcher for a supports function. Newly registered matchers will take precedence\n    /// over existing ones.\n    fn register(&mut self, supports_fn: SupportsFn, matcher_fn: MatcherFn) {\n        self.matchers.insert(\n            0,\n            (\n                Arc::new(RwLock::new(supports_fn)),\n                Arc::new(RwLock::new(matcher_fn)),\n            ),\n        );\n    }\n\n    /// Get a matcher for the specified URL.\n    ///\n    /// # Errors\n    /// * If the registry is poisoned.\n    fn get<S: AsRef<str>>(&self, url: S) -> Result<MatcherFn> {\n        let url = url.as_ref();\n        for (supports_fn, matcher_fn) in &self.matchers {\n            let supports_function = supports_fn.read()?;\n            if supports_function(url)? 
{\n                let matcher_function = matcher_fn.read()?;\n                return Ok(*matcher_function);\n            }\n        }\n\n        Err(UnsupportedMatcher(url.to_string()))\n    }\n}\n\nimpl Default for MatchersRegistry {\n    /// Creates a new matcher registry with the default matchers registered.\n    fn default() -> Self {\n        let mut registry = Self::new();\n        #[cfg(feature = \"theseus\")]\n        registry.register(|url| Ok(url == theseus::URL), theseus::matcher);\n        #[cfg(feature = \"zonky\")]\n        registry.register(|url| Ok(url == zonky::URL), zonky::matcher);\n        registry\n    }\n}\n\n/// Registers a matcher for a supports function. Newly registered matchers will take precedence over\n/// existing ones.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn register(supports_fn: SupportsFn, matcher_fn: MatcherFn) -> Result<()> {\n    REGISTRY.lock()?.register(supports_fn, matcher_fn);\n    Ok(())\n}\n\n/// Gets a matcher for the specified URL.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn get<S: AsRef<str>>(url: S) -> Result<MatcherFn> {\n    REGISTRY.lock()?.get(url)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_register() -> Result<()> {\n        register(\n            |url| Ok(url == \"https://foo.com\"),\n            |_url, name, _version| Ok(name == \"foo\"),\n        )?;\n\n        let matcher = get(\"https://foo.com\")?;\n        let version = Version::new(16, 3, 0);\n\n        assert!(matcher(\"\", \"foo\", &version)?);\n        Ok(())\n    }\n\n    #[test]\n    fn test_get_error() {\n        let result = get(\"foo\").unwrap_err();\n        assert_eq!(\"unsupported matcher for 'foo'\", result.to_string());\n    }\n\n    #[test]\n    #[cfg(feature = \"theseus\")]\n    fn test_get_theseus_postgresql_binaries() {\n        assert!(get(theseus::URL).is_ok());\n    }\n\n    #[test]\n    #[cfg(feature = \"zonky\")]\n    fn test_get_zonky_postgresql_binaries 
{\n        assert!(get(zonky::URL).is_ok());\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/repository/github/mod.rs",
    "content": "pub(crate) mod models;\npub mod repository;\n"
  },
  {
    "path": "postgresql_archive/src/repository/github/models.rs",
    "content": "//! Structs for GitHub API responses\nuse serde::{Deserialize, Serialize};\n\n/// Represents a GitHub release\n#[derive(Clone, Debug, Deserialize, Serialize)]\npub(crate) struct Release {\n    pub url: String,\n    pub assets_url: String,\n    pub upload_url: String,\n    pub html_url: String,\n    pub id: i64,\n    pub tag_name: String,\n    pub name: String,\n    pub draft: bool,\n    pub prerelease: bool,\n    pub assets: Vec<Asset>,\n}\n\n/// Represents a GitHub asset\n#[derive(Clone, Debug, Deserialize, Serialize)]\npub(crate) struct Asset {\n    pub url: String,\n    pub id: i64,\n    pub node_id: String,\n    pub name: String,\n    pub label: String,\n    pub content_type: String,\n    pub state: String,\n    pub size: i64,\n    pub browser_download_url: String,\n}\n"
  },
  {
    "path": "postgresql_archive/src/repository/github/repository.rs",
    "content": "use crate::Error::{\n    ArchiveHashMismatch, AssetHashNotFound, AssetNotFound, RepositoryFailure, VersionNotFound,\n};\nuse crate::hasher::registry::HasherFn;\nuse crate::repository::Archive;\nuse crate::repository::github::models::{Asset, Release};\nuse crate::repository::model::Repository;\nuse crate::{Result, hasher, matcher};\nuse async_trait::async_trait;\nuse futures_util::StreamExt;\nuse regex_lite::Regex;\nuse reqwest::header::HeaderMap;\nuse reqwest_middleware::{ClientBuilder, ClientWithMiddleware};\nuse reqwest_retry::RetryTransientMiddleware;\nuse reqwest_retry::policies::ExponentialBackoff;\nuse reqwest_tracing::TracingMiddleware;\nuse semver::{Version, VersionReq};\nuse std::env;\nuse std::io::Write;\nuse std::str::FromStr;\nuse std::sync::LazyLock;\nuse tracing::{debug, instrument, warn};\n#[cfg(feature = \"indicatif\")]\nuse tracing_indicatif::span_ext::IndicatifSpanExt;\n\nuse url::Url;\n\nconst GITHUB_API_VERSION_HEADER: &str = \"X-GitHub-Api-Version\";\nconst GITHUB_API_VERSION: &str = \"2022-11-28\";\n\nstatic GITHUB_TOKEN: LazyLock<Option<String>> = LazyLock::new(|| match env::var(\"GITHUB_TOKEN\") {\n    Ok(token) => {\n        debug!(\"GITHUB_TOKEN environment variable found\");\n        Some(token)\n    }\n    Err(_) => None,\n});\n\nstatic USER_AGENT: LazyLock<String> = LazyLock::new(|| {\n    format!(\n        \"{PACKAGE}/{VERSION}\",\n        PACKAGE = env!(\"CARGO_PKG_NAME\"),\n        VERSION = env!(\"CARGO_PKG_VERSION\")\n    )\n});\n\n/// GitHub repository.\n///\n/// This repository is used to interact with GitHub. The configuration url should be\n/// in the format <https://github.com/owner/repository>\n/// (e.g. 
<https://github.com/theseus-rs/postgresql-binaries>).\n#[derive(Debug)]\npub struct GitHub {\n    url: String,\n    releases_url: String,\n}\n\nimpl GitHub {\n    /// Creates a new GitHub repository from the specified URL in the format\n    /// <https://github.com/owner/repository>\n    ///\n    /// # Errors\n    /// * If the URL is invalid.\n    #[expect(clippy::new_ret_no_self)]\n    pub fn new(url: &str) -> Result<Box<dyn Repository>> {\n        let parsed_url = Url::parse(url)?;\n        let path = parsed_url.path().trim_start_matches('/');\n        let path_parts = path.split('/').collect::<Vec<_>>();\n        let owner = (*path_parts\n            .first()\n            .ok_or_else(|| RepositoryFailure(format!(\"No owner in URL {url}\")))?)\n        .to_string();\n        let repo = (*path_parts\n            .get(1)\n            .ok_or_else(|| RepositoryFailure(format!(\"No repo in URL {url}\")))?)\n        .to_string();\n        let releases_url = format!(\"https://api.github.com/repos/{owner}/{repo}/releases\");\n\n        Ok(Box::new(Self {\n            url: url.to_string(),\n            releases_url,\n        }))\n    }\n\n    /// Gets the version from the specified tag name.\n    ///\n    /// # Errors\n    /// * If the version cannot be parsed.\n    fn get_version_from_tag_name(tag_name: &str) -> Result<Version> {\n        // Trim any non-numeric prefix characters from the tag name (e.g., \"v16.4.0\" -> \"16.4.0\").\n        let tag_name = tag_name.trim_start_matches(|c: char| !c.is_numeric());\n        match Version::from_str(tag_name) {\n            Ok(version) => Ok(version),\n            Err(error) => {\n                warn!(\"Failed to parse version {tag_name}\");\n                Err(error.into())\n            }\n        }\n    }\n\n    /// Gets the release for the specified [version requirement](VersionReq). 
If a release for the\n    /// [version requirement](VersionReq) is not found, then an error is returned.\n    ///\n    /// # Errors\n    /// * If the release is not found.\n    #[instrument(level = \"debug\")]\n    async fn get_release(&self, version_req: &VersionReq) -> Result<Release> {\n        debug!(\"Attempting to locate release for version requirement {version_req}\");\n        let client = reqwest_client();\n        let mut result: Option<Release> = None;\n        let mut page = 1;\n\n        loop {\n            let request = client\n                .get(&self.releases_url)\n                .headers(Self::headers())\n                .query(&[(\"page\", page.to_string().as_str()), (\"per_page\", \"100\")]);\n            let response = request.send().await?.error_for_status()?;\n            let response_releases = response.json::<Vec<Release>>().await?;\n            if response_releases.is_empty() {\n                break;\n            }\n\n            for release in response_releases {\n                let tag_name = release.tag_name.clone();\n                let Ok(release_version) = Self::get_version_from_tag_name(tag_name.as_str()) else {\n                    warn!(\"Failed to parse release version {tag_name}\");\n                    continue;\n                };\n\n                if version_req.matches(&release_version) {\n                    if let Some(result_release) = &result {\n                        let result_version =\n                            Self::get_version_from_tag_name(result_release.tag_name.as_str())?;\n                        if release_version > result_version {\n                            result = Some(release);\n                        }\n                    } else {\n                        result = Some(release);\n                    }\n                }\n            }\n\n            page += 1;\n        }\n\n        match result {\n            Some(release) => {\n                let version = 
Self::get_version_from_tag_name(&release.tag_name)?;\n                debug!(\"Version {version} found for version requirement {version_req}\");\n                Ok(release)\n            }\n            None => Err(VersionNotFound(version_req.to_string())),\n        }\n    }\n\n    /// Gets the asset for the specified release that passes the supplied matcher. If no asset\n    /// passes the matcher, then an [AssetNotFound] error is returned.\n    ///\n    /// # Errors\n    /// * If the asset is not found.\n    #[instrument(level = \"debug\", skip(version, release))]\n    fn get_asset(\n        &self,\n        version: &Version,\n        release: &Release,\n    ) -> Result<(Asset, Option<Asset>, Option<HasherFn>)> {\n        let matcher = matcher::registry::get(&self.url)?;\n        let mut release_asset: Option<Asset> = None;\n        for asset in &release.assets {\n            if matcher(&self.url, asset.name.as_str(), version)? {\n                release_asset = Some(asset.clone());\n                break;\n            }\n        }\n\n        let Some(asset) = release_asset else {\n            return Err(AssetNotFound);\n        };\n\n        // Attempt to find the asset hash for the asset.\n        let mut asset_hash: Option<Asset> = None;\n        let mut asset_hasher_fn: Option<HasherFn> = None;\n        for release_asset in &release.assets {\n            let release_asset_name = release_asset.name.as_str();\n            if !release_asset_name.starts_with(&asset.name) {\n                continue;\n            }\n            let extension = release_asset_name\n                .strip_prefix(format!(\"{}.\", asset.name.as_str()).as_str())\n                .unwrap_or_default();\n\n            if let Ok(hasher_fn) = hasher::registry::get(&self.url, &extension.to_string()) {\n                asset_hash = Some(release_asset.clone());\n                asset_hasher_fn = Some(hasher_fn);\n                break;\n            }\n        }\n\n        
Ok((asset, asset_hash, asset_hasher_fn))\n    }\n\n    /// Returns the headers for the GitHub request.\n    fn headers() -> HeaderMap {\n        let mut headers = HeaderMap::new();\n        headers.append(\n            GITHUB_API_VERSION_HEADER,\n            GITHUB_API_VERSION.parse().unwrap(),\n        );\n        headers.append(\"User-Agent\", USER_AGENT.parse().unwrap());\n        if let Some(token) = &*GITHUB_TOKEN {\n            headers.append(\"Authorization\", format!(\"Bearer {token}\").parse().unwrap());\n        }\n        headers\n    }\n}\n\n#[async_trait]\nimpl Repository for GitHub {\n    #[instrument(level = \"debug\")]\n    fn name(&self) -> &str {\n        \"GitHub\"\n    }\n\n    #[instrument(level = \"debug\")]\n    async fn get_version(&self, version_req: &VersionReq) -> Result<Version> {\n        let release = self.get_release(version_req).await?;\n        let version = Self::get_version_from_tag_name(release.tag_name.as_str())?;\n        Ok(version)\n    }\n\n    #[instrument]\n    async fn get_archive(&self, version_req: &VersionReq) -> Result<Archive> {\n        let release = self.get_release(version_req).await?;\n        let version = Self::get_version_from_tag_name(release.tag_name.as_str())?;\n        let (asset, asset_hash, asset_hasher_fn) = self.get_asset(&version, &release)?;\n        let name = asset.name.clone();\n\n        let client = reqwest_client();\n        debug!(\"Downloading archive {}\", asset.browser_download_url);\n        let request = client\n            .get(&asset.browser_download_url)\n            .headers(Self::headers());\n        let response = request.send().await?.error_for_status()?;\n        #[cfg(feature = \"indicatif\")]\n        let span = tracing::Span::current();\n        #[cfg(feature = \"indicatif\")]\n        {\n            let content_length = response.content_length().unwrap_or_default();\n            span.pb_set_length(content_length);\n        }\n        let mut bytes = Vec::new();\n        let 
mut source = response.bytes_stream();\n        while let Some(chunk) = source.next().await {\n            bytes.write_all(&chunk?)?;\n            #[cfg(feature = \"indicatif\")]\n            span.pb_set_position(bytes.len() as u64);\n        }\n        debug!(\n            \"Archive {} downloaded: {}\",\n            asset.browser_download_url,\n            bytes.len(),\n        );\n\n        if let Some(asset_hash) = asset_hash {\n            let archive_hash = match asset_hasher_fn {\n                Some(hasher_fn) => hasher_fn(&bytes)?,\n                None => return Err(AssetHashNotFound(asset.name))?,\n            };\n            let hash_len = archive_hash.len();\n\n            debug!(\n                \"Downloading archive hash {}\",\n                asset_hash.browser_download_url\n            );\n            let request = client\n                .get(&asset_hash.browser_download_url)\n                .headers(Self::headers());\n            let response = request.send().await?.error_for_status()?;\n            let text = response.text().await?;\n            let re = Regex::new(&format!(r\"[0-9a-f]{{{hash_len}}}\"))?;\n            let hash = match re.find(&text) {\n                Some(hash) => hash.as_str().to_string(),\n                None => return Err(AssetHashNotFound(asset.name)),\n            };\n            debug!(\n                \"Archive hash {} downloaded: {}\",\n                asset_hash.browser_download_url,\n                text.len(),\n            );\n\n            if archive_hash != hash {\n                return Err(ArchiveHashMismatch { archive_hash, hash });\n            }\n        }\n\n        let archive = Archive::new(name, version, bytes);\n        Ok(archive)\n    }\n}\n\n/// Creates a new reqwest client with middleware for tracing, and retrying transient errors.\nfn reqwest_client() -> ClientWithMiddleware {\n    let retry_policy = ExponentialBackoff::builder().build_with_max_retries(3);\n    
ClientBuilder::new(reqwest::Client::new())\n        .with(TracingMiddleware::default())\n        .with(RetryTransientMiddleware::new_with_policy(retry_policy))\n        .build()\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::configuration::theseus::URL;\n\n    #[test]\n    fn test_name() {\n        let github = GitHub::new(URL).unwrap();\n        assert_eq!(\"GitHub\", github.name());\n    }\n\n    #[test]\n    fn test_get_version_from_tag_name() -> Result<()> {\n        let versions = vec![\"16.4.0\", \"v16.4.0\"];\n        for version in versions {\n            let version = GitHub::get_version_from_tag_name(version)?;\n            assert_eq!(Version::new(16, 4, 0), version);\n        }\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_get_version_from_tag_name_error() {\n        let error = GitHub::get_version_from_tag_name(\"foo\").unwrap_err();\n        assert_eq!(\n            \"empty string, expected a semver version\".to_string(),\n            error.to_string()\n        );\n    }\n\n    //\n    // get_version tests\n    //\n\n    #[tokio::test]\n    async fn test_get_version() -> Result<()> {\n        let github = GitHub::new(URL)?;\n        let version_req = VersionReq::STAR;\n        let version = github.get_version(&version_req).await?;\n        assert!(version > Version::new(0, 0, 0));\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_specific_version() -> Result<()> {\n        let github = GitHub::new(URL)?;\n        let version_req = VersionReq::parse(\"=16.4.0\")?;\n        let version = github.get_version(&version_req).await?;\n        assert_eq!(Version::new(16, 4, 0), version);\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_specific_not_found() -> Result<()> {\n        let github = GitHub::new(URL)?;\n        let version_req = VersionReq::parse(\"=0.0.0\")?;\n        let error = github.get_version(&version_req).await.unwrap_err();\n        assert_eq!(\"version not found for '=0.0.0'\", 
error.to_string());\n        Ok(())\n    }\n\n    //\n    // get_archive tests\n    //\n\n    #[tokio::test]\n    async fn test_get_archive() -> Result<()> {\n        let github = GitHub::new(URL)?;\n        let version_req = VersionReq::parse(\"=16.4.0\")?;\n        let archive = github.get_archive(&version_req).await?;\n        assert_eq!(\n            format!(\"postgresql-16.4.0-{}.tar.gz\", target_triple::TARGET),\n            archive.name()\n        );\n        assert_eq!(&Version::new(16, 4, 0), archive.version());\n        assert!(!archive.bytes().is_empty());\n        Ok(())\n    }\n\n    //\n    // Plugin Support\n    //\n\n    /// Test that a version with a 'v' prefix is correctly parsed; this is a common convention\n    /// for GitHub releases.  Use a known PostgreSQL plugin repository for the test.\n    #[tokio::test]\n    async fn test_get_version_with_v_prefix() -> Result<()> {\n        let github = GitHub::new(\"https://github.com/turbot/steampipe-plugin-csv\")?;\n        let version_req = VersionReq::parse(\"=0.12.0\")?;\n        let version = github.get_version(&version_req).await?;\n        assert_eq!(Version::new(0, 12, 0), version);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/repository/maven/mod.rs",
    "content": "pub(crate) mod models;\npub mod repository;\n\npub const URL: &str = \"https://repo1.maven.org/maven2\";\n"
  },
  {
    "path": "postgresql_archive/src/repository/maven/models.rs",
    "content": "/// Maven metadata XML structure\n///\n/// ```xml\n/// <metadata>\n///   <groupId>io.zonky.test.postgres</groupId>\n///   <artifactId>embedded-postgres-binaries-linux-amd64</artifactId>\n///   <versioning>\n///     <latest>16.2.0</latest>\n///     <release>16.2.0</release>\n///     <versions>\n///       ...\n///       <version>15.6.0</version>\n///       <version>16.2.0</version>\n///     </versions>\n///     <lastUpdated>20240210235512</lastUpdated>\n///   </versioning>\n/// </metadata>\n/// ```\nuse serde::{Deserialize, Serialize};\n\n/// Represents a Maven artifact metadata\n#[derive(Clone, Debug, Deserialize, Serialize)]\npub(crate) struct Metadata {\n    #[serde(rename = \"groupId\")]\n    pub(crate) group_id: String,\n    #[serde(rename = \"artifactId\")]\n    pub(crate) artifact_id: String,\n    pub(crate) versioning: Versioning,\n}\n\n/// Represents Maven versioning information\n#[derive(Clone, Debug, Deserialize, Serialize)]\npub(crate) struct Versioning {\n    pub(crate) latest: String,\n    pub(crate) release: String,\n    pub(crate) versions: Versions,\n    #[serde(rename = \"lastUpdated\")]\n    pub(crate) last_updated: String,\n}\n\n/// Represents Maven versions\n#[derive(Clone, Debug, Deserialize, Serialize)]\npub(crate) struct Versions {\n    pub(crate) version: Vec<String>,\n}\n"
  },
  {
    "path": "postgresql_archive/src/repository/maven/repository.rs",
    "content": "use crate::Error::{ArchiveHashMismatch, RepositoryFailure, VersionNotFound};\nuse crate::repository::Archive;\nuse crate::repository::maven::models::Metadata;\nuse crate::repository::model::Repository;\nuse crate::{Result, hasher};\nuse async_trait::async_trait;\nuse futures_util::StreamExt;\nuse reqwest::header::HeaderMap;\nuse reqwest_middleware::{ClientBuilder, ClientWithMiddleware};\nuse reqwest_retry::RetryTransientMiddleware;\nuse reqwest_retry::policies::ExponentialBackoff;\nuse reqwest_tracing::TracingMiddleware;\nuse semver::{Version, VersionReq};\nuse std::env;\nuse std::io::Write;\nuse std::sync::LazyLock;\nuse tracing::{debug, instrument, warn};\n#[cfg(feature = \"indicatif\")]\nuse tracing_indicatif::span_ext::IndicatifSpanExt;\n\nstatic USER_AGENT: LazyLock<String> = LazyLock::new(|| {\n    format!(\n        \"{PACKAGE}/{VERSION}\",\n        PACKAGE = env!(\"CARGO_PKG_NAME\"),\n        VERSION = env!(\"CARGO_PKG_VERSION\")\n    )\n});\n\n/// Maven repository.\n///\n/// This repository is used to interact with Maven repositories\n/// (e.g. 
<https://repo1.maven.org/maven2>).\n#[derive(Debug)]\npub struct Maven {\n    url: String,\n}\n\nimpl Maven {\n    /// Creates a new Maven repository from the specified URL in the format\n    /// <https://repo1.maven.org/maven2/io/zonky/test/postgres/embedded-postgres-binaries-linux-amd64>\n    ///\n    /// # Errors\n    /// * If the URL is invalid.\n    #[expect(clippy::new_ret_no_self)]\n    pub fn new(url: &str) -> Result<Box<dyn Repository>> {\n        Ok(Box::new(Self {\n            url: url.to_string(),\n        }))\n    }\n\n    /// Gets the artifact id and version that matches the specified version requirement.\n    ///\n    /// # Errors\n    /// * If the version requirement does not match any versions.\n    #[instrument(level = \"debug\")]\n    async fn get_artifact(&self, version_req: &VersionReq) -> Result<(String, Version)> {\n        debug!(\"Attempting to locate release for version requirement {version_req}\");\n        let client = reqwest_client();\n        let url = format!(\"{}/maven-metadata.xml\", self.url);\n        let request = client.get(&url).headers(Self::headers());\n        let response = request.send().await?.error_for_status()?;\n        let text = response.text().await?;\n        let metadata: Metadata = quick_xml::de::from_str(&text)?;\n        let artifact = metadata.artifact_id;\n        let mut result = None;\n        for version in &metadata.versioning.versions.version {\n            let version = Version::parse(version)?;\n            if version_req.matches(&version) {\n                if let Some(result_version) = result.clone() {\n                    if version > result_version {\n                        result = Some(version);\n                    }\n                } else {\n                    result = Some(version);\n                }\n            }\n        }\n\n        match &result {\n            Some(version) => {\n                debug!(\"Version {version} found for version requirement {version_req}\");\n              
  Ok((artifact, version.clone()))\n            }\n            None => Err(VersionNotFound(version_req.to_string())),\n        }\n    }\n\n    /// Returns the headers for the Maven request.\n    fn headers() -> HeaderMap {\n        let mut headers = HeaderMap::new();\n        headers.append(\"User-Agent\", USER_AGENT.parse().unwrap());\n        headers\n    }\n}\n\n#[async_trait]\nimpl Repository for Maven {\n    #[instrument(level = \"debug\")]\n    fn name(&self) -> &str {\n        \"Maven\"\n    }\n\n    #[instrument(level = \"debug\")]\n    async fn get_version(&self, version_req: &VersionReq) -> Result<Version> {\n        debug!(\"Attempting to locate release for version requirement {version_req}\");\n        let (_, version) = self.get_artifact(version_req).await?;\n        Ok(version)\n    }\n\n    #[instrument]\n    async fn get_archive(&self, version_req: &VersionReq) -> Result<Archive> {\n        let (artifact, version) = self.get_artifact(version_req).await?;\n        let archive_name = format!(\"{artifact}-{version}.jar\");\n        let archive_url = format!(\"{url}/{version}/{artifact}-{version}.jar\", url = self.url,);\n\n        let mut hasher_result = None;\n        // Try to find a hasher for the archive; the extensions are ordered by preference.\n        for extension in &[\"sha512\", \"sha256\", \"sha1\", \"md5\"] {\n            if let Ok(hasher_fn) = hasher::registry::get(&self.url, &(*extension).to_string()) {\n                hasher_result = Some((extension, hasher_fn));\n            }\n        }\n\n        let Some((extension, hasher_fn)) = hasher_result else {\n            return Err(RepositoryFailure(format!(\n                \"no hashers found for {}\",\n                &self.url\n            )));\n        };\n        let archive_hash_url = format!(\"{archive_url}.{extension}\");\n        let client = reqwest_client();\n        debug!(\"Downloading archive hash {archive_hash_url}\");\n        let request = 
client.get(&archive_hash_url).headers(Self::headers());\n        let response = request.send().await?.error_for_status()?;\n        let hash = response.text().await?;\n        debug!(\"Archive hash {archive_hash_url} downloaded: {}\", hash.len(),);\n\n        debug!(\"Downloading archive {archive_url}\");\n        let request = client.get(&archive_url).headers(Self::headers());\n        let response = request.send().await?.error_for_status()?;\n        #[cfg(feature = \"indicatif\")]\n        let span = tracing::Span::current();\n        #[cfg(feature = \"indicatif\")]\n        {\n            let content_length = response.content_length().unwrap_or_default();\n            span.pb_set_length(content_length);\n        }\n        let mut bytes = Vec::new();\n        let mut source = response.bytes_stream();\n        while let Some(chunk) = source.next().await {\n            bytes.write_all(&chunk?)?;\n            #[cfg(feature = \"indicatif\")]\n            span.pb_set_position(bytes.len() as u64);\n        }\n        debug!(\"Archive {archive_url} downloaded: {}\", bytes.len(),);\n\n        let archive_hash = hasher_fn(&bytes)?;\n        if archive_hash != hash {\n            return Err(ArchiveHashMismatch { archive_hash, hash });\n        }\n\n        let archive = Archive::new(archive_name, version, bytes);\n        Ok(archive)\n    }\n}\n\n/// Creates a new reqwest client with middleware for tracing, and retrying transient errors.\nfn reqwest_client() -> ClientWithMiddleware {\n    let retry_policy = ExponentialBackoff::builder().build_with_max_retries(3);\n    ClientBuilder::new(reqwest::Client::new())\n        .with(TracingMiddleware::default())\n        .with(RetryTransientMiddleware::new_with_policy(retry_policy))\n        .build()\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    const URL: &str = \"https://repo1.maven.org/maven2/io/zonky/test/postgres/embedded-postgres-binaries-linux-amd64\";\n\n    #[test]\n    fn test_name() {\n        let maven = 
Maven::new(URL).unwrap();\n        assert_eq!(\"Maven\", maven.name());\n    }\n\n    //\n    // get_version tests\n    //\n\n    #[tokio::test]\n    async fn test_get_version() -> Result<()> {\n        let maven = Maven::new(URL)?;\n        let version_req = VersionReq::STAR;\n        let version = maven.get_version(&version_req).await?;\n        assert!(version > Version::new(0, 0, 0));\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_specific_version() -> Result<()> {\n        let maven = Maven::new(URL)?;\n        let version_req = VersionReq::parse(\"=16.2.0\")?;\n        let version = maven.get_version(&version_req).await?;\n        assert_eq!(Version::new(16, 2, 0), version);\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_get_specific_not_found() -> Result<()> {\n        let maven = Maven::new(URL)?;\n        let version_req = VersionReq::parse(\"=0.0.0\")?;\n        let error = maven.get_version(&version_req).await.unwrap_err();\n        assert_eq!(\"version not found for '=0.0.0'\", error.to_string());\n        Ok(())\n    }\n\n    //\n    // get_archive tests\n    //\n\n    #[tokio::test]\n    async fn test_get_archive() -> Result<()> {\n        let maven = Maven::new(URL)?;\n        let version = Version::new(16, 2, 0);\n        let version_req = VersionReq::parse(format!(\"={version}\").as_str())?;\n        let archive = maven.get_archive(&version_req).await?;\n        assert_eq!(\n            format!(\"embedded-postgres-binaries-linux-amd64-{version}.jar\"),\n            archive.name()\n        );\n        assert_eq!(&version, archive.version());\n        assert!(!archive.bytes().is_empty());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/repository/mod.rs",
    "content": "#[cfg(feature = \"github\")]\npub mod github;\n#[cfg(feature = \"maven\")]\npub mod maven;\npub mod model;\npub mod registry;\n\npub use model::{Archive, Repository};\n"
  },
  {
    "path": "postgresql_archive/src/repository/model.rs",
    "content": "use async_trait::async_trait;\nuse semver::{Version, VersionReq};\nuse std::fmt::Debug;\n\n/// A trait for archive repository implementations.\n#[async_trait]\npub trait Repository: Debug + Send + Sync {\n    /// Gets the name of the repository.\n    fn name(&self) -> &str;\n\n    /// Gets the version for the specified [version requirement](VersionReq). If a\n    /// [version](Version) for the [version requirement](VersionReq) is not found,\n    /// then an error is returned.\n    ///\n    /// # Errors\n    /// * If the version is not found.\n    async fn get_version(&self, version_req: &VersionReq) -> crate::Result<Version>;\n\n    /// Gets the archive for a given [version requirement](VersionReq) that passes the default\n    /// matcher. If no archive is found for the [version requirement](VersionReq) and matcher then\n    /// an [error](crate::error::Error) is returned.\n    ///\n    /// # Errors\n    /// * If the archive is not found.\n    /// * If the archive cannot be downloaded.\n    async fn get_archive(&self, version_req: &VersionReq) -> crate::Result<Archive>;\n}\n\n/// A struct representing an archive.\n#[derive(Clone, Debug)]\npub struct Archive {\n    name: String,\n    version: Version,\n    bytes: Vec<u8>,\n}\n\nimpl Archive {\n    /// Creates a new archive.\n    #[must_use]\n    pub fn new(name: String, version: Version, bytes: Vec<u8>) -> Self {\n        Self {\n            name,\n            version,\n            bytes,\n        }\n    }\n\n    /// Gets the name of the archive.\n    #[must_use]\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    /// Gets the version of the archive.\n    #[must_use]\n    pub fn version(&self) -> &Version {\n        &self.version\n    }\n\n    /// Gets the bytes of the archive.\n    #[must_use]\n    pub fn bytes(&self) -> &[u8] {\n        &self.bytes\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use semver::Version;\n\n    #[test]\n    fn test_archive() {\n        let 
name = \"test\".to_string();\n        let version = Version::parse(\"1.0.0\").unwrap();\n        let bytes = vec![0, 1, 2, 3];\n        let archive = Archive::new(name.clone(), version.clone(), bytes.clone());\n        assert_eq!(archive.name(), name);\n        assert_eq!(archive.version(), &version);\n        assert_eq!(archive.bytes(), bytes.as_slice());\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/repository/registry.rs",
    "content": "use crate::Error::UnsupportedRepository;\nuse crate::Result;\n#[cfg(feature = \"theseus\")]\nuse crate::configuration::theseus;\n#[cfg(feature = \"zonky\")]\nuse crate::configuration::zonky;\n#[cfg(feature = \"github\")]\nuse crate::repository::github::repository::GitHub;\nuse crate::repository::model::Repository;\nuse std::sync::{Arc, LazyLock, Mutex, RwLock};\n\nstatic REGISTRY: LazyLock<Arc<Mutex<RepositoryRegistry>>> =\n    LazyLock::new(|| Arc::new(Mutex::new(RepositoryRegistry::default())));\n\ntype SupportsFn = fn(&str) -> Result<bool>;\ntype NewFn = dyn Fn(&str) -> Result<Box<dyn Repository>> + Send + Sync;\n\n/// Singleton struct to store repositories\n#[expect(clippy::type_complexity)]\nstruct RepositoryRegistry {\n    repositories: Vec<(Arc<RwLock<SupportsFn>>, Arc<RwLock<NewFn>>)>,\n}\n\nimpl RepositoryRegistry {\n    /// Creates a new repository registry.\n    fn new() -> Self {\n        Self {\n            repositories: Vec::new(),\n        }\n    }\n\n    /// Registers a repository. Newly registered repositories take precedence over existing ones.\n    fn register(&mut self, supports_fn: SupportsFn, new_fn: Box<NewFn>) {\n        self.repositories.insert(\n            0,\n            (\n                Arc::new(RwLock::new(supports_fn)),\n                Arc::new(RwLock::new(new_fn)),\n            ),\n        );\n    }\n\n    /// Gets a repository that supports the specified URL\n    ///\n    /// # Errors\n    /// * If the URL is not supported.\n    fn get(&self, url: &str) -> Result<Box<dyn Repository>> {\n        for (supports_fn, new_fn) in &self.repositories {\n            let supports_function = supports_fn.read()?;\n            if supports_function(url)? 
{\n                let new_function = new_fn.read()?;\n                return new_function(url);\n            }\n        }\n\n        Err(UnsupportedRepository(url.to_string()))\n    }\n}\n\nimpl Default for RepositoryRegistry {\n    /// Creates a new repository registry with the default repositories registered.\n    fn default() -> Self {\n        let mut registry = Self::new();\n        #[cfg(feature = \"theseus\")]\n        registry.register(\n            |url| Ok(url.starts_with(theseus::URL)),\n            Box::new(GitHub::new),\n        );\n        #[cfg(feature = \"zonky\")]\n        registry.register(\n            |url| Ok(url.starts_with(zonky::URL)),\n            Box::new(zonky::Zonky::new),\n        );\n        registry\n    }\n}\n\n/// Registers a repository. Newly registered repositories can override existing ones.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn register(supports_fn: SupportsFn, new_fn: Box<NewFn>) -> Result<()> {\n    REGISTRY.lock()?.register(supports_fn, new_fn);\n    Ok(())\n}\n\n/// Gets a repository that supports the specified URL\n///\n/// # Errors\n/// * If the URL is not supported.\npub fn get(url: &str) -> Result<Box<dyn Repository>> {\n    REGISTRY.lock()?.get(url)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::repository::Archive;\n    use async_trait::async_trait;\n    use semver::{Version, VersionReq};\n    use std::fmt::Debug;\n\n    #[derive(Debug)]\n    struct TestRepository;\n\n    impl TestRepository {\n        #[expect(clippy::new_ret_no_self)]\n        #[expect(clippy::unnecessary_wraps)]\n        fn new(_url: &str) -> Result<Box<dyn Repository>> {\n            Ok(Box::new(Self))\n        }\n    }\n\n    #[async_trait]\n    impl Repository for TestRepository {\n        fn name(&self) -> &'static str {\n            \"test\"\n        }\n\n        async fn get_version(&self, _version_req: &VersionReq) -> Result<Version> {\n            Ok(Version::new(0, 0, 42))\n        }\n\n        
async fn get_archive(&self, _version_req: &VersionReq) -> Result<Archive> {\n            Ok(Archive::new(\n                \"test\".to_string(),\n                Version::new(0, 0, 42),\n                Vec::new(),\n            ))\n        }\n    }\n\n    #[tokio::test]\n    async fn test_register() -> Result<()> {\n        register(\n            |url| Ok(url == \"https://foo.com\"),\n            Box::new(TestRepository::new),\n        )?;\n        let url = \"https://foo.com\";\n        let repository = get(url)?;\n        assert_eq!(\"test\", repository.name());\n        assert!(repository.get_version(&VersionReq::STAR).await.is_ok());\n        assert!(repository.get_archive(&VersionReq::STAR).await.is_ok());\n        Ok(())\n    }\n\n    #[test]\n    fn test_get_error() {\n        let error = get(\"foo\").unwrap_err();\n        assert_eq!(\"unsupported repository for 'foo'\", error.to_string());\n    }\n\n    #[test]\n    #[cfg(feature = \"theseus\")]\n    fn test_get_theseus_postgresql_binaries() {\n        assert!(get(theseus::URL).is_ok());\n    }\n\n    #[test]\n    #[cfg(feature = \"zonky\")]\n    fn test_get_zonky_postgresql_binaries() {\n        assert!(get(zonky::URL).is_ok());\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/src/version.rs",
    "content": "use crate::Result;\nuse semver::{Version, VersionReq};\n\n/// A trait for getting the exact version from a [version requirement](VersionReq).\npub trait ExactVersion {\n    /// Gets the exact version from a [version requirement](VersionReq) or `None`.\n    fn exact_version(&self) -> Option<Version>;\n}\n\nimpl ExactVersion for VersionReq {\n    /// Gets the exact version from a [version requirement](VersionReq) or `None`.\n    fn exact_version(&self) -> Option<Version> {\n        if self.comparators.len() != 1 {\n            return None;\n        }\n        let comparator = self.comparators.first()?;\n        if comparator.op != semver::Op::Exact {\n            return None;\n        }\n        let minor = comparator.minor?;\n        let patch = comparator.patch?;\n        let version = Version::new(comparator.major, minor, patch);\n        Some(version)\n    }\n}\n\n/// A trait for getting the exact version requirement from a [version](Version).\npub trait ExactVersionReq {\n    /// Gets the exact version requirement from a [version](Version).\n    ///\n    /// # Errors\n    /// * If the version requirement cannot be parsed.\n    fn exact_version_req(&self) -> Result<VersionReq>;\n}\n\nimpl ExactVersionReq for Version {\n    /// Gets the exact version requirement from a [version](Version).\n    ///\n    /// # Errors\n    /// * If the version requirement cannot be parsed.\n    fn exact_version_req(&self) -> Result<VersionReq> {\n        let version = format!(\"={self}\");\n        let version_req = VersionReq::parse(&version)?;\n        Ok(version_req)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::Result;\n\n    #[test]\n    fn test_exact_version_star() {\n        let version_req = VersionReq::STAR;\n        assert_eq!(None, version_req.exact_version());\n    }\n\n    #[test]\n    fn test_exact_version_greater_than() -> Result<()> {\n        let version_req = VersionReq::parse(\">16\")?;\n        assert_eq!(None, 
version_req.exact_version());\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_full_no_equals() -> Result<()> {\n        let version_req = VersionReq::parse(\"16.4.0\")?;\n        assert_eq!(None, version_req.exact_version());\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_full_equals() -> Result<()> {\n        let version_req = VersionReq::parse(\"=16.4.0\")?;\n        let version = Version::new(16, 4, 0);\n        assert_eq!(Some(version), version_req.exact_version());\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_major_minor() -> Result<()> {\n        let version_req = VersionReq::parse(\"=16.4\")?;\n        assert_eq!(None, version_req.exact_version());\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_major() -> Result<()> {\n        let version_req = VersionReq::parse(\"=16\")?;\n        assert_eq!(None, version_req.exact_version());\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_range() -> Result<()> {\n        let version_req = VersionReq::parse(\">= 16, < 17\")?;\n        assert_eq!(None, version_req.exact_version());\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_req_not_equal() -> Result<()> {\n        let version = Version::new(1, 2, 3);\n        assert_ne!(VersionReq::parse(\"=1.0.0\")?, version.exact_version_req()?);\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_req_major_minor_patch() -> Result<()> {\n        let version = Version::new(16, 4, 0);\n        assert_eq!(VersionReq::parse(\"=16.4.0\")?, version.exact_version_req()?);\n        Ok(())\n    }\n\n    #[test]\n    fn test_exact_version_prerelease() -> Result<()> {\n        let version = Version::parse(\"1.2.3-alpha\")?;\n        assert_eq!(\n            VersionReq::parse(\"=1.2.3-alpha\")?,\n            version.exact_version_req()?\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_archive/tests/archive.rs",
    "content": "use postgresql_archive::configuration::theseus;\nuse postgresql_archive::extract;\nuse postgresql_archive::{get_archive, get_version};\nuse semver::VersionReq;\nuse std::fs::remove_dir_all;\nuse test_log::test;\n\n#[test(tokio::test)]\nasync fn test_get_version_not_found() -> postgresql_archive::Result<()> {\n    let invalid_version_req = VersionReq::parse(\"=1.0.0\")?;\n    let result = get_version(theseus::URL, &invalid_version_req).await;\n\n    assert!(result.is_err());\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_get_version() -> anyhow::Result<()> {\n    let version_req = VersionReq::parse(\"=16.4.0\")?;\n    let latest_version = get_version(theseus::URL, &version_req).await?;\n\n    assert!(version_req.matches(&latest_version));\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_get_archive_and_extract() -> anyhow::Result<()> {\n    let url = theseus::URL;\n    let version_req = VersionReq::parse(\"=16.4.0\")?;\n    let (archive_version, archive) = get_archive(url, &version_req).await?;\n\n    assert!(version_req.matches(&archive_version));\n\n    let out_dir = tempfile::tempdir()?.path().to_path_buf();\n    let files = extract(url, &archive, &out_dir).await?;\n    #[cfg(all(target_os = \"linux\", target_arch = \"x86_64\"))]\n    assert_eq!(1_312, files.len());\n    #[cfg(all(target_os = \"macos\", target_arch = \"aarch64\"))]\n    assert_eq!(1_271, files.len());\n    #[cfg(all(target_os = \"macos\", target_arch = \"x86_64\"))]\n    assert_eq!(1_271, files.len());\n    #[cfg(all(target_os = \"windows\", target_arch = \"x86_64\"))]\n    assert_eq!(3_092, files.len());\n    remove_dir_all(&out_dir)?;\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_get_archive_version_not_found() -> postgresql_archive::Result<()> {\n    let invalid_version_req = VersionReq::parse(\"=1.0.0\")?;\n    let result = get_archive(theseus::URL, &invalid_version_req).await;\n\n    assert!(result.is_err());\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_archive/tests/blocking.rs",
    "content": "#[cfg(feature = \"blocking\")]\nuse postgresql_archive::VersionReq;\n#[cfg(feature = \"blocking\")]\nuse postgresql_archive::blocking::{extract, get_archive, get_version};\n#[cfg(feature = \"blocking\")]\nuse postgresql_archive::configuration::theseus;\n#[cfg(feature = \"blocking\")]\nuse std::fs::remove_dir_all;\n#[cfg(feature = \"blocking\")]\nuse test_log::test;\n\n#[cfg(feature = \"blocking\")]\n#[test]\nfn test_get_version() -> anyhow::Result<()> {\n    let version_req = VersionReq::STAR;\n    let latest_version = get_version(theseus::URL, &version_req)?;\n\n    assert!(version_req.matches(&latest_version));\n    Ok(())\n}\n\n#[cfg(feature = \"blocking\")]\n#[test]\nfn test_get_archive_and_extract() -> anyhow::Result<()> {\n    let url = theseus::URL;\n    let version_req = &VersionReq::parse(\"=16.4.0\")?;\n    let (archive_version, archive) = get_archive(url, version_req)?;\n\n    assert!(version_req.matches(&archive_version));\n\n    let out_dir = tempfile::tempdir()?.path().to_path_buf();\n    let files = extract(url, &archive, &out_dir)?;\n    assert!(!files.is_empty());\n    remove_dir_all(&out_dir)?;\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_archive/tests/zonky.rs",
    "content": "#[cfg(feature = \"zonky\")]\nuse postgresql_archive::configuration::zonky;\n#[cfg(feature = \"zonky\")]\nuse postgresql_archive::extract;\n#[cfg(feature = \"zonky\")]\nuse postgresql_archive::{get_archive, get_version};\n#[cfg(feature = \"zonky\")]\nuse semver::VersionReq;\n#[cfg(feature = \"zonky\")]\nuse std::fs::remove_dir_all;\n#[cfg(feature = \"zonky\")]\nuse test_log::test;\n\n#[test(tokio::test)]\n#[cfg(feature = \"zonky\")]\nasync fn test_get_version_not_found() -> postgresql_archive::Result<()> {\n    let invalid_version_req = VersionReq::parse(\"=1.0.0\")?;\n    let result = get_version(zonky::URL, &invalid_version_req).await;\n\n    assert!(result.is_err());\n    Ok(())\n}\n\n#[test(tokio::test)]\n#[cfg(feature = \"zonky\")]\nasync fn test_get_version() -> anyhow::Result<()> {\n    let version_req = VersionReq::parse(\"=16.2.0\")?;\n    let latest_version = get_version(zonky::URL, &version_req).await?;\n\n    assert!(version_req.matches(&latest_version));\n    Ok(())\n}\n\n#[test(tokio::test)]\n#[cfg(feature = \"zonky\")]\nasync fn test_get_archive_and_extract() -> anyhow::Result<()> {\n    let url = zonky::URL;\n    let version_req = VersionReq::parse(\"=16.4.0\")?;\n    let (archive_version, archive) = get_archive(url, &version_req).await?;\n\n    assert!(version_req.matches(&archive_version));\n\n    let out_dir = tempfile::tempdir()?.path().to_path_buf();\n    let files = extract(url, &archive, &out_dir).await?;\n    assert!(files.len() > 1_000);\n    remove_dir_all(&out_dir)?;\n    Ok(())\n}\n\n#[test(tokio::test)]\n#[cfg(feature = \"zonky\")]\nasync fn test_get_archive_version_not_found() -> postgresql_archive::Result<()> {\n    let invalid_version_req = VersionReq::parse(\"=1.0.0\")?;\n    let result = get_archive(zonky::URL, &invalid_version_req).await;\n\n    assert!(result.is_err());\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_commands/Cargo.toml",
    "content": "[package]\nauthors.workspace = true\ncategories.workspace = true\ndescription = \"PostgreSQL commands for interacting with a PostgreSQL server.\"\nedition.workspace = true\nkeywords.workspace = true\nlicense.workspace = true\nname = \"postgresql_commands\"\nrepository = \"https://github.com/theseus-rs/postgresql-embedded\"\nrust-version.workspace = true\nversion.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"full\"], optional = true }\ntracing = { workspace = true, features = [\"log\"] }\n\n[dev-dependencies]\ntest-log = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n\n[features]\ndefault = []\ntokio = [\"dep:tokio\"]\n"
  },
  {
    "path": "postgresql_commands/README.md",
    "content": "# PostgreSQL Commands\n\n[![ci](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml)\n[![Documentation](https://docs.rs/postgresql_commands/badge.svg)](https://docs.rs/postgresql_commands)\n[![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n[![Latest version](https://img.shields.io/crates/v/postgresql_commands.svg)](https://crates.io/crates/postgresql_commands)\n[![License](https://img.shields.io/crates/l/postgresql_commands?)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_commands#license)\n[![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n\nA library for executing PostgreSQL command line utilities.\n\n## Examples\n\n```rust\nuse postgresql_commands::Result;\nuse postgresql_commands::psql::PsqlBuilder;\n\nfn main() -> Result<()> {\n    let psql = PsqlBuilder::new()\n        .command(\"CREATE DATABASE \\\"test\\\"\")\n        .host(\"127.0.0.1\")\n        .port(5432)\n        .username(\"postgresql\")\n        .pg_password(\"password\")\n        .build();\n\n    let (stdout, stderr) = psql.execute()?;\n    Ok(())\n}\n```\n\n## Feature flags\n\nThe following features are available:\n\n| Name    | Description                       | Default? 
|\n|---------|-----------------------------------|----------|\n| `tokio` | Enables the use of tokio commands | No       |\n\n## License\n\nLicensed under either of\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as\ndefined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n"
  },
  {
    "path": "postgresql_commands/src/clusterdb.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `clusterdb` clusters all previously clustered tables in a database.\n#[derive(Clone, Debug, Default)]\npub struct ClusterDbBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    all: bool,\n    dbname: Option<OsString>,\n    echo: bool,\n    quiet: bool,\n    table: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    maintenance_db: Option<OsString>,\n}\n\nimpl ClusterDbBuilder {\n    /// Create a new [`ClusterDbBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`ClusterDbBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Cluster all databases\n    #[must_use]\n    pub fn all(mut self) -> Self {\n        self.all = true;\n        self\n    }\n\n    /// Database to cluster\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// Show 
the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// Don't write any messages\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// Cluster specific table(s) only\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, table: S) -> Self {\n        self.table = Some(table.as_ref().to_os_string());\n        self\n    }\n\n    /// Write a lot of output\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// Database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// Database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// User name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// Never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// Force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    
/// Alternate maintenance database\n    #[must_use]\n    pub fn maintenance_db<S: AsRef<OsStr>>(mut self, db: S) -> Self {\n        self.maintenance_db = Some(db.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for ClusterDbBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"clusterdb\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.all {\n            args.push(\"--all\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if let Some(table) = &self.table {\n            args.push(\"--table\".into());\n            args.push(table.into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            
args.push(\"--password\".into());\n        }\n\n        if let Some(maintenance_db) = &self.maintenance_db {\n            args.push(\"--maintenance-db\".into());\n            args.push(maintenance_db.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = ClusterDbBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"clusterdb\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = ClusterDbBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./clusterdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\clusterdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = ClusterDbBuilder::new()\n            .env(\"PGDATABASE\", 
\"database\")\n            .all()\n            .dbname(\"dbname\")\n            .echo()\n            .quiet()\n            .table(\"table\")\n            .verbose()\n            .version()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .maintenance_db(\"postgres\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"clusterdb\" \"--all\" \"--dbname\" \"dbname\" \"--echo\" \"--quiet\" \"--table\" \"table\" \"--verbose\" \"--version\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\" \"--maintenance-db\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = ClusterDbBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./clusterdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\clusterdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/createdb.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `createdb` creates a `PostgreSQL` database.\n#[derive(Clone, Debug, Default)]\npub struct CreateDbBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    tablespace: Option<OsString>,\n    echo: bool,\n    encoding: Option<OsString>,\n    locale: Option<OsString>,\n    lc_collate: Option<OsString>,\n    lc_ctype: Option<OsString>,\n    icu_locale: Option<OsString>,\n    icu_rules: Option<OsString>,\n    locale_provider: Option<OsString>,\n    owner: Option<OsString>,\n    strategy: Option<OsString>,\n    template: Option<OsString>,\n    version: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    maintenance_db: Option<OsString>,\n    dbname: Option<OsString>,\n    description: Option<OsString>,\n}\n\nimpl CreateDbBuilder {\n    /// Create a new [`CreateDbBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`CreateDbBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Default tablespace for the 
database\n    #[must_use]\n    pub fn tablespace<S: AsRef<OsStr>>(mut self, tablespace: S) -> Self {\n        self.tablespace = Some(tablespace.as_ref().to_os_string());\n        self\n    }\n\n    /// Show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// Encoding for the database\n    #[must_use]\n    pub fn encoding<S: AsRef<OsStr>>(mut self, encoding: S) -> Self {\n        self.encoding = Some(encoding.as_ref().to_os_string());\n        self\n    }\n\n    /// Locale settings for the database\n    #[must_use]\n    pub fn locale<S: AsRef<OsStr>>(mut self, locale: S) -> Self {\n        self.locale = Some(locale.as_ref().to_os_string());\n        self\n    }\n\n    /// `LC_COLLATE` setting for the database\n    #[must_use]\n    pub fn lc_collate<S: AsRef<OsStr>>(mut self, lc_collate: S) -> Self {\n        self.lc_collate = Some(lc_collate.as_ref().to_os_string());\n        self\n    }\n\n    /// `LC_CTYPE` setting for the database\n    #[must_use]\n    pub fn lc_ctype<S: AsRef<OsStr>>(mut self, lc_ctype: S) -> Self {\n        self.lc_ctype = Some(lc_ctype.as_ref().to_os_string());\n        self\n    }\n\n    /// ICU locale setting for the database\n    #[must_use]\n    pub fn icu_locale<S: AsRef<OsStr>>(mut self, icu_locale: S) -> Self {\n        self.icu_locale = Some(icu_locale.as_ref().to_os_string());\n        self\n    }\n\n    /// ICU rules setting for the database\n    #[must_use]\n    pub fn icu_rules<S: AsRef<OsStr>>(mut self, icu_rules: S) -> Self {\n        self.icu_rules = Some(icu_rules.as_ref().to_os_string());\n        self\n    }\n\n    /// Locale provider for the database's default collation\n    #[must_use]\n    pub fn locale_provider<S: AsRef<OsStr>>(mut self, locale_provider: S) -> Self {\n        self.locale_provider = Some(locale_provider.as_ref().to_os_string());\n        self\n    }\n\n    /// Database user to own the new database\n    
#[must_use]\n    pub fn owner<S: AsRef<OsStr>>(mut self, owner: S) -> Self {\n        self.owner = Some(owner.as_ref().to_os_string());\n        self\n    }\n\n    /// Database creation strategy `wal_log` or `file_copy`\n    #[must_use]\n    pub fn strategy<S: AsRef<OsStr>>(mut self, strategy: S) -> Self {\n        self.strategy = Some(strategy.as_ref().to_os_string());\n        self\n    }\n\n    /// Template database to copy\n    #[must_use]\n    pub fn template<S: AsRef<OsStr>>(mut self, template: S) -> Self {\n        self.template = Some(template.as_ref().to_os_string());\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// Database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// Database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// User name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// Never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// Force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// User password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// Alternate 
maintenance database\n    #[must_use]\n    pub fn maintenance_db<S: AsRef<OsStr>>(mut self, db: S) -> Self {\n        self.maintenance_db = Some(db.as_ref().to_os_string());\n        self\n    }\n\n    /// Database name\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// Database description\n    #[must_use]\n    pub fn description<S: AsRef<OsStr>>(mut self, description: S) -> Self {\n        self.description = Some(description.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for CreateDbBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"createdb\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(tablespace) = &self.tablespace {\n            args.push(\"--tablespace\".into());\n            args.push(tablespace.into());\n        }\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if let Some(encoding) = &self.encoding {\n            args.push(\"--encoding\".into());\n            args.push(encoding.into());\n        }\n\n        if let Some(locale) = &self.locale {\n            args.push(\"--locale\".into());\n            args.push(locale.into());\n        }\n\n        if let Some(lc_collate) = &self.lc_collate {\n            args.push(\"--lc-collate\".into());\n            args.push(lc_collate.into());\n        }\n\n        if let Some(lc_ctype) = &self.lc_ctype {\n            args.push(\"--lc-ctype\".into());\n            args.push(lc_ctype.into());\n        }\n\n        if let Some(icu_locale) = &self.icu_locale {\n            args.push(\"--icu-locale\".into());\n            
args.push(icu_locale.into());\n        }\n\n        if let Some(icu_rules) = &self.icu_rules {\n            args.push(\"--icu-rules\".into());\n            args.push(icu_rules.into());\n        }\n\n        if let Some(locale_provider) = &self.locale_provider {\n            args.push(\"--locale-provider\".into());\n            args.push(locale_provider.into());\n        }\n\n        if let Some(owner) = &self.owner {\n            args.push(\"--owner\".into());\n            args.push(owner.into());\n        }\n\n        if let Some(strategy) = &self.strategy {\n            args.push(\"--strategy\".into());\n            args.push(strategy.into());\n        }\n\n        if let Some(template) = &self.template {\n            args.push(\"--template\".into());\n            args.push(template.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(maintenance_db) = &self.maintenance_db {\n            args.push(\"--maintenance-db\".into());\n            args.push(maintenance_db.into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(dbname.into());\n        }\n\n        if let Some(description) = &self.description {\n            args.push(description.into());\n        }\n\n        
args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = CreateDbBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"createdb\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = CreateDbBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./createdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\createdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = CreateDbBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .tablespace(\"pg_default\")\n            .echo()\n            .encoding(\"UTF8\")\n            .locale(\"en_US.UTF-8\")\n            .lc_collate(\"en_US.UTF-8\")\n            .lc_ctype(\"en_US.UTF-8\")\n            
.icu_locale(\"en_US\")\n            .icu_rules(\"standard\")\n            .locale_provider(\"icu\")\n            .owner(\"postgres\")\n            .strategy(\"wal_log\")\n            .template(\"template0\")\n            .version()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .maintenance_db(\"postgres\")\n            .dbname(\"testdb\")\n            .description(\"Test Database\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"createdb\" \"--tablespace\" \"pg_default\" \"--echo\" \"--encoding\" \"UTF8\" \"--locale\" \"en_US.UTF-8\" \"--lc-collate\" \"en_US.UTF-8\" \"--lc-ctype\" \"en_US.UTF-8\" \"--icu-locale\" \"en_US\" \"--icu-rules\" \"standard\" \"--locale-provider\" \"icu\" \"--owner\" \"postgres\" \"--strategy\" \"wal_log\" \"--template\" \"template0\" \"--version\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\" \"--maintenance-db\" \"postgres\" \"testdb\" \"Test Database\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = CreateDbBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./createdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\createdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" 
\"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/createuser.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `createuser` creates a new `PostgreSQL` role.\n#[derive(Clone, Debug, Default)]\npub struct CreateUserBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    with_admin: Option<OsString>,\n    connection_limit: Option<u32>,\n    createdb: bool,\n    no_createdb: bool,\n    echo: bool,\n    member_of: Option<OsString>,\n    inherit: bool,\n    no_inherit: bool,\n    login: bool,\n    no_login: bool,\n    with_member: Option<OsString>,\n    pwprompt: bool,\n    createrole: bool,\n    no_createrole: bool,\n    superuser: bool,\n    no_superuser: bool,\n    valid_until: Option<OsString>,\n    version: bool,\n    interactive: bool,\n    bypassrls: bool,\n    no_bypassrls: bool,\n    replication: bool,\n    no_replication: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n}\n\nimpl CreateUserBuilder {\n    /// Create a new [`CreateUserBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`CreateUserBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = 
Some(path.into());\n        self\n    }\n\n    /// ROLE will be a member of new role with admin option\n    #[must_use]\n    pub fn with_admin<S: AsRef<OsStr>>(mut self, role: S) -> Self {\n        self.with_admin = Some(role.as_ref().to_os_string());\n        self\n    }\n\n    /// Connection limit for role (default: no limit)\n    #[must_use]\n    pub fn connection_limit(mut self, limit: u32) -> Self {\n        self.connection_limit = Some(limit);\n        self\n    }\n\n    /// Role can create new databases\n    #[must_use]\n    pub fn createdb(mut self) -> Self {\n        self.createdb = true;\n        self\n    }\n\n    /// Role cannot create databases (default)\n    #[must_use]\n    pub fn no_createdb(mut self) -> Self {\n        self.no_createdb = true;\n        self\n    }\n\n    /// Show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// New role will be a member of ROLE\n    #[must_use]\n    pub fn member_of<S: AsRef<OsStr>>(mut self, role: S) -> Self {\n        self.member_of = Some(role.as_ref().to_os_string());\n        self\n    }\n\n    /// Role inherits privileges of roles it is a member of (default)\n    #[must_use]\n    pub fn inherit(mut self) -> Self {\n        self.inherit = true;\n        self\n    }\n\n    /// Role does not inherit privileges\n    #[must_use]\n    pub fn no_inherit(mut self) -> Self {\n        self.no_inherit = true;\n        self\n    }\n\n    /// Role can login (default)\n    #[must_use]\n    pub fn login(mut self) -> Self {\n        self.login = true;\n        self\n    }\n\n    /// Role cannot login\n    #[must_use]\n    pub fn no_login(mut self) -> Self {\n        self.no_login = true;\n        self\n    }\n\n    /// ROLE will be a member of new role\n    #[must_use]\n    pub fn with_member<S: AsRef<OsStr>>(mut self, role: S) -> Self {\n        self.with_member = Some(role.as_ref().to_os_string());\n        self\n    }\n\n 
   /// Assign a password to new role\n    #[must_use]\n    pub fn pwprompt(mut self) -> Self {\n        self.pwprompt = true;\n        self\n    }\n\n    /// Role can create new roles\n    #[must_use]\n    pub fn createrole(mut self) -> Self {\n        self.createrole = true;\n        self\n    }\n\n    /// Role cannot create roles (default)\n    #[must_use]\n    pub fn no_createrole(mut self) -> Self {\n        self.no_createrole = true;\n        self\n    }\n\n    /// Role will be superuser\n    #[must_use]\n    pub fn superuser(mut self) -> Self {\n        self.superuser = true;\n        self\n    }\n\n    /// Role will not be superuser (default)\n    #[must_use]\n    pub fn no_superuser(mut self) -> Self {\n        self.no_superuser = true;\n        self\n    }\n\n    /// Password expiration date and time for role\n    #[must_use]\n    pub fn valid_until<S: AsRef<OsStr>>(mut self, timestamp: S) -> Self {\n        self.valid_until = Some(timestamp.as_ref().to_os_string());\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Prompt for missing role name and attributes rather than using defaults\n    #[must_use]\n    pub fn interactive(mut self) -> Self {\n        self.interactive = true;\n        self\n    }\n\n    /// Role can bypass row-level security (RLS) policy\n    #[must_use]\n    pub fn bypassrls(mut self) -> Self {\n        self.bypassrls = true;\n        self\n    }\n\n    /// Role cannot bypass row-level security (RLS) policy (default)\n    #[must_use]\n    pub fn no_bypassrls(mut self) -> Self {\n        self.no_bypassrls = true;\n        self\n    }\n\n    /// Role can initiate replication\n    #[must_use]\n    pub fn replication(mut self) -> Self {\n        self.replication = true;\n        self\n    }\n\n    /// Role cannot initiate replication (default)\n    #[must_use]\n    pub fn no_replication(mut self) -> 
Self {\n        self.no_replication = true;\n        self\n    }\n\n    /// Show this help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// Database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// Database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// User name to connect as (not the one to create)\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// Never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// Force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// User password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for CreateUserBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"createuser\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(role) = &self.with_admin {\n            args.push(\"--with-admin\".into());\n            args.push(role.into());\n        }\n\n        if let Some(limit) = &self.connection_limit {\n            
args.push(\"--connection-limit\".into());\n            args.push(limit.to_string().into());\n        }\n\n        if self.createdb {\n            args.push(\"--createdb\".into());\n        }\n\n        if self.no_createdb {\n            args.push(\"--no-createdb\".into());\n        }\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if let Some(role) = &self.member_of {\n            args.push(\"--member-of\".into());\n            args.push(role.into());\n        }\n\n        if self.inherit {\n            args.push(\"--inherit\".into());\n        }\n\n        if self.no_inherit {\n            args.push(\"--no-inherit\".into());\n        }\n\n        if self.login {\n            args.push(\"--login\".into());\n        }\n\n        if self.no_login {\n            args.push(\"--no-login\".into());\n        }\n\n        if let Some(role) = &self.with_member {\n            args.push(\"--with-member\".into());\n            args.push(role.into());\n        }\n\n        if self.pwprompt {\n            args.push(\"--pwprompt\".into());\n        }\n\n        if self.createrole {\n            args.push(\"--createrole\".into());\n        }\n\n        if self.no_createrole {\n            args.push(\"--no-createrole\".into());\n        }\n\n        if self.superuser {\n            args.push(\"--superuser\".into());\n        }\n\n        if self.no_superuser {\n            args.push(\"--no-superuser\".into());\n        }\n\n        if let Some(timestamp) = &self.valid_until {\n            args.push(\"--valid-until\".into());\n            args.push(timestamp.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.interactive {\n            args.push(\"--interactive\".into());\n        }\n\n        if self.bypassrls {\n            args.push(\"--bypassrls\".into());\n        }\n\n        if self.no_bypassrls {\n            args.push(\"--no-bypassrls\".into());\n        }\n\n 
       if self.replication {\n            args.push(\"--replication\".into());\n        }\n\n        if self.no_replication {\n            args.push(\"--no-replication\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = CreateUserBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"createuser\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn 
test_builder_from() {\n        let command = CreateUserBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./createuser\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\createuser\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = CreateUserBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .with_admin(\"admin\")\n            .connection_limit(10)\n            .createdb()\n            .no_createdb()\n            .echo()\n            .member_of(\"member\")\n            .inherit()\n            .no_inherit()\n            .login()\n            .no_login()\n            .with_member(\"member\")\n            .pwprompt()\n            .createrole()\n            .no_createrole()\n            .superuser()\n            .no_superuser()\n            .valid_until(\"2021-12-31\")\n            .version()\n            .interactive()\n            .bypassrls()\n            .no_bypassrls()\n            .replication()\n            .no_replication()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"createuser\" \"--with-admin\" \"admin\" \"--connection-limit\" \"10\" \"--createdb\" \"--no-createdb\" 
\"--echo\" \"--member-of\" \"member\" \"--inherit\" \"--no-inherit\" \"--login\" \"--no-login\" \"--with-member\" \"member\" \"--pwprompt\" \"--createrole\" \"--no-createrole\" \"--superuser\" \"--no-superuser\" \"--valid-until\" \"2021-12-31\" \"--version\" \"--interactive\" \"--bypassrls\" \"--no-bypassrls\" \"--replication\" \"--no-replication\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = CreateUserBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./createuser\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\createuser\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/dropdb.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `dropdb` removes a `PostgreSQL` database.\n#[derive(Clone, Debug, Default)]\npub struct DropDbBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    echo: bool,\n    force: bool,\n    interactive: bool,\n    version: bool,\n    if_exists: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    maintenance_db: Option<OsString>,\n    dbname: Option<OsString>,\n}\n\nimpl DropDbBuilder {\n    /// Create a new [`DropDbBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`DropDbBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// Try to terminate other connections before dropping\n    #[must_use]\n    pub fn force(mut self) -> Self {\n        self.force = true;\n        self\n    }\n\n    /// Prompt before deleting anything\n    #[must_use]\n    pub fn interactive(mut self) -> Self {\n 
       self.interactive = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Don't report error if database doesn't exist\n    #[must_use]\n    pub fn if_exists(mut self) -> Self {\n        self.if_exists = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// Database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// Database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// User name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// Never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// Force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// User password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// Alternate maintenance database\n    #[must_use]\n    pub fn maintenance_db<S: AsRef<OsStr>>(mut self, db: S) -> Self {\n        self.maintenance_db = Some(db.as_ref().to_os_string());\n        self\n    }\n\n    /// Database name\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    
}\n}\n\nimpl CommandBuilder for DropDbBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"dropdb\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if self.force {\n            args.push(\"--force\".into());\n        }\n\n        if self.interactive {\n            args.push(\"--interactive\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.if_exists {\n            args.push(\"--if-exists\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(db) = &self.maintenance_db {\n            args.push(\"--maintenance-db\".into());\n            args.push(db.into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(dbname.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = 
self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = DropDbBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"dropdb\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = DropDbBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./dropdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\dropdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = DropDbBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .echo()\n            .force()\n            .interactive()\n            .version()\n            .if_exists()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .maintenance_db(\"postgres\")\n            .dbname(\"dbname\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"dropdb\" \"--echo\" \"--force\" \"--interactive\" \"--version\" \"--if-exists\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\" \"--maintenance-db\" \"postgres\" \"dbname\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = DropDbBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./dropdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\dropdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/dropuser.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `dropuser` removes a `PostgreSQL` role.\n#[derive(Clone, Debug, Default)]\npub struct DropUserBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    echo: bool,\n    interactive: bool,\n    version: bool,\n    if_exists: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n}\n\nimpl DropUserBuilder {\n    /// Create a new [`DropUserBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`DropUserBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// Prompt before deleting anything, and prompt for role name if not specified\n    #[must_use]\n    pub fn interactive(mut self) -> Self {\n        self.interactive = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        
self.version = true;\n        self\n    }\n\n    /// Don't report error if user doesn't exist\n    #[must_use]\n    pub fn if_exists(mut self) -> Self {\n        self.if_exists = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// Database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// Database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// User name to connect as (not the one to drop)\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// Never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// Force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for DropUserBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"dropuser\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if self.interactive {\n            
args.push(\"--interactive\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.if_exists {\n            args.push(\"--if-exists\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = DropUserBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"dropuser\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n   
 #[test]\n    fn test_builder_from() {\n        let command = DropUserBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./dropuser\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\dropuser\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = DropUserBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .echo()\n            .interactive()\n            .version()\n            .if_exists()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"dropuser\" \"--echo\" \"--interactive\" \"--version\" \"--if-exists\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = DropUserBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./dropuser\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\dropuser\" \"#;\n\n        assert_eq!(\n            format!(\n      
          r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/ecpg.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `ecpg` is the `PostgreSQL` embedded SQL preprocessor for C programs.\n#[derive(Clone, Debug, Default)]\npub struct EcpgBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    c: bool,\n    compatibility_mode: Option<OsString>,\n    symbol: Option<OsString>,\n    header_file: bool,\n    system_include_files: bool,\n    directory: Option<OsString>,\n    outfile: Option<OsString>,\n    runtime_behavior: Option<OsString>,\n    regression: bool,\n    autocommit: bool,\n    version: bool,\n    help: bool,\n}\n\nimpl EcpgBuilder {\n    /// Create a new [`EcpgBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`EcpgBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Automatically generate C code from embedded SQL code\n    #[must_use]\n    pub fn c(mut self) -> Self {\n        self.c = true;\n        self\n    }\n\n    /// Set compatibility mode\n    #[must_use]\n    pub fn compatibility_mode<S: AsRef<OsStr>>(mut self, compatibility_mode: S) -> Self {\n        self.compatibility_mode = Some(compatibility_mode.as_ref().to_os_string());\n        self\n    }\n\n    /// Define SYMBOL\n    #[must_use]\n    pub fn symbol<S: AsRef<OsStr>>(mut self, symbol: S) -> Self {\n        self.symbol = Some(symbol.as_ref().to_os_string());\n        self\n    }\n\n    /// Parse a header file\n    #[must_use]\n    pub fn header_file(mut self) -> Self {\n        self.header_file = true;\n        self.c()\n    }\n\n    /// Parse system include 
files as well\n    #[must_use]\n    pub fn system_include_files(mut self) -> Self {\n        self.system_include_files = true;\n        self\n    }\n\n    /// Search DIRECTORY for include files\n    #[must_use]\n    pub fn directory<S: AsRef<OsStr>>(mut self, directory: S) -> Self {\n        self.directory = Some(directory.as_ref().to_os_string());\n        self\n    }\n\n    /// Write result to OUTFILE\n    #[must_use]\n    pub fn outfile<S: AsRef<OsStr>>(mut self, outfile: S) -> Self {\n        self.outfile = Some(outfile.as_ref().to_os_string());\n        self\n    }\n\n    /// Specify run-time behavior\n    #[must_use]\n    pub fn runtime_behavior<S: AsRef<OsStr>>(mut self, runtime_behavior: S) -> Self {\n        self.runtime_behavior = Some(runtime_behavior.as_ref().to_os_string());\n        self\n    }\n\n    /// Run in regression testing mode\n    #[must_use]\n    pub fn regression(mut self) -> Self {\n        self.regression = true;\n        self\n    }\n\n    /// Turn on autocommit of transactions\n    #[must_use]\n    pub fn autocommit(mut self) -> Self {\n        self.autocommit = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for EcpgBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"ecpg\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.c {\n            args.push(\"-c\".into());\n        }\n\n        if let Some(mode) = &self.compatibility_mode {\n            
args.push(\"-C\".into());\n            args.push(mode.into());\n        }\n\n        if let Some(symbol) = &self.symbol {\n            args.push(\"-D\".into());\n            args.push(symbol.into());\n        }\n\n        if self.header_file {\n            args.push(\"-h\".into());\n        }\n\n        if self.system_include_files {\n            args.push(\"-i\".into());\n        }\n\n        if let Some(directory) = &self.directory {\n            args.push(\"-I\".into());\n            args.push(directory.into());\n        }\n\n        if let Some(outfile) = &self.outfile {\n            args.push(\"-o\".into());\n            args.push(outfile.into());\n        }\n\n        if let Some(behavior) = &self.runtime_behavior {\n            args.push(\"-r\".into());\n            args.push(behavior.into());\n        }\n\n        if self.regression {\n            args.push(\"--regression\".into());\n        }\n\n        if self.autocommit {\n            args.push(\"-t\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = EcpgBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"ecpg\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n   
 }\n\n    #[test]\n    fn test_builder_from() {\n        let command = EcpgBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./ecpg\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\ecpg\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = EcpgBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .c()\n            .compatibility_mode(\"mode\")\n            .symbol(\"symbol\")\n            .header_file()\n            .system_include_files()\n            .directory(\"directory\")\n            .outfile(\"outfile\")\n            .runtime_behavior(\"behavior\")\n            .regression()\n            .autocommit()\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"ecpg\" \"-c\" \"-C\" \"mode\" \"-D\" \"symbol\" \"-h\" \"-i\" \"-I\" \"directory\" \"-o\" \"outfile\" \"-r\" \"behavior\" \"--regression\" \"-t\" \"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/error.rs",
    "content": "/// `PostgreSQL` command result type\npub type Result<T, E = Error> = core::result::Result<T, E>;\n\n/// `PostgreSQL` command errors\n#[derive(Debug, thiserror::Error)]\npub enum Error {\n    /// Error when a command fails\n    #[error(\"Command error: stdout={stdout}; stderr={stderr}\")]\n    CommandError { stdout: String, stderr: String },\n    /// Error when IO operations fail\n    #[error(\"{0}\")]\n    IoError(String),\n    /// Error when a command fails to execute before the timeout is reached\n    #[error(\"{0}\")]\n    TimeoutError(String),\n}\n\n/// Convert [standard IO errors](std::io::Error) to a [embedded errors](Error::IoError)\nimpl From<std::io::Error> for Error {\n    fn from(error: std::io::Error) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n#[cfg(feature = \"tokio\")]\n/// Convert [elapsed time errors](tokio::time::error::Elapsed) to [embedded errors](Error::TimeoutError)\nimpl From<tokio::time::error::Elapsed> for Error {\n    fn from(error: tokio::time::error::Elapsed) -> Self {\n        Error::TimeoutError(error.to_string())\n    }\n}\n\n/// These are relatively low value tests; they are here to reduce the coverage gap and\n/// ensure that the error conversions are working as expected.\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_from_io_error() {\n        let io_error = std::io::Error::other(\"test\");\n        let error = Error::from(io_error);\n        assert_eq!(error.to_string(), \"test\");\n    }\n\n    #[cfg(feature = \"tokio\")]\n    #[tokio::test]\n    async fn test_from_elapsed_error() {\n        let result = tokio::time::timeout(std::time::Duration::from_nanos(1), async {\n            tokio::time::sleep(std::time::Duration::from_secs(1)).await;\n        })\n        .await;\n        assert!(result.is_err());\n        if let Err(elapsed_error) = result {\n            let error = Error::from(elapsed_error);\n            assert_eq!(error.to_string(), \"deadline has 
elapsed\");\n        }\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/initdb.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `initdb` initializes a `PostgreSQL` database cluster.\n#[derive(Clone, Debug, Default)]\npub struct InitDbBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    auth: Option<OsString>,\n    auth_host: Option<OsString>,\n    auth_local: Option<OsString>,\n    pgdata: Option<PathBuf>,\n    encoding: Option<OsString>,\n    allow_group_access: bool,\n    icu_locale: Option<OsString>,\n    icu_rules: Option<OsString>,\n    data_checksums: bool,\n    locale: Option<OsString>,\n    lc_collate: Option<OsString>,\n    lc_ctype: Option<OsString>,\n    lc_messages: Option<OsString>,\n    lc_monetary: Option<OsString>,\n    lc_numeric: Option<OsString>,\n    lc_time: Option<OsString>,\n    no_locale: bool,\n    locale_provider: Option<OsString>,\n    pwfile: Option<PathBuf>,\n    text_search_config: Option<OsString>,\n    username: Option<OsString>,\n    pwprompt: bool,\n    waldir: Option<OsString>,\n    wal_segsize: Option<OsString>,\n    set: Option<OsString>,\n    debug: bool,\n    discard_caches: bool,\n    directory: Option<OsString>,\n    no_clean: bool,\n    no_sync: bool,\n    no_instructions: bool,\n    show: bool,\n    sync_only: bool,\n    version: bool,\n    help: bool,\n}\n\nimpl InitDbBuilder {\n    /// Create a new [`InitDbBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`InitDbBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new()\n            .program_dir(settings.get_binary_dir())\n            .username(settings.get_username())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Set the default 
authentication method for local connections\n    #[must_use]\n    pub fn auth<S: AsRef<OsStr>>(mut self, auth: S) -> Self {\n        self.auth = Some(auth.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default authentication method for local TCP/IP connections\n    #[must_use]\n    pub fn auth_host<S: AsRef<OsStr>>(mut self, auth_host: S) -> Self {\n        self.auth_host = Some(auth_host.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default authentication method for local-socket connections\n    #[must_use]\n    pub fn auth_local<S: AsRef<OsStr>>(mut self, auth_local: S) -> Self {\n        self.auth_local = Some(auth_local.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the location for this database cluster\n    #[must_use]\n    pub fn pgdata<P: Into<PathBuf>>(mut self, pgdata: P) -> Self {\n        self.pgdata = Some(pgdata.into());\n        self\n    }\n\n    /// Set the default encoding for new databases\n    #[must_use]\n    pub fn encoding<S: AsRef<OsStr>>(mut self, encoding: S) -> Self {\n        self.encoding = Some(encoding.as_ref().to_os_string());\n        self\n    }\n\n    /// Allow group read/execute on data directory\n    #[must_use]\n    pub fn allow_group_access(mut self) -> Self {\n        self.allow_group_access = true;\n        self\n    }\n\n    /// Set the ICU locale ID for new databases\n    #[must_use]\n    pub fn icu_locale<S: AsRef<OsStr>>(mut self, icu_locale: S) -> Self {\n        self.icu_locale = Some(icu_locale.as_ref().to_os_string());\n        self\n    }\n\n    /// Set additional ICU collation rules for new databases\n    #[must_use]\n    pub fn icu_rules<S: AsRef<OsStr>>(mut self, icu_rules: S) -> Self {\n        self.icu_rules = Some(icu_rules.as_ref().to_os_string());\n        self\n    }\n\n    /// Use data page checksums\n    #[must_use]\n    pub fn data_checksums(mut self) -> Self {\n        self.data_checksums = true;\n        self\n    }\n\n    /// Set the default locale 
for new databases\n    #[must_use]\n    pub fn locale<S: AsRef<OsStr>>(mut self, locale: S) -> Self {\n        self.locale = Some(locale.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default locale in the respective category for new databases\n    #[must_use]\n    pub fn lc_collate<S: AsRef<OsStr>>(mut self, lc_collate: S) -> Self {\n        self.lc_collate = Some(lc_collate.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default locale in the respective category for new databases\n    #[must_use]\n    pub fn lc_ctype<S: AsRef<OsStr>>(mut self, lc_ctype: S) -> Self {\n        self.lc_ctype = Some(lc_ctype.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default locale in the respective category for new databases\n    #[must_use]\n    pub fn lc_messages<S: AsRef<OsStr>>(mut self, lc_messages: S) -> Self {\n        self.lc_messages = Some(lc_messages.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default locale in the respective category for new databases\n    #[must_use]\n    pub fn lc_monetary<S: AsRef<OsStr>>(mut self, lc_monetary: S) -> Self {\n        self.lc_monetary = Some(lc_monetary.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default locale in the respective category for new databases\n    #[must_use]\n    pub fn lc_numeric<S: AsRef<OsStr>>(mut self, lc_numeric: S) -> Self {\n        self.lc_numeric = Some(lc_numeric.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the default locale in the respective category for new databases\n    #[must_use]\n    pub fn lc_time<S: AsRef<OsStr>>(mut self, lc_time: S) -> Self {\n        self.lc_time = Some(lc_time.as_ref().to_os_string());\n        self\n    }\n\n    /// Equivalent to --locale=C\n    #[must_use]\n    pub fn no_locale(mut self) -> Self {\n        self.no_locale = true;\n        self\n    }\n\n    /// Set the default locale provider for new databases\n    #[must_use]\n    pub fn locale_provider<S: 
AsRef<OsStr>>(mut self, locale_provider: S) -> Self {\n        self.locale_provider = Some(locale_provider.as_ref().to_os_string());\n        self\n    }\n\n    /// Read password for the new superuser from file\n    #[must_use]\n    pub fn pwfile<P: Into<PathBuf>>(mut self, pwfile: P) -> Self {\n        self.pwfile = Some(pwfile.into());\n        self\n    }\n\n    /// Set the default text search configuration\n    #[must_use]\n    pub fn text_search_config<S: AsRef<OsStr>>(mut self, text_search_config: S) -> Self {\n        self.text_search_config = Some(text_search_config.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the database superuser name\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// Prompt for a password for the new superuser\n    #[must_use]\n    pub fn pwprompt(mut self) -> Self {\n        self.pwprompt = true;\n        self\n    }\n\n    /// Set the location for the write-ahead log directory\n    #[must_use]\n    pub fn waldir<S: AsRef<OsStr>>(mut self, waldir: S) -> Self {\n        self.waldir = Some(waldir.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the size of WAL segments, in megabytes\n    #[must_use]\n    pub fn wal_segsize<S: AsRef<OsStr>>(mut self, wal_segsize: S) -> Self {\n        self.wal_segsize = Some(wal_segsize.as_ref().to_os_string());\n        self\n    }\n\n    /// Override default setting for server parameter\n    #[must_use]\n    pub fn set<S: AsRef<OsStr>>(mut self, set: S) -> Self {\n        self.set = Some(set.as_ref().to_os_string());\n        self\n    }\n\n    /// Generate lots of debugging output\n    #[must_use]\n    pub fn debug(mut self) -> Self {\n        self.debug = true;\n        self\n    }\n\n    /// Set `debug_discard_caches=1`\n    #[must_use]\n    pub fn discard_caches(mut self) -> Self {\n        self.discard_caches = true;\n        self\n    
}\n\n    /// Set where to find the input files\n    #[must_use]\n    pub fn directory<S: AsRef<OsStr>>(mut self, directory: S) -> Self {\n        self.directory = Some(directory.as_ref().to_os_string());\n        self\n    }\n\n    /// Do not clean up after errors\n    #[must_use]\n    pub fn no_clean(mut self) -> Self {\n        self.no_clean = true;\n        self\n    }\n\n    /// Do not wait for changes to be written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// Do not print instructions for next steps\n    #[must_use]\n    pub fn no_instructions(mut self) -> Self {\n        self.no_instructions = true;\n        self\n    }\n\n    /// Show internal settings\n    #[must_use]\n    pub fn show(mut self) -> Self {\n        self.show = true;\n        self\n    }\n\n    /// Only sync database files to disk, then exit\n    #[must_use]\n    pub fn sync_only(mut self) -> Self {\n        self.sync_only = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for InitDbBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"initdb\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(auth) = &self.auth {\n            args.push(\"--auth\".into());\n            args.push(auth.into());\n        }\n\n        if let Some(auth_host) = &self.auth_host {\n            
args.push(\"--auth-host\".into());\n            args.push(auth_host.into());\n        }\n\n        if let Some(auth_local) = &self.auth_local {\n            args.push(\"--auth-local\".into());\n            args.push(auth_local.into());\n        }\n\n        if let Some(pgdata) = &self.pgdata {\n            args.push(\"--pgdata\".into());\n            args.push(pgdata.into());\n        }\n\n        if let Some(encoding) = &self.encoding {\n            args.push(\"--encoding\".into());\n            args.push(encoding.into());\n        }\n\n        if self.allow_group_access {\n            args.push(\"--allow-group-access\".into());\n        }\n\n        if let Some(icu_locale) = &self.icu_locale {\n            args.push(\"--icu-locale\".into());\n            args.push(icu_locale.into());\n        }\n\n        if let Some(icu_rules) = &self.icu_rules {\n            args.push(\"--icu-rules\".into());\n            args.push(icu_rules.into());\n        }\n\n        if self.data_checksums {\n            args.push(\"--data-checksums\".into());\n        }\n\n        if let Some(locale) = &self.locale {\n            args.push(\"--locale\".into());\n            args.push(locale.into());\n        }\n\n        if let Some(lc_collate) = &self.lc_collate {\n            args.push(\"--lc-collate\".into());\n            args.push(lc_collate.into());\n        }\n\n        if let Some(lc_ctype) = &self.lc_ctype {\n            args.push(\"--lc-ctype\".into());\n            args.push(lc_ctype.into());\n        }\n\n        if let Some(lc_messages) = &self.lc_messages {\n            args.push(\"--lc-messages\".into());\n            args.push(lc_messages.into());\n        }\n\n        if let Some(lc_monetary) = &self.lc_monetary {\n            args.push(\"--lc-monetary\".into());\n            args.push(lc_monetary.into());\n        }\n\n        if let Some(lc_numeric) = &self.lc_numeric {\n            args.push(\"--lc-numeric\".into());\n            args.push(lc_numeric.into());\n        
}\n\n        if let Some(lc_time) = &self.lc_time {\n            args.push(\"--lc-time\".into());\n            args.push(lc_time.into());\n        }\n\n        if self.no_locale {\n            args.push(\"--no-locale\".into());\n        }\n\n        if let Some(locale_provider) = &self.locale_provider {\n            args.push(\"--locale-provider\".into());\n            args.push(locale_provider.into());\n        }\n\n        if let Some(pwfile) = &self.pwfile {\n            args.push(\"--pwfile\".into());\n            args.push(pwfile.into());\n        }\n\n        if let Some(text_search_config) = &self.text_search_config {\n            args.push(\"--text-search-config\".into());\n            args.push(text_search_config.into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.pwprompt {\n            args.push(\"--pwprompt\".into());\n        }\n\n        if let Some(waldir) = &self.waldir {\n            args.push(\"--waldir\".into());\n            args.push(waldir.into());\n        }\n\n        if let Some(wal_segsize) = &self.wal_segsize {\n            args.push(\"--wal-segsize\".into());\n            args.push(wal_segsize.into());\n        }\n\n        if let Some(set) = &self.set {\n            args.push(\"--set\".into());\n            args.push(set.into());\n        }\n\n        if self.debug {\n            args.push(\"--debug\".into());\n        }\n\n        if self.discard_caches {\n            args.push(\"--discard-caches\".into());\n        }\n\n        if let Some(directory) = &self.directory {\n            args.push(\"--directory\".into());\n            args.push(directory.into());\n        }\n\n        if self.no_clean {\n            args.push(\"--no-clean\".into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if self.no_instructions {\n            
args.push(\"--no-instructions\".into());\n        }\n\n        if self.show {\n            args.push(\"--show\".into());\n        }\n\n        if self.sync_only {\n            args.push(\"--sync-only\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = InitDbBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"initdb\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = InitDbBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./initdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\initdb\" \"#;\n\n        assert_eq!(\n            format!(r#\"{command_prefix}\"--username\" \"postgres\"\"#),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = InitDbBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .auth(\"md5\")\n            .auth_host(\"md5\")\n            .auth_local(\"md5\")\n            .pgdata(\"pgdata\")\n            .encoding(\"UTF8\")\n           
 .allow_group_access()\n            .icu_locale(\"en_US\")\n            .icu_rules(\"phonebook\")\n            .data_checksums()\n            .locale(\"en_US\")\n            .lc_collate(\"en_US\")\n            .lc_ctype(\"en_US\")\n            .lc_messages(\"en_US\")\n            .lc_monetary(\"en_US\")\n            .lc_numeric(\"en_US\")\n            .lc_time(\"en_US\")\n            .no_locale()\n            .locale_provider(\"icu\")\n            .pwfile(\".pwfile\")\n            .text_search_config(\"english\")\n            .username(\"postgres\")\n            .pwprompt()\n            .waldir(\"waldir\")\n            .wal_segsize(\"1\")\n            .set(\"timezone=UTC\")\n            .debug()\n            .discard_caches()\n            .directory(\"directory\")\n            .no_clean()\n            .no_sync()\n            .no_instructions()\n            .show()\n            .sync_only()\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"initdb\" \"--auth\" \"md5\" \"--auth-host\" \"md5\" \"--auth-local\" \"md5\" \"--pgdata\" \"pgdata\" \"--encoding\" \"UTF8\" \"--allow-group-access\" \"--icu-locale\" \"en_US\" \"--icu-rules\" \"phonebook\" \"--data-checksums\" \"--locale\" \"en_US\" \"--lc-collate\" \"en_US\" \"--lc-ctype\" \"en_US\" \"--lc-messages\" \"en_US\" \"--lc-monetary\" \"en_US\" \"--lc-numeric\" \"en_US\" \"--lc-time\" \"en_US\" \"--no-locale\" \"--locale-provider\" \"icu\" \"--pwfile\" \".pwfile\" \"--text-search-config\" \"english\" \"--username\" \"postgres\" \"--pwprompt\" \"--waldir\" \"waldir\" \"--wal-segsize\" \"1\" \"--set\" \"timezone=UTC\" \"--debug\" \"--discard-caches\" \"--directory\" \"directory\" \"--no-clean\" \"--no-sync\" \"--no-instructions\" 
\"--show\" \"--sync-only\" \"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/lib.rs",
    "content": "//! Command builders for interacting with `PostgreSQL` via CLI.\n//!\n//! The commands are implemented as builders, which can be used to construct a\n//! [standard Command](std::process::Command) or [tokio Command](tokio::process::Command).\n\npub mod clusterdb;\npub mod createdb;\npub mod createuser;\npub mod dropdb;\npub mod dropuser;\npub mod ecpg;\npub mod error;\npub mod initdb;\npub mod oid2name;\npub mod pg_amcheck;\npub mod pg_archivecleanup;\npub mod pg_basebackup;\npub mod pg_checksums;\npub mod pg_config;\npub mod pg_controldata;\npub mod pg_ctl;\npub mod pg_dump;\npub mod pg_dumpall;\npub mod pg_isready;\npub mod pg_receivewal;\npub mod pg_recvlogical;\npub mod pg_resetwal;\npub mod pg_restore;\npub mod pg_rewind;\npub mod pg_test_fsync;\npub mod pg_test_timing;\npub mod pg_upgrade;\npub mod pg_verifybackup;\npub mod pg_waldump;\npub mod pgbench;\npub mod postgres;\npub mod psql;\npub mod reindexdb;\npub mod traits;\npub mod vacuumdb;\npub mod vacuumlo;\n\npub use error::{Error, Result};\n#[cfg(test)]\npub use traits::TestSettings;\n#[cfg(test)]\npub use traits::TestSocketSettings;\npub use traits::{AsyncCommandExecutor, CommandBuilder, CommandExecutor, Settings};\n"
  },
  {
    "path": "postgresql_commands/src/oid2name.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `oid2name` helps to examine the file structure used by `PostgreSQL`.\n#[derive(Clone, Debug, Default)]\npub struct Oid2NameBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    filenode: Option<OsString>,\n    indexes: bool,\n    oid: Option<OsString>,\n    quiet: bool,\n    tablespaces: bool,\n    system_objects: bool,\n    table: Option<OsString>,\n    version: bool,\n    extended: bool,\n    help: bool,\n    dbname: Option<OsString>,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n}\n\nimpl Oid2NameBuilder {\n    /// Create a new [`Oid2NameBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`Oid2NameBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// show info for table with given file node\n    #[must_use]\n    pub fn filenode<S: AsRef<OsStr>>(mut self, filenode: S) -> Self {\n        self.filenode = Some(filenode.as_ref().to_os_string());\n        self\n    }\n\n    /// show indexes and sequences too\n    #[must_use]\n    pub fn indexes(mut self) -> Self {\n        self.indexes = true;\n        self\n    }\n\n    /// show info for table with given 
OID\n    #[must_use]\n    pub fn oid<S: AsRef<OsStr>>(mut self, oid: S) -> Self {\n        self.oid = Some(oid.as_ref().to_os_string());\n        self\n    }\n\n    /// quiet (don't show headers)\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// show all tablespaces\n    #[must_use]\n    pub fn tablespaces(mut self) -> Self {\n        self.tablespaces = true;\n        self\n    }\n\n    /// show system objects too\n    #[must_use]\n    pub fn system_objects(mut self) -> Self {\n        self.system_objects = true;\n        self\n    }\n\n    /// show info for named table\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, table: S) -> Self {\n        self.table = Some(table.as_ref().to_os_string());\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// extended (show additional columns)\n    #[must_use]\n    pub fn extended(mut self) -> Self {\n        self.extended = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// database to connect to\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// connect as specified database user\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = 
Some(username.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for Oid2NameBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"oid2name\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(filenode) = &self.filenode {\n            args.push(\"--filenode\".into());\n            args.push(filenode.into());\n        }\n\n        if self.indexes {\n            args.push(\"--indexes\".into());\n        }\n\n        if let Some(oid) = &self.oid {\n            args.push(\"--oid\".into());\n            args.push(oid.into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if self.tablespaces {\n            args.push(\"--tablespaces\".into());\n        }\n\n        if self.system_objects {\n            args.push(\"--system-objects\".into());\n        }\n\n        if let Some(table) = &self.table {\n            args.push(\"--table\".into());\n            args.push(table.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.extended {\n            args.push(\"--extended\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = 
&self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = Oid2NameBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"oid2name\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = Oid2NameBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./oid2name\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\oid2name\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = Oid2NameBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .filenode(\"filenode\")\n            .indexes()\n            .oid(\"oid\")\n            .quiet()\n            .tablespaces()\n            .system_objects()\n            .table(\"table\")\n            .version()\n            .extended()\n            .help()\n            .dbname(\"dbname\")\n            
.host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"oid2name\" \"--filenode\" \"filenode\" \"--indexes\" \"--oid\" \"oid\" \"--quiet\" \"--tablespaces\" \"--system-objects\" \"--table\" \"table\" \"--version\" \"--extended\" \"--help\" \"--dbname\" \"dbname\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = Oid2NameBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./oid2name\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\oid2name\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_amcheck.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_amcheck` checks objects in a `PostgreSQL` database for corruption.\n#[derive(Clone, Debug, Default)]\npub struct PgAmCheckBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    all: bool,\n    database: Option<OsString>,\n    exclude_database: Option<OsString>,\n    index: Option<OsString>,\n    exclude_index: Option<OsString>,\n    relation: Option<OsString>,\n    exclude_relation: Option<OsString>,\n    schema: Option<OsString>,\n    exclude_schema: Option<OsString>,\n    table: Option<OsString>,\n    exclude_table: Option<OsString>,\n    no_dependent_indexes: bool,\n    no_dependent_toast: bool,\n    no_strict_names: bool,\n    exclude_toast_pointers: bool,\n    on_error_stop: bool,\n    skip: Option<OsString>,\n    start_block: Option<OsString>,\n    end_block: Option<OsString>,\n    heap_all_indexed: bool,\n    parent_check: bool,\n    root_descend: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    maintenance_db: Option<OsString>,\n    echo: bool,\n    jobs: Option<OsString>,\n    progress: bool,\n    verbose: bool,\n    version: bool,\n    install_missing: bool,\n    help: bool,\n}\n\nimpl PgAmCheckBuilder {\n    /// Create a new [`PgAmCheckBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgAmCheckBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let 
Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// check all databases\n    #[must_use]\n    pub fn all(mut self) -> Self {\n        self.all = true;\n        self\n    }\n\n    /// check matching database(s)\n    #[must_use]\n    pub fn database<S: AsRef<OsStr>>(mut self, database: S) -> Self {\n        self.database = Some(database.as_ref().to_os_string());\n        self\n    }\n\n    /// do NOT check matching database(s)\n    #[must_use]\n    pub fn exclude_database<S: AsRef<OsStr>>(mut self, exclude_database: S) -> Self {\n        self.exclude_database = Some(exclude_database.as_ref().to_os_string());\n        self\n    }\n\n    /// check matching index(es)\n    #[must_use]\n    pub fn index<S: AsRef<OsStr>>(mut self, index: S) -> Self {\n        self.index = Some(index.as_ref().to_os_string());\n        self\n    }\n\n    /// do NOT check matching index(es)\n    #[must_use]\n    pub fn exclude_index<S: AsRef<OsStr>>(mut self, exclude_index: S) -> Self {\n        self.exclude_index = Some(exclude_index.as_ref().to_os_string());\n        self\n    }\n\n    /// check matching relation(s)\n    #[must_use]\n    pub fn relation<S: AsRef<OsStr>>(mut self, relation: S) -> Self {\n        self.relation = Some(relation.as_ref().to_os_string());\n        self\n    }\n\n    /// do NOT check matching relation(s)\n    #[must_use]\n    pub fn exclude_relation<S: AsRef<OsStr>>(mut self, exclude_relation: S) -> Self {\n        self.exclude_relation = Some(exclude_relation.as_ref().to_os_string());\n        self\n    }\n\n    /// check matching schema(s)\n    #[must_use]\n    pub fn schema<S: AsRef<OsStr>>(mut self, schema: S) -> Self {\n        self.schema = 
Some(schema.as_ref().to_os_string());\n        self\n    }\n\n    /// do NOT check matching schema(s)\n    #[must_use]\n    pub fn exclude_schema<S: AsRef<OsStr>>(mut self, exclude_schema: S) -> Self {\n        self.exclude_schema = Some(exclude_schema.as_ref().to_os_string());\n        self\n    }\n\n    /// check matching table(s)\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, table: S) -> Self {\n        self.table = Some(table.as_ref().to_os_string());\n        self\n    }\n\n    /// do NOT check matching table(s)\n    #[must_use]\n    pub fn exclude_table<S: AsRef<OsStr>>(mut self, exclude_table: S) -> Self {\n        self.exclude_table = Some(exclude_table.as_ref().to_os_string());\n        self\n    }\n\n    /// do NOT expand list of relations to include indexes\n    #[must_use]\n    pub fn no_dependent_indexes(mut self) -> Self {\n        self.no_dependent_indexes = true;\n        self\n    }\n\n    /// do NOT expand list of relations to include TOAST tables\n    #[must_use]\n    pub fn no_dependent_toast(mut self) -> Self {\n        self.no_dependent_toast = true;\n        self\n    }\n\n    /// do NOT require patterns to match objects\n    #[must_use]\n    pub fn no_strict_names(mut self) -> Self {\n        self.no_strict_names = true;\n        self\n    }\n\n    /// do NOT follow relation TOAST pointers\n    #[must_use]\n    pub fn exclude_toast_pointers(mut self) -> Self {\n        self.exclude_toast_pointers = true;\n        self\n    }\n\n    /// stop checking at end of first corrupt page\n    #[must_use]\n    pub fn on_error_stop(mut self) -> Self {\n        self.on_error_stop = true;\n        self\n    }\n\n    /// do NOT check \"all-frozen\" or \"all-visible\" blocks\n    #[must_use]\n    pub fn skip<S: AsRef<OsStr>>(mut self, skip: S) -> Self {\n        self.skip = Some(skip.as_ref().to_os_string());\n        self\n    }\n\n    /// begin checking table(s) at the given block number\n    #[must_use]\n    pub fn start_block<S: 
AsRef<OsStr>>(mut self, start_block: S) -> Self {\n        self.start_block = Some(start_block.as_ref().to_os_string());\n        self\n    }\n\n    /// check table(s) only up to the given block number\n    #[must_use]\n    pub fn end_block<S: AsRef<OsStr>>(mut self, end_block: S) -> Self {\n        self.end_block = Some(end_block.as_ref().to_os_string());\n        self\n    }\n\n    /// check that all heap tuples are found within indexes\n    #[must_use]\n    pub fn heap_all_indexed(mut self) -> Self {\n        self.heap_all_indexed = true;\n        self\n    }\n\n    /// check index parent/child relationships\n    #[must_use]\n    pub fn parent_check(mut self) -> Self {\n        self.parent_check = true;\n        self\n    }\n\n    /// search from root page to refind tuples\n    #[must_use]\n    pub fn root_descend(mut self) -> Self {\n        self.root_descend = true;\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// user name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// 
alternate maintenance database\n    #[must_use]\n    pub fn maintenance_db<S: AsRef<OsStr>>(mut self, maintenance_db: S) -> Self {\n        self.maintenance_db = Some(maintenance_db.as_ref().to_os_string());\n        self\n    }\n\n    /// show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// use this many concurrent connections to the server\n    #[must_use]\n    pub fn jobs<S: AsRef<OsStr>>(mut self, jobs: S) -> Self {\n        self.jobs = Some(jobs.as_ref().to_os_string());\n        self\n    }\n\n    /// show progress information\n    #[must_use]\n    pub fn progress(mut self) -> Self {\n        self.progress = true;\n        self\n    }\n\n    /// write a lot of output\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// install missing extensions\n    #[must_use]\n    pub fn install_missing(mut self) -> Self {\n        self.install_missing = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgAmCheckBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_amcheck\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.all {\n            args.push(\"--all\".into());\n        }\n\n        if let Some(database) = &self.database {\n            args.push(\"--database\".into());\n   
         args.push(database.into());\n        }\n\n        if let Some(exclude_database) = &self.exclude_database {\n            args.push(\"--exclude-database\".into());\n            args.push(exclude_database.into());\n        }\n\n        if let Some(index) = &self.index {\n            args.push(\"--index\".into());\n            args.push(index.into());\n        }\n\n        if let Some(exclude_index) = &self.exclude_index {\n            args.push(\"--exclude-index\".into());\n            args.push(exclude_index.into());\n        }\n\n        if let Some(relation) = &self.relation {\n            args.push(\"--relation\".into());\n            args.push(relation.into());\n        }\n\n        if let Some(exclude_relation) = &self.exclude_relation {\n            args.push(\"--exclude-relation\".into());\n            args.push(exclude_relation.into());\n        }\n\n        if let Some(schema) = &self.schema {\n            args.push(\"--schema\".into());\n            args.push(schema.into());\n        }\n\n        if let Some(exclude_schema) = &self.exclude_schema {\n            args.push(\"--exclude-schema\".into());\n            args.push(exclude_schema.into());\n        }\n\n        if let Some(table) = &self.table {\n            args.push(\"--table\".into());\n            args.push(table.into());\n        }\n\n        if let Some(exclude_table) = &self.exclude_table {\n            args.push(\"--exclude-table\".into());\n            args.push(exclude_table.into());\n        }\n\n        if self.no_dependent_indexes {\n            args.push(\"--no-dependent-indexes\".into());\n        }\n\n        if self.no_dependent_toast {\n            args.push(\"--no-dependent-toast\".into());\n        }\n\n        if self.no_strict_names {\n            args.push(\"--no-strict-names\".into());\n        }\n\n        if self.exclude_toast_pointers {\n            args.push(\"--exclude-toast-pointers\".into());\n        }\n\n        if self.on_error_stop {\n            
args.push(\"--on-error-stop\".into());\n        }\n\n        if let Some(skip) = &self.skip {\n            args.push(\"--skip\".into());\n            args.push(skip.into());\n        }\n\n        if let Some(start_block) = &self.start_block {\n            args.push(\"--startblock\".into());\n            args.push(start_block.into());\n        }\n\n        if let Some(end_block) = &self.end_block {\n            args.push(\"--endblock\".into());\n            args.push(end_block.into());\n        }\n\n        if self.heap_all_indexed {\n            args.push(\"--heapallindexed\".into());\n        }\n\n        if self.parent_check {\n            args.push(\"--parent-check\".into());\n        }\n\n        if self.root_descend {\n            args.push(\"--rootdescend\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(maintenance_db) = &self.maintenance_db {\n            args.push(\"--maintenance-db\".into());\n            args.push(maintenance_db.into());\n        }\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if let Some(jobs) = &self.jobs {\n            args.push(\"--jobs\".into());\n            args.push(jobs.into());\n        }\n\n        if self.progress {\n            args.push(\"--progress\".into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version 
{\n            args.push(\"--version\".into());\n        }\n\n        if self.install_missing {\n            args.push(\"--install-missing\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgAmCheckBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_amcheck\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgAmCheckBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_amcheck\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_amcheck\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = 
PgAmCheckBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_amcheck\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_amcheck\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgAmCheckBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .all()\n            .database(\"database\")\n            .exclude_database(\"exclude_database\")\n            .index(\"index\")\n            .exclude_index(\"exclude_index\")\n            .relation(\"relation\")\n            .exclude_relation(\"exclude_relation\")\n            .schema(\"schema\")\n            .exclude_schema(\"exclude_schema\")\n            .table(\"table\")\n            .exclude_table(\"exclude_table\")\n            .no_dependent_indexes()\n            .no_dependent_toast()\n            .no_strict_names()\n            .exclude_toast_pointers()\n            .on_error_stop()\n            .skip(\"skip\")\n            .start_block(\"start_block\")\n            .end_block(\"end_block\")\n            .heap_all_indexed()\n            .parent_check()\n            .root_descend()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .maintenance_db(\"maintenance_db\")\n            .echo()\n            .jobs(\"jobs\")\n            .progress()\n            .verbose()\n            .version()\n            .install_missing()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = 
r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_amcheck\" \"--all\" \"--database\" \"database\" \"--exclude-database\" \"exclude_database\" \"--index\" \"index\" \"--exclude-index\" \"exclude_index\" \"--relation\" \"relation\" \"--exclude-relation\" \"exclude_relation\" \"--schema\" \"schema\" \"--exclude-schema\" \"exclude_schema\" \"--table\" \"table\" \"--exclude-table\" \"exclude_table\" \"--no-dependent-indexes\" \"--no-dependent-toast\" \"--no-strict-names\" \"--exclude-toast-pointers\" \"--on-error-stop\" \"--skip\" \"skip\" \"--startblock\" \"start_block\" \"--endblock\" \"end_block\" \"--heapallindexed\" \"--parent-check\" \"--rootdescend\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\" \"--maintenance-db\" \"maintenance_db\" \"--echo\" \"--jobs\" \"jobs\" \"--progress\" \"--verbose\" \"--version\" \"--install-missing\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_archivecleanup.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_archivecleanup` removes older WAL files from `PostgreSQL` archives.\n#[derive(Clone, Debug, Default)]\npub struct PgArchiveCleanupBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    debug: bool,\n    dry_run: bool,\n    version: bool,\n    ext: Option<OsString>,\n    help: bool,\n    archive_location: Option<OsString>,\n    oldest_kept_wal_file: Option<OsString>,\n}\n\nimpl PgArchiveCleanupBuilder {\n    /// Create a new [`PgArchiveCleanupBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgArchiveCleanupBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// generate debug output (verbose mode)\n    #[must_use]\n    pub fn debug(mut self) -> Self {\n        self.debug = true;\n        self\n    }\n\n    /// dry run, show the names of the files that would be removed\n    #[must_use]\n    pub fn dry_run(mut self) -> Self {\n        self.dry_run = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// clean up files if they have this extension\n    #[must_use]\n    pub fn ext<S: AsRef<OsStr>>(mut self, ext: S) -> Self {\n        self.ext = Some(ext.as_ref().to_os_string());\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// archive location\n    #[must_use]\n    
pub fn archive_location<S: AsRef<OsStr>>(mut self, archive_location: S) -> Self {\n        self.archive_location = Some(archive_location.as_ref().to_os_string());\n        self\n    }\n\n    /// oldest kept WAL file\n    #[must_use]\n    pub fn oldest_kept_wal_file<S: AsRef<OsStr>>(mut self, oldest_kept_wal_file: S) -> Self {\n        self.oldest_kept_wal_file = Some(oldest_kept_wal_file.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgArchiveCleanupBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_archivecleanup\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.debug {\n            args.push(\"-d\".into());\n        }\n\n        if self.dry_run {\n            args.push(\"-n\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if let Some(ext) = &self.ext {\n            args.push(\"-x\".into());\n            args.push(ext.into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(archive_location) = &self.archive_location {\n            args.push(archive_location.into());\n        }\n\n        if let Some(oldest_kept_wal_file) = &self.oldest_kept_wal_file {\n            args.push(oldest_kept_wal_file.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n   
     self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgArchiveCleanupBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_archivecleanup\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgArchiveCleanupBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_archivecleanup\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_archivecleanup\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgArchiveCleanupBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .debug()\n            .dry_run()\n            .version()\n            .ext(\"partial\")\n            .help()\n            .archive_location(\"archive_location\")\n            .oldest_kept_wal_file(\"000000010000000000000001\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_archivecleanup\" \"-d\" \"-n\" \"--version\" \"-x\" \"partial\" \"--help\" \"archive_location\" \"000000010000000000000001\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_basebackup.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_basebackup` takes a base backup of a running `PostgreSQL` server.\n#[derive(Clone, Debug, Default)]\npub struct PgBaseBackupBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    pgdata: Option<PathBuf>,\n    format: Option<OsString>,\n    max_rate: Option<OsString>,\n    write_recovery_conf: bool,\n    target: Option<OsString>,\n    tablespace_mapping: Option<OsString>,\n    waldir: Option<OsString>,\n    wal_method: Option<OsString>,\n    gzip: bool,\n    compress: Option<OsString>,\n    checkpoint: Option<OsString>,\n    create_slot: bool,\n    label: Option<OsString>,\n    no_clean: bool,\n    no_sync: bool,\n    progress: bool,\n    slot: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    manifest_checksums: Option<OsString>,\n    manifest_force_encode: bool,\n    no_estimate_size: bool,\n    no_manifest: bool,\n    no_slot: bool,\n    no_verify_checksums: bool,\n    help: bool,\n    dbname: Option<OsString>,\n    host: Option<OsString>,\n    port: Option<u16>,\n    status_interval: Option<OsString>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n}\n\nimpl PgBaseBackupBuilder {\n    /// Create a new [`PgBaseBackupBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgBaseBackupBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = 
builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// receive base backup into directory\n    #[must_use]\n    pub fn pgdata<P: Into<PathBuf>>(mut self, pgdata: P) -> Self {\n        self.pgdata = Some(pgdata.into());\n        self\n    }\n\n    /// output format (plain (default), tar)\n    #[must_use]\n    pub fn format<S: AsRef<OsStr>>(mut self, format: S) -> Self {\n        self.format = Some(format.as_ref().to_os_string());\n        self\n    }\n\n    /// maximum transfer rate to transfer data directory (in kB/s, or use suffix \"k\" or \"M\")\n    #[must_use]\n    pub fn max_rate<S: AsRef<OsStr>>(mut self, max_rate: S) -> Self {\n        self.max_rate = Some(max_rate.as_ref().to_os_string());\n        self\n    }\n\n    /// write configuration for replication\n    #[must_use]\n    pub fn write_recovery_conf(mut self) -> Self {\n        self.write_recovery_conf = true;\n        self\n    }\n\n    /// backup target (if other than client)\n    #[must_use]\n    pub fn target<S: AsRef<OsStr>>(mut self, target: S) -> Self {\n        self.target = Some(target.as_ref().to_os_string());\n        self\n    }\n\n    /// relocate tablespace in OLDDIR to NEWDIR\n    #[must_use]\n    pub fn tablespace_mapping<S: AsRef<OsStr>>(mut self, tablespace_mapping: S) -> Self {\n        self.tablespace_mapping = Some(tablespace_mapping.as_ref().to_os_string());\n        self\n    }\n\n    /// location for the write-ahead log directory\n    #[must_use]\n    pub fn waldir<S: AsRef<OsStr>>(mut self, waldir: S) -> Self {\n        self.waldir = Some(waldir.as_ref().to_os_string());\n        self\n    }\n\n    /// include required WAL files with specified method\n    #[must_use]\n    pub fn wal_method<S: AsRef<OsStr>>(mut self, wal_method: S) -> 
Self {\n        self.wal_method = Some(wal_method.as_ref().to_os_string());\n        self\n    }\n\n    /// compress tar output\n    #[must_use]\n    pub fn gzip(mut self) -> Self {\n        self.gzip = true;\n        self\n    }\n\n    /// compress on client or server as specified\n    #[must_use]\n    pub fn compress<S: AsRef<OsStr>>(mut self, compress: S) -> Self {\n        self.compress = Some(compress.as_ref().to_os_string());\n        self\n    }\n\n    /// set fast or spread checkpointing\n    #[must_use]\n    pub fn checkpoint<S: AsRef<OsStr>>(mut self, checkpoint: S) -> Self {\n        self.checkpoint = Some(checkpoint.as_ref().to_os_string());\n        self\n    }\n\n    /// create replication slot\n    #[must_use]\n    pub fn create_slot(mut self) -> Self {\n        self.create_slot = true;\n        self\n    }\n\n    /// set backup label\n    #[must_use]\n    pub fn label<S: AsRef<OsStr>>(mut self, label: S) -> Self {\n        self.label = Some(label.as_ref().to_os_string());\n        self\n    }\n\n    /// do not clean up after errors\n    #[must_use]\n    pub fn no_clean(mut self) -> Self {\n        self.no_clean = true;\n        self\n    }\n\n    /// do not wait for changes to be written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// show progress information\n    #[must_use]\n    pub fn progress(mut self) -> Self {\n        self.progress = true;\n        self\n    }\n\n    /// replication slot to use\n    #[must_use]\n    pub fn slot<S: AsRef<OsStr>>(mut self, slot: S) -> Self {\n        self.slot = Some(slot.as_ref().to_os_string());\n        self\n    }\n\n    /// output verbose messages\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// 
use algorithm for manifest checksums\n    #[must_use]\n    pub fn manifest_checksums<S: AsRef<OsStr>>(mut self, manifest_checksums: S) -> Self {\n        self.manifest_checksums = Some(manifest_checksums.as_ref().to_os_string());\n        self\n    }\n\n    /// hex encode all file names in manifest\n    #[must_use]\n    pub fn manifest_force_encode(mut self) -> Self {\n        self.manifest_force_encode = true;\n        self\n    }\n\n    /// do not estimate backup size in server side\n    #[must_use]\n    pub fn no_estimate_size(mut self) -> Self {\n        self.no_estimate_size = true;\n        self\n    }\n\n    /// suppress generation of backup manifest\n    #[must_use]\n    pub fn no_manifest(mut self) -> Self {\n        self.no_manifest = true;\n        self\n    }\n\n    /// prevent creation of temporary replication slot\n    #[must_use]\n    pub fn no_slot(mut self) -> Self {\n        self.no_slot = true;\n        self\n    }\n\n    /// do not verify checksums\n    #[must_use]\n    pub fn no_verify_checksums(mut self) -> Self {\n        self.no_verify_checksums = true;\n        self\n    }\n\n    /// show this help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// connection string\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// time between status packets sent to server (in seconds)\n    #[must_use]\n    pub fn status_interval<S: AsRef<OsStr>>(mut self, status_interval: S) -> Self {\n        
self.status_interval = Some(status_interval.as_ref().to_os_string());\n        self\n    }\n\n    /// connect as specified database user\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt (should happen automatically)\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgBaseBackupBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_basebackup\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(pgdata) = &self.pgdata {\n            args.push(\"--pgdata\".into());\n            args.push(pgdata.into());\n        }\n\n        if let Some(format) = &self.format {\n            args.push(\"--format\".into());\n            args.push(format.into());\n        }\n\n        if let Some(max_rate) = &self.max_rate {\n            args.push(\"--max-rate\".into());\n            args.push(max_rate.into());\n        }\n\n        if self.write_recovery_conf {\n            args.push(\"--write-recovery-conf\".into());\n        }\n\n        if let Some(target) = &self.target {\n            args.push(\"--target\".into());\n            
args.push(target.into());\n        }\n\n        if let Some(tablespace_mapping) = &self.tablespace_mapping {\n            args.push(\"--tablespace-mapping\".into());\n            args.push(tablespace_mapping.into());\n        }\n\n        if let Some(waldir) = &self.waldir {\n            args.push(\"--waldir\".into());\n            args.push(waldir.into());\n        }\n\n        if let Some(wal_method) = &self.wal_method {\n            args.push(\"--wal-method\".into());\n            args.push(wal_method.into());\n        }\n\n        if self.gzip {\n            args.push(\"--gzip\".into());\n        }\n\n        if let Some(compress) = &self.compress {\n            args.push(\"--compress\".into());\n            args.push(compress.into());\n        }\n\n        if let Some(checkpoint) = &self.checkpoint {\n            args.push(\"--checkpoint\".into());\n            args.push(checkpoint.into());\n        }\n\n        if self.create_slot {\n            args.push(\"--create-slot\".into());\n        }\n\n        if let Some(label) = &self.label {\n            args.push(\"--label\".into());\n            args.push(label.into());\n        }\n\n        if self.no_clean {\n            args.push(\"--no-clean\".into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if self.progress {\n            args.push(\"--progress\".into());\n        }\n\n        if let Some(slot) = &self.slot {\n            args.push(\"--slot\".into());\n            args.push(slot.into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if let Some(manifest_checksums) = &self.manifest_checksums {\n            args.push(\"--manifest-checksums\".into());\n            args.push(manifest_checksums.into());\n        }\n\n        if self.manifest_force_encode {\n            
args.push(\"--manifest-force-encode\".into());\n        }\n\n        if self.no_estimate_size {\n            args.push(\"--no-estimate-size\".into());\n        }\n\n        if self.no_manifest {\n            args.push(\"--no-manifest\".into());\n        }\n\n        if self.no_slot {\n            args.push(\"--no-slot\".into());\n        }\n\n        if self.no_verify_checksums {\n            args.push(\"--no-verify-checksums\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(status_interval) = &self.status_interval {\n            args.push(\"--status-interval\".into());\n            args.push(status_interval.into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            
.push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgBaseBackupBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_basebackup\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgBaseBackupBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_basebackup\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_basebackup\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgBaseBackupBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_basebackup\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_basebackup\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgBaseBackupBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .pgdata(\"pgdata\")\n            .format(\"plain\")\n            .max_rate(\"100M\")\n       
     .write_recovery_conf()\n            .target(\"localhost\")\n            .tablespace_mapping(\"tablespace_mapping\")\n            .waldir(\"waldir\")\n            .wal_method(\"stream\")\n            .gzip()\n            .compress(\"client\")\n            .checkpoint(\"fast\")\n            .create_slot()\n            .label(\"my_backup\")\n            .no_clean()\n            .no_sync()\n            .progress()\n            .slot(\"my_slot\")\n            .verbose()\n            .version()\n            .manifest_checksums(\"sha256\")\n            .manifest_force_encode()\n            .no_estimate_size()\n            .no_manifest()\n            .no_slot()\n            .no_verify_checksums()\n            .help()\n            .dbname(\"postgres\")\n            .host(\"localhost\")\n            .port(5432)\n            .status_interval(\"10\")\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_basebackup\" \"--pgdata\" \"pgdata\" \"--format\" \"plain\" \"--max-rate\" \"100M\" \"--write-recovery-conf\" \"--target\" \"localhost\" \"--tablespace-mapping\" \"tablespace_mapping\" \"--waldir\" \"waldir\" \"--wal-method\" \"stream\" \"--gzip\" \"--compress\" \"client\" \"--checkpoint\" \"fast\" \"--create-slot\" \"--label\" \"my_backup\" \"--no-clean\" \"--no-sync\" \"--progress\" \"--slot\" \"my_slot\" \"--verbose\" \"--version\" \"--manifest-checksums\" \"sha256\" \"--manifest-force-encode\" \"--no-estimate-size\" \"--no-manifest\" \"--no-slot\" \"--no-verify-checksums\" \"--help\" \"--dbname\" \"postgres\" \"--host\" \"localhost\" \"--port\" \"5432\" 
\"--status-interval\" \"10\" \"--username\" \"postgres\" \"--no-password\" \"--password\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_checksums.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_checksums` enables, disables, or verifies data checksums in a `PostgreSQL` database cluster.\n#[derive(Clone, Debug, Default)]\npub struct PgChecksumsBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    pgdata: Option<PathBuf>,\n    check: bool,\n    disable: bool,\n    enable: bool,\n    filenode: Option<OsString>,\n    no_sync: bool,\n    progress: bool,\n    verbose: bool,\n    version: bool,\n    help: bool,\n}\n\nimpl PgChecksumsBuilder {\n    /// Create a new [`PgChecksumsBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgChecksumsBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// data directory\n    #[must_use]\n    pub fn pgdata<P: Into<PathBuf>>(mut self, pgdata: P) -> Self {\n        self.pgdata = Some(pgdata.into());\n        self\n    }\n\n    /// check data checksums (default)\n    #[must_use]\n    pub fn check(mut self) -> Self {\n        self.check = true;\n        self\n    }\n\n    /// disable data checksums\n    #[must_use]\n    pub fn disable(mut self) -> Self {\n        self.disable = true;\n        self\n    }\n\n    /// enable data checksums\n    #[must_use]\n    pub fn enable(mut self) -> Self {\n        self.enable = true;\n        self\n    }\n\n    /// check only relation with specified filenode\n    #[must_use]\n    pub fn filenode<S: AsRef<OsStr>>(mut self, filenode: S) -> Self {\n        self.filenode = Some(filenode.as_ref().to_os_string());\n        self\n    }\n\n    /// do 
not wait for changes to be written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// show progress information\n    #[must_use]\n    pub fn progress(mut self) -> Self {\n        self.progress = true;\n        self\n    }\n\n    /// output verbose messages\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgChecksumsBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_checksums\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(pgdata) = &self.pgdata {\n            args.push(\"--pgdata\".into());\n            args.push(pgdata.into());\n        }\n\n        if self.check {\n            args.push(\"--check\".into());\n        }\n\n        if self.disable {\n            args.push(\"--disable\".into());\n        }\n\n        if self.enable {\n            args.push(\"--enable\".into());\n        }\n\n        if let Some(filenode) = &self.filenode {\n            args.push(\"--filenode\".into());\n            args.push(filenode.into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if self.progress {\n            args.push(\"--progress\".into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if 
self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgChecksumsBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_checksums\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgChecksumsBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_checksums\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_checksums\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgChecksumsBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .pgdata(\"pgdata\")\n            .check()\n            .disable()\n            .enable()\n            .filenode(\"12345\")\n            .no_sync()\n            .progress()\n            .verbose()\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let 
command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_checksums\" \"--pgdata\" \"pgdata\" \"--check\" \"--disable\" \"--enable\" \"--filenode\" \"12345\" \"--no-sync\" \"--progress\" \"--verbose\" \"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_config.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_config` provides information about the installed version of `PostgreSQL`.\n#[derive(Clone, Debug, Default)]\npub struct PgConfigBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    bindir: bool,\n    docdir: bool,\n    htmldir: bool,\n    includedir: bool,\n    pkgincludedir: bool,\n    includedir_server: bool,\n    libdir: bool,\n    pkglibdir: bool,\n    localedir: bool,\n    mandir: bool,\n    sharedir: bool,\n    sysconfdir: bool,\n    pgxs: bool,\n    configure: bool,\n    cc: bool,\n    cppflags: bool,\n    cflags: bool,\n    cflags_sl: bool,\n    ldflags: bool,\n    ldflags_ex: bool,\n    ldflags_sl: bool,\n    libs: bool,\n    version: bool,\n    help: bool,\n}\n\nimpl PgConfigBuilder {\n    /// Create a new [`PgConfigBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgConfigBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Set the bindir\n    #[must_use]\n    pub fn bindir(mut self) -> Self {\n        self.bindir = true;\n        self\n    }\n\n    /// Set the docdir\n    #[must_use]\n    pub fn docdir(mut self) -> Self {\n        self.docdir = true;\n        self\n    }\n\n    /// Set the htmldir\n    #[must_use]\n    pub fn htmldir(mut self) -> Self {\n        self.htmldir = true;\n        self\n    }\n\n    /// Set the includedir\n    #[must_use]\n    pub fn includedir(mut self) -> Self {\n        self.includedir = true;\n        self\n    }\n\n    /// Set the pkgincludedir\n    #[must_use]\n  
  pub fn pkgincludedir(mut self) -> Self {\n        self.pkgincludedir = true;\n        self\n    }\n\n    /// Set the `includedir_server`\n    #[must_use]\n    pub fn includedir_server(mut self) -> Self {\n        self.includedir_server = true;\n        self\n    }\n\n    /// Set the libdir\n    #[must_use]\n    pub fn libdir(mut self) -> Self {\n        self.libdir = true;\n        self\n    }\n\n    /// Set the pkglibdir\n    #[must_use]\n    pub fn pkglibdir(mut self) -> Self {\n        self.pkglibdir = true;\n        self\n    }\n\n    /// Set the localedir\n    #[must_use]\n    pub fn localedir(mut self) -> Self {\n        self.localedir = true;\n        self\n    }\n\n    /// Set the mandir\n    #[must_use]\n    pub fn mandir(mut self) -> Self {\n        self.mandir = true;\n        self\n    }\n\n    /// Set the sharedir\n    #[must_use]\n    pub fn sharedir(mut self) -> Self {\n        self.sharedir = true;\n        self\n    }\n\n    /// Set the sysconfdir\n    #[must_use]\n    pub fn sysconfdir(mut self) -> Self {\n        self.sysconfdir = true;\n        self\n    }\n\n    /// Set the pgxs\n    #[must_use]\n    pub fn pgxs(mut self) -> Self {\n        self.pgxs = true;\n        self\n    }\n\n    /// Set the configure flag\n    #[must_use]\n    pub fn configure(mut self) -> Self {\n        self.configure = true;\n        self\n    }\n\n    /// Set the cc flag\n    #[must_use]\n    pub fn cc(mut self) -> Self {\n        self.cc = true;\n        self\n    }\n\n    /// Set the cppflags flag\n    #[must_use]\n    pub fn cppflags(mut self) -> Self {\n        self.cppflags = true;\n        self\n    }\n\n    /// Set the cflags flag\n    #[must_use]\n    pub fn cflags(mut self) -> Self {\n        self.cflags = true;\n        self\n    }\n\n    /// Set the `cflags_sl` flag\n    #[must_use]\n    pub fn cflags_sl(mut self) -> Self {\n        self.cflags_sl = true;\n        self\n    }\n\n    /// Set the ldflags flag\n    #[must_use]\n    pub fn ldflags(mut self) 
-> Self {\n        self.ldflags = true;\n        self\n    }\n\n    /// Set the `ldflags_ex` flag\n    #[must_use]\n    pub fn ldflags_ex(mut self) -> Self {\n        self.ldflags_ex = true;\n        self\n    }\n\n    /// Set the `ldflags_sl` flag\n    #[must_use]\n    pub fn ldflags_sl(mut self) -> Self {\n        self.ldflags_sl = true;\n        self\n    }\n\n    /// Set the libs flag\n    #[must_use]\n    pub fn libs(mut self) -> Self {\n        self.libs = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgConfigBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_config\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.bindir {\n            args.push(\"--bindir\".into());\n        }\n\n        if self.docdir {\n            args.push(\"--docdir\".into());\n        }\n\n        if self.htmldir {\n            args.push(\"--htmldir\".into());\n        }\n\n        if self.includedir {\n            args.push(\"--includedir\".into());\n        }\n\n        if self.pkgincludedir {\n            args.push(\"--pkgincludedir\".into());\n        }\n\n        if self.includedir_server {\n            args.push(\"--includedir-server\".into());\n        }\n\n        if self.libdir {\n            args.push(\"--libdir\".into());\n        }\n\n        if self.pkglibdir {\n            args.push(\"--pkglibdir\".into());\n        }\n\n        if self.localedir {\n            
args.push(\"--localedir\".into());\n        }\n\n        if self.mandir {\n            args.push(\"--mandir\".into());\n        }\n\n        if self.sharedir {\n            args.push(\"--sharedir\".into());\n        }\n\n        if self.sysconfdir {\n            args.push(\"--sysconfdir\".into());\n        }\n\n        if self.pgxs {\n            args.push(\"--pgxs\".into());\n        }\n\n        if self.configure {\n            args.push(\"--configure\".into());\n        }\n\n        if self.cc {\n            args.push(\"--cc\".into());\n        }\n\n        if self.cppflags {\n            args.push(\"--cppflags\".into());\n        }\n\n        if self.cflags {\n            args.push(\"--cflags\".into());\n        }\n\n        if self.cflags_sl {\n            args.push(\"--cflags_sl\".into());\n        }\n\n        if self.ldflags {\n            args.push(\"--ldflags\".into());\n        }\n\n        if self.ldflags_ex {\n            args.push(\"--ldflags_ex\".into());\n        }\n\n        if self.ldflags_sl {\n            args.push(\"--ldflags_sl\".into());\n        }\n\n        if self.libs {\n            args.push(\"--libs\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = 
PgConfigBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_config\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgConfigBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_config\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_config\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgConfigBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .bindir()\n            .docdir()\n            .htmldir()\n            .includedir()\n            .pkgincludedir()\n            .includedir_server()\n            .libdir()\n            .pkglibdir()\n            .localedir()\n            .mandir()\n            .sharedir()\n            .sysconfdir()\n            .pgxs()\n            .configure()\n            .cc()\n            .cppflags()\n            .cflags()\n            .cflags_sl()\n            .ldflags()\n            .ldflags_ex()\n            .ldflags_sl()\n            .libs()\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_config\" \"--bindir\" \"--docdir\" \"--htmldir\" \"--includedir\" \"--pkgincludedir\" \"--includedir-server\" \"--libdir\" \"--pkglibdir\" \"--localedir\" \"--mandir\" \"--sharedir\" \"--sysconfdir\" \"--pgxs\" \"--configure\" \"--cc\" \"--cppflags\" \"--cflags\" \"--cflags_sl\" \"--ldflags\" \"--ldflags_ex\" \"--ldflags_sl\" \"--libs\" 
\"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_controldata.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_controldata` displays control information of a `PostgreSQL` database cluster.\n#[derive(Clone, Debug, Default)]\npub struct PgControlDataBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    pgdata: Option<PathBuf>,\n    version: bool,\n    help: bool,\n}\n\nimpl PgControlDataBuilder {\n    /// Create a new [`PgControlDataBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgControlDataBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Set the data directory\n    #[must_use]\n    pub fn pgdata<P: Into<PathBuf>>(mut self, pgdata: P) -> Self {\n        self.pgdata = Some(pgdata.into());\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgControlDataBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_controldata\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(pgdata) = &self.pgdata {\n            args.push(\"--pgdata\".into());\n            
args.push(pgdata.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgControlDataBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_controldata\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgControlDataBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_controldata\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_controldata\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgControlDataBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .pgdata(\"pgdata\")\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(r#\"{command_prefix}\"pg_controldata\" 
\"--pgdata\" \"pgdata\" \"--version\" \"--help\"\"#),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_ctl.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::fmt::Display;\nuse std::path::PathBuf;\n\n/// `pg_ctl` is a utility to initialize, start, stop, or control a `PostgreSQL` server.\n#[derive(Clone, Debug, Default)]\npub struct PgCtlBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    mode: Option<Mode>,\n    pgdata: Option<PathBuf>,\n    silent: bool,\n    timeout: Option<u16>,\n    version: bool,\n    wait: bool,\n    no_wait: bool,\n    help: bool,\n    core_files: bool,\n    log: Option<PathBuf>,\n    options: Vec<OsString>,\n    path_to_postgres: Option<OsString>,\n    shutdown_mode: Option<ShutdownMode>,\n    signal: Option<OsString>,\n    pid: Option<OsString>,\n}\n\n#[derive(Clone, Debug)]\npub enum Mode {\n    InitDb,\n    Kill,\n    LogRotate,\n    Promote,\n    Restart,\n    Reload,\n    Start,\n    Stop,\n    Status,\n}\n\nimpl Display for Mode {\n    fn fmt(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Mode::InitDb => write!(formatter, \"initdb\"),\n            Mode::Kill => write!(formatter, \"kill\"),\n            Mode::LogRotate => write!(formatter, \"logrotate\"),\n            Mode::Promote => write!(formatter, \"promote\"),\n            Mode::Restart => write!(formatter, \"restart\"),\n            Mode::Reload => write!(formatter, \"reload\"),\n            Mode::Start => write!(formatter, \"start\"),\n            Mode::Stop => write!(formatter, \"stop\"),\n            Mode::Status => write!(formatter, \"status\"),\n        }\n    }\n}\n\n#[derive(Clone, Debug)]\npub enum ShutdownMode {\n    Smart,\n    Fast,\n    Immediate,\n}\n\nimpl Display for ShutdownMode {\n    fn fmt(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            ShutdownMode::Smart => write!(formatter, \"smart\"),\n            ShutdownMode::Fast => 
write!(formatter, \"fast\"),\n            ShutdownMode::Immediate => write!(formatter, \"immediate\"),\n        }\n    }\n}\n\nimpl PgCtlBuilder {\n    /// Create a new [`PgCtlBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgCtlBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// mode\n    #[must_use]\n    pub fn mode(mut self, mode: Mode) -> Self {\n        self.mode = Some(mode);\n        self\n    }\n\n    /// location of the database storage area\n    #[must_use]\n    pub fn pgdata<P: Into<PathBuf>>(mut self, pgdata: P) -> Self {\n        self.pgdata = Some(pgdata.into());\n        self\n    }\n\n    /// only print errors, no informational messages\n    #[must_use]\n    pub fn silent(mut self) -> Self {\n        self.silent = true;\n        self\n    }\n\n    /// seconds to wait when using -w option\n    #[must_use]\n    pub fn timeout(mut self, timeout: u16) -> Self {\n        self.timeout = Some(timeout);\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// wait until operation completes (default)\n    #[must_use]\n    pub fn wait(mut self) -> Self {\n        self.wait = true;\n        self\n    }\n\n    /// do not wait until operation completes\n    #[must_use]\n    pub fn no_wait(mut self) -> Self {\n        self.no_wait = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// allow postgres to produce core files\n    #[must_use]\n    pub fn 
core_files(mut self) -> Self {\n        self.core_files = true;\n        self\n    }\n\n    /// write (or append) server log to FILENAME\n    #[must_use]\n    pub fn log<P: Into<PathBuf>>(mut self, log: P) -> Self {\n        self.log = Some(log.into());\n        self\n    }\n\n    /// command line options to pass to postgres (`PostgreSQL` server executable) or initdb\n    #[must_use]\n    pub fn options<S: AsRef<OsStr>>(mut self, options: &[S]) -> Self {\n        self.options = options.iter().map(|s| s.as_ref().to_os_string()).collect();\n        self\n    }\n\n    /// normally not necessary\n    #[must_use]\n    pub fn path_to_postgres<S: AsRef<OsStr>>(mut self, path_to_postgres: S) -> Self {\n        self.path_to_postgres = Some(path_to_postgres.as_ref().to_os_string());\n        self\n    }\n\n    /// MODE can be \"smart\", \"fast\", or \"immediate\"\n    #[must_use]\n    pub fn shutdown_mode(mut self, shutdown_mode: ShutdownMode) -> Self {\n        self.shutdown_mode = Some(shutdown_mode);\n        self\n    }\n\n    /// SIGNALNAME\n    #[must_use]\n    pub fn signal<S: AsRef<OsStr>>(mut self, signal: S) -> Self {\n        self.signal = Some(signal.as_ref().to_os_string());\n        self\n    }\n\n    /// PID\n    #[must_use]\n    pub fn pid<S: AsRef<OsStr>>(mut self, pid: S) -> Self {\n        self.pid = Some(pid.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgCtlBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_ctl\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(mode) = &self.mode {\n            args.push(mode.to_string().into());\n        }\n\n        if let Some(pgdata) = &self.pgdata {\n            
args.push(\"--pgdata\".into());\n            args.push(pgdata.into());\n        }\n\n        if self.silent {\n            args.push(\"--silent\".into());\n        }\n\n        if let Some(timeout) = &self.timeout {\n            args.push(\"--timeout\".into());\n            args.push(timeout.to_string().into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.wait {\n            args.push(\"--wait\".into());\n        }\n\n        if self.no_wait {\n            args.push(\"--no-wait\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if self.core_files {\n            args.push(\"--core-files\".into());\n        }\n\n        if let Some(log) = &self.log {\n            args.push(\"--log\".into());\n            args.push(log.into());\n        }\n\n        for option in &self.options {\n            args.push(\"-o\".into());\n            args.push(option.into());\n        }\n\n        if let Some(path_to_postgres) = &self.path_to_postgres {\n            args.push(\"-p\".into());\n            args.push(path_to_postgres.into());\n        }\n\n        if let Some(shutdown_mode) = &self.shutdown_mode {\n            args.push(\"--mode\".into());\n            args.push(shutdown_mode.to_string().into());\n        }\n\n        if let Some(signal) = &self.signal {\n            args.push(signal.into());\n        }\n\n        if let Some(pid) = &self.pid {\n            args.push(pid.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use 
super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_display_mode() {\n        assert_eq!(\"initdb\", Mode::InitDb.to_string());\n        assert_eq!(\"kill\", Mode::Kill.to_string());\n        assert_eq!(\"logrotate\", Mode::LogRotate.to_string());\n        assert_eq!(\"promote\", Mode::Promote.to_string());\n        assert_eq!(\"restart\", Mode::Restart.to_string());\n        assert_eq!(\"reload\", Mode::Reload.to_string());\n        assert_eq!(\"start\", Mode::Start.to_string());\n        assert_eq!(\"stop\", Mode::Stop.to_string());\n        assert_eq!(\"status\", Mode::Status.to_string());\n    }\n\n    #[test]\n    fn test_display_shutdown_mode() {\n        assert_eq!(\"smart\", ShutdownMode::Smart.to_string());\n        assert_eq!(\"fast\", ShutdownMode::Fast.to_string());\n        assert_eq!(\"immediate\", ShutdownMode::Immediate.to_string());\n    }\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgCtlBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_ctl\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgCtlBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_ctl\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_ctl\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgCtlBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .mode(Mode::Start)\n            .pgdata(\"pgdata\")\n            .silent()\n            .timeout(60)\n            .version()\n            .wait()\n            .no_wait()\n            .help()\n            .core_files()\n          
  .log(\"log\")\n            .options(&[\"-c log_connections=on\"])\n            .path_to_postgres(\"path_to_postgres\")\n            .shutdown_mode(ShutdownMode::Smart)\n            .signal(\"HUP\")\n            .pid(\"12345\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_ctl\" \"start\" \"--pgdata\" \"pgdata\" \"--silent\" \"--timeout\" \"60\" \"--version\" \"--wait\" \"--no-wait\" \"--help\" \"--core-files\" \"--log\" \"log\" \"-o\" \"-c log_connections=on\" \"-p\" \"path_to_postgres\" \"--mode\" \"smart\" \"HUP\" \"12345\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_dump.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_dump` dumps a database as a text file or to other formats.\n#[derive(Clone, Debug, Default)]\npub struct PgDumpBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    data_only: bool,\n    large_objects: bool,\n    no_large_objects: bool,\n    clean: bool,\n    create: bool,\n    extension: Option<OsString>,\n    encoding: Option<OsString>,\n    file: Option<OsString>,\n    format: Option<OsString>,\n    jobs: Option<OsString>,\n    schema: Option<OsString>,\n    exclude_schema: Option<OsString>,\n    no_owner: bool,\n    no_reconnect: bool,\n    schema_only: bool,\n    superuser: Option<OsString>,\n    table: Option<OsString>,\n    exclude_table: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    no_privileges: bool,\n    compress: Option<OsString>,\n    binary_upgrade: bool,\n    column_inserts: bool,\n    attribute_inserts: bool,\n    disable_dollar_quoting: bool,\n    disable_triggers: bool,\n    enable_row_security: bool,\n    exclude_table_data_and_children: Option<OsString>,\n    extra_float_digits: Option<OsString>,\n    if_exists: bool,\n    include_foreign_data: Option<OsString>,\n    inserts: bool,\n    load_via_partition_root: bool,\n    lock_wait_timeout: Option<u16>,\n    no_comments: bool,\n    no_publications: bool,\n    no_security_labels: bool,\n    no_subscriptions: bool,\n    no_table_access_method: bool,\n    no_tablespaces: bool,\n    no_toast_compression: bool,\n    no_unlogged_table_data: bool,\n    on_conflict_do_nothing: bool,\n    quote_all_identifiers: bool,\n    rows_per_insert: Option<u64>,\n    section: Option<OsString>,\n    serializable_deferrable: bool,\n    snapshot: Option<OsString>,\n    strict_names: bool,\n    table_and_children: Option<OsString>,\n    use_set_session_authorization: bool,\n    help: bool,\n    dbname: 
Option<OsString>,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    role: Option<OsString>,\n}\n\nimpl PgDumpBuilder {\n    /// Create a new [`PgDumpBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgDumpBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Dump only the data, not the schema\n    #[must_use]\n    pub fn data_only(mut self) -> Self {\n        self.data_only = true;\n        self\n    }\n\n    /// Include large objects in the dump\n    #[must_use]\n    pub fn large_objects(mut self) -> Self {\n        self.large_objects = true;\n        self\n    }\n\n    /// Do not dump large objects\n    #[must_use]\n    pub fn no_large_objects(mut self) -> Self {\n        self.no_large_objects = true;\n        self\n    }\n\n    /// Output commands to clean (drop) database objects prior to outputting the commands for creating them\n    #[must_use]\n    pub fn clean(mut self) -> Self {\n        self.clean = true;\n        self\n    }\n\n    /// Output commands to create the database objects (data definition)\n    #[must_use]\n    pub fn create(mut self) -> Self {\n        self.create = true;\n        self\n    
}\n\n    /// Dump data for the named extension\n    #[must_use]\n    pub fn extension<S: AsRef<OsStr>>(mut self, extension: S) -> Self {\n        self.extension = Some(extension.as_ref().to_os_string());\n        self\n    }\n\n    /// Dump data in encoding ENCODING\n    #[must_use]\n    pub fn encoding<S: AsRef<OsStr>>(mut self, encoding: S) -> Self {\n        self.encoding = Some(encoding.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the output file or directory name\n    #[must_use]\n    pub fn file<S: AsRef<OsStr>>(mut self, file: S) -> Self {\n        self.file = Some(file.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the output file format (custom, directory, tar, plain text (default))\n    #[must_use]\n    pub fn format<S: AsRef<OsStr>>(mut self, format: S) -> Self {\n        self.format = Some(format.as_ref().to_os_string());\n        self\n    }\n\n    /// Use this many parallel jobs to dump\n    #[must_use]\n    pub fn jobs<S: AsRef<OsStr>>(mut self, jobs: S) -> Self {\n        self.jobs = Some(jobs.as_ref().to_os_string());\n        self\n    }\n\n    /// Dump data for the named schema(s) only\n    #[must_use]\n    pub fn schema<S: AsRef<OsStr>>(mut self, schema: S) -> Self {\n        self.schema = Some(schema.as_ref().to_os_string());\n        self\n    }\n\n    /// Do not dump the named schema(s)\n    #[must_use]\n    pub fn exclude_schema<S: AsRef<OsStr>>(mut self, exclude_schema: S) -> Self {\n        self.exclude_schema = Some(exclude_schema.as_ref().to_os_string());\n        self\n    }\n\n    /// Do not output commands to set ownership of objects to match the original database\n    #[must_use]\n    pub fn no_owner(mut self) -> Self {\n        self.no_owner = true;\n        self\n    }\n\n    /// Do not reconnect to the database\n    #[must_use]\n    pub fn no_reconnect(mut self) -> Self {\n        self.no_reconnect = true;\n        self\n    }\n\n    /// Dump only 
the schema, no data\n    #[must_use]\n    pub fn schema_only(mut self) -> Self {\n        self.schema_only = true;\n        self\n    }\n\n    /// Superuser user name to use in plain-text format\n    #[must_use]\n    pub fn superuser<S: AsRef<OsStr>>(mut self, superuser: S) -> Self {\n        self.superuser = Some(superuser.as_ref().to_os_string());\n        self\n    }\n\n    /// Dump data for the named table(s) only\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, table: S) -> Self {\n        self.table = Some(table.as_ref().to_os_string());\n        self\n    }\n\n    /// Do not dump the named table(s)\n    #[must_use]\n    pub fn exclude_table<S: AsRef<OsStr>>(mut self, exclude_table: S) -> Self {\n        self.exclude_table = Some(exclude_table.as_ref().to_os_string());\n        self\n    }\n\n    /// Enable verbose mode\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Do not output commands to set object privileges\n    #[must_use]\n    pub fn no_privileges(mut self) -> Self {\n        self.no_privileges = true;\n        self\n    }\n\n    /// Set the compression level to use\n    #[must_use]\n    pub fn compress<S: AsRef<OsStr>>(mut self, compress: S) -> Self {\n        self.compress = Some(compress.as_ref().to_os_string());\n        self\n    }\n\n    /// Dump data in a format suitable for binary upgrade\n    #[must_use]\n    pub fn binary_upgrade(mut self) -> Self {\n        self.binary_upgrade = true;\n        self\n    }\n\n    /// Dump data as INSERT commands with column names\n    #[must_use]\n    pub fn column_inserts(mut self) -> Self {\n        self.column_inserts = true;\n        self\n    }\n\n    /// Dump data as INSERT commands with column names (alias for `column_inserts`)\n    #[must_use]\n    pub fn attribute_inserts(mut self) 
-> Self {\n        self.attribute_inserts = true;\n        self\n    }\n\n    /// Disable dollar quoting, use SQL standard quoting\n    #[must_use]\n    pub fn disable_dollar_quoting(mut self) -> Self {\n        self.disable_dollar_quoting = true;\n        self\n    }\n\n    /// Disable triggers during data-only restore\n    #[must_use]\n    pub fn disable_triggers(mut self) -> Self {\n        self.disable_triggers = true;\n        self\n    }\n\n    /// Dump data with row security enabled\n    #[must_use]\n    pub fn enable_row_security(mut self) -> Self {\n        self.enable_row_security = true;\n        self\n    }\n\n    /// Do not dump data for the named table(s), including their child and partition tables\n    #[must_use]\n    pub fn exclude_table_data_and_children<S: AsRef<OsStr>>(\n        mut self,\n        exclude_table_data_and_children: S,\n    ) -> Self {\n        self.exclude_table_data_and_children =\n            Some(exclude_table_data_and_children.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the number of digits displayed for floating-point values\n    #[must_use]\n    pub fn extra_float_digits<S: AsRef<OsStr>>(mut self, extra_float_digits: S) -> Self {\n        self.extra_float_digits = Some(extra_float_digits.as_ref().to_os_string());\n        self\n    }\n\n    /// Use IF EXISTS when dropping objects\n    #[must_use]\n    pub fn if_exists(mut self) -> Self {\n        self.if_exists = true;\n        self\n    }\n\n    /// Include data of foreign tables on foreign servers matching the given pattern\n    #[must_use]\n    pub fn include_foreign_data<S: AsRef<OsStr>>(mut self, include_foreign_data: S) -> Self {\n        self.include_foreign_data = Some(include_foreign_data.as_ref().to_os_string());\n        self\n    }\n\n    /// Dump data as INSERT commands\n    #[must_use]\n    pub fn inserts(mut self) -> Self {\n        self.inserts = true;\n        self\n    }\n\n    /// Load data via the partition root table\n    #[must_use]\n    pub fn load_via_partition_root(mut self) -> 
Self {\n        self.load_via_partition_root = true;\n        self\n    }\n\n    /// Fail after waiting TIMEOUT for a table lock\n    #[must_use]\n    pub fn lock_wait_timeout(mut self, lock_wait_timeout: u16) -> Self {\n        self.lock_wait_timeout = Some(lock_wait_timeout);\n        self\n    }\n\n    /// Do not output comments\n    #[must_use]\n    pub fn no_comments(mut self) -> Self {\n        self.no_comments = true;\n        self\n    }\n\n    /// Do not output publications\n    #[must_use]\n    pub fn no_publications(mut self) -> Self {\n        self.no_publications = true;\n        self\n    }\n\n    /// Do not output security labels\n    #[must_use]\n    pub fn no_security_labels(mut self) -> Self {\n        self.no_security_labels = true;\n        self\n    }\n\n    /// Do not output subscriptions\n    #[must_use]\n    pub fn no_subscriptions(mut self) -> Self {\n        self.no_subscriptions = true;\n        self\n    }\n\n    /// Do not output table access method\n    #[must_use]\n    pub fn no_table_access_method(mut self) -> Self {\n        self.no_table_access_method = true;\n        self\n    }\n\n    /// Do not output tablespace assignments\n    #[must_use]\n    pub fn no_tablespaces(mut self) -> Self {\n        self.no_tablespaces = true;\n        self\n    }\n\n    /// Do not output TOAST table compression\n    #[must_use]\n    pub fn no_toast_compression(mut self) -> Self {\n        self.no_toast_compression = true;\n        self\n    }\n\n    /// Do not output unlogged table data\n    #[must_use]\n    pub fn no_unlogged_table_data(mut self) -> Self {\n        self.no_unlogged_table_data = true;\n        self\n    }\n\n    /// Use ON CONFLICT DO NOTHING for INSERTs\n    #[must_use]\n    pub fn on_conflict_do_nothing(mut self) -> Self {\n        self.on_conflict_do_nothing = true;\n        self\n    }\n\n    /// Quote all identifiers, even if not key words\n    #[must_use]\n    pub fn quote_all_identifiers(mut self) -> Self {\n        
self.quote_all_identifiers = true;\n        self\n    }\n\n    /// Set the number of rows per INSERT\n    #[must_use]\n    pub fn rows_per_insert(mut self, rows_per_insert: u64) -> Self {\n        self.rows_per_insert = Some(rows_per_insert);\n        self\n    }\n\n    /// Dump the named section (pre-data, data, or post-data)\n    #[must_use]\n    pub fn section<S: AsRef<OsStr>>(mut self, section: S) -> Self {\n        self.section = Some(section.as_ref().to_os_string());\n        self\n    }\n\n    /// Wait until the dump can run without anomalies\n    #[must_use]\n    pub fn serializable_deferrable(mut self) -> Self {\n        self.serializable_deferrable = true;\n        self\n    }\n\n    /// Use a snapshot with the specified name\n    #[must_use]\n    pub fn snapshot<S: AsRef<OsStr>>(mut self, snapshot: S) -> Self {\n        self.snapshot = Some(snapshot.as_ref().to_os_string());\n        self\n    }\n\n    /// Require table and/or schema include patterns to match at least one entity each\n    #[must_use]\n    pub fn strict_names(mut self) -> Self {\n        self.strict_names = true;\n        self\n    }\n\n    /// Dump data for the named table(s) and their children\n    #[must_use]\n    pub fn table_and_children<S: AsRef<OsStr>>(mut self, table_and_children: S) -> Self {\n        self.table_and_children = Some(table_and_children.as_ref().to_os_string());\n        self\n    }\n\n    /// Use SET SESSION AUTHORIZATION commands instead of ALTER OWNER\n    #[must_use]\n    pub fn use_set_session_authorization(mut self) -> Self {\n        self.use_set_session_authorization = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// database to dump\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: 
AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// database user name\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt (should happen automatically)\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// Specifies a role name to be used to create the dump\n    #[must_use]\n    pub fn role<S: AsRef<OsStr>>(mut self, rolename: S) -> Self {\n        self.role = Some(rolename.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgDumpBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_dump\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.data_only {\n            args.push(\"--data-only\".into());\n        }\n\n        if self.large_objects {\n            args.push(\"--large-objects\".into());\n        }\n\n        if self.no_large_objects {\n            args.push(\"--no-large-objects\".into());\n        
}\n\n        if self.clean {\n            args.push(\"--clean\".into());\n        }\n\n        if self.create {\n            args.push(\"--create\".into());\n        }\n\n        if let Some(extension) = &self.extension {\n            args.push(\"--extension\".into());\n            args.push(extension.into());\n        }\n\n        if let Some(encoding) = &self.encoding {\n            args.push(\"--encoding\".into());\n            args.push(encoding.into());\n        }\n\n        if let Some(file) = &self.file {\n            args.push(\"--file\".into());\n            args.push(file.into());\n        }\n\n        if let Some(format) = &self.format {\n            args.push(\"--format\".into());\n            args.push(format.into());\n        }\n\n        if let Some(jobs) = &self.jobs {\n            args.push(\"--jobs\".into());\n            args.push(jobs.into());\n        }\n\n        if let Some(schema) = &self.schema {\n            args.push(\"--schema\".into());\n            args.push(schema.into());\n        }\n\n        if let Some(exclude_schema) = &self.exclude_schema {\n            args.push(\"--exclude-schema\".into());\n            args.push(exclude_schema.into());\n        }\n\n        if self.no_owner {\n            args.push(\"--no-owner\".into());\n        }\n\n        if self.no_reconnect {\n            args.push(\"--no-reconnect\".into());\n        }\n\n        if self.schema_only {\n            args.push(\"--schema-only\".into());\n        }\n\n        if let Some(superuser) = &self.superuser {\n            args.push(\"--superuser\".into());\n            args.push(superuser.into());\n        }\n\n        if let Some(table) = &self.table {\n            args.push(\"--table\".into());\n            args.push(table.into());\n        }\n\n        if let Some(exclude_table) = &self.exclude_table {\n            args.push(\"--exclude-table\".into());\n            args.push(exclude_table.into());\n        }\n\n        if self.verbose {\n            
args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.no_privileges {\n            args.push(\"--no-privileges\".into());\n        }\n\n        if let Some(compress) = &self.compress {\n            args.push(\"--compress\".into());\n            args.push(compress.into());\n        }\n\n        if self.binary_upgrade {\n            args.push(\"--binary-upgrade\".into());\n        }\n\n        if self.column_inserts {\n            args.push(\"--column-inserts\".into());\n        }\n\n        if self.attribute_inserts {\n            args.push(\"--attribute-inserts\".into());\n        }\n\n        if self.disable_dollar_quoting {\n            args.push(\"--disable-dollar-quoting\".into());\n        }\n\n        if self.disable_triggers {\n            args.push(\"--disable-triggers\".into());\n        }\n\n        if self.enable_row_security {\n            args.push(\"--enable-row-security\".into());\n        }\n\n        if let Some(exclude_table_data_and_children) = &self.exclude_table_data_and_children {\n            args.push(\"--exclude-table-data-and-children\".into());\n            args.push(exclude_table_data_and_children.into());\n        }\n\n        if let Some(extra_float_digits) = &self.extra_float_digits {\n            args.push(\"--extra-float-digits\".into());\n            args.push(extra_float_digits.into());\n        }\n\n        if self.if_exists {\n            args.push(\"--if-exists\".into());\n        }\n\n        if let Some(include_foreign_data) = &self.include_foreign_data {\n            args.push(\"--include-foreign-data\".into());\n            args.push(include_foreign_data.into());\n        }\n\n        if self.inserts {\n            args.push(\"--inserts\".into());\n        }\n\n        if self.load_via_partition_root {\n            args.push(\"--load-via-partition-root\".into());\n        }\n\n        if let Some(lock_wait_timeout) = 
&self.lock_wait_timeout {\n            args.push(\"--lock-wait-timeout\".into());\n            args.push(lock_wait_timeout.to_string().into());\n        }\n\n        if self.no_comments {\n            args.push(\"--no-comments\".into());\n        }\n\n        if self.no_publications {\n            args.push(\"--no-publications\".into());\n        }\n\n        if self.no_security_labels {\n            args.push(\"--no-security-labels\".into());\n        }\n\n        if self.no_subscriptions {\n            args.push(\"--no-subscriptions\".into());\n        }\n\n        if self.no_table_access_method {\n            args.push(\"--no-table-access-method\".into());\n        }\n\n        if self.no_tablespaces {\n            args.push(\"--no-tablespaces\".into());\n        }\n\n        if self.no_toast_compression {\n            args.push(\"--no-toast-compression\".into());\n        }\n\n        if self.no_unlogged_table_data {\n            args.push(\"--no-unlogged-table-data\".into());\n        }\n\n        if self.on_conflict_do_nothing {\n            args.push(\"--on-conflict-do-nothing\".into());\n        }\n\n        if self.quote_all_identifiers {\n            args.push(\"--quote-all-identifiers\".into());\n        }\n\n        if let Some(rows_per_insert) = &self.rows_per_insert {\n            args.push(\"--rows-per-insert\".into());\n            args.push(rows_per_insert.to_string().into());\n        }\n\n        if let Some(section) = &self.section {\n            args.push(\"--section\".into());\n            args.push(section.into());\n        }\n\n        if self.serializable_deferrable {\n            args.push(\"--serializable-deferrable\".into());\n        }\n\n        if let Some(snapshot) = &self.snapshot {\n            args.push(\"--snapshot\".into());\n            args.push(snapshot.into());\n        }\n\n        if self.strict_names {\n            args.push(\"--strict-names\".into());\n        }\n\n        if let Some(table_and_children) = 
&self.table_and_children {\n            args.push(\"--table-and-children\".into());\n            args.push(table_and_children.into());\n        }\n\n        if self.use_set_session_authorization {\n            args.push(\"--use-set-session-authorization\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(role) = &self.role {\n            args.push(\"--role\".into());\n            args.push(role.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use 
crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgDumpBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_dump\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgDumpBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_dump\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_dump\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgDumpBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_dump\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_dump\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgDumpBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .data_only()\n            .large_objects()\n            .no_large_objects()\n            .clean()\n            .create()\n            .extension(\"extension\")\n            .encoding(\"UTF8\")\n            .file(\"file\")\n            .format(\"format\")\n            .jobs(\"jobs\")\n            .schema(\"schema\")\n            
.exclude_schema(\"exclude_schema\")\n            .no_owner()\n            .no_reconnect()\n            .schema_only()\n            .superuser(\"superuser\")\n            .table(\"table\")\n            .exclude_table(\"exclude_table\")\n            .verbose()\n            .version()\n            .no_privileges()\n            .compress(\"compress\")\n            .binary_upgrade()\n            .column_inserts()\n            .attribute_inserts()\n            .disable_dollar_quoting()\n            .disable_triggers()\n            .enable_row_security()\n            .exclude_table_data_and_children(\"exclude_table_data_and_children\")\n            .extra_float_digits(\"extra_float_digits\")\n            .if_exists()\n            .include_foreign_data(\"include_foreign_data\")\n            .inserts()\n            .load_via_partition_root()\n            .lock_wait_timeout(10)\n            .no_comments()\n            .no_publications()\n            .no_security_labels()\n            .no_subscriptions()\n            .no_table_access_method()\n            .no_tablespaces()\n            .no_toast_compression()\n            .no_unlogged_table_data()\n            .on_conflict_do_nothing()\n            .quote_all_identifiers()\n            .rows_per_insert(100)\n            .section(\"section\")\n            .serializable_deferrable()\n            .snapshot(\"snapshot\")\n            .strict_names()\n            .table_and_children(\"table_and_children\")\n            .use_set_session_authorization()\n            .help()\n            .dbname(\"dbname\")\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .role(\"role\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let 
command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_dump\" \"--data-only\" \"--large-objects\" \"--no-large-objects\" \"--clean\" \"--create\" \"--extension\" \"extension\" \"--encoding\" \"UTF8\" \"--file\" \"file\" \"--format\" \"format\" \"--jobs\" \"jobs\" \"--schema\" \"schema\" \"--exclude-schema\" \"exclude_schema\" \"--no-owner\" \"--no-reconnect\" \"--schema-only\" \"--superuser\" \"superuser\" \"--table\" \"table\" \"--exclude-table\" \"exclude_table\" \"--verbose\" \"--version\" \"--no-privileges\" \"--compress\" \"compress\" \"--binary-upgrade\" \"--column-inserts\" \"--attribute-inserts\" \"--disable-dollar-quoting\" \"--disable-triggers\" \"--enable-row-security\" \"--exclude-table-data-and-children\" \"exclude_table_data_and_children\" \"--extra-float-digits\" \"extra_float_digits\" \"--if-exists\" \"--include-foreign-data\" \"include_foreign_data\" \"--inserts\" \"--load-via-partition-root\" \"--lock-wait-timeout\" \"10\" \"--no-comments\" \"--no-publications\" \"--no-security-labels\" \"--no-subscriptions\" \"--no-table-access-method\" \"--no-tablespaces\" \"--no-toast-compression\" \"--no-unlogged-table-data\" \"--on-conflict-do-nothing\" \"--quote-all-identifiers\" \"--rows-per-insert\" \"100\" \"--section\" \"section\" \"--serializable-deferrable\" \"--snapshot\" \"snapshot\" \"--strict-names\" \"--table-and-children\" \"table_and_children\" \"--use-set-session-authorization\" \"--help\" \"--dbname\" \"dbname\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\" \"--role\" \"role\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_dumpall.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_dumpall` extracts a `PostgreSQL` database cluster into an SQL script file.\n#[derive(Clone, Debug, Default)]\npub struct PgDumpAllBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    file: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    lock_wait_timeout: Option<u16>,\n    help: bool,\n    data_only: bool,\n    clean: bool,\n    encoding: Option<OsString>,\n    globals_only: bool,\n    no_owner: bool,\n    roles_only: bool,\n    schema_only: bool,\n    superuser: Option<OsString>,\n    tablespaces_only: bool,\n    no_privileges: bool,\n    binary_upgrade: bool,\n    column_inserts: bool,\n    disable_dollar_quoting: bool,\n    disable_triggers: bool,\n    exclude_database: Option<OsString>,\n    extra_float_digits: Option<OsString>,\n    if_exists: bool,\n    inserts: bool,\n    load_via_partition_root: bool,\n    no_comments: bool,\n    no_publications: bool,\n    no_role_passwords: bool,\n    no_security_labels: bool,\n    no_subscriptions: bool,\n    no_sync: bool,\n    no_table_access_method: bool,\n    no_tablespaces: bool,\n    no_toast_compression: bool,\n    no_unlogged_table_data: bool,\n    on_conflict_do_nothing: bool,\n    quote_all_identifiers: bool,\n    rows_per_insert: Option<OsString>,\n    use_set_session_authorization: bool,\n    dbname: Option<OsString>,\n    host: Option<OsString>,\n    database: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    role: Option<OsString>,\n}\n\nimpl PgDumpAllBuilder {\n    /// Create a new [`PgDumpAllBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgDumpAllBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> 
Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// output file name\n    #[must_use]\n    pub fn file<S: AsRef<OsStr>>(mut self, file: S) -> Self {\n        self.file = Some(file.as_ref().to_os_string());\n        self\n    }\n\n    /// verbose mode\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// fail after waiting TIMEOUT for a table lock\n    #[must_use]\n    pub fn lock_wait_timeout(mut self, lock_wait_timeout: u16) -> Self {\n        self.lock_wait_timeout = Some(lock_wait_timeout);\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// dump only the data, not the schema\n    #[must_use]\n    pub fn data_only(mut self) -> Self {\n        self.data_only = true;\n        self\n    }\n\n    /// clean (drop) database objects before recreating them\n    #[must_use]\n    pub fn clean(mut self) -> Self {\n        self.clean = true;\n        self\n    }\n\n    /// encoding for the dump\n    #[must_use]\n    pub fn encoding<S: AsRef<OsStr>>(mut self, encoding: S) -> Self {\n        self.encoding = 
Some(encoding.as_ref().to_os_string());\n        self\n    }\n\n    /// dump only global objects, not database-specific objects\n    #[must_use]\n    pub fn globals_only(mut self) -> Self {\n        self.globals_only = true;\n        self\n    }\n\n    /// do not output commands to set object ownership\n    #[must_use]\n    pub fn no_owner(mut self) -> Self {\n        self.no_owner = true;\n        self\n    }\n\n    /// dump only roles, no databases or tablespaces\n    #[must_use]\n    pub fn roles_only(mut self) -> Self {\n        self.roles_only = true;\n        self\n    }\n\n    /// dump only the object definitions (schema), not data\n    #[must_use]\n    pub fn schema_only(mut self) -> Self {\n        self.schema_only = true;\n        self\n    }\n\n    /// superuser user name to use in the dump\n    #[must_use]\n    pub fn superuser<S: AsRef<OsStr>>(mut self, superuser: S) -> Self {\n        self.superuser = Some(superuser.as_ref().to_os_string());\n        self\n    }\n\n    /// dump only tablespaces, no databases or roles\n    #[must_use]\n    pub fn tablespaces_only(mut self) -> Self {\n        self.tablespaces_only = true;\n        self\n    }\n\n    /// do not dump object privileges (grant/revoke commands)\n    #[must_use]\n    pub fn no_privileges(mut self) -> Self {\n        self.no_privileges = true;\n        self\n    }\n\n    /// dump in a format suitable for binary upgrade\n    #[must_use]\n    pub fn binary_upgrade(mut self) -> Self {\n        self.binary_upgrade = true;\n        self\n    }\n\n    /// dump data as INSERT commands with column names\n    #[must_use]\n    pub fn column_inserts(mut self) -> Self {\n        self.column_inserts = true;\n        self\n    }\n\n    /// disable dollar quoting, use SQL standard quoting\n    #[must_use]\n    pub fn disable_dollar_quoting(mut self) -> Self {\n        self.disable_dollar_quoting = true;\n        self\n    }\n\n    /// disable triggers during data-only restore\n    #[must_use]\n    pub 
fn disable_triggers(mut self) -> Self {\n        self.disable_triggers = true;\n        self\n    }\n\n    /// exclude the named database from the dump\n    #[must_use]\n    pub fn exclude_database<S: AsRef<OsStr>>(mut self, exclude_database: S) -> Self {\n        self.exclude_database = Some(exclude_database.as_ref().to_os_string());\n        self\n    }\n\n    /// set the number of digits displayed for floating-point values\n    #[must_use]\n    pub fn extra_float_digits<S: AsRef<OsStr>>(mut self, extra_float_digits: S) -> Self {\n        self.extra_float_digits = Some(extra_float_digits.as_ref().to_os_string());\n        self\n    }\n\n    /// use IF EXISTS when dropping objects\n    #[must_use]\n    pub fn if_exists(mut self) -> Self {\n        self.if_exists = true;\n        self\n    }\n\n    /// dump data as proper INSERT commands\n    #[must_use]\n    pub fn inserts(mut self) -> Self {\n        self.inserts = true;\n        self\n    }\n\n    /// load data via the partition root table\n    #[must_use]\n    pub fn load_via_partition_root(mut self) -> Self {\n        self.load_via_partition_root = true;\n        self\n    }\n\n    /// do not dump comments\n    #[must_use]\n    pub fn no_comments(mut self) -> Self {\n        self.no_comments = true;\n        self\n    }\n\n    /// do not dump publications\n    #[must_use]\n    pub fn no_publications(mut self) -> Self {\n        self.no_publications = true;\n        self\n    }\n\n    /// do not dump passwords for roles\n    #[must_use]\n    pub fn no_role_passwords(mut self) -> Self {\n        self.no_role_passwords = true;\n        self\n    }\n\n    /// do not dump security labels\n    #[must_use]\n    pub fn no_security_labels(mut self) -> Self {\n        self.no_security_labels = true;\n        self\n    }\n\n    /// do not dump subscriptions\n    #[must_use]\n    pub fn no_subscriptions(mut self) -> Self {\n        self.no_subscriptions = true;\n        self\n    }\n\n    /// do not wait for changes to be 
written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// do not dump table access method information\n    #[must_use]\n    pub fn no_table_access_method(mut self) -> Self {\n        self.no_table_access_method = true;\n        self\n    }\n\n    /// do not dump tablespace assignments\n    #[must_use]\n    pub fn no_tablespaces(mut self) -> Self {\n        self.no_tablespaces = true;\n        self\n    }\n\n    /// do not dump TOAST compression information\n    #[must_use]\n    pub fn no_toast_compression(mut self) -> Self {\n        self.no_toast_compression = true;\n        self\n    }\n\n    /// do not dump unlogged table data\n    #[must_use]\n    pub fn no_unlogged_table_data(mut self) -> Self {\n        self.no_unlogged_table_data = true;\n        self\n    }\n\n    /// use ON CONFLICT DO NOTHING for INSERTs\n    #[must_use]\n    pub fn on_conflict_do_nothing(mut self) -> Self {\n        self.on_conflict_do_nothing = true;\n        self\n    }\n\n    /// quote all identifiers, even if not key words\n    #[must_use]\n    pub fn quote_all_identifiers(mut self) -> Self {\n        self.quote_all_identifiers = true;\n        self\n    }\n\n    /// set the number of rows per INSERT command\n    #[must_use]\n    pub fn rows_per_insert<S: AsRef<OsStr>>(mut self, rows_per_insert: S) -> Self {\n        self.rows_per_insert = Some(rows_per_insert.as_ref().to_os_string());\n        self\n    }\n\n    /// use SET SESSION AUTHORIZATION commands instead of ALTER OWNER\n    #[must_use]\n    pub fn use_set_session_authorization(mut self) -> Self {\n        self.use_set_session_authorization = true;\n        self\n    }\n\n    /// database name to connect to\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    
pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// alternative default database\n    #[must_use]\n    pub fn database<S: AsRef<OsStr>>(mut self, database: S) -> Self {\n        self.database = Some(database.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// user name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// role name to use in the dump\n    #[must_use]\n    pub fn role<S: AsRef<OsStr>>(mut self, role: S) -> Self {\n        self.role = Some(role.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgDumpAllBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_dumpall\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(file) = &self.file {\n            args.push(\"--file\".into());\n            
args.push(file.into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if let Some(lock_wait_timeout) = &self.lock_wait_timeout {\n            args.push(\"--lock-wait-timeout\".into());\n            args.push(lock_wait_timeout.to_string().into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if self.data_only {\n            args.push(\"--data-only\".into());\n        }\n\n        if self.clean {\n            args.push(\"--clean\".into());\n        }\n\n        if let Some(encoding) = &self.encoding {\n            args.push(\"--encoding\".into());\n            args.push(encoding.into());\n        }\n\n        if self.globals_only {\n            args.push(\"--globals-only\".into());\n        }\n\n        if self.no_owner {\n            args.push(\"--no-owner\".into());\n        }\n\n        if self.roles_only {\n            args.push(\"--roles-only\".into());\n        }\n\n        if self.schema_only {\n            args.push(\"--schema-only\".into());\n        }\n\n        if let Some(superuser) = &self.superuser {\n            args.push(\"--superuser\".into());\n            args.push(superuser.into());\n        }\n\n        if self.tablespaces_only {\n            args.push(\"--tablespaces-only\".into());\n        }\n\n        if self.no_privileges {\n            args.push(\"--no-privileges\".into());\n        }\n\n        if self.binary_upgrade {\n            args.push(\"--binary-upgrade\".into());\n        }\n\n        if self.column_inserts {\n            args.push(\"--column-inserts\".into());\n        }\n\n        if self.disable_dollar_quoting {\n            args.push(\"--disable-dollar-quoting\".into());\n        }\n\n        if self.disable_triggers {\n            args.push(\"--disable-triggers\".into());\n        }\n\n        if let Some(exclude_database) = 
&self.exclude_database {\n            args.push(\"--exclude-database\".into());\n            args.push(exclude_database.into());\n        }\n\n        if let Some(extra_float_digits) = &self.extra_float_digits {\n            args.push(\"--extra-float-digits\".into());\n            args.push(extra_float_digits.into());\n        }\n\n        if self.if_exists {\n            args.push(\"--if-exists\".into());\n        }\n\n        if self.inserts {\n            args.push(\"--inserts\".into());\n        }\n\n        if self.load_via_partition_root {\n            args.push(\"--load-via-partition-root\".into());\n        }\n\n        if self.no_comments {\n            args.push(\"--no-comments\".into());\n        }\n\n        if self.no_publications {\n            args.push(\"--no-publications\".into());\n        }\n\n        if self.no_role_passwords {\n            args.push(\"--no-role-passwords\".into());\n        }\n\n        if self.no_security_labels {\n            args.push(\"--no-security-labels\".into());\n        }\n\n        if self.no_subscriptions {\n            args.push(\"--no-subscriptions\".into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if self.no_table_access_method {\n            args.push(\"--no-table-access-method\".into());\n        }\n\n        if self.no_tablespaces {\n            args.push(\"--no-tablespaces\".into());\n        }\n\n        if self.no_toast_compression {\n            args.push(\"--no-toast-compression\".into());\n        }\n\n        if self.no_unlogged_table_data {\n            args.push(\"--no-unlogged-table-data\".into());\n        }\n\n        if self.on_conflict_do_nothing {\n            args.push(\"--on-conflict-do-nothing\".into());\n        }\n\n        if self.quote_all_identifiers {\n            args.push(\"--quote-all-identifiers\".into());\n        }\n\n        if let Some(rows_per_insert) = &self.rows_per_insert {\n            
args.push(\"--rows-per-insert\".into());\n            args.push(rows_per_insert.into());\n        }\n\n        if self.use_set_session_authorization {\n            args.push(\"--use-set-session-authorization\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(database) = &self.database {\n            args.push(\"--database\".into());\n            args.push(database.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(role) = &self.role {\n            args.push(\"--role\".into());\n            args.push(role.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    
use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgDumpAllBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_dumpall\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgDumpAllBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_dumpall\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_dumpall\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgDumpAllBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_dumpall\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_dumpall\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgDumpAllBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .file(\"dump.sql\")\n            .verbose()\n            .version()\n            .lock_wait_timeout(10)\n            .help()\n            .data_only()\n            .clean()\n            .encoding(\"UTF8\")\n            .globals_only()\n            .no_owner()\n            .roles_only()\n            .schema_only()\n            
.superuser(\"postgres\")\n            .tablespaces_only()\n            .no_privileges()\n            .binary_upgrade()\n            .column_inserts()\n            .disable_dollar_quoting()\n            .disable_triggers()\n            .exclude_database(\"exclude\")\n            .extra_float_digits(\"2\")\n            .if_exists()\n            .inserts()\n            .load_via_partition_root()\n            .no_comments()\n            .no_publications()\n            .no_role_passwords()\n            .no_security_labels()\n            .no_subscriptions()\n            .no_sync()\n            .no_table_access_method()\n            .no_tablespaces()\n            .no_toast_compression()\n            .no_unlogged_table_data()\n            .on_conflict_do_nothing()\n            .quote_all_identifiers()\n            .rows_per_insert(\"1000\")\n            .use_set_session_authorization()\n            .dbname(\"postgres\")\n            .host(\"localhost\")\n            .database(\"postgres\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .role(\"postgres\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_dumpall\" \"--file\" \"dump.sql\" \"--verbose\" \"--version\" \"--lock-wait-timeout\" \"10\" \"--help\" \"--data-only\" \"--clean\" \"--encoding\" \"UTF8\" \"--globals-only\" \"--no-owner\" \"--roles-only\" \"--schema-only\" \"--superuser\" \"postgres\" \"--tablespaces-only\" \"--no-privileges\" \"--binary-upgrade\" \"--column-inserts\" \"--disable-dollar-quoting\" \"--disable-triggers\" \"--exclude-database\" \"exclude\" \"--extra-float-digits\" \"2\" \"--if-exists\" 
\"--inserts\" \"--load-via-partition-root\" \"--no-comments\" \"--no-publications\" \"--no-role-passwords\" \"--no-security-labels\" \"--no-subscriptions\" \"--no-sync\" \"--no-table-access-method\" \"--no-tablespaces\" \"--no-toast-compression\" \"--no-unlogged-table-data\" \"--on-conflict-do-nothing\" \"--quote-all-identifiers\" \"--rows-per-insert\" \"1000\" \"--use-set-session-authorization\" \"--dbname\" \"postgres\" \"--host\" \"localhost\" \"--database\" \"postgres\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\" \"--role\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_isready.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_isready` issues a connection check to a `PostgreSQL` database.\n#[derive(Clone, Debug, Default)]\npub struct PgIsReadyBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    dbname: Option<OsString>,\n    quiet: bool,\n    version: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    timeout: Option<u16>,\n    username: Option<OsString>,\n}\n\nimpl PgIsReadyBuilder {\n    /// Create a new [`PgIsReadyBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgIsReadyBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Set the database name\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// Run quietly\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// Output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// Show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n  
      self.help = true;\n        self\n    }\n\n    /// Set the database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// Set the seconds to wait when attempting connection, 0 disables (default: 3)\n    #[must_use]\n    pub fn timeout(mut self, timeout: u16) -> Self {\n        self.timeout = Some(timeout);\n        self\n    }\n\n    /// Set the user name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgIsReadyBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_isready\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(timeout) = 
&self.timeout {\n            args.push(\"--timeout\".into());\n            args.push(timeout.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgIsReadyBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_isready\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgIsReadyBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_isready\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_isready\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgIsReadyBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_isready\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let 
command_prefix = r#\"\".\\\\pg_isready\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgIsReadyBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .dbname(\"postgres\")\n            .quiet()\n            .version()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .timeout(3)\n            .username(\"postgres\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_isready\" \"--dbname\" \"postgres\" \"--quiet\" \"--version\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--timeout\" \"3\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_receivewal.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_receivewal` receives `PostgreSQL` streaming write-ahead logs.\n#[derive(Clone, Debug, Default)]\npub struct PgReceiveWalBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    directory: Option<OsString>,\n    endpos: Option<OsString>,\n    if_not_exists: bool,\n    no_loop: bool,\n    no_sync: bool,\n    status_interval: Option<OsString>,\n    slot: Option<OsString>,\n    synchronous: bool,\n    verbose: bool,\n    version: bool,\n    compress: Option<OsString>,\n    help: bool,\n    dbname: Option<OsString>,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    create_slot: bool,\n    drop_slot: bool,\n}\n\nimpl PgReceiveWalBuilder {\n    /// Create a new [`PgReceiveWalBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgReceiveWalBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// receive write-ahead log files into this directory\n    #[must_use]\n    pub fn directory<S: AsRef<OsStr>>(mut self, directory: S) -> Self 
{\n        self.directory = Some(directory.as_ref().to_os_string());\n        self\n    }\n\n    /// exit after receiving the specified LSN\n    #[must_use]\n    pub fn endpos<S: AsRef<OsStr>>(mut self, endpos: S) -> Self {\n        self.endpos = Some(endpos.as_ref().to_os_string());\n        self\n    }\n\n    /// do not error if slot already exists when creating a slot\n    #[must_use]\n    pub fn if_not_exists(mut self) -> Self {\n        self.if_not_exists = true;\n        self\n    }\n\n    /// do not loop on connection lost\n    #[must_use]\n    pub fn no_loop(mut self) -> Self {\n        self.no_loop = true;\n        self\n    }\n\n    /// do not wait for changes to be written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// time between status packets sent to server (default: 10)\n    #[must_use]\n    pub fn status_interval<S: AsRef<OsStr>>(mut self, status_interval: S) -> Self {\n        self.status_interval = Some(status_interval.as_ref().to_os_string());\n        self\n    }\n\n    /// replication slot to use\n    #[must_use]\n    pub fn slot<S: AsRef<OsStr>>(mut self, slot: S) -> Self {\n        self.slot = Some(slot.as_ref().to_os_string());\n        self\n    }\n\n    /// flush write-ahead log immediately after writing\n    #[must_use]\n    pub fn synchronous(mut self) -> Self {\n        self.synchronous = true;\n        self\n    }\n\n    /// output verbose messages\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// compress as specified\n    #[must_use]\n    pub fn compress<S: AsRef<OsStr>>(mut self, compress: S) -> Self {\n        self.compress = Some(compress.as_ref().to_os_string());\n        self\n    }\n\n    /// show help, then exit\n    
#[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// connection string\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// connect as specified database user\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt (should happen automatically)\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// create a new replication slot (for the slot's name see --slot)\n    #[must_use]\n    pub fn create_slot(mut self) -> Self {\n        self.create_slot = true;\n        self\n    }\n\n    /// drop the replication slot (for the slot's name see --slot)\n    #[must_use]\n    pub fn drop_slot(mut self) -> Self {\n        self.drop_slot = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgReceiveWalBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_receivewal\".as_ref()\n    }\n\n    /// Location of the program 
binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(directory) = &self.directory {\n            args.push(\"--directory\".into());\n            args.push(directory.into());\n        }\n\n        if let Some(endpos) = &self.endpos {\n            args.push(\"--endpos\".into());\n            args.push(endpos.into());\n        }\n\n        if self.if_not_exists {\n            args.push(\"--if-not-exists\".into());\n        }\n\n        if self.no_loop {\n            args.push(\"--no-loop\".into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if let Some(status_interval) = &self.status_interval {\n            args.push(\"--status-interval\".into());\n            args.push(status_interval.into());\n        }\n\n        if let Some(slot) = &self.slot {\n            args.push(\"--slot\".into());\n            args.push(slot.into());\n        }\n\n        if self.synchronous {\n            args.push(\"--synchronous\".into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if let Some(compress) = &self.compress {\n            args.push(\"--compress\".into());\n            args.push(compress.into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            
args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if self.create_slot {\n            args.push(\"--create-slot\".into());\n        }\n\n        if self.drop_slot {\n            args.push(\"--drop-slot\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgReceiveWalBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_receivewal\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgReceiveWalBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_receivewal\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_receivewal\" 
\"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgReceiveWalBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_receivewal\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_receivewal\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgReceiveWalBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .directory(\"directory\")\n            .endpos(\"endpos\")\n            .if_not_exists()\n            .no_loop()\n            .no_sync()\n            .status_interval(\"status_interval\")\n            .slot(\"slot\")\n            .synchronous()\n            .verbose()\n            .version()\n            .compress(\"compress\")\n            .help()\n            .dbname(\"dbname\")\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .create_slot()\n            .drop_slot()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_receivewal\" \"--directory\" \"directory\" \"--endpos\" 
\"endpos\" \"--if-not-exists\" \"--no-loop\" \"--no-sync\" \"--status-interval\" \"status_interval\" \"--slot\" \"slot\" \"--synchronous\" \"--verbose\" \"--version\" \"--compress\" \"compress\" \"--help\" \"--dbname\" \"dbname\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\" \"--create-slot\" \"--drop-slot\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_recvlogical.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_recvlogical` controls `PostgreSQL` logical decoding streams.\n#[derive(Clone, Debug, Default)]\npub struct PgRecvLogicalBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    create_slot: bool,\n    drop_slot: bool,\n    start: bool,\n    endpos: Option<OsString>,\n    file: Option<OsString>,\n    fsync_interval: Option<OsString>,\n    if_not_exists: bool,\n    startpos: Option<OsString>,\n    no_loop: bool,\n    option: Option<OsString>,\n    plugin: Option<OsString>,\n    status_interval: Option<OsString>,\n    slot: Option<OsString>,\n    two_phase: bool,\n    verbose: bool,\n    version: bool,\n    help: bool,\n    dbname: Option<OsString>,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n}\n\nimpl PgRecvLogicalBuilder {\n    /// Create a new [`PgRecvLogicalBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgRecvLogicalBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// create a new replication slot\n    
#[must_use]\n    pub fn create_slot(mut self) -> Self {\n        self.create_slot = true;\n        self\n    }\n\n    /// drop the replication slot\n    #[must_use]\n    pub fn drop_slot(mut self) -> Self {\n        self.drop_slot = true;\n        self\n    }\n\n    /// start streaming in a replication slot\n    #[must_use]\n    pub fn start(mut self) -> Self {\n        self.start = true;\n        self\n    }\n\n    /// exit after receiving the specified LSN\n    #[must_use]\n    pub fn endpos<S: AsRef<OsStr>>(mut self, endpos: S) -> Self {\n        self.endpos = Some(endpos.as_ref().to_os_string());\n        self\n    }\n\n    /// receive log into this file, - for stdout\n    #[must_use]\n    pub fn file<S: AsRef<OsStr>>(mut self, file: S) -> Self {\n        self.file = Some(file.as_ref().to_os_string());\n        self\n    }\n\n    /// time between fsyncs to the output file (default: 10)\n    #[must_use]\n    pub fn fsync_interval<S: AsRef<OsStr>>(mut self, fsync_interval: S) -> Self {\n        self.fsync_interval = Some(fsync_interval.as_ref().to_os_string());\n        self\n    }\n\n    /// do not error if slot already exists when creating a slot\n    #[must_use]\n    pub fn if_not_exists(mut self) -> Self {\n        self.if_not_exists = true;\n        self\n    }\n\n    /// where in an existing slot should the streaming start\n    #[must_use]\n    pub fn startpos<S: AsRef<OsStr>>(mut self, startpos: S) -> Self {\n        self.startpos = Some(startpos.as_ref().to_os_string());\n        self\n    }\n\n    /// do not loop on connection lost\n    #[must_use]\n    pub fn no_loop(mut self) -> Self {\n        self.no_loop = true;\n        self\n    }\n\n    /// pass option NAME with optional value VALUE to the output plugin\n    #[must_use]\n    pub fn option<S: AsRef<OsStr>>(mut self, option: S) -> Self {\n        self.option = Some(option.as_ref().to_os_string());\n        self\n    }\n\n    /// use output plugin PLUGIN (default: `test_decoding`)\n    #[must_use]\n 
   pub fn plugin<S: AsRef<OsStr>>(mut self, plugin: S) -> Self {\n        self.plugin = Some(plugin.as_ref().to_os_string());\n        self\n    }\n\n    /// time between status packets sent to server (default: 10)\n    #[must_use]\n    pub fn status_interval<S: AsRef<OsStr>>(mut self, status_interval: S) -> Self {\n        self.status_interval = Some(status_interval.as_ref().to_os_string());\n        self\n    }\n\n    /// name of the logical replication slot\n    #[must_use]\n    pub fn slot<S: AsRef<OsStr>>(mut self, slot: S) -> Self {\n        self.slot = Some(slot.as_ref().to_os_string());\n        self\n    }\n\n    /// enable decoding of prepared transactions when creating a slot\n    #[must_use]\n    pub fn two_phase(mut self) -> Self {\n        self.two_phase = true;\n        self\n    }\n\n    /// output verbose messages\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// database to connect to\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// connect as specified database user\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = 
Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt (should happen automatically)\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgRecvLogicalBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_recvlogical\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.create_slot {\n            args.push(\"--create-slot\".into());\n        }\n\n        if self.drop_slot {\n            args.push(\"--drop-slot\".into());\n        }\n\n        if self.start {\n            args.push(\"--start\".into());\n        }\n\n        if let Some(endpos) = &self.endpos {\n            args.push(\"--endpos\".into());\n            args.push(endpos.into());\n        }\n\n        if let Some(file) = &self.file {\n            args.push(\"--file\".into());\n            args.push(file.into());\n        }\n\n        if let Some(fsync_interval) = &self.fsync_interval {\n            args.push(\"--fsync-interval\".into());\n            args.push(fsync_interval.into());\n        }\n\n        if self.if_not_exists {\n            args.push(\"--if-not-exists\".into());\n        }\n\n        if let Some(startpos) = &self.startpos {\n            args.push(\"--startpos\".into());\n            args.push(startpos.into());\n  
      }\n\n        if self.no_loop {\n            args.push(\"--no-loop\".into());\n        }\n\n        if let Some(option) = &self.option {\n            args.push(\"--option\".into());\n            args.push(option.into());\n        }\n\n        if let Some(plugin) = &self.plugin {\n            args.push(\"--plugin\".into());\n            args.push(plugin.into());\n        }\n\n        if let Some(status_interval) = &self.status_interval {\n            args.push(\"--status-interval\".into());\n            args.push(status_interval.into());\n        }\n\n        if let Some(slot) = &self.slot {\n            args.push(\"--slot\".into());\n            args.push(slot.into());\n        }\n\n        if self.two_phase {\n            args.push(\"--two-phase\".into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = 
self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgRecvLogicalBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_recvlogical\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgRecvLogicalBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_recvlogical\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_recvlogical\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgRecvLogicalBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_recvlogical\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_recvlogical\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" 
\"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = PgRecvLogicalBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .create_slot()\n            .drop_slot()\n            .start()\n            .endpos(\"endpos\")\n            .file(\"file\")\n            .fsync_interval(\"fsync_interval\")\n            .if_not_exists()\n            .startpos(\"startpos\")\n            .no_loop()\n            .option(\"option\")\n            .plugin(\"plugin\")\n            .status_interval(\"status_interval\")\n            .slot(\"slot\")\n            .two_phase()\n            .verbose()\n            .version()\n            .help()\n            .dbname(\"dbname\")\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_recvlogical\" \"--create-slot\" \"--drop-slot\" \"--start\" \"--endpos\" \"endpos\" \"--file\" \"file\" \"--fsync-interval\" \"fsync_interval\" \"--if-not-exists\" \"--startpos\" \"startpos\" \"--no-loop\" \"--option\" \"option\" \"--plugin\" \"plugin\" \"--status-interval\" \"status_interval\" \"--slot\" \"slot\" \"--two-phase\" \"--verbose\" \"--version\" \"--help\" \"--dbname\" \"dbname\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_resetwal.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_resetwal` resets the `PostgreSQL` write-ahead log.\n#[derive(Clone, Debug, Default)]\npub struct PgResetWalBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    commit_timestamp_ids: Option<(OsString, OsString)>,\n    pgdata: Option<PathBuf>,\n    epoch: Option<OsString>,\n    force: bool,\n    next_wal_file: Option<OsString>,\n    multixact_ids: Option<(OsString, OsString)>,\n    dry_run: bool,\n    next_oid: Option<OsString>,\n    multixact_offset: Option<OsString>,\n    oldest_transaction_id: Option<OsString>,\n    version: bool,\n    next_transaction_id: Option<OsString>,\n    wal_segsize: Option<OsString>,\n    help: bool,\n}\n\nimpl PgResetWalBuilder {\n    /// Create a new [`PgResetWalBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgResetWalBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// set oldest and newest transactions bearing commit timestamp (zero means no change)\n    #[must_use]\n    pub fn commit_timestamp_ids<S: AsRef<OsStr>>(mut self, xid1: S, xid2: S) -> Self {\n        self.commit_timestamp_ids = Some((xid1.as_ref().into(), xid2.as_ref().into()));\n        self\n    }\n\n    /// data directory\n    #[must_use]\n    pub fn pgdata<P: Into<PathBuf>>(mut self, datadir: P) -> Self {\n        self.pgdata = Some(datadir.into());\n        self\n    }\n\n    /// set next transaction ID epoch\n    #[must_use]\n    pub fn epoch<S: AsRef<OsStr>>(mut self, xidepoch: S) -> Self {\n        
self.epoch = Some(xidepoch.as_ref().to_os_string());\n        self\n    }\n\n    /// force update to be done\n    #[must_use]\n    pub fn force(mut self) -> Self {\n        self.force = true;\n        self\n    }\n\n    /// set minimum starting location for new WAL\n    #[must_use]\n    pub fn next_wal_file<S: AsRef<OsStr>>(mut self, walfile: S) -> Self {\n        self.next_wal_file = Some(walfile.as_ref().to_os_string());\n        self\n    }\n\n    /// set next and oldest multitransaction ID\n    #[must_use]\n    pub fn multixact_ids<S: AsRef<OsStr>>(mut self, mxid1: S, mxid2: S) -> Self {\n        self.multixact_ids = Some((mxid1.as_ref().into(), mxid2.as_ref().into()));\n        self\n    }\n\n    /// no update, just show what would be done\n    #[must_use]\n    pub fn dry_run(mut self) -> Self {\n        self.dry_run = true;\n        self\n    }\n\n    /// set next OID\n    #[must_use]\n    pub fn next_oid<S: AsRef<OsStr>>(mut self, oid: S) -> Self {\n        self.next_oid = Some(oid.as_ref().to_os_string());\n        self\n    }\n\n    /// set next multitransaction offset\n    #[must_use]\n    pub fn multixact_offset<S: AsRef<OsStr>>(mut self, offset: S) -> Self {\n        self.multixact_offset = Some(offset.as_ref().to_os_string());\n        self\n    }\n\n    /// set oldest transaction ID\n    #[must_use]\n    pub fn oldest_transaction_id<S: AsRef<OsStr>>(mut self, xid: S) -> Self {\n        self.oldest_transaction_id = Some(xid.as_ref().to_os_string());\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// set next transaction ID\n    #[must_use]\n    pub fn next_transaction_id<S: AsRef<OsStr>>(mut self, xid: S) -> Self {\n        self.next_transaction_id = Some(xid.as_ref().to_os_string());\n        self\n    }\n\n    /// size of WAL segments, in megabytes\n    #[must_use]\n    pub fn wal_segsize<S: AsRef<OsStr>>(mut self, 
size: S) -> Self {\n        self.wal_segsize = Some(size.as_ref().to_os_string());\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgResetWalBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_resetwal\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some((xid1, xid2)) = &self.commit_timestamp_ids {\n            args.push(\"--commit-timestamp-ids\".into());\n            args.push(format!(\"{},{}\", xid1.to_string_lossy(), xid2.to_string_lossy()).into());\n        }\n\n        if let Some(datadir) = &self.pgdata {\n            args.push(\"--pgdata\".into());\n            args.push(datadir.into());\n        }\n\n        if let Some(xidepoch) = &self.epoch {\n            args.push(\"--epoch\".into());\n            args.push(xidepoch.into());\n        }\n\n        if self.force {\n            args.push(\"--force\".into());\n        }\n\n        if let Some(walfile) = &self.next_wal_file {\n            args.push(\"--next-wal-file\".into());\n            args.push(walfile.into());\n        }\n\n        if let Some((mxid1, mxid2)) = &self.multixact_ids {\n            args.push(\"--multixact-ids\".into());\n            args.push(format!(\"{},{}\", mxid1.to_string_lossy(), mxid2.to_string_lossy()).into());\n        }\n\n        if self.dry_run {\n            args.push(\"--dry-run\".into());\n        }\n\n        if let Some(oid) = &self.next_oid {\n            args.push(\"--next-oid\".into());\n            args.push(oid.into());\n        }\n\n        if let Some(offset) = &self.multixact_offset {\n            
args.push(\"--multixact-offset\".into());\n            args.push(offset.into());\n        }\n\n        if let Some(xid) = &self.oldest_transaction_id {\n            args.push(\"--oldest-transaction-id\".into());\n            args.push(xid.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if let Some(xid) = &self.next_transaction_id {\n            args.push(\"--next-transaction-id\".into());\n            args.push(xid.into());\n        }\n\n        if let Some(size) = &self.wal_segsize {\n            args.push(\"--wal-segsize\".into());\n            args.push(size.into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgResetWalBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_resetwal\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgResetWalBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_resetwal\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_resetwal\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    
}\n\n    #[test]\n    fn test_builder() {\n        let command = PgResetWalBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .commit_timestamp_ids(\"1\", \"2\")\n            .pgdata(\"pgdata\")\n            .epoch(\"epoch\")\n            .force()\n            .next_wal_file(\"next_wal_file\")\n            .multixact_ids(\"3\", \"4\")\n            .dry_run()\n            .next_oid(\"next_oid\")\n            .multixact_offset(\"multixact_offset\")\n            .oldest_transaction_id(\"oldest_transaction_id\")\n            .version()\n            .next_transaction_id(\"next_transaction_id\")\n            .wal_segsize(\"wal_segsize\")\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_resetwal\" \"--commit-timestamp-ids\" \"1,2\" \"--pgdata\" \"pgdata\" \"--epoch\" \"epoch\" \"--force\" \"--next-wal-file\" \"next_wal_file\" \"--multixact-ids\" \"3,4\" \"--dry-run\" \"--next-oid\" \"next_oid\" \"--multixact-offset\" \"multixact_offset\" \"--oldest-transaction-id\" \"oldest_transaction_id\" \"--version\" \"--next-transaction-id\" \"next_transaction_id\" \"--wal-segsize\" \"wal_segsize\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_restore.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_restore` restores a `PostgreSQL` database from an archive created by `pg_dump`.\n#[derive(Clone, Debug, Default)]\npub struct PgRestoreBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    dbname: Option<OsString>,\n    file: Option<OsString>,\n    format: Option<OsString>,\n    list: bool,\n    verbose: bool,\n    version: bool,\n    help: bool,\n    data_only: bool,\n    clean: bool,\n    create: bool,\n    exit_on_error: bool,\n    index: Option<OsString>,\n    jobs: Option<OsString>,\n    use_list: Option<OsString>,\n    schema: Option<OsString>,\n    exclude_schema: Option<OsString>,\n    no_owner: bool,\n    function: Option<OsString>,\n    schema_only: bool,\n    superuser: Option<OsString>,\n    table: Option<OsString>,\n    trigger: Option<OsString>,\n    no_privileges: bool,\n    single_transaction: bool,\n    disable_triggers: bool,\n    enable_row_security: bool,\n    if_exists: bool,\n    no_comments: bool,\n    no_data_for_failed_tables: bool,\n    no_publications: bool,\n    no_security_labels: bool,\n    no_subscriptions: bool,\n    no_table_access_method: bool,\n    no_tablespaces: bool,\n    section: Option<OsString>,\n    strict_names: bool,\n    use_set_session_authorization: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    role: Option<OsString>,\n}\n\nimpl PgRestoreBuilder {\n    /// Create a new [`PgRestoreBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgRestoreBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            
.host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// connect to database name\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.dbname = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// output file name (- for stdout)\n    #[must_use]\n    pub fn file<S: AsRef<OsStr>>(mut self, filename: S) -> Self {\n        self.file = Some(filename.as_ref().to_os_string());\n        self\n    }\n\n    /// backup file format (should be automatic)\n    #[must_use]\n    pub fn format<S: AsRef<OsStr>>(mut self, format: S) -> Self {\n        self.format = Some(format.as_ref().to_os_string());\n        self\n    }\n\n    /// print summarized TOC of the archive\n    #[must_use]\n    pub fn list(mut self) -> Self {\n        self.list = true;\n        self\n    }\n\n    /// verbose mode\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// restore only the data, no schema\n    #[must_use]\n    pub fn data_only(mut self) -> Self {\n        self.data_only = true;\n        self\n    }\n\n    /// clean (drop) database objects before recreating\n    #[must_use]\n    pub fn clean(mut self) 
-> Self {\n        self.clean = true;\n        self\n    }\n\n    /// create the target database\n    #[must_use]\n    pub fn create(mut self) -> Self {\n        self.create = true;\n        self\n    }\n\n    /// exit on error, default is to continue\n    #[must_use]\n    pub fn exit_on_error(mut self) -> Self {\n        self.exit_on_error = true;\n        self\n    }\n\n    /// restore named index\n    #[must_use]\n    pub fn index<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.index = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// use this many parallel jobs to restore\n    #[must_use]\n    pub fn jobs<S: AsRef<OsStr>>(mut self, num: S) -> Self {\n        self.jobs = Some(num.as_ref().to_os_string());\n        self\n    }\n\n    /// use table of contents from this file for selecting/ordering output\n    #[must_use]\n    pub fn use_list<S: AsRef<OsStr>>(mut self, filename: S) -> Self {\n        self.use_list = Some(filename.as_ref().to_os_string());\n        self\n    }\n\n    /// restore only objects in this schema\n    #[must_use]\n    pub fn schema<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.schema = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// do not restore objects in this schema\n    #[must_use]\n    pub fn exclude_schema<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.exclude_schema = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// skip restoration of object ownership\n    #[must_use]\n    pub fn no_owner(mut self) -> Self {\n        self.no_owner = true;\n        self\n    }\n\n    /// restore named function\n    #[must_use]\n    pub fn function<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.function = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// restore only the schema, no data\n    #[must_use]\n    pub fn schema_only(mut self) -> Self {\n        self.schema_only = true;\n        self\n    }\n\n    /// superuser 
user name to use for disabling triggers\n    #[must_use]\n    pub fn superuser<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.superuser = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// restore named relation (table, view, etc.)\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.table = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// restore named trigger\n    #[must_use]\n    pub fn trigger<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.trigger = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// skip restoration of access privileges (grant/revoke)\n    #[must_use]\n    pub fn no_privileges(mut self) -> Self {\n        self.no_privileges = true;\n        self\n    }\n\n    /// restore as a single transaction\n    #[must_use]\n    pub fn single_transaction(mut self) -> Self {\n        self.single_transaction = true;\n        self\n    }\n\n    /// disable triggers during data-only restore\n    #[must_use]\n    pub fn disable_triggers(mut self) -> Self {\n        self.disable_triggers = true;\n        self\n    }\n\n    /// enable row security\n    #[must_use]\n    pub fn enable_row_security(mut self) -> Self {\n        self.enable_row_security = true;\n        self\n    }\n\n    /// use IF EXISTS when dropping objects\n    #[must_use]\n    pub fn if_exists(mut self) -> Self {\n        self.if_exists = true;\n        self\n    }\n\n    /// do not restore comments\n    #[must_use]\n    pub fn no_comments(mut self) -> Self {\n        self.no_comments = true;\n        self\n    }\n\n    /// do not restore data of tables that could not be created\n    #[must_use]\n    pub fn no_data_for_failed_tables(mut self) -> Self {\n        self.no_data_for_failed_tables = true;\n        self\n    }\n\n    /// do not restore publications\n    #[must_use]\n    pub fn no_publications(mut self) -> Self {\n        self.no_publications = true;\n        self\n    
}\n\n    /// do not restore security labels\n    #[must_use]\n    pub fn no_security_labels(mut self) -> Self {\n        self.no_security_labels = true;\n        self\n    }\n\n    /// do not restore subscriptions\n    #[must_use]\n    pub fn no_subscriptions(mut self) -> Self {\n        self.no_subscriptions = true;\n        self\n    }\n\n    /// do not restore table access methods\n    #[must_use]\n    pub fn no_table_access_method(mut self) -> Self {\n        self.no_table_access_method = true;\n        self\n    }\n\n    /// do not restore tablespace assignments\n    #[must_use]\n    pub fn no_tablespaces(mut self) -> Self {\n        self.no_tablespaces = true;\n        self\n    }\n\n    /// restore named section (pre-data, data, or post-data)\n    #[must_use]\n    pub fn section<S: AsRef<OsStr>>(mut self, section: S) -> Self {\n        self.section = Some(section.as_ref().to_os_string());\n        self\n    }\n\n    /// require table and/or schema include patterns to match at least one entity each\n    #[must_use]\n    pub fn strict_names(mut self) -> Self {\n        self.strict_names = true;\n        self\n    }\n\n    /// use SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to set ownership\n    #[must_use]\n    pub fn use_set_session_authorization(mut self) -> Self {\n        self.use_set_session_authorization = true;\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, hostname: S) -> Self {\n        self.host = Some(hostname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// connect as specified database user\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.username = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// never 
prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt (should happen automatically)\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// do SET ROLE before restore\n    #[must_use]\n    pub fn role<S: AsRef<OsStr>>(mut self, rolename: S) -> Self {\n        self.role = Some(rolename.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgRestoreBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_restore\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(name) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(name.into());\n        }\n\n        if let Some(filename) = &self.file {\n            args.push(\"--file\".into());\n            args.push(filename.into());\n        }\n\n        if let Some(format) = &self.format {\n            args.push(\"--format\".into());\n            args.push(format.into());\n        }\n\n        if self.list {\n            args.push(\"--list\".into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if self.data_only {\n            
args.push(\"--data-only\".into());\n        }\n\n        if self.clean {\n            args.push(\"--clean\".into());\n        }\n\n        if self.create {\n            args.push(\"--create\".into());\n        }\n\n        if self.exit_on_error {\n            args.push(\"--exit-on-error\".into());\n        }\n\n        if let Some(name) = &self.index {\n            args.push(\"--index\".into());\n            args.push(name.into());\n        }\n\n        if let Some(num) = &self.jobs {\n            args.push(\"--jobs\".into());\n            args.push(num.into());\n        }\n\n        if let Some(filename) = &self.use_list {\n            args.push(\"--use-list\".into());\n            args.push(filename.into());\n        }\n\n        if let Some(name) = &self.schema {\n            args.push(\"--schema\".into());\n            args.push(name.into());\n        }\n\n        if let Some(name) = &self.exclude_schema {\n            args.push(\"--exclude-schema\".into());\n            args.push(name.into());\n        }\n\n        if self.no_owner {\n            args.push(\"--no-owner\".into());\n        }\n\n        if let Some(name) = &self.function {\n            args.push(\"--function\".into());\n            args.push(name.into());\n        }\n\n        if self.schema_only {\n            args.push(\"--schema-only\".into());\n        }\n\n        if let Some(name) = &self.superuser {\n            args.push(\"--superuser\".into());\n            args.push(name.into());\n        }\n\n        if let Some(name) = &self.table {\n            args.push(\"--table\".into());\n            args.push(name.into());\n        }\n\n        if let Some(name) = &self.trigger {\n            args.push(\"--trigger\".into());\n            args.push(name.into());\n        }\n\n        if self.no_privileges {\n            args.push(\"--no-privileges\".into());\n        }\n\n        if self.single_transaction {\n            args.push(\"--single-transaction\".into());\n        }\n\n        if 
self.disable_triggers {\n            args.push(\"--disable-triggers\".into());\n        }\n\n        if self.enable_row_security {\n            args.push(\"--enable-row-security\".into());\n        }\n\n        if self.if_exists {\n            args.push(\"--if-exists\".into());\n        }\n\n        if self.no_comments {\n            args.push(\"--no-comments\".into());\n        }\n\n        if self.no_data_for_failed_tables {\n            args.push(\"--no-data-for-failed-tables\".into());\n        }\n\n        if self.no_publications {\n            args.push(\"--no-publications\".into());\n        }\n\n        if self.no_security_labels {\n            args.push(\"--no-security-labels\".into());\n        }\n\n        if self.no_subscriptions {\n            args.push(\"--no-subscriptions\".into());\n        }\n\n        if self.no_table_access_method {\n            args.push(\"--no-table-access-method\".into());\n        }\n\n        if self.no_tablespaces {\n            args.push(\"--no-tablespaces\".into());\n        }\n\n        if let Some(section) = &self.section {\n            args.push(\"--section\".into());\n            args.push(section.into());\n        }\n\n        if self.strict_names {\n            args.push(\"--strict-names\".into());\n        }\n\n        if self.use_set_session_authorization {\n            args.push(\"--use-set-session-authorization\".into());\n        }\n\n        if let Some(hostname) = &self.host {\n            args.push(\"--host\".into());\n            args.push(hostname.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(name) = &self.username {\n            args.push(\"--username\".into());\n            args.push(name.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            
args.push(\"--password\".into());\n        }\n\n        if let Some(role) = &self.role {\n            args.push(\"--role\".into());\n            args.push(role.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgRestoreBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_restore\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgRestoreBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_restore\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_restore\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgRestoreBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = 
\"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./pg_restore\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_restore\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = PgRestoreBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .dbname(\"dbname\")\n            .file(\"file\")\n            .format(\"format\")\n            .list()\n            .verbose()\n            .version()\n            .help()\n            .data_only()\n            .clean()\n            .create()\n            .exit_on_error()\n            .index(\"index\")\n            .jobs(\"jobs\")\n            .use_list(\"use_list\")\n            .schema(\"schema\")\n            .exclude_schema(\"exclude_schema\")\n            .no_owner()\n            .function(\"function\")\n            .schema_only()\n            .superuser(\"superuser\")\n            .table(\"table\")\n            .trigger(\"trigger\")\n            .no_privileges()\n            .single_transaction()\n            .disable_triggers()\n            .enable_row_security()\n            .if_exists()\n            .no_comments()\n            .no_data_for_failed_tables()\n            .no_publications()\n            .no_security_labels()\n            .no_subscriptions()\n            .no_table_access_method()\n            .no_tablespaces()\n            .section(\"section\")\n            .strict_names()\n            .use_set_session_authorization()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .role(\"role\")\n            .build();\n        
#[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_restore\" \"--dbname\" \"dbname\" \"--file\" \"file\" \"--format\" \"format\" \"--list\" \"--verbose\" \"--version\" \"--help\" \"--data-only\" \"--clean\" \"--create\" \"--exit-on-error\" \"--index\" \"index\" \"--jobs\" \"jobs\" \"--use-list\" \"use_list\" \"--schema\" \"schema\" \"--exclude-schema\" \"exclude_schema\" \"--no-owner\" \"--function\" \"function\" \"--schema-only\" \"--superuser\" \"superuser\" \"--table\" \"table\" \"--trigger\" \"trigger\" \"--no-privileges\" \"--single-transaction\" \"--disable-triggers\" \"--enable-row-security\" \"--if-exists\" \"--no-comments\" \"--no-data-for-failed-tables\" \"--no-publications\" \"--no-security-labels\" \"--no-subscriptions\" \"--no-table-access-method\" \"--no-tablespaces\" \"--section\" \"section\" \"--strict-names\" \"--use-set-session-authorization\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\" \"--role\" \"role\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_rewind.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_rewind` synchronizes a `PostgreSQL` data directory with another data directory.\n#[derive(Clone, Debug, Default)]\npub struct PgRewindBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    restore_target_wal: bool,\n    target_pgdata: Option<PathBuf>,\n    source_pgdata: Option<PathBuf>,\n    source_server: Option<OsString>,\n    dry_run: bool,\n    no_sync: bool,\n    progress: bool,\n    write_recovery_conf: bool,\n    config_file: Option<OsString>,\n    debug: bool,\n    no_ensure_shutdown: bool,\n    version: bool,\n    help: bool,\n}\n\nimpl PgRewindBuilder {\n    /// Create a new [`PgRewindBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgRewindBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// use `restore_command` in target configuration to retrieve WAL files from archives\n    #[must_use]\n    pub fn restore_target_wal(mut self) -> Self {\n        self.restore_target_wal = true;\n        self\n    }\n\n    /// existing data directory to modify\n    #[must_use]\n    pub fn target_pgdata<P: Into<PathBuf>>(mut self, directory: P) -> Self {\n        self.target_pgdata = Some(directory.into());\n        self\n    }\n\n    /// source data directory to synchronize with\n    #[must_use]\n    pub fn source_pgdata<P: Into<PathBuf>>(mut self, directory: P) -> Self {\n        self.source_pgdata = Some(directory.into());\n        self\n    }\n\n    /// source server to synchronize with\n    #[must_use]\n    
pub fn source_server<S: AsRef<OsStr>>(mut self, connstr: S) -> Self {\n        self.source_server = Some(connstr.as_ref().to_os_string());\n        self\n    }\n\n    /// stop before modifying anything\n    #[must_use]\n    pub fn dry_run(mut self) -> Self {\n        self.dry_run = true;\n        self\n    }\n\n    /// do not wait for changes to be written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// write progress messages\n    #[must_use]\n    pub fn progress(mut self) -> Self {\n        self.progress = true;\n        self\n    }\n\n    /// write configuration for replication (requires --source-server)\n    #[must_use]\n    pub fn write_recovery_conf(mut self) -> Self {\n        self.write_recovery_conf = true;\n        self\n    }\n\n    /// use specified main server configuration file when running target cluster\n    #[must_use]\n    pub fn config_file<S: AsRef<OsStr>>(mut self, filename: S) -> Self {\n        self.config_file = Some(filename.as_ref().to_os_string());\n        self\n    }\n\n    /// write a lot of debug messages\n    #[must_use]\n    pub fn debug(mut self) -> Self {\n        self.debug = true;\n        self\n    }\n\n    /// do not automatically fix unclean shutdown\n    #[must_use]\n    pub fn no_ensure_shutdown(mut self) -> Self {\n        self.no_ensure_shutdown = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgRewindBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_rewind\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        
&self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.restore_target_wal {\n            args.push(\"--restore-target-wal\".into());\n        }\n\n        if let Some(directory) = &self.target_pgdata {\n            args.push(\"--target-pgdata\".into());\n            args.push(directory.into());\n        }\n\n        if let Some(directory) = &self.source_pgdata {\n            args.push(\"--source-pgdata\".into());\n            args.push(directory.into());\n        }\n\n        if let Some(connstr) = &self.source_server {\n            args.push(\"--source-server\".into());\n            args.push(connstr.into());\n        }\n\n        if self.dry_run {\n            args.push(\"--dry-run\".into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if self.progress {\n            args.push(\"--progress\".into());\n        }\n\n        if self.write_recovery_conf {\n            args.push(\"--write-recovery-conf\".into());\n        }\n\n        if let Some(filename) = &self.config_file {\n            args.push(\"--config-file\".into());\n            args.push(filename.into());\n        }\n\n        if self.debug {\n            args.push(\"--debug\".into());\n        }\n\n        if self.no_ensure_shutdown {\n            args.push(\"--no-ensure-shutdown\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), 
value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgRewindBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_rewind\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgRewindBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_rewind\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_rewind\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgRewindBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .restore_target_wal()\n            .target_pgdata(\"target_pgdata\")\n            .source_pgdata(\"source_pgdata\")\n            .source_server(\"source_server\")\n            .dry_run()\n            .no_sync()\n            .progress()\n            .write_recovery_conf()\n            .config_file(\"config_file\")\n            .debug()\n            .no_ensure_shutdown()\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_rewind\" \"--restore-target-wal\" \"--target-pgdata\" \"target_pgdata\" \"--source-pgdata\" \"source_pgdata\" \"--source-server\" \"source_server\" \"--dry-run\" \"--no-sync\" \"--progress\" \"--write-recovery-conf\" \"--config-file\" 
\"config_file\" \"--debug\" \"--no-ensure-shutdown\" \"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_test_fsync.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_test_fsync` command to determine fastest `wal_sync_method` for `PostgreSQL`\n#[derive(Clone, Debug, Default)]\npub struct PgTestFsyncBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    filename: Option<OsString>,\n    secs_per_test: Option<usize>,\n}\n\nimpl PgTestFsyncBuilder {\n    /// Create a new [`PgTestFsyncBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgTestFsyncBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// Set the filename\n    #[must_use]\n    pub fn filename<S: AsRef<OsStr>>(mut self, filename: S) -> Self {\n        self.filename = Some(filename.as_ref().to_os_string());\n        self\n    }\n\n    /// Set the seconds per test\n    #[must_use]\n    pub fn secs_per_test(mut self, secs: usize) -> Self {\n        self.secs_per_test = Some(secs);\n        self\n    }\n}\n\nimpl CommandBuilder for PgTestFsyncBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_test_fsync\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(filename) = &self.filename {\n            args.push(\"-f\".into());\n            args.push(filename.into());\n        }\n\n        if let Some(secs) = &self.secs_per_test 
{\n            args.push(\"-s\".into());\n            args.push(secs.to_string().into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgTestFsyncBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_test_fsync\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgTestFsyncBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_test_fsync\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_test_fsync\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgTestFsyncBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .filename(\"filename\")\n            .secs_per_test(10)\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(r#\"{command_prefix}\"pg_test_fsync\" \"-f\" \"filename\" \"-s\" \"10\"\"#),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_test_timing.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_test_timing` tests the timing of a `PostgreSQL` instance.\n#[derive(Clone, Debug, Default)]\npub struct PgTestTimingBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    duration: Option<OsString>,\n}\n\nimpl PgTestTimingBuilder {\n    /// Create a new [`PgTestTimingBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgTestTimingBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// set the duration for the test\n    #[must_use]\n    pub fn duration<S: AsRef<OsStr>>(mut self, duration: S) -> Self {\n        self.duration = Some(duration.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PgTestTimingBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_test_timing\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(duration) = &self.duration {\n            args.push(\"-d\".into());\n            args.push(duration.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, 
key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgTestTimingBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_test_timing\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgTestTimingBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_test_timing\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_test_timing\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgTestTimingBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .duration(\"10\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(r#\"{command_prefix}\"pg_test_timing\" \"-d\" \"10\"\"#),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_upgrade.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_upgrade` upgrades a `PostgreSQL` cluster to a different major version.\n#[derive(Clone, Debug, Default)]\npub struct PgUpgradeBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    old_bindir: Option<OsString>,\n    new_bindir: Option<OsString>,\n    check: bool,\n    old_datadir: Option<OsString>,\n    new_datadir: Option<OsString>,\n    jobs: Option<OsString>,\n    link: bool,\n    no_sync: bool,\n    old_options: Option<OsString>,\n    new_options: Option<OsString>,\n    old_port: Option<u16>,\n    new_port: Option<u16>,\n    retain: bool,\n    socketdir: Option<OsString>,\n    username: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    clone: bool,\n    copy: bool,\n    help: bool,\n}\n\nimpl PgUpgradeBuilder {\n    /// Create a new [`PgUpgradeBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgUpgradeBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// old cluster executable directory\n    #[must_use]\n    pub fn old_bindir<S: AsRef<OsStr>>(mut self, old_bindir: S) -> Self {\n        self.old_bindir = Some(old_bindir.as_ref().to_os_string());\n        self\n    }\n\n    /// new cluster executable directory\n    #[must_use]\n    pub fn new_bindir<S: AsRef<OsStr>>(mut self, new_bindir: S) -> Self {\n        self.new_bindir = Some(new_bindir.as_ref().to_os_string());\n        self\n    }\n\n    /// check clusters only, don't change any data\n    #[must_use]\n    pub fn check(mut self) -> Self 
{\n        self.check = true;\n        self\n    }\n\n    /// old cluster data directory\n    #[must_use]\n    pub fn old_datadir<S: AsRef<OsStr>>(mut self, old_datadir: S) -> Self {\n        self.old_datadir = Some(old_datadir.as_ref().to_os_string());\n        self\n    }\n\n    /// new cluster data directory\n    #[must_use]\n    pub fn new_datadir<S: AsRef<OsStr>>(mut self, new_datadir: S) -> Self {\n        self.new_datadir = Some(new_datadir.as_ref().to_os_string());\n        self\n    }\n\n    /// number of simultaneous processes or threads to use\n    #[must_use]\n    pub fn jobs<S: AsRef<OsStr>>(mut self, jobs: S) -> Self {\n        self.jobs = Some(jobs.as_ref().to_os_string());\n        self\n    }\n\n    /// link instead of copying files to new cluster\n    #[must_use]\n    pub fn link(mut self) -> Self {\n        self.link = true;\n        self\n    }\n\n    /// do not wait for changes to be written safely to disk\n    #[must_use]\n    pub fn no_sync(mut self) -> Self {\n        self.no_sync = true;\n        self\n    }\n\n    /// old cluster options to pass to the server\n    #[must_use]\n    pub fn old_options<S: AsRef<OsStr>>(mut self, old_options: S) -> Self {\n        self.old_options = Some(old_options.as_ref().to_os_string());\n        self\n    }\n\n    /// new cluster options to pass to the server\n    #[must_use]\n    pub fn new_options<S: AsRef<OsStr>>(mut self, new_options: S) -> Self {\n        self.new_options = Some(new_options.as_ref().to_os_string());\n        self\n    }\n\n    /// old cluster port number\n    #[must_use]\n    pub fn old_port(mut self, old_port: u16) -> Self {\n        self.old_port = Some(old_port);\n        self\n    }\n\n    /// new cluster port number\n    #[must_use]\n    pub fn new_port(mut self, new_port: u16) -> Self {\n        self.new_port = Some(new_port);\n        self\n    }\n\n    /// retain SQL and log files after success\n    #[must_use]\n    pub fn retain(mut self) -> Self {\n        self.retain = 
true;\n        self\n    }\n\n    /// socket directory to use\n    #[must_use]\n    pub fn socketdir<S: AsRef<OsStr>>(mut self, socketdir: S) -> Self {\n        self.socketdir = Some(socketdir.as_ref().to_os_string());\n        self\n    }\n\n    /// cluster superuser\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// enable verbose internal logging\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// display version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// clone instead of copying files to new cluster\n    #[must_use]\n    pub fn clone(mut self) -> Self {\n        self.clone = true;\n        self\n    }\n\n    /// copy files to new cluster\n    #[must_use]\n    pub fn copy(mut self) -> Self {\n        self.copy = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgUpgradeBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_upgrade\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(old_bindir) = &self.old_bindir {\n            args.push(\"--old-bindir\".into());\n            args.push(old_bindir.into());\n        }\n\n        if let Some(new_bindir) = &self.new_bindir {\n            args.push(\"--new-bindir\".into());\n            args.push(new_bindir.into());\n        }\n\n        if self.check {\n            
args.push(\"--check\".into());\n        }\n\n        if let Some(old_datadir) = &self.old_datadir {\n            args.push(\"--old-datadir\".into());\n            args.push(old_datadir.into());\n        }\n\n        if let Some(new_datadir) = &self.new_datadir {\n            args.push(\"--new-datadir\".into());\n            args.push(new_datadir.into());\n        }\n\n        if let Some(jobs) = &self.jobs {\n            args.push(\"--jobs\".into());\n            args.push(jobs.into());\n        }\n\n        if self.link {\n            args.push(\"--link\".into());\n        }\n\n        if self.no_sync {\n            args.push(\"--no-sync\".into());\n        }\n\n        if let Some(old_options) = &self.old_options {\n            args.push(\"--old-options\".into());\n            args.push(old_options.into());\n        }\n\n        if let Some(new_options) = &self.new_options {\n            args.push(\"--new-options\".into());\n            args.push(new_options.into());\n        }\n\n        if let Some(old_port) = &self.old_port {\n            args.push(\"--old-port\".into());\n            args.push(old_port.to_string().into());\n        }\n\n        if let Some(new_port) = &self.new_port {\n            args.push(\"--new-port\".into());\n            args.push(new_port.to_string().into());\n        }\n\n        if self.retain {\n            args.push(\"--retain\".into());\n        }\n\n        if let Some(socketdir) = &self.socketdir {\n            args.push(\"--socketdir\".into());\n            args.push(socketdir.into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.clone {\n            args.push(\"--clone\".into());\n        }\n\n        if self.copy {\n        
    args.push(\"--copy\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgUpgradeBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_upgrade\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgUpgradeBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_upgrade\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_upgrade\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgUpgradeBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .old_bindir(\"old\")\n            .new_bindir(\"new\")\n            .check()\n            .old_datadir(\"old_data\")\n            .new_datadir(\"new_data\")\n            .jobs(\"10\")\n            .link()\n            .no_sync()\n            .old_options(\"old\")\n            .new_options(\"new\")\n            .old_port(5432)\n            .new_port(5433)\n            .retain()\n            .socketdir(\"socket\")\n            .username(\"user\")\n            
.verbose()\n            .version()\n            .clone()\n            .copy()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_upgrade\" \"--old-bindir\" \"old\" \"--new-bindir\" \"new\" \"--check\" \"--old-datadir\" \"old_data\" \"--new-datadir\" \"new_data\" \"--jobs\" \"10\" \"--link\" \"--no-sync\" \"--old-options\" \"old\" \"--new-options\" \"new\" \"--old-port\" \"5432\" \"--new-port\" \"5433\" \"--retain\" \"--socketdir\" \"socket\" \"--username\" \"user\" \"--verbose\" \"--version\" \"--clone\" \"--copy\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_verifybackup.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_verifybackup` verifies a backup against the backup manifest.\n#[derive(Clone, Debug, Default)]\npub struct PgVerifyBackupBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    exit_on_error: bool,\n    ignore: Option<OsString>,\n    manifest_path: Option<OsString>,\n    no_parse_wal: bool,\n    progress: bool,\n    quiet: bool,\n    skip_checksums: bool,\n    wal_directory: Option<OsString>,\n    version: bool,\n    help: bool,\n}\n\nimpl PgVerifyBackupBuilder {\n    /// Create a new [`PgVerifyBackupBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgVerifyBackupBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// exit immediately on error\n    #[must_use]\n    pub fn exit_on_error(mut self) -> Self {\n        self.exit_on_error = true;\n        self\n    }\n\n    /// ignore indicated path\n    #[must_use]\n    pub fn ignore<S: AsRef<OsStr>>(mut self, ignore: S) -> Self {\n        self.ignore = Some(ignore.as_ref().to_os_string());\n        self\n    }\n\n    /// use specified path for manifest\n    #[must_use]\n    pub fn manifest_path<S: AsRef<OsStr>>(mut self, manifest_path: S) -> Self {\n        self.manifest_path = Some(manifest_path.as_ref().to_os_string());\n        self\n    }\n\n    /// do not try to parse WAL files\n    #[must_use]\n    pub fn no_parse_wal(mut self) -> Self {\n        self.no_parse_wal = true;\n        self\n    }\n\n    /// show progress information\n    #[must_use]\n    pub fn 
progress(mut self) -> Self {\n        self.progress = true;\n        self\n    }\n\n    /// do not print any output, except for errors\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// skip checksum verification\n    #[must_use]\n    pub fn skip_checksums(mut self) -> Self {\n        self.skip_checksums = true;\n        self\n    }\n\n    /// use specified path for WAL files\n    #[must_use]\n    pub fn wal_directory<S: AsRef<OsStr>>(mut self, wal_directory: S) -> Self {\n        self.wal_directory = Some(wal_directory.as_ref().to_os_string());\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgVerifyBackupBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_verifybackup\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.exit_on_error {\n            args.push(\"--exit-on-error\".into());\n        }\n\n        if let Some(ignore) = &self.ignore {\n            args.push(\"--ignore\".into());\n            args.push(ignore.into());\n        }\n\n        if let Some(manifest_path) = &self.manifest_path {\n            args.push(\"--manifest-path\".into());\n            args.push(manifest_path.into());\n        }\n\n        if self.no_parse_wal {\n            args.push(\"--no-parse-wal\".into());\n        }\n\n        if self.progress {\n            args.push(\"--progress\".into());\n        }\n\n        if self.quiet {\n    
        args.push(\"--quiet\".into());\n        }\n\n        if self.skip_checksums {\n            args.push(\"--skip-checksums\".into());\n        }\n\n        if let Some(wal_directory) = &self.wal_directory {\n            args.push(\"--wal-directory\".into());\n            args.push(wal_directory.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgVerifyBackupBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_verifybackup\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgVerifyBackupBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_verifybackup\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_verifybackup\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgVerifyBackupBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .exit_on_error()\n            .ignore(\"ignore\")\n            
.manifest_path(\"manifest-path\")\n            .no_parse_wal()\n            .progress()\n            .quiet()\n            .skip_checksums()\n            .wal_directory(\"wal_directory\")\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_verifybackup\" \"--exit-on-error\" \"--ignore\" \"ignore\" \"--manifest-path\" \"manifest-path\" \"--no-parse-wal\" \"--progress\" \"--quiet\" \"--skip-checksums\" \"--wal-directory\" \"wal_directory\" \"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pg_waldump.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pg_waldump` decodes and displays `PostgreSQL` write-ahead logs for debugging.\n#[derive(Clone, Debug, Default)]\npub struct PgWalDumpBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    backkup_details: bool,\n    block: Option<OsString>,\n    end: Option<OsString>,\n    follow: bool,\n    fork: Option<OsString>,\n    limit: Option<OsString>,\n    path: Option<OsString>,\n    quiet: bool,\n    rmgr: Option<OsString>,\n    relation: Option<OsString>,\n    start: Option<OsString>,\n    timeline: Option<OsString>,\n    version: bool,\n    fullpage: bool,\n    xid: Option<OsString>,\n    stats: Option<OsString>,\n    save_fullpage: Option<OsString>,\n    help: bool,\n}\n\nimpl PgWalDumpBuilder {\n    /// Create a new [`PgWalDumpBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgWalDumpBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        Self::new().program_dir(settings.get_binary_dir())\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// output detailed information about backup blocks\n    #[must_use]\n    pub fn backup_details(mut self) -> Self {\n        self.backkup_details = true;\n        self\n    }\n\n    /// with --relation, only show records that modify block N\n    #[must_use]\n    pub fn block<S: AsRef<OsStr>>(mut self, block: S) -> Self {\n        self.block = Some(block.as_ref().to_os_string());\n        self\n    }\n\n    /// stop reading at WAL location RECPTR\n    #[must_use]\n    pub fn end<S: AsRef<OsStr>>(mut self, end: S) -> Self {\n        self.end = Some(end.as_ref().to_os_string());\n        
self\n    }\n\n    /// keep retrying after reaching end of WAL\n    #[must_use]\n    pub fn follow(mut self) -> Self {\n        self.follow = true;\n        self\n    }\n\n    /// only show records that modify blocks in fork FORK\n    #[must_use]\n    pub fn fork<S: AsRef<OsStr>>(mut self, fork: S) -> Self {\n        self.fork = Some(fork.as_ref().to_os_string());\n        self\n    }\n\n    /// number of records to display\n    #[must_use]\n    pub fn limit<S: AsRef<OsStr>>(mut self, limit: S) -> Self {\n        self.limit = Some(limit.as_ref().to_os_string());\n        self\n    }\n\n    /// directory in which to find WAL segment files\n    #[must_use]\n    pub fn path<S: AsRef<OsStr>>(mut self, path: S) -> Self {\n        self.path = Some(path.as_ref().to_os_string());\n        self\n    }\n\n    /// do not print any output, except for errors\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// only show records generated by resource manager RMGR\n    #[must_use]\n    pub fn rmgr<S: AsRef<OsStr>>(mut self, rmgr: S) -> Self {\n        self.rmgr = Some(rmgr.as_ref().to_os_string());\n        self\n    }\n\n    /// only show records that modify blocks in relation T/D/R\n    #[must_use]\n    pub fn relation<S: AsRef<OsStr>>(mut self, relation: S) -> Self {\n        self.relation = Some(relation.as_ref().to_os_string());\n        self\n    }\n\n    /// start reading at WAL location RECPTR\n    #[must_use]\n    pub fn start<S: AsRef<OsStr>>(mut self, start: S) -> Self {\n        self.start = Some(start.as_ref().to_os_string());\n        self\n    }\n\n    /// timeline from which to read WAL records\n    #[must_use]\n    pub fn timeline<S: AsRef<OsStr>>(mut self, timeline: S) -> Self {\n        self.timeline = Some(timeline.as_ref().to_os_string());\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n    
    self\n    }\n\n    /// only show records with a full page write\n    #[must_use]\n    pub fn fullpage(mut self) -> Self {\n        self.fullpage = true;\n        self\n    }\n\n    /// only show records with transaction ID XID\n    #[must_use]\n    pub fn xid<S: AsRef<OsStr>>(mut self, xid: S) -> Self {\n        self.xid = Some(xid.as_ref().to_os_string());\n        self\n    }\n\n    /// show statistics instead of records\n    #[must_use]\n    pub fn stats<S: AsRef<OsStr>>(mut self, stats: S) -> Self {\n        self.stats = Some(stats.as_ref().to_os_string());\n        self\n    }\n\n    /// save full page images to DIR\n    #[must_use]\n    pub fn save_fullpage<S: AsRef<OsStr>>(mut self, save_fullpage: S) -> Self {\n        self.save_fullpage = Some(save_fullpage.as_ref().to_os_string());\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgWalDumpBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pg_waldump\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.backkup_details {\n            args.push(\"--bkp-details\".into());\n        }\n\n        if let Some(block) = &self.block {\n            args.push(\"--block\".into());\n            args.push(block.into());\n        }\n\n        if let Some(end) = &self.end {\n            args.push(\"--end\".into());\n            args.push(end.into());\n        }\n\n        if self.follow {\n            args.push(\"--follow\".into());\n        }\n\n        if let Some(fork) = &self.fork {\n            args.push(\"--fork\".into());\n            args.push(fork.into());\n        }\n\n        if let 
Some(limit) = &self.limit {\n            args.push(\"--limit\".into());\n            args.push(limit.into());\n        }\n\n        if let Some(path) = &self.path {\n            args.push(\"--path\".into());\n            args.push(path.into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if let Some(rmgr) = &self.rmgr {\n            args.push(\"--rmgr\".into());\n            args.push(rmgr.into());\n        }\n\n        if let Some(relation) = &self.relation {\n            args.push(\"--relation\".into());\n            args.push(relation.into());\n        }\n\n        if let Some(start) = &self.start {\n            args.push(\"--start\".into());\n            args.push(start.into());\n        }\n\n        if let Some(timeline) = &self.timeline {\n            args.push(\"--timeline\".into());\n            args.push(timeline.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.fullpage {\n            args.push(\"--fullpage\".into());\n        }\n\n        if let Some(xid) = &self.xid {\n            args.push(\"--xid\".into());\n            args.push(xid.into());\n        }\n\n        if let Some(stats) = &self.stats {\n            args.push(\"--stats\".into());\n            args.push(stats.into());\n        }\n\n        if let Some(save_fullpage) = &self.save_fullpage {\n            args.push(\"--save-fullpage\".into());\n            args.push(save_fullpage.into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), 
value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgWalDumpBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pg_waldump\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgWalDumpBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pg_waldump\"\"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pg_waldump\"\"#;\n\n        assert_eq!(format!(\"{command_prefix}\"), command.to_command_string());\n    }\n\n    #[test]\n    fn test_builder() {\n        let command = PgWalDumpBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .backup_details()\n            .block(\"block\")\n            .end(\"end\")\n            .follow()\n            .fork(\"fork\")\n            .limit(\"limit\")\n            .path(\"path\")\n            .quiet()\n            .rmgr(\"rmgr\")\n            .relation(\"relation\")\n            .start(\"start\")\n            .timeline(\"timeline\")\n            .version()\n            .fullpage()\n            .xid(\"xid\")\n            .stats(\"stats\")\n            .save_fullpage(\"save_fullpage\")\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pg_waldump\" \"--bkp-details\" \"--block\" \"block\" \"--end\" \"end\" \"--follow\" \"--fork\" \"fork\" \"--limit\" \"limit\" 
\"--path\" \"path\" \"--quiet\" \"--rmgr\" \"rmgr\" \"--relation\" \"relation\" \"--start\" \"start\" \"--timeline\" \"timeline\" \"--version\" \"--fullpage\" \"--xid\" \"xid\" \"--stats\" \"stats\" \"--save-fullpage\" \"save_fullpage\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/pgbench.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `pgbench` is a benchmarking tool for `PostgreSQL`.\n#[derive(Clone, Debug, Default)]\npub struct PgBenchBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    initialize: bool,\n    init_steps: Option<OsString>,\n    fill_factor: Option<usize>,\n    no_vacuum: bool,\n    quiet: bool,\n    scale: Option<usize>,\n    foreign_keys: bool,\n    index_tablespace: Option<OsString>,\n    partition_method: Option<OsString>,\n    partitions: Option<usize>,\n    tablespace: Option<OsString>,\n    unlogged_tables: bool,\n    builtin: Option<OsString>,\n    file: Option<OsString>,\n    skip_some_updates: bool,\n    select_only: bool,\n    client: Option<usize>,\n    connect: bool,\n    define: Option<OsString>,\n    jobs: Option<usize>,\n    log: bool,\n    latency_limit: Option<usize>,\n    protocol: Option<OsString>,\n    no_vacuum_bench: bool,\n    progress: Option<usize>,\n    report_per_command: bool,\n    rate: Option<usize>,\n    scale_bench: Option<usize>,\n    transactions: Option<usize>,\n    time: Option<usize>,\n    vacuum_all: bool,\n    aggregate_interval: Option<usize>,\n    failures_detailed: bool,\n    log_prefix: Option<OsString>,\n    max_tries: Option<usize>,\n    progress_timestamp: bool,\n    random_seed: Option<OsString>,\n    sampling_rate: Option<f64>,\n    show_script: Option<OsString>,\n    verbose_errors: bool,\n    debug: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    version: bool,\n    help: bool,\n}\n\nimpl PgBenchBuilder {\n    /// Create a new [`PgBenchBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PgBenchBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            
.program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// invokes initialization mode\n    #[must_use]\n    pub fn initialize(mut self) -> Self {\n        self.initialize = true;\n        self\n    }\n\n    /// run selected initialization steps\n    #[must_use]\n    pub fn init_steps<S: AsRef<OsStr>>(mut self, steps: S) -> Self {\n        self.init_steps = Some(steps.as_ref().to_os_string());\n        self\n    }\n\n    /// set fill factor\n    #[must_use]\n    pub fn fill_factor(mut self, factor: usize) -> Self {\n        self.fill_factor = Some(factor);\n        self\n    }\n\n    /// do not run VACUUM during initialization\n    #[must_use]\n    pub fn no_vacuum(mut self) -> Self {\n        self.no_vacuum = true;\n        self\n    }\n\n    /// quiet logging (one message each 5 seconds)\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// scaling factor\n    #[must_use]\n    pub fn scale(mut self, scale: usize) -> Self {\n        self.scale = Some(scale);\n        self\n    }\n\n    /// create foreign key constraints between tables\n    #[must_use]\n    pub fn foreign_keys(mut self) -> Self {\n        self.foreign_keys = true;\n        self\n    }\n\n    /// create indexes in the specified tablespace\n    #[must_use]\n    pub fn index_tablespace<S: AsRef<OsStr>>(mut self, tablespace: S) -> Self {\n        self.index_tablespace = Some(tablespace.as_ref().to_os_string());\n        self\n    }\n\n    /// partition 
`pgbench_accounts` with this method (default: range)\n    #[must_use]\n    pub fn partition_method<S: AsRef<OsStr>>(mut self, method: S) -> Self {\n        self.partition_method = Some(method.as_ref().to_os_string());\n        self\n    }\n\n    /// partition `pgbench_accounts` into NUM parts (default: 0)\n    #[must_use]\n    pub fn partitions(mut self, num: usize) -> Self {\n        self.partitions = Some(num);\n        self\n    }\n\n    /// create tables in the specified tablespace\n    #[must_use]\n    pub fn tablespace<S: AsRef<OsStr>>(mut self, tablespace: S) -> Self {\n        self.tablespace = Some(tablespace.as_ref().to_os_string());\n        self\n    }\n\n    /// create tables as unlogged tables\n    #[must_use]\n    pub fn unlogged_tables(mut self) -> Self {\n        self.unlogged_tables = true;\n        self\n    }\n\n    /// add builtin script NAME weighted at W (default: 1)\n    #[must_use]\n    pub fn builtin<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.builtin = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// add script FILENAME weighted at W (default: 1)\n    #[must_use]\n    pub fn file<S: AsRef<OsStr>>(mut self, filename: S) -> Self {\n        self.file = Some(filename.as_ref().to_os_string());\n        self\n    }\n\n    /// skip some updates\n    #[must_use]\n    pub fn skip_some_updates(mut self) -> Self {\n        self.skip_some_updates = true;\n        self\n    }\n\n    /// perform SELECT-only transactions\n    #[must_use]\n    pub fn select_only(mut self) -> Self {\n        self.select_only = true;\n        self\n    }\n\n    /// number of concurrent database clients (default: 1)\n    #[must_use]\n    pub fn client(mut self, num: usize) -> Self {\n        self.client = Some(num);\n        self\n    }\n\n    /// establish new connection for each transaction\n    #[must_use]\n    pub fn connect(mut self) -> Self {\n        self.connect = true;\n        self\n    }\n\n    /// define variable for use by 
custom script\n    #[must_use]\n    pub fn define<S: AsRef<OsStr>>(mut self, var: S) -> Self {\n        self.define = Some(var.as_ref().to_os_string());\n        self\n    }\n\n    /// number of threads (default: 1)\n    #[must_use]\n    pub fn jobs(mut self, num: usize) -> Self {\n        self.jobs = Some(num);\n        self\n    }\n\n    /// write transaction times to log file\n    #[must_use]\n    pub fn log(mut self) -> Self {\n        self.log = true;\n        self\n    }\n\n    /// count transactions lasting more than NUM ms as late\n    #[must_use]\n    pub fn latency_limit(mut self, num: usize) -> Self {\n        self.latency_limit = Some(num);\n        self\n    }\n\n    /// protocol for submitting queries (default: simple)\n    #[must_use]\n    pub fn protocol<S: AsRef<OsStr>>(mut self, protocol: S) -> Self {\n        self.protocol = Some(protocol.as_ref().to_os_string());\n        self\n    }\n\n    /// do not run VACUUM before tests\n    #[must_use]\n    pub fn no_vacuum_bench(mut self) -> Self {\n        self.no_vacuum_bench = true;\n        self\n    }\n\n    /// show thread progress report every NUM seconds\n    #[must_use]\n    pub fn progress(mut self, num: usize) -> Self {\n        self.progress = Some(num);\n        self\n    }\n\n    /// report latencies, failures, and retries per command\n    #[must_use]\n    pub fn report_per_command(mut self) -> Self {\n        self.report_per_command = true;\n        self\n    }\n\n    /// target rate in transactions per second\n    #[must_use]\n    pub fn rate(mut self, num: usize) -> Self {\n        self.rate = Some(num);\n        self\n    }\n\n    /// report this scale factor in output\n    #[must_use]\n    pub fn scale_bench(mut self, scale: usize) -> Self {\n        self.scale_bench = Some(scale);\n        self\n    }\n\n    /// number of transactions each client runs (default: 10)\n    #[must_use]\n    pub fn transactions(mut self, num: usize) -> Self {\n        self.transactions = Some(num);\n        
self\n    }\n\n    /// duration of benchmark test in seconds\n    #[must_use]\n    pub fn time(mut self, num: usize) -> Self {\n        self.time = Some(num);\n        self\n    }\n\n    /// vacuum all four standard tables before tests\n    #[must_use]\n    pub fn vacuum_all(mut self) -> Self {\n        self.vacuum_all = true;\n        self\n    }\n\n    /// aggregate data over NUM seconds\n    #[must_use]\n    pub fn aggregate_interval(mut self, num: usize) -> Self {\n        self.aggregate_interval = Some(num);\n        self\n    }\n\n    /// report the failures grouped by basic types\n    #[must_use]\n    pub fn failures_detailed(mut self) -> Self {\n        self.failures_detailed = true;\n        self\n    }\n\n    /// prefix for transaction time log file\n    #[must_use]\n    pub fn log_prefix<S: AsRef<OsStr>>(mut self, prefix: S) -> Self {\n        self.log_prefix = Some(prefix.as_ref().to_os_string());\n        self\n    }\n\n    /// max number of tries to run transaction (default: 1)\n    #[must_use]\n    pub fn max_tries(mut self, num: usize) -> Self {\n        self.max_tries = Some(num);\n        self\n    }\n\n    /// use Unix epoch timestamps for progress\n    #[must_use]\n    pub fn progress_timestamp(mut self) -> Self {\n        self.progress_timestamp = true;\n        self\n    }\n\n    /// set random seed (\"time\", \"rand\", integer)\n    #[must_use]\n    pub fn random_seed<S: AsRef<OsStr>>(mut self, seed: S) -> Self {\n        self.random_seed = Some(seed.as_ref().to_os_string());\n        self\n    }\n\n    /// fraction of transactions to log (e.g., 0.01 for 1%)\n    #[must_use]\n    pub fn sampling_rate(mut self, rate: f64) -> Self {\n        self.sampling_rate = Some(rate);\n        self\n    }\n\n    /// show builtin script code, then exit\n    #[must_use]\n    pub fn show_script<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.show_script = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// print messages of all 
errors\n    #[must_use]\n    pub fn verbose_errors(mut self) -> Self {\n        self.verbose_errors = true;\n        self\n    }\n\n    /// print debugging output\n    #[must_use]\n    pub fn debug(mut self) -> Self {\n        self.debug = true;\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, hostname: S) -> Self {\n        self.host = Some(hostname.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port number\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// connect as specified database user\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PgBenchBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"pgbench\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.initialize {\n            args.push(\"--initialize\".into());\n        }\n\n        if let Some(steps) = &self.init_steps {\n            args.push(\"--init-steps\".into());\n            args.push(steps.into());\n        }\n\n        if let Some(factor) = &self.fill_factor {\n            args.push(\"--fillfactor\".into());\n            
args.push(factor.to_string().into());\n        }\n\n        if self.no_vacuum {\n            args.push(\"--no-vacuum\".into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if let Some(scale) = &self.scale {\n            args.push(\"--scale\".into());\n            args.push(scale.to_string().into());\n        }\n\n        if self.foreign_keys {\n            args.push(\"--foreign-keys\".into());\n        }\n\n        if let Some(tablespace) = &self.index_tablespace {\n            args.push(\"--index-tablespace\".into());\n            args.push(tablespace.into());\n        }\n\n        if let Some(method) = &self.partition_method {\n            args.push(\"--partition-method\".into());\n            args.push(method.into());\n        }\n\n        if let Some(num) = &self.partitions {\n            args.push(\"--partitions\".into());\n            args.push(num.to_string().into());\n        }\n\n        if let Some(tablespace) = &self.tablespace {\n            args.push(\"--tablespace\".into());\n            args.push(tablespace.into());\n        }\n\n        if self.unlogged_tables {\n            args.push(\"--unlogged-tables\".into());\n        }\n\n        if let Some(name) = &self.builtin {\n            args.push(\"--builtin\".into());\n            args.push(name.into());\n        }\n\n        if let Some(filename) = &self.file {\n            args.push(\"--file\".into());\n            args.push(filename.into());\n        }\n\n        if self.skip_some_updates {\n            args.push(\"--skip-some-updates\".into());\n        }\n\n        if self.select_only {\n            args.push(\"--select-only\".into());\n        }\n\n        if let Some(num) = &self.client {\n            args.push(\"--client\".into());\n            args.push(num.to_string().into());\n        }\n\n        if self.connect {\n            args.push(\"--connect\".into());\n        }\n\n        if let Some(var) = &self.define {\n            
args.push(\"--define\".into());\n            args.push(var.into());\n        }\n\n        if let Some(num) = &self.jobs {\n            args.push(\"--jobs\".into());\n            args.push(num.to_string().into());\n        }\n\n        if self.log {\n            args.push(\"--log\".into());\n        }\n\n        if let Some(num) = &self.latency_limit {\n            args.push(\"--latency-limit\".into());\n            args.push(num.to_string().into());\n        }\n\n        if let Some(protocol) = &self.protocol {\n            args.push(\"--protocol\".into());\n            args.push(protocol.into());\n        }\n\n        if self.no_vacuum_bench {\n            args.push(\"--no-vacuum\".into());\n        }\n\n        if let Some(num) = &self.progress {\n            args.push(\"--progress\".into());\n            args.push(num.to_string().into());\n        }\n\n        if self.report_per_command {\n            args.push(\"--report-per-command\".into());\n        }\n\n        if let Some(num) = &self.rate {\n            args.push(\"--rate\".into());\n            args.push(num.to_string().into());\n        }\n\n        if let Some(scale) = &self.scale_bench {\n            args.push(\"--scale\".into());\n            args.push(scale.to_string().into());\n        }\n\n        if let Some(num) = &self.transactions {\n            args.push(\"--transactions\".into());\n            args.push(num.to_string().into());\n        }\n\n        if let Some(num) = &self.time {\n            args.push(\"--time\".into());\n            args.push(num.to_string().into());\n        }\n\n        if self.vacuum_all {\n            args.push(\"--vacuum-all\".into());\n        }\n\n        if let Some(num) = &self.aggregate_interval {\n            args.push(\"--aggregate-interval\".into());\n            args.push(num.to_string().into());\n        }\n\n        if self.failures_detailed {\n            args.push(\"--failures-detailed\".into());\n        }\n\n        if let Some(prefix) = 
&self.log_prefix {\n            args.push(\"--log-prefix\".into());\n            args.push(prefix.into());\n        }\n\n        if let Some(num) = &self.max_tries {\n            args.push(\"--max-tries\".into());\n            args.push(num.to_string().into());\n        }\n\n        if self.progress_timestamp {\n            args.push(\"--progress-timestamp\".into());\n        }\n\n        if let Some(seed) = &self.random_seed {\n            args.push(\"--random-seed\".into());\n            args.push(seed.into());\n        }\n\n        if let Some(rate) = &self.sampling_rate {\n            args.push(\"--sampling-rate\".into());\n            args.push(rate.to_string().into());\n        }\n\n        if let Some(name) = &self.show_script {\n            args.push(\"--show-script\".into());\n            args.push(name.into());\n        }\n\n        if self.verbose_errors {\n            args.push(\"--verbose-errors\".into());\n        }\n\n        if self.debug {\n            args.push(\"--debug\".into());\n        }\n\n        if let Some(hostname) = &self.host {\n            args.push(\"--host\".into());\n            args.push(hostname.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), 
value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PgBenchBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"pgbench\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PgBenchBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pgbench\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pgbench\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PgBenchBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./pgbench\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\pgbench\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = PgBenchBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .initialize()\n            .init_steps(\"steps\")\n            .fill_factor(10)\n            .no_vacuum()\n            .quiet()\n            .scale(10)\n            .foreign_keys()\n            .index_tablespace(\"tablespace\")\n  
          .partition_method(\"method\")\n            .partitions(10)\n            .tablespace(\"tablespace\")\n            .unlogged_tables()\n            .builtin(\"name\")\n            .file(\"filename\")\n            .skip_some_updates()\n            .select_only()\n            .client(10)\n            .connect()\n            .define(\"var\")\n            .jobs(10)\n            .log()\n            .latency_limit(10)\n            .protocol(\"protocol\")\n            .no_vacuum_bench()\n            .progress(10)\n            .report_per_command()\n            .rate(10)\n            .scale_bench(10)\n            .transactions(10)\n            .time(10)\n            .vacuum_all()\n            .aggregate_interval(10)\n            .failures_detailed()\n            .log_prefix(\"prefix\")\n            .max_tries(10)\n            .progress_timestamp()\n            .random_seed(\"seed\")\n            .sampling_rate(10.0)\n            .show_script(\"name\")\n            .verbose_errors()\n            .debug()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .version()\n            .help()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"pgbench\" \"--initialize\" \"--init-steps\" \"steps\" \"--fillfactor\" \"10\" \"--no-vacuum\" \"--quiet\" \"--scale\" \"10\" \"--foreign-keys\" \"--index-tablespace\" \"tablespace\" \"--partition-method\" \"method\" \"--partitions\" \"10\" \"--tablespace\" \"tablespace\" \"--unlogged-tables\" \"--builtin\" \"name\" \"--file\" \"filename\" \"--skip-some-updates\" \"--select-only\" \"--client\" \"10\" \"--connect\" \"--define\" \"var\" \"--jobs\" \"10\" \"--log\" \"--latency-limit\" \"10\" \"--protocol\" \"protocol\" \"--no-vacuum\" 
\"--progress\" \"10\" \"--report-per-command\" \"--rate\" \"10\" \"--scale\" \"10\" \"--transactions\" \"10\" \"--time\" \"10\" \"--vacuum-all\" \"--aggregate-interval\" \"10\" \"--failures-detailed\" \"--log-prefix\" \"prefix\" \"--max-tries\" \"10\" \"--progress-timestamp\" \"--random-seed\" \"seed\" \"--sampling-rate\" \"10\" \"--show-script\" \"name\" \"--verbose-errors\" \"--debug\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--version\" \"--help\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/postgres.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `postgres` is the `PostgreSQL` server.\n#[derive(Clone, Debug, Default)]\npub struct PostgresBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    n_buffers: Option<u32>,\n    runtime_param: Option<(OsString, OsString)>,\n    print_runtime_param: Option<OsString>,\n    debugging_level: Option<u8>,\n    data_dir: Option<PathBuf>,\n    european_date_format: bool,\n    fsync_off: bool,\n    host: Option<OsString>,\n    tcp_ip_connections: bool,\n    socket_location: Option<PathBuf>,\n    max_connections: Option<u32>,\n    port: Option<u16>,\n    show_stats: bool,\n    work_mem: Option<u32>,\n    version: bool,\n    describe_config: bool,\n    help: bool,\n    forbidden_plan_types: Option<OsString>,\n    allow_system_table_changes: bool,\n    disable_system_indexes: bool,\n    show_timings: Option<OsString>,\n    send_sigabrt: bool,\n    wait_seconds: Option<u32>,\n    single_user_mode: bool,\n    dbname: Option<OsString>,\n    override_debugging_level: Option<u8>,\n    echo_statement: bool,\n    no_newline_delimiter: bool,\n    output_file: Option<PathBuf>,\n    bootstrapping_mode: bool,\n    check_mode: bool,\n}\n\nimpl PostgresBuilder {\n    /// Create a new [`PostgresBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PostgresBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.socket_location(socket_dir);\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, 
path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// number of shared buffers\n    #[must_use]\n    pub fn n_buffers(mut self, n_buffers: u32) -> Self {\n        self.n_buffers = Some(n_buffers);\n        self\n    }\n\n    /// set run-time parameter\n    #[must_use]\n    pub fn runtime_param<S: AsRef<OsStr>>(mut self, name: S, value: S) -> Self {\n        self.runtime_param = Some((name.as_ref().into(), value.as_ref().into()));\n        self\n    }\n\n    /// print value of run-time parameter, then exit\n    #[must_use]\n    pub fn print_runtime_param<S: AsRef<OsStr>>(mut self, name: S) -> Self {\n        self.print_runtime_param = Some(name.as_ref().to_os_string());\n        self\n    }\n\n    /// debugging level\n    #[must_use]\n    pub fn debugging_level(mut self, level: u8) -> Self {\n        self.debugging_level = Some(level);\n        self\n    }\n\n    /// database directory\n    #[must_use]\n    pub fn data_dir<P: Into<PathBuf>>(mut self, dir: P) -> Self {\n        self.data_dir = Some(dir.into());\n        self\n    }\n\n    /// use European date input format (DMY)\n    #[must_use]\n    pub fn european_date_format(mut self) -> Self {\n        self.european_date_format = true;\n        self\n    }\n\n    /// turn fsync off\n    #[must_use]\n    pub fn fsync_off(mut self) -> Self {\n        self.fsync_off = true;\n        self\n    }\n\n    /// host name or IP address to listen on\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// enable TCP/IP connections (deprecated)\n    #[must_use]\n    pub fn tcp_ip_connections(mut self) -> Self {\n        self.tcp_ip_connections = true;\n        self\n    }\n\n    /// Unix socket location\n    #[must_use]\n    pub fn socket_location<P: Into<PathBuf>>(mut self, dir: P) -> Self {\n        self.socket_location = Some(dir.into());\n        self\n    }\n\n    
/// maximum number of allowed connections\n    #[must_use]\n    pub fn max_connections(mut self, max: u32) -> Self {\n        self.max_connections = Some(max);\n        self\n    }\n\n    /// port number to listen on\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// show statistics after each query\n    #[must_use]\n    pub fn show_stats(mut self) -> Self {\n        self.show_stats = true;\n        self\n    }\n\n    /// set amount of memory for sorts (in kB)\n    #[must_use]\n    pub fn work_mem(mut self, mem: u32) -> Self {\n        self.work_mem = Some(mem);\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// describe configuration parameters, then exit\n    #[must_use]\n    pub fn describe_config(mut self) -> Self {\n        self.describe_config = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// forbid use of some plan types\n    #[must_use]\n    pub fn forbidden_plan_types<S: AsRef<OsStr>>(mut self, types: S) -> Self {\n        self.forbidden_plan_types = Some(types.as_ref().to_os_string());\n        self\n    }\n\n    /// allow system table structure changes\n    #[must_use]\n    pub fn allow_system_table_changes(mut self) -> Self {\n        self.allow_system_table_changes = true;\n        self\n    }\n\n    /// disable system indexes\n    #[must_use]\n    pub fn disable_system_indexes(mut self) -> Self {\n        self.disable_system_indexes = true;\n        self\n    }\n\n    /// show timings after each query\n    #[must_use]\n    pub fn show_timings<S: AsRef<OsStr>>(mut self, timings: S) -> Self {\n        self.show_timings = Some(timings.as_ref().to_os_string());\n        self\n    }\n\n    /// send SIGABRT to all backend 
processes if one dies\n    #[must_use]\n    pub fn send_sigabrt(mut self) -> Self {\n        self.send_sigabrt = true;\n        self\n    }\n\n    /// wait NUM seconds to allow attach from a debugger\n    #[must_use]\n    pub fn wait_seconds(mut self, seconds: u32) -> Self {\n        self.wait_seconds = Some(seconds);\n        self\n    }\n\n    /// selects single-user mode (must be first argument)\n    #[must_use]\n    pub fn single_user_mode(mut self) -> Self {\n        self.single_user_mode = true;\n        self\n    }\n\n    /// database name (defaults to user name)\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// override debugging level\n    #[must_use]\n    pub fn override_debugging_level(mut self, level: u8) -> Self {\n        self.override_debugging_level = Some(level);\n        self\n    }\n\n    /// echo statement before execution\n    #[must_use]\n    pub fn echo_statement(mut self) -> Self {\n        self.echo_statement = true;\n        self\n    }\n\n    /// do not use newline as interactive query delimiter\n    #[must_use]\n    pub fn no_newline_delimiter(mut self) -> Self {\n        self.no_newline_delimiter = true;\n        self\n    }\n\n    /// send stdout and stderr to given file\n    #[must_use]\n    pub fn output_file<P: Into<PathBuf>>(mut self, file: P) -> Self {\n        self.output_file = Some(file.into());\n        self\n    }\n\n    /// selects bootstrapping mode (must be first argument)\n    #[must_use]\n    pub fn bootstrapping_mode(mut self) -> Self {\n        self.bootstrapping_mode = true;\n        self\n    }\n\n    /// selects check mode (must be first argument)\n    #[must_use]\n    pub fn check_mode(mut self) -> Self {\n        self.check_mode = true;\n        self\n    }\n}\n\nimpl CommandBuilder for PostgresBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        
\"postgres\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(n_buffers) = &self.n_buffers {\n            args.push(\"-B\".into());\n            args.push(n_buffers.to_string().into());\n        }\n\n        if let Some((name, value)) = &self.runtime_param {\n            args.push(\"-c\".into());\n            args.push(format!(\"{}={}\", name.to_string_lossy(), value.to_string_lossy()).into());\n        }\n\n        if let Some(name) = &self.print_runtime_param {\n            args.push(\"-C\".into());\n            args.push(name.into());\n        }\n\n        if let Some(level) = &self.debugging_level {\n            args.push(\"-d\".into());\n            args.push(level.to_string().into());\n        }\n\n        if let Some(data_dir) = &self.data_dir {\n            args.push(\"-D\".into());\n            args.push(data_dir.into());\n        }\n\n        if self.european_date_format {\n            args.push(\"-e\".into());\n        }\n\n        if self.fsync_off {\n            args.push(\"-F\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"-h\".into());\n            args.push(host.into());\n        }\n\n        if self.tcp_ip_connections {\n            args.push(\"-i\".into());\n        }\n\n        if let Some(socket_location) = &self.socket_location {\n            args.push(\"-k\".into());\n            args.push(socket_location.into());\n        }\n\n        if let Some(max) = &self.max_connections {\n            args.push(\"-N\".into());\n            args.push(max.to_string().into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"-p\".into());\n            args.push(port.to_string().into());\n      
  }\n\n        if self.show_stats {\n            args.push(\"-s\".into());\n        }\n\n        if let Some(work_mem) = &self.work_mem {\n            args.push(\"-S\".into());\n            args.push(work_mem.to_string().into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.describe_config {\n            args.push(\"--describe-config\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(forbidden_plan_types) = &self.forbidden_plan_types {\n            args.push(\"-f\".into());\n            args.push(forbidden_plan_types.into());\n        }\n\n        if self.allow_system_table_changes {\n            args.push(\"-O\".into());\n        }\n\n        if self.disable_system_indexes {\n            args.push(\"-P\".into());\n        }\n\n        if let Some(show_timings) = &self.show_timings {\n            args.push(\"-t\".into());\n            args.push(show_timings.into());\n        }\n\n        if self.send_sigabrt {\n            args.push(\"-T\".into());\n        }\n\n        if let Some(seconds) = &self.wait_seconds {\n            args.push(\"-W\".into());\n            args.push(seconds.to_string().into());\n        }\n\n        if self.single_user_mode {\n            args.push(\"--single\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(dbname.into());\n        }\n\n        if let Some(level) = &self.override_debugging_level {\n            args.push(\"-d\".into());\n            args.push(level.to_string().into());\n        }\n\n        if self.echo_statement {\n            args.push(\"-E\".into());\n        }\n\n        if self.no_newline_delimiter {\n            args.push(\"-j\".into());\n        }\n\n        if let Some(file) = &self.output_file {\n            args.push(\"-r\".into());\n            args.push(file.into());\n        }\n\n        if self.bootstrapping_mode {\n          
  args.push(\"--boot\".into());\n        }\n\n        if self.check_mode {\n            args.push(\"--check\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        self.envs.clone()\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PostgresBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"postgres\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PostgresBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./postgres\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\postgres\" \"#;\n\n        assert_eq!(\n            format!(r#\"{command_prefix}\"-h\" \"localhost\" \"-p\" \"5432\"\"#),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PostgresBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"\"./postgres\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\postgres\" \"#;\n        assert_eq!(\n            format!(r#\"{command_prefix}\"-h\" \"localhost\" \"-k\" \"/tmp/pg_socket\" \"-p\" \"5432\"\"#),\n            command.to_command_string()\n        
);\n    }\n    #[test]\n    fn test_builder() {\n        let command = PostgresBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .n_buffers(100)\n            .runtime_param(\"name\", \"value\")\n            .print_runtime_param(\"name\")\n            .debugging_level(3)\n            .data_dir(\"data_dir\")\n            .european_date_format()\n            .fsync_off()\n            .host(\"localhost\")\n            .tcp_ip_connections()\n            .socket_location(\"socket_location\")\n            .max_connections(100)\n            .port(5432)\n            .show_stats()\n            .work_mem(100)\n            .version()\n            .describe_config()\n            .help()\n            .forbidden_plan_types(\"type\")\n            .allow_system_table_changes()\n            .disable_system_indexes()\n            .show_timings(\"timings\")\n            .send_sigabrt()\n            .wait_seconds(10)\n            .single_user_mode()\n            .dbname(\"dbname\")\n            .override_debugging_level(3)\n            .echo_statement()\n            .no_newline_delimiter()\n            .output_file(\"output_file\")\n            .bootstrapping_mode()\n            .check_mode()\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"postgres\" \"-B\" \"100\" \"-c\" \"name=value\" \"-C\" \"name\" \"-d\" \"3\" \"-D\" \"data_dir\" \"-e\" \"-F\" \"-h\" \"localhost\" \"-i\" \"-k\" \"socket_location\" \"-N\" \"100\" \"-p\" \"5432\" \"-s\" \"-S\" \"100\" \"--version\" \"--describe-config\" \"--help\" \"-f\" \"type\" \"-O\" \"-P\" \"-t\" \"timings\" \"-T\" \"-W\" \"10\" \"--single\" \"dbname\" \"-d\" \"3\" \"-E\" \"-j\" \"-r\" \"output_file\" \"--boot\" \"--check\"\"#\n            ),\n            
command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/psql.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `psql` is the `PostgreSQL` interactive terminal.\n#[derive(Clone, Debug, Default)]\npub struct PsqlBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    command: Option<OsString>,\n    dbname: Option<OsString>,\n    file: Option<PathBuf>,\n    list: bool,\n    variable: Option<(OsString, OsString)>,\n    version: bool,\n    no_psqlrc: bool,\n    single_transaction: bool,\n    help: Option<OsString>,\n    echo_all: bool,\n    echo_errors: bool,\n    echo_queries: bool,\n    echo_hidden: bool,\n    log_file: Option<PathBuf>,\n    no_readline: bool,\n    output: Option<PathBuf>,\n    quiet: bool,\n    single_step: bool,\n    single_line: bool,\n    no_align: bool,\n    csv: bool,\n    field_separator: Option<OsString>,\n    html: bool,\n    pset: Option<(OsString, OsString)>,\n    record_separator: Option<OsString>,\n    tuples_only: bool,\n    table_attr: Option<OsString>,\n    expanded: bool,\n    field_separator_zero: bool,\n    record_separator_zero: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n}\n\nimpl PsqlBuilder {\n    /// Create a new [`PsqlBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`PsqlBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        
builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// run only single command (SQL or internal) and exit\n    #[must_use]\n    pub fn command<S: AsRef<OsStr>>(mut self, command: S) -> Self {\n        self.command = Some(command.as_ref().to_os_string());\n        self\n    }\n\n    /// database name to connect to\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// execute commands from file, then exit\n    #[must_use]\n    pub fn file<P: Into<PathBuf>>(mut self, file: P) -> Self {\n        self.file = Some(file.into());\n        self\n    }\n\n    /// list available databases, then exit\n    #[must_use]\n    pub fn list(mut self) -> Self {\n        self.list = true;\n        self\n    }\n\n    /// set psql variable NAME to VALUE (e.g., `-v ON_ERROR_STOP=1`)\n    #[must_use]\n    pub fn variable<S: AsRef<OsStr>>(mut self, variable: (S, S)) -> Self {\n        let (name, value) = variable;\n        self.variable = Some((name.as_ref().into(), value.as_ref().into()));\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// do not read startup file (~/.psqlrc)\n    #[must_use]\n    pub fn no_psqlrc(mut self) -> Self {\n        self.no_psqlrc = true;\n        self\n    }\n\n    /// execute as a single transaction (if non-interactive)\n    #[must_use]\n    pub fn single_transaction(mut self) -> Self {\n        self.single_transaction = true;\n        self\n    }\n\n    /// show help, then exit\n    /// Possible values: [options, commands, variables]\n    #[must_use]\n    pub fn help<S: AsRef<OsStr>>(mut self, help: S) -> Self {\n        self.help = 
Some(help.as_ref().to_os_string());\n        self\n    }\n\n    /// echo all input from script\n    #[must_use]\n    pub fn echo_all(mut self) -> Self {\n        self.echo_all = true;\n        self\n    }\n\n    /// echo failed commands\n    #[must_use]\n    pub fn echo_errors(mut self) -> Self {\n        self.echo_errors = true;\n        self\n    }\n\n    /// echo commands sent to server\n    #[must_use]\n    pub fn echo_queries(mut self) -> Self {\n        self.echo_queries = true;\n        self\n    }\n\n    /// display queries that internal commands generate\n    #[must_use]\n    pub fn echo_hidden(mut self) -> Self {\n        self.echo_hidden = true;\n        self\n    }\n\n    /// send session log to file\n    #[must_use]\n    pub fn log_file<P: Into<PathBuf>>(mut self, log_file: P) -> Self {\n        self.log_file = Some(log_file.into());\n        self\n    }\n\n    /// disable enhanced command line editing (readline)\n    #[must_use]\n    pub fn no_readline(mut self) -> Self {\n        self.no_readline = true;\n        self\n    }\n\n    /// send query results to file (or |pipe)\n    #[must_use]\n    pub fn output<P: Into<PathBuf>>(mut self, output: P) -> Self {\n        self.output = Some(output.into());\n        self\n    }\n\n    /// run quietly (no messages, only query output)\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// single-step mode (confirm each query)\n    #[must_use]\n    pub fn single_step(mut self) -> Self {\n        self.single_step = true;\n        self\n    }\n\n    /// single-line mode (end of line terminates SQL command)\n    #[must_use]\n    pub fn single_line(mut self) -> Self {\n        self.single_line = true;\n        self\n    }\n\n    /// unaligned table output mode\n    #[must_use]\n    pub fn no_align(mut self) -> Self {\n        self.no_align = true;\n        self\n    }\n\n    /// CSV (Comma-Separated Values) table output mode\n    #[must_use]\n    pub fn 
csv(mut self) -> Self {\n        self.csv = true;\n        self\n    }\n\n    /// field separator for unaligned output (default: \"|\")\n    #[must_use]\n    pub fn field_separator<S: AsRef<OsStr>>(mut self, field_separator: S) -> Self {\n        self.field_separator = Some(field_separator.as_ref().to_os_string());\n        self\n    }\n\n    /// HTML table output mode\n    #[must_use]\n    pub fn html(mut self) -> Self {\n        self.html = true;\n        self\n    }\n\n    /// set printing option VAR to ARG (see \\pset command)\n    #[must_use]\n    pub fn pset<S: AsRef<OsStr>>(mut self, pset: (S, S)) -> Self {\n        let (var, arg) = pset;\n        self.pset = Some((var.as_ref().into(), arg.as_ref().into()));\n        self\n    }\n\n    /// record separator for unaligned output (default: newline)\n    #[must_use]\n    pub fn record_separator<S: AsRef<OsStr>>(mut self, record_separator: S) -> Self {\n        self.record_separator = Some(record_separator.as_ref().to_os_string());\n        self\n    }\n\n    /// print rows only\n    #[must_use]\n    pub fn tuples_only(mut self) -> Self {\n        self.tuples_only = true;\n        self\n    }\n\n    /// set HTML table tag attributes (e.g., width, border)\n    #[must_use]\n    pub fn table_attr<S: AsRef<OsStr>>(mut self, table_attr: S) -> Self {\n        self.table_attr = Some(table_attr.as_ref().to_os_string());\n        self\n    }\n\n    /// turn on expanded table output\n    #[must_use]\n    pub fn expanded(mut self) -> Self {\n        self.expanded = true;\n        self\n    }\n\n    /// set field separator for unaligned output to zero byte\n    #[must_use]\n    pub fn field_separator_zero(mut self) -> Self {\n        self.field_separator_zero = true;\n        self\n    }\n\n    /// set record separator for unaligned output to zero byte\n    #[must_use]\n    pub fn record_separator_zero(mut self) -> Self {\n        self.record_separator_zero = true;\n        self\n    }\n\n    /// database server host or 
socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// database user name\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt (should happen automatically)\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for PsqlBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"psql\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(psql_command) = &self.command {\n            args.push(\"--command\".into());\n            args.push(psql_command.into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if let Some(file) = &self.file {\n            args.push(\"--file\".into());\n            args.push(file.into());\n        }\n\n        if self.list {\n         
   args.push(\"--list\".into());\n        }\n\n        if let Some((name, value)) = &self.variable {\n            args.push(\"--variable\".into());\n            args.push(format!(\"{}={}\", name.to_string_lossy(), value.to_string_lossy()).into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.no_psqlrc {\n            args.push(\"--no-psqlrc\".into());\n        }\n\n        if self.single_transaction {\n            args.push(\"--single-transaction\".into());\n        }\n\n        if let Some(help) = &self.help {\n            args.push(\"--help\".into());\n            args.push(help.into());\n        }\n\n        if self.echo_all {\n            args.push(\"--echo-all\".into());\n        }\n\n        if self.echo_errors {\n            args.push(\"--echo-errors\".into());\n        }\n\n        if self.echo_queries {\n            args.push(\"--echo-queries\".into());\n        }\n\n        if self.echo_hidden {\n            args.push(\"--echo-hidden\".into());\n        }\n\n        if let Some(log_file) = &self.log_file {\n            args.push(\"--log-file\".into());\n            args.push(log_file.into());\n        }\n\n        if self.no_readline {\n            args.push(\"--no-readline\".into());\n        }\n\n        if let Some(output) = &self.output {\n            args.push(\"--output\".into());\n            args.push(output.into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if self.single_step {\n            args.push(\"--single-step\".into());\n        }\n\n        if self.single_line {\n            args.push(\"--single-line\".into());\n        }\n\n        if self.no_align {\n            args.push(\"--no-align\".into());\n        }\n\n        if self.csv {\n            args.push(\"--csv\".into());\n        }\n\n        if let Some(field_separator) = &self.field_separator {\n            args.push(\"--field-separator\".into());\n     
       args.push(field_separator.into());\n        }\n\n        if self.html {\n            args.push(\"--html\".into());\n        }\n\n        if let Some((var, arg)) = &self.pset {\n            args.push(\"--pset\".into());\n            args.push(format!(\"{}={}\", var.to_string_lossy(), arg.to_string_lossy()).into());\n        }\n\n        if let Some(record_separator) = &self.record_separator {\n            args.push(\"--record-separator\".into());\n            args.push(record_separator.into());\n        }\n\n        if self.tuples_only {\n            args.push(\"--tuples-only\".into());\n        }\n\n        if let Some(table_attr) = &self.table_attr {\n            args.push(\"--table-attr\".into());\n            args.push(table_attr.into());\n        }\n\n        if self.expanded {\n            args.push(\"--expanded\".into());\n        }\n\n        if self.field_separator_zero {\n            args.push(\"--field-separator-zero\".into());\n        }\n\n        if self.record_separator_zero {\n            args.push(\"--record-separator-zero\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            
envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = PsqlBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"psql\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = PsqlBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./psql\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\psql\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = PsqlBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./psql\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\psql\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = 
PsqlBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .command(\"SELECT * FROM test\")\n            .dbname(\"dbname\")\n            .file(\"test.sql\")\n            .list()\n            .variable((\"ON_ERROR_STOP\", \"1\"))\n            .version()\n            .no_psqlrc()\n            .single_transaction()\n            .help(\"options\")\n            .echo_all()\n            .echo_errors()\n            .echo_queries()\n            .echo_hidden()\n            .log_file(\"psql.log\")\n            .no_readline()\n            .output(\"output.txt\")\n            .quiet()\n            .single_step()\n            .single_line()\n            .no_align()\n            .csv()\n            .field_separator(\"|\")\n            .html()\n            .pset((\"border\", \"1\"))\n            .record_separator(\"\\n\")\n            .tuples_only()\n            .table_attr(\"width=100\")\n            .expanded()\n            .field_separator_zero()\n            .record_separator_zero()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"psql\" \"--command\" \"SELECT * FROM test\" \"--dbname\" \"dbname\" \"--file\" \"test.sql\" \"--list\" \"--variable\" \"ON_ERROR_STOP=1\" \"--version\" \"--no-psqlrc\" \"--single-transaction\" \"--help\" \"options\" \"--echo-all\" \"--echo-errors\" \"--echo-queries\" \"--echo-hidden\" \"--log-file\" \"psql.log\" \"--no-readline\" \"--output\" \"output.txt\" \"--quiet\" \"--single-step\" \"--single-line\" \"--no-align\" \"--csv\" \"--field-separator\" \"|\" \"--html\" 
\"--pset\" \"border=1\" \"--record-separator\" \"\\n\" \"--tuples-only\" \"--table-attr\" \"width=100\" \"--expanded\" \"--field-separator-zero\" \"--record-separator-zero\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/reindexdb.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `reindexdb` reindexes a `PostgreSQL` database.\n#[derive(Clone, Debug, Default)]\npub struct ReindexDbBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    all: bool,\n    concurrently: bool,\n    dbname: Option<OsString>,\n    echo: bool,\n    index: Option<OsString>,\n    jobs: Option<u32>,\n    quiet: bool,\n    system: bool,\n    schema: Option<OsString>,\n    table: Option<OsString>,\n    tablespace: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    maintenance_db: Option<OsString>,\n}\n\nimpl ReindexDbBuilder {\n    /// Create a new [`ReindexDbBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`ReindexDbBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// reindex all databases\n    #[must_use]\n    pub fn all(mut self) -> Self {\n        self.all = true;\n        self\n    }\n\n    /// reindex concurrently\n    #[must_use]\n    pub fn concurrently(mut self) -> Self 
{\n        self.concurrently = true;\n        self\n    }\n\n    /// database to reindex\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// recreate specific index(es) only\n    #[must_use]\n    pub fn index<S: AsRef<OsStr>>(mut self, index: S) -> Self {\n        self.index = Some(index.as_ref().to_os_string());\n        self\n    }\n\n    /// use this many concurrent connections to reindex\n    #[must_use]\n    pub fn jobs(mut self, jobs: u32) -> Self {\n        self.jobs = Some(jobs);\n        self\n    }\n\n    /// don't write any messages\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// reindex system catalogs only\n    #[must_use]\n    pub fn system(mut self) -> Self {\n        self.system = true;\n        self\n    }\n\n    /// reindex specific schema(s) only\n    #[must_use]\n    pub fn schema<S: AsRef<OsStr>>(mut self, schema: S) -> Self {\n        self.schema = Some(schema.as_ref().to_os_string());\n        self\n    }\n\n    /// reindex specific table(s) only\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, table: S) -> Self {\n        self.table = Some(table.as_ref().to_os_string());\n        self\n    }\n\n    /// tablespace where indexes are rebuilt\n    #[must_use]\n    pub fn tablespace<S: AsRef<OsStr>>(mut self, tablespace: S) -> Self {\n        self.tablespace = Some(tablespace.as_ref().to_os_string());\n        self\n    }\n\n    /// write a lot of output\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = 
true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// user name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// alternate maintenance database\n    #[must_use]\n    pub fn maintenance_db<S: AsRef<OsStr>>(mut self, maintenance_db: S) -> Self {\n        self.maintenance_db = Some(maintenance_db.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for ReindexDbBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"reindexdb\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.all {\n            args.push(\"--all\".into());\n        }\n\n        
if self.concurrently {\n            args.push(\"--concurrently\".into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if let Some(index) = &self.index {\n            args.push(\"--index\".into());\n            args.push(index.into());\n        }\n\n        if let Some(jobs) = &self.jobs {\n            args.push(\"--jobs\".into());\n            args.push(jobs.to_string().into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if self.system {\n            args.push(\"--system\".into());\n        }\n\n        if let Some(schema) = &self.schema {\n            args.push(\"--schema\".into());\n            args.push(schema.into());\n        }\n\n        if let Some(table) = &self.table {\n            args.push(\"--table\".into());\n            args.push(table.into());\n        }\n\n        if let Some(tablespace) = &self.tablespace {\n            args.push(\"--tablespace\".into());\n            args.push(tablespace.into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if 
self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(maintenance_db) = &self.maintenance_db {\n            args.push(\"--maintenance-db\".into());\n            args.push(maintenance_db.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = ReindexDbBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"reindexdb\"),\n            PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = ReindexDbBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./reindexdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\reindexdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = 
ReindexDbBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./reindexdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\reindexdb\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = ReindexDbBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .all()\n            .concurrently()\n            .dbname(\"dbname\")\n            .echo()\n            .index(\"index\")\n            .jobs(1)\n            .quiet()\n            .system()\n            .schema(\"schema\")\n            .table(\"table\")\n            .tablespace(\"tablespace\")\n            .verbose()\n            .version()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .maintenance_db(\"maintenance-db\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"reindexdb\" \"--all\" \"--concurrently\" \"--dbname\" \"dbname\" \"--echo\" \"--index\" \"index\" \"--jobs\" \"1\" \"--quiet\" \"--system\" \"--schema\" \"schema\" \"--table\" \"table\" \"--tablespace\" \"tablespace\" \"--verbose\" \"--version\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\" \"--maintenance-db\" \"maintenance-db\"\"#\n       
     ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/traits.rs",
    "content": "use crate::error::{Error, Result};\nuse std::env::consts::OS;\nuse std::ffi::{OsStr, OsString};\nuse std::fmt::Debug;\n#[cfg(target_os = \"windows\")]\nuse std::os::windows::process::CommandExt;\nuse std::path::PathBuf;\nuse std::process::ExitStatus;\nuse std::time::Duration;\nuse tracing::debug;\n\n/// Constant for the `CREATE_NO_WINDOW` flag on Windows to prevent the creation of a console window\n/// when executing commands. This is useful for background processes or services that do not require\n/// user interaction.\n///\n/// # References\n///\n/// - [Windows API: Process Creation Flags](https://learn.microsoft.com/en-us/windows/win32/procthread/process-creation-flags#flags)\n#[cfg(target_os = \"windows\")]\nconst CREATE_NO_WINDOW: u32 = 0x0800_0000;\n\n/// Interface for `PostgreSQL` settings\npub trait Settings {\n    /// Get the directory where the PostgreSQL binaries are located.\n    fn get_binary_dir(&self) -> PathBuf;\n    /// Get the host for the PostgreSQL connection.\n    fn get_host(&self) -> OsString;\n    /// Get the port for the PostgreSQL connection.\n    fn get_port(&self) -> u16;\n    /// Get the username for the PostgreSQL connection.\n    fn get_username(&self) -> OsString;\n    /// Get the password for the PostgreSQL connection.\n    fn get_password(&self) -> OsString;\n    /// Get the Unix socket directory, if configured.\n    /// Returns `None` when using TCP/IP connections (the default).\n    fn get_socket_dir(&self) -> Option<PathBuf> {\n        None\n    }\n}\n\n#[cfg(test)]\npub struct TestSettings;\n\n#[cfg(test)]\nimpl Settings for TestSettings {\n    fn get_binary_dir(&self) -> PathBuf {\n        PathBuf::from(\".\")\n    }\n\n    fn get_host(&self) -> OsString {\n        \"localhost\".into()\n    }\n\n    fn get_port(&self) -> u16 {\n        5432\n    }\n\n    fn get_username(&self) -> OsString {\n        \"postgres\".into()\n    }\n\n    fn get_password(&self) -> OsString {\n        \"password\".into()\n    
}\n}\n\n/// Test settings that include a Unix socket directory\n#[cfg(test)]\npub struct TestSocketSettings;\n\n#[cfg(test)]\nimpl Settings for TestSocketSettings {\n    fn get_binary_dir(&self) -> PathBuf {\n        PathBuf::from(\".\")\n    }\n\n    fn get_host(&self) -> OsString {\n        \"localhost\".into()\n    }\n\n    fn get_port(&self) -> u16 {\n        5432\n    }\n\n    fn get_username(&self) -> OsString {\n        \"postgres\".into()\n    }\n\n    fn get_password(&self) -> OsString {\n        \"password\".into()\n    }\n\n    fn get_socket_dir(&self) -> Option<PathBuf> {\n        Some(PathBuf::from(\"/tmp/pg_socket\"))\n    }\n}\n\n/// Trait to build a command\npub trait CommandBuilder: Debug {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr;\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf>;\n\n    /// Fully qualified path to the program binary\n    fn get_program_file(&self) -> PathBuf {\n        let program_name = &self.get_program();\n        match self.get_program_dir() {\n            Some(program_dir) => program_dir.join(program_name),\n            None => PathBuf::from(program_name),\n        }\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        vec![]\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)>;\n\n    /// Set an environment variable for the command\n    #[must_use]\n    fn env<S: AsRef<OsStr>>(self, key: S, value: S) -> Self;\n\n    /// Build a standard Command\n    fn build(self) -> std::process::Command\n    where\n        Self: Sized,\n    {\n        let program_file = self.get_program_file();\n        let mut command = std::process::Command::new(program_file);\n\n        #[cfg(target_os = \"windows\")]\n        {\n            command.creation_flags(CREATE_NO_WINDOW);\n        }\n\n        command.args(self.get_args());\n        
command.envs(self.get_envs());\n        command\n    }\n\n    #[cfg(feature = \"tokio\")]\n    /// Build a tokio Command\n    fn build_tokio(self) -> tokio::process::Command\n    where\n        Self: Sized,\n    {\n        let program_file = self.get_program_file();\n        let mut command = tokio::process::Command::new(program_file);\n\n        #[cfg(target_os = \"windows\")]\n        {\n            command.creation_flags(CREATE_NO_WINDOW);\n        }\n\n        command.args(self.get_args());\n        command.envs(self.get_envs());\n        command\n    }\n}\n\n/// Trait to convert a command to a string representation\npub trait CommandToString {\n    fn to_command_string(&self) -> String;\n}\n\n/// Implement the [`CommandToString`] trait for [`Command`](std::process::Command)\nimpl CommandToString for std::process::Command {\n    fn to_command_string(&self) -> String {\n        format!(\"{self:?}\")\n    }\n}\n\n#[cfg(feature = \"tokio\")]\n/// Implement the [`CommandToString`] trait for [`Command`](tokio::process::Command)\nimpl CommandToString for tokio::process::Command {\n    fn to_command_string(&self) -> String {\n        format!(\"{self:?}\")\n            .replace(\"Command { std: \", \"\")\n            .replace(\", kill_on_drop: false }\", \"\")\n    }\n}\n\n/// Interface for executing a command\npub trait CommandExecutor {\n    /// Execute the command and return the stdout and stderr\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the command fails\n    fn execute(&mut self) -> Result<(String, String)>;\n}\n\n/// Interface for executing a command\npub trait AsyncCommandExecutor {\n    /// Execute the command and return the stdout and stderr\n    #[expect(async_fn_in_trait)]\n    async fn execute(&mut self, timeout: Option<Duration>) -> Result<(String, String)>;\n}\n\n/// Implement the [`CommandExecutor`] trait for [`Command`](std::process::Command)\nimpl CommandExecutor for std::process::Command {\n    /// Execute the command and return 
the stdout and stderr\n    fn execute(&mut self) -> Result<(String, String)> {\n        debug!(\"Executing command: {}\", self.to_command_string());\n        let program = self.get_program().to_string_lossy().to_string();\n        let stdout: String;\n        let stderr: String;\n        let status: ExitStatus;\n\n        if OS == \"windows\" && program.as_str().ends_with(\"pg_ctl\") {\n            // The pg_ctl process can hang on Windows when attempting to get stdout/stderr.\n            let mut process = self\n                .stdout(std::process::Stdio::piped())\n                .stderr(std::process::Stdio::piped())\n                .spawn()?;\n            stdout = String::new();\n            stderr = String::new();\n            status = process.wait()?;\n        } else {\n            let output = self.output()?;\n            stdout = String::from_utf8_lossy(&output.stdout).into_owned();\n            stderr = String::from_utf8_lossy(&output.stderr).into_owned();\n            status = output.status;\n        }\n        debug!(\n            \"Result: {}\\nstdout: {}\\nstderr: {}\",\n            status.code().map_or(\"None\".to_string(), |c| c.to_string()),\n            stdout,\n            stderr\n        );\n\n        if status.success() {\n            Ok((stdout, stderr))\n        } else {\n            Err(Error::CommandError { stdout, stderr })\n        }\n    }\n}\n\n#[cfg(feature = \"tokio\")]\n/// Implement the [`CommandExecutor`] trait for [`Command`](tokio::process::Command)\nimpl AsyncCommandExecutor for tokio::process::Command {\n    /// Execute the command and return the stdout and stderr\n    async fn execute(&mut self, timeout: Option<Duration>) -> Result<(String, String)> {\n        debug!(\"Executing command: {}\", self.to_command_string());\n        let program = self.as_std().get_program().to_string_lossy().to_string();\n        let stdout: String;\n        let stderr: String;\n        let status: ExitStatus;\n\n        if OS == \"windows\" && 
program.as_str().ends_with(\"pg_ctl\") {\n            // The pg_ctl process can hang on Windows when attempting to get stdout/stderr.\n            let mut process = self\n                .stdout(std::process::Stdio::piped())\n                .stderr(std::process::Stdio::piped())\n                .spawn()?;\n            stdout = String::new();\n            stderr = String::new();\n            status = process.wait().await?;\n        } else {\n            let output = match timeout {\n                Some(duration) => tokio::time::timeout(duration, self.output()).await?,\n                None => self.output().await,\n            }?;\n            stdout = String::from_utf8_lossy(&output.stdout).into_owned();\n            stderr = String::from_utf8_lossy(&output.stderr).into_owned();\n            status = output.status;\n        }\n\n        debug!(\n            \"Result: {}\\nstdout: {}\\nstderr: {}\",\n            status.code().map_or(\"None\".to_string(), |c| c.to_string()),\n            stdout,\n            stderr\n        );\n\n        if status.success() {\n            Ok((stdout, stderr))\n        } else {\n            Err(Error::CommandError { stdout, stderr })\n        }\n    }\n}\n#[cfg(test)]\nmod test {\n    use super::*;\n    use test_log::test;\n\n    #[test]\n    fn test_command_builder_defaults() {\n        #[derive(Debug, Default)]\n        struct DefaultCommandBuilder {\n            program_dir: Option<PathBuf>,\n            envs: Vec<(OsString, OsString)>,\n        }\n\n        impl CommandBuilder for DefaultCommandBuilder {\n            fn get_program(&self) -> &'static OsStr {\n                \"test\".as_ref()\n            }\n\n            fn get_program_dir(&self) -> &Option<PathBuf> {\n                &self.program_dir\n            }\n\n            fn get_envs(&self) -> Vec<(OsString, OsString)> {\n                self.envs.clone()\n            }\n\n            fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n                
self.envs\n                    .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n                self\n            }\n        }\n\n        let builder = DefaultCommandBuilder::default();\n        let command = builder.env(\"ENV\", \"foo\").build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"ENV=\"foo\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(r#\"{command_prefix}\"test\"\"#),\n            command.to_command_string()\n        );\n    }\n\n    #[derive(Debug)]\n    struct TestCommandBuilder {\n        program_dir: Option<PathBuf>,\n        args: Vec<OsString>,\n        envs: Vec<(OsString, OsString)>,\n    }\n\n    impl CommandBuilder for TestCommandBuilder {\n        fn get_program(&self) -> &'static OsStr {\n            \"test\".as_ref()\n        }\n\n        fn get_program_dir(&self) -> &Option<PathBuf> {\n            &self.program_dir\n        }\n\n        fn get_args(&self) -> Vec<OsString> {\n            self.args.clone()\n        }\n\n        fn get_envs(&self) -> Vec<(OsString, OsString)> {\n            self.envs.clone()\n        }\n\n        fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n            self.envs\n                .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n            self\n        }\n    }\n\n    #[test]\n    fn test_standard_command_builder() {\n        let builder = TestCommandBuilder {\n            program_dir: None,\n            args: vec![\"--help\".to_string().into()],\n            envs: vec![],\n        };\n        let command = builder.env(\"PASSWORD\", \"foo\").build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PASSWORD=\"foo\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"{}\" 
\"--help\"\"#,\n                PathBuf::from(\"test\").to_string_lossy()\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[cfg(feature = \"tokio\")]\n    #[test]\n    fn test_tokio_command_builder() {\n        let builder = TestCommandBuilder {\n            program_dir: None,\n            args: vec![\"--help\".to_string().into()],\n            envs: vec![],\n        };\n        let command = builder.env(\"PASSWORD\", \"foo\").build_tokio();\n\n        assert_eq!(\n            format!(\n                r#\"PASSWORD=\"foo\" \"{}\" \"--help\"\"#,\n                PathBuf::from(\"test\").to_string_lossy()\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_standard_to_command_string() {\n        let mut command = std::process::Command::new(\"test\");\n        command.arg(\"-l\");\n        assert_eq!(r#\"\"test\" \"-l\"\"#, command.to_command_string(),);\n    }\n\n    #[cfg(feature = \"tokio\")]\n    #[test]\n    fn test_tokio_to_command_string() {\n        let mut command = tokio::process::Command::new(\"test\");\n        command.arg(\"-l\");\n        assert_eq!(r#\"\"test\" \"-l\"\"#, command.to_command_string(),);\n    }\n\n    #[test(tokio::test)]\n    async fn test_standard_command_execute() -> Result<()> {\n        #[cfg(not(target_os = \"windows\"))]\n        let mut command = std::process::Command::new(\"sh\");\n        #[cfg(not(target_os = \"windows\"))]\n        command.args([\"-c\", \"echo foo\"]);\n\n        #[cfg(target_os = \"windows\")]\n        let mut command = std::process::Command::new(\"cmd\");\n        #[cfg(target_os = \"windows\")]\n        command.args([\"/C\", \"echo foo\"]);\n\n        let (stdout, stderr) = command.execute()?;\n        assert!(stdout.starts_with(\"foo\"));\n        assert!(stderr.is_empty());\n        Ok(())\n    }\n\n    #[test(tokio::test)]\n    async fn test_standard_command_execute_error() {\n        let mut command = 
std::process::Command::new(\"bogus_command\");\n        assert!(command.execute().is_err());\n    }\n\n    #[cfg(feature = \"tokio\")]\n    #[test(tokio::test)]\n    async fn test_tokio_command_execute() -> Result<()> {\n        #[cfg(not(target_os = \"windows\"))]\n        let mut command = tokio::process::Command::new(\"sh\");\n        #[cfg(not(target_os = \"windows\"))]\n        command.args([\"-c\", \"echo foo\"]);\n\n        #[cfg(target_os = \"windows\")]\n        let mut command = tokio::process::Command::new(\"cmd\");\n        #[cfg(target_os = \"windows\")]\n        command.args([\"/C\", \"echo foo\"]);\n\n        let (stdout, stderr) = command.execute(None).await?;\n        assert!(stdout.starts_with(\"foo\"));\n        assert!(stderr.is_empty());\n        Ok(())\n    }\n\n    #[cfg(feature = \"tokio\")]\n    #[test(tokio::test)]\n    async fn test_tokio_command_execute_error() -> Result<()> {\n        let mut command = tokio::process::Command::new(\"bogus_command\");\n        assert!(command.execute(None).await.is_err());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/vacuumdb.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `vacuumdb` cleans and analyzes a `PostgreSQL` database.\n#[derive(Clone, Debug, Default)]\npub struct VacuumDbBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    all: bool,\n    buffer_usage_limit: Option<OsString>,\n    dbname: Option<OsString>,\n    disable_page_skipping: bool,\n    echo: bool,\n    full: bool,\n    freeze: bool,\n    force_index_cleanup: bool,\n    jobs: Option<u32>,\n    min_mxid_age: Option<OsString>,\n    min_xid_age: Option<OsString>,\n    no_index_cleanup: bool,\n    no_process_main: bool,\n    no_process_toast: bool,\n    no_truncate: bool,\n    schema: Option<OsString>,\n    exclude_schema: Option<OsString>,\n    parallel: Option<u32>,\n    quiet: bool,\n    skip_locked: bool,\n    table: Option<OsString>,\n    verbose: bool,\n    version: bool,\n    analyze: bool,\n    analyze_only: bool,\n    analyze_in_stages: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n    maintenance_db: Option<OsString>,\n}\n\n/// vacuumdb cleans and analyzes a `PostgreSQL` database.\nimpl VacuumDbBuilder {\n    /// Create a new [`VacuumDbBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`VacuumDbBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = 
builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// vacuum all databases\n    #[must_use]\n    pub fn all(mut self) -> Self {\n        self.all = true;\n        self\n    }\n\n    /// size of ring buffer used for vacuum\n    #[must_use]\n    pub fn buffer_usage_limit<S: AsRef<OsStr>>(mut self, buffer_usage_limit: S) -> Self {\n        self.buffer_usage_limit = Some(buffer_usage_limit.as_ref().to_os_string());\n        self\n    }\n\n    /// database to vacuum\n    #[must_use]\n    pub fn dbname<S: AsRef<OsStr>>(mut self, dbname: S) -> Self {\n        self.dbname = Some(dbname.as_ref().to_os_string());\n        self\n    }\n\n    /// disable all page-skipping behavior\n    #[must_use]\n    pub fn disable_page_skipping(mut self) -> Self {\n        self.disable_page_skipping = true;\n        self\n    }\n\n    /// show the commands being sent to the server\n    #[must_use]\n    pub fn echo(mut self) -> Self {\n        self.echo = true;\n        self\n    }\n\n    /// do full vacuuming\n    #[must_use]\n    pub fn full(mut self) -> Self {\n        self.full = true;\n        self\n    }\n\n    /// freeze row transaction information\n    #[must_use]\n    pub fn freeze(mut self) -> Self {\n        self.freeze = true;\n        self\n    }\n\n    /// always remove index entries that point to dead tuples\n    #[must_use]\n    pub fn force_index_cleanup(mut self) -> Self {\n        self.force_index_cleanup = true;\n        self\n    }\n\n    /// use this many concurrent connections to vacuum\n    #[must_use]\n    pub fn jobs(mut self, jobs: u32) -> Self {\n        self.jobs = Some(jobs);\n        self\n    }\n\n    /// minimum multixact ID age of tables to vacuum\n    #[must_use]\n    pub fn min_mxid_age<S: AsRef<OsStr>>(mut self, 
min_mxid_age: S) -> Self {\n        self.min_mxid_age = Some(min_mxid_age.as_ref().to_os_string());\n        self\n    }\n\n    /// minimum transaction ID age of tables to vacuum\n    #[must_use]\n    pub fn min_xid_age<S: AsRef<OsStr>>(mut self, min_xid_age: S) -> Self {\n        self.min_xid_age = Some(min_xid_age.as_ref().to_os_string());\n        self\n    }\n\n    /// don't remove index entries that point to dead tuples\n    #[must_use]\n    pub fn no_index_cleanup(mut self) -> Self {\n        self.no_index_cleanup = true;\n        self\n    }\n\n    /// skip the main relation\n    #[must_use]\n    pub fn no_process_main(mut self) -> Self {\n        self.no_process_main = true;\n        self\n    }\n\n    /// skip the TOAST table associated with the table to vacuum\n    #[must_use]\n    pub fn no_process_toast(mut self) -> Self {\n        self.no_process_toast = true;\n        self\n    }\n\n    /// don't truncate empty pages at the end of the table\n    #[must_use]\n    pub fn no_truncate(mut self) -> Self {\n        self.no_truncate = true;\n        self\n    }\n\n    /// vacuum tables in the specified schema(s) only\n    #[must_use]\n    pub fn schema<S: AsRef<OsStr>>(mut self, schema: S) -> Self {\n        self.schema = Some(schema.as_ref().to_os_string());\n        self\n    }\n\n    /// do not vacuum tables in the specified schema(s)\n    #[must_use]\n    pub fn exclude_schema<S: AsRef<OsStr>>(mut self, exclude_schema: S) -> Self {\n        self.exclude_schema = Some(exclude_schema.as_ref().to_os_string());\n        self\n    }\n\n    /// use this many background workers for vacuum, if available\n    #[must_use]\n    pub fn parallel(mut self, parallel: u32) -> Self {\n        self.parallel = Some(parallel);\n        self\n    }\n\n    /// don't write any messages\n    #[must_use]\n    pub fn quiet(mut self) -> Self {\n        self.quiet = true;\n        self\n    }\n\n    /// skip relations that cannot be immediately locked\n    #[must_use]\n    pub fn 
skip_locked(mut self) -> Self {\n        self.skip_locked = true;\n        self\n    }\n\n    /// vacuum specific table(s) only\n    #[must_use]\n    pub fn table<S: AsRef<OsStr>>(mut self, table: S) -> Self {\n        self.table = Some(table.as_ref().to_os_string());\n        self\n    }\n\n    /// write a lot of output\n    #[must_use]\n    pub fn verbose(mut self) -> Self {\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// update optimizer statistics\n    #[must_use]\n    pub fn analyze(mut self) -> Self {\n        self.analyze = true;\n        self\n    }\n\n    /// only update optimizer statistics; no vacuum\n    #[must_use]\n    pub fn analyze_only(mut self) -> Self {\n        self.analyze_only = true;\n        self\n    }\n\n    /// only update optimizer statistics, in multiple stages for faster results; no vacuum\n    #[must_use]\n    pub fn analyze_in_stages(mut self) -> Self {\n        self.analyze_in_stages = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// user name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// 
force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n\n    /// alternate maintenance database\n    #[must_use]\n    pub fn maintenance_db<S: AsRef<OsStr>>(mut self, maintenance_db: S) -> Self {\n        self.maintenance_db = Some(maintenance_db.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for VacuumDbBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"vacuumdb\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    #[expect(clippy::too_many_lines)]\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if self.all {\n            args.push(\"--all\".into());\n        }\n\n        if let Some(buffer_usage_limit) = &self.buffer_usage_limit {\n            args.push(\"--buffer-usage-limit\".into());\n            args.push(buffer_usage_limit.into());\n        }\n\n        if let Some(dbname) = &self.dbname {\n            args.push(\"--dbname\".into());\n            args.push(dbname.into());\n        }\n\n        if self.disable_page_skipping {\n            args.push(\"--disable-page-skipping\".into());\n        }\n\n        if self.echo {\n            args.push(\"--echo\".into());\n        }\n\n        if self.full {\n            args.push(\"--full\".into());\n        }\n\n        if self.freeze {\n            args.push(\"--freeze\".into());\n        }\n\n        if self.force_index_cleanup {\n            args.push(\"--force-index-cleanup\".into());\n        }\n\n        if let Some(jobs) = &self.jobs {\n            
args.push(\"--jobs\".into());\n            args.push(jobs.to_string().into());\n        }\n\n        if let Some(min_mxid_age) = &self.min_mxid_age {\n            args.push(\"--min-mxid-age\".into());\n            args.push(min_mxid_age.into());\n        }\n\n        if let Some(min_xid_age) = &self.min_xid_age {\n            args.push(\"--min-xid-age\".into());\n            args.push(min_xid_age.into());\n        }\n\n        if self.no_index_cleanup {\n            args.push(\"--no-index-cleanup\".into());\n        }\n\n        if self.no_process_main {\n            args.push(\"--no-process-main\".into());\n        }\n\n        if self.no_process_toast {\n            args.push(\"--no-process-toast\".into());\n        }\n\n        if self.no_truncate {\n            args.push(\"--no-truncate\".into());\n        }\n\n        if let Some(schema) = &self.schema {\n            args.push(\"--schema\".into());\n            args.push(schema.into());\n        }\n\n        if let Some(exclude_schema) = &self.exclude_schema {\n            args.push(\"--exclude-schema\".into());\n            args.push(exclude_schema.into());\n        }\n\n        if let Some(parallel) = &self.parallel {\n            args.push(\"--parallel\".into());\n            args.push(parallel.to_string().into());\n        }\n\n        if self.quiet {\n            args.push(\"--quiet\".into());\n        }\n\n        if self.skip_locked {\n            args.push(\"--skip-locked\".into());\n        }\n\n        if let Some(table) = &self.table {\n            args.push(\"--table\".into());\n            args.push(table.into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.analyze {\n            args.push(\"--analyze\".into());\n        }\n\n        if self.analyze_only {\n            args.push(\"--analyze-only\".into());\n        }\n\n        if 
self.analyze_in_stages {\n            args.push(\"--analyze-in-stages\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        if let Some(maintenance_db) = &self.maintenance_db {\n            args.push(\"--maintenance-db\".into());\n            args.push(maintenance_db.into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = VacuumDbBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"vacuumdb\"),\n            
PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = VacuumDbBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./vacuumdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\vacuumdb\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = VacuumDbBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./vacuumdb\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\vacuumdb\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = VacuumDbBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .all()\n            .buffer_usage_limit(\"buffer_usage_limit\")\n            .dbname(\"dbname\")\n            .disable_page_skipping()\n            .echo()\n            .full()\n            .freeze()\n            .force_index_cleanup()\n            .jobs(1)\n            .min_mxid_age(\"min_mxid_age\")\n            .min_xid_age(\"min_xid_age\")\n            .no_index_cleanup()\n            .no_process_main()\n            .no_process_toast()\n            .no_truncate()\n            .schema(\"schema\")\n            .exclude_schema(\"exclude_schema\")\n            .parallel(1)\n            .quiet()\n            
.skip_locked()\n            .table(\"table\")\n            .verbose()\n            .version()\n            .analyze()\n            .analyze_only()\n            .analyze_in_stages()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"username\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .maintenance_db(\"maintenance_db\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"vacuumdb\" \"--all\" \"--buffer-usage-limit\" \"buffer_usage_limit\" \"--dbname\" \"dbname\" \"--disable-page-skipping\" \"--echo\" \"--full\" \"--freeze\" \"--force-index-cleanup\" \"--jobs\" \"1\" \"--min-mxid-age\" \"min_mxid_age\" \"--min-xid-age\" \"min_xid_age\" \"--no-index-cleanup\" \"--no-process-main\" \"--no-process-toast\" \"--no-truncate\" \"--schema\" \"schema\" \"--exclude-schema\" \"exclude_schema\" \"--parallel\" \"1\" \"--quiet\" \"--skip-locked\" \"--table\" \"table\" \"--verbose\" \"--version\" \"--analyze\" \"--analyze-only\" \"--analyze-in-stages\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"username\" \"--no-password\" \"--password\" \"--maintenance-db\" \"maintenance_db\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_commands/src/vacuumlo.rs",
    "content": "use crate::Settings;\nuse crate::traits::CommandBuilder;\nuse std::convert::AsRef;\nuse std::ffi::{OsStr, OsString};\nuse std::path::PathBuf;\n\n/// `vacuumlo` removes unreferenced large objects from databases.\n#[derive(Clone, Debug, Default)]\npub struct VacuumLoBuilder {\n    program_dir: Option<PathBuf>,\n    envs: Vec<(OsString, OsString)>,\n    limit: Option<usize>,\n    dry_run: bool,\n    verbose: bool,\n    version: bool,\n    help: bool,\n    host: Option<OsString>,\n    port: Option<u16>,\n    username: Option<OsString>,\n    no_password: bool,\n    password: bool,\n    pg_password: Option<OsString>,\n}\n\nimpl VacuumLoBuilder {\n    /// Create a new [`VacuumLoBuilder`]\n    #[must_use]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new [`VacuumLoBuilder`] from [Settings]\n    pub fn from(settings: &dyn Settings) -> Self {\n        let mut builder = Self::new()\n            .program_dir(settings.get_binary_dir())\n            .host(settings.get_host())\n            .port(settings.get_port())\n            .username(settings.get_username())\n            .pg_password(settings.get_password());\n        if let Some(socket_dir) = settings.get_socket_dir() {\n            builder = builder.host(socket_dir.to_string_lossy().to_string());\n        }\n        builder\n    }\n\n    /// Location of the program binary\n    #[must_use]\n    pub fn program_dir<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.program_dir = Some(path.into());\n        self\n    }\n\n    /// commit after removing each LIMIT large objects\n    #[must_use]\n    pub fn limit(mut self, limit: usize) -> Self {\n        self.limit = Some(limit);\n        self\n    }\n\n    /// don't remove large objects, just show what would be done\n    #[must_use]\n    pub fn dry_run(mut self) -> Self {\n        self.dry_run = true;\n        self\n    }\n\n    /// write a lot of progress messages\n    #[must_use]\n    pub fn verbose(mut self) -> Self 
{\n        self.verbose = true;\n        self\n    }\n\n    /// output version information, then exit\n    #[must_use]\n    pub fn version(mut self) -> Self {\n        self.version = true;\n        self\n    }\n\n    /// show help, then exit\n    #[must_use]\n    pub fn help(mut self) -> Self {\n        self.help = true;\n        self\n    }\n\n    /// database server host or socket directory\n    #[must_use]\n    pub fn host<S: AsRef<OsStr>>(mut self, host: S) -> Self {\n        self.host = Some(host.as_ref().to_os_string());\n        self\n    }\n\n    /// database server port\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.port = Some(port);\n        self\n    }\n\n    /// user name to connect as\n    #[must_use]\n    pub fn username<S: AsRef<OsStr>>(mut self, username: S) -> Self {\n        self.username = Some(username.as_ref().to_os_string());\n        self\n    }\n\n    /// never prompt for password\n    #[must_use]\n    pub fn no_password(mut self) -> Self {\n        self.no_password = true;\n        self\n    }\n\n    /// force password prompt\n    #[must_use]\n    pub fn password(mut self) -> Self {\n        self.password = true;\n        self\n    }\n\n    /// user password\n    #[must_use]\n    pub fn pg_password<S: AsRef<OsStr>>(mut self, pg_password: S) -> Self {\n        self.pg_password = Some(pg_password.as_ref().to_os_string());\n        self\n    }\n}\n\nimpl CommandBuilder for VacuumLoBuilder {\n    /// Get the program name\n    fn get_program(&self) -> &'static OsStr {\n        \"vacuumlo\".as_ref()\n    }\n\n    /// Location of the program binary\n    fn get_program_dir(&self) -> &Option<PathBuf> {\n        &self.program_dir\n    }\n\n    /// Get the arguments for the command\n    fn get_args(&self) -> Vec<OsString> {\n        let mut args: Vec<OsString> = Vec::new();\n\n        if let Some(limit) = &self.limit {\n            args.push(\"--limit\".into());\n            args.push(limit.to_string().into());\n       
 }\n\n        if self.dry_run {\n            args.push(\"--dry-run\".into());\n        }\n\n        if self.verbose {\n            args.push(\"--verbose\".into());\n        }\n\n        if self.version {\n            args.push(\"--version\".into());\n        }\n\n        if self.help {\n            args.push(\"--help\".into());\n        }\n\n        if let Some(host) = &self.host {\n            args.push(\"--host\".into());\n            args.push(host.into());\n        }\n\n        if let Some(port) = &self.port {\n            args.push(\"--port\".into());\n            args.push(port.to_string().into());\n        }\n\n        if let Some(username) = &self.username {\n            args.push(\"--username\".into());\n            args.push(username.into());\n        }\n\n        if self.no_password {\n            args.push(\"--no-password\".into());\n        }\n\n        if self.password {\n            args.push(\"--password\".into());\n        }\n\n        args\n    }\n\n    /// Get the environment variables for the command\n    fn get_envs(&self) -> Vec<(OsString, OsString)> {\n        let mut envs: Vec<(OsString, OsString)> = self.envs.clone();\n\n        if let Some(password) = &self.pg_password {\n            envs.push((\"PGPASSWORD\".into(), password.into()));\n        }\n\n        envs\n    }\n\n    /// Set an environment variable for the command\n    fn env<S: AsRef<OsStr>>(mut self, key: S, value: S) -> Self {\n        self.envs\n            .push((key.as_ref().to_os_string(), value.as_ref().to_os_string()));\n        self\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n    use crate::TestSocketSettings;\n    use crate::traits::CommandToString;\n    use test_log::test;\n\n    #[test]\n    fn test_builder_new() {\n        let command = VacuumLoBuilder::new().program_dir(\".\").build();\n        assert_eq!(\n            PathBuf::from(\".\").join(\"vacuumlo\"),\n            
PathBuf::from(command.to_command_string().replace('\"', \"\"))\n        );\n    }\n\n    #[test]\n    fn test_builder_from() {\n        let command = VacuumLoBuilder::from(&TestSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./vacuumlo\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\vacuumlo\" \"#;\n\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n\n    #[test]\n    fn test_builder_from_socket() {\n        let command = VacuumLoBuilder::from(&TestSocketSettings).build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGPASSWORD=\"password\" \"./vacuumlo\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = r#\"\".\\\\vacuumlo\" \"#;\n        assert_eq!(\n            format!(\n                r#\"{command_prefix}\"--host\" \"/tmp/pg_socket\" \"--port\" \"5432\" \"--username\" \"postgres\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n    #[test]\n    fn test_builder() {\n        let command = VacuumLoBuilder::new()\n            .env(\"PGDATABASE\", \"database\")\n            .limit(100)\n            .dry_run()\n            .verbose()\n            .version()\n            .help()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"postgres\")\n            .no_password()\n            .password()\n            .pg_password(\"password\")\n            .build();\n        #[cfg(not(target_os = \"windows\"))]\n        let command_prefix = r#\"PGDATABASE=\"database\" PGPASSWORD=\"password\" \"#;\n        #[cfg(target_os = \"windows\")]\n        let command_prefix = String::new();\n\n        assert_eq!(\n            format!(\n                
r#\"{command_prefix}\"vacuumlo\" \"--limit\" \"100\" \"--dry-run\" \"--verbose\" \"--version\" \"--help\" \"--host\" \"localhost\" \"--port\" \"5432\" \"--username\" \"postgres\" \"--no-password\" \"--password\"\"#\n            ),\n            command.to_command_string()\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/Cargo.toml",
    "content": "[package]\nauthors.workspace = true\nbuild = \"build/build.rs\"\ncategories.workspace = true\ndescription = \"Install and run a PostgreSQL database locally on Linux, MacOS or Windows. PostgreSQL can be bundled with your application, or downloaded on demand.\"\nedition.workspace = true\nkeywords.workspace = true\nlicense.workspace = true\nname = \"postgresql_embedded\"\nrepository = \"https://github.com/theseus-rs/postgresql-embedded\"\nrust-version.workspace = true\nversion.workspace = true\n\n[build-dependencies]\nanyhow = { workspace = true }\npostgresql_archive = { path = \"../postgresql_archive\", version = \"0.20.2\", default-features = false }\ntarget-triple = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\nurl = { workspace = true }\n\n[dependencies]\npostgresql_archive = { path = \"../postgresql_archive\", version = \"0.20.2\", default-features = false }\npostgresql_commands = { path = \"../postgresql_commands\", version = \"0.20.2\" }\nrand = { workspace = true }\nsemver = { workspace = true }\nsqlx = { workspace = true, features = [\"runtime-tokio\"] }\ntempfile = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"full\"], optional = true }\ntracing = { workspace = true, features = [\"log\"] }\nurl = { workspace = true }\n\n[dev-dependencies]\nanyhow = { workspace = true }\ncriterion = { workspace = true }\ntest-log = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n\n[features]\ndefault = [\n    \"native-tls\",\n    \"theseus\",\n]\nblocking = [\"tokio\"]\nbundled = [\"postgresql_archive/github\"]\nindicatif = [\n    \"postgresql_archive/indicatif\",\n]\nnative-tls = [\n    \"postgresql_archive/native-tls\",\n    \"sqlx/tls-native-tls\",\n]\nrustls = [\n    \"postgresql_archive/rustls\",\n    \"sqlx/tls-rustls\",\n]\ntheseus = [\n    \"postgresql_archive/theseus\",\n]\ntokio = [\n    \"dep:tokio\",\n    \"postgresql_commands/tokio\",\n    
\"sqlx/runtime-tokio\",\n]\nzonky = [\n    \"postgresql_archive/zonky\",\n]\n\n[package.metadata.release]\ndependent-version = \"upgrade\"\n\n[package.metadata.docs.rs]\nno-default-features = true\nfeatures = [\"blocking\", \"theseus\", \"tokio\"]\ntargets = [\"x86_64-unknown-linux-gnu\"]\n\n[[bench]]\nharness = false\nname = \"embedded\"\n"
  },
  {
    "path": "postgresql_embedded/README.md",
    "content": "# PostgreSQL Embedded\n\n[![ci](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml)\n[![Documentation](https://docs.rs/postgresql_embedded/badge.svg)](https://docs.rs/postgresql_embedded)\n[![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n[![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n[![Latest version](https://img.shields.io/crates/v/postgresql_embedded.svg)](https://crates.io/crates/postgresql_embedded)\n[![License](https://img.shields.io/crates/l/postgresql_embedded)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_embedded#license)\n[![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n\nInstall and run a PostgreSQL database locally on Linux, MacOS or Windows. PostgreSQL can be\nbundled with your application, or downloaded on demand.\n\nThis library provides an embedded-like experience for PostgreSQL similar to what you would have with\nSQLite. This is accomplished by downloading and installing PostgreSQL during runtime. 
There is\nalso a \"bundled\" feature that, when enabled, downloads the PostgreSQL installation archive at\ncompile time, includes it in your binary, and installs from the bundled archive at runtime.\nIn either case, PostgreSQL will run in a separate process space.\n\n## Features\n\n- installing and running PostgreSQL\n- running PostgreSQL on ephemeral ports\n- Unix socket support\n- async and blocking API\n- bundling the PostgreSQL archive in an executable\n- semantic version resolution\n- ability to configure PostgreSQL startup options\n- settings builder for fluent configuration\n- URL-based configuration\n- choice of native-tls or rustls\n\n## Examples\n\n### Asynchronous API\n\n```rust\nuse postgresql_embedded::{PostgreSQL, Result};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await\n}\n```\n\n### Synchronous API\n\n```rust\nuse postgresql_embedded::Result;\nuse postgresql_embedded::blocking::PostgreSQL;\n\nfn main() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup()?;\n    postgresql.start()?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name)?;\n    postgresql.database_exists(database_name)?;\n    postgresql.drop_database(database_name)?;\n\n    postgresql.stop()\n}\n```\n\n### Settings Builder\n\n```rust\nuse postgresql_embedded::{PostgreSQL, Result, SettingsBuilder};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let settings = SettingsBuilder::new()\n        .host(\"127.0.0.1\")\n        .port(5433)\n        .username(\"admin\")\n        .password(\"secret\")\n        .temporary(false)\n        .config(\"max_connections\", \"100\")\n        
.build();\n\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    postgresql.stop().await\n}\n```\n\n### Unix Socket\n\n```rust\nuse postgresql_embedded::{PostgreSQL, Result, SettingsBuilder};\nuse std::path::PathBuf;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let settings = SettingsBuilder::new()\n        .socket_dir(PathBuf::from(\"/tmp/pg_socket\"))\n        .build();\n\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    postgresql.database_exists(database_name).await?;\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await\n}\n```\n\n## Information\n\nDuring the build process, when the `bundled` feature is enabled, the PostgreSQL binaries are\ndownloaded and included in the resulting binary. The version of the PostgreSQL binaries is\ndetermined by the `POSTGRESQL_VERSION` environment variable. If the `POSTGRESQL_VERSION`\nenvironment variable is not set, then `postgresql_archive::LATEST` will be used to determine the\nversion of the PostgreSQL binaries to download.\n\nWhen downloading the theseus PostgreSQL binaries, either during build, or at runtime, the\n`GITHUB_TOKEN` environment variable can be set to a GitHub personal access token to increase\nthe rate limit for downloading the PostgreSQL binaries. The `GITHUB_TOKEN` environment\nvariable is not required.\n\nAt runtime, the PostgreSQL binaries are cached by default in the following directories:\n\n- Unix: `$HOME/.theseus/postgresql`\n- Windows: `%USERPROFILE%\\.theseus\\postgresql`\n\nPerformance can be improved by using a specific version of the PostgreSQL binaries (e.g. 
`=16.4.0`).\nAfter the first download, the PostgreSQL binaries will be cached and reused for subsequent runs.\nFurther, the repository will no longer be queried to calculate the version match.\n\n## Feature flags\n\npostgresql_embedded uses feature flags to reduce compile time and binary size.\n\nThe following features are available:\n\n| Name         | Description                                              | Default? |\n|--------------|----------------------------------------------------------|----------|\n| `bundled`    | Bundles the PostgreSQL archive into the resulting binary | No       |\n| `blocking`   | Enables the blocking API; requires `tokio`               | No       |\n| `indicatif`  | Enables tracing-indicatif support                        | No       |\n| `native-tls` | Enables native-tls support                               | Yes      |\n| `rustls`     | Enables rustls support                                   | No       |\n| `theseus`    | Enables theseus PostgreSQL binaries                      | Yes      |\n| `tokio`      | Enables using tokio for async                            | No       |\n| `zonky`      | Enables zonky PostgreSQL binaries                        | No       |\n\n## Bundling PostgreSQL\n\nTo bundle PostgreSQL with your application, you can enable the `bundled` feature. This will download the PostgreSQL\narchive at compile time and include it in your binary. You should specify the version of PostgreSQL to bundle by\nsetting the environment variable `POSTGRESQL_VERSION` to a specific version, e.g. `=17.2.0`. To use the bundled\nPostgreSQL, you will also need to set an explicit matching version at runtime in `Settings`:\n\n```rust\nuse postgresql_embedded::{Result, Settings, VersionReq};\nuse std::str::FromStr;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let settings = Settings {\n        version: VersionReq::from_str(\"=17.2.0\")?,\n        ..Default::default()\n    };\n    Ok(())\n}\n```\n\nThe PostgreSQL binaries can also be obtained from a different GitHub source by setting the `POSTGRESQL_RELEASES_URL`\nenvironment variable. The repository must contain releases with archives in the same structure as\n[theseus-rs/postgresql_binaries](https://github.com/theseus-rs/postgresql-binaries).\n\n## Notes\n\nSupports using PostgreSQL binaries from:\n\n* [theseus-rs/postgresql-binaries](https://github.com/theseus-rs/postgresql-binaries) (default)\n* [zonkyio/embedded-postgres-binaries](https://github.com/zonkyio/embedded-postgres-binaries)\n\n## License\n\nLicensed under either of\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as\ndefined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n"
  },
  {
    "path": "postgresql_embedded/benches/embedded.rs",
    "content": "use criterion::{Criterion, criterion_group, criterion_main};\nuse postgresql_embedded::Result;\nuse postgresql_embedded::blocking::PostgreSQL;\nuse std::time::Duration;\n\nfn benchmarks(criterion: &mut Criterion) {\n    bench_lifecycle(criterion).ok();\n}\n\nfn bench_lifecycle(criterion: &mut Criterion) -> Result<()> {\n    criterion.bench_function(\"lifecycle\", |bencher| {\n        bencher.iter(|| {\n            lifecycle().ok();\n        });\n    });\n\n    Ok(())\n}\n\nfn lifecycle() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup()?;\n    postgresql.start()?;\n    postgresql.stop()\n}\n\ncriterion_group!(\n    name = benches;\n    config = Criterion::default()\n        .measurement_time(Duration::from_secs(30))\n        .sample_size(10);\n    targets = benchmarks\n);\ncriterion_main!(benches);\n"
  },
  {
    "path": "postgresql_embedded/build/build.rs",
    "content": "#[cfg(feature = \"bundled\")]\nmod bundle;\n\nuse anyhow::Result;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    #[cfg(feature = \"bundled\")]\n    bundle::stage_postgresql_archive().await?;\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/build/bundle.rs",
    "content": "#![allow(dead_code)]\n\nuse anyhow::Result;\nuse postgresql_archive::configuration::{custom, theseus};\nuse postgresql_archive::repository::github::repository::GitHub;\nuse postgresql_archive::{ExactVersion, Version, VersionReq, matcher};\nuse postgresql_archive::{get_archive, repository};\nuse std::fs::File;\nuse std::io::Write;\nuse std::path::PathBuf;\nuse std::str::FromStr;\nuse std::{env, fs};\nuse url::Url;\n\n/// Stage the PostgreSQL archive when the `bundled` feature is enabled so that\n/// it can be included in the final binary. This is useful for creating a\n/// self-contained binary that does not require the PostgreSQL archive to be\n/// downloaded at runtime.\npub(crate) async fn stage_postgresql_archive() -> Result<()> {\n    println!(\"cargo:rerun-if-env-changed=POSTGRESQL_VERSION\");\n    println!(\"cargo:rerun-if-env-changed=POSTGRESQL_RELEASES_URL\");\n    #[cfg(feature = \"theseus\")]\n    let default_releases_url = postgresql_archive::configuration::theseus::URL.to_string();\n    #[cfg(not(feature = \"theseus\"))]\n    let default_releases_url = String::new();\n\n    let releases_url = match env::var(\"POSTGRESQL_RELEASES_URL\") {\n        Ok(custom_url) if !default_releases_url.is_empty() => {\n            register_custom_repository()?;\n            custom_url\n        }\n        _ => {\n            register_theseus_repository()?;\n            default_releases_url\n        }\n    };\n    println!(\"PostgreSQL releases URL: {releases_url}\");\n    let postgres_version_req = env::var(\"POSTGRESQL_VERSION\").unwrap_or(\"*\".to_string());\n    let version_req = VersionReq::from_str(postgres_version_req.as_str())?;\n    println!(\"PostgreSQL version: {postgres_version_req}\");\n    println!(\"Target: {}\", target_triple::TARGET);\n\n    let out_dir = PathBuf::from(env::var(\"OUT_DIR\")?);\n    println!(\"OUT_DIR: {out_dir:?}\");\n\n    let mut archive_version_file = out_dir.clone();\n    
archive_version_file.push(\"postgresql.version\");\n    let mut archive_file = out_dir.clone();\n    archive_file.push(\"postgresql.tar.gz\");\n\n    if archive_version_file.exists() && archive_file.exists() {\n        println!(\"PostgreSQL archive exists: {archive_file:?}\");\n        return Ok(());\n    }\n\n    let (asset_version, archive) = if let Some(exact_version) = version_req.exact_version() {\n        let cached_file = cached_archive_path(&exact_version);\n        println!(\n            \"Cached file: {cached_file:?}; exists: {}\",\n            cached_file.exists()\n        );\n        if cached_file.is_file() {\n            println!(\"Using cached PostgreSQL archive: {cached_file:?}\");\n            (exact_version, fs::read(&cached_file)?)\n        } else {\n            let (asset_version, archive) = get_archive(&releases_url, &version_req).await?;\n            if let Some(parent) = cached_file.parent() {\n                fs::create_dir_all(parent)?;\n            }\n            fs::write(&cached_file, &archive)?;\n            println!(\"Cached PostgreSQL archive to: {cached_file:?}\");\n            (asset_version, archive)\n        }\n    } else {\n        get_archive(&releases_url, &version_req).await?\n    };\n\n    fs::write(archive_version_file.clone(), asset_version.to_string())?;\n    let mut file = File::create(archive_file.clone())?;\n    file.write_all(&archive)?;\n    file.sync_data()?;\n    println!(\"PostgreSQL archive written to: {archive_file:?}\");\n\n    Ok(())\n}\n\n/// Returns the path for a cached archive.\nfn cached_archive_path(version: &Version) -> PathBuf {\n    let home = std::env::home_dir().unwrap_or_else(|| env::current_dir().unwrap_or_default());\n    let target = target_triple::TARGET;\n    home.join(\".theseus\")\n        .join(\"postgresql\")\n        .join(format!(\"postgresql-{version}-{target}.tar.gz\"))\n}\n\nfn supports_github_url(url: &str) -> postgresql_archive::Result<bool> {\n    let parsed_url = 
Url::parse(url)?;\n    let host = parsed_url.host_str().unwrap_or_default();\n    Ok(host.ends_with(\"github.com\"))\n}\n\nfn register_custom_repository() -> Result<()> {\n    repository::registry::register(supports_github_url, Box::new(GitHub::new))?;\n    matcher::registry::register(supports_github_url, custom::matcher)?;\n    Ok(())\n}\n\nfn register_theseus_repository() -> Result<()> {\n    repository::registry::register(supports_github_url, Box::new(GitHub::new))?;\n    matcher::registry::register(supports_github_url, theseus::matcher)?;\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/src/blocking/mod.rs",
    "content": "mod postgresql;\n\npub use postgresql::PostgreSQL;\n"
  },
  {
    "path": "postgresql_embedded/src/blocking/postgresql.rs",
    "content": "use crate::{Result, Settings, Status};\nuse std::sync::LazyLock;\nuse tokio::runtime::Runtime;\n\nstatic RUNTIME: LazyLock<Runtime> = LazyLock::new(|| Runtime::new().unwrap());\n\n/// `PostgreSQL` server\n#[derive(Clone, Debug, Default)]\npub struct PostgreSQL {\n    inner: crate::postgresql::PostgreSQL,\n}\n\n/// `PostgreSQL` server methods\nimpl PostgreSQL {\n    /// Create a new [`crate::postgresql::PostgreSQL`] instance\n    #[must_use]\n    pub fn new(settings: Settings) -> Self {\n        Self {\n            inner: crate::postgresql::PostgreSQL::new(settings),\n        }\n    }\n\n    /// Get the [status](Status) of the `PostgreSQL` server\n    #[must_use]\n    pub fn status(&self) -> Status {\n        self.inner.status()\n    }\n\n    /// Get the [settings](Settings) of the `PostgreSQL` server\n    #[must_use]\n    pub fn settings(&self) -> &Settings {\n        self.inner.settings()\n    }\n\n    /// Set up the database by extracting the archive and initializing the database.\n    /// If the installation directory already exists, the archive will not be extracted.\n    /// If the data directory already exists, the database will not be initialized.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the setup fails.\n    pub fn setup(&mut self) -> Result<()> {\n        RUNTIME\n            .handle()\n            .block_on(async move { self.inner.setup().await })\n    }\n\n    /// Start the database and wait for the startup to complete.\n    /// If the port is set to `0`, the database will be started on a random port.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the startup fails.\n    pub fn start(&mut self) -> Result<()> {\n        RUNTIME\n            .handle()\n            .block_on(async move { self.inner.start().await })\n    }\n\n    /// Stop the database gracefully (smart mode) and wait for the shutdown to complete.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the shutdown fails.\n    
pub fn stop(&self) -> Result<()> {\n        RUNTIME\n            .handle()\n            .block_on(async move { self.inner.stop().await })\n    }\n\n    /// Create a new database with the given name.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the database creation fails.\n    pub fn create_database<S>(&self, database_name: S) -> Result<()>\n    where\n        S: AsRef<str> + std::fmt::Debug,\n    {\n        RUNTIME\n            .handle()\n            .block_on(async move { self.inner.create_database(database_name).await })\n    }\n\n    /// Check if a database with the given name exists.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the database existence check fails.\n    pub fn database_exists<S>(&self, database_name: S) -> Result<bool>\n    where\n        S: AsRef<str> + std::fmt::Debug,\n    {\n        RUNTIME\n            .handle()\n            .block_on(async move { self.inner.database_exists(database_name).await })\n    }\n\n    /// Drop a database with the given name.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the database drop fails.\n    pub fn drop_database<S>(&self, database_name: S) -> Result<()>\n    where\n        S: AsRef<str> + std::fmt::Debug,\n    {\n        RUNTIME\n            .handle()\n            .block_on(async move { self.inner.drop_database(database_name).await })\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use crate::VersionReq;\n\n    #[test]\n    fn test_postgresql() -> Result<()> {\n        let version = VersionReq::parse(\"=16.4.0\")?;\n        let settings = Settings {\n            version,\n            ..Settings::default()\n        };\n        let postgresql = PostgreSQL::new(settings);\n        let initial_statuses = [Status::NotInstalled, Status::Installed, Status::Stopped];\n        assert!(initial_statuses.contains(&postgresql.status()));\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/src/error.rs",
    "content": "use std::string::FromUtf8Error;\n\n/// `PostgreSQL` embedded result type\npub type Result<T, E = Error> = core::result::Result<T, E>;\n\n/// Errors that can occur when using `PostgreSQL` embedded\n#[derive(Debug, thiserror::Error)]\npub enum Error {\n    /// Error when `PostgreSQL` archive operations fail\n    #[error(transparent)]\n    ArchiveError(postgresql_archive::Error),\n    /// Error when a command fails\n    #[error(\"Command error: stdout={stdout}; stderr={stderr}\")]\n    CommandError { stdout: String, stderr: String },\n    /// Error when the database could not be created\n    #[error(\"{0}\")]\n    CreateDatabaseError(String),\n    /// Error when accessing the database\n    #[error(transparent)]\n    DatabaseError(#[from] sqlx::Error),\n    /// Error when determining if the database exists\n    #[error(\"{0}\")]\n    DatabaseExistsError(String),\n    /// Error when the database could not be initialized\n    #[error(\"{0}\")]\n    DatabaseInitializationError(String),\n    /// Error when the database could not be started\n    #[error(\"{0}\")]\n    DatabaseStartError(String),\n    /// Error when the database could not be stopped\n    #[error(\"{0}\")]\n    DatabaseStopError(String),\n    /// Error when the database could not be dropped\n    #[error(\"{0}\")]\n    DropDatabaseError(String),\n    /// Error when an invalid URL is provided\n    #[error(\"Invalid URL: {url}; {message}\")]\n    InvalidUrl { url: String, message: String },\n    /// Error when IO operations fail\n    #[error(\"{0}\")]\n    IoError(String),\n    /// Parse error\n    #[error(transparent)]\n    ParseError(#[from] semver::Error),\n}\n\n/// Convert `PostgreSQL` [archive errors](postgresql_archive::Error) to an [embedded error](Error::ArchiveError)\nimpl From<postgresql_archive::Error> for Error {\n    fn from(error: postgresql_archive::Error) -> Self {\n        Error::ArchiveError(error)\n    }\n}\n\n/// Convert [standard IO errors](std::io::Error) to an [embedded error](Error::IoError)\nimpl From<std::io::Error> for Error {\n    fn from(error: std::io::Error) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n/// Convert [utf8 errors](FromUtf8Error) to [embedded errors](Error::IoError)\nimpl From<FromUtf8Error> for Error {\n    fn from(error: FromUtf8Error) -> Self {\n        Error::IoError(error.to_string())\n    }\n}\n\n/// These are relatively low-value tests; they are here to reduce the coverage gap and\n/// ensure that the error conversions are working as expected.\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_from_archive_error() {\n        let archive_error = postgresql_archive::Error::VersionNotFound(\"test\".to_string());\n        let error = Error::from(archive_error);\n        assert_eq!(error.to_string(), \"version not found for 'test'\");\n    }\n\n    #[test]\n    fn test_from_io_error() {\n        let io_error = std::io::Error::other(\"test\");\n        let error = Error::from(io_error);\n        assert_eq!(error.to_string(), \"test\");\n    }\n\n    #[test]\n    fn test_from_utf8_error() {\n        let invalid_utf8: Vec<u8> = vec![0, 159, 146, 150];\n        let from_utf8_error = String::from_utf8(invalid_utf8).expect_err(\"from utf8 error\");\n        let error = Error::from(from_utf8_error);\n        assert_eq!(\n            error.to_string(),\n            \"invalid utf-8 sequence of 1 bytes from index 1\"\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/src/lib.rs",
    "content": "//! # postgresql_embedded\n//!\n//! [![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n//! [![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n//! [![License](https://img.shields.io/crates/l/postgresql_embedded)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_embedded#license)\n//! [![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n//!\n//! Install and run a PostgreSQL database locally on Linux, MacOS or Windows.  PostgreSQL can be\n//! bundled with your application, or downloaded on demand.\n//!\n//! ## Table of contents\n//!\n//! - [Examples](#examples)\n//! - [Information](#information)\n//! - [Feature flags](#feature-flags)\n//! - [Safety](#safety)\n//! - [License](#license)\n//! - [Notes](#notes)\n//!\n//! ## Examples\n//!\n//! ### Asynchronous API\n//!\n//! ```no_run\n//! use postgresql_embedded::{PostgreSQL, Result};\n//!\n//! #[tokio::main]\n//! async fn main() -> Result<()> {\n//!     let mut postgresql = PostgreSQL::default();\n//!     postgresql.setup().await?;\n//!     postgresql.start().await?;\n//!\n//!     let database_name = \"test\";\n//!     postgresql.create_database(database_name).await?;\n//!     postgresql.database_exists(database_name).await?;\n//!     postgresql.drop_database(database_name).await?;\n//!\n//!     postgresql.stop().await\n//! }\n//! ```\n//!\n//! ### Synchronous API\n//! ```no_run\n//! #[cfg(feature = \"blocking\")] {\n//! use postgresql_embedded::blocking::PostgreSQL;\n//!\n//! let mut postgresql = PostgreSQL::default();\n//! postgresql.setup().unwrap();\n//! postgresql.start().unwrap();\n//!\n//! let database_name = \"test\";\n//! postgresql.create_database(database_name).unwrap();\n//! 
postgresql.database_exists(database_name).unwrap();\n//! postgresql.drop_database(database_name).unwrap();\n//!\n//! postgresql.stop().unwrap();\n//! }\n//! ```\n//!\n//! ### Settings Builder\n//!\n//! ```no_run\n//! use postgresql_embedded::{PostgreSQL, Result, SettingsBuilder};\n//!\n//! #[tokio::main]\n//! async fn main() -> Result<()> {\n//!     let settings = SettingsBuilder::new()\n//!         .host(\"127.0.0.1\")\n//!         .port(5433)\n//!         .username(\"admin\")\n//!         .password(\"secret\")\n//!         .temporary(false)\n//!         .config(\"max_connections\", \"100\")\n//!         .build();\n//!\n//!     let mut postgresql = PostgreSQL::new(settings);\n//!     postgresql.setup().await?;\n//!     postgresql.start().await?;\n//!\n//!     postgresql.stop().await\n//! }\n//! ```\n//!\n//! ### Unix Socket\n//!\n//! ```no_run\n//! use postgresql_embedded::{PostgreSQL, Result, SettingsBuilder};\n//! use std::path::PathBuf;\n//!\n//! #[tokio::main]\n//! async fn main() -> Result<()> {\n//!     let settings = SettingsBuilder::new()\n//!         .socket_dir(PathBuf::from(\"/tmp/pg_socket\"))\n//!         .build();\n//!\n//!     let mut postgresql = PostgreSQL::new(settings);\n//!     postgresql.setup().await?;\n//!     postgresql.start().await?;\n//!\n//!     let database_name = \"test\";\n//!     postgresql.create_database(database_name).await?;\n//!     postgresql.database_exists(database_name).await?;\n//!     postgresql.drop_database(database_name).await?;\n//!\n//!     postgresql.stop().await\n//! }\n//! ```\n//!\n//! ## Information\n//!\n//! During the build process, when the `bundled` feature is enabled, the PostgreSQL binaries are\n//! downloaded and included in the resulting binary. The version of the PostgreSQL binaries is\n//! determined by the `POSTGRESQL_VERSION` environment variable. If the `POSTGRESQL_VERSION`\n//! environment variable is not set, then `postgresql_archive::LATEST` will be used to determine the\n//! 
version of the PostgreSQL binaries to download.\n//!\n//! When downloading the theseus PostgreSQL binaries, either during build or at runtime, the\n//! `GITHUB_TOKEN` environment variable can be set to a GitHub personal access token to increase\n//! the rate limit for downloading the PostgreSQL binaries. The `GITHUB_TOKEN` environment\n//! variable is not required.\n//!\n//! At runtime, the PostgreSQL binaries are cached by default in the following directories:\n//!\n//! - Unix: `$HOME/.theseus/postgresql`\n//! - Windows: `%USERPROFILE%\\.theseus\\postgresql`\n//!\n//! Performance can be improved by using a specific version of the PostgreSQL binaries (e.g. `=16.10.0`).\n//! After the first download, the PostgreSQL binaries will be cached and reused for subsequent runs.\n//! Further, the repository will no longer be queried to calculate the version match.\n//!\n//! ## Feature flags\n//!\n//! postgresql_embedded uses feature flags to reduce compile time and binary size.\n//!\n//! The following features are available:\n//!\n//! | Name         | Description                                              | Default? |\n//! |--------------|----------------------------------------------------------|----------|\n//! | `bundled`    | Bundles the PostgreSQL archive into the resulting binary | No       |\n//! | `blocking`   | Enables the blocking API; requires `tokio`               | No       |\n//! | `indicatif`  | Enables tracing-indicatif support                        | No       |\n//! | `native-tls` | Enables native-tls support                               | Yes      |\n//! | `rustls`     | Enables rustls support                                   | No       |\n//! | `theseus`    | Enables theseus PostgreSQL binaries                      | Yes      |\n//! | `tokio`      | Enables using tokio for async                            | No       |\n//! | `zonky`      | Enables zonky PostgreSQL binaries                        | No       |\n//!\n//! ## Safety\n//!\n//! 
These crates use `#![forbid(unsafe_code)]` to ensure everything is implemented in 100% safe Rust.\n//!\n//! ## License\n//!\n//! Licensed under either of\n//!\n//! * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or <https://www.apache.org/licenses/LICENSE-2.0>)\n//! * MIT license ([LICENSE-MIT](LICENSE-MIT) or <https://opensource.org/licenses/MIT>)\n//!\n//! at your option.\n//!\n//! PostgreSQL is covered under [The PostgreSQL License](https://opensource.org/licenses/postgresql).\n\n#[cfg(feature = \"blocking\")]\npub mod blocking;\nmod error;\nmod postgresql;\nmod settings;\n\npub use error::{Error, Result};\npub use postgresql::{PostgreSQL, Status};\npub use postgresql_archive::{Version, VersionReq};\npub use settings::{Settings, SettingsBuilder};\nuse std::sync::LazyLock;\n\n/// The latest PostgreSQL version requirement\npub static LATEST: VersionReq = VersionReq::STAR;\n\n/// The latest PostgreSQL version 18\npub static V18: LazyLock<VersionReq> = LazyLock::new(|| VersionReq::parse(\"=18\").unwrap());\n\n/// The latest PostgreSQL version 17\npub static V17: LazyLock<VersionReq> = LazyLock::new(|| VersionReq::parse(\"=17\").unwrap());\n\n/// The latest PostgreSQL version 16\npub static V16: LazyLock<VersionReq> = LazyLock::new(|| VersionReq::parse(\"=16\").unwrap());\n\n/// The latest PostgreSQL version 15\npub static V15: LazyLock<VersionReq> = LazyLock::new(|| VersionReq::parse(\"=15\").unwrap());\n\n/// The latest PostgreSQL version 14\n#[deprecated(\n    since = \"0.18.0\",\n    note = \"See https://www.postgresql.org/developer/roadmap/\"\n)]\npub static V14: LazyLock<VersionReq> = LazyLock::new(|| VersionReq::parse(\"=14\").unwrap());\n\npub use settings::BOOTSTRAP_DATABASE;\npub use settings::BOOTSTRAP_SUPERUSER;\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_version() -> Result<()> {\n        let version = VersionReq::parse(\"=18.2.0\")?;\n        assert_eq!(version.to_string(), \"=18.2.0\");\n        Ok(())\n 
   }\n\n    #[test]\n    fn test_version_latest() {\n        assert_eq!(LATEST.to_string(), \"*\");\n    }\n\n    #[test]\n    fn test_version_18() {\n        assert_eq!(V18.to_string(), \"=18\");\n    }\n\n    #[test]\n    fn test_version_17() {\n        assert_eq!(V17.to_string(), \"=17\");\n    }\n\n    #[test]\n    fn test_version_16() {\n        assert_eq!(V16.to_string(), \"=16\");\n    }\n\n    #[test]\n    fn test_version_15() {\n        assert_eq!(V15.to_string(), \"=15\");\n    }\n\n    #[test]\n    #[allow(deprecated)]\n    fn test_version_14() {\n        assert_eq!(V14.to_string(), \"=14\");\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/src/postgresql.rs",
    "content": "use crate::error::Error::{DatabaseInitializationError, DatabaseStartError, DatabaseStopError};\nuse crate::error::Result;\nuse crate::settings::{BOOTSTRAP_DATABASE, BOOTSTRAP_SUPERUSER, Settings};\nuse postgresql_archive::extract;\n#[cfg(not(feature = \"bundled\"))]\nuse postgresql_archive::get_archive;\nuse postgresql_archive::get_version;\nuse postgresql_archive::{ExactVersion, ExactVersionReq};\n#[cfg(feature = \"tokio\")]\nuse postgresql_commands::AsyncCommandExecutor;\nuse postgresql_commands::CommandBuilder;\n#[cfg(not(feature = \"tokio\"))]\nuse postgresql_commands::CommandExecutor;\nuse postgresql_commands::initdb::InitDbBuilder;\nuse postgresql_commands::pg_ctl::Mode::{Start, Stop};\nuse postgresql_commands::pg_ctl::PgCtlBuilder;\nuse postgresql_commands::pg_ctl::ShutdownMode::Fast;\nuse semver::Version;\nuse sqlx::{PgPool, Row};\nuse std::fs::{read_dir, remove_dir_all, remove_file};\nuse std::io::prelude::*;\nuse std::net::TcpListener;\nuse std::path::PathBuf;\nuse tracing::{debug, instrument};\n\nuse crate::Error::{CreateDatabaseError, DatabaseExistsError, DropDatabaseError};\n\nconst PGDATABASE: &str = \"PGDATABASE\";\n\n/// `PostgreSQL` status\n#[derive(Debug, Clone, Copy, PartialEq)]\npub enum Status {\n    /// Archive not installed\n    NotInstalled,\n    /// Installation complete; not initialized\n    Installed,\n    /// Server started\n    Started,\n    /// Server initialized and stopped\n    Stopped,\n}\n\n/// `PostgreSQL` server\n#[derive(Clone, Debug)]\npub struct PostgreSQL {\n    settings: Settings,\n}\n\n/// `PostgreSQL` server methods\nimpl PostgreSQL {\n    /// Create a new [`PostgreSQL`] instance\n    #[must_use]\n    pub fn new(settings: Settings) -> Self {\n        let mut postgresql = PostgreSQL { settings };\n\n        // If an exact version is set, append the version to the installation directory to avoid\n        // conflicts with other versions.  
This will also facilitate setting the status of the\n        // server to the correct initial value.  If the minor and release version are not set, the\n        // installation directory will be determined dynamically during the installation process.\n        if !postgresql.settings.trust_installation_dir\n            && let Some(version) = postgresql.settings.version.exact_version()\n        {\n            let path = &postgresql.settings.installation_dir;\n            let version_string = version.to_string();\n\n            if !path.ends_with(&version_string) {\n                postgresql.settings.installation_dir =\n                    postgresql.settings.installation_dir.join(version_string);\n            }\n        }\n        postgresql\n    }\n\n    /// Get the [status](Status) of the PostgreSQL server\n    #[instrument(level = \"debug\", skip(self))]\n    pub fn status(&self) -> Status {\n        if self.is_running() {\n            Status::Started\n        } else if self.is_initialized() {\n            Status::Stopped\n        } else if self.installed_dir().is_some() {\n            Status::Installed\n        } else {\n            Status::NotInstalled\n        }\n    }\n\n    /// Get the [settings](Settings) of the `PostgreSQL` server\n    #[must_use]\n    pub fn settings(&self) -> &Settings {\n        &self.settings\n    }\n\n    /// Find a directory where `PostgreSQL` server is installed.\n    /// This first checks if the installation directory exists and matches the version requirement.\n    /// If it doesn't, it will search all the child directories for the latest version that matches the requirement.\n    /// If it returns None, we couldn't find a matching installation.\n    fn installed_dir(&self) -> Option<PathBuf> {\n        if self.settings.trust_installation_dir {\n            return Some(self.settings.installation_dir.clone());\n        }\n\n        let path = &self.settings.installation_dir;\n        let maybe_path_version = path\n            
.file_name()\n            .and_then(|file_name| Version::parse(&file_name.to_string_lossy()).ok());\n        // If this directory matches the version requirement, we're done.\n        if let Some(path_version) = maybe_path_version\n            && self.settings.version.matches(&path_version)\n            && path.exists()\n        {\n            return Some(path.clone());\n        }\n\n        // Get all directories in the path as versions.\n        let mut versions = read_dir(path)\n            .ok()?\n            .filter_map(|entry| {\n                let Some(entry) = entry.ok() else {\n                    // We ignore filesystem errors.\n                    return None;\n                };\n                // Skip non-directories\n                if !entry.file_type().ok()?.is_dir() {\n                    return None;\n                }\n                let file_name = entry.file_name();\n                let version = Version::parse(&file_name.to_string_lossy()).ok()?;\n                if self.settings.version.matches(&version) {\n                    Some((version, entry.path()))\n                } else {\n                    None\n                }\n            })\n            .collect::<Vec<_>>();\n        // Sort the versions in descending order i.e. 
latest version first\n        versions.sort_by(|(a, _), (b, _)| b.cmp(a));\n        // Get the first matching version as the best match\n        versions.first().map(|(_, path)| path.clone())\n    }\n\n    /// Check if the `PostgreSQL` server is initialized\n    fn is_initialized(&self) -> bool {\n        self.settings.data_dir.join(\"postgresql.conf\").exists()\n    }\n\n    /// Check if the `PostgreSQL` server is running\n    fn is_running(&self) -> bool {\n        let pid_file = self.settings.data_dir.join(\"postmaster.pid\");\n        pid_file.exists()\n    }\n\n    /// Set up the database by extracting the archive and initializing the database.\n    /// If the installation directory already exists, the archive will not be extracted.\n    /// If the data directory already exists, the database will not be initialized.\n    ///\n    /// # Errors\n    ///\n    /// If the installation fails, an error will be returned.\n    #[instrument(skip(self))]\n    pub async fn setup(&mut self) -> Result<()> {\n        match self.installed_dir() {\n            Some(installed_dir) => {\n                self.settings.installation_dir = installed_dir;\n            }\n            None => {\n                self.install().await?;\n            }\n        }\n        if !self.is_initialized() {\n            self.initialize().await?;\n        }\n\n        Ok(())\n    }\n\n    /// Install the PostgreSQL server from the archive. If the version minor and/or release are not set,\n    /// the latest version will be determined dynamically during the installation process. If the archive\n    /// hash does not match the expected hash, an error will be returned. If the installation directory\n    /// already exists, the archive will not be extracted. 
If the archive is not found, an error will be\n    /// returned.\n    #[instrument(skip(self))]\n    async fn install(&mut self) -> Result<()> {\n        #[cfg(feature = \"bundled\")]\n        {\n            self.settings.version = crate::settings::ARCHIVE_VERSION.clone();\n        }\n\n        debug!(\n            \"Starting installation process for version {}\",\n            self.settings.version\n        );\n\n        // If the exact version is not set, determine the latest version and update the version and\n        // installation directory accordingly. This is an optimization to avoid downloading the\n        // archive if the latest version is already installed.\n        if self.settings.version.exact_version().is_none() {\n            let version = get_version(&self.settings.releases_url, &self.settings.version).await?;\n            self.settings.version = version.exact_version_req()?;\n            self.settings.installation_dir =\n                self.settings.installation_dir.join(version.to_string());\n        }\n\n        if self.settings.installation_dir.exists() {\n            debug!(\"Installation directory already exists\");\n            return Ok(());\n        }\n\n        let url = &self.settings.releases_url;\n\n        // When the `bundled` feature is enabled, use the bundled archive instead of downloading it\n        // from the internet.\n        #[cfg(feature = \"bundled\")]\n        let bytes = {\n            debug!(\"Using bundled installation archive\");\n            crate::settings::ARCHIVE.to_vec()\n        };\n\n        #[cfg(not(feature = \"bundled\"))]\n        let bytes = {\n            let (version, bytes) = get_archive(url, &self.settings.version).await?;\n            self.settings.version = version.exact_version_req()?;\n            bytes\n        };\n\n        extract(url, &bytes, &self.settings.installation_dir).await?;\n\n        debug!(\n            \"Installed PostgreSQL version {} to {}\",\n            
self.settings.version,\n            self.settings.installation_dir.to_string_lossy()\n        );\n\n        Ok(())\n    }\n\n    /// Initialize the database in the data directory. This will create the necessary files and\n    /// directories to start the database.\n    #[instrument(skip(self))]\n    async fn initialize(&mut self) -> Result<()> {\n        if !self.settings.password_file.exists() {\n            let mut file = std::fs::File::create(&self.settings.password_file)?;\n            file.write_all(self.settings.password.as_bytes())?;\n        }\n\n        debug!(\n            \"Initializing database {}\",\n            self.settings.data_dir.to_string_lossy()\n        );\n\n        let initdb = InitDbBuilder::from(&self.settings)\n            .pgdata(&self.settings.data_dir)\n            .username(BOOTSTRAP_SUPERUSER)\n            .auth(\"password\")\n            .pwfile(&self.settings.password_file)\n            .encoding(\"UTF8\");\n\n        match self.execute_command(initdb).await {\n            Ok((_stdout, _stderr)) => {\n                debug!(\n                    \"Initialized database {}\",\n                    self.settings.data_dir.to_string_lossy()\n                );\n                Ok(())\n            }\n            Err(error) => Err(DatabaseInitializationError(error.to_string())),\n        }\n    }\n\n    /// Start the database and wait for the startup to complete.\n    /// If the port is set to `0`, the database will be started on a random port.\n    /// If `socket_dir` is configured, the server will also listen on a Unix socket.\n    ///\n    /// # Errors\n    ///\n    /// If the database fails to start, an error will be returned.\n    #[instrument(skip(self))]\n    pub async fn start(&mut self) -> Result<()> {\n        if self.settings.port == 0 {\n            let listener = TcpListener::bind((\"0.0.0.0\", 0))?;\n            self.settings.port = listener.local_addr()?.port();\n        }\n\n        // Create the socket directory if 
configured and it doesn't exist\n        #[cfg(unix)]\n        if let Some(ref socket_dir) = self.settings.socket_dir\n            && !socket_dir.exists()\n        {\n            std::fs::create_dir_all(socket_dir)?;\n        }\n\n        debug!(\n            \"Starting database {} on port {}{}\",\n            self.settings.data_dir.to_string_lossy(),\n            self.settings.port,\n            self.settings\n                .socket_dir\n                .as_ref()\n                .map_or(String::new(), |d| format!(\n                    \" with socket dir {}\",\n                    d.to_string_lossy()\n                ))\n        );\n        let start_log = self.settings.data_dir.join(\"start.log\");\n        let mut options = Vec::new();\n        options.push(format!(\"-F -p {}\", self.settings.port));\n\n        #[cfg(unix)]\n        if let Some(ref socket_dir) = self.settings.socket_dir {\n            options.push(format!(\"-k {}\", socket_dir.to_string_lossy()));\n        }\n\n        for (key, value) in &self.settings.configuration {\n            options.push(format!(\"-c {key}={value}\"));\n        }\n        let pg_ctl = PgCtlBuilder::from(&self.settings)\n            .env(PGDATABASE, \"\")\n            .mode(Start)\n            .pgdata(&self.settings.data_dir)\n            .log(start_log)\n            .options(options.as_slice())\n            .wait();\n\n        match self.execute_command(pg_ctl).await {\n            Ok((_stdout, _stderr)) => {\n                debug!(\n                    \"Started database {} on port {}{}\",\n                    self.settings.data_dir.to_string_lossy(),\n                    self.settings.port,\n                    self.settings\n                        .socket_dir\n                        .as_ref()\n                        .map_or(String::new(), |d| format!(\n                            \" with socket dir {}\",\n                            d.to_string_lossy()\n                        ))\n                );\n              
  Ok(())\n            }\n            Err(error) => Err(DatabaseStartError(error.to_string())),\n        }\n    }\n\n    /// Stop the database gracefully (smart mode) and wait for the shutdown to complete.\n    ///\n    /// # Errors\n    ///\n    /// If the database fails to stop, an error will be returned.\n    #[instrument(skip(self))]\n    pub async fn stop(&self) -> Result<()> {\n        debug!(\n            \"Stopping database {}\",\n            self.settings.data_dir.to_string_lossy()\n        );\n        let pg_ctl = PgCtlBuilder::from(&self.settings)\n            .mode(Stop)\n            .pgdata(&self.settings.data_dir)\n            .shutdown_mode(Fast)\n            .wait();\n\n        match self.execute_command(pg_ctl).await {\n            Ok((_stdout, _stderr)) => {\n                debug!(\n                    \"Stopped database {}\",\n                    self.settings.data_dir.to_string_lossy()\n                );\n                Ok(())\n            }\n            Err(error) => Err(DatabaseStopError(error.to_string())),\n        }\n    }\n\n    /// Get a connection pool to the bootstrap database.\n    async fn get_pool(&self) -> Result<PgPool> {\n        let mut settings = self.settings.clone();\n        settings.username = BOOTSTRAP_SUPERUSER.to_string();\n        let database_url = settings.url(BOOTSTRAP_DATABASE);\n        let pool = PgPool::connect(database_url.as_str()).await?;\n        Ok(pool)\n    }\n\n    /// Create a new database with the given name.\n    ///\n    /// # Errors\n    ///\n    /// If the database creation fails, an error will be returned.\n    #[instrument(skip(self))]\n    pub async fn create_database<S>(&self, database_name: S) -> Result<()>\n    where\n        S: AsRef<str> + std::fmt::Debug,\n    {\n        let database_name = database_name.as_ref();\n        debug!(\n            \"Creating database {database_name} for {host}:{port}\",\n            host = self.settings.host,\n            port = self.settings.port\n        
);\n        let pool = self.get_pool().await?;\n        sqlx::query(format!(\"CREATE DATABASE \\\"{database_name}\\\"\").as_str())\n            .execute(&pool)\n            .await\n            .map_err(|error| CreateDatabaseError(error.to_string()))?;\n        pool.close().await;\n        debug!(\n            \"Created database {database_name} for {host}:{port}\",\n            host = self.settings.host,\n            port = self.settings.port\n        );\n        Ok(())\n    }\n\n    /// Check if a database with the given name exists.\n    ///\n    /// # Errors\n    ///\n    /// If the query fails, an error will be returned.\n    #[instrument(skip(self))]\n    pub async fn database_exists<S>(&self, database_name: S) -> Result<bool>\n    where\n        S: AsRef<str> + std::fmt::Debug,\n    {\n        let database_name = database_name.as_ref();\n        debug!(\n            \"Checking if database {database_name} exists for {host}:{port}\",\n            host = self.settings.host,\n            port = self.settings.port\n        );\n        let pool = self.get_pool().await?;\n        let row = sqlx::query(\"SELECT COUNT(*) FROM pg_database WHERE datname = $1\")\n            .bind(database_name.to_string())\n            .fetch_one(&pool)\n            .await\n            .map_err(|error| DatabaseExistsError(error.to_string()))?;\n        let count: i64 = row.get(0);\n        pool.close().await;\n\n        Ok(count == 1)\n    }\n\n    /// Drop a database with the given name.\n    ///\n    /// # Errors\n    ///\n    /// If the database does not exist or if the drop command fails, an error will be returned.\n    #[instrument(skip(self))]\n    pub async fn drop_database<S>(&self, database_name: S) -> Result<()>\n    where\n        S: AsRef<str> + std::fmt::Debug,\n    {\n        let database_name = database_name.as_ref();\n        debug!(\n            \"Dropping database {database_name} for {host}:{port}\",\n            host = self.settings.host,\n            port = 
self.settings.port\n        );\n        let pool = self.get_pool().await?;\n        sqlx::query(format!(\"DROP DATABASE IF EXISTS \\\"{database_name}\\\"\").as_str())\n            .execute(&pool)\n            .await\n            .map_err(|error| DropDatabaseError(error.to_string()))?;\n        pool.close().await;\n        debug!(\n            \"Dropped database {database_name} for {host}:{port}\",\n            host = self.settings.host,\n            port = self.settings.port\n        );\n        Ok(())\n    }\n\n    #[cfg(not(feature = \"tokio\"))]\n    /// Execute a command and return the stdout and stderr as strings.\n    #[instrument(level = \"debug\", skip(self, command_builder), fields(program = ?command_builder.get_program()))]\n    async fn execute_command<B: CommandBuilder>(\n        &self,\n        command_builder: B,\n    ) -> postgresql_commands::Result<(String, String)> {\n        let mut command = command_builder.build();\n        command.execute()\n    }\n\n    #[cfg(feature = \"tokio\")]\n    /// Execute a command and return the stdout and stderr as strings.\n    #[instrument(level = \"debug\", skip(self, command_builder), fields(program = ?command_builder.get_program()))]\n    async fn execute_command<B: CommandBuilder>(\n        &self,\n        command_builder: B,\n    ) -> postgresql_commands::Result<(String, String)> {\n        let mut command = command_builder.build_tokio();\n        command.execute(self.settings.timeout).await\n    }\n}\n\n/// Default `PostgreSQL` server\nimpl Default for PostgreSQL {\n    fn default() -> Self {\n        Self::new(Settings::default())\n    }\n}\n\n/// Stop the `PostgreSQL` server and remove the data directory if it is marked as temporary.\nimpl Drop for PostgreSQL {\n    fn drop(&mut self) {\n        if self.status() == Status::Started {\n            let mut pg_ctl = PgCtlBuilder::from(&self.settings)\n                .mode(Stop)\n                .pgdata(&self.settings.data_dir)\n                
.shutdown_mode(Fast)\n                .wait()\n                .build();\n\n            let _ = pg_ctl.output();\n        }\n\n        if self.settings.temporary {\n            let _ = remove_dir_all(&self.settings.data_dir);\n            let _ = remove_file(&self.settings.password_file);\n            if let Some(ref socket_dir) = self.settings.socket_dir {\n                let _ = remove_dir_all(socket_dir);\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/src/settings.rs",
    "content": "use crate::error::{Error, Result};\nuse postgresql_archive::VersionReq;\n#[cfg(feature = \"bundled\")]\nuse postgresql_archive::{ExactVersionReq, Version};\nuse rand::RngExt;\nuse rand::distr::Alphanumeric;\nuse std::collections::HashMap;\nuse std::env;\nuse std::env::{current_dir, home_dir};\nuse std::ffi::OsString;\nuse std::path::PathBuf;\n#[cfg(feature = \"bundled\")]\nuse std::sync::LazyLock;\nuse std::time::Duration;\nuse url::Url;\n\n#[cfg(feature = \"bundled\")]\n#[expect(clippy::unwrap_used)]\npub(crate) static ARCHIVE_VERSION: LazyLock<VersionReq> = LazyLock::new(|| {\n    let version_string = include_str!(concat!(std::env!(\"OUT_DIR\"), \"/postgresql.version\"));\n    let version = Version::parse(version_string).unwrap();\n    let version_req = version.exact_version_req().unwrap();\n    tracing::debug!(\"Bundled installation archive version {version_string}\");\n    version_req\n});\n\n#[cfg(feature = \"bundled\")]\npub(crate) const ARCHIVE: &[u8] = include_bytes!(concat!(env!(\"OUT_DIR\"), \"/postgresql.tar.gz\"));\n\n/// `PostgreSQL` superuser\npub const BOOTSTRAP_SUPERUSER: &str = \"postgres\";\n/// `PostgreSQL` database\npub const BOOTSTRAP_DATABASE: &str = \"postgres\";\n\n/// Database settings\n#[derive(Clone, Debug, PartialEq)]\npub struct Settings {\n    /// URL for the releases location of the `PostgreSQL` installation archives\n    pub releases_url: String,\n    /// Version requirement of `PostgreSQL` to install\n    pub version: VersionReq,\n    /// `PostgreSQL` installation directory\n    pub installation_dir: PathBuf,\n    /// `PostgreSQL` password file\n    pub password_file: PathBuf,\n    /// `PostgreSQL` data directory\n    pub data_dir: PathBuf,\n    /// `PostgreSQL` host\n    pub host: String,\n    /// `PostgreSQL` port\n    pub port: u16,\n    /// `PostgreSQL` user name\n    pub username: String,\n    /// `PostgreSQL` password\n    pub password: String,\n    /// Temporary database\n    pub temporary: bool,\n    /// 
Command execution timeout\n    pub timeout: Option<Duration>,\n    /// Server configuration options\n    pub configuration: HashMap<String, String>,\n    /// Skip installation and inference of the installation dir. Trust what the user provided.\n    pub trust_installation_dir: bool,\n    /// Unix socket directory. When set, the server will listen on a Unix socket in this directory\n    /// in addition to (or instead of) TCP/IP. Unix-only; ignored on Windows.\n    pub socket_dir: Option<PathBuf>,\n}\n\n/// Settings implementation\nimpl Settings {\n    /// Create a new instance of [`Settings`]\n    pub fn new() -> Self {\n        let home_dir = home_dir().unwrap_or_else(|| env::current_dir().unwrap_or_default());\n        let password_file_name = \".pgpass\";\n        let password_file = if let Ok(dir) = tempfile::tempdir() {\n            dir.keep().join(password_file_name)\n        } else {\n            let current_dir = current_dir().unwrap_or(PathBuf::from(\".\"));\n            current_dir.join(password_file_name)\n        };\n        let data_dir = if let Ok(dir) = tempfile::tempdir() {\n            dir.keep()\n        } else {\n            let temp_dir: String = rand::rng()\n                .sample_iter(&Alphanumeric)\n                .take(16)\n                .map(char::from)\n                .collect();\n\n            let data_dir = current_dir().unwrap_or(PathBuf::from(\".\"));\n            data_dir.join(temp_dir)\n        };\n\n        let password = rand::rng()\n            .sample_iter(&Alphanumeric)\n            .take(16)\n            .map(char::from)\n            .collect();\n\n        #[cfg(feature = \"theseus\")]\n        let releases_url = postgresql_archive::configuration::theseus::URL.to_string();\n        #[cfg(not(feature = \"theseus\"))]\n        let releases_url = String::new();\n\n        Self {\n            releases_url,\n            version: default_version(),\n            installation_dir: 
home_dir.join(\".theseus\").join(\"postgresql\"),\n            password_file,\n            data_dir,\n            host: \"localhost\".to_string(),\n            port: 0,\n            username: BOOTSTRAP_SUPERUSER.to_string(),\n            password,\n            temporary: true,\n            timeout: Some(Duration::from_secs(5)),\n            configuration: HashMap::new(),\n            trust_installation_dir: false,\n            socket_dir: None,\n        }\n    }\n\n    /// Returns the binary directory for the configured `PostgreSQL` installation.\n    #[must_use]\n    pub fn binary_dir(&self) -> PathBuf {\n        self.installation_dir.join(\"bin\")\n    }\n\n    /// Return the `PostgreSQL` URL for the given database name.\n    ///\n    /// When `socket_dir` is set, the URL will use the Unix socket path\n    /// (e.g. `postgresql://user:pass@localhost:5432/db?host=%2Fpath%2Fto%2Fsocket`).\n    /// When `socket_dir` is `None`, a standard TCP URL is returned.\n    pub fn url<S: AsRef<str>>(&self, database_name: S) -> String {\n        match &self.socket_dir {\n            Some(socket_dir) => {\n                let socket_str = socket_dir.to_string_lossy();\n                let encoded: String =\n                    url::form_urlencoded::byte_serialize(socket_str.as_bytes()).collect();\n                format!(\n                    \"postgresql://{}:{}@{}:{}/{}?host={}\",\n                    self.username,\n                    self.password,\n                    self.host,\n                    self.port,\n                    database_name.as_ref(),\n                    encoded\n                )\n            }\n            None => {\n                format!(\n                    \"postgresql://{}:{}@{}:{}/{}\",\n                    self.username,\n                    self.password,\n                    self.host,\n                    self.port,\n                    database_name.as_ref()\n                )\n            }\n        }\n    }\n\n    /// Create a new 
instance of [`Settings`] from the given URL.\n    ///\n    /// # Errors\n    ///\n    /// Returns an error if the URL is invalid.\n    pub fn from_url<S: AsRef<str>>(url: S) -> Result<Self> {\n        let parsed_url = match Url::parse(url.as_ref()) {\n            Ok(parsed_url) => parsed_url,\n            Err(error) => {\n                return Err(Error::InvalidUrl {\n                    url: url.as_ref().to_string(),\n                    message: error.to_string(),\n                });\n            }\n        };\n        let query_parameters: HashMap<String, String> =\n            parsed_url.query_pairs().into_owned().collect();\n        let mut settings = Self::default();\n\n        if let Some(releases_url) = query_parameters.get(\"releases_url\") {\n            settings.releases_url = releases_url.to_string();\n        }\n        if let Some(version) = query_parameters.get(\"version\") {\n            settings.version = VersionReq::parse(version)?;\n        }\n        if let Some(installation_dir) = query_parameters.get(\"installation_dir\") {\n            settings.installation_dir = PathBuf::from(installation_dir);\n        }\n        if let Some(password_file) = query_parameters.get(\"password_file\") {\n            settings.password_file = PathBuf::from(password_file);\n        }\n        if let Some(data_dir) = query_parameters.get(\"data_dir\") {\n            settings.data_dir = PathBuf::from(data_dir);\n        }\n        if let Some(host) = parsed_url.host() {\n            settings.host = host.to_string();\n        }\n        if let Some(port) = parsed_url.port() {\n            settings.port = port;\n        }\n        if !parsed_url.username().is_empty() {\n            settings.username = parsed_url.username().to_string();\n        }\n        if let Some(password) = parsed_url.password() {\n            settings.password = password.to_string();\n        }\n        if let Some(temporary) = query_parameters.get(\"temporary\") {\n            
settings.temporary = temporary == \"true\";\n        }\n        if let Some(timeout) = query_parameters.get(\"timeout\") {\n            settings.timeout = match timeout.parse::<u64>() {\n                Ok(timeout) => Some(Duration::from_secs(timeout)),\n                Err(error) => {\n                    return Err(Error::InvalidUrl {\n                        url: url.as_ref().to_string(),\n                        message: error.to_string(),\n                    });\n                }\n            };\n        }\n        if let Some(trust_installation_dir) = query_parameters.get(\"trust_installation_dir\") {\n            settings.trust_installation_dir = trust_installation_dir == \"true\";\n        }\n        if let Some(socket_dir) = query_parameters.get(\"socket_dir\") {\n            settings.socket_dir = Some(PathBuf::from(socket_dir));\n        }\n        let configuration_prefix = \"configuration.\";\n        for (key, value) in &query_parameters {\n            if key.starts_with(configuration_prefix)\n                && let Some(configuration_key) = key.strip_prefix(configuration_prefix)\n            {\n                settings\n                    .configuration\n                    .insert(configuration_key.to_string(), value.to_string());\n            }\n        }\n\n        Ok(settings)\n    }\n}\n\n/// Implement the [`Settings`] trait for [`Settings`]\nimpl postgresql_commands::Settings for Settings {\n    fn get_binary_dir(&self) -> PathBuf {\n        self.binary_dir().clone()\n    }\n\n    fn get_host(&self) -> OsString {\n        self.host.parse().expect(\"host\")\n    }\n\n    fn get_port(&self) -> u16 {\n        self.port\n    }\n\n    fn get_username(&self) -> OsString {\n        self.username.parse().expect(\"username\")\n    }\n\n    fn get_password(&self) -> OsString {\n        self.password.parse().expect(\"password\")\n    }\n\n    fn get_socket_dir(&self) -> Option<PathBuf> {\n        self.socket_dir.clone()\n    }\n}\n\n/// Default 
implementation for [`Settings`]\nimpl Default for Settings {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\n/// Builder for constructing [`Settings`] with a fluent API.\n///\n/// # Examples\n///\n/// ```no_run\n/// use postgresql_embedded::SettingsBuilder;\n///\n/// let settings = SettingsBuilder::new()\n///     .host(\"127.0.0.1\")\n///     .port(5433)\n///     .username(\"admin\")\n///     .password(\"secret\")\n///     .temporary(false)\n///     .build();\n/// ```\n///\n/// To configure a Unix socket:\n///\n/// ```no_run\n/// use postgresql_embedded::SettingsBuilder;\n/// use std::path::PathBuf;\n///\n/// let settings = SettingsBuilder::new()\n///     .socket_dir(PathBuf::from(\"/tmp/pg_socket\"))\n///     .build();\n/// ```\n#[derive(Clone, Debug)]\npub struct SettingsBuilder {\n    settings: Settings,\n}\n\nimpl SettingsBuilder {\n    /// Create a new [`SettingsBuilder`] starting from the default [`Settings`].\n    #[must_use]\n    pub fn new() -> Self {\n        Self {\n            settings: Settings::new(),\n        }\n    }\n\n    /// Set the releases URL for downloading PostgreSQL archives.\n    #[must_use]\n    pub fn releases_url<S: Into<String>>(mut self, releases_url: S) -> Self {\n        self.settings.releases_url = releases_url.into();\n        self\n    }\n\n    /// Set the PostgreSQL version requirement.\n    #[must_use]\n    pub fn version(mut self, version: VersionReq) -> Self {\n        self.settings.version = version;\n        self\n    }\n\n    /// Set the installation directory.\n    #[must_use]\n    pub fn installation_dir<P: Into<PathBuf>>(mut self, dir: P) -> Self {\n        self.settings.installation_dir = dir.into();\n        self\n    }\n\n    /// Set the password file path.\n    #[must_use]\n    pub fn password_file<P: Into<PathBuf>>(mut self, path: P) -> Self {\n        self.settings.password_file = path.into();\n        self\n    }\n\n    /// Set the data directory.\n    #[must_use]\n    pub fn data_dir<P: 
Into<PathBuf>>(mut self, dir: P) -> Self {\n        self.settings.data_dir = dir.into();\n        self\n    }\n\n    /// Set the host name or IP address.\n    #[must_use]\n    pub fn host<S: Into<String>>(mut self, host: S) -> Self {\n        self.settings.host = host.into();\n        self\n    }\n\n    /// Set the TCP port number.\n    #[must_use]\n    pub fn port(mut self, port: u16) -> Self {\n        self.settings.port = port;\n        self\n    }\n\n    /// Set the database username.\n    #[must_use]\n    pub fn username<S: Into<String>>(mut self, username: S) -> Self {\n        self.settings.username = username.into();\n        self\n    }\n\n    /// Set the database password.\n    #[must_use]\n    pub fn password<S: Into<String>>(mut self, password: S) -> Self {\n        self.settings.password = password.into();\n        self\n    }\n\n    /// Set whether the database is temporary (cleaned up on drop).\n    #[must_use]\n    pub fn temporary(mut self, temporary: bool) -> Self {\n        self.settings.temporary = temporary;\n        self\n    }\n\n    /// Set the command execution timeout.\n    #[must_use]\n    pub fn timeout(mut self, timeout: Option<Duration>) -> Self {\n        self.settings.timeout = timeout;\n        self\n    }\n\n    /// Set server configuration options.\n    #[must_use]\n    pub fn configuration(mut self, configuration: HashMap<String, String>) -> Self {\n        self.settings.configuration = configuration;\n        self\n    }\n\n    /// Add a single server configuration option.\n    #[must_use]\n    pub fn config<K: Into<String>, V: Into<String>>(mut self, key: K, value: V) -> Self {\n        self.settings.configuration.insert(key.into(), value.into());\n        self\n    }\n\n    /// Set whether to trust the installation directory as-is.\n    #[must_use]\n    pub fn trust_installation_dir(mut self, trust: bool) -> Self {\n        self.settings.trust_installation_dir = trust;\n        self\n    }\n\n    /// Set the Unix socket 
directory. When set, the server will listen on a Unix socket in this directory. This is only\n    /// supported on Unix platforms.\n    #[must_use]\n    pub fn socket_dir<P: Into<PathBuf>>(mut self, dir: P) -> Self {\n        self.settings.socket_dir = Some(dir.into());\n        self\n    }\n\n    /// Consume the builder and return the configured [`Settings`].\n    #[must_use]\n    pub fn build(self) -> Settings {\n        self.settings\n    }\n}\n\nimpl Default for SettingsBuilder {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\n/// Get the default version used if not otherwise specified\n#[must_use]\nfn default_version() -> VersionReq {\n    #[cfg(feature = \"bundled\")]\n    {\n        ARCHIVE_VERSION.clone()\n    }\n\n    #[cfg(not(feature = \"bundled\"))]\n    {\n        VersionReq::STAR\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use test_log::test;\n\n    #[test]\n    #[cfg(feature = \"bundled\")]\n    fn test_archive_version() {\n        assert!(!super::ARCHIVE_VERSION.to_string().is_empty());\n    }\n\n    #[test]\n    fn test_settings_new() {\n        let settings = Settings::new();\n        assert!(\n            !settings\n                .installation_dir\n                .to_str()\n                .unwrap_or_default()\n                .is_empty()\n        );\n        assert!(settings.password_file.ends_with(\".pgpass\"));\n        assert!(!settings.data_dir.to_str().unwrap_or_default().is_empty());\n        assert_eq!(0, settings.port);\n        assert_eq!(BOOTSTRAP_SUPERUSER, settings.username);\n        assert!(!settings.password.is_empty());\n        assert_ne!(\"password\", settings.password);\n        assert!(settings.binary_dir().ends_with(\"bin\"));\n        assert_eq!(\n            \"postgresql://postgres:password@localhost:0/test\",\n            settings\n                .url(\"test\")\n                .replace(settings.password.as_str(), \"password\")\n        );\n        
assert_eq!(Some(Duration::from_secs(5)), settings.timeout);\n        assert!(settings.configuration.is_empty());\n        assert!(settings.socket_dir.is_none());\n    }\n\n    #[test]\n    fn test_settings_url_with_socket_dir() {\n        let mut settings = Settings::new();\n        settings.username = \"user\".to_string();\n        settings.password = \"pass\".to_string();\n        settings.host = \"localhost\".to_string();\n        settings.port = 5432;\n        settings.socket_dir = Some(PathBuf::from(\"/tmp/pg_socket\"));\n\n        assert_eq!(\n            \"postgresql://user:pass@localhost:5432/test?host=%2Ftmp%2Fpg_socket\",\n            settings.url(\"test\")\n        );\n    }\n\n    #[test]\n    fn test_settings_from_url() -> Result<()> {\n        let base_url = \"postgresql://postgres:password@localhost:5432/test\";\n        let releases_url = \"releases_url=https%3A%2F%2Fgithub.com\";\n        let version = \"version=%3D16.4.0\";\n        let installation_dir = \"installation_dir=/tmp/postgresql\";\n        let password_file = \"password_file=/tmp/.pgpass\";\n        let data_dir = \"data_dir=/tmp/data\";\n        let temporary = \"temporary=false\";\n        let trust_installation_dir = \"trust_installation_dir=true\";\n        let timeout = \"timeout=10\";\n        let configuration = \"configuration.max_connections=42\";\n        let url = format!(\n            \"{base_url}?{releases_url}&{version}&{installation_dir}&{password_file}&{data_dir}&{temporary}&{trust_installation_dir}&{timeout}&{configuration}\"\n        );\n\n        let settings = Settings::from_url(url)?;\n\n        assert_eq!(\"https://github.com\", settings.releases_url);\n        assert_eq!(VersionReq::parse(\"=16.4.0\")?, settings.version);\n        assert_eq!(PathBuf::from(\"/tmp/postgresql\"), settings.installation_dir);\n        assert_eq!(PathBuf::from(\"/tmp/.pgpass\"), settings.password_file);\n        assert_eq!(PathBuf::from(\"/tmp/data\"), settings.data_dir);\n        
assert_eq!(\"localhost\", settings.host);\n        assert_eq!(5432, settings.port);\n        assert_eq!(BOOTSTRAP_SUPERUSER, settings.username);\n        assert_eq!(\"password\", settings.password);\n        assert!(!settings.temporary);\n        assert!(settings.trust_installation_dir);\n        assert_eq!(Some(Duration::from_secs(10)), settings.timeout);\n        let configuration = HashMap::from([(\"max_connections\".to_string(), \"42\".to_string())]);\n        assert_eq!(configuration, settings.configuration);\n        assert!(settings.socket_dir.is_none());\n        assert_eq!(base_url, settings.url(\"test\"));\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_settings_from_url_with_socket_dir() -> Result<()> {\n        let url =\n            \"postgresql://postgres:password@localhost:5432/test?socket_dir=%2Ftmp%2Fpg_socket\";\n        let settings = Settings::from_url(url)?;\n\n        assert_eq!(Some(PathBuf::from(\"/tmp/pg_socket\")), settings.socket_dir);\n        assert_eq!(\"localhost\", settings.host);\n        assert_eq!(5432, settings.port);\n        assert_eq!(\n            \"postgresql://postgres:password@localhost:5432/test?host=%2Ftmp%2Fpg_socket\",\n            settings.url(\"test\")\n        );\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_settings_from_url_invalid_url() {\n        assert!(Settings::from_url(\"^`~\").is_err());\n    }\n\n    #[test]\n    fn test_settings_from_url_invalid_version() {\n        assert!(Settings::from_url(\"postgresql://?version=foo\").is_err());\n    }\n\n    #[test]\n    fn test_settings_from_url_invalid_timeout() {\n        assert!(Settings::from_url(\"postgresql://?timeout=foo\").is_err());\n    }\n\n    #[test]\n    fn test_settings_builder_defaults() {\n        let settings = SettingsBuilder::new().build();\n        assert_eq!(\"localhost\", settings.host);\n        assert_eq!(0, settings.port);\n        assert_eq!(BOOTSTRAP_SUPERUSER, settings.username);\n        assert!(settings.temporary);\n       
 assert!(settings.socket_dir.is_none());\n        assert_eq!(Some(Duration::from_secs(5)), settings.timeout);\n    }\n\n    #[test]\n    fn test_settings_builder_all_fields() {\n        let configuration = HashMap::from([(\"max_connections\".to_string(), \"100\".to_string())]);\n        let settings = SettingsBuilder::new()\n            .releases_url(\"https://example.com\")\n            .version(VersionReq::STAR)\n            .installation_dir(\"/tmp/install\")\n            .password_file(\"/tmp/.pgpass\")\n            .data_dir(\"/tmp/data\")\n            .host(\"127.0.0.1\")\n            .port(5433)\n            .username(\"admin\")\n            .password(\"secret\")\n            .temporary(false)\n            .timeout(Some(Duration::from_secs(30)))\n            .configuration(configuration.clone())\n            .trust_installation_dir(true)\n            .socket_dir(PathBuf::from(\"/tmp/pg_socket\"))\n            .build();\n\n        assert_eq!(\"https://example.com\", settings.releases_url);\n        assert_eq!(PathBuf::from(\"/tmp/install\"), settings.installation_dir);\n        assert_eq!(PathBuf::from(\"/tmp/.pgpass\"), settings.password_file);\n        assert_eq!(PathBuf::from(\"/tmp/data\"), settings.data_dir);\n        assert_eq!(\"127.0.0.1\", settings.host);\n        assert_eq!(5433, settings.port);\n        assert_eq!(\"admin\", settings.username);\n        assert_eq!(\"secret\", settings.password);\n        assert!(!settings.temporary);\n        assert_eq!(Some(Duration::from_secs(30)), settings.timeout);\n        assert_eq!(configuration, settings.configuration);\n        assert!(settings.trust_installation_dir);\n        assert_eq!(Some(PathBuf::from(\"/tmp/pg_socket\")), settings.socket_dir);\n    }\n\n    #[test]\n    fn test_settings_builder_config_method() {\n        let settings = SettingsBuilder::new()\n            .config(\"max_connections\", \"42\")\n            .config(\"shared_buffers\", \"128MB\")\n            .build();\n\n        
assert_eq!(\n            Some(&\"42\".to_string()),\n            settings.configuration.get(\"max_connections\")\n        );\n        assert_eq!(\n            Some(&\"128MB\".to_string()),\n            settings.configuration.get(\"shared_buffers\")\n        );\n    }\n\n    #[test]\n    fn test_settings_builder_socket_dir() {\n        let settings = SettingsBuilder::new()\n            .socket_dir(PathBuf::from(\"/tmp/pg_socket\"))\n            .build();\n\n        assert_eq!(Some(PathBuf::from(\"/tmp/pg_socket\")), settings.socket_dir);\n    }\n\n    #[test]\n    fn test_settings_builder_default() {\n        let builder = SettingsBuilder::default();\n        let settings = builder.build();\n        assert_eq!(\"localhost\", settings.host);\n        assert_eq!(0, settings.port);\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/blocking.rs",
    "content": "#[cfg(feature = \"blocking\")]\nuse postgresql_embedded::blocking::PostgreSQL;\n#[cfg(feature = \"blocking\")]\nuse postgresql_embedded::{Result, Status};\n#[cfg(feature = \"blocking\")]\nuse test_log::test;\n\n#[cfg(feature = \"blocking\")]\n#[test]\nfn test_embedded_blocking_lifecycle() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    let settings = postgresql.settings();\n\n    // Verify that an ephemeral instance is created by default\n    assert_eq!(0, settings.port);\n    assert!(settings.temporary);\n\n    let initial_statuses = [Status::NotInstalled, Status::Installed, Status::Stopped];\n    assert!(initial_statuses.contains(&postgresql.status()));\n\n    postgresql.setup()?;\n    assert_eq!(Status::Stopped, postgresql.status());\n\n    postgresql.start()?;\n    assert_eq!(Status::Started, postgresql.status());\n\n    let database_name = \"test\";\n    assert!(!postgresql.database_exists(database_name)?);\n    postgresql.create_database(database_name)?;\n    assert!(postgresql.database_exists(database_name)?);\n    postgresql.drop_database(database_name)?;\n\n    postgresql.stop()?;\n    assert_eq!(Status::Stopped, postgresql.status());\n\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/dump_command.rs",
    "content": "use postgresql_commands::pg_dump::PgDumpBuilder;\nuse postgresql_commands::psql::PsqlBuilder;\nuse postgresql_commands::{CommandBuilder, CommandExecutor};\nuse postgresql_embedded::PostgreSQL;\nuse std::fs;\nuse tempfile::NamedTempFile;\nuse test_log::test;\n\n#[test(tokio::test)]\nasync fn dump_command() -> anyhow::Result<()> {\n    let mut postgresql = PostgreSQL::default();\n\n    postgresql.setup().await?;\n    postgresql.start().await?;\n    let settings = postgresql.settings();\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n\n    let mut psql = PsqlBuilder::from(settings)\n        .command(\"CREATE TABLE person42 (id INTEGER, name VARCHAR(20))\")\n        .dbname(database_name)\n        .no_psqlrc()\n        .no_align()\n        .tuples_only()\n        .build();\n    let (_stdout, _stderr) = psql.execute()?;\n\n    let temp_file = NamedTempFile::new()?;\n    let file = temp_file.as_ref();\n    let mut pgdump = PgDumpBuilder::from(settings)\n        .dbname(database_name)\n        .schema_only()\n        .file(file.to_string_lossy().to_string())\n        .build();\n    let (_stdout, _stderr) = pgdump.execute()?;\n\n    let contents = fs::read_to_string(file)?;\n    assert!(contents.contains(\"person42\"));\n\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/environment_variables.rs",
    "content": "use postgresql_embedded::{PostgreSQL, Status};\nuse std::env;\nuse test_log::test;\n\n#[test(tokio::test)]\nasync fn lifecycle() -> anyhow::Result<()> {\n    // Explicitly set PGDATABASE environment variable to verify that the library behavior\n    // is not affected by the environment\n    unsafe {\n        env::set_var(\"PGDATABASE\", \"foodb\");\n    }\n\n    let mut postgresql = PostgreSQL::default();\n\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    assert!(!postgresql.database_exists(database_name).await?);\n    postgresql.create_database(database_name).await?;\n    assert!(postgresql.database_exists(database_name).await?);\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await?;\n    assert_eq!(Status::Stopped, postgresql.status());\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/postgresql.rs",
    "content": "use postgresql_commands::CommandBuilder;\nuse postgresql_commands::psql::PsqlBuilder;\nuse postgresql_embedded::{PostgreSQL, Result, Settings, Status};\nuse std::fs::{remove_dir_all, remove_file};\nuse test_log::test;\n\nasync fn lifecycle() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    let settings = postgresql.settings();\n\n    // Verify that an ephemeral instance is created by default\n    assert_eq!(0, settings.port);\n    assert!(settings.temporary);\n\n    let initial_statuses = [Status::NotInstalled, Status::Installed, Status::Stopped];\n    assert!(initial_statuses.contains(&postgresql.status()));\n\n    postgresql.setup().await?;\n    assert_eq!(Status::Stopped, postgresql.status());\n\n    postgresql.start().await?;\n    assert_eq!(Status::Started, postgresql.status());\n\n    let database_name = \"test\";\n    assert!(!postgresql.database_exists(database_name).await?);\n    postgresql.create_database(database_name).await?;\n    assert!(postgresql.database_exists(database_name).await?);\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await?;\n    assert_eq!(Status::Stopped, postgresql.status());\n\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_embedded_async_lifecycle() -> Result<()> {\n    lifecycle().await\n}\n\n#[test(tokio::test)]\nasync fn test_temporary_database() -> Result<()> {\n    let settings = Settings::default();\n    let data_dir = settings.data_dir.clone();\n    let password_file = settings.password_file.clone();\n\n    assert!(settings.temporary);\n\n    {\n        let mut postgresql = PostgreSQL::new(settings);\n        postgresql.setup().await?;\n        postgresql.start().await?;\n        assert!(data_dir.exists());\n        assert!(password_file.exists());\n    }\n\n    // Verify that the data directory and password file are removed automatically when PostgreSQL is dropped\n    assert!(!data_dir.exists());\n    assert!(!password_file.exists());\n    
Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_persistent_database() -> Result<()> {\n    let mut settings = Settings::default();\n    let data_dir = settings.data_dir.clone();\n    let password_file = settings.password_file.clone();\n\n    settings.temporary = false;\n\n    {\n        let mut postgresql = PostgreSQL::new(settings);\n        postgresql.setup().await?;\n        postgresql.start().await?;\n        assert!(data_dir.exists());\n        assert!(password_file.exists());\n    }\n\n    // Verify that the data directory and password file are retained when PostgreSQL is dropped\n    assert!(data_dir.exists());\n    assert!(password_file.exists());\n\n    let _ = remove_dir_all(&data_dir);\n    let _ = remove_file(&password_file);\n\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_persistent_database_reuse() -> Result<()> {\n    let database_name = \"test\";\n    let mut settings = Settings::default();\n    let data_dir = settings.data_dir.clone();\n    let password = settings.password.clone();\n    let password_file = settings.password_file.clone();\n\n    settings.temporary = false;\n\n    {\n        let mut postgresql = PostgreSQL::new(settings);\n        postgresql.setup().await?;\n        postgresql.start().await?;\n        postgresql.create_database(database_name).await?;\n        assert!(postgresql.database_exists(database_name).await?);\n        postgresql.stop().await?;\n    }\n\n    // Verify that the data directory and password file are retained when PostgreSQL is dropped\n    assert!(data_dir.exists());\n    assert!(password_file.exists());\n\n    let settings = Settings {\n        data_dir: data_dir.clone(),\n        password: password.clone(),\n        password_file: password_file.clone(),\n        temporary: false,\n        ..Default::default()\n    };\n\n    {\n        let mut postgresql = PostgreSQL::new(settings);\n        postgresql.setup().await?;\n        postgresql.start().await?;\n        
assert!(postgresql.database_exists(database_name).await?);\n        postgresql.stop().await?;\n    }\n\n    let _ = remove_dir_all(&data_dir);\n    let _ = remove_file(&password_file);\n\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn postgres_concurrency() -> Result<()> {\n    let handle1 = tokio::spawn(lifecycle());\n    let handle2 = tokio::spawn(lifecycle());\n    let handle3 = tokio::spawn(lifecycle());\n    match tokio::try_join!(handle1, handle2, handle3) {\n        Ok(_) => {}\n        Err(error) => {\n            assert_eq!(\"\", error.to_string());\n        }\n    }\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_authentication_success() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let mut psql = PsqlBuilder::from(postgresql.settings())\n        .command(\"SELECT 1\")\n        .no_psqlrc()\n        .tuples_only()\n        .build();\n\n    let output = psql.output()?;\n    assert!(output.status.success());\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_authentication_invalid_username() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let mut psql = PsqlBuilder::from(postgresql.settings())\n        .command(\"SELECT 1\")\n        .username(\"invalid\")\n        .no_psqlrc()\n        .tuples_only()\n        .build();\n\n    let output = psql.output()?;\n    assert!(!output.status.success());\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_authentication_invalid_password() -> Result<()> {\n    let mut postgresql = PostgreSQL::default();\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let mut psql = PsqlBuilder::from(postgresql.settings())\n        .command(\"SELECT 1\")\n        .pg_password(\"invalid\")\n        .no_psqlrc()\n        .tuples_only()\n        .build();\n\n    let output = psql.output()?;\n    
assert!(!output.status.success());\n    Ok(())\n}\n\n#[test(tokio::test)]\nasync fn test_username_setting() -> Result<()> {\n    let settings = Settings {\n        username: \"admin\".to_string(),\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    postgresql.setup().await?;\n    postgresql.start().await?;\n\n    let database_name = \"test\";\n    postgresql.create_database(database_name).await?;\n    let database_exists = postgresql.database_exists(database_name).await?;\n    assert!(database_exists);\n    postgresql.drop_database(database_name).await?;\n    let database_exists = postgresql.database_exists(database_name).await?;\n    assert!(!database_exists);\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/start_config.rs",
    "content": "use postgresql_embedded::{BOOTSTRAP_DATABASE, PostgreSQL, Settings};\nuse sqlx::{PgPool, Row};\nuse std::collections::HashMap;\nuse test_log::test;\n\n#[test(tokio::test)]\nasync fn start_config() -> anyhow::Result<()> {\n    let configuration = HashMap::from([(\"max_connections\".to_string(), \"42\".to_string())]);\n    let settings = Settings {\n        configuration,\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n\n    postgresql.setup().await?;\n    postgresql.start().await?;\n    let settings = postgresql.settings();\n    let database_url = settings.url(BOOTSTRAP_DATABASE);\n    let pool = PgPool::connect(database_url.as_str()).await?;\n    let row = sqlx::query(\"SELECT setting FROM pg_settings WHERE name = $1\")\n        .bind(\"max_connections\".to_string())\n        .fetch_one(&pool)\n        .await?;\n    let max_connections: String = row.get(0);\n    pool.close().await;\n\n    assert_eq!(\"42\".to_string(), max_connections);\n\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/unix_socket.rs",
    "content": "#[cfg(unix)]\nmod unix_socket_tests {\n    use postgresql_embedded::{PostgreSQL, Result, SettingsBuilder, Status};\n    use sqlx::{PgPool, Row};\n    use std::path::PathBuf;\n    use test_log::test;\n\n    #[test(tokio::test)]\n    async fn test_unix_socket_lifecycle() -> Result<()> {\n        let socket_dir = tempfile::tempdir().expect(\"failed to create temp dir for socket\");\n        let socket_path = socket_dir.path().to_path_buf();\n\n        let settings = SettingsBuilder::new()\n            .socket_dir(socket_path.clone())\n            .build();\n\n        let mut postgresql = PostgreSQL::new(settings);\n\n        postgresql.setup().await?;\n        postgresql.start().await?;\n\n        assert_eq!(Status::Started, postgresql.status());\n\n        // Verify the socket file exists (PostgreSQL creates .s.PGSQL.<port> in the socket dir)\n        let port = postgresql.settings().port;\n        let socket_file = socket_path.join(format!(\".s.PGSQL.{port}\"));\n        assert!(\n            socket_file.exists(),\n            \"Expected socket file at {socket_file:?}\"\n        );\n\n        let database_name = \"test\";\n        assert!(!postgresql.database_exists(database_name).await?);\n        postgresql.create_database(database_name).await?;\n        assert!(postgresql.database_exists(database_name).await?);\n        postgresql.drop_database(database_name).await?;\n        assert!(!postgresql.database_exists(database_name).await?);\n\n        postgresql.stop().await?;\n        assert_eq!(Status::Stopped, postgresql.status());\n\n        Ok(())\n    }\n\n    #[test(tokio::test)]\n    async fn test_unix_socket_with_builder() -> Result<()> {\n        let socket_dir = tempfile::tempdir().expect(\"failed to create temp dir for socket\");\n        let socket_path = socket_dir.path().to_path_buf();\n\n        let settings = SettingsBuilder::new()\n            .socket_dir(socket_path.clone())\n            .config(\"max_connections\", \"50\")\n          
  .build();\n\n        assert_eq!(Some(socket_path), settings.socket_dir);\n        assert_eq!(\n            Some(&\"50\".to_string()),\n            settings.configuration.get(\"max_connections\")\n        );\n\n        let mut postgresql = PostgreSQL::new(settings);\n        postgresql.setup().await?;\n        postgresql.start().await?;\n\n        let database_name = \"builder_test\";\n        postgresql.create_database(database_name).await?;\n        assert!(postgresql.database_exists(database_name).await?);\n        postgresql.drop_database(database_name).await?;\n\n        postgresql.stop().await?;\n        Ok(())\n    }\n\n    #[test(tokio::test)]\n    async fn test_unix_socket_temporary_cleanup() -> Result<()> {\n        let socket_dir = tempfile::tempdir().expect(\"failed to create temp dir for socket\");\n        let socket_path = socket_dir.keep();\n\n        let settings = SettingsBuilder::new()\n            .socket_dir(socket_path.clone())\n            .temporary(true)\n            .build();\n        let data_dir = settings.data_dir.clone();\n        let password_file = settings.password_file.clone();\n\n        {\n            let mut postgresql = PostgreSQL::new(settings);\n            postgresql.setup().await?;\n            postgresql.start().await?;\n            assert!(socket_path.exists());\n        }\n\n        // Verify that socket dir, data dir, and password file are cleaned up\n        assert!(!data_dir.exists());\n        assert!(!password_file.exists());\n        assert!(!socket_path.exists());\n        Ok(())\n    }\n\n    #[test]\n    fn test_unix_socket_url_format() {\n        let settings = SettingsBuilder::new()\n            .host(\"localhost\")\n            .port(5432)\n            .username(\"user\")\n            .password(\"pass\")\n            .socket_dir(PathBuf::from(\"/tmp/pg_socket\"))\n            .build();\n\n        assert_eq!(\n            \"postgresql://user:pass@localhost:5432/test?host=%2Ftmp%2Fpg_socket\",\n            
settings.url(\"test\")\n        );\n    }\n\n    #[test(tokio::test)]\n    async fn test_connection_type_tcp_vs_unix_socket() -> Result<()> {\n        let socket_dir = tempfile::tempdir().expect(\"failed to create temp dir for socket\");\n        let socket_path = socket_dir.path().to_path_buf();\n\n        let settings = SettingsBuilder::new()\n            .socket_dir(socket_path.clone())\n            .build();\n\n        let mut postgresql = PostgreSQL::new(settings);\n        postgresql.setup().await?;\n        postgresql.start().await?;\n\n        let database_name = \"conn_type_test\";\n        postgresql.create_database(database_name).await?;\n\n        let settings = postgresql.settings();\n\n        // Connect via TCP (construct URL without socket_dir query parameter)\n        let tcp_url = format!(\n            \"postgresql://{}:{}@{}:{}/{}\",\n            settings.username, settings.password, settings.host, settings.port, database_name\n        );\n        let tcp_pool = PgPool::connect(tcp_url.as_str()).await.unwrap();\n        let tcp_row = sqlx::query(\n            \"SELECT client_addr::TEXT, client_port \\\n             FROM pg_stat_activity \\\n             WHERE pid = pg_backend_pid()\",\n        )\n        .fetch_one(&tcp_pool)\n        .await\n        .unwrap();\n        let tcp_client_addr: Option<String> = tcp_row.get(\"client_addr\");\n        let tcp_client_port: Option<i32> = tcp_row.get(\"client_port\");\n        tcp_pool.close().await;\n\n        // TCP connections have a non-null client_addr and a positive client_port\n        assert!(\n            tcp_client_addr.is_some(),\n            \"TCP connection should have a client_addr, got None\"\n        );\n        assert!(\n            tcp_client_port.is_some_and(|p| p > 0),\n            \"TCP connection should have a positive client_port, got {tcp_client_port:?}\"\n        );\n\n        // Connect via Unix socket (URL includes ?host=<encoded_socket_dir>)\n        let socket_url = 
settings.url(database_name);\n        let socket_pool = PgPool::connect(socket_url.as_str()).await.unwrap();\n        let socket_row = sqlx::query(\n            \"SELECT client_addr::TEXT, client_port \\\n             FROM pg_stat_activity \\\n             WHERE pid = pg_backend_pid()\",\n        )\n        .fetch_one(&socket_pool)\n        .await?;\n        let socket_client_addr: Option<String> = socket_row.get(\"client_addr\");\n        let socket_client_port: Option<i32> = socket_row.get(\"client_port\");\n        socket_pool.close().await;\n\n        // Unix socket connections have null client_addr and client_port of -1\n        assert!(\n            socket_client_addr.is_none(),\n            \"Unix socket connection should have null client_addr, got {socket_client_addr:?}\"\n        );\n        assert_eq!(\n            socket_client_port,\n            Some(-1),\n            \"Unix socket connection should have client_port of -1, got {socket_client_port:?}\"\n        );\n\n        postgresql.stop().await?;\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_embedded/tests/zonky.rs",
    "content": "#[cfg(feature = \"zonky\")]\nuse postgresql_archive::configuration::zonky;\n#[cfg(feature = \"zonky\")]\nuse postgresql_embedded::{PostgreSQL, Result, Settings, Status};\n\n#[tokio::test]\n#[cfg(feature = \"zonky\")]\nasync fn test_zonky() -> Result<()> {\n    let settings = Settings {\n        releases_url: zonky::URL.to_string(),\n        ..Default::default()\n    };\n    let mut postgresql = PostgreSQL::new(settings);\n    let settings = postgresql.settings();\n\n    // Verify that an ephemeral instance is created by default\n    assert_eq!(0, settings.port);\n    assert!(settings.temporary);\n\n    let initial_statuses = [Status::NotInstalled, Status::Installed, Status::Stopped];\n    assert!(initial_statuses.contains(&postgresql.status()));\n\n    postgresql.setup().await?;\n    assert_eq!(Status::Stopped, postgresql.status());\n\n    postgresql.start().await?;\n    assert_eq!(Status::Started, postgresql.status());\n\n    let database_name = \"test\";\n    assert!(!postgresql.database_exists(database_name).await?);\n    postgresql.create_database(database_name).await?;\n    assert!(postgresql.database_exists(database_name).await?);\n    postgresql.drop_database(database_name).await?;\n\n    postgresql.stop().await?;\n    assert_eq!(Status::Stopped, postgresql.status());\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_extensions/Cargo.toml",
    "content": "[package]\nauthors.workspace = true\ncategories.workspace = true\ndescription = \"A library for managing PostgreSQL extensions\"\nedition.workspace = true\nkeywords.workspace = true\nlicense.workspace = true\nname = \"postgresql_extensions\"\nrepository = \"https://github.com/theseus-rs/postgresql-embedded\"\nrust-version.workspace = true\nversion.workspace = true\n\n[dependencies]\nasync-trait = { workspace = true }\npostgresql_archive = { path = \"../postgresql_archive\", version = \"0.20.2\", default-features = false }\npostgresql_commands = { path = \"../postgresql_commands\", version = \"0.20.2\", default-features = false }\nregex-lite = { workspace = true }\nreqwest = { workspace = true, default-features = false, features = [\"json\"] }\nsemver = { workspace = true, features = [\"serde\"] }\nserde = { workspace = true, features = [\"derive\"] }\nserde_json = { workspace = true, optional = true }\ntarget-triple = { workspace = true, optional = true }\ntempfile = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"full\"], optional = true }\ntracing = { workspace = true, features = [\"log\"] }\nurl = { workspace = true }\n\n[dev-dependencies]\nanyhow = { workspace = true }\npostgresql_embedded = { path = \"../postgresql_embedded\", version = \"0.20.2\" }\ntest-log = { workspace = true }\ntokio = { workspace = true, features = [\"full\"] }\n\n[features]\ndefault = [\n    \"native-tls\",\n    \"portal-corp\",\n    \"steampipe\",\n    \"tensor-chord\",\n]\nblocking = [\"tokio\"]\nportal-corp = [\n    \"dep:target-triple\",\n    \"postgresql_archive/github\",\n    \"postgresql_archive/zip\",\n]\nsteampipe = [\n    \"dep:serde_json\",\n    \"postgresql_archive/github\",\n    \"postgresql_archive/tar-gz\",\n]\ntensor-chord = [\n    \"dep:target-triple\",\n    \"postgresql_archive/github\",\n    \"postgresql_archive/zip\",\n]\ntokio = [\n    \"postgresql_commands/tokio\",\n    \"dep:tokio\"\n]\nnative-tls = 
[\n    \"postgresql_archive/native-tls\",\n    \"reqwest/native-tls\",\n]\nrustls = [\n    \"postgresql_archive/rustls\",\n    \"reqwest/rustls\",\n]\n\n[package.metadata.cargo-machete]\nignored = [\"reqwest\"]\n"
  },
  {
    "path": "postgresql_extensions/README.md",
    "content": "# PostgreSQL Extensions\n\n[![ci](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml)\n[![Documentation](https://docs.rs/postgresql_extensions/badge.svg)](https://docs.rs/postgresql_extensions)\n[![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n[![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n[![Latest version](https://img.shields.io/crates/v/postgresql_extensions.svg)](https://crates.io/crates/postgresql_extensions)\n[![License](https://img.shields.io/crates/l/postgresql_extensions?)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_extensions#license)\n[![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n\nA configurable library for managing PostgreSQL extensions.\n\n## Examples\n\n### Asynchronous API\n\n```rust\nuse postgresql_extensions::{get_available_extensions, Result};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    let extensions = get_available_extensions().await?;\n    Ok(())\n}\n```\n\n### Synchronous API\n\n```rust\nuse postgresql_extensions::Result;\nuse postgresql_extensions::blocking::get_available_extensions;\n\nasync fn main() -> Result<()> {\n    let extensions = get_available_extensions().await?;\n    Ok(())\n}\n```\n\n## Feature flags\n\npostgresql_extensions uses [feature flags] to address compile time and binary size\nuses.\n\nThe following features are available:\n\n| Name         | Description                | Default? 
|\n|--------------|----------------------------|----------|\n| `blocking`   | Enables the blocking API   | No       |\n| `native-tls` | Enables native-tls support | Yes      |\n| `rustls`     | Enables rustls support     | No       |\n\n### Repositories\n\n| Name           | Description                               | Default? |\n|----------------|-------------------------------------------|----------|\n| `portal-corp`  | Enables PortalCorp PostgreSQL extensions  | Yes      |\n| `steampipe`    | Enables Steampipe PostgreSQL extensions   | Yes      |\n| `tensor-chord` | Enables TensorChord PostgreSQL extensions | Yes      |\n\n## Supported platforms\n\n`postgresql_extensions` provides implementations for the following:\n\n* [steampipe/repositories](https://github.com/orgs/turbot/repositories)\n* [tensor-chord/pgvecto.rs](https://github.com/tensor-chord/pgvecto.rs)\n\n## License\n\nLicensed under either of\n\n* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)\n* MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)\n\nat your option.\n\n## Contribution\n\nUnless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as\ndefined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.\n"
  },
  {
    "path": "postgresql_extensions/src/blocking/extensions.rs",
    "content": "#![allow(dead_code)]\nuse crate::model::AvailableExtension;\nuse crate::{InstalledExtension, Result};\nuse postgresql_commands::Settings;\nuse semver::VersionReq;\nuse std::sync::LazyLock;\nuse tokio::runtime::Runtime;\n\nstatic RUNTIME: LazyLock<Runtime> = LazyLock::new(|| Runtime::new().unwrap());\n\n/// Gets the available extensions.\n///\n/// # Errors\n/// * If an error occurs while getting the extensions.\npub fn get_available_extensions() -> Result<Vec<AvailableExtension>> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::get_available_extensions().await })\n}\n\n/// Gets the installed extensions.\n///\n/// # Errors\n/// * If an error occurs while getting the installed extensions.\npub fn get_installed_extensions(settings: &impl Settings) -> Result<Vec<InstalledExtension>> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::get_installed_extensions(settings).await })\n}\n\n/// Installs the extension with the specified `namespace`, `name`, and `version`.\n///\n/// # Errors\n/// * If an error occurs while installing the extension.\npub fn install(\n    settings: &impl Settings,\n    namespace: &str,\n    name: &str,\n    version: &VersionReq,\n) -> Result<()> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::install(settings, namespace, name, version).await })\n}\n\n/// Uninstalls the extension with the specified `namespace` and `name`.\n///\n/// # Errors\n/// * If an error occurs while uninstalling the extension.\npub fn uninstall(settings: &impl Settings, namespace: &str, name: &str) -> Result<()> {\n    RUNTIME\n        .handle()\n        .block_on(async move { crate::uninstall(settings, namespace, name).await })\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n\n    #[test]\n    fn test_get_installed_extensions() -> Result<()> {\n        let extensions = get_installed_extensions(&TestSettings)?;\n        assert!(extensions.is_empty());\n       
 Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/blocking/mod.rs",
    "content": "mod extensions;\n\npub use extensions::{get_available_extensions, get_installed_extensions, install, uninstall};\n"
  },
  {
    "path": "postgresql_extensions/src/error.rs",
    "content": "use std::sync::PoisonError;\n\n/// PostgreSQL extensions result type\npub type Result<T, E = Error> = core::result::Result<T, E>;\n\n/// PostgreSQL extensions errors\n#[derive(Debug, thiserror::Error)]\npub enum Error {\n    /// Archive error\n    #[error(transparent)]\n    ArchiveError(#[from] postgresql_archive::Error),\n    /// Error when a command fails\n    #[error(transparent)]\n    CommandError(#[from] postgresql_commands::Error),\n    /// Extension not found\n    #[error(\"extension not found '{0}'\")]\n    ExtensionNotFound(String),\n    /// Error when an IO operation fails\n    #[error(\"{0}\")]\n    IoError(String),\n    /// Poisoned lock\n    #[error(\"poisoned lock '{0}'\")]\n    PoisonedLock(String),\n    /// Error when a regex operation fails\n    #[error(transparent)]\n    RegexError(#[from] regex_lite::Error),\n    /// Error when a deserialization or serialization operation fails\n    #[error(transparent)]\n    SerdeError(#[from] serde_json::Error),\n    /// Unsupported namespace\n    #[error(\"unsupported namespace '{0}'\")]\n    UnsupportedNamespace(String),\n}\n\n/// Converts a [`std::sync::PoisonError<T>`] into a [`ParseError`](Error::PoisonedLock)\nimpl<T> From<PoisonError<T>> for Error {\n    fn from(value: PoisonError<T>) -> Self {\n        Error::PoisonedLock(value.to_string())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_from_poison_error() {\n        let error = Error::from(std::sync::PoisonError::new(()));\n        assert!(matches!(error, Error::PoisonedLock(_)));\n        assert!(error.to_string().contains(\"poisoned lock\"));\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/extensions.rs",
    "content": "use crate::Error::IoError;\nuse crate::model::AvailableExtension;\nuse crate::repository::registry;\nuse crate::repository::registry::get_repositories;\nuse crate::{InstalledConfiguration, InstalledExtension, Result};\n#[cfg(feature = \"tokio\")]\nuse postgresql_commands::AsyncCommandExecutor;\nuse postgresql_commands::CommandBuilder;\n#[cfg(not(feature = \"tokio\"))]\nuse postgresql_commands::CommandExecutor;\nuse postgresql_commands::Settings;\nuse postgresql_commands::pg_config::PgConfigBuilder;\nuse postgresql_commands::postgres::PostgresBuilder;\nuse regex_lite::Regex;\nuse semver::VersionReq;\nuse std::path::PathBuf;\nuse tracing::{debug, instrument};\n\nconst CONFIGURATION_FILE: &str = \"postgresql_extensions.json\";\n\n/// Gets the available extensions.\n///\n/// # Errors\n/// * If an error occurs while getting the extensions.\n#[instrument(level = \"debug\")]\npub async fn get_available_extensions() -> Result<Vec<AvailableExtension>> {\n    let mut extensions = Vec::new();\n    for repository in get_repositories()? {\n        for extension in repository.get_available_extensions().await? 
{\n            extensions.push(extension);\n        }\n    }\n    Ok(extensions)\n}\n\n/// Gets the installed extensions.\n///\n/// # Errors\n/// * If an error occurs while getting the installed extensions.\n#[instrument(level = \"debug\", skip(settings))]\npub async fn get_installed_extensions(settings: &impl Settings) -> Result<Vec<InstalledExtension>> {\n    let configuration_file = get_configuration_file(settings).await?;\n    if !configuration_file.exists() {\n        debug!(\"No configuration file found: {configuration_file:?}\");\n        return Ok(Vec::new());\n    }\n\n    let configuration = InstalledConfiguration::read(configuration_file).await?;\n    let extensions = configuration.extensions();\n    Ok(extensions.clone())\n}\n\n/// Installs the extension with the specified `namespace`, `name`, and `version`.\n///\n/// # Errors\n/// * If an error occurs while installing the extension.\n#[instrument(level = \"debug\", skip(settings))]\npub async fn install(\n    settings: &impl Settings,\n    namespace: &str,\n    name: &str,\n    version: &VersionReq,\n) -> Result<()> {\n    let extensions = get_installed_extensions(settings).await?;\n    if extensions\n        .iter()\n        .any(|extension| extension.namespace() == namespace && extension.name() == name)\n    {\n        // Attempt to uninstall the extension first\n        uninstall(settings, namespace, name).await?;\n    }\n\n    let postgresql_version = get_postgresql_version(settings).await?;\n    let repository = registry::get(namespace)?;\n    let (version, archive) = repository\n        .get_archive(postgresql_version.as_str(), name, version)\n        .await?;\n    let library_dir = get_library_path(settings).await?;\n    let extension_dir = get_extension_path(settings).await?;\n    let files = repository\n        .install(name, library_dir, extension_dir, &archive)\n        .await?;\n\n    let configuration_file = get_configuration_file(settings).await?;\n    let mut configuration = if 
configuration_file.exists() {\n        InstalledConfiguration::read(&configuration_file).await?\n    } else {\n        debug!(\"No configuration file found: {configuration_file:?}; creating new file\");\n        InstalledConfiguration::default()\n    };\n    let installed_extension = InstalledExtension::new(namespace, name, version, files);\n    configuration.extensions_mut().push(installed_extension);\n    configuration.write(configuration_file).await?;\n    Ok(())\n}\n\n/// Uninstalls the extension with the specified `namespace` and `name`.\n///\n/// # Errors\n/// * If an error occurs while uninstalling the extension.\n#[instrument(level = \"debug\", skip(settings))]\npub async fn uninstall(settings: &impl Settings, namespace: &str, name: &str) -> Result<()> {\n    let configuration_file = get_configuration_file(settings).await?;\n    if !configuration_file.exists() {\n        debug!(\"No configuration file found: {configuration_file:?}; nothing to uninstall\");\n        return Ok(());\n    }\n\n    let configuration = &mut InstalledConfiguration::read(&configuration_file).await?;\n    let mut extensions = Vec::new();\n    for extension in configuration.extensions() {\n        if extension.namespace() != namespace || extension.name() != name {\n            // Keep extensions that do not match; only the target extension's files are removed\n            extensions.push(extension.clone());\n            continue;\n        }\n\n        for file in extension.files() {\n            if file.exists() {\n                debug!(\"Removing file: {file:?}\");\n                #[cfg(feature = \"tokio\")]\n                tokio::fs::remove_file(file)\n                    .await\n                    .map_err(|error| IoError(error.to_string()))?;\n                #[cfg(not(feature = \"tokio\"))]\n                std::fs::remove_file(file)\n                    .map_err(|error| crate::error::Error::IoError(error.to_string()))?;\n            }\n        }\n    }\n\n    let configuration = InstalledConfiguration::new(extensions);\n    configuration.write(configuration_file).await?;\n\n    
Ok(())\n}\n\n/// Gets the configuration file.\n///\n/// # Errors\n/// * If an error occurs while getting the configuration file.\nasync fn get_configuration_file(settings: &dyn Settings) -> Result<PathBuf> {\n    let shared_path = get_shared_path(settings).await?;\n    let file = shared_path.join(CONFIGURATION_FILE);\n    Ok(file)\n}\n\n/// Gets the library path.\n///\n/// # Errors\n/// * If an error occurs while getting the library path.\nasync fn get_library_path(settings: &dyn Settings) -> Result<PathBuf> {\n    let command = PgConfigBuilder::from(settings).libdir();\n    match execute_command(command).await {\n        Ok((stdout, _stderr)) => Ok(PathBuf::from(stdout.trim())),\n        Err(error) => {\n            debug!(\"Failed to get library path using pg_config: {error:?}\");\n            let binary_dir = settings.get_binary_dir();\n            let install_dir = if let Some(parent) = binary_dir.parent() {\n                parent.to_path_buf()\n            } else {\n                debug!(\n                    \"Failed to get parent directory of binary directory; defaulting to current directory\"\n                );\n                PathBuf::from(\".\")\n            };\n            let library_dir = install_dir.join(\"lib\");\n            debug!(\"Using library directory: {library_dir:?}\");\n            Ok(library_dir)\n        }\n    }\n}\n\n/// Gets the shared path.\n///\n/// # Errors\n/// * If an error occurs while getting the shared path.\nasync fn get_shared_path(settings: &dyn Settings) -> Result<PathBuf> {\n    let command = PgConfigBuilder::from(settings).sharedir();\n    match execute_command(command).await {\n        Ok((stdout, _stderr)) => Ok(PathBuf::from(stdout.trim())),\n        Err(error) => {\n            debug!(\"Failed to get shared path using pg_config: {error:?}\");\n            let binary_dir = settings.get_binary_dir();\n            let install_dir = if let Some(parent) = binary_dir.parent() {\n                parent.to_path_buf()\n    
        } else {\n                debug!(\n                    \"Failed to get parent directory of binary directory; defaulting to current directory\"\n                );\n                PathBuf::from(\".\")\n            };\n            let share_dir = install_dir.join(\"share\");\n            debug!(\"Using share directory: {share_dir:?}\");\n            Ok(share_dir)\n        }\n    }\n}\n\n/// Gets the extension path.\n///\n/// # Errors\n/// * If an error occurs while getting the extension path.\nasync fn get_extension_path(settings: &dyn Settings) -> Result<PathBuf> {\n    let shared_path = get_shared_path(settings).await?;\n    let extension_path = shared_path.join(\"extension\");\n    Ok(extension_path)\n}\n\n/// Gets the PostgreSQL version.\n///\n/// # Errors\n/// * If an error occurs while getting the PostgreSQL version.\nasync fn get_postgresql_version(settings: &dyn Settings) -> Result<String> {\n    let command = PostgresBuilder::new()\n        .program_dir(settings.get_binary_dir())\n        .version();\n    let (stdout, _stderr) = execute_command(command).await?;\n    let re = Regex::new(r\"PostgreSQL\\)\\s(\\d+\\.\\d+)\")?;\n    let Some(captures) = re.captures(&stdout) else {\n        return Err(IoError(format!(\n            \"Failed to obtain postgresql version from {stdout}\"\n        )));\n    };\n    let Some(version) = captures.get(1) else {\n        return Err(IoError(format!(\n            \"Failed to match postgresql version from {stdout}\"\n        )));\n    };\n    let version = version.as_str();\n    debug!(\"Obtained PostgreSQL version from postgres command: {version}\");\n    Ok(version.to_string())\n}\n\n#[cfg(not(feature = \"tokio\"))]\n/// Execute a command and return the stdout and stderr as strings.\n#[instrument(level = \"debug\", skip(command_builder), fields(program = ?command_builder.get_program()))]\nasync fn execute_command<B: CommandBuilder>(\n    command_builder: B,\n) -> postgresql_commands::Result<(String, String)> {\n    
let mut command = command_builder.build();\n    command.execute()\n}\n\n#[cfg(feature = \"tokio\")]\n/// Execute a command and return the stdout and stderr as strings.\n#[instrument(level = \"debug\", skip(command_builder), fields(program = ?command_builder.get_program()))]\nasync fn execute_command<B: CommandBuilder>(\n    command_builder: B,\n) -> postgresql_commands::Result<(String, String)> {\n    let mut command = command_builder.build_tokio();\n    command.execute(None).await\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::TestSettings;\n\n    #[tokio::test]\n    async fn test_get_installed_extensions() -> Result<()> {\n        let extensions = get_installed_extensions(&TestSettings).await?;\n        assert!(extensions.is_empty());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/lib.rs",
    "content": "//! # PostgreSQL Extensions\n//!\n//! [![ci](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/theseus-rs/postgresql-embedded/actions/workflows/ci.yml)\n//! [![Documentation](https://docs.rs/postgresql_extensions/badge.svg)](https://docs.rs/postgresql_extensions)\n//! [![Code Coverage](https://codecov.io/gh/theseus-rs/postgresql-embedded/branch/main/graph/badge.svg)](https://codecov.io/gh/theseus-rs/postgresql-embedded)\n//! [![Benchmarks](https://img.shields.io/badge/%F0%9F%90%B0_bencher-enabled-6ec241)](https://bencher.dev/perf/theseus-rs-postgresql-embedded)\n//! [![Latest version](https://img.shields.io/crates/v/postgresql_extensions.svg)](https://crates.io/crates/postgresql_extensions)\n//! [![License](https://img.shields.io/crates/l/postgresql_extensions?)](https://github.com/theseus-rs/postgresql-embedded/tree/main/postgresql_extensions#license)\n//! [![Semantic Versioning](https://img.shields.io/badge/%E2%9A%99%EF%B8%8F_SemVer-2.0.0-blue)](https://semver.org/spec/v2.0.0.html)\n//!\n//! A configurable library for managing PostgreSQL extensions.\n//!\n//! ## Examples\n//!\n//! ### Asynchronous API\n//!\n//! ```rust\n//! use postgresql_extensions::{get_available_extensions, Result};\n//!\n//! #[tokio::main]\n//! async fn main() -> Result<()> {\n//!     let extensions = get_available_extensions().await?;\n//!     Ok(())\n//! }\n//! ```\n//!\n//! ### Synchronous API\n//!\n//! ```rust\n//! #[cfg(feature = \"blocking\")] {\n//! use postgresql_extensions::Result;\n//! use postgresql_extensions::blocking::get_available_extensions;\n//!\n//! let extensions = get_available_extensions().unwrap();\n//! }\n//! ```\n//!\n//! ## Feature flags\n//!\n//! postgresql_extensions uses [feature flags] to address compile time and binary size\n//! uses.\n//!\n//! The following features are available:\n//!\n//! | Name         | Description                | Default? |\n//! 
|--------------|----------------------------|----------|\n//! | `blocking`   | Enables the blocking API   | No       |\n//! | `native-tls` | Enables native-tls support | Yes      |\n//! | `rustls-tls` | Enables rustls-tls support | No       |\n//!\n//! ### Repositories\n//!\n//! | Name           | Description                               | Default? |\n//! |----------------|-------------------------------------------|----------|\n//! | `portal-corp`  | Enables PortalCorp PostgreSQL extensions  | Yes      |\n//! | `steampipe`    | Enables Steampipe PostgreSQL extensions   | Yes      |\n//! | `tensor-chord` | Enables TensorChord PostgreSQL extensions | Yes      |\n//!\n//! ## Supported platforms\n//!\n//! `postgresql_extensions` provides implementations for the following:\n//!\n//! * [steampipe/repositories](https://github.com/orgs/turbot/repositories)\n//! * [tensorchord/pgvecto.rs](https://github.com/tensorchord/pgvecto.rs)\n//!\n//! ## Safety\n//!\n//! This crate uses `#![forbid(unsafe_code)]` to ensure everything is implemented in 100% safe Rust.\n//!\n//! ## License\n//!\n//! Licensed under either of\n//!\n//! * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or <https://www.apache.org/licenses/LICENSE-2.0>)\n//! * MIT license ([LICENSE-MIT](LICENSE-MIT) or <https://opensource.org/licenses/MIT>)\n//!\n//! at your option.\n//!\n//! ## Contribution\n//!\n//! Unless you explicitly state otherwise, any contribution intentionally submitted\n//! for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any\n//! 
additional terms or conditions.\n\n#[cfg(feature = \"blocking\")]\npub mod blocking;\nmod error;\npub mod extensions;\nmod matcher;\nmod model;\npub mod repository;\n\npub use error::{Error, Result};\npub use extensions::{get_available_extensions, get_installed_extensions, install, uninstall};\npub use matcher::{matcher, tar_gz_matcher, zip_matcher};\n#[cfg(test)]\npub use model::TestSettings;\npub use model::{AvailableExtension, InstalledConfiguration, InstalledExtension};\npub use semver::{Version, VersionReq};\n"
  },
  {
    "path": "postgresql_extensions/src/matcher.rs",
    "content": "use postgresql_archive::Result;\nuse regex_lite::Regex;\nuse semver::Version;\nuse std::collections::HashMap;\nuse std::env::consts;\nuse url::Url;\n\n/// .tar.gz asset matcher that matches the asset name to the postgresql major version, target triple\n/// or OS/CPU architecture.\n///\n/// # Errors\n/// * If the asset matcher fails.\npub fn tar_gz_matcher(url: &str, name: &str, version: &Version) -> Result<bool> {\n    if !matcher(url, name, version)? {\n        return Ok(false);\n    }\n\n    Ok(name.ends_with(\".tar.gz\"))\n}\n\n/// .zip asset matcher that matches the asset name to the postgresql major version, target triple or\n/// OS/CPU architecture.\n///\n/// # Errors\n/// * If the asset matcher fails.\n#[expect(clippy::case_sensitive_file_extension_comparisons)]\npub fn zip_matcher(url: &str, name: &str, version: &Version) -> Result<bool> {\n    if !matcher(url, name, version)? {\n        return Ok(false);\n    }\n\n    Ok(name.ends_with(\".zip\"))\n}\n\n/// Default asset matcher that matches the asset name to the postgresql major version, target triple\n/// or OS/CPU architecture.\n///\n/// # Errors\n/// * If the asset matcher fails.\npub fn matcher(url: &str, name: &str, _version: &Version) -> Result<bool> {\n    let Ok(url) = Url::parse(url) else {\n        return Ok(false);\n    };\n    let query_parameters: HashMap<String, String> = url.query_pairs().into_owned().collect();\n    let Some(postgresql_version) = query_parameters.get(\"postgresql_version\") else {\n        return Ok(false);\n    };\n    let postgresql_major_version = match postgresql_version.split_once('.') {\n        None => return Ok(false),\n        Some((major, _)) => major,\n    };\n\n    let postgresql_version = format!(\"pg{postgresql_major_version}\");\n    let postgresql_version_re = regex(postgresql_version.as_str())?;\n    if !postgresql_version_re.is_match(name) {\n        return Ok(false);\n    }\n\n    let target_re = regex(target_triple::TARGET)?;\n    if 
target_re.is_match(name) {\n        return Ok(true);\n    }\n\n    let os = consts::OS;\n    let os_re = regex(os)?;\n    let matches_os = match os {\n        \"macos\" => {\n            let darwin_re = regex(\"darwin\")?;\n            os_re.is_match(name) || darwin_re.is_match(name)\n        }\n        _ => os_re.is_match(name),\n    };\n\n    let arch = consts::ARCH;\n    let arch_re = regex(arch)?;\n    let matches_arch = match arch {\n        \"x86_64\" => {\n            let amd64_re = regex(\"amd64\")?;\n            arch_re.is_match(name) || amd64_re.is_match(name)\n        }\n        \"aarch64\" => {\n            let arm64_re = regex(\"arm64\")?;\n            arch_re.is_match(name) || arm64_re.is_match(name)\n        }\n        _ => arch_re.is_match(name),\n    };\n    if matches_os && matches_arch {\n        return Ok(true);\n    }\n\n    Ok(false)\n}\n\n/// Creates a new regex for the specified key.\n///\n/// # Arguments\n/// * `key` - The key to create the regex for.\n///\n/// # Returns\n/// * The regex.\n///\n/// # Errors\n/// * If the regex cannot be created.\nfn regex(key: &str) -> Result<Regex> {\n    let regex = Regex::new(format!(r\"[\\W_]{key}[\\W_]\").as_str())?;\n    Ok(regex)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use anyhow::Result;\n\n    #[test]\n    fn test_invalid_url() -> Result<()> {\n        let url = \"^\";\n        assert!(!matcher(url, \"\", &Version::new(0, 0, 0))?);\n        Ok(())\n    }\n\n    #[test]\n    fn test_no_version() -> Result<()> {\n        assert!(!matcher(\"https://foo\", \"\", &Version::new(0, 0, 0))?);\n        Ok(())\n    }\n\n    #[test]\n    fn test_invalid_version() -> Result<()> {\n        assert!(!matcher(\n            \"https://foo?postgresql_version=16\",\n            \"\",\n            &Version::new(0, 0, 0)\n        )?);\n        Ok(())\n    }\n\n    #[test]\n    fn test_tar_gz_matcher() -> Result<()> {\n        let postgresql_major_version = 16;\n        let url = 
format!(\"https://foo?postgresql_version={postgresql_major_version}.3\");\n        let version = Version::parse(\"1.2.3\")?;\n        let target = target_triple::TARGET;\n\n        let valid_name = format!(\"postgresql-pg{postgresql_major_version}-{target}.tar.gz\");\n        let invalid_name = format!(\"postgresql-pg{postgresql_major_version}-{target}.zip\");\n        assert!(\n            tar_gz_matcher(url.as_str(), valid_name.as_str(), &version)?,\n            \"{}\",\n            valid_name\n        );\n        assert!(\n            !tar_gz_matcher(url.as_str(), invalid_name.as_str(), &version)?,\n            \"{}\",\n            invalid_name\n        );\n        Ok(())\n    }\n\n    #[test]\n    fn test_zip_matcher() -> Result<()> {\n        let postgresql_major_version = 16;\n        let url = format!(\"https://foo?postgresql_version={postgresql_major_version}.3\");\n        let version = Version::parse(\"1.2.3\")?;\n        let target = target_triple::TARGET;\n\n        let valid_name = format!(\"postgresql-pg{postgresql_major_version}-{target}.zip\");\n        let invalid_name = format!(\"postgresql-pg{postgresql_major_version}-{target}.tar.gz\");\n        assert!(\n            zip_matcher(url.as_str(), valid_name.as_str(), &version)?,\n            \"{}\",\n            valid_name\n        );\n        assert!(\n            !zip_matcher(url.as_str(), invalid_name.as_str(), &version)?,\n            \"{}\",\n            invalid_name\n        );\n        Ok(())\n    }\n\n    #[test]\n    fn test_matcher_success() -> Result<()> {\n        let postgresql_major_version = 16;\n        let url = format!(\"https://foo?postgresql_version={postgresql_major_version}.3\");\n        let version = Version::parse(\"1.2.3\")?;\n        let target = target_triple::TARGET;\n        let os = consts::OS;\n        let arch = consts::ARCH;\n        let names = vec![\n            format!(\"postgresql-pg{postgresql_major_version}-{target}.zip\"),\n            
format!(\"postgresql-pg{postgresql_major_version}-{os}-{arch}.zip\"),\n            format!(\"postgresql-pg{postgresql_major_version}-{target}.tar.gz\"),\n            format!(\"postgresql-pg{postgresql_major_version}-{os}-{arch}.tar.gz\"),\n            format!(\"foo.{target}.pg{postgresql_major_version}.tar.gz\"),\n            format!(\"foo.{os}.{arch}.pg{postgresql_major_version}.tar.gz\"),\n            format!(\"foo-{arch}-{os}-pg{postgresql_major_version}.tar.gz\"),\n            format!(\"foo_{arch}_{os}_pg{postgresql_major_version}.tar.gz\"),\n        ];\n\n        for name in names {\n            assert!(matcher(url.as_str(), name.as_str(), &version)?, \"{}\", name);\n        }\n        Ok(())\n    }\n\n    #[test]\n    fn test_matcher_errors() -> Result<()> {\n        let postgresql_major_version = 16;\n        let url = format!(\"https://foo?postgresql_version={postgresql_major_version}.3\");\n        let version = Version::parse(\"1.2.3\")?;\n        let target = target_triple::TARGET;\n        let os = consts::OS;\n        let arch = consts::ARCH;\n        let names = vec![\n            format!(\"foo-pg{postgresql_major_version}.tar.gz\"),\n            format!(\"foo-{target}.tar.gz\"),\n            format!(\"foo-pg{postgresql_major_version}-{os}.tar.gz\"),\n            format!(\"foo-pg{postgresql_major_version}-{arch}.tar.gz\"),\n            format!(\"foo-pg{postgresql_major_version}{os}-{arch}.tar\"),\n            format!(\"foo-pg{postgresql_major_version}-{os}{arch}.tar.gz\"),\n        ];\n\n        for name in names {\n            assert!(!matcher(url.as_str(), name.as_str(), &version)?, \"{}\", name);\n        }\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/model.rs",
    "content": "use crate::Error::IoError;\nuse crate::Result;\nuse semver::Version;\nuse serde::{Deserialize, Serialize};\n#[cfg(test)]\nuse std::ffi::OsString;\nuse std::fmt::Display;\n#[cfg(not(feature = \"tokio\"))]\nuse std::io::Write;\nuse std::path::PathBuf;\n#[cfg(feature = \"tokio\")]\nuse tokio::io::{AsyncReadExt, AsyncWriteExt};\n\n/// A struct representing an available extension.\n#[derive(Debug)]\npub struct AvailableExtension {\n    namespace: String,\n    name: String,\n    description: String,\n}\n\nimpl AvailableExtension {\n    /// Creates a new available extension.\n    #[must_use]\n    pub fn new(namespace: &str, name: &str, description: &str) -> Self {\n        Self {\n            namespace: namespace.to_string(),\n            name: name.to_string(),\n            description: description.to_string(),\n        }\n    }\n\n    /// Gets the namespace of the extension.\n    #[must_use]\n    pub fn namespace(&self) -> &str {\n        &self.namespace\n    }\n\n    /// Gets the name of the extension.\n    #[must_use]\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    /// Gets the description of the extension.\n    #[must_use]\n    pub fn description(&self) -> &str {\n        &self.description\n    }\n}\n\nimpl Display for AvailableExtension {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}:{} {}\", self.namespace, self.name, self.description)\n    }\n}\n\n/// A struct representing an installed configuration.\n#[derive(Clone, Debug, Default, Deserialize, PartialEq, Serialize)]\npub struct InstalledConfiguration {\n    extensions: Vec<InstalledExtension>,\n}\n\nimpl InstalledConfiguration {\n    /// Creates a new installed configuration.\n    #[must_use]\n    pub fn new(extensions: Vec<InstalledExtension>) -> Self {\n        Self { extensions }\n    }\n\n    /// Reads the configuration from the specified `path`.\n    ///\n    /// # Errors\n    /// * If an error occurs while reading the 
configuration.\n    pub async fn read<P: Into<PathBuf>>(path: P) -> Result<Self> {\n        #[cfg(feature = \"tokio\")]\n        {\n            let mut file = tokio::fs::File::open(path.into())\n                .await\n                .map_err(|error| IoError(error.to_string()))?;\n            let mut contents = vec![];\n            file.read_to_end(&mut contents)\n                .await\n                .map_err(|error| IoError(error.to_string()))?;\n            let config = serde_json::from_slice(&contents)?;\n            Ok(config)\n        }\n        #[cfg(not(feature = \"tokio\"))]\n        {\n            let file =\n                std::fs::File::open(path.into()).map_err(|error| IoError(error.to_string()))?;\n            let reader = std::io::BufReader::new(file);\n            let config =\n                serde_json::from_reader(reader).map_err(|error| IoError(error.to_string()))?;\n            Ok(config)\n        }\n    }\n\n    /// Writes the configuration to the specified `path`.\n    ///\n    /// # Errors\n    /// * If an error occurs while writing the configuration.\n    pub async fn write<P: Into<PathBuf>>(&self, path: P) -> Result<()> {\n        let content = serde_json::to_string_pretty(&self)?;\n\n        #[cfg(feature = \"tokio\")]\n        {\n            let mut file = tokio::fs::File::create(path.into())\n                .await\n                .map_err(|error| IoError(error.to_string()))?;\n            file.write_all(content.as_bytes())\n                .await\n                .map_err(|error| IoError(error.to_string()))?;\n            // Flush to ensure buffered writes reach the file before it is dropped\n            file.flush()\n                .await\n                .map_err(|error| IoError(error.to_string()))?;\n        }\n        #[cfg(not(feature = \"tokio\"))]\n        {\n            let mut file =\n                std::fs::File::create(path.into()).map_err(|error| IoError(error.to_string()))?;\n            file.write_all(content.as_bytes())\n                .map_err(|error| IoError(error.to_string()))?;\n        }\n        Ok(())\n    }\n\n    /// Gets the extensions of the configuration.\n    #[must_use]\n    pub fn 
extensions(&self) -> &Vec<InstalledExtension> {\n        &self.extensions\n    }\n\n    /// Gets a mutable reference to the extensions of the configuration.\n    #[must_use]\n    pub fn extensions_mut(&mut self) -> &mut Vec<InstalledExtension> {\n        &mut self.extensions\n    }\n}\n\n/// A struct representing an installed extension.\n#[derive(Clone, Debug, Deserialize, PartialEq, Serialize)]\npub struct InstalledExtension {\n    namespace: String,\n    name: String,\n    version: Version,\n    files: Vec<PathBuf>,\n}\n\nimpl InstalledExtension {\n    /// Creates a new installed extension.\n    #[must_use]\n    pub fn new(namespace: &str, name: &str, version: Version, files: Vec<PathBuf>) -> Self {\n        Self {\n            namespace: namespace.to_string(),\n            name: name.to_string(),\n            version,\n            files,\n        }\n    }\n\n    /// Gets the namespace of the extension.\n    #[must_use]\n    pub fn namespace(&self) -> &str {\n        &self.namespace\n    }\n\n    /// Gets the name of the extension.\n    #[must_use]\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    /// Gets the version of the extension.\n    #[must_use]\n    pub fn version(&self) -> &Version {\n        &self.version\n    }\n\n    /// Gets the files of the extension.\n    #[must_use]\n    pub fn files(&self) -> &Vec<PathBuf> {\n        &self.files\n    }\n}\n\nimpl Display for InstalledExtension {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}:{}:{}\", self.namespace, self.name, self.version)\n    }\n}\n\n#[cfg(test)]\npub struct TestSettings;\n\n#[cfg(test)]\nimpl postgresql_commands::Settings for TestSettings {\n    fn get_binary_dir(&self) -> PathBuf {\n        PathBuf::from(\".\")\n    }\n\n    fn get_host(&self) -> OsString {\n        \"localhost\".into()\n    }\n\n    fn get_port(&self) -> u16 {\n        5432\n    }\n\n    fn get_username(&self) -> OsString {\n        \"postgres\".into()\n    }\n\n    fn 
get_password(&self) -> OsString {\n        \"password\".into()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use postgresql_commands::Settings;\n\n    #[test]\n    fn test_settings() {\n        let settings = TestSettings;\n        assert_eq!(settings.get_binary_dir(), PathBuf::from(\".\"));\n        assert_eq!(settings.get_host(), \"localhost\");\n        assert_eq!(settings.get_port(), 5432);\n        assert_eq!(settings.get_username(), \"postgres\");\n        assert_eq!(settings.get_password(), \"password\");\n    }\n\n    #[test]\n    fn test_available_extension() {\n        let available_extension = AvailableExtension::new(\"namespace\", \"name\", \"description\");\n        assert_eq!(available_extension.namespace(), \"namespace\");\n        assert_eq!(available_extension.name(), \"name\");\n        assert_eq!(available_extension.description(), \"description\");\n        assert_eq!(\n            available_extension.to_string(),\n            \"namespace:name description\"\n        );\n    }\n\n    #[test]\n    fn test_installed_configuration() {\n        let installed_configuration = InstalledConfiguration::new(vec![]);\n        assert!(installed_configuration.extensions.is_empty());\n    }\n\n    #[cfg(target_os = \"linux\")]\n    #[tokio::test]\n    async fn test_installed_configuration_io() -> Result<()> {\n        let temp_file =\n            tempfile::NamedTempFile::new().map_err(|error| IoError(error.to_string()))?;\n        let file = temp_file.as_ref();\n        let extensions = vec![InstalledExtension::new(\n            \"namespace\",\n            \"name\",\n            Version::new(1, 0, 0),\n            vec![PathBuf::from(\"file\")],\n        )];\n        let expected_configuration = InstalledConfiguration::new(extensions);\n        expected_configuration.write(file).await?;\n        let configuration = InstalledConfiguration::read(file).await?;\n        assert_eq!(expected_configuration, configuration);\n        
tokio::fs::remove_file(file)\n            .await\n            .map_err(|error| IoError(error.to_string()))?;\n        Ok(())\n    }\n\n    #[test]\n    fn test_installed_extension() {\n        let installed_extension = InstalledExtension::new(\n            \"namespace\",\n            \"name\",\n            Version::new(1, 0, 0),\n            vec![PathBuf::from(\"file\")],\n        );\n        assert_eq!(installed_extension.namespace(), \"namespace\");\n        assert_eq!(installed_extension.name(), \"name\");\n        assert_eq!(installed_extension.version(), &Version::new(1, 0, 0));\n        assert_eq!(installed_extension.files(), &vec![PathBuf::from(\"file\")]);\n        assert_eq!(installed_extension.to_string(), \"namespace:name:1.0.0\");\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/repository/mod.rs",
    "content": "pub mod model;\n#[cfg(feature = \"portal-corp\")]\npub mod portal_corp;\npub mod registry;\n#[cfg(feature = \"steampipe\")]\npub mod steampipe;\n#[cfg(feature = \"tensor-chord\")]\npub mod tensor_chord;\n\npub use model::Repository;\n"
  },
  {
    "path": "postgresql_extensions/src/repository/model.rs",
    "content": "use crate::Result;\nuse crate::model::AvailableExtension;\nuse async_trait::async_trait;\nuse semver::{Version, VersionReq};\nuse std::fmt::Debug;\nuse std::path::PathBuf;\n\n/// A trait for archive repository implementations.\n#[async_trait]\npub trait Repository: Debug + Send + Sync {\n    /// Gets the name of the repository.\n    fn name(&self) -> &str;\n\n    /// Gets the available extensions.\n    ///\n    /// # Errors\n    /// * if an error occurs while getting the extensions.\n    async fn get_available_extensions(&self) -> Result<Vec<AvailableExtension>>;\n\n    /// Gets the archive for the extension with the specified `name` and `version`.\n    ///\n    /// # Errors\n    /// * if an error occurs while getting the archive.\n    async fn get_archive(\n        &self,\n        postgresql_version: &str,\n        name: &str,\n        version: &VersionReq,\n    ) -> Result<(Version, Vec<u8>)>;\n\n    /// Installs the extension with the specified `name` and `version`.\n    ///\n    /// # Errors\n    /// * if an error occurs while installing the extension.\n    async fn install(\n        &self,\n        name: &str,\n        library_dir: PathBuf,\n        extension_dir: PathBuf,\n        archive: &[u8],\n    ) -> Result<Vec<PathBuf>>;\n}\n"
  },
  {
    "path": "postgresql_extensions/src/repository/portal_corp/mod.rs",
    "content": "pub mod repository;\n\npub const URL: &str = \"https://github.com/portalcorp\";\n"
  },
  {
    "path": "postgresql_extensions/src/repository/portal_corp/repository.rs",
    "content": "use crate::Result;\nuse crate::matcher::zip_matcher;\nuse crate::model::AvailableExtension;\nuse crate::repository::Repository;\nuse crate::repository::portal_corp::URL;\nuse async_trait::async_trait;\nuse postgresql_archive::extractor::{ExtractDirectories, zip_extract};\nuse postgresql_archive::get_archive;\nuse postgresql_archive::repository::github::repository::GitHub;\nuse regex_lite::Regex;\nuse semver::{Version, VersionReq};\nuse std::fmt::Debug;\nuse std::path::PathBuf;\n\n/// PortalCorp repository.\n#[derive(Debug)]\npub struct PortalCorp;\n\nimpl PortalCorp {\n    /// Creates a new PortalCorp repository.\n    ///\n    /// # Errors\n    /// * If the repository cannot be created\n    #[expect(clippy::new_ret_no_self)]\n    pub fn new() -> Result<Box<dyn Repository>> {\n        Ok(Box::new(Self))\n    }\n\n    /// Initializes the repository.\n    ///\n    /// # Errors\n    /// * If the repository cannot be initialized.\n    pub fn initialize() -> Result<()> {\n        postgresql_archive::matcher::registry::register(\n            |url| Ok(url.starts_with(URL)),\n            zip_matcher,\n        )?;\n        postgresql_archive::repository::registry::register(\n            |url| Ok(url.starts_with(URL)),\n            Box::new(GitHub::new),\n        )?;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl Repository for PortalCorp {\n    fn name(&self) -> &'static str {\n        \"portal-corp\"\n    }\n\n    async fn get_available_extensions(&self) -> Result<Vec<AvailableExtension>> {\n        let extensions = vec![AvailableExtension::new(\n            self.name(),\n            \"pgvector_compiled\",\n            \"Precompiled OS packages for pgvector\",\n        )];\n        Ok(extensions)\n    }\n\n    async fn get_archive(\n        &self,\n        postgresql_version: &str,\n        name: &str,\n        version: &VersionReq,\n    ) -> Result<(Version, Vec<u8>)> {\n        let url = 
format!(\"{URL}/{name}?postgresql_version={postgresql_version}\");\n        let archive = get_archive(url.as_str(), version).await?;\n        Ok(archive)\n    }\n\n    async fn install(\n        &self,\n        _name: &str,\n        library_dir: PathBuf,\n        extension_dir: PathBuf,\n        archive: &[u8],\n    ) -> Result<Vec<PathBuf>> {\n        let mut extract_directories = ExtractDirectories::default();\n        extract_directories.add_mapping(Regex::new(r\"\\.(dll|dylib|so)$\")?, library_dir);\n        extract_directories.add_mapping(Regex::new(r\"\\.(control|sql)$\")?, extension_dir);\n        let bytes = &archive.to_vec();\n        let files = zip_extract(bytes, &extract_directories)?;\n        Ok(files)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::repository::Repository;\n\n    #[test]\n    fn test_name() {\n        let repository = PortalCorp;\n        assert_eq!(\"portal-corp\", repository.name());\n    }\n\n    #[tokio::test]\n    async fn test_get_available_extensions() -> Result<()> {\n        let repository = PortalCorp;\n        let extensions = repository.get_available_extensions().await?;\n        let extension = &extensions[0];\n\n        assert_eq!(\"pgvector_compiled\", extension.name());\n        assert_eq!(\n            \"Precompiled OS packages for pgvector\",\n            extension.description()\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/repository/registry.rs",
    "content": "use crate::Error::UnsupportedNamespace;\nuse crate::Result;\nuse crate::repository::model::Repository;\n#[cfg(feature = \"portal-corp\")]\nuse crate::repository::portal_corp::repository::PortalCorp;\n#[cfg(feature = \"steampipe\")]\nuse crate::repository::steampipe::repository::Steampipe;\n#[cfg(feature = \"tensor-chord\")]\nuse crate::repository::tensor_chord::repository::TensorChord;\nuse std::collections::HashMap;\nuse std::sync::{Arc, LazyLock, Mutex, RwLock};\n\nstatic REGISTRY: LazyLock<Arc<Mutex<RepositoryRegistry>>> =\n    LazyLock::new(|| Arc::new(Mutex::new(RepositoryRegistry::default())));\n\ntype NewFn = dyn Fn() -> Result<Box<dyn Repository>> + Send + Sync;\n\n/// Singleton struct to store repositories\nstruct RepositoryRegistry {\n    repositories: HashMap<String, Arc<RwLock<NewFn>>>,\n}\n\nimpl RepositoryRegistry {\n    /// Creates a new repository registry.\n    fn new() -> Self {\n        Self {\n            repositories: HashMap::new(),\n        }\n    }\n\n    /// Registers a repository. 
Newly registered repositories take precedence over existing ones.\n    fn register(&mut self, namespace: &str, new_fn: Box<NewFn>) {\n        let namespace = namespace.to_string();\n        self.repositories\n            .insert(namespace, Arc::new(RwLock::new(new_fn)));\n    }\n\n    /// Gets a repository that supports the specified namespace\n    ///\n    /// # Errors\n    /// * If the namespace is not supported.\n    fn get(&self, namespace: &str) -> Result<Box<dyn Repository>> {\n        let namespace = namespace.to_string();\n        let Some(new_fn) = self.repositories.get(&namespace) else {\n            return Err(UnsupportedNamespace(namespace.to_string()));\n        };\n        let new_function = new_fn.read()?;\n        new_function()\n    }\n}\n\nimpl Default for RepositoryRegistry {\n    /// Creates a new repository registry with the default repositories registered.\n    fn default() -> Self {\n        let mut registry = Self::new();\n        #[cfg(feature = \"portal-corp\")]\n        {\n            registry.register(\"portal-corp\", Box::new(PortalCorp::new));\n            let _ = PortalCorp::initialize();\n        }\n        #[cfg(feature = \"steampipe\")]\n        {\n            registry.register(\"steampipe\", Box::new(Steampipe::new));\n            let _ = Steampipe::initialize();\n        }\n        #[cfg(feature = \"tensor-chord\")]\n        {\n            registry.register(\"tensor-chord\", Box::new(TensorChord::new));\n            let _ = TensorChord::initialize();\n        }\n        registry\n    }\n}\n\n/// Registers a repository. 
Newly registered repositories can override existing ones.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn register(namespace: &str, new_fn: Box<NewFn>) -> Result<()> {\n    REGISTRY.lock()?.register(namespace, new_fn);\n    Ok(())\n}\n\n/// Gets a repository that supports the specified namespace\n///\n/// # Errors\n/// * If the namespace is not supported.\npub fn get(namespace: &str) -> Result<Box<dyn Repository>> {\n    REGISTRY.lock()?.get(namespace)\n}\n\n/// Gets the namespaces of the registered repositories.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn get_namespaces() -> Result<Vec<String>> {\n    Ok(REGISTRY.lock()?.repositories.keys().cloned().collect())\n}\n\n/// Gets all the registered repositories.\n///\n/// # Errors\n/// * If the registry is poisoned.\npub fn get_repositories() -> Result<Vec<Box<dyn Repository>>> {\n    let mut repositories = Vec::new();\n    for namespace in get_namespaces()? {\n        let repository = get(&namespace)?;\n        repositories.push(repository);\n    }\n    Ok(repositories)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::model::AvailableExtension;\n    use async_trait::async_trait;\n    use semver::{Version, VersionReq};\n    use std::path::PathBuf;\n\n    #[derive(Debug)]\n    struct TestRepository;\n\n    impl TestRepository {\n        #[expect(clippy::new_ret_no_self)]\n        #[expect(clippy::unnecessary_wraps)]\n        fn new() -> Result<Box<dyn Repository>> {\n            Ok(Box::new(Self))\n        }\n    }\n\n    #[async_trait]\n    impl Repository for TestRepository {\n        fn name(&self) -> &'static str {\n            \"test\"\n        }\n\n        async fn get_available_extensions(&self) -> Result<Vec<AvailableExtension>> {\n            Ok(Vec::new())\n        }\n\n        async fn get_archive(\n            &self,\n            _postgresql_version: &str,\n            _name: &str,\n            _version: &VersionReq,\n        ) -> Result<(Version, Vec<u8>)> 
{\n            Ok((Version::new(1, 0, 0), Vec::new()))\n        }\n\n        async fn install(\n            &self,\n            _name: &str,\n            _library_dir: PathBuf,\n            _extension_dir: PathBuf,\n            _archive: &[u8],\n        ) -> Result<Vec<PathBuf>> {\n            Ok(Vec::new())\n        }\n    }\n\n    #[tokio::test]\n    async fn test_register() -> Result<()> {\n        let namespace = \"test\";\n        register(namespace, Box::new(TestRepository::new))?;\n        let repository = get(namespace)?;\n        assert_eq!(\"test\", repository.name());\n        assert!(repository.get_available_extensions().await.is_ok());\n        Ok(())\n    }\n\n    #[test]\n    fn test_get_error() {\n        let error = get(\"foo\").unwrap_err();\n        assert_eq!(\"unsupported namespace 'foo'\", error.to_string());\n    }\n\n    #[test]\n    #[cfg(feature = \"portal-corp\")]\n    fn test_get_portal_corp_extensions() {\n        assert!(get(\"portal-corp\").is_ok());\n    }\n\n    #[test]\n    #[cfg(feature = \"steampipe\")]\n    fn test_get_steampipe_extensions() {\n        assert!(get(\"steampipe\").is_ok());\n    }\n\n    #[test]\n    #[cfg(feature = \"tensor-chord\")]\n    fn test_get_tensor_chord_extensions() {\n        assert!(get(\"tensor-chord\").is_ok());\n    }\n\n    #[test]\n    fn test_get_namespaces() {\n        let namespaces = get_namespaces().unwrap();\n        #[cfg(feature = \"portal-corp\")]\n        assert!(namespaces.contains(&\"portal-corp\".to_string()));\n        #[cfg(feature = \"steampipe\")]\n        assert!(namespaces.contains(&\"steampipe\".to_string()));\n        #[cfg(feature = \"tensor-chord\")]\n        assert!(namespaces.contains(&\"tensor-chord\".to_string()));\n    }\n\n    #[test]\n    fn test_get_repositories() {\n        let repositories = get_repositories().unwrap();\n        #[cfg(feature = \"steampipe\")]\n        assert!(\n            repositories\n                .iter()\n                .any(|repository| 
repository.name() == \"steampipe\")\n        );\n        #[cfg(feature = \"tensor-chord\")]\n        assert!(\n            repositories\n                .iter()\n                .any(|repository| repository.name() == \"tensor-chord\")\n        );\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/repository/steampipe/extensions.rs",
    "content": "use std::sync::LazyLock;\n\nstatic EXTENSIONS: LazyLock<Vec<SteampipeExtension>> = LazyLock::new(init_extensions);\n\n#[expect(clippy::too_many_lines)]\nfn init_extensions() -> Vec<SteampipeExtension> {\n    vec![\n        SteampipeExtension::new(\n            \"abuseipdb\",\n            \"Steampipe plugin to query IP address abuse data and more from AbuseIPDB.\",\n            \"https://github.com/turbot/steampipe-plugin-abuseipdb\",\n        ),\n        SteampipeExtension::new(\n            \"airtable\",\n            \"Steampipe plugin for querying Airtable.\",\n            \"https://github.com/francois2metz/steampipe-plugin-airtable\",\n        ),\n        SteampipeExtension::new(\n            \"aiven\",\n            \"Steampipe plugin to query accounts, projects, teams, users and more from Aiven.\",\n            \"https://github.com/turbot/steampipe-plugin-aiven\",\n        ),\n        SteampipeExtension::new(\n            \"algolia\",\n            \"Steampipe plugin for querying Algolia indexes, logs and more.\",\n            \"https://github.com/turbot/steampipe-plugin-algolia\",\n        ),\n        SteampipeExtension::new(\n            \"alicloud\",\n            \"Steampipe plugin for querying Alibaba Cloud servers, databases, networks, and other resources.\",\n            \"https://github.com/turbot/steampipe-plugin-alicloud\",\n        ),\n        SteampipeExtension::new(\n            \"ansible\",\n            \"Steampipe plugin to query configurations from the Ansible playbooks.\",\n            \"https://github.com/turbot/steampipe-plugin-ansible\",\n        ),\n        SteampipeExtension::new(\n            \"auth0\",\n            \"Use SQL to query users, clients, connections, keys and more from Auth0.\",\n            \"https://github.com/turbot/steampipe-plugin-auth0\",\n        ),\n        SteampipeExtension::new(\n            \"aws\",\n            \"Steampipe plugin for querying instances, buckets, databases and more from AWS.\",\n     
       \"https://github.com/turbot/steampipe-plugin-aws\",\n        ),\n        SteampipeExtension::new(\n            \"awscfn\",\n            \"Steampipe plugin to query data from AWS CloudFormation template files.\",\n            \"https://github.com/turbot/steampipe-plugin-awscfn\",\n        ),\n        SteampipeExtension::new(\n            \"azure\",\n            \"Steampipe plugin for querying resource groups, virtual machines, storage accounts and more from Azure.\",\n            \"https://github.com/turbot/steampipe-plugin-azure\",\n        ),\n        SteampipeExtension::new(\n            \"azuread\",\n            \"Steampipe plugin for querying resource users, groups, applications and more from Azure Active Directory.\",\n            \"https://github.com/turbot/steampipe-plugin-azuread\",\n        ),\n        SteampipeExtension::new(\n            \"azuredevops\",\n            \"Steampipe plugin to query projects, groups, builds and more from Azure DevOps.\",\n            \"https://github.com/turbot/steampipe-plugin-azuredevops\",\n        ),\n        SteampipeExtension::new(\n            \"baleen\",\n            \"Steampipe plugin for querying Baleen.\",\n            \"https://github.com/francois2metz/steampipe-plugin-baleen\",\n        ),\n        SteampipeExtension::new(\n            \"bitbucket\",\n            \"Steampipe plugin for querying repositories, issues, pull requests and more from Bitbucket.\",\n            \"https://github.com/turbot/steampipe-plugin-bitbucket\",\n        ),\n        SteampipeExtension::new(\n            \"bitfinex\",\n            \"Steampipe plugin for querying data from bitfinex\",\n            \"https://github.com/kaggrwal/steampipe-plugin-bitfinex\",\n        ),\n        SteampipeExtension::new(\n            \"btp\",\n            \"Steampipe plugin to query the account details of your SAP Business Technology Platform account.\",\n            \"https://github.com/ajmaradiaga/steampipe-plugin-btp\",\n        ),\n        
SteampipeExtension::new(\n            \"buildkite\",\n            \"Steampipe plugin to query Buildkite pipelines, builds, users and more.\",\n            \"https://github.com/turbot/steampipe-plugin-buildkite\",\n        ),\n        SteampipeExtension::new(\n            \"chaos\",\n            \"Steampipe plugin to cause chaos for testing Steampipe with the craziest edge cases we can think of.\",\n            \"https://github.com/turbot/steampipe-plugin-chaos\",\n        ),\n        SteampipeExtension::new(\n            \"chaosdynamic\",\n            \"Steampipe plugin to test aggregation of dynamic plugin connections.\",\n            \"https://github.com/turbot/steampipe-plugin-chaosdynamic\",\n        ),\n        SteampipeExtension::new(\n            \"circleci\",\n            \"Steampipe plugin for querying resource projects, pipelines, builds and more from CircleCI.\",\n            \"https://github.com/turbot/steampipe-plugin-circleci\",\n        ),\n        SteampipeExtension::new(\n            \"clickup\",\n            \"Steampipe plugin for querying ClickUp Tasks, Lists and other resources.\",\n            \"https://github.com/theapsgroup/steampipe-plugin-clickup\",\n        ),\n        SteampipeExtension::new(\n            \"cloudflare\",\n            \"Steampipe plugin for querying Cloudflare databases, networks, and other resources.\",\n            \"https://github.com/turbot/steampipe-plugin-cloudflare\",\n        ),\n        SteampipeExtension::new(\n            \"code\",\n            \"Steampipe plugin to query secrets and more from Code.\",\n            \"https://github.com/turbot/steampipe-plugin-code\",\n        ),\n        SteampipeExtension::new(\n            \"cohereai\",\n            \"Steampipe plugin to query generations, classifications and more from CohereAI.\",\n            \"https://github.com/mr-destructive/steampipe-plugin-cohereai\",\n        ),\n        SteampipeExtension::new(\n            \"config\",\n            \"Steampipe plugin 
to query data from various types of files like INI, JSON, YML and more.\",\n            \"https://github.com/turbot/steampipe-plugin-config\",\n        ),\n        SteampipeExtension::new(\n            \"confluence\",\n            \"Steampipe plugin for querying pages, spaces, and more from Confluence.\",\n            \"https://github.com/ellisvalentiner/steampipe-plugin-confluence\",\n        ),\n        SteampipeExtension::new(\n            \"consul\",\n            \"Steampipe plugin to query nodes, ACLs, services and more from Consul.\",\n            \"https://github.com/turbot/steampipe-plugin-consul\",\n        ),\n        SteampipeExtension::new(\n            \"crowdstrike\",\n            \"Steampipe plugin to query resources from CrowdStrike.\",\n            \"https://github.com/turbot/steampipe-plugin-crowdstrike\",\n        ),\n        SteampipeExtension::new(\n            \"crtsh\",\n            \"Steampipe plugin to query certificates, logs and more from the crt.sh certificate transparency database.\",\n            \"https://github.com/turbot/steampipe-plugin-crtsh\",\n        ),\n        SteampipeExtension::new(\n            \"csv\",\n            \"Steampipe plugin to query data from CSV files.\",\n            \"https://github.com/turbot/steampipe-plugin-csv\",\n        ),\n        SteampipeExtension::new(\n            \"databricks\",\n            \"Steampipe plugin to query clusters, jobs, users, and more from Databricks.\",\n            \"https://github.com/turbot/steampipe-plugin-databricks\",\n        ),\n        SteampipeExtension::new(\n            \"datadog\",\n            \"Steampipe plugin for querying dashboards, users, roles and more from Datadog.\",\n            \"https://github.com/turbot/steampipe-plugin-datadog\",\n        ),\n        SteampipeExtension::new(\n            \"digitalocean\",\n            \"Steampipe plugin for querying DigitalOcean databases, networks, and other resources.\",\n            
\"https://github.com/turbot/steampipe-plugin-digitalocean\",\n        ),\n        SteampipeExtension::new(\n            \"docker\",\n            \"Steampipe plugin to query Dockerfile commands and more from Docker.\",\n            \"https://github.com/turbot/steampipe-plugin-docker\",\n        ),\n        SteampipeExtension::new(\n            \"dockerhub\",\n            \"Steampipe plugin for querying Docker Hub repositories, tags and other resources.\",\n            \"https://github.com/turbot/steampipe-plugin-dockerhub\",\n        ),\n        SteampipeExtension::new(\n            \"doppler\",\n            \"Steampipe plugin to query projects, environments, secrets and more from Doppler.\",\n            \"https://github.com/turbot/steampipe-plugin-doppler\",\n        ),\n        SteampipeExtension::new(\n            \"duo\",\n            \"Steampipe plugin for querying Duo Security users, logs and more.\",\n            \"https://github.com/turbot/steampipe-plugin-duo\",\n        ),\n        SteampipeExtension::new(\n            \"env0\",\n            \"Steampipe plugin to query projects, teams, users and more from env0.\",\n            \"https://github.com/turbot/steampipe-plugin-env0\",\n        ),\n        SteampipeExtension::new(\n            \"equinix\",\n            \"Steampipe plugin for querying Equinix Metal servers, networks, facilities and more.\",\n            \"https://github.com/turbot/steampipe-plugin-equinix\",\n        ),\n        SteampipeExtension::new(\n            \"exec\",\n            \"Steampipe plugin to run & query shell commands on local and remote servers.\",\n            \"https://github.com/turbot/steampipe-plugin-exec\",\n        ),\n        SteampipeExtension::new(\n            \"fastly\",\n            \"Steampipe plugin to query services, acls, domains and more from Fastly.\",\n            \"https://github.com/turbot/steampipe-plugin-fastly\",\n        ),\n        SteampipeExtension::new(\n            \"finance\",\n            
\"Steampipe plugin to query financial data including quotes and public company information.\",\n            \"https://github.com/turbot/steampipe-plugin-finance\",\n        ),\n        SteampipeExtension::new(\n            \"fly\",\n            \"Steampipe plugin to query applications, volumes, databases, and more from your Fly organization.\",\n            \"https://github.com/turbot/steampipe-plugin-fly\",\n        ),\n        SteampipeExtension::new(\n            \"freshping\",\n            \"Steampipe plugin for querying Freshping.\",\n            \"https://github.com/francois2metz/steampipe-plugin-freshping\",\n        ),\n        SteampipeExtension::new(\n            \"freshservice\",\n            \"Steampipe plugin for querying FreshService agents, assets, tickets and other resources.\",\n            \"https://github.com/theapsgroup/steampipe-plugin-freshservice\",\n        ),\n        SteampipeExtension::new(\n            \"gandi\",\n            \"Steampipe plugin for querying domains, mailboxes, certificates and more from Gandi.\",\n            \"https://github.com/francois2metz/steampipe-plugin-gandi\",\n        ),\n        SteampipeExtension::new(\n            \"gcp\",\n            \"Steampipe plugin for querying buckets, instances, functions and more from GCP.\",\n            \"https://github.com/turbot/steampipe-plugin-gcp\",\n        ),\n        SteampipeExtension::new(\n            \"gitguardian\",\n            \"Steampipe plugin for querying incidents from GitGuardian.\",\n            \"https://github.com/francois2metz/steampipe-plugin-gitguardian\",\n        ),\n        SteampipeExtension::new(\n            \"github\",\n            \"Steampipe plugin for querying GitHub Repositories, Organizations, and other resources.\",\n            \"https://github.com/turbot/steampipe-plugin-github\",\n        ),\n        SteampipeExtension::new(\n            \"gitlab\",\n            \"Steampipe plugin for querying GitLab Repositories, Users and other 
resources.\",\n            \"https://github.com/theapsgroup/steampipe-plugin-gitlab\",\n        ),\n        SteampipeExtension::new(\n            \"godaddy\",\n            \"Steampipe plugin to query domains, orders, certificates and more from GoDaddy.\",\n            \"https://github.com/turbot/steampipe-plugin-godaddy\",\n        ),\n        SteampipeExtension::new(\n            \"googledirectory\",\n            \"Steampipe plugin for querying users, groups, org units and more from your Google Workspace directory.\",\n            \"https://github.com/turbot/steampipe-plugin-googledirectory\",\n        ),\n        SteampipeExtension::new(\n            \"googlesearchconsole\",\n            \"Steampipe plugin for querying data from Google Search Console (GSC).\",\n            \"https://github.com/turbot/steampipe-plugin-googlesearchconsole\",\n        ),\n        SteampipeExtension::new(\n            \"googlesheets\",\n            \"Steampipe plugin for querying data from Google Sheets.\",\n            \"https://github.com/turbot/steampipe-plugin-googlesheets\",\n        ),\n        SteampipeExtension::new(\n            \"googleworkspace\",\n            \"Steampipe plugin for querying users, groups, org units and more from your Google Workspace.\",\n            \"https://github.com/turbot/steampipe-plugin-googleworkspace\",\n        ),\n        SteampipeExtension::new(\n            \"grafana\",\n            \"Steampipe plugin to query dashboards, data sources and more from Grafana.\",\n            \"https://github.com/turbot/steampipe-plugin-grafana\",\n        ),\n        SteampipeExtension::new(\n            \"guardrails\",\n            \"Steampipe plugin to query resources, controls, policies and more from Turbot Guardrails.\",\n            \"https://github.com/turbot/steampipe-plugin-guardrails\",\n        ),\n        SteampipeExtension::new(\n            \"hackernews\",\n            \"Steampipe plugin to query stories, items and users from Hacker News.\",\n           
 \"https://github.com/turbot/steampipe-plugin-hackernews\",\n        ),\n        SteampipeExtension::new(\n            \"hcloud\",\n            \"Steampipe plugin to query servers, networks and more from Hetzner Cloud.\",\n            \"https://github.com/turbot/steampipe-plugin-hcloud\",\n        ),\n        SteampipeExtension::new(\n            \"heroku\",\n            \"Steampipe plugin to query apps, dynos and more from Heroku.\",\n            \"https://github.com/turbot/steampipe-plugin-heroku\",\n        ),\n        SteampipeExtension::new(\n            \"hibp\",\n            \"Steampipe plugin to query breaches, account breaches, pastes and passwords from Have I Been Pwned.\",\n            \"https://github.com/turbot/steampipe-plugin-hibp\",\n        ),\n        SteampipeExtension::new(\n            \"hubspot\",\n            \"Steampipe plugin to query contacts, deals, tickets and more from HubSpot.\",\n            \"https://github.com/turbot/steampipe-plugin-hubspot\",\n        ),\n        SteampipeExtension::new(\n            \"hypothesis\",\n            \"Steampipe plugin to query Hypothesis annotations.\",\n            \"https://github.com/turbot/steampipe-plugin-hypothesis\",\n        ),\n        SteampipeExtension::new(\n            \"ibm\",\n            \"Steampipe plugin to query resources, users and more from IBM Cloud.\",\n            \"https://github.com/turbot/steampipe-plugin-ibm\",\n        ),\n        SteampipeExtension::new(\n            \"imap\",\n            \"Steampipe plugin to query mailboxes and messages using IMAP.\",\n            \"https://github.com/turbot/steampipe-plugin-imap\",\n        ),\n        SteampipeExtension::new(\n            \"ip2locationio\",\n            \"Steampipe plugin to query IP geolocation or WHOIS information from ip2location.io.\",\n            \"https://github.com/ip2location/steampipe-plugin-ip2locationio\",\n        ),\n        SteampipeExtension::new(\n            \"ipinfo\",\n            \"Steampipe 
plugin to query IP address information from ipinfo.io.\",\n            \"https://github.com/turbot/steampipe-plugin-ipinfo\",\n        ),\n        SteampipeExtension::new(\n            \"ipstack\",\n            \"Steampipe plugin for querying location, currency, timezone and security information about an IP address from ipstack.\",\n            \"https://github.com/turbot/steampipe-plugin-ipstack\",\n        ),\n        SteampipeExtension::new(\n            \"jenkins\",\n            \"Steampipe plugin for querying resource jobs, builds, nodes, plugin and more from Jenkins.\",\n            \"https://github.com/turbot/steampipe-plugin-jenkins\",\n        ),\n        SteampipeExtension::new(\n            \"jira\",\n            \"Steampipe plugin for querying sprints, issues, epics and more from Jira.\",\n            \"https://github.com/turbot/steampipe-plugin-jira\",\n        ),\n        SteampipeExtension::new(\n            \"jumpcloud\",\n            \"Steampipe plugin to query servers, applications, user groups, and more from your JumpCloud organization.\",\n            \"https://github.com/turbot/steampipe-plugin-jumpcloud\",\n        ),\n        SteampipeExtension::new(\n            \"keycloak\",\n            \"Steampipe plugin for querying Keycloak users, groups and other resources.\",\n            \"https://github.com/theapsgroup/steampipe-plugin-keycloak\",\n        ),\n        SteampipeExtension::new(\n            \"kolide\",\n            \"Kolide gives you accurate, valuable and complete fleet visibility across Mac, Windows and Linux endpoints\",\n            \"https://github.com/grendel-consulting/steampipe-plugin-kolide\",\n        ),\n        SteampipeExtension::new(\n            \"kubernetes\",\n            \"Steampipe plugin for Kubernetes components.\",\n            \"https://github.com/turbot/steampipe-plugin-kubernetes\",\n        ),\n        SteampipeExtension::new(\n            \"launchdarkly\",\n            \"Steampipe plugin to query projects, 
teams, metrics, flags and more from LaunchDarkly.\",\n            \"https://github.com/turbot/steampipe-plugin-launchdarkly\",\n        ),\n        SteampipeExtension::new(\n            \"ldap\",\n            \"Steampipe plugin for querying users, groups, organizational units and more from LDAP.\",\n            \"https://github.com/turbot/steampipe-plugin-ldap\",\n        ),\n        SteampipeExtension::new(\n            \"linear\",\n            \"Steampipe plugin to query issues, teams, users and more from Linear.\",\n            \"https://github.com/turbot/steampipe-plugin-linear\",\n        ),\n        SteampipeExtension::new(\n            \"linkedin\",\n            \"Steampipe plugin to query LinkedIn profiles.\",\n            \"https://github.com/turbot/steampipe-plugin-linkedin\",\n        ),\n        SteampipeExtension::new(\n            \"linode\",\n            \"Steampipe plugin to query resources, users and more from Linode.\",\n            \"https://github.com/turbot/steampipe-plugin-linode\",\n        ),\n        SteampipeExtension::new(\n            \"mailchimp\",\n            \"Steampipe plugin to query audiences, automation workflows, campaigns, and more from Mailchimp.\",\n            \"https://github.com/turbot/steampipe-plugin-mailchimp\",\n        ),\n        SteampipeExtension::new(\n            \"make\",\n            \"Make plugin for exploring your automations in depth.\",\n            \"https://github.com/marekjalovec/steampipe-plugin-make\",\n        ),\n        SteampipeExtension::new(\n            \"mastodon\",\n            \"Use SQL to instantly query Mastodon timelines, accounts, followers and more.\",\n            \"https://github.com/turbot/steampipe-plugin-mastodon\",\n        ),\n        SteampipeExtension::new(\n            \"microsoft365\",\n            \"Steampipe plugin for querying calendars, contacts, drives, mailboxes and more from Microsoft 365.\",\n            \"https://github.com/turbot/steampipe-plugin-microsoft365\",\n    
    ),\n        SteampipeExtension::new(\n            \"mongodbatlas\",\n            \"Steampipe plugin for querying clusters, users, teams and more from MongoDB Atlas.\",\n            \"https://github.com/turbot/steampipe-plugin-mongodbatlas\",\n        ),\n        SteampipeExtension::new(\n            \"namecheap\",\n            \"Steampipe plugin to query domains, DNS host records and more from Namecheap.\",\n            \"https://github.com/turbot/steampipe-plugin-namecheap\",\n        ),\n        SteampipeExtension::new(\n            \"net\",\n            \"Steampipe plugin for querying DNS records, certificates and other network information.\",\n            \"https://github.com/turbot/steampipe-plugin-net\",\n        ),\n        SteampipeExtension::new(\n            \"newrelic\",\n            \"Steampipe plugin for querying New Relic Alerts, Events and other resources.\",\n            \"https://github.com/turbot/steampipe-plugin-newrelic\",\n        ),\n        SteampipeExtension::new(\n            \"nomad\",\n            \"Steampipe plugin to query nodes, jobs, deployments and more from Nomad.\",\n            \"https://github.com/turbot/steampipe-plugin-nomad\",\n        ),\n        SteampipeExtension::new(\n            \"oci\",\n            \"Steampipe plugin for Oracle Cloud Infrastructure services and resource types.\",\n            \"https://github.com/turbot/steampipe-plugin-oci\",\n        ),\n        SteampipeExtension::new(\n            \"okta\",\n            \"Steampipe plugin for querying resource users, groups, applications and more from Okta.\",\n            \"https://github.com/turbot/steampipe-plugin-okta\",\n        ),\n        SteampipeExtension::new(\n            \"onepassword\",\n            \"Steampipe plugin to query vaults, items, files and more from 1Password.\",\n            \"https://github.com/turbot/steampipe-plugin-onepassword\",\n        ),\n        SteampipeExtension::new(\n            \"openai\",\n            \"Steampipe plugin 
to query models, completions and more from OpenAI.\",\n            \"https://github.com/turbot/steampipe-plugin-openai\",\n        ),\n        SteampipeExtension::new(\n            \"openapi\",\n            \"Steampipe plugin to query introspection of the OpenAPI definition.\",\n            \"https://github.com/turbot/steampipe-plugin-openapi\",\n        ),\n        SteampipeExtension::new(\n            \"openshift\",\n            \"Steampipe plugin to query projects, routes, builds and more from OpenShift.\",\n            \"https://github.com/turbot/steampipe-plugin-openshift\",\n        ),\n        SteampipeExtension::new(\n            \"openstack\",\n            \"Steampipe plugin to query cloud resource information from OpenStack deployments.\",\n            \"https://github.com/ernw/steampipe-plugin-openstack\",\n        ),\n        SteampipeExtension::new(\n            \"opsgenie\",\n            \"Steampipe plugin for querying teams and alerts from Opsgenie.\",\n            \"https://github.com/jplanckeel/steampipe-plugin-opsgenie\",\n        ),\n        SteampipeExtension::new(\n            \"ovh\",\n            \"Steampipe plugin for querying OVH.\",\n            \"https://github.com/francois2metz/steampipe-plugin-ovh\",\n        ),\n        SteampipeExtension::new(\n            \"pagerduty\",\n            \"Steampipe plugin to query services, teams, escalation policies and more from your PagerDuty account.\",\n            \"https://github.com/turbot/steampipe-plugin-pagerduty\",\n        ),\n        SteampipeExtension::new(\n            \"panos\",\n            \"Steampipe plugin to query PAN-OS firewalls, security policies and more.\",\n            \"https://github.com/turbot/steampipe-plugin-panos\",\n        ),\n        SteampipeExtension::new(\n            \"pipes\",\n            \"Steampipe plugin for querying workspaces, connections and more from Turbot Pipes.\",\n            \"https://github.com/turbot/steampipe-plugin-pipes\",\n        ),\n        
SteampipeExtension::new(\n            \"planetscale\",\n            \"Steampipe plugin to query databases, logs and more from PlanetScale.\",\n            \"https://github.com/turbot/steampipe-plugin-planetscale\",\n        ),\n        SteampipeExtension::new(\n            \"prometheus\",\n            \"Steampipe plugin to query metrics, labels, alerts and more from Prometheus.\",\n            \"https://github.com/turbot/steampipe-plugin-prometheus\",\n        ),\n        SteampipeExtension::new(\n            \"reddit\",\n            \"Steampipe plugin to query Reddit users, posts, votes and more.\",\n            \"https://github.com/turbot/steampipe-plugin-reddit\",\n        ),\n        SteampipeExtension::new(\n            \"rss\",\n            \"Steampipe plugin to query RSS channels & Atom feeds\",\n            \"https://github.com/turbot/steampipe-plugin-rss\",\n        ),\n        SteampipeExtension::new(\n            \"salesforce\",\n            \"Steampipe plugin to query accounts, opportunities, users and more from your Salesforce instance.\",\n            \"https://github.com/turbot/steampipe-plugin-salesforce\",\n        ),\n        SteampipeExtension::new(\n            \"scaleway\",\n            \"Steampipe plugin to query servers, networks, databases and more from your Scaleway project.\",\n            \"https://github.com/turbot/steampipe-plugin-scaleway\",\n        ),\n        SteampipeExtension::new(\n            \"scalingo\",\n            \"Steampipe plugin for querying apps, addons and more from Scalingo.\",\n            \"https://github.com/francois2metz/steampipe-plugin-scalingo\",\n        ),\n        SteampipeExtension::new(\n            \"semgrep\",\n            \"Steampipe plugin to query deployments, findings, and projects from Semgrep.\",\n            \"https://github.com/gabrielsoltz/steampipe-plugin-semgrep\",\n        ),\n        SteampipeExtension::new(\n            \"sentry\",\n            \"Steampipe plugin to query organizations, 
projects, teams and more from Sentry.\",\n            \"https://github.com/turbot/steampipe-plugin-sentry\",\n        ),\n        SteampipeExtension::new(\n            \"servicenow\",\n            \"Use SQL to query CMDB CI services, servers, incidents, objects and more from ServiceNow.\",\n            \"https://github.com/turbot/steampipe-plugin-servicenow\",\n        ),\n        SteampipeExtension::new(\n            \"shodan\",\n            \"Steampipe plugin to query host, DNS and exploit information using Shodan.\",\n            \"https://github.com/turbot/steampipe-plugin-shodan\",\n        ),\n        SteampipeExtension::new(\n            \"shopify\",\n            \"Steampipe plugin to query products, order, customers and more from Shopify.\",\n            \"https://github.com/turbot/steampipe-plugin-shopify\",\n        ),\n        SteampipeExtension::new(\n            \"slack\",\n            \"Steampipe plugin for querying Slack Conversations, Groups, Users and other resources.\",\n            \"https://github.com/turbot/steampipe-plugin-slack\",\n        ),\n        SteampipeExtension::new(\n            \"snowflake\",\n            \"Steampipe plugin for querying roles, databases, and more from Snowflake.\",\n            \"https://github.com/turbot/steampipe-plugin-snowflake\",\n        ),\n        SteampipeExtension::new(\n            \"solace\",\n            \"Solace PubSub+ Cloud plugin for exploring your Solace Cloud configuration in depth.\",\n            \"https://github.com/solacelabs/steampipe-plugin-solace\",\n        ),\n        SteampipeExtension::new(\n            \"splunk\",\n            \"Steampipe plugin to query apps, indexes, logs and more from Splunk.\",\n            \"https://github.com/turbot/steampipe-plugin-splunk\",\n        ),\n        SteampipeExtension::new(\n            \"steampipe\",\n            \"Steampipe plugin for querying Steampipe components, such as the available plugins in the steampipe hub.\",\n            
\"https://github.com/turbot/steampipe-plugin-steampipe\",\n        ),\n        SteampipeExtension::new(\n            \"steampipecloud\",\n            \"Steampipe plugin for querying workspaces, connections and more from Steampipe Cloud.\",\n            \"https://github.com/turbot/steampipe-plugin-steampipecloud\",\n        ),\n        SteampipeExtension::new(\n            \"stripe\",\n            \"Steampipe plugin for querying customers, products, invoices and more from Stripe.\",\n            \"https://github.com/turbot/steampipe-plugin-stripe\",\n        ),\n        SteampipeExtension::new(\n            \"supabase\",\n            \"Steampipe plugin to query projects, functions, network restrictions, and more from your Supabase organization.\",\n            \"https://github.com/turbot/steampipe-plugin-supabase\",\n        ),\n        SteampipeExtension::new(\n            \"tailscale\",\n            \"Steampipe plugin to query VPN networks, devices and more from tailscale.\",\n            \"https://github.com/turbot/steampipe-plugin-tailscale\",\n        ),\n        SteampipeExtension::new(\n            \"terraform\",\n            \"Steampipe plugin to query data from Terraform files.\",\n            \"https://github.com/turbot/steampipe-plugin-terraform\",\n        ),\n        SteampipeExtension::new(\n            \"tfe\",\n            \"Steampipe plugin to query resources, users and more from Terraform Enterprise.\",\n            \"https://github.com/turbot/steampipe-plugin-tfe\",\n        ),\n        SteampipeExtension::new(\n            \"tomba\",\n            \"Steampipe plugin to query Domain or Email information from tomba.io.\",\n            \"https://github.com/tomba-io/steampipe-plugin-tomba\",\n        ),\n        SteampipeExtension::new(\n            \"trello\",\n            \"Steampipe plugin to query boards, cards, lists, and more from Trello.\",\n            \"https://github.com/turbot/steampipe-plugin-trello\",\n        ),\n        
SteampipeExtension::new(\n            \"trivy\",\n            \"Steampipe plugin using Trivy to query advisories, vulnerabilities for containers, code and more.\",\n            \"https://github.com/turbot/steampipe-plugin-trivy\",\n        ),\n        SteampipeExtension::new(\n            \"turbot\",\n            \"Steampipe plugin to query resources, controls, policies and more from Turbot.\",\n            \"https://github.com/turbot/steampipe-plugin-turbot\",\n        ),\n        SteampipeExtension::new(\n            \"twilio\",\n            \"Steampipe plugin to query calls, messages and other communication functions from your Twilio project.\",\n            \"https://github.com/turbot/steampipe-plugin-twilio\",\n        ),\n        SteampipeExtension::new(\n            \"twitter\",\n            \"Steampipe plugin to query tweets, users and followers from Twitter.\",\n            \"https://github.com/turbot/steampipe-plugin-twitter\",\n        ),\n        SteampipeExtension::new(\n            \"updown\",\n            \"Steampipe plugin for querying updown.io checks, metrics and downtime data.\",\n            \"https://github.com/turbot/steampipe-plugin-updown\",\n        ),\n        SteampipeExtension::new(\n            \"uptimerobot\",\n            \"Steampipe plugin to query monitors, alert contacts and more from UptimeRobot.\",\n            \"https://github.com/turbot/steampipe-plugin-uptimerobot\",\n        ),\n        SteampipeExtension::new(\n            \"urlscan\",\n            \"Steampipe plugin to query URL scanning results including requests cookies, headers and more from urlscan.io.\",\n            \"https://github.com/turbot/steampipe-plugin-urlscan\",\n        ),\n        SteampipeExtension::new(\n            \"vanta\",\n            \"Steampipe plugin to query users, policies, compliances, and more from your Vanta organization.\",\n            \"https://github.com/turbot/steampipe-plugin-vanta\",\n        ),\n        SteampipeExtension::new(\n      
      \"vault\",\n            \"Steampipe plugin for querying available secret keys (not values), etc from Hashicorp Vault.\",\n            \"https://github.com/theapsgroup/steampipe-plugin-vault\",\n        ),\n        SteampipeExtension::new(\n            \"vercel\",\n            \"Steampipe plugin to query projects, teams, domains and more from Vercel.\",\n            \"https://github.com/turbot/steampipe-plugin-vercel\",\n        ),\n        SteampipeExtension::new(\n            \"virustotal\",\n            \"Steampipe plugin to query file, domain, URL and IP scanning results from VirusTotal.\",\n            \"https://github.com/turbot/steampipe-plugin-virustotal\",\n        ),\n        SteampipeExtension::new(\n            \"vsphere\",\n            \"Steampipe plugin for querying data from a vsphere environment.\",\n            \"https://github.com/theapsgroup/steampipe-plugin-vsphere\",\n        ),\n        SteampipeExtension::new(\n            \"weatherkit\",\n            \"Steampipe plugin for querying weather from WeatherKit.\",\n            \"https://github.com/ellisvalentiner/steampipe-plugin-weatherkit\",\n        ),\n        SteampipeExtension::new(\n            \"whois\",\n            \"Steampipe plugin for querying domains, name servers and contact information from WHOIS.\",\n            \"https://github.com/turbot/steampipe-plugin-whois\",\n        ),\n        SteampipeExtension::new(\n            \"wiz\",\n            \"Steampipe plugin to query security controls, findings, vulnerabilities, and more from your Wiz subscription.\",\n            \"https://github.com/turbot/steampipe-plugin-wiz\",\n        ),\n        SteampipeExtension::new(\n            \"workos\",\n            \"Steampipe plugin to query directories, groups and more from WorkOS.\",\n            \"https://github.com/turbot/steampipe-plugin-workos\",\n        ),\n        SteampipeExtension::new(\n            \"zendesk\",\n            \"Steampipe plugin for querying tickets, users, 
groups and more from Zendesk.\",\n            \"https://github.com/turbot/steampipe-plugin-zendesk\",\n        ),\n        SteampipeExtension::new(\n            \"zoom\",\n            \"Steampipe plugin for querying Zoom meetings, webinars, users and more.\",\n            \"https://github.com/turbot/steampipe-plugin-zoom\",\n        ),\n    ]\n}\n\n#[derive(Debug)]\npub struct SteampipeExtension {\n    pub name: String,\n    pub description: String,\n    pub url: String,\n}\n\nimpl SteampipeExtension {\n    pub fn new(name: &str, description: &str, url: &str) -> SteampipeExtension {\n        SteampipeExtension {\n            name: name.to_string(),\n            description: description.to_string(),\n            url: url.to_string(),\n        }\n    }\n}\n\npub fn get<'a>() -> &'a Vec<SteampipeExtension> {\n    &EXTENSIONS\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_get() {\n        let extensions = get();\n        assert_eq!(143, extensions.len());\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/repository/steampipe/mod.rs",
    "content": "mod extensions;\npub mod repository;\n\npub const URL: &str = \"https://github.com/turbot\";\n"
  },
  {
    "path": "postgresql_extensions/src/repository/steampipe/repository.rs",
    "content": "use crate::Error::ExtensionNotFound;\nuse crate::Result;\nuse crate::matcher::tar_gz_matcher;\nuse crate::model::AvailableExtension;\nuse crate::repository::steampipe::URL;\nuse crate::repository::{Repository, steampipe};\nuse async_trait::async_trait;\nuse postgresql_archive::extractor::{ExtractDirectories, tar_gz_extract};\nuse postgresql_archive::get_archive;\nuse postgresql_archive::repository::github::repository::GitHub;\nuse regex_lite::Regex;\nuse semver::{Version, VersionReq};\nuse std::fmt::Debug;\nuse std::path::PathBuf;\n\n/// Steampipe repository.\n#[derive(Debug)]\npub struct Steampipe;\n\nimpl Steampipe {\n    /// Creates a new Steampipe repository.\n    ///\n    /// # Errors\n    /// * If the repository cannot be created\n    #[expect(clippy::new_ret_no_self)]\n    pub fn new() -> Result<Box<dyn Repository>> {\n        Ok(Box::new(Self))\n    }\n\n    /// Initializes the repository.\n    ///\n    /// # Errors\n    /// * If the repository cannot be initialized.\n    pub fn initialize() -> Result<()> {\n        postgresql_archive::matcher::registry::register(\n            |url| Ok(url.starts_with(URL)),\n            tar_gz_matcher,\n        )?;\n        postgresql_archive::repository::registry::register(\n            |url| Ok(url.starts_with(URL)),\n            Box::new(GitHub::new),\n        )?;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl Repository for Steampipe {\n    fn name(&self) -> &'static str {\n        \"steampipe\"\n    }\n\n    async fn get_available_extensions(&self) -> Result<Vec<AvailableExtension>> {\n        let mut extensions = Vec::new();\n        for steampipe_extension in steampipe::extensions::get() {\n            let extension = AvailableExtension::new(\n                self.name(),\n                steampipe_extension.name.as_str(),\n                steampipe_extension.description.as_str(),\n            );\n\n            extensions.push(extension);\n        }\n        Ok(extensions)\n    }\n\n    async fn 
get_archive(\n        &self,\n        postgresql_version: &str,\n        name: &str,\n        version: &VersionReq,\n    ) -> Result<(Version, Vec<u8>)> {\n        let Some(extension) = steampipe::extensions::get()\n            .iter()\n            .find(|extension| extension.name == name)\n        else {\n            let extension = format!(\"{}:{}:{}\", self.name(), name, version);\n            return Err(ExtensionNotFound(extension));\n        };\n        let url = format!(\"{}?postgresql_version={postgresql_version}\", extension.url);\n        let archive = get_archive(url.as_str(), version).await?;\n        Ok(archive)\n    }\n\n    async fn install(\n        &self,\n        _name: &str,\n        library_dir: PathBuf,\n        extension_dir: PathBuf,\n        archive: &[u8],\n    ) -> Result<Vec<PathBuf>> {\n        let mut extract_directories = ExtractDirectories::default();\n        extract_directories.add_mapping(Regex::new(r\"\\.(dll|dylib|so)$\")?, library_dir);\n        extract_directories.add_mapping(Regex::new(r\"\\.(control|sql)$\")?, extension_dir);\n        let bytes = &archive.to_vec();\n        let files = tar_gz_extract(bytes, &extract_directories)?;\n        Ok(files)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::repository::Repository;\n\n    #[test]\n    fn test_name() {\n        let repository = Steampipe;\n        assert_eq!(\"steampipe\", repository.name());\n    }\n\n    #[tokio::test]\n    async fn test_get_available_extensions() -> Result<()> {\n        let repository = Steampipe;\n        let extensions = repository.get_available_extensions().await?;\n        let extension = &extensions[0];\n\n        assert_eq!(\"abuseipdb\", extension.name());\n        assert_eq!(\n            \"Steampipe plugin to query IP address abuse data and more from AbuseIPDB.\",\n            extension.description()\n        );\n        assert_eq!(143, extensions.len());\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn 
test_get_archive_error() -> anyhow::Result<()> {\n        let repository = Steampipe;\n        let postgresql_version = \"15.7\";\n        let name = \"does-not-exist\";\n        let version = VersionReq::parse(\"=0.12.0\")?;\n        let result = repository\n            .get_archive(postgresql_version, name, &version)\n            .await;\n        assert!(result.is_err());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/src/repository/tensor_chord/mod.rs",
    "content": "pub mod repository;\n\npub const URL: &str = \"https://github.com/tensorchord\";\n"
  },
  {
    "path": "postgresql_extensions/src/repository/tensor_chord/repository.rs",
    "content": "use crate::Result;\nuse crate::matcher::zip_matcher;\nuse crate::model::AvailableExtension;\nuse crate::repository::Repository;\nuse crate::repository::tensor_chord::URL;\nuse async_trait::async_trait;\nuse postgresql_archive::extractor::{ExtractDirectories, zip_extract};\nuse postgresql_archive::get_archive;\nuse postgresql_archive::repository::github::repository::GitHub;\nuse regex_lite::Regex;\nuse semver::{Version, VersionReq};\nuse std::fmt::Debug;\nuse std::path::PathBuf;\n\n/// TensorChord repository.\n#[derive(Debug)]\npub struct TensorChord;\n\nimpl TensorChord {\n    /// Creates a new TensorChord repository.\n    ///\n    /// # Errors\n    /// * If the repository cannot be created\n    #[expect(clippy::new_ret_no_self)]\n    pub fn new() -> Result<Box<dyn Repository>> {\n        Ok(Box::new(Self))\n    }\n\n    /// Initializes the repository.\n    ///\n    /// # Errors\n    /// * If the repository cannot be initialized.\n    pub fn initialize() -> Result<()> {\n        postgresql_archive::matcher::registry::register(\n            |url| Ok(url.starts_with(URL)),\n            zip_matcher,\n        )?;\n        postgresql_archive::repository::registry::register(\n            |url| Ok(url.starts_with(URL)),\n            Box::new(GitHub::new),\n        )?;\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl Repository for TensorChord {\n    fn name(&self) -> &'static str {\n        \"tensor-chord\"\n    }\n\n    async fn get_available_extensions(&self) -> Result<Vec<AvailableExtension>> {\n        let extensions = vec![AvailableExtension::new(\n            self.name(),\n            \"pgvecto.rs\",\n            \"Scalable, Low-latency and Hybrid-enabled Vector Search\",\n        )];\n        Ok(extensions)\n    }\n\n    async fn get_archive(\n        &self,\n        postgresql_version: &str,\n        name: &str,\n        version: &VersionReq,\n    ) -> Result<(Version, Vec<u8>)> {\n        let url = 
format!(\"{URL}/{name}?postgresql_version={postgresql_version}\");\n        let archive = get_archive(url.as_str(), version).await?;\n        Ok(archive)\n    }\n\n    async fn install(\n        &self,\n        _name: &str,\n        library_dir: PathBuf,\n        extension_dir: PathBuf,\n        archive: &[u8],\n    ) -> Result<Vec<PathBuf>> {\n        let mut extract_directories = ExtractDirectories::default();\n        extract_directories.add_mapping(Regex::new(r\"\\.(dll|dylib|so)$\")?, library_dir);\n        extract_directories.add_mapping(Regex::new(r\"\\.(control|sql)$\")?, extension_dir);\n        let bytes = &archive.to_vec();\n        let files = zip_extract(bytes, &extract_directories)?;\n        Ok(files)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::repository::Repository;\n\n    #[test]\n    fn test_name() {\n        let repository = TensorChord;\n        assert_eq!(\"tensor-chord\", repository.name());\n    }\n\n    #[tokio::test]\n    async fn test_get_available_extensions() -> Result<()> {\n        let repository = TensorChord;\n        let extensions = repository.get_available_extensions().await?;\n        let extension = &extensions[0];\n\n        assert_eq!(\"pgvecto.rs\", extension.name());\n        assert_eq!(\n            \"Scalable, Low-latency and Hybrid-enabled Vector Search\",\n            extension.description()\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "postgresql_extensions/tests/blocking.rs",
    "content": "#[cfg(feature = \"blocking\")]\nuse test_log::test;\n\n#[cfg(feature = \"blocking\")]\n#[test]\nfn test_get_available_extensions() -> anyhow::Result<()> {\n    let extensions = postgresql_extensions::blocking::get_available_extensions()?;\n    #[cfg(feature = \"steampipe\")]\n    assert!(\n        extensions\n            .iter()\n            .any(|extension| extension.namespace() == \"steampipe\")\n    );\n    #[cfg(feature = \"tensor-chord\")]\n    assert!(\n        extensions\n            .iter()\n            .any(|extension| extension.namespace() == \"tensor-chord\")\n    );\n    Ok(())\n}\n\n#[cfg(all(target_os = \"linux\", feature = \"blocking\", feature = \"tensor-chord\"))]\n#[test]\nfn test_extensions_blocking_lifecycle() -> anyhow::Result<()> {\n    let installation_dir = tempfile::tempdir()?.path().to_path_buf();\n    let postgresql_version = semver::VersionReq::parse(\"=16.4.0\")?;\n    let settings = postgresql_embedded::Settings {\n        version: postgresql_version.clone(),\n        installation_dir: installation_dir.clone(),\n        ..Default::default()\n    };\n    let mut postgresql = postgresql_embedded::blocking::PostgreSQL::new(settings);\n\n    postgresql.setup()?;\n\n    let settings = postgresql.settings();\n    // Skip the test if the PostgreSQL version does not match; when testing with the 'bundled'\n    // feature, the version may vary and the test will fail.\n    if settings.version != postgresql_version {\n        return Ok(());\n    }\n\n    let namespace = \"tensor-chord\";\n    let name = \"pgvecto.rs\";\n    let version = semver::VersionReq::parse(\"=0.3.0\")?;\n\n    let installed_extensions = postgresql_extensions::blocking::get_installed_extensions(settings)?;\n    assert!(installed_extensions.is_empty());\n\n    postgresql_extensions::blocking::install(settings, namespace, name, &version)?;\n\n    let installed_extensions = postgresql_extensions::blocking::get_installed_extensions(settings)?;\n    
assert!(!installed_extensions.is_empty());\n\n    postgresql_extensions::blocking::uninstall(settings, namespace, name)?;\n\n    let installed_extensions = postgresql_extensions::blocking::get_installed_extensions(settings)?;\n    assert!(installed_extensions.is_empty());\n\n    std::fs::remove_dir_all(&installation_dir)?;\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_extensions/tests/extensions.rs",
    "content": "use anyhow::Result;\nuse postgresql_extensions::get_available_extensions;\n\n#[tokio::test]\nasync fn test_get_available_extensions() -> Result<()> {\n    let extensions = get_available_extensions().await?;\n    #[cfg(feature = \"steampipe\")]\n    assert!(\n        extensions\n            .iter()\n            .any(|extension| extension.namespace() == \"steampipe\")\n    );\n    #[cfg(feature = \"tensor-chord\")]\n    assert!(\n        extensions\n            .iter()\n            .any(|extension| extension.namespace() == \"tensor-chord\")\n    );\n    Ok(())\n}\n\n#[cfg(all(target_os = \"linux\", feature = \"tensor-chord\"))]\n#[tokio::test]\nasync fn test_extensions_tensor_chord_lifecycle() -> Result<()> {\n    let installation_dir = tempfile::tempdir()?.path().to_path_buf();\n    let postgresql_version = semver::VersionReq::parse(\"=16.4.0\")?;\n    let settings = postgresql_embedded::Settings {\n        version: postgresql_version.clone(),\n        installation_dir: installation_dir.clone(),\n        ..Default::default()\n    };\n    let mut postgresql = postgresql_embedded::PostgreSQL::new(settings);\n\n    postgresql.setup().await?;\n\n    let settings = postgresql.settings();\n    // Skip the test if the PostgreSQL version does not match; when testing with the 'bundled'\n    // feature, the version may vary and the test will fail.\n    if settings.version != postgresql_version {\n        return Ok(());\n    }\n\n    let namespace = \"tensor-chord\";\n    let name = \"pgvecto.rs\";\n    let version = semver::VersionReq::parse(\"=0.3.0\")?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(installed_extensions.is_empty());\n\n    postgresql_extensions::install(settings, namespace, name, &version).await?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(!installed_extensions.is_empty());\n\n    
postgresql_extensions::uninstall(settings, namespace, name).await?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(installed_extensions.is_empty());\n\n    tokio::fs::remove_dir_all(&installation_dir).await?;\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_extensions/tests/portal_corp.rs",
    "content": "#[cfg(not(any(\n    all(target_os = \"linux\", target_arch = \"aarch64\"),\n    all(target_os = \"macos\", target_arch = \"x86_64\")\n)))]\n#[cfg(feature = \"portal-corp\")]\n#[tokio::test]\nasync fn test_extensions_portal_corp_lifecycle() -> anyhow::Result<()> {\n    let installation_dir = tempfile::tempdir()?.path().to_path_buf();\n    let postgresql_version = semver::VersionReq::parse(\"=16.4.0\")?;\n    let settings = postgresql_embedded::Settings {\n        version: postgresql_version.clone(),\n        installation_dir: installation_dir.clone(),\n        ..Default::default()\n    };\n    let mut postgresql = postgresql_embedded::PostgreSQL::new(settings);\n\n    postgresql.setup().await?;\n\n    let settings = postgresql.settings();\n    // Skip the test if the PostgreSQL version does not match; when testing with the 'bundled'\n    // feature, the version may vary and the test will fail.\n    if settings.version != postgresql_version {\n        return Ok(());\n    }\n\n    let namespace = \"portal-corp\";\n    let name = \"pgvector_compiled\";\n    let version = semver::VersionReq::parse(\"=0.16.12\")?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(installed_extensions.is_empty());\n\n    postgresql_extensions::install(settings, namespace, name, &version).await?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(!installed_extensions.is_empty());\n\n    postgresql_extensions::uninstall(settings, namespace, name).await?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(installed_extensions.is_empty());\n\n    tokio::fs::remove_dir_all(&installation_dir).await?;\n    Ok(())\n}\n"
  },
  {
    "path": "postgresql_extensions/tests/steampipe.rs",
    "content": "#[cfg(any(target_os = \"linux\", target_os = \"macos\"))]\n#[cfg(feature = \"steampipe\")]\n#[tokio::test]\nasync fn test_extensions_steampipe_lifecycle() -> anyhow::Result<()> {\n    let installation_dir = tempfile::tempdir()?.path().to_path_buf();\n    let postgresql_version = semver::VersionReq::parse(\"=15.7.0\")?;\n    let settings = postgresql_embedded::Settings {\n        version: postgresql_version.clone(),\n        installation_dir: installation_dir.clone(),\n        ..Default::default()\n    };\n    let mut postgresql = postgresql_embedded::PostgreSQL::new(settings);\n\n    postgresql.setup().await?;\n\n    let settings = postgresql.settings();\n    // Skip the test if the PostgreSQL version does not match; when testing with the 'bundled'\n    // feature, the version may vary and the test will fail.\n    if settings.version != postgresql_version {\n        return Ok(());\n    }\n\n    let namespace = \"steampipe\";\n    let name = \"csv\";\n    let version = semver::VersionReq::parse(\"=0.12.0\")?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(installed_extensions.is_empty());\n\n    postgresql_extensions::install(settings, namespace, name, &version).await?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(!installed_extensions.is_empty());\n\n    postgresql_extensions::uninstall(settings, namespace, name).await?;\n\n    let installed_extensions = postgresql_extensions::get_installed_extensions(settings).await?;\n    assert!(installed_extensions.is_empty());\n\n    tokio::fs::remove_dir_all(&installation_dir).await?;\n    Ok(())\n}\n"
  },
  {
    "path": "release-plz.toml",
    "content": "[workspace]\nchangelog_path = \"./CHANGELOG.md\"\ngit_release_enable = false\ngit_tag_enable = false\npr_name = \"postgresql-embedded-v{{ version }}\"\nrelease_always = false\n\n[[package]]\nname = \"postgresql_embedded\"\nchangelog_update = true\nchangelog_include = [\n    \"postgresql_archive\",\n    \"postgresql_commands\",\n    \"postgresql_extensions\",\n]\ngit_release_enable = true\ngit_release_name = \"v{{ version }}\"\ngit_tag_enable = true\ngit_tag_name = \"v{{ version }}\"\n\n[changelog]\nbody = \"\"\"\n\n## `{{ package }}` - [{{ version | trim_start_matches(pat=\"v\") }}]{%- if release_link -%}({{ release_link }}){% endif %} - {{ timestamp | date(format=\"%Y-%m-%d\") }}\n{% for group, commits in commits | group_by(attribute=\"group\") %}\n### {{ group | upper_first }}\n{% for commit in commits %}\n{%- if commit.scope -%}\n- *({{commit.scope}})* {% if commit.breaking %}[**breaking**] {% endif %}{{ commit.message }}{%- if commit.links %} ({% for link in commit.links %}[{{link.text}}]({{link.href}}) {% endfor -%}){% endif %}\n{% else -%}\n- {% if commit.breaking %}[**breaking**] {% endif %}{{ commit.message }}\n{% endif -%}\n{% endfor -%}\n{% endfor -%}\n\"\"\"\n"
  },
  {
    "path": "rust-toolchain.toml",
    "content": "[toolchain]\nchannel = \"1.92.0\"\nprofile = \"default\"\n"
  }
]