Full Code of Jagalite/superseedr for AI

Repository: Jagalite/superseedr
Branch: main
Commit: 7249dbf25041
Files: 278
Total size: 4.8 MB (~1.3M tokens, 4930 symbols)

Directory structure:
gitextract_7i7vgl0x/

├── .dockerignore
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── documentation.yml
│   │   ├── enhancement.yml
│   │   ├── feature_request.yml
│   │   └── questions.yml
│   ├── dependabot.yml
│   └── workflows/
│       ├── integration-cluster-cli.yml
│       ├── integration-interop.yml
│       ├── nightly.yml
│       └── rust.yml
├── .gitignore
├── .gluetun.env.example
├── AGENTS.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Cargo.toml
├── Dockerfile
├── LICENSE
├── README.md
├── agentic_plans/
│   ├── cargo_dependency_assessment_2026-03-12.md
│   ├── cli_control_status_testing.md
│   ├── cli_shared_config_agent_validation_plan_2026-03-19.md
│   ├── client_diagnostics_full_implementation_plan_2026-05-01.md
│   ├── dht_global_planner_budget_plan_2026-04-24.md
│   ├── dht_resumable_crawls_plan_2026-04-19.md
│   ├── dht_soak_keep_after_discard_2026-04-23.md
│   ├── integration_harness_plan.md
│   ├── integrity_scheduler_plan_2026-03-03.md
│   ├── layered_shared_config_plan_2026-03-13.md
│   ├── multi_instance_zero_config_scaling_plan_2026-03-12.md
│   ├── network_activity_chart_panel_expansion_plan_2026-03-05.md
│   ├── network_history_persistence_async_restore_plan_2026-02-24.md
│   ├── non_aligned_piece_local_refactor_plan.md
│   ├── rss_tui_selection_implementation_plan.md
│   ├── runtime_scalability_cleanup_plan_2026-03-12.md
│   ├── startup_churn_cpu_reimplementation_plan_2026-03-01.md
│   ├── state_fuzz_harness_disconnect_cleanup_handoff_2026-02-13.md
│   ├── system_health_prober_plan_2026-03-27.md
│   ├── terminal_paste_fallback_plan_2026-03-10.md
│   ├── torrent_metadata_write_hardening_plan_2026-04-16.md
│   ├── torrent_remove_delete_lifecycle_plan_2026-03-02.md
│   ├── torrent_restart_revalidate_refactor_plan_2026-03-20.md
│   ├── tui_architecture_refactor.md
│   ├── tui_particle_theme_layers_plan_2026-02-25.md
│   ├── tui_phase0_baseline.md
│   ├── tui_phase0_manual_parity_checklist.md
│   └── v2_identity_lossiness_review_2026-04-14.md
├── agentic_prompts/
│   ├── changelog.md
│   ├── comments.md
│   ├── maintenance_task.md
│   └── review.md
├── agentic_testing/
│   ├── results.json
│   └── summary.md
├── assets/
│   └── app_icon.icns
├── docker-compose.yml
├── docs/
│   ├── CHANGELOG.md
│   ├── FAQ.md
│   ├── ROADMAP.md
│   ├── architecture.md
│   ├── cli.md
│   ├── dht-ownership-plan.md
│   ├── integration-e2e-automation-plan.md
│   ├── integration-harness.md
│   ├── shared-config.md
│   ├── synthetic-benchmark.md
│   └── tuning.md
├── integration_tests/
│   ├── README.md
│   ├── __init__.py
│   ├── cluster_cli/
│   │   ├── __init__.py
│   │   ├── fixtures/
│   │   │   └── manifest.json
│   │   ├── manifest.py
│   │   ├── run.py
│   │   ├── runner.py
│   │   └── tests/
│   │       ├── test_cluster_cli.py
│   │       └── test_manifest.py
│   ├── docker/
│   │   ├── docker-compose.cluster-cli.yml
│   │   ├── docker-compose.interop.yml
│   │   └── tracker.py
│   ├── harness/
│   │   ├── __init__.py
│   │   ├── clients/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── qbittorrent.py
│   │   │   ├── superseedr.py
│   │   │   └── transmission.py
│   │   ├── config.py
│   │   ├── docker_ctl.py
│   │   ├── manifest.py
│   │   ├── pytest.ini
│   │   ├── run.py
│   │   ├── scenarios/
│   │   │   ├── __init__.py
│   │   │   ├── qbittorrent_to_superseedr.py
│   │   │   ├── superseedr_to_qbittorrent.py
│   │   │   ├── superseedr_to_superseedr.py
│   │   │   ├── superseedr_to_transmission.py
│   │   │   └── transmission_to_superseedr.py
│   │   └── tests/
│   │       ├── test_manifest.py
│   │       ├── test_qbittorrent_auth_interop.py
│   │       ├── test_qbittorrent_to_superseedr_interop.py
│   │       ├── test_stub_adapters.py
│   │       ├── test_superseedr_interop.py
│   │       ├── test_superseedr_to_qbittorrent_interop.py
│   │       ├── test_superseedr_to_transmission_interop.py
│   │       ├── test_transmission_auth_interop.py
│   │       └── test_transmission_to_superseedr_interop.py
│   ├── run_cluster_cli.sh
│   ├── run_interop.sh
│   ├── settings.toml
│   └── torrents/
│       ├── hybrid/
│       │   ├── multi_file.torrent
│       │   ├── nested.torrent
│       │   ├── single_16k.bin.torrent
│       │   ├── single_4k.bin.torrent
│       │   └── single_8k.bin.torrent
│       ├── v1/
│       │   ├── multi_file.torrent
│       │   ├── nested.torrent
│       │   ├── single_16k.bin.torrent
│       │   ├── single_25k.bin.torrent
│       │   ├── single_4k.bin.torrent
│       │   └── single_8k.bin.torrent
│       └── v2/
│           ├── multi_file.torrent
│           ├── nested.torrent
│           ├── single_16k.bin.torrent
│           ├── single_4k.bin.torrent
│           └── single_8k.bin.torrent
├── packaging/
│   └── windows/
│       └── wix-template.xml
├── proptest-regressions/
│   ├── networking/
│   │   └── session.txt
│   └── torrent_manager/
│       └── state.txt
├── pytest.ini
├── requirements-integration.txt
├── rust-toolchain.toml
├── scripts/
│   ├── build_osx_universal_pkg.sh
│   ├── clear_integration_output.py
│   ├── docker_build.sh
│   ├── extract_merkle.py
│   ├── file_descriptors_printout.sh
│   ├── generate_integration_bins.py
│   ├── generate_integration_torrents.py
│   ├── get_process_FDs.sh
│   ├── git_tag.sh
│   ├── grep_io_errors.sh
│   ├── hash.py
│   ├── private_build.sh
│   ├── summarize_dht_soak.py
│   ├── test-state-simulations.sh
│   └── validate_integration_output.py
├── src/
│   ├── app.rs
│   ├── command.rs
│   ├── config.rs
│   ├── control_service.rs
│   ├── dht/
│   │   ├── anomaly.rs
│   │   ├── bep42.rs
│   │   ├── bootstrap.rs
│   │   ├── health.rs
│   │   ├── inbound.rs
│   │   ├── krpc.rs
│   │   ├── lookup.rs
│   │   ├── mod.rs
│   │   ├── peer_store.rs
│   │   ├── persist.rs
│   │   ├── public_addr.rs
│   │   ├── routing.rs
│   │   ├── scheduler.rs
│   │   ├── service/
│   │   │   ├── api.rs
│   │   │   ├── api_tests.rs
│   │   │   ├── command_tests.rs
│   │   │   ├── commands.rs
│   │   │   ├── config.rs
│   │   │   ├── driver.rs
│   │   │   ├── driver_tests.rs
│   │   │   ├── effects.rs
│   │   │   ├── lifecycle.rs
│   │   │   ├── lifecycle_tests.rs
│   │   │   ├── monitor.rs
│   │   │   ├── monitor_tests.rs
│   │   │   ├── planner/
│   │   │   │   ├── drain.rs
│   │   │   │   ├── drain_tests.rs
│   │   │   │   ├── invariant_tests.rs
│   │   │   │   ├── invariants.rs
│   │   │   │   ├── reducer_tests.rs
│   │   │   │   ├── replay_tests.rs
│   │   │   │   ├── selection.rs
│   │   │   │   ├── selection_tests.rs
│   │   │   │   ├── test_support.rs
│   │   │   │   └── types.rs
│   │   │   ├── planner.rs
│   │   │   ├── replay_tests.rs
│   │   │   ├── runtime.rs
│   │   │   ├── runtime_command_replay_tests.rs
│   │   │   ├── runtime_effect_tests.rs
│   │   │   ├── state/
│   │   │   │   ├── demand_command.rs
│   │   │   │   ├── mod.rs
│   │   │   │   └── service.rs
│   │   │   ├── state_tests.rs
│   │   │   ├── status.rs
│   │   │   ├── status_tests.rs
│   │   │   ├── subscriber_tests.rs
│   │   │   ├── subscribers.rs
│   │   │   └── test_support.rs
│   │   ├── service.rs
│   │   ├── test_support.rs
│   │   ├── token.rs
│   │   ├── transport.rs
│   │   └── types.rs
│   ├── dht_service.rs
│   ├── dht_stub.rs
│   ├── errors.rs
│   ├── fs_atomic.rs
│   ├── integrations/
│   │   ├── cli.rs
│   │   ├── control.rs
│   │   ├── mod.rs
│   │   ├── rss_ingest.rs
│   │   ├── rss_service.rs
│   │   ├── rss_url_safety.rs
│   │   ├── status.rs
│   │   └── watcher.rs
│   ├── integrity_scheduler.rs
│   ├── logging.rs
│   ├── main.rs
│   ├── networking/
│   │   ├── mod.rs
│   │   ├── protocol.rs
│   │   ├── session.rs
│   │   └── web_seed_worker.rs
│   ├── persistence/
│   │   ├── README.md
│   │   ├── activity_history.rs
│   │   ├── event_journal.rs
│   │   ├── mod.rs
│   │   ├── network_history.rs
│   │   └── rss.rs
│   ├── resource_manager.rs
│   ├── storage.rs
│   ├── synthetic_load.rs
│   ├── telemetry/
│   │   ├── activity_history_telemetry.rs
│   │   ├── manager_telemetry.rs
│   │   ├── mod.rs
│   │   ├── network_history_telemetry.rs
│   │   ├── restore_densify.rs
│   │   └── ui_telemetry.rs
│   ├── theme.rs
│   ├── token_bucket.rs
│   ├── torrent_file/
│   │   ├── mod.rs
│   │   └── parser.rs
│   ├── torrent_identity.rs
│   ├── torrent_manager/
│   │   ├── block_manager.rs
│   │   ├── manager.rs
│   │   ├── merkle.rs
│   │   ├── mod.rs
│   │   ├── piece_manager.rs
│   │   └── state.rs
│   ├── tracker/
│   │   ├── client.rs
│   │   └── mod.rs
│   ├── tui/
│   │   ├── README.md
│   │   ├── effects.rs
│   │   ├── events.rs
│   │   ├── formatters.rs
│   │   ├── layout/
│   │   │   ├── browser.rs
│   │   │   ├── common.rs
│   │   │   └── normal.rs
│   │   ├── layout.rs
│   │   ├── mod.rs
│   │   ├── particles.rs
│   │   ├── paste_burst.rs
│   │   ├── screen_context.rs
│   │   ├── screens/
│   │   │   ├── browser.rs
│   │   │   ├── config.rs
│   │   │   ├── delete_confirm.rs
│   │   │   ├── help.rs
│   │   │   ├── journal.rs
│   │   │   ├── mod.rs
│   │   │   ├── normal.rs
│   │   │   ├── power.rs
│   │   │   ├── rss.rs
│   │   │   └── welcome.rs
│   │   ├── tree.rs
│   │   └── view.rs
│   ├── tuning/
│   │   └── mod.rs
│   └── watch_inbox.rs
└── wix/
    └── main.wxs

================================================
FILE CONTENTS
================================================

================================================
FILE: .dockerignore
================================================
.git
.gitignore
target/


================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.yml
================================================
name: 🐛 Bug Report
description: Report a bug or unexpected behavior
title: "[Bug]: "
labels: ["type: bug", "triage: new"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report a bug! Please fill out the information below to help us resolve it quickly.
        
        **Before submitting:** Please search [existing issues](https://github.com/Jagalite/superseedr/issues) to avoid duplicates.
        
  - type: textarea
    id: description
    attributes:
      label: Bug Description
      description: A clear and concise description of what the bug is
      placeholder: What went wrong?
    validations:
      required: true
      
  - type: textarea
    id: expected
    attributes:
      label: Expected Behavior
      description: What you expected to happen
      placeholder: What should have happened instead?
    validations:
      required: true
      
  - type: textarea
    id: steps
    attributes:
      label: Steps to Reproduce
      description: Detailed steps to reproduce the behavior
      placeholder: |
        1. Start superseedr with '...'
        2. Add torrent via '...'
        3. Navigate to '...'
        4. See error
    validations:
      required: true
      
  - type: dropdown
    id: install-method
    attributes:
      label: Installation Method
      description: How did you install superseedr?
      options:
        - Native (cargo install)
        - Native (built from source)
        - Docker (standalone - no VPN)
        - Docker (with Gluetun VPN)
        - Package manager (AUR, .deb, .pkg, etc.)
        - Other (please specify below)
    validations:
      required: true
      
  - type: input
    id: version
    attributes:
      label: Superseedr Version
      description: Run `superseedr --version` or check the TUI. For Docker, check image tag.
      placeholder: "e.g., 0.9.28 or commit hash"
    validations:
      required: true
      
  - type: dropdown
    id: os
    attributes:
      label: Operating System
      description: What OS are you running?
      options:
        - Linux
        - macOS
        - Windows
        - Other (please specify below)
    validations:
      required: true
      
  - type: textarea
    id: logs
    attributes:
      label: Relevant Logs or Error Messages
      description: Please paste any relevant logs, error messages, or stack traces
      render: shell
      placeholder: |
        Paste logs here...
        
  - type: textarea
    id: context
    attributes:
      label: Additional Context
      description: |
        Any other relevant information:
        - VPN provider (if using Docker with Gluetun)
        - Terminal emulator (iTerm2, Windows Terminal, Alacritty, etc.)
        - Any custom configuration
        - Screenshots (if UI-related)
      placeholder: Add any other context about the problem here


================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
blank_issues_enabled: false
contact_links:
  - name: 💬 GitHub Discussions
    url: https://github.com/Jagalite/superseedr/discussions
    about: Ask questions, share ideas, and discuss with the community
  - name: 📖 Documentation
    url: https://github.com/Jagalite/superseedr#readme
    about: Read the project documentation and setup guides
  - name: ❓ FAQ
    url: https://github.com/Jagalite/superseedr/blob/main/docs/FAQ.md
    about: Check frequently asked questions
  - name: 🗺️ Roadmap
    url: https://github.com/Jagalite/superseedr/blob/main/docs/ROADMAP.md
    about: See planned features and project direction


================================================
FILE: .github/ISSUE_TEMPLATE/documentation.yml
================================================
name: 📚 Documentation
description: Report documentation issues or suggest improvements
title: "[Docs]: "
labels: ["type: documentation", "triage: new"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for helping improve our documentation!
        
  - type: dropdown
    id: doc-type
    attributes:
      label: Documentation Type
      description: Which documentation needs improvement?
      options:
        - README.md
        - CONTRIBUTING.md
        - docs/FAQ.md
        - docs/ROADMAP.md
        - docs/CHANGELOG.md
        - Code comments / rustdoc
        - Docker setup guides (.env examples)
        - GitHub wiki
        - Other (please specify)
    validations:
      required: true
      
  - type: textarea
    id: issue
    attributes:
      label: What's the issue?
      description: What's unclear, incorrect, outdated, or missing?
      placeholder: |
        The documentation says... but it should say...
        This section is confusing because...
        There's no documentation for...
    validations:
      required: true
      
  - type: textarea
    id: suggestion
    attributes:
      label: Suggested Improvement
      description: How should this be fixed or improved? (optional but helpful)
      placeholder: |
        Add a section explaining...
        Change the wording to...
        Include an example of...
        
  - type: input
    id: location
    attributes:
      label: Location
      description: Where is this documentation? (URL, file path, or section name)
      placeholder: "e.g., README.md line 42, or https://github.com/..."
      
  - type: textarea
    id: context
    attributes:
      label: Additional Context
      description: Any other relevant information or examples


================================================
FILE: .github/ISSUE_TEMPLATE/enhancement.yml
================================================
name: 🔧 Enhancement
description: Suggest an improvement to existing functionality
title: "[Enhancement]: "
labels: ["type: enhancement", "triage: new"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for suggesting an enhancement!
        
        **Before submitting:** Please search [existing issues](https://github.com/Jagalite/superseedr/issues) to see if this has already been proposed.
        
  - type: textarea
    id: current
    attributes:
      label: Current Behavior
      description: Describe how the feature currently works
      placeholder: |
        Currently, when I...
        The feature does...
    validations:
      required: true
      
  - type: textarea
    id: proposed
    attributes:
      label: Proposed Improvement
      description: How could this be improved?
      placeholder: |
        Instead, it should...
        A better approach would be...
        This could be enhanced by...
    validations:
      required: true
      
  - type: dropdown
    id: component
    attributes:
      label: Which component does this affect?
      description: What part of superseedr would this improve?
      options:
        - TUI/Interface
        - BitTorrent Protocol
        - Docker/VPN Integration
        - Networking/Port Management
        - Configuration/Settings
        - Performance/Efficiency
        - Documentation
        - Testing/CI-CD
        - Other (please specify below)
    validations:
      required: true
      
  - type: textarea
    id: benefit
    attributes:
      label: Benefits
      description: Why would this improvement be valuable?
      placeholder: |
        This would help users...
        The benefit would be...
        This solves the problem of...
    validations:
      required: true
      
  - type: checkboxes
    id: breaking
    attributes:
      label: Breaking Change
      description: Would this change existing behavior?
      options:
        - label: This would be a breaking change
        - label: This affects private tracker builds
        
  - type: textarea
    id: context
    attributes:
      label: Additional Context
      description: Any other context, examples, or screenshots
      placeholder: Add any other relevant information here


================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.yml
================================================
name: ✨ Feature Request
description: Suggest a new feature or capability
title: "[Feature]: "
labels: ["type: feature", "triage: new"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for suggesting a feature! 
        
        **Before submitting:** Please search [existing issues](https://github.com/Jagalite/superseedr/issues) and [discussions](https://github.com/Jagalite/superseedr/discussions) to see if this has already been proposed.
        
  - type: textarea
    id: problem
    attributes:
      label: Problem Statement
      description: What problem or user need does this address?
      placeholder: |
        I'm frustrated when...
        Users need to be able to...
        Currently there's no way to...
    validations:
      required: true
      
  - type: textarea
    id: solution
    attributes:
      label: Proposed Solution
      description: Describe how you envision this feature working
      placeholder: |
        The feature would work like this...
        Users would interact with it by...
        It would appear in the TUI as...
    validations:
      required: true
      
  - type: textarea
    id: alternatives
    attributes:
      label: Alternative Solutions or Workarounds
      description: Have you considered any alternative solutions? Are there current workarounds?
      placeholder: |
        Alternative approach: ...
        Current workaround: ...
        
  - type: dropdown
    id: component
    attributes:
      label: Which component does this affect?
      description: Where would this feature live?
      options:
        - TUI/Interface
        - BitTorrent Protocol
        - Docker/VPN Integration
        - Networking/Port Management
        - Configuration/Settings
        - Performance/Efficiency
        - Documentation
        - Other (please specify below)
    validations:
      required: true
      
  - type: checkboxes
    id: breaking
    attributes:
      label: Breaking Change
      description: Would this require changes to existing behavior or configuration?
      options:
        - label: This would be a breaking change (requires user action, config migration, etc.)
        - label: This affects private tracker builds (DHT/PEX disabled)
          
  - type: textarea
    id: context
    attributes:
      label: Additional Context
      description: |
        Any other context, examples, mockups, or links:
        - Similar features in other clients
        - Screenshots or mockups
        - Use case examples
      placeholder: Add any other context, screenshots, or examples here


================================================
FILE: .github/ISSUE_TEMPLATE/questions.yml
================================================
name: ❓ Question
description: Ask a question about using superseedr
title: "[Question]: "
labels: ["type: question", "triage: new"]
body:
  - type: markdown
    attributes:
      value: |
        ## 💬 Questions belong in GitHub Discussions
        
        For questions about using superseedr, **please use [GitHub Discussions](https://github.com/Jagalite/superseedr/discussions)** instead of creating an issue.
        
        ### When to use Discussions vs Issues:
        
        **Use Discussions for:**
        - ❓ How do I...?
        - 🤔 Why does it work this way?
        - 💡 General ideas or feedback
        - 🗣️ Community discussion
        
        **Use Issues for:**
        - 🐛 Bugs (something is broken)
        - ✨ Feature requests (specific new functionality)
        - 📚 Documentation problems
        
        ### Helpful Resources:
        - 💬 [GitHub Discussions](https://github.com/Jagalite/superseedr/discussions)
        - 📖 [README](https://github.com/Jagalite/superseedr#readme)
        - ❓ [FAQ](https://github.com/Jagalite/superseedr/blob/main/docs/FAQ.md)
        
        ---
        
        If you believe this truly needs to be an issue (not a discussion), please explain why below:
        
  - type: textarea
    id: why-issue
    attributes:
      label: Why is this an issue and not a discussion?
      description: Help us understand why this should be tracked as an issue
      placeholder: This is an issue because...
    validations:
      required: true
      
  - type: textarea
    id: question
    attributes:
      label: Your Question
      description: What would you like to know?
      placeholder: My question is...
    validations:
      required: true


================================================
FILE: .github/dependabot.yml
================================================
version: 2
updates:
  # Monitor the Rust/Cargo ecosystem
  - package-ecosystem: "cargo" 
    # Dependabot looks for Cargo.toml in the root directory
    directory: "/" 
    schedule:
      # Check for updates once per week
      interval: "weekly" 
      # Optional: Set a specific day to run updates
      day: "monday" 
    
    # BEST PRACTICE: Limit open pull requests to prevent spam
    open-pull-requests-limit: 5 
    
    # Optional: Group minor/patch updates into a single PR for less noise
    groups:
      minor-and-patch:
        update-types:
          - minor
          - patch

  # Optional: Also monitor GitHub Actions (if you use them)
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"

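The `groups` stanza above merges minor and patch bumps into one grouped pull request, while major updates still get individual PRs. A minimal Python sketch of that bucketing rule (the function name is illustrative, not part of Dependabot):

```python
def pr_bucket(update_type: str) -> str:
    # Mirrors the minor-and-patch group in dependabot.yml: minor and patch
    # updates share one grouped PR; anything else (e.g. major) gets its own.
    if update_type in {"minor", "patch"}:
        return "minor-and-patch"
    return update_type
```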

================================================
FILE: .github/workflows/integration-cluster-cli.yml
================================================
name: Integration Cluster CLI

on:
  pull_request:

jobs:
  rust_checks:
    name: Rust Checks
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Rust
        uses: dtolnay/rust-toolchain@1.95.0

      - name: Cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cluster-cli-rust-checks-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Check formatting
        run: cargo fmt --all --check

      - name: Lint with clippy
        run: cargo clippy --all-targets --all-features -- -D warnings

      - name: Run Rust tests
        run: cargo test --all-targets --all-features

  cluster_cli:
    name: Cluster CLI
    needs: rust_checks
    runs-on: ubuntu-latest
    timeout-minutes: 90

    steps:
      - uses: actions/checkout@v4

      - name: Set up Rust
        uses: dtolnay/rust-toolchain@1.95.0

      - name: Cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cluster-cli-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: "3.12"

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements-integration.txt

      - name: Run cluster CLI integration lane
        env:
          RUN_CLUSTER_CLI: "1"
        run: |
          python -m pytest integration_tests/cluster_cli/tests -m cluster_cli

      - name: Upload cluster CLI artifacts
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: cluster-cli-artifacts-${{ github.run_id }}
          path: integration_tests/artifacts/cluster_cli/

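Both jobs key their cargo caches on `hashFiles('**/Cargo.lock')`, so the cache is reused while the lockfile is unchanged and missed as soon as it changes. A rough Python analogue of that keying scheme (function and parameter names are illustrative):

```python
import hashlib

def cache_key(runner_os: str, lane: str, lock_contents: bytes) -> str:
    # Any change to Cargo.lock changes the digest, so the old cache
    # entry no longer matches and a fresh one is built.
    digest = hashlib.sha256(lock_contents).hexdigest()
    return f"{runner_os}-{lane}-cargo-{digest}"
```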

================================================
FILE: .github/workflows/integration-interop.yml
================================================
name: Integration Interop

on:
  pull_request:

jobs:
  rust_checks:
    name: Rust Checks
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Rust
        uses: dtolnay/rust-toolchain@1.95.0

      - name: Cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-interop-rust-checks-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Check formatting
        run: cargo fmt --all --check

      - name: Lint with clippy
        run: cargo clippy --all-targets --all-features -- -D warnings

      - name: Run Rust tests
        run: cargo test --all-targets --all-features

  interop:
    name: Interop (${{ matrix.lane }})
    needs: rust_checks
    runs-on: ubuntu-latest
    timeout-minutes: 90
    strategy:
      fail-fast: false
      matrix:
        include:
          - lane: superseedr
            commands: |
              ./integration_tests/run_interop.sh all superseedr_to_superseedr
          - lane: qbittorrent
            commands: |
              status=0
              ./integration_tests/run_interop.sh all superseedr_to_qbittorrent || status=1
              ./integration_tests/run_interop.sh all qbittorrent_to_superseedr || status=1
              exit "$status"
          - lane: transmission
            commands: |
              status=0
              ./integration_tests/run_interop.sh v1 superseedr_to_transmission || status=1
              ./integration_tests/run_interop.sh v1 transmission_to_superseedr || status=1
              exit "$status"

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: "3.12"

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r requirements-integration.txt

      - name: Log in to Docker Hub
        uses: docker/login-action@v4
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Run interop harness
        env:
          INTEROP_TIMEOUT_SECS: "300"
        run: |
          ${{ matrix.commands }}

      - name: Upload interop artifacts
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: interop-artifacts-${{ matrix.lane }}-${{ github.run_id }}
          path: integration_tests/artifacts/

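The qbittorrent and transmission lanes each run two scenario scripts but want both to execute even when the first fails; the `status=0 … || status=1 … exit "$status"` pattern records the worst result without stopping early. The same pattern, sketched in Python with placeholder commands standing in for the `run_interop.sh` invocations:

```python
import subprocess
import sys

def run_all(commands) -> int:
    # Run every command even if an earlier one fails, and report failure
    # if any command failed (mirrors status=0 ... || status=1 ... exit $status).
    status = 0
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            status = 1
    return status

# Placeholder commands; the workflow runs shell scripts here instead.
ok = [sys.executable, "-c", "raise SystemExit(0)"]
bad = [sys.executable, "-c", "raise SystemExit(1)"]
```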

================================================
FILE: .github/workflows/nightly.yml
================================================
name: Nightly Fuzzing

on:
  schedule:
    # Runs at 02:00 UTC every day
    - cron: '0 2 * * *'
  # Allows you to click "Run workflow" manually in GitHub Actions UI
  workflow_dispatch:

env:
  CARGO_TERM_COLOR: always
  PROPTEST_CASES: 1000000 # 1 Million cases

jobs:
  deep_fuzz:
    name: Fuzz (Release)
    runs-on: ubuntu-latest
    timeout-minutes: 60 # Safety net to prevent hanging jobs costing $$
    
    permissions:
      contents: write       # Required to push the new branch with seeds
      pull-requests: write  # Required to create the Pull Request

    steps:
    - uses: actions/checkout@v6

    - name: Cache cargo
      uses: actions/cache@v5
      with:
        path: |
          ~/.cargo/bin/
          ~/.cargo/registry/index/
          ~/.cargo/registry/cache/
          ~/.cargo/git/db/
          target/
        # Distinct key for RELEASE artifacts so we don't mix with Debug
        key: ${{ runner.os }}-cargo-release-${{ hashFiles('**/Cargo.lock') }}

    - name: Build (Release)
      # Compiling in release takes longer, but makes the 1M tests run 10x faster
      run: cargo build --verbose --release

    - name: Run Deep Tests
      # If this fails, check the logs for "seed: <number>" to reproduce locally
      run: cargo test --verbose --release

    - name: Create Regression PR
      # This runs ONLY if the previous "cargo test" step failed
      if: failure() 
      uses: peter-evans/create-pull-request@v8
      with:
        token: ${{ secrets.GITHUB_TOKEN }}
        commit-message: "test: add fuzzing regression seeds"
        title: "🐛 Fuzzing Failure: New Regression Cases Found"
        body: |
          ## 💥 Fuzzing Failure Detected
          The nightly fuzzing suite detected a crash or logic error. 
          
          The proptest seeds for these failures have been automatically appended to the regression file. Merging this PR will ensure these specific edge cases are permanently added to the test suite and re-run on every build to prevent regression.
        branch: fuzzing-failures
        delete-branch: true

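The "Run Deep Tests" step relies on property-test seeds for reproducibility: a failure's persisted seed deterministically regenerates the inputs that triggered it, which is why the regression file is worth committing. The idea in miniature, with Python's `random` standing in for proptest's RNG:

```python
import random

def generated_inputs(seed: int, n: int = 5) -> list:
    # The same seed always regenerates the exact same inputs, so a
    # recorded failure seed replays the failing case on every build.
    rng = random.Random(seed)
    return [rng.randint(0, 1_000_000) for _ in range(n)]
```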

================================================
FILE: .github/workflows/rust.yml
================================================
name: Rust

on:
  push:
    branches: [ "main" ]
    tags:
      - 'v*'
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:

permissions:
  contents: write

env:
  CARGO_TERM_COLOR: always
  APP_NAME: superseedr
  PROPTEST_CASES: 20000

jobs:
  build_linux:
    timeout-minutes: 120
    name: Build & Test (Linux)
    # This job now only runs on PRs or non-tag pushes to 'main'
    if: github.event_name == 'pull_request' || (github.event_name == 'push' && !startsWith(github.ref, 'refs/tags/'))
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - name: Set up Rust
        uses: dtolnay/rust-toolchain@1.95.0
      
      - name: Cache cargo
        uses: actions/cache@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Install Dependencies
        run: sudo apt-get update

      - name: Check Formatting
        run: cargo fmt --all --check

      - name: Lint with Clippy
        run: cargo clippy --all-targets --all-features -- -D warnings

      - name: Run Tests
        run: cargo test --all-targets --all-features

      - name: Lint Private Build
        run: cargo clippy --all-targets --no-default-features -- -D warnings

      - name: Run Private Build Tests
        run: cargo test --all-targets --no-default-features

  package_linux:
    timeout-minutes: 120
    name: Build Linux Artifacts (${{ matrix.suffix }})
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - suffix: "normal"
            flags: ""
          - suffix: "private"
            flags: --no-default-features
    steps:
      - uses: actions/checkout@v6
      - name: Cache cargo
        uses: actions/cache@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Install Dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y musl-tools libssl-dev pkg-config
          cargo install cargo-bundle
          rustup target add x86_64-unknown-linux-musl

      - name: Create Staging Directory
        run: mkdir staging

      # --- Build/Package Debian (.deb) ---
      - name: Build Debian Package
        run: cargo bundle --release --format deb ${{ matrix.flags }}

      - name: Move Debian Package
        run: |
          DEB_FILE=$(find target/release/bundle/deb -name '*.deb')
          if [ -z "$DEB_FILE" ]; then
            echo "::error:: No .deb file found."
            exit 1
          fi
          
          if [ "${{ matrix.suffix }}" = "private" ]; then
            FILE_NAME="${APP_NAME}-private_${{ github.ref_name }}_amd64.deb"
          else
            FILE_NAME="${APP_NAME}_${{ github.ref_name }}_amd64.deb"
          fi
          
          echo "Moving $DEB_FILE to staging/$FILE_NAME"
          mv "$DEB_FILE" "staging/$FILE_NAME"


      # --- Build/Package MUSL (.tar.gz) ---
      # - name: Build MUSL Binary
      #   env:
      #     OPENSSL_STATIC: "true"
      #     OPENSSL_LIB_DIR: /usr/lib/x86_64-linux-gnu
      #     OPENSSL_INCLUDE_DIR: /usr/include
      #     CC_x86_64_unknown_linux_musl: musl-gcc
      #     CFLAGS_x86_64_unknown_linux_musl: -I /usr/include/x86_64-linux-gnu
      #   run: cargo build --target x86_64-unknown-linux-musl --release ${{ matrix.flags }}
        
      # - name: Package MUSL Binary
      #   run: |
      #     if [ "${{ matrix.suffix }}" = "private" ]; then
      #       FILE_NAME="${APP_NAME}-private_${{ github.ref_name }}_linux-x86_64-musl.tar.gz"
      #     else
      #       FILE_NAME="${APP_NAME}_${{ github.ref_name }}_linux-x86_64-musl.tar.gz"
      #     fi
          
      #     cd target/x86_64-unknown-linux-musl/release
      #     echo "Creating staging/$FILE_NAME"
      #     tar -czvf "../../../staging/$FILE_NAME" "${APP_NAME}"
      #     cd ../../../.. # Return to root

      - name: Upload Linux Artifacts
        uses: actions/upload-artifact@v7
        with:
          name: superseedr-linux-amd64-${{ matrix.suffix }}-${{ github.ref_name }}
          path: staging/* # Currently only the .deb; the MUSL .tar.gz build above is disabled

  bundle_macos:
    timeout-minutes: 120
    name: Build macOS Universal PKG (${{ matrix.suffix }})
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: macos-latest
    env:
      KEYCHAIN_NAME: build.keychain
    strategy:
      matrix:
        include:
          - suffix: "normal"
            flags: ""
          - suffix: "private"
            flags: --no-default-features
    steps:
      - uses: actions/checkout@v6
    
      - name: Cache cargo
        uses: actions/cache@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Install Rust Apple Targets
        run: |
          rustup target add aarch64-apple-darwin
          rustup target add x86_64-apple-darwin

      - name: Pre-compile Rust Binaries
        run: |
          echo "Starting pre-compilation to separate build time from signing time..."
          echo "Building x86_64..."
          cargo build --release --target x86_64-apple-darwin ${{ matrix.flags }}
          echo "Building aarch64..."
          cargo build --release --target aarch64-apple-darwin ${{ matrix.flags }}

      - name: Setup macOS Keychain and Certificate
        id: setup_keychain
        env:
          APPLE_INSTALLER_CERT_P12_BASE64: ${{ secrets.APPLE_INSTALLER_CERT_P12_BASE64 }}
          APPLE_INSTALLER_CERT_PASSWORD: ${{ secrets.APPLE_INSTALLER_CERT_PASSWORD }}
        run: |
          # Create a new keychain
          security create-keychain -p "$RUNNER_TEMP" "$KEYCHAIN_NAME"
          security list-keychains -s "$KEYCHAIN_NAME"
          security default-keychain -s "$KEYCHAIN_NAME"
          security unlock-keychain -p "$RUNNER_TEMP" "$KEYCHAIN_NAME"
          
          # Decode and import the .p12
          echo "$APPLE_INSTALLER_CERT_P12_BASE64" | base64 --decode > certificate.p12
          security import certificate.p12 -k "$KEYCHAIN_NAME" -P "$APPLE_INSTALLER_CERT_PASSWORD" -T /usr/bin/codesign -T /usr/bin/productsign
          rm certificate.p12
          
          # Set keychain to allow signing
          security set-key-partition-list -S apple-tool:,apple: -s -k "$RUNNER_TEMP" "$KEYCHAIN_NAME"
          
          echo "Waiting for keychain to settle..."
          sleep 2

          # Find the certificate's Common Name (CN).
          CERT_CN=$(security find-identity -v "$KEYCHAIN_NAME" | grep "Developer ID Installer" | head -n 1 | sed -E 's/.*"([^"]+)".*/\1/')

          if [ -z "$CERT_CN" ]; then
            echo "::error:: No valid codesigning identity found in keychain."
            security find-identity -v "$KEYCHAIN_NAME" # Print all identities for debugging
            exit 1
          fi
     
          echo "Using certificate: $CERT_CN"
          echo "CERT_NAME=$CERT_CN" >> $GITHUB_ENV
      
      - name: Execute Custom macOS Build Script
        id: build_pkg
        run: |
          SCRIPT_PATH="scripts/build_osx_universal_pkg.sh"
          chmod +x "$SCRIPT_PATH"
          
          set -o pipefail
          
          "$SCRIPT_PATH" \
            ${{ github.ref_name }} \
            ${{ matrix.suffix }} \
            "${{ env.CERT_NAME }}" \
            ${{ matrix.flags }} \
            2>&1 | tee build_log.txt
          
          PKG_PATH=$(grep 'PKG_PATH=' build_log.txt | head -n 1 | sed -n 's/.*PKG_PATH=\(.*\)/\1/p' | tr -d '[:space:]')
          
          if [ -z "$PKG_PATH" ]; then
            echo "::error::Build script finished, but 'PKG_PATH=' was not found in the log."
            exit 1
          fi
          
          echo "PKG_PATH found: $PKG_PATH"
          echo "pkg_path=$PKG_PATH" >> $GITHUB_OUTPUT

      - name: Notarize and Staple PKG
        id: notarize
        env:
          APPLE_NOTARY_USERNAME: ${{ secrets.APPLE_NOTARY_USERNAME }}
          APPLE_NOTARY_PASSWORD: ${{ secrets.APPLE_NOTARY_PASSWORD }}
          APPLE_TEAM_ID: ${{ secrets.APPLE_TEAM_ID }}
        run: |
          PKG_FILE_PATH="${{ steps.build_pkg.outputs.pkg_path }}"
          echo "Submitting $PKG_FILE_PATH for notarization..."
          
          xcrun notarytool submit "$PKG_FILE_PATH" \
            --apple-id "$APPLE_NOTARY_USERNAME" \
            --password "$APPLE_NOTARY_PASSWORD" \
            --team-id "$APPLE_TEAM_ID" \
            --wait
             
          echo "Notarization successful. Stapling ticket..."
          
          xcrun stapler staple "$PKG_FILE_PATH"

      - name: Stage macOS PKG
        id: stage_pkg
        run: |
          mkdir -p staging
          
          PKG_SRC_PATH="${{ steps.build_pkg.outputs.pkg_path }}"
          VERSION_TAG="${{ github.ref_name }}"
          SUFFIX="${{ matrix.suffix }}"
          
          if [ "$SUFFIX" = "normal" ]; then
              PKG_NAME="${{ env.APP_NAME }}-${VERSION_TAG}-universal-macos.pkg"
          else
              PKG_NAME="${{ env.APP_NAME }}-${VERSION_TAG}-${SUFFIX}-universal-macos.pkg"
          fi
          
          DEST_PATH="staging/$PKG_NAME"
          echo "Moving $PKG_SRC_PATH to $DEST_PATH"
          mv "$PKG_SRC_PATH" "$DEST_PATH"
           
          echo "final_pkg_path=$DEST_PATH" >> $GITHUB_OUTPUT

      - name: Cleanup Keychain
        if: always() # Always run this, even if previous steps fail
        run: |
          security delete-keychain "$KEYCHAIN_NAME"

      - name: Upload macOS PKG Artifact
        uses: actions/upload-artifact@v7
        with:
          name: superseedr-macos-${{ matrix.suffix }}-universal-${{ github.ref_name }} 
          path: ${{ steps.stage_pkg.outputs.final_pkg_path }}
        
  build_windows:
    timeout-minutes: 120
    name: Build Windows MSI (${{ matrix.suffix }})
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: windows-latest
    strategy:
      matrix:
        include:
          - suffix: "normal"
            flags: ""
          - suffix: "private"
            flags: "--no-default-features"
    steps:
      - uses: actions/checkout@v6
      - name: Cache cargo
        uses: actions/cache@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

      - name: Install Rust MSVC Target
        run: rustup target add x86_64-pc-windows-msvc
        
      - name: Install WiX Toolset v3
        run: choco install wixtoolset
        
      - name: Install cargo-wix
        run: cargo install cargo-wix
        
      - name: Build MSI Installer (${{ matrix.suffix }})
        id: build_msi
        run: |

          if ("${{ matrix.flags }}" -eq "") {
            # For the "normal" build, just run the default command.
            # This runs 'cargo build --release' AND packages the MSI.
            echo "Running: cargo wix"
            cargo wix
          } else {
            # For the "private" build, we must build manually first.
            
            # 1. Run cargo build with our private flags
            echo "Running: cargo build --release ${{ matrix.flags }}"
            cargo build --release ${{ matrix.flags }}
            
            # 2. Run cargo wix with '--no-build' to package the binaries we just made
            echo "Running: cargo wix --no-build"
            cargo wix --no-build
          }

          
          # Update the path: 'cargo wix' outputs to 'target/wix'
          $MSI_FILE = Get-ChildItem -Path "target/wix" -Filter "*.msi" | Select-Object -First 1
          if ($null -eq $MSI_FILE) { echo "::error:: No .msi file found"; exit 1; }
          echo "msi_path=$($MSI_FILE.FullName)" >> $env:GITHUB_OUTPUT

      - name: Sign MSI Installer (if secret is present)
        # This step will be SKIPPED if the secret is empty
        if: env.WINDOWS_CERT_P12_BASE64 != ''
        id: sign_msi
        env:
          WINDOWS_CERT_P12_BASE64: ${{ secrets.WINDOWS_CERT_P12_BASE64 }}
          WINDOWS_CERT_PASSWORD: ${{ secrets.WINDOWS_CERT_PASSWORD }}
        shell: pwsh
        run: |
          # Decode the certificate
          [System.IO.File]::WriteAllBytes("$PWD\windows.pfx", [System.Convert]::FromBase64String($env:WINDOWS_CERT_P12_BASE64))
          
          $MSI_PATH = "${{ steps.build_msi.outputs.msi_path }}"
          echo "Signing $MSI_PATH..."
          
          # Find signtool.exe (it's part of the Windows SDK)
          $SIGNTOOL_PATH = (Get-ChildItem -Path "C:\Program Files (x86)\Windows Kits\10\bin" -Filter "signtool.exe" -Recurse | Sort-Object VersionInfo -Descending | Select-Object -First 1).FullName
          if ($null -eq $SIGNTOOL_PATH) {
            echo "::error:: signtool.exe not found."
            exit 1
          }
          echo "Using signtool at $SIGNTOOL_PATH"

          # Sign the file
          & $SIGNTOOL_PATH sign /f "windows.pfx" /p $env:WINDOWS_CERT_PASSWORD /tr http://timestamp.digicert.com /td SHA256 $MSI_PATH
          
          # Clean up
          Remove-Item windows.pfx

      - name: Stage MSI
        id: stage_msi
        shell: pwsh
        run: |
          # Use the git tag for the version, not the Cargo.toml version
          $VERSION_TAG = "${{ github.ref_name }}"
          $MSI_FILE_PATH = "${{ steps.build_msi.outputs.msi_path }}"
          
          $SUFFIX = "${{ matrix.suffix }}"
          if ($SUFFIX -eq "normal") {
              $MSI_NAME = "${{ env.APP_NAME }}_${VERSION_TAG}_x64_en-US.msi"
          } else {
              $MSI_NAME = "${{ env.APP_NAME }}-${SUFFIX}_${VERSION_TAG}_x64_en-US.msi"
          }
          mkdir staging
          $DEST_PATH = "staging/$MSI_NAME"
          echo "Moving $MSI_FILE_PATH to $DEST_PATH"
          mv $MSI_FILE_PATH $DEST_PATH
          
          # Output the final staged name for the release body
          echo "final_msi_name=$MSI_NAME" >> $env:GITHUB_OUTPUT

      - name: Upload Windows MSI Artifact
        uses: actions/upload-artifact@v7
        with:
          name: superseedr-windows-${{ matrix.suffix }}-${{ github.ref_name }}
          path: staging/*.msi

  build_and_push_docker: 
    name: Docker (${{ matrix.flavor }})
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    needs: [package_linux, bundle_macos, build_windows]
    strategy:
      fail-fast: false
      matrix:
        flavor: [normal, private]
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v4
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v6
        with:
          images: jagatranvo/superseedr
          tags: |
            type=ref,event=tag${{ matrix.flavor == 'private' && ',suffix=-private' || '' }}
            type=raw,value=${{ matrix.flavor == 'private' && 'private' || 'latest' }}
            ${{ matrix.flavor == 'normal' && 'type=ref,event=tag' || '' }}

      - name: Build and push
        uses: docker/build-push-action@v7
        with:
          context: .
          file: ./Dockerfile
          push: ${{ startsWith(github.ref, 'refs/tags/') }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=gha,scope=docker-${{ matrix.flavor }}
          cache-to: type=gha,mode=max,scope=docker-${{ matrix.flavor }}
          build-args: |
            PRIVATE_BUILD=${{ matrix.flavor == 'private' }}

      - name: Update Docker Hub Description
        if: matrix.flavor == 'normal' && startsWith(github.ref, 'refs/tags/')
        uses: peter-evans/dockerhub-description@v5
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: jagatranvo/superseedr
          readme-filepath: ./README.md

  release:
    timeout-minutes: 120
    name: Create GitHub Release
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    needs: [package_linux, bundle_macos, build_windows, build_and_push_docker]
    steps:
      - name: Download all build artifacts
        uses: actions/download-artifact@v8
        with:
          path: artifacts/
          pattern: superseedr-*

      - name: Set Release Version
        run: echo "RELEASE_VERSION=${{ github.ref_name }}" >> $GITHUB_ENV
      
      - name: Create Release and Upload Artifacts
        uses: softprops/action-gh-release@v3
        with:
          name: ${{ github.ref_name }}
          body: |
            ## Standard Builds (Recommended)
            * **macOS Universal:** [superseedr-${{ env.RELEASE_VERSION }}-universal-macos.pkg](https://github.com/Jagalite/superseedr/releases/download/${{ github.ref_name }}/superseedr-${{ env.RELEASE_VERSION }}-universal-macos.pkg)
            * **Linux (Debian):** [superseedr_${{ env.RELEASE_VERSION }}_amd64.deb](https://github.com/Jagalite/superseedr/releases/download/${{ github.ref_name }}/superseedr_${{ env.RELEASE_VERSION }}_amd64.deb)
            * **Windows (MSI):** [superseedr_${{ env.RELEASE_VERSION }}_x64_en-US.msi](https://github.com/Jagalite/superseedr/releases/download/${{ github.ref_name }}/superseedr_${{ env.RELEASE_VERSION }}_x64_en-US.msi)
            ---
            ## Private Builds (Advanced)
            These builds do not contain PEX or DHT in the final binary. Not recommended for normal users unless you have privacy requirements.
            
            * **macOS Universal:** [superseedr-${{ env.RELEASE_VERSION }}-private-universal-macos.pkg](https://github.com/Jagalite/superseedr/releases/download/${{ github.ref_name }}/superseedr-${{ env.RELEASE_VERSION }}-private-universal-macos.pkg)
            * **Linux (Debian):** [superseedr-private_${{ env.RELEASE_VERSION }}_amd64.deb](https://github.com/Jagalite/superseedr/releases/download/${{ github.ref_name }}/superseedr-private_${{ env.RELEASE_VERSION }}_amd64.deb)
            * **Windows (MSI):** [superseedr-private_${{ env.RELEASE_VERSION }}_x64_en-US.msi](https://github.com/Jagalite/superseedr/releases/download/${{ github.ref_name }}/superseedr-private_${{ env.RELEASE_VERSION }}_x64_en-US.msi)
          files: |
            artifacts/superseedr-linux-amd64-*-${{ github.ref_name }}/*.deb
            artifacts/superseedr-macos-*-universal-${{ github.ref_name }}/*.pkg
            artifacts/superseedr-windows-*-${{ github.ref_name }}/*.msi
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


  publish_crates_io:
    timeout-minutes: 120
    name: Publish to Crates.io
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    needs: [release]
    steps:
      - uses: actions/checkout@v6
      - name: Cache cargo
        uses: actions/cache@v5
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            # target/ is intentionally omitted for cargo publish
          key: ${{ runner.os }}-cargo-registry-${{ hashFiles('**/Cargo.lock') }}
      - name: Publish to crates.io
        run: cargo publish
        env:
          CARGO_REGISTRY_TOKEN: ${{ secrets.CRATES_IO_TOKEN }}


================================================
FILE: .gitignore
================================================
# --- Superseedr Config ---
# Ignore the local environment file
.env


# --- Local Temp ---
tmp
*.tmp
logs/
*.log
*.lock
integration_tests/test_output/
integration_tests/artifacts/

# --- macOS ---
.DS_Store
diff.tmp
.gemini/

# --- Editor Specific ---
# Ignore VSCode workspace settings
.vscode/
__pycache__/

# --- Rust / Cargo ---
# Ignore build artifacts
/target/
rust-toolchain

# --- VPN Credentials ---
# Block these sensitive filenames ANYWHERE they appear
.gluetun.env

# BUT, explicitly UN-ignore (with '!') the template files
!.gluetun.env.example

# --- Safety Net ---
# Ignore common torrent/media files ANYWHERE in the repo
# in case of accidental downloads to the project root.
*.mkv
*.mp4
*.avi
*.mov
*.flv
*.iso
*.img
*.zip
*.rar
*.7z
*.tar
*.gz
*.nfo

# --- Local Docker Runtime State ---
docker-data/


================================================
FILE: .gluetun.env.example
================================================
# This is an example file.
# To use, copy this file to '.gluetun.env' in this same directory and fill in your values.
# For superseedr configuration, check out .env.example

# -----------------------------------------------------------------
# Gluetun VPN Configuration
# -----------------------------------------------------------------
#
# See Gluetun docs for all provider-specific settings:
# https://github.com/qdm12/gluetun-wiki/tree/main/setup/providers
#
# -----------------------------------------------------------------

# --- General Features ---
DNS_SERVER=on

# Automatic port forwarding for providers (PIA, ProtonVPN, ...) that support it. To configure static ports, update .env.
VPN_PORT_FORWARDING=on
VPN_PORT_FORWARDING_UP_COMMAND=/bin/sh -c "echo {{PORTS}} > /tmp/gluetun/forwarded_port"
VPN_PORT_FORWARDING_DOWN_COMMAND=/bin/sh -c "echo > /tmp/gluetun/forwarded_port"

# --- VPN Provider Setup ---
# (Select your provider, type, and server)
#VPN_SERVICE_PROVIDER=protonvpn
VPN_SERVICE_PROVIDER=private internet access
SERVER_REGIONS=Iceland

# -----------------------------------------------------------------
# --- Provider Credentials ---
# (Only fill out ONE section below that matches your provider)
# -----------------------------------------------------------------

# --- Section 1: OpenVPN (e.g., PIA) ---
# (Active by default)
VPN_TYPE=openvpn
OPENVPN_USER=YourVpnUserHere
OPENVPN_PASSWORD=YourVpnPasswordHere

# -----------------------------------------------------------------
# --- Section 2: WireGuard (e.g., Mullvad) ---
# (To use this: comment out Section 1 and uncomment this section)
# -----------------------------------------------------------------
#VPN_TYPE=wireguard
#WIREGUARD_PRIVATE_KEY=YourMullvadPrivateKeyGoesHere
#WIREGUARD_ADDRESSES=YourMullvadWgAddressGoesHere


================================================
FILE: AGENTS.md
================================================
Don’t use real copyrighted titles/brands in tests, fixtures, screenshots, or mock UI text.


================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at jaga.tranvo@superseedr.com. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to superseedr

Thank you for your interest in helping improve superseedr!

You do not need programming experience to contribute. Some of the most helpful contributions are bug reports, feature ideas, and general feedback.

## 🐛 Report a Bug

If something doesn't work as expected, please open a GitHub issue and include:

- A clear title describing the problem
- What you expected to happen
- What actually happened
- Steps to reproduce the issue
- Your environment (OS, version, Docker or native, etc.)
- Any relevant logs or error messages

Before creating a new issue, please search [existing issues](https://github.com/Jagalite/superseedr/issues) and [discussions](https://github.com/Jagalite/superseedr/discussions) to avoid duplicates or find existing solutions.

## 💡 Suggest a Feature or Idea

Have an idea to improve superseedr?

Before creating a new issue, please search [existing issues](https://github.com/Jagalite/superseedr/issues) and [discussions](https://github.com/Jagalite/superseedr/discussions) to see if your idea has already been proposed or discussed.

You can open a GitHub issue and describe:

- What problem you're trying to solve
- Your suggested solution or idea
- Why it would be useful to users

Even rough or incomplete ideas are welcome.

## 📝 Help Improve Documentation

You can contribute by:

- Reporting confusing or outdated docs
- Suggesting clearer explanations
- Proposing examples or setup guides
- Improving the README, FAQ, or other documentation files

## 🔒 Report a Security Vulnerability

If you discover a security vulnerability, **please do not open a public issue.**

Instead:
1. Contact the maintainers privately (use GitHub Security Advisory or email)
2. Include a detailed description of the vulnerability
3. Provide steps to reproduce if possible
4. Allow time for a fix before public disclosure

We take security seriously and will respond promptly.

## Guidelines for All Contributions

### ✅ General Guidelines

- Be respectful and constructive
- Keep discussions on-topic
- Provide as much relevant detail as possible
- For existing issues or discussions, you can "bump" them by adding a comment if you have new information, want to express increased urgency, or can provide additional details/context

---

## 🧑‍💻 Contributing Code (for developers)

### Development Environment Setup

**Prerequisites:**
- Rust toolchain (latest stable version)
- Docker and Docker Compose (for Docker-related changes)
- A terminal with Unicode support (Windows Terminal, iTerm2, or modern Linux terminals)
- Git

**Quick Start:**
```bash
# Fork the repository on GitHub first, then clone your fork
git clone https://github.com/YOUR_USERNAME/superseedr.git
cd superseedr

# Build the project
cargo build

# Run tests
cargo test

# Run locally
cargo run
```

**For Docker development:**
```bash
# Build the Docker image locally
docker build -t superseedr-dev .

# Test the supported Docker Compose stack (requires .env and .gluetun.env)
docker compose up

# Or test the image directly without Gluetun
docker run --rm -it superseedr-dev
```

### Code Style & Formatting

- Run `cargo fmt` before committing to format your code
- Ensure `cargo clippy` passes without warnings
- Follow Rust naming conventions:
  - `snake_case` for functions and variables
  - `PascalCase` for types and structs
  - `SCREAMING_SNAKE_CASE` for constants
- Add documentation comments (`///`) for public APIs and complex logic
- Keep line length reasonable (suggested 100 characters, but not strict)
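
The conventions above, including `///` doc comments, look like this in practice (a minimal illustrative sketch; the names and values are invented, not taken from the superseedr codebase):

```rust
/// Maximum peers to hold per torrent (SCREAMING_SNAKE_CASE constant; value is illustrative).
const MAX_PEERS: usize = 50;

/// Lifecycle state of a torrent (PascalCase type).
#[derive(Debug, Clone, Copy, PartialEq)]
enum TorrentState {
    Downloading,
    Seeding,
}

/// Returns the next state once all pieces are verified (snake_case function).
///
/// Doc comments like this one should cover public APIs and any non-obvious logic.
fn next_state(current: TorrentState, all_pieces_verified: bool) -> TorrentState {
    if all_pieces_verified {
        TorrentState::Seeding
    } else {
        current
    }
}

fn main() {
    let state = next_state(TorrentState::Downloading, true);
    println!("{:?} (max peers: {})", state, MAX_PEERS);
}
```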

### Testing

Superseedr uses multiple testing strategies to ensure reliability:

**Unit Tests:**
```bash
# Run all tests
cargo test

# Run specific test
cargo test test_name

# Run tests with output
cargo test -- --nocapture
```

**Model-Based Fuzzing:**
The project uses model-based testing for protocol correctness. Fuzzing tests run nightly via GitHub Actions to verify the BitTorrent protocol implementation.
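
To illustrate the idea (this is a hand-rolled sketch, not the project's actual harness): a property test generates many random inputs and asserts an invariant over each one. The real suite is proptest-based, with the case count pinned in CI via the `PROPTEST_CASES` env var in `rust.yml`; the invariant and names below are invented for the example.

```rust
/// Tiny xorshift PRNG so the sketch is dependency-free and deterministic.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

/// Invariant under test: completed pieces can never exceed the bitfield length.
fn count_complete(bitfield: &[bool]) -> usize {
    bitfield.iter().filter(|&&b| b).count()
}

fn main() {
    // A fixed seed keeps failures reproducible; failing seeds are what the
    // nightly job commits back to the regression file.
    let mut seed = 0x5eed_u64;
    for _ in 0..1_000 {
        let len = (xorshift(&mut seed) % 64) as usize + 1;
        let bitfield: Vec<bool> = (0..len).map(|_| xorshift(&mut seed) % 2 == 0).collect();
        assert!(count_complete(&bitfield) <= bitfield.len());
    }
    println!("1000 random cases passed");
}
```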

**Manual Testing:**
- Test with real torrents in a safe environment (use legal content like Linux ISOs)
- Verify VPN integration with Gluetun if modifying networking code
- Check TUI rendering in different terminal emulators (iTerm2, Windows Terminal, Alacritty, etc.)
- Test in both light and dark terminal colour schemes
- Verify keyboard controls work as expected

**When contributing code:**
- Add unit tests for new functionality
- Update existing tests if changing behavior
- Ensure all tests pass before submitting a PR

### Working on the TUI

Superseedr uses [Ratatui](https://ratatui.rs/) for the terminal interface.

**Testing UI changes:**
- Run the app locally: `cargo run`
- Test in different terminal sizes (resize your terminal window)
- Verify rendering in multiple terminal emulators
- Check that animations remain performant (1-60 FPS target)
- Ensure colour schemes work in both light and dark modes

**UI Guidelines:**
- Keep animations performant and smooth
- Ensure all features are keyboard-accessible (no mouse-only features)
- Maintain consistency with existing keybinding patterns
- Follow the existing visual style and layout conventions
- Test with the minimum supported terminal size
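
Frame pacing can be reasoned about with a simple budget: at the 60 FPS ceiling a frame must render in roughly 16.6 ms, and at the 1 FPS floor in 1000 ms. A tiny sketch of that arithmetic (the helper name is illustrative):

```rust
use std::time::Duration;

/// Per-frame time budget for a given frames-per-second target.
/// 60 FPS -> ~16.6 ms per frame; 1 FPS -> 1000 ms per frame.
fn frame_budget(fps: u32) -> Duration {
    Duration::from_micros(1_000_000 / u64::from(fps))
}
```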

### Docker & VPN Changes

When modifying Docker setup or VPN integration:

- Test with the Compose stack and direct `docker run` flow
- Verify port forwarding works correctly
- Check that dynamic port reloading functions as expected
- Update `.env.example` and `.gluetun.env.example` if adding new configuration variables
- Test with at least one VPN provider if possible
- Document any new environment variables in the README

### Private Tracker Support

Superseedr supports private tracker builds that disable DHT and PEX.

When contributing:
- Ensure changes don't break private tracker mode
- Test both public and private tracker configurations if modifying protocol behavior
- Respect the privacy and security requirements of private trackers
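
Private tracker builds hinge on the `dht` and `pex` features declared in Cargo.toml (enabled by default, disabled with `--no-default-features`). A sketch of the `cfg!` pattern for feature-gated behavior (this helper is illustrative; the codebase also gates whole modules with `#[cfg(feature = "dht")]`):

```rust
/// Reports whether DHT support was compiled in (illustrative helper).
/// In a private tracker build (`cargo build --no-default-features`)
/// this evaluates to false at compile time.
pub fn dht_compiled_in() -> bool {
    cfg!(feature = "dht")
}
```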

### Continuous Integration

All PRs must pass automated checks:

- ✅ Rust build and compilation
- ✅ All unit tests
- ✅ Clippy lints (no warnings)
- ✅ Code formatting check (`cargo fmt`)
- ✅ Model-based fuzzing (runs nightly, not as a per-PR gate)

#### CI/CD Security Note

**For external contributors:**
- GitHub Actions workflows require maintainer approval to run on PRs from forks
- This is a security measure to protect repository secrets (see the npm Shai-Hulud supply-chain incident)
- Your PR will be reviewed before CI runs
- Once approved, automated checks will execute

**What this means for you:**
- Don't be alarmed if CI doesn't run immediately on your PR
- Maintainers will review and approve workflow execution
- You can still run `cargo test`, `cargo clippy`, and `cargo fmt` locally before submitting

Check the Actions tab on your PR to see CI results. Fix any failures before requesting review.

### Branch Naming Conventions

Create descriptive branch names following these patterns:

- Feature: `feature/add-upnp-support`
- Bug fix: `fix/port-reload-crash`
- Documentation: `docs/update-contributing-guide`
- Refactoring: `refactor/simplify-peer-manager`
- Performance: `perf/optimize-piece-selection`

### Contributing to Roadmap Items

The [ROADMAP.md](docs/ROADMAP.md) outlines the project's planned features and future direction. Contributors are encouraged to:

- **Review upcoming features:** Check the roadmap to see what features are planned but not yet started
- **Start discussions:** If you're interested in working on a roadmap item, open a discussion to explore implementation ideas
- **Propose new items:** Have an idea not on the roadmap? Create an issue to propose it for consideration
- **Prioritize aligned work:** Roadmap-aligned contributions are more likely to be reviewed and merged quickly

Roadmap items are tagged in GitHub issues. Look for labels like `roadmap:v1.0`, `roadmap:v1.5`, or `roadmap:future` to find work that fits your interests and skill set.

### Claiming Work on Issues

To avoid duplicate effort and ensure coordination:

1. **Before starting work on an issue:**
   - Comment on the issue expressing your interest in working on it
   - Wait for maintainer acknowledgment/assignment before starting significant work
   - If the issue is already assigned to someone else, check if they're still working on it

2. **Discuss your approach:**
   - For non-trivial changes, outline your proposed implementation approach in the issue
   - Wait for maintainer feedback on technical feasibility, alignment with roadmap, and project vision
   - Discuss release timing considerations if relevant

3. **Assignment process:**
   - Maintainers will assign the issue to you once your approach is approved
   - If you're assigned but can no longer work on it, please comment to let maintainers know

**Important:** We do not accept unsolicited PRs without prior discussion. All code contributions must:
- Have an associated GitHub issue
- Include documented discussion of the approach
- Receive maintainer approval before implementation begins
- Consider technical feasibility, roadmap alignment, and project architecture

This ensures changes align with project goals and prevents wasted effort on work that may not be accepted.

### Contribution Workflow

1. **Find or create an issue and get approval:**
   - Search for an existing issue related to your proposed change
   - If none exists, create a new issue describing the problem/feature
   - **Comment on the issue** stating you'd like to work on it
   - **Wait for maintainer response** before starting work
   - Discuss your proposed approach, including:
     * **Technical feasibility:** Can this be implemented without breaking existing functionality?
     * **Roadmap alignment:** Does this fit the project's direction and priorities?
     * **Project vision:** Is this change consistent with superseedr's goals?
     * **Implementation details:** What's your planned approach?
     * **Release timing:** Are there version/timing considerations?
   - **Get assigned to the issue** by a maintainer before beginning implementation

2. **Fork the repository** (if you haven't already)

3. **Clone your fork locally:**
   ```bash
   git clone https://github.com/YOUR_USERNAME/superseedr.git
   cd superseedr
   ```

4. **Create a new branch** with a descriptive name:
   ```bash
   git checkout -b feature/your-feature-name
   ```

5. **Make your changes:**
   - Write clean, documented code
   - Follow existing code style and conventions
   - Add tests for new functionality

6. **Test your changes:**
   ```bash
   cargo build
   cargo test
   cargo clippy
   cargo fmt --check
   ```

7. **Commit your changes:**
   ```bash
   git add .
   git commit -m "Add feature: brief description"
   ```
   - Use clear, descriptive commit messages
   - Reference issue numbers when applicable (e.g., "Fix #123: resolve port binding issue")

8. **Push to your fork:**
   ```bash
   git push origin feature/your-feature-name
   ```

9. **Open a Pull Request** with:
   - A clear title describing the change
   - Description of what changed and why
   - Link to related issues (e.g., "Fixes #123", "Relates to #456")
   - Screenshots or demos for UI changes
   - Notes on testing performed

### Pull Request Guidelines

- Keep changes focused and scoped to a single feature or fix
- Describe what changed and why in the PR description
- Link related issues if applicable
- Respond to review feedback promptly and constructively
- Be patient - maintainers review PRs as time permits
- Update your PR if the main branch has moved forward

## 🙏 Recognition

All contributors will be acknowledged in release notes. Thank you for making superseedr better!

## Additional Resources

- 📖 [FAQ](docs/FAQ.md) - Common questions and answers
- 🗺️ [Roadmap](docs/ROADMAP.md) - Future plans and features
- 📜 [Changelog](docs/CHANGELOG.md) - Recent changes and version history
- 🤝 [Code of Conduct](CODE_OF_CONDUCT.md) - Community standards
- 💬 [Discussions](https://github.com/Jagalite/superseedr/discussions) - General questions and ideas
- 📚 [Ratatui Documentation](https://ratatui.rs/) - TUI framework reference

## Questions?

If you're unsure about anything, don't hesitate to:
- Ask in [Discussions](https://github.com/Jagalite/superseedr/discussions)
- Comment on a relevant issue
- Reach out to maintainers

We're here to help and appreciate your interest in contributing! 🚀


================================================
FILE: Cargo.toml
================================================
# SPDX-FileCopyrightText: 2025 The superseedr Contributors
# SPDX-License-Identifier: GPL-3.0-or-later

[package]
name = "superseedr"
version = "1.0.7"
description = "A BitTorrent Client in your Terminal."
edition = "2021"
repository = "https://github.com/Jagalite/superseedr"
license = "GPL-3.0-or-later"
authors = ["Jaga Tranvo <jagatranvo@prudential.com>"]

[profile.release]
codegen-units = 1 # Allows compiler to perform better optimization.
lto = true # Enables Link-time Optimization.
opt-level = 3 # Maximum optimization for speed. Use "z" to prioritize small binary size.
strip = true # Ensures debug symbols are removed.

[package.metadata.bundle]
name = "superseedr"
identifier = "com.github.jagalite.superseedr"
icon = ["assets/app_icon.icns"] # Optional: path to the app icon
version = "1.0.7"
copyright = "Copyright © 2025 Jaga Tranvo. All rights reserved."
category = "public.app-category.utilities"
short_description = "A BitTorrent Client in your Terminal."
long_description = "A BitTorrent Client in your Terminal, written in Rust using Ratatui."
linux_use_terminal = true
linux_mime_types = ["application/x-bittorrent", "x-scheme-handler/magnet"]
linux_exec_args = "%U" # Use %U to handle URLs and potentially multiple files, or %f for single files


[package.metadata.wix]
eula = false
linker-args = ["-ext", "WixFirewallExtension"]
compiler-args = ["-ext", "WixFirewallExtension"]

[features]
# Default build includes both DHT and PEX
default = ["dht", "pex"]

# Individual features for conditional compilation
dht = []
pex = []
synthetic-load = []


[dev-dependencies]
tempfile = "3.27.0"
proptest = "1.11.0"
proptest-state-machine = "0.8"

[dependencies]
reqwest = { version = "0.12.24", features = ["json"] }
sha1 = "0.10.6"
sha2 = "0.10.9"
socket2 = "0.6.3"
tokio = { version = "1.50.0", features = ["full", "test-util"] }
tokio-stream = { version = "0.1.18", features = ["sync"] }
thiserror = "2.0.18"
tracing = "0.1.44"
tracing-subscriber = "0.3.23"
serde = { version = "1.0.228", features = ["derive"] }
serde_bencode = "0.2"
serde_bytes = "0.11.19"
magnet-url = "3.0.0"
data-encoding = "2.11.0"
urlencoding = "2.1.3"
crossterm = "0.29.0"
ratatui = "0.29.0"
rand = "0.10.1"
directories = "6.0"
toml = "0.9.11"
hex = "0.4"
sysinfo = "0.38.4"
strum = "0.27.2"
strum_macros = "0.27.2"
notify = "8.2.0"
clap = { version = "4.6.1", features = ["derive"] }
rlimit = "0.11"
fuzzy-matcher = "0.3.7"
chrono = "0.4.44"
serde_json = "1.0.149"
feed-rs = "2.3.1"
regex = "1.12.2"


================================================
FILE: Dockerfile
================================================
# SPDX-FileCopyrightText: 2025 The superseedr Contributors
# SPDX-License-Identifier: GPL-3.0-or-later

# syntax=docker/dockerfile:1

# --- Stage 1: The Cross-Builder ---
FROM --platform=$BUILDPLATFORM rust:1-bookworm AS builder

ARG TARGETPLATFORM
ARG TARGETARCH
ARG BUILDPLATFORM
ARG PRIVATE_BUILD=false

# 1. Install 'xx' - The Cross-Compilation Helper
COPY --from=tonistiigi/xx / /

# 2. Install Host Build Tools (running on Intel/AMD)
# 'pkg-config' here is the driver that xx-cargo will wrap.
RUN apt-get update && apt-get install -y clang lld pkg-config git

# 3. Install Target Libraries (ARM64/AMD64)
# [CRITICAL] Use 'xx-apt-get'. This installs libssl-dev for the TARGET architecture.
# We also install 'gcc' so the crate can run C-code tests during the build.
RUN xx-apt-get install -y libssl-dev gcc

WORKDIR /app

# 4. Copy source files
COPY Cargo.toml Cargo.lock ./
COPY ./src ./src

# 5. Fix for OpenSSL Cross-Compilation
# [CRITICAL FIX] The openssl-sys crate is paranoid. It detects cross-compilation
# and refuses to run pkg-config unless this variable is set.
# Since 'xx' is handling the paths, it is safe to force this to 1.
ENV PKG_CONFIG_ALLOW_CROSS=1

# 6. Build with xx-cargo
RUN --mount=type=cache,target=/usr/local/cargo/git/db \
    --mount=type=cache,target=/usr/local/cargo/registry/cache \
    --mount=type=cache,target=/usr/local/cargo/registry/index \
    --mount=type=cache,target=/app/target \
    TRIPLE=$(xx-cargo --print-target-triple) && \
    if [ "$PRIVATE_BUILD" = "true" ]; then \
        xx-cargo build --release --no-default-features --target "$TRIPLE" --target-dir ./target; \
    else \
        xx-cargo build --release --target "$TRIPLE" --target-dir ./target; \
    fi && \
    cp ./target/$TRIPLE/release/superseedr /app/superseedr

# --- Stage 2: The Final Image ---
FROM debian:bookworm-slim AS final

# Install runtime dependencies (OpenSSL 3 runtime)
RUN apt-get update && \
    apt-get install -y ca-certificates libssl3 && \
    rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/superseedr /usr/local/bin/superseedr

ENTRYPOINT ["/usr/local/bin/superseedr"]


================================================
FILE: LICENSE
================================================
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.


================================================
FILE: README.md
================================================
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/Jagalite/superseedr-assets/main/superseedr_logo_transparent.gif">
  <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/Jagalite/superseedr-assets/main/superseedr_logo.gif">
  <img alt="Superseedr Logo" src="https://raw.githubusercontent.com/Jagalite/superseedr-assets/main/superseedr_logo.gif">
</picture>

# A BitTorrent Client in your Terminal

[![Rust](https://github.com/Jagalite/superseedr/actions/workflows/rust.yml/badge.svg)](https://github.com/Jagalite/superseedr/actions/workflows/rust.yml) [![Nightly Fuzzing](https://github.com/Jagalite/superseedr/actions/workflows/nightly.yml/badge.svg)](https://github.com/Jagalite/superseedr/actions/workflows/nightly.yml) ![GitHub release](https://img.shields.io/github/v/release/Jagalite/superseedr) [![crates.io](https://img.shields.io/crates/d/superseedr)](https://crates.io/crates/superseedr) [![Built With Ratatui](https://ratatui.rs/built-with-ratatui/badge.svg)](https://ratatui.rs/) <a title="This tool is Tool of The Week on Terminal Trove, The $HOME of all things in the terminal" href="https://terminaltrove.com/"><img src="https://cdn.terminaltrove.com/media/badges/tool_of_the_week/png/terminal_trove_tool_of_the_week_gold_transparent.png" alt="Terminal Trove Tool of The Week" /></a>

Superseedr is a modern Rust BitTorrent client featuring a high-performance terminal UI, real-time swarm observability, secure VPN-aware Docker setups, and zero manual network configuration. It is fast, privacy-oriented, and built for both desktop users and homelab/server workflows.

![Feature Demo](https://raw.githubusercontent.com/Jagalite/superseedr-assets/main/superseedr_landing.webp)

## 🚀 Features at a Glance

| **Experience** | **Networking** | **Engineering** |
| :--- | :--- | :--- |
| 🎨 **60 FPS TUI + Themes**<br>Fluid, animated interface with heatmaps and 40 live-switchable built-in themes. | 🐳 **Docker + VPN**<br>Gluetun integration with dynamic port reloading. | 🧬 **BitTorrent v2**<br>Hybrid swarms & Merkle tree verification. |
| 📰 **RSS Feeds**<br>In-app feed tracking, filtering, and ingest. | 🧩 **Cluster Mode**<br>OS-agnostic shared torrent catalog with automatic failover. | 🧠 **Self-Tuning**<br>Adaptive limit control for max speed and I/O stability. |
| 🧲 **Magnet Links**<br>Native OS-level handler support. | 👻 **Private Mode**<br>Optional builds disabling DHT/PEX. | 📡 **Integrity Prober**<br>Continuous lightweight background integrity checks with fast recovery reprobes. |

### Terminal Torrenting With Superseedr

* **Pushing TUI Boundaries:** Experience a fluid, 60 FPS interface that feels like a native GUI, featuring smooth animations, high-density visualizations, and 40 built-in themes rarely seen in terminal apps.
* **See What's Happening:** Diagnose slow downloads instantly with deep swarm analytics, heatmaps, and live bandwidth graphs.
* **Set It and Forget It:** Automatic port forwarding and dynamic listener reloading in Docker ensure your connection stays alive, even if your VPN resets.
* **Crash-Proof Design:** Leverages Rust's memory safety guarantees to run indefinitely on low-resource servers without leaks or instability, and shared cluster mode adds automatic failover across hosts.

<p align="center">
  <img src="https://raw.githubusercontent.com/Jagalite/superseedr-assets/main/superseedr-matix.gif"/>
</p>

## Installation

Download platform-specific installers from the [releases page](https://github.com/Jagalite/superseedr/releases) **(includes browser magnet link support)**:
- Windows: `.msi` installer
- macOS: `.pkg` installer  
- Debian/Ubuntu: `.deb` package

### Package Managers
- **Cargo:** `cargo install superseedr`
- **Brew:** `brew install superseedr`
- **Arch Linux:** `yay -S superseedr` (via AUR)

[![Packaging status](https://repology.org/badge/vertical-allrepos/superseedr.svg)](https://repology.org/project/superseedr/versions)

## Usage
Open a terminal and run:
```bash
superseedr
```
### ⌨️ Key Controls
| Key | Action |
| :--- | :--- |
| `m` | **Open full manual / help** |
| `Q` | Quit |
| `↑` `↓` `←` `→` | Navigate |
| `c` | Configure Settings |

> [!TIP]  
> Add torrents by clicking magnet links in your browser or opening `.torrent` files.
> Copying and pasting (`Ctrl+V`) magnet links or paths to `.torrent` files also works.

## Troubleshooting

**Connection or Disk issues?**
- Check that your firewall allows outbound connections
- Increase file descriptor limit: `ulimit -n 65536`
- For VPN users: Verify Gluetun is running and connected

**Slow downloads?**
- Enable port forwarding in your VPN settings
- Check the swarm health in the TUI's analytics view

**More help:** See the [FAQ](docs/FAQ.md) or [open an issue](https://github.com/Jagalite/superseedr/issues)

## More Info
- 🤝[Contributing](CONTRIBUTING.md): How you can contribute to the project (technical and non-technical).
- ❓[FAQ](docs/FAQ.md): Find answers to common questions about Superseedr.
- 📜[Changelog](docs/CHANGELOG.md): See what's new in recent versions of Superseedr.
- 🗺️[Roadmap](docs/ROADMAP.md): Discover upcoming features and future plans for Superseedr.
- 🧑‍🤝‍🧑[Code of Conduct](CODE_OF_CONDUCT.md): Understand the community standards and expectations.

## 🐳 Running with Docker

Superseedr offers a fully secured Docker setup using Gluetun. All BitTorrent traffic is routed through a VPN tunnel with dynamic port forwarding and zero manual network configuration.

If you want privacy and simplicity, Docker is the recommended way to run Superseedr.

Follow the steps below to create `.env` and `.gluetun.env` files that configure OpenVPN or WireGuard.

```bash
# Docker (No VPN):
# Uses internal container storage. Data persists until the container is removed.
docker run -it jagatranvo/superseedr:latest

# Docker Compose (Gluetun with your VPN):
# Requires .env and .gluetun.env configuration (see below).
docker compose up -d && docker compose attach superseedr
```

<details>
<summary><strong>Click to expand Docker Setup</strong></summary>

### Setup

1.  **Get the Docker configuration files:**
    You only need the Docker-related files to run the pre-built image, not the full source code.

    **Option A: Clone the repository (Simple)**
    This gets you everything, including the source code.
    ```bash
    git clone https://github.com/Jagalite/superseedr.git
    cd superseedr
    ```
    
    **Option B: Download only the necessary files (Minimal)**
    This is ideal if you just want to run the Docker image.
    ```bash
    mkdir superseedr
    cd superseedr

    # Download the compose file and example config files
    curl -sL \
      -O https://raw.githubusercontent.com/Jagalite/superseedr/main/docker-compose.yml \
      -O https://raw.githubusercontent.com/Jagalite/superseedr/main/.env.example \
      -O https://raw.githubusercontent.com/Jagalite/superseedr/main/.gluetun.env.example

    # Note: the example files may be hidden; run the commands below to make copies.
    cp .env.example .env
    cp .gluetun.env.example .gluetun.env
    ```

2.  **Recommended: Create your environment files:**
    * **App Paths & Build Choice:** Edit your `.env` file from the example. This file controls your data paths and which build to use.
        ```bash
        cp .env.example .env
        ```
        Edit `.env` to set your absolute host paths (e.g., `HOST_SUPERSEEDR_ROOT_PATH=/my/path/seedbox`). **This is important:** it maps the container's shared seedbox root (`/seedbox`) to a real folder on your computer. Keep `superseedr-config/` inside that root for the simplest shared-config setup.

    * **VPN Config:** Edit your `.gluetun.env` file from the example.
        ```bash
        cp .gluetun.env.example .gluetun.env
        ```
        Edit `.gluetun.env` with your VPN provider, credentials, and server region.

#### Option 1: VPN with Gluetun (Recommended)

Gluetun provides:
- A VPN kill-switch
- Automatic port forwarding
- Dynamic port changes from your VPN provider

Many VPN providers frequently assign new inbound ports. Most BitTorrent clients must be restarted when this port changes, breaking connectivity and slowing downloads.
Superseedr can detect Gluetun’s updated port and reload the listener **live**, without a restart, preserving swarm performance.

1.  Make sure you have created and configured your `.gluetun.env` file.
2.  Run the stack using the default `docker-compose.yml` file:

```bash
docker compose up -d && docker compose attach superseedr
```
> To detach from the TUI without stopping the container, use the Docker key sequence: `Ctrl+P` followed by `Ctrl+Q`.
> **Optional:** press `[z]` first to enter power-saving mode.

---

#### Option 2: Direct docker run

This runs the client directly without Gluetun. It is useful for advanced users who want to manage networking themselves.

```bash
docker run --rm -it \
  -e SUPERSEEDR_DEFAULT_DOWNLOAD_FOLDER=/seedbox \
  -e SUPERSEEDR_SHARED_CONFIG_DIR=/seedbox \
  -e SUPERSEEDR_SHARED_HOST_ID=seedbox-docker \
  -p 6881:6881/tcp \
  -p 6881:6881/udp \
  -v /your/seedbox:/seedbox \
  -v ./docker-data/share:/root/.local/share/jagalite.superseedr \
  jagatranvo/superseedr:latest
```

Replace `/your/seedbox` with the shared seedbox root on your host.
Keep `superseedr-config/` inside that folder so the container sees it at `/seedbox/superseedr-config`.

</details>

## 🔗 Integrations & Automation

Superseedr is built around a local CLI and a file-based automation model, so
you can script, queue, and inspect work without exposing a network control
stack. The same command flow works when a client is online, when it is offline,
and in shared mode when you are operating against a remote leader through a
mounted shared root.

Check out the [Superseedr Plugins Repository](https://github.com/Jagalite/superseedr-plugins) for plugins (beta testing).

<details>
<summary><strong>Click to expand automation details</strong></summary>

### 1. File Watcher & Auto-Ingest
Superseedr uses a file-based watch-folder architecture so local automation,
scripts, containers, and other processes can control ingestion without needing a
separate daemon protocol.

Each node can watch a local `watch_folder`. In standalone mode, that watch
folder feeds the local client directly. In shared mode, followers watch their
own local folders and relay supported files into the shared inbox so the leader
can process them and update the shared catalog.

Processed watch files are archived after handling so the queue stays
deterministic and auditable.

| File Type | Action |
| :--- | :--- |
| **`.torrent`** | Adds a torrent from a torrent file. In shared mode, follower-side ingest may stage the torrent for leader processing. |
| **`.magnet`** | Adds a torrent from a magnet link stored as text. |
| **`.path`** | Adds a torrent from a referenced torrent-file path. In shared mode, cross-host handling uses portable shared-root-aware staging. |
| **`.control`** | Applies queued control requests such as pause, resume, remove, purge, and priority changes. |
| **`shutdown.cmd`** | Requests graceful shutdown of the running client or shared leader. |

See [`docs/shared-config.md`](docs/shared-config.md) for shared inbox and
leader/follower watch-folder behavior.
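As a concrete sketch of the watch-folder flow above (the folder path is an illustrative assumption — use the `watch_folder` path reported by `superseedr show-configs` — and the empty `shutdown.cmd` marker assumes the file's contents are ignored):

```bash
# Assumed watch-folder location for illustration only; substitute the
# watch_folder path from your own configuration.
WATCH_DIR="${WATCH_DIR:-$HOME/superseedr-watch}"
mkdir -p "$WATCH_DIR"

# Queue a magnet link: a *.magnet file holds the link as plain text.
printf 'magnet:?xt=urn:btih:...\n' > "$WATCH_DIR/linux-iso.magnet"

# Queue a torrent file by copying it into the watch folder:
# cp /path/to/linux.iso.torrent "$WATCH_DIR/"

# Request a graceful shutdown with an empty marker file.
: > "$WATCH_DIR/shutdown.cmd"
```

Because ingestion is file-based, anything that can write a file — cron jobs, CI pipelines, other containers sharing the mount — can drive the client this way.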

### 2. CLI Control
The CLI uses the same file-oriented control model. Depending on mode, commands
either:

- write control files for a running client
- queue requests through the shared inbox for the leader
- or apply offline mutations directly when no runtime is available

That makes the CLI easy to script from shells, containers, task runners, and
other local automation.

See [`docs/cli.md`](docs/cli.md) for the full CLI guide.

```bash
# Add a magnet link
superseedr add "magnet:?xt=urn:btih:..."

# Add a torrent file by path
superseedr add "/path/to/linux.iso.torrent"

# Inspect the current shared launcher selection
superseedr show-shared-config

# Show resolved config, log, status, journal, and watch paths
superseedr show-configs

# Persist shared launcher config for installed/protocol launches
superseedr set-shared-config "/path/to/seedbox"

# Convert local config into layered shared config
superseedr to-shared "/path/to/seedbox"

# Convert the active shared config back into local standalone config
superseedr to-standalone

# Stop the client gracefully
superseedr stop-client
```

See [`docs/cli.md`](docs/cli.md) for full CLI command behavior, and
[`docs/shared-config.md`](docs/shared-config.md) for shared leader/follower
routing.

### 3. Status API & Monitoring
For external dashboards, health checks, and lightweight automation, Superseedr
periodically dumps runtime state to JSON.

* **Output Location:** a status JSON file in the runtime data area.
* **Shared Mode:** each host writes its own status file, and shared CLI status follows the current leader snapshot.
* **Content:** includes transfer stats, runtime metrics, and torrent-level state.

#### Configuration
You can control how often this file is updated using the `output_status_interval` setting.

**Environment Variable:**
Set this variable in your Docker config to change the update frequency (in seconds).
```bash
# Update the status file every 5 seconds
SUPERSEEDR_OUTPUT_STATUS_INTERVAL=5
```
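A minimal sketch of consuming the status file with `jq` — the JSON shape below is an illustrative assumption, not the actual schema, and the real file lives in superseedr's runtime data area (`superseedr show-configs` prints the resolved paths):

```bash
# Fabricate a tiny stand-in status file so the query is runnable here;
# the field names "torrents", "name", and "state" are assumptions.
STATUS_FILE=$(mktemp)
cat > "$STATUS_FILE" <<'EOF'
{"torrents": [{"name": "linux.iso", "state": "seeding"}]}
EOF

# A dashboard or health check can poll the file and extract fields:
jq -r '.torrents[] | "\(.name)\t\(.state)"' "$STATUS_FILE"
```

Pointing the same query at the real status file gives you a scriptable health check without any network control API.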

### 4. RSS Feeds & History
Superseedr can track RSS feeds in-app, evaluate feed items against your configured
matching rules, and automatically ingest matching releases without needing an
external automation stack.

* **Feed Tracking:** monitor RSS feeds directly from the client.
* **Rule-Based Matching:** use configured match rules to decide what should be ingested.
* **Auto-Ingest:** matching items can be queued into the normal torrent ingest path.
* **History & Deduplication:** downloaded feed history is persisted so the same item is not re-ingested repeatedly.

RSS download history is capped at **1000 entries**.

* When the history grows past 1000, the **oldest entries are pruned** first.
* This limit applies to persisted runtime history in `persistence/rss.toml`.

</details>

## 🧩 Shared Configurations & Cluster Mode

Shared mode gives you an OS- and machine-agnostic torrent catalog and settings
that live alongside your data on the NAS or shared root. Any Superseedr client
that mounts that shared root can connect and reuse the same catalog in real time.
Superseedr CLI commands work against that shared config both online and offline. See
[`docs/shared-config.md`](docs/shared-config.md) for the full shared-mode guide.

```text
Same shared root, different local mount paths

NAS
/shared/superseedr
├─ superseedr-config/
│  ├─ settings.toml
│  ├─ catalog.toml
│  └─ ...
└─ video1.mkv

macOS
$ superseedr set-shared-config /Volumes/superseedr-mount
$ superseedr
/Volumes/superseedr-mount
├─ superseedr-config/
│  ├─ settings.toml
│  ├─ catalog.toml
│  └─ ...
└─ video1.mkv

Windows
> superseedr set-shared-config "X:\superseedr-mount"
> superseedr
X:\superseedr-mount
├─ superseedr-config\
│  ├─ settings.toml
│  ├─ catalog.toml
│  └─ ...
└─ video1.mkv
```

Cluster mode turns that shared catalog into an active multi-node setup. One node
acts as leader and updates shared desired state, while other nodes stay online
as followers that continue seeding and apply the leader-written catalog in real
time. If the leader goes away, another node can take over automatically, and
each host can mount the same shared root at a different local path for cross-OS
operation.

```text
                    Shared Root / NAS
                      /shared/superseedr
                  ┌───────────────────────┐
                  │ superseedr-config/    │
                  │ settings.toml         │
                  │ catalog.toml          │
                  │ inbox/                │
                  │ hosts/                │
                  └───────────────────────┘
                          ↑        ↑
                          │        │
                       Leader   Follower

       ┌──────────────────────┐    ┌──────────────────────┐
       │ Windows              │    │ macOS                │
       │ X:\superseedr-mount  │    │ /Volumes/superseedr- │
       │                      │    │ mount                │
       └──────────────────────┘    └──────────────────────┘
```


## 🧠 Advanced: Architecture & Engineering

Superseedr is built on a **Reactive Actor** architecture verified by model-based fuzzing, ensuring stability under chaos. It features a **Self-Tuning Resource Allocator** that adapts to your hardware in real time and a hybrid **BitTorrent v2** engine, all powered by asynchronous **Tokio** streams for maximum throughput.

<details>
<summary><strong>Click to expand technical internals</strong></summary>

This section is designed for developers, contributors, and AI agents seeking to understand the internal design decisions that drive Superseedr's performance.

### ⚡ Async Networking Core
Superseedr is built on the **Tokio** runtime, leveraging asynchronous I/O for maximum concurrency.
* **Full-Duplex Streams:** Every peer connection is split into independent **Reader** and **Writer** tasks (`tokio::io::split`). This allows the client to saturate download and upload bandwidth simultaneously without thread blocking or lock contention, ensuring the UI remains responsive even with thousands of active connections.
* **Actor-Based Session Management:** Each peer operates as an isolated Actor. Communication between the network socket and the core logic happens exclusively via `mpsc` channels, meaning a slow or misbehaving peer cannot block the main event loop or affect other connections.
* **Hot-Swappable Listeners:** The application runs an async file watcher (`notify`) on the VPN configuration volume. When **Gluetun** rotates the forwarded port, Superseedr detects the file change and instantly rebinds the TCP listener to the new port without dropping the swarm state or restarting the process.

### DHT Runtime & Demand Planner
Superseedr ships a first-party Mainline DHT implementation instead of treating DHT as a black-box peer source.
* **Dual-Stack Runtime:** The internal runtime maintains IPv4 and IPv6 UDP transports, routing tables, peer storage, bootstrap state, and rotating announce tokens while serving inbound `find_node`, `get_peers`, and `announce_peer` traffic.
* **Client-Aware Demand:** Torrent managers feed demand state and live swarm metrics into the DHT service. The planner prioritizes metadata recovery and peer-starved torrents first, then spends additional query budget on active swarms that are still producing useful peers.
* **Pause/Resumable Crawls:** Lookup slices can be parked when their wall-time budget expires, preserving traversal state instead of throwing away the crawl frontier. Later planner slices can resume the crawl from the saved state, while the drain path still captures late peers from in-flight queries.
* **Adaptive Query Pressure:** DHT work is bounded by lookup slots, per-class budgets, late-peer drain handling, and peer-slot pressure. When the client is full, DHT power can ramp down quickly; when capacity returns, it ramps back up gradually.
* **Protocol Hardening:** The runtime validates response sources, filters unroutable nodes, tracks suspicious identity churn, rate-limits inbound KRPC traffic, and keeps DHT participation disabled entirely in private builds.
* **Deterministic Verification:** Planner and runtime reducers are covered by deterministic replay tests, invariant checks, and property tests for lookup traversal, scheduling, demand selection, drain behavior, and peer-pressure scaling.

### 🔒 Security & Privacy Engineering
* **VPN Isolation (Kill-Switch):** In the Docker Compose setup, Superseedr's network stack is fully routed through **Gluetun**. This guarantees that 100% of BitTorrent traffic traverses the VPN tunnel. If the tunnel drops, connectivity is cut immediately, preventing any IP leakage over the host connection.
* **Binary-Level Private Mode:** Private tracker compliance is enforced at compile time, not just runtime. By building with `--no-default-features`, the DHT and Peer Exchange (PEX) modules are completely excluded from the binary, guaranteeing zero leakage of private swarms.

### 🏗️ Reactive Actor Model & Verification
The application logic abandons traditional mutex-heavy threading in favor of a **Functional Reactive** architecture.
* **Deterministic State Machine:** The `TorrentManager` operates as a Finite State Machine (FSM). External events (Network I/O, Timer Ticks) are translated into `Action` enums, processed purely in memory, and produce a list of `Effects`.
* **Chaos Engineering:** We validate this core logic using **Model-Based Fuzzing** (via Proptest). Our test suite injects deterministic faults to verify correctness under hostile conditions:
  * **Network Chaos:** Simulates **Packet Loss** (dropped actions), **High Latency** (reordered actions), and **Duplication** (ghost packets).
  * **Malicious Peers:** Fuzzers act as "Bad Actors" that send protocol violations, infinite byte-streams, and out-of-bounds requests to ensure the engine punishes them without crashing.

### 🤖 Self-Tuning Resource Allocator
Instead of static `ulimit` values, Superseedr runs a **Stochastic Hill Climbing** optimizer in the background.
* **The Loop:** Every 90 seconds, it randomly reallocates internal permits among competing resources—**Peer Sockets**, **Disk Read Slots**, and **Disk Write Slots**—to find the local maximum for performance.
* **Universal Optimization:** This algorithm dynamically discovers the optimal configuration for *any* combination of hardware (SSD vs HDD) and network environment (Home Fiber vs Datacenter), automatically scaling concurrency to match capacity.
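One optimizer step can be sketched as follows. This is a minimal, hypothetical illustration of hill climbing over permit pools; the real tuner's scoring signals and permit plumbing are not shown, and all names here are illustrative:

```rust
/// Permit pools, indexed as: 0 = peer sockets, 1 = disk read slots, 2 = disk write slots.
type Permits = [usize; 3];

/// One stochastic hill-climbing step: try moving one permit from pool
/// `from` to pool `to`; keep the candidate only if the measured score
/// improves, otherwise stay at the current allocation.
fn hill_climb_step(
    current: Permits,
    from: usize,
    to: usize,
    score: impl Fn(Permits) -> f64,
) -> Permits {
    if from == to || current[from] == 0 {
        return current;
    }
    let mut candidate = current;
    candidate[from] -= 1;
    candidate[to] += 1;
    if score(candidate) > score(current) {
        candidate // improvement: adopt the new allocation
    } else {
        current // regression: revert and keep searching
    }
}
```

Repeating this step with randomly chosen `from`/`to` pairs walks the allocation toward a local maximum of the score function.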

### 📡 Integrity Prober
Superseedr automatically and continuously checks completed torrents in the background without falling back to blunt full-library rescans.
* **Designed for Scale:** Integrity work is split into small bounded batches, keeping checks cheap even across very large collections.
* **Fast Fault Detection:** Foreground disk-read failures immediately trigger targeted recovery reprobes, surfacing missing or damaged data quickly.
* **No-Config Recovery:** Healthy torrents are monitored automatically, while unavailable torrents are prioritized for fast recovery detection without extra setup.

### 🧮 Statistical Engine
Superseedr calculates granular metrics in real-time to drive optimization and observability:
* **IOPS & Latency:** Tracks instantaneous Input/Output Operations Per Second and uses an Exponential Moving Average (EMA) to calculate precise Read/Write latency (ms). This helps distinguish between bandwidth limits and disk saturation.
* **Disk Thrash Score:** Measures physical disk head movement using `Sum(|Offset - PrevOffset|) / Ops`. This detects random I/O bottlenecks that raw speed metrics miss.
* **Seek Cost per Byte (SCPB):** Calculates the "expense" of I/O relative to throughput (`TotalSeekDistance / TotalBytes`), serving as the primary penalty factor for the self-tuner.
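The two seek-based formulas can be sketched directly from their definitions. This sketch assumes "Ops" counts the seeks between consecutive operations; the real engine's accounting may differ:

```rust
/// Disk Thrash Score: Sum(|offset - prev_offset|) / ops,
/// computed over a window of byte offsets issued to the disk.
fn thrash_score(offsets: &[u64]) -> f64 {
    if offsets.len() < 2 {
        return 0.0;
    }
    let total_seek: u64 = offsets
        .windows(2)
        .map(|w| w[1].abs_diff(w[0]))
        .sum();
    total_seek as f64 / (offsets.len() - 1) as f64
}

/// Seek Cost per Byte (SCPB): total seek distance / total bytes moved.
fn seek_cost_per_byte(total_seek_distance: u64, total_bytes: u64) -> f64 {
    if total_bytes == 0 {
        return 0.0;
    }
    total_seek_distance as f64 / total_bytes as f64
}
```

Sequential I/O yields a thrash score near the operation size, while random I/O across a large file produces a much higher score for the same throughput.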

### ♟️ Protocol Algorithms
Superseedr implements optimized versions of the core BitTorrent exchange strategies:
* **Selective & Priority Downloading:** Support for file-level priority (Skip, Normal, High). The engine maps file boundaries to pieces, prioritizing high-value data while ensuring shared boundary pieces are handled correctly to prevent corruption.
* **Rarest-First Piece Selection:** The client continuously tracks piece availability across the swarm, prioritizing rare pieces to prevent "swarm starvation" and ensure redundant availability.
* **Tit-for-Tat Choking:** The choking algorithm uses a robust Tit-for-Tat strategy (reciprocation), rewarding peers who provide the highest bandwidth while optimistically unchoking new peers to discover better connections.
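The core of rarest-first selection can be sketched in a few lines. This is an illustrative reduction, assuming an availability count per piece is already tracked from peer bitfields and `have` messages:

```rust
/// Rarest-first selection (sketch): among pieces we still need,
/// pick the one with the lowest swarm availability.
/// `availability[piece]` is how many connected peers have that piece.
fn pick_rarest(needed: &[usize], availability: &[u32]) -> Option<usize> {
    needed
        .iter()
        .copied()
        .min_by_key(|&piece| availability[piece])
}
```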

### 🔬 Unique Visualizations & UX
Superseedr includes specialized TUI components (`src/tui/view.rs`) to visualize data usually hidden by other clients:
* **Integrated File Explorer:** A custom, navigable filesystem browser that provides instant previewing of `.torrent` file contents and internal directory structures before the download begins.
* **Block Particle Stream:** A vertical "Matrix-style" flow visualizing individual 16KiB data blocks entering (Blue) or leaving (Green).
* **Peer Lifecycle Scatterplot:** Tracks the exact moment peers are Discovered, Connected, and Disconnected to visually diagnose swarm "churn."
* **Backpressure Markers:** The network graph overlays red "Backpressure Events" whenever the self-tuner detects a system limit (e.g., file descriptors), proving the engine is actively managing load.

### 🧬 Hybrid BitTorrent v2 (BEP 52)
Superseedr implements the full **Merkle Tree** verification stack required for BitTorrent v2.
* **Block-Level Validation:** Incoming data is hashed and verified at the 16KiB block level using Merkle Proofs, allowing for the immediate rejection of corrupt data before it is written to disk.
* **Hybrid Swarms:** The client handles `VerifyPieceV2` effects to simultaneously handshake with legacy v1 peers (SHA-1) and modern v2 peers (SHA-256).

### 🛡️ Backpressure & Flow Control
* **Persistent Retries with Backoff:** Critical I/O operations (like disk writes) are protected by an exponential backoff retry mechanism (jittered), ensuring transient system locks or busy disks don't crash the download session.
* **Adaptive Pipelining:** The `PeerSession` uses a dynamic sliding window (AIMD-like algorithm) that expands or shrinks the request queue based on the peer's real-time response rate (`blocks_received_interval`), maximizing link saturation.
* **Token Buckets:** Global bandwidth is shaped via a hierarchical Token Bucket algorithm that enforces rate limits without blocking async executors.
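The token-bucket accounting at the heart of that shaping can be sketched as below. The real implementation is hierarchical and async-aware; this elapsed-time-driven sketch shows only the core refill-and-spend logic:

```rust
/// Minimal token bucket (sketch): tokens refill at `rate` per second,
/// capped at `capacity`; a send is allowed only if enough tokens remain.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    rate: f64, // tokens (bytes) per second
}

impl TokenBucket {
    fn new(capacity: f64, rate: f64) -> Self {
        Self { capacity, tokens: capacity, rate }
    }

    /// Refill based on elapsed seconds, then try to spend `cost` tokens.
    fn try_consume(&mut self, elapsed_secs: f64, cost: f64) -> bool {
        self.tokens = (self.tokens + self.rate * elapsed_secs).min(self.capacity);
        if self.tokens >= cost {
            self.tokens -= cost;
            true
        } else {
            false
        }
    }
}
```

Because a failed `try_consume` leaves the bucket untouched, callers can await the next refill instead of blocking an executor thread.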

### 📜 Key Standards Compliance
Superseedr implements the following BitTorrent Enhancement Proposals (BEPs):
* **BEP 3:** The BitTorrent Protocol Specification
* **BEP 5:** DHT Protocol (Mainline)
* **BEP 9:** Extension for Peers to Send Metadata Files (Magnet Links)
* **BEP 10:** Extension Protocol
* **BEP 11:** Peer Exchange (PEX)
* **BEP 19:** WebSeed - HTTP/FTP Seeding
* **BEP 52:** The BitTorrent Protocol v2

</details>







================================================
FILE: agentic_plans/cargo_dependency_assessment_2026-03-12.md
================================================
# Cargo Dependency Assessment

## Summary
This note evaluates every direct dependency in `Cargo.toml` with three questions in mind:
- can we remove it outright
- can we rewrite the small bit of functionality locally
- if we remove it, how much of the current Cargo graph actually disappears

The highest-value realistic cleanup candidates are:
- `figment`: only used in [`src/config.rs`](../src/config.rs), but it pulls in an older `toml` stack and 11 likely-exclusive lockfile crates
- `clap`: the CLI surface is small in [`src/main.rs`](../src/main.rs) and [`src/integrations/cli.rs`](../src/integrations/cli.rs), and removal would likely drop 12 exclusive crates
- `tracing-appender`: only initialized in [`src/main.rs`](../src/main.rs); removing it would likely drop 8 exclusive crates if simpler logging is acceptable
- `tokio-stream`: only used for `StreamExt` in [`src/torrent_manager/manager.rs`](../src/torrent_manager/manager.rs); low code impact, but almost no graph win because its transitive crates are already shared
- `data-encoding`, `urlencoding`, `hex`, `magnet-url`: all are replaceable with local helpers, though only `magnet-url` changes meaningful parsing behavior

The highest-value optional feature cut is:
- `mainline`: currently enabled through the default `dht` feature; removing it would likely drop 21 exclusive crates, but it also removes DHT peer discovery

The biggest dependency by graph weight is:
- `reqwest`: 109 reachable transitive crates and 32 likely-exclusive ones, but it is used across tracker HTTP, RSS fetching, and web seeds, so this is a strategic rewrite rather than a practical quick win

## Method
Counts below were gathered from the local lockfile and `cargo tree --offline` on March 12, 2026.

Two graph numbers are listed:
- `Reachable`: unique transitive crates reachable from that direct dependency in the current resolved graph
- `Exclusive`: crates that appear to be reachable only from that direct dependency among the current direct dependencies, so they are the best estimate of what really disappears from `Cargo.lock` if the dependency goes away

These numbers are directional, not a perfect build-size model:
- feature changes can materially change compile cost without changing the crate count much
- some crates are shared through multiple direct dependencies, so removing a dependency may simplify the manifest without shrinking the lockfile much

## Best Next Steps

### Phase 1: Low-Risk Manifest Cleanup
- Remove or rewrite `tokio-stream` by replacing the single `StreamExt` use in [`src/torrent_manager/manager.rs`](../src/torrent_manager/manager.rs).
- Replace `data-encoding` with a tiny local base32 helper for magnet info-hash decoding in [`src/app.rs`](../src/app.rs).
- Replace `urlencoding` with a local percent-decoder helper around the single magnet/query decode path in [`src/app.rs`](../src/app.rs) and [`src/torrent_manager/manager.rs`](../src/torrent_manager/manager.rs).
- Decide whether `hex` is worth localizing. It has no transitive cost, but it is used often enough that a local helper could remove a direct dependency with predictable code churn.
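A local percent-decoder of the kind Phase 1 proposes could look like this. It is a sketch, not the planned implementation, and it decodes strictly (returning `None` on malformed escapes) rather than matching `urlencoding`'s exact lossy behavior:

```rust
/// Minimal strict percent-decoder (sketch) for the magnet/query decode path.
/// Returns None on truncated or non-hex escapes and on invalid UTF-8 output.
fn percent_decode(input: &str) -> Option<String> {
    let bytes = input.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'%' {
            // Take the two hex digits following '%'.
            let hex = std::str::from_utf8(bytes.get(i + 1..i + 3)?).ok()?;
            out.push(u8::from_str_radix(hex, 16).ok()?);
            i += 3;
        } else {
            out.push(bytes[i]);
            i += 1;
        }
    }
    String::from_utf8(out).ok()
}
```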

### Phase 2: Medium-Value Simplification
- Replace `clap` with a hand-rolled parser if the CLI remains just:
  - optional positional input
  - `add`
  - `stop-client`
- Replace `figment` with explicit `toml` loading plus environment overlay in [`src/config.rs`](../src/config.rs). This is the cleanest way to remove the duplicate `toml 0.8` stack.
- Replace `tracing-appender` if daily file rotation and non-blocking logging are not important enough to justify their support crates.
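A hand-rolled parser for the small CLI surface listed above could be sketched as follows. The `Command` enum and error strings are hypothetical; only the three-command surface comes from the plan:

```rust
/// Sketch of a clap replacement for a CLI with an optional positional
/// input, `add <input>`, and `stop-client`.
#[derive(Debug, PartialEq)]
enum Command {
    Launch { input: Option<String> },
    Add { input: String },
    StopClient,
}

fn parse_args(args: &[String]) -> Result<Command, String> {
    match args {
        [] => Ok(Command::Launch { input: None }),
        [cmd, rest @ ..] => match (cmd.as_str(), rest) {
            ("add", [input]) => Ok(Command::Add { input: input.clone() }),
            ("add", _) => Err("add expects exactly one input".into()),
            ("stop-client", []) => Ok(Command::StopClient),
            ("stop-client", _) => Err("stop-client takes no arguments".into()),
            // Any other single argument is the optional positional input.
            (input, []) => Ok(Command::Launch { input: Some(input.to_string()) }),
            _ => Err(format!("unrecognized arguments: {args:?}")),
        },
    }
}
```

In `main`, this would be fed `std::env::args().skip(1).collect::<Vec<_>>()`.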

### Phase 3: Product-Level Decisions
- Consider making `dht` opt-in instead of default if private-tracker or minimal builds matter more than automatic peer discovery.
- Only target `reqwest` or `feed-rs` if we are willing to narrow product scope or accept a fairly invasive rewrite.

## Version And Feature Notes
- `ratatui 0.29.0` currently pulls `crossterm 0.28.1`, while the app directly depends on `crossterm 0.29.0`. Even if we keep both crates conceptually, version alignment is worth checking because it may remove one duplicate branch of the graph.
- `ratatui` also pulls `strum 0.26.3` and `strum_macros 0.26.4`, while the app directly depends on `strum 0.27.2` and `strum_macros 0.27.2`. Removing the direct deps alone will not fully remove the strum family from the lockfile.
- `figment` pulls `toml 0.8.23`, while the app also directly depends on `toml 0.9.11`. Replacing `figment` is the clearest duplicate-stack win in the manifest.
- `tokio` is configured with `features = ["full", "test-util"]`. Even if we keep `tokio`, narrowing that feature list is likely worth a follow-up pass.
- `reqwest` is using default features plus `json`. If dependency weight becomes important, this crate is the best place to investigate `default-features = false` and a narrower transport or TLS choice.

## Full Assessment

| Dependency | Main usage in repo | Reachable | Exclusive | Recommendation | Impact if removed or rewritten |
| --- | --- | ---: | ---: | --- | --- |
| `reqwest` | Tracker HTTP, RSS fetch, web seeds in `src/app.rs`, `src/integrations/rss_service.rs`, `src/tracker/client.rs`, `src/networking/web_seed_worker.rs` | 109 | 32 | Keep for now. Biggest graph target, but not a near-term cleanup. | High. Touches multiple subsystems and networking behavior. |
| `sha1` | v1 piece hashing, magnet or file hashes in `src/app.rs`, `src/integrations/*`, `src/torrent_manager/*` | 7 | 0 | Keep. | High. Required for BitTorrent v1 behavior. |
| `sha2` | v2 piece hashing and Merkle logic in `src/app.rs`, `src/torrent_manager/*` | 7 | 0 | Keep. | High. Required for BitTorrent v2 behavior. |
| `tokio` | Runtime backbone across app, networking, storage, TUI, RSS | 23 | 0 | Keep, but trim features later. | Very high. Core async runtime. |
| `tokio-stream` | Single `StreamExt` usage in `src/torrent_manager/manager.rs` | 27 | 0 | Good low-risk removal candidate. | Low. Likely one small refactor. |
| `thiserror` | Error derives in `src/errors.rs` and `src/resource_manager.rs` | 5 | 0 | Keep unless we want manual error impls. | Low to medium code churn for little graph gain. |
| `tracing` | Logging and instrumentation across most runtime modules | 8 | 0 | Keep. | High. Cross-cutting diagnostics. |
| `tracing-subscriber` | Logger setup in `src/main.rs` | 13 | 0 | Keep unless logging setup is simplified at the same time as `tracing-appender`. | Medium. One file, but user-visible logging behavior changes. |
| `tracing-appender` | Rolling file logging in `src/main.rs` | 28 | 8 | Good medium-value rewrite candidate. | Medium. Replace with simpler file writer or stdout-only logging. |
| `serde` | Serialization for config, protocol, persistence, torrent metadata | 6 | 0 | Keep. | Very high. Serialization foundation. |
| `serde_bencode` | Torrent parsing and wire extensions in `src/networking/*`, `src/torrent_file/*`, `src/torrent_manager/*`, `src/tracker/*` | 8 | 0 | Keep. | High. Deep protocol coupling. |
| `serde_bytes` | Compact byte-field serde in protocol, torrent, and tracker structs | 1 | 0 | Keep. | Medium. Small crate, low savings. |
| `magnet-url` | Magnet parsing in `src/app.rs` and `src/torrent_manager/manager.rs` | 0 | 0 | Rewrite candidate if we only need a narrow subset of magnet semantics. | Medium. Feasible local parser, but correctness matters. |
| `mainline` | DHT peer discovery in `src/app.rs` and `src/torrent_manager/*` | 46 | 21 | Keep as long as `dht` stays a default feature. Biggest optional feature cut. | High product impact. Removes DHT behavior. |
| `data-encoding` | Single base32 decode path in `src/app.rs` | 0 | 0 | Excellent tiny rewrite candidate. | Low. A small helper can replace it. |
| `urlencoding` | Single percent-decode path in magnet handling | 0 | 0 | Excellent tiny rewrite candidate. | Low. A small helper can replace it. |
| `crossterm` | Terminal mode and event handling in `src/main.rs`, `src/app.rs`, TUI modules | 21 | 3 | Keep, but investigate version alignment with `ratatui`. | High. Core terminal integration. |
| `ratatui` | Entire TUI rendering, layout, and widget stack | 46 | 24 | Keep. | Very high. This is the TUI. |
| `rand` | Test helpers, IDs, and small runtime randomness across app and TUI | 6 | 4 | Keep unless we want deterministic local helpers. | Low to medium. Savings are modest. |
| `directories` | App, watch, and config directory resolution in `src/config.rs`, `src/app.rs`, `src/main.rs`, `src/tui/screens/config.rs` | 5 | 2 | Possible rewrite candidate, but not urgent. | Medium. Cross-platform path logic would move in-house. |
| `toml` | Persisted settings and state read or write in `src/config.rs`, `src/persistence/*` | 6 | 4 | Keep. | Medium to high. Straightforward, but used in multiple persistence paths. |
| `hex` | Info-hash and digest encode or decode across app, integrations, telemetry, torrent manager, and TUI | 0 | 0 | Easy to rewrite locally if we want one less direct dep. | Medium only because there are many call sites. |
| `sysinfo` | Process, CPU, and memory telemetry in `src/app.rs` and `src/telemetry/ui_telemetry.rs` | 19 | 9 | Optional rewrite candidate if runtime telemetry becomes less important. | Medium. Feature is isolated, but cross-platform telemetry is annoying to own. |
| `strum` | Enum iteration traits in `src/networking/protocol.rs`, `src/theme.rs`, `src/tui/screens/normal.rs` | 0 | 0 | Remove only together with `strum_macros` if we are willing to hand-write enum lists. | Low to medium. Little graph win. |
| `strum_macros` | Enum derives in `src/app.rs`, `src/config.rs`, `src/networking/protocol.rs`, `src/theme.rs` | 5 | 0 | Same as `strum`: only worth removing as a pair. | Low to medium. Manual enum maintenance cost goes up. |
| `figment` | Config loading and env overlay in `src/config.rs` | 22 | 11 | Best medium-value rewrite candidate. | Medium. Localized to config loading and removes duplicate TOML machinery. |
| `notify` | Watch-folder monitoring in `src/app.rs` and `src/integrations/watcher.rs` | 12 | 4 | Keep unless we want polling or OS-specific watcher code. | Medium to high. File watching is user-visible and cross-platform. |
| `clap` | CLI parsing in `src/main.rs` and `src/integrations/cli.rs` | 21 | 12 | Best medium-value rewrite candidate if CLI stays small. | Medium. Localized parser rewrite. |
| `rlimit` | FD and resource limit tuning in `src/app.rs` | 1 | 0 | Keep unless we are comfortable dropping this tuning on some platforms. | Low. Savings are tiny. |
| `fuzzy-matcher` | Search and filter ranking in `src/app.rs`, `src/integrations/rss_service.rs`, `src/tui/screens/rss.rs` | 2 | 0 | Possible rewrite candidate if substring match is acceptable. | Medium. Behavior quality may regress. |
| `chrono` | Timestamp formatting and RSS or UI date handling in `src/config.rs`, `src/integrations/rss_service.rs`, `src/tui/screens/*` | 9 | 0 | Keep. | Medium. Replaceable, but not a clean win. |
| `serde_json` | Status output and theme serialization tests | 4 | 0 | Keep. | Low. Tiny shared crate with clear purpose. |
| `feed-rs` | RSS parsing in `src/integrations/rss_service.rs` | 53 | 6 | Keep unless we intentionally narrow RSS support. | Medium to high. Only one call site, but the parser is doing real protocol work. |
| `regex` | RSS or config validation and filtering in `src/integrations/rss_service.rs`, `src/config.rs`, `src/tui/screens/rss.rs` | 4 | 0 | Keep. | Medium. Shared and low-cost. |

## Prioritized Recommendation
If the goal is to reduce dependency count without destabilizing the product, the best order is:
1. `tokio-stream`
2. `data-encoding`
3. `urlencoding`
4. `figment`
5. `clap`
6. `tracing-appender`

If the goal is to shrink the overall dependency graph the most, the biggest levers are:
1. `reqwest` by far, but only with a major networking rewrite
2. `mainline`, but only by changing the default DHT product behavior
3. `ratatui`, which is not a practical removal target unless the app stops being a TUI
4. `figment` and `clap`, which are the most realistic graph wins

## Current Position
The manifest does not look bloated in a random way. Most direct dependencies map to real product surface area. The strongest cleanup story is not "delete lots of crates"; it is:
- remove the tiny one-off helpers first
- rewrite `figment` and possibly `clap`
- decide deliberately whether DHT and rolling file logging are product priorities
- investigate version and feature alignment before attempting any large networking rewrite


================================================
FILE: agentic_plans/cli_control_status_testing.md
================================================
# Shared-Config CLI Feature Validation Matrix: codex/unified-config

## Purpose

This is a focused validation plan for the current shared-config CLI surface in this branch.

It is not a full regression plan.

This plan validates:
- normal offline CLI behavior
- shared-config activation and precedence
- launcher shared-config commands
- launcher host-id commands
- standalone/shared conversion commands
- shared-mode read and mutating CLI commands
- shared offline CLI behavior with no leader running
- optional concurrent leader/follower shared behavior
- node-cluster failover behavior after leadership transfer
- docs matching the current CLI and shared-layout behavior

This plan does not require:
- a full download lifecycle
- tracker correctness
- deep TUI walkthroughs outside journal/status spot checks

## Core Execution Rule

- Test the checked-out code with `cargo run`, not an installed global binary.
- Prefer `cargo run -- <args>` for all CLI validation.
- Prefer env-prefixed `cargo run -- <args>` for shared-mode validation.
- Do not assume an old launcher sidecar or a previously running runtime reflects the intended test setup.

## Workspace And Shared Root Rules

- Use `./tmp/` as the default shared mount root.
- Treat `./tmp/` as both scratch space and the shared-root mount for the local round.
- Do not scatter temporary artifacts elsewhere in the repo.
- Do not commit `./tmp/` contents.
- If testing against a real mounted shared volume, create a dedicated test subfolder inside that mounted volume and use that subfolder as the shared mount root.
- Do not point tests at the root of a production or long-lived shared volume.

Examples of acceptable mounted-volume test roots:
- `X:\superseedr-test-round\`
- `/Volumes/seedbox/superseedr-test-round/`
- `/mnt/shared-drive/superseedr-test-round/`

Recommended layout:
- `./tmp/superseedr-config/hosts/`
- `./tmp/superseedr-config/inbox/`
- `./tmp/superseedr-config/processed/`
- `./tmp/superseedr-config/status/`
- `./tmp/superseedr-config/torrents/`
- `./tmp/superseedr-config/journal/`
- `./tmp/evidence/`
- `./tmp/reports/`

## Human Operator Preflight

Before recording any results, the human operator should set up the cluster intentionally.

Required preflight checks:
- pick one shared mount root and reuse it consistently for the whole round
- if using a mounted volume, create a dedicated test folder inside that volume first
- confirm every runtime can read and write that same shared root
- assign distinct host ids for each runtime, for example `host-a` and `host-b`
- decide whether the phase is testing:
  - shared offline mutation with no leader running
  - shared online behavior with one leader running
  - optional concurrent leader/follower behavior
- confirm which runtime is expected to become leader first

Recommended setup sequence:
1. Clear launcher sidecars unless the specific test is about them:
   - `cargo run -- clear-shared-config`
   - `cargo run -- clear-host-id`
2. Set or export the intended shared root and host id explicitly for each shell.
3. Start only the runtime needed for that phase.
4. Confirm leader/follower state before issuing mutating CLI commands.
5. Record the exact shared root path, host id, and whether a leader was already running.

Do not treat stale launcher sidecars, a forgotten local runtime, or mismatched host ids as acceptable setup.

## Shared Mode With Env Vars

Use env-driven launches for the main validation flow. Do not use launcher persistence as the default activation path.

Unix-like examples:
- `SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp" cargo run -- show-shared-config`
- `SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp" SUPERSEEDR_SHARED_HOST_ID="host-a" cargo run -- show-host-id`
- `SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp" SUPERSEEDR_SHARED_HOST_ID="host-a" cargo run -- status`
- `SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp" SUPERSEEDR_SHARED_HOST_ID="host-a" cargo run -- add "magnet:?xt=..."`

PowerShell:
- `$env:SUPERSEEDR_SHARED_CONFIG_DIR = "$PWD\tmp"`
- `$env:SUPERSEEDR_SHARED_HOST_ID = "host-a"`
- `cargo run -- show-shared-config`
- `cargo run -- show-host-id`

Expected env-driven result:
- `show-shared-config` reports source `env`
- mount root resolves to `./tmp`
- config root resolves to `./tmp/superseedr-config`
- `show-host-id` reports source `env`

## Launcher And Host-ID Precedence

Shared-config precedence:
1. `SUPERSEEDR_SHARED_CONFIG_DIR`
2. persisted launcher shared-config sidecar
3. normal mode

Host-id precedence:
1. `SUPERSEEDR_SHARED_HOST_ID`
2. persisted launcher host-id sidecar
3. hostname or default fallback
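The host-id precedence above reduces to a simple resolution function. This is an illustrative sketch (names and the returned source labels are assumptions), not the actual launcher code:

```rust
/// Resolve the effective host id and report which source supplied it,
/// following the documented precedence: env var, then persisted
/// launcher sidecar, then hostname/default fallback.
fn resolve_host_id(
    env_value: Option<String>,
    sidecar_value: Option<String>,
    hostname: impl FnOnce() -> String,
) -> (String, &'static str) {
    if let Some(id) = env_value {
        (id, "env")
    } else if let Some(id) = sidecar_value {
        (id, "sidecar")
    } else {
        (hostname(), "hostname")
    }
}
```

The shared-config directory would resolve the same way, with `SUPERSEEDR_SHARED_CONFIG_DIR` in place of `SUPERSEEDR_SHARED_HOST_ID`.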

## Required Test Data

Prepare only what is needed:
- at least one reusable `.torrent` fixture from `integration_tests/` if present
- at least one fabricated magnet string for queue/routing validation if needed
- one shared root at `./tmp`

If only a fabricated magnet is used, record clearly that this validates routing and queueing only.

## Command Matrix

Columns:
- Single Shared Offline: shared env vars set, no running runtime
- Single Shared Online: shared env vars set, one running shared runtime
- Cluster Shared Online: two runtimes on the same shared root
- Cluster After Failover: commands run after the original leader stops and another node takes leadership
- Required: `Yes` means required for this plan; `Optional` means run only if the environment supports it
- Validation Goal: what is being proven

| Command | Single Shared Offline | Single Shared Online | Cluster Shared Online | Cluster After Failover | Required | Validation Goal |
|---|---:|---:|---:|---:|---|---|
| show-shared-config | Yes | Yes | Yes | Yes | Yes | Shared-config selection and precedence are reported correctly |
| set-shared-config | N/A | N/A | N/A | N/A | Yes | Launcher shared-config persistence works |
| clear-shared-config | N/A | N/A | N/A | N/A | Yes | Launcher shared-config clear works |
| show-host-id | Yes | Yes | Yes | Yes | Yes | Host-id selection and precedence are reported correctly |
| set-host-id | N/A | N/A | N/A | N/A | Yes | Launcher host-id persistence works |
| clear-host-id | N/A | N/A | N/A | N/A | Yes | Launcher host-id clear works |
| to-shared | N/A | N/A | N/A | N/A | Yes | Standalone config converts into layered shared config |
| to-standalone | N/A | N/A | N/A | N/A | Yes | Active shared config converts into standalone config |
| add | Yes | Yes | Yes | Yes | Yes | Shared add routing uses shared inbox path |
| status | Yes | Yes | Yes | Yes | Yes | Shared-mode status works in text and JSON |
| journal | Yes | Yes | Yes | Yes | Yes | Shared-mode journal merges shared commands and host-local health |
| torrents | Yes | Yes | Yes | Yes | Yes | Shared-mode torrent listing works |
| info | Yes | Yes | Yes | Yes | Yes | Shared-mode torrent detail lookup works |
| files | Yes | Yes | Yes | Yes | Yes | Shared-mode file listing works when metadata/source is available |
| pause | Yes | Yes | Yes | Yes | Yes | Shared-mode control path works |
| resume | Yes | Yes | Yes | Yes | Yes | Shared-mode control path works |
| remove | Yes | Yes | Yes | Yes | Yes | Shared-mode control path works |
| purge | Yes | Yes | Yes | Yes | Yes | Shared-mode control path works, including immediate offline purge when resolvable |
| priority | Yes | Yes | Yes | Yes | Yes | Shared-mode file-priority path works |
| stop-client | No | Yes | Yes | Yes | Yes | Live runtime stop path works |

Notes:
- `N/A` means the command is not meaningfully an offline-vs-online runtime test and should be covered in its dedicated section.
- Cluster Shared Online is optional unless the environment supports two live runtimes.
- Cluster After Failover is optional unless the environment supports leadership transfer testing.
- For offline shared mutating commands, record whether no leader was running. That path now directly mutates shared config instead of only queueing.

## Validation Levels

For each command, record one or more of:
- accepted
- routed
- queued
- applied
- observed
- cluster-observed

A command should not be marked fully validated unless the report states which levels were observed.

## Phase 1: Environment, Precedence, And Layout

## 0. Offline Baseline Modes

These offline sections should be run before concurrent cluster testing.

## 0A. Normal Offline

### Goal
Prove that normal non-shared offline CLI behavior still works when no runtime is running.

### Operator setup
- ensure no Superseedr runtime is running
- ensure shared env vars are unset
- ensure launcher shared-config sidecar is cleared unless the test explicitly needs it

### Commands to cover
- `status`
- `journal`
- `torrents`
- `info`
- `files`
- `pause`
- `resume`
- `remove`
- `purge`
- `priority`

### Expected
- read commands operate on local standalone persisted state
- offline-capable mutating commands directly update local standalone config
- `purge` removes data immediately only when file layout is safely resolvable
- commands accepting `INFO_HASH_HEX_OR_PATH` should be spot-checked with:
  - direct info hash
  - reverse file-path lookup where a unique match exists

## 0B. Shared Offline (No Leader)

### Goal
Prove that shared-mode offline CLI behavior works when no leader is running.

### Operator setup
- ensure no shared runtime is running
- set shared env vars or launcher sidecars intentionally
- confirm the shared root is the expected one
- confirm no process currently holds leadership

### Commands to cover
- `show-shared-config`
- `show-host-id`
- `status`
- `journal`
- `torrents`
- `info`
- `files`
- `pause`
- `resume`
- `remove`
- `purge`
- `priority`

### Expected
- shared read commands operate on persisted shared state
- offline-capable mutating commands directly update shared config rather than merely queueing
- shared `journal` reflects host-local and shared entries from persisted files
- `purge` removes data immediately only when file layout is safely resolvable
- commands accepting `INFO_HASH_HEX_OR_PATH` should be spot-checked with:
  - direct info hash
  - reverse file-path lookup where a unique match exists

## 1. Env-Driven Shared Activation

### Goal
Prove that the branch enters shared mode from env vars without relying on persisted launcher config.

### Steps
1. Ensure launcher sidecars are cleared unless the phase explicitly needs them.
2. Ensure `SUPERSEEDR_SHARED_CONFIG_DIR` is unset and record baseline `cargo run -- show-shared-config`.
3. Run with `SUPERSEEDR_SHARED_CONFIG_DIR` set to the absolute path of `./tmp`.
4. Repeat with `SUPERSEEDR_SHARED_HOST_ID=host-a` and run `show-host-id`.

### Expected
- env-driven `show-shared-config` reports enabled with source `env`
- mount root is `./tmp`
- config root is `./tmp/superseedr-config`
- env-driven `show-host-id` reports `host-a`
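
The steps above can be sketched as a shell session. This is an illustrative operator transcript, not output captured from the app; the exact `show-shared-config` field wording may differ:

```shell
# Baseline: no env override, launcher sidecars cleared.
unset SUPERSEEDR_SHARED_CONFIG_DIR SUPERSEEDR_SHARED_HOST_ID
cargo run -- show-shared-config      # record the baseline (shared mode not enabled)

# Env-driven activation against ./tmp (absolute path).
export SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp"
cargo run -- show-shared-config      # expect: enabled, source env, mount root $(pwd)/tmp

# Env-driven host id.
export SUPERSEEDR_SHARED_HOST_ID=host-a
cargo run -- show-host-id            # expect: host-a
```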

## 2. Shared Root Normalization

### Goal
Prove that both mount-root and explicit `superseedr-config` forms resolve correctly.

### Steps
1. Run with `SUPERSEEDR_SHARED_CONFIG_DIR` pointing at the absolute path of `./tmp`.
2. Run again with `SUPERSEEDR_SHARED_CONFIG_DIR` pointing at the absolute path of `./tmp/superseedr-config`.
3. Compare `show-shared-config`.

### Expected
- both forms resolve correctly
- no duplicated nested config root appears
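
The normalization rule being tested can be sketched as a tiny helper. This is an assumption about the intended behavior, not the app's actual code: both the mount root and an explicit `superseedr-config` path should resolve to the same config root.

```shell
# Illustrative only: mirrors the expected normalization, not the real implementation.
resolve_config_root() {
  case "$(basename "$1")" in
    superseedr-config) printf '%s\n' "$1" ;;    # already the config root
    *) printf '%s/superseedr-config\n' "$1" ;;  # mount root: append the config dir
  esac
}

resolve_config_root /mnt/share/tmp                    # -> /mnt/share/tmp/superseedr-config
resolve_config_root /mnt/share/tmp/superseedr-config  # -> /mnt/share/tmp/superseedr-config
```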

## 3. Shared File Layout Smoke

### Goal
Prove that the branch creates and uses the expected shared layout.

### Steps
1. Launch once in env-driven shared mode.
2. Inspect `./tmp/superseedr-config/`.

### Expected
Relevant layout exists as needed:
- `hosts/`
- `inbox/`
- `processed/`
- `status/`
- `torrents/`
- `journal/`
- `settings.toml`
- `torrent_metadata.toml`
- `catalog.toml` if created by the exercised flow

## Phase 2: Single-Machine Shared CLI Matrix

Run these tests on one machine against `./tmp` as the shared root.

## 4. Shared Read Commands

### Commands
- `show-shared-config`
- `show-host-id`
- `status`
- `journal`
- `torrents`
- `info`
- `files`

### Required contexts
- offline shared CLI: required
- online shared runtime: required

### Expected
- each command runs successfully or fails with a correct and understandable reason
- output shape is correct in both text and JSON where supported
- read commands do not mutate unrelated shared state
- `journal` reflects merged shared-command entries plus host-local health entries
- `files` works when metadata or a locally readable torrent source is available, otherwise it returns a clear reason
- commands that accept `INFO_HASH_HEX_OR_PATH` should be tested with:
  - direct info hash
  - reverse file-path lookup where a unique match exists

## 5. Shared Mutating Commands

### Commands
- `add`
- `pause`
- `resume`
- `remove`
- `purge`
- `priority`
- `stop-client`

### Required contexts
- offline shared CLI: required for all except `stop-client`
- online shared runtime: required for all
- cluster shared online: optional unless environment supports it

### Expected
- each command reaches the correct shared-mode path
- when a leader is running, commands that should queue do queue to shared infrastructure
- when no leader is running, offline-capable commands directly mutate shared config through the offline path
- commands mutate shared or host-local state in the correct scope
- no command accidentally falls back to normal local routing

## 6. Add Routing Details

### Goal
Prove that add requests route into the shared inbox.

### Steps
1. In env-driven shared mode, run `cargo run -- add "<magnet>"`.
2. In env-driven shared mode, run `cargo run -- add "<torrent-path>"` using a reusable fixture from `integration_tests/` if present.
3. Inspect `./tmp/superseedr-config/inbox/`.

### Expected
- magnet add lands in the shared inbox
- torrent add lands in the shared inbox, typically as a `.path` file
- add does not use the normal local watch sink

### Required note
- If `cargo run -- add` was tested instead of positional direct input, record that clearly.
- If positional direct input was not tested, record that gap.
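
A sketch of the operator commands for this section; the magnet string and fixture name are placeholders, and the `.path` file name is an example of the expected inbox artifact:

```shell
export SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp"
cargo run -- add "magnet:?xt=urn:btih:<info-hash>"                   # magnet add
cargo run -- add "integration_tests/torrents/v1/<fixture>.torrent"   # torrent-file add
ls -l ./tmp/superseedr-config/inbox/                                 # expect queued entries, e.g. a .path file
```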

## 7. Host-ID Separation On One Machine

### Goal
Prove that host-scoped files separate correctly without requiring two concurrent machines.

### Steps
1. Run against `./tmp` with `SUPERSEEDR_SHARED_HOST_ID=host-a`.
2. Quit cleanly.
3. Run again against the same shared root with `SUPERSEEDR_SHARED_HOST_ID=host-b`.
4. Inspect:
   - `./tmp/superseedr-config/hosts/`
   - `./tmp/superseedr-config/status/`
   - `show-host-id` from each shell

### Expected
- `hosts/host-a/config.toml` and `hosts/host-b/config.toml` can coexist
- status files are host-separated when produced
- shared global files remain shared
- `show-host-id` reports the expected host id in each shell
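
A sketch of the two sequential runs, assuming env-driven activation (quit each runtime cleanly before starting the next):

```shell
export SUPERSEEDR_SHARED_CONFIG_DIR="$(pwd)/tmp"
SUPERSEEDR_SHARED_HOST_ID=host-a cargo run                    # first run, quit cleanly
SUPERSEEDR_SHARED_HOST_ID=host-b cargo run                    # second run, same shared root
ls ./tmp/superseedr-config/hosts/                             # expect host-a/ and host-b/
SUPERSEEDR_SHARED_HOST_ID=host-b cargo run -- show-host-id    # expect: host-b
```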

## 8. Launcher Commands

### Commands
- `set-shared-config`
- `clear-shared-config`
- `show-shared-config`
- `set-host-id`
- `clear-host-id`
- `show-host-id`

### Goal
Prove that launcher shared-config and host-id commands work without using them as the default activation path.

### Steps
1. Record baseline `show-shared-config` and `show-host-id`.
2. Run `cargo run -- set-shared-config <absolute-path-to-tmp>`.
3. Run `cargo run -- set-host-id host-a`.
4. Run `show-shared-config` and `show-host-id`.
5. Run `cargo run -- clear-shared-config`.
6. Run `cargo run -- clear-host-id`.
7. Run `show-shared-config` and `show-host-id` again.

### Expected
- `set-shared-config` works
- `show-shared-config` shows launcher after set
- `set-host-id` works
- `show-host-id` shows launcher after set
- `clear-shared-config` works
- `clear-host-id` works
- both show commands return to baseline after clear

## 9. Conversion Commands

### Commands
- `to-shared`
- `to-standalone`

### Goal
Prove that standalone local config can be converted into layered shared config and then flattened back into standalone config.

### Steps
1. Start from a clean standalone local config.
2. Run `cargo run -- to-shared <absolute-path-to-tmp>`.
3. Inspect `./tmp/superseedr-config/` and confirm:
   - `settings.toml`
   - `catalog.toml`
   - `torrent_metadata.toml`
   - `hosts/<host-id>/config.toml`
4. Enable shared mode through env or launcher and run read commands against the converted config.
5. Run `cargo run -- to-standalone`.
6. Inspect the local standalone settings and metadata again.

### Expected
- `to-shared` succeeds from standalone mode
- layered shared files are created with the expected host split
- `to-standalone` succeeds from active shared selection
- local standalone config is restored in a usable form

## Phase 3: Optional Concurrent Shared-Cluster Matrix

Only run if the environment supports two active runtimes.

## 10. Minimal Concurrent Shared-Cluster Setup

### Goal
Create a real concurrent shared-mode environment sufficient to validate the shared CLI surface.

### Acceptable environments
- two machines with a mounted shared directory
- one native `cargo run` instance plus one container instance sharing the same mounted host directory
- two containers sharing the same mounted host directory

### Runtime setup

Runtime A:
- shared root points at the cluster mount
- host id is `host-a`

Runtime B:
- shared root points at the same contents
- host id is `host-b`
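
Assuming env-driven activation, the two runtimes can be started as below; the mount path is an example and must point at the same shared contents on both sides:

```shell
# Runtime A (expected first leader)
SUPERSEEDR_SHARED_CONFIG_DIR=/mnt/cluster-share/test-root \
SUPERSEEDR_SHARED_HOST_ID=host-a cargo run

# Runtime B (follower): same shared root, different host id
SUPERSEEDR_SHARED_CONFIG_DIR=/mnt/cluster-share/test-root \
SUPERSEEDR_SHARED_HOST_ID=host-b cargo run
```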

### Required operator checks
- both runtimes can create files in the shared root
- files written by one runtime are visible to the other
- both runtimes resolve the same shared-config layout
- both runtimes report the expected host id through `show-host-id`
- operator records which runtime is expected to hold leadership first

## 11. Concurrent Shared Read Commands

### Commands
- `status`
- `journal`
- `torrents`
- `info`
- `files`
- `show-shared-config`
- `show-host-id`

### Expected
- commands run successfully in cluster mode
- output is sensible from both runtimes when applicable
- results reflect shared cluster state
- `journal` shows merged shared commands plus host-local health from the issuing host context

## 12. Concurrent Shared Mutating Commands

### Commands
- `add`
- `pause`
- `resume`
- `remove`
- `purge`
- `priority`
- `stop-client`

### Expected
- both runtimes see the same shared files
- CLI commands operate through the cluster shared-config path
- follower-issued commands do not accidentally use local normal-mode routing
- if the leader is intentionally stopped and offline shared mutation is tested, record that separately from the online cluster matrix

## 13. Cluster Failover After Leadership Transfer

### Goal
Prove that a second node can take leadership and the CLI surface still behaves correctly after failover.

### Setup
1. Start runtime A and runtime B on the same shared root.
2. Confirm runtime A is leader and runtime B is follower.
3. Exercise at least one mutating command while A is leader so there is known shared state.
4. Stop runtime A cleanly or otherwise remove its leadership.
5. Wait until runtime B takes leadership.
6. Confirm runtime B is now leader before issuing more commands.
7. Restart runtime A as follower if failover validation needs both nodes alive again.
8. After post-failover validation is complete, optionally fail back and repeat a short final leader round.

### Required operator checks
- record which node was original leader
- record which node took leadership after failover
- record how leadership transfer was confirmed
- record whether any lock, status, or journal artifacts lagged before stabilizing

### Commands To Run After Failover
- `show-shared-config`
- `show-host-id`
- `status`
- `journal`
- `torrents`
- `info`
- `files`
- `add`
- `pause`
- `resume`
- `remove`
- `purge`
- `priority`
- `stop-client`

### Full Manual Sequence Used In This Round

The end-to-end cluster round that fully closed this matrix used these phases:

1. Leader round
- start this machine as leader on the shared root
- start the second machine as follower on the same shared root
- seed at least one disposable torrent into shared state
- run the full leader-side read and mutating command set

2. Failover round
- stop the original leader
- confirm the original leader process is actually gone
- wait for the other node to become leader
- restart the old leader as follower
- rerun read commands from the restarted follower
- rerun follower-issued mutating commands and confirm the new leader applies them

3. Failback round
- move leadership back to the original node
- confirm the original node is leader again
- run a short final confirmation set:
  - `show-shared-config`
  - `show-host-id`
  - `status`
  - `journal`
  - `torrents`
  - one `add`
  - one control mutation
  - one cleanup mutation

### Recommended Concrete Operator Procedure

1. Create a dedicated test folder inside the mounted shared volume.
2. Copy disposable `.torrent` fixtures into a shared `shared-fixtures/` folder under that test root.
3. Start runtime A with explicit env vars for shared root and host id.
4. Start runtime B with the same shared root and a different host id.
5. Run the full leader-side matrix first.
6. For cluster `.path` add testing, only use `.torrent` files that live on the shared volume.
7. Stop the current leader and verify the process is actually gone before assuming failover occurred.
8. Confirm leadership transfer using multiple signals:
   - live node screen
   - `journal`
   - `torrents`
   - shared status artifacts
9. Restart the old leader as follower and run follower-side read and mutating checks.
10. Fail back if desired and run one short final leader-side confirmation round.

### Expected
- the new leader accepts and applies shared mutating commands
- read commands reflect the post-failover shared state
- no command falls back to stale routing from the former leader
- journal continues to record shared command events after failover
- status and shared files converge after leadership transfer

### Required Post-Failover Mutations

Do not stop at `pause` or `resume` only.

At minimum, the post-failover round should include:
- one `add`
- `pause`
- `resume`
- `priority`
- `remove`
- `purge`

If `stop-client` is run, do it only at the very end of the overall round.

### Required note
- if any command only worked after a delay, record the delay and what artifact finally proved leadership transfer

## 14. Minimum Concurrent Proof Set

If time is limited, at minimum validate:
- `add`
- `status`
- `pause`
- `resume`
- `remove` or `purge`
- `stop-client`

For failover specifically, at minimum validate:
- `status`
- `journal`
- `pause` or `resume`
- `remove` or `purge`

## 15. Docs Match Actual Behavior

### Review
- `README.md`
- `docs/shared-config.md`

### Confirm
- env-driven activation is documented correctly
- launcher shared-config commands match actual behavior
- launcher host-id commands match actual behavior
- conversion commands match actual behavior
- shared-config precedence is described correctly
- host-id precedence is described correctly
- shared root layout matches observed behavior
- host vs shared settings scope matches observed behavior
- CLI surface described for shared mode is accurate

## Good Additional Behaviors To Preserve

1. Cleanup after launcher testing
- after `set-shared-config`, run `clear-shared-config` unless persistence is intentionally part of the test
- after `set-host-id`, run `clear-host-id` unless persistence is intentionally part of the test

2. Verify clear actually worked
- after clear commands, run the matching show commands again

3. Test both text and JSON for key reads
- shared-mode `status`, `journal`, `torrents`, `info`, and `files` should be spot-checked in both text and `--json`

4. Explicit filesystem verification
- when testing host-id separation, inspect the `hosts/` directory and confirm both host directories exist

5. Distinguish queued online mutation from offline direct mutation
- always record whether a leader was already running when a mutating command was issued

6. Record failover timing honestly
- if leadership transfer required waiting, record how long it took and how it was detected

7. Keep offline modes distinct
- do not merge normal offline findings with shared offline findings
- explicitly state whether a result came from local standalone state or shared persisted state with no leader

8. Write the report to disk
- create a report path under `./tmp/reports/` and write the final validation report there

9. Record add syntax honestly
- if `cargo run -- add "magnet:..."` is used instead of positional direct input, note that clearly

10. Record magnet quality honestly
- if only a fabricated magnet string was used, state that it validates routing and queueing only

11. Use shared-mounted `.torrent` files for cross-host `.path` validation
- a host-local repo path is not a valid cross-host success-path fixture
- for cluster `.path` testing, the `.torrent` file must live on the shared volume

12. Confirm final cleanup
- after the last `remove` and `purge`, confirm `torrents` returns an empty list

## Findings From This Round

Record these as learned expectations for future rounds:

1. Dedicated mounted test root is required
- use a dedicated subfolder inside the mounted shared volume, not the volume root

2. Shared `.path` adds must use portable payloads
- in shared mode, queued `.path` payloads must be shared-root-relative, not host-local absolute paths

3. Cluster `.path` success requires shared-mounted `.torrent` fixtures
- cross-host `.path` add only succeeds when the referenced `.torrent` lives on the shared volume

4. CLI should not bootstrap runtime/shared state
- CLI should read or mutate existing state, not create host/runtime directories as a side effect

5. CLI logging must not depend on shared log path writeability
- local CLI logging or safe fallback is needed so read commands still work when shared log creation fails

6. Runtime logging should fall back locally
- runtime should try shared host logs first, then local logs if shared log creation fails

7. Shared runtime startup errors should be explicit
- missing mount or unwritable host paths should produce mount/accessibility errors, not raw generic permission failures

8. `stop-client` in shared mode targets the leader
- do not treat it as a local-only follower stop

9. Failover confirmation needs more than one signal
- process exit alone is not enough
- use leader screen, journal activity, shared state reads, and status artifacts together

10. Brief leader/status lag during failover or failback is expected
- watcher timing and manual transition steps can leave a stale leader snapshot briefly
- treat short-lived lag as expected unless it persists

11. Full failover validation requires three rounds
- original leader round
- post-failover follower round
- failback confirmation round

## Evidence To Record

Store under `./tmp/reports/` and `./tmp/evidence/`:
- exact commands run through `cargo run`
- exact fixture paths reused from `integration_tests/` if any
- inbox file paths created by add routing
- host directory paths created for `host-a` and `host-b`
- `show-shared-config` outputs
- `show-host-id` outputs
- concise notes on what was proven versus partially validated
- which commands were validated in:
  - normal offline
  - single-machine shared offline
  - single-machine shared online
  - concurrent cluster shared online
  - cluster after failover
- which commands were only validated as routing or queueing checks
- operator notes describing cluster setup, leader/follower identity, and host ids used
- operator notes describing original leader, new leader, and how leadership transfer was confirmed

## Report Matrix

Use this table shape in the final report.

| Command | Single Shared Offline | Single Shared Online | Cluster Shared Online | Cluster After Failover | Validation Level | Notes |
|---|---|---|---|---|---|---|
| show-shared-config |  |  |  |  |  |  |
| set-shared-config | N/A | N/A | N/A | N/A |  |  |
| clear-shared-config | N/A | N/A | N/A | N/A |  |  |
| show-host-id |  |  |  |  |  |  |
| set-host-id | N/A | N/A | N/A | N/A |  |  |
| clear-host-id | N/A | N/A | N/A | N/A |  |  |
| to-shared | N/A | N/A | N/A | N/A |  |  |
| to-standalone | N/A | N/A | N/A | N/A |  |  |
| add |  |  |  |  |  |  |
| status |  |  |  |  |  |  |
| journal |  |  |  |  |  |  |
| torrents |  |  |  |  |  |  |
| info |  |  |  |  |  |  |
| files |  |  |  |  |  |  |
| pause |  |  |  |  |  |  |
| resume |  |  |  |  |  |  |
| remove |  |  |  |  |  |  |
| purge |  |  |  |  |  |  |
| priority |  |  |  |  |  |  |
| stop-client | N/A |  |  |  |  |  |

## Completed Report Format

Use the following completed-report structure when a round is fully executed.

### Complete CLI Test Matrix - All Modes

#### Normal Offline

| Command | Normal Offline | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Shows disabled or shared mode not enabled |
| status | ✅ Pass | Reads local standalone status |
| journal | ✅ Pass | Reads local standalone journal |
| torrents | ✅ Pass | Lists local standalone torrents |
| add | N/A | Not part of offline standalone mutation validation by default |
| info | ✅ Pass | Returns local torrent info |
| files | ✅ Pass | Returns local file list |
| pause | ✅ Pass | Directly updates local standalone state |
| resume | ✅ Pass | Directly updates local standalone state |
| priority | ✅ Pass | Directly updates local standalone state |
| remove | ✅ Pass | Directly updates local standalone state |
| purge | ✅ Pass | Purges immediately when file layout is resolvable |
| stop-client | N/A | No runtime running in offline mode |

---

#### Shared Offline (No Leader)

| Command | Shared Offline | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Reports active shared selection |
| show-host-id | ✅ Pass | Reports selected host id |
| status | ✅ Pass | Reads persisted shared state with no leader |
| journal | ✅ Pass | Reads persisted shared journal data |
| torrents | ✅ Pass | Lists persisted shared torrents |
| info | ✅ Pass | Returns shared torrent info |
| files | ✅ Pass | Returns shared file list when metadata/source is available |
| pause | ✅ Pass | Directly mutates shared config offline |
| resume | ✅ Pass | Directly mutates shared config offline |
| priority | ✅ Pass | Directly mutates shared config offline |
| remove | ✅ Pass | Directly mutates shared config offline |
| purge | ✅ Pass | Purges immediately when file layout is resolvable |
| stop-client | N/A | No leader running |

---

#### Cluster Mode - Leader

| Command | Cluster Leader | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Env-driven shared mode |
| set-shared-config | ✅ Pass | Persists to sidecar |
| clear-shared-config | ✅ Pass | Clears sidecar |
| show-host-id | ✅ Pass | Env-driven host id |
| set-host-id | ✅ Pass | Persists to sidecar |
| clear-host-id | ✅ Pass | Clears sidecar |
| to-shared | ✅ Pass | Converts standalone config into layered shared config |
| to-standalone | ✅ Pass | Converts active shared config back to standalone |
| status | ✅ Pass | Returns cluster status |
| journal | ✅ Pass | Reads merged shared/host journal |
| torrents | ✅ Pass | Lists cluster torrents |
| add | ✅ Pass | Queues then processes shared add |
| info | ✅ Pass | Returns torrent info |
| files | ✅ Pass | Returns file list including full paths |
| pause | ✅ Pass | Queued then applied |
| resume | ✅ Pass | Queued then applied |
| priority | ✅ Pass | Queued then applied |
| remove | ✅ Pass | Queued then removed |
| purge | ✅ Pass | Queued then removed |
| stop-client | ✅ Pass | Queues leader stop |

---

#### Cluster Mode - Follower After Failover

| Command | Cluster Follower | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Observed from follower context |
| show-host-id | ✅ Pass | Observed follower host id |
| status | ✅ Pass | Observed shared leader state from follower |
| journal | ✅ Pass | Observed shared command history after failover |
| torrents | ✅ Pass | Observed post-failover shared state |
| info | ✅ Pass | Previously validated; shared read path remained healthy after failover |
| files | ✅ Pass | Previously validated; shared read path remained healthy after failover |
| add | ✅ Pass | Queued from follower and processed by leader using shared-mounted `.torrent` |
| pause | ✅ Pass | Queued from follower then applied by new leader |
| resume | ✅ Pass | Queued from follower then applied by new leader |
| priority | ✅ Pass | Queued from follower then applied by new leader |
| remove | ✅ Pass | Queued from follower then applied by new leader |
| purge | ✅ Pass | Queued from follower then applied by new leader |
| stop-client | Not Run | Intentionally skipped in final failover round when not needed |

---

#### Cluster Mode - Failback Confirmation

| Command | Failback Round | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Shared root still resolved correctly after failback |
| show-host-id | ✅ Pass | Original leader host id restored |
| status | ✅ Pass | Shared state available after failback; brief leader snapshot lag acceptable |
| journal | ✅ Pass | New leader resumed recording events |
| torrents | ✅ Pass | Final cleanup confirmed empty shared state |
| add | ✅ Pass | Shared-mounted `.torrent` ingested successfully after failback |
| pause | ✅ Pass | Applied after failback |
| purge | ✅ Pass | Cleanup mutation applied after failback |

## Completed Report For This Round

### Complete CLI Test Matrix - All Modes

#### Normal Offline

| Command | Normal Offline | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Shows disabled / non-shared mode |
| status | ✅ Pass | Local standalone status |
| journal | ✅ Pass | Local standalone journal |
| torrents | ✅ Pass | Lists local torrents |
| add | N/A | Not part of offline standalone round |
| info | ✅ Pass | Returns local torrent info |
| files | ✅ Pass | Returns local file list |
| pause | ✅ Pass | Direct local config mutation |
| resume | ✅ Pass | Direct local config mutation |
| priority | ✅ Pass | Direct local config mutation |
| remove | ✅ Pass | Removes torrent from standalone state |
| purge | ✅ Pass | Purges torrent/data when resolvable |
| stop-client | N/A | No runtime running |

---

#### Shared Offline (No Leader)

| Command | Shared Offline | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Shows shared selection |
| show-host-id | ✅ Pass | Shows shared host id |
| status | ✅ Pass | Reads persisted shared state |
| journal | ✅ Pass | Reads persisted shared journal |
| torrents | ✅ Pass | Lists persisted shared torrents |
| info | ✅ Pass | Returns shared torrent info |
| files | ✅ Pass | Returns shared file list when metadata/source available |
| pause | ✅ Pass | Direct shared config mutation |
| resume | ✅ Pass | Direct shared config mutation |
| priority | ✅ Pass | Direct shared config mutation |
| remove | ✅ Pass | Direct shared config mutation |
| purge | ✅ Pass | Immediate purge when resolvable |
| stop-client | N/A | No leader running |

---

#### Cluster Mode - Leader

| Command | Cluster Leader | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Env-driven shared mode |
| set-shared-config | ✅ Pass | Persists to sidecar |
| clear-shared-config | ✅ Pass | Clears sidecar |
| show-host-id | ✅ Pass | Env-driven host id |
| set-host-id | ✅ Pass | Persists to sidecar |
| clear-host-id | ✅ Pass | Clears sidecar |
| to-shared | ✅ Pass | Converts standalone to layered shared config |
| to-standalone | ✅ Pass | Converts layered shared config back to standalone |
| status | ✅ Pass | Returns cluster status |
| journal | ✅ Pass | Reads merged shared/host journal |
| torrents | ✅ Pass | Lists cluster torrents |
| add | ✅ Pass | Queues then processes shared add |
| info | ✅ Pass | Returns torrent info |
| files | ✅ Pass | Returns file list with full path |
| pause | ✅ Pass | Queued then applied |
| resume | ✅ Pass | Queued then applied |
| priority | ✅ Pass | Queued then applied |
| remove | ✅ Pass | Queued then removed |
| purge | ✅ Pass | Queued then removed |
| stop-client | ✅ Pass | Queued leader stop |

---

#### Cluster Mode - Follower After Failover

| Command | Cluster Follower | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Observed from follower context |
| show-host-id | ✅ Pass | Observed follower host id |
| status | ✅ Pass | Observed shared leader state from follower |
| journal | ✅ Pass | Observed shared command history after failover |
| torrents | ✅ Pass | Observed post-failover shared state |
| info | ✅ Pass | Shared read path remained healthy after failover |
| files | ✅ Pass | Shared read path remained healthy after failover |
| add | ✅ Pass | Queued from follower and processed by leader using shared-mounted `.torrent` |
| pause | ✅ Pass | Queued from follower then applied by `jagas-air` |
| resume | ✅ Pass | Queued from follower then applied by `jagas-air` |
| priority | ✅ Pass | Queued from follower then applied by `jagas-air` |
| remove | ✅ Pass | Queued from follower then applied by `jagas-air` |
| purge | ✅ Pass | Queued from follower then applied by `jagas-air` |
| stop-client | Not Run | Skipped intentionally in the final failover-only completion round |

---

#### Cluster Mode - Failback Confirmation

| Command | Failback Round | Validation |
|---|---|---|
| show-shared-config | ✅ Pass | Shared root resolved correctly after failback |
| show-host-id | ✅ Pass | `host-a` restored as leader host id |
| status | ✅ Pass | Shared state available after failback; brief snapshot lag observed and expected |
| journal | ✅ Pass | New leader resumed recording events |
| torrents | ✅ Pass | Final cleanup returned empty torrent list |
| add | ✅ Pass | Shared-mounted `.torrent` ingested successfully after failback |
| pause | ✅ Pass | Applied after failback |
| purge | ✅ Pass | Cleanup mutation applied after failback |

Suggested values:
- Pass
- Fail
- Skipped
- N/A

Validation Level examples:
- accepted
- routed
- queued
- applied
- observed
- cluster-observed


================================================
FILE: agentic_plans/cli_shared_config_agent_validation_plan_2026-03-19.md
================================================
# CLI And Shared Config Agent Validation Plan

## Summary
Use an AI agent to run an end-to-end validation sweep for the new CLI control surface and layered shared-config behavior. The agent should create an isolated scratch workspace under `tmp/`, launch one disposable Superseedr instance against that workspace, drive the new CLI commands, mutate shared config files when needed, and validate outcomes using:

- `superseedr status`
- `status_files/app_state.json`
- `superseedr journal`
- shared-config files on disk

The agent must produce a final report that records every step as pass or fail, and for failures it must capture why the step failed, what evidence was collected, and whether the failure looks like an environment/setup issue or an application defect.

## Scope
This plan covers only the branch areas that added or materially changed:

- CLI control commands
  - `status`
  - `status --follow`
  - `status --stop`
  - `torrents`
  - `info`
  - `files`
  - `pause`
  - `resume`
  - `remove`
  - `purge`
  - `priority`
  - `journal`
  - optional `--json` output layer on all commands
- Online command delivery through watch folders and `.control` files
- Offline CLI behavior that edits settings directly
- Layered shared-config mode
  - `SUPERSEEDR_SHARED_CONFIG_DIR`
  - `SUPERSEEDR_HOST_ID`
  - shared `settings.toml`
  - shared `catalog.toml`
  - host-local `hosts/<host-id>.toml`
  - single-host shared-config live reload and reconcile
  - stale-write protection

Do not spend time on unrelated TUI-only feature validation unless it is directly required to unblock a CLI/shared-config scenario.

This automated plan is intentionally single-node. It validates one local Superseedr instance against a shared-config root and does not attempt simultaneous multi-instance coverage.

## Local Runtime Note
Even in shared-config mode, several runtime artifacts remain in the normal local app data directory rather than under the scratch shared root. The agent must treat these as local runtime outputs and copy them into the scratch evidence directory when needed.

These include:

- `status_files/app_state.json`
- `event_journal.toml`
- logs
- lock file

The agent should resolve the actual local app data directory first, then read or copy these files from there during validation.

## Safety Rails
The agent must follow these safeguards before running any test:

1. Refuse to run if another `superseedr` process is already active outside the test plan.
2. Use a dedicated scratch root under `tmp/` and never write test artifacts outside that root unless the app itself requires OS-local config/data paths.
3. Before launching the app, detect the normal Superseedr OS config/data directories and record them in the report, but do not require a backup or restore step for this validation plan.
4. Use a dedicated host ID and client port for the test instance.
5. Never use destructive git commands.
6. Treat all failures as evidence first. Do not patch code during the run. Record the failure and continue unless the environment is unusable.
7. Record the resolved local app data path early in the report so later steps know where `status_files/`, the event journal, and logs actually live.

## Scratch Layout
Create a unique run root:

```text
tmp/cli_shared_config_validation_<timestamp>/
```

Inside it create:

```text
bin/
evidence/
evidence/logs/
evidence/status/
evidence/journal/
evidence/shared_snapshots/
evidence/commands/
reports/
run/
run/shared-root/
run/shared-root/hosts/
run/shared-root/torrents/
run/host-a-watch/
run/host-a-downloads/
```
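The layout above can be created in one pass. A minimal Python sketch (directory names come from this plan; the timestamp format is an illustrative assumption):

```python
# Create the scratch run root and its evidence/run subdirectories in one pass.
# Directory names come from this plan; the timestamp format is an assumption.
import time
from pathlib import Path

SUBDIRS = [
    "bin",
    "evidence/logs",
    "evidence/status",
    "evidence/journal",
    "evidence/shared_snapshots",
    "evidence/commands",
    "reports",
    "run/shared-root/hosts",
    "run/shared-root/torrents",
    "run/host-a-watch",
    "run/host-a-downloads",
]

def make_run_root(base: Path = Path("tmp")) -> Path:
    root = base / f"cli_shared_config_validation_{time.strftime('%Y%m%d_%H%M%S')}"
    for sub in SUBDIRS:
        # parents=True also creates the intermediate evidence/ and run/ levels
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```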

## Test Fixtures
Reuse the existing tracked interop fixtures from `integration_tests/`, but make scratch-local copies before running validation so the plan never depends on or mutates the tracked fixture paths directly.

Use exactly this torrent pair, together with the matching payload files, so the runtime can also see the payloads under the interop data tree:

- `integration_tests/torrents/v1/single_4k.bin.torrent`
- `integration_tests/torrents/v1/single_8k.bin.torrent`
- `integration_tests/test_data/single/single_4k.bin`
- `integration_tests/test_data/single/single_8k.bin`

Recommended fixture mapping:

- logical torrent `alpha` -> `integration_tests/torrents/v1/single_4k.bin.torrent`
- logical torrent `beta` -> `integration_tests/torrents/v1/single_8k.bin.torrent`
- default download root -> `integration_tests/test_data/single/`

Copy strategy:

- copy the two `.torrent` fixtures into `tmp/.../run/shared-root/torrents/`
- copy the matching payload files into `tmp/.../run/host-a-downloads/`
- point seeded shared config at those scratch-local copies, not at the tracked repo paths

Important notes:

- Keep the logical names `alpha` and `beta` in the seeded shared catalog, but prefer preserving the real `.torrent` filenames or hash-stem scratch copies rather than arbitrary names like `alpha.torrent`.
- The scratch copies should preserve or derive canonical info-hash-stem filenames when practical so offline `status` and hash-targeted CLI commands still work cleanly.
- Do not mutate the tracked interop fixture files themselves. Only the scratch copies and the seeded shared config files under the scratch root should be edited during the run.
- Using two torrents is still important because one scenario needs a second live torrent to trigger an unrelated save while validating shared-catalog removal behavior.

## Build And Launch Strategy
1. Build the binary once:
   - `cargo build`
2. Use the built binary for all commands:
   - `target/debug/superseedr`
3. Launch the runtime instance with `SUPERSEEDR_SHARED_CONFIG_DIR` and `SUPERSEEDR_HOST_ID` set.
4. Prefer detached/background process launch so the agent can keep issuing CLI commands.
5. Record stdout/stderr for the launched instance into `evidence/logs/`.

If detached launch is not reliable in the current environment, the agent may use a second terminal session or platform-equivalent background process runner, but it must still preserve the same evidence layout.
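If the agent scripts the launch, a detached start with the shared-config environment can be sketched as below. The binary path and environment variable names come from this plan; the helper itself is an illustrative assumption, not the project's harness code:

```python
# Launch a command detached, with extra environment variables, capturing
# stdout/stderr into one log file under evidence/logs/. Sketch only.
import os
import subprocess
from pathlib import Path

def launch_detached(cmd: list[str], env_overrides: dict[str, str],
                    log_path: Path) -> subprocess.Popen:
    env = {**os.environ, **env_overrides}
    log = open(log_path, "ab")
    return subprocess.Popen(
        cmd,
        env=env,
        stdout=log,
        stderr=subprocess.STDOUT,
        start_new_session=(os.name == "posix"),  # detach from this session on POSIX
    )

# Example invocation (paths and env vars from this plan):
# proc = launch_detached(
#     ["target/debug/superseedr"],
#     {"SUPERSEEDR_SHARED_CONFIG_DIR": "tmp/.../run/shared-root",
#      "SUPERSEEDR_HOST_ID": "host-a"},
#     Path("tmp/.../evidence/logs/host-a.log"),
# )
```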

## Shared Config Seed Files
Create these files before the first launch.

### Shared `settings.toml`
Use values that make CLI/status validation easier:

- `output_status_interval = 0`
- `bootstrap_nodes = []`
- `default_download_folder` should point at the scratch-local copied payload directory, typically `tmp/.../run/host-a-downloads/`
- keep RSS empty
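A hedged sketch of such a seed (the key names come from this plan; the real settings schema likely has more fields and should be confirmed against a freshly generated `settings.toml`):

```toml
# Sketch only: key names come from this plan, values are placeholders.
output_status_interval = 0
bootstrap_nodes = []
default_download_folder = "tmp/<run-root>/run/host-a-downloads"
# leave all RSS sections empty
```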

### Shared `catalog.toml`
Seed two torrents:

- `alpha`
- `beta`

Both should point at the scratch-local copied `.torrent` fixtures under `tmp/.../run/shared-root/torrents/`, ideally using hash-stem filenames derived from the interop fixtures. Their `download_path` should resolve to the scratch-local copied payload directory `tmp/.../run/host-a-downloads/`. Set:

- `torrent_control_state = "Running"`
- `container_name = ""`
- `validation_status = false`
- `file_priorities` with only `0 = "Normal"`
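An illustrative catalog entry follows. The field names `download_path`, `torrent_control_state`, `container_name`, `validation_status`, and `file_priorities` come from this plan; the table layout and the torrent-path key name are assumptions to verify against the real schema:

```toml
# Sketch only: confirm table layout and path key names against the app.
[alpha]
torrent_file = "tmp/<run-root>/run/shared-root/torrents/<hash-stem>.torrent"
download_path = "tmp/<run-root>/run/host-a-downloads"
torrent_control_state = "Running"
container_name = ""
validation_status = false

[alpha.file_priorities]
0 = "Normal"

# `beta` follows the same shape, pointing at the 8k fixture copy.
```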

### Host file
Create:

- `hosts/host-a.toml`

Set:

- `client_port`
- host-specific `watch_folder`
- any required `path_roots`
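An illustrative host file (key names from this plan; the port value and path are placeholders):

```toml
# hosts/host-a.toml sketch; pick a dedicated, unused client port.
client_port = 51413
watch_folder = "tmp/<run-root>/run/host-a-watch"
# path_roots = [...]  # only if the environment requires them
```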

## Evidence Rules
For each test step, the agent must capture:

1. The exact command(s) run.
2. The relevant environment variables.
3. The pre-state snapshot.
4. The post-state snapshot.
5. The pass/fail decision.
6. If fail:
   - observed behavior
   - expected behavior
   - likely failure class
     - setup error
     - test harness issue
     - product bug

At minimum, persist:

- raw `status` JSON outputs
- copies of `status_files/app_state.json`
- `superseedr journal` output after mutating steps
- copies of shared `settings.toml`, `catalog.toml`, and the host file before and after each shared-config test

## Validation Heuristics
Use the following sources of truth:

- CLI success text confirms request acceptance, not final correctness
- `superseedr status` confirms live or offline resolved state
- `status_files/app_state.json` confirms daemon-observed runtime state
- shared config files confirm persistence/routing behavior
- `superseedr journal` confirms queue/applied/failed recording

Prefer JSON/file evidence over console prose when deciding pass or fail.

## Run List

### Phase 0: Environment Preparation
1. Create the scratch root under `tmp/`.
2. Build `superseedr`.
3. Detect the normal OS config/data locations used by Superseedr.
4. Copy the needed interop `.torrent` and payload fixtures from `integration_tests/` into the scratch workspace, then seed the shared config files to point only at those scratch-local copies.
5. Record the resolved local app data path and local config path in the report.
6. Snapshot the initial shared config files into `evidence/shared_snapshots/phase0_*`.

Pass criteria:
- scratch root exists
- binary builds
- shared files are valid TOML
- the plan records which OS-local paths may receive runtime artifacts

### Phase 1: Shared Config Bootstrap And Single-Host Sanity
1. Launch host A with:
   - `SUPERSEEDR_SHARED_CONFIG_DIR=<scratch shared-root>`
   - `SUPERSEEDR_HOST_ID=host-a`
2. Wait for `status_files/app_state.json` to appear.
3. Run `superseedr status` against host A's shared env.
4. Validate:
   - both torrents are present
   - info hashes are visible in status output
   - host A is using the expected client port
   - `output_status_interval` is initially disabled until explicitly requested
   - the local app data directory contains the expected runtime `status_files/app_state.json`

Pass criteria:
- host A starts successfully
- both catalog entries load
- status JSON matches seeded shared config

### Phase 2: Online CLI Status Controls
1. Run `superseedr status`.
2. Save the JSON output.
3. Run `superseedr status --follow`.
4. Observe `status_files/app_state.json` modification times for at least three updates.
5. Run `superseedr status --stop`.
6. Confirm status file updates stop after a grace period.
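Steps 4 and 6 both reduce to watching the status file's modification time. A minimal sketch (the poll cadence is an illustrative choice):

```python
# Count how many times a file's mtime changes over a polling window.
# Used to verify that `--follow` keeps rewriting app_state.json and that
# `--stop` halts the rewrites after the grace period.
import time
from pathlib import Path

def count_mtime_changes(path: Path, duration_s: float, poll_s: float = 0.2) -> int:
    deadline = time.monotonic() + duration_s
    last = path.stat().st_mtime_ns
    changes = 0
    while time.monotonic() < deadline:
        time.sleep(poll_s)
        now = path.stat().st_mtime_ns
        if now != last:
            changes += 1
            last = now
    return changes
```

For step 4, expect at least three changes while `--follow` is active; for step 6, expect zero changes once the grace period has passed.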

Pass criteria:
- `status` returns fresh JSON
- `--follow` causes repeated file updates
- `--stop` halts repeated updates

Failure notes:
- If `status` works but file updates do not continue, classify as runtime follow bug.
- If `--stop` is accepted but updates continue, classify as runtime stop bug.

### Phase 3: Online CLI Pause/Resume/Priority/Remove/Purge
Use host A while it is running.

1. From `status`, capture the `info_hash_hex` for `alpha` and `beta`.
2. Run `pause <alpha-hash>`.
3. Validate through `status` or `app_state.json` that `alpha` is paused.
4. Run `resume <alpha-hash>`.
5. Validate it returns to running.
6. Run `priority <alpha-hash> --file-index 0 skip`.
7. Validate persisted/configured file priority changed.
8. Run `priority <alpha-hash> --file-index 0 normal`.
9. Validate the override is removed or reset.
10. Run `remove <beta-hash>`.
11. Validate `beta` is removed from runtime and shared catalog without deleting payload files.
12. Re-seed or restore `beta` if needed for the next step.
13. Run `purge <alpha-hash>` or `purge <path-to-alpha-payload-file>` while host A is running.
14. Validate the queued control request is accepted and runtime begins delete-with-files handling.
15. Run `superseedr journal`.
16. Validate control entries include queued/applied records for the online actions.
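Step 1's hash capture can be scripted. The `info_hash_hex` field name comes from this plan; the walker below deliberately avoids assuming where the torrent list sits inside the parsed status JSON, since that shape is not specified here:

```python
# Pull info_hash_hex values for named torrents out of a parsed status JSON
# document, searching recursively so the envelope shape does not matter.
from typing import Any

def find_info_hashes(status: Any, names: set[str]) -> dict[str, str]:
    found: dict[str, str] = {}
    def walk(node: Any) -> None:
        if isinstance(node, dict):
            name = node.get("name")
            if name in names and "info_hash_hex" in node:
                found[name] = node["info_hash_hex"]
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(status)
    return found
```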

Pass criteria:
- runtime state changes match each CLI action
- persistence matches runtime state
- journal records exist

### Phase 4: Offline CLI Behavior
1. Stop host A cleanly.
2. Run offline commands against the same shared root and host ID:
   - `status`
   - `torrents`
   - `info <alpha-hash>`
   - `files <alpha-hash>`
   - `pause <alpha-hash>`
   - `resume <alpha-hash>`
   - `priority <alpha-hash> --file-index 0 skip`
   - `priority <alpha-hash> --file-index 0 normal`
   - `remove <alpha-hash>`
   - `purge <alpha-hash>` only if the scratch workspace preserves enough local file layout for an immediate offline purge
3. After each mutation, inspect shared config files directly.
4. Repeat one read command and one mutating command with `--json`.
5. Run `superseedr journal` and save output.

Expected behavior:
- `status` should return offline JSON
- `torrents`, `info`, and `files` should read local state directly
- pause/resume/priority/remove should edit settings directly
- offline `purge` should either delete data immediately or fail clearly if path resolution is unavailable
- journal should record offline applied or failed entries

Pass criteria:
- offline mutations persist without a running daemon
- offline status succeeds
- offline read commands succeed
- `--json` uses the common success envelope
- journal evidence exists for offline actions

### Phase 5: Shared Config Live Remove Without Resurrection
This phase explicitly targets the removal regression.

1. Ensure both `alpha` and `beta` exist and host A is running.
2. Remove `alpha` from the shared catalog by editing `catalog.toml` externally.
3. Validate host A observes the removal and begins local teardown.
4. Before teardown fully settles, trigger an unrelated persisted save from host A by mutating `beta`:
   - `pause <beta-hash>`
   - or `resume <beta-hash>`
   - or file priority change
5. Snapshot `catalog.toml` after host A's save.
6. Validate `alpha` does not reappear in `catalog.toml`.

Pass criteria:
- removed torrent stays removed
- unrelated save does not resurrect the deleted entry

If fail:
- record the exact shared catalog contents before remove, after remove, and after host A save
- classify as shared-catalog resurrection bug

### Phase 6: Shared Config Updated-But-Missing Runtime Case
This phase explicitly targets the missing-runtime update regression.

1. Stop host A.
2. Configure host A so one seeded torrent cannot load on startup:
   - easiest path: make `alpha` point at a missing `.torrent` file in the shared catalog before launching host A
3. Launch host A and verify `alpha` is absent from runtime while still present in shared config.
4. Without restarting host A, repair the catalog entry so it points at a valid shared torrent file and also change one other field to guarantee a diff:
   - name
   - pause/resume state
   - file priority
5. Trigger shared-config reload by writing the updated `catalog.toml`.
6. Validate whether host A loads `alpha` live.

Pass criteria:
- host A loads the previously missing runtime torrent after the update diff

If fail:
- record that the catalog entry exists in both old and new config but runtime stayed absent until restart
- classify as updated-entry missing-runtime reconcile bug

### Phase 7: Stale-Write Protection
1. Keep host A running.
2. Externally edit shared `settings.toml` or `catalog.toml`.
3. Without reloading first, trigger a persisted change from host A.
4. Validate the save is rejected and the app reports reload is required.
5. Confirm the external edit was not overwritten.

Pass criteria:
- conflicting save is rejected
- on-disk shared file keeps the external edit intact

### Phase 8: Watch-Folder Delivery For Online CLI
This phase verifies the CLI-to-daemon online control path, not generic ingest coverage.

1. While host A is running, capture the host A watch folder contents.
2. Run one online CLI control command.
3. Confirm a `.control` file appears and is then archived/renamed after processing.
4. Confirm the requested action is applied.
5. Repeat once with `SUPERSEEDR_WATCH_PATH_1` configured for host A to confirm extra watch-path discovery does not break the primary command path.

Pass criteria:
- CLI writes go to the primary command watch path
- running daemon consumes the control file
- processed artifact cleanup occurs

### Phase 9: Structured Output Contract
1. Run these commands with `--json`:
   - `status`
   - `journal`
   - `torrents`
   - `info <alpha-hash>`
   - `files <alpha-hash>`
   - one mutating command such as `pause <alpha-hash>`
2. Save every JSON result as evidence.
3. Validate:
   - every response has top-level `ok`
   - every success response has `command` and `data`
   - every failure response has `command` and `error`
   - `files` remains an array field inside `info` and `torrents`

Pass criteria:
- the JSON envelope is consistent across read and mutating commands
- nested file manifests use stable field types
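The envelope rules in step 3 translate directly into a checker. A minimal sketch using only the field names stated in this plan:

```python
# Validate the common CLI JSON envelope described in this plan:
# every response has top-level `ok`; successes carry `command` and `data`;
# failures carry `command` and `error`.
from typing import Any

def envelope_errors(response: dict[str, Any]) -> list[str]:
    problems: list[str] = []
    if "ok" not in response:
        problems.append("missing top-level 'ok'")
        return problems
    if "command" not in response:
        problems.append("missing 'command'")
    if response["ok"] and "data" not in response:
        problems.append("success response missing 'data'")
    if not response["ok"] and "error" not in response:
        problems.append("failure response missing 'error'")
    return problems
```

Run it over every saved `--json` output; a non-empty list for any response fails the phase.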

## Failure Classification
Use these labels in the report:

- `ENVIRONMENT`
  - binary could not launch
  - background process strategy failed
  - permissions/path issue unrelated to app behavior
- `HARNESS`
  - agent could not reliably capture evidence
  - timing window too narrow or script bug
- `PRODUCT`
  - app behavior disagrees with the documented branch intent

## Required Report Outputs
Write:

- `reports/summary.md`
- `reports/results.json`

### `summary.md`
Include:

- overall verdict
- environment summary
- list of phases with pass/fail
- concise explanation of each failure
- high-confidence suspected regressions

### `results.json`
One object per phase with:

- `phase`
- `status`
- `commands`
- `artifacts`
- `observed`
- `expected`
- `classification`
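One illustrative entry (field names from this plan; all values are placeholders):

```json
{
  "phase": "phase_2_online_status_controls",
  "status": "pass",
  "commands": ["target/debug/superseedr status --json"],
  "artifacts": ["evidence/status/phase2_status.json"],
  "observed": "status file updated repeatedly while --follow was active",
  "expected": "status file updates continue under --follow and stop after --stop",
  "classification": null
}
```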

## Cleanup
At the end of the run:

1. Stop all spawned Superseedr instances.
2. Leave the scratch root under `tmp/` intact for inspection.

## Success Definition
This validation pass is successful when:

1. The agent completes every phase or records a clear reason it could not.
2. All evidence artifacts are saved under the scratch root.
3. CLI behavior is validated both online and offline.
4. Single-host shared-config live update/remove semantics are validated through external file edits and reload.
5. The final report clearly distinguishes environment problems from product bugs.


================================================
FILE: agentic_plans/client_diagnostics_full_implementation_plan_2026-05-01.md
================================================
# Full Client Diagnostics Implementation Plan

Date: 2026-05-01

## Purpose

Replace scattered developer-only tracing switches with a coherent diagnostics system that can support normal release troubleshooting, long soak analysis, DHT planner debugging, protocol-level investigation, and peer-level tracing without exposing users to hidden environment variables.

The system should be explicitly scoped, bounded, redactable, and easy to turn on and off from the client or CLI.

## Non-Goals

- Do not keep ad hoc debug environment variables as the public interface.
- Do not emit unbounded logs by default.
- Do not require recompilation to collect useful diagnostics.
- Do not make peer-level tracing part of normal logging.
- Do not leak full file names, torrent display names, peer IDs, or full info hashes unless a diagnostic profile explicitly requests unredacted local output.

## User-Facing Shape

Add a first-class diagnostics command surface:

```text
superseedr diagnostics status
superseedr diagnostics start --profile dht-soak --duration 30m
superseedr diagnostics start --profile peer-trace --torrent <hash-prefix> --peer <ip:port> --duration 5m
superseedr diagnostics stop
superseedr diagnostics bundle --latest
superseedr diagnostics summarize --latest
```

TUI follow-up:

- Add a diagnostics modal/status row showing active profile, remaining time, output directory, dropped event count, and bundle command.
- Add a confirmation step for profiles that include peer-level or protocol payload detail.

## Profiles

### `client-health`

Low overhead. Safe for normal users.

Capture:

- periodic status snapshots
- runtime settings summary
- warnings/errors
- disk/network health
- torrent counts by state
- DHT health counters
- tracker error counts
- persistence writer status

### `dht-soak`

Operational soak profile for release validation and regression checks.

Capture:

- periodic status snapshots
- DHT health snapshots
- planner aggregate counters
- launch class mix
- launch reasons
- demand class transitions
- lookup starts/finishes/parks/drains
- query pressure
- route counts
- peer yield summaries
- invariant violations

Do not capture raw KRPC payloads or per-peer protocol messages.

### `dht-planner`

Detailed planner replay/profile mode.

Capture:

- every planner action/effect
- normalized demand metrics
- selected candidates
- skipped candidates with reason class, not full peer data
- class budgets and token bucket stats
SYMBOL INDEX (4930 symbols across 154 files)

FILE: integration_tests/cluster_cli/manifest.py
  class ClusterFixture (line 14) | class ClusterFixture:
  function manifest_path (line 24) | def manifest_path() -> Path:
  function load_fixture_manifest (line 28) | def load_fixture_manifest() -> list[ClusterFixture]:
  function fixture_by_id (line 47) | def fixture_by_id(fixture_id: str) -> ClusterFixture:
  function magnet_info_hash_hex (line 54) | def magnet_info_hash_hex(magnet_uri: str) -> str:
  function torrent_info_hash_hex (line 66) | def torrent_info_hash_hex(torrent_path: Path) -> str:
  function _extract_top_level_info_bytes (line 71) | def _extract_top_level_info_bytes(data: bytes) -> bytes:
  function _parse_any (line 84) | def _parse_any(data: bytes, index: int) -> tuple[Any, int]:
  function _parse_int (line 97) | def _parse_int(data: bytes, index: int) -> tuple[int, int]:
  function _parse_bytes (line 102) | def _parse_bytes(data: bytes, index: int) -> tuple[bytes, int]:
  function _parse_list (line 110) | def _parse_list(data: bytes, index: int) -> tuple[list[Any], int]:
  function _parse_dict (line 119) | def _parse_dict(data: bytes, index: int) -> tuple[dict[bytes, Any], int]:

FILE: integration_tests/cluster_cli/runner.py
  class ClusterCliError (line 30) | class ClusterCliError(RuntimeError):
  function _utc_stamp (line 34) | def _utc_stamp() -> str:
  class ContainerNode (line 39) | class ContainerNode:
  class ClusterRunContext (line 45) | class ClusterRunContext:
  function run_cluster_cli_smoke (line 61) | def run_cluster_cli_smoke(run_id: str | None = None, skip_build: bool = ...
  function _prepare_context (line 96) | def _prepare_context(run_id: str) -> ClusterRunContext:
  function _write_toml (line 148) | def _write_toml(path: Path, payload: dict[str, Any]) -> None:
  function _docker_json (line 153) | def _docker_json(
  function _compose_start (line 196) | def _compose_start(ctx: ClusterRunContext, services: list[str], *, no_bu...
  function _compose_stop (line 200) | def _compose_stop(ctx: ClusterRunContext, service: str) -> None:
  function _snapshot_shared_root (line 204) | def _snapshot_shared_root(ctx: ClusterRunContext, name: str) -> None:
  function _capture_artifacts (line 220) | def _capture_artifacts(ctx: ClusterRunContext) -> None:
  function _stage_fixtures (line 231) | def _stage_fixtures(ctx: ClusterRunContext) -> None:
  function _seed_shared_config (line 244) | def _seed_shared_config(ctx: ClusterRunContext) -> None:
  function _phase_shared_offline (line 283) | def _phase_shared_offline(ctx: ClusterRunContext) -> dict[str, Any]:
  function _phase_single_online (line 323) | def _phase_single_online(ctx: ClusterRunContext, *, no_build: bool) -> d...
  function _phase_cluster_online (line 365) | def _phase_cluster_online(ctx: ClusterRunContext) -> dict[str, Any]:
  function _phase_failover (line 395) | def _phase_failover(ctx: ClusterRunContext) -> dict[str, Any]:
  function _phase_failback (line 411) | def _phase_failback(ctx: ClusterRunContext) -> dict[str, Any]:
  function _restart_regression_check (line 442) | def _restart_regression_check(ctx: ClusterRunContext) -> dict[str, Any]:
  function _standalone_completed_event_count (line 499) | def _standalone_completed_event_count(ctx: ClusterRunContext) -> int:
  function _wait_for_leader (line 506) | def _wait_for_leader(ctx: ClusterRunContext, expected_host_id: str, time...
  function _wait_for_torrent_presence (line 535) | def _wait_for_torrent_presence(
  function _wait_for_shared_path (line 556) | def _wait_for_shared_path(
  function _wait_for_status_torrent_presence (line 584) | def _wait_for_status_torrent_presence(
  function _wait_for_control_state (line 606) | def _wait_for_control_state(
  function _wait_for_files (line 627) | def _wait_for_files(
  function main (line 644) | def main(argv: list[str] | None = None) -> int:

FILE: integration_tests/cluster_cli/tests/test_cluster_cli.py
  function _docker_available (line 14) | def _docker_available() -> bool:
  function test_cluster_cli_smoke (line 29) | def test_cluster_cli_smoke() -> None:

FILE: integration_tests/cluster_cli/tests/test_manifest.py
  function test_declared_cluster_fixtures_exist_and_match_hashes (line 10) | def test_declared_cluster_fixtures_exist_and_match_hashes() -> None:

FILE: integration_tests/docker/tracker.py
  function bencode (line 16) | def bencode(value: object) -> bytes:
  class PeerStore (line 36) | class PeerStore:
    method __init__ (line 37) | def __init__(self) -> None:
    method update (line 41) | def update(self, info_hash: bytes, peer_id: bytes, ip: str, port: int,...
    method list_peers (line 50) | def list_peers(self, info_hash: bytes, requester_peer_id: bytes) -> by...
  class Handler (line 72) | class Handler(BaseHTTPRequestHandler):
    method _send_bencoded (line 75) | def _send_bencoded(self, payload: dict[str, object], status: int = 200...
    method do_GET (line 83) | def do_GET(self) -> None:
    method log_message (line 118) | def log_message(self, fmt: str, *args: object) -> None:

FILE: integration_tests/harness/clients/base.py
  class ClientAdapter (line 7) | class ClientAdapter(ABC):
    method start (line 9) | def start(self) -> None:
    method stop (line 13) | def stop(self) -> None:
    method add_torrent (line 17) | def add_torrent(self, torrent_path: str, download_dir: str) -> None:
    method wait_for_download (line 21) | def wait_for_download(self, expected_manifest: dict, timeout_secs: int...
    method collect_logs (line 25) | def collect_logs(self, dest_dir: Path) -> None:
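
The abstract surface above defines the contract every harness client (qBittorrent, Transmission, superseedr) implements. A sketch of what a new adapter subclass would look like, with a local stand-in for the ABC so it runs on its own (the `NullAdapter` name is hypothetical, and the truncated `wait_for_download` signature is assumed to be `(expected_manifest: dict, timeout_secs: int)`):

```python
from abc import ABC, abstractmethod
from pathlib import Path


class ClientAdapter(ABC):
    """Local stand-in mirroring integration_tests/harness/clients/base.py."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

    @abstractmethod
    def add_torrent(self, torrent_path: str, download_dir: str) -> None: ...

    @abstractmethod
    def wait_for_download(self, expected_manifest: dict, timeout_secs: int) -> None: ...

    @abstractmethod
    def collect_logs(self, dest_dir: Path) -> None: ...


class NullAdapter(ClientAdapter):
    """Hypothetical no-op adapter: records calls instead of driving a client."""

    def __init__(self) -> None:
        self.events: list[str] = []

    def start(self) -> None:
        self.events.append("start")

    def stop(self) -> None:
        self.events.append("stop")

    def add_torrent(self, torrent_path: str, download_dir: str) -> None:
        self.events.append(f"add:{torrent_path}->{download_dir}")

    def wait_for_download(self, expected_manifest: dict, timeout_secs: int) -> None:
        self.events.append(f"wait:{len(expected_manifest)} files")

    def collect_logs(self, dest_dir: Path) -> None:
        self.events.append(f"logs:{dest_dir}")
```

Such a recording adapter would let a scenario's orchestration logic be dry-run without any Docker containers behind it.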

FILE: integration_tests/harness/clients/qbittorrent.py
  class QBittorrentAdapter (line 18) | class QBittorrentAdapter(ClientAdapter):
    method __init__ (line 19) | def __init__(
    method _extract_temporary_password (line 40) | def _extract_temporary_password(logs: str) -> str | None:
    method _login_once (line 48) | def _login_once(self, password: str) -> bool:
    method authenticate (line 64) | def authenticate(self) -> None:
    method start (line 96) | def start(self) -> None:
    method stop (line 101) | def stop(self) -> None:
    method _request (line 105) | def _request(
    method _request_json (line 123) | def _request_json(self, path: str) -> Any:
    method _build_multipart_form (line 130) | def _build_multipart_form(
    method _torrent_add_succeeded (line 165) | def _torrent_add_succeeded(status: int, response_text: str) -> bool:
    method _list_torrents (line 187) | def _list_torrents(self) -> list[dict[str, Any]]:
    method add_torrent (line 197) | def add_torrent(self, torrent_path: str, download_dir: str) -> None:
    method set_force_start (line 227) | def set_force_start(self, info_hash: str, enabled: bool = True) -> None:
    method wait_for_download (line 250) | def wait_for_download(self, expected_manifest: dict, timeout_secs: int...
    method collect_logs (line 269) | def collect_logs(self, dest_dir: Path) -> None:
    method read_status (line 276) | def read_status(self) -> dict[str, Any]:

FILE: integration_tests/harness/clients/superseedr.py
  class SuperseedrAdapter (line 12) | class SuperseedrAdapter(ClientAdapter):
    method __init__ (line 13) | def __init__(
    method start (line 25) | def start(self) -> None:
    method stop (line 28) | def stop(self) -> None:
    method add_torrent (line 31) | def add_torrent(self, torrent_path: str, download_dir: str) -> None:
    method wait_for_download (line 37) | def wait_for_download(self, expected_manifest: dict, timeout_secs: int...
    method collect_logs (line 46) | def collect_logs(self, dest_dir: Path) -> None:
    method read_status (line 56) | def read_status(self) -> dict:

FILE: integration_tests/harness/clients/transmission.py
  class TransmissionAdapter (line 15) | class TransmissionAdapter(ClientAdapter):
    method __init__ (line 16) | def __init__(
    method start (line 34) | def start(self) -> None:
    method stop (line 39) | def stop(self) -> None:
    method _headers (line 43) | def _headers(self) -> dict[str, str]:
    method _rpc (line 53) | def _rpc(self, method: str, arguments: dict[str, Any] | None = None) -...
    method wait_until_ready (line 88) | def wait_until_ready(self) -> None:
    method add_torrent (line 102) | def add_torrent(self, torrent_path: str, download_dir: str) -> None:
    method _list_torrents (line 117) | def _list_torrents(self) -> list[dict[str, Any]]:
    method wait_for_download (line 139) | def wait_for_download(self, expected_manifest: dict, timeout_secs: int...
    method collect_logs (line 155) | def collect_logs(self, dest_dir: Path) -> None:
    method read_status (line 162) | def read_status(self) -> dict[str, Any]:

FILE: integration_tests/harness/config.py
  class HarnessPaths (line 14) | class HarnessPaths:
  class HarnessDefaults (line 26) | class HarnessDefaults:
  function resolve_paths (line 33) | def resolve_paths() -> HarnessPaths:
  function env_bool (line 46) | def env_bool(name: str, default: bool = False) -> bool:
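
`env_bool(name, default=False)` reads an environment variable as a feature flag for the harness. A plausible standalone sketch with the same signature (the accepted truthy token set is an assumption; the real helper in `integration_tests/harness/config.py` may differ):

```python
import os


def env_bool(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean flag.

    The truthy token set {"1", "true", "yes", "on"} is illustrative.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}
```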

FILE: integration_tests/harness/docker_ctl.py
  class DockerCompose (line 9) | class DockerCompose:
    method __init__ (line 10) | def __init__(self, compose_file: Path, project_name: str, env: dict[st...
    method _cmd (line 15) | def _cmd(self, args: Iterable[str]) -> list[str]:
    method run (line 26) | def run(self, args: Iterable[str], check: bool = True, capture: bool =...
    method up (line 35) | def up(self, services: list[str], no_build: bool = False) -> None:
    method down (line 42) | def down(self) -> None:
    method ps (line 45) | def ps(self) -> str:
    method logs (line 49) | def logs(self, service: str, tail: int = 200) -> str:
    method exec (line 53) | def exec(self, service: str, command: list[str], check: bool = True, c...

FILE: integration_tests/harness/manifest.py
  class ExpectedFile (line 11) | class ExpectedFile:
  function _sha256_file (line 17) | def _sha256_file(path: Path) -> str:
  function build_expected_manifest (line 28) | def build_expected_manifest(test_data_root: Path, mode: str) -> dict[str...
  function validate_output (line 40) | def validate_output(output_root: Path, expected: dict[str, ExpectedFile]...
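
`_sha256_file` feeding `build_expected_manifest` and `validate_output` suggests a hash-then-compare flow over the fixture tree. A minimal sketch of the hashing side, streamed in fixed-size chunks so large fixtures never load fully into memory (the function name and 1 MiB chunk size here are illustrative):

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in fixed-size chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        while chunk := handle.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Validation then reduces to comparing the relative-path-to-digest map of the output tree against the expected manifest, reporting missing, extra, and mismatched entries.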

FILE: integration_tests/harness/run.py
  function parse_args (line 28) | def parse_args() -> argparse.Namespace:
  function main (line 42) | def main() -> int:

FILE: integration_tests/harness/scenarios/qbittorrent_to_superseedr.py
  class ScenarioResult (line 22) | class ScenarioResult:
  function _bucket_for_torrent (line 31) | def _bucket_for_torrent(name: str) -> str:
  function _qbit_savepath_for_torrent (line 41) | def _qbit_savepath_for_torrent(mode: str, name: str) -> str:
  function _torrent_order_key (line 49) | def _torrent_order_key(name: str) -> tuple[int, str]:
  function _expected_subset (line 60) | def _expected_subset(expected: dict[str, ExpectedFile], torrent_names: l...
  function _write_leech_settings (line 76) | def _write_leech_settings(mode: str, config_path: Path, torrent_files: l...
  function _prepare_seed_data (line 127) | def _prepare_seed_data(seed_mode_root: Path, canonical_root: Path) -> None:
  function _ensure_clean_dir (line 137) | def _ensure_clean_dir(path: Path) -> None:
  function _write_json (line 143) | def _write_json(path: Path, payload: dict) -> None:
  function _wait_for_tracker (line 148) | def _wait_for_tracker(port: int, timeout_secs: int = 20) -> None:
  function _reserve_local_port (line 165) | def _reserve_local_port() -> int:
  function run_mode (line 171) | def run_mode(
  function generate_fixtures_and_torrents (line 356) | def generate_fixtures_and_torrents(root: Path, announce_url: str) -> Path:

FILE: integration_tests/harness/scenarios/superseedr_to_qbittorrent.py
  class ScenarioResult (line 22) | class ScenarioResult:
  function _bucket_for_torrent (line 31) | def _bucket_for_torrent(name: str) -> str:
  function _qbit_savepath_for_torrent (line 41) | def _qbit_savepath_for_torrent(mode: str, name: str) -> str:
  function _write_seed_settings (line 49) | def _write_seed_settings(mode: str, config_path: Path, torrent_files: li...
  function _prepare_seed_data (line 100) | def _prepare_seed_data(seed_mode_root: Path, canonical_root: Path) -> None:
  function _ensure_clean_dir (line 110) | def _ensure_clean_dir(path: Path) -> None:
  function _write_json (line 116) | def _write_json(path: Path, payload: dict) -> None:
  function _wait_for_tracker (line 121) | def _wait_for_tracker(port: int, timeout_secs: int = 20) -> None:
  function _reserve_local_port (line 138) | def _reserve_local_port() -> int:
  function run_mode (line 144) | def run_mode(
  function generate_fixtures_and_torrents (line 306) | def generate_fixtures_and_torrents(root: Path, announce_url: str) -> Path:

FILE: integration_tests/harness/scenarios/superseedr_to_superseedr.py
  class ScenarioResult (line 19) | class ScenarioResult:
  function _bucket_for_torrent (line 28) | def _bucket_for_torrent(name: str) -> str:
  function _write_settings (line 38) | def _write_settings(mode: str, role: str, config_path: Path, torrent_fil...
  function _prepare_seed_data (line 90) | def _prepare_seed_data(seed_mode_root: Path, canonical_root: Path) -> None:
  function _ensure_clean_dir (line 100) | def _ensure_clean_dir(path: Path) -> None:
  function _write_json (line 106) | def _write_json(path: Path, payload: dict) -> None:
  function _wait_for_tracker (line 111) | def _wait_for_tracker(port: int, timeout_secs: int = 20) -> None:
  function run_mode (line 128) | def run_mode(
  function generate_fixtures_and_torrents (line 269) | def generate_fixtures_and_torrents(root: Path, announce_url: str) -> Path:

FILE: integration_tests/harness/scenarios/superseedr_to_transmission.py
  class ScenarioResult (line 23) | class ScenarioResult:
  function _bucket_for_torrent (line 32) | def _bucket_for_torrent(name: str) -> str:
  function _transmission_savepath_for_torrent (line 42) | def _transmission_savepath_for_torrent(mode: str, name: str) -> str:
  function _write_seed_settings (line 50) | def _write_seed_settings(mode: str, config_path: Path, torrent_files: li...
  function _prepare_seed_data (line 101) | def _prepare_seed_data(seed_mode_root: Path, canonical_root: Path) -> None:
  function _ensure_clean_dir (line 111) | def _ensure_clean_dir(path: Path) -> None:
  function _write_json (line 117) | def _write_json(path: Path, payload: dict) -> None:
  function _wait_for_tracker (line 122) | def _wait_for_tracker(port: int, timeout_secs: int = 20) -> None:
  function _reserve_local_port (line 139) | def _reserve_local_port() -> int:
  function run_mode (line 145) | def run_mode(
  function generate_fixtures_and_torrents (line 319) | def generate_fixtures_and_torrents(root: Path, announce_url: str) -> Path:

FILE: integration_tests/harness/scenarios/transmission_to_superseedr.py
  class ScenarioResult (line 23) | class ScenarioResult:
  function _bucket_for_torrent (line 32) | def _bucket_for_torrent(name: str) -> str:
  function _transmission_savepath_for_torrent (line 42) | def _transmission_savepath_for_torrent(mode: str, name: str) -> str:
  function _write_leech_settings (line 50) | def _write_leech_settings(mode: str, config_path: Path, torrent_files: l...
  function _prepare_seed_data (line 101) | def _prepare_seed_data(seed_mode_root: Path, canonical_root: Path) -> None:
  function _ensure_clean_dir (line 111) | def _ensure_clean_dir(path: Path) -> None:
  function _write_json (line 117) | def _write_json(path: Path, payload: dict) -> None:
  function _wait_for_tracker (line 122) | def _wait_for_tracker(port: int, timeout_secs: int = 20) -> None:
  function _reserve_local_port (line 139) | def _reserve_local_port() -> int:
  function run_mode (line 145) | def run_mode(
  function generate_fixtures_and_torrents (line 319) | def generate_fixtures_and_torrents(root: Path, announce_url: str) -> Path:

FILE: integration_tests/harness/tests/test_manifest.py
  function test_build_expected_manifest_skips_v1_only_for_non_v1 (line 8) | def test_build_expected_manifest_skips_v1_only_for_non_v1(tmp_path: Path...
  function test_validate_output_detects_missing_and_extra (line 21) | def test_validate_output_detects_missing_and_extra(tmp_path: Path) -> None:

FILE: integration_tests/harness/tests/test_qbittorrent_auth_interop.py
  function _reserve_local_port (line 15) | def _reserve_local_port() -> int:
  function test_qbittorrent_container_and_auth (line 24) | def test_qbittorrent_container_and_auth() -> None:

FILE: integration_tests/harness/tests/test_qbittorrent_to_superseedr_interop.py
  function test_qbittorrent_to_superseedr_interop_mode (line 13) | def test_qbittorrent_to_superseedr_interop_mode(mode: str) -> None:

FILE: integration_tests/harness/tests/test_stub_adapters.py
  function test_qbittorrent_temporary_password_extraction (line 12) | def test_qbittorrent_temporary_password_extraction() -> None:
  function test_qbittorrent_temporary_password_extraction_case_insensitive (line 17) | def test_qbittorrent_temporary_password_extraction_case_insensitive() ->...
  function test_qbittorrent_temporary_password_extraction_missing (line 22) | def test_qbittorrent_temporary_password_extraction_missing() -> None:
  class _QbittorrentLoginResponse (line 27) | class _QbittorrentLoginResponse:
    method __init__ (line 28) | def __init__(self, status: int, body: bytes) -> None:
    method __enter__ (line 32) | def __enter__(self) -> "_QbittorrentLoginResponse":
    method __exit__ (line 35) | def __exit__(self, *_args: object) -> None:
    method read (line 38) | def read(self) -> bytes:
  function test_qbittorrent_login_accepts_legacy_ok_body (line 42) | def test_qbittorrent_login_accepts_legacy_ok_body(monkeypatch: pytest.Mo...
  function test_qbittorrent_login_accepts_empty_204 (line 53) | def test_qbittorrent_login_accepts_empty_204(monkeypatch: pytest.MonkeyP...
  function test_qbittorrent_login_rejects_failed_200 (line 64) | def test_qbittorrent_login_rejects_failed_200(monkeypatch: pytest.Monkey...
  function test_qbittorrent_authenticate_falls_back_to_temp_password (line 75) | def test_qbittorrent_authenticate_falls_back_to_temp_password(monkeypatc...
  function test_qbittorrent_authenticate_retries_temp_password_until_ready (line 98) | def test_qbittorrent_authenticate_retries_temp_password_until_ready(monk...
  function test_qbittorrent_build_multipart_form_includes_file_and_fields (line 127) | def test_qbittorrent_build_multipart_form_includes_file_and_fields() -> ...
  function test_qbittorrent_add_torrent_posts_to_api (line 141) | def test_qbittorrent_add_torrent_posts_to_api(monkeypatch: pytest.Monkey...
  function test_qbittorrent_add_torrent_accepts_json_success (line 163) | def test_qbittorrent_add_torrent_accepts_json_success(
  function test_qbittorrent_add_torrent_rejects_json_failure (line 184) | def test_qbittorrent_add_torrent_rejects_json_failure(
  function test_qbittorrent_wait_for_download_success (line 206) | def test_qbittorrent_wait_for_download_success(monkeypatch: pytest.Monke...
  function test_qbittorrent_wait_for_download_error_state (line 222) | def test_qbittorrent_wait_for_download_error_state(monkeypatch: pytest.M...
  function test_transmission_add_torrent_sends_metainfo (line 233) | def test_transmission_add_torrent_sends_metainfo(monkeypatch: pytest.Mon...
  function test_transmission_wait_for_download_success (line 256) | def test_transmission_wait_for_download_success(monkeypatch: pytest.Monk...
  function test_transmission_wait_for_download_error_state (line 271) | def test_transmission_wait_for_download_error_state(monkeypatch: pytest....

FILE: integration_tests/harness/tests/test_superseedr_interop.py
  function test_superseedr_interop_mode (line 13) | def test_superseedr_interop_mode(mode: str) -> None:

FILE: integration_tests/harness/tests/test_superseedr_to_qbittorrent_interop.py
  function test_superseedr_to_qbittorrent_interop_mode (line 13) | def test_superseedr_to_qbittorrent_interop_mode(mode: str) -> None:

FILE: integration_tests/harness/tests/test_superseedr_to_transmission_interop.py
  function test_superseedr_to_transmission_interop_mode (line 13) | def test_superseedr_to_transmission_interop_mode(mode: str) -> None:

FILE: integration_tests/harness/tests/test_transmission_auth_interop.py
  function _reserve_local_port (line 14) | def _reserve_local_port() -> int:
  function test_transmission_container_and_auth (line 23) | def test_transmission_container_and_auth() -> None:

FILE: integration_tests/harness/tests/test_transmission_to_superseedr_interop.py
  function test_transmission_to_superseedr_interop_mode (line 13) | def test_transmission_to_superseedr_interop_mode(mode: str) -> None:

FILE: scripts/clear_integration_output.py
  function parse_args (line 16) | def parse_args() -> argparse.Namespace:
  function clear_mode (line 37) | def clear_mode(mode: str, dry_run: bool) -> tuple[int, int]:
  function main (line 79) | def main() -> int:

FILE: scripts/generate_integration_bins.py
  function expected_bytes (line 73) | def expected_bytes(seed_key: str, size: int) -> bytes:
  function sha256_hex (line 84) | def sha256_hex(data: bytes) -> str:
  function check_specs (line 88) | def check_specs() -> tuple[bool, int]:
  function generate_specs (line 116) | def generate_specs() -> int:
  function parse_args (line 129) | def parse_args() -> argparse.Namespace:
  function main (line 146) | def main() -> int:

FILE: scripts/generate_integration_torrents.py
  class BencodeError (line 33) | class BencodeError(ValueError):
  function bdecode (line 37) | def bdecode(data: bytes) -> object:
  function _decode_at (line 44) | def _decode_at(data: bytes, i: int) -> tuple[object, int]:
  function bencode (line 82) | def bencode(value: object) -> bytes:
  function normalize_announce (line 103) | def normalize_announce(payload: dict[bytes, object], announce_url: str) ...
  function rewrite_announce (line 109) | def rewrite_announce(src_path: Path, dest_path: Path, announce_url: str)...
  function write_v1_single_file_torrent_manual (line 118) | def write_v1_single_file_torrent_manual(
  function generate_v1_torrents (line 147) | def generate_v1_torrents(test_data_root: Path, output_root: Path, announ...
  function copy_and_normalize_existing_modes (line 183) | def copy_and_normalize_existing_modes(
  function verify_announce (line 195) | def verify_announce(output_root: Path, announce_url: str) -> tuple[bool,...
  function parse_args (line 213) | def parse_args() -> argparse.Namespace:
  function main (line 223) | def main() -> int:
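
The `bdecode`/`_decode_at` pair in this script is shaped like a recursive-descent bencode parser, which the announce-rewriting functions need in order to edit `.torrent` metadata. An independent sketch of that parser shape (error handling and naming here are illustrative, not the script's actual code):

```python
def bdecode(data: bytes) -> object:
    """Decode one bencoded value; reject trailing bytes."""
    value, end = _decode_at(data, 0)
    if end != len(data):
        raise ValueError("trailing data after bencoded value")
    return value


def _decode_at(data: bytes, i: int) -> tuple[object, int]:
    ch = data[i:i + 1]
    if ch == b"i":  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if ch == b"l":  # list: l<items>e
        items, i = [], i + 1
        while data[i:i + 1] != b"e":
            item, i = _decode_at(data, i)
            items.append(item)
        return items, i + 1
    if ch == b"d":  # dict: d<key><value>...e, keys are byte strings
        result, i = {}, i + 1
        while data[i:i + 1] != b"e":
            key, i = _decode_at(data, i)
            val, i = _decode_at(data, i)
            result[key] = val
        return result, i + 1
    if ch.isdigit():  # byte string: <len>:<bytes>
        colon = data.index(b":", i)
        length = int(data[i:colon])
        start = colon + 1
        return data[start:start + length], start + length
    raise ValueError(f"invalid bencode at offset {i}")
```

With a parser of this shape, rewriting the announce URL is just decoding the torrent, replacing the `b"announce"` key, and re-encoding.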

FILE: scripts/hash.py
  function calculate_merkle_root (line 11) | def calculate_merkle_root():

FILE: scripts/summarize_dht_soak.py
  function load_samples (line 35) | def load_samples(path: Path) -> list[dict[str, Any]]:
  function lines_in_window (line 49) | def lines_in_window(path: Path, start: str | None, end: str | None) -> l...
  function sum_field (line 61) | def sum_field(lines: list[str], field: str) -> tuple[int, int]:
  function some_field (line 75) | def some_field(line: str, field: str) -> str | None:
  function int_some_field (line 82) | def int_some_field(line: str, field: str) -> int | None:
  function bool_some_field (line 89) | def bool_some_field(line: str, field: str) -> bool | None:
  function count_some_values (line 98) | def count_some_values(lines: list[str], field: str) -> dict[str, int]:
  function average_int_field (line 108) | def average_int_field(lines: list[str], field: str) -> float | None:
  function summarize_samples (line 119) | def summarize_samples(samples: list[dict[str, Any]]) -> dict[str, Any]:
  function summarize_log (line 152) | def summarize_log(lines: list[str]) -> dict[str, Any]:
  function cleanup (line 261) | def cleanup(args: argparse.Namespace) -> dict[str, Any]:
  function parse_args (line 282) | def parse_args() -> argparse.Namespace:
  function assert_thresholds (line 319) | def assert_thresholds(summary: dict[str, Any], args: argparse.Namespace)...
  function main (line 349) | def main() -> None:

FILE: scripts/validate_integration_output.py
  function sha256_file (line 21) | def sha256_file(path: Path) -> str:
  function collect_files (line 32) | def collect_files(root: Path) -> dict[str, Path]:
  function validate_mode (line 46) | def validate_mode(
  function parse_args (line 106) | def parse_args() -> argparse.Namespace:
  function main (line 132) | def main() -> int:

FILE: src/app.rs
  function format_filesystem_path_error (line 121) | fn format_filesystem_path_error(action: &str, path: &Path, error: &io::E...
  constant FILE_HANDLE_MINIMUM (line 148) | const FILE_HANDLE_MINIMUM: usize = 64;
  constant SAFE_BUDGET_PERCENTAGE (line 149) | const SAFE_BUDGET_PERCENTAGE: f64 = 0.85;
  constant RSS_MAX_TORRENT_DOWNLOAD_BYTES (line 150) | pub const RSS_MAX_TORRENT_DOWNLOAD_BYTES: usize = 10 * 1024 * 1024;
  constant RSS_MANUAL_DOWNLOAD_TIMEOUT_SECS (line 151) | const RSS_MANUAL_DOWNLOAD_TIMEOUT_SECS: u64 = 20;
  constant NETWORK_HISTORY_PERSIST_INTERVAL_SECS (line 152) | const NETWORK_HISTORY_PERSIST_INTERVAL_SECS: u64 = 15 * 60;
  constant WATCH_FOLDER_RESCAN_INTERVAL_SECS (line 153) | const WATCH_FOLDER_RESCAN_INTERVAL_SECS: u64 = 5;
  constant SHARED_ROLE_RETRY_INTERVAL_SECS (line 154) | const SHARED_ROLE_RETRY_INTERVAL_SECS: u64 = 2;
  constant STARTUP_ROLLING_BATCH_SIZE (line 155) | const STARTUP_ROLLING_BATCH_SIZE: usize = 1;
  constant STARTUP_ROLLING_BATCH_INTERVAL_SECS (line 156) | const STARTUP_ROLLING_BATCH_INTERVAL_SECS: u64 = 1;
  constant STARTUP_ROLLING_LOADS_PER_INTERVAL (line 157) | const STARTUP_ROLLING_LOADS_PER_INTERVAL: usize = 1;
  constant SHUTDOWN_TIMEOUT_SECS (line 159) | const SHUTDOWN_TIMEOUT_SECS: u64 = 20;
  constant INCOMING_HANDSHAKE_TIMEOUT_SECS (line 160) | const INCOMING_HANDSHAKE_TIMEOUT_SECS: u64 = 10;
  constant PORT_FAMILY_HIGHLIGHT_DURATION (line 161) | const PORT_FAMILY_HIGHLIGHT_DURATION: Duration = Duration::from_secs(2);
  constant UI_FPS_SAMPLE_INTERVAL (line 162) | const UI_FPS_SAMPLE_INTERVAL: Duration = Duration::from_secs(1);
  constant NORMAL_IDLE_FRAME_CHECK_INTERVAL (line 163) | const NORMAL_IDLE_FRAME_CHECK_INTERVAL: Duration = Duration::from_millis...
  constant NORMAL_ANIMATION_RECENT_BLOCK_ROWS (line 164) | const NORMAL_ANIMATION_RECENT_BLOCK_ROWS: usize = 64;
  constant NORMAL_ANIMATION_RECENT_PEER_EVENTS (line 165) | const NORMAL_ANIMATION_RECENT_PEER_EVENTS: usize = 120;
  constant NORMAL_ANIMATION_FILE_ACTIVITY_WINDOW (line 166) | const NORMAL_ANIMATION_FILE_ACTIVITY_WINDOW: Duration = Duration::from_s...
  constant SWARM_AVAILABILITY_FLASH_DURATION (line 167) | const SWARM_AVAILABILITY_FLASH_DURATION: Duration = Duration::from_milli...
  constant DISK_IDLE_WOBBLE_PHASE_SPEED (line 168) | const DISK_IDLE_WOBBLE_PHASE_SPEED: f64 = 0.45;
  constant DISK_MIN_TRANSFER_PHASE_SPEED (line 169) | const DISK_MIN_TRANSFER_PHASE_SPEED: f64 = 0.80;
  constant DISK_MAX_TRANSFER_PHASE_SPEED (line 170) | const DISK_MAX_TRANSFER_PHASE_SPEED: f64 = 5.20;
  constant DISK_WRITE_THROTTLE_START_BYTES_PER_SEC (line 171) | const DISK_WRITE_THROTTLE_START_BYTES_PER_SEC: f64 = 1_000_000_000.0 / 8.0;
  constant DISK_WRITE_THROTTLE_MIN_BYTES_PER_SEC (line 172) | const DISK_WRITE_THROTTLE_MIN_BYTES_PER_SEC: f64 = 1_000_000.0 / 8.0;
  constant DISK_WRITE_THROTTLE_WINDOW_TICKS (line 173) | const DISK_WRITE_THROTTLE_WINDOW_TICKS: u8 = 5;
  constant DISK_WRITE_THROTTLE_STEP_MIN (line 174) | const DISK_WRITE_THROTTLE_STEP_MIN: f64 = 0.80;
  constant DISK_WRITE_THROTTLE_STEP_MAX (line 175) | const DISK_WRITE_THROTTLE_STEP_MAX: f64 = 1.20;
  constant DISK_WRITE_THROTTLE_BURST_SECS (line 176) | const DISK_WRITE_THROTTLE_BURST_SECS: f64 = 1.0;
  constant DISK_WRITE_THROTTLE_TARGET_LATENCY_SECS (line 177) | const DISK_WRITE_THROTTLE_TARGET_LATENCY_SECS: f64 = 2.0;
  constant BITTORRENT_PROTOCOL_STR (line 178) | const BITTORRENT_PROTOCOL_STR: &[u8] = b"BitTorrent protocol";
  type ListenerSet (line 180) | pub struct ListenerSet {
    method bind (line 186) | async fn bind(port: u16) -> io::Result<Self> {
    method accept (line 235) | async fn accept(&self) -> io::Result<(TcpStream, SocketAddr)> {
    method local_port (line 252) | fn local_port(&self) -> Option<u16> {
  type CratesResponse (line 262) | struct CratesResponse {
  type CrateInfo (line 268) | struct CrateInfo {
  type FilePriority (line 273) | pub enum FilePriority {
    method next (line 282) | pub fn next(&self) -> Self {
  type TorrentPreviewPayload (line 293) | pub struct TorrentPreviewPayload {
    method add_assign (line 307) | fn add_assign(&mut self, rhs: Self) {
  type TorrentPreviewFileEntry (line 299) | struct TorrentPreviewFileEntry {
  type BrowserPane (line 317) | pub enum BrowserPane {
  type FileBrowserMode (line 325) | pub enum FileBrowserMode {
  type FileMetadata (line 350) | pub struct FileMetadata {
  type DataRate (line 356) | pub enum DataRate {
    method as_ms (line 371) | pub fn as_ms(&self) -> u64 {
    method fps_label (line 385) | pub fn fps_label(self) -> &'static str {
    method target_fps (line 399) | pub fn target_fps(self) -> f64 {
    method frame_interval (line 413) | pub fn frame_interval(self) -> Duration {
    method next_slower (line 418) | pub fn next_slower(&self) -> Self {
    method next_faster (line 433) | pub fn next_faster(&self) -> Self {
  type CalculatedLimits (line 449) | pub struct CalculatedLimits {
    method into_map (line 456) | pub fn into_map(self) -> HashMap<ResourceType, usize> {
  type GraphDisplayMode (line 467) | pub enum GraphDisplayMode {
    method as_seconds (line 483) | pub fn as_seconds(&self) -> usize {
    method to_string (line 499) | pub fn to_string(self) -> &'static str {
    method next (line 515) | pub fn next(&self) -> Self {
    method prev (line 531) | pub fn prev(&self) -> Self {
  type ChartPanelView (line 549) | pub enum ChartPanelView {
    method to_string (line 561) | pub fn to_string(self) -> &'static str {
    method next (line 573) | pub fn next(self) -> Self {
    method prev (line 585) | pub fn prev(self) -> Self {
  type SelectedHeader (line 599) | pub enum SelectedHeader {
  method default (line 604) | fn default() -> Self {
  function torrent_sort_header (line 609) | fn torrent_sort_header(column: TorrentSortColumn) -> ColumnId {
  type AppCommand (line 618) | pub enum AppCommand {
  type AppRuntimeMode (line 670) | pub enum AppRuntimeMode {
    method is_shared (line 677) | pub fn is_shared(self) -> bool {
    method is_shared_follower (line 681) | pub fn is_shared_follower(self) -> bool {
  type AppClusterRole (line 687) | pub enum AppClusterRole {
  type ClusterCapabilities (line 693) | struct ClusterCapabilities {
  type IngestSource (line 703) | enum IngestSource {
    method relay_archive_extension (line 710) | fn relay_archive_extension(self) -> &'static str {
    method processed_archive_extension (line 718) | fn processed_archive_extension(self) -> &'static str {
  type ResolvedAddPayload (line 728) | enum ResolvedAddPayload {
  type AddIngressAction (line 734) | enum AddIngressAction {
  type ConfigItem (line 753) | pub enum ConfigItem {
  type AppMode (line 763) | pub enum AppMode {
  type AvailabilityTransitionLog (line 776) | type AvailabilityTransitionLog = (String, bool, usize, Option<std::path:...
  type PendingIngestRecord (line 779) | pub(crate) struct PendingIngestRecord {
  type PendingControlRecord (line 788) | pub(crate) struct PendingControlRecord {
  type CommandIngestResult (line 797) | pub(crate) enum CommandIngestResult {
  function move_file_with_fallback_impl (line 819) | fn move_file_with_fallback_impl<F>(
  function ingest_kind_from_path (line 830) | fn ingest_kind_from_path(path: &std::path::Path) -> Option<IngestKind> {
  function event_correlation_id_for_path (line 839) | fn event_correlation_id_for_path(path: &std::path::Path) -> String {
  type RssScreen (line 844) | pub enum RssScreen {
  type RssSectionFocus (line 851) | pub enum RssSectionFocus {
  type TorrentControlState (line 859) | pub enum TorrentControlState {
  type PeerInfo (line 867) | pub struct PeerInfo {
  function swarm_availability_counts (line 882) | pub fn swarm_availability_counts(peers: &[PeerInfo], total_pieces: u32) ...
  type TorrentMetrics (line 898) | pub struct TorrentMetrics {
  method default (line 944) | fn default() -> Self {
  type TorrentDisplayState (line 983) | pub struct TorrentDisplayState {
  type RecentFileActivity (line 1017) | pub struct RecentFileActivity {
  type SwarmAvailabilityFlashState (line 1023) | pub struct SwarmAvailabilityFlashState {
    method update (line 1033) | pub fn update(
    method update_from_peers (line 1051) | pub fn update_from_peers(
    method update_from_peer_availability (line 1071) | fn update_from_peer_availability(
    method update_from_availability (line 1113) | fn update_from_availability(
    method is_piece_flashing (line 1169) | pub fn is_piece_flashing(&self, info_hash: &[u8], piece_index: usize, ...
    method has_active_flash (line 1185) | pub fn has_active_flash(&self, now: Instant) -> bool {
    method clear_expired (line 1192) | fn clear_expired(&mut self, now: Instant) {
  function swarm_availability_flash_rollout_delay (line 1204) | fn swarm_availability_flash_rollout_delay(
  function swarm_availability_peer_bitfields (line 1222) | fn swarm_availability_peer_bitfields(
  function swarm_availability_peer_key (line 1237) | fn swarm_availability_peer_key(peer: &PeerInfo, fallback_index: usize) -...
  type DhtWaveUiState (line 1250) | pub struct DhtWaveUiState {
  type UiState (line 1265) | pub struct UiState {
    method record_drawn_frame (line 1293) | fn record_drawn_frame(&mut self, now: Instant) {
  type ConfigUiState (line 1316) | pub struct ConfigUiState {
  type DeleteConfirmUiState (line 1324) | pub struct DeleteConfirmUiState {
  type FileBrowserUiState (line 1330) | pub struct FileBrowserUiState {
  function build_torrent_preview_tree (line 1338) | pub fn build_torrent_preview_tree(
  function build_torrent_preview_tree_from_entries (line 1355) | fn build_torrent_preview_tree_from_entries(
  function collect_torrent_preview_files (line 1387) | fn collect_torrent_preview_files(
  function rebuild_torrent_preview_tree (line 1407) | fn rebuild_torrent_preview_tree(
  type JournalFilter (line 1420) | pub enum JournalFilter {
    method next (line 1429) | pub fn next(self) -> Self {
    method prev (line 1438) | pub fn prev(self) -> Self {
    method label (line 1447) | pub fn label(self) -> &'static str {
  type JournalUiState (line 1458) | pub struct JournalUiState {
  type RssUiState (line 1466) | pub struct RssUiState {
  type RssRuntimeState (line 1487) | pub struct RssRuntimeState {
  type RssFilterRuntimeStat (line 1496) | pub struct RssFilterRuntimeStat {
  type RssDerivedState (line 1502) | pub struct RssDerivedState {
  type RssPreviewItem (line 1512) | pub struct RssPreviewItem {
  type AppState (line 1524) | pub struct AppState {
  type DiskBackpressureDownloadThrottle (line 1645) | struct DiskBackpressureDownloadThrottle {
    method new (line 1673) | fn new(configured_download_limit_bps: u64) -> Self {
    method reset (line 1685) | fn reset(&mut self, configured_download_limit_bps: u64) {
    method update (line 1695) | fn update(&mut self, sample: DiskBackpressureSample) -> DiskBackpressu...
    method update_with_step_factor (line 1699) | fn update_with_step_factor(
    method finish_score_window (line 1737) | fn finish_score_window(&mut self, score: f64, step_factor: f64, ceilin...
  type DiskBackpressureSample (line 1655) | struct DiskBackpressureSample {
  type DiskBackpressureDecision (line 1664) | enum DiskBackpressureDecision {
  function initial_disk_throttle_rate (line 1756) | fn initial_disk_throttle_rate(configured_download_limit_bps: u64) -> f64 {
  function configured_download_ceiling_bytes_per_sec (line 1761) | fn configured_download_ceiling_bytes_per_sec(configured_download_limit_b...
  function configured_download_bucket_rate (line 1769) | fn configured_download_bucket_rate(configured_download_limit_bps: u64) -...
  function configured_upload_bucket_rate (line 1773) | fn configured_upload_bucket_rate(configured_upload_limit_bps: u64) -> f64 {
  function random_disk_throttle_step_factor (line 1777) | fn random_disk_throttle_step_factor() -> f64 {
  function normalize_disk_throttle_step (line 1781) | fn normalize_disk_throttle_step(step_factor: f64) -> f64 {
  function disk_backpressure_score (line 1789) | fn disk_backpressure_score(sample: DiskBackpressureSample) -> f64 {
  function disk_backpressure_has_signal (line 1795) | fn disk_backpressure_has_signal(sample: DiskBackpressureSample) -> bool {
  function effective_download_limit_bps (line 1799) | fn effective_download_limit_bps(
  function bytes_per_sec_to_bps (line 1812) | fn bytes_per_sec_to_bps(bytes_per_sec: f64) -> u64 {
  function clamp_disk_throttle_rate (line 1820) | fn clamp_disk_throttle_rate(rate_bytes_per_sec: f64, ceiling_bytes_per_s...
  function disk_throttle_capacity_for_rate (line 1834) | fn disk_throttle_capacity_for_rate(rate_bytes_per_sec: f64) -> f64 {
  type App (line 1842) | pub struct App {
    method new (line 2225) | pub async fn new(
    method new_with_lock (line 2232) | pub async fn new_with_lock(
    method cluster_role_label_for_state (line 2489) | fn cluster_role_label_for_state(&self) -> Option<&'static str> {
    method sync_cluster_role_label (line 2503) | fn sync_cluster_role_label(&mut self) {
    method should_suppress_follower_runtime_for_torrent (line 2512) | fn should_suppress_follower_runtime_for_torrent(&self, torrent: &Torre...
    method display_state_from_torrent_settings (line 2516) | fn display_state_from_torrent_settings(
    method ensure_display_only_torrent_from_settings (line 2542) | fn ensure_display_only_torrent_from_settings(&mut self, torrent: &Torr...
    method apply_leader_snapshot_to_display (line 2556) | fn apply_leader_snapshot_to_display(&mut self, snapshot: &AppOutputSta...
    method refresh_follower_read_model (line 2593) | fn refresh_follower_read_model(&mut self) {
    method start_missing_runtime_torrents_for_current_role (line 2620) | async fn start_missing_runtime_torrents_for_current_role(&mut self) {
    method is_shared_mode_enabled (line 2636) | pub fn is_shared_mode_enabled(&self) -> bool {
    method is_current_shared_leader (line 2640) | pub fn is_current_shared_leader(&self) -> bool {
    method is_current_shared_follower (line 2644) | pub fn is_current_shared_follower(&self) -> bool {
    method cluster_capabilities (line 2649) | fn cluster_capabilities(&self) -> ClusterCapabilities {
    method can_run_leader_services (line 2661) | fn can_run_leader_services(&self) -> bool {
    method can_write_shared_state (line 2665) | fn can_write_shared_state(&self) -> bool {
    method ensure_leader_services_running (line 2669) | fn ensure_leader_services_running(&mut self) {
    method current_shared_lock_path (line 2702) | fn current_shared_lock_path() -> io::Result<PathBuf> {
    method try_acquire_shared_runtime_lock (line 2708) | fn try_acquire_shared_runtime_lock() -> io::Result<Option<File>> {
    method watch_path_if_needed (line 2718) | fn watch_path_if_needed(&mut self, path: PathBuf) -> io::Result<()> {
    method desired_watch_paths_for_settings (line 2730) | fn desired_watch_paths_for_settings(&self, settings: &Settings) -> Vec...
    method reconcile_watched_paths (line 2738) | fn reconcile_watched_paths(&mut self, settings: &Settings) {
    method control_priority_overrides (line 2769) | fn control_priority_overrides(
    method shared_add_staging_dir (line 2783) | fn shared_add_staging_dir() -> Result<PathBuf, String> {
    method is_shared_staged_add_path (line 2789) | fn is_shared_staged_add_path(path: &Path) -> bool {
    method cleanup_staged_add_file (line 2795) | fn cleanup_staged_add_file(path: &Path) {
    method prepare_add_torrent_file_request (line 2812) | pub(crate) fn prepare_add_torrent_file_request(
    method prepare_add_magnet_request (line 2858) | pub(crate) fn prepare_add_magnet_request(
    method resolve_add_payload (line 2873) | fn resolve_add_payload(
    method control_request_for_add_payload (line 2906) | fn control_request_for_add_payload(
    method resolve_add_ingress_action (line 2928) | fn resolve_add_ingress_action(&self, source: IngestSource, path: &Path...
    method should_archive_processed_ingest (line 2981) | fn should_archive_processed_ingest(&self, source: IngestSource, path: ...
    method update_pending_ingest_source_path (line 2990) | fn update_pending_ingest_source_path(&mut self, path: &Path, final_pat...
    method archive_processed_ingest (line 3018) | fn archive_processed_ingest(&mut self, source: IngestSource, path: &Pa...
    method open_manual_browser_for_torrent_file (line 3040) | fn open_manual_browser_for_torrent_file(&mut self, path: PathBuf) -> R...
    method open_manual_browser_for_payload (line 3118) | fn open_manual_browser_for_payload(
    method execute_add_ingress_action (line 3171) | async fn execute_add_ingress_action(
    method queue_control_request_for_leader (line 3288) | fn queue_control_request_for_leader(
    method dispatch_cluster_control_request (line 3308) | pub async fn dispatch_cluster_control_request(
    method map_add_result_to_control_response (line 3320) | fn map_add_result_to_control_response(result: CommandIngestResult) -> ...
    method maybe_promote_to_shared_leader (line 3335) | async fn maybe_promote_to_shared_leader(&mut self) {
    method run (line 3389) | pub async fn run(
    method should_draw_this_frame (line 3612) | fn should_draw_this_frame(
    method normal_mode_animation_active (line 3624) | fn normal_mode_animation_active(
    method disk_health_has_current_signal (line 3652) | fn disk_health_has_current_signal(app_state: &AppState) -> bool {
    method disk_health_phase_speed (line 3660) | fn disk_health_phase_speed(app_state: &AppState) -> f64 {
    method dht_wave_animation_active (line 3686) | fn dht_wave_animation_active(
    method selected_torrent_animation_active (line 3707) | fn selected_torrent_animation_active(torrent: &TorrentDisplayState, no...
    method normal_idle_frame_check_interval (line 3760) | fn normal_idle_frame_check_interval(target_frame_interval: Duration) -...
    method advance_next_draw_time (line 3764) | fn advance_next_draw_time(
    method tick_ui_effects_clock (line 3775) | fn tick_ui_effects_clock(&mut self) {
    method update_swarm_availability_flash (line 3846) | fn update_swarm_availability_flash(&mut self, now: Instant) {
    method refresh_system_warning (line 3886) | fn refresh_system_warning(&mut self) {
    method startup_crossterm_event_listener (line 3892) | fn startup_crossterm_event_listener(&mut self) {
    method flush_persistence_writer (line 3938) | async fn flush_persistence_writer(&mut self) {
    method shutdown_sequence (line 3942) | async fn shutdown_sequence(&mut self, terminal: &mut Terminal<Crosster...
    method handle_incoming_peer (line 4020) | async fn handle_incoming_peer(&mut self, mut stream: TcpStream) {
    method refresh_rss_derived (line 4085) | fn refresh_rss_derived(&mut self) {
    method active_running_torrents_for_dht_announce (line 4089) | fn active_running_torrents_for_dht_announce(&self) -> Vec<Vec<u8>> {
    method announce_torrents_to_dht (line 4102) | fn announce_torrents_to_dht<I>(&self, info_hashes: I)
    method remove_torrent_runtime (line 4129) | fn remove_torrent_runtime(&mut self, info_hash: &[u8]) {
    method load_runtime_torrent_from_settings (line 4144) | async fn load_runtime_torrent_from_settings(
    method sync_runtime_torrents_from_settings (line 4206) | async fn sync_runtime_torrents_from_settings(
    method apply_settings_update (line 4332) | async fn apply_settings_update(&mut self, new_settings: Settings, pers...
    method handle_app_command (line 4412) | async fn handle_app_command(&mut self, command: AppCommand) {
    method handle_manager_event (line 4767) | fn handle_manager_event(&mut self, event: ManagerEvent) {
    method handle_file_event (line 5102) | async fn handle_file_event(&mut self, result: Result<Event, notify::Er...
    method handle_port_change (line 5123) | async fn handle_port_change(&mut self, path: PathBuf) {
    method calculate_stats (line 5203) | fn calculate_stats(&mut self, sys: &mut System) {
    method update_disk_backpressure_download_throttle (line 5246) | fn update_disk_backpressure_download_throttle(&mut self) {
    method startup_network_history_restore (line 5286) | fn startup_network_history_restore(&mut self) {
    method startup_activity_history_restore (line 5311) | fn startup_activity_history_restore(&mut self) {
    method drain_latest_torrent_metrics (line 5336) | fn drain_latest_torrent_metrics(&mut self) {
    method total_successfully_connected_peers (line 5391) | fn total_successfully_connected_peers(&self) -> usize {
    method sync_dht_peer_slot_usage (line 5399) | fn sync_dht_peer_slot_usage(&mut self) {
    method handle_dht_status_changed (line 5412) | fn handle_dht_status_changed(&mut self) {
    method tuning_resource_limits (line 5421) | async fn tuning_resource_limits(&mut self) {
    method reschedule_tuning_deadline (line 5471) | fn reschedule_tuning_deadline(&mut self) {
    method reset_tuning_for_objective_change (line 5476) | fn reset_tuning_for_objective_change(&mut self) {
    method sync_tuning_state_from_controller (line 5485) | fn sync_tuning_state_from_controller(&mut self) {
    method save_state_to_disk (line 5494) | fn save_state_to_disk(&mut self) {
    method torrent_saved_location (line 5525) | fn torrent_saved_location(metrics: &TorrentMetrics) -> Option<PathBuf> {
    method current_integrity_snapshots (line 5539) | fn current_integrity_snapshots(&self) -> Vec<TorrentIntegritySnapshot> {
    method dispatch_integrity_probe_batches (line 5561) | fn dispatch_integrity_probe_batches(&mut self) {
    method advance_integrity_scheduler (line 5588) | fn advance_integrity_scheduler(&mut self, dt: Duration) {
    method sync_integrity_probe_deadlines (line 5593) | fn sync_integrity_probe_deadlines(&mut self) {
    method clamp_selected_indices (line 5613) | fn clamp_selected_indices(&mut self) {
    method sort_and_filter_torrent_list (line 5617) | pub fn sort_and_filter_torrent_list(&mut self) {
    method find_most_common_download_path (line 5621) | pub fn find_most_common_download_path(&mut self) -> Option<PathBuf> {
    method get_initial_source_path (line 5638) | pub fn get_initial_source_path(&self) -> PathBuf {
    method get_initial_destination_path (line 5645) | pub fn get_initial_destination_path(&mut self) -> PathBuf {
    method add_torrent_from_file (line 5652) | pub async fn add_torrent_from_file(
    method add_magnet_torrent (line 5966) | pub async fn add_magnet_torrent(
    method source_watch_folder_for_path (line 6116) | fn source_watch_folder_for_path(&self, path: &std::path::Path) -> Opti...
    method has_live_runtime_for_torrent (line 6120) | fn has_live_runtime_for_torrent(&self, info_hash: &[u8]) -> bool {
    method clear_display_only_torrent (line 6124) | fn clear_display_only_torrent(&mut self, info_hash: &[u8]) {
    method is_host_watch_path (line 6135) | fn is_host_watch_path(&self, path: &Path) -> bool {
    method is_shared_inbox_path (line 6141) | fn is_shared_inbox_path(&self, path: &Path) -> bool {
    method relay_local_watch_file (line 6148) | fn relay_local_watch_file(&mut self, path: &Path, fallback_extension: ...
    method append_event_journal_entry (line 6177) | fn append_event_journal_entry(&mut self, entry: EventJournalEntry) {
    method control_event_scope (line 6181) | fn control_event_scope(&self) -> EventScope {
    method persist_torrent_metadata_snapshot (line 6189) | fn persist_torrent_metadata_snapshot(
    method record_ingest_queued (line 6224) | fn record_ingest_queued(
    method record_watch_path_discovered (line 6267) | fn record_watch_path_discovered(&mut self, path: &Path) {
    method record_rss_queued (line 6280) | fn record_rss_queued(&mut self, path: PathBuf, origin: IngestOrigin, i...
    method control_origin_for_command_path (line 6286) | fn control_origin_for_command_path(&self, path: &Path) -> ControlOrigin {
    method control_origin_for_ingest_path (line 6296) | fn control_origin_for_ingest_path(&self, path: &Path) -> ControlOrigin {
    method record_control_queued (line 6309) | fn record_control_queued(
    method record_control_result (line 6347) | fn record_control_result(
    method record_ingest_result (line 6394) | fn record_ingest_result(&mut self, path: &PathBuf, result: &CommandIng...
    method record_data_health_event (line 6494) | fn record_data_health_event(
    method record_torrent_completed_event (line 6518) | fn record_torrent_completed_event(&mut self, info_hash: &[u8], torrent...
    method apply_control_request (line 6566) | async fn apply_control_request(&mut self, request: &ControlRequest) ->...
    method watch_command_path (line 6647) | fn watch_command_path(cmd: &AppCommand) -> Option<&PathBuf> {
    method enqueue_watch_command (line 6660) | async fn enqueue_watch_command(&mut self, cmd: AppCommand, min_spacing...
    method process_pending_commands (line 6699) | async fn process_pending_commands(&mut self) {
    method flush_pending_watch_commands (line 6711) | fn flush_pending_watch_commands(&mut self) {
    method rebind_listener (line 6731) | async fn rebind_listener(&mut self, new_port: u16) -> bool {
    method download_rss_preview_item (line 6780) | async fn download_rss_preview_item(&mut self, item: RssPreviewItem) {
    method download_rss_torrent_from_url (line 6866) | async fn download_rss_torrent_from_url(
    method fetch_latest_version (line 6961) | async fn fetch_latest_version() -> Result<String, Box<dyn std::error::...
    method generate_output_state (line 6972) | pub fn generate_output_state(&self) -> AppOutputState {
    method dump_status_to_file (line 6992) | pub fn dump_status_to_file(&self) {
    method effective_status_dump_interval_secs (line 7011) | fn effective_status_dump_interval_secs(&self) -> u64 {
    method reschedule_status_dump_deadline (line 7022) | fn reschedule_status_dump_deadline(&mut self) {
    method trigger_status_dump_now (line 7031) | fn trigger_status_dump_now(&mut self) {
    method trigger_status_dump_after_successful_cluster_mutation (line 7036) | fn trigger_status_dump_after_successful_cluster_mutation(&mut self) {
    method set_runtime_status_dump_interval_override (line 7042) | fn set_runtime_status_dump_interval_override(&mut self, interval_secs:...
    method reschedule_startup_load_deadline (line 7047) | fn reschedule_startup_load_deadline(&mut self) {
    method maybe_log_startup_load_summary (line 7055) | fn maybe_log_startup_load_summary(&mut self) {
    method load_next_startup_batch (line 7071) | async fn load_next_startup_batch(&mut self) {
  type NetworkHistoryPersistRequest (line 1900) | pub struct NetworkHistoryPersistRequest {
  type ActivityHistoryPersistRequest (line 1906) | pub struct ActivityHistoryPersistRequest {
  type PersistPayload (line 1912) | pub struct PersistPayload {
  function initial_cluster_role_for_runtime_mode (line 1920) | fn initial_cluster_role_for_runtime_mode(runtime_mode: AppRuntimeMode) -...
  type DhtWaveTargets (line 1929) | struct DhtWaveTargets {
  function dht_wave_query_load_signal (line 1939) | fn dht_wave_query_load_signal(telemetry: &DhtWaveTelemetry) -> f64 {
  function dht_wave_query_pressure_signal (line 1949) | fn dht_wave_query_pressure_signal(telemetry: &DhtWaveTelemetry) -> f64 {
  function dht_wave_targets (line 1962) | fn dht_wave_targets(status: &DhtStatus, telemetry: &DhtWaveTelemetry) ->...
  function dht_wave_smoothing_factor (line 2032) | fn dht_wave_smoothing_factor(frame_dt: f64, rate: f64) -> f64 {
  function smooth_dht_wave_component (line 2036) | fn smooth_dht_wave_component(current: &mut f64, target: f64, factor: f64) {
  constant DHT_WAVE_PHASE_WRAP_PERIOD (line 2040) | const DHT_WAVE_PHASE_WRAP_PERIOD: f64 = std::f64::consts::TAU * 25.0;
  function advance_dht_wave_state (line 2042) | fn advance_dht_wave_state(
  function spawn_persistence_writer (line 2101) | fn spawn_persistence_writer(
  function build_app_dht_service_config (line 2207) | fn build_app_dht_service_config(client_configs: &Settings) -> DhtService...
  function is_valid_incoming_bittorrent_handshake (line 7146) | fn is_valid_incoming_bittorrent_handshake(buffer: &[u8]) -> bool {
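A validator like `is_valid_incoming_bittorrent_handshake` typically checks the BEP 3 handshake prefix: the first byte is the length 19, followed by the ASCII string "BitTorrent protocol". A hedged sketch of that check (the exact buffer handling in the repository may differ):

```rust
// BEP 3 handshake prefix check (illustrative). A real handshake continues
// with 8 reserved bytes, the 20-byte info-hash, and the 20-byte peer id.

const PROTOCOL: &[u8] = b"BitTorrent protocol";

fn looks_like_bittorrent_handshake(buffer: &[u8]) -> bool {
    buffer.len() > PROTOCOL.len()
        && buffer[0] == PROTOCOL.len() as u8 // pstrlen = 19
        && &buffer[1..=PROTOCOL.len()] == PROTOCOL
}

fn main() {
    let mut good = vec![19u8];
    good.extend_from_slice(PROTOCOL);
    good.extend_from_slice(&[0u8; 8]); // reserved bytes would follow
    assert!(looks_like_bittorrent_handshake(&good));
    // Anything else (say, a stray HTTP request) fails the prefix check.
    assert!(!looks_like_bittorrent_handshake(b"GET / HTTP/1.1\r\n"));
}
```

Rejecting on the prefix alone lets a listener drop non-BitTorrent connections before reading the rest of the 68-byte handshake.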
  function persisted_validation_status_from_metrics (line 7152) | fn persisted_validation_status_from_metrics(
  function activity_marks_torrent_complete (line 7170) | fn activity_marks_torrent_complete(activity_message: &str) -> bool {
  function torrent_has_skipped_files (line 7174) | fn torrent_has_skipped_files(metrics: &TorrentMetrics) -> bool {
  function torrent_is_effectively_incomplete (line 7181) | pub fn torrent_is_effectively_incomplete(metrics: &TorrentMetrics) -> bo...
  function torrent_completion_percent (line 7195) | pub fn torrent_completion_percent(metrics: &TorrentMetrics) -> f64 {
  function calculate_adaptive_limits (line 7210) | fn calculate_adaptive_limits(client_configs: &Settings) -> (CalculatedLi...
  function compose_system_warning (line 7268) | fn compose_system_warning(
  function parse_hybrid_hashes (line 7280) | pub fn parse_hybrid_hashes(magnet_link: &str) -> (Option<Vec<u8>>, Optio...
  function info_hash_from_torrent_bytes (line 7284) | pub fn info_hash_from_torrent_bytes(bytes: &[u8]) -> Option<Vec<u8>> {
  function resolve_magnet_torrent_name (line 7288) | fn resolve_magnet_torrent_name(
  function torrent_file_count (line 7302) | fn torrent_file_count(torrent: &crate::torrent_file::Torrent) -> usize {
  function extract_magnet_display_name (line 7310) | fn extract_magnet_display_name(magnet_link: &str) -> Option<String> {
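The magnet helpers above (`parse_hybrid_hashes`, `extract_magnet_display_name`, and the `resolve_magnet_name_uses_dn_for_placeholder` test further down) revolve around the magnet URI query string, where `dn=` carries a percent-encoded display name. A std-only sketch of pulling that name out; the helper names here are hypothetical, and a production parser would be more forgiving of malformed escapes:

```rust
// Illustrative magnet display-name extraction: find the `dn=` query
// parameter and percent-decode it ('+' treated as a space).

fn percent_decode(s: &str) -> Option<String> {
    let bytes = s.as_bytes();
    let mut out = Vec::new();
    let mut i = 0;
    while i < bytes.len() {
        match bytes[i] {
            b'%' => {
                // Two hex digits must follow the escape.
                let hex = bytes.get(i + 1..i + 3)?;
                let byte = u8::from_str_radix(std::str::from_utf8(hex).ok()?, 16).ok()?;
                out.push(byte);
                i += 3;
            }
            b'+' => {
                out.push(b' ');
                i += 1;
            }
            b => {
                out.push(b);
                i += 1;
            }
        }
    }
    String::from_utf8(out).ok()
}

fn magnet_display_name(magnet: &str) -> Option<String> {
    let query = magnet.split_once('?')?.1;
    query
        .split('&')
        .find_map(|param| param.strip_prefix("dn="))
        .and_then(percent_decode)
}

fn main() {
    let link = "magnet:?xt=urn:btih:abcdef&dn=My%20Cool%20File";
    assert_eq!(magnet_display_name(link).as_deref(), Some("My Cool File"));
}
```

Returning `Option` end to end matches the fallback chain implied by the tests above: use `dn` when present and decodable, otherwise fall back to an info-hash label.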
  function clamp_selected_indices_in_state (line 7329) | pub(crate) fn clamp_selected_indices_in_state(app_state: &mut AppState) {
  function file_activity_wave_steps_per_second (line 7351) | pub(crate) fn file_activity_wave_steps_per_second(speed_bps: u64) -> f64 {
  function sort_and_filter_torrent_list_state (line 7373) | pub(crate) fn sort_and_filter_torrent_list_state(app_state: &mut AppStat...
  function has_effectively_incomplete_torrents (line 7470) | fn has_effectively_incomplete_torrents(app_state: &AppState) -> bool {
  function clear_finished_progress_priority_pin (line 7477) | fn clear_finished_progress_priority_pin(app_state: &mut AppState) -> bool {
  function refresh_autosort_after_stats (line 7491) | pub(crate) fn refresh_autosort_after_stats(
  function set_torrent_sort_to_column (line 7515) | fn set_torrent_sort_to_column(app_state: &mut AppState, column: TorrentS...
  function set_peer_sort_to_column (line 7519) | fn set_peer_sort_to_column(app_state: &mut AppState, column: PeerSortCol...
  function align_unpinned_sort_with_visible_activity (line 7523) | pub(crate) fn align_unpinned_sort_with_visible_activity(app_state: &mut ...
  function rss_settings_changed (line 7584) | fn rss_settings_changed(old_settings: &Settings, new_settings: &Settings...
  function should_load_persisted_torrent (line 7588) | fn should_load_persisted_torrent(torrent_settings: &TorrentSettings) -> ...
  function build_persist_payload (line 7592) | fn build_persist_payload(
  function apply_network_history_persist_result (line 7712) | fn apply_network_history_persist_result(app_state: &mut AppState, reques...
  function apply_activity_history_persist_result (line 7719) | fn apply_activity_history_persist_result(app_state: &mut AppState, reque...
  function should_persist_network_history_on_interval (line 7726) | fn should_persist_network_history_on_interval(app_state: &AppState) -> b...
  function queue_persistence_payload (line 7730) | fn queue_persistence_payload(
  function flush_persistence_writer_parts (line 7744) | async fn flush_persistence_writer_parts(
  function prune_rss_feed_errors (line 7756) | fn prune_rss_feed_errors(
  function watched_parent_matches (line 7771) | fn watched_parent_matches(path: &Path, watch_dir: &Path) -> bool {
  function normalized_watch_path (line 7777) | fn normalized_watch_path(path: &Path) -> PathBuf {
  function normalized_watch_path (line 7784) | fn normalized_watch_path(path: &Path) -> PathBuf {
  function mock_display (line 7845) | fn mock_display(name: &str, peer_count: usize) -> TorrentDisplayState {
  function shared_env_guard (line 7857) | fn shared_env_guard() -> &'static std::sync::Mutex<()> {
  function lock_shared_env (line 7861) | fn lock_shared_env() -> std::sync::MutexGuard<'static, ()> {
  function disk_backpressure_sample (line 7867) | fn disk_backpressure_sample(
  function set_disk_throttle_rate (line 7880) | fn set_disk_throttle_rate(throttle: &mut DiskBackpressureDownloadThrottl...
  function completed_bps_for_cap (line 7890) | fn completed_bps_for_cap(rate_bytes_per_sec: f64, disk_capacity_bps: u64...
  function run_disk_throttle_window (line 7894) | fn run_disk_throttle_window(
  function latency_limited_disk_sample (line 7907) | fn latency_limited_disk_sample(
  function run_latency_limited_disk_window (line 7926) | fn run_latency_limited_disk_window(
  function disk_backpressure_hill_climber_converges_up_from_low_cap (line 7938) | fn disk_backpressure_hill_climber_converges_up_from_low_cap() {
  function disk_backpressure_hill_climber_converges_down_from_high_cap (line 7951) | fn disk_backpressure_hill_climber_converges_down_from_high_cap() {
  function disk_backpressure_hill_climber_rejects_candidate_that_lowers_completed_speed (line 7966) | fn disk_backpressure_hill_climber_rejects_candidate_that_lowers_complete...
  function disk_backpressure_hill_climber_converges_up_to_latency_limited_disk (line 7981) | fn disk_backpressure_hill_climber_converges_up_to_latency_limited_disk() {
  function disk_backpressure_hill_climber_converges_down_to_latency_limited_disk (line 8008) | fn disk_backpressure_hill_climber_converges_down_to_latency_limited_disk...
  function disk_backpressure_hill_climber_converges_down_from_100mbps_to_30mbps_disk (line 8035) | fn disk_backpressure_hill_climber_converges_down_from_100mbps_to_30mbps_...
  function disk_backpressure_hill_climber_climbs_after_disk_capacity_recovers (line 8062) | fn disk_backpressure_hill_climber_climbs_after_disk_capacity_recovers() {
  function disk_backpressure_score_penalizes_only_above_target_receive_to_write_latency (line 8100) | fn disk_backpressure_score_penalizes_only_above_target_receive_to_write_...
  function disk_backpressure_throttle_waits_for_disk_write_signal (line 8120) | fn disk_backpressure_throttle_waits_for_disk_write_signal() {
  function disk_backpressure_throttle_disables_when_signal_disappears (line 8138) | fn disk_backpressure_throttle_disables_when_signal_disappears() {
  function configured_rate_limit_buckets_use_bytes_per_second (line 8159) | fn configured_rate_limit_buckets_use_bytes_per_second() {
  function disk_backpressure_throttle_clamps_to_one_mbps_floor (line 8168) | fn disk_backpressure_throttle_clamps_to_one_mbps_floor() {
  function disk_backpressure_throttle_disables_when_seeding (line 8181) | fn disk_backpressure_throttle_disables_when_seeding() {
  function effective_download_limit_uses_lower_configured_or_adaptive_limit (line 8189) | fn effective_download_limit_uses_lower_configured_or_adaptive_limit() {
  function app_disk_backpressure_update_changes_live_download_bucket (line 8207) | async fn app_disk_backpressure_update_changes_live_download_bucket() {
  function configure_temp_app_paths_for_test (line 8249) | fn configure_temp_app_paths_for_test() -> tempfile::TempDir {
  function wait_for_peer_slot_usages (line 8257) | async fn wait_for_peer_slot_usages(
  function format_filesystem_path_error_reports_directory_as_file_mismatch (line 8275) | fn format_filesystem_path_error_reports_directory_as_file_mismatch() {
  function format_filesystem_path_error_reports_missing_path_clearly (line 8288) | fn format_filesystem_path_error_reports_missing_path_clearly() {
  function move_file_with_fallback_copies_when_rename_crosses_devices (line 8297) | fn move_file_with_fallback_copies_when_rename_crosses_devices() {
  function persisted_validation_status_is_true_only_when_complete (line 8320) | fn persisted_validation_status_is_true_only_when_complete() {
  function persisted_validation_status_downgrades_when_incomplete (line 8344) | fn persisted_validation_status_downgrades_when_incomplete() {
  function persisted_validation_status_preserves_prior_true_for_metadata_unavailable_snapshot (line 8359) | fn persisted_validation_status_preserves_prior_true_for_metadata_unavail...
  function persisted_validation_status_treats_effectively_complete_torrents_as_complete (line 8367) | fn persisted_validation_status_treats_effectively_complete_torrents_as_c...
  function build_persist_payload_keeps_deferred_startup_torrents_in_settings (line 8387) | fn build_persist_payload_keeps_deferred_startup_torrents_in_settings() {
  function should_draw_normal_mode_when_dirty_or_animating (line 8437) | fn should_draw_normal_mode_when_dirty_or_animating() {
  function swarm_availability_counts_pieces_across_peers (line 8444) | fn swarm_availability_counts_pieces_across_peers() {
  function swarm_availability_flash_tracks_newly_added_pieces (line 8460) | fn swarm_availability_flash_tracks_newly_added_pieces() {
  function swarm_availability_flash_rolls_batch_by_piece_index (line 8483) | fn swarm_availability_flash_rolls_batch_by_piece_index() {
  function swarm_availability_flash_suppresses_full_map_increase (line 8507) | fn swarm_availability_flash_suppresses_full_map_increase() {
  function swarm_availability_flash_keeps_partial_increase_after_complete_baseline (line 8527) | fn swarm_availability_flash_keeps_partial_increase_after_complete_baseli...
  function swarm_availability_flash_suppresses_new_peer_initial_bitfield (line 8546) | fn swarm_availability_flash_suppresses_new_peer_initial_bitfield() {
  function swarm_availability_flash_tracks_known_peer_new_piece (line 8567) | fn swarm_availability_flash_tracks_known_peer_new_piece() {
  function swarm_availability_flash_ignores_later_new_peer_bitfield (line 8593) | fn swarm_availability_flash_ignores_later_new_peer_bitfield() {
  function should_draw_every_frame_in_welcome_mode (line 8624) | fn should_draw_every_frame_in_welcome_mode() {
  function should_only_draw_dirty_in_power_saving_mode (line 8630) | fn should_only_draw_dirty_in_power_saving_mode() {
  function normal_animation_gate_is_idle_for_static_state (line 8644) | fn normal_animation_gate_is_idle_for_static_state() {
  function normal_animation_gate_detects_active_swarm_availability_flash (line 8655) | fn normal_animation_gate_detects_active_swarm_availability_flash() {
  function normal_animation_gate_ignores_held_disk_health_when_disk_is_idle (line 8679) | fn normal_animation_gate_ignores_held_disk_health_when_disk_is_idle() {
  function normal_animation_gate_detects_current_disk_activity (line 8695) | fn normal_animation_gate_detects_current_disk_activity() {
  function disk_health_phase_speed_keeps_idle_wobble_without_transfers (line 8709) | fn disk_health_phase_speed_keeps_idle_wobble_without_transfers() {
  function disk_health_phase_speed_uses_download_upload_direction (line 8719) | fn disk_health_phase_speed_uses_download_upload_direction() {
  function disk_health_phase_speed_increases_with_pressure (line 8736) | fn disk_health_phase_speed_increases_with_pressure() {
  function normal_animation_gate_detects_selected_torrent_activity (line 8756) | fn normal_animation_gate_detects_selected_torrent_activity() {
  function normal_animation_gate_detects_dht_query_activity (line 8772) | fn normal_animation_gate_detects_dht_query_activity() {
  function normal_idle_check_uses_light_polling_cadence_for_fast_targets (line 8787) | fn normal_idle_check_uses_light_polling_cadence_for_fast_targets() {
  function normal_idle_check_preserves_slower_targets (line 8795) | fn normal_idle_check_preserves_slower_targets() {
  function data_rate_sixty_uses_precise_frame_interval (line 8803) | fn data_rate_sixty_uses_precise_frame_interval() {
  function draw_scheduler_recovers_from_late_timer_wakeups (line 8810) | fn draw_scheduler_recovers_from_late_timer_wakeups() {
  function ui_fps_counter_measures_drawn_frames_per_second (line 8825) | fn ui_fps_counter_measures_drawn_frames_per_second() {
  function test_dht_wave_targets (line 8837) | fn test_dht_wave_targets(
  function test_dht_wave_signal_at (line 8856) | fn test_dht_wave_signal_at(wave: &DhtWaveUiState, x: f64) -> f64 {
  function dht_wave_targets_remain_reactive_above_ten_queries (line 8868) | fn dht_wave_targets_remain_reactive_above_ten_queries() {
  function dht_wave_state_smooths_60fps_target_transition (line 8906) | fn dht_wave_state_smooths_60fps_target_transition() {
  function dht_wave_state_stays_continuous_across_phase_wrap (line 8965) | fn dht_wave_state_stays_continuous_across_phase_wrap() {
  function completion_helper_marks_seeding_complete (line 8995) | fn completion_helper_marks_seeding_complete() {
  function completion_helper_marks_skipped_files_complete (line 9008) | fn completion_helper_marks_skipped_files_complete() {
  function completion_helper_marks_metadata_pending_incomplete (line 9021) | fn completion_helper_marks_metadata_pending_incomplete() {
  function completion_helper_marks_zero_piece_complete_when_metrics_say_complete (line 9029) | fn completion_helper_marks_zero_piece_complete_when_metrics_say_complete...
  function torrent_saved_location_uses_file_path_for_flat_torrents (line 9039) | fn torrent_saved_location_uses_file_path_for_flat_torrents() {
  function torrent_saved_location_uses_root_for_explicit_empty_container_multi_file_torrents (line 9056) | fn torrent_saved_location_uses_root_for_explicit_empty_container_multi_f...
  function torrent_saved_location_uses_root_for_single_entry_multi_file_torrents_without_container (line 9073) | fn torrent_saved_location_uses_root_for_single_entry_multi_file_torrents...
  function clamp_selected_indices_clamps_torrent_and_peer_to_bounds (line 9090) | fn clamp_selected_indices_clamps_torrent_and_peer_to_bounds() {
  function sort_and_filter_applies_query_and_clamps_selection (line 9111) | fn sort_and_filter_applies_query_and_clamps_selection() {
  function sort_and_filter_prioritizes_unavailable_torrents (line 9139) | fn sort_and_filter_prioritizes_unavailable_torrents() {
  function sort_and_filter_respects_pinned_sort_over_availability_priority (line 9169) | fn sort_and_filter_respects_pinned_sort_over_availability_priority() {
  function sort_and_filter_progress_descending_puts_most_complete_first (line 9198) | fn sort_and_filter_progress_descending_puts_most_complete_first() {
  function sort_and_filter_progress_ascending_puts_zero_progress_first (line 9225) | fn sort_and_filter_progress_ascending_puts_zero_progress_first() {
  function stats_autosort_refresh_reorders_torrents_when_sort_mode_changes (line 9252) | fn stats_autosort_refresh_reorders_torrents_when_sort_mode_changes() {
  function stats_autosort_refresh_reorders_unpinned_torrents_when_speeds_change (line 9284) | fn stats_autosort_refresh_reorders_unpinned_torrents_when_speeds_change() {
  function stats_autosort_refresh_preserves_pinned_torrent_order_when_speeds_change (line 9320) | fn stats_autosort_refresh_preserves_pinned_torrent_order_when_speeds_cha...
  function stats_autosort_refresh_clears_finished_progress_priority_pin (line 9356) | fn stats_autosort_refresh_clears_finished_progress_priority_pin() {
  function stats_autosort_refresh_keeps_progress_priority_pin_while_unfinished (line 9386) | fn stats_autosort_refresh_keeps_progress_priority_pin_while_unfinished() {
  function stats_autosort_refresh_keeps_progress_priority_pin_for_metadata_pending (line 9418) | fn stats_autosort_refresh_keeps_progress_priority_pin_for_metadata_pendi...
  function stats_autosort_refresh_keeps_non_progress_user_pin_after_completion (line 9449) | fn stats_autosort_refresh_keeps_non_progress_user_pin_after_completion() {
  function stats_autosort_refresh_clears_progress_pin_for_completed_probe_issue (line 9479) | fn stats_autosort_refresh_clears_progress_pin_for_completed_probe_issue() {
  function stats_autosort_refresh_marks_change_when_only_peer_sort_changes (line 9517) | fn stats_autosort_refresh_marks_change_when_only_peer_sort_changes() {
  function align_unpinned_sort_uses_upload_when_only_upload_is_visible (line 9534) | fn align_unpinned_sort_uses_upload_when_only_upload_is_visible() {
  function align_unpinned_sort_preserves_current_sort_when_idle_and_complete (line 9554) | fn align_unpinned_sort_preserves_current_sort_when_idle_and_complete() {
  function align_unpinned_sort_preserves_pinned_torrent_sort (line 9575) | fn align_unpinned_sort_preserves_pinned_torrent_sort() {
  function align_unpinned_sort_uses_peer_upload_when_only_peer_upload_is_visible (line 9596) | fn align_unpinned_sort_uses_peer_upload_when_only_peer_upload_is_visible...
  function align_unpinned_sort_keeps_peer_speed_sort_when_peer_activity_is_idle (line 9616) | fn align_unpinned_sort_keeps_peer_speed_sort_when_peer_activity_is_idle() {
  function extract_magnet_display_name_decodes_dn (line 9637) | fn extract_magnet_display_name_decodes_dn() {
  function resolve_magnet_name_uses_dn_for_placeholder (line 9647) | fn resolve_magnet_name_uses_dn_for_placeholder() {
  function resolve_magnet_name_falls_back_to_hash_label_when_dn_missing (line 9657) | fn resolve_magnet_name_falls_back_to_hash_label_when_dn_missing() {
  function extract_magnet_display_name_skips_malformed_segments (line 9667) | fn extract_magnet_display_name_skips_malformed_segments() {
  function parse_hybrid_hashes_handles_case_insensitive_xt_and_urn_prefixes (line 9676) | fn parse_hybrid_hashes_handles_case_insensitive_xt_and_urn_prefixes() {
  function rss_settings_changed_detects_filter_updates (line 9684) | fn rss_settings_changed_detects_filter_updates() {
  function rss_settings_changed_ignores_non_rss_updates (line 9697) | fn rss_settings_changed_ignores_non_rss_updates() {
  function prune_rss_feed_errors_removes_deleted_feed_urls (line 9706) | fn prune_rss_feed_errors_removes_deleted_feed_urls() {
  function prune_rss_feed_errors_is_noop_when_all_urls_still_configured (line 9736) | fn prune_rss_feed_errors_is_noop_when_all_urls_still_configured() {
  function compose_system_warning_merges_base_and_dht_messages (line 9758) | fn compose_system_warning_merges_base_and_dht_messages() {
  function compose_system_warning_handles_single_or_no_messages (line 9764) | fn compose_system_warning_handles_single_or_no_messages() {
  function incoming_handshake_validator_accepts_bittorrent_handshake_prefix (line 9777) | fn incoming_handshake_validator_accepts_bittorrent_handshake_prefix() {
  function incoming_handshake_validator_rejects_non_bittorrent_prefix (line 9786) | fn incoming_handshake_validator_rejects_non_bittorrent_prefix() {
  function mark_port_open_command_tracks_ipv4_and_ipv6_independently (line 9795) | async fn mark_port_open_command_tracks_ipv4_and_ipv6_independently() {
  function mark_port_open_command_treats_ipv4_mapped_ipv6_as_ipv4_reachability (line 9827) | async fn mark_port_open_command_treats_ipv4_mapped_ipv6_as_ipv4_reachabi...
  function rebind_listener_with_ephemeral_port_notifies_managers_with_bound_port (line 9848) | async fn rebind_listener_with_ephemeral_port_notifies_managers_with_boun...
  function rebind_listener_reannounces_running_torrents_on_new_port_when_already_reachable (line 9878) | async fn rebind_listener_reannounces_running_torrents_on_new_port_when_a...
  function mark_port_open_announces_running_torrents_once_per_family_transition (line 9918) | async fn mark_port_open_announces_running_torrents_once_per_family_trans...
  function apply_settings_update_restores_previous_port_when_rebind_fails (line 9998) | async fn apply_settings_update_restores_previous_port_when_rebind_fails() {
  function dht_status_change_resends_cached_peer_slot_usage (line 10050) | async fn dht_status_change_resends_cached_peer_slot_usage() {
  function apply_settings_update_reconfigures_dht_bootstrap_after_failed_port_rebind (line 10087) | async fn apply_settings_update_reconfigures_dht_bootstrap_after_failed_p...
  function should_load_persisted_torrent_skips_only_deleting_entries (line 10158) | fn should_load_persisted_torrent_skips_only_deleting_entries() {
  function reset_tuning_for_objective_change_reschedules_deadline (line 10178) | async fn reset_tuning_for_objective_change_reschedules_deadline() {
  function handle_manager_event_file_probe_status_marks_data_unavailable (line 10206) | async fn handle_manager_event_file_probe_status_marks_data_unavailable() {
  function load_next_startup_batch_loads_only_one_deferred_torrent (line 10260) | async fn load_next_startup_batch_loads_only_one_deferred_torrent() {
  function load_next_startup_batch_records_one_summary_after_queue_drains (line 10306) | async fn load_next_startup_batch_records_one_summary_after_queue_drains() {
  function load_next_startup_batch_keeps_failed_deferred_torrent_queued (line 10357) | async fn load_next_startup_batch_keeps_failed_deferred_torrent_queued() {
  function load_next_startup_batch_rotates_failed_deferred_torrent_behind_later_entries (line 10402) | async fn load_next_startup_batch_rotates_failed_deferred_torrent_behind_...
  function data_availability_fault_records_event_journal_entry (line 10473) | async fn data_availability_fault_records_event_journal_entry() {
  function ingest_journal_records_queue_and_terminal_result_with_shared_correlation (line 10518) | async fn ingest_journal_records_queue_and_terminal_result_with_shared_co...
  function startup_selected_header_reflects_pinned_torrent_sort (line 10586) | async fn startup_selected_header_reflects_pinned_torrent_sort() {
  function control_journal_preserves_watch_folder_origin (line 10606) | async fn control_journal_preserves_watch_folder_origin() {
  function control_origin_for_ingest_path_uses_rss_origin_when_available (line 10647) | async fn control_origin_for_ingest_path_uses_rss_origin_when_available() {
  function manual_torrent_browser_moves_standalone_watch_file_to_processed_and_updates_journal (line 10672) | async fn manual_torrent_browser_moves_standalone_watch_file_to_processed...
  function manual_torrent_browser_moves_shared_inbox_file_to_shared_processed_and_updates_journal (line 10720) | async fn manual_torrent_browser_moves_shared_inbox_file_to_shared_proces...
  function missing_verbatim_shared_inbox_magnet_is_ignored (line 10799) | async fn missing_verbatim_shared_inbox_magnet_is_ignored() {
  function unreadable_shared_inbox_magnet_is_not_ignored_as_missing (line 10862) | async fn unreadable_shared_inbox_magnet_is_not_ignored_as_missing() {
  function partial_probe_result_does_not_clear_previous_unavailable_state (line 10921) | async fn partial_probe_result_does_not_clear_previous_unavailable_state() {
  function dispatch_integrity_probe_batches_requests_work_immediately (line 10985) | async fn dispatch_integrity_probe_batches_requests_work_immediately() {
  function metadata_loaded_dispatches_probe_without_waiting_for_tick (line 11025) | async fn metadata_loaded_dispatches_probe_without_waiting_for_tick() {
  function metadata_loaded_updates_layout_before_fault_fanout_for_single_entry_multi_file (line 11089) | async fn metadata_loaded_updates_layout_before_fault_fanout_for_single_e...
  function data_availability_fault_does_not_fan_out_across_flat_torrents_in_same_directory (line 11179) | async fn data_availability_fault_does_not_fan_out_across_flat_torrents_i...
  function partial_probe_marks_torrent_unavailable_before_sweep_completion (line 11274) | async fn partial_probe_marks_torrent_unavailable_before_sweep_completion...
  function healthy_probe_requests_manager_recovery_but_does_not_flip_ui_until_metrics (line 11350) | async fn healthy_probe_requests_manager_recovery_but_does_not_flip_ui_un...
  function completion_transition_records_single_torrent_completed_event (line 11420) | async fn completion_transition_records_single_torrent_completed_event() {
  function completed_torrents_restored_as_complete_do_not_rejournal_on_metrics_refresh (line 11488) | async fn completed_torrents_restored_as_complete_do_not_rejournal_on_met...
  function completed_torrents_do_not_duplicate_existing_completion_journal_entries (line 11546) | async fn completed_torrents_do_not_duplicate_existing_completion_journal...
  function restored_completed_torrents_skip_startup_recompletion_once (line 11619) | async fn restored_completed_torrents_skip_startup_recompletion_once() {
  function control_request_pause_updates_runtime_config (line 11729) | async fn control_request_pause_updates_runtime_config() {
  function shared_follower_suppresses_incomplete_runtime_and_converges_display_state (line 11760) | async fn shared_follower_suppresses_incomplete_runtime_and_converges_dis...
  function apply_settings_update_refreshes_file_preview_tree_priorities (line 11806) | async fn apply_settings_update_refreshes_file_preview_tree_priorities() {
  function apply_settings_update_preserves_preview_file_indices_for_nonlexical_order (line 11861) | async fn apply_settings_update_preserves_preview_file_indices_for_nonlex...
  function shared_follower_promotion_starts_previously_suppressed_runtime (line 11942) | async fn shared_follower_promotion_starts_previously_suppressed_runtime() {
  function cluster_revision_reload_applies_for_followers_and_stops_after_promotion (line 11980) | async fn cluster_revision_reload_applies_for_followers_and_stops_after_p...
  function shared_follower_read_model_prefers_leader_snapshot_for_incomplete_torrents (line 12062) | async fn shared_follower_read_model_prefers_leader_snapshot_for_incomple...
  function shared_leader_dump_writes_host_and_cluster_status_files (line 12174) | async fn shared_leader_dump_writes_host_and_cluster_status_files() {
  function shared_leader_defaults_status_follow_to_five_seconds (line 12236) | async fn shared_leader_defaults_status_follow_to_five_seconds() {
  function shared_follower_path_file_with_default_download_routes_through_control_request (line 12281) | async fn shared_follower_path_file_with_default_download_routes_through_...
  function shared_follower_allows_host_local_config_updates_and_rewatches_host_folder (line 12363) | async fn shared_follower_allows_host_local_config_updates_and_rewatches_...
  function control_request_status_follow_start_sets_runtime_override (line 12423) | async fn control_request_status_follow_start_sets_runtime_override() {
  function enqueue_watch_command_spills_to_pending_queue_when_channel_is_full (line 12444) | async fn enqueue_watch_command_spills_to_pending_queue_when_channel_is_f...
  function add_magnet_torrent_rejects_hashless_magnet_without_panicking (line 12468) | async fn add_magnet_torrent_rejects_hashless_magnet_without_panicking() {
  function healthy_probe_for_available_torrent_does_not_request_recovery_again (line 12502) | async fn healthy_probe_for_available_torrent_does_not_request_recovery_a...
  function stale_healthy_probe_does_not_request_manager_recovery (line 12546) | async fn stale_healthy_probe_does_not_request_manager_recovery() {
  function build_persist_payload_preserves_validation_when_data_is_unavailable (line 12594) | fn build_persist_payload_preserves_validation_when_data_is_unavailable() {
  function ui_telemetry_metrics_refresh_updates_data_availability_flag (line 12616) | fn ui_telemetry_metrics_refresh_updates_data_availability_flag() {
  function network_history_interval_persistence_only_when_dirty (line 12644) | fn network_history_interval_persistence_only_when_dirty() {
  function build_persist_payload_skips_network_history_while_restore_is_pending (line 12656) | fn build_persist_payload_skips_network_history_while_restore_is_pending() {
  function build_persist_payload_syncs_rollup_snapshot_into_network_history_state (line 12679) | fn build_persist_payload_syncs_rollup_snapshot_into_network_history_stat...
  function apply_network_history_persist_result_clears_dirty_only_for_latest_success (line 12708) | fn apply_network_history_persist_result_clears_dirty_only_for_latest_suc...
  function queue_persistence_payload_carries_network_history_state (line 12735) | async fn queue_persistence_payload_carries_network_history_state() {
  function flush_persistence_writer_parts_drops_sender_and_joins_task (line 12781) | async fn flush_persistence_writer_parts_drops_sender_and_joins_task() {
  function listener_set_bind_keeps_ipv6_listener_when_ipv4_port_is_already_in_use (line 12794) | async fn listener_set_bind_keeps_ipv6_listener_when_ipv4_port_is_already...
  function listener_set_bind_keeps_ipv4_listener_when_ipv6_port_is_already_in_use (line 12838) | async fn listener_set_bind_keeps_ipv4_listener_when_ipv6_port_is_already...

FILE: src/command.rs
  type TorrentCommand (line 15) | pub enum TorrentCommand {
  type TorrentCommandSummary (line 132) | pub struct TorrentCommandSummary<'a>(pub &'a TorrentCommand);
  function fmt (line 134) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {

FILE: src/config.rs
  type TorrentSortColumn (line 30) | pub enum TorrentSortColumn {
    method default_direction (line 39) | pub fn default_direction(self) -> SortDirection {
  type PeerSortColumn (line 49) | pub enum PeerSortColumn {
    method default_direction (line 63) | pub fn default_direction(self) -> SortDirection {
  type SortDirection (line 72) | pub enum SortDirection {
  type RssAddedVia (line 79) | pub enum RssAddedVia {
  type RssFeed (line 87) | pub struct RssFeed {
  method default (line 93) | fn default() -> Self {
  type RssFilter (line 103) | pub struct RssFilter {
  method default (line 111) | fn default() -> Self {
  type RssFilterMode (line 122) | pub enum RssFilterMode {
  type RssSettings (line 130) | pub struct RssSettings {
  method default (line 139) | fn default() -> Self {
  type RssHistoryEntry (line 152) | pub struct RssHistoryEntry {
  type FeedSyncError (line 165) | pub struct FeedSyncError {
  type Settings (line 172) | pub struct Settings {
  method default (line 204) | fn default() -> Self {
  type TorrentSettings (line 246) | pub struct TorrentSettings {
  type TorrentMetadataFileEntry (line 260) | pub struct TorrentMetadataFileEntry {
  type TorrentMetadataEntry (line 267) | pub struct TorrentMetadataEntry {
    method placeholder_from_settings (line 676) | fn placeholder_from_settings(settings: &TorrentSettings) -> Option<Sel...
    method apply_settings_overrides (line 689) | fn apply_settings_overrides(&mut self, settings: &TorrentSettings) {
  type TorrentMetadataConfig (line 279) | pub struct TorrentMetadataConfig {
  function serialize (line 289) | pub fn serialize<S>(
  function deserialize (line 301) | pub fn deserialize<'de, D>(deserializer: D) -> Result<HashMap<usize, Fil...
  constant SHARED_CONFIG_DIR_ENV (line 315) | const SHARED_CONFIG_DIR_ENV: &str = "SUPERSEEDR_SHARED_CONFIG_DIR";
  constant SHARED_HOST_ID_ENV (line 316) | const SHARED_HOST_ID_ENV: &str = "SUPERSEEDR_SHARED_HOST_ID";
  constant LEGACY_SHARED_HOST_ID_ENV (line 317) | const LEGACY_SHARED_HOST_ID_ENV: &str = "SUPERSEEDR_HOST_ID";
  constant CLIENT_PORT_ENV (line 318) | const CLIENT_PORT_ENV: &str = "SUPERSEEDR_CLIENT_PORT";
  constant DEFAULT_DOWNLOAD_FOLDER_ENV (line 319) | const DEFAULT_DOWNLOAD_FOLDER_ENV: &str = "SUPERSEEDR_DEFAULT_DOWNLOAD_F...
  constant OUTPUT_STATUS_INTERVAL_ENV (line 320) | const OUTPUT_STATUS_INTERVAL_ENV: &str = "SUPERSEEDR_OUTPUT_STATUS_INTER...
  constant EXTRA_WATCH_PATH_PREFIX (line 321) | const EXTRA_WATCH_PATH_PREFIX: &str = "SUPERSEEDR_WATCH_PATH_";
  constant SHARED_TORRENT_SOURCE_PREFIX (line 322) | const SHARED_TORRENT_SOURCE_PREFIX: &str = "shared:";
  constant SHARED_CONFIG_SUBDIR (line 323) | const SHARED_CONFIG_SUBDIR: &str = "superseedr-config";
  constant LAUNCHER_SHARED_CONFIG_FILE (line 324) | const LAUNCHER_SHARED_CONFIG_FILE: &str = "launcher_shared_config.toml";
  constant LAUNCHER_HOST_ID_FILE (line 325) | const LAUNCHER_HOST_ID_FILE: &str = "launcher_host_id.toml";
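The environment-variable constants above name the overrides that `apply_env_overrides` and related helpers in `src/config.rs` read at startup. A minimal sketch of using them follows; the variable names come from the constants listed above, but every value (paths, host id, port, watch-path suffix) is purely illustrative and not taken from the repository:

```shell
# Hedged sketch: exporting the superseedr configuration overrides before
# launching the client. Names match the SUPERSEEDR_* constants in
# src/config.rs; all values here are hypothetical examples.
export SUPERSEEDR_SHARED_CONFIG_DIR="/mnt/nas/superseedr"
export SUPERSEEDR_SHARED_HOST_ID="seedbox-01"
export SUPERSEEDR_CLIENT_PORT=6881
export SUPERSEEDR_DEFAULT_DOWNLOAD_FOLDER="$HOME/Downloads"

# Extra watch folders use the SUPERSEEDR_WATCH_PATH_ prefix with an
# arbitrary (hypothetical) suffix per folder:
export SUPERSEEDR_WATCH_PATH_1="/mnt/nas/torrents/inbox"

# Confirm the overrides are visible to a child process:
echo "shared config dir: $SUPERSEEDR_SHARED_CONFIG_DIR"
echo "client port: $SUPERSEEDR_CLIENT_PORT"
```

Whether a given variable applies in shared-config mode versus standalone mode depends on the backend resolution logic (`resolve_config_backend`, `resolve_shared_config_selection`), so treat this as a sketch of the naming convention, not a guarantee of precedence.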
  type LauncherSharedConfig (line 329) | struct LauncherSharedConfig {
  type LauncherHostId (line 335) | struct LauncherHostId {
  type SharedConfigSource (line 341) | pub enum SharedConfigSource {
  type HostIdSource (line 348) | pub enum HostIdSource {
  type HostIdSelection (line 357) | pub struct HostIdSelection {
  type SharedConfigSelection (line 363) | pub struct SharedConfigSelection {
  type CatalogTorrentSettings (line 371) | struct CatalogTorrentSettings {
    method from_settings (line 614) | fn from_settings(
    method to_settings (line 644) | fn to_settings(
  type SharedSettingsConfig (line 385) | struct SharedSettingsConfig {
    method from_settings (line 759) | fn from_settings(settings: &Settings, shared_root: Option<&Path>) -> i...
    method apply_to_settings (line 793) | fn apply_to_settings(
  method default (line 414) | fn default() -> Self {
  type CatalogConfig (line 448) | struct CatalogConfig {
    method from_settings (line 838) | fn from_settings(
    method apply_to_settings (line 858) | fn apply_to_settings(
  type LayeredConfig (line 453) | struct LayeredConfig {
    method from_flat_settings (line 549) | fn from_flat_settings(settings: &Settings) -> Self {
    method from_shared_settings (line 559) | fn from_shared_settings(
    method resolve_flat_settings (line 586) | fn resolve_flat_settings(&self) -> io::Result<Settings> {
    method resolve_shared_settings (line 590) | fn resolve_shared_settings(
    method resolve_settings (line 598) | fn resolve_settings(
  type HostConfig (line 461) | struct HostConfig {
    method from_flat_settings (line 874) | fn from_flat_settings(settings: &Settings) -> Self {
    method from_settings (line 882) | fn from_settings(settings: &Settings, shared_client_id: &str) -> Self {
    method apply_to_settings (line 890) | fn apply_to_settings(&self, settings: &mut Settings) {
  method default (line 468) | fn default() -> Self {
  type NormalConfigPaths (line 478) | struct NormalConfigPaths {
  type SharedConfigPaths (line 485) | struct SharedConfigPaths {
  type NormalConfigBackend (line 497) | struct NormalConfigBackend {
    method load_settings (line 1856) | fn load_settings(&self) -> io::Result<Settings> {
    method load_settings_for_cli (line 1881) | fn load_settings_for_cli(&self) -> io::Result<Settings> {
    method save_settings (line 1910) | fn save_settings(&self, settings: &Settings) -> io::Result<()> {
  type SharedConfigBackend (line 502) | struct SharedConfigBackend {
    method load_settings (line 1936) | fn load_settings(&self) -> io::Result<Settings> {
    method load_settings_for_cli (line 1947) | fn load_settings_for_cli(&self) -> io::Result<Settings> {
    method save_settings (line 1962) | fn save_settings(&self, settings: &Settings) -> io::Result<()> {
  type LoggedSharedConfigRevision (line 507) | struct LoggedSharedConfigRevision {
  type SharedCatalogBackupPolicy (line 514) | struct SharedCatalogBackupPolicy {
  type ConfigBackend (line 520) | enum ConfigBackend {
    method load_settings (line 2031) | fn load_settings(&self) -> io::Result<Settings> {
    method load_settings_for_cli (line 2045) | fn load_settings_for_cli(&self) -> io::Result<Settings> {
    method save_settings (line 2059) | fn save_settings(&self, settings: &Settings) -> io::Result<()> {
    method load_torrent_metadata (line 2066) | fn load_torrent_metadata(&self) -> io::Result<TorrentMetadataConfig> {
    method upsert_torrent_metadata (line 2077) | fn upsert_torrent_metadata(&self, entry: TorrentMetadataEntry) -> io::...
  function app_paths_override (line 535) | fn app_paths_override() -> &'static Mutex<Option<(PathBuf, PathBuf)>> {
  function shared_env_guard_for_tests (line 540) | pub(crate) fn shared_env_guard_for_tests() -> &'static Mutex<()> {
  function logged_shared_config_revision (line 544) | fn logged_shared_config_revision() -> &'static Mutex<Option<LoggedShared...
  function sync_torrent_metadata_with_settings (line 697) | fn sync_torrent_metadata_with_settings(
  function apply_metadata_to_settings (line 734) | fn apply_metadata_to_settings(settings: &mut Settings, metadata: &Torren...
  function sanitize_host_id (line 898) | fn sanitize_host_id(raw: &str) -> String {
  function resolve_shared_mount_and_config_root (line 914) | fn resolve_shared_mount_and_config_root(path: PathBuf) -> (PathBuf, Path...
  function launcher_shared_config_path (line 933) | fn launcher_shared_config_path() -> io::Result<PathBuf> {
  function launcher_host_id_path (line 943) | fn launcher_host_id_path() -> io::Result<PathBuf> {
  function load_launcher_shared_config (line 953) | fn load_launcher_shared_config() -> io::Result<Option<PathBuf>> {
  function load_launcher_host_id (line 965) | fn load_launcher_host_id() -> io::Result<Option<String>> {
  function resolve_shared_config_selection (line 977) | fn resolve_shared_config_selection() -> io::Result<Option<SharedConfigSe...
  function shared_mount_root (line 1002) | pub fn shared_mount_root() -> Option<PathBuf> {
  function shared_config_root (line 1009) | fn shared_config_root() -> Option<PathBuf> {
  function sanitized_host_id_candidate (line 1016) | fn sanitized_host_id_candidate(raw: &str) -> Option<String> {
  function resolve_host_id_selection_from_sources (line 1021) | fn resolve_host_id_selection_from_sources(
  function resolve_host_id (line 1072) | fn resolve_host_id() -> String {
  function resolve_host_id_selection (line 1076) | fn resolve_host_id_selection() -> HostIdSelection {
  function resolve_shared_config_paths (line 1098) | fn resolve_shared_config_paths() -> io::Result<Option<SharedConfigPaths>> {
  function resolve_config_backend (line 1118) | fn resolve_config_backend() -> io::Result<ConfigBackend> {
  function portable_relative_path_string (line 1137) | fn portable_relative_path_string(path: &Path) -> String {
  function shared_relative_path_to_pathbuf (line 1144) | fn shared_relative_path_to_pathbuf(relative: &str) -> PathBuf {
  function normalize_shared_relative_path (line 1154) | fn normalize_shared_relative_path(
  function encode_shared_data_path (line 1186) | fn encode_shared_data_path(
  function strip_shared_mount_prefix (line 1212) | fn strip_shared_mount_prefix(path: &Path, shared_mount_root: &Path) -> R...
  function path_without_verbatim_prefix (line 1235) | fn path_without_verbatim_prefix(path: &Path) -> PathBuf {
  function strip_windows_prefix_case_insensitive (line 1248) | fn strip_windows_prefix_case_insensitive(path: &Path, root: &Path) -> Op...
  function component_eq_ignore_ascii_case (line 1269) | fn component_eq_ignore_ascii_case(left: Component<'_>, right: Component<...
  function resolve_shared_data_path (line 1275) | fn resolve_shared_data_path(
  function validate_shared_runtime_settings (line 1292) | fn validate_shared_runtime_settings(
  function encode_catalog_torrent_source (line 1313) | fn encode_catalog_torrent_source(source: &str, shared_root: Option<&Path...
  function decode_catalog_torrent_source (line 1334) | fn decode_catalog_torrent_source(source: &str, shared_root: Option<&Path...
  function apply_env_overrides (line 1349) | fn apply_env_overrides(settings: &Settings) -> io::Result<Settings> {
  function parse_env_override (line 1365) | fn parse_env_override<T>(key: &str) -> io::Result<Option<T>>
  function parse_path_env_override (line 1390) | fn parse_path_env_override(key: &str) -> io::Result<Option<PathBuf>> {
  function env_var_case_insensitive (line 1405) | fn env_var_case_insensitive(key: &str) -> io::Result<Option<String>> {
  function env_var_os_case_insensitive (line 1417) | fn env_var_os_case_insensitive(key: &str) -> Option<OsString> {
  function expand_home_path (line 1430) | fn expand_home_path(value: OsString) -> PathBuf {
  function home_dir_from_env (line 1456) | fn home_dir_from_env() -> Option<PathBuf> {
  function absolutize_env_path (line 1474) | fn absolutize_env_path(path: PathBuf) -> PathBuf {
  function read_toml_or_default (line 1484) | fn read_toml_or_default<T>(path: &Path) -> io::Result<T>
  function read_torrent_metadata_or_default (line 1496) | fn read_torrent_metadata_or_default(path: &Path) -> io::Result<TorrentMe...
  function fingerprint_for_path (line 1513) | fn fingerprint_for_path(path: &Path) -> io::Result<Option<String>> {
  function ensure_fingerprint_matches (line 1523) | fn ensure_fingerprint_matches(
  function write_toml_atomically_with_fingerprint (line 1538) | fn write_toml_atomically_with_fingerprint<T: Serialize>(
  function shared_catalog_backup_policy (line 1547) | fn shared_catalog_backup_policy(torrent_count: usize) -> SharedCatalogBa...
  function shared_catalog_backup_roll_start (line 1572) | fn shared_catalog_backup_roll_start(
  function cleanup_shared_catalog_backups (line 1581) | fn cleanup_shared_catalog_backups(backup_dir: &Path, retained_backups: u...
  function backup_shared_catalog_before_write (line 1603) | fn backup_shared_catalog_before_write(
  function write_shared_cluster_revision_marker (line 1625) | fn write_shared_cluster_revision_marker(root_dir: &Path) -> io::Result<S...
  function shared_config_revision_snapshot (line 1638) | fn shared_config_revision_snapshot(
  function mark_shared_config_revision_seen (line 1654) | fn mark_shared_config_revision_seen(paths: &SharedConfigPaths, revision:...
  function log_shared_config_revision_if_changed (line 1664) | fn log_shared_config_revision_if_changed(paths: &SharedConfigPaths) {
  function validate_shared_runtime_root (line 1690) | fn validate_shared_runtime_root(paths: &SharedConfigPaths) -> io::Result...
  function bootstrap_shared_host_config (line 1735) | fn bootstrap_shared_host_config(paths: &SharedConfigPaths) -> io::Result...
  function clear_shared_config_state (line 1764) | fn clear_shared_config_state() {}
  function clear_shared_config_state_for_tests (line 1767) | pub(crate) fn clear_shared_config_state_for_tests() {
  function set_app_paths_override_for_tests (line 1772) | pub(crate) fn set_app_paths_override_for_tests(paths: Option<(PathBuf, P...
  function first_run_settings (line 1779) | fn first_run_settings() -> Settings {
  function client_never_started_error (line 1789) | fn client_never_started_error() -> io::Error {
  function runtime_lock_is_held (line 1796) | fn runtime_lock_is_held(lock_path: Option<&Path>) -> bool {
  function load_current_shared_layered (line 1825) | fn load_current_shared_layered(
  function upsert_torrent_metadata_entry (line 2104) | fn upsert_torrent_metadata_entry(
  function get_app_paths (line 2119) | pub fn get_app_paths() -> Option<(PathBuf, PathBuf)> {
  function fallback_app_paths (line 2143) | fn fallback_app_paths() -> Option<(PathBuf, PathBuf)> {
  function app_config_dir (line 2166) | pub fn app_config_dir() -> Option<PathBuf> {
  function local_runtime_data_dir (line 2170) | pub fn local_runtime_data_dir() -> Option<PathBuf> {
  function local_settings_path (line 2174) | pub fn local_settings_path() -> Option<PathBuf> {
  function effective_shared_config_selection (line 2178) | pub fn effective_shared_config_selection() -> io::Result<Option<SharedCo...
  function persisted_shared_config_path (line 2182) | pub fn persisted_shared_config_path() -> io::Result<PathBuf> {
  function effective_host_id_selection (line 2186) | pub fn effective_host_id_selection() -> io::Result<HostIdSelection> {
  function persisted_host_id_path (line 2190) | pub fn persisted_host_id_path() -> io::Result<PathBuf> {
  function set_persisted_shared_config (line 2194) | pub fn set_persisted_shared_config(path: &Path) -> io::Result<SharedConf...
  function clear_persisted_shared_config (line 2222) | pub fn clear_persisted_shared_config() -> io::Result<bool> {
  function set_persisted_host_id (line 2232) | pub fn set_persisted_host_id(host_id: &str) -> io::Result<String> {
  function clear_persisted_host_id (line 2254) | pub fn clear_persisted_host_id() -> io::Result<bool> {
  function local_normal_backend (line 2264) | fn local_normal_backend() -> io::Result<NormalConfigBackend> {
  function shared_backend_for_mount_root (line 2280) | fn shared_backend_for_mount_root(path: &Path) -> io::Result<SharedConfig...
  function convert_standalone_to_shared (line 2305) | pub fn convert_standalone_to_shared(path: &Path) -> io::Result<SharedCon...
  function convert_shared_to_standalone (line 2345) | pub fn convert_shared_to_standalone() -> io::Result<()> {
  function is_shared_config_mode (line 2367) | pub fn is_shared_config_mode() -> bool {
  function shared_settings_path (line 2371) | pub fn shared_settings_path() -> Option<PathBuf> {
  function shared_host_dir (line 2378) | pub fn shared_host_dir() -> Option<PathBuf> {
  function shared_torrents_path (line 2385) | pub fn shared_torrents_path() -> Option<PathBuf> {
  function shared_root_path (line 2389) | pub fn shared_root_path() -> Option<PathBuf> {
  function shared_data_path (line 2393) | pub fn shared_data_path() -> Option<PathBuf> {
  function shared_torrent_file_path (line 2397) | pub fn shared_torrent_file_path(info_hash: &[u8]) -> Option<PathBuf> {
  function shared_inbox_path (line 2401) | pub fn shared_inbox_path() -> Option<PathBuf> {
  function shared_processed_path (line 2405) | pub fn shared_processed_path() -> Option<PathBuf> {
  function shared_status_path (line 2409) | pub fn shared_status_path() -> Option<PathBuf> {
  function shared_leader_status_path (line 2413) | pub fn shared_leader_status_path() -> Option<PathBuf> {
  function runtime_data_dir (line 2417) | pub fn runtime_data_dir() -> Option<PathBuf> {
  function runtime_log_dir (line 2425) | pub fn runtime_log_dir() -> Option<PathBuf> {
  function local_runtime_log_dir (line 2429) | pub fn local_runtime_log_dir() -> Option<PathBuf> {
  function local_cli_log_dir (line 2433) | pub fn local_cli_log_dir() -> Option<PathBuf> {
  function runtime_persistence_dir (line 2437) | pub fn runtime_persistence_dir() -> Option<PathBuf> {
  function local_lock_path (line 2441) | pub fn local_lock_path() -> Option<PathBuf> {
  function encode_shared_cli_torrent_path (line 2445) | pub fn encode_shared_cli_torrent_path(path: &Path) -> io::Result<Option<...
  function resolve_shared_cli_torrent_path (line 2454) | pub fn resolve_shared_cli_torrent_path(path: &Path) -> io::Result<PathBu...
  function shared_cluster_revision_path (line 2466) | pub fn shared_cluster_revision_path() -> Option<PathBuf> {
  function shared_lock_path (line 2470) | pub fn shared_lock_path() -> Option<PathBuf> {
  function resolve_host_watch_path (line 2474) | pub fn resolve_host_watch_path(settings: &Settings) -> Option<PathBuf> {
  type SettingsChangeScope (line 2482) | pub enum SettingsChangeScope {
  function classify_shared_mode_settings_change (line 2488) | pub fn classify_shared_mode_settings_change(
  function resolve_command_watch_path (line 2511) | pub fn resolve_command_watch_path(settings: &Settings) -> Option<PathBuf> {
  function push_unique_path (line 2519) | fn push_unique_path(paths: &mut Vec<PathBuf>, path: PathBuf) {
  function resolve_additional_watch_paths_from_sources (line 2525) | fn resolve_additional_watch_paths_from_sources<I, K, V>(vars: I) -> Vec<...
  function additional_watch_paths (line 2563) | pub fn additional_watch_paths() -> Vec<PathBuf> {
  function normalized_watch_component (line 2567) | fn normalized_watch_component(component: Component<'_>) -> String {
  function normalized_watch_components (line 2579) | fn normalized_watch_components(path: &Path) -> Vec<String> {
  function component_prefix_matches (line 2586) | fn component_prefix_matches(path: &[String], prefix: &[String]) -> bool {
  function watch_paths_overlap (line 2590) | fn watch_paths_overlap(left: &Path, right: &Path) -> bool {
  function shared_watch_exclusion_paths (line 2599) | fn shared_watch_exclusion_paths() -> Vec<PathBuf> {
  function additional_host_watch_paths (line 2610) | fn additional_host_watch_paths() -> Vec<PathBuf> {
  function host_watch_paths (line 2622) | pub fn host_watch_paths(settings: &Settings) -> Vec<PathBuf> {
  function runtime_watch_paths (line 2636) | pub fn runtime_watch_paths(
  function configured_watch_paths (line 2670) | pub fn configured_watch_paths(settings: &Settings) -> Vec<PathBuf> {
  function get_watch_path (line 2674) | pub fn get_watch_path() -> Option<(PathBuf, PathBuf)> {
  function create_watch_directories (line 2684) | pub fn create_watch_directories() -> io::Result<()> {
  function ensure_watch_directories (line 2693) | pub fn ensure_watch_directories(settings: &Settings) -> io::Result<()> {
  function load_settings (line 2727) | pub fn load_settings() -> io::Result<Settings> {
  function load_settings_for_cli (line 2731) | pub fn load_settings_for_cli() -> io::Result<Settings> {
  function save_settings (line 2735) | pub fn save_settings(settings: &Settings) -> io::Result<()> {
  function load_torrent_metadata (line 2739) | pub fn load_torrent_metadata() -> io::Result<TorrentMetadataConfig> {
  function upsert_torrent_metadata (line 2743) | pub fn upsert_torrent_metadata(entry: TorrentMetadataEntry) -> io::Resul...
  function shared_host_id (line 2747) | pub fn shared_host_id() -> Option<String> {
  function cleanup_old_backups (line 2753) | fn cleanup_old_backups(backup_dir: &PathBuf, limit: usize) -> io::Result...
  type EnvVarRestore (line 2781) | struct EnvVarRestore {
    method capture (line 2787) | fn capture(key: &'static str) -> Self {
  method drop (line 2796) | fn drop(&mut self) {
  function test_full_settings_parsing (line 2805) | fn test_full_settings_parsing() {
  function test_partial_settings_override (line 2884) | fn test_partial_settings_override() {
  function test_default_settings (line 2925) | fn test_default_settings() {
  function test_invalid_ui_theme_type_does_not_fail_settings_parse (line 2946) | fn test_invalid_ui_theme_type_does_not_fail_settings_parse() {
  function test_rss_filter_legacy_regex_key_is_accepted (line 2966) | fn test_rss_filter_legacy_regex_key_is_accepted() {
  function test_rss_filter_mode_regex_is_parsed (line 2988) | fn test_rss_filter_mode_regex_is_parsed() {
  function test_invalid_torrent_state_parsing (line 3007) | fn test_invalid_torrent_state_parsing() {
  function test_apply_env_overrides_handles_supported_env_vars (line 3036) | fn test_apply_env_overrides_handles_supported_env_vars() {
  function test_apply_env_overrides_trims_numeric_env_and_matches_case_insensitively (line 3064) | fn test_apply_env_overrides_trims_numeric_env_and_matches_case_insensiti...
  function test_apply_env_overrides_invalid_numeric_env_reports_key (line 3084) | fn test_apply_env_overrides_invalid_numeric_env_reports_key() {
  function test_apply_env_overrides_rejects_empty_path_env (line 3104) | fn test_apply_env_overrides_rejects_empty_path_env() {
  function test_apply_env_overrides_expands_home_path_env (line 3121) | fn test_apply_env_overrides_expands_home_path_env() {
  function test_apply_env_overrides_ignores_unsupported_settings_vars (line 3145) | fn test_apply_env_overrides_ignores_unsupported_settings_vars() {
  function test_resolve_additional_watch_paths_from_sources_orders_and_deduplicates (line 3167) | fn test_resolve_additional_watch_paths_from_sources_orders_and_deduplica...
  function test_shared_config_dir_env_relative_path_is_resolved_from_current_dir (line 3190) | fn test_shared_config_dir_env_relative_path_is_resolved_from_current_dir...
  function test_shared_config_dir_env_matches_case_insensitively (line 3217) | fn test_shared_config_dir_env_matches_case_insensitively() {
  function test_shared_data_path_round_trip_under_root (line 3241) | fn test_shared_data_path_round_trip_under_root() {
  function test_shared_data_path_round_trip_allows_mount_root_itself (line 3261) | fn test_shared_data_path_round_trip_allows_mount_root_itself() {
  function test_shared_data_path_rejects_path_outside_root (line 3280) | fn test_shared_data_path_rejects_path_outside_root() {
  function test_shared_data_path_accepts_verbatim_unc_under_root (line 3300) | fn test_shared_data_path_accepts_verbatim_unc_under_root() {
  function test_shared_data_path_accepts_case_variant_under_root (line 3313) | fn test_shared_data_path_accepts_case_variant_under_root() {
  function test_resolve_host_id_uses_system_hostname_fallback (line 3325) | fn test_resolve_host_id_uses_system_hostname_fallback() {
  function test_resolve_host_id_prefers_explicit_override (line 3338) | fn test_resolve_host_id_prefers_explicit_override() {
  function test_shared_torrent_source_round_trip (line 3351) | fn test_shared_torrent_source_round_trip() {
  function test_layered_config_round_trips_flat_settings (line 3364) | fn test_layered_config_round_trips_flat_settings() {
  function test_layered_config_round_trips_shared_settings (line 3393) | fn test_layered_config_round_trips_shared_settings() {
  function test_catalog_and_host_merge_into_runtime_settings (line 3460) | fn test_catalog_and_host_merge_into_runtime_settings() {
  function test_host_override_client_id_wins_over_shared_default (line 3520) | fn test_host_override_client_id_wins_over_shared_default() {
  function test_fingerprint_detection_catches_stale_write (line 3540) | fn test_fingerprint_detection_catches_stale_write() {
  function test_write_toml_atomically_writes_file (line 3553) | fn test_write_toml_atomically_writes_file() {
  function test_write_shared_cluster_revision_marker_writes_file_atomically (line 3567) | fn test_write_shared_cluster_revision_marker_writes_file_atomically() {
  function test_normal_backend_round_trips_settings (line 3585) | fn test_normal_backend_round_trips_settings() {
  function test_normal_backend_load_applies_supported_env_overrides (line 3628) | fn test_normal_backend_load_applies_supported_env_overrides() {
  function test_normal_backend_first_run_applies_env_overrides_without_persisting_them (line 3662) | fn test_normal_backend_first_run_applies_env_overrides_without_persistin...
  function test_shared_backend_routes_shared_and_host_fields (line 3684) | fn test_shared_backend_routes_shared_and_host_fields() {
  function test_shared_catalog_backup_policy_scales_by_catalog_size (line 3772) | fn test_shared_catalog_backup_policy_scales_by_catalog_size() {
  function test_shared_catalog_backup_deduplicates_current_roll_window (line 3811) | fn test_shared_catalog_backup_deduplicates_current_roll_window() {
  function test_shared_backend_backs_up_catalog_before_overwrite (line 3848) | fn test_shared_backend_backs_up_catalog_before_overwrite() {
  function test_shared_backend_bootstraps_missing_host_file (line 3910) | fn test_shared_backend_bootstraps_missing_host_file() {
  function test_shared_backend_validates_env_overridden_default_download_folder (line 3942) | fn test_shared_backend_validates_env_overridden_default_download_folder() {
  function test_shared_backend_reports_missing_mount_root_clearly (line 3972) | fn test_shared_backend_reports_missing_mount_root_clearly() {
  function test_bootstrap_shared_host_config_error_mentions_host_and_path (line 4008) | fn test_bootstrap_shared_host_config_error_mentions_host_and_path() {
  function test_normal_backend_cli_load_bootstraps_missing_settings_when_local_client_is_not_running (line 4045) | fn test_normal_backend_cli_load_bootstraps_missing_settings_when_local_c...
  function test_normal_backend_cli_load_stays_read_only_when_local_client_is_running (line 4069) | fn test_normal_backend_cli_load_stays_read_only_when_local_client_is_run...
  function test_shared_backend_cli_load_bootstraps_missing_host_file (line 4104) | fn test_shared_backend_cli_load_bootstraps_missing_host_file() {
  function test_shared_backend_defaults_download_folder_to_mount_dir_when_unset (line 4142) | fn test_shared_backend_defaults_download_folder_to_mount_dir_when_unset() {
  function test_encode_shared_cli_torrent_path_returns_portable_relative_path (line 4179) | fn test_encode_shared_cli_torrent_path_returns_portable_relative_path() {
  function test_resolve_shared_cli_torrent_path_expands_relative_path_against_mount_root (line 4207) | fn test_resolve_shared_cli_torrent_path_expands_relative_path_against_mo...
  function test_shared_backend_preserves_shared_client_id_when_host_override_exists (line 4234) | fn test_shared_backend_preserves_shared_client_id_when_host_override_exi...
  function test_shared_backend_host_only_save_does_not_bump_cluster_revision (line 4288) | fn test_shared_backend_host_only_save_does_not_bump_cluster_revision() {
  function test_shared_backend_noop_save_does_not_rewrite_revision_or_metadata (line 4332) | fn test_shared_backend_noop_save_does_not_rewrite_revision_or_metadata() {
  function test_metadata_syncs_file_priorities_from_settings (line 4384) | fn test_metadata_syncs_file_priorities_from_settings() {
  function test_normal_load_settings_ignores_invalid_torrent_metadata (line 4420) | fn test_normal_load_settings_ignores_invalid_torrent_metadata() {
  function test_shared_load_settings_ignores_invalid_torrent_metadata (line 4456) | fn test_shared_load_settings_ignores_invalid_torrent_metadata() {
  function test_normal_save_settings_overwrites_invalid_torrent_metadata (line 4490) | fn test_normal_save_settings_overwrites_invalid_torrent_metadata() {
  function test_upsert_torrent_metadata_overwrites_invalid_metadata (line 4541) | fn test_upsert_torrent_metadata_overwrites_invalid_metadata() {
  function watch_env_guard (line 4572) | fn watch_env_guard() -> &'static std::sync::Mutex<()> {
  function shared_backend_guard (line 4576) | fn shared_backend_guard() -> &'static std::sync::Mutex<()> {
  function set_temp_app_paths (line 4580) | fn set_temp_app_paths() -> tempfile::TempDir {
  function test_persisted_shared_config_normalizes_explicit_subdir_to_mount_root (line 4589) | fn test_persisted_shared_config_normalizes_explicit_subdir_to_mount_root...
  function test_shared_config_env_takes_precedence_over_persisted_launcher_config (line 4611) | fn test_shared_config_env_takes_precedence_over_persisted_launcher_confi...
  function test_clearing_persisted_shared_config_disables_shared_mode_without_env (line 4642) | fn test_clearing_persisted_shared_config_disables_shared_mode_without_en...
  function test_set_persisted_shared_config_rejects_relative_paths (line 4664) | fn test_set_persisted_shared_config_rejects_relative_paths() {
  function test_persisted_host_id_falls_back_after_env (line 4678) | fn test_persisted_host_id_falls_back_after_env() {
  function test_host_id_env_takes_precedence_over_persisted_host_id (line 4712) | fn test_host_id_env_takes_precedence_over_persisted_host_id() {
  function test_convert_standalone_to_shared_and_back_round_trips_settings (line 4742) | fn test_convert_standalone_to_shared_and_back_round_trips_settings() {
  function test_configured_watch_paths_use_shared_inbox_in_shared_mode (line 4859) | fn test_configured_watch_paths_use_shared_inbox_in_shared_mode() {
  function test_host_watch_paths_exclude_additional_shared_config_overlaps (line 4908) | fn test_host_watch_paths_exclude_additional_shared_config_overlaps() {
  function test_shared_host_id_prefers_canonical_env_var (line 4952) | fn test_shared_host_id_prefers_canonical_env_var() {
  function test_shared_host_id_env_matches_case_insensitively (line 4985) | fn test_shared_host_id_env_matches_case_insensitively() {
  function test_shared_config_dir_env_normalizes_to_superseedr_config_subdir (line 5007) | fn test_shared_config_dir_env_normalizes_to_superseedr_config_subdir() {
  function test_shared_config_dir_env_accepts_explicit_superseedr_config_subdir (line 5059) | fn test_shared_config_dir_env_accepts_explicit_superseedr_config_subdir() {
  function test_classify_shared_mode_settings_change_scopes_host_only_changes (line 5094) | fn test_classify_shared_mode_settings_change_scopes_host_only_changes() {
  function test_runtime_watch_paths_differ_by_shared_role (line 5125) | fn test_runtime_watch_paths_differ_by_shared_role() {
  function test_resolve_host_watch_path_falls_back_to_local_app_watch_directory (line 5174) | fn test_resolve_host_watch_path_falls_back_to_local_app_watch_directory() {
  function test_shared_runtime_watch_paths_include_local_app_watch_when_host_watch_unset (line 5185) | fn test_shared_runtime_watch_paths_include_local_app_watch_when_host_wat...

FILE: src/control_service.rs
  type TorrentFileList (line 20) | type TorrentFileList = Vec<(Vec<String>, u64)>;
  type TorrentMetadataByInfoHash (line 21) | type TorrentMetadataByInfoHash = HashMap<String, TorrentMetadataEntry>;
  function load_torrent_metadata_snapshot (line 23) | fn load_torrent_metadata_snapshot() -> Result<TorrentMetadataByInfoHash,...
  function find_torrent_settings_index_by_info_hash (line 48) | pub fn find_torrent_settings_index_by_info_hash(
  function describe_priority_target (line 57) | pub fn describe_priority_target(target: &ControlPriorityTarget) -> String {
  function online_control_success_message (line 64) | pub fn online_control_success_message(request: &ControlRequest) -> String {
  function control_event_details (line 109) | pub fn control_event_details(request: &ControlRequest, origin: ControlOr...
  function load_torrent_file_list_for_settings (line 128) | pub fn load_torrent_file_list_for_settings(
  function load_torrent_file_list_from_metadata (line 160) | fn load_torrent_file_list_from_metadata(
  function file_list_from_metadata_entry (line 177) | fn file_list_from_metadata_entry(entry: &TorrentMetadataEntry) -> Vec<(V...
  function file_priorities_to_map (line 194) | pub fn file_priorities_to_map(
  type TorrentFileListEntry (line 205) | pub struct TorrentFileListEntry {
  type OfflinePurgePlan (line 213) | pub struct OfflinePurgePlan {
  function torrent_settings_by_info_hash_hex (line 219) | fn torrent_settings_by_info_hash_hex<'a>(
  function torrent_name_for_manifest (line 233) | fn torrent_name_for_manifest(
  function torrent_metadata_entry_for_settings (line 248) | fn torrent_metadata_entry_for_settings(
  function manifest_entries_for_torrent_settings (line 259) | fn manifest_entries_for_torrent_settings(
  function normalize_match_path (line 320) | fn normalize_match_path(path: &Path) -> PathBuf {
  function resolve_torrent_roots (line 346) | fn resolve_torrent_roots(
  function full_file_paths_for_torrent (line 377) | fn full_file_paths_for_torrent(
  function list_torrent_files (line 409) | pub fn list_torrent_files(
  function resolve_target_info_hash (line 430) | pub fn resolve_target_info_hash(
  function resolve_purge_target_info_hash (line 479) | pub fn resolve_purge_target_info_hash(settings: &Settings, target: &str)...
  function build_offline_purge_plan (line 483) | pub fn build_offline_purge_plan(
  function apply_offline_purge (line 550) | pub fn apply_offline_purge(settings: &mut Settings, info_hash_hex: &str)...
  type ControlExecutionPlan (line 580) | pub enum ControlExecutionPlan {
  function plan_control_request (line 604) | pub fn plan_control_request(
  function resolve_priority_file_index (line 728) | pub fn resolve_priority_file_index(
  function apply_offline_control_request (line 764) | pub fn apply_offline_control_request(
  function shared_env_guard (line 836) | fn shared_env_guard() -> &'static std::sync::Mutex<()> {
  function write_sample_torrent_file (line 840) | fn write_sample_torrent_file() -> (tempfile::TempDir, String) {
  function offline_hybrid_magnet_lookup_prefers_btih_identity (line 875) | fn offline_hybrid_magnet_lookup_prefers_btih_identity() {
  function offline_delete_targets_hybrid_magnet_by_btih (line 897) | fn offline_delete_targets_hybrid_magnet_by_btih() {
  function priority_file_path_resolution_still_requires_torrent_metadata (line 925) | fn priority_file_path_resolution_still_requires_torrent_metadata() {
  function files_list_uses_torrent_source_when_metadata_is_missing (line 950) | fn files_list_uses_torrent_source_when_metadata_is_missing() {
  function purge_target_can_resolve_from_unique_file_path (line 971) | fn purge_target_can_resolve_from_unique_file_path() {
  function command_specific_target_resolution_uses_callers_command_name (line 999) | fn command_specific_target_resolution_uses_callers_command_name() {
  function offline_purge_deletes_files_and_removes_torrent (line 1024) | fn offline_purge_deletes_files_and_removes_torrent() {
  function control_plan_and_offline_apply_share_pause_and_purge_mutations (line 1061) | fn control_plan_and_offline_apply_share_pause_and_purge_mutations() {
  function files_and_path_resolution_treat_invalid_metadata_as_missing (line 1109) | fn files_and_path_resolution_treat_invalid_metadata_as_missing() {

FILE: src/dht/anomaly.rs
  type AnomalyConfig (line 5) | pub struct AnomalyConfig {
  method default (line 11) | fn default() -> Self {
  type ReferralQuality (line 20) | pub struct ReferralQuality {
  type AnomalyScore (line 26) | pub struct AnomalyScore {

FILE: src/dht/bep42.rs
  constant IPV4_MASK (line 8) | const IPV4_MASK: u32 = 0x030f3fff;
  constant CRC32C_POLY_REVERSED (line 9) | const CRC32C_POLY_REVERSED: u32 = 0x82f63b78;
  function classify_node (line 11) | pub fn classify_node(addr: SocketAddr, node_id: Option<NodeId>) -> Bep42...
  function classify_ipv4 (line 22) | pub fn classify_ipv4(ip: Ipv4Addr, node_id: NodeId) -> Bep42State {
  function random_secure_node_id_for_ipv4 (line 35) | pub fn random_secure_node_id_for_ipv4(ip: Ipv4Addr) -> Option<NodeId> {
  function secure_node_id_for_ipv4 (line 41) | pub fn secure_node_id_for_ipv4(ip: Ipv4Addr, mut entropy: [u8; NodeId::L...
  function is_secure_public_candidate (line 53) | pub fn is_secure_public_candidate(
  function same_public_identity_group (line 63) | pub fn same_public_identity_group(
  function classify_ipv6 (line 95) | fn classify_ipv6(ip: Ipv6Addr, _node_id: NodeId) -> Bep42State {
  function ipv4_is_exempt (line 104) | fn ipv4_is_exempt(ip: Ipv4Addr) -> bool {
  function first_21_bits (line 113) | fn first_21_bits(bytes: &[u8]) -> [u8; 3] {
  function id_prefix_ipv4 (line 117) | fn id_prefix_ipv4(ip: Ipv4Addr, r: u8) -> [u8; 3] {
  function crc32c (line 130) | fn crc32c(bytes: [u8; 4]) -> u32 {
  function validates_known_bep42_vector (line 147) | fn validates_known_bep42_vector() {
  function marks_loopback_ipv4_as_exempt (line 158) | fn marks_loopback_ipv4_as_exempt() {
  function generated_ipv4_node_id_is_bep42_compliant (line 166) | fn generated_ipv4_node_id_is_bep42_compliant() {
  function generated_ipv4_node_id_rejects_exempt_addresses (line 175) | fn generated_ipv4_node_id_rejects_exempt_addresses() {
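
The `bep42.rs` symbols above (note `IPV4_MASK = 0x030f3fff` and the reversed CRC32C polynomial) implement BEP 42, the DHT security extension that ties the first 21 bits of a node's ID to a CRC32C over its masked IP combined with a 3-bit random value. A standalone sketch of that derivation, following the published BEP 42 pseudocode rather than this crate's API (`crc32c` and `bep42_prefix` are illustrative names):

```rust
// Bitwise CRC-32C (Castagnoli), reflected form, polynomial 0x82F63B78.
fn crc32c(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            crc = if crc & 1 != 0 { (crc >> 1) ^ 0x82F6_3B78 } else { crc >> 1 };
        }
    }
    crc ^ 0xFFFF_FFFF
}

/// The 21 fixed prefix bits a BEP 42-compliant node ID must carry
/// for IPv4 address `ip` and random byte `r` (only the top 5 bits of
/// the third byte are constrained; the low 3 are free).
fn bep42_prefix(ip: [u8; 4], r: u8) -> [u8; 3] {
    const MASK: u32 = 0x030F_3FFF; // same value as IPV4_MASK above
    let masked = (u32::from_be_bytes(ip) & MASK) | ((u32::from(r) & 0x7) << 29);
    let crc = crc32c(&masked.to_be_bytes());
    [(crc >> 24) as u8, (crc >> 16) as u8, ((crc >> 8) & 0xF8) as u8]
}

fn main() {
    // Standard CRC-32C check value for "123456789".
    assert_eq!(crc32c(b"123456789"), 0xE306_9283);
    // Reference vector from the BEP 42 spec: IP 124.31.75.21, rand 1
    // yields example node ID 5fbfbf... (fixed bits: 5f bf b8).
    assert_eq!(bep42_prefix([124, 31, 75, 21], 1), [0x5F, 0xBF, 0xB8]);
    println!("ok");
}
```

Verifying an inbound node then reduces to comparing these 21 bits against the node's claimed ID, with private/loopback addresses exempted (as `ipv4_is_exempt` above suggests).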

FILE: src/dht/bootstrap.rs
  type BootstrapConfig (line 11) | pub struct BootstrapConfig {
  method default (line 20) | fn default() -> Self {
  type StartupLookupPlan (line 32) | pub struct StartupLookupPlan {
  type FamilyMaintenancePlan (line 38) | pub struct FamilyMaintenancePlan {
  type BootstrapCoordinator (line 45) | pub struct BootstrapCoordinator {
    method new (line 51) | pub fn new(config: BootstrapConfig) -> Self {
    method config (line 58) | pub fn config(&self) -> &BootstrapConfig {
    method set_bootstrap_nodes (line 62) | pub fn set_bootstrap_nodes(&mut self, bootstrap_nodes: Vec<SocketAddr>) {
    method startup_plan (line 66) | pub fn startup_plan(
    method maintenance_plan (line 80) | pub fn maintenance_plan(

FILE: src/dht/health.rs
  type DhtAnomalySummary (line 11) | pub struct DhtAnomalySummary {
  type DhtHealthSnapshot (line 19) | pub struct DhtHealthSnapshot {
    method from_parts (line 42) | pub fn from_parts(
  function summarize_anomalies (line 96) | fn summarize_anomalies(

FILE: src/dht/inbound.rs
  constant ERROR_PROTOCOL (line 13) | const ERROR_PROTOCOL: i64 = 203;
  constant RATE_LIMITER_IDLE_TTL (line 14) | const RATE_LIMITER_IDLE_TTL: Duration = Duration::from_secs(300);
  constant RATE_LIMITER_PRUNE_INTERVAL (line 15) | const RATE_LIMITER_PRUNE_INTERVAL: Duration = Duration::from_secs(30);
  constant MAX_RATE_LIMITER_ENTRIES (line 16) | const MAX_RATE_LIMITER_ENTRIES: usize = 16_384;
  constant DEFAULT_RESPONSE_BYTES_PER_SECOND (line 17) | const DEFAULT_RESPONSE_BYTES_PER_SECOND: usize = 32 * 1024;
  constant DEFAULT_RESPONSE_BURST_BYTES (line 18) | const DEFAULT_RESPONSE_BURST_BYTES: usize = 64 * 1024;
  type InboundConfig (line 21) | pub struct InboundConfig {
  method default (line 31) | fn default() -> Self {
  type InboundRequestContext (line 44) | pub struct InboundRequestContext {
  type InboundAction (line 49) | pub enum InboundAction {
  type RateLimiter (line 56) | struct RateLimiter {
  type InboundActor (line 65) | pub struct InboundActor {
    method new (line 72) | pub fn new(config: InboundConfig) -> Self {
    method family (line 80) | pub fn family(&self) -> AddressFamily {
    method config (line 84) | pub fn config(&self) -> &InboundConfig {
    method handle_query (line 89) | pub fn handle_query(
    method allow_query (line 263) | fn allow_query(&mut self, source_ip: IpAddr, now: Instant) -> bool {
    method allow_response_bytes (line 300) | fn allow_response_bytes(&mut self, source_ip: IpAddr, bytes: usize, no...
    method respond_to (line 336) | fn respond_to(
    method error_to (line 352) | fn error_to(
    method prune_stale_rate_limiters (line 367) | fn prune_stale_rate_limiters(&mut self, now: Instant) {
    method closest_nodes_for (line 384) | fn closest_nodes_for(
    method append_requested_cross_family_nodes (line 404) | fn append_requested_cross_family_nodes(
  function remember_inbound_node (line 428) | fn remember_inbound_node(
  function source_ip (line 451) | fn source_ip(index: usize) -> IpAddr {
  function node_id (line 460) | fn node_id(byte: u8) -> NodeId {
  function info_hash (line 464) | fn info_hash(byte: u8) -> InfoHash {
  function rate_limiter_prunes_idle_sources (line 469) | fn rate_limiter_prunes_idle_sources() {
  function rate_limiter_rejects_new_sources_at_hard_cap (line 485) | fn rate_limiter_rejects_new_sources_at_hard_cap() {
  function response_byte_limiter_rejects_excess_payload_bytes (line 500) | fn response_byte_limiter_rejects_excess_payload_bytes() {
  function get_peers_response_includes_values_and_closest_nodes (line 515) | fn get_peers_response_includes_values_and_closest_nodes() {
  function get_peers_want_includes_cross_family_nodes (line 574) | fn get_peers_want_includes_cross_family_nodes() {
  function get_peers_withholds_token_when_peer_store_is_full_for_hash (line 632) | fn get_peers_withholds_token_when_peer_store_is_full_for_hash() {
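
Judging by `DEFAULT_RESPONSE_BYTES_PER_SECOND`, `DEFAULT_RESPONSE_BURST_BYTES`, and the `allow_response_bytes` method above, inbound responses appear to be gated by a per-source token bucket. A minimal sketch of that policy, with elapsed time passed in explicitly so the behavior is deterministic (`ByteBucket` is a hypothetical name, not this crate's `RateLimiter`):

```rust
/// Per-source token bucket: refills `rate` bytes per second, stores at
/// most `burst` bytes. A sketch of the policy implied by the constants
/// above, not the crate's actual limiter.
struct ByteBucket {
    tokens: f64,
    rate: f64,  // bytes refilled per second
    burst: f64, // maximum stored bytes
}

impl ByteBucket {
    fn new(rate: f64, burst: f64) -> Self {
        // Start full so a fresh source can send one burst immediately.
        Self { tokens: burst, rate, burst }
    }

    /// Refill for `elapsed_secs`, then try to spend `bytes`.
    fn allow(&mut self, bytes: usize, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + self.rate * elapsed_secs).min(self.burst);
        if self.tokens >= bytes as f64 {
            self.tokens -= bytes as f64;
            true
        } else {
            false // caller would drop or withhold the response
        }
    }
}

fn main() {
    // Same shape as the defaults above: 32 KiB/s rate, 64 KiB burst.
    let mut bucket = ByteBucket::new(32.0 * 1024.0, 64.0 * 1024.0);
    assert!(bucket.allow(64 * 1024, 0.0));  // full burst available up front
    assert!(!bucket.allow(1024, 0.0));      // drained, no time has passed
    assert!(bucket.allow(16 * 1024, 0.5));  // 0.5 s refills 16 KiB at 32 KiB/s
    println!("ok");
}
```

The idle-TTL and hard-cap constants above suggest buckets are additionally pruned after 300 s of inactivity and capped at 16,384 tracked sources to bound memory.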

FILE: src/dht/krpc.rs
  constant DEFAULT_KRPC_VERSION (line 13) | pub const DEFAULT_KRPC_VERSION: &[u8; 4] = b"RS\0\x05";
  constant MAX_KRPC_MESSAGE_BYTES (line 14) | const MAX_KRPC_MESSAGE_BYTES: usize = 8 * 1024;
  constant MAX_BENCODE_DEPTH (line 15) | const MAX_BENCODE_DEPTH: usize = 16;
  constant MAX_BENCODE_TOKENS (line 16) | const MAX_BENCODE_TOKENS: usize = 512;
  constant WANT_IPV4_NODES (line 17) | const WANT_IPV4_NODES: &[u8; 2] = b"n4";
  constant WANT_IPV6_NODES (line 18) | const WANT_IPV6_NODES: &[u8; 2] = b"n6";
  type KrpcQueryKind (line 21) | pub enum KrpcQueryKind {
    method as_str (line 29) | pub const fn as_str(self) -> &'static str {
  type KrpcQueryEnvelope (line 40) | pub struct KrpcQueryEnvelope<A> {
  method serialize (line 53) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcDecodedQueryEnvelope (line 80) | struct KrpcDecodedQueryEnvelope<A> {
  function new (line 94) | pub fn new(transaction_id: TransactionId, query: KrpcQueryKind, args: A)...
  function with_version (line 98) | pub fn with_version(
  type KrpcPingArgs (line 116) | pub struct KrpcPingArgs {
    method new (line 121) | pub fn new(id: NodeId) -> Self {
  method serialize (line 129) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcFindNodeArgs (line 140) | pub struct KrpcFindNodeArgs {
    method new (line 148) | pub fn new(id: NodeId, target: NodeId) -> Self {
    method with_want (line 156) | pub fn with_want(mut self, families: &[AddressFamily]) -> Self {
    method wants_family (line 161) | pub fn wants_family(&self, family: AddressFamily) -> bool {
  method serialize (line 167) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcGetPeersArgs (line 182) | pub struct KrpcGetPeersArgs {
    method new (line 190) | pub fn new(id: NodeId, info_hash: InfoHash) -> Self {
    method with_want (line 198) | pub fn with_want(mut self, families: &[AddressFamily]) -> Self {
    method wants_family (line 203) | pub fn wants_family(&self, family: AddressFamily) -> bool {
  method serialize (line 209) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcAnnouncePeerArgs (line 224) | pub struct KrpcAnnouncePeerArgs {
    method new (line 234) | pub fn new(
  method serialize (line 252) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcIncomingQuery (line 270) | pub enum KrpcIncomingQuery {
    method kind (line 294) | pub fn kind(&self) -> KrpcQueryKind {
    method transaction_id (line 303) | pub fn transaction_id(&self) -> &[u8] {
    method version (line 312) | pub fn version(&self) -> Option<&[u8]> {
    method requester_id (line 321) | pub fn requester_id(&self) -> Option<NodeId> {
  type KrpcInboundMessage (line 332) | pub enum KrpcInboundMessage {
  type KrpcResponseEnvelope (line 339) | pub struct KrpcResponseEnvelope {
    method new (line 382) | pub fn new(transaction_id: &[u8], body: KrpcResponseBody) -> Self {
    method with_observed_addr (line 392) | pub fn with_observed_addr(mut self, addr: SocketAddr) -> Self {
    method observed_addr (line 397) | pub fn observed_addr(&self) -> Option<SocketAddr> {
    method transaction_id (line 403) | pub fn transaction_id(&self) -> Result<TransactionId, FixedLengthError> {
  method serialize (line 351) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcErrorEnvelope (line 409) | pub struct KrpcErrorEnvelope {
    method new (line 434) | pub fn new(transaction_id: &[u8], code: i64, message: impl Into<String...
    method transaction_id (line 443) | pub fn transaction_id(&self) -> Result<TransactionId, FixedLengthError> {
  method serialize (line 418) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  type KrpcErrorBody (line 449) | pub struct KrpcErrorBody(pub i64, pub String);
  type KrpcResponseBody (line 452) | pub struct KrpcResponseBody {
    method pong (line 508) | pub fn pong(node_id: NodeId) -> Self {
    method with_nodes (line 515) | pub fn with_nodes(node_id: NodeId, nodes: &[CompactNode], family: Addr...
    method with_peers (line 524) | pub fn with_peers(node_id: NodeId, peers: &[CompactPeer], token: &[u8]...
    method with_peers_and_nodes (line 534) | pub fn with_peers_and_nodes(
    method with_closest_nodes (line 549) | pub fn with_closest_nodes(
    method node_id (line 560) | pub fn node_id(&self) -> Option<NodeId> {
    method peers (line 564) | pub fn peers(&self, family: AddressFamily) -> Vec<CompactPeer> {
    method closest_nodes (line 571) | pub fn closest_nodes(&self, family: AddressFamily) -> Vec<CompactNode> {
    method set_closest_nodes (line 578) | pub fn set_closest_nodes(&mut self, family: AddressFamily, nodes: &[Co...
  method serialize (line 466) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  function encode_want_entries (line 586) | fn encode_want_entries(families: &[AddressFamily]) -> Vec<ByteBuf> {
  function wants_family (line 597) | fn wants_family(entries: &[ByteBuf], family: AddressFamily) -> bool {
  type KrpcEnvelopeProbe (line 606) | struct KrpcEnvelopeProbe {
  type KrpcDecodeError (line 614) | pub enum KrpcDecodeError {
  function decode_message (line 633) | pub fn decode_message(bytes: &[u8]) -> Result<KrpcInboundMessage, KrpcDe...
  function validate_bencode_limits (line 646) | fn validate_bencode_limits(bytes: &[u8]) -> Result<(), KrpcDecodeError> {
  function validate_bencode_value (line 659) | fn validate_bencode_value(
  function validate_bencode_integer (line 694) | fn validate_bencode_integer(bytes: &[u8], mut pos: usize) -> Result<usiz...
  function validate_bencode_bytes (line 711) | fn validate_bencode_bytes(bytes: &[u8], mut pos: usize) -> Result<usize,...
  function decode_query (line 730) | fn decode_query(
  function decode_compact_peers (line 774) | pub fn decode_compact_peers(bytes: &[u8], family: AddressFamily) -> Vec<...
  function encode_compact_peer (line 802) | pub fn encode_compact_peer(peer: CompactPeer) -> ByteBuf {
  function decode_compact_nodes (line 819) | pub fn decode_compact_nodes(bytes: &[u8], family: AddressFamily) -> Vec<...
  function encode_compact_nodes (line 853) | pub fn encode_compact_nodes(nodes: &[CompactNode], family: AddressFamily...
  function decode_compact_socket_addr (line 882) | pub fn decode_compact_socket_addr(bytes: &[u8]) -> Option<SocketAddr> {
  function encode_compact_socket_addr (line 899) | pub fn encode_compact_socket_addr(addr: SocketAddr) -> ByteBuf {
  function outbound_queries_do_not_advertise_read_only_by_default (line 921) | fn outbound_queries_do_not_advertise_read_only_by_default() {
  function get_peers_want_entries_round_trip (line 939) | fn get_peers_want_entries_round_trip() {
  function find_node_want_entries_round_trip (line 954) | fn find_node_want_entries_round_trip() {
  function response_observed_addr_round_trips_compact_ip (line 969) | fn response_observed_addr_round_trips_compact_ip() {
  function decode_message_rejects_oversized_payload_before_deserialize (line 984) | fn decode_message_rejects_oversized_payload_before_deserialize() {
  function decode_message_rejects_excessive_bencode_depth (line 994) | fn decode_message_rejects_excessive_bencode_depth() {
  function decode_message_rejects_excessive_bencode_tokens (line 1007) | fn decode_message_rejects_excessive_bencode_tokens() {
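
The `encode_compact_socket_addr`/`decode_compact_socket_addr` and compact-peer helpers above correspond to BEP 5's compact address format: for IPv4, four address bytes followed by a two-byte big-endian port. A self-contained sketch of the round trip (illustrative names, not the repo's exact functions):

```rust
use std::net::{Ipv4Addr, SocketAddrV4};

/// BEP 5 compact IPv4 peer: 4 address bytes + big-endian port.
fn encode_compact_v4(addr: SocketAddrV4) -> [u8; 6] {
    let mut out = [0u8; 6];
    out[..4].copy_from_slice(&addr.ip().octets());
    out[4..].copy_from_slice(&addr.port().to_be_bytes());
    out
}

/// Returns None unless the slice is exactly 6 bytes.
fn decode_compact_v4(bytes: &[u8]) -> Option<SocketAddrV4> {
    let chunk: [u8; 6] = bytes.try_into().ok()?;
    let ip = Ipv4Addr::new(chunk[0], chunk[1], chunk[2], chunk[3]);
    let port = u16::from_be_bytes([chunk[4], chunk[5]]);
    Some(SocketAddrV4::new(ip, port))
}

fn main() {
    let addr = SocketAddrV4::new(Ipv4Addr::new(192, 0, 2, 1), 6881);
    let encoded = encode_compact_v4(addr);
    assert_eq!(encoded, [192, 0, 2, 1, 0x1A, 0xE1]); // 6881 == 0x1AE1
    assert_eq!(decode_compact_v4(&encoded), Some(addr));
    println!("ok");
}
```

Compact nodes extend the same layout with a 20-byte node ID prefix (26 bytes per IPv4 node); IPv6 uses 16 address bytes instead of 4, which is presumably what the `family: AddressFamily` parameter on the decoders above selects.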

FILE: src/dht/lookup.rs
  type LookupKind (line 17) | pub enum LookupKind {
  type LookupTarget (line 23) | pub enum LookupTarget {
    method as_node_id (line 29) | pub fn as_node_id(self) -> NodeId {
  type LookupConfig (line 38) | pub struct LookupConfig {
  method default (line 48) | fn default() -> Self {
  type LookupRequest (line 61) | pub struct LookupRequest {
  type LookupCandidate (line 68) | pub struct LookupCandidate {
  type LookupQuery (line 81) | pub struct LookupQuery {
  type LookupUpdate (line 89) | pub struct LookupUpdate {
    method new (line 97) | fn new(completed_query: Option<LookupQuery>, finished: bool) -> Self {
  type LookupResponder (line 108) | struct LookupResponder {
  type LookupState (line 116) | pub struct LookupState {
    method request (line 200) | pub fn request(&self) -> LookupRequest {
    method family (line 204) | pub fn family(&self) -> AddressFamily {
    method target_id (line 208) | pub fn target_id(&self) -> NodeId {
    method started_at (line 212) | pub fn started_at(&self) -> Instant {
    method inflight_transaction_ids (line 216) | pub fn inflight_transaction_ids(&self) -> Vec<TransactionId> {
    method quality_snapshot (line 220) | pub fn quality_snapshot(&self) -> LookupQualitySnapshot {
    method park (line 230) | pub fn park(&mut self) {
    method resume (line 242) | pub fn resume(&mut self, lookup_id: LookupId, now: Instant) {
    method next_candidates (line 247) | pub fn next_candidates(&self) -> Vec<LookupCandidate> {
    method mark_inflight (line 277) | pub fn mark_inflight(
    method mark_soft_timeout (line 303) | pub fn mark_soft_timeout(&mut self, transaction_id: TransactionId) -> ...
    method handle_response (line 312) | pub fn handle_response(
    method handle_error (line 347) | pub fn handle_error(&mut self, transaction_id: TransactionId) -> Looku...
    method handle_timeout (line 352) | pub fn handle_timeout(&mut self, transaction_id: TransactionId) -> Loo...
    method discard_candidate (line 357) | pub fn discard_candidate(&mut self, addr: SocketAddr) -> bool {
    method is_finished (line 370) | pub fn is_finished(&self) -> bool {
    method cacheable_responders (line 407) | pub fn cacheable_responders(&self, limit: usize) -> Vec<NodeRecord> {
    method seed_from_routing (line 428) | fn seed_from_routing(&mut self, routing: &RoutingSnapshot, prefer_boot...
    method seed_cached_responders (line 449) | fn seed_cached_responders(&mut self, cached_responders: &[NodeRecord]) {
    method seed_bootstrap (line 459) | fn seed_bootstrap(&mut self, bootstrap_nodes: &[SocketAddr], prefer_bo...
    method absorb_discovered_nodes (line 482) | fn absorb_discovered_nodes(&mut self, nodes: Vec<CompactNode>) -> Vec<...
    method insert_candidate (line 524) | fn insert_candidate(&mut self, candidate: LookupCandidate) -> bool {
    method record_responder (line 537) | fn record_responder(&mut self, candidate: &LookupCandidate) {
    method eligible_responders (line 569) | fn eligible_responders(&self) -> Vec<LookupResponder> {
    method prefix_count (line 577) | fn prefix_count(&self, addr: SocketAddr) -> usize {
    method resort_frontier (line 590) | fn resort_frontier(&mut self) {
    method next_order (line 596) | fn next_order(&mut self) -> u64 {
    method conflicts_with_existing_public_identity (line 602) | fn conflicts_with_existing_public_identity(&self, candidate: &LookupCa...
  type LookupQualitySnapshot (line 130) | pub struct LookupQualitySnapshot {
  type LookupManager (line 139) | pub struct LookupManager {
    method new (line 144) | pub fn new(config: LookupConfig) -> Self {
    method config (line 148) | pub fn config(&self) -> &LookupConfig {
    method start (line 152) | pub fn start(
  function candidate_from_record (line 634) | fn candidate_from_record(
  function compare_candidates (line 652) | fn compare_candidates(
  function compare_responders (line 670) | fn compare_responders(
  function compare_responder_candidate (line 682) | fn compare_responder_candidate(
  function compare_seed_records (line 694) | fn compare_seed_records(left: &NodeRecord, right: &NodeRecord, target: &...
  function compare_candidate_distance (line 702) | fn compare_candidate_distance(
  function termination_eligible (line 715) | fn termination_eligible(candidate: &LookupCandidate) -> bool {
  function termination_eligible_responder (line 721) | fn termination_eligible_responder(candidate: &LookupResponder) -> bool {
  function trust_rank (line 727) | fn trust_rank(trust: NodeTrust) -> u8 {
  function bep42_rank (line 735) | fn bep42_rank(state: Bep42State) -> u8 {
  function referral_quality_rank (line 744) | fn referral_quality_rank(candidate: &LookupCandidate) -> (u16, u16) {
  function response_recency_rank (line 751) | fn response_recency_rank(last_response_at: Option<Instant>) -> (u8, Opti...
  function responder_conflicts (line 758) | fn responder_conflicts(existing: &LookupResponder, candidate: &LookupCan...
  type PrefixKey (line 770) | enum PrefixKey {
  function prefix_key (line 775) | fn prefix_key(addr: SocketAddr) -> PrefixKey {
  type ScriptedReply (line 802) | enum ScriptedReply {
  type LookupReplySpec (line 815) | enum LookupReplySpec {
  function lookup_reply_strategy (line 829) | fn lookup_reply_strategy() -> impl Strategy<Value = LookupReplySpec> {
  function assert_lookup_state_invariants (line 849) | fn assert_lookup_state_invariants(state: &LookupState) -> Result<(), Tes...
  function scripted_replay_walk_reaches_peers (line 885) | fn scripted_replay_walk_reaches_peers() {
  function repeated_same_node_referrals_only_admit_one_candidate (line 1032) | fn repeated_same_node_referrals_only_admit_one_candidate() {
  function park_requeues_inflight_candidates_for_resume (line 1072) | fn park_requeues_inflight_candidates_for_resume() {
  function visit_cap_finishes_lookup_even_when_frontier_remains (line 1114) | fn visit_cap_finishes_lookup_even_when_frontier_remains() {
  function empty_routing_snapshot (line 1304) | fn empty_routing_snapshot(family: AddressFamily) -> RoutingSnapshot {
  function public_compact_node (line 1314) | fn public_compact_node(seed: u8, salt: u8) -> CompactNode {
  function public_compact_peer (line 1325) | fn public_compact_peer(seed: u8, salt: u8) -> CompactPeer {
  function compact_node (line 1335) | fn compact_node(seed: u8, a: u8, b: u8, c: u8, d: u8, port: u16) -> Comp...
  function compact_peer (line 1342) | fn compact_peer(a: u8, b: u8, c: u8, d: u8, port: u16) -> CompactPeer {
  function socket (line 1348) | fn socket(a: u8, b: u8, c: u8, d: u8, port: u16) -> SocketAddr {

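The lookup section above (`next_candidates`, `resort_frontier`, `compare_candidate_distance`) follows the iterative Kademlia-style pattern of keeping a frontier of candidates sorted by XOR distance to the target. A minimal standalone sketch of that ordering, with hypothetical names (the repository's actual comparators also weigh trust, BEP 42 state, and referral quality):

```rust
// Illustrative sketch, NOT the repository's code: ordering a lookup
// frontier by XOR distance to a target ID. The 20-byte `NodeId` width
// mirrors the BitTorrent DHT; `sort_frontier` is a hypothetical helper.
type NodeId = [u8; 20];

/// XOR distance between two node IDs; the byte arrays compare
/// lexicographically, so a smaller result means "closer".
fn xor_distance(left: &NodeId, right: &NodeId) -> [u8; 20] {
    let mut out = [0u8; 20];
    for i in 0..20 {
        out[i] = left[i] ^ right[i];
    }
    out
}

/// Sort candidates so the node closest to `target` comes first,
/// i.e. the next query goes to the best-known candidate.
fn sort_frontier(candidates: &mut Vec<NodeId>, target: &NodeId) {
    candidates.sort_by(|a, b| xor_distance(a, target).cmp(&xor_distance(b, target)));
}

fn main() {
    let target = [0u8; 20];
    let near = [1u8; 20]; // distance bytes all 0x01
    let far = [0xff; 20]; // distance bytes all 0xff
    let mut frontier = vec![far, near];
    sort_frontier(&mut frontier, &target);
    assert_eq!(frontier[0], near);
    println!("closest candidate first: {:02x?}", &frontier[0][..4]);
}
```

In the real module this ordering is only the distance tiebreak; `compare_candidates` layers trust and referral ranks on top of it.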
FILE: src/dht/mod.rs
  constant MAX_CACHED_RESPONDER_TARGETS (line 63) | const MAX_CACHED_RESPONDER_TARGETS: usize = 256;
  constant MAX_CACHED_RESPONDERS_PER_TARGET (line 64) | const MAX_CACHED_RESPONDERS_PER_TARGET: usize = 16;
  type RuntimeConfig (line 67) | pub struct RuntimeConfig {
  type ActiveLookup (line 78) | struct ActiveLookup {
  type LookupRunMode (line 86) | enum LookupRunMode {
    method is_active (line 92) | fn is_active(self) -> bool {
    method is_draining (line 96) | fn is_draining(self) -> bool {
  type LookupTaskOutcome (line 103) | enum LookupTaskOutcome {
  type LookupTaskResult (line 110) | struct LookupTaskResult {
  type AnnouncePeerJob (line 117) | pub(crate) struct AnnouncePeerJob {
    method run (line 130) | pub(crate) async fn run(self) -> bool {
  type AnnouncePeerTarget (line 124) | struct AnnouncePeerTarget {
  type Runtime (line 158) | pub struct Runtime {
    method bind (line 187) | pub async fn bind(config: RuntimeConfig) -> io::Result<Self> {
    method config (line 300) | pub fn config(&self) -> &RuntimeConfig {
    method local_node_id (line 304) | pub fn local_node_id(&self) -> NodeId {
    method family_bound (line 308) | pub fn family_bound(&self, family: AddressFamily) -> bool {
    method ipv4_local_addr (line 312) | pub fn ipv4_local_addr(&self) -> Option<SocketAddr> {
    method ipv6_local_addr (line 318) | pub fn ipv6_local_addr(&self) -> Option<SocketAddr> {
    method bound_family_count (line 324) | pub fn bound_family_count(&self) -> usize {
    method active_lookup_count (line 328) | pub fn active_lookup_count(&self) -> usize {
    method active_user_lookup_count (line 335) | pub fn active_user_lookup_count(&self) -> usize {
    method is_lookup_active (line 345) | pub fn is_lookup_active(&self, lookup_id: LookupId) -> bool {
    method draining_lookup_count (line 349) | pub fn draining_lookup_count(&self) -> usize {
    method inflight_query_counts (line 356) | pub fn inflight_query_counts(&self) -> (usize, usize) {
    method lookup_quality_snapshot (line 370) | pub fn lookup_quality_snapshot(&self, lookup_id: LookupId) -> Option<L...
    method active_route_count (line 376) | pub fn active_route_count(&self, family: AddressFamily) -> usize {
    method health_snapshot (line 384) | pub fn health_snapshot(&self) -> DhtHealthSnapshot {
    method record_responsive_bootstrap (line 409) | fn record_responsive_bootstrap(&mut self, addr: SocketAddr) {
    method responsive_bootstrap_counts (line 415) | fn responsive_bootstrap_counts(&self) -> (usize, usize, usize) {
    method save_state (line 435) | pub async fn save_state(&self) -> io::Result<()> {
    method shutdown_for_rebind (line 452) | pub async fn shutdown_for_rebind(&mut self, wait: Duration) {
    method bootstrap_startup (line 484) | pub async fn bootstrap_startup(&mut self) -> io::Result<()> {
    method run_maintenance (line 502) | pub async fn run_maintenance(&mut self) -> io::Result<()> {
    method start_lookup (line 535) | pub async fn start_lookup(
    method refresh_bootstrap_nodes_if_empty (line 595) | async fn refresh_bootstrap_nodes_if_empty(&mut self) {
    method start_lookup_with_state (line 609) | pub async fn start_lookup_with_state(
    method start_get_peers (line 644) | pub async fn start_get_peers(
    method start_get_peers_with_state (line 657) | pub async fn start_get_peers_with_state(
    method start_find_node (line 675) | pub async fn start_find_node(
    method announce_peer (line 684) | pub async fn announce_peer(
    method announce_peer_job (line 761) | pub(crate) fn announce_peer_job(
    method step (line 806) | pub async fn step(&mut self) -> io::Result<bool> {
    method handle_transport_event (line 881) | async fn handle_transport_event(
    method handle_lookup_result (line 945) | async fn handle_lookup_result(&mut self, result: LookupTaskResult) -> ...
    method apply_confirmed_public_identity (line 1083) | fn apply_confirmed_public_identity(&mut self, confirmed: Option<Socket...
    method pump_lookup (line 1117) | async fn pump_lookup(&mut self, lookup_id: LookupId) -> io::Result<()> {
    method transport_for (line 1252) | fn transport_for(&self, family: AddressFamily) -> Option<&TransportAct...
    method routing_for_family_mut (line 1259) | fn routing_for_family_mut(&mut self, family: AddressFamily) -> &mut ro...
    method routing_for_family (line 1266) | fn routing_for_family(&self, family: AddressFamily) -> &routing::Routi...
    method wanted_node_families (line 1273) | fn wanted_node_families(&self) -> Vec<AddressFamily> {
    method cleanup_closed_lookups (line 1280) | fn cleanup_closed_lookups(&mut self) {
    method cancel_lookup (line 1294) | pub fn cancel_lookup(&mut self, lookup_id: LookupId) -> bool {
    method pause_lookup_for_drain (line 1298) | pub fn pause_lookup_for_drain(&mut self, lookup_id: LookupId) -> Optio...
    method drained_lookups_ready (line 1304) | pub fn drained_lookups_ready(&self, lookup_ids: &[LookupId]) -> bool {
    method finish_drained_lookup (line 1312) | pub fn finish_drained_lookup(&mut self, lookup_id: LookupId) -> Option...
    method cancel_lookup_and_take_state (line 1333) | pub fn cancel_lookup_and_take_state(&mut self, lookup_id: LookupId) ->...
    method cancel_maintenance_lookups (line 1349) | pub fn cancel_maintenance_lookups(&mut self) {
    method cache_lookup_responders (line 1360) | fn cache_lookup_responders(&mut self, family: AddressFamily, state: &L...
    method start_internal_find_node (line 1376) | async fn start_internal_find_node(
    method has_active_find_node (line 1392) | fn has_active_find_node(&self, family: AddressFamily, target: NodeId) ...
    method ping_nodes (line 1400) | async fn ping_nodes(
    method enqueue_probe_targets (line 1444) | fn enqueue_probe_targets(&mut self, family: AddressFamily, targets: &[...
    method take_pending_probe_targets (line 1450) | fn take_pending_probe_targets(
  function node_id_hex (line 1471) | fn node_id_hex(node_id: NodeId) -> String {
  function opposite_family (line 1475) | fn opposite_family(family: AddressFamily) -> AddressFamily {
  function resolve_bootstrap_sources (line 1482) | async fn resolve_bootstrap_sources(bootstrap_sources: &[String]) -> Vec<...
  function announce_peer_to_target (line 1500) | async fn announce_peer_to_target(
  type ReplayBehavior (line 1545) | enum ReplayBehavior {
  type QueryLogEntry (line 1551) | struct QueryLogEntry {
  function spawn_replay_responder (line 1557) | async fn spawn_replay_responder(
  function spawn_delayed_get_peers_responder (line 1638) | async fn spawn_delayed_get_peers_responder(
  function wait_for_query (line 1705) | async fn wait_for_query(
  function runtime_bind_requires_ipv4_transport (line 1728) | async fn runtime_bind_requires_ipv4_transport() {
  function runtime_bind_continues_without_ipv6_when_ipv6_port_is_unavailable (line 1745) | async fn runtime_bind_continues_without_ipv6_when_ipv6_port_is_unavailab...
  function runtime_does_not_register_lookup_without_seed_candidates (line 1773) | async fn runtime_does_not_register_lookup_without_seed_candidates() {
  function runtime_tracks_unique_responsive_bootstrap_nodes_by_family (line 1798) | async fn runtime_tracks_unique_responsive_bootstrap_nodes_by_family() {
  function runtime_rotates_local_node_id_after_confirmed_public_ipv4 (line 1826) | async fn runtime_rotates_local_node_id_after_confirmed_public_ipv4() {
  function runtime_keeps_configured_local_node_id_when_public_identity_disabled (line 1869) | async fn runtime_keeps_configured_local_node_id_when_public_identity_dis...
  function ipv6_test_bind_unavailable (line 1889) | fn ipv6_test_bind_unavailable(error: &io::Error) -> bool {
  function runtime_re_resolves_bootstrap_sources_when_initial_resolution_was_empty (line 1899) | async fn runtime_re_resolves_bootstrap_sources_when_initial_resolution_w...
  function runtime_scripted_network_replay_reaches_peers (line 1963) | async fn runtime_scripted_network_replay_reaches_peers() {
  function runtime_bind_restores_persisted_routes_only_for_matching_node_id (line 2153) | async fn runtime_bind_restores_persisted_routes_only_for_matching_node_i...
  function draining_lookup_accepts_late_peers_without_pumping_more_queries (line 2222) | async fn draining_lookup_accepts_late_peers_without_pumping_more_queries...

FILE: src/dht/peer_store.rs
  type PeerStoreConfig (line 9) | pub struct PeerStoreConfig {
  method default (line 17) | fn default() -> Self {
  type StoredPeer (line 28) | pub struct StoredPeer {
  type PeerStoreKey (line 35) | struct PeerStoreKey {
  type PeerStore (line 41) | pub struct PeerStore {
    method new (line 47) | pub fn new(config: PeerStoreConfig) -> Self {
    method config (line 54) | pub fn config(&self) -> &PeerStoreConfig {
    method total_peer_count (line 58) | pub fn total_peer_count(&self) -> usize {
    method insert (line 62) | pub fn insert(&mut self, info_hash: InfoHash, peer: CompactPeer, now: ...
    method accepts_announces_for (line 91) | pub fn accepts_announces_for(
    method peers_for (line 109) | pub fn peers_for(
    method prune_expired (line 122) | pub fn prune_expired(&mut self, now: SystemTime) {
    method enforce_global_limit (line 132) | fn enforce_global_limit(&mut self) {
    method evict_oldest_bucket (line 148) | fn evict_oldest_bucket(&mut self) {
    method oldest_peer_entry (line 161) | fn oldest_peer_entry(&self) -> Option<(PeerStoreKey, usize)> {
  function info_hash (line 180) | fn info_hash(byte: u8) -> InfoHash {
  function peer (line 184) | fn peer(octet: u8) -> CompactPeer {
  function accepts_announces_for_rejects_full_hash_bucket (line 191) | fn accepts_announces_for_rejects_full_hash_bucket() {
  function accepts_announces_for_rejects_global_pressure (line 205) | fn accepts_announces_for_rejects_global_pressure() {

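The `peer_store.rs` symbols (`insert`, `accepts_announces_for`, `prune_expired`, plus the tests rejecting full hash buckets) describe a bounded, TTL-pruned peer store. A simplified sketch of that shape, with hypothetical types and limits (the real store also tracks a global limit and evicts the oldest bucket under pressure):

```rust
// Illustrative sketch, NOT the repository's code: a per-info-hash peer
// store that rejects announces once a hash's bucket is full and prunes
// entries past a TTL. Field names and limits here are assumptions.
use std::collections::HashMap;
use std::time::{Duration, SystemTime};

struct PeerStore {
    max_peers_per_hash: usize,
    ttl: Duration,
    // info_hash -> (peer address string, time last announced)
    peers: HashMap<[u8; 20], Vec<(String, SystemTime)>>,
}

impl PeerStore {
    /// Announces for a hash are accepted only while its bucket has room.
    fn accepts_announces_for(&self, info_hash: &[u8; 20]) -> bool {
        self.peers
            .get(info_hash)
            .map_or(true, |bucket| bucket.len() < self.max_peers_per_hash)
    }

    fn insert(&mut self, info_hash: [u8; 20], peer: String, now: SystemTime) -> bool {
        if !self.accepts_announces_for(&info_hash) {
            return false;
        }
        self.peers.entry(info_hash).or_default().push((peer, now));
        true
    }

    /// Drop peers older than the TTL, then drop emptied buckets.
    fn prune_expired(&mut self, now: SystemTime) {
        let ttl = self.ttl;
        for bucket in self.peers.values_mut() {
            bucket.retain(|(_, seen)| now.duration_since(*seen).map_or(true, |age| age <= ttl));
        }
        self.peers.retain(|_, bucket| !bucket.is_empty());
    }
}

fn main() {
    let mut store = PeerStore {
        max_peers_per_hash: 2,
        ttl: Duration::from_secs(30 * 60),
        peers: HashMap::new(),
    };
    let hash = [7u8; 20];
    let now = SystemTime::now();
    assert!(store.insert(hash, "1.2.3.4:6881".into(), now));
    assert!(store.insert(hash, "5.6.7.8:6881".into(), now));
    assert!(!store.insert(hash, "9.9.9.9:6881".into(), now)); // bucket full
    store.prune_expired(now + Duration::from_secs(60 * 60));
    assert!(store.accepts_announces_for(&hash)); // pruned, accepts again
}
```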
FILE: src/dht/persist.rs
  constant PERSISTENCE_VERSION (line 13) | const PERSISTENCE_VERSION: u32 = 1;
  type PersistenceConfig (line 16) | pub struct PersistenceConfig {
  type PersistedRoutingNode (line 22) | pub struct PersistedRoutingNode {
    method from_record (line 153) | fn from_record(record: &NodeRecord) -> Self {
    method to_record (line 166) | fn to_record(&self, now: Instant) -> NodeRecord {
  type PersistedRoutingTable (line 34) | pub struct PersistedRoutingTable {
  type PersistedStateEnvelope (line 42) | pub struct PersistedStateEnvelope {
  type PersistenceManager (line 51) | pub struct PersistenceManager {
    method new (line 56) | pub fn new(config: PersistenceConfig) -> Self {
    method config (line 60) | pub fn config(&self) -> &PersistenceConfig {
    method build_snapshot (line 64) | pub fn build_snapshot(
    method save_snapshot (line 96) | pub fn save_snapshot(&self, snapshot: &PersistedStateEnvelope) -> io::...
    method load_snapshot (line 119) | pub fn load_snapshot(&self, now: SystemTime) -> io::Result<Option<Pers...
    method restore_nodes (line 143) | pub fn restore_nodes(&self, routes: &PersistedRoutingTable, now: Insta...
  function normalize_persisted_trust (line 178) | fn normalize_persisted_trust(trust: NodeTrust) -> NodeTrust {
  function persisted_node (line 190) | fn persisted_node(
  function restore_nodes_neutralizes_stale_suspicious_trust (line 209) | fn restore_nodes_neutralizes_stale_suspicious_trust() {
  function restore_nodes_preserves_trusted_routes (line 219) | fn restore_nodes_preserves_trusted_routes() {
  function restore_nodes_normalizes_mixed_legacy_route_trust (line 227) | fn restore_nodes_normalizes_mixed_legacy_route_trust() {
  function load_snapshot_ignores_invalid_stale_and_unsupported_files (line 267) | fn load_snapshot_ignores_invalid_stale_and_unsupported_files() {

FILE: src/dht/public_addr.rs
  constant PUBLIC_ADDRESS_QUORUM (line 8) | const PUBLIC_ADDRESS_QUORUM: usize = 3;
  constant MAX_PUBLIC_ADDRESS_CANDIDATES (line 9) | const MAX_PUBLIC_ADDRESS_CANDIDATES: usize = 64;
  type PublicAddressObserver (line 12) | pub struct PublicAddressObserver {
    method record_observation (line 19) | pub fn record_observation(
    method confirmed_for (line 48) | pub fn confirmed_for(&self, family: AddressFamily) -> Option<SocketAdd...
    method prune_weakest_candidate (line 55) | fn prune_weakest_candidate(&mut self) {
  function addr (line 73) | fn addr(octet: u8, port: u16) -> SocketAddr {
  function public_address_requires_quorum (line 78) | fn public_address_requires_quorum() {
  function duplicate_voter_does_not_satisfy_quorum (line 92) | fn duplicate_voter_does_not_satisfy_quorum() {

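`public_addr.rs` confirms our externally observed address only after a quorum of distinct voters (`PUBLIC_ADDRESS_QUORUM = 3`) report the same address, as the `duplicate_voter_does_not_satisfy_quorum` test underlines. A hedged sketch of that mechanism, with simplified state (the real observer also tracks address families and prunes the weakest candidate at a cap):

```rust
// Illustrative sketch, NOT the repository's code: confirming a public
// address only after 3 *distinct* remote voters report it. The struct
// layout and `confirmed` helper are assumptions for demonstration.
use std::collections::{HashMap, HashSet};
use std::net::SocketAddr;

const QUORUM: usize = 3;

#[derive(Default)]
struct PublicAddressObserver {
    // observed address -> set of voters that reported it
    votes: HashMap<SocketAddr, HashSet<SocketAddr>>,
}

impl PublicAddressObserver {
    /// A voter repeating itself only re-inserts into the set,
    /// so duplicates never advance the quorum.
    fn record_observation(&mut self, voter: SocketAddr, observed: SocketAddr) {
        self.votes.entry(observed).or_default().insert(voter);
    }

    fn confirmed(&self) -> Option<SocketAddr> {
        self.votes
            .iter()
            .find(|(_, voters)| voters.len() >= QUORUM)
            .map(|(addr, _)| *addr)
    }
}

fn main() {
    let mut obs = PublicAddressObserver::default();
    let me: SocketAddr = "203.0.113.9:6881".parse().unwrap();
    let repeat_voter: SocketAddr = "198.51.100.1:6881".parse().unwrap();
    // one voter repeating five times does not satisfy the quorum
    for _ in 0..5 {
        obs.record_observation(repeat_voter, me);
    }
    assert_eq!(obs.confirmed(), None);
    // three distinct voters do
    for last in 1..=3u8 {
        let voter: SocketAddr = format!("198.51.100.{last}:6881").parse().unwrap();
        obs.record_observation(voter, me);
    }
    assert_eq!(obs.confirmed(), Some(me));
}
```

Counting distinct voters rather than raw observations keeps a single spoofing peer from rotating our node identity.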
FILE: src/dht/routing.rs
  constant GOOD_NODE_WINDOW (line 13) | pub const GOOD_NODE_WINDOW: Duration = Duration::from_secs(15 * 60);
  constant REFRESH_INTERVAL (line 14) | pub const REFRESH_INTERVAL: Duration = Duration::from_secs(15 * 60);
  constant BAD_NODE_FAILURE_THRESHOLD (line 15) | pub const BAD_NODE_FAILURE_THRESHOLD: u16 = 2;
  type RoutingConfig (line 18) | pub struct RoutingConfig {
  method default (line 25) | fn default() -> Self {
  type NodeStatus (line 35) | pub enum NodeStatus {
  type BucketRange (line 42) | pub struct BucketRange {
  type BucketSummary (line 49) | pub struct BucketSummary {
  type RoutingSnapshot (line 57) | pub struct RoutingSnapshot {
  type RefreshPlan (line 66) | pub struct RefreshPlan {
  type InsertOutcome (line 73) | pub enum InsertOutcome {
  type BucketPrefix (line 83) | struct BucketPrefix {
    method root (line 89) | fn root() -> Self {
    method contains (line 96) | fn contains(&self, node_id: &NodeId) -> bool {
    method split (line 100) | fn split(&self) -> Option<(Self, Self)> {
    method range (line 120) | fn range(&self) -> BucketRange {
    method random_target (line 135) | fn random_target(&self) -> NodeId {
  type Bucket (line 146) | struct Bucket {
    method new (line 154) | fn new(prefix: BucketPrefix, now: Instant) -> Self {
    method contains_local_id (line 163) | fn contains_local_id(&self, local_node_id: &NodeId) -> bool {
    method summary (line 167) | fn summary(&self) -> BucketSummary {
  type RoutingTable (line 178) | pub struct RoutingTable {
    method new (line 185) | pub fn new(local_node_id: NodeId, config: RoutingConfig, now: Instant)...
    method family (line 193) | pub fn family(&self) -> AddressFamily {
    method local_node_id (line 197) | pub fn local_node_id(&self) -> NodeId {
    method set_local_node_id (line 201) | pub fn set_local_node_id(&mut self, local_node_id: NodeId) {
    method bucket_count (line 205) | pub fn bucket_count(&self) -> usize {
    method all_nodes (line 209) | pub fn all_nodes(&self) -> Vec<NodeRecord> {
    method snapshot (line 216) | pub fn snapshot(&self, now: Instant) -> RoutingSnapshot {
    method insert (line 236) | pub fn insert(&mut self, mut candidate: NodeRecord, now: Instant) -> I...
    method record_query_sent (line 294) | pub fn record_query_sent(&mut self, addr: SocketAddr, now: Instant) ->...
    method record_response (line 301) | pub fn record_response(
    method record_inbound_query (line 314) | pub fn record_inbound_query(
    method record_failure (line 335) | pub fn record_failure(&mut self, addr: SocketAddr, now: Instant) -> bo...
    method closest_nodes (line 342) | pub fn closest_nodes(&self, target: NodeId, limit: usize) -> Vec<NodeR...
    method closest_good_nodes (line 349) | pub fn closest_good_nodes(
    method questionable_nodes (line 365) | pub fn questionable_nodes(&self, limit: usize, now: Instant) -> Vec<No...
    method refresh_plans (line 376) | pub fn refresh_plans(&self, now: Instant) -> Vec<RefreshPlan> {
    method bucket_index_for (line 389) | fn bucket_index_for(&self, node_id: &NodeId) -> usize {
    method split_bucket (line 396) | fn split_bucket(&mut self, bucket_index: usize, now: Instant) -> bool {
    method update_existing (line 435) | fn update_existing(
    method queue_replacement (line 464) | fn queue_replacement(&mut self, bucket_index: usize, candidate: NodeRe...
    method with_record_mut (line 475) | fn with_record_mut<F>(&mut self, addr: SocketAddr, mut apply: F) -> bool
    method has_blocking_public_identity_conflict (line 500) | fn has_blocking_public_identity_conflict(
  type RoutingActor (line 542) | pub struct RoutingActor {
    method new (line 547) | pub fn new(local_node_id: NodeId, config: RoutingConfig, now: Instant)...
    method family (line 553) | pub fn family(&self) -> AddressFamily {
    method table (line 557) | pub fn table(&self) -> &RoutingTable {
    method table_mut (line 561) | pub fn table_mut(&mut self) -> &mut RoutingTable {
    method set_local_node_id (line 565) | pub fn set_local_node_id(&mut self, local_node_id: NodeId) {
  function node_status (line 570) | pub fn node_status(record: &NodeRecord, now: Instant) -> NodeStatus {
  function xor_distance (line 593) | pub fn xor_distance(left: &NodeId, right: &NodeId) -> [u8; 20] {
  function compare_distance (line 606) | fn compare_distance(left: Option<NodeId>, right: Option<NodeId>, target:...
  function compare_record_distance (line 615) | fn compare_record_distance(left: &NodeRecord, right: &NodeRecord, target...
  function compare_replacement_priority (line 622) | fn compare_replacement_priority(left: &NodeRecord, right: &NodeRecord, n...
  function questionable_probe_targets (line 636) | fn questionable_probe_targets(bucket: &Bucket, now: Instant) -> Vec<Sock...
  function public_identity_conflicts (line 647) | fn public_identity_conflicts(candidate: &NodeRecord, existing: &NodeReco...
  function public_identity_replacement_preferred (line 659) | fn public_identity_replacement_preferred(
  function public_identity_preference_rank (line 667) | fn public_identity_preference_rank(record: &NodeRecord, now: Instant) ->...
  function node_status_rank (line 676) | fn node_status_rank(status: NodeStatus) -> u8 {
  function response_presence_rank (line 684) | fn response_presence_rank(last_response_at: Option<Instant>) -> u8 {
  function least_recently_seen_at (line 692) | fn least_recently_seen_at(record: &NodeRecord) -> Option<Instant> {
  function merge_record (line 701) | fn merge_record(target: &mut NodeRecord, candidate: &NodeRecord, now: In...
  function merge_trust (line 738) | fn merge_trust(current: NodeTrust, candidate: NodeTrust) -> NodeTrust {
  function trust_rank (line 746) | fn trust_rank(trust: NodeTrust) -> u8 {
  function bep42_rank (line 754) | fn bep42_rank(state: Bep42State) -> u8 {
  function prefix_matches (line 763) | fn prefix_matches(prefix: &[u8; 20], prefix_len: u8, candidate: &[u8; 20...
  function bit_at (line 772) | fn bit_at(bytes: &[u8; 20], bit_idx: u8) -> bool {
  function set_bit (line 778) | fn set_bit(bytes: &mut [u8; 20], bit_idx: u8, value: bool) {
  function node_id (line 794) | fn node_id(byte: u8) -> NodeId {
  function bep42_vector_node_id (line 798) | fn bep42_vector_node_id() -> NodeId {
  function responded_record (line 805) | fn responded_record(addr: SocketAddr, node_id: NodeId, now: Instant) -> ...
  function record_response_records_node_id_churn_without_distrusting (line 812) | fn record_response_records_node_id_churn_without_distrusting() {
  function non_compliant_bep42_node_keeps_neutral_trust (line 828) | fn non_compliant_bep42_node_keeps_neutral_trust() {
  function better_public_identity_candidate_replaces_non_compliant_duplicate (line 843) | fn better_public_identity_candidate_replaces_non_compliant_duplicate() {
  function weaker_public_identity_candidate_does_not_replace_secure_duplicate (line 874) | fn weaker_public_identity_candidate_does_not_replace_secure_duplicate() {

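The routing table stores nodes in a prefix trie of buckets, so it bottoms out in the bit-level helpers listed above (`bit_at`, `set_bit`, `prefix_matches`). A self-contained sketch of those primitives over 20-byte node IDs (the signatures match the listing; the implementations are illustrative):

```rust
// Illustrative sketch of the bit-level primitives a trie-style routing
// table needs, matching the `bit_at` / `set_bit` / `prefix_matches`
// signatures listed above. Bits are indexed MSB-first, as XOR-metric
// bucket prefixes require.
fn bit_at(bytes: &[u8; 20], bit_idx: u8) -> bool {
    let byte = bytes[(bit_idx / 8) as usize];
    (byte >> (7 - bit_idx % 8)) & 1 == 1
}

fn set_bit(bytes: &mut [u8; 20], bit_idx: u8, value: bool) {
    let mask = 1u8 << (7 - bit_idx % 8);
    let byte = &mut bytes[(bit_idx / 8) as usize];
    if value {
        *byte |= mask;
    } else {
        *byte &= !mask;
    }
}

/// A candidate belongs to a bucket when its first `prefix_len` bits
/// match the bucket's prefix.
fn prefix_matches(prefix: &[u8; 20], prefix_len: u8, candidate: &[u8; 20]) -> bool {
    (0..prefix_len).all(|i| bit_at(prefix, i) == bit_at(candidate, i))
}

fn main() {
    let mut id = [0u8; 20];
    set_bit(&mut id, 0, true); // MSB of byte 0
    assert_eq!(id[0], 0b1000_0000);
    assert!(bit_at(&id, 0));

    let prefix = id;
    let mut other = [0u8; 20];
    set_bit(&mut other, 0, true);
    set_bit(&mut other, 5, true);
    assert!(prefix_matches(&prefix, 1, &other)); // agrees on bit 0
    assert!(!prefix_matches(&prefix, 6, &other)); // diverges at bit 5
}
```

MSB-first indexing matters: `BucketPrefix::split` extends a prefix by one bit, so bit 0 must be the most significant bit of the ID for bucket ranges to stay contiguous.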
FILE: src/dht/scheduler.rs
  type DhtDemandState (line 9) | pub struct DhtDemandState {
    method is_awaiting_metadata (line 15) | pub(in crate::dht) fn is_awaiting_metadata(self) -> bool {
    method has_no_connected_peers (line 19) | pub(in crate::dht) fn has_no_connected_peers(self) -> bool {
    method scheduler_priority (line 23) | fn scheduler_priority(self) -> u8 {
  type DhtDemandMetrics (line 35) | pub struct DhtDemandMetrics {
    method activity_bps_or_bytes (line 55) | pub(in crate::dht) fn activity_bps_or_bytes(self) -> u64 {
    method wants_idle_speed_probe_for (line 62) | pub(in crate::dht) fn wants_idle_speed_probe_for(self, demand: DhtDema...
    method wants_extended_routine_search (line 72) | pub(in crate::dht) fn wants_extended_routine_search(self) -> bool {
  type DueDemandCandidate (line 94) | pub(super) struct DueDemandCandidate {
  type DemandEntrySnapshot (line 103) | pub(super) struct DemandEntrySnapshot {
  type DemandFinishMode (line 115) | pub(super) enum DemandFinishMode {
    method no_connected_peers_backoff_extra_steps (line 121) | fn no_connected_peers_backoff_extra_steps(self) -> u8 {
  type DemandEntry (line 130) | struct DemandEntry {
  type DemandScheduler (line 141) | pub(super) struct DemandScheduler {
    method new (line 150) | pub(super) fn new(
    method interval_for_demand (line 165) | fn interval_for_demand(
    method no_connected_peers_backoff_step_cap (line 185) | fn no_connected_peers_backoff_step_cap(&self) -> u8 {
    method capped_no_connected_peers_backoff_step (line 201) | fn capped_no_connected_peers_backoff_step(&self, step: u8) -> u8 {
    method apply_demand_update (line 205) | fn apply_demand_update(entry: &mut DemandEntry, demand: DhtDemandState...
    method register (line 224) | pub(super) fn register(&mut self, info_hash: InfoHash, demand: DhtDema...
    method unregister (line 247) | pub(super) fn unregister(&mut self, info_hash: InfoHash) -> bool {
    method update (line 261) | pub(super) fn update(&mut self, info_hash: InfoHash, demand: DhtDemand...
    method update_metrics (line 269) | pub(super) fn update_metrics(&mut self, info_hash: InfoHash, metrics: ...
    method demand_state (line 277) | pub(super) fn demand_state(&self, info_hash: InfoHash) -> Option<DhtDe...
    method entry_snapshot (line 281) | pub(super) fn entry_snapshot(&self, info_hash: InfoHash) -> Option<Dem...
    method entry_snapshots (line 296) | pub(super) fn entry_snapshots(&self) -> Vec<DemandEntrySnapshot> {
    method due_candidates (line 312) | pub(super) fn due_candidates(&self, now: Instant) -> Vec<DueDemandCand...
    method mark_in_progress (line 344) | pub(super) fn mark_in_progress(&mut self, info_hash: InfoHash) -> bool {
    method take_due (line 356) | pub(super) fn take_due(&mut self, now: Instant, limit: usize) -> Vec<I...
    method finish (line 370) | pub(super) fn finish(&mut self, info_hash: InfoHash, now: Instant) {
    method finish_with_mode (line 374) | pub(super) fn finish_with_mode(
    method reset_active (line 435) | pub(super) fn reset_active(&mut self, now: Instant) {
  function info_hash (line 451) | fn info_hash(byte: u8) -> InfoHash {
  function demand (line 455) | fn demand(awaiting_metadata: bool, connected_peers: usize) -> DhtDemandS...
  type SchedulerOp (line 463) | enum SchedulerOp {
  function demand_strategy (line 496) | fn demand_strategy() -> impl Strategy<Value = DhtDemandState> {
  function scheduler_op_strategy (line 505) | fn scheduler_op_strategy() -> impl Strategy<Value = SchedulerOp> {
  function assert_scheduler_invariants (line 543) | fn assert_scheduler_invariants(
  function register_is_due_immediately (line 585) | fn register_is_due_immediately() {
  function more_urgent_update_during_active_lookup_requeues_immediately (line 600) | fn more_urgent_update_during_active_lookup_requeues_immediately() {
  function urgent_entries_are_prioritized (line 623) | fn urgent_entries_are_prioritized() {
  function less_urgent_update_does_not_force_immediate_rerun (line 646) | fn less_urgent_update_does_not_force_immediate_rerun() {
  function finish_uses_reason_specific_intervals (line 671) | fn finish_uses_reason_specific_intervals() {
  function no_connected_peers_backoff_grows_to_cap (line 704) | fn no_connected_peers_backoff_grows_to_cap() {
  function accelerated_no_connected_peers_backoff_skips_one_step (line 742) | fn accelerated_no_connected_peers_backoff_skips_one_step() {
  function no_connected_peers_backoff_step_stays_capped_at_max_interval (line 778) | fn no_connected_peers_backoff_step_stays_capped_at_max_interval() {

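The scheduler tests above (`no_connected_peers_backoff_grows_to_cap`, `no_connected_peers_backoff_step_stays_capped_at_max_interval`) imply a retry interval that doubles per fruitless step and is clamped to a ceiling. A minimal sketch of that shape — the base and max durations here are assumptions, not the repository's actual `DHT_NO_CONNECTED_PEERS_*` values, which are truncated in this listing:

```rust
// Illustrative sketch, NOT the repository's code: an exponential
// "no connected peers" backoff that doubles from an assumed 60s base
// per step and is clamped at an assumed 30-minute ceiling, without
// overflowing for large step counts.
use std::time::Duration;

const BASE: Duration = Duration::from_secs(60);
const MAX: Duration = Duration::from_secs(30 * 60);

/// Interval to wait after `step` consecutive lookups that finished
/// with no connected peers.
fn backoff_interval(step: u8) -> Duration {
    // Saturate the shift so step counts >= 32 can't wrap around.
    let factor = 1u32.checked_shl(step as u32).unwrap_or(u32::MAX);
    (BASE * factor).min(MAX)
}

fn main() {
    assert_eq!(backoff_interval(0), Duration::from_secs(60));
    assert_eq!(backoff_interval(1), Duration::from_secs(120));
    // step 5 would be 32 minutes; the cap holds it at 30
    assert_eq!(backoff_interval(5), MAX);
    // very large steps stay pinned at the cap rather than overflowing
    assert_eq!(backoff_interval(40), MAX);
}
```

Capping the step itself (as `capped_no_connected_peers_backoff_step` suggests) achieves the same safety one layer earlier: once the interval reaches the ceiling, further steps add no information worth tracking.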
FILE: src/dht/service.rs
  constant DHT_MAINTENANCE_INTERVAL (line 118) | const DHT_MAINTENANCE_INTERVAL: Duration = Duration::from_secs(60);
  constant DHT_REBIND_TRANSPORT_DRAIN_TIMEOUT (line 119) | const DHT_REBIND_TRANSPORT_DRAIN_TIMEOUT: Duration = Duration::from_secs...
  constant DHT_ROUTINE_LOOKUP_REFRESH_INTERVAL (line 120) | const DHT_ROUTINE_LOOKUP_REFRESH_INTERVAL: Duration = DHT_MAINTENANCE_IN...
  constant DHT_NO_CONNECTED_PEERS_BASE_INTERVAL (line 121) | const DHT_NO_CONNECTED_PEERS_BASE_INTERVAL: Duration = Duration::from_se...
  constant DHT_NO_CONNECTED_PEERS_MAX_INTERVAL (line 122) | const DHT_NO_CONNECTED_PEERS_MAX_INTERVAL: Duration = Duration::from_sec...
  constant DHT_AWAITING_METADATA_REFRESH_INTERVAL (line 123) | const DHT_AWAITING_METADATA_REFRESH_INTERVAL: Duration = Duration::from_...
  constant DHT_HEALTH_REFRESH_INTERVAL (line 124) | const DHT_HEALTH_REFRESH_INTERVAL: Duration = Duration::from_secs(30);
  constant DHT_DEMAND_SCHEDULER_INTERVAL (line 125) | const DHT_DEMAND_SCHEDULER_INTERVAL: Duration = Duration::from_millis(250);
  constant DHT_DEMAND_LOOKUP_SLOT_COUNT (line 126) | const DHT_DEMAND_LOOKUP_SLOT_COUNT: usize = 10;
  constant DHT_DEMAND_LOOKUP_SLOT_FILL_PER_TICK (line 127) | const DHT_DEMAND_LOOKUP_SLOT_FILL_PER_TICK: usize = 5;
  constant DHT_DRAIN_LOOKUPS_PER_VIRTUAL_SLOT (line 128) | const DHT_DRAIN_LOOKUPS_PER_VIRTUAL_SLOT: usize = 16;
  constant DHT_PLANNER_TOKEN_SCALE (line 129) | const DHT_PLANNER_TOKEN_SCALE: u64 = 1_000;
  constant DHT_AWAITING_METADATA_LAUNCHES_PER_MINUTE (line 130) | const DHT_AWAITING_METADATA_LAUNCHES_PER_MINUTE: u64 = 30;
  constant DHT_AWAITING_METADATA_LAUNCH_BURST (line 131) | const DHT_AWAITING_METADATA_LAUNCH_BURST: u64 = 8;
  constant DHT_NO_CONNECTED_PEERS_LAUNCHES_PER_MINUTE (line 132) | const DHT_NO_CONNECTED_PEERS_LAUNCHES_PER_MINUTE: u64 = 30;
  constant DHT_NO_CONNECTED_PEERS_LAUNCH_BURST (line 133) | const DHT_NO_CONNECTED_PEERS_LAUNCH_BURST: u64 = 10;
  constant DHT_ROUTINE_REFRESH_LAUNCHES_PER_MINUTE (line 134) | const DHT_ROUTINE_REFRESH_LAUNCHES_PER_MINUTE: u64 = 5;
  constant DHT_ROUTINE_REFRESH_LAUNCH_BURST (line 135) | const DHT_ROUTINE_REFRESH_LAUNCH_BURST: u64 = 5;
  constant DHT_DEMAND_FAIRNESS_AGE (line 136) | const DHT_DEMAND_FAIRNESS_AGE: Duration = Duration::from_secs(10 * 60);
  constant DHT_DEMAND_SPARE_RESEARCH_MAX_ACTIVE (line 137) | const DHT_DEMAND_SPARE_RESEARCH_MAX_ACTIVE: usize = 1;
  constant DHT_DEMAND_SPARE_RESEARCH_LAUNCH_LIMIT (line 138) | const DHT_DEMAND_SPARE_RESEARCH_LAUNCH_LIMIT: usize = 1;
  constant DHT_DEMAND_SPARE_RESEARCH_MIN_INTERVAL (line 139) | const DHT_DEMAND_SPARE_RESEARCH_MIN_INTERVAL: Duration = Duration::from_...
  constant DHT_DEMAND_USEFUL_YIELD_BOOST_MAX_AGE (line 140) | const DHT_DEMAND_USEFUL_YIELD_BOOST_MAX_AGE: Duration = Duration::from_s...
  constant DHT_DEMAND_STRONG_YIELD_BOOST_MAX_AGE (line 141) | const DHT_DEMAND_STRONG_YIELD_BOOST_MAX_AGE: Duration = Duration::from_s...
  constant DHT_DEMAND_STRONG_YIELD_BOOST_MIN_UNIQUE_PEERS (line 142) | const DHT_DEMAND_STRONG_YIELD_BOOST_MIN_UNIQUE_PEERS: usize = 64;
  constant DHT_DEMAND_POWER_BASE_SCALE_HALVES (line 143) | const DHT_DEMAND_POWER_BASE_SCALE_HALVES: u8 = 2;
  constant DHT_DEMAND_POWER_MAX_SCALE_HALVES (line 144) | const DHT_DEMAND_POWER_MAX_SCALE_HALVES: u8 = 8;
  constant DHT_PEER_PRESSURE_CAP_RAMP_UP_INTERVAL (line 145) | const DHT_PEER_PRESSURE_CAP_RAMP_UP_INTERVAL: Duration = Duration::from_...
  constant DHT_IDLE_SPEED_PROBE_2X_MIN_IDLE (line 146) | const DHT_IDLE_SPEED_PROBE_2X_MIN_IDLE: Duration = Duration::from_secs(30);
  constant DHT_IDLE_SPEED_PROBE_3X_MIN_IDLE (line 147) | const DHT_IDLE_SPEED_PROBE_3X_MIN_IDLE: Duration = Duration::from_secs(60);
  constant DHT_IDLE_SPEED_PROBE_4X_MIN_IDLE (line 148) | const DHT_IDLE_SPEED_PROBE_4X_MIN_IDLE: Duration = Duration::from_secs(1...
  constant DHT_IDLE_SPEED_PROBE_DECAY_INTERVAL (line 149) | const DHT_IDLE_SPEED_PROBE_DECAY_INTERVAL: Duration = Duration::from_sec...
  constant DHT_AWAITING_METADATA_SLOT_CAP (line 150) | const DHT_AWAITING_METADATA_SLOT_CAP: usize = DHT_DEMAND_LOOKUP_SLOT_COUNT;
  constant DHT_NO_CONNECTED_PEERS_SLOT_CAP (line 151) | const DHT_NO_CONNECTED_PEERS_SLOT_CAP: usize = 8;
  constant DHT_ROUTINE_LOOKUP_SLOT_CAP (line 152) | const DHT_ROUTINE_LOOKUP_SLOT_CAP: usize = 3;
  constant DHT_PERSISTENCE_MAX_AGE (line 153) | const DHT_PERSISTENCE_MAX_AGE: Duration = Duration::from_secs(24 * 60 * ...
  constant DHT_STARTUP_BOOTSTRAP_DELAY (line 154) | const DHT_STARTUP_BOOTSTRAP_DELAY: Duration = Duration::from_secs(5);
  constant DHT_IPV6_HEDGE_DELAY (line 155) | const DHT_IPV6_HEDGE_DELAY: Duration = Duration::from_millis(750);
  constant DHT_LOOKUP_BOOTSTRAP_WAIT (line 156) | const DHT_LOOKUP_BOOTSTRAP_WAIT: Duration = Duration::from_secs(2);
  constant DHT_UNIQUE_PEERS_FOUND_WINDOW (line 157) | const DHT_UNIQUE_PEERS_FOUND_WINDOW: Duration = Duration::from_secs(10);
  constant DHT_PARKED_CRAWL_MAX_AGE (line 158) | const DHT_PARKED_CRAWL_MAX_AGE: Duration = Duration::from_secs(5 * 60);
  constant DHT_DEMAND_DRAIN_MAX_AGE (line 159) | const DHT_DEMAND_DRAIN_MAX_AGE: Duration = Duration::from_secs(5);
  constant DHT_DEMAND_DRAIN_POLL_INTERVAL (line 160) | const DHT_DEMAND_DRAIN_POLL_INTERVAL: Duration = Duration::from_millis(2...
  constant DHT_DEMAND_DRAIN_MAX_INFLIGHT_QUERIES (line 161) | const DHT_DEMAND_DRAIN_MAX_INFLIGHT_QUERIES: usize = 128;
  constant DHT_DEMAND_DRAIN_NO_LATE_YIELD_GRACE (line 162) | const DHT_DEMAND_DRAIN_NO_LATE_YIELD_GRACE: Duration = Duration::from_mi...
  constant DHT_AWAITING_METADATA_DRAIN_NO_LATE_YIELD_GRACE (line 163) | const DHT_AWAITING_METADATA_DRAIN_NO_LATE_YIELD_GRACE: Duration = Durati...
  constant DHT_ROUTINE_DRAIN_NO_LATE_YIELD_GRACE (line 164) | const DHT_ROUTINE_DRAIN_NO_LATE_YIELD_GRACE: Duration = Duration::from_m...
  constant DHT_AWAITING_METADATA_SLICE_WALL_TIME (line 165) | const DHT_AWAITING_METADATA_SLICE_WALL_TIME: Duration = Duration::from_s...
  constant DHT_AWAITING_METADATA_SLICE_IDLE_TIMEOUT (line 166) | const DHT_AWAITING_METADATA_SLICE_IDLE_TIMEOUT: Duration = Duration::fro...
  constant DHT_NO_CONNECTED_PEERS_SLICE_WALL_TIME (line 167) | const DHT_NO_CONNECTED_PEERS_SLICE_WALL_TIME: Duration = Duration::from_...
  constant DHT_NO_CONNECTED_PEERS_SLICE_IDLE_TIMEOUT (line 168) | const DHT_NO_CONNECTED_PEERS_SLICE_IDLE_TIMEOUT: Duration = Duration::fr...
  constant DHT_ROUTINE_SLICE_WALL_TIME (line 169) | const DHT_ROUTINE_SLICE_WALL_TIME: Duration = Duration::from_secs(2);
  constant DHT_ROUTINE_SLICE_IDLE_TIMEOUT (line 170) | const DHT_ROUTINE_SLICE_IDLE_TIMEOUT: Duration = Duration::from_millis(7...
  constant DHT_ROUTINE_SUPPORT_SLICE_WALL_TIME (line 171) | const DHT_ROUTINE_SUPPORT_SLICE_WALL_TIME: Duration = Duration::from_sec...
  constant DHT_ROUTINE_SUPPORT_SLICE_IDLE_TIMEOUT (line 172) | const DHT_ROUTINE_SUPPORT_SLICE_IDLE_TIMEOUT: Duration = Duration::from_...
  constant DHT_AWAITING_METADATA_SLICE_UNIQUE_PEER_CAP (line 173) | const DHT_AWAITING_METADATA_SLICE_UNIQUE_PEER_CAP: usize = 128;
  constant DHT_NO_CONNECTED_PEERS_SLICE_UNIQUE_PEER_CAP (line 174) | const DHT_NO_CONNECTED_PEERS_SLICE_UNIQUE_PEER_CAP: usize = 48;
  constant DHT_ROUTINE_SLICE_UNIQUE_PEER_CAP (line 175) | const DHT_ROUTINE_SLICE_UNIQUE_PEER_CAP: usize = 16;
  constant DHT_ROUTINE_SUPPORT_SLICE_UNIQUE_PEER_CAP (line 176) | const DHT_ROUTINE_SUPPORT_SLICE_UNIQUE_PEER_CAP: usize = 48;
  constant DHT_AWAITING_METADATA_STALLED_EMPTY_SLICE_RESET_THRESHOLD (line 177) | const DHT_AWAITING_METADATA_STALLED_EMPTY_SLICE_RESET_THRESHOLD: u32 = 4;
  constant DHT_NO_CONNECTED_PEERS_STALLED_EMPTY_SLICE_RESET_THRESHOLD (line 178) | const DHT_NO_CONNECTED_PEERS_STALLED_EMPTY_SLICE_RESET_THRESHOLD: u32 = 3;
  constant DHT_ROUTINE_STALLED_EMPTY_SLICE_RESET_THRESHOLD (line 179) | const DHT_ROUTINE_STALLED_EMPTY_SLICE_RESET_THRESHOLD: u32 = 2;
  constant DHT_AWAITING_METADATA_STALLED_LOW_YIELD_SLICE_MAX_UNIQUE_PEERS (line 180) | const DHT_AWAITING_METADATA_STALLED_LOW_YIELD_SLICE_MAX_UNIQUE_PEERS: us...
  constant DHT_NO_CONNECTED_PEERS_STALLED_LOW_YIELD_SLICE_MAX_UNIQUE_PEERS (line 181) | const DHT_NO_CONNECTED_PEERS_STALLED_LOW_YIELD_SLICE_MAX_UNIQUE_PEERS: u...
  constant DHT_ROUTINE_STALLED_LOW_YIELD_SLICE_MAX_UNIQUE_PEERS (line 182) | const DHT_ROUTINE_STALLED_LOW_YIELD_SLICE_MAX_UNIQUE_PEERS: usize = 1;
  constant DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MIN_VISITED (line 183) | const DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MIN_VISITED: usize = 12;
  constant DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MAX_RESPONDERS (line 184) | const DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MAX_RESPONDERS: usize = 3;
  constant DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MAX_FRONTIER (line 185) | const DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MAX_FRONTIER: usize = 8;
  constant DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MAX_RECEIVED_PEERS (line 186) | const DHT_NO_CONNECTED_PEERS_WEAK_PARKED_MAX_RECEIVED_PEERS: usize = 12;
  constant DHT_ROUTINE_WEAK_PARKED_MIN_VISITED (line 187) | const DHT_ROUTINE_WEAK_PARKED_MIN_VISITED: usize = 8;
  constant DHT_ROUTINE_WEAK_PARKED_MAX_RESPONDERS (line 188) | const DHT_ROUTINE_WEAK_PARKED_MAX_RESPONDERS: usize = 1;
  constant DHT_ROUTINE_WEAK_PARKED_MAX_FRONTIER (line 189) | const DHT_ROUTINE_WEAK_PARKED_MAX_FRONTIER: usize = 4;
  constant DHT_ROUTINE_WEAK_PARKED_MAX_RECEIVED_PEERS (line 190) | const DHT_ROUTINE_WEAK_PARKED_MAX_RECEIVED_PEERS: usize = 4;
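The tiered `DHT_IDLE_SPEED_PROBE_*_MIN_IDLE` constants above suggest a scheme where longer idle periods earn progressively more aggressive speed probes. A minimal sketch of that mapping, assuming illustrative threshold values (the 4x tier's actual value is truncated in this index, so these constants are stand-ins, not the repository's real ones):

```rust
use std::time::Duration;

// Illustrative stand-ins for the tiered idle thresholds listed above;
// the real 4x value is truncated in the index, so 120s is an assumption.
const IDLE_PROBE_2X_MIN_IDLE: Duration = Duration::from_secs(30);
const IDLE_PROBE_3X_MIN_IDLE: Duration = Duration::from_secs(60);
const IDLE_PROBE_4X_MIN_IDLE: Duration = Duration::from_secs(120);

/// Map how long a swarm has been idle to a probe multiplier:
/// the longest-idle tier that the duration clears wins.
fn idle_speed_probe_multiplier(idle: Duration) -> u32 {
    if idle >= IDLE_PROBE_4X_MIN_IDLE {
        4
    } else if idle >= IDLE_PROBE_3X_MIN_IDLE {
        3
    } else if idle >= IDLE_PROBE_2X_MIN_IDLE {
        2
    } else {
        1
    }
}

fn main() {
    assert_eq!(idle_speed_probe_multiplier(Duration::from_secs(10)), 1);
    assert_eq!(idle_speed_probe_multiplier(Duration::from_secs(45)), 2);
    assert_eq!(idle_speed_probe_multiplier(Duration::from_secs(90)), 3);
    assert_eq!(idle_speed_probe_multiplier(Duration::from_secs(600)), 4);
    println!("ok");
}
```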

FILE: src/dht/service/api.rs
  type DhtLookupRun (line 8) | pub struct DhtLookupRun {
  type DhtCommandSender (line 19) | pub(in crate::dht::service) type DhtCommandSender = mpsc::UnboundedSende...
  type DhtCommandReceiver (line 20) | pub(in crate::dht::service) type DhtCommandReceiver = mpsc::UnboundedRec...
  function send_dht_command (line 22) | pub(in crate::dht::service) fn send_dht_command(
  type DhtDemandSubscriptionInner (line 30) | pub(in crate::dht::service) enum DhtDemandSubscriptionInner {
  type DhtDemandSubscription (line 42) | pub struct DhtDemandSubscription {
    method empty (line 48) | fn empty() -> Self {
    method recv (line 56) | pub async fn recv(&mut self) -> Option<Vec<SocketAddr>> {
  method drop (line 62) | fn drop(&mut self) {
  type RecordedAnnounces (line 81) | type RecordedAnnounces = Arc<StdMutex<Vec<(Vec<u8>, Option<u16>)>>>;
  type RecordedReconfigures (line 83) | type RecordedReconfigures = Arc<StdMutex<Vec<DhtServiceConfig>>>;
  type RecordedPeerSlotUsages (line 85) | type RecordedPeerSlotUsages = Arc<StdMutex<Vec<(usize, usize)>>>;
  type TestDhtRecorder (line 89) | pub(crate) struct TestDhtRecorder {
    method recorded_announces (line 97) | pub(crate) fn recorded_announces(&self) -> Vec<(Vec<u8>, Option<u16>)> {
    method recorded_reconfigures (line 104) | pub(crate) fn recorded_reconfigures(&self) -> Vec<DhtServiceConfig> {
    method recorded_peer_slot_usages (line 111) | pub(crate) fn recorded_peer_slot_usages(&self) -> Vec<(usize, usize)> {
  type DhtCommand (line 120) | pub(in crate::dht::service) enum DhtCommand {
  type DhtService (line 190) | pub struct DhtService {
    method new (line 200) | pub async fn new(
    method handle (line 258) | pub fn handle(&self) -> DhtHandle {
    method subscribe_status (line 262) | pub fn subscribe_status(&self) -> watch::Receiver<DhtStatus> {
    method current_status (line 266) | pub fn current_status(&self) -> DhtStatus {
    method current_wave_telemetry (line 270) | pub fn current_wave_telemetry(&self) -> DhtWaveTelemetry {
    method current_warning (line 274) | pub fn current_warning(&self) -> Option<String> {
    method reconfigure (line 278) | pub fn reconfigure(&self, config: DhtServiceConfig) {
    method update_peer_slot_usage (line 282) | pub fn update_peer_slot_usage(&self, total_peers: usize, max_connected...
    method from_test_recorder (line 314) | pub(crate) fn from_test_recorder(recorder: TestDhtRecorder) -> Self {
  function configured_or_persisted_local_node_id (line 293) | fn configured_or_persisted_local_node_id() -> NodeId {
  function configured_status_from_settings (line 357) | pub fn configured_status_from_settings(settings: &Settings) -> DhtStatus {
  function configured_status_from_config (line 361) | fn configured_status_from_config(config: &DhtServiceConfig) -> DhtStatus {
  type DhtHandle (line 379) | pub struct DhtHandle {
    method fmt (line 400) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
    method disabled (line 416) | pub fn disabled() -> Self {
    method from_test_recorder (line 433) | fn from_test_recorder(recorder: TestDhtRecorder) -> Self {
    method status_snapshot (line 452) | pub async fn status_snapshot(&self) -> DhtStatus {
    method spawn_lookup_task (line 461) | pub fn spawn_lookup_task(
    method lookup_once (line 522) | pub async fn lookup_once(
    method announce_peer (line 543) | pub async fn announce_peer(&self, info_hash: Vec<u8>, port: Option<u16...
    method register_demand (line 577) | pub async fn register_demand(
    method update_demand (line 619) | pub fn update_demand(&self, info_hash: Vec<u8>, demand: DhtDemandState...
    method update_demand_metrics (line 639) | pub fn update_demand_metrics(&self, info_hash: Vec<u8>, metrics: DhtDe...
    method start_lookup_receiver (line 659) | async fn start_lookup_receiver(&self, info_hash: InfoHash) -> Option<M...
    method status_rx (line 696) | fn status_rx(&self) -> &watch::Receiver<DhtStatus> {
  type DhtHandleInner (line 384) | enum DhtHandleInner {
  method default (line 410) | fn default() -> Self {

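`DhtDemandSubscription` in `api.rs` carries a `Drop` impl, which points at drop-driven cleanup: releasing the subscription notifies the service so it can unregister the demand. A sketch of that pattern under stated assumptions (the command name, key type, and use of a std channel here are invented for illustration; the real service uses its own `DhtCommandSender`):

```rust
use std::sync::mpsc;

// Hypothetical command enum; the real DhtCommand variants differ.
enum Command {
    UnregisterDemand { key: u32 },
}

struct Subscription {
    key: u32,
    tx: mpsc::Sender<Command>,
}

impl Drop for Subscription {
    fn drop(&mut self) {
        // Best-effort notify: ignore the error if the service loop
        // has already shut down and the receiver is gone.
        let _ = self.tx.send(Command::UnregisterDemand { key: self.key });
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    {
        let _sub = Subscription { key: 42, tx };
    } // dropped here -> unregister command is sent
    match rx.recv().unwrap() {
        Command::UnregisterDemand { key } => assert_eq!(key, 42),
    }
    println!("ok");
}
```

The payoff of this shape is that callers cannot leak a demand registration: cleanup rides on ownership rather than on remembering to call an explicit `unsubscribe`.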
FILE: src/dht/service/api_tests.rs
  function dht_service_new_falls_back_to_disabled_when_initial_runtime_build_fails (line 5) | async fn dht_service_new_falls_back_to_disabled_when_initial_runtime_bui...
  function managed_lookup_receiver_drop_sends_cancel_for_non_empty_lookup_ids (line 35) | async fn managed_lookup_receiver_drop_sends_cancel_for_non_empty_lookup_...
  function managed_lookup_receiver_drop_ignores_empty_lookup_ids (line 58) | async fn managed_lookup_receiver_drop_ignores_empty_lookup_ids() {
  function dht_demand_subscription_drop_sends_unregister_for_service_subscription (line 75) | async fn dht_demand_subscription_drop_sends_unregister_for_service_subsc...
  function summarize_lookup_receiver_counts_unique_peer_families (line 104) | async fn summarize_lookup_receiver_counts_unique_peer_families() {

FILE: src/dht/service/command_tests.rs
  function dht_runtime_command_model_reduces_runtime_commands_only (line 5) | fn dht_runtime_command_model_reduces_runtime_commands_only() {
  function dht_runtime_command_model_routes_start_get_peers_and_announce (line 40) | fn dht_runtime_command_model_routes_start_get_peers_and_announce() {
  function dht_runtime_command_model_routes_family_attach_and_cancel (line 79) | fn dht_runtime_command_model_routes_family_attach_and_cancel() {
  function dht_runtime_command_model_routes_planner_work_with_start_due_followup (line 135) | fn dht_runtime_command_model_routes_planner_work_with_start_due_followup...

FILE: src/dht/service/commands.rs
  type DhtRuntimeLookupFamilyRequest (line 16) | pub(super) struct DhtRuntimeLookupFamilyRequest {
  type DhtRuntimeCommandAction (line 27) | pub(super) enum DhtRuntimeCommandAction {
    method kind (line 90) | fn kind(&self) -> &'static str {
  type DhtRuntimeCommandEffect (line 54) | pub(super) enum DhtRuntimeCommandEffect {
    method kind (line 105) | fn kind(&self) -> &'static str {
  type DhtRuntimeCommandReduction (line 83) | pub(super) struct DhtRuntimeCommandReduction {
  type DhtRuntimeCommandModel (line 87) | pub(super) struct DhtRuntimeCommandModel;
    method update_command (line 121) | pub(super) fn update_command(command: DhtCommand) -> Option<DhtRuntime...
    method update (line 191) | pub(super) fn update(action: DhtRuntimeCommandAction) -> DhtRuntimeCom...
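`commands.rs` pairs `DhtRuntimeCommandAction` with `DhtRuntimeCommandEffect` behind a pure `update` on `DhtRuntimeCommandModel`, and the tests above ("reduces runtime commands only") check the reduction, not I/O. A minimal sketch of that action-to-effects reducer shape, with variant names invented for the example (they do not match the repository's real enums):

```rust
// Illustrative action/effect pair; the real enums carry lookup families,
// announce payloads, and planner follow-ups.
#[derive(Debug, PartialEq)]
enum Action {
    StartLookup { lookup_id: u64 },
    CancelLookup { lookup_id: u64 },
}

#[derive(Debug, PartialEq)]
enum Effect {
    SpawnLookup { lookup_id: u64 },
    SendCancel { lookup_id: u64 },
}

struct Reduction {
    effects: Vec<Effect>,
}

struct CommandModel;

impl CommandModel {
    // Pure reducer: no network or task spawning happens here. Callers
    // apply the returned effects in a separate "apply effects" step,
    // which keeps this logic unit-testable without a runtime.
    fn update(action: Action) -> Reduction {
        let effects = match action {
            Action::StartLookup { lookup_id } => vec![Effect::SpawnLookup { lookup_id }],
            Action::CancelLookup { lookup_id } => vec![Effect::SendCancel { lookup_id }],
        };
        Reduction { effects }
    }
}

fn main() {
    let reduction = CommandModel::update(Action::StartLookup { lookup_id: 7 });
    assert_eq!(reduction.effects, vec![Effect::SpawnLookup { lookup_id: 7 }]);
    println!("ok");
}
```

The same split shows up again in `lifecycle.rs` (`DhtLifecycleAction` / `DhtLifecycleEffect`) and in `effects.rs`, where the `apply_*_effects` functions perform the actual async work.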

FILE: src/dht/service/config.rs
  type DhtBackendKind (line 9) | pub enum DhtBackendKind {
    method from_override (line 42) | fn from_override(value: &str) -> Option<Self> {
  type DhtServiceConfig (line 17) | pub struct DhtServiceConfig {
    method from_settings (line 26) | pub fn from_settings(settings: &Settings) -> Self {
  function forced_internal_backend_error (line 52) | pub(in crate::dht::service) fn forced_internal_backend_error(

FILE: src/dht/service/driver.rs
  type LoopEvent (line 19) | pub(in crate::dht::service) enum LoopEvent {
  function command_event (line 30) | pub(in crate::dht::service) fn command_event(maybe_command: Option<DhtCo...
  function run_service (line 38) | pub(in crate::dht::service) async fn run_service(

FILE: src/dht/service/driver_tests.rs
  function disabled_service_command_loop_delivers_peers_and_honors_unregister (line 5) | async fn disabled_service_command_loop_delivers_peers_and_honors_unregis...
  function disabled_service_command_loop_returns_empty_lookup_and_failed_announce (line 121) | async fn disabled_service_command_loop_returns_empty_lookup_and_failed_a...
  function disabled_service_reconfigure_failure_publishes_warning_without_generation_bump (line 175) | async fn disabled_service_reconfigure_failure_publishes_warning_without_...
  function active_service_reconfigure_to_disabled_publishes_status_and_preserves_subscriber (line 224) | async fn active_service_reconfigure_to_disabled_publishes_status_and_pre...
  function active_service_same_port_reconfigure_drops_old_runtime_before_binding (line 320) | async fn active_service_same_port_reconfigure_drops_old_runtime_before_b...
  function active_service_same_port_reconfigure_waits_for_inflight_transport_users (line 387) | async fn active_service_same_port_reconfigure_waits_for_inflight_transpo...
  function active_service_different_port_reconfigure_releases_old_runtime_after_success (line 462) | async fn active_service_different_port_reconfigure_releases_old_runtime_...
  function active_service_same_port_reconfigure_failure_restores_previous_runtime (line 541) | async fn active_service_same_port_reconfigure_failure_restores_previous_...

FILE: src/dht/service/effects.rs
  function start_due_demands_for_state (line 6) | pub(in crate::dht::service) async fn start_due_demands_for_state(
  function apply_demand_planner_effects_for_state (line 14) | pub(in crate::dht::service) fn apply_demand_planner_effects_for_state(
  function apply_dht_service_effects (line 29) | pub(in crate::dht::service) async fn apply_dht_service_effects(
  function apply_dht_demand_command_effects (line 128) | pub(in crate::dht::service) async fn apply_dht_demand_command_effects(
  function apply_dht_lifecycle_effects (line 165) | pub(in crate::dht::service) async fn apply_dht_lifecycle_effects(
  function apply_demand_subscriber_effects (line 254) | pub(in crate::dht::service) fn apply_demand_subscriber_effects(
  function apply_dht_runtime_command_effects (line 327) | pub(in crate::dht::service) async fn apply_dht_runtime_command_effects(
  function apply_demand_planner_effects (line 432) | pub(in crate::dht::service) fn apply_demand_planner_effects(
  function finish_drained_demand_lookup (line 546) | pub(in crate::dht::service) fn finish_drained_demand_lookup(
  function start_due_demands (line 579) | pub(in crate::dht::service) async fn start_due_demands(

FILE: src/dht/service/lifecycle.rs
  type DhtLifecycleAction (line 9) | pub(super) enum DhtLifecycleAction {
    method kind (line 56) | fn kind(&self) -> &'static str {
  type DhtLifecycleEffect (line 34) | pub(super) enum DhtLifecycleEffect {
    method kind (line 71) | fn kind(&self) -> &'static str {
  type DhtLifecycleReduction (line 49) | pub(super) struct DhtLifecycleReduction {
  type DhtLifecycleModel (line 53) | pub(super) struct DhtLifecycleModel;
    method update (line 86) | pub(super) fn update(action: DhtLifecycleAction) -> DhtLifecycleReduct...

FILE: src/dht/service/lifecycle_tests.rs
  function dht_lifecycle_model_startup_bootstrap_runs_only_when_due_and_idle (line 5) | fn dht_lifecycle_model_startup_bootstrap_runs_only_when_due_and_idle() {
  function dht_lifecycle_model_startup_bootstrap_result_updates_retry_state (line 34) | fn dht_lifecycle_model_startup_bootstrap_result_updates_retry_state() {
  function dht_lifecycle_model_maintenance_only_runs_when_runtime_idle (line 59) | fn dht_lifecycle_model_maintenance_only_runs_when_runtime_idle() {
  function dht_lifecycle_model_health_tick_publishes_expires_and_saves (line 76) | fn dht_lifecycle_model_health_tick_publishes_expires_and_saves() {
  function dht_lifecycle_model_runtime_failures_publish_warning_status (line 89) | fn dht_lifecycle_model_runtime_failures_publish_warning_status() {
  function dht_lifecycle_model_shutdown_saves_runtime_state (line 113) | fn dht_lifecycle_model_shutdown_saves_runtime_state() {

FILE: src/dht/service/monitor.rs
  type DhtActionEffectSnapshot (line 7) | pub(in crate::dht::service) struct DhtActionEffectSnapshot {
  function action_effect_snapshot (line 14) | pub(in crate::dht::service) fn action_effect_snapshot(
  function observe_action_effect_reduction (line 27) | pub(in crate::dht::service) fn observe_action_effect_reduction<I>(

FILE: src/dht/service/monitor_tests.rs
  function action_effect_snapshot_records_reduction_shape (line 4) | fn action_effect_snapshot_records_reduction_shape() {

FILE: src/dht/service/planner.rs
  type DemandPlannerActionView (line 43) | pub(super) struct DemandPlannerActionView {
    method from_action (line 77) | pub(super) fn from_action(action: &DemandPlannerAction<'_>) -> Self {
    method with_demand (line 212) | fn with_demand(mut self, demand: DhtDemandState) -> Self {
    method with_metrics (line 219) | fn with_metrics(mut self, metrics: DhtDemandMetrics) -> Self {
  type DemandPlannerEffectView (line 243) | pub(super) struct DemandPlannerEffectView {
    method from_effect (line 290) | pub(super) fn from_effect(effect: &DemandPlannerEffect) -> Self {
    method with_demand (line 376) | fn with_demand(mut self, demand: DhtDemandState) -> Self {
    method with_metrics (line 383) | fn with_metrics(mut self, metrics: DhtDemandMetrics, demand: Option<Dh...
  function dht_actor_monitor_enabled (line 408) | pub(super) fn dht_actor_monitor_enabled() -> bool {
  function demand_planner_monitor_enabled (line 412) | pub(super) fn demand_planner_monitor_enabled() -> bool {
  function dht_invariant_checks_enabled (line 416) | pub(super) fn dht_invariant_checks_enabled() -> bool {
  function short_info_hash (line 420) | pub(super) fn short_info_hash(info_hash: InfoHash) -> String {
  function optional_info_hash_label (line 424) | pub(super) fn optional_info_hash_label(info_hash: Option<InfoHash>) -> S...
  function trace_demand_planner_reduction (line 428) | pub(super) fn trace_demand_planner_reduction(
  function trace_demand_planner_effect (line 519) | pub(super) fn trace_demand_planner_effect(stage: &'static str, effect: &...
  method update (line 577) | pub(super) fn update(&mut self, action: DemandPlannerAction<'_>) -> Dema...

FILE: src/dht/service/planner/drain.rs
  function take_parked_family_state (line 7) | pub(in crate::dht::service) fn take_parked_family_state(
  function store_parked_lookup_states (line 37) | pub(in crate::dht::service) fn store_parked_lookup_states(
  function parked_slice_outcome_for_quality (line 65) | pub(in crate::dht::service) fn parked_slice_outcome_for_quality(
  function aggregate_lookup_quality (line 76) | pub(in crate::dht::service) fn aggregate_lookup_quality(
  function park_lookup_ids (line 86) | pub(in crate::dht::service) fn park_lookup_ids(
  function schedule_drained_demand_finalize (line 125) | pub(in crate::dht::service) fn schedule_drained_demand_finalize(
  function demand_drain_duration (line 140) | pub(in crate::dht::service) fn demand_drain_duration(
  function demand_drain_no_late_yield_grace (line 198) | pub(in crate::dht::service) fn demand_drain_no_late_yield_grace(
  function demand_drain_score (line 208) | pub(in crate::dht::service) fn demand_drain_score(
  function draining_demand_inflight (line 240) | pub(in crate::dht::service) fn draining_demand_inflight(
  function demand_drain_admission_snapshot (line 252) | pub(in crate::dht::service) fn demand_drain_admission_snapshot(
  function cancel_lookup_ids_to_parked (line 262) | pub(in crate::dht::service) fn cancel_lookup_ids_to_parked(
  function drain_lookup_ids (line 292) | pub(in crate::dht::service) fn drain_lookup_ids(
  function drained_demand_lookup_runtime_ready (line 407) | pub(in crate::dht::service) fn drained_demand_lookup_runtime_ready(
  function record_drain_peers_received (line 414) | pub(in crate::dht::service) fn record_drain_peers_received(
  method drain_runtime_readiness (line 437) | pub(in crate::dht::service) fn drain_runtime_readiness(
  method take_parked_family_state (line 452) | pub(in crate::dht::service) fn take_parked_family_state(
  method park_lookup_ids (line 468) | pub(in crate::dht::service) fn park_lookup_ids(
  method drain_lookup_ids (line 489) | pub(in crate::dht::service) fn drain_lookup_ids(
  method drain_admission_snapshot (line 514) | pub(in crate::dht::service) fn drain_admission_snapshot(
  method finalize_drained_lookup (line 523) | pub(in crate::dht::service) fn finalize_drained_lookup(
  function drained_demand_lookup_ready_for_finalize (line 541) | pub(in crate::dht::service) fn drained_demand_lookup_ready_for_finalize(
  function finalize_drained_demand_lookup (line 553) | pub(in crate::dht::service) fn finalize_drained_demand_lookup(
  function evict_stale_parked_crawls (line 605) | pub(in crate::dht::service) fn evict_stale_parked_crawls(

FILE: src/dht/service/planner/drain_tests.rs
  function demand_crawl_state_reuses_across_class_change_and_resets_on_staleness_or_low_quality (line 6) | fn demand_crawl_state_reuses_across_class_change_and_resets_on_staleness...
  function awaiting_metadata_parked_crawl_resets_after_repeated_zero_yield (line 151) | fn awaiting_metadata_parked_crawl_resets_after_repeated_zero_yield() {
  function parked_quality_thresholds_match_class_expectations (line 183) | fn parked_quality_thresholds_match_class_expectations() {
  function parked_slice_outcome_separates_healthy_zero_from_weak_low_yield (line 212) | fn parked_slice_outcome_separates_healthy_zero_from_weak_low_yield() {
  function draining_demand_records_late_unique_peers_without_double_counting (line 255) | fn draining_demand_records_late_unique_peers_without_double_counting() {
  function drain_finalize_readiness_bounds_waiting_drains (line 289) | fn drain_finalize_readiness_bounds_waiting_drains() {
  function drain_policy_prefers_productive_slices_and_rejects_idle_no_peer_work (line 337) | fn drain_policy_prefers_productive_slices_and_rejects_idle_no_peer_work() {
  function demand_slice_metrics_record_starts_stops_and_resets (line 384) | fn demand_slice_metrics_record_starts_stops_and_resets() {
  function demand_planner_drained_lookup_lifecycle_keeps_late_peer_yield_in_state (line 464) | fn demand_planner_drained_lookup_lifecycle_keeps_late_peer_yield_in_stat...
  function parked_family_state_round_trips_each_family_and_clears_entry (line 596) | fn parked_family_state_round_trips_each_family_and_clears_entry() {
  function parked_family_state_reset_drops_low_quality_crawl_and_records_reason (line 639) | fn parked_family_state_reset_drops_low_quality_crawl_and_records_reason() {
  function demand_planner_drain_runtime_readiness_defaults_ready_without_runtime (line 668) | fn demand_planner_drain_runtime_readiness_defaults_ready_without_runtime...
  function drain_virtual_slots_reduce_launch_budget_fractionally (line 688) | fn drain_virtual_slots_reduce_launch_budget_fractionally() {
  function demand_planner_peers_received_action_records_drain_unique_peers (line 719) | fn demand_planner_peers_received_action_records_drain_unique_peers() {
  function demand_planner_drain_tick_action_requests_finalize_for_ready_drains (line 761) | fn demand_planner_drain_tick_action_requests_finalize_for_ready_drains() {
  function demand_planner_lookup_park_rejection_finishes_scheduler_entry (line 793) | fn demand_planner_lookup_park_rejection_finishes_scheduler_entry() {
  function demand_planner_lookup_park_admission_keeps_scheduler_entry_in_progress (line 864) | fn demand_planner_lookup_park_admission_keeps_scheduler_entry_in_progres...
  function demand_planner_lookup_park_admission_requests_finalize_after_class_change (line 931) | fn demand_planner_lookup_park_admission_requests_finalize_after_class_ch...
  function demand_planner_drain_finalized_action_finishes_and_applies_backoff_mode (line 990) | fn demand_planner_drain_finalized_action_finishes_and_applies_backoff_mo...

FILE: src/dht/service/planner/invariant_tests.rs
  function demand_planner_invariants_accept_normal_active_and_draining_state (line 6) | fn demand_planner_invariants_accept_normal_active_and_draining_state() {
  function demand_planner_invariants_accept_pending_lookup_start_state (line 48) | fn demand_planner_invariants_accept_pending_lookup_start_state() {
  function demand_planner_invariants_accept_pending_lookup_park_state (line 69) | fn demand_planner_invariants_accept_pending_lookup_park_state() {
  function demand_planner_invariants_accept_pending_park_after_demand_class_changes (line 90) | fn demand_planner_invariants_accept_pending_park_after_demand_class_chan...
  function demand_planner_invariants_reject_active_without_scheduler_entry (line 119) | fn demand_planner_invariants_reject_active_without_scheduler_entry() {
  function demand_planner_invariants_reject_duplicate_lookup_id (line 136) | fn demand_planner_invariants_reject_duplicate_lookup_id() {
  function demand_planner_invariants_reject_scheduler_in_progress_without_lookup_state (line 165) | fn demand_planner_invariants_reject_scheduler_in_progress_without_lookup...

FILE: src/dht/service/planner/invariants.rs
  type DemandPlannerInvariantViolation (line 7) | pub(in crate::dht::service) struct DemandPlannerInvariantViolation {
    method new (line 14) | fn new(kind: &'static str, info_hash: Option<InfoHash>, detail: impl I...
    method info_hash_label (line 22) | fn info_hash_label(&self) -> String {
  function check_demand_planner_invariants (line 27) | pub(in crate::dht::service) fn check_demand_planner_invariants(
  function observe_demand_planner_invariants (line 320) | pub(in crate::dht::service) fn observe_demand_planner_invariants(

FILE: src/dht/service/planner/reducer_tests.rs
  function demand_planner_plan_due_starts_due_demands_by_class_and_marks_state (line 7) | fn demand_planner_plan_due_starts_due_demands_by_class_and_marks_state() {
  function demand_planner_updates_demand_metrics_without_starting_work (line 92) | fn demand_planner_updates_demand_metrics_without_starting_work() {
  function demand_planner_uses_metrics_when_building_routine_lookup_plan (line 129) | fn demand_planner_uses_metrics_when_building_routine_lookup_plan() {
  function demand_planner_plan_due_skips_draining_demands_but_launches_independent_work (line 180) | fn demand_planner_plan_due_skips_draining_demands_but_launches_independe...
  function demand_planner_lookup_start_failed_releases_scheduler_entry_and_refunds_slot (line 239) | fn demand_planner_lookup_start_failed_releases_scheduler_entry_and_refun...
  function demand_planner_duplicate_subscribers_keep_lookup_until_final_unsubscribe (line 293) | fn demand_planner_duplicate_subscribers_keep_lookup_until_final_unsubscr...
  function demand_planner_runtime_reset_action_clears_runtime_state_and_preserves_demands (line 348) | fn demand_planner_runtime_reset_action_clears_runtime_state_and_preserve...
  function demand_planner_lookup_finished_action_updates_state_and_emits_metrics_effect (line 396) | fn demand_planner_lookup_finished_action_updates_state_and_emits_metrics...
  function demand_planner_update_action_requests_drain_finalize_on_class_mismatch (line 446) | fn demand_planner_update_action_requests_drain_finalize_on_class_mismatc...
  function demand_planner_duplicate_register_requests_drain_finalize_on_class_mismatch (line 493) | fn demand_planner_duplicate_register_requests_drain_finalize_on_class_mi...
  function demand_planner_subscriber_removed_action_detaches_lookup_work_on_final_subscriber (line 555) | fn demand_planner_subscriber_removed_action_detaches_lookup_work_on_fina...

FILE: src/dht/service/planner/replay_tests.rs
  type PlannerReplay (line 6) | struct PlannerReplay {
    method new (line 15) | fn new() -> Self {
    method advance (line 26) | fn advance(&mut self, duration: Duration) {
    method register (line 30) | fn register(&mut self, label: &str, key: u32, demand: DhtDemandState) {
    method update (line 41) | fn update(&mut self, label: &str, key: u32, demand: DhtDemandState) {
    method update_metrics (line 52) | fn update_metrics(&mut self, label: &str, key: u32, metrics: DhtDemand...
    method plan (line 62) | fn plan(&mut self, label: &str, runtime_available: bool) {
    method finish (line 87) | fn finish(&mut self, label: &str, key: u32, total_peers: usize, unique...
    method park_active (line 107) | fn park_active(
    method add_drain_peers (line 172) | fn add_drain_peers(&mut self, label: &str, key: u32, peer_count: u8) {
    method drain_tick (line 185) | fn drain_tick(&mut self, label: &str, runtime_ready: bool) {
    method finalize_drained (line 208) | fn finalize_drained(&mut self, label: &str, info_hash: InfoHash) {
    method runtime_reset (line 240) | fn runtime_reset(&mut self, label: &str) {
    method reduce (line 244) | fn reduce(&mut self, label: &str, action: DemandPlannerAction<'_>) -> ...
    method rendered (line 257) | fn rendered(&self) -> String {
  function effect_labels (line 267) | fn effect_labels(effects: &[DemandPlannerEffect]) -> Vec<String> {
  function effect_label (line 271) | fn effect_label(effect: &DemandPlannerEffect) -> String {
  function plan_label (line 341) | fn plan_label(plan: Option<DemandPlannerPlanStats>) -> String {
  function state_label (line 359) | fn state_label(base: Instant, planner: &DemandPlannerModel) -> String {
  function entry_labels (line 370) | fn entry_labels(base: Instant, planner: &DemandPlannerModel) -> Vec<Stri...
  function active_labels (line 392) | fn active_labels(planner: &DemandPlannerModel) -> Vec<String> {
  function pending_labels (line 409) | fn pending_labels(planner: &DemandPlannerModel) -> Vec<String> {
  function drain_labels (line 427) | fn drain_labels(base: Instant, planner: &DemandPlannerModel) -> Vec<Stri...
  function history_labels (line 447) | fn history_labels(base: Instant, planner: &DemandPlannerModel) -> Vec<St...
  function optional_instant_ms (line 466) | fn optional_instant_ms(base: Instant, instant: Option<Instant>) -> String {
  function instant_ms (line 472) | fn instant_ms(base: Instant, instant: Instant) -> u64 {
  function hash_label (line 476) | fn hash_label(info_hash: InfoHash) -> String {
  function metadata_demand (line 480) | fn metadata_demand() -> DhtDemandState {
  function no_peer_demand (line 487) | fn no_peer_demand() -> DhtDemandState {
  function routine_demand (line 494) | fn routine_demand(connected_peers: usize) -> DhtDemandState {
  function active_complete_upload_metrics (line 501) | fn active_complete_upload_metrics(connected_peers: usize) -> DhtDemandMe...
  function idle_probe_metrics (line 515) | fn idle_probe_metrics() -> DhtDemandMetrics {
  function demand_from_trace_class (line 525) | fn demand_from_trace_class(class: &str, connected_peers: usize) -> DhtDe...
  function stop_reason_from_trace (line 534) | fn stop_reason_from_trace(token: &str) -> DemandSliceStopReason {
  function metrics_from_trace (line 544) | fn metrics_from_trace(token: &str, connected_peers: usize) -> DhtDemandM...
  function replay_normalized_trace_fixture (line 561) | fn replay_normalized_trace_fixture(script: &str) -> String {
  function demand_planner_replays_normalized_trace_fixture (line 633) | fn demand_planner_replays_normalized_trace_fixture() {
  function demand_planner_replays_fixed_trace_with_stable_effects_and_state (line 661) | fn demand_planner_replays_fixed_trace_with_stable_effects_and_state() {
  function demand_planner_replays_idle_speed_probe_boost_without_wall_clock_or_network (line 709) | fn demand_planner_replays_idle_speed_probe_boost_without_wall_clock_or_n...

FILE: src/dht/service/planner/selection.rs
  function active_demand_lookup_slot_count (line 7) | pub(in crate::dht::service) fn active_demand_lookup_slot_count(
  function active_demand_lookup_slot_counts (line 13) | pub(in crate::dht::service) fn active_demand_lookup_slot_counts(
  function draining_demand_slot_counts (line 23) | pub(in crate::dht::service) fn draining_demand_slot_counts(
  function drain_virtual_slot_count (line 33) | pub(in crate::dht::service) fn drain_virtual_slot_count(draining_lookup_...
  function demand_lookup_launch_budget (line 42) | pub(in crate::dht::service) fn demand_lookup_launch_budget(
  function demand_lookup_class_slot_cap (line 52) | pub(in crate::dht::service) fn demand_lookup_class_slot_cap(class: Deman...
  function due_candidate_has_reusable_parked_crawl (line 60) | pub(in crate::dht::service) fn due_candidate_has_reusable_parked_crawl(
  function candidate_last_useful_yield_age (line 71) | pub(in crate::dht::service) fn candidate_last_useful_yield_age(
  function candidate_last_unique_peers (line 82) | pub(in crate::dht::service) fn candidate_last_unique_peers(
  function candidate_due_age (line 92) | pub(in crate::dht::service) fn candidate_due_age(
  function candidate_has_fairness_age (line 99) | pub(in crate::dht::service) fn candidate_has_fairness_age(
  function candidate_has_useful_yield_history (line 106) | pub(in crate::dht::service) fn candidate_has_useful_yield_history(
  function candidate_wants_swarm_support (line 115) | pub(in crate::dht::service) fn candidate_wants_swarm_support(
  function candidate_selection_reason (line 122) | pub(in crate::dht::service) fn candidate_selection_reason(
  function demand_candidate_priority_score (line 141) | pub(in crate::dht::service) fn demand_candidate_priority_score(
  function candidate_last_activity_age (line 169) | pub(in crate::dht::service) fn candidate_last_activity_age(
  function spare_research_candidate_ready (line 182) | pub(in crate::dht::service) fn spare_research_candidate_ready(
  function demand_planner_selection_stats (line 192) | pub(in crate::dht::service) fn demand_planner_selection_stats(
  function select_spare_research_launches (line 220) | pub(in crate::dht::service) fn select_spare_research_launches(
  function select_idle_speed_probe_launches (line 290) | pub(in crate::dht::service) fn select_idle_speed_probe_launches(
  function select_due_demand_launches (line 370) | pub(in crate::dht::service) fn select_due_demand_launches(
  function select_due_demand_launches_with_stats (line 391) | pub(in crate::dht::service) fn select_due_demand_launches_with_stats(
Condensed preview — 278 files, each showing path, character count, and a content snippet (5,251K chars total).
[
  {
    "path": ".dockerignore",
    "chars": 24,
    "preview": ".git\n.gitignore\ntarget/\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.yml",
    "chars": 2873,
    "preview": "name: 🐛 Bug Report\ndescription: Report a bug or unexpected behavior\ntitle: \"[Bug]: \"\nlabels: [\"type: bug\", \"triage: new\""
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "chars": 623,
    "preview": "blank_issues_enabled: false\ncontact_links:\n  - name: 💬 GitHub Discussions\n    url: https://github.com/Jagalite/superseed"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/documentation.yml",
    "chars": 1762,
    "preview": "name: 📚 Documentation\ndescription: Report documentation issues or suggest improvements\ntitle: \"[Docs]: \"\nlabels: [\"type:"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/enhancement.yml",
    "chars": 2263,
    "preview": "name: 🔧 Enhancement\ndescription: Suggest an improvement to existing functionality\ntitle: \"[Enhancement]: \"\nlabels: [\"typ"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.yml",
    "chars": 2595,
    "preview": "name: ✨ Feature Request\ndescription: Suggest a new feature or capability\ntitle: \"[Feature]: \"\nlabels: [\"type: feature\", "
  },
  {
    "path": ".github/ISSUE_TEMPLATE/questions.yml",
    "chars": 1709,
    "preview": "name: ❓ Question\ndescription: Ask a question about using superseedr\ntitle: \"[Question]: \"\nlabels: [\"type: question\", \"tr"
  },
  {
    "path": ".github/dependabot.yml",
    "chars": 754,
    "preview": "version: 2\nupdates:\n  # Monitor the Rust/Cargo ecosystem\n  - package-ecosystem: \"cargo\" \n    # Dependabot looks for Carg"
  },
  {
    "path": ".github/workflows/integration-cluster-cli.yml",
    "chars": 2083,
    "preview": "name: Integration Cluster CLI\n\non:\n  pull_request:\n\njobs:\n  rust_checks:\n    name: Rust Checks\n    runs-on: ubuntu-lates"
  },
  {
    "path": ".github/workflows/integration-interop.yml",
    "chars": 2607,
    "preview": "name: Integration Interop\n\non:\n  pull_request:\n\njobs:\n  rust_checks:\n    name: Rust Checks\n    runs-on: ubuntu-latest\n\n "
  },
  {
    "path": ".github/workflows/nightly.yml",
    "chars": 2083,
    "preview": "name: Nightly Fuzzing\n\non:\n  schedule:\n    # Runs at 02:00 UTC every day\n    - cron: '0 2 * * *'\n  # Allows you to click"
  },
  {
    "path": ".github/workflows/rust.yml",
    "chars": 20042,
    "preview": "name: Rust\n\non:\n  push:\n    branches: [ \"main\" ]\n    tags:\n      - 'v*'\n  pull_request:\n    branches: [ \"main\" ]\n  workf"
  },
  {
    "path": ".gitignore",
    "chars": 819,
    "preview": "# --- Superseedr Config ---\n# Ignore the local environment file\n.env\n\n\n# --- local temp---\ntmp\n*.tmp\nlogs/\n*.log\n*.lock\n"
  },
  {
    "path": ".gluetun.env.example",
    "chars": 1810,
    "preview": "# This is an example file.\n# To use, copy this file to 'gluetun.env' in this same directory and fill in your values.\n# F"
  },
  {
    "path": "AGENTS.md",
    "chars": 91,
    "preview": "Don’t use real copyrighted titles/brands in tests, fixtures, screenshots, or mock UI text.\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "chars": 5481,
    "preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participa"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 12699,
    "preview": "# Contributing to superseedr\n\nThank you for your interest in helping improve superseedr!\n\nYou do not need programming ex"
  },
  {
    "path": "Cargo.toml",
    "chars": 2524,
    "preview": "# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\n[package]\nname ="
  },
  {
    "path": "Dockerfile",
    "chars": 2118,
    "preview": "# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\n# syntax=docker/"
  },
  {
    "path": "LICENSE",
    "chars": 35149,
    "preview": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free "
  },
  {
    "path": "README.md",
    "chars": 26278,
    "preview": "<picture>\n  <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://raw.githubusercontent.com/Jagalite/superseedr-a"
  },
  {
    "path": "agentic_plans/cargo_dependency_assessment_2026-03-12.md",
    "chars": 12562,
    "preview": "# Cargo Dependency Assessment\n\n## Summary\nThis note evaluates every direct dependency in `Cargo.toml` with three questio"
  },
  {
    "path": "agentic_plans/cli_control_status_testing.md",
    "chars": 37766,
    "preview": "# Shared-Config CLI Feature Validation Matrix: codex/unified-config\n\n## Purpose\n\nThis is a focused validation plan for t"
  },
  {
    "path": "agentic_plans/cli_shared_config_agent_validation_plan_2026-03-19.md",
    "chars": 17404,
    "preview": "# CLI And Shared Config Agent Validation Plan\n\n## Summary\nUse an AI agent to run an end-to-end validation sweep for the "
  },
  {
    "path": "agentic_plans/client_diagnostics_full_implementation_plan_2026-05-01.md",
    "chars": 11786,
    "preview": "# Full Client Diagnostics Implementation Plan\n\nDate: 2026-05-01\n\n## Purpose\n\nReplace scattered developer-only tracing sw"
  },
  {
    "path": "agentic_plans/dht_global_planner_budget_plan_2026-04-24.md",
    "chars": 10232,
    "preview": "# DHT Global Planner Budget Plan\n\n## Summary\n\nThe next DHT scheduler step should move from \"each torrent eventually beco"
  },
  {
    "path": "agentic_plans/dht_resumable_crawls_plan_2026-04-19.md",
    "chars": 12392,
    "preview": "# DHT Resumable Crawls Plan\n\n## Summary\n\nThis plan proposes moving DHT peer discovery from the current \"start a full loo"
  },
  {
    "path": "agentic_plans/dht_soak_keep_after_discard_2026-04-23.md",
    "chars": 1231,
    "preview": "# DHT Soak Follow-Up: Changes To Keep After Instrumentation Discard\n\nContext: before adding the 15-hour soak instrumenta"
  },
  {
    "path": "agentic_plans/integration_harness_plan.md",
    "chars": 3411,
    "preview": "# Dockerized Integration Harness (Phase 1)\n\n## Summary\nBuild a Python `pytest` harness that runs in Docker locally and i"
  },
  {
    "path": "agentic_plans/integrity_scheduler_plan_2026-03-03.md",
    "chars": 15111,
    "preview": "# No-Config Integrity Scheduler\n\n## Summary\nReplace the current fixed-interval full probe sweep with a dedicated integri"
  },
  {
    "path": "agentic_plans/layered_shared_config_plan_2026-03-13.md",
    "chars": 10017,
    "preview": "# Layered Shared Config Mode\n\n## Summary\nCreate an opt-in shared-config mode behind `SUPERSEEDR_SHARED_CONFIG_DIR` while"
  },
  {
    "path": "agentic_plans/multi_instance_zero_config_scaling_plan_2026-03-12.md",
    "chars": 10730,
    "preview": "# Zero-Config Multi-Instance Scaling\n\n## Summary\nExplore a future Superseedr feature where multiple instances can cooper"
  },
  {
    "path": "agentic_plans/network_activity_chart_panel_expansion_plan_2026-03-05.md",
    "chars": 4419,
    "preview": "# Expand Activity Chart Panel With Multi-View Modes + Persisted Torrent Overlay\n\n## Summary\nAdd a new chart-view layer o"
  },
  {
    "path": "agentic_plans/network_history_persistence_async_restore_plan_2026-02-24.md",
    "chars": 5462,
    "preview": "# Network History Persistence Plan (Async Restore + Dirty Writes)\n\n## Summary\nImplement simple file-based persistence fo"
  },
  {
    "path": "agentic_plans/non_aligned_piece_local_refactor_plan.md",
    "chars": 8689,
    "preview": "# Non-Aligned Piece-Local Scheduling Refactor Plan\n\n## Status Snapshot (2026-02-10)\n### Completed in current branch\n1. A"
  },
  {
    "path": "agentic_plans/rss_tui_selection_implementation_plan.md",
    "chars": 3986,
    "preview": "# Superseedr TUI RSS Implementation Plan (Progress Update)\n\n## Status Summary (as of current `rss` branch)\nThis document"
  },
  {
    "path": "agentic_plans/runtime_scalability_cleanup_plan_2026-03-12.md",
    "chars": 10300,
    "preview": "# Runtime Scalability Cleanup\n\n## Summary\nTrack a set of incremental runtime and persistence optimizations that improve "
  },
  {
    "path": "agentic_plans/startup_churn_cpu_reimplementation_plan_2026-03-01.md",
    "chars": 6300,
    "preview": "# Startup + Churn CPU Reimplementation Plan\n\n## Status Snapshot (2026-03-01)\nThis plan captures the two exploratory opti"
  },
  {
    "path": "agentic_plans/state_fuzz_harness_disconnect_cleanup_handoff_2026-02-13.md",
    "chars": 6093,
    "preview": "# State Fuzz Harness Handoff: Disconnect/Cleanup Fidelity + Remaining Liveness Bug\n\n## Owner Directive (2026-02-14)\n- **"
  },
  {
    "path": "agentic_plans/system_health_prober_plan_2026-03-27.md",
    "chars": 6998,
    "preview": "# System Health Prober Plan\n\n## Summary\nAdd a runtime system health prober alongside the existing torrent integrity prob"
  },
  {
    "path": "agentic_plans/terminal_paste_fallback_plan_2026-03-10.md",
    "chars": 2521,
    "preview": "# Terminal Paste Fallback Plan (Normal Screen, Clean Baseline)\n\n## Summary\n- Add a Normal-screen paste-burst fallback so"
  },
  {
    "path": "agentic_plans/torrent_metadata_write_hardening_plan_2026-04-16.md",
    "chars": 6057,
    "preview": "# Torrent Metadata Write Hardening Plan\n\n## Summary\n`torrent_metadata.toml` is not primary configuration, but today star"
  },
  {
    "path": "agentic_plans/torrent_remove_delete_lifecycle_plan_2026-03-02.md",
    "chars": 8818,
    "preview": "# Torrent Remove/Delete Lifecycle Plan\n\n## Status Snapshot (2026-03-02)\nThis plan captures a release-deferred cleanup fo"
  },
  {
    "path": "agentic_plans/torrent_restart_revalidate_refactor_plan_2026-03-20.md",
    "chars": 4354,
    "preview": "# Per-Torrent Restart/Revalidate Refactor\n\n## Summary\nRefactor torrent lifecycle so app-level restart can stop a single "
  },
  {
    "path": "agentic_plans/tui_architecture_refactor.md",
    "chars": 17440,
    "preview": "# TUI Refactor Plan: Screen-Oriented Architecture With Shared Context and Safe Phased Migration\n\n## Summary\nRefactor `sr"
  },
  {
    "path": "agentic_plans/tui_particle_theme_layers_plan_2026-02-25.md",
    "chars": 4432,
    "preview": "# TUI Particle Theme Layers Plan (`Flowers`)\n\n## Summary\nAdd a new full theme with animated particle effects rendered as"
  },
  {
    "path": "agentic_plans/tui_phase0_baseline.md",
    "chars": 2992,
    "preview": "# TUI Phase 0 Baseline: Transition Table and State Ownership Matrix\n\nThis baseline is a reference for parity checks duri"
  },
  {
    "path": "agentic_plans/tui_phase0_manual_parity_checklist.md",
    "chars": 2016,
    "preview": "# TUI Phase 0 Manual Parity Checklist\n\nRun this checklist before/after each refactor slice. Record pass/fail notes.\n\n## "
  },
  {
    "path": "agentic_plans/v2_identity_lossiness_review_2026-04-14.md",
    "chars": 5721,
    "preview": "# V2 Identity Lossiness Review\n\n## Summary\nThis note captures the review and discovery work around the current 20-byte `"
  },
  {
    "path": "agentic_prompts/changelog.md",
    "chars": 2585,
    "preview": "# Role\nYou are an expert Product Marketing Manager and Technical Writer. Your goal is to generate a clean, engaging, and"
  },
  {
    "path": "agentic_prompts/comments.md",
    "chars": 781,
    "preview": "I am preparing to merge my branch to main. Analyze all new comments in this branch.\n\nDo not commit your new changes - al"
  },
  {
    "path": "agentic_prompts/maintenance_task.md",
    "chars": 1125,
    "preview": "I am preparing to merge my branch to main. Please perform the following 4 tasks sequentially, strictly adhering to the c"
  },
  {
    "path": "agentic_prompts/review.md",
    "chars": 372,
    "preview": "I am preparing to merge my branch to main. \n\nReivew the code changes and see if they are effective in their intent.\n\nGen"
  },
  {
    "path": "agentic_testing/results.json",
    "chars": 9646,
    "preview": "[\n  {\n    \"phase\": \"Phase 0: Environment Preparation\",\n    \"status\": \"PASS\",\n    \"commands\": [\n      \"cargo build\",\n    "
  },
  {
    "path": "agentic_testing/summary.md",
    "chars": 3475,
    "preview": "# CLI And Shared Config Validation Summary\n\n## Overall Verdict\n\nCompleted all planned phases with evidence capture.\n\n- P"
  },
  {
    "path": "docker-compose.yml",
    "chars": 1558,
    "preview": "# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nservices:\n  glue"
  },
  {
    "path": "docs/CHANGELOG.md",
    "chars": 23981,
    "preview": "# Changelog\n\n## Release v1.0.7\n### New Features\n- **Synthetic Benchmark Harness**: Added feature-gated benchmark tooling"
  },
  {
    "path": "docs/FAQ.md",
    "chars": 2418,
    "preview": "# Frequently Asked Questions (FAQ)\n\n## General\n\n### What is Superseedr?\n\nSuperseedr is a command-line BitTorrent client "
  },
  {
    "path": "docs/ROADMAP.md",
    "chars": 7339,
    "preview": "# Roadmap\nThis document is a high-level guide to the direction of superseedr.\nIt is intentionally stable but flexible as"
  },
  {
    "path": "docs/architecture.md",
    "chars": 5608,
    "preview": "# Superseedr Architecture\n\n## Overview\nSuperseedr is a high-performance, asynchronous BitTorrent client featuring a term"
  },
  {
    "path": "docs/cli.md",
    "chars": 10495,
    "preview": "# CLI Guide\n\n## What The CLI Is For\n\nThe Superseedr CLI is the main user-facing control surface for scripting,\nautomatio"
  },
  {
    "path": "docs/dht-ownership-plan.md",
    "chars": 9735,
    "preview": "# DHT Ownership Plan\n\n## Goal\nReplace the external `mainline` dependency with a first-party DHT implementation that fits"
  },
  {
    "path": "docs/integration-e2e-automation-plan.md",
    "chars": 7335,
    "preview": "# Integration E2E Automation Plan\n\n## Goal\nBuild a repeatable, one-command end-to-end test pipeline that:\n- Generates de"
  },
  {
    "path": "docs/integration-harness.md",
    "chars": 1903,
    "preview": "# Integration Harness\n\n## Overview\nThis harness runs end-to-end interoperability tests in Docker.\n\nPhase 1 scope is `sup"
  },
  {
    "path": "docs/shared-config.md",
    "chars": 14083,
    "preview": "# Shared Config Cluster Mode\n\n## What It Is\n\nShared config mode lets multiple Superseedr nodes participate in one cluste"
  },
  {
    "path": "docs/synthetic-benchmark.md",
    "chars": 8992,
    "preview": "# Synthetic Benchmark And Load Testing\n\n## Overview\n\nSuperseedr has a feature-gated synthetic load harness for local per"
  },
  {
    "path": "docs/tuning.md",
    "chars": 2896,
    "preview": "# Tuning Design Notes\n\n## Purpose\n\nDocument the self-tuning control loop and define a refactor path that improves algori"
  },
  {
    "path": "integration_tests/README.md",
    "chars": 6955,
    "preview": "# Integration Tests Harness\n\nDockerized integration harness for cross-client torrent interoperability.\n\nCurrent stable s"
  },
  {
    "path": "integration_tests/__init__.py",
    "chars": 43,
    "preview": "\"\"\"Integration test assets and harness.\"\"\"\n"
  },
  {
    "path": "integration_tests/cluster_cli/__init__.py",
    "chars": 54,
    "preview": "\"\"\"CLI/cluster integration harness for Superseedr.\"\"\"\n"
  },
  {
    "path": "integration_tests/cluster_cli/fixtures/manifest.json",
    "chars": 1729,
    "preview": "{\n  \"fixtures\": [\n    {\n      \"id\": \"single_4k_v1\",\n      \"mode\": \"v1\",\n      \"torrent\": \"integration_tests/torrents/v1/"
  },
  {
    "path": "integration_tests/cluster_cli/manifest.py",
    "chars": 3982,
    "preview": "from __future__ import annotations\n\nimport hashlib\nimport json\nfrom dataclasses import dataclass\nfrom pathlib import Pat"
  },
  {
    "path": "integration_tests/cluster_cli/run.py",
    "chars": 148,
    "preview": "from __future__ import annotations\n\nfrom integration_tests.cluster_cli.runner import main\n\n\nif __name__ == \"__main__\":\n "
  },
  {
    "path": "integration_tests/cluster_cli/runner.py",
    "chars": 25111,
    "preview": "from __future__ import annotations\n\nimport argparse\nimport json\nimport os\nimport shutil\nimport subprocess\nimport time\nfr"
  },
  {
    "path": "integration_tests/cluster_cli/tests/test_cluster_cli.py",
    "chars": 781,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\n\nimport pytest\n\nfrom integration_tests.cluster_cli.runne"
  },
  {
    "path": "integration_tests/cluster_cli/tests/test_manifest.py",
    "chars": 741,
    "preview": "from __future__ import annotations\n\nfrom integration_tests.cluster_cli.manifest import (\n    load_fixture_manifest,\n    "
  },
  {
    "path": "integration_tests/docker/docker-compose.cluster-cli.yml",
    "chars": 2401,
    "preview": "services:\n  cluster_host_a:\n    build:\n      context: ../..\n      dockerfile: Dockerfile\n    image: superseedr:cluster-c"
  },
  {
    "path": "integration_tests/docker/docker-compose.interop.yml",
    "chars": 3012,
    "preview": "services:\n  tracker:\n    image: python:3.12-alpine\n    working_dir: /app\n    command: [\"python\", \"tracker.py\"]\n    volum"
  },
  {
    "path": "integration_tests/docker/tracker.py",
    "chars": 4544,
    "preview": "#!/usr/bin/env python3\n\"\"\"Minimal BitTorrent HTTP tracker for local integration tests.\"\"\"\n\nfrom __future__ import annota"
  },
  {
    "path": "integration_tests/harness/__init__.py",
    "chars": 35,
    "preview": "\"\"\"Integration harness package.\"\"\"\n"
  },
  {
    "path": "integration_tests/harness/clients/__init__.py",
    "chars": 47,
    "preview": "\"\"\"Client adapters for integration harness.\"\"\"\n"
  },
  {
    "path": "integration_tests/harness/clients/base.py",
    "chars": 666,
    "preview": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom pathlib import Path\n\n\nclass ClientAdapter(A"
  },
  {
    "path": "integration_tests/harness/clients/qbittorrent.py",
    "chars": 10591,
    "preview": "from __future__ import annotations\n\nimport http.cookiejar\nimport json\nimport re\nimport time\nimport urllib.parse\nimport u"
  },
  {
    "path": "integration_tests/harness/clients/superseedr.py",
    "chars": 2669,
    "preview": "from __future__ import annotations\n\nimport json\nimport time\nfrom pathlib import Path\n\nfrom integration_tests.harness.cli"
  },
  {
    "path": "integration_tests/harness/clients/transmission.py",
    "chars": 6919,
    "preview": "from __future__ import annotations\n\nimport base64\nimport json\nimport time\nimport urllib.request\nfrom pathlib import Path"
  },
  {
    "path": "integration_tests/harness/config.py",
    "chars": 1359,
    "preview": "from __future__ import annotations\n\nimport os\nfrom dataclasses import dataclass\nfrom pathlib import Path\n\n\nROOT = Path(_"
  },
  {
    "path": "integration_tests/harness/docker_ctl.py",
    "chars": 1745,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\nfrom pathlib import Path\nfrom typing import Iterable\n\n\nc"
  },
  {
    "path": "integration_tests/harness/manifest.py",
    "chars": 2123,
    "preview": "from __future__ import annotations\n\nimport hashlib\nfrom dataclasses import dataclass\nfrom pathlib import Path\n\nV1_ONLY_E"
  },
  {
    "path": "integration_tests/harness/pytest.ini",
    "chars": 283,
    "preview": "[pytest]\nmarkers =\n    interop: dockerized interoperability tests\n    interop_superseedr: tests for superseedr-to-supers"
  },
  {
    "path": "integration_tests/harness/run.py",
    "chars": 3409,
    "preview": "from __future__ import annotations\n\nimport argparse\nimport json\nimport sys\nimport time\nfrom pathlib import Path\n\nfrom in"
  },
  {
    "path": "integration_tests/harness/scenarios/__init__.py",
    "chars": 25,
    "preview": "\"\"\"Interop scenarios.\"\"\"\n"
  },
  {
    "path": "integration_tests/harness/scenarios/qbittorrent_to_superseedr.py",
    "chars": 14698,
    "preview": "from __future__ import annotations\n\nimport json\nimport os\nimport shutil\nimport socket\nimport subprocess\nimport time\nfrom"
  },
  {
    "path": "integration_tests/harness/scenarios/superseedr_to_qbittorrent.py",
    "chars": 12377,
    "preview": "from __future__ import annotations\n\nimport json\nimport os\nimport socket\nimport shutil\nimport subprocess\nimport time\nfrom"
  },
  {
    "path": "integration_tests/harness/scenarios/superseedr_to_superseedr.py",
    "chars": 10998,
    "preview": "from __future__ import annotations\n\nimport json\nimport shutil\nimport subprocess\nimport time\nfrom dataclasses import data"
  },
  {
    "path": "integration_tests/harness/scenarios/superseedr_to_transmission.py",
    "chars": 13033,
    "preview": "from __future__ import annotations\n\nimport json\nimport socket\nimport shutil\nimport subprocess\nimport time\nfrom dataclass"
  },
  {
    "path": "integration_tests/harness/scenarios/transmission_to_superseedr.py",
    "chars": 13022,
    "preview": "from __future__ import annotations\n\nimport json\nimport shutil\nimport socket\nimport subprocess\nimport time\nfrom dataclass"
  },
  {
    "path": "integration_tests/harness/tests/test_manifest.py",
    "chars": 1176,
    "preview": "from __future__ import annotations\n\nfrom pathlib import Path\n\nfrom integration_tests.harness.manifest import build_expec"
  },
  {
    "path": "integration_tests/harness/tests/test_qbittorrent_auth_interop.py",
    "chars": 3510,
    "preview": "from __future__ import annotations\n\nimport os\nimport socket\nimport time\nfrom pathlib import Path\n\nimport pytest\n\nfrom in"
  },
  {
    "path": "integration_tests/harness/tests/test_qbittorrent_to_superseedr_interop.py",
    "chars": 697,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\n\nimport pytest\n\n\n@pytest.mark.interop\n@pytest.mark.inter"
  },
  {
    "path": "integration_tests/harness/tests/test_stub_adapters.py",
    "chars": 9706,
    "preview": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import Any, cast\n\nimport pytest\n\nfrom integrati"
  },
  {
    "path": "integration_tests/harness/tests/test_superseedr_interop.py",
    "chars": 680,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\n\nimport pytest\n\n\n@pytest.mark.interop\n@pytest.mark.inter"
  },
  {
    "path": "integration_tests/harness/tests/test_superseedr_to_qbittorrent_interop.py",
    "chars": 697,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\n\nimport pytest\n\n\n@pytest.mark.interop\n@pytest.mark.inter"
  },
  {
    "path": "integration_tests/harness/tests/test_superseedr_to_transmission_interop.py",
    "chars": 684,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\n\nimport pytest\n\n\n@pytest.mark.interop\n@pytest.mark.inter"
  },
  {
    "path": "integration_tests/harness/tests/test_transmission_auth_interop.py",
    "chars": 3757,
    "preview": "from __future__ import annotations\n\nimport os\nimport socket\nimport time\n\nimport pytest\n\nfrom integration_tests.harness.c"
  },
  {
    "path": "integration_tests/harness/tests/test_transmission_to_superseedr_interop.py",
    "chars": 684,
    "preview": "from __future__ import annotations\n\nimport os\nimport subprocess\n\nimport pytest\n\n\n@pytest.mark.interop\n@pytest.mark.inter"
  },
  {
    "path": "integration_tests/run_cluster_cli.sh",
    "chars": 136,
    "preview": "#!/usr/bin/env bash\nset -euo pipefail\n\nexport RUN_CLUSTER_CLI=\"${RUN_CLUSTER_CLI:-1}\"\npython3 -m integration_tests.clust"
  },
  {
    "path": "integration_tests/run_interop.sh",
    "chars": 276,
    "preview": "#!/usr/bin/env bash\nset -euo pipefail\n\nMODE=\"${1:-all}\"\nSCENARIO=\"${2:-${INTEROP_SCENARIO:-superseedr_to_superseedr}}\"\nT"
  },
  {
    "path": "integration_tests/settings.toml",
    "chars": 6840,
    "preview": "client_id = \"-SS1000-7bpSAwkTK6kP\"\nclient_port = 6681\nlifetime_downloaded = 0\nlifetime_uploaded = 0\nprivate_client = fal"
  },
  {
    "path": "integration_tests/torrents/hybrid/single_16k.bin.torrent",
    "chars": 280,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768477e4:infod9:file treed14:single_16k.bind0:d6:lengthi16"
  },
  {
    "path": "integration_tests/torrents/hybrid/single_4k.bin.torrent",
    "chars": 276,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768458e4:infod9:file treed13:single_4k.bind0:d6:lengthi409"
  },
  {
    "path": "integration_tests/torrents/hybrid/single_8k.bin.torrent",
    "chars": 273,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768467e4:infod9:file treed13:single_8k.bind0:d6:lengthi819"
  },
  {
    "path": "integration_tests/torrents/v1/multi_file.torrent",
    "chars": 290,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768181e4:infod5:filesld6:lengthi4096e4:pathl14:multi_a_4k."
  },
  {
    "path": "integration_tests/torrents/v1/nested.torrent",
    "chars": 369,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768423e4:infod5:filesld6:lengthi16384e4:pathl14:nested_16k"
  },
  {
    "path": "integration_tests/torrents/v1/single_16k.bin.torrent",
    "chars": 162,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768150e4:infod6:lengthi16384e4:name14:single_16k.bin12:pie"
  },
  {
    "path": "integration_tests/torrents/v1/single_25k.bin.torrent",
    "chars": 177,
    "preview": "d10:created by28:superseedr-fixture-generator13:creation datei1770770664e4:infod6:lengthi25600e4:name14:single_25k.bin12"
  },
  {
    "path": "integration_tests/torrents/v1/single_4k.bin.torrent",
    "chars": 156,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770767923e4:infod6:lengthi4096e4:name13:single_4k.bin12:piece"
  },
  {
    "path": "integration_tests/torrents/v1/single_8k.bin.torrent",
    "chars": 157,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768129e4:infod6:lengthi8192e4:name13:single_8k.bin12:piece"
  },
  {
    "path": "integration_tests/torrents/v2/nested.torrent",
    "chars": 484,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768630e4:infod9:file treed14:nested_16k.bind0:d6:lengthi16"
  },
  {
    "path": "integration_tests/torrents/v2/single_16k.bin.torrent",
    "chars": 241,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768606e4:infod9:file treed14:single_16k.bind0:d6:lengthi16"
  },
  {
    "path": "integration_tests/torrents/v2/single_4k.bin.torrent",
    "chars": 242,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768589e4:infod9:file treed13:single_4k.bind0:d6:lengthi409"
  },
  {
    "path": "integration_tests/torrents/v2/single_8k.bin.torrent",
    "chars": 238,
    "preview": "d10:created by24:qBittorrent v5.2.0alpha113:creation datei1770768599e4:infod9:file treed13:single_8k.bind0:d6:lengthi819"
  },
  {
    "path": "packaging/windows/wix-template.xml",
    "chars": 1606,
    "preview": "<Wix xmlns=\"http://schemas.microsoft.com/wix/2006/wi\"\n     xmlns:util=\"http://schemas.microsoft.com/wix/UtilExtension\">\n"
  },
  {
    "path": "proptest-regressions/networking/session.txt",
    "chars": 446,
    "preview": "# Seeds for failure cases proptest has generated in the past. It is\n# automatically read and these particular cases re-r"
  },
  {
    "path": "proptest-regressions/torrent_manager/state.txt",
    "chars": 30190,
    "preview": "# Seeds for failure cases proptest has generated in the past. It is\n# automatically read and these particular cases re-r"
  },
  {
    "path": "pytest.ini",
    "chars": 355,
    "preview": "[pytest]\nmarkers =\n    interop: dockerized interoperability tests\n    interop_superseedr: tests for superseedr-to-supers"
  },
  {
    "path": "requirements-integration.txt",
    "chars": 50,
    "preview": "pytest>=8.0,<9.0\ntomli-w>=1.0,<2.0\ntorf>=4.3,<5.0\n"
  },
  {
    "path": "rust-toolchain.toml",
    "chars": 66,
    "preview": "[toolchain]\nchannel = \"1.95.0\"\ncomponents = [\"clippy\", \"rustfmt\"]\n"
  },
  {
    "path": "scripts/build_osx_universal_pkg.sh",
    "chars": 10382,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nset"
  },
  {
    "path": "scripts/clear_integration_output.py",
    "chars": 2806,
    "preview": "#!/usr/bin/env python3\n\"\"\"Clear integration test output files while preserving output directory layout.\"\"\"\n\nfrom __futur"
  },
  {
    "path": "scripts/docker_build.sh",
    "chars": 166,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\ndoc"
  },
  {
    "path": "scripts/extract_merkle.py",
    "chars": 397,
    "preview": "\nimport bencodepy\n\nwith open('/xxx.torrent', 'rb') as f:\n    torrent_data = bencodepy.decode(f.read())\n\nfile_root = byte"
  },
  {
    "path": "scripts/file_descriptors_printout.sh",
    "chars": 5447,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 Jaga Tranvo \n# SPDX-License-Identifier: GPL-3.0-or-later\n\n# ================"
  },
  {
    "path": "scripts/generate_integration_bins.py",
    "chars": 4590,
    "preview": "#!/usr/bin/env python3\n\"\"\"Generate deterministic small binary fixtures for integration tests.\"\"\"\n\nfrom __future__ import"
  },
  {
    "path": "scripts/generate_integration_torrents.py",
    "chars": 8580,
    "preview": "#!/usr/bin/env python3\n\"\"\"Generate normalized integration torrent fixtures.\n\nBehavior:\n- v1 torrents are regenerated fro"
  },
  {
    "path": "scripts/get_process_FDs.sh",
    "chars": 145,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nlso"
  },
  {
    "path": "scripts/git_tag.sh",
    "chars": 723,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\n# E"
  },
  {
    "path": "scripts/grep_io_errors.sh",
    "chars": 154,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\ntai"
  },
  {
    "path": "scripts/hash.py",
    "chars": 2080,
    "preview": "import hashlib\nimport os\n\n# --- Configuration ---\nFILE_PATH = '/xxx.pdf'  # Ensure this matches your local filename\nFILE"
  },
  {
    "path": "scripts/private_build.sh",
    "chars": 151,
    "preview": "#!/bin/bash\n\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-later\n\ncar"
  },
  {
    "path": "scripts/summarize_dht_soak.py",
    "chars": 17768,
    "preview": "#!/usr/bin/env python3\n# SPDX-FileCopyrightText: 2025 The superseedr Contributors\n# SPDX-License-Identifier: GPL-3.0-or-"
  },
  {
    "path": "scripts/test-state-simulations.sh",
    "chars": 50,
    "preview": "PROPTEST_CASES=1000000 cargo test state --release\n"
  },
  {
    "path": "scripts/validate_integration_output.py",
    "chars": 4511,
    "preview": "#!/usr/bin/env python3\n\"\"\"Cross-validate integration test outputs against canonical test_data files.\"\"\"\n\nfrom __future__"
  },
  {
    "path": "src/app.rs",
    "chars": 487344,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::fs;\nu"
  },
  {
    "path": "src/command.rs",
    "chars": 3638,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::fmt;\n"
  },
  {
    "path": "src/config.rs",
    "chars": 186357,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse chrono::{D"
  },
  {
    "path": "src/control_service.rs",
    "chars": 40678,
    "preview": "// SPDX-FileCopyrightText: 2026 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse crate::app"
  },
  {
    "path": "src/dht/anomaly.rs",
    "chars": 724,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\n#[derive(Debug"
  },
  {
    "path": "src/dht/bep42.rs",
    "chars": 5234,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/bootstrap.rs",
    "chars": 3412,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::rou"
  },
  {
    "path": "src/dht/health.rs",
    "chars": 4351,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::pee"
  },
  {
    "path": "src/dht/inbound.rs",
    "chars": 23216,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::krp"
  },
  {
    "path": "src/dht/krpc.rs",
    "chars": 31564,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/lookup.rs",
    "chars": 44258,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::bep"
  },
  {
    "path": "src/dht/mod.rs",
    "chars": 84966,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\n#![allow(dead_"
  },
  {
    "path": "src/dht/peer_store.rs",
    "chars": 6307,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/persist.rs",
    "chars": 10808,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::rou"
  },
  {
    "path": "src/dht/public_addr.rs",
    "chars": 3266,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/routing.rs",
    "chars": 28404,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::bep"
  },
  {
    "path": "src/dht/scheduler.rs",
    "chars": 30268,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/service/api.rs",
    "chars": 22924,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::*;\n"
  },
  {
    "path": "src/dht/service/api_tests.rs",
    "chars": 4348,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[tokio::test]\nasync fn dht_service_new_falls_back_to_disabled_when_initial_r"
  },
  {
    "path": "src/dht/service/command_tests.rs",
    "chars": 6751,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[test]\nfn dht_runtime_command_model_reduces_runtime_commands_only() {\n    le"
  },
  {
    "path": "src/dht/service/commands.rs",
    "chars": 8780,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::colle"
  },
  {
    "path": "src/dht/service/config.rs",
    "chars": 1759,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse serde::{De"
  },
  {
    "path": "src/dht/service/driver.rs",
    "chars": 11137,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::time:"
  },
  {
    "path": "src/dht/service/driver_tests.rs",
    "chars": 19758,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[tokio::test]\nasync fn disabled_service_command_loop_delivers_peers_and_hono"
  },
  {
    "path": "src/dht/service/effects.rs",
    "chars": 30126,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::*;\n"
  },
  {
    "path": "src/dht/service/lifecycle.rs",
    "chars": 4872,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::time:"
  },
  {
    "path": "src/dht/service/lifecycle_tests.rs",
    "chars": 3900,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[test]\nfn dht_lifecycle_model_startup_bootstrap_runs_only_when_due_and_idle("
  },
  {
    "path": "src/dht/service/monitor.rs",
    "chars": 1407,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::dht"
  },
  {
    "path": "src/dht/service/monitor_tests.rs",
    "chars": 400,
    "preview": "use super::monitor::*;\n\n#[test]\nfn action_effect_snapshot_records_reduction_shape() {\n    let snapshot =\n        action_"
  },
  {
    "path": "src/dht/service/planner/drain.rs",
    "chars": 20203,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::sup"
  },
  {
    "path": "src/dht/service/planner/drain_tests.rs",
    "chars": 33859,
    "preview": "use super::super::*;\nuse super::test_support::*;\nuse super::*;\n\n#[test]\nfn demand_crawl_state_reuses_across_class_change"
  },
  {
    "path": "src/dht/service/planner/invariant_tests.rs",
    "chars": 5666,
    "preview": "use super::super::*;\nuse super::test_support::*;\nuse super::*;\n\n#[test]\nfn demand_planner_invariants_accept_normal_activ"
  },
  {
    "path": "src/dht/service/planner/invariants.rs",
    "chars": 13084,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::*;\n"
  },
  {
    "path": "src/dht/service/planner/reducer_tests.rs",
    "chars": 18397,
    "preview": "use super::super::*;\nuse super::test_support::*;\nuse super::*;\nuse proptest::prelude::*;\n\n#[test]\nfn demand_planner_plan"
  },
  {
    "path": "src/dht/service/planner/replay_tests.rs",
    "chars": 27510,
    "preview": "use super::super::*;\nuse super::test_support::*;\nuse super::*;\n\n#[derive(Debug)]\nstruct PlannerReplay {\n    base: Instan"
  },
  {
    "path": "src/dht/service/planner/selection.rs",
    "chars": 17026,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::sup"
  },
  {
    "path": "src/dht/service/planner/selection_tests.rs",
    "chars": 58483,
    "preview": "use super::super::*;\nuse super::test_support::*;\nuse super::*;\nuse proptest::prelude::*;\n\n#[test]\nfn demand_lookup_plan_"
  },
  {
    "path": "src/dht/service/planner/test_support.rs",
    "chars": 23670,
    "preview": "#![allow(dead_code)]\n\nuse super::super::*;\nuse super::*;\nuse proptest::prelude::*;\npub(super) fn peer(addr: &str) -> Soc"
  },
  {
    "path": "src/dht/service/planner/types.rs",
    "chars": 54654,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::sup"
  },
  {
    "path": "src/dht/service/planner.rs",
    "chars": 49732,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::*;\n"
  },
  {
    "path": "src/dht/service/replay_tests.rs",
    "chars": 13090,
    "preview": "use super::test_support::*;\nuse super::*;\n\nstruct ServiceReplay {\n    base: Instant,\n    now: Instant,\n    state: DhtSer"
  },
  {
    "path": "src/dht/service/runtime.rs",
    "chars": 18298,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::path:"
  },
  {
    "path": "src/dht/service/runtime_command_replay_tests.rs",
    "chars": 6780,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[derive(Default)]\nstruct RuntimeCommandReplay {\n    transcript: Vec<String>,"
  },
  {
    "path": "src/dht/service/runtime_effect_tests.rs",
    "chars": 17471,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[tokio::test]\nasync fn start_get_peers_lookup_without_runtime_returns_empty_"
  },
  {
    "path": "src/dht/service/state/demand_command.rs",
    "chars": 12962,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::net::"
  },
  {
    "path": "src/dht/service/state/mod.rs",
    "chars": 2193,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::net::"
  },
  {
    "path": "src/dht/service/state/service.rs",
    "chars": 4433,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::sup"
  },
  {
    "path": "src/dht/service/state_tests.rs",
    "chars": 15582,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[test]\nfn dht_service_model_reconfigure_success_updates_state_and_emits_foll"
  },
  {
    "path": "src/dht/service/status.rs",
    "chars": 7985,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::colle"
  },
  {
    "path": "src/dht/service/status_tests.rs",
    "chars": 3487,
    "preview": "use super::test_support::*;\nuse super::*;\nuse tokio::sync::watch;\n\n#[test]\nfn recent_unique_peers_dedupes_and_expires_en"
  },
  {
    "path": "src/dht/service/subscriber_tests.rs",
    "chars": 4569,
    "preview": "use super::test_support::*;\nuse super::*;\n\n#[test]\nfn demand_subscriber_registry_registers_and_unregisters_once() {\n    "
  },
  {
    "path": "src/dht/service/subscribers.rs",
    "chars": 5448,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse std::colle"
  },
  {
    "path": "src/dht/service/test_support.rs",
    "chars": 5280,
    "preview": "#![allow(dead_code)]\n\nuse super::*;\n\npub(super) fn peer(addr: &str) -> SocketAddr {\n    addr.parse().expect(\"valid socke"
  },
  {
    "path": "src/dht/service.rs",
    "chars": 8432,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::loo"
  },
  {
    "path": "src/dht/test_support.rs",
    "chars": 584,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/token.rs",
    "chars": 4344,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::typ"
  },
  {
    "path": "src/dht/transport.rs",
    "chars": 19928,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse super::krp"
  },
  {
    "path": "src/dht/types.rs",
    "chars": 8680,
    "preview": "// SPDX-FileCopyrightText: 2025 The superseedr Contributors\n// SPDX-License-Identifier: GPL-3.0-or-later\n\nuse serde::{De"
  }
]

// ... and 78 more files (download for full content)
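Each entry in the manifest above carries the same three keys: `path`, `chars`, and `preview`. As a minimal sketch of how a downstream tool might consume such a manifest — using a hypothetical two-entry excerpt, not the full 278-file list — the entries can be loaded as ordinary JSON and aggregated:

```python
import json

# Hypothetical excerpt of the manifest; key names (path, chars, preview)
# match the entries listed above.
manifest_text = """
[
  {"path": "rust-toolchain.toml", "chars": 66, "preview": "[toolchain]..."},
  {"path": "src/app.rs", "chars": 487344, "preview": "// SPDX..."}
]
"""

entries = json.loads(manifest_text)

# Aggregate the per-file character counts and find the largest file.
total_chars = sum(e["chars"] for e in entries)
largest = max(entries, key=lambda e: e["chars"])

print(f"{len(entries)} files, {total_chars} chars")
print(f"largest: {largest['path']}")
```

The `chars` field makes it easy to budget context-window usage per file before feeding any content to a model.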

About this extraction

This page contains the full source code of the Jagalite/superseedr GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 278 files (4.8 MB), approximately 1.3M tokens, and a symbol index of 4930 functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
