[
  {
    "path": ".cargo/config.toml",
    "content": "[target.armv7-unknown-linux-gnueabihf]\nlinker = \"arm-linux-gnueabihf-gcc\"\n\n[target.armv7-unknown-linux-musleabihf]\nlinker = \"arm-linux-musleabihf-gcc\"\n\n[target.aarch64-unknown-linux-gnu]\nlinker = \"aarch64-linux-gnu-gcc\"\n\n[target.aarch64-unknown-linux-musl]\nlinker = \"aarch64-linux-musl-gcc\"\n"
  },
  {
    "path": ".editorconfig",
    "content": "root = true\n\n[*]\nindent_style = tab\nindent_size = 4\nend_of_line = lf\ncharset = utf-8\ntrim_trailing_whitespace = true\ninsert_final_newline = true\n\n[cli/tests/snapshots/*]\nindent_style = space\ntrim_trailing_whitespace = false\n\n[*.{md,ronn}]\nindent_style = space\nindent_size = 4\n\n[*.{cff,yml}]\nindent_size = 2\nindent_style = space\n"
  },
  {
    "path": ".gitattributes",
    "content": "Cargo.lock merge=binary\ndoc/watchexec.* merge=binary\ncompletions/* merge=binary\n"
  },
  {
    "path": ".github/FUNDING.yml",
    "content": "liberapay: passcod\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Something is wrong\ntitle: ''\nlabels: bug, need-info\nassignees: ''\n\n---\n\nPlease delete this template text before filing, but you _need_ to include the following:\n\n- Watchexec's version\n- The OS you're using\n- A log with `-vvv --log-file` (if it has sensitive info you can email it at felix@passcod.name — do that _after_ filing so you can reference the issue ID)\n- A sample command that you've run that has the issue\n\nThank you\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Something is missing\ntitle: ''\nlabels: feature\nassignees: ''\n\n---\n\n<!-- Please note that this project has a high threshold for changing default behaviour or breaking compatibility. If your feature or change can be done without breaking, present it that way. -->\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\nIf proposing a new CLI option, option names you think would fit.\n\n**Additional context**\nAdd any other context about the feature request here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/regression.md",
    "content": "---\nname: Regression\nabout: Something changed unexpectedly\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**What used to happen**\n\n**What happens now**\n\n**Details**\n- Latest version that worked:\n- Earliest version that doesn't: (don't sweat testing earlier versions if you don't remember or have time, your current version will do)\n- OS:\n- A debug log with `-vvv --log-file`:\n\n```\n```\n\n<!-- You may truncate the log to just the part supporting your report if you're confident the rest is irrelevant. If it contains sensitive information (if you can't reduce/reproduce outside of work you'd rather remain private, you can either redact it or send it by email.) -->\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "# Dependabot dependency version checks / updates\n\nversion: 2\nupdates:\n  - package-ecosystem: \"github-actions\"\n    # Workflow files stored in the\n    # default location of `.github/workflows`\n    directory: \"/\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/cli\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/lib\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/events\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/signals\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/supervisor\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/filterer/ignore\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/filterer/globset\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/bosion\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/ignore-files\"\n    schedule:\n      interval: \"weekly\"\n  - package-ecosystem: \"cargo\"\n    directory: \"/crates/project-origins\"\n    schedule:\n      interval: \"weekly\"\n"
  },
  {
    "path": ".github/workflows/clippy.yml",
    "content": "name: Clippy\n\non:\n  workflow_dispatch:\n  pull_request:\n  push:\n    branches:\n      - main\n    tags-ignore:\n      - \"*\"\n\nenv:\n  CARGO_TERM_COLOR: always\n  CARGO_UNSTABLE_SPARSE_REGISTRY: \"true\"\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.ref || github.run_id }}\n  cancel-in-progress: true\n\njobs:\n  clippy:\n    strategy:\n      fail-fast: false\n      matrix:\n        platform:\n          - ubuntu\n          - windows\n          - macos\n\n    name: Clippy on ${{ matrix.platform }}\n    runs-on: \"${{ matrix.platform }}-latest\"\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install stable --profile minimal --no-self-update --component clippy\n        rustup default stable\n\n    # https://github.com/actions/cache/issues/752\n    - if: ${{ runner.os == 'Windows' }}\n      name: Use GNU tar\n      shell: cmd\n      run: |\n        echo \"Adding GNU tar to PATH\"\n        echo C:\\Program Files\\Git\\usr\\bin>>\"%GITHUB_PATH%\"\n\n    - name: Configure caching\n      uses: actions/cache@v5\n      with:\n        path: |\n          ~/.cargo/registry/index/\n          ~/.cargo/registry/cache/\n          ~/.cargo/git/db/\n          target/\n        key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}\n        restore-keys: |\n          ${{ runner.os }}-cargo-\n\n    - run: cargo clippy\n"
  },
  {
    "path": ".github/workflows/dist-manifest.jq",
    "content": "{\n  dist_version: \"0.0.2\",\n  releases: [{\n    app_name: \"watchexec\",\n    app_version: $version,\n    changelog_title: \"CLI \\($version)\",\n    artifacts: [ $files | split(\"\\n\") | .[] | {\n      name: .,\n      kind: (if (. | test(\"[.](deb|rpm)$\")) then \"installer\" else \"executable-zip\" end),\n      target_triples: (. | [capture(\"watchexec-[^-]+-(?<target>[^.]+)[.].+\").target]),\n      assets: ([[\n        {\n          kind: \"executable\",\n          name: (if (. | test(\"windows\")) then \"watchexec.exe\" else \"watchexec\" end),\n          path: \"\\(\n            capture(\"(?<dir>watchexec-[^-]+-[^.]+)[.].+\").dir\n          )\\(\n            if (. | test(\"windows\")) then \"\\\\watchexec.exe\" else \"/watchexec\" end\n          )\",\n        },\n        (if (. | test(\"[.](deb|rpm)$\")) then null else {kind: \"readme\", name: \"README.md\"} end),\n        (if (. | test(\"[.](deb|rpm)$\")) then null else {kind: \"license\", name: \"LICENSE\"} end)\n      ][] | select(. != null)])\n    } ]\n  }]\n}\n"
  },
  {
    "path": ".github/workflows/release-cli.yml",
    "content": "name: CLI Release\n\non:\n  workflow_dispatch:\n  push:\n    tags:\n      - \"v*.*.*\"\n\nenv:\n  CARGO_TERM_COLOR: always\n  CARGO_UNSTABLE_SPARSE_REGISTRY: \"true\"\n\njobs:\n  info:\n    name: Gather info\n    runs-on: ubuntu-latest\n    outputs:\n      cli_version: ${{ steps.version.outputs.cli_version }}\n    steps:\n      - uses: actions/checkout@v6\n      - name: Extract version\n        id: version\n        shell: bash\n        run: |\n          set -euxo pipefail\n\n          version=$(grep -m1 -F 'version =' crates/cli/Cargo.toml | cut -d\\\" -f2)\n\n          if [[ -z \"$version\" ]]; then\n            echo \"Error: no version :(\"\n            exit 1\n          fi\n\n          echo \"cli_version=$version\" >> $GITHUB_OUTPUT\n\n  build:\n    strategy:\n      matrix:\n        include:\n          - name: linux-amd64-gnu\n            os: ubuntu-22.04\n            target: x86_64-unknown-linux-gnu\n            cross: false\n            experimental: false\n\n          - name: linux-amd64-musl\n            os: ubuntu-24.04\n            target: x86_64-unknown-linux-musl\n            cross: false\n            experimental: false\n\n          - name: linux-i686-musl\n            os: ubuntu-22.04\n            target: i686-unknown-linux-musl\n            cross: true\n            experimental: true\n\n          - name: linux-armhf-gnu\n            os: ubuntu-24.04\n            target: armv7-unknown-linux-gnueabihf\n            cross: true\n            experimental: false\n\n          - name: linux-arm64-gnu\n            os: ubuntu-24.04-arm\n            target: aarch64-unknown-linux-gnu\n            cross: false\n            experimental: false\n\n          - name: linux-arm64-musl\n            os: ubuntu-24.04-arm\n            target: aarch64-unknown-linux-musl\n            cross: false\n            experimental: false\n\n          - name: linux-s390x-gnu\n            os: ubuntu-24.04\n            target: s390x-unknown-linux-gnu\n            cross: 
true\n            experimental: true\n\n          - name: linux-riscv64gc-gnu\n            os: ubuntu-24.04\n            target: riscv64gc-unknown-linux-gnu\n            cross: true\n            experimental: true\n\n          - name: linux-ppc64le-gnu\n            os: ubuntu-24.04\n            target: powerpc64le-unknown-linux-gnu\n            cross: true\n            experimental: true\n\n          - name: illumos-x86-64\n            os: ubuntu-24.04\n            target: x86_64-unknown-illumos\n            cross: true\n            experimental: true\n\n          - name: freebsd-x86-64\n            os: ubuntu-24.04\n            target: x86_64-unknown-freebsd\n            cross: true\n            experimental: true\n\n          - name: linux-loongarch64-gnu\n            os: ubuntu-24.04\n            target: loongarch64-unknown-linux-gnu\n            cross: true\n            experimental: true\n\n          - name: mac-x86-64\n            os: macos-14\n            target: x86_64-apple-darwin\n            cross: false\n            experimental: false\n\n          - name: mac-arm64\n            os: macos-15\n            target: aarch64-apple-darwin\n            cross: false\n            experimental: false\n\n          - name: windows-x86-64\n            os: windows-latest\n            target: x86_64-pc-windows-msvc\n            cross: false\n            experimental: false\n\n          #- name: windows-arm64\n          #  os: windows-latest\n          #  target: aarch64-pc-windows-msvc\n          #  cross: true\n          #  experimental: true\n\n    name: Binaries for ${{ matrix.name }}\n    needs: info\n    runs-on: ${{ matrix.os }}\n    continue-on-error: ${{ matrix.experimental }}\n\n    env:\n      version: ${{ needs.info.outputs.cli_version }}\n      dst: watchexec-${{ needs.info.outputs.cli_version }}-${{ matrix.target }}\n\n    steps:\n      - uses: actions/checkout@v6\n\n      # https://github.com/actions/cache/issues/752\n      - if: ${{ runner.os == 
'Windows' }}\n        name: Use GNU tar\n        shell: cmd\n        run: |\n          echo \"Adding GNU tar to PATH\"\n          echo C:\\Program Files\\Git\\usr\\bin>>\"%GITHUB_PATH%\"\n\n      - run: sudo apt update\n        if: startsWith(matrix.os, 'ubuntu-')\n      - name: Add musl tools\n        run: sudo apt install -y musl musl-dev musl-tools\n        if: endsWith(matrix.target, '-musl')\n      - name: Add aarch-gnu tools\n        run: sudo apt install -y gcc-aarch64-linux-gnu\n        if: startsWith(matrix.target, 'aarch64-unknown-linux')\n      - name: Add arm7hf-gnu tools\n        run: sudo apt install -y gcc-arm-linux-gnueabihf\n        if: startsWith(matrix.target, 'armv7-unknown-linux-gnueabihf')\n      - name: Add s390x-gnu tools\n        run: sudo apt install -y gcc-s390x-linux-gnu\n        if: startsWith(matrix.target, 's390x-unknown-linux-gnu')\n      - name: Add riscv64-gnu tools\n        run: sudo apt install -y gcc-riscv64-linux-gnu\n        if: startsWith(matrix.target, 'riscv64gc-unknown-linux-gnu')\n      - name: Add ppc64le-gnu tools\n        run: sudo apt install -y gcc-powerpc64le-linux-gnu\n        if: startsWith(matrix.target, 'powerpc64le-unknown-linux-gnu')\n\n      - name: Install cargo-deb\n        if: startsWith(matrix.name, 'linux-')\n        uses: taiki-e/install-action@v2\n        with:\n          tool: cargo-deb\n\n      - name: Install cargo-generate-rpm\n        if: startsWith(matrix.name, 'linux-')\n        uses: taiki-e/install-action@v2\n        with:\n          tool: cargo-generate-rpm\n\n      - name: Configure toolchain\n        run: |\n          rustup toolchain install --profile minimal --no-self-update stable\n          rustup default stable\n          rustup target add ${{ matrix.target }}\n      - uses: Swatinem/rust-cache@v2\n\n      - name: Install cross\n        if: matrix.cross\n        uses: taiki-e/install-action@v2\n        with:\n          tool: cross\n\n      - name: Build\n        shell: bash\n        
run: |\n          ${{ matrix.cross && 'cross' || 'cargo' }} build \\\n            -p watchexec-cli \\\n            --release --locked \\\n            --target ${{ matrix.target }}\n\n      - name: Package\n        shell: bash\n        run: |\n          set -euxo pipefail\n          ext=\"\"\n          [[ \"${{ matrix.name }}\" == windows-* ]] && ext=\".exe\"\n          bin=\"target/${{ matrix.target }}/release/watchexec${ext}\"\n          objcopy --compress-debug-sections \"$bin\" || true\n\n          mkdir \"$dst\"\n\n          mkdir -p \"target/release\"\n          cp \"$bin\" \"target/release/\" # workaround for cargo-deb silliness with targets\n\n          cp \"$bin\" \"$dst/\"\n          cp -r crates/cli/README.md LICENSE completions doc/{logo.svg,watchexec.1{,.*}} \"$dst/\"\n\n      - name: Archive (tar)\n        if: '! startsWith(matrix.name, ''windows-'')'\n        run: tar cavf \"$dst.tar.xz\" \"$dst\"\n      - name: Archive (deb)\n        if: startsWith(matrix.name, 'linux-')\n        run: cargo deb -p watchexec-cli --no-build --no-strip --target ${{ matrix.target }} --output \"$dst.deb\"\n      - name: Archive (rpm)\n        if: startsWith(matrix.name, 'linux-')\n        shell: bash\n        run: |\n          set -euxo pipefail\n          shopt -s globstar\n          cargo generate-rpm -p crates/cli --target \"${{ matrix.target }}\" --target-dir \"target/${{ matrix.target }}\"\n          mv target/**/*.rpm \"$dst.rpm\"\n      - name: Archive (zip)\n        if: startsWith(matrix.name, 'windows-')\n        shell: bash\n        run: 7z a \"$dst.zip\" \"$dst\"\n\n      - uses: actions/upload-artifact@v6\n        with:\n          name: ${{ matrix.name }}\n          retention-days: 1\n          path: |\n            watchexec-*.tar.xz\n            watchexec-*.tar.zst\n            watchexec-*.deb\n            watchexec-*.rpm\n            watchexec-*.zip\n\n  upload:\n    needs: [build, info]\n\n    name: Checksum and publish\n    runs-on: ubuntu-latest\n\n    
steps:\n      - uses: actions/checkout@v6\n\n      - name: Install b3sum\n        uses: taiki-e/install-action@v2\n        with:\n          tool: b3sum\n\n      - uses: actions/download-artifact@v7\n        with:\n          merge-multiple: true\n\n      - name: Dist manifest\n        run: |\n          jq -ncf .github/workflows/dist-manifest.jq \\\n            --arg version \"${{ needs.info.outputs.cli_version }}\" \\\n            --arg files \"$(ls watchexec-*)\" \\\n            > dist-manifest.json\n\n      - name: Bulk checksums\n        run: |\n          b3sum watchexec-* | tee B3SUMS\n          sha512sum watchexec-* | tee SHA512SUMS\n          sha256sum watchexec-* | tee SHA256SUMS\n\n      - name: File checksums\n        run: |\n          for file in watchexec-*; do\n            b3sum --no-names $file > \"$file.b3\"\n            sha256sum $file | cut -d ' ' -f1 > \"$file.sha256\"\n            sha512sum $file | cut -d ' ' -f1 > \"$file.sha512\"\n          done\n\n      - uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b\n        with:\n          tag_name: v${{ needs.info.outputs.cli_version }}\n          name: CLI v${{ needs.info.outputs.cli_version }}\n          append_body: true\n          files: |\n            dist-manifest.json\n            watchexec-*.tar.xz\n            watchexec-*.tar.zst\n            watchexec-*.deb\n            watchexec-*.rpm\n            watchexec-*.zip\n            *SUMS\n            *.b3\n            *.sha*\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/tests.yml",
    "content": "name: Test suite\n\non:\n  workflow_dispatch:\n  pull_request:\n    types:\n      - opened\n      - reopened\n      - synchronize\n  push:\n    branches:\n      - main\n    tags-ignore:\n      - \"*\"\n\nenv:\n  CARGO_TERM_COLOR: always\n  CARGO_UNSTABLE_SPARSE_REGISTRY: \"true\"\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.ref || github.run_id }}\n  cancel-in-progress: true\n\njobs:\n  libs:\n    strategy:\n      fail-fast: false\n      matrix:\n        platform:\n          - macos\n          - ubuntu\n          - windows\n\n    name: Test libraries ${{ matrix.platform }}\n    runs-on: \"${{ matrix.platform }}-latest\"\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install --profile minimal --no-self-update stable\n        rustup default stable\n\n    # https://github.com/actions/cache/issues/752\n    - if: ${{ runner.os == 'Windows' }}\n      name: Use GNU tar\n      shell: cmd\n      run: |\n        echo \"Adding GNU tar to PATH\"\n        echo C:\\Program Files\\Git\\usr\\bin>>\"%GITHUB_PATH%\"\n\n    - uses: Swatinem/rust-cache@v2\n\n    - name: Run library test suite\n      run: cargo test --workspace --exclude watchexec-cli --exclude watchexec-events\n\n    - name: Run watchexec-events integration tests\n      run: cargo test -p watchexec-events -F serde\n\n  cli-e2e:\n    strategy:\n      fail-fast: false\n      matrix:\n        platform:\n          - macos\n          - ubuntu\n          - windows\n\n    name: Test CLI (e2e) ${{ matrix.platform }}\n    runs-on: \"${{ matrix.platform }}-latest\"\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install --profile minimal --no-self-update stable\n        rustup default stable\n\n    # https://github.com/actions/cache/issues/752\n    - if: ${{ runner.os == 'Windows' }}\n      name: Use GNU tar\n      shell: cmd\n      run: |\n        echo 
\"Adding GNU tar to PATH\"\n        echo C:\\Program Files\\Git\\usr\\bin>>\"%GITHUB_PATH%\"\n\n    - name: Install coreutils on mac\n      if: ${{ matrix.platform == 'macos' }}\n      run: brew install coreutils\n\n    - uses: Swatinem/rust-cache@v2\n\n    - name: Build CLI programs\n      run: cargo build\n\n    - name: Run CLI integration tests\n      run: crates/cli/run-tests.sh ${{ matrix.platform }}\n      shell: bash\n      env:\n        WATCHEXEC_BIN: target/debug/watchexec\n        TEST_SOCKETFD_BIN: target/debug/test-socketfd\n\n  cli-docs:\n\n    name: Test CLI docs\n    runs-on: ubuntu-latest\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install --profile minimal --no-self-update stable\n        rustup default stable\n\n    - uses: Swatinem/rust-cache@v2\n\n    - name: Generate manpage\n      run: cargo run -p watchexec-cli -- --manual > doc/watchexec.1\n    - name: Check that manpage is up to date\n      run: git diff --exit-code -- doc/\n\n    - name: Generate completions\n      run: bin/completions\n    - name: Check that completions are up to date\n      run: git diff --exit-code -- completions/\n\n  cli-unit:\n    strategy:\n      fail-fast: false\n      matrix:\n        platform:\n          - macos\n          - ubuntu\n          - windows\n\n    name: Test CLI (unit) ${{ matrix.platform }}\n    runs-on: \"${{ matrix.platform }}-latest\"\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install --profile minimal --no-self-update stable\n        rustup default stable\n\n    # https://github.com/actions/cache/issues/752\n    - if: ${{ runner.os == 'Windows' }}\n      name: Use GNU tar\n      shell: cmd\n      run: |\n        echo \"Adding GNU tar to PATH\"\n        echo C:\\Program Files\\Git\\usr\\bin>>\"%GITHUB_PATH%\"\n\n    - uses: Swatinem/rust-cache@v2\n\n    - name: Run CLI unit tests\n      run: cargo 
test -p watchexec-cli\n\n  bosion:\n    strategy:\n      fail-fast: false\n      matrix:\n        platform:\n          - macos\n          - ubuntu\n          - windows\n\n    name: Bosion integration tests on ${{ matrix.platform }}\n    runs-on: \"${{ matrix.platform }}-latest\"\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install --profile minimal --no-self-update stable\n        rustup default stable\n\n    # https://github.com/actions/cache/issues/752\n    - if: ${{ runner.os == 'Windows' }}\n      name: Use GNU tar\n      shell: cmd\n      run: |\n        echo \"Adding GNU tar to PATH\"\n        echo C:\\Program Files\\Git\\usr\\bin>>\"%GITHUB_PATH%\"\n\n    - uses: Swatinem/rust-cache@v2\n\n    - name: Run bosion integration tests\n      run: ./run-tests.sh\n      working-directory: crates/bosion\n      shell: bash\n\n  cross-checks:\n    strategy:\n      fail-fast: false\n      matrix:\n        target:\n          - x86_64-unknown-linux-musl\n          - x86_64-unknown-freebsd\n\n    name: Typecheck only on ${{ matrix.target }}\n    runs-on: ubuntu-latest\n\n    steps:\n    - uses: actions/checkout@v6\n    - name: Configure toolchain\n      run: |\n        rustup toolchain install --profile minimal --no-self-update stable\n        rustup default stable\n        rustup target add ${{ matrix.target }}\n\n    - if: matrix.target == 'x86_64-unknown-linux-musl'\n      run: sudo apt-get install -y musl-tools\n\n    - uses: Swatinem/rust-cache@v2\n    - run: cargo check --target ${{ matrix.target }}\n\n  tests-pass:\n    if: always()\n    name: Tests pass\n    needs:\n    - bosion\n    - cli-e2e\n    - cli-unit\n    - cross-checks\n    - libs\n    runs-on: ubuntu-latest\n    steps:\n    - uses: re-actors/alls-green@release/v1\n      with:\n        jobs: ${{ toJSON(needs) }}\n"
  },
  {
    "path": ".gitignore",
    "content": "target\n/watchexec-*\nwatchexec.*.log\n"
  },
  {
    "path": ".rustfmt.toml",
    "content": "hard_tabs = true\n"
  },
  {
    "path": "CITATION.cff",
    "content": "cff-version: 1.2.0\nmessage: |\n  If you use this software, please cite it using these metadata.\ntitle: \"Watchexec: a tool to react to filesystem changes, and a crate ecosystem to power it\"\n\nversion: \"2.5.1\"\ndate-released: 2026-03-30\n\nrepository-code: https://github.com/watchexec/watchexec\nlicense: Apache-2.0\n\nauthors:\n  - family-names: Green\n    given-names: Matt\n  - family-names: Saparelli\n    given-names: Félix\n    orcid: https://orcid.org/0000-0002-2010-630X\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contribution guidebook\n\n\nThis is a fairly free-form project, with low contribution traffic.\n\nMaintainers:\n\n- Félix Saparelli (@passcod) (active)\n- Matt Green (@mattgreen) (original author, mostly checked out)\n\nThere are a few anti goals:\n\n- Calling watchexec is to be a **simple** exercise that remains intuitive. As a specific point, it\n  should not involve any piping or require xargs.\n\n- Watchexec will not be tied to any particular ecosystem or language. Projects that themselves use\n  watchexec (the library) can be focused on a particular domain (for example Cargo Watch for Rust),\n  but watchexec itself will remain generic, usable for any purpose.\n\n\n## Debugging\n\nTo enable verbose logging in tests, run with:\n\n```console\n$ env WATCHEXEC_LOG=watchexec=trace,info RUST_TEST_THREADS=1 RUST_NOCAPTURE=1 cargo test --test testfile -- testname\n```\n\nTo use [Tokio Console](https://github.com/tokio-rs/console):\n\n1. Add `--cfg tokio_unstable` to your `RUSTFLAGS`.\n2. 
Run the CLI with the `dev-console` feature.\n\n\n## PR etiquette\n\n- Maintainers are busy or may not have the bandwidth, be patient.\n- Do _not_ change the version number in the PR.\n- Do _not_ change Cargo.toml or other project metadata, unless specifically asked for, or if that's\n  the point of the PR (like adding a crates.io category).\n\nApart from that, welcome and thank you for your time!\n\n\n## Releasing\n\n```\ncargo release -p crate-name --execute patch # or minor, major\n```\n\nWhen a CLI release is done, the [release notes](https://github.com/watchexec/watchexec/releases) should be edited with the changelog.\n\n### Release order\n\nUse this command to see the tree of workspace dependencies:\n\n```console\n$ cargo tree -p watchexec-cli | rg -F '(/' --color=never | sed 's/ v[0-9].*//'\n```\n\n## Overview\n\nThe architecture of watchexec is roughly:\n\n- sources gather events\n- events are debounced and filtered\n- event(s) make it through the debounce/filters and trigger an \"action\"\n- `on_action` handler is called, returning an `Outcome`\n- outcome is processed into managing the command that watchexec is running\n  - outcome can also be to exit\n- when a command is started, the `on_pre_spawn` and `on_post_spawn` handlers are called\n- commands are also a source of events, so e.g. \"command has finished\" is handled by `on_action`\n\nAnd this is the startup sequence:\n- init config sets basic immutable facts about the runtime\n- runtime starts:\n  - source workers start, and are passed their runtime config\n  - action worker starts, and is passed its runtime config\n- (unless `--postpone` is given) a synthetic event is injected to kickstart things\n\n## Guides\n\nThese are generic guides for implementing specific bits of functionality.\n\n### Adding an event source\n\n- add a worker for \"sourcing\" events. 
Looking at the [signal source\n  worker](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/signal/source.rs) is\n  probably easiest to get started here.\n\n- because we may not always want to enable this event source, and just to be flexible, add [runtime\n  config](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/config.rs) for the source.\n\n- for convenience, probably add [a method on the runtime\n  config](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/config.rs) which\n  configures the most common usecase.\n\n- because watchexec is reconfigurable, in the worker you'll need to react to config changes. Look at\n  how the [fs worker does it](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/fs.rs)\n  for reference.\n\n- you may need to [add to the event tag\n  enum](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/event.rs).\n\n- if you do, you should [add support to the \"tagged\n  filterer\"](https://github.com/watchexec/watchexec/blob/main/crates/filterer/tagged/src/parse.rs),\n  but this can be done in follow-up work.\n\n### Process a new event in the CLI\n\n- add an option to the\n  [args](https://github.com/watchexec/watchexec/blob/main/crates/cli/src/args.rs) if necessary\n\n- add to the [runtime\n  config](https://github.com/watchexec/watchexec/blob/main/crates/cli/src/config/runtime.rs) when\n  the option is present\n\n- process relevant events [in the action\n  handler](https://github.com/watchexec/watchexec/blob/main/crates/cli/src/config/runtime.rs)\n\n---\nvim: tw=100\n"
  },
  {
    "path": "Cargo.toml",
    "content": "[workspace]\nresolver = \"2\"\nmembers = [\n\t\"crates/lib\",\n\t\"crates/cli\",\n\t\"crates/events\",\n\t\"crates/signals\",\n\t\"crates/supervisor\",\n\t\"crates/filterer/globset\",\n\t\"crates/filterer/ignore\",\n\t\"crates/bosion\",\n\t\"crates/ignore-files\",\n\t\"crates/project-origins\",\n\t\"crates/test-socketfd\",\n]\n\n[workspace.dependencies]\nrand = \"0.9.1\"\nuuid = \"1.5.0\"\n\n[profile.release]\nlto = true\ndebug = 1 # for stack traces\ncodegen-units = 1\nstrip = \"symbols\"\n\n[profile.dev.build-override]\nopt-level = 0\ncodegen-units = 1024\ndebug = false\ndebug-assertions = false\noverflow-checks = false\nincremental = false\n\n[profile.release.build-override]\nopt-level = 0\ncodegen-units = 1024\ndebug = false\ndebug-assertions = false\noverflow-checks = false\nincremental = false\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "[![CI status on main branch](https://github.com/watchexec/watchexec/actions/workflows/tests.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/tests.yml)\n\n# Watchexec\n\nSoftware development often involves running the same commands over and over. Boring!\n\n`watchexec` is a simple, standalone tool that watches a path and runs a command whenever it detects modifications.\n\nExample use cases:\n\n* Automatically run unit tests\n* Run linters/syntax checkers\n* Rebuild artifacts\n\n\n## Features\n\n* Simple invocation and use, does not require a cryptic command line involving `xargs`\n* Runs on OS X, Linux, and Windows\n* Monitors current directory and all subdirectories for changes\n* Coalesces multiple filesystem events into one, for editors that use swap/backup files during saving\n* Loads `.gitignore` and `.ignore` files\n* Uses process groups to keep hold of forking programs\n* Provides the paths that changed in environment variables or STDIN\n* Does not require a language runtime, not tied to any particular language or ecosystem\n* [And more!](./crates/cli/#features)\n\n\n## Quick start\n\nWatch all JavaScript, CSS and HTML files in the current directory and all subdirectories for changes, running `npm run build` when a change is detected:\n\n    $ watchexec -e js,css,html npm run build\n\nCall/restart `python server.py` when any Python file in the current directory (and all subdirectories) changes:\n\n    $ watchexec -r -e py -- python server.py\n\nMore usage examples: [in the CLI README](./crates/cli/#usage-examples)!\n\n## Install\n\n<a href=\"https://repology.org/project/watchexec/versions\"><img align=\"right\" src=\"https://repology.org/badge/vertical-allrepos/watchexec.svg\" alt=\"Packaging status\"></a>\n\n- With [your package manager](./doc/packages.md) for Arch, Debian, Homebrew, Nix, Scoop, Chocolatey…\n- From binary with [Binstall](https://github.com/cargo-bins/cargo-binstall): `cargo binstall watchexec-cli` 
<!-- this line does NOT contain a typo -->\n- As [pre-built binary package from Github](https://github.com/watchexec/watchexec/releases/latest)\n- From source with Cargo: `cargo install --locked watchexec-cli`\n\nAll options in detail: [in the CLI README](./crates/cli/#installation),\nin the online help (`watchexec -h`, `watchexec --help`, or `watchexec --manual`),\nand [in the manual page](./doc/watchexec.1.md).\n\n\n## Augment\n\nWatchexec pairs well with:\n\n- [checkexec](https://github.com/kurtbuilds/checkexec): to run only when source files are newer than a target file\n- [just](https://github.com/casey/just): a modern alternative to `make`\n- [systemfd](https://github.com/mitsuhiko/systemfd): socket-passing in development\n\n## Extend\n\n- [watchexec library](./crates/lib/): to create more specialised watchexec-powered tools.\n  - [watchexec-events](./crates/events/): event types for watchexec.\n  - [watchexec-signals](./crates/signals/): signal types for watchexec.\n  - [watchexec-supervisor](./crates/supervisor/): process lifecycle manager (the _exec_ part of watchexec).\n- [clearscreen](https://github.com/watchexec/clearscreen): to clear the (terminal) screen on every platform.\n- [command group](https://github.com/watchexec/command-group): to run commands in process groups.\n- [ignore files](./crates/ignore-files/): to find, parse, and interpret ignore files.\n- [project origins](./crates/project-origins/): to find the origin(s) directory of a project.\n- [notify](https://github.com/notify-rs/notify): to respond to file modifications (third-party).\n\n### Downstreams\n\nSelected downstreams of watchexec and associated crates:\n\n- [cargo watch](https://github.com/watchexec/cargo-watch): a specialised watcher for Rust/Cargo projects.\n- [cargo lambda](https://github.com/cargo-lambda/cargo-lambda): a dev tool for Rust-powered AWS Lambda functions.\n- [create-rust-app](https://create-rust-app.dev): a template for Rust+React web apps.\n- 
[devenv.sh](https://github.com/cachix/devenv): a developer environment with nix-based declarative configs.\n- [dotter](https://github.com/supercuber/dotter): a dotfile manager.\n- [ghciwatch](https://github.com/mercurytechnologies/ghciwatch): a specialised watcher for Haskell projects.\n- [tectonic](https://tectonic-typesetting.github.io/book/latest/): a TeX/LaTeX typesetting system.\n"
  },
  {
    "path": "bin/completions",
    "content": "#!/bin/sh\ncargo run -p watchexec-cli $* -- --completions bash > completions/bash\ncargo run -p watchexec-cli $* -- --completions elvish > completions/elvish\ncargo run -p watchexec-cli $* -- --completions fish > completions/fish\ncargo run -p watchexec-cli $* -- --completions nu > completions/nu\ncargo run -p watchexec-cli $* -- --completions powershell > completions/powershell\ncargo run -p watchexec-cli $* -- --completions zsh > completions/zsh\n"
  },
  {
    "path": "bin/dates.mjs",
    "content": "#!/usr/bin/env node\n\nconst id = Math.floor(Math.random() * 100);\nlet n = 0;\nconst m = 5;\nwhile (n < m) {\n\tn += 1;\n\tconsole.log(`[${id} : ${n}/${m}] ${new Date}`);\n\tawait new Promise(done => setTimeout(done, 2000));\n}\n"
  },
  {
    "path": "bin/manpage",
    "content": "#!/bin/sh\ncargo run -p watchexec-cli -- --manual > doc/watchexec.1\npandoc doc/watchexec.1 -t markdown > doc/watchexec.1.md\n"
  },
  {
    "path": "bin/release-notes",
    "content": "#!/bin/sh\nexec git cliff --include-path '**/crates/cli/**/*' --count-tags 'v*' --unreleased $*\n"
  },
  {
    "path": "cliff.toml",
    "content": "[changelog]\ntrim = true\nheader = \"\"\nfooter = \"\"\nbody = \"\"\"\n{% if version %}\\\n\t## v{{ version | trim_start_matches(pat=\"v\") }} ({{ timestamp | date(format=\"%Y-%m-%d\") }})\n{% else %}\\\n    ## [unreleased]\n{% endif %}\\\n{% raw %}\\n{% endraw %}\\\n\n{%- for commit in commits | sort(attribute=\"group\") %}\n\t{%- if commit.scope -%}\n\t{% else -%}\n        - **{{commit.group | striptags | trim | upper_first}}:** \\\n\t\t\t{% if commit.breaking %} [**⚠️ breaking ⚠️**] {% endif %}\\\n\t\t\t{{ commit.message | upper_first }} - ([{{ commit.id | truncate(length=7, end=\"\") }}]($REPO/commit/{{ commit.id }}))\n\t{% endif -%}\n{% endfor -%}\n\n{% for scope, commits in commits | filter(attribute=\"group\") | group_by(attribute=\"scope\") %}\n    ### {{ scope | striptags | trim | upper_first }}\n    {% for commit in commits | sort(attribute=\"group\") %}\n        - **{{commit.group | striptags | trim | upper_first}}:** \\\n\t\t\t{% if commit.breaking %} [**⚠️ breaking ⚠️**] {% endif %}\\\n            {{ commit.message | upper_first }} - ([{{ commit.id | truncate(length=7, end=\"\") }}]($REPO/commit/{{ commit.id }}))\n    {%- endfor -%}\n    {% raw %}\\n{% endraw %}\\\n{% endfor %}\n\"\"\"\npostprocessors = [\n  { pattern = '\\$REPO', replace = \"https://github.com/watchexec/watchexec\" },\n]\n\n[git]\nconventional_commits = true\nfilter_unconventional = true\nsplit_commits = true\nprotect_breaking_commits = true\nfilter_commits = true\ntag_pattern = \"v[0-9].*\"\nsort_commits = \"oldest\"\n\nlink_parsers = [\n\t{ pattern = \"#(\\\\d+)\", href = \"https://github.com/watchexec/watchexec/issues/$1\"},\n\t{ pattern = \"RFC(\\\\d+)\", text = \"ietf-rfc$1\", href = \"https://datatracker.ietf.org/doc/html/rfc$1\"},\n]\n\ncommit_parsers = [\n  { message = \"^feat\", group = \"Feature\" },\n  { message = \"^fix\", group = \"Bugfix\" },\n  { message = \"^tweak\", group = \"Tweak\" },\n  { message = \"^doc\", group = \"Documentation\" },\n  { message 
= \"^perf\", group = \"Performance\" },\n  { message = \"^deps\", group = \"Deps\" },\n  { message = \"^Initial [cC]ommit$\", skip = true },\n  { message = \"^(release|merge|fmt|chore|ci|refactor|style|draft|wip|repo)\", skip = true },\n  { body = \".*breaking\", group = \"Breaking\" },\n  { body = \".*security\", group = \"Security\" },\n  { message = \"^revert\", group = \"Revert\" },\n]\n"
  },
  {
    "path": "completions/bash",
    "content": "_watchexec() {\n    local i cur prev opts cmd\n    COMPREPLY=()\n    if [[ \"${BASH_VERSINFO[0]}\" -ge 4 ]]; then\n        cur=\"$2\"\n    else\n        cur=\"${COMP_WORDS[COMP_CWORD]}\"\n    fi\n    prev=\"$3\"\n    cmd=\"\"\n    opts=\"\"\n\n    for i in \"${COMP_WORDS[@]:0:COMP_CWORD}\"\n    do\n        case \"${cmd},${i}\" in\n            \",$1\")\n                cmd=\"watchexec\"\n                ;;\n            *)\n                ;;\n        esac\n    done\n\n    case \"${cmd}\" in\n        watchexec)\n            opts=\"-1 -n -E -o -r -s -d -I -p -w -W -F -e -f -j -i -v -c -N -q -h -V --manual --completions --only-emit-events --shell --no-environment --env --no-process-group --wrap-process --stop-signal --stop-timeout --timeout --delay-run --workdir --socket --on-busy-update --restart --signal --map-signal --debounce --stdin-quit --interactive --exit-on-error --postpone --poll --emit-events-to --watch --watch-non-recursive --watch-file --no-vcs-ignore --no-project-ignore --no-global-ignore --no-default-ignore --no-discover-ignore --ignore-nothing --exts --filter --filter-file --project-origin --filter-prog --ignore --ignore-file --fs-events --no-meta --verbose --log-file --print-events --clear --notify --color --timings --quiet --bell --help --version [COMMAND]...\"\n            if [[ ${cur} == -* || ${COMP_CWORD} -eq 1 ]] ; then\n                COMPREPLY=( $(compgen -W \"${opts}\" -- \"${cur}\") )\n                return 0\n            fi\n            case \"${prev}\" in\n                --completions)\n                    COMPREPLY=($(compgen -W \"bash elvish fish nu powershell zsh\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --shell)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --env)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                
-E)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --wrap-process)\n                    COMPREPLY=($(compgen -W \"group session none\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --stop-signal)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --stop-timeout)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --timeout)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --delay-run)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --workdir)\n                    COMPREPLY=()\n                    if [[ \"${BASH_VERSINFO[0]}\" -ge 4 ]]; then\n                        compopt -o plusdirs\n                    fi\n                    return 0\n                    ;;\n                --socket)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --on-busy-update)\n                    COMPREPLY=($(compgen -W \"queue do-nothing restart signal\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                -o)\n                    COMPREPLY=($(compgen -W \"queue do-nothing restart signal\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --signal)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -s)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --map-signal)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n  
                  ;;\n                --debounce)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -d)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --poll)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --emit-events-to)\n                    COMPREPLY=($(compgen -W \"environment stdio file json-stdio json-file none\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --watch)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -w)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --watch-non-recursive)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -W)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --watch-file)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -F)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --exts)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -e)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --filter)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -f)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n              
      ;;\n                --filter-file)\n                    local oldifs\n                    if [ -n \"${IFS+x}\" ]; then\n                        oldifs=\"$IFS\"\n                    fi\n                    IFS=$'\\n'\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    if [ -n \"${oldifs+x}\" ]; then\n                        IFS=\"$oldifs\"\n                    fi\n                    if [[ \"${BASH_VERSINFO[0]}\" -ge 4 ]]; then\n                        compopt -o filenames\n                    fi\n                    return 0\n                    ;;\n                --project-origin)\n                    COMPREPLY=()\n                    if [[ \"${BASH_VERSINFO[0]}\" -ge 4 ]]; then\n                        compopt -o plusdirs\n                    fi\n                    return 0\n                    ;;\n                --filter-prog)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -j)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --ignore)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                -i)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --ignore-file)\n                    local oldifs\n                    if [ -n \"${IFS+x}\" ]; then\n                        oldifs=\"$IFS\"\n                    fi\n                    IFS=$'\\n'\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    if [ -n \"${oldifs+x}\" ]; then\n                        IFS=\"$oldifs\"\n                    fi\n                    if [[ \"${BASH_VERSINFO[0]}\" -ge 4 ]]; then\n                        compopt -o filenames\n                    fi\n                    return 0\n                    ;;\n                
--fs-events)\n                    COMPREPLY=($(compgen -W \"access create remove rename modify metadata\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --log-file)\n                    COMPREPLY=($(compgen -f \"${cur}\"))\n                    return 0\n                    ;;\n                --clear)\n                    COMPREPLY=($(compgen -W \"clear reset\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                -c)\n                    COMPREPLY=($(compgen -W \"clear reset\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --notify)\n                    COMPREPLY=($(compgen -W \"both start end\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                -N)\n                    COMPREPLY=($(compgen -W \"both start end\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                --color)\n                    COMPREPLY=($(compgen -W \"auto always never\" -- \"${cur}\"))\n                    return 0\n                    ;;\n                *)\n                    COMPREPLY=()\n                    ;;\n            esac\n            COMPREPLY=( $(compgen -W \"${opts}\" -- \"${cur}\") )\n            return 0\n            ;;\n    esac\n}\n\nif [[ \"${BASH_VERSINFO[0]}\" -eq 4 && \"${BASH_VERSINFO[1]}\" -ge 4 || \"${BASH_VERSINFO[0]}\" -gt 4 ]]; then\n    complete -F _watchexec -o nosort -o bashdefault -o default watchexec\nelse\n    complete -F _watchexec -o bashdefault -o default watchexec\nfi\n"
  },
  {
    "path": "completions/elvish",
    "content": "\nuse builtin;\nuse str;\n\nset edit:completion:arg-completer[watchexec] = {|@words|\n    fn spaces {|n|\n        builtin:repeat $n ' ' | str:join ''\n    }\n    fn cand {|text desc|\n        edit:complex-candidate $text &display=$text' '(spaces (- 14 (wcswidth $text)))$desc\n    }\n    var command = 'watchexec'\n    for word $words[1..-1] {\n        if (str:has-prefix $word '-') {\n            break\n        }\n        set command = $command';'$word\n    }\n    var completions = [\n        &'watchexec'= {\n            cand --completions 'Generate a shell completions script'\n            cand --shell 'Use a different shell'\n            cand -E 'Add env vars to the command'\n            cand --env 'Add env vars to the command'\n            cand --wrap-process 'Configure how the process is wrapped'\n            cand --stop-signal 'Signal to send to stop the command'\n            cand --stop-timeout 'Time to wait for the command to exit gracefully'\n            cand --timeout 'Kill the command if it runs longer than this duration'\n            cand --delay-run 'Sleep before running the command'\n            cand --workdir 'Set the working directory'\n            cand --socket 'Provide a socket to the command'\n            cand -o 'What to do when receiving events while the command is running'\n            cand --on-busy-update 'What to do when receiving events while the command is running'\n            cand -s 'Send a signal to the process when it''s still running'\n            cand --signal 'Send a signal to the process when it''s still running'\n            cand --map-signal 'Translate signals from the OS to signals to send to the command'\n            cand -d 'Time to wait for new events before taking action'\n            cand --debounce 'Time to wait for new events before taking action'\n            cand --poll 'Poll for filesystem changes'\n            cand --emit-events-to 'Configure event emission'\n            cand -w 'Watch a specific file or 
directory'\n            cand --watch 'Watch a specific file or directory'\n            cand -W 'Watch a specific directory, non-recursively'\n            cand --watch-non-recursive 'Watch a specific directory, non-recursively'\n            cand -F 'Watch files and directories from a file'\n            cand --watch-file 'Watch files and directories from a file'\n            cand -e 'Filename extensions to filter to'\n            cand --exts 'Filename extensions to filter to'\n            cand -f 'Filename patterns to filter to'\n            cand --filter 'Filename patterns to filter to'\n            cand --filter-file 'Files to load filters from'\n            cand --project-origin 'Set the project origin'\n            cand -j 'Filter programs'\n            cand --filter-prog 'Filter programs'\n            cand -i 'Filename patterns to filter out'\n            cand --ignore 'Filename patterns to filter out'\n            cand --ignore-file 'Files to load ignores from'\n            cand --fs-events 'Filesystem events to filter to'\n            cand --log-file 'Write diagnostic logs to a file'\n            cand -c 'Clear screen before running command'\n            cand --clear 'Clear screen before running command'\n            cand -N 'Alert when commands start and end'\n            cand --notify 'Alert when commands start and end'\n            cand --color 'When to use terminal colours'\n            cand --manual 'Show the manual page'\n            cand --only-emit-events 'Only emit events to stdout, run no commands'\n            cand -1 'Testing only: exit Watchexec after the first run and return the command''s exit code'\n            cand -n 'Shorthand for ''--shell=none'''\n            cand --no-environment 'Deprecated shorthand for ''--emit-events=none'''\n            cand --no-process-group 'Don''t use a process group'\n            cand -r 'Restart the process if it''s still running'\n            cand --restart 'Restart the process if it''s still running'\n        
    cand --stdin-quit 'Exit when stdin closes'\n            cand -I 'Respond to keypresses to quit, restart, or pause'\n            cand --interactive 'Respond to keypresses to quit, restart, or pause'\n            cand --exit-on-error 'Exit when the command has an error'\n            cand -p 'Wait until first change before running command'\n            cand --postpone 'Wait until first change before running command'\n            cand --no-vcs-ignore 'Don''t load gitignores'\n            cand --no-project-ignore 'Don''t load project-local ignores'\n            cand --no-global-ignore 'Don''t load global ignores'\n            cand --no-default-ignore 'Don''t use internal default ignores'\n            cand --no-discover-ignore 'Don''t discover ignore files at all'\n            cand --ignore-nothing 'Don''t ignore anything at all'\n            cand --no-meta 'Don''t emit fs events for metadata changes'\n            cand -v 'Set diagnostic log level'\n            cand --verbose 'Set diagnostic log level'\n            cand --print-events 'Print events that trigger actions'\n            cand --timings 'Print how long the command took to run'\n            cand -q 'Don''t print starting and stopping messages'\n            cand --quiet 'Don''t print starting and stopping messages'\n            cand --bell 'Ring the terminal bell on command completion'\n            cand -h 'Print help (see more with ''--help'')'\n            cand --help 'Print help (see more with ''--help'')'\n            cand -V 'Print version'\n            cand --version 'Print version'\n        }\n    ]\n    $completions[$command]\n}\n"
  },
  {
    "path": "completions/fish",
    "content": "complete -c watchexec -l completions -d 'Generate a shell completions script' -r -f -a \"bash\\t''\nelvish\\t''\nfish\\t''\nnu\\t''\npowershell\\t''\nzsh\\t''\"\ncomplete -c watchexec -l shell -d 'Use a different shell' -r\ncomplete -c watchexec -s E -l env -d 'Add env vars to the command' -r\ncomplete -c watchexec -l wrap-process -d 'Configure how the process is wrapped' -r -f -a \"group\\t''\nsession\\t''\nnone\\t''\"\ncomplete -c watchexec -l stop-signal -d 'Signal to send to stop the command' -r\ncomplete -c watchexec -l stop-timeout -d 'Time to wait for the command to exit gracefully' -r\ncomplete -c watchexec -l timeout -d 'Kill the command if it runs longer than this duration' -r\ncomplete -c watchexec -l delay-run -d 'Sleep before running the command' -r\ncomplete -c watchexec -l workdir -d 'Set the working directory' -r -f -a \"(__fish_complete_directories)\"\ncomplete -c watchexec -l socket -d 'Provide a socket to the command' -r\ncomplete -c watchexec -s o -l on-busy-update -d 'What to do when receiving events while the command is running' -r -f -a \"queue\\t''\ndo-nothing\\t''\nrestart\\t''\nsignal\\t''\"\ncomplete -c watchexec -s s -l signal -d 'Send a signal to the process when it\\'s still running' -r\ncomplete -c watchexec -l map-signal -d 'Translate signals from the OS to signals to send to the command' -r\ncomplete -c watchexec -s d -l debounce -d 'Time to wait for new events before taking action' -r\ncomplete -c watchexec -l poll -d 'Poll for filesystem changes' -r\ncomplete -c watchexec -l emit-events-to -d 'Configure event emission' -r -f -a \"environment\\t''\nstdio\\t''\nfile\\t''\njson-stdio\\t''\njson-file\\t''\nnone\\t''\"\ncomplete -c watchexec -s w -l watch -d 'Watch a specific file or directory' -r -F\ncomplete -c watchexec -s W -l watch-non-recursive -d 'Watch a specific directory, non-recursively' -r -F\ncomplete -c watchexec -s F -l watch-file -d 'Watch files and directories from a file' -r -F\ncomplete -c watchexec 
-s e -l exts -d 'Filename extensions to filter to' -r\ncomplete -c watchexec -s f -l filter -d 'Filename patterns to filter to' -r\ncomplete -c watchexec -l filter-file -d 'Files to load filters from' -r -F\ncomplete -c watchexec -l project-origin -d 'Set the project origin' -r -f -a \"(__fish_complete_directories)\"\ncomplete -c watchexec -s j -l filter-prog -d 'Filter programs' -r\ncomplete -c watchexec -s i -l ignore -d 'Filename patterns to filter out' -r\ncomplete -c watchexec -l ignore-file -d 'Files to load ignores from' -r -F\ncomplete -c watchexec -l fs-events -d 'Filesystem events to filter to' -r -f -a \"access\\t''\ncreate\\t''\nremove\\t''\nrename\\t''\nmodify\\t''\nmetadata\\t''\"\ncomplete -c watchexec -l log-file -d 'Write diagnostic logs to a file' -r -F\ncomplete -c watchexec -s c -l clear -d 'Clear screen before running command' -r -f -a \"clear\\t''\nreset\\t''\"\ncomplete -c watchexec -s N -l notify -d 'Alert when commands start and end' -r -f -a \"both\\t'Notify on both start and end'\nstart\\t'Notify only when the command starts'\nend\\t'Notify only when the command ends'\"\ncomplete -c watchexec -l color -d 'When to use terminal colours' -r -f -a \"auto\\t''\nalways\\t''\nnever\\t''\"\ncomplete -c watchexec -l manual -d 'Show the manual page'\ncomplete -c watchexec -l only-emit-events -d 'Only emit events to stdout, run no commands'\ncomplete -c watchexec -s 1 -d 'Testing only: exit Watchexec after the first run and return the command\\'s exit code'\ncomplete -c watchexec -s n -d 'Shorthand for \\'--shell=none\\''\ncomplete -c watchexec -l no-environment -d 'Deprecated shorthand for \\'--emit-events=none\\''\ncomplete -c watchexec -l no-process-group -d 'Don\\'t use a process group'\ncomplete -c watchexec -s r -l restart -d 'Restart the process if it\\'s still running'\ncomplete -c watchexec -l stdin-quit -d 'Exit when stdin closes'\ncomplete -c watchexec -s I -l interactive -d 'Respond to keypresses to quit, restart, or pause'\ncomplete -c 
watchexec -l exit-on-error -d 'Exit when the command has an error'\ncomplete -c watchexec -s p -l postpone -d 'Wait until first change before running command'\ncomplete -c watchexec -l no-vcs-ignore -d 'Don\\'t load gitignores'\ncomplete -c watchexec -l no-project-ignore -d 'Don\\'t load project-local ignores'\ncomplete -c watchexec -l no-global-ignore -d 'Don\\'t load global ignores'\ncomplete -c watchexec -l no-default-ignore -d 'Don\\'t use internal default ignores'\ncomplete -c watchexec -l no-discover-ignore -d 'Don\\'t discover ignore files at all'\ncomplete -c watchexec -l ignore-nothing -d 'Don\\'t ignore anything at all'\ncomplete -c watchexec -l no-meta -d 'Don\\'t emit fs events for metadata changes'\ncomplete -c watchexec -s v -l verbose -d 'Set diagnostic log level'\ncomplete -c watchexec -l print-events -d 'Print events that trigger actions'\ncomplete -c watchexec -l timings -d 'Print how long the command took to run'\ncomplete -c watchexec -s q -l quiet -d 'Don\\'t print starting and stopping messages'\ncomplete -c watchexec -l bell -d 'Ring the terminal bell on command completion'\ncomplete -c watchexec -s h -l help -d 'Print help (see more with \\'--help\\')'\ncomplete -c watchexec -s V -l version -d 'Print version'\n"
  },
  {
    "path": "completions/nu",
    "content": "module completions {\n\n  def \"nu-complete watchexec completions\" [] {\n    [ \"bash\" \"elvish\" \"fish\" \"nu\" \"powershell\" \"zsh\" ]\n  }\n\n  def \"nu-complete watchexec wrap_process\" [] {\n    [ \"group\" \"session\" \"none\" ]\n  }\n\n  def \"nu-complete watchexec on_busy_update\" [] {\n    [ \"queue\" \"do-nothing\" \"restart\" \"signal\" ]\n  }\n\n  def \"nu-complete watchexec emit_events_to\" [] {\n    [ \"environment\" \"stdio\" \"file\" \"json-stdio\" \"json-file\" \"none\" ]\n  }\n\n  def \"nu-complete watchexec filter_fs_events\" [] {\n    [ \"access\" \"create\" \"remove\" \"rename\" \"modify\" \"metadata\" ]\n  }\n\n  def \"nu-complete watchexec screen_clear\" [] {\n    [ \"clear\" \"reset\" ]\n  }\n\n  def \"nu-complete watchexec notify\" [] {\n    [ \"both\" \"start\" \"end\" ]\n  }\n\n  def \"nu-complete watchexec color\" [] {\n    [ \"auto\" \"always\" \"never\" ]\n  }\n\n  # Execute commands when watched files change\n  export extern watchexec [\n    --manual                  # Show the manual page\n    --completions: string@\"nu-complete watchexec completions\" # Generate a shell completions script\n    --only-emit-events        # Only emit events to stdout, run no commands\n    -1                        # Testing only: exit Watchexec after the first run and return the command's exit code\n    --shell: string           # Use a different shell\n    -n                        # Shorthand for '--shell=none'\n    --no-environment          # Deprecated shorthand for '--emit-events=none'\n    --env(-E): string         # Add env vars to the command\n    --no-process-group        # Don't use a process group\n    --wrap-process: string@\"nu-complete watchexec wrap_process\" # Configure how the process is wrapped\n    --stop-signal: string     # Signal to send to stop the command\n    --stop-timeout: string    # Time to wait for the command to exit gracefully\n    --timeout: string         # Kill the command if it runs longer than 
this duration\n    --delay-run: string       # Sleep before running the command\n    --workdir: path           # Set the working directory\n    --socket: string          # Provide a socket to the command\n    --on-busy-update(-o): string@\"nu-complete watchexec on_busy_update\" # What to do when receiving events while the command is running\n    --restart(-r)             # Restart the process if it's still running\n    --signal(-s): string      # Send a signal to the process when it's still running\n    --map-signal: string      # Translate signals from the OS to signals to send to the command\n    --debounce(-d): string    # Time to wait for new events before taking action\n    --stdin-quit              # Exit when stdin closes\n    --interactive(-I)         # Respond to keypresses to quit, restart, or pause\n    --exit-on-error           # Exit when the command has an error\n    --postpone(-p)            # Wait until first change before running command\n    --poll: string            # Poll for filesystem changes\n    --emit-events-to: string@\"nu-complete watchexec emit_events_to\" # Configure event emission\n    --watch(-w): path         # Watch a specific file or directory\n    --watch-non-recursive(-W): path # Watch a specific directory, non-recursively\n    --watch-file(-F): path    # Watch files and directories from a file\n    --no-vcs-ignore           # Don't load gitignores\n    --no-project-ignore       # Don't load project-local ignores\n    --no-global-ignore        # Don't load global ignores\n    --no-default-ignore       # Don't use internal default ignores\n    --no-discover-ignore      # Don't discover ignore files at all\n    --ignore-nothing          # Don't ignore anything at all\n    --exts(-e): string        # Filename extensions to filter to\n    --filter(-f): string      # Filename patterns to filter to\n    --filter-file: path       # Files to load filters from\n    --project-origin: path    # Set the project origin\n    --filter-prog(-j): 
string # Filter programs\n    --ignore(-i): string      # Filename patterns to filter out\n    --ignore-file: path       # Files to load ignores from\n    --fs-events: string@\"nu-complete watchexec filter_fs_events\" # Filesystem events to filter to\n    --no-meta                 # Don't emit fs events for metadata changes\n    --verbose(-v)             # Set diagnostic log level\n    --log-file: path          # Write diagnostic logs to a file\n    --print-events            # Print events that trigger actions\n    --clear(-c): string@\"nu-complete watchexec screen_clear\" # Clear screen before running command\n    --notify(-N): string@\"nu-complete watchexec notify\" # Alert when commands start and end\n    --color: string@\"nu-complete watchexec color\" # When to use terminal colours\n    --timings                 # Print how long the command took to run\n    --quiet(-q)               # Don't print starting and stopping messages\n    --bell                    # Ring the terminal bell on command completion\n    --help(-h)                # Print help (see more with '--help')\n    --version(-V)             # Print version\n    ...program: string        # Command (program and arguments) to run on changes\n  ]\n\n}\n\nexport use completions *\n"
  },
  {
    "path": "completions/powershell",
    "content": "\nusing namespace System.Management.Automation\nusing namespace System.Management.Automation.Language\n\nRegister-ArgumentCompleter -Native -CommandName 'watchexec' -ScriptBlock {\n    param($wordToComplete, $commandAst, $cursorPosition)\n\n    $commandElements = $commandAst.CommandElements\n    $command = @(\n        'watchexec'\n        for ($i = 1; $i -lt $commandElements.Count; $i++) {\n            $element = $commandElements[$i]\n            if ($element -isnot [StringConstantExpressionAst] -or\n                $element.StringConstantType -ne [StringConstantType]::BareWord -or\n                $element.Value.StartsWith('-') -or\n                $element.Value -eq $wordToComplete) {\n                break\n        }\n        $element.Value\n    }) -join ';'\n\n    $completions = @(switch ($command) {\n        'watchexec' {\n            [CompletionResult]::new('--completions', '--completions', [CompletionResultType]::ParameterName, 'Generate a shell completions script')\n            [CompletionResult]::new('--shell', '--shell', [CompletionResultType]::ParameterName, 'Use a different shell')\n            [CompletionResult]::new('-E', '-E ', [CompletionResultType]::ParameterName, 'Add env vars to the command')\n            [CompletionResult]::new('--env', '--env', [CompletionResultType]::ParameterName, 'Add env vars to the command')\n            [CompletionResult]::new('--wrap-process', '--wrap-process', [CompletionResultType]::ParameterName, 'Configure how the process is wrapped')\n            [CompletionResult]::new('--stop-signal', '--stop-signal', [CompletionResultType]::ParameterName, 'Signal to send to stop the command')\n            [CompletionResult]::new('--stop-timeout', '--stop-timeout', [CompletionResultType]::ParameterName, 'Time to wait for the command to exit gracefully')\n            [CompletionResult]::new('--timeout', '--timeout', [CompletionResultType]::ParameterName, 'Kill the command if it runs longer than this duration')\n     
       [CompletionResult]::new('--delay-run', '--delay-run', [CompletionResultType]::ParameterName, 'Sleep before running the command')\n            [CompletionResult]::new('--workdir', '--workdir', [CompletionResultType]::ParameterName, 'Set the working directory')\n            [CompletionResult]::new('--socket', '--socket', [CompletionResultType]::ParameterName, 'Provide a socket to the command')\n            [CompletionResult]::new('-o', '-o', [CompletionResultType]::ParameterName, 'What to do when receiving events while the command is running')\n            [CompletionResult]::new('--on-busy-update', '--on-busy-update', [CompletionResultType]::ParameterName, 'What to do when receiving events while the command is running')\n            [CompletionResult]::new('-s', '-s', [CompletionResultType]::ParameterName, 'Send a signal to the process when it''s still running')\n            [CompletionResult]::new('--signal', '--signal', [CompletionResultType]::ParameterName, 'Send a signal to the process when it''s still running')\n            [CompletionResult]::new('--map-signal', '--map-signal', [CompletionResultType]::ParameterName, 'Translate signals from the OS to signals to send to the command')\n            [CompletionResult]::new('-d', '-d', [CompletionResultType]::ParameterName, 'Time to wait for new events before taking action')\n            [CompletionResult]::new('--debounce', '--debounce', [CompletionResultType]::ParameterName, 'Time to wait for new events before taking action')\n            [CompletionResult]::new('--poll', '--poll', [CompletionResultType]::ParameterName, 'Poll for filesystem changes')\n            [CompletionResult]::new('--emit-events-to', '--emit-events-to', [CompletionResultType]::ParameterName, 'Configure event emission')\n            [CompletionResult]::new('-w', '-w', [CompletionResultType]::ParameterName, 'Watch a specific file or directory')\n            [CompletionResult]::new('--watch', '--watch', 
[CompletionResultType]::ParameterName, 'Watch a specific file or directory')\n            [CompletionResult]::new('-W', '-W ', [CompletionResultType]::ParameterName, 'Watch a specific directory, non-recursively')\n            [CompletionResult]::new('--watch-non-recursive', '--watch-non-recursive', [CompletionResultType]::ParameterName, 'Watch a specific directory, non-recursively')\n            [CompletionResult]::new('-F', '-F ', [CompletionResultType]::ParameterName, 'Watch files and directories from a file')\n            [CompletionResult]::new('--watch-file', '--watch-file', [CompletionResultType]::ParameterName, 'Watch files and directories from a file')\n            [CompletionResult]::new('-e', '-e', [CompletionResultType]::ParameterName, 'Filename extensions to filter to')\n            [CompletionResult]::new('--exts', '--exts', [CompletionResultType]::ParameterName, 'Filename extensions to filter to')\n            [CompletionResult]::new('-f', '-f', [CompletionResultType]::ParameterName, 'Filename patterns to filter to')\n            [CompletionResult]::new('--filter', '--filter', [CompletionResultType]::ParameterName, 'Filename patterns to filter to')\n            [CompletionResult]::new('--filter-file', '--filter-file', [CompletionResultType]::ParameterName, 'Files to load filters from')\n            [CompletionResult]::new('--project-origin', '--project-origin', [CompletionResultType]::ParameterName, 'Set the project origin')\n            [CompletionResult]::new('-j', '-j', [CompletionResultType]::ParameterName, 'Filter programs')\n            [CompletionResult]::new('--filter-prog', '--filter-prog', [CompletionResultType]::ParameterName, 'Filter programs')\n            [CompletionResult]::new('-i', '-i', [CompletionResultType]::ParameterName, 'Filename patterns to filter out')\n            [CompletionResult]::new('--ignore', '--ignore', [CompletionResultType]::ParameterName, 'Filename patterns to filter out')\n            
[CompletionResult]::new('--ignore-file', '--ignore-file', [CompletionResultType]::ParameterName, 'Files to load ignores from')\n            [CompletionResult]::new('--fs-events', '--fs-events', [CompletionResultType]::ParameterName, 'Filesystem events to filter to')\n            [CompletionResult]::new('--log-file', '--log-file', [CompletionResultType]::ParameterName, 'Write diagnostic logs to a file')\n            [CompletionResult]::new('-c', '-c', [CompletionResultType]::ParameterName, 'Clear screen before running command')\n            [CompletionResult]::new('--clear', '--clear', [CompletionResultType]::ParameterName, 'Clear screen before running command')\n            [CompletionResult]::new('-N', '-N ', [CompletionResultType]::ParameterName, 'Alert when commands start and end')\n            [CompletionResult]::new('--notify', '--notify', [CompletionResultType]::ParameterName, 'Alert when commands start and end')\n            [CompletionResult]::new('--color', '--color', [CompletionResultType]::ParameterName, 'When to use terminal colours')\n            [CompletionResult]::new('--manual', '--manual', [CompletionResultType]::ParameterName, 'Show the manual page')\n            [CompletionResult]::new('--only-emit-events', '--only-emit-events', [CompletionResultType]::ParameterName, 'Only emit events to stdout, run no commands')\n            [CompletionResult]::new('-1', '-1', [CompletionResultType]::ParameterName, 'Testing only: exit Watchexec after the first run and return the command''s exit code')\n            [CompletionResult]::new('-n', '-n', [CompletionResultType]::ParameterName, 'Shorthand for ''--shell=none''')\n            [CompletionResult]::new('--no-environment', '--no-environment', [CompletionResultType]::ParameterName, 'Deprecated shorthand for ''--emit-events=none''')\n            [CompletionResult]::new('--no-process-group', '--no-process-group', [CompletionResultType]::ParameterName, 'Don''t use a process group')\n            
[CompletionResult]::new('-r', '-r', [CompletionResultType]::ParameterName, 'Restart the process if it''s still running')\n            [CompletionResult]::new('--restart', '--restart', [CompletionResultType]::ParameterName, 'Restart the process if it''s still running')\n            [CompletionResult]::new('--stdin-quit', '--stdin-quit', [CompletionResultType]::ParameterName, 'Exit when stdin closes')\n            [CompletionResult]::new('-I', '-I ', [CompletionResultType]::ParameterName, 'Respond to keypresses to quit, restart, or pause')\n            [CompletionResult]::new('--interactive', '--interactive', [CompletionResultType]::ParameterName, 'Respond to keypresses to quit, restart, or pause')\n            [CompletionResult]::new('--exit-on-error', '--exit-on-error', [CompletionResultType]::ParameterName, 'Exit when the command has an error')\n            [CompletionResult]::new('-p', '-p', [CompletionResultType]::ParameterName, 'Wait until first change before running command')\n            [CompletionResult]::new('--postpone', '--postpone', [CompletionResultType]::ParameterName, 'Wait until first change before running command')\n            [CompletionResult]::new('--no-vcs-ignore', '--no-vcs-ignore', [CompletionResultType]::ParameterName, 'Don''t load gitignores')\n            [CompletionResult]::new('--no-project-ignore', '--no-project-ignore', [CompletionResultType]::ParameterName, 'Don''t load project-local ignores')\n            [CompletionResult]::new('--no-global-ignore', '--no-global-ignore', [CompletionResultType]::ParameterName, 'Don''t load global ignores')\n            [CompletionResult]::new('--no-default-ignore', '--no-default-ignore', [CompletionResultType]::ParameterName, 'Don''t use internal default ignores')\n            [CompletionResult]::new('--no-discover-ignore', '--no-discover-ignore', [CompletionResultType]::ParameterName, 'Don''t discover ignore files at all')\n            [CompletionResult]::new('--ignore-nothing', '--ignore-nothing', 
[CompletionResultType]::ParameterName, 'Don''t ignore anything at all')\n            [CompletionResult]::new('--no-meta', '--no-meta', [CompletionResultType]::ParameterName, 'Don''t emit fs events for metadata changes')\n            [CompletionResult]::new('-v', '-v', [CompletionResultType]::ParameterName, 'Set diagnostic log level')\n            [CompletionResult]::new('--verbose', '--verbose', [CompletionResultType]::ParameterName, 'Set diagnostic log level')\n            [CompletionResult]::new('--print-events', '--print-events', [CompletionResultType]::ParameterName, 'Print events that trigger actions')\n            [CompletionResult]::new('--timings', '--timings', [CompletionResultType]::ParameterName, 'Print how long the command took to run')\n            [CompletionResult]::new('-q', '-q', [CompletionResultType]::ParameterName, 'Don''t print starting and stopping messages')\n            [CompletionResult]::new('--quiet', '--quiet', [CompletionResultType]::ParameterName, 'Don''t print starting and stopping messages')\n            [CompletionResult]::new('--bell', '--bell', [CompletionResultType]::ParameterName, 'Ring the terminal bell on command completion')\n            [CompletionResult]::new('-h', '-h', [CompletionResultType]::ParameterName, 'Print help (see more with ''--help'')')\n            [CompletionResult]::new('--help', '--help', [CompletionResultType]::ParameterName, 'Print help (see more with ''--help'')')\n            [CompletionResult]::new('-V', '-V ', [CompletionResultType]::ParameterName, 'Print version')\n            [CompletionResult]::new('--version', '--version', [CompletionResultType]::ParameterName, 'Print version')\n            break\n        }\n    })\n\n    $completions.Where{ $_.CompletionText -like \"$wordToComplete*\" } |\n        Sort-Object -Property ListItemText\n}\n"
  },
  {
    "path": "completions/zsh",
    "content": "#compdef watchexec\n\nautoload -U is-at-least\n\n_watchexec() {\n    typeset -A opt_args\n    typeset -a _arguments_options\n    local ret=1\n\n    if is-at-least 5.2; then\n        _arguments_options=(-s -S -C)\n    else\n        _arguments_options=(-s -C)\n    fi\n\n    local context curcontext=\"$curcontext\" state line\n    _arguments \"${_arguments_options[@]}\" : \\\n'(--manual --only-emit-events)--completions=[Generate a shell completions script]:SHELL:(bash elvish fish nu powershell zsh)' \\\n'--shell=[Use a different shell]:SHELL:_default' \\\n'*-E+[Add env vars to the command]:KEY=VALUE:_default' \\\n'*--env=[Add env vars to the command]:KEY=VALUE:_default' \\\n'--wrap-process=[Configure how the process is wrapped]:MODE:(group session none)' \\\n'--stop-signal=[Signal to send to stop the command]:SIGNAL:_default' \\\n'--stop-timeout=[Time to wait for the command to exit gracefully]:TIMEOUT:_default' \\\n'--timeout=[Kill the command if it runs longer than this duration]:TIMEOUT:_default' \\\n'--delay-run=[Sleep before running the command]:DURATION:_default' \\\n'--workdir=[Set the working directory]:DIRECTORY:_files -/' \\\n'*--socket=[Provide a socket to the command]:PORT:_default' \\\n'-o+[What to do when receiving events while the command is running]:MODE:(queue do-nothing restart signal)' \\\n'--on-busy-update=[What to do when receiving events while the command is running]:MODE:(queue do-nothing restart signal)' \\\n'(-r --restart)-s+[Send a signal to the process when it'\\''s still running]:SIGNAL:_default' \\\n'(-r --restart)--signal=[Send a signal to the process when it'\\''s still running]:SIGNAL:_default' \\\n'*--map-signal=[Translate signals from the OS to signals to send to the command]:SIGNAL:SIGNAL:_default' \\\n'-d+[Time to wait for new events before taking action]:TIMEOUT:_default' \\\n'--debounce=[Time to wait for new events before taking action]:TIMEOUT:_default' \\\n'--poll=[Poll for filesystem changes]::INTERVAL:_default' 
\\\n'--emit-events-to=[Configure event emission]:MODE:(environment stdio file json-stdio json-file none)' \\\n'*-w+[Watch a specific file or directory]:PATH:_files' \\\n'*--watch=[Watch a specific file or directory]:PATH:_files' \\\n'*-W+[Watch a specific directory, non-recursively]:PATH:_files' \\\n'*--watch-non-recursive=[Watch a specific directory, non-recursively]:PATH:_files' \\\n'-F+[Watch files and directories from a file]:PATH:_files' \\\n'--watch-file=[Watch files and directories from a file]:PATH:_files' \\\n'*-e+[Filename extensions to filter to]:EXTENSIONS:_default' \\\n'*--exts=[Filename extensions to filter to]:EXTENSIONS:_default' \\\n'*-f+[Filename patterns to filter to]:PATTERN:_default' \\\n'*--filter=[Filename patterns to filter to]:PATTERN:_default' \\\n'*--filter-file=[Files to load filters from]:PATH:_files' \\\n'--project-origin=[Set the project origin]:DIRECTORY:_files -/' \\\n'*-j+[Filter programs]:EXPRESSION:_default' \\\n'*--filter-prog=[Filter programs]:EXPRESSION:_default' \\\n'*-i+[Filename patterns to filter out]:PATTERN:_default' \\\n'*--ignore=[Filename patterns to filter out]:PATTERN:_default' \\\n'*--ignore-file=[Files to load ignores from]:PATH:_files' \\\n'*--fs-events=[Filesystem events to filter to]:EVENTS:(access create remove rename modify metadata)' \\\n'--log-file=[Write diagnostic logs to a file]::PATH:_files' \\\n'-c+[Clear screen before running command]::MODE:(clear reset)' \\\n'--clear=[Clear screen before running command]::MODE:(clear reset)' \\\n'-N+[Alert when commands start and end]::WHEN:((both\\:\"Notify on both start and end\"\nstart\\:\"Notify only when the command starts\"\nend\\:\"Notify only when the command ends\"))' \\\n'--notify=[Alert when commands start and end]::WHEN:((both\\:\"Notify on both start and end\"\nstart\\:\"Notify only when the command starts\"\nend\\:\"Notify only when the command ends\"))' \\\n'--color=[When to use terminal colours]:MODE:(auto always never)' \\\n'(--completions 
--only-emit-events)--manual[Show the manual page]' \\\n'(--completions --manual)--only-emit-events[Only emit events to stdout, run no commands]' \\\n'-1[Testing only\\: exit Watchexec after the first run and return the command'\\''s exit code]' \\\n'-n[Shorthand for '\\''--shell=none'\\'']' \\\n'--no-environment[Deprecated shorthand for '\\''--emit-events=none'\\'']' \\\n'--no-process-group[Don'\\''t use a process group]' \\\n'(-o --on-busy-update)-r[Restart the process if it'\\''s still running]' \\\n'(-o --on-busy-update)--restart[Restart the process if it'\\''s still running]' \\\n'--stdin-quit[Exit when stdin closes]' \\\n'-I[Respond to keypresses to quit, restart, or pause]' \\\n'--interactive[Respond to keypresses to quit, restart, or pause]' \\\n'--exit-on-error[Exit when the command has an error]' \\\n'-p[Wait until first change before running command]' \\\n'--postpone[Wait until first change before running command]' \\\n'--no-vcs-ignore[Don'\\''t load gitignores]' \\\n'--no-project-ignore[Don'\\''t load project-local ignores]' \\\n'--no-global-ignore[Don'\\''t load global ignores]' \\\n'--no-default-ignore[Don'\\''t use internal default ignores]' \\\n'--no-discover-ignore[Don'\\''t discover ignore files at all]' \\\n'--ignore-nothing[Don'\\''t ignore anything at all]' \\\n'(--fs-events)--no-meta[Don'\\''t emit fs events for metadata changes]' \\\n'*-v[Set diagnostic log level]' \\\n'*--verbose[Set diagnostic log level]' \\\n'--print-events[Print events that trigger actions]' \\\n'--timings[Print how long the command took to run]' \\\n'-q[Don'\\''t print starting and stopping messages]' \\\n'--quiet[Don'\\''t print starting and stopping messages]' \\\n'--bell[Ring the terminal bell on command completion]' \\\n'-h[Print help (see more with '\\''--help'\\'')]' \\\n'--help[Print help (see more with '\\''--help'\\'')]' \\\n'-V[Print version]' \\\n'--version[Print version]' \\\n'*::program -- Command (program and arguments) to run on changes:_cmdstring' \\\n&& 
ret=0\n}\n\n(( $+functions[_watchexec_commands] )) ||\n_watchexec_commands() {\n    local commands; commands=()\n    _describe -t commands 'watchexec commands' commands \"$@\"\n}\n\nif [ \"$funcstack[1]\" = \"_watchexec\" ]; then\n    _watchexec \"$@\"\nelse\n    compdef _watchexec watchexec\nfi\n"
  },
  {
    "path": "crates/bosion/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v2.0.0 (2026-01-20)\n\n- Remove `GIT_COMMIT_DESCRIPTION`. In practice this had zero usage, and dropping it means we can stop depending on gix.\n- Deps: remove gix. This drops dependencies from 327 crates to just 6.\n\n## v1.1.3 (2025-05-15)\n\n- Deps: gix 0.72\n\n## v1.1.2 (2025-02-09)\n\n- Deps: gix 0.70\n\n## v1.1.1 (2024-10-14)\n\n- Deps: gix 0.66\n\n## v1.1.0 (2024-05-16)\n\n- Add `git-describe` support (#832, by @lu-zero)\n\n## v1.0.3 (2024-04-20)\n\n- Deps: gix 0.62\n\n## v1.0.2 (2023-11-26)\n\n- Deps: upgrade to gix 0.55\n\n## v1.0.1 (2023-07-02)\n\n- Deps: upgrade to gix 0.44\n\n## v1.0.0 (2023-03-05)\n\n- Initial release.\n"
  },
  {
    "path": "crates/bosion/Cargo.toml",
    "content": "[package]\nname = \"bosion\"\nversion = \"2.0.0\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0 OR MIT\"\ndescription = \"Gather build information for verbose versions flags\"\nkeywords = [\"version\", \"git\", \"verbose\", \"long\"]\n\ndocumentation = \"https://docs.rs/bosion\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.64.0\"\nedition = \"2021\"\n\n[dependencies]\nflate2 = { version = \"1.0.35\", optional = true }\n\n[dependencies.time]\nversion = \"0.3.30\"\nfeatures = [\"macros\", \"formatting\"]\n\n[features]\ndefault = [\"git\", \"reproducible\", \"std\"]\n\n### Read from git repo, provide GIT_* vars\ngit = [\"dep:flate2\"]\n\n### Read from SOURCE_DATE_EPOCH when available\nreproducible = []\n\n### Provide a long_version_with() function to add extra info\n###\n### Specifically this is std support for the _using_ crate, not for the bosion crate itself. It's\n### assumed that the bosion crate is always std, as it runs in build.rs.\nstd = []\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\nneedless_doctest_main = \"allow\"\n"
  },
  {
    "path": "crates/bosion/README.md",
    "content": "# Bosion\n\n_Gather build information for verbose versions flags._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license] or [MIT](https://passcod.mit-license.org).\n- Status: maintained.\n\n[docs]: https://docs.rs/bosion\n[license]: ../../LICENSE\n\n## Quick start\n\nIn your `Cargo.toml`:\n\n```toml\n[build-dependencies]\nbosion = \"2.0.0\"\n```\n\nIn your `build.rs`:\n\n```rust ,no_run\nfn main() {\n    bosion::gather();\n}\n```\n\nIn your `src/main.rs`:\n\n```rust ,ignore\ninclude!(env!(\"BOSION_PATH\"));\n\nfn main() {\n    // default output, like rustc -Vv\n    println!(\"{}\", Bosion::LONG_VERSION);\n\n    // with additional fields\n    println!(\"{}\", Bosion::long_version_with(&[\n        (\"custom data\", \"value\"),\n        (\"LLVM version\", \"15.0.6\"),\n    ]));\n\n    // enabled features like +feature +an-other\n    println!(\"{}\", Bosion::CRATE_FEATURE_STRING);\n\n    // the raw data\n    println!(\"{}\", Bosion::GIT_COMMIT_HASH);\n    println!(\"{}\", Bosion::GIT_COMMIT_SHORTHASH);\n    println!(\"{}\", Bosion::GIT_COMMIT_DATE);\n    println!(\"{}\", Bosion::GIT_COMMIT_DATETIME);\n    println!(\"{}\", Bosion::CRATE_VERSION);\n    println!(\"{:?}\", Bosion::CRATE_FEATURES);\n    println!(\"{}\", Bosion::BUILD_DATE);\n    println!(\"{}\", Bosion::BUILD_DATETIME);\n}\n```\n\n## Advanced usage\n\nGenerating a struct with public visibility:\n\n```rust ,no_run\n// build.rs\nbosion::gather_pub();\n```\n\nCustomising the output file and struct names:\n\n```rust ,no_run\n// build.rs\nbosion::gather_to(\"buildinfo.rs\", \"Build\", /* public? 
*/ false);\n```\n\nOutputting build-time environment variables instead of source:\n\n```rust ,ignore\n// build.rs\nbosion::gather_to_env();\n\n// src/main.rs\nfn main() {\n    println!(\"{}\", env!(\"BOSION_GIT_COMMIT_HASH\"));\n    println!(\"{}\", env!(\"BOSION_GIT_COMMIT_SHORTHASH\"));\n    println!(\"{}\", env!(\"BOSION_GIT_COMMIT_DATE\"));\n    println!(\"{}\", env!(\"BOSION_GIT_COMMIT_DATETIME\"));\n    println!(\"{}\", env!(\"BOSION_BUILD_DATE\"));\n    println!(\"{}\", env!(\"BOSION_BUILD_DATETIME\"));\n    println!(\"{}\", env!(\"BOSION_CRATE_VERSION\"));\n    println!(\"{}\", env!(\"BOSION_CRATE_FEATURES\")); // comma-separated\n}\n```\n\nCustom env prefix:\n\n```rust ,no_run\n// build.rs\nbosion::gather_to_env_with_prefix(\"MYAPP_\");\n```\n\n## Features\n\n- `reproducible`: reads [`SOURCE_DATE_EPOCH`](https://reproducible-builds.org/docs/source-date-epoch/) (default).\n- `git`: enables gathering git information (default).\n- `std`: enables the `long_version_with` method (default).\n  Specifically, this is about the downstream crate's std support, not Bosion's, which always requires std.\n\n## Why not...?\n\n- [bugreport](https://github.com/sharkdp/bugreport): runtime library, for bug information.\n- [git-testament](https://github.com/kinnison/git-testament): uses the `git` CLI instead of gitoxide.\n- [human-panic](https://github.com/rust-cli/human-panic): runtime library, for panics.\n- [shadow-rs](https://github.com/baoyachi/shadow-rs): uses libgit2 instead of gitoxide, doesn't rebuild on git changes.\n- [vergen](https://github.com/rustyhorde/vergen): uses the `git` CLI instead of gitoxide.\n\nBosion also requires no dependencies outside of build.rs, and was specifically made for crates\ninstalled in a variety of ways, like with `cargo install`, from pre-built binary, from source with\ngit, or from source without git (like a tarball), on a variety of platforms. 
Its default output with\n[clap](https://clap.rs) is almost exactly like `rustc -Vv`.\n\n## Examples\n\nThe [examples](./examples) directory contains a practical and runnable [clap-based example](./examples/clap/), as well\nas several other crates which are actually used for integration testing.\n\nHere is the output for the Watchexec CLI:\n\n```plain\nwatchexec 1.21.1 (5026793 2023-03-05)\ncommit-hash: 5026793a12ff895edf2dafb92111e7bd1767650e\ncommit-date: 2023-03-05\nbuild-date: 2023-03-05\nrelease: 1.21.1\nfeatures:\n```\n\nFor comparison, here's `rustc -Vv`:\n\n```plain\nrustc 1.67.1 (d5a82bbd2 2023-02-07)\nbinary: rustc\ncommit-hash: d5a82bbd26e1ad8b7401f6a718a9c57c96905483\ncommit-date: 2023-02-07\nhost: x86_64-unknown-linux-gnu\nrelease: 1.67.1\nLLVM version: 15.0.6\n```\n"
  },
  {
    "path": "crates/bosion/examples/clap/Cargo.toml",
    "content": "[package]\nname = \"bosion-example-clap\"\nversion = \"0.1.0\"\npublish = false\nedition = \"2021\"\n\n[workspace]\n\n[features]\ndefault = [\"foo\"]\nfoo = []\n\n[build-dependencies.bosion]\nversion = \"*\"\npath = \"../..\"\n\n[dependencies.clap]\nversion = \"4.1.8\"\nfeatures = [\"cargo\", \"derive\"]\n"
  },
  {
    "path": "crates/bosion/examples/clap/build.rs",
    "content": "fn main() {\n\tbosion::gather();\n}\n"
  },
  {
    "path": "crates/bosion/examples/clap/src/main.rs",
    "content": "use clap::Parser;\n\ninclude!(env!(\"BOSION_PATH\"));\n\n#[derive(Parser)]\n#[clap(version, long_version = Bosion::LONG_VERSION)]\nstruct Args {\n\t#[clap(long)]\n\textras: bool,\n\n\t#[clap(long)]\n\tfeatures: bool,\n\n\t#[clap(long)]\n\tdates: bool,\n\n\t#[clap(long)]\n\thashes: bool,\n}\n\nfn main() {\n\tlet args = Args::parse();\n\n\tif args.extras {\n\t\tprintln!(\n\t\t\t\"{}\",\n\t\t\tBosion::long_version_with(&[(\"extra\", \"field\"), (\"custom\", \"1.2.3\"),])\n\t\t);\n\t} else if args.features {\n\t\tprintln!(\"Features: {}\", Bosion::CRATE_FEATURE_STRING);\n\t} else if args.dates {\n\t\tprintln!(\"commit date: {}\", Bosion::GIT_COMMIT_DATE);\n\t\tprintln!(\"commit datetime: {}\", Bosion::GIT_COMMIT_DATETIME);\n\t\tprintln!(\"build date: {}\", Bosion::BUILD_DATE);\n\t\tprintln!(\"build datetime: {}\", Bosion::BUILD_DATETIME);\n\t} else if args.hashes {\n\t\tprintln!(\"commit hash: {}\", Bosion::GIT_COMMIT_HASH);\n\t\tprintln!(\"commit shorthash: {}\", Bosion::GIT_COMMIT_SHORTHASH);\n\t} else {\n\t\tprintln!(\"{}\", Bosion::LONG_VERSION);\n\t}\n}\n"
  },
  {
    "path": "crates/bosion/examples/default/Cargo.toml",
    "content": "[package]\nname = \"bosion-test-default\"\nversion = \"0.1.0\"\npublish = false\nedition = \"2021\"\n\n[workspace]\n\n[features]\ndefault = [\"foo\"]\nfoo = []\n\n[build-dependencies.bosion]\nversion = \"*\"\npath = \"../..\"\n\n[dependencies]\nleon = { version = \"3.0.2\", default-features = false }\nsnapbox = \"0.5.9\"\ntime = { version = \"0.3.30\", features = [\"formatting\", \"macros\"] }\n"
  },
  {
    "path": "crates/bosion/examples/default/build.rs",
    "content": "fn main() {\n\tbosion::gather();\n}\n"
  },
  {
    "path": "crates/bosion/examples/default/src/common.rs",
    "content": "#[cfg(test)]\npub(crate) fn git_commit_info(format: &str) -> String {\n\tlet output = std::process::Command::new(\"git\")\n\t\t.arg(\"show\")\n\t\t.arg(\"--no-notes\")\n\t\t.arg(\"--no-patch\")\n\t\t.arg(format!(\"--pretty=format:{format}\"))\n\t\t.output()\n\t\t.expect(\"git\");\n\n\tString::from_utf8(output.stdout)\n\t\t.expect(\"git\")\n\t\t.trim()\n\t\t.to_string()\n}\n\n#[macro_export]\nmacro_rules! test_snapshot {\n\t($name:ident, $actual:expr) => {\n\t\t#[cfg(test)]\n\t\t#[test]\n\t\tfn $name() {\n\t\t\tuse std::str::FromStr;\n\t\t\tlet gittime = ::time::OffsetDateTime::from_unix_timestamp(\n\t\t\t\ti64::from_str(&crate::common::git_commit_info(\"%ct\")).expect(\"git i64\"),\n\t\t\t)\n\t\t\t.expect(\"git time\");\n\n\t\t\t::snapbox::Assert::new().matches(\n\t\t\t\t::leon::Template::parse(\n\t\t\t\t\tstd::fs::read_to_string(format!(\"../snapshots/{}.txt\", stringify!($name)))\n\t\t\t\t\t\t.expect(\"read file\")\n\t\t\t\t\t\t.trim(),\n\t\t\t\t)\n\t\t\t\t.expect(\"leon parse\")\n\t\t\t\t.render(&[\n\t\t\t\t\t(\n\t\t\t\t\t\t\"today date\".to_string(),\n\t\t\t\t\t\t::time::OffsetDateTime::now_utc()\n\t\t\t\t\t\t\t.format(::time::macros::format_description!(\"[year]-[month]-[day]\"))\n\t\t\t\t\t\t\t.unwrap(),\n\t\t\t\t\t),\n\t\t\t\t\t(\"git hash\".to_string(), crate::common::git_commit_info(\"%H\")),\n\t\t\t\t\t(\n\t\t\t\t\t\t\"git shorthash\".to_string(),\n\t\t\t\t\t\tcrate::common::git_commit_info(\"%H\").chars().take(8).collect(),\n\t\t\t\t\t),\n\t\t\t\t\t(\n\t\t\t\t\t\t\"git date\".to_string(),\n\t\t\t\t\t\tgittime\n\t\t\t\t\t\t\t.format(::time::macros::format_description!(\"[year]-[month]-[day]\"))\n\t\t\t\t\t\t\t.expect(\"git date format\"),\n\t\t\t\t\t),\n\t\t\t\t\t(\n\t\t\t\t\t\t\"git datetime\".to_string(),\n\t\t\t\t\t\tgittime\n\t\t\t\t\t\t\t.format(::time::macros::format_description!(\n\t\t\t\t\t\t\t\t\"[year]-[month]-[day] [hour]:[minute]:[second]\"\n\t\t\t\t\t\t\t))\n\t\t\t\t\t\t\t.expect(\"git time 
format\"),\n\t\t\t\t\t),\n\t\t\t\t])\n\t\t\t\t.expect(\"leon render\"),\n\t\t\t\t$actual,\n\t\t\t);\n\t\t}\n\t};\n}\n"
  },
  {
    "path": "crates/bosion/examples/default/src/main.rs",
    "content": "include!(env!(\"BOSION_PATH\"));\n\nmod common;\nfn main() {}\n\ntest_snapshot!(crate_version, Bosion::CRATE_VERSION);\n\ntest_snapshot!(crate_features, format!(\"{:#?}\", Bosion::CRATE_FEATURES));\n\ntest_snapshot!(build_date, Bosion::BUILD_DATE);\n\ntest_snapshot!(build_datetime, Bosion::BUILD_DATETIME);\n\ntest_snapshot!(git_commit_hash, Bosion::GIT_COMMIT_HASH);\n\ntest_snapshot!(git_commit_shorthash, Bosion::GIT_COMMIT_SHORTHASH);\n\ntest_snapshot!(git_commit_date, Bosion::GIT_COMMIT_DATE);\n\ntest_snapshot!(git_commit_datetime, Bosion::GIT_COMMIT_DATETIME);\n\ntest_snapshot!(default_long_version, Bosion::LONG_VERSION);\n\ntest_snapshot!(\n\tdefault_long_version_with,\n\tBosion::long_version_with(&[(\"extra\", \"field\"), (\"custom\", \"1.2.3\")])\n);\n"
  },
  {
    "path": "crates/bosion/examples/no-git/Cargo.toml",
    "content": "[package]\nname = \"bosion-test-no-git\"\nversion = \"0.1.0\"\npublish = false\nedition = \"2021\"\n\n[workspace]\n\n[features]\ndefault = [\"foo\"]\nfoo = []\n\n[build-dependencies.bosion]\nversion = \"*\"\npath = \"../..\"\ndefault-features = false\nfeatures = [\"std\"]\n\n[dependencies]\nleon = { version = \"3.0.2\", default-features = false }\nsnapbox = \"0.5.9\"\ntime = { version = \"0.3.30\", features = [\"formatting\", \"macros\"] }\n"
  },
  {
    "path": "crates/bosion/examples/no-git/build.rs",
    "content": "fn main() {\n\tbosion::gather();\n}\n"
  },
  {
    "path": "crates/bosion/examples/no-git/src/main.rs",
    "content": "include!(env!(\"BOSION_PATH\"));\n\n#[path = \"../../default/src/common.rs\"]\nmod common;\nfn main() {}\n\ntest_snapshot!(crate_version, Bosion::CRATE_VERSION);\n\ntest_snapshot!(crate_features, format!(\"{:#?}\", Bosion::CRATE_FEATURES));\n\ntest_snapshot!(build_date, Bosion::BUILD_DATE);\n\ntest_snapshot!(build_datetime, Bosion::BUILD_DATETIME);\n\ntest_snapshot!(no_git_long_version, Bosion::LONG_VERSION);\n\ntest_snapshot!(\n\tno_git_long_version_with,\n\tBosion::long_version_with(&[(\"extra\", \"field\"), (\"custom\", \"1.2.3\")])\n);\n"
  },
  {
    "path": "crates/bosion/examples/no-std/Cargo.toml",
    "content": "[package]\nname = \"bosion-test-no-std\"\nversion = \"0.1.0\"\npublish = false\nedition = \"2021\"\n\n[profile.dev]\npanic = \"abort\"\n\n[profile.release]\npanic = \"abort\"\n\n[workspace]\n\n[features]\ndefault = [\"foo\"]\nfoo = []\n\n[build-dependencies.bosion]\nversion = \"*\"\npath = \"../..\"\ndefault-features = false\n\n[dependencies]\nleon = { version = \"3.0.2\", default-features = false }\nsnapbox = \"0.5.9\"\ntime = { version = \"0.3.30\", features = [\"formatting\", \"macros\"] }\n"
  },
  {
    "path": "crates/bosion/examples/no-std/build.rs",
    "content": "fn main() {\n\tbosion::gather();\n}\n"
  },
  {
    "path": "crates/bosion/examples/no-std/src/main.rs",
    "content": "#![cfg_attr(not(test), no_main)]\n#![cfg_attr(not(test), no_std)]\n\n#[cfg(not(test))]\nuse core::panic::PanicInfo;\n\n#[cfg(not(test))]\n#[panic_handler]\nfn panic(_panic: &PanicInfo<'_>) -> ! {\n    loop {}\n}\n\ninclude!(env!(\"BOSION_PATH\"));\n\n#[cfg(test)]\n#[path = \"../../default/src/common.rs\"]\nmod common;\n\n#[cfg(test)]\nmod test {\n\tuse super::*;\n\n\ttest_snapshot!(crate_version, Bosion::CRATE_VERSION);\n\n\ttest_snapshot!(crate_features, format!(\"{:#?}\", Bosion::CRATE_FEATURES));\n\n\ttest_snapshot!(build_date, Bosion::BUILD_DATE);\n\n\ttest_snapshot!(build_datetime, Bosion::BUILD_DATETIME);\n\n\ttest_snapshot!(no_git_long_version, Bosion::LONG_VERSION);\n}\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/build_date.txt",
    "content": "{today date}\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/build_datetime.txt",
    "content": "{today date} [..]\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/crate_features.txt",
    "content": "[\n    \"default\",\n    \"foo\",\n]\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/crate_version.txt",
    "content": "0.1.0\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/default_long_version.txt",
    "content": "0.1.0 ({git shorthash} {git date}) +foo\ncommit-hash: {git hash}\ncommit-date: {git date}\nbuild-date: {today date}\nrelease: 0.1.0\nfeatures: default,foo\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/default_long_version_with.txt",
    "content": "0.1.0 ({git shorthash} {git date}) +foo\ncommit-hash: {git hash}\ncommit-date: {git date}\nbuild-date: {today date}\nrelease: 0.1.0\nfeatures: default,foo\nextra: field\ncustom: 1.2.3\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/git_commit_date.txt",
    "content": "{git date}\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/git_commit_datetime.txt",
    "content": "{git datetime}\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/git_commit_hash.txt",
    "content": "{git hash}\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/git_commit_shorthash.txt",
    "content": "{git shorthash}\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/no_git_long_version.txt",
    "content": "0.1.0 ({today date}) +foo\nbuild-date: {today date}\nrelease: 0.1.0\nfeatures: default,foo\n"
  },
  {
    "path": "crates/bosion/examples/snapshots/no_git_long_version_with.txt",
    "content": "0.1.0 ({today date}) +foo\nbuild-date: {today date}\nrelease: 0.1.0\nfeatures: default,foo\nextra: field\ncustom: 1.2.3\n"
  },
  {
    "path": "crates/bosion/release.toml",
    "content": "pre-release-commit-message = \"release: bosion v{{version}}\"\ntag-prefix = \"bosion-\"\ntag-message = \"bosion {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n\n[[pre-release-replacements]]\nfile = \"README.md\"\nsearch = \"^bosion = \\\".*\\\"$\"\nreplace = \"bosion = \\\"{{version}}\\\"\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/bosion/run-tests.sh",
    "content": "#!/bin/bash\n\nset -euo pipefail\n\nfor test in examples/*; do\n\techo \"Testing $test\"\n\tpushd $test\n\tif ! test -f Cargo.toml; then\n\t\tpopd\n\t\tcontinue\n\tfi\n\n\tcargo check\n\tcargo test\n\n\tpopd\ndone\n"
  },
  {
    "path": "crates/bosion/src/info.rs",
    "content": "use std::{\n\tenv::var,\n\tpath::{Path, PathBuf},\n};\n\nuse time::{format_description::FormatItem, macros::format_description, OffsetDateTime};\n\n/// Gathered build-time information\n///\n/// This struct contains all the information gathered by `bosion`. It is not meant to be used\n/// directly under normal circumstances, but is public for documentation purposes and if you wish\n/// to build your own frontend for whatever reason. In that case, note that no effort has been made\n/// to make this usable outside of the build.rs environment.\n///\n/// The `git` field is only available when the `git` feature is enabled, and if there is a git\n/// repository to read from. The repository is discovered by walking up the directory tree until one\n/// is found, which means workspaces or more complex monorepos are automatically supported. If there\n/// are any errors reading the repository, the `git` field will be `None` and a rustc warning will\n/// be printed.\n#[derive(Debug, Clone)]\npub struct Info {\n\t/// The crate version, as read from the `CARGO_PKG_VERSION` environment variable.\n\tpub crate_version: String,\n\n\t/// The crate features, as found by the presence of `CARGO_FEATURE_*` environment variables.\n\t///\n\t/// These are normalised to lowercase and have underscores replaced by hyphens.\n\tpub crate_features: Vec<String>,\n\n\t/// The build date, in the format `YYYY-MM-DD`, at UTC.\n\t///\n\t/// This is either current as of build time, or from the timestamp specified by the\n\t/// `SOURCE_DATE_EPOCH` environment variable, for\n\t/// [reproducible builds](https://reproducible-builds.org/).\n\tpub build_date: String,\n\n\t/// The build datetime, in the format `YYYY-MM-DD HH:MM:SS`, at UTC.\n\t///\n\t/// This is either current as of build time, or from the timestamp specified by the\n\t/// `SOURCE_DATE_EPOCH` environment variable, for\n\t/// [reproducible builds](https://reproducible-builds.org/).\n\tpub build_datetime: String,\n\n\t/// Git 
repository information, if available.\n\tpub git: Option<GitInfo>,\n}\n\ntrait ErrString<T> {\n\tfn err_string(self) -> Result<T, String>;\n}\n\nimpl<T, E> ErrString<T> for Result<T, E>\nwhere\n\tE: std::fmt::Display,\n{\n\tfn err_string(self) -> Result<T, String> {\n\t\tself.map_err(|e| e.to_string())\n\t}\n}\n\nconst DATE_FORMAT: &[FormatItem<'static>] = format_description!(\"[year]-[month]-[day]\");\nconst DATETIME_FORMAT: &[FormatItem<'static>] =\n\tformat_description!(\"[year]-[month]-[day] [hour]:[minute]:[second]\");\n\nimpl Info {\n\t/// Gathers build-time information\n\t///\n\t/// This is not meant to be used directly under normal circumstances, but is public if you wish\n\t/// to build your own frontend for whatever reason. In that case, note that no effort has been\n\t/// made to make this usable outside of the build.rs environment.\n\tpub fn gather() -> Result<Self, String> {\n\t\tlet build_date = Self::build_date()?;\n\n\t\tOk(Self {\n\t\t\tcrate_version: var(\"CARGO_PKG_VERSION\").err_string()?,\n\t\t\tcrate_features: Self::features(),\n\t\t\tbuild_date: build_date.format(DATE_FORMAT).err_string()?,\n\t\t\tbuild_datetime: build_date.format(DATETIME_FORMAT).err_string()?,\n\n\t\t\t#[cfg(feature = \"git\")]\n\t\t\tgit: GitInfo::gather()\n\t\t\t\t.map_err(|e| {\n\t\t\t\t\tprintln!(\"cargo:warning=git info gathering failed: {e}\");\n\t\t\t\t})\n\t\t\t\t.ok(),\n\t\t\t#[cfg(not(feature = \"git\"))]\n\t\t\tgit: None,\n\t\t})\n\t}\n\n\tfn build_date() -> Result<OffsetDateTime, String> {\n\t\tif cfg!(feature = \"reproducible\") {\n\t\t\tif let Ok(date) = var(\"SOURCE_DATE_EPOCH\") {\n\t\t\t\tif let Ok(date) = date.parse::<i64>() {\n\t\t\t\t\treturn OffsetDateTime::from_unix_timestamp(date).err_string();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tOk(OffsetDateTime::now_utc())\n\t}\n\n\tfn features() -> Vec<String> {\n\t\tlet mut features = Vec::new();\n\n\t\tfor (key, _) in std::env::vars() {\n\t\t\tif let Some(stripped) = key.strip_prefix(\"CARGO_FEATURE_\") 
{\n\t\t\t\tfeatures.push(stripped.replace('_', \"-\").to_lowercase());\n\t\t\t}\n\t\t}\n\n\t\tfeatures\n\t}\n\n\tpub(crate) fn set_reruns(&self) {\n\t\tif cfg!(feature = \"reproducible\") {\n\t\t\tprintln!(\"cargo:rerun-if-env-changed=SOURCE_DATE_EPOCH\");\n\t\t}\n\n\t\tif let Some(git) = &self.git {\n\t\t\tlet git_head = git.git_root.join(\"HEAD\");\n\t\t\tprintln!(\"cargo:rerun-if-changed={}\", git_head.display());\n\t\t}\n\t}\n}\n\n/// Git repository information\n#[derive(Debug, Clone)]\npub struct GitInfo {\n\t/// The absolute path to the git repository's data folder.\n\t///\n\t/// In a normal repository, this is `.git`, _not_ the index or working directory.\n\tpub git_root: PathBuf,\n\n\t/// The full hash of the current commit.\n\t///\n\t/// Note that this makes no effort to handle dirty working directories, so it may not be\n\t/// representative of the current state of the code.\n\tpub git_hash: String,\n\n\t/// The short hash of the current commit.\n\t///\n\t/// This is truncated to 8 characters.\n\tpub git_shorthash: String,\n\n\t/// The date of the current commit, in the format `YYYY-MM-DD`, at UTC.\n\tpub git_date: String,\n\n\t/// The datetime of the current commit, in the format `YYYY-MM-DD HH:MM:SS`, at UTC.\n\tpub git_datetime: String,\n}\n\n#[cfg(feature = \"git\")]\nimpl GitInfo {\n\tfn gather() -> Result<Self, String> {\n\t\tlet git_root = Self::find_git_dir(Path::new(\".\"))\n\t\t\t.ok_or_else(|| \"no git repository found\".to_string())?;\n\n\t\tlet hash =\n\t\t\tSelf::resolve_head(&git_root).ok_or_else(|| \"could not resolve HEAD\".to_string())?;\n\n\t\tlet timestamp = Self::read_commit_timestamp(&git_root, &hash)\n\t\t\t.ok_or_else(|| \"could not read commit timestamp\".to_string())?;\n\n\t\tlet timestamp = OffsetDateTime::from_unix_timestamp(timestamp).err_string()?;\n\n\t\tOk(Self {\n\t\t\tgit_root: git_root.canonicalize().err_string()?,\n\t\t\tgit_shorthash: hash.chars().take(8).collect(),\n\t\t\tgit_hash: hash,\n\t\t\tgit_date: 
timestamp.format(DATE_FORMAT).err_string()?,\n\t\t\tgit_datetime: timestamp.format(DATETIME_FORMAT).err_string()?,\n\t\t})\n\t}\n\n\tfn find_git_dir(start: &Path) -> Option<PathBuf> {\n\t\tuse std::fs;\n\n\t\tlet mut current = start.canonicalize().ok()?;\n\t\tloop {\n\t\t\tlet git_dir = current.join(\".git\");\n\t\t\tif git_dir.is_dir() {\n\t\t\t\treturn Some(git_dir);\n\t\t\t}\n\t\t\t// Handle git worktrees: .git can be a file containing \"gitdir: <path>\"\n\t\t\tif git_dir.is_file() {\n\t\t\t\tlet content = fs::read_to_string(&git_dir).ok()?;\n\t\t\t\tif let Some(path) = content.strip_prefix(\"gitdir: \") {\n\t\t\t\t\t// The path may be relative to the directory containing the .git file;\n\t\t\t\t\t// join resolves that (and keeps absolute paths as-is)\n\t\t\t\t\treturn Some(current.join(path.trim()));\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !current.pop() {\n\t\t\t\treturn None;\n\t\t\t}\n\t\t}\n\t}\n\n\tfn resolve_head(git_dir: &Path) -> Option<String> {\n\t\tuse std::fs;\n\n\t\tlet head_content = fs::read_to_string(git_dir.join(\"HEAD\")).ok()?;\n\t\tlet head_content = head_content.trim();\n\n\t\tif let Some(ref_path) = head_content.strip_prefix(\"ref: \") {\n\t\t\tSelf::resolve_ref(git_dir, ref_path)\n\t\t} else {\n\t\t\t// Detached HEAD - direct commit hash\n\t\t\tSome(head_content.to_string())\n\t\t}\n\t}\n\n\tfn resolve_ref(git_dir: &Path, ref_path: &str) -> Option<String> {\n\t\tuse std::fs;\n\n\t\t// Try loose ref first\n\t\tlet ref_file = git_dir.join(ref_path);\n\t\tif let Ok(content) = fs::read_to_string(&ref_file) {\n\t\t\treturn Some(content.trim().to_string());\n\t\t}\n\n\t\t// Try packed-refs\n\t\tlet packed_refs = git_dir.join(\"packed-refs\");\n\t\tif let Ok(content) = fs::read_to_string(&packed_refs) {\n\t\t\tfor line in content.lines() {\n\t\t\t\tif line.starts_with('#') || line.starts_with('^') {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tlet parts: Vec<_> = line.split_whitespace().collect();\n\t\t\t\tif parts.len() >= 2 && parts[1] == ref_path {\n\t\t\t\t\treturn Some(parts[0].to_string());\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tNone\n\t}\n\n\tfn read_commit_timestamp(git_dir: &Path, hash: &str) -> 
Option<i64> {\n\t\t// Try loose object first\n\t\tif let Some(timestamp) = Self::read_loose_commit_timestamp(git_dir, hash) {\n\t\t\treturn Some(timestamp);\n\t\t}\n\n\t\t// Try packfiles\n\t\tSelf::read_packed_commit_timestamp(git_dir, hash)\n\t}\n\n\tfn read_loose_commit_timestamp(git_dir: &Path, hash: &str) -> Option<i64> {\n\t\tuse flate2::read::ZlibDecoder;\n\t\tuse std::{fs, io::Read};\n\n\t\tlet (prefix, suffix) = hash.split_at(2);\n\t\tlet object_path = git_dir.join(\"objects\").join(prefix).join(suffix);\n\n\t\tlet compressed = fs::read(&object_path).ok()?;\n\t\tlet mut decoder = ZlibDecoder::new(&compressed[..]);\n\t\tlet mut decompressed = Vec::new();\n\t\tdecoder.read_to_end(&mut decompressed).ok()?;\n\n\t\tSelf::parse_commit_timestamp(&decompressed)\n\t}\n\n\tfn read_packed_commit_timestamp(git_dir: &Path, hash: &str) -> Option<i64> {\n\t\tuse std::fs;\n\n\t\tlet pack_dir = git_dir.join(\"objects\").join(\"pack\");\n\t\tlet entries = fs::read_dir(&pack_dir).ok()?;\n\n\t\t// Parse the hash into bytes for comparison\n\t\tlet hash_bytes = Self::hex_to_bytes(hash)?;\n\n\t\tfor entry in entries.flatten() {\n\t\t\tlet path = entry.path();\n\t\t\tif path.extension().and_then(|e| e.to_str()) == Some(\"idx\") {\n\t\t\t\tif let Some(offset) = Self::find_object_in_index(&path, &hash_bytes) {\n\t\t\t\t\tlet pack_path = path.with_extension(\"pack\");\n\t\t\t\t\tif let Some(data) = Self::read_pack_object(&pack_path, offset) {\n\t\t\t\t\t\treturn Self::parse_commit_timestamp(&data);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tNone\n\t}\n\n\tfn hex_to_bytes(hex: &str) -> Option<[u8; 20]> {\n\t\tlet mut bytes = [0u8; 20];\n\t\tif hex.len() != 40 {\n\t\t\treturn None;\n\t\t}\n\t\tfor (i, chunk) in hex.as_bytes().chunks(2).enumerate() {\n\t\t\tlet s = std::str::from_utf8(chunk).ok()?;\n\t\t\tbytes[i] = u8::from_str_radix(s, 16).ok()?;\n\t\t}\n\t\tSome(bytes)\n\t}\n\n\tfn find_object_in_index(idx_path: &Path, hash: &[u8; 20]) -> Option<u64> {\n\t\tuse 
std::{\n\t\t\tfs::File,\n\t\t\tio::{Read, Seek, SeekFrom},\n\t\t};\n\n\t\tlet mut file = File::open(idx_path).ok()?;\n\t\tlet mut header = [0u8; 8];\n\t\tfile.read_exact(&mut header).ok()?;\n\n\t\t// Check for v2 index magic: 0xff744f63\n\t\tif header[0..4] != [0xff, 0x74, 0x4f, 0x63] {\n\t\t\treturn None; // Only support v2 index\n\t\t}\n\n\t\tlet version = u32::from_be_bytes([header[4], header[5], header[6], header[7]]);\n\t\tif version != 2 {\n\t\t\treturn None;\n\t\t}\n\n\t\t// Read fanout table (256 * 4 bytes)\n\t\tlet mut fanout = [0u32; 256];\n\t\tfor entry in &mut fanout {\n\t\t\tlet mut buf = [0u8; 4];\n\t\t\tfile.read_exact(&mut buf).ok()?;\n\t\t\t*entry = u32::from_be_bytes(buf);\n\t\t}\n\n\t\tlet total_objects = fanout[255] as usize;\n\t\tlet first_byte = hash[0] as usize;\n\n\t\t// Find range of objects with this first byte\n\t\tlet start = if first_byte == 0 {\n\t\t\t0\n\t\t} else {\n\t\t\tfanout[first_byte - 1] as usize\n\t\t};\n\t\tlet end = fanout[first_byte] as usize;\n\n\t\tif start >= end {\n\t\t\treturn None;\n\t\t}\n\n\t\t// Binary search within the hash section\n\t\t// Hashes start at offset 8 + 256*4 = 1032\n\t\tlet hash_section_offset = 8 + 256 * 4;\n\n\t\tlet mut left = start;\n\t\tlet mut right = end;\n\n\t\twhile left < right {\n\t\t\tlet mid = left + (right - left) / 2;\n\t\t\tlet hash_offset = hash_section_offset + mid * 20;\n\n\t\t\tfile.seek(SeekFrom::Start(hash_offset as u64)).ok()?;\n\t\t\tlet mut found_hash = [0u8; 20];\n\t\t\tfile.read_exact(&mut found_hash).ok()?;\n\n\t\t\tmatch found_hash.cmp(hash) {\n\t\t\t\tstd::cmp::Ordering::Equal => {\n\t\t\t\t\t// Found! 
Now get the offset\n\t\t\t\t\t// CRC section starts after all hashes\n\t\t\t\t\t// Offset section starts after CRC section\n\t\t\t\t\tlet offset_section =\n\t\t\t\t\t\thash_section_offset + total_objects * 20 + total_objects * 4;\n\t\t\t\t\tlet offset_entry = offset_section + mid * 4;\n\n\t\t\t\t\tfile.seek(SeekFrom::Start(offset_entry as u64)).ok()?;\n\t\t\t\t\tlet mut offset_buf = [0u8; 4];\n\t\t\t\t\tfile.read_exact(&mut offset_buf).ok()?;\n\t\t\t\t\tlet offset = u32::from_be_bytes(offset_buf);\n\n\t\t\t\t\t// Check if this is a large offset (MSB set)\n\t\t\t\t\tif offset & 0x80000000 != 0 {\n\t\t\t\t\t\t// Large offset - need to read from 8-byte offset table\n\t\t\t\t\t\tlet large_idx = (offset & 0x7fffffff) as usize;\n\t\t\t\t\t\tlet large_offset_section = offset_section + total_objects * 4;\n\t\t\t\t\t\tlet large_entry = large_offset_section + large_idx * 8;\n\n\t\t\t\t\t\tfile.seek(SeekFrom::Start(large_entry as u64)).ok()?;\n\t\t\t\t\t\tlet mut large_buf = [0u8; 8];\n\t\t\t\t\t\tfile.read_exact(&mut large_buf).ok()?;\n\t\t\t\t\t\treturn Some(u64::from_be_bytes(large_buf));\n\t\t\t\t\t}\n\n\t\t\t\t\treturn Some(u64::from(offset));\n\t\t\t\t}\n\t\t\t\tstd::cmp::Ordering::Less => left = mid + 1,\n\t\t\t\tstd::cmp::Ordering::Greater => right = mid,\n\t\t\t}\n\t\t}\n\n\t\tNone\n\t}\n\n\tfn read_pack_object(pack_path: &Path, offset: u64) -> Option<Vec<u8>> {\n\t\tuse flate2::read::ZlibDecoder;\n\t\tuse std::{\n\t\t\tfs::File,\n\t\t\tio::{Read, Seek, SeekFrom},\n\t\t};\n\n\t\tlet mut file = File::open(pack_path).ok()?;\n\t\tfile.seek(SeekFrom::Start(offset)).ok()?;\n\n\t\t// Read object header (variable length encoding)\n\t\tlet mut byte = [0u8; 1];\n\t\tfile.read_exact(&mut byte).ok()?;\n\n\t\tlet obj_type = (byte[0] >> 4) & 0x07;\n\t\tlet mut size = u64::from(byte[0] & 0x0f);\n\t\tlet mut shift = 4;\n\n\t\twhile byte[0] & 0x80 != 0 {\n\t\t\tfile.read_exact(&mut byte).ok()?;\n\t\t\tsize |= u64::from(byte[0] & 0x7f) << shift;\n\t\t\tshift += 7;\n\t\t}\n\n\t\t// 
Object types: 1=commit, 2=tree, 3=blob, 4=tag, 6=ofs_delta, 7=ref_delta\n\t\tmatch obj_type {\n\t\t\t1..=4 => {\n\t\t\t\t// Regular object - just decompress\n\t\t\t\tlet mut decoder = ZlibDecoder::new(&mut file);\n\t\t\t\t#[allow(clippy::cast_possible_truncation)]\n\t\t\t\tlet mut data = Vec::with_capacity(size as usize);\n\t\t\t\tdecoder.read_to_end(&mut data).ok()?;\n\n\t\t\t\t// Add the git object header\n\t\t\t\tlet type_name = match obj_type {\n\t\t\t\t\t1 => \"commit\",\n\t\t\t\t\t2 => \"tree\",\n\t\t\t\t\t3 => \"blob\",\n\t\t\t\t\t4 => \"tag\",\n\t\t\t\t\t_ => unreachable!(),\n\t\t\t\t};\n\t\t\t\tlet mut result = format!(\"{} {}\\0\", type_name, data.len()).into_bytes();\n\t\t\t\tresult.extend(data);\n\t\t\t\tSome(result)\n\t\t\t}\n\t\t\t6 | 7 => {\n\t\t\t\t// Delta objects - not supported for simplicity\n\t\t\t\t// In practice, the HEAD commit is often a delta, but resolving\n\t\t\t\t// deltas requires recursive lookups which adds complexity\n\t\t\t\tNone\n\t\t\t}\n\t\t\t_ => None,\n\t\t}\n\t}\n\n\tfn parse_commit_timestamp(data: &[u8]) -> Option<i64> {\n\t\tlet content = std::str::from_utf8(data).ok()?;\n\t\t// Skip the header (e.g., \"commit 123\\0\")\n\t\tlet content = content.split('\\0').nth(1)?;\n\n\t\tfor line in content.lines() {\n\t\t\tif let Some(rest) = line.strip_prefix(\"committer \") {\n\t\t\t\t// Format: \"Name <email> timestamp timezone\"\n\t\t\t\tlet parts: Vec<_> = rest.rsplitn(3, ' ').collect();\n\t\t\t\tif parts.len() >= 2 {\n\t\t\t\t\treturn parts[1].parse().ok();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tNone\n\t}\n}\n"
  },
  {
    "path": "crates/bosion/src/lib.rs",
    "content": "#![doc = include_str!(\"../README.md\")]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n\nuse std::{env::var, fs::File, io::Write, path::PathBuf};\n\npub use info::*;\nmod info;\n\n/// Gather build-time information for the current crate\n///\n/// See the crate-level documentation for a guide. This function is a convenience wrapper around\n/// [`gather_to`] with the most common defaults: it writes to `bosion.rs` a pub(crate) struct named\n/// `Bosion`.\npub fn gather() {\n\tgather_to(\"bosion.rs\", \"Bosion\", false);\n}\n\n/// Gather build-time information for the current crate (public visibility)\n///\n/// See the crate-level documentation for a guide. This function is a convenience wrapper around\n/// [`gather_to`]: it writes to `bosion.rs` a pub struct named `Bosion`.\npub fn gather_pub() {\n\tgather_to(\"bosion.rs\", \"Bosion\", true);\n}\n\n/// Gather build-time information for the current crate (custom output)\n///\n/// Gathers a limited set of build-time information for the current crate and writes it to a file.\n/// The file is always written to the `OUT_DIR` directory, as per Cargo conventions. 
It contains a\n/// zero-size struct with a bunch of associated constants containing the gathered information, and a\n/// `long_version_with` function (when the `std` feature is enabled) that takes a slice of extra\n/// key-value pairs to append in the same format.\n///\n/// `public` controls whether the struct is `pub` (true) or `pub(crate)` (false).\n///\n/// The generated code is entirely documented, and will appear in your documentation (on docs.rs,\n/// only if the visibility is public).\n///\n/// See [`Info`] for a list of gathered data.\n///\n/// The constants include all the information from [`Info`], as well as the following:\n///\n/// - `LONG_VERSION`: A clap-ready long version string, including the crate version, features, build\n///   date, and git information when available.\n/// - `CRATE_FEATURE_STRING`: A string containing the crate features, in the format `+feat1 +feat2`.\n///\n/// We also instruct rustc to rerun the build script if the environment changes, as necessary.\npub fn gather_to(filename: &str, structname: &str, public: bool) {\n\tlet path = PathBuf::from(var(\"OUT_DIR\").expect(\"bosion\")).join(filename);\n\tprintln!(\"cargo:rustc-env=BOSION_PATH={}\", path.display());\n\n\tlet info = Info::gather().expect(\"bosion\");\n\tinfo.set_reruns();\n\tlet Info {\n\t\tcrate_version,\n\t\tcrate_features,\n\t\tbuild_date,\n\t\tbuild_datetime,\n\t\tgit,\n\t} = info;\n\n\tlet crate_feature_string = crate_features\n\t\t.iter()\n\t\t.filter(|feat| *feat != \"default\")\n\t\t.map(|feat| format!(\"+{feat}\"))\n\t\t.collect::<Vec<_>>()\n\t\t.join(\" \");\n\n\tlet crate_feature_list = crate_features.join(\",\");\n\n\tlet viz = if public { \"pub\" } else { \"pub(crate)\" };\n\n\tlet (git_render, long_version) = if let Some(GitInfo {\n\t\tgit_hash,\n\t\tgit_shorthash,\n\t\tgit_date,\n\t\tgit_datetime,\n\t\t..\n\t}) = git\n\t{\n\t\t(format!(\n\t\t\"\n\t\t\t/// The git commit hash\n\t\t\t///\n\t\t\t/// This is the full hash of the commit that was built. 
Note that if the repository was\n\t\t\t/// dirty, this will be the hash of the last commit, not including the changes.\n\t\t\tpub const GIT_COMMIT_HASH: &'static str = {git_hash:?};\n\n\t\t\t/// The git commit hash, shortened\n\t\t\t///\n\t\t\t/// This is the shortened hash of the commit that was built. Same caveats as with\n\t\t\t/// `GIT_COMMIT_HASH` apply. The length of the hash is fixed at 8 characters.\n\t\t\tpub const GIT_COMMIT_SHORTHASH: &'static str = {git_shorthash:?};\n\n\t\t\t/// The git commit date\n\t\t\t///\n\t\t\t/// This is the date (`YYYY-MM-DD`) of the commit that was built. Same caveats as with\n\t\t\t/// `GIT_COMMIT_HASH` apply.\n\t\t\tpub const GIT_COMMIT_DATE: &'static str = {git_date:?};\n\n\t\t\t/// The git commit date and time\n\t\t\t///\n\t\t\t/// This is the date and time (`YYYY-MM-DD HH:MM:SS`) of the commit that was built. Same\n\t\t\t/// caveats as with `GIT_COMMIT_HASH` apply.\n\t\t\tpub const GIT_COMMIT_DATETIME: &'static str = {git_datetime:?};\n\t\t\"\n\t), format!(\"{crate_version} ({git_shorthash} {git_date}) {crate_feature_string}\\ncommit-hash: {git_hash}\\ncommit-date: {git_date}\\nbuild-date: {build_date}\\nrelease: {crate_version}\\nfeatures: {crate_feature_list}\"))\n\t} else {\n\t\t(String::new(), format!(\"{crate_version} ({build_date}) {crate_feature_string}\\nbuild-date: {build_date}\\nrelease: {crate_version}\\nfeatures: {crate_feature_list}\"))\n\t};\n\n\t#[cfg(feature = \"std\")]\n\tlet long_version_with_fn = r#\"\n\t\t/// Returns the long version string with extra information tacked on\n\t\t///\n\t\t/// This is the same as `LONG_VERSION` but takes a slice of key-value pairs to append to the\n\t\t/// end in the same format.\n\t\tpub fn long_version_with(extra: &[(&str, &str)]) -> String {\n\t\t\tlet mut output = Self::LONG_VERSION.to_string();\n\n\t\t\tfor (k, v) in extra {\n\t\t\t\toutput.push_str(&format!(\"\\n{k}: {v}\"));\n\t\t\t}\n\n\t\t\toutput\n\t\t}\n\t\"#;\n\t#[cfg(not(feature = \"std\"))]\n\tlet 
long_version_with_fn = \"\";\n\n\tlet bosion_version = env!(\"CARGO_PKG_VERSION\");\n\tlet render = format!(\n\t\tr#\"\n\t\t/// Build-time information\n\t\t///\n\t\t/// This struct is generated by the [bosion](https://docs.rs/bosion) crate at build time.\n\t\t///\n\t\t/// Bosion version: {bosion_version}\n\t\t#[derive(Debug, Clone, Copy)]\n\t\t{viz} struct {structname};\n\n\t\t#[allow(dead_code)]\n\t\timpl {structname} {{\n\t\t\t/// Clap-compatible long version string\n\t\t\t///\n\t\t\t/// At minimum, this will be the crate version and build date.\n\t\t\t///\n\t\t\t/// It presents as a first \"summary\" line like `crate_version (build_date) features`,\n\t\t\t/// followed by `key: value` pairs. This is the same format used by `rustc -Vv`.\n\t\t\t///\n\t\t\t/// If git info is available, it also includes the git hash, short hash and commit date,\n\t\t\t/// and swaps the build date for the commit date in the summary line.\n\t\t\tpub const LONG_VERSION: &'static str = {long_version:?};\n\n\t\t\t/// The crate version, as reported by Cargo\n\t\t\t///\n\t\t\t/// You should probably prefer reading the `CARGO_PKG_VERSION` environment variable.\n\t\t\tpub const CRATE_VERSION: &'static str = {crate_version:?};\n\n\t\t\t/// The crate features\n\t\t\t///\n\t\t\t/// This is a list of the features that were enabled when this crate was built,\n\t\t\t/// lowercased and with underscores replaced by hyphens.\n\t\t\tpub const CRATE_FEATURES: &'static [&'static str] = &{crate_features:?};\n\n\t\t\t/// The crate features, as a string\n\t\t\t///\n\t\t\t/// This is in format `+feature +feature2 +feature3`, lowercased with underscores\n\t\t\t/// replaced by hyphens.\n\t\t\tpub const CRATE_FEATURE_STRING: &'static str = {crate_feature_string:?};\n\n\t\t\t/// The build date\n\t\t\t///\n\t\t\t/// This is the date that the crate was built, in the format `YYYY-MM-DD`. 
If the\n\t\t\t/// environment variable `SOURCE_DATE_EPOCH` was set, it's used instead of the current\n\t\t\t/// time, for [reproducible builds](https://reproducible-builds.org/).\n\t\t\tpub const BUILD_DATE: &'static str = {build_date:?};\n\n\t\t\t/// The build datetime\n\t\t\t///\n\t\t\t/// This is the date and time that the crate was built, in the format\n\t\t\t/// `YYYY-MM-DD HH:MM:SS`. If the environment variable `SOURCE_DATE_EPOCH` was set, it's\n\t\t\t/// used instead of the current time, for\n\t\t\t/// [reproducible builds](https://reproducible-builds.org/).\n\t\t\tpub const BUILD_DATETIME: &'static str = {build_datetime:?};\n\n\t\t\t{git_render}\n\n\t\t\t{long_version_with_fn}\n\t\t}}\n\t\t\"#\n\t);\n\n\tlet mut file = File::create(path).expect(\"bosion\");\n\tfile.write_all(render.as_bytes()).expect(\"bosion\");\n}\n\n/// Gather build-time information and write it to the environment\n///\n/// See the crate-level documentation for a guide. This function is a convenience wrapper around\n/// [`gather_to_env_with_prefix`] with the most common default prefix of `BOSION_`.\npub fn gather_to_env() {\n\tgather_to_env_with_prefix(\"BOSION_\");\n}\n\n/// Gather build-time information and write it to the environment\n///\n/// Gathers a limited set of build-time information for the current crate and makes it available to\n/// the crate as build environment variables. 
This is an alternative to [`include!`]ing a file which\n/// is generated at build time, like for [`gather`] and variants, which doesn't create any new code\n/// and doesn't include any information in the binary that you do not explicitly use.\n///\n/// The environment variables are prefixed with the given string, which should be generally be\n/// uppercase and end with an underscore.\n///\n/// See [`Info`] for a list of gathered data.\n///\n/// Unlike [`gather`], there is no Clap-ready `LONG_VERSION` string, but you can of course generate\n/// one yourself from the environment variables.\n///\n/// We also instruct rustc to rerun the build script if the environment changes, as necessary.\npub fn gather_to_env_with_prefix(prefix: &str) {\n\tlet info = Info::gather().expect(\"bosion\");\n\tinfo.set_reruns();\n\tlet Info {\n\t\tcrate_version,\n\t\tcrate_features,\n\t\tbuild_date,\n\t\tbuild_datetime,\n\t\tgit,\n\t} = info;\n\n\tprintln!(\"cargo:rustc-env={prefix}CRATE_VERSION={crate_version}\");\n\tprintln!(\n\t\t\"cargo:rustc-env={prefix}CRATE_FEATURES={}\",\n\t\tcrate_features.join(\",\")\n\t);\n\tprintln!(\"cargo:rustc-env={prefix}BUILD_DATE={build_date}\");\n\tprintln!(\"cargo:rustc-env={prefix}BUILD_DATETIME={build_datetime}\");\n\n\tif let Some(GitInfo {\n\t\tgit_hash,\n\t\tgit_shorthash,\n\t\tgit_date,\n\t\tgit_datetime,\n\t\t..\n\t}) = git\n\t{\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_HASH={git_hash}\");\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_SHORTHASH={git_shorthash}\");\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_DATE={git_date}\");\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_DATETIME={git_datetime}\");\n\t}\n}\n"
This is an alternative to [`include!`]ing a file which\n/// is generated at build time, like for [`gather`] and variants, which doesn't create any new code\n/// and doesn't include any information in the binary that you do not explicitly use.\n///\n/// The environment variables are prefixed with the given string, which should generally be\n/// uppercase and end with an underscore.\n///\n/// See [`Info`] for a list of gathered data.\n///\n/// Unlike [`gather`], there is no Clap-ready `LONG_VERSION` string, but you can of course generate\n/// one yourself from the environment variables.\n///\n/// We also instruct rustc to rerun the build script if the environment changes, as necessary.\npub fn gather_to_env_with_prefix(prefix: &str) {\n\tlet info = Info::gather().expect(\"bosion\");\n\tinfo.set_reruns();\n\tlet Info {\n\t\tcrate_version,\n\t\tcrate_features,\n\t\tbuild_date,\n\t\tbuild_datetime,\n\t\tgit,\n\t} = info;\n\n\tprintln!(\"cargo:rustc-env={prefix}CRATE_VERSION={crate_version}\");\n\tprintln!(\n\t\t\"cargo:rustc-env={prefix}CRATE_FEATURES={}\",\n\t\tcrate_features.join(\",\")\n\t);\n\tprintln!(\"cargo:rustc-env={prefix}BUILD_DATE={build_date}\");\n\tprintln!(\"cargo:rustc-env={prefix}BUILD_DATETIME={build_datetime}\");\n\n\tif let Some(GitInfo {\n\t\tgit_hash,\n\t\tgit_shorthash,\n\t\tgit_date,\n\t\tgit_datetime,\n\t\t..\n\t}) = git\n\t{\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_HASH={git_hash}\");\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_SHORTHASH={git_shorthash}\");\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_DATE={git_date}\");\n\t\tprintln!(\"cargo:rustc-env={prefix}GIT_COMMIT_DATETIME={git_datetime}\");\n\t}\n}\n"
  },
  {
    "path": "crates/cli/Cargo.toml",
    "content": "[package]\nname = \"watchexec-cli\"\nversion = \"2.5.1\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\", \"Matt Green <mattgreenrocks@gmail.com>\"]\nlicense = \"Apache-2.0\"\ndescription = \"Executes commands in response to file modifications\"\nkeywords = [\"watcher\", \"filesystem\", \"cli\", \"watchexec\"]\ncategories = [\"command-line-utilities\"]\n\ndocumentation = \"https://watchexec.github.io/docs/#watchexec\"\nhomepage = \"https://watchexec.github.io\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nedition = \"2021\"\n\n# sets the default for the workspace\ndefault-run = \"watchexec\"\n\n[[bin]]\nname = \"watchexec\"\npath = \"src/main.rs\"\n\n[dependencies]\nargfile = \"0.2.0\"\nchrono = \"0.4.31\"\nclap_complete = \"4.5.44\"\nclap_complete_nushell = \"4.4.2\"\nclap_mangen = \"0.2.15\"\nclearscreen = \"4.0.4\"\ndashmap = \"6.1.0\"\ndirs = \"6.0.0\"\ndunce = \"1.0.4\"\nfoldhash = \"0.1.5\" # needs to be in sync with jaq's requirement\nfutures = \"0.3.29\"\nhumantime = \"2.1.0\"\nindexmap = \"2.10.0\" # needs to be in sync with jaq's requirement\njaq-core = \"2.1.0\"\njaq-json = { version = \"1.1.0\", features = [\"serde_json\"] }\njaq-std = \"2.1.0\"\nnotify-rust = \"4.11.7\"\nserde_json = \"1.0.138\"\ntempfile = \"3.16.0\"\ntermcolor = \"1.4.0\"\ntracing = \"0.1.40\"\ntracing-appender = \"0.2.3\"\nwhich = \"8.0.0\"\n\n[dependencies.blake3]\nversion = \"1.3.3\"\nfeatures = [\"rayon\"]\n\n[dependencies.clap]\nversion = \"4.4.7\"\nfeatures = [\"cargo\", \"derive\", \"env\", \"wrap_help\"]\n\n[dependencies.console-subscriber]\nversion = \"0.5.0\"\noptional = true\n\n[dependencies.eyra]\nversion = \"0.22.0\"\nfeatures = [\"log\", \"env_logger\"]\noptional = true\n\n[dependencies.ignore-files]\nversion = \"3.0.5\"\npath = \"../ignore-files\"\n\n[dependencies.miette]\nversion = \"7.5.0\"\nfeatures = [\"fancy\"]\n\n[dependencies.pid1]\nversion = \"0.1.1\"\noptional = 
true\n\n[dependencies.project-origins]\nversion = \"1.4.2\"\npath = \"../project-origins\"\n\n[dependencies.watchexec]\nversion = \"8.2.0\"\npath = \"../lib\"\n\n[dependencies.watchexec-events]\nversion = \"6.1.0\"\npath = \"../events\"\nfeatures = [\"serde\"]\n\n[dependencies.watchexec-signals]\nversion = \"5.0.1\"\npath = \"../signals\"\n\n[dependencies.watchexec-filterer-globset]\nversion = \"8.0.0\"\npath = \"../filterer/globset\"\n\n[dependencies.tokio]\nversion = \"1.33.0\"\nfeatures = [\n\t\"fs\",\n\t\"io-std\",\n\t\"process\",\n\t\"net\",\n\t\"rt\",\n\t\"rt-multi-thread\",\n\t\"signal\",\n\t\"sync\",\n]\n\n[dependencies.tracing-subscriber]\nversion = \"0.3.6\"\nfeatures = [\n\t\"env-filter\",\n\t\"fmt\",\n\t\"json\",\n\t\"tracing-log\",\n\t\"ansi\",\n]\n\n[target.'cfg(unix)'.dependencies]\nlibc = \"0.2.74\"\nnix = { version = \"0.30.1\", features = [\"net\"] }\n\n[target.'cfg(windows)'.dependencies]\nsocket2 = \"0.6.1\"\nuuid = { version = \"1.13.1\", features = [\"v4\"] }\nwindows-sys = { version = \">= 0.59.0, < 0.62.0\", features = [\"Win32_Networking_WinSock\"] }\n\n[target.'cfg(target_env = \"musl\")'.dependencies]\nmimalloc = \"0.1.39\"\n\n[build-dependencies]\nembed-resource = \"3.0.1\"\n\n[build-dependencies.bosion]\nversion = \"2.0.0\"\npath = \"../bosion\"\n\n[dev-dependencies]\ntracing-test = \"0.2.4\"\nuuid = { workspace = true, features = [ \"v4\", \"fast-rng\" ] }\nrand = { workspace = true }\n\n[features]\ndefault = [\"pid1\"]\n\n## Build using Eyra's pure-Rust libc\neyra = [\"dep:eyra\"]\n\n## Enables PID1 handling.\npid1 = [\"dep:pid1\"]\n\n## Enables logging for PID1 handling.\npid1-withlog = [\"pid1\"]\n\n## For debugging only: enables the Tokio Console.\ndev-console = [\"dep:console-subscriber\"]\n\n[package.metadata.binstall]\npkg-url = \"{ repo }/releases/download/v{ version }/watchexec-{ version }-{ target }.{ archive-format }\"\nbin-dir = \"watchexec-{ version }-{ target }/{ bin }{ binary-ext }\"\npkg-fmt = 
\"txz\"\n\n[package.metadata.binstall.overrides.x86_64-pc-windows-msvc]\npkg-fmt = \"zip\"\n\n[package.metadata.deb]\nmaintainer = \"Félix Saparelli <felix@passcod.name>\"\nlicense-file = [\"../../LICENSE\", \"0\"]\nsection = \"utility\"\ndepends = \"libc6, libgcc-s1\" # not needed for musl, but see below\n# conf-files = [] # look me up when config file lands\nassets = [\n\t[\"../../target/release/watchexec\", \"usr/bin/watchexec\", \"755\"],\n\t[\"README.md\", \"usr/share/doc/watchexec/README\", \"644\"],\n\t[\"../../doc/watchexec.1.md\", \"usr/share/doc/watchexec/watchexec.1.md\", \"644\"],\n\t[\"../../doc/watchexec.1\", \"usr/share/man/man1/watchexec.1\", \"644\"],\n\t[\"../../completions/bash\", \"usr/share/bash-completion/completions/watchexec\", \"644\"],\n\t[\"../../completions/fish\", \"usr/share/fish/vendor_completions.d/watchexec.fish\", \"644\"],\n\t[\"../../completions/zsh\", \"usr/share/zsh/site-functions/_watchexec\", \"644\"],\n\t[\"../../doc/logo.svg\", \"usr/share/icons/hicolor/scalable/apps/watchexec.svg\", \"644\"],\n]\n\n[package.metadata.generate-rpm]\nassets = [\n\t{ source = \"../../target/release/watchexec\", dest = \"/usr/bin/watchexec\", mode = \"755\" },\n\t{ source = \"README.md\", dest = \"/usr/share/doc/watchexec/README\", mode = \"644\", doc = true },\n\t{ source = \"../../doc/watchexec.1.md\", dest = \"/usr/share/doc/watchexec/watchexec.1.md\", mode = \"644\", doc = true },\n\t{ source = \"../../doc/watchexec.1\", dest = \"/usr/share/man/man1/watchexec.1\", mode = \"644\" },\n\t{ source = \"../../completions/bash\", dest = \"/usr/share/bash-completion/completions/watchexec\", mode = \"644\" },\n\t{ source = \"../../completions/fish\", dest = \"/usr/share/fish/vendor_completions.d/watchexec.fish\", mode = \"644\" },\n\t{ source = \"../../completions/zsh\", dest = \"/usr/share/zsh/site-functions/_watchexec\", mode = \"644\" },\n\t{ source = \"../../doc/logo.svg\", dest = \"/usr/share/icons/hicolor/scalable/apps/watchexec.svg\", mode = 
\"644\" },\n\t# set conf = true for config file when that lands\n]\n\nauto-req = \"disabled\"\n# technically incorrect when using musl, but these are probably\n# present on every rpm-using system, so let's worry about it if\n# someone asks.\n[package.metadata.generate-rpm.requires]\nglibc = \"*\"\nlibgcc = \"*\"\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\ndoc_markdown = \"allow\"\n"
  },
  {
    "path": "crates/cli/README.md",
    "content": "# Watchexec CLI\n\nA simple standalone tool that watches a path and runs a command whenever it detects modifications.\n\nExample use cases:\n\n* Automatically run unit tests\n* Run linters/syntax checkers\n\n## Features\n\n* Simple invocation and use\n* Runs on Linux, Mac, Windows, and more\n* Monitors current directory and all subdirectories for changes\n    * Uses efficient event polling mechanism (on Linux, Mac, Windows, BSD)\n* Coalesces multiple filesystem events into one, for editors that use swap/backup files during saving\n* By default, uses `.gitignore`, `.ignore`, and other such files to determine which files to ignore notifications for\n* Support for watching files with a specific extension\n* Support for filtering/ignoring events based on [glob patterns](https://docs.rs/globset/*/globset/#syntax)\n* Launches the command in a new process group (can be disabled with `--no-process-group`)\n* Optionally clears screen between executions\n* Optionally restarts the command with every modification (good for servers)\n* Optionally sends a desktop notification on command start and end\n* Does not require a language runtime\n* Sets the following environment variables in the process:\n\n    `$WATCHEXEC_COMMON_PATH` is set to the longest common path of all of the below variables, and so should be prepended to each path to obtain the full/real path.\n\n    | Variable name | Event kind |\n    |---|---|\n    | `$WATCHEXEC_CREATED_PATH` | files/folders were created |\n    | `$WATCHEXEC_REMOVED_PATH` | files/folders were removed |\n    | `$WATCHEXEC_RENAMED_PATH` | files/folders were renamed |\n    | `$WATCHEXEC_WRITTEN_PATH` | files/folders were modified |\n    | `$WATCHEXEC_META_CHANGED_PATH` | files/folders' metadata were modified |\n    | `$WATCHEXEC_OTHERWISE_CHANGED_PATH` | every other kind of event |\n\n    These variables may contain multiple paths: these are separated by the platform's path separator, as with the `PATH` system environment 
variable. On Unix that is `:`, and on Windows `;`. Within each variable, paths are deduplicated and sorted in binary order (i.e. neither Unicode nor locale aware).\n\n    This can be disabled with `--emit-events-to=none` or changed to JSON events on STDIN with `--emit-events-to=json-stdio`.\n\n## Anti-Features\n\n* Not tied to any particular language or ecosystem\n* Not tied to Git or the presence of a repository/project\n* Does not require a cryptic command line involving `xargs`\n\n## Usage Examples\n\nWatch all JavaScript, CSS and HTML files in the current directory and all subdirectories for changes, running `make` when a change is detected:\n\n    $ watchexec --exts js,css,html make\n\nCall `make test` when any file changes in this directory/subdirectory, except for everything below `target`:\n\n    $ watchexec -i \"target/**\" make test\n\nCall `ls -la` when any file changes in this directory/subdirectory:\n\n    $ watchexec -- ls -la\n\nCall/restart `python server.py` when any Python file in the current directory (and all subdirectories) changes:\n\n    $ watchexec -e py -r python server.py\n\nCall/restart `my_server` when any file in the current directory (and all subdirectories) changes, sending `SIGKILL` to stop the command:\n\n    $ watchexec -r --stop-signal SIGKILL my_server\n\nSend a SIGHUP to the command upon changes (Note: using `-n` here we're executing `my_server` directly, instead of wrapping it in a shell):\n\n    $ watchexec -n --signal SIGHUP my_server\n\nRun `make` when any file changes, using the `.gitignore` file in the current directory to filter:\n\n    $ watchexec make\n\nRun `make` when any file in `lib` or `src` changes:\n\n    $ watchexec -w lib -w src make\n\nRun `bundle install` when the `Gemfile` changes:\n\n    $ watchexec -w Gemfile bundle install\n\nRun two commands:\n\n    $ watchexec 'date; make'\n\nGet desktop (\"toast\") notifications when the command starts and finishes:\n\n    $ watchexec -N go build\n\nOnly run when files are 
created:\n\n    $ watchexec --fs-events create -- s3 sync . s3://my-bucket\n\nIf you come from `entr`, note that the watchexec command is run in a shell by default. You can use `-n` or `--shell=none` to not do that:\n\n    $ watchexec -n -- echo ';' lorem ipsum\n\nOn Windows, you may prefer to use Powershell:\n\n    $ watchexec --shell=pwsh -- Test-Connection example.com\n\nYou can eschew running commands entirely and get a stream of events to process on your own:\n\n```console\n$ watchexec --emit-events-to=json-stdio --only-emit-events\n\n{\"tags\":[{\"kind\":\"source\",\"source\":\"filesystem\"},{\"kind\":\"fs\",\"simple\":\"modify\",\"full\":\"Modify(Data(Any))\"},{\"kind\":\"path\",\"absolute\":\"/home/code/rust/watchexec/crates/cli/README.md\",\"filetype\":\"file\"}]}\n{\"tags\":[{\"kind\":\"source\",\"source\":\"filesystem\"},{\"kind\":\"fs\",\"simple\":\"modify\",\"full\":\"Modify(Data(Any))\"},{\"kind\":\"path\",\"absolute\":\"/home/code/rust/watchexec/crates/lib/Cargo.toml\",\"filetype\":\"file\"}]}\n{\"tags\":[{\"kind\":\"source\",\"source\":\"filesystem\"},{\"kind\":\"fs\",\"simple\":\"modify\",\"full\":\"Modify(Data(Any))\"},{\"kind\":\"path\",\"absolute\":\"/home/code/rust/watchexec/crates/cli/src/args.rs\",\"filetype\":\"file\"}]}\n```\n\nPrint the time commands take to run:\n\n```console\n$ watchexec --timings -- make\n[Running: make]\n...\n[Command was successful, lasted 52.748081074s]\n```\n\n## Installation\n\n### Package manager\n\nWatchexec is in many package managers. A full list of [known packages](../../doc/packages.md) is available,\nand there may be more out there! 
Please contribute any you find to the list :)\n\nCommon package managers:\n\n- Alpine: `$ apk add watchexec`\n- ArchLinux: `$ pacman -S watchexec`\n- Nix: `$ nix-shell -p watchexec`\n- Debian/Ubuntu via [apt.cli.rs](https://apt.cli.rs): `$ apt install watchexec`\n- Homebrew on Mac:  `$ brew install watchexec`\n- Chocolatey on Windows: `#> choco install watchexec`\n\n### [Binstall](https://github.com/cargo-bins/cargo-binstall)\n\n    $ cargo binstall watchexec-cli\n\n### Pre-built binaries\n\nUse the download section on [Github](https://github.com/watchexec/watchexec/releases/latest)\nor [the website](https://watchexec.github.io/downloads/) to obtain the package appropriate for your\nplatform and architecture, extract it, and place it in your `PATH`.\n\nThere are also Debian/Ubuntu (DEB) and Fedora/RedHat (RPM) packages.\n\nChecksums and signatures are available.\n\n### Cargo (from source)\n\nOnly the latest Rust stable is supported, but older versions may work.\n\n    $ cargo install watchexec-cli\n\n## Shell completions\n\nCurrently available shell completions:\n\n- bash: `completions/bash` should be installed to `/usr/share/bash-completion/completions/watchexec`\n- elvish: `completions/elvish` should be installed to `$XDG_CONFIG_HOME/elvish/completions/`\n- fish: `completions/fish` should be installed to `/usr/share/fish/vendor_completions.d/watchexec.fish`\n- nu: `completions/nu` should be installed to `$XDG_CONFIG_HOME/nu/completions/`\n- powershell: `completions/powershell` should be installed to `$PROFILE/`\n- zsh: `completions/zsh` should be installed to `/usr/share/zsh/site-functions/_watchexec`\n\nIf not bundled, you can generate completions for your shell with `watchexec --completions <shell>`.\n\n## Manual\n\nThere's a manual page at `doc/watchexec.1`. 
Install it to `/usr/share/man/man1/`.\nIf not bundled, you can generate a manual page with `watchexec --manual > /path/to/watchexec.1`, or view it inline with `watchexec --manual` (requires `man`).\n\nYou can also [read a text version](../../doc/watchexec.1.md).\n\nNote that it is automatically generated from the help text, so it is not as pretty as a carefully hand-written one.\n\n## Advanced builds\n\nThese are additional options available with custom builds by setting features:\n\n### PID1\n\nIf you're using Watchexec as PID1 (most frequently in containers or namespaces), and it's not doing what you expect, you can create a build with PID1 early logging: `--features pid1-withlog`.\n\nIf you don't need PID1 support, or if you're doing something that conflicts with this program's PID1 support, you can disable it with `--no-default-features`.\n\n### Eyra\n\n[Eyra](https://github.com/sunfishcode/eyra) is a system to build Linux programs with no dependency on C code (in the libc path). To build Watchexec like this, use `--features eyra` and a Nightly compiler.\n\nThis feature also lets you get early logging into program startup, with `RUST_LOG=trace`.\n"
  },
  {
    "path": "crates/cli/build.rs",
    "content": "fn main() {\n\tembed_resource::compile(\"watchexec-manifest.rc\", embed_resource::NONE)\n\t\t.manifest_optional()\n\t\t.unwrap();\n\n\tbosion::gather();\n\n\tif std::env::var(\"CARGO_FEATURE_EYRA\").is_ok() {\n\t\tprintln!(\"cargo:rustc-link-arg=-nostartfiles\");\n\t}\n}\n"
  },
  {
    "path": "crates/cli/integration/env-unix.sh",
    "content": "#!/bin/bash\n\nset -euxo pipefail\n\nwatchexec=${WATCHEXEC_BIN:-watchexec}\n\n$watchexec -1 --env FOO=BAR echo '$FOO' | grep BAR\n"
  },
  {
    "path": "crates/cli/integration/no-shell-unix.sh",
    "content": "#!/bin/bash\n\nset -euxo pipefail\n\nwatchexec=${WATCHEXEC_BIN:-watchexec}\n\n$watchexec -1 -n echo 'foo  bar' | grep 'foo  bar'\n"
  },
  {
    "path": "crates/cli/integration/socket.sh",
    "content": "#!/bin/bash\n\nset -euxo pipefail\n\nwatchexec=${WATCHEXEC_BIN:-watchexec}\ntest_socketfd=${TEST_SOCKETFD_BIN:-test-socketfd}\n\n$watchexec --socket 18080 -1 -- $test_socketfd tcp\n$watchexec --socket udp::18080 -1 -- $test_socketfd udp\n$watchexec --socket 18080 --socket 28080 -1 -- $test_socketfd tcp tcp\n$watchexec --socket 18080 --socket 28080 --socket udp::38080 -1 -- $test_socketfd tcp tcp udp\n\nif [[ \"$TEST_PLATFORM\" = \"linux\" ]]; then\n\t$watchexec --socket 127.0.1.1:18080 -1 -- $test_socketfd tcp\nfi\n\n"
  },
  {
    "path": "crates/cli/integration/stdin-quit-unix.sh",
    "content": "#!/bin/bash\n\nset -euxo pipefail\n\nwatchexec=${WATCHEXEC_BIN:-watchexec}\n\ntimeout -s9 30s sh -c \"sleep 10 | $watchexec --stdin-quit echo\"\n"
  },
  {
    "path": "crates/cli/integration/trailingargfile-unix.sh",
    "content": "#!/bin/bash\n\nset -euxo pipefail\n\nwatchexec=${WATCHEXEC_BIN:-watchexec}\n\n$watchexec -1 -- echo @trailingargfile\n"
  },
  {
    "path": "crates/cli/release.toml",
    "content": "pre-release-commit-message = \"release: cli v{{version}}\"\ntag-prefix = \"\"\ntag-message = \"watchexec {{version}}\"\n\npre-release-hook = [\"sh\", \"-c\", \"cd ../.. && bin/completions && bin/manpage\"]\n\n[[pre-release-replacements]]\nfile = \"watchexec.exe.manifest\"\nsearch = \"^\t\tversion=\\\"[\\\\d.]+[.]0\\\"\"\nreplace = \"\t\tversion=\\\"{{version}}.0\\\"\"\nprerelease = false\nmax = 1\n\n[[pre-release-replacements]]\nfile = \"../../CITATION.cff\"\nsearch = \"^version: \\\"?[\\\\d.]+(-.+)?\\\"?\"\nreplace = \"version: \\\"{{version}}\\\"\"\nprerelease = true\nmax = 1\n\n[[pre-release-replacements]]\nfile = \"../../CITATION.cff\"\nsearch = \"^date-released: .+\"\nreplace = \"date-released: {{date}}\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/cli/run-tests.sh",
    "content": "#!/bin/bash\n\nset -euo pipefail\n\nexport WATCHEXEC_BIN=$(realpath ${WATCHEXEC_BIN:-$(which watchexec)})\nexport TEST_SOCKETFD_BIN=$(realpath ${TEST_SOCKETFD_BIN:-$(which test-socketfd)})\n\nexport TEST_PLATFORM=\"${1:-linux}\"\n\ncd \"$(dirname \"${BASH_SOURCE[0]}\")/integration\"\nfor test in *.sh; do\n\tif [[ \"$test\" == *-unix.sh && \"$TEST_PLATFORM\" = \"windows\" ]]; then\n\t\techo \"Skipping $test as it requires unix\"\n\t\tcontinue\n\tfi\n\tif [[ \"$test\" == *-win.sh && \"$TEST_PLATFORM\" != \"windows\" ]]; then\n\t\techo \"Skipping $test as it requires windows\"\n\t\tcontinue\n\tfi\n\n\techo\n\techo\n\techo \"======= Testing $test =======\"\n\t./$test\ndone\n"
  },
  {
    "path": "crates/cli/src/args/command.rs",
    "content": "use std::{\n\tffi::{OsStr, OsString},\n\tmem::take,\n\tpath::PathBuf,\n};\n\nuse clap::{\n\tbuilder::TypedValueParser,\n\terror::{Error, ErrorKind},\n\tParser, ValueEnum, ValueHint,\n};\nuse miette::{IntoDiagnostic, Result};\nuse tracing::{info, warn};\nuse watchexec_signals::Signal;\n\nuse crate::socket::{SocketSpec, SocketSpecValueParser};\n\nuse super::{TimeSpan, OPTSET_COMMAND};\n\n#[derive(Debug, Clone, Parser)]\npub struct CommandArgs {\n\t/// Use a different shell\n\t///\n\t/// By default, Watchexec will use '$SHELL' if it's defined or a default of 'sh' on Unix-likes,\n\t/// and either 'pwsh', 'powershell', or 'cmd' (CMD.EXE) on Windows, depending on what Watchexec\n\t/// detects is the running shell.\n\t///\n\t/// With this option, you can override that and use a different shell, for example one with more\n\t/// features or one which has your custom aliases and functions.\n\t///\n\t/// If the value has spaces, it is parsed as a command line, and the first word used as the\n\t/// shell program, with the rest as arguments to the shell.\n\t///\n\t/// The command is run with the '-c' flag (except for 'cmd' on Windows, where it's '/C').\n\t///\n\t/// The special value 'none' can be used to disable shell use entirely. In that case, the\n\t/// command provided to Watchexec will be parsed, with the first word being the executable and\n\t/// the rest being the arguments, and executed directly. 
Note that this parsing is rudimentary,\n\t/// and may not work as expected in all cases.\n\t///\n\t/// Using 'none' is a little more efficient and can enable a stricter interpretation of the\n\t/// input, but it also means that you can't use shell features like globbing, redirection,\n\t/// control flow, logic, or pipes.\n\t///\n\t/// Examples:\n\t///\n\t/// Use without shell:\n\t///\n\t///   $ watchexec -n -- zsh -x -o shwordsplit scr\n\t///\n\t/// Use with powershell core:\n\t///\n\t///   $ watchexec --shell=pwsh -- Test-Connection localhost\n\t///\n\t/// Use with CMD.exe:\n\t///\n\t///   $ watchexec --shell=cmd -- dir\n\t///\n\t/// Use with a different unix shell:\n\t///\n\t///   $ watchexec --shell=bash -- 'echo $BASH_VERSION'\n\t///\n\t/// Use with a unix shell and options:\n\t///\n\t///   $ watchexec --shell='zsh -x -o shwordsplit' -- scr\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"SHELL\",\n\t\tdisplay_order = 190,\n\t)]\n\tpub shell: Option<String>,\n\n\t/// Shorthand for '--shell=none'\n\t#[arg(\n\t\tshort = 'n',\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tdisplay_order = 140,\n\t)]\n\tpub no_shell: bool,\n\n\t/// Deprecated shorthand for '--emit-events-to=none'\n\t///\n\t/// This is the old way to disable event emission into the environment. See '--emit-events-to'\n\t/// for more. Will be removed at next major release.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\thide = true, // deprecated\n\t)]\n\tpub no_environment: bool,\n\n\t/// Add env vars to the command\n\t///\n\t/// This is a convenience option for setting environment variables for the command, without\n\t/// setting them for the Watchexec process itself.\n\t///\n\t/// Use key=value syntax. 
Multiple variables can be set by repeating the option.\n\t#[arg(\n\t\tlong,\n\t\tshort = 'E',\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"KEY=VALUE\",\n\t\tvalue_parser = EnvVarValueParser,\n\t\tdisplay_order = 50,\n\t)]\n\tpub env: Vec<EnvVar>,\n\n\t/// Don't use a process group\n\t///\n\t/// By default, Watchexec will run the command in a process group, so that signals and\n\t/// terminations are sent to all processes in the group. Sometimes that's not what you want, and\n\t/// you can disable the behaviour with this option.\n\t///\n\t/// Deprecated, use '--wrap-process=none' instead.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tdisplay_order = 141,\n\t)]\n\tpub no_process_group: bool,\n\n\t/// Configure how the process is wrapped\n\t///\n\t/// By default, Watchexec will run the command in a session on Mac, in a process group in Unix,\n\t/// and in a Job Object in Windows.\n\t///\n\t/// Some Unix programs prefer running in a session, while others do not work in a process group.\n\t///\n\t/// Use 'group' to use a process group, 'session' to use a process session, and 'none' to run\n\t/// the command directly. On Windows, either of 'group' or 'session' will use a Job Object.\n\t///\n\t/// If you find you need to specify this frequently for different kinds of programs, file an\n\t/// issue at <https://github.com/watchexec/watchexec/issues>. As errors of this nature are hard to\n\t/// debug and can be highly environment-dependent, reports from *multiple affected people* are\n\t/// more likely to be actioned promptly. Ask your friends/colleagues!\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"MODE\",\n\t\tdefault_value = WRAP_DEFAULT,\n\t\tdisplay_order = 231,\n\t)]\n\tpub wrap_process: WrapMode,\n\n\t/// Signal to send to stop the command\n\t///\n\t/// This is used by 'restart' and 'signal' modes of '--on-busy-update' (unless '--signal' is\n\t/// provided). 
The restart behaviour is to send the signal, wait for the command to exit, and if\n\t/// it hasn't exited after some time (see '--timeout-stop'), forcefully terminate it.\n\t///\n\t/// The default on unix is \"SIGTERM\".\n\t///\n\t/// Input is parsed as a full signal name (like \"SIGTERM\"), a short signal name (like \"TERM\"),\n\t/// or a signal number (like \"15\"). All input is case-insensitive.\n\t///\n\t/// On Windows this option is technically supported but only supports the \"KILL\" event, as\n\t/// Watchexec cannot yet deliver other events. Windows doesn't have signals as such; instead it\n\t/// has termination (here called \"KILL\" or \"STOP\") and \"CTRL+C\", \"CTRL+BREAK\", and \"CTRL+CLOSE\"\n\t/// events. For portability the unix signals \"SIGKILL\", \"SIGINT\", \"SIGTERM\", and \"SIGHUP\" are\n\t/// respectively mapped to these.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"SIGNAL\",\n\t\tdisplay_order = 191,\n\t)]\n\tpub stop_signal: Option<Signal>,\n\n\t/// Time to wait for the command to exit gracefully\n\t///\n\t/// This is used by the 'restart' mode of '--on-busy-update'. After the graceful stop signal\n\t/// is sent, Watchexec will wait for the command to exit. If it hasn't exited after this time,\n\t/// it is forcefully terminated.\n\t///\n\t/// Takes a unit-less value in seconds, or a time span value such as \"5min 20s\".\n\t/// Providing a unit-less value is deprecated and will warn; it will be an error in the future.\n\t///\n\t/// The default is 10 seconds. 
Set to 0 to immediately force-kill the command.\n\t///\n\t/// This has no practical effect on Windows as the command is always forcefully terminated; see\n\t/// '--stop-signal' for why.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tdefault_value = \"10s\",\n\t\thide_default_value = true,\n\t\tvalue_name = \"TIMEOUT\",\n\t\tdisplay_order = 192,\n\t)]\n\tpub stop_timeout: TimeSpan,\n\n\t/// Kill the command if it runs longer than this duration\n\t///\n\t/// Takes a time span value such as \"30s\", \"5min\", or \"1h 30m\".\n\t///\n\t/// When the timeout is reached, the command is gracefully stopped using --stop-signal, then\n\t/// forcefully terminated after --stop-timeout if still running.\n\t///\n\t/// Each run of the command has its own independent timeout.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"TIMEOUT\",\n\t\tdisplay_order = 193,\n\t)]\n\tpub timeout: Option<TimeSpan>,\n\n\t/// Sleep before running the command\n\t///\n\t/// This option will cause Watchexec to sleep for the specified amount of time before running\n\t/// the command, after an event is detected. This is like using \"sleep 5 && command\" in a shell,\n\t/// but portable and slightly more efficient.\n\t///\n\t/// Takes a unit-less value in seconds, or a time span value such as \"2min 5s\".\n\t/// Providing a unit-less value is deprecated and will warn; it will be an error in the future.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"DURATION\",\n\t\tdisplay_order = 40,\n\t)]\n\tpub delay_run: Option<TimeSpan>,\n\n\t/// Set the working directory\n\t///\n\t/// By default, the working directory of the command is the working directory of Watchexec. You\n\t/// can change that with this option. 
Note that paths may be less intuitive to use with this.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_hint = ValueHint::DirPath,\n\t\tvalue_name = \"DIRECTORY\",\n\t\tdisplay_order = 230,\n\t)]\n\tpub workdir: Option<PathBuf>,\n\n\t/// Provide a socket to the command\n\t///\n\t/// This implements the systemd socket-passing protocol, like with `systemfd`: sockets are\n\t/// opened from the watchexec process, and then passed to the commands it runs. This lets you\n\t/// keep sockets open and avoid address reuse issues or dropping packets.\n\t///\n\t/// This option can be supplied multiple times, to open multiple sockets.\n\t///\n\t/// The value can be any of `PORT` (opens a TCP listening socket at that port), `HOST:PORT`\n\t/// (specify a host IP address; IPv6 addresses can be specified `[bracketed]`), `TYPE::PORT` or\n\t/// `TYPE::HOST:PORT` (specify a socket type, `tcp` / `udp`).\n\t///\n\t/// This integration only provides basic support; if you want more control you should use the\n\t/// `systemfd` tool from <https://github.com/mitsuhiko/systemfd>, upon which this is based. 
The\n\t/// syntax here and the spawning behaviour is identical to `systemfd`, and both watchexec and\n\t/// systemfd are compatible implementations of the systemd socket-activation protocol.\n\t///\n\t/// Watchexec does _not_ set the `LISTEN_PID` variable on unix, which means any child process of\n\t/// your command could accidentally bind to the sockets, unless the `LISTEN_*` variables are\n\t/// removed from the environment.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_COMMAND,\n\t\tvalue_name = \"PORT\",\n\t\tvalue_parser = SocketSpecValueParser,\n\t\tdisplay_order = 60,\n\t)]\n\tpub socket: Vec<SocketSpec>,\n}\n\nimpl CommandArgs {\n\tpub(crate) async fn normalise(&mut self) -> Result<()> {\n\t\tif self.no_process_group {\n\t\t\twarn!(\"--no-process-group is deprecated\");\n\t\t\tself.wrap_process = WrapMode::None;\n\t\t}\n\n\t\tlet workdir = if let Some(w) = take(&mut self.workdir) {\n\t\t\tw\n\t\t} else {\n\t\t\tlet curdir = std::env::current_dir().into_diagnostic()?;\n\t\t\tdunce::canonicalize(curdir).into_diagnostic()?\n\t\t};\n\t\tinfo!(path=?workdir, \"effective working directory\");\n\t\tself.workdir = Some(workdir);\n\n\t\tdebug_assert!(self.workdir.is_some());\n\t\tOk(())\n\t}\n}\n\n#[derive(Clone, Copy, Debug, Default, ValueEnum)]\npub enum WrapMode {\n\t#[default]\n\tGroup,\n\tSession,\n\tNone,\n}\n\npub const WRAP_DEFAULT: &str = if cfg!(target_os = \"macos\") {\n\t\"session\"\n} else {\n\t\"group\"\n};\n\n#[derive(Clone, Debug)]\npub struct EnvVar {\n\tpub key: String,\n\tpub value: OsString,\n}\n\n#[derive(Clone)]\npub(crate) struct EnvVarValueParser;\n\nimpl TypedValueParser for EnvVarValueParser {\n\ttype Value = EnvVar;\n\n\tfn parse_ref(\n\t\t&self,\n\t\t_cmd: &clap::Command,\n\t\t_arg: Option<&clap::Arg>,\n\t\tvalue: &OsStr,\n\t) -> Result<Self::Value, Error> {\n\t\tlet value = value\n\t\t\t.to_str()\n\t\t\t.ok_or_else(|| Error::raw(ErrorKind::ValueValidation, \"invalid UTF-8\"))?;\n\n\t\tlet (key, value) = 
value\n\t\t\t.split_once('=')\n\t\t\t.ok_or_else(|| Error::raw(ErrorKind::ValueValidation, \"missing = separator\"))?;\n\n\t\tOk(EnvVar {\n\t\t\tkey: key.into(),\n\t\t\tvalue: value.into(),\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/args/events.rs",
    "content": "use std::{ffi::OsStr, path::PathBuf};\n\nuse clap::{\n\tbuilder::TypedValueParser, error::ErrorKind, Arg, Command, CommandFactory, Parser, ValueEnum,\n};\nuse miette::Result;\n\nuse tracing::warn;\nuse watchexec_signals::Signal;\n\nuse super::{command::CommandArgs, filtering::FilteringArgs, TimeSpan, OPTSET_EVENTS};\n\n#[derive(Debug, Clone, Parser)]\npub struct EventsArgs {\n\t/// What to do when receiving events while the command is running\n\t///\n\t/// Default is to 'do-nothing', which ignores events while the command is running, so that\n\t/// changes that occur due to the command are ignored, like compilation outputs. You can also\n\t/// use 'queue' which will run the command once again when the current run has finished if any\n\t/// events occur while it's running, or 'restart', which terminates the running command and starts\n\t/// a new one. Finally, there's 'signal', which only sends a signal; this can be useful with\n\t/// programs that can reload their configuration without a full restart.\n\t///\n\t/// The signal can be specified with the '--signal' option.\n\t#[arg(\n\t\tshort,\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tdefault_value = \"do-nothing\",\n\t\thide_default_value = true,\n\t\tvalue_name = \"MODE\",\n\t\tdisplay_order = 150,\n\t)]\n\tpub on_busy_update: OnBusyUpdate,\n\n\t/// Restart the process if it's still running\n\t///\n\t/// This is a shorthand for '--on-busy-update=restart'.\n\t#[arg(\n\t\tshort,\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tconflicts_with_all = [\"on_busy_update\"],\n\t\tdisplay_order = 180,\n\t)]\n\tpub restart: bool,\n\n\t/// Send a signal to the process when it's still running\n\t///\n\t/// Specify a signal to send to the process when it's still running. 
This implies\n\t/// '--on-busy-update=signal'; otherwise the signal used when that mode is 'restart' is\n\t/// controlled by '--stop-signal'.\n\t///\n\t/// See the long documentation for '--stop-signal' for syntax.\n\t///\n\t/// Signals are not supported on Windows at the moment, and will always be overridden to 'kill'.\n\t/// See '--stop-signal' for more on Windows \"signals\".\n\t#[arg(\n\t\tshort,\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tconflicts_with_all = [\"restart\"],\n\t\tvalue_name = \"SIGNAL\",\n\t\tdisplay_order = 190,\n\t)]\n\tpub signal: Option<Signal>,\n\n\t/// Translate signals from the OS to signals to send to the command\n\t///\n\t/// Takes a pair of signal names, separated by a colon, such as \"TERM:INT\" to map SIGTERM to\n\t/// SIGINT. The first signal is the one received by watchexec, and the second is the one sent to\n\t/// the command. The second can be omitted to discard the first signal, such as \"TERM:\" to\n\t/// not do anything on SIGTERM.\n\t///\n\t/// If SIGINT or SIGTERM are mapped, then they no longer quit Watchexec. Besides making it hard\n\t/// to quit Watchexec itself, this is useful to pass a Ctrl-C to the command without also\n\t/// terminating Watchexec and the underlying program with it, e.g. with \"INT:INT\".\n\t///\n\t/// This option can be specified multiple times to map multiple signals.\n\t///\n\t/// Signal syntax is case-insensitive for short names (like \"TERM\", \"USR2\") and long names (like\n\t/// \"SIGKILL\", \"SIGHUP\"). Signal numbers are also supported (like \"15\", \"31\"). 
On Windows, the\n\t/// forms \"STOP\", \"CTRL+C\", and \"CTRL+BREAK\" are also supported to receive, but Watchexec cannot\n\t/// yet deliver other \"signals\" than a STOP.\n\t#[arg(\n\t\tlong = \"map-signal\",\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tvalue_name = \"SIGNAL:SIGNAL\",\n\t\tvalue_parser = SignalMappingValueParser,\n\t\tdisplay_order = 130,\n\t)]\n\tpub signal_map: Vec<SignalMapping>,\n\n\t/// Time to wait for new events before taking action\n\t///\n\t/// When an event is received, Watchexec will wait for up to this amount of time before handling\n\t/// it (such as running the command). This is essential as what you might perceive as a single\n\t/// change may actually emit many events, and without this behaviour, Watchexec would run much\n\t/// too often. Additionally, it's not infrequent that file writes are not atomic, and each write\n\t/// may emit an event, so this is a good way to avoid running a command while a file is\n\t/// partially written.\n\t///\n\t/// An alternative use is to set a high value (like \"30min\" or longer), to save power or\n\t/// bandwidth on intensive tasks, like an ad-hoc backup script. In those use cases, note that\n\t/// every accumulated event will build up in memory.\n\t///\n\t/// Takes a unit-less value in milliseconds, or a time span value such as \"5sec 20ms\".\n\t/// Providing a unit-less value is deprecated and will warn; it will be an error in the future.\n\t///\n\t/// The default is 50 milliseconds. Setting to 0 is highly discouraged.\n\t#[arg(\n\t\tlong,\n\t\tshort,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tdefault_value = \"50ms\",\n\t\thide_default_value = true,\n\t\tvalue_name = \"TIMEOUT\",\n\t\tdisplay_order = 40,\n\t)]\n\tpub debounce: TimeSpan<1_000_000>,\n\n\t/// Exit when stdin closes\n\t///\n\t/// This watches the stdin file descriptor for EOF, and exits Watchexec gracefully when it is\n\t/// closed. 
This is used by some process managers to avoid leaving zombie processes around.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tdisplay_order = 191,\n\t)]\n\tpub stdin_quit: bool,\n\n\t/// Respond to keypresses to quit, restart, or pause\n\t///\n\t/// In interactive mode, Watchexec listens for keypresses and responds to them. Currently\n\t/// supported keys are: 'r' to restart the command, 'p' to toggle pausing the watch, and 'q'\n\t/// to quit. This requires a terminal (TTY) and puts stdin into raw mode, so the child process\n\t/// will not receive stdin input.\n\t#[arg(\n\t\tlong,\n\t\tshort = 'I',\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tdisplay_order = 90,\n\t)]\n\tpub interactive: bool,\n\n\t/// Exit when the command has an error\n\t///\n\t/// By default, Watchexec will continue to watch and re-run the command after the command\n\t/// exits, regardless of its exit status. With this option, it will instead exit when the\n\t/// command completes with any non-success exit status.\n\t///\n\t/// This is useful when running Watchexec in a process manager or container, where you want\n\t/// the container to restart when the command fails rather than hang waiting for file changes.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tdisplay_order = 91,\n\t)]\n\tpub exit_on_error: bool,\n\n\t/// Wait until first change before running command\n\t///\n\t/// By default, Watchexec will run the command once immediately. With this option, it will\n\t/// instead wait until an event is detected before running the command as normal.\n\t#[arg(\n\t\tlong,\n\t\tshort,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tdisplay_order = 161,\n\t)]\n\tpub postpone: bool,\n\n\t/// Poll for filesystem changes\n\t///\n\t/// By default, and where available, Watchexec uses the operating system's native file system\n\t/// watching capabilities. 
This option disables that and instead uses a polling mechanism, which\n\t/// is less efficient but can work around issues with some file systems (like network shares) or\n\t/// edge cases.\n\t///\n\t/// Optionally takes a unit-less value in milliseconds, or a time span value such as \"2s 500ms\",\n\t/// to use as the polling interval. If not specified, the default is 30 seconds.\n\t/// Providing a unit-less value is deprecated and will warn; it will be an error in the future.\n\t///\n\t/// Aliased as '--force-poll'.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\talias = \"force-poll\",\n\t\tnum_args = 0..=1,\n\t\tdefault_missing_value = \"30s\",\n\t\tvalue_name = \"INTERVAL\",\n\t\tdisplay_order = 160,\n\t)]\n\tpub poll: Option<TimeSpan<1_000_000>>,\n\n\t/// Configure event emission\n\t///\n\t/// Watchexec can emit event information when running a command, which can be used by the child\n\t/// process to target specific changed files.\n\t///\n\t/// One thing to take care with is assuming inherent behaviour where there is only chance.\n\t/// Notably, it could appear as if the `RENAMED` variable contains both the original and the new\n\t/// path being renamed. In previous versions, it would even appear on some platforms as if the\n\t/// original always came before the new. However, none of this was true. It's impossible to\n\t/// reliably and portably know which changed path is the old or new, \"half\" renames may appear\n\t/// (only the original, only the new), \"unknown\" renames may appear (change was a rename, but\n\t/// whether it was the old or new isn't known), rename events might split across two debouncing\n\t/// boundaries, and so on.\n\t///\n\t/// This option controls where that information is emitted. It defaults to 'none', which doesn't\n\t/// emit event information at all. 
The other options are 'environment' (deprecated), 'stdio',\n\t/// 'file', 'json-stdio', and 'json-file'.\n\t///\n\t/// The 'stdio' and 'file' modes are text-based: 'stdio' writes absolute paths to the stdin of\n\t/// the command, one per line, each prefixed with `create:`, `remove:`, `rename:`, `modify:`,\n\t/// or `other:`, then closes the handle; 'file' writes the same thing to a temporary file, and\n\t/// its path is given with the $WATCHEXEC_EVENTS_FILE environment variable.\n\t///\n\t/// There are also two JSON modes, which are based on JSON objects and can represent the full\n\t/// set of events Watchexec handles. Here's an example of a folder being created on Linux:\n\t///\n\t/// ```json\n\t///   {\n\t///     \"tags\": [\n\t///       {\n\t///         \"kind\": \"path\",\n\t///         \"absolute\": \"/home/user/your/new-folder\",\n\t///         \"filetype\": \"dir\"\n\t///       },\n\t///       {\n\t///         \"kind\": \"fs\",\n\t///         \"simple\": \"create\",\n\t///         \"full\": \"Create(Folder)\"\n\t///       },\n\t///       {\n\t///         \"kind\": \"source\",\n\t///         \"source\": \"filesystem\",\n\t///       }\n\t///     ],\n\t///     \"metadata\": {\n\t///       \"notify-backend\": \"inotify\"\n\t///     }\n\t///   }\n\t/// ```\n\t///\n\t/// The fields are as follows:\n\t///\n\t///   - `tags`, structured event data.\n\t///   - `tags[].kind`, which can be:\n\t///     * 'path', along with:\n\t///       + `absolute`, an absolute path.\n\t///       + `filetype`, a file type if known ('dir', 'file', 'symlink', 'other').\n\t///     * 'fs':\n\t///       + `simple`, the \"simple\" event type ('access', 'create', 'modify', 'remove', or 'other').\n\t///       + `full`, the \"full\" event type, which is too complex to fully describe here, but looks like 'General(Precise(Specific))'.\n\t///     * 'source', along with:\n\t///       + `source`, the source of the event ('filesystem', 'keyboard', 'mouse', 'os', 'time', 'internal').\n\t///     * 
'keyboard', along with:\n\t///       + `keycode`. Currently only the value 'eof' is supported.\n\t///     * 'process', for events caused by processes:\n\t///       + `pid`, the process ID.\n\t///     * 'signal', for signals sent to Watchexec:\n\t///       + `signal`, the normalised signal name ('hangup', 'interrupt', 'quit', 'terminate', 'user1', 'user2').\n\t///     * 'completion', for when a command ends:\n\t///       + `disposition`, the exit disposition ('success', 'error', 'signal', 'stop', 'exception', 'continued').\n\t///       + `code`, the exit, signal, stop, or exception code.\n\t///   - `metadata`, additional information about the event.\n\t///\n\t/// The 'json-stdio' mode will emit JSON events to the standard input of the command, one per\n\t/// line, then close stdin. The 'json-file' mode will create a temporary file, write the\n\t/// events to it, and provide the path to the file with the $WATCHEXEC_EVENTS_FILE\n\t/// environment variable.\n\t///\n\t/// Finally, the 'environment' mode was the default until 2.0. It sets environment variables\n\t/// with the paths of the affected files, for filesystem events:\n\t///\n\t/// $WATCHEXEC_COMMON_PATH is set to the longest common path of all of the below variables,\n\t/// and so should be prepended to each path to obtain the full/real path. Then:\n\t///\n\t///   - $WATCHEXEC_CREATED_PATH is set when files/folders were created\n\t///   - $WATCHEXEC_REMOVED_PATH is set when files/folders were removed\n\t///   - $WATCHEXEC_RENAMED_PATH is set when files/folders were renamed\n\t///   - $WATCHEXEC_WRITTEN_PATH is set when files/folders were modified\n\t///   - $WATCHEXEC_META_CHANGED_PATH is set when files/folders' metadata were modified\n\t///   - $WATCHEXEC_OTHERWISE_CHANGED_PATH is set for every other kind of pathed event\n\t///\n\t/// Multiple paths are separated by the system path separator, ';' on Windows and ':' on unix.\n\t/// Within each variable, paths are deduplicated and sorted in binary order (i.e. 
neither\n\t/// Unicode nor locale aware).\n\t///\n\t/// This is the legacy mode, is deprecated, and will be removed in the future. The environment\n\t/// is a very restricted space, while also limited in what it can usefully represent. Large\n\t/// numbers of files will either cause the environment to be truncated, or may error or crash\n\t/// the process entirely. The $WATCHEXEC_COMMON_PATH is also unintuitive, as demonstrated by the\n\t/// multiple confused queries that have landed in my inbox over the years.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_EVENTS,\n\t\tverbatim_doc_comment,\n\t\tdefault_value = \"none\",\n\t\thide_default_value = true,\n\t\tvalue_name = \"MODE\",\n\t\tdisplay_order = 50,\n\t)]\n\tpub emit_events_to: EmitEvents,\n}\n\nimpl EventsArgs {\n\tpub(crate) fn normalise(\n\t\t&mut self,\n\t\tcommand: &CommandArgs,\n\t\tfiltering: &FilteringArgs,\n\t\tonly_emit_events: bool,\n\t) -> Result<()> {\n\t\tif self.signal.is_some() {\n\t\t\tself.on_busy_update = OnBusyUpdate::Signal;\n\t\t} else if self.restart {\n\t\t\tself.on_busy_update = OnBusyUpdate::Restart;\n\t\t}\n\n\t\tif command.no_environment {\n\t\t\twarn!(\"--no-environment is deprecated\");\n\t\t\tself.emit_events_to = EmitEvents::None;\n\t\t}\n\n\t\tif only_emit_events\n\t\t\t&& !matches!(\n\t\t\t\tself.emit_events_to,\n\t\t\t\tEmitEvents::JsonStdio | EmitEvents::Stdio\n\t\t\t) {\n\t\t\tself.emit_events_to = EmitEvents::JsonStdio;\n\t\t}\n\n\t\tif self.stdin_quit && filtering.watch_file == Some(PathBuf::from(\"-\")) {\n\t\t\tsuper::Args::command()\n\t\t\t\t.error(\n\t\t\t\t\tErrorKind::InvalidValue,\n\t\t\t\t\t\"stdin-quit cannot be used when --watch-file=-\",\n\t\t\t\t)\n\t\t\t\t.exit();\n\t\t}\n\n\t\tif self.interactive && filtering.watch_file == Some(PathBuf::from(\"-\")) {\n\t\t\tsuper::Args::command()\n\t\t\t\t.error(\n\t\t\t\t\tErrorKind::InvalidValue,\n\t\t\t\t\t\"interactive mode cannot be used when 
--watch-file=-\",\n\t\t\t\t)\n\t\t\t\t.exit();\n\t\t}\n\n\t\tOk(())\n\t}\n}\n\n#[derive(Clone, Copy, Debug, Default, ValueEnum)]\npub enum EmitEvents {\n\t#[default]\n\tEnvironment,\n\tStdio,\n\tFile,\n\tJsonStdio,\n\tJsonFile,\n\tNone,\n}\n\n#[derive(Clone, Copy, Debug, Default, ValueEnum)]\npub enum OnBusyUpdate {\n\t#[default]\n\tQueue,\n\tDoNothing,\n\tRestart,\n\tSignal,\n}\n\n#[derive(Clone, Copy, Debug)]\npub struct SignalMapping {\n\tpub from: Signal,\n\tpub to: Option<Signal>,\n}\n\n#[derive(Clone)]\nstruct SignalMappingValueParser;\n\nimpl TypedValueParser for SignalMappingValueParser {\n\ttype Value = SignalMapping;\n\n\tfn parse_ref(\n\t\t&self,\n\t\t_cmd: &Command,\n\t\t_arg: Option<&Arg>,\n\t\tvalue: &OsStr,\n\t) -> Result<Self::Value, clap::error::Error> {\n\t\tlet value = value\n\t\t\t.to_str()\n\t\t\t.ok_or_else(|| clap::error::Error::raw(ErrorKind::ValueValidation, \"invalid UTF-8\"))?;\n\t\tlet (from, to) = value\n\t\t\t.split_once(':')\n\t\t\t.ok_or_else(|| clap::error::Error::raw(ErrorKind::ValueValidation, \"missing ':'\"))?;\n\n\t\tlet from = from\n\t\t\t.parse::<Signal>()\n\t\t\t.map_err(|sigparse| clap::error::Error::raw(ErrorKind::ValueValidation, sigparse))?;\n\t\tlet to = if to.is_empty() {\n\t\t\tNone\n\t\t} else {\n\t\t\tSome(to.parse::<Signal>().map_err(|sigparse| {\n\t\t\t\tclap::error::Error::raw(ErrorKind::ValueValidation, sigparse)\n\t\t\t})?)\n\t\t};\n\n\t\tOk(Self::Value { from, to })\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/args/filtering.rs",
    "content": "use std::{\n\tcollections::BTreeSet,\n\tmem::take,\n\tpath::{Path, PathBuf},\n};\n\nuse clap::{Parser, ValueEnum, ValueHint};\nuse miette::{IntoDiagnostic, Result};\nuse tokio::{\n\tfs::File,\n\tio::{AsyncBufReadExt, BufReader},\n};\nuse tracing::{debug, info};\nuse watchexec::{paths::PATH_SEPARATOR, WatchedPath};\n\nuse crate::filterer::parse::FilterProgram;\n\nuse super::{command::CommandArgs, OPTSET_FILTERING};\n\n#[derive(Debug, Clone, Parser)]\npub struct FilteringArgs {\n\t#[doc(hidden)]\n\t#[arg(skip)]\n\tpub paths: Vec<WatchedPath>,\n\n\t/// Watch a specific file or directory\n\t///\n\t/// By default, Watchexec watches the current directory.\n\t///\n\t/// When watching a single file, it's often better to watch the containing directory instead,\n\t/// and filter on the filename. Some editors may replace the file with a new one when saving,\n\t/// and some platforms may not detect that or further changes.\n\t///\n\t/// Upon starting, Watchexec resolves a \"project origin\" from the watched paths. See the help\n\t/// for '--project-origin' for more information.\n\t///\n\t/// This option can be specified multiple times to watch multiple files or directories.\n\t///\n\t/// The special value '/dev/null', provided as the only path watched, will cause Watchexec to\n\t/// not watch any paths. 
Other event sources (like signals or key events) may still be used.\n\t#[arg(\n\t\tshort = 'w',\n\t\tlong = \"watch\",\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_hint = ValueHint::AnyPath,\n\t\tvalue_name = \"PATH\",\n\t\tdisplay_order = 230,\n\t)]\n\tpub recursive_paths: Vec<PathBuf>,\n\n\t/// Watch a specific directory, non-recursively\n\t///\n\t/// Unlike '-w', folders watched with this option are not recursed into.\n\t///\n\t/// This option can be specified multiple times to watch multiple directories non-recursively.\n\t#[arg(\n\t\tshort = 'W',\n\t\tlong = \"watch-non-recursive\",\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_hint = ValueHint::AnyPath,\n\t\tvalue_name = \"PATH\",\n\t\tdisplay_order = 231,\n\t)]\n\tpub non_recursive_paths: Vec<PathBuf>,\n\n\t/// Watch files and directories from a file\n\t///\n\t/// Each line in the file will be interpreted as if given to '-w'.\n\t///\n\t/// For more complex uses (like watching non-recursively), use the argfile capability: build a\n\t/// file containing command-line options and pass it to watchexec with `@path/to/argfile`.\n\t///\n\t/// The special value '-' will read from STDIN; this is incompatible with '--stdin-quit'.\n\t#[arg(\n\t\tshort = 'F',\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_hint = ValueHint::AnyPath,\n\t\tvalue_name = \"PATH\",\n\t\tdisplay_order = 232,\n\t)]\n\tpub watch_file: Option<PathBuf>,\n\n\t/// Don't load gitignores\n\t///\n\t/// Among other VCS exclude files, like for Mercurial, Subversion, Bazaar, Darcs, and Fossil. Note\n\t/// that Watchexec will detect which of these is in use, if any, and only load the relevant\n\t/// files. 
Both global (like '~/.gitignore') and local (like '.gitignore') files are considered.\n\t///\n\t/// This option is useful if you want to watch files that are ignored by Git.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tdisplay_order = 145,\n\t)]\n\tpub no_vcs_ignore: bool,\n\n\t/// Don't load project-local ignores\n\t///\n\t/// This disables loading of project-local ignore files, like '.gitignore' or '.ignore' in the\n\t/// watched project. This is contrasted with '--no-vcs-ignore', which disables loading of Git\n\t/// and other VCS ignore files, and with '--no-global-ignore', which disables loading of global\n\t/// or user ignore files, like '~/.gitignore' or '~/.config/watchexec/ignore'.\n\t///\n\t/// Supported project ignore files:\n\t///\n\t///   - Git: .gitignore at project root and child directories, .git/info/exclude, and the file pointed to by `core.excludesFile` in .git/config.\n\t///   - Mercurial: .hgignore at project root and child directories.\n\t///   - Bazaar: .bzrignore at project root.\n\t///   - Darcs: _darcs/prefs/boring\n\t///   - Fossil: .fossil-settings/ignore-glob\n\t///   - Ripgrep/Watchexec/generic: .ignore at project root and child directories.\n\t///\n\t/// VCS ignore files (Git, Mercurial, Bazaar, Darcs, Fossil) are only used if the corresponding\n\t/// VCS is discovered to be in use for the project/origin. For example, a .bzrignore in a Git\n\t/// repository will be discarded.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tverbatim_doc_comment,\n\t\tdisplay_order = 144,\n\t)]\n\tpub no_project_ignore: bool,\n\n\t/// Don't load global ignores\n\t///\n\t/// This disables loading of global or user ignore files, like '~/.gitignore',\n\t/// '~/.config/watchexec/ignore', or '%APPDATA%\\Bazaar\\2.0\\ignore'. 
Contrast with\n\t/// '--no-vcs-ignore' and '--no-project-ignore'.\n\t///\n\t/// Supported global ignore files:\n\t///\n\t///   - Git (if core.excludesFile is set): the file at that path\n\t///   - Git (otherwise): the first found of $XDG_CONFIG_HOME/git/ignore, %APPDATA%/.gitignore, %USERPROFILE%/.gitignore, $HOME/.config/git/ignore, $HOME/.gitignore.\n\t///   - Bazaar: the first found of %APPDATA%/Bazaar/2.0/ignore, $HOME/.bazaar/ignore.\n\t///   - Watchexec: the first found of $XDG_CONFIG_HOME/watchexec/ignore, %APPDATA%/watchexec/ignore, %USERPROFILE%/.watchexec/ignore, $HOME/.watchexec/ignore.\n\t///\n\t/// Like for project files, Git and Bazaar global files will only be used for the corresponding\n\t/// VCS as used in the project.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tverbatim_doc_comment,\n\t\tdisplay_order = 142,\n\t)]\n\tpub no_global_ignore: bool,\n\n\t/// Don't use internal default ignores\n\t///\n\t/// Watchexec has a set of default ignore patterns, such as editor swap files, `*.pyc`, `*.pyo`,\n\t/// `.DS_Store`, `.bzr`, `_darcs`, `.fossil-settings`, `.git`, `.hg`, `.pijul`, `.svn`, and\n\t/// Watchexec log files.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tdisplay_order = 140,\n\t)]\n\tpub no_default_ignore: bool,\n\n\t/// Don't discover ignore files at all\n\t///\n\t/// This is a shorthand for '--no-global-ignore', '--no-vcs-ignore', '--no-project-ignore', but\n\t/// even more efficient as it will skip all the ignore discovery mechanisms from the get go.\n\t///\n\t/// Note that default ignores are still loaded; see '--no-default-ignore'.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tdisplay_order = 141,\n\t)]\n\tpub no_discover_ignore: bool,\n\n\t/// Don't ignore anything at all\n\t///\n\t/// This is a shorthand for '--no-discover-ignore', '--no-default-ignore'.\n\t///\n\t/// Note that ignores explicitly loaded via other command line options, such as '--ignore' or\n\t/// '--ignore-file', 
will still be used.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tdisplay_order = 92,\n\t)]\n\tpub ignore_nothing: bool,\n\n\t/// Filename extensions to filter to\n\t///\n\t/// This is a quick filter to only emit events for files with the given extensions. Extensions\n\t/// can be given with or without the leading dot (e.g. 'js' or '.js'). Multiple extensions can\n\t/// be given by repeating the option or by separating them with commas.\n\t#[arg(\n\t\tlong = \"exts\",\n\t\tshort = 'e',\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_delimiter = ',',\n\t\tvalue_name = \"EXTENSIONS\",\n\t\tdisplay_order = 50,\n\t)]\n\tpub filter_extensions: Vec<String>,\n\n\t/// Filename patterns to filter to\n\t///\n\t/// Provide a glob-like filter pattern, and only events for files matching the pattern will be\n\t/// emitted. Multiple patterns can be given by repeating the option. Events that are not from\n\t/// files (e.g. signals, keyboard events) will pass through untouched.\n\t#[arg(\n\t\tlong = \"filter\",\n\t\tshort = 'f',\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_name = \"PATTERN\",\n\t\tdisplay_order = 60,\n\t)]\n\tpub filter_patterns: Vec<String>,\n\n\t/// Files to load filters from\n\t///\n\t/// Provide a path to a file containing filters, one per line. Empty lines and lines starting\n\t/// with '#' are ignored. 
Uses the same pattern format as the '--filter' option.\n\t///\n\t/// This can also be used via the $WATCHEXEC_FILTER_FILES environment variable.\n\t#[arg(\n\t\tlong = \"filter-file\",\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_delimiter = PATH_SEPARATOR.chars().next().unwrap(),\n\t\tvalue_hint = ValueHint::FilePath,\n\t\tvalue_name = \"PATH\",\n\t\tenv = \"WATCHEXEC_FILTER_FILES\",\n\t\thide_env = true,\n\t\tdisplay_order = 61,\n\t)]\n\tpub filter_files: Vec<PathBuf>,\n\n\t/// Set the project origin\n\t///\n\t/// Watchexec will attempt to discover the project's \"origin\" (or \"root\") by searching for a\n\t/// variety of markers, like files or directory patterns. It does its best but sometimes gets it\n\t/// wrong, and you can override that with this option.\n\t///\n\t/// The project origin is used to determine the path of certain ignore files, which VCS is being\n\t/// used, the meaning of a leading '/' in filtering patterns, and maybe more in the future.\n\t///\n\t/// When set, Watchexec will also not bother searching, which can be significantly faster.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_hint = ValueHint::DirPath,\n\t\tvalue_name = \"DIRECTORY\",\n\t\tdisplay_order = 160,\n\t)]\n\tpub project_origin: Option<PathBuf>,\n\n\t/// Filter programs.\n\t///\n\t/// Provide your own custom filter programs in jaq (similar to jq) syntax. 
Programs are given\n\t/// an event in the same format as described in '--emit-events-to' and must return a boolean.\n\t/// Invalid programs will make watchexec fail to start; use '-v' to see program runtime errors.\n\t///\n\t/// In addition to the jaq stdlib, watchexec adds some custom filter definitions:\n\t///\n\t///   - 'path | file_meta' returns file metadata or null if the file does not exist.\n\t///\n\t///   - 'path | file_size' returns the size of the file at path, or null if it does not exist.\n\t///\n\t///   - 'path | file_read(bytes)' returns a string with the first n bytes of the file at path.\n\t///     If the file is smaller than n bytes, the whole file is returned. There is no filter to\n\t///     read the whole file at once to encourage limiting the amount of data read and processed.\n\t///\n\t///   - 'string | hash', and 'path | file_hash' return the hash of the string or file at path.\n\t///     No guarantee is made about the algorithm used: treat it as an opaque value.\n\t///\n\t///   - 'any | kv_store(key)', 'kv_fetch(key)', and 'kv_clear' provide a simple key-value store.\n\t///     Data is kept in memory only, there is no persistence. Consistency is not guaranteed.\n\t///\n\t///   - 'any | printout', 'any | printerr', and 'any | log(level)' will print or log any given\n\t///     value to stdout, stderr, or the log (levels = error, warn, info, debug, trace), and\n\t///     pass the value through (so '[1] | log(\"debug\") | .[]' will produce a '1' and log '[1]').\n\t///\n\t/// All filtering done with such programs, and especially those using kv or filesystem access,\n\t/// is much slower than the other filtering methods. If filtering is too slow, events will back\n\t/// up and stall watchexec. 
Take care when designing your filters.\n\t///\n\t/// If the argument to this option starts with an '@', the rest of the argument is taken to be\n\t/// the path to a file containing a jaq program.\n\t///\n\t/// Jaq programs are run in order, after all other filters, and short-circuit: if a filter (jaq\n\t/// or not) rejects an event, execution stops there, and no other filters are run. Additionally,\n\t/// they stop after outputting the first value, so you'll want to use 'any' or 'all' when\n\t/// iterating, otherwise only the first item will be processed, which can be quite confusing!\n\t///\n\t/// Find user-contributed programs or submit your own useful ones at\n\t/// <https://github.com/watchexec/watchexec/discussions/592>.\n\t///\n\t/// ## Examples:\n\t///\n\t/// Regexp ignore filter on paths:\n\t///\n\t///   'all(.tags[] | select(.kind == \"path\"); .absolute | test(\"[.]test[.]js$\")) | not'\n\t///\n\t/// Pass any event that creates a file:\n\t///\n\t///   'any(.tags[] | select(.kind == \"fs\"); .simple == \"create\")'\n\t///\n\t/// Pass events that touch executable files:\n\t///\n\t///   'any(.tags[] | select(.kind == \"path\" && .filetype == \"file\"); .absolute | metadata | .executable)'\n\t///\n\t/// Ignore files that start with shebangs:\n\t///\n\t///   'any(.tags[] | select(.kind == \"path\" && .filetype == \"file\"); .absolute | read(2) == \"#!\") | not'\n\t#[arg(\n\t\tlong = \"filter-prog\",\n\t\tshort = 'j',\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_name = \"EXPRESSION\",\n\t\tdisplay_order = 62,\n\t)]\n\tpub filter_programs: Vec<String>,\n\n\t#[doc(hidden)]\n\t#[clap(skip)]\n\tpub filter_programs_parsed: Vec<FilterProgram>,\n\n\t/// Filename patterns to filter out\n\t///\n\t/// Provide a glob-like filter pattern, and events for files matching the pattern will be\n\t/// excluded. Multiple patterns can be given by repeating the option. Events that are not from\n\t/// files (e.g. 
signals, keyboard events) will pass through untouched.\n\t#[arg(\n\t\tlong = \"ignore\",\n\t\tshort = 'i',\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_name = \"PATTERN\",\n\t\tdisplay_order = 90,\n\t)]\n\tpub ignore_patterns: Vec<String>,\n\n\t/// Files to load ignores from\n\t///\n\t/// Provide a path to a file containing ignores, one per line. Empty lines and lines starting\n\t/// with '#' are ignored. Uses the same pattern format as the '--ignore' option.\n\t///\n\t/// This can also be used via the $WATCHEXEC_IGNORE_FILES environment variable.\n\t#[arg(\n\t\tlong = \"ignore-file\",\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tvalue_delimiter = PATH_SEPARATOR.chars().next().unwrap(),\n\t\tvalue_hint = ValueHint::FilePath,\n\t\tvalue_name = \"PATH\",\n\t\tenv = \"WATCHEXEC_IGNORE_FILES\",\n\t\thide_env = true,\n\t\tdisplay_order = 91,\n\t)]\n\tpub ignore_files: Vec<PathBuf>,\n\n\t/// Filesystem events to filter to\n\t///\n\t/// This is a quick filter to only emit events for the given types of filesystem changes. Choose\n\t/// from 'access', 'create', 'remove', 'rename', 'modify', 'metadata'. Multiple types can be\n\t/// given by repeating the option or by separating them with commas. By default, this is all\n\t/// types except for 'access'.\n\t///\n\t/// This may apply filtering at the kernel level when possible, which can be more efficient, but\n\t/// may be more confusing when reading the logs.\n\t#[arg(\n\t\tlong = \"fs-events\",\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tdefault_value = \"create,remove,rename,modify,metadata\",\n\t\tvalue_delimiter = ',',\n\t\thide_default_value = true,\n\t\tvalue_name = \"EVENTS\",\n\t\tdisplay_order = 63,\n\t)]\n\tpub filter_fs_events: Vec<FsEvent>,\n\n\t/// Don't emit fs events for metadata changes\n\t///\n\t/// This is a shorthand for '--fs-events create,remove,rename,modify'. 
Using it alongside the\n\t/// '--fs-events' option is non-sensical and not allowed.\n\t#[arg(\n\t\tlong = \"no-meta\",\n\t\thelp_heading = OPTSET_FILTERING,\n\t\tconflicts_with = \"filter_fs_events\",\n\t\tdisplay_order = 142,\n\t)]\n\tpub filter_fs_meta: bool,\n}\n\nimpl FilteringArgs {\n\tpub(crate) async fn normalise(&mut self, command: &CommandArgs) -> Result<()> {\n\t\tif self.ignore_nothing {\n\t\t\tself.no_global_ignore = true;\n\t\t\tself.no_vcs_ignore = true;\n\t\t\tself.no_project_ignore = true;\n\t\t\tself.no_default_ignore = true;\n\t\t\tself.no_discover_ignore = true;\n\t\t}\n\n\t\tif self.filter_fs_meta {\n\t\t\tself.filter_fs_events = vec![\n\t\t\t\tFsEvent::Create,\n\t\t\t\tFsEvent::Remove,\n\t\t\t\tFsEvent::Rename,\n\t\t\t\tFsEvent::Modify,\n\t\t\t];\n\t\t}\n\n\t\tif let Some(watch_file) = self.watch_file.as_ref() {\n\t\t\tif watch_file == Path::new(\"-\") {\n\t\t\t\tlet file = tokio::io::stdin();\n\t\t\t\tlet mut lines = BufReader::new(file).lines();\n\t\t\t\twhile let Ok(Some(line)) = lines.next_line().await {\n\t\t\t\t\tself.recursive_paths.push(line.into());\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tlet file = File::open(watch_file).await.into_diagnostic()?;\n\t\t\t\tlet mut lines = BufReader::new(file).lines();\n\t\t\t\twhile let Ok(Some(line)) = lines.next_line().await {\n\t\t\t\t\tself.recursive_paths.push(line.into());\n\t\t\t\t}\n\t\t\t};\n\t\t}\n\n\t\tlet project_origin = if let Some(p) = take(&mut self.project_origin) {\n\t\t\tp\n\t\t} else {\n\t\t\tcrate::dirs::project_origin(&self, command).await?\n\t\t};\n\t\tdebug!(path=?project_origin, \"resolved project origin\");\n\t\tlet project_origin = dunce::canonicalize(project_origin).into_diagnostic()?;\n\t\tinfo!(path=?project_origin, \"effective project origin\");\n\t\tself.project_origin = Some(project_origin.clone());\n\n\t\tself.paths = take(&mut self.recursive_paths)\n\t\t\t.into_iter()\n\t\t\t.map(|path| {\n\t\t\t\t{\n\t\t\t\t\tif path.is_absolute() {\n\t\t\t\t\t\tOk(path)\n\t\t\t\t\t} 
else {\n\t\t\t\t\t\tdunce::canonicalize(project_origin.join(path)).into_diagnostic()\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t.map(WatchedPath::recursive)\n\t\t\t})\n\t\t\t.chain(take(&mut self.non_recursive_paths).into_iter().map(|path| {\n\t\t\t\t{\n\t\t\t\t\tif path.is_absolute() {\n\t\t\t\t\t\tOk(path)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tdunce::canonicalize(project_origin.join(path)).into_diagnostic()\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t.map(WatchedPath::non_recursive)\n\t\t\t}))\n\t\t\t.collect::<Result<BTreeSet<_>>>()?\n\t\t\t.into_iter()\n\t\t\t.collect();\n\n\t\tif self.paths.len() == 1\n\t\t\t&& self\n\t\t\t\t.paths\n\t\t\t\t.first()\n\t\t\t\t.map_or(false, |p| p.as_ref() == Path::new(\"/dev/null\"))\n\t\t{\n\t\t\tinfo!(\"only path is /dev/null, not watching anything\");\n\t\t\tself.paths = Vec::new();\n\t\t} else if self.paths.is_empty() {\n\t\t\tinfo!(\"no paths, using current directory\");\n\t\t\tself.paths.push(command.workdir.as_deref().unwrap().into());\n\t\t}\n\t\tinfo!(paths=?self.paths, \"effective watched paths\");\n\n\t\tfor (n, prog) in self.filter_programs.iter().enumerate() {\n\t\t\tif let Some(progpath) = prog.strip_prefix('@') {\n\t\t\t\tself.filter_programs_parsed\n\t\t\t\t\t.push(FilterProgram::new_jaq_from_file(progpath).await?);\n\t\t\t} else {\n\t\t\t\tself.filter_programs_parsed\n\t\t\t\t\t.push(FilterProgram::new_jaq_from_arg(n, prog.clone())?);\n\t\t\t}\n\t\t}\n\n\t\tdebug_assert!(self.project_origin.is_some());\n\t\tOk(())\n\t}\n}\n\n#[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)]\npub enum FsEvent {\n\tAccess,\n\tCreate,\n\tRemove,\n\tRename,\n\tModify,\n\tMetadata,\n}\n"
  },
  {
    "path": "crates/cli/src/args/logging.rs",
    "content": "use std::{env::var, io::stderr, path::PathBuf};\n\nuse clap::{ArgAction, Parser, ValueHint};\nuse miette::{bail, Result};\nuse tokio::fs::metadata;\nuse tracing::{info, warn};\nuse tracing_appender::{non_blocking, non_blocking::WorkerGuard, rolling};\nuse tracing_subscriber::{EnvFilter, FmtSubscriber};\n\nuse super::OPTSET_DEBUGGING;\n\n#[derive(Debug, Clone, Parser)]\npub struct LoggingArgs {\n\t/// Set diagnostic log level\n\t///\n\t/// This enables diagnostic logging, which is useful for investigating bugs or gaining more\n\t/// insight into faulty filters or \"missing\" events. Use multiple times to increase verbosity.\n\t///\n\t/// Goes up to '-vvvv'. When submitting bug reports, default to a '-vvv' log level.\n\t///\n\t/// You may want to use with '--log-file' to avoid polluting your terminal.\n\t///\n\t/// Setting $WATCHEXEC_LOG also works, and takes precedence, but is not recommended. However, using\n\t/// $WATCHEXEC_LOG is the only way to get logs from before these options are parsed.\n\t#[arg(\n\t\tlong,\n\t\tshort,\n\t\thelp_heading = OPTSET_DEBUGGING,\n\t\taction = ArgAction::Count,\n\t\tdefault_value = \"0\",\n\t\tnum_args = 0,\n\t\tdisplay_order = 220,\n\t)]\n\tpub verbose: u8,\n\n\t/// Write diagnostic logs to a file\n\t///\n\t/// This writes diagnostic logs to a file, instead of the terminal, in JSON format. If a log\n\t/// level was not already specified, this will set it to '-vvv'.\n\t///\n\t/// If a path is not provided, the default is the working directory. Note that with\n\t/// '--ignore-nothing', the write events to the log will likely get picked up by Watchexec,\n\t/// causing a loop; prefer setting a path outside of the watched directory.\n\t///\n\t/// If the path provided is a directory, a file will be created in that directory. 
The file name\n\t/// will be the current date and time, in the format 'watchexec.YYYY-MM-DDTHH-MM-SSZ.log'.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_DEBUGGING,\n\t\tnum_args = 0..=1,\n\t\tdefault_missing_value = \".\",\n\t\tvalue_hint = ValueHint::AnyPath,\n\t\tvalue_name = \"PATH\",\n\t\tdisplay_order = 120,\n\t)]\n\tpub log_file: Option<PathBuf>,\n\n\t/// Print events that trigger actions\n\t///\n\t/// This prints the events that triggered the action when handling it (after debouncing), in a\n\t/// human readable form. This is useful for debugging filters.\n\t///\n\t/// Use '-vvv' instead when you need more diagnostic information.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_DEBUGGING,\n\t\tdisplay_order = 160,\n\t)]\n\tpub print_events: bool,\n}\n\npub fn preargs() -> bool {\n\tlet mut log_on = false;\n\n\t#[cfg(feature = \"dev-console\")]\n\tmatch console_subscriber::try_init() {\n\t\tOk(_) => {\n\t\t\twarn!(\"dev-console enabled\");\n\t\t\tlog_on = true;\n\t\t}\n\t\tErr(e) => {\n\t\t\teprintln!(\"Failed to initialise tokio console, falling back to normal logging\\n{e}\")\n\t\t}\n\t}\n\n\tif !log_on && var(\"WATCHEXEC_LOG\").is_ok() {\n\t\tlet subscriber =\n\t\t\tFmtSubscriber::builder().with_env_filter(EnvFilter::from_env(\"WATCHEXEC_LOG\"));\n\t\tmatch subscriber.try_init() {\n\t\t\tOk(()) => {\n\t\t\t\twarn!(WATCHEXEC_LOG=%var(\"WATCHEXEC_LOG\").unwrap(), \"logging configured from WATCHEXEC_LOG\");\n\t\t\t\tlog_on = true;\n\t\t\t}\n\t\t\tErr(e) => {\n\t\t\t\teprintln!(\"Failed to initialise logging with WATCHEXEC_LOG, falling back\\n{e}\");\n\t\t\t}\n\t\t}\n\t}\n\n\tlog_on\n}\n\npub async fn postargs(args: &LoggingArgs) -> Result<Option<WorkerGuard>> {\n\tif args.verbose == 0 {\n\t\treturn Ok(None);\n\t}\n\n\tlet (log_writer, guard) = if let Some(file) = &args.log_file {\n\t\tlet is_dir = metadata(&file).await.map_or(false, |info| info.is_dir());\n\t\tlet (dir, filename) = if is_dir 
{\n\t\t\t(\n\t\t\t\tfile.to_owned(),\n\t\t\t\tPathBuf::from(format!(\n\t\t\t\t\t\"watchexec.{}.log\",\n\t\t\t\t\tchrono::Utc::now().format(\"%Y-%m-%dT%H-%M-%SZ\")\n\t\t\t\t)),\n\t\t\t)\n\t\t} else if let (Some(parent), Some(file_name)) = (file.parent(), file.file_name()) {\n\t\t\t(parent.into(), PathBuf::from(file_name))\n\t\t} else {\n\t\t\tbail!(\"Failed to determine log file name\");\n\t\t};\n\n\t\tnon_blocking(rolling::never(dir, filename))\n\t} else {\n\t\tnon_blocking(stderr())\n\t};\n\n\tlet mut builder = tracing_subscriber::fmt().with_env_filter(match args.verbose {\n\t\t0 => unreachable!(\"checked by if earlier\"),\n\t\t1 => \"warn\",\n\t\t2 => \"info\",\n\t\t3 => \"debug\",\n\t\t_ => \"trace\",\n\t});\n\n\tif args.verbose > 2 {\n\t\tuse tracing_subscriber::fmt::format::FmtSpan;\n\t\tbuilder = builder.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE);\n\t}\n\n\tmatch if args.log_file.is_some() {\n\t\tbuilder.json().with_writer(log_writer).try_init()\n\t} else if args.verbose > 3 {\n\t\tbuilder.pretty().with_writer(log_writer).try_init()\n\t} else {\n\t\tbuilder.with_writer(log_writer).try_init()\n\t} {\n\t\tOk(()) => info!(\"logging initialised\"),\n\t\tErr(e) => eprintln!(\"Failed to initialise logging, continuing with none\\n{e}\"),\n\t}\n\n\tOk(Some(guard))\n}\n"
  },
  {
    "path": "crates/cli/src/args/output.rs",
    "content": "use clap::{Parser, ValueEnum};\nuse miette::Result;\n\nuse super::OPTSET_OUTPUT;\n\n#[derive(Debug, Clone, Parser)]\npub struct OutputArgs {\n\t/// Clear screen before running command\n\t///\n\t/// If this doesn't completely clear the screen, try '--clear=reset'.\n\t#[arg(\n\t\tshort = 'c',\n\t\tlong = \"clear\",\n\t\thelp_heading = OPTSET_OUTPUT,\n\t\tnum_args = 0..=1,\n\t\tdefault_missing_value = \"clear\",\n\t\tvalue_name = \"MODE\",\n\t\tdisplay_order = 30,\n\t)]\n\tpub screen_clear: Option<ClearMode>,\n\n\t/// Alert when commands start and end\n\t///\n\t/// With this, Watchexec will emit a desktop notification when a command starts and ends, on\n\t/// supported platforms. On unsupported platforms, it may silently do nothing, or log a warning.\n\t///\n\t/// The mode can be specified to only notify when the command `start`s, `end`s, or for `both`\n\t/// (which is the default).\n\t#[arg(\n\t\tshort = 'N',\n\t\tlong,\n\t\thelp_heading = OPTSET_OUTPUT,\n\t\tnum_args = 0..=1,\n\t\tdefault_missing_value = \"both\",\n\t\tvalue_name = \"WHEN\",\n\t\tdisplay_order = 140,\n\t)]\n\tpub notify: Option<NotifyMode>,\n\n\t/// When to use terminal colours\n\t///\n\t/// Setting the environment variable `NO_COLOR` to any value is equivalent to `--color=never`.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_OUTPUT,\n\t\tdefault_value = \"auto\",\n\t\tvalue_name = \"MODE\",\n\t\talias = \"colour\",\n\t\tdisplay_order = 31,\n\t)]\n\tpub color: ColourMode,\n\n\t/// Print how long the command took to run\n\t///\n\t/// This may not be exactly accurate, as it includes some overhead from Watchexec itself. Use\n\t/// the `time` utility, high-precision timers, or benchmarking tools for more accurate results.\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_OUTPUT,\n\t\tdisplay_order = 200,\n\t)]\n\tpub timings: bool,\n\n\t/// Don't print starting and stopping messages\n\t///\n\t/// By default Watchexec will print a message when the command starts and stops. 
This option\n\t/// disables this behaviour, so only the command's output, warnings, and errors will be printed.\n\t#[arg(\n\t\tshort,\n\t\tlong,\n\t\thelp_heading = OPTSET_OUTPUT,\n\t\tdisplay_order = 170,\n\t)]\n\tpub quiet: bool,\n\n\t/// Ring the terminal bell on command completion\n\t#[arg(\n\t\tlong,\n\t\thelp_heading = OPTSET_OUTPUT,\n\t\tdisplay_order = 20,\n\t)]\n\tpub bell: bool,\n}\n\nimpl OutputArgs {\n\tpub(crate) fn normalise(&mut self) -> Result<()> {\n\t\t// https://no-color.org/\n\t\tif self.color == ColourMode::Auto && std::env::var(\"NO_COLOR\").is_ok() {\n\t\t\tself.color = ColourMode::Never;\n\t\t}\n\n\t\tOk(())\n\t}\n}\n\n#[derive(Clone, Copy, Debug, Default, ValueEnum)]\npub enum ClearMode {\n\t#[default]\n\tClear,\n\tReset,\n}\n\n#[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)]\npub enum ColourMode {\n\tAuto,\n\tAlways,\n\tNever,\n}\n\n#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, ValueEnum)]\npub enum NotifyMode {\n\t/// Notify on both start and end\n\t#[default]\n\tBoth,\n\t/// Notify only when the command starts\n\tStart,\n\t/// Notify only when the command ends\n\tEnd,\n}\n\nimpl NotifyMode {\n\t/// Whether to notify on command start\n\tpub fn on_start(self) -> bool {\n\t\tmatches!(self, Self::Both | Self::Start)\n\t}\n\n\t/// Whether to notify on command end\n\tpub fn on_end(self) -> bool {\n\t\tmatches!(self, Self::Both | Self::End)\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/args.rs",
    "content": "use std::{\n\tffi::{OsStr, OsString},\n\tstr::FromStr,\n\ttime::Duration,\n};\n\nuse clap::{Parser, ValueEnum, ValueHint};\nuse miette::Result;\nuse tracing::{debug, info, warn};\nuse tracing_appender::non_blocking::WorkerGuard;\n\npub(crate) mod command;\npub(crate) mod events;\npub(crate) mod filtering;\npub(crate) mod logging;\npub(crate) mod output;\n\nconst OPTSET_COMMAND: &str = \"Command\";\nconst OPTSET_DEBUGGING: &str = \"Debugging\";\nconst OPTSET_EVENTS: &str = \"Events\";\nconst OPTSET_FILTERING: &str = \"Filtering\";\nconst OPTSET_OUTPUT: &str = \"Output\";\n\ninclude!(env!(\"BOSION_PATH\"));\n\n/// Execute commands when watched files change.\n///\n/// Recursively monitors the current directory for changes, executing the command when a filesystem\n/// change is detected (among other event sources). By default, watchexec uses efficient\n/// kernel-level mechanisms to watch for changes.\n///\n/// At startup, the specified command is run once, and watchexec begins monitoring for changes.\n///\n/// Events are debounced and checked using a variety of mechanisms, which you can control using\n/// the flags in the **Filtering** section. 
The order of execution is: internal prioritisation\n/// (signals come before everything else, and SIGINT/SIGTERM are processed even more urgently),\n/// then file event kind (`--fs-events`), then files explicitly watched with `-w`, then ignores\n/// (`--ignore` and co), then filters (which includes `--exts`), then filter programs.\n///\n/// Examples:\n///\n/// Rebuild a project when source files change:\n///\n///   $ watchexec make\n///\n/// Watch all HTML, CSS, and JavaScript files for changes:\n///\n///   $ watchexec -e html,css,js make\n///\n/// Run tests when source files change, clearing the screen each time:\n///\n///   $ watchexec -c make test\n///\n/// Launch and restart a node.js server:\n///\n///   $ watchexec -r node app.js\n///\n/// Watch lib and src directories for changes, rebuilding each time:\n///\n///   $ watchexec -w lib -w src make\n#[derive(Debug, Clone, Parser)]\n#[command(\n\tname = \"watchexec\",\n\tbin_name = \"watchexec\",\n\tauthor,\n\tversion,\n\tlong_version = Bosion::LONG_VERSION,\n\tafter_help = \"Want more detail? Try the long '--help' flag!\",\n\tafter_long_help = \"Use @argfile as first argument to load arguments from the file 'argfile' (one argument per line) which will be inserted in place of the @argfile (further arguments on the CLI will override or add onto those in the file).\\n\\nDidn't expect this much output? Use the short '-h' flag to get short help.\",\n\thide_possible_values = true,\n)]\npub struct Args {\n\t/// Command (program and arguments) to run on changes\n\t///\n\t/// It's run when events pass filters and the debounce period (and once at startup unless\n\t/// '--postpone' is given). If you pass flags to the command, you should separate it with --\n\t/// though that is not strictly required.\n\t///\n\t/// Examples:\n\t///\n\t///   $ watchexec -w src npm run build\n\t///\n\t///   $ watchexec -w src -- rsync -a src dest\n\t///\n\t/// Take care when using globs or other shell expansions in the command. 
Your shell may expand\n\t/// them before ever passing them to Watchexec, and the results may not be what you expect.\n\t/// Compare:\n\t///\n\t///   $ watchexec echo src/*.rs\n\t///\n\t///   $ watchexec echo 'src/*.rs'\n\t///\n\t///   $ watchexec --shell=none echo 'src/*.rs'\n\t///\n\t/// Behaviour depends on the value of '--shell': for all except 'none', every part of the\n\t/// command is joined together into one string with a single ascii space character, and given to\n\t/// the shell as described in the help for '--shell'. For 'none', each distinct element of the\n\t/// command is passed as per the execvp(3) convention: first argument is the program, as a path\n\t/// or searched for in the 'PATH' environment variable, rest are arguments.\n\t#[arg(\n\t\ttrailing_var_arg = true,\n\t\tnum_args = 1..,\n\t\tvalue_hint = ValueHint::CommandString,\n\t\tvalue_name = \"COMMAND\",\n\t\trequired_unless_present_any = [\"completions\", \"manual\", \"only_emit_events\"],\n\t)]\n\tpub program: Vec<String>,\n\n\t/// Show the manual page\n\t///\n\t/// This shows the manual page for Watchexec, if the output is a terminal and the 'man' program\n\t/// is available. If not, the manual page is printed to stdout in ROFF format (suitable for\n\t/// writing to a watchexec.1 file).\n\t#[arg(\n\t\tlong,\n\t\tconflicts_with_all = [\"program\", \"completions\", \"only_emit_events\"],\n\t\tdisplay_order = 130,\n\t)]\n\tpub manual: bool,\n\n\t/// Generate a shell completions script\n\t///\n\t/// Provides a completions script or configuration for the given shell. 
If Watchexec is not\n\t/// distributed with pre-generated completions, you can use this to generate them yourself.\n\t///\n\t/// Supported shells: bash, elvish, fish, nu, powershell, zsh.\n\t#[arg(\n\t\tlong,\n\t\tvalue_name = \"SHELL\",\n\t\tconflicts_with_all = [\"program\", \"manual\", \"only_emit_events\"],\n\t\tdisplay_order = 30,\n\t)]\n\tpub completions: Option<ShellCompletion>,\n\n\t/// Only emit events to stdout, run no commands.\n\t///\n\t/// This is a convenience option for using Watchexec as a file watcher, without running any\n\t/// commands. It is almost equivalent to using `cat` as the command, except that it will not\n\t/// spawn a new process for each event.\n\t///\n\t/// This option implies `--emit-events-to=json-stdio`; you may also use the text mode by\n\t/// specifying `--emit-events-to=stdio`.\n\t#[arg(\n\t\tlong,\n\t\tconflicts_with_all = [\"program\", \"completions\", \"manual\"],\n\t\tdisplay_order = 150,\n\t)]\n\tpub only_emit_events: bool,\n\n\t/// Testing only: exit Watchexec after the first run and return the command's exit code\n\t#[arg(short = '1', hide = true)]\n\tpub once: bool,\n\n\t#[command(flatten)]\n\tpub command: command::CommandArgs,\n\n\t#[command(flatten)]\n\tpub events: events::EventsArgs,\n\n\t#[command(flatten)]\n\tpub filtering: filtering::FilteringArgs,\n\n\t#[command(flatten)]\n\tpub logging: logging::LoggingArgs,\n\n\t#[command(flatten)]\n\tpub output: output::OutputArgs,\n}\n\n#[derive(Clone, Copy, Debug)]\npub struct TimeSpan<const UNITLESS_NANOS_MULTIPLIER: u64 = { 1_000_000_000 }>(pub Duration);\n\nimpl<const UNITLESS_NANOS_MULTIPLIER: u64> FromStr for TimeSpan<UNITLESS_NANOS_MULTIPLIER> {\n\ttype Err = humantime::DurationError;\n\n\tfn from_str(s: &str) -> Result<Self, Self::Err> {\n\t\ts.parse::<u64>()\n\t\t\t.map_or_else(\n\t\t\t\t|_| humantime::parse_duration(s),\n\t\t\t\t|unitless| {\n\t\t\t\t\tif unitless != 0 {\n\t\t\t\t\t\teprintln!(\"Warning: unitless non-zero time span values are deprecated and will be 
removed in an upcoming version\");\n\t\t\t\t\t}\n\t\t\t\t\tOk(Duration::from_nanos(unitless * UNITLESS_NANOS_MULTIPLIER))\n\t\t\t\t},\n\t\t\t)\n\t\t\t.map(TimeSpan)\n\t}\n}\n\nfn expand_args_up_to_doubledash() -> Result<Vec<OsString>, std::io::Error> {\n\tuse argfile::Argument;\n\tuse std::collections::VecDeque;\n\n\tlet args = std::env::args_os();\n\tlet mut expanded_args = Vec::with_capacity(args.size_hint().0);\n\n\tlet mut todo: VecDeque<_> = args.map(|a| Argument::parse(a, argfile::PREFIX)).collect();\n\twhile let Some(next) = todo.pop_front() {\n\t\tmatch next {\n\t\t\tArgument::PassThrough(arg) => {\n\t\t\t\texpanded_args.push(arg.clone());\n\t\t\t\tif arg == \"--\" {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tArgument::Path(path) => {\n\t\t\t\tlet content = std::fs::read_to_string(path)?;\n\t\t\t\tlet new_args = argfile::parse_fromfile(&content, argfile::PREFIX);\n\t\t\t\ttodo.reserve(new_args.len());\n\t\t\t\tfor (i, arg) in new_args.into_iter().enumerate() {\n\t\t\t\t\ttodo.insert(i, arg);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\twhile let Some(next) = todo.pop_front() {\n\t\texpanded_args.push(match next {\n\t\t\tArgument::PassThrough(arg) => arg,\n\t\t\tArgument::Path(path) => {\n\t\t\t\tlet path = path.as_os_str();\n\t\t\t\tlet mut restored = OsString::with_capacity(path.len() + 1);\n\t\t\t\trestored.push(OsStr::new(\"@\"));\n\t\t\t\trestored.push(path);\n\t\t\t\trestored\n\t\t\t}\n\t\t});\n\t}\n\tOk(expanded_args)\n}\n\n#[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)]\npub enum ShellCompletion {\n\tBash,\n\tElvish,\n\tFish,\n\tNu,\n\tPowershell,\n\tZsh,\n}\n\n#[derive(Debug, Default)]\npub struct Guards {\n\t_log: Option<WorkerGuard>,\n}\n\npub async fn get_args() -> Result<(Args, Guards)> {\n\tlet prearg_logs = logging::preargs();\n\tif prearg_logs {\n\t\twarn!(\n\t\t\t\"⚠ WATCHEXEC_LOG environment variable set or hardcoded, logging options have no effect\"\n\t\t);\n\t}\n\n\tdebug!(\"expanding @argfile arguments if any\");\n\tlet args = 
expand_args_up_to_doubledash().expect(\"while expanding @argfile\");\n\n\tdebug!(\"parsing arguments\");\n\tlet mut args = Args::parse_from(args);\n\n\tlet _log = if !prearg_logs {\n\t\tlogging::postargs(&args.logging).await?\n\t} else {\n\t\tNone\n\t};\n\n\targs.output.normalise()?;\n\targs.command.normalise().await?;\n\targs.filtering.normalise(&args.command).await?;\n\targs.events\n\t\t.normalise(&args.command, &args.filtering, args.only_emit_events)?;\n\n\tinfo!(?args, \"got arguments\");\n\tOk((args, Guards { _log }))\n}\n\n#[test]\nfn verify_cli() {\n\tuse clap::CommandFactory;\n\tArgs::command().debug_assert()\n}\n"
  },
  {
    "path": "crates/cli/src/config.rs",
    "content": "use std::{\n\tborrow::Cow,\n\tcollections::HashMap,\n\tenv::var,\n\tffi::OsStr,\n\tfmt,\n\tfs::File,\n\tio::{IsTerminal, Write},\n\titer::once,\n\tprocess::{ExitCode, Stdio},\n\tsync::{\n\t\tatomic::{AtomicBool, AtomicU8, Ordering},\n\t\tArc,\n\t},\n\ttime::Duration,\n};\n\nuse clearscreen::ClearScreen;\nuse miette::{IntoDiagnostic, Report, Result};\nuse notify_rust::Notification;\nuse termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor};\nuse tokio::{process::Command as TokioCommand, time::sleep};\nuse tracing::{debug, debug_span, error, instrument, trace, trace_span, Instrument};\nuse watchexec::{\n\taction::ActionHandler,\n\tcommand::{Command, Program, Shell, SpawnOptions},\n\terror::RuntimeError,\n\tjob::{CommandState, Job},\n\tsources::fs::Watcher,\n\tConfig, ErrorHook, Id,\n};\nuse watchexec_events::{Event, KeyCode, Keyboard, Priority, ProcessEnd, Tag};\nuse watchexec_signals::Signal;\n\nuse crate::{\n\targs::{\n\t\tcommand::{EnvVar, WrapMode},\n\t\tevents::{EmitEvents, OnBusyUpdate, SignalMapping},\n\t\toutput::{ClearMode, ColourMode, NotifyMode},\n\t\tArgs,\n\t},\n\temits::events_to_simple_format,\n\tsocket::Sockets,\n\tstate::State,\n};\n\n#[derive(Clone, Copy, Debug)]\nstruct OutputFlags {\n\tquiet: bool,\n\tcolour: ColorChoice,\n\ttimings: bool,\n\tbell: bool,\n\tnotify: Option<NotifyMode>,\n}\n\n#[derive(Clone, Copy, Debug)]\nstruct TimeoutConfig {\n\t/// The maximum duration the command is allowed to run\n\ttimeout: Option<Duration>,\n\t/// Signal to send for graceful stop (used when timeout fires)\n\tstop_signal: Signal,\n\t/// Grace period after stop signal before force kill\n\tstop_timeout: Duration,\n}\n\npub fn make_config(args: &Args, state: &State) -> Result<Config> {\n\tlet _span = debug_span!(\"args-runtime\").entered();\n\tlet config = Config::default();\n\tconfig.on_error(|err: ErrorHook| {\n\t\tif let RuntimeError::IoError {\n\t\t\tabout: \"waiting on process group\",\n\t\t\t..\n\t\t} = 
err.error\n\t\t{\n\t\t\t// \"No child processes\" and such\n\t\t\t// these are often spurious, so condemn them to -v only\n\t\t\terror!(\"{}\", err.error);\n\t\t\treturn;\n\t\t}\n\n\t\tif cfg!(debug_assertions) {\n\t\t\teprintln!(\"[[{:?}]]\", err.error);\n\t\t}\n\n\t\teprintln!(\"[[Error (not fatal)]]\\n{}\", Report::new(err.error));\n\t});\n\n\tconfig.pathset(args.filtering.paths.clone());\n\n\tconfig.throttle(args.events.debounce.0);\n\tconfig.keyboard_events(args.events.stdin_quit || args.events.interactive);\n\n\tif let Some(interval) = args.events.poll {\n\t\tconfig.file_watcher(Watcher::Poll(interval.0));\n\t}\n\n\tlet once = args.once;\n\tlet clear = args.output.screen_clear;\n\n\tlet emit_events_to = args.events.emit_events_to;\n\tlet state = state.clone();\n\n\tif args.only_emit_events {\n\t\tconfig.on_action(move |mut action| {\n\t\t\t// if we got a terminate or interrupt signal, quit\n\t\t\tif action\n\t\t\t\t.signals()\n\t\t\t\t.any(|sig| sig == Signal::Terminate || sig == Signal::Interrupt)\n\t\t\t{\n\t\t\t\t// no need to be graceful as there's no commands\n\t\t\t\taction.quit();\n\t\t\t\treturn action;\n\t\t\t}\n\n\t\t\t// clear the screen before printing events\n\t\t\tif let Some(mode) = clear {\n\t\t\t\tmatch mode {\n\t\t\t\t\tClearMode::Clear => {\n\t\t\t\t\t\tclearscreen::clear().ok();\n\t\t\t\t\t}\n\t\t\t\t\tClearMode::Reset => {\n\t\t\t\t\t\treset_screen();\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tmatch emit_events_to {\n\t\t\t\tEmitEvents::Stdio => {\n\t\t\t\t\tprintln!(\n\t\t\t\t\t\t\"{}\",\n\t\t\t\t\t\tevents_to_simple_format(action.events.as_ref()).unwrap_or_default()\n\t\t\t\t\t);\n\t\t\t\t}\n\t\t\t\tEmitEvents::JsonStdio => {\n\t\t\t\t\tfor event in action.events.iter().filter(|e| !e.is_empty()) {\n\t\t\t\t\t\tprintln!(\"{}\", serde_json::to_string(event).unwrap_or_default());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tother => unreachable!(\n\t\t\t\t\t\"emit_events_to should have been validated earlier: 
{:?}\",\n\t\t\t\t\tother\n\t\t\t\t),\n\t\t\t}\n\n\t\t\taction\n\t\t});\n\n\t\treturn Ok(config);\n\t}\n\n\tlet delay_run = args.command.delay_run.map(|ts| ts.0);\n\tlet on_busy = args.events.on_busy_update;\n\tlet stdin_quit = args.events.stdin_quit;\n\tlet interactive = args.events.interactive;\n\tlet exit_on_error = args.events.exit_on_error;\n\n\tlet signal = args.events.signal;\n\tlet stop_signal = args.command.stop_signal;\n\tlet stop_timeout = args.command.stop_timeout.0;\n\n\tlet print_events = args.logging.print_events;\n\tlet outflags = OutputFlags {\n\t\tquiet: args.output.quiet,\n\t\tcolour: match args.output.color {\n\t\t\tColourMode::Auto if !std::io::stdin().is_terminal() => ColorChoice::Never,\n\t\t\tColourMode::Auto => ColorChoice::Auto,\n\t\t\tColourMode::Always => ColorChoice::Always,\n\t\t\tColourMode::Never => ColorChoice::Never,\n\t\t},\n\t\ttimings: args.output.timings,\n\t\tbell: args.output.bell,\n\t\tnotify: args.output.notify,\n\t};\n\n\tlet timeout_config = TimeoutConfig {\n\t\ttimeout: args.command.timeout.map(|ts| ts.0),\n\t\tstop_signal: stop_signal.unwrap_or(Signal::Terminate),\n\t\tstop_timeout,\n\t};\n\n\tlet workdir = Arc::new(args.command.workdir.clone());\n\n\tlet add_envs: Arc<[EnvVar]> = args.command.env.clone().into();\n\tdebug!(\n\t\tenvs=?args.command.env,\n\t\t\"additional environment variables to add to command\"\n\t);\n\n\tlet id = Id::default();\n\tlet command = interpret_command_args(args)?;\n\n\tlet signal_map: Arc<HashMap<Signal, Option<Signal>>> = Arc::new(\n\t\targs.events\n\t\t\t.signal_map\n\t\t\t.iter()\n\t\t\t.copied()\n\t\t\t.map(|SignalMapping { from, to }| (from, to))\n\t\t\t.collect(),\n\t);\n\n\tlet queued = Arc::new(AtomicBool::new(false));\n\tlet quit_again = Arc::new(AtomicU8::new(0));\n\tlet paused = Arc::new(AtomicBool::new(false));\n\tlet should_quit = Arc::new(AtomicBool::new(false));\n\n\tconfig.on_action_async(move |mut action| {\n\t\tlet add_envs = add_envs.clone();\n\t\tlet command = 
command.clone();\n\t\tlet state = state.clone();\n\t\tlet queued = queued.clone();\n\t\tlet quit_again = quit_again.clone();\n\t\tlet paused = paused.clone();\n\t\tlet should_quit = should_quit.clone();\n\t\tlet signal_map = signal_map.clone();\n\t\tlet workdir = workdir.clone();\n\t\tBox::new(\n\t\t\tasync move {\n\t\t\t\ttrace!(events=?action.events, \"handling action\");\n\n\t\t\t\tlet add_envs = add_envs.clone();\n\t\t\t\tlet command = command.clone();\n\t\t\t\tlet queued = queued.clone();\n\t\t\t\tlet quit_again = quit_again.clone();\n\t\t\t\tlet paused = paused.clone();\n\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\tlet signal_map = signal_map.clone();\n\t\t\t\tlet workdir = workdir.clone();\n\n\t\t\t\ttrace!(\"set spawn hook for workdir and environment variables\");\n\t\t\t\tlet job = action.get_or_create_job(id, move || command.clone());\n\t\t\t\tlet events = action.events.clone();\n\t\t\t\tjob.set_spawn_hook({\n\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\tmove |command, _| {\n\t\t\t\t\t\tlet add_envs = add_envs.clone();\n\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\tlet events = events.clone();\n\n\t\t\t\t\t\tif let Some(ref workdir) = workdir.as_ref() {\n\t\t\t\t\t\t\tdebug!(?workdir, \"set command workdir\");\n\t\t\t\t\t\t\tcommand.command_mut().current_dir(workdir);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif let Some(ref socket_set) = state.socket_set {\n\t\t\t\t\t\t\tfor env in socket_set.envs() {\n\t\t\t\t\t\t\t\tcommand.command_mut().env(env.key, env.value);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\temit_events_to_command(\n\t\t\t\t\t\t\tcommand.command_mut(),\n\t\t\t\t\t\t\tevents,\n\t\t\t\t\t\t\tstate,\n\t\t\t\t\t\t\temit_events_to,\n\t\t\t\t\t\t\tadd_envs,\n\t\t\t\t\t\t);\n\t\t\t\t\t}\n\t\t\t\t});\n\n\t\t\t\tlet show_events = {\n\t\t\t\t\tlet events = action.events.clone();\n\t\t\t\t\tmove || {\n\t\t\t\t\t\tif print_events {\n\t\t\t\t\t\t\ttrace!(\"print events to stderr\");\n\t\t\t\t\t\t\tfor (n, event) in events.iter().enumerate() 
{\n\t\t\t\t\t\t\t\teprintln!(\"[EVENT {n}] {event}\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t};\n\n\t\t\t\tlet clear_screen = {\n\t\t\t\t\tlet events = action.events.clone();\n\t\t\t\t\tmove || {\n\t\t\t\t\t\tif let Some(mode) = clear {\n\t\t\t\t\t\t\tmatch mode {\n\t\t\t\t\t\t\t\tClearMode::Clear => {\n\t\t\t\t\t\t\t\t\tclearscreen::clear().ok();\n\t\t\t\t\t\t\t\t\tdebug!(\"cleared screen\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tClearMode::Reset => {\n\t\t\t\t\t\t\t\t\treset_screen();\n\t\t\t\t\t\t\t\t\tdebug!(\"hard-reset screen\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// re-show events after clearing\n\t\t\t\t\t\tif print_events {\n\t\t\t\t\t\t\ttrace!(\"print events to stderr\");\n\t\t\t\t\t\t\tfor (n, event) in events.iter().enumerate() {\n\t\t\t\t\t\t\t\teprintln!(\"[EVENT {n}] {event}\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t};\n\n\t\t\t\tlet quit = |mut action: ActionHandler| {\n\t\t\t\t\tmatch quit_again.fetch_add(1, Ordering::Relaxed) {\n\t\t\t\t\t\t0 => {\n\t\t\t\t\t\t\tif stop_timeout > Duration::ZERO\n\t\t\t\t\t\t\t\t&& action.list_jobs().any(|(_, job)| job.is_running())\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\teprintln!(\"[Waiting {stop_timeout:?} for processes to exit before stopping...]\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t// eprintln!(\"[Waiting {stop_timeout:?} for processes to exit before stopping... 
Ctrl-C again to exit faster]\");\n\t\t\t\t\t\t\t// see TODO in action/worker.rs\n\t\t\t\t\t\t\taction.quit_gracefully(\n\t\t\t\t\t\t\t\tstop_signal.unwrap_or(Signal::Terminate),\n\t\t\t\t\t\t\t\tstop_timeout,\n\t\t\t\t\t\t\t);\n\t\t\t\t\t\t}\n\t\t\t\t\t\t1 => {\n\t\t\t\t\t\t\taction.quit_gracefully(Signal::ForceStop, Duration::ZERO);\n\t\t\t\t\t\t}\n\t\t\t\t\t\t_ => {\n\t\t\t\t\t\t\taction.quit();\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\taction\n\t\t\t\t};\n\n\t\t\t\t// Check if we should quit due to command failure (--exit-on-error)\n\t\t\t\tif should_quit.load(Ordering::SeqCst) {\n\t\t\t\t\tdebug!(\"command failed with --exit-on-error, quitting\");\n\t\t\t\t\treturn quit(action);\n\t\t\t\t}\n\n\t\t\t\tif once {\n\t\t\t\t\tdebug!(\"debug mode: run once and quit\");\n\t\t\t\t\tshow_events();\n\n\t\t\t\t\tif let Some(delay) = delay_run {\n\t\t\t\t\t\tjob.run_async(move |_| {\n\t\t\t\t\t\t\tBox::new(async move {\n\t\t\t\t\t\t\t\tsleep(delay).await;\n\t\t\t\t\t\t\t})\n\t\t\t\t\t\t});\n\t\t\t\t\t}\n\n\t\t\t\t\t// this blocks the event loop, but also this is a debug feature so i don't care\n\t\t\t\t\tjob.start().await;\n\t\t\t\t\tlet timed_out = if let Some(timeout) = timeout_config.timeout {\n\t\t\t\t\t\ttokio::select! 
{\n\t\t\t\t\t\t\t_ = job.to_wait() => false,\n\t\t\t\t\t\t\t_ = tokio::time::sleep(timeout) => {\n\t\t\t\t\t\t\t\tif cfg!(windows) {\n\t\t\t\t\t\t\t\t\tjob.stop().await;\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tjob.stop_with_signal(timeout_config.stop_signal, timeout_config.stop_timeout).await;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\ttrue\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tjob.to_wait().await;\n\t\t\t\t\t\tfalse\n\t\t\t\t\t};\n\t\t\t\t\tjob.run({\n\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\tmove |context| {\n\t\t\t\t\t\t\tif let Some(end) = end_of_process(context.current, outflags, timed_out)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t*state.exit_code.lock().unwrap() = ExitCode::from(\n\t\t\t\t\t\t\t\t\tend.into_exitstatus()\n\t\t\t\t\t\t\t\t\t\t.code()\n\t\t\t\t\t\t\t\t\t\t.unwrap_or(0)\n\t\t\t\t\t\t\t\t\t\t.try_into()\n\t\t\t\t\t\t\t\t\t\t.unwrap_or(1),\n\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t\t.await;\n\t\t\t\t\treturn quit(action);\n\t\t\t\t}\n\n\t\t\t\tlet is_keyboard_eof = action\n\t\t\t\t\t.events\n\t\t\t\t\t.iter()\n\t\t\t\t\t.any(|e| e.tags.contains(&Tag::Keyboard(Keyboard::Eof)));\n\t\t\t\tif stdin_quit && is_keyboard_eof {\n\t\t\t\t\tdebug!(\"keyboard EOF, quit\");\n\t\t\t\t\tshow_events();\n\t\t\t\t\treturn quit(action);\n\t\t\t\t}\n\n\t\t\t\tif interactive {\n\t\t\t\t\tfor event in action.events.iter() {\n\t\t\t\t\t\tfor tag in &event.tags {\n\t\t\t\t\t\t\tmatch tag {\n\t\t\t\t\t\t\t\tTag::Keyboard(Keyboard::Eof) => {\n\t\t\t\t\t\t\t\t\tdebug!(\"interactive: Ctrl-C/D, quit\");\n\t\t\t\t\t\t\t\t\treturn quit(action);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tTag::Keyboard(Keyboard::Key { key, .. 
}) => match key {\n\t\t\t\t\t\t\t\t\tKeyCode::Char('q') => {\n\t\t\t\t\t\t\t\t\t\tdebug!(\"interactive: quit\");\n\t\t\t\t\t\t\t\t\t\treturn quit(action);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tKeyCode::Char('p') => {\n\t\t\t\t\t\t\t\t\t\tlet was_paused = paused.fetch_xor(true, Ordering::SeqCst);\n\t\t\t\t\t\t\t\t\t\tif was_paused {\n\t\t\t\t\t\t\t\t\t\t\tdebug!(\"interactive: unpause\");\n\t\t\t\t\t\t\t\t\t\t\teprintln!(\"[Unpaused]\");\n\t\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\t\tdebug!(\"interactive: pause\");\n\t\t\t\t\t\t\t\t\t\t\teprintln!(\"[Paused]\");\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\treturn action;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tKeyCode::Char('r') => {\n\t\t\t\t\t\t\t\t\t\tdebug!(\"interactive: restart\");\n\t\t\t\t\t\t\t\t\t\tclear_screen();\n\t\t\t\t\t\t\t\t\t\tif cfg!(windows) {\n\t\t\t\t\t\t\t\t\t\t\tjob.restart();\n\t\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\t\tjob.restart_with_signal(\n\t\t\t\t\t\t\t\t\t\t\t\tstop_signal.unwrap_or(Signal::Terminate),\n\t\t\t\t\t\t\t\t\t\t\t\tstop_timeout,\n\t\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\tjob.run({\n\t\t\t\t\t\t\t\t\t\t\tlet job = job.clone();\n\t\t\t\t\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\t\t\t\t\tmove |context| {\n\t\t\t\t\t\t\t\t\t\t\t\tsetup_process(\n\t\t\t\t\t\t\t\t\t\t\t\t\tjob.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tcontext.command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\toutflags,\n\t\t\t\t\t\t\t\t\t\t\t\t\ttimeout_config,\n\t\t\t\t\t\t\t\t\t\t\t\t\texit_on_error,\n\t\t\t\t\t\t\t\t\t\t\t\t\tshould_quit.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tstate.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\t\t\treturn action;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t_ => {}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t_ => {}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tlet signals: Vec<Signal> = 
action.signals().collect();\n\t\t\t\ttrace!(?signals, \"received some signals\");\n\n\t\t\t\t// if we got a terminate or interrupt signal and they're not mapped, quit\n\t\t\t\tif (signals.contains(&Signal::Terminate)\n\t\t\t\t\t&& !signal_map.contains_key(&Signal::Terminate))\n\t\t\t\t\t|| (signals.contains(&Signal::Interrupt)\n\t\t\t\t\t\t&& !signal_map.contains_key(&Signal::Interrupt))\n\t\t\t\t{\n\t\t\t\t\tdebug!(\"unmapped terminate or interrupt signal, quit\");\n\t\t\t\t\tshow_events();\n\t\t\t\t\treturn quit(action);\n\t\t\t\t}\n\n\t\t\t\t// pass all other signals on\n\t\t\t\tfor signal in signals {\n\t\t\t\t\tmatch signal_map.get(&signal) {\n\t\t\t\t\t\tSome(Some(mapped)) => {\n\t\t\t\t\t\t\tdebug!(?signal, ?mapped, \"passing mapped signal\");\n\t\t\t\t\t\t\tjob.signal(*mapped);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tSome(None) => {\n\t\t\t\t\t\t\tdebug!(?signal, \"discarding signal\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tNone => {\n\t\t\t\t\t\t\tdebug!(?signal, \"passing signal on\");\n\t\t\t\t\t\t\tjob.signal(signal);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// only filesystem events below here (or empty synthetic events)\n\t\t\t\tif action.paths().next().is_none()\n\t\t\t\t\t&& !action.events.iter().any(watchexec_events::Event::is_empty)\n\t\t\t\t{\n\t\t\t\t\tdebug!(\"no filesystem or synthetic events, skip without doing more\");\n\t\t\t\t\tshow_events();\n\t\t\t\t\treturn action;\n\t\t\t\t}\n\n\t\t\t\tif interactive && paused.load(Ordering::SeqCst) {\n\t\t\t\t\tdebug!(\"interactive: paused, ignoring filesystem event\");\n\t\t\t\t\treturn action;\n\t\t\t\t}\n\n\t\t\t\tshow_events();\n\n\t\t\t\tif let Some(delay) = delay_run {\n\t\t\t\t\ttrace!(\"delaying run by sleeping inside the job\");\n\t\t\t\t\tjob.run_async(move |_| {\n\t\t\t\t\t\tBox::new(async move {\n\t\t\t\t\t\t\tsleep(delay).await;\n\t\t\t\t\t\t})\n\t\t\t\t\t});\n\t\t\t\t}\n\n\t\t\t\ttrace!(\"querying job state via run_async\");\n\t\t\t\tjob.run_async({\n\t\t\t\t\tlet job = job.clone();\n\t\t\t\t\tlet 
should_quit = should_quit.clone();\n\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\tmove |context| {\n\t\t\t\t\t\tlet job = job.clone();\n\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\tlet is_running = matches!(context.current, CommandState::Running { .. });\n\t\t\t\t\t\tBox::new(async move {\n\t\t\t\t\t\t\tlet innerjob = job.clone();\n\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\tif is_running {\n\t\t\t\t\t\t\t\ttrace!(?on_busy, \"job is running, decide what to do\");\n\t\t\t\t\t\t\t\tmatch on_busy {\n\t\t\t\t\t\t\t\t\tOnBusyUpdate::DoNothing => {}\n\t\t\t\t\t\t\t\t\tOnBusyUpdate::Signal => {\n\t\t\t\t\t\t\t\t\t\tjob.signal(if cfg!(windows) {\n\t\t\t\t\t\t\t\t\t\t\tSignal::ForceStop\n\t\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\t\tstop_signal.or(signal).unwrap_or(Signal::Terminate)\n\t\t\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tOnBusyUpdate::Restart if cfg!(windows) => {\n\t\t\t\t\t\t\t\t\t\tjob.restart();\n\t\t\t\t\t\t\t\t\t\tjob.run({\n\t\t\t\t\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\t\t\t\t\tmove |context| {\n\t\t\t\t\t\t\t\t\t\t\t\tclear_screen();\n\t\t\t\t\t\t\t\t\t\t\t\tsetup_process(\n\t\t\t\t\t\t\t\t\t\t\t\t\tinnerjob.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tcontext.command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\toutflags,\n\t\t\t\t\t\t\t\t\t\t\t\t\ttimeout_config,\n\t\t\t\t\t\t\t\t\t\t\t\t\texit_on_error,\n\t\t\t\t\t\t\t\t\t\t\t\t\tshould_quit.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tstate.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tOnBusyUpdate::Restart => 
{\n\t\t\t\t\t\t\t\t\t\tjob.restart_with_signal(\n\t\t\t\t\t\t\t\t\t\t\tstop_signal.unwrap_or(Signal::Terminate),\n\t\t\t\t\t\t\t\t\t\t\tstop_timeout,\n\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t\tjob.run({\n\t\t\t\t\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\t\t\t\t\tmove |context| {\n\t\t\t\t\t\t\t\t\t\t\t\tclear_screen();\n\t\t\t\t\t\t\t\t\t\t\t\tsetup_process(\n\t\t\t\t\t\t\t\t\t\t\t\t\tinnerjob.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tcontext.command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\toutflags,\n\t\t\t\t\t\t\t\t\t\t\t\t\ttimeout_config,\n\t\t\t\t\t\t\t\t\t\t\t\t\texit_on_error,\n\t\t\t\t\t\t\t\t\t\t\t\t\tshould_quit.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tstate.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tOnBusyUpdate::Queue => {\n\t\t\t\t\t\t\t\t\t\tlet job = job.clone();\n\t\t\t\t\t\t\t\t\t\tlet already_queued =\n\t\t\t\t\t\t\t\t\t\t\tqueued.fetch_or(true, Ordering::SeqCst);\n\t\t\t\t\t\t\t\t\t\tif already_queued {\n\t\t\t\t\t\t\t\t\t\t\tdebug!(\"next start is already queued, do nothing\");\n\t\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\t\tdebug!(\"queueing next start of job\");\n\t\t\t\t\t\t\t\t\t\t\ttokio::spawn({\n\t\t\t\t\t\t\t\t\t\t\t\tlet queued = queued.clone();\n\t\t\t\t\t\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\t\t\t\t\t\tasync move {\n\t\t\t\t\t\t\t\t\t\t\t\t\ttrace!(\"waiting for job to finish\");\n\t\t\t\t\t\t\t\t\t\t\t\t\tjob.to_wait().await;\n\t\t\t\t\t\t\t\t\t\t\t\t\ttrace!(\"job finished, starting queued\");\n\t\t\t\t\t\t\t\t\t\t\t\t\tjob.start();\n\t\t\t\t\t\t\t\t\t\t\t\t\tjob.run({\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tmove |context| 
{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tclear_screen();\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tsetup_process(\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tinnerjob.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tcontext.command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\toutflags,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\ttimeout_config,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\texit_on_error,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tshould_quit.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tstate.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t})\n\t\t\t\t\t\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t\t\t\t\t\t\t\ttrace!(\"resetting queued state\");\n\t\t\t\t\t\t\t\t\t\t\t\t\tqueued.store(false, Ordering::SeqCst);\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\ttrace!(\"job is not running, start it\");\n\t\t\t\t\t\t\t\tjob.start();\n\t\t\t\t\t\t\t\tjob.run({\n\t\t\t\t\t\t\t\t\tlet should_quit = should_quit.clone();\n\t\t\t\t\t\t\t\t\tlet state = state.clone();\n\t\t\t\t\t\t\t\t\tmove |context| {\n\t\t\t\t\t\t\t\t\t\tclear_screen();\n\t\t\t\t\t\t\t\t\t\tsetup_process(\n\t\t\t\t\t\t\t\t\t\t\tinnerjob.clone(),\n\t\t\t\t\t\t\t\t\t\t\tcontext.command.clone(),\n\t\t\t\t\t\t\t\t\t\t\toutflags,\n\t\t\t\t\t\t\t\t\t\t\ttimeout_config,\n\t\t\t\t\t\t\t\t\t\t\texit_on_error,\n\t\t\t\t\t\t\t\t\t\t\tshould_quit.clone(),\n\t\t\t\t\t\t\t\t\t\t\tstate.clone(),\n\t\t\t\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t})\n\t\t\t\t\t}\n\t\t\t\t});\n\n\t\t\t\taction\n\t\t\t}\n\t\t\t.instrument(trace_span!(\"action handler\")),\n\t\t)\n\t});\n\n\tOk(config)\n}\n\n#[instrument(level = \"debug\")]\nfn interpret_command_args(args: &Args) -> Result<Arc<Command>> {\n\tlet mut cmd = args.program.clone();\n\tassert!(!cmd.is_empty(), \"(clap) Bug: command is not present\");\n\n\tlet shell = if args.command.no_shell {\n\t\tNone\n\t} else {\n\t\tlet shell = 
args.command.shell.clone().or_else(|| var(\"SHELL\").ok());\n\t\tmatch shell\n\t\t\t.as_deref()\n\t\t\t.or_else(|| {\n\t\t\t\tif cfg!(not(windows)) {\n\t\t\t\t\tSome(\"sh\")\n\t\t\t\t} else if var(\"POWERSHELL_DISTRIBUTION_CHANNEL\").is_ok()\n\t\t\t\t\t&& (which::which(\"pwsh\").is_ok() || which::which(\"pwsh.exe\").is_ok())\n\t\t\t\t{\n\t\t\t\t\ttrace!(\"detected pwsh\");\n\t\t\t\t\tSome(\"pwsh\")\n\t\t\t\t} else if var(\"PSModulePath\").is_ok()\n\t\t\t\t\t&& (which::which(\"powershell\").is_ok()\n\t\t\t\t\t\t|| which::which(\"powershell.exe\").is_ok())\n\t\t\t\t{\n\t\t\t\t\ttrace!(\"detected powershell\");\n\t\t\t\t\tSome(\"powershell\")\n\t\t\t\t} else {\n\t\t\t\t\tSome(\"cmd\")\n\t\t\t\t}\n\t\t\t})\n\t\t\t.or(Some(\"default\"))\n\t\t{\n\t\t\tSome(\"\") => return Err(RuntimeError::CommandShellEmptyShell).into_diagnostic(),\n\n\t\t\tSome(\"none\") | None => None,\n\n\t\t\t#[cfg(windows)]\n\t\t\tSome(\"cmd\") | Some(\"cmd.exe\") | Some(\"CMD\") | Some(\"CMD.EXE\") => Some(Shell::cmd()),\n\n\t\t\tSome(other) => {\n\t\t\t\tlet sh = other.split_ascii_whitespace().collect::<Vec<_>>();\n\n\t\t\t\t// UNWRAP: checked by Some(\"\")\n\t\t\t\t#[allow(clippy::unwrap_used)]\n\t\t\t\tlet (shprog, shopts) = sh.split_first().unwrap();\n\n\t\t\t\tSome(Shell {\n\t\t\t\t\tprog: shprog.into(),\n\t\t\t\t\toptions: shopts.iter().map(|s| (*s).to_string()).collect(),\n\t\t\t\t\tprogram_option: Some(Cow::Borrowed(OsStr::new(\"-c\"))),\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t};\n\n\tlet program = if let Some(shell) = shell {\n\t\tProgram::Shell {\n\t\t\tshell,\n\t\t\tcommand: cmd.join(\" \"),\n\t\t\targs: Vec::new(),\n\t\t}\n\t} else {\n\t\tProgram::Exec {\n\t\t\tprog: cmd.remove(0).into(),\n\t\t\targs: cmd,\n\t\t}\n\t};\n\n\tOk(Arc::new(Command {\n\t\tprogram,\n\t\toptions: SpawnOptions {\n\t\t\tgrouped: matches!(args.command.wrap_process, WrapMode::Group),\n\t\t\tsession: matches!(args.command.wrap_process, WrapMode::Session),\n\t\t\t..Default::default()\n\t\t},\n\t}))\n}\n\n#[instrument(level = 
\"trace\")]\nfn setup_process(\n\tjob: Job,\n\tcommand: Arc<Command>,\n\toutflags: OutputFlags,\n\ttimeout_config: TimeoutConfig,\n\texit_on_error: bool,\n\tshould_quit: Arc<AtomicBool>,\n\tstate: State,\n) {\n\tif outflags.notify.is_some_and(|m| m.on_start()) {\n\t\tNotification::new()\n\t\t\t.summary(\"Watchexec: change detected\")\n\t\t\t.body(&format!(\"Running {command}\"))\n\t\t\t.show()\n\t\t\t.map_or_else(\n\t\t\t\t|err| {\n\t\t\t\t\teprintln!(\"[[Failed to send desktop notification: {err}]]\");\n\t\t\t\t},\n\t\t\t\tdrop,\n\t\t\t);\n\t}\n\n\tif !outflags.quiet {\n\t\tlet mut stderr = StandardStream::stderr(outflags.colour);\n\t\tstderr.reset().ok();\n\t\tstderr\n\t\t\t.set_color(ColorSpec::new().set_fg(Some(Color::Green)))\n\t\t\t.ok();\n\t\twriteln!(&mut stderr, \"[Running: {command}]\").ok();\n\t\tstderr.reset().ok();\n\t}\n\n\tlet send_quit_event = Arc::new(AtomicBool::new(false));\n\ttokio::spawn({\n\t\tlet send_quit_event = send_quit_event.clone();\n\t\tlet state_for_event = state.clone();\n\t\tasync move {\n\t\t\tlet timed_out = if let Some(timeout) = timeout_config.timeout {\n\t\t\t\ttokio::select! 
{\n\t\t\t\t\t_ = job.to_wait() => false,\n\t\t\t\t\t_ = tokio::time::sleep(timeout) => {\n\t\t\t\t\t\tif cfg!(windows) {\n\t\t\t\t\t\t\tjob.stop().await;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tjob.stop_with_signal(timeout_config.stop_signal, timeout_config.stop_timeout).await;\n\t\t\t\t\t\t}\n\t\t\t\t\t\ttrue\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tjob.to_wait().await;\n\t\t\t\tfalse\n\t\t\t};\n\n\t\t\tjob.run({\n\t\t\t\tlet send_quit_event = send_quit_event.clone();\n\t\t\t\tmove |context| {\n\t\t\t\t\tif let Some(status) = end_of_process(context.current, outflags, timed_out) {\n\t\t\t\t\t\t// Store exit code in state\n\t\t\t\t\t\t*state.exit_code.lock().unwrap() = ExitCode::from(\n\t\t\t\t\t\t\tstatus\n\t\t\t\t\t\t\t\t.into_exitstatus()\n\t\t\t\t\t\t\t\t.code()\n\t\t\t\t\t\t\t\t.unwrap_or(0)\n\t\t\t\t\t\t\t\t.try_into()\n\t\t\t\t\t\t\t\t.unwrap_or(1),\n\t\t\t\t\t\t);\n\n\t\t\t\t\t\t// If exit_on_error is enabled and command failed, signal quit\n\t\t\t\t\t\tif exit_on_error && !matches!(status, ProcessEnd::Success) {\n\t\t\t\t\t\t\tdebug!(\"command failed, setting should_quit flag for --exit-on-error\");\n\t\t\t\t\t\t\tshould_quit.store(true, Ordering::SeqCst);\n\t\t\t\t\t\t\tsend_quit_event.store(true, Ordering::SeqCst);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t})\n\t\t\t.await;\n\n\t\t\t// Send a synthetic event to trigger the action handler to check should_quit\n\t\t\t// This ensures we quit immediately instead of waiting for the next file event\n\t\t\tif send_quit_event.load(Ordering::SeqCst) {\n\t\t\t\tif let Some(wx) = state_for_event.watchexec.get() {\n\t\t\t\t\tdebug!(\"sending synthetic event to trigger quit\");\n\t\t\t\t\tif let Err(e) = wx.send_event(Event::default(), Priority::Urgent).await {\n\t\t\t\t\t\terror!(\"failed to send synthetic quit event: {e}\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t});\n}\n\nfn format_duration(duration: Duration) -> impl fmt::Display {\n\tfmt::from_fn(move |f| {\n\t\tlet secs = duration.as_secs();\n\t\tif 
secs > 0 {\n\t\t\twrite!(f, \"{secs}s\")\n\t\t} else {\n\t\t\twrite!(f, \"{}ms\", duration.subsec_millis())\n\t\t}\n\t})\n}\n\n#[instrument(level = \"trace\")]\nfn end_of_process(\n\tstate: &CommandState,\n\toutflags: OutputFlags,\n\ttimed_out: bool,\n) -> Option<ProcessEnd> {\n\tlet CommandState::Finished {\n\t\tstatus,\n\t\tstarted,\n\t\tfinished,\n\t} = state\n\telse {\n\t\treturn None;\n\t};\n\n\tlet duration = *finished - *started;\n\tlet duration_display = format_duration(duration);\n\tlet timing = if outflags.timings {\n\t\tformat!(\", lasted {duration_display}\")\n\t} else {\n\t\tString::new()\n\t};\n\n\t// Show timeout message and return early - no need for redundant status message\n\tif timed_out {\n\t\tif outflags.notify.is_some_and(|m| m.on_end()) {\n\t\t\tNotification::new()\n\t\t\t\t.summary(\"Watchexec: command timed out\")\n\t\t\t\t.body(&format!(\"Command timed out after {duration_display}\"))\n\t\t\t\t.show()\n\t\t\t\t.map_or_else(\n\t\t\t\t\t|err| {\n\t\t\t\t\t\teprintln!(\"[[Failed to send desktop notification: {err}]]\");\n\t\t\t\t\t},\n\t\t\t\t\tdrop,\n\t\t\t\t);\n\t\t}\n\n\t\tif !outflags.quiet {\n\t\t\tlet mut stderr = StandardStream::stderr(outflags.colour);\n\t\t\tstderr.reset().ok();\n\t\t\tstderr\n\t\t\t\t.set_color(ColorSpec::new().set_fg(Some(Color::Yellow)))\n\t\t\t\t.ok();\n\t\t\twriteln!(&mut stderr, \"[Command timed out after {duration_display}]\").ok();\n\t\t\tstderr.reset().ok();\n\t\t}\n\n\t\tif outflags.bell {\n\t\t\tlet mut stdout = std::io::stdout();\n\t\t\tstdout.write_all(b\"\\x07\").ok();\n\t\t\tstdout.flush().ok();\n\t\t}\n\n\t\treturn Some(*status);\n\t}\n\n\tlet (msg, fg) = match status {\n\t\tProcessEnd::ExitError(code) => (format!(\"Command exited with {code}{timing}\"), Color::Red),\n\t\tProcessEnd::ExitSignal(sig) => {\n\t\t\t(format!(\"Command killed by {sig:?}{timing}\"), Color::Magenta)\n\t\t}\n\t\tProcessEnd::ExitStop(sig) => (format!(\"Command stopped by {sig:?}{timing}\"), 
Color::Blue),\n\t\tProcessEnd::Continued => (format!(\"Command continued{timing}\"), Color::Cyan),\n\t\tProcessEnd::Exception(ex) => (\n\t\t\tformat!(\"Command ended by exception {ex:#x}{timing}\"),\n\t\t\tColor::Yellow,\n\t\t),\n\t\tProcessEnd::Success => (format!(\"Command was successful{timing}\"), Color::Green),\n\t};\n\n\tif outflags.notify.is_some_and(|m| m.on_end()) {\n\t\tNotification::new()\n\t\t\t.summary(\"Watchexec: command ended\")\n\t\t\t.body(&msg)\n\t\t\t.show()\n\t\t\t.map_or_else(\n\t\t\t\t|err| {\n\t\t\t\t\teprintln!(\"[[Failed to send desktop notification: {err}]]\");\n\t\t\t\t},\n\t\t\t\tdrop,\n\t\t\t);\n\t}\n\n\tif !outflags.quiet {\n\t\tlet mut stderr = StandardStream::stderr(outflags.colour);\n\t\tstderr.reset().ok();\n\t\tstderr.set_color(ColorSpec::new().set_fg(Some(fg))).ok();\n\t\twriteln!(&mut stderr, \"[{msg}]\").ok();\n\t\tstderr.reset().ok();\n\t}\n\n\tif outflags.bell {\n\t\tlet mut stdout = std::io::stdout();\n\t\tstdout.write_all(b\"\\x07\").ok();\n\t\tstdout.flush().ok();\n\t}\n\n\tSome(*status)\n}\n\n#[instrument(level = \"trace\")]\nfn emit_events_to_command(\n\tcommand: &mut TokioCommand,\n\tevents: Arc<[Event]>,\n\tstate: State,\n\temit_events_to: EmitEvents,\n\tadd_envs: Arc<[EnvVar]>,\n) {\n\tuse crate::emits::{emits_to_environment, emits_to_file, emits_to_json_file};\n\n\tlet mut stdin = None;\n\n\tlet add_envs = add_envs.clone();\n\tlet mut envs = Box::new(add_envs.into_iter().cloned()) as Box<dyn Iterator<Item = EnvVar>>;\n\n\tmatch emit_events_to {\n\t\tEmitEvents::Environment => {\n\t\t\tenvs = Box::new(envs.chain(emits_to_environment(&events)));\n\t\t}\n\t\tEmitEvents::Stdio => match emits_to_file(&state.emit_file, &events)\n\t\t\t.and_then(|path| File::open(path).into_diagnostic())\n\t\t{\n\t\t\tOk(file) => {\n\t\t\t\tstdin.replace(Stdio::from(file));\n\t\t\t}\n\t\t\tErr(err) => {\n\t\t\t\terror!(\"Failed to write events to stdin, continuing without it: {err}\");\n\t\t\t}\n\t\t},\n\t\tEmitEvents::File => match 
emits_to_file(&state.emit_file, &events) {\n\t\t\tOk(path) => {\n\t\t\t\tenvs = Box::new(envs.chain(once(EnvVar {\n\t\t\t\t\tkey: \"WATCHEXEC_EVENTS_FILE\".into(),\n\t\t\t\t\tvalue: path.into(),\n\t\t\t\t})));\n\t\t\t}\n\t\t\tErr(err) => {\n\t\t\t\terror!(\"Failed to write WATCHEXEC_EVENTS_FILE, continuing without it: {err}\");\n\t\t\t}\n\t\t},\n\t\tEmitEvents::JsonStdio => match emits_to_json_file(&state.emit_file, &events)\n\t\t\t.and_then(|path| File::open(path).into_diagnostic())\n\t\t{\n\t\t\tOk(file) => {\n\t\t\t\tstdin.replace(Stdio::from(file));\n\t\t\t}\n\t\t\tErr(err) => {\n\t\t\t\terror!(\"Failed to write events to stdin, continuing without it: {err}\");\n\t\t\t}\n\t\t},\n\t\tEmitEvents::JsonFile => match emits_to_json_file(&state.emit_file, &events) {\n\t\t\tOk(path) => {\n\t\t\t\tenvs = Box::new(envs.chain(once(EnvVar {\n\t\t\t\t\tkey: \"WATCHEXEC_EVENTS_FILE\".into(),\n\t\t\t\t\tvalue: path.into(),\n\t\t\t\t})));\n\t\t\t}\n\t\t\tErr(err) => {\n\t\t\t\terror!(\"Failed to write WATCHEXEC_EVENTS_FILE, continuing without it: {err}\");\n\t\t\t}\n\t\t},\n\t\tEmitEvents::None => {}\n\t}\n\n\tfor var in envs {\n\t\tdebug!(?var, \"inserting environment variable\");\n\t\tcommand.env(var.key, var.value);\n\t}\n\n\tif let Some(stdin) = stdin {\n\t\tdebug!(\"set command stdin\");\n\t\tcommand.stdin(stdin);\n\t}\n}\n\npub fn reset_screen() {\n\tfor cs in [\n\t\tClearScreen::WindowsCooked,\n\t\tClearScreen::WindowsVt,\n\t\tClearScreen::VtLeaveAlt,\n\t\tClearScreen::VtWellDone,\n\t\tClearScreen::default(),\n\t] {\n\t\tcs.clear().ok();\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/dirs.rs",
    "content": "use std::{\n\tcollections::HashSet,\n\tpath::{Path, PathBuf},\n};\n\nuse ignore_files::{IgnoreFile, IgnoreFilesFromOriginArgs};\nuse miette::{miette, IntoDiagnostic, Result};\nuse project_origins::ProjectType;\nuse tokio::fs::canonicalize;\nuse tracing::{debug, info, warn};\nuse watchexec::paths::common_prefix;\n\nuse crate::args::{command::CommandArgs, filtering::FilteringArgs, Args};\n\npub async fn project_origin(\n\tFilteringArgs {\n\t\tproject_origin,\n\t\tpaths,\n\t\t..\n\t}: &FilteringArgs,\n\tCommandArgs { workdir, .. }: &CommandArgs,\n) -> Result<PathBuf> {\n\tlet project_origin = if let Some(origin) = project_origin {\n\t\tdebug!(?origin, \"project origin override\");\n\t\tcanonicalize(origin).await.into_diagnostic()?\n\t} else {\n\t\tlet homedir = match dirs::home_dir() {\n\t\t\tNone => None,\n\t\t\tSome(dir) => Some(canonicalize(dir).await.into_diagnostic()?),\n\t\t};\n\t\tdebug!(?homedir, \"home directory\");\n\n\t\tlet homedir_requested = homedir.as_ref().map_or(false, |home| {\n\t\t\tpaths\n\t\t\t\t.binary_search_by_key(home, |w| PathBuf::from(w.clone()))\n\t\t\t\t.is_ok()\n\t\t});\n\t\tdebug!(\n\t\t\t?homedir_requested,\n\t\t\t\"resolved whether the homedir is explicitly requested\"\n\t\t);\n\n\t\tlet mut origins = HashSet::new();\n\t\tfor path in paths {\n\t\t\torigins.extend(project_origins::origins(path).await);\n\t\t}\n\n\t\tmatch (homedir, homedir_requested) {\n\t\t\t(Some(ref dir), false) if origins.contains(dir) => {\n\t\t\t\tdebug!(\"removing homedir from origins\");\n\t\t\t\torigins.remove(dir);\n\t\t\t}\n\t\t\t_ => {}\n\t\t}\n\n\t\tif origins.is_empty() {\n\t\t\tdebug!(\"no origins, using current directory\");\n\t\t\torigins.insert(workdir.clone().unwrap());\n\t\t}\n\n\t\tdebug!(?origins, \"resolved all project origins\");\n\n\t\t// This canonicalize is probably redundant\n\t\tcanonicalize(\n\t\t\tcommon_prefix(&origins)\n\t\t\t\t.ok_or_else(|| miette!(\"no common prefix, but this should never 
fail\"))?,\n\t\t)\n\t\t.await\n\t\t.into_diagnostic()?\n\t};\n\tdebug!(?project_origin, \"resolved common/project origin\");\n\n\tOk(project_origin)\n}\n\npub async fn vcs_types(origin: &Path) -> Vec<ProjectType> {\n\tlet vcs_types = project_origins::types(origin)\n\t\t.await\n\t\t.into_iter()\n\t\t.filter(|pt| pt.is_vcs())\n\t\t.collect::<Vec<_>>();\n\tinfo!(?vcs_types, \"effective vcs types\");\n\tvcs_types\n}\n\npub async fn ignores(args: &Args, vcs_types: &[ProjectType]) -> Result<Vec<IgnoreFile>> {\n\tlet origin = args.filtering.project_origin.clone().unwrap();\n\tlet mut skip_git_global_excludes = false;\n\n\tlet mut ignores = if args.filtering.no_project_ignore {\n\t\tVec::new()\n\t} else {\n\t\tlet ignore_files = args.filtering.ignore_files.iter().map(|path| {\n\t\t\tif path.is_absolute() {\n\t\t\t\tpath.into()\n\t\t\t} else {\n\t\t\t\torigin.join(path)\n\t\t\t}\n\t\t});\n\n\t\tlet (mut ignores, errors) = ignore_files::from_origin(\n\t\t\tIgnoreFilesFromOriginArgs::new_unchecked(\n\t\t\t\t&origin,\n\t\t\t\targs.filtering.paths.iter().map(PathBuf::from),\n\t\t\t\tignore_files,\n\t\t\t)\n\t\t\t.canonicalise()\n\t\t\t.await\n\t\t\t.into_diagnostic()?,\n\t\t)\n\t\t.await;\n\n\t\tfor err in errors {\n\t\t\twarn!(\"while discovering project-local ignore files: {}\", err);\n\t\t}\n\t\tdebug!(?ignores, \"discovered ignore files from project origin\");\n\n\t\tif !vcs_types.is_empty() {\n\t\t\tignores = ignores\n\t\t\t\t.into_iter()\n\t\t\t\t.filter(|ig| match ig.applies_to {\n\t\t\t\t\tSome(pt) if pt.is_vcs() => vcs_types.contains(&pt),\n\t\t\t\t\t_ => true,\n\t\t\t\t})\n\t\t\t\t.inspect(|ig| {\n\t\t\t\t\tif let IgnoreFile {\n\t\t\t\t\t\tapplies_to: Some(ProjectType::Git),\n\t\t\t\t\t\tapplies_in: None,\n\t\t\t\t\t\t..\n\t\t\t\t\t} = ig\n\t\t\t\t\t{\n\t\t\t\t\t\twarn!(\"project git config overrides the global excludes\");\n\t\t\t\t\t\tskip_git_global_excludes = true;\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\t.collect::<Vec<_>>();\n\t\t\tdebug!(?ignores, \"filtered ignores 
to only those for project vcs\");\n\t\t}\n\n\t\tignores\n\t};\n\n\tlet global_ignores = if args.filtering.no_global_ignore {\n\t\tVec::new()\n\t} else {\n\t\tlet (mut global_ignores, errors) = ignore_files::from_environment(Some(\"watchexec\")).await;\n\t\tfor err in errors {\n\t\t\twarn!(\"while discovering global ignore files: {}\", err);\n\t\t}\n\t\tdebug!(?global_ignores, \"discovered ignore files from environment\");\n\n\t\tif skip_git_global_excludes {\n\t\t\tglobal_ignores = global_ignores\n\t\t\t\t.into_iter()\n\t\t\t\t.filter(|gig| {\n\t\t\t\t\t!matches!(\n\t\t\t\t\t\tgig,\n\t\t\t\t\t\tIgnoreFile {\n\t\t\t\t\t\t\tapplies_to: Some(ProjectType::Git),\n\t\t\t\t\t\t\tapplies_in: None,\n\t\t\t\t\t\t\t..\n\t\t\t\t\t\t}\n\t\t\t\t\t)\n\t\t\t\t})\n\t\t\t\t.collect::<Vec<_>>();\n\t\t\tdebug!(\n\t\t\t\t?global_ignores,\n\t\t\t\t\"filtered global ignores to exclude global git ignores\"\n\t\t\t);\n\t\t}\n\n\t\tglobal_ignores\n\t};\n\n\tignores.extend(global_ignores.into_iter().filter(|ig| match ig.applies_to {\n\t\tSome(pt) if pt.is_vcs() => vcs_types.contains(&pt),\n\t\t_ => true,\n\t}));\n\tdebug!(\n\t\t?ignores,\n\t\t?vcs_types,\n\t\t\"combined and applied overall vcs filter over ignores\"\n\t);\n\n\tignores.extend(args.filtering.ignore_files.iter().map(|ig| IgnoreFile {\n\t\tapplies_to: None,\n\t\tapplies_in: None,\n\t\tpath: ig.clone(),\n\t}));\n\tdebug!(\n\t\t?ignores,\n\t\t?args.filtering.ignore_files,\n\t\t\"combined with ignore files from command line / env\"\n\t);\n\n\tif args.filtering.no_project_ignore {\n\t\tignores = ignores\n\t\t\t.into_iter()\n\t\t\t.filter(|ig| {\n\t\t\t\t!ig.applies_in\n\t\t\t\t\t.as_ref()\n\t\t\t\t\t.map_or(false, |p| p.starts_with(&origin))\n\t\t\t})\n\t\t\t.collect::<Vec<_>>();\n\t\tdebug!(\n\t\t\t?ignores,\n\t\t\t\"filtered ignores to exclude project-local ignores\"\n\t\t);\n\t}\n\n\tif args.filtering.no_global_ignore {\n\t\tignores = ignores\n\t\t\t.into_iter()\n\t\t\t.filter(|ig| 
ig.applies_in.is_some())\n\t\t\t.collect::<Vec<_>>();\n\t\tdebug!(?ignores, \"filtered ignores to exclude global ignores\");\n\t}\n\n\tif args.filtering.no_vcs_ignore {\n\t\tignores = ignores\n\t\t\t.into_iter()\n\t\t\t.filter(|ig| ig.applies_to.is_none())\n\t\t\t.collect::<Vec<_>>();\n\t\tdebug!(?ignores, \"filtered ignores to exclude VCS-specific ignores\");\n\t}\n\n\tinfo!(files=?ignores.iter().map(|ig| ig.path.as_path()).collect::<Vec<_>>(), \"found some ignores\");\n\tOk(ignores)\n}\n"
  },
  {
    "path": "crates/cli/src/emits.rs",
    "content": "use std::{fmt::Write, path::PathBuf};\n\nuse miette::{IntoDiagnostic, Result};\nuse watchexec::paths::summarise_events_to_env;\nuse watchexec_events::{filekind::FileEventKind, Event, Tag};\n\nuse crate::{args::command::EnvVar, state::RotatingTempFile};\n\npub fn emits_to_environment(events: &[Event]) -> impl Iterator<Item = EnvVar> {\n\tsummarise_events_to_env(events.iter())\n\t\t.into_iter()\n\t\t.map(|(k, value)| EnvVar {\n\t\t\tkey: format!(\"WATCHEXEC_{k}_PATH\"),\n\t\t\tvalue,\n\t\t})\n}\n\npub fn events_to_simple_format(events: &[Event]) -> Result<String> {\n\tlet mut buf = String::new();\n\tfor event in events {\n\t\tlet feks = event\n\t\t\t.tags\n\t\t\t.iter()\n\t\t\t.filter_map(|tag| match tag {\n\t\t\t\tTag::FileEventKind(kind) => Some(kind),\n\t\t\t\t_ => None,\n\t\t\t})\n\t\t\t.collect::<Vec<_>>();\n\n\t\tfor path in event.paths().map(|(p, _)| p) {\n\t\t\tif feks.is_empty() {\n\t\t\t\twriteln!(&mut buf, \"other:{}\", path.to_string_lossy()).into_diagnostic()?;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tfor fek in &feks {\n\t\t\t\twriteln!(\n\t\t\t\t\t&mut buf,\n\t\t\t\t\t\"{}:{}\",\n\t\t\t\t\tmatch fek {\n\t\t\t\t\t\tFileEventKind::Any | FileEventKind::Other => \"other\",\n\t\t\t\t\t\tFileEventKind::Access(_) => \"access\",\n\t\t\t\t\t\tFileEventKind::Create(_) => \"create\",\n\t\t\t\t\t\tFileEventKind::Modify(_) => \"modify\",\n\t\t\t\t\t\tFileEventKind::Remove(_) => \"remove\",\n\t\t\t\t\t},\n\t\t\t\t\tpath.to_string_lossy()\n\t\t\t\t)\n\t\t\t\t.into_diagnostic()?;\n\t\t\t}\n\t\t}\n\t}\n\n\tOk(buf)\n}\n\npub fn emits_to_file(target: &RotatingTempFile, events: &[Event]) -> Result<PathBuf> {\n\ttarget.rotate()?;\n\ttarget.write(events_to_simple_format(events)?.as_bytes())?;\n\tOk(target.path())\n}\n\npub fn emits_to_json_file(target: &RotatingTempFile, events: &[Event]) -> Result<PathBuf> {\n\ttarget.rotate()?;\n\tfor event in events {\n\t\tif event.is_empty() 
{\n\t\t\tcontinue;\n\t\t}\n\n\t\ttarget.write(&serde_json::to_vec(event).into_diagnostic()?)?;\n\t\ttarget.write(b\"\\n\")?;\n\t}\n\tOk(target.path())\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/parse.rs",
    "content": "use std::{fmt::Debug, path::PathBuf};\n\nuse jaq_core::{\n\tload::{Arena, File, Loader},\n\tCtx, Filter, Native, RcIter,\n};\nuse jaq_json::Val;\nuse miette::{miette, IntoDiagnostic, Result, WrapErr};\nuse tokio::io::AsyncReadExt;\nuse tracing::{debug, trace};\nuse watchexec_events::Event;\n\nuse super::proglib::jaq_lib;\n\n#[derive(Clone)]\npub enum FilterProgram {\n\tJaq(Filter<Native<Val>>),\n}\n\nimpl Debug for FilterProgram {\n\tfn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n\t\tmatch self {\n\t\t\tSelf::Jaq(_) => f.debug_tuple(\"Jaq\").field(&\"filter\").finish(),\n\t\t}\n\t}\n}\n\nimpl FilterProgram {\n\tpub(crate) async fn new_jaq_from_file(path: impl Into<PathBuf>) -> Result<Self> {\n\t\tasync fn inner(path: PathBuf) -> Result<FilterProgram> {\n\t\t\ttrace!(?path, \"reading filter program from file\");\n\t\t\tlet mut progfile = tokio::fs::File::open(&path).await.into_diagnostic()?;\n\t\t\tlet mut buf =\n\t\t\t\tString::with_capacity(progfile.metadata().await.into_diagnostic()?.len() as _);\n\t\t\tlet bytes_read = progfile.read_to_string(&mut buf).await.into_diagnostic()?;\n\t\t\tdebug!(?path, %bytes_read, \"read filter program from file\");\n\t\t\tFilterProgram::new_jaq(path, buf)\n\t\t}\n\n\t\tlet path = path.into();\n\t\tlet error = format!(\"in file {path:?}\");\n\t\tinner(path).await.wrap_err(error)\n\t}\n\n\tpub(crate) fn new_jaq_from_arg(n: usize, arg: String) -> Result<Self> {\n\t\tlet path = PathBuf::from(format!(\"<arg {n}>\"));\n\t\tlet error = format!(\"in --filter-prog {n}\");\n\t\tSelf::new_jaq(path, arg).wrap_err(error)\n\t}\n\n\tfn new_jaq(path: PathBuf, code: String) -> Result<Self> {\n\t\tlet user_lib_paths = [\n\t\t\tPathBuf::from(\"~/.jq\"),\n\t\t\tPathBuf::from(\"$ORIGIN/../lib/jq\"),\n\t\t\tPathBuf::from(\"$ORIGIN/../lib\"),\n\t\t];\n\t\tlet arena = Arena::default();\n\t\tlet loader =\n\t\t\tLoader::new(jaq_std::defs().chain(jaq_json::defs())).with_std_read(&user_lib_paths);\n\t\tlet modules = 
match loader.load(&arena, File { path, code: &code }) {\n\t\t\tOk(m) => m,\n\t\t\tErr(errs) => {\n\t\t\t\tlet errs = errs\n\t\t\t\t\t.into_iter()\n\t\t\t\t\t.map(|(_, err)| format!(\"{err:?}\"))\n\t\t\t\t\t.collect::<Vec<_>>()\n\t\t\t\t\t.join(\"\\n\");\n\t\t\t\treturn Err(miette!(\"{}\", errs).wrap_err(\"failed to load filter program\"));\n\t\t\t}\n\t\t};\n\n\t\tlet filter = jaq_lib()\n\t\t\t.compile(modules)\n\t\t\t.map_err(|errs| miette!(\"Failed to compile jaq program: {:?}\", errs))?;\n\t\tOk(Self::Jaq(filter))\n\t}\n\n\tpub(crate) fn run(&self, event: &Event) -> Result<bool> {\n\t\tmatch self {\n\t\t\tSelf::Jaq(filter) => {\n\t\t\t\tlet inputs = RcIter::new(std::iter::empty());\n\t\t\t\tlet val = serde_json::to_value(event)\n\t\t\t\t\t.map_err(|err| miette!(\"failed to serialize event: {}\", err))\n\t\t\t\t\t.map(Val::from)?;\n\n\t\t\t\tlet mut results = filter.run((Ctx::new([], &inputs), val));\n\t\t\t\tresults\n\t\t\t\t\t.next()\n\t\t\t\t\t.ok_or_else(|| miette!(\"returned no value\"))?\n\t\t\t\t\t.map_err(|err| miette!(\"program failed: {err}\"))\n\t\t\t\t\t.and_then(|val| match val {\n\t\t\t\t\t\tVal::Bool(b) => Ok(b),\n\t\t\t\t\t\tval => Err(miette!(\"returned non-boolean {val:?}\")),\n\t\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/proglib/file.rs",
    "content": "use std::{\n\tfs::{metadata, File, FileType, Metadata},\n\tio::{BufReader, Read},\n\titer::once,\n\ttime::{SystemTime, UNIX_EPOCH},\n};\n\nuse jaq_core::{Error, Native};\nuse jaq_json::Val;\nuse jaq_std::{v, Filter};\nuse serde_json::{json, Value};\nuse tracing::{debug, error};\n\nuse super::macros::return_err;\n\npub fn funs() -> [Filter<Native<jaq_json::Val>>; 3] {\n\t[\n\t\t(\n\t\t\t\"file_read\",\n\t\t\tv(0),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (mut ctx, val)| {\n\t\t\t\t\tlet path = match &val {\n\t\t\t\t\t\tVal::Str(v) => v.to_string(),\n\t\t\t\t\t\t_ => return_err!(Err(Error::str(format!(\"expected string (path) but got {val:?}\")))),\n\t\t\t\t\t};\n\n\t\t\t\t\tlet Val::Int(bytes) = ctx.pop_var() else {\n\t\t\t\t\t\treturn_err!(Err(Error::str(\"expected integer\")));\n\t\t\t\t\t};\n\n\t\t\t\t\tlet bytes = match u64::try_from(bytes) {\n\t\t\t\t\t\tOk(b) => b,\n\t\t\t\t\t\tErr(err) => return_err!(Err(Error::str(format!(\n\t\t\t\t\t\t\t\"expected positive integer; {err}\"\n\t\t\t\t\t\t)))),\n\t\t\t\t\t};\n\n\t\t\t\t\tBox::new(once(Ok(match File::open(&path) {\n\t\t\t\t\t\tOk(file) => {\n\t\t\t\t\t\t\tlet buf_reader = BufReader::new(file);\n\t\t\t\t\t\t\tlet mut limited = buf_reader.take(bytes);\n\t\t\t\t\t\t\tlet mut buffer = String::with_capacity(bytes as _);\n\t\t\t\t\t\t\tmatch limited.read_to_string(&mut buffer) {\n\t\t\t\t\t\t\t\tOk(read) => {\n\t\t\t\t\t\t\t\t\tdebug!(\"jaq: read {read} bytes from {path:?}\");\n\t\t\t\t\t\t\t\t\tVal::Str(buffer.into())\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\t\t\terror!(\"jaq: failed to read from {path:?}: {err:?}\");\n\t\t\t\t\t\t\t\t\tVal::Null\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\terror!(\"jaq: failed to open file {path:?}: {err:?}\");\n\t\t\t\t\t\t\tVal::Null\n\t\t\t\t\t\t}\n\t\t\t\t\t})))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"file_meta\",\n\t\t\tv(0),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (_, val)| {\n\t\t\t\t\tlet path 
= match &val {\n\t\t\t\t\t\tVal::Str(v) => v.to_string(),\n\t\t\t\t\t\t_ => return_err!(Err(Error::str(format!(\"expected string (path) but got {val:?}\")))),\n\t\t\t\t\t};\n\n\t\t\t\t\tBox::new(once(Ok(match metadata(&path) {\n\t\t\t\t\t\tOk(meta) => Val::from(json_meta(meta)),\n\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\terror!(\"jaq: failed to open {path:?}: {err:?}\");\n\t\t\t\t\t\t\tVal::Null\n\t\t\t\t\t\t}\n\t\t\t\t\t})))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"file_size\",\n\t\t\tv(0),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (_, val)| {\n\t\t\t\t\tlet path = match &val {\n\t\t\t\t\t\tVal::Str(v) => v.to_string(),\n\t\t\t\t\t\t_ => return_err!(Err(Error::str(format!(\"expected string (path) but got {val:?}\")))),\n\t\t\t\t\t};\n\n\t\t\t\t\tBox::new(once(Ok(match metadata(&path) {\n\t\t\t\t\t\tOk(meta) => Val::Int(meta.len() as _),\n\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\terror!(\"jaq: failed to open {path:?}: {err:?}\");\n\t\t\t\t\t\t\tVal::Null\n\t\t\t\t\t\t}\n\t\t\t\t\t})))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t]\n}\n\nfn json_meta(meta: Metadata) -> Value {\n\tlet perms = meta.permissions();\n\t#[cfg_attr(not(unix), allow(unused_mut))]\n\tlet mut val = json!({\n\t\t\"type\": filetype_str(meta.file_type()),\n\t\t\"size\": meta.len(),\n\t\t\"modified\": fs_time(meta.modified()),\n\t\t\"accessed\": fs_time(meta.accessed()),\n\t\t\"created\": fs_time(meta.created()),\n\t\t\"dir\": meta.is_dir(),\n\t\t\"file\": meta.is_file(),\n\t\t\"symlink\": meta.is_symlink(),\n\t\t\"readonly\": perms.readonly(),\n\t});\n\n\t#[cfg(unix)]\n\t{\n\t\tuse std::os::unix::fs::PermissionsExt;\n\t\tlet map = val.as_object_mut().unwrap();\n\t\tmap.insert(\n\t\t\t\"mode\".to_string(),\n\t\t\tValue::String(format!(\"{:o}\", perms.mode())),\n\t\t);\n\t\tmap.insert(\"mode_byte\".to_string(), Value::from(perms.mode()));\n\t\tmap.insert(\n\t\t\t\"executable\".to_string(),\n\t\t\tValue::Bool(perms.mode() & 0o111 != 0),\n\t\t);\n\t}\n\n\tval\n}\n\nfn filetype_str(filetype: FileType) -> &'static str 
{\n\t#[cfg(unix)]\n\t{\n\t\tuse std::os::unix::fs::FileTypeExt;\n\t\tif filetype.is_char_device() {\n\t\t\treturn \"char\";\n\t\t} else if filetype.is_block_device() {\n\t\t\treturn \"block\";\n\t\t} else if filetype.is_fifo() {\n\t\t\treturn \"fifo\";\n\t\t} else if filetype.is_socket() {\n\t\t\treturn \"socket\";\n\t\t}\n\t}\n\n\t#[cfg(windows)]\n\t{\n\t\tuse std::os::windows::fs::FileTypeExt;\n\t\tif filetype.is_symlink_dir() {\n\t\t\treturn \"symdir\";\n\t\t} else if filetype.is_symlink_file() {\n\t\t\treturn \"symfile\";\n\t\t}\n\t}\n\n\tif filetype.is_dir() {\n\t\t\"dir\"\n\t} else if filetype.is_file() {\n\t\t\"file\"\n\t} else if filetype.is_symlink() {\n\t\t\"symlink\"\n\t} else {\n\t\t\"unknown\"\n\t}\n}\n\nfn fs_time(time: std::io::Result<SystemTime>) -> Option<u64> {\n\ttime.ok()\n\t\t.and_then(|time| time.duration_since(UNIX_EPOCH).ok())\n\t\t.map(|dur| dur.as_secs())\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/proglib/hash.rs",
    "content": "use std::{fs::File, io::Read, iter::once};\n\nuse jaq_core::{Error, Native};\nuse jaq_json::Val;\nuse jaq_std::{v, Filter};\nuse tracing::{debug, error};\n\nuse super::macros::return_err;\n\npub fn funs() -> [Filter<Native<jaq_json::Val>>; 2] {\n\t[\n\t\t(\n\t\t\t\"hash\",\n\t\t\tv(0),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (_, val)| {\n\t\t\t\t\tlet string = match &val {\n\t\t\t\t\t\tVal::Str(v) => v.to_string(),\n\t\t\t\t\t\t_ => return_err!(Err(Error::str(\"expected string but got {val:?}\"))),\n\t\t\t\t\t};\n\n\t\t\t\t\tBox::new(once(Ok(Val::Str(\n\t\t\t\t\t\tblake3::hash(string.as_bytes()).to_hex().to_string().into(),\n\t\t\t\t\t))))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"file_hash\",\n\t\t\tv(0),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (_, val)| {\n\t\t\t\t\tlet path = match &val {\n\t\t\t\t\t\tVal::Str(v) => v.to_string(),\n\t\t\t\t\t\t_ => return_err!(Err(Error::str(\"expected string but got {val:?}\"))),\n\t\t\t\t\t};\n\n\t\t\t\t\tBox::new(once(Ok(match File::open(&path) {\n\t\t\t\t\t\tOk(mut file) => {\n\t\t\t\t\t\t\tconst BUFFER_SIZE: usize = 1024 * 1024;\n\t\t\t\t\t\t\tlet mut hasher = blake3::Hasher::new();\n\t\t\t\t\t\t\tlet mut buf = vec![0; BUFFER_SIZE];\n\t\t\t\t\t\t\twhile let Ok(bytes) = file.read(&mut buf) {\n\t\t\t\t\t\t\t\tdebug!(\"jaq: read {bytes} bytes from {path:?}\");\n\t\t\t\t\t\t\t\tif bytes == 0 {\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\thasher.update(&buf[..bytes]);\n\t\t\t\t\t\t\t\tbuf = vec![0; BUFFER_SIZE];\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tVal::Str(hasher.finalize().to_hex().to_string().into())\n\t\t\t\t\t\t}\n\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\terror!(\"jaq: failed to open file {path:?}: {err:?}\");\n\t\t\t\t\t\t\tVal::Null\n\t\t\t\t\t\t}\n\t\t\t\t\t})))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t]\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/proglib/kv.rs",
    "content": "use std::{\n\titer::once,\n\tsync::{Arc, OnceLock},\n};\n\nuse dashmap::DashMap;\nuse jaq_core::Native;\nuse jaq_json::Val;\nuse jaq_std::{v, Filter};\n\nuse crate::filterer::syncval::SyncVal;\n\ntype KvStore = Arc<DashMap<String, SyncVal>>;\nfn kv_store() -> KvStore {\n\tstatic KV_STORE: OnceLock<KvStore> = OnceLock::new();\n\tKV_STORE.get_or_init(KvStore::default).clone()\n}\n\npub fn funs() -> [Filter<Native<jaq_json::Val>>; 3] {\n\t[\n\t\t(\n\t\t\t\"kv_clear\",\n\t\t\tv(0),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (_, val)| {\n\t\t\t\t\tlet kv = kv_store();\n\t\t\t\t\tkv.clear();\n\t\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"kv_store\",\n\t\t\tv(1),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (mut ctx, val)| {\n\t\t\t\t\tlet kv = kv_store();\n\n\t\t\t\t\tlet key = ctx.pop_var().to_string();\n\t\t\t\t\tkv.insert(key, (&val).into());\n\t\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"kv_fetch\",\n\t\t\tv(1),\n\t\t\tNative::new({\n\t\t\t\tmove |_, (mut ctx, _)| {\n\t\t\t\t\tlet kv = kv_store();\n\t\t\t\t\tlet key = ctx.pop_var().to_string();\n\n\t\t\t\t\tBox::new(once(Ok(kv\n\t\t\t\t\t\t.get(&key)\n\t\t\t\t\t\t.map_or(Val::Null, |val| val.value().into()))))\n\t\t\t\t}\n\t\t\t}),\n\t\t),\n\t]\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/proglib/macros.rs",
    "content": "macro_rules! return_err {\n\t($err:expr) => {\n\t\treturn Box::new(once($err.map_err(Into::into)))\n\t};\n}\npub(crate) use return_err;\n"
  },
  {
    "path": "crates/cli/src/filterer/proglib/output.rs",
    "content": "use std::iter::once;\n\nuse jaq_core::{Ctx, Error, Native};\nuse jaq_json::Val;\nuse jaq_std::{v, Filter};\nuse tracing::{debug, error, info, trace, warn};\n\nuse super::macros::return_err;\n\nmacro_rules! log_action {\n\t($level:expr, $val:expr) => {\n\t\tmatch $level.to_ascii_lowercase().as_str() {\n\t\t\t\"trace\" => trace!(\"jaq: {}\", $val),\n\t\t\t\"debug\" => debug!(\"jaq: {}\", $val),\n\t\t\t\"info\" => info!(\"jaq: {}\", $val),\n\t\t\t\"warn\" => warn!(\"jaq: {}\", $val),\n\t\t\t\"error\" => error!(\"jaq: {}\", $val),\n\t\t\t_ => return_err!(Err(Error::str(\"invalid log level\"))),\n\t\t}\n\t};\n}\n\npub fn funs() -> [Filter<Native<jaq_json::Val>>; 3] {\n\t[\n\t\t(\n\t\t\t\"log\",\n\t\t\tv(1),\n\t\t\tNative::new(|_, (mut ctx, val): (Ctx<'_, Val>, _)| {\n\t\t\t\tlet level = ctx.pop_var().to_string();\n\t\t\t\tlog_action!(level, val);\n\n\t\t\t\t// passthrough\n\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t})\n\t\t\t.with_update(|_, (mut ctx, val), _| {\n\t\t\t\tlet level = ctx.pop_var().to_string();\n\t\t\t\tlog_action!(level, val);\n\n\t\t\t\t// passthrough\n\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"printout\",\n\t\t\tv(0),\n\t\t\tNative::new(|_, (_, val)| {\n\t\t\t\tprintln!(\"{val}\");\n\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t})\n\t\t\t.with_update(|_, (_, val), _| {\n\t\t\t\tprintln!(\"{val}\");\n\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t}),\n\t\t),\n\t\t(\n\t\t\t\"printerr\",\n\t\t\tv(0),\n\t\t\tNative::new(|_, (_, val)| {\n\t\t\t\teprintln!(\"{val}\");\n\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t})\n\t\t\t.with_update(|_, (_, val), _| {\n\t\t\t\teprintln!(\"{val}\");\n\t\t\t\tBox::new(once(Ok(val)))\n\t\t\t}),\n\t\t),\n\t]\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/proglib.rs",
    "content": "use jaq_core::{Compiler, Native};\n\nmod file;\nmod hash;\nmod kv;\nmod macros;\nmod output;\n\npub fn jaq_lib<'s>() -> Compiler<&'s str, Native<jaq_json::Val>> {\n\tCompiler::<_, Native<_>>::default().with_funs(\n\t\tjaq_std::funs()\n\t\t\t.chain(jaq_json::funs())\n\t\t\t.chain(file::funs())\n\t\t\t.chain(hash::funs())\n\t\t\t.chain(kv::funs())\n\t\t\t.chain(output::funs()),\n\t)\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/progs.rs",
    "content": "use std::marker::PhantomData;\n\nuse miette::miette;\nuse tokio::{\n\tsync::{mpsc, oneshot},\n\ttask::{block_in_place, spawn_blocking},\n};\nuse tracing::{error, trace, warn};\nuse watchexec::error::RuntimeError;\nuse watchexec_events::Event;\n\nuse crate::args::Args;\n\nconst BUFFER: usize = 128;\n\n#[derive(Debug)]\npub struct FilterProgs {\n\tchannel: Requester<Event, bool>,\n}\n\n#[derive(Debug, Clone)]\npub struct Requester<S, R> {\n\tsender: mpsc::Sender<(S, oneshot::Sender<R>)>,\n\t_receiver: PhantomData<R>,\n}\n\nimpl<S, R> Requester<S, R>\nwhere\n\tS: Send + Sync,\n\tR: Send + Sync,\n{\n\tpub fn new(capacity: usize) -> (Self, mpsc::Receiver<(S, oneshot::Sender<R>)>) {\n\t\tlet (sender, receiver) = mpsc::channel(capacity);\n\t\t(\n\t\t\tSelf {\n\t\t\t\tsender,\n\t\t\t\t_receiver: PhantomData,\n\t\t\t},\n\t\t\treceiver,\n\t\t)\n\t}\n\n\tpub fn call(&self, value: S) -> Result<R, RuntimeError> {\n\t\t// FIXME: this should really be async with a timeout, but that needs filtering in general\n\t\t// to be async, which should be done at some point\n\t\tblock_in_place(|| {\n\t\t\tlet (sender, receiver) = oneshot::channel();\n\t\t\tself.sender.blocking_send((value, sender)).map_err(|err| {\n\t\t\t\tRuntimeError::External(miette!(\"filter progs internal channel: {}\", err).into())\n\t\t\t})?;\n\t\t\treceiver\n\t\t\t\t.blocking_recv()\n\t\t\t\t.map_err(|err| RuntimeError::External(Box::new(err)))\n\t\t})\n\t}\n}\n\nimpl FilterProgs {\n\tpub fn check(&self, event: &Event) -> Result<bool, RuntimeError> {\n\t\tself.channel.call(event.clone())\n\t}\n\n\tpub fn new(args: &Args) -> miette::Result<Self> {\n\t\tlet progs = args.filtering.filter_programs_parsed.clone();\n\t\tlet (requester, mut receiver) = Requester::<Event, bool>::new(BUFFER);\n\t\tlet task = spawn_blocking(move || {\n\t\t\t'chan: while let Some((event, sender)) = receiver.blocking_recv() {\n\t\t\t\tfor (n, prog) in progs.iter().enumerate() {\n\t\t\t\t\ttrace!(?n, \"trying filter 
program\");\n\t\t\t\t\tmatch prog.run(&event) {\n\t\t\t\t\t\tOk(false) => {\n\t\t\t\t\t\t\ttrace!(\n\t\t\t\t\t\t\t\t?n,\n\t\t\t\t\t\t\t\tverdict = false,\n\t\t\t\t\t\t\t\t\"filter program finished; fail so stopping there\"\n\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\tsender\n\t\t\t\t\t\t\t\t.send(false)\n\t\t\t\t\t\t\t\t.unwrap_or_else(|_| warn!(\"failed to send filter result\"));\n\t\t\t\t\t\t\tcontinue 'chan;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tOk(true) => {\n\t\t\t\t\t\t\ttrace!(\n\t\t\t\t\t\t\t\t?n,\n\t\t\t\t\t\t\t\tverdict = true,\n\t\t\t\t\t\t\t\t\"filter program finished; pass so trying next\"\n\t\t\t\t\t\t\t);\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\terror!(?n, error=%err, \"filter program failed, so trying next\");\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\ttrace!(\"all filters failed, sending pass as default\");\n\t\t\t\tsender\n\t\t\t\t\t.send(true)\n\t\t\t\t\t.unwrap_or_else(|_| warn!(\"failed to send filter result\"));\n\t\t\t}\n\n\t\t\tOk(()) as miette::Result<()>\n\t\t});\n\n\t\ttokio::spawn(async {\n\t\t\tmatch task.await {\n\t\t\t\tOk(Ok(())) => {}\n\t\t\t\tOk(Err(err)) => error!(\"filter progs task failed: {}\", err),\n\t\t\t\tErr(err) => error!(\"filter progs task panicked: {}\", err),\n\t\t\t}\n\t\t});\n\n\t\tOk(Self { channel: requester })\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/filterer/syncval.rs",
    "content": "/// Jaq's [Val](jaq_json::Val) uses Rc, but we want to use in Sync contexts. UGH!\nuse std::{rc::Rc, sync::Arc};\n\nuse indexmap::IndexMap;\nuse jaq_json::Val;\n\n#[derive(Clone, Debug)]\npub enum SyncVal {\n\tNull,\n\tBool(bool),\n\tInt(isize),\n\tFloat(f64),\n\tNum(Arc<str>),\n\tStr(Arc<str>),\n\tArr(Arc<[SyncVal]>),\n\tObj(Arc<IndexMap<Arc<str>, SyncVal>>),\n}\n\nimpl From<&Val> for SyncVal {\n\tfn from(val: &Val) -> Self {\n\t\tmatch val {\n\t\t\tVal::Null => Self::Null,\n\t\t\tVal::Bool(b) => Self::Bool(*b),\n\t\t\tVal::Int(i) => Self::Int(*i),\n\t\t\tVal::Float(f) => Self::Float(*f),\n\t\t\tVal::Num(s) => Self::Num(s.to_string().into()),\n\t\t\tVal::Str(s) => Self::Str(s.to_string().into()),\n\t\t\tVal::Arr(a) => Self::Arr({\n\t\t\t\tlet mut arr = Vec::with_capacity(a.len());\n\t\t\t\tfor v in a.iter() {\n\t\t\t\t\tarr.push(v.into());\n\t\t\t\t}\n\t\t\t\tarr.into()\n\t\t\t}),\n\t\t\tVal::Obj(m) => Self::Obj(Arc::new({\n\t\t\t\tlet mut map = IndexMap::new();\n\t\t\t\tfor (k, v) in m.iter() {\n\t\t\t\t\tmap.insert(k.to_string().into(), v.into());\n\t\t\t\t}\n\t\t\t\tmap\n\t\t\t})),\n\t\t}\n\t}\n}\n\nimpl From<&SyncVal> for Val {\n\tfn from(val: &SyncVal) -> Self {\n\t\tmatch val {\n\t\t\tSyncVal::Null => Self::Null,\n\t\t\tSyncVal::Bool(b) => Self::Bool(*b),\n\t\t\tSyncVal::Int(i) => Self::Int(*i),\n\t\t\tSyncVal::Float(f) => Self::Float(*f),\n\t\t\tSyncVal::Num(s) => Self::Num(s.to_string().into()),\n\t\t\tSyncVal::Str(s) => Self::Str(s.to_string().into()),\n\t\t\tSyncVal::Arr(a) => Self::Arr({\n\t\t\t\tlet mut arr = Vec::with_capacity(a.len());\n\t\t\t\tfor v in a.iter() {\n\t\t\t\t\tarr.push(v.into());\n\t\t\t\t}\n\t\t\t\tarr.into()\n\t\t\t}),\n\t\t\tSyncVal::Obj(m) => Self::Obj(Rc::new({\n\t\t\t\tlet mut map: IndexMap<_, _, foldhash::fast::RandomState> = Default::default();\n\t\t\t\tfor (k, v) in m.iter() {\n\t\t\t\t\tmap.insert(k.to_string().into(), v.into());\n\t\t\t\t}\n\t\t\t\tmap\n\t\t\t})),\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/filterer.rs",
    "content": "use std::{\n\tffi::OsString,\n\tpath::{Path, PathBuf, MAIN_SEPARATOR},\n\tsync::Arc,\n};\n\nuse miette::{IntoDiagnostic, Result};\nuse tokio::io::{AsyncBufReadExt, BufReader};\nuse tracing::{info, trace, trace_span};\nuse watchexec::{error::RuntimeError, filter::Filterer};\nuse watchexec_events::{\n\tfilekind::{FileEventKind, ModifyKind},\n\tEvent, Priority, Tag,\n};\nuse watchexec_filterer_globset::GlobsetFilterer;\n\nuse crate::args::{filtering::FsEvent, Args};\n\npub mod parse;\nmod proglib;\nmod progs;\nmod syncval;\n\n/// A custom filterer that combines the library's Globset filterer and a switch for --no-meta\n#[derive(Debug)]\npub struct WatchexecFilterer {\n\tinner: GlobsetFilterer,\n\tfs_events: Vec<FsEvent>,\n\tprogs: Option<progs::FilterProgs>,\n}\n\nimpl Filterer for WatchexecFilterer {\n\t#[tracing::instrument(level = \"trace\", skip(self))]\n\tfn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> {\n\t\tfor tag in &event.tags {\n\t\t\tif let Tag::FileEventKind(fek) = tag {\n\t\t\t\tlet normalised = match fek {\n\t\t\t\t\tFileEventKind::Access(_) => FsEvent::Access,\n\t\t\t\t\tFileEventKind::Modify(ModifyKind::Name(_)) => FsEvent::Rename,\n\t\t\t\t\tFileEventKind::Modify(ModifyKind::Metadata(_)) => FsEvent::Metadata,\n\t\t\t\t\tFileEventKind::Modify(_) => FsEvent::Modify,\n\t\t\t\t\tFileEventKind::Create(_) => FsEvent::Create,\n\t\t\t\t\tFileEventKind::Remove(_) => FsEvent::Remove,\n\t\t\t\t\t_ => continue,\n\t\t\t\t};\n\n\t\t\t\ttrace!(allowed=?self.fs_events, this=?normalised, \"check against fs event filter\");\n\t\t\t\tif !self.fs_events.contains(&normalised) {\n\t\t\t\t\treturn Ok(false);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\ttrace!(\"check against original event\");\n\t\tif !self.inner.check_event(event, priority)? {\n\t\t\treturn Ok(false);\n\t\t}\n\n\t\tif let Some(progs) = &self.progs {\n\t\t\ttrace!(\"check against program filters\");\n\t\t\tif !progs.check(event)? 
{\n\t\t\t\treturn Ok(false);\n\t\t\t}\n\t\t}\n\n\t\tOk(true)\n\t}\n}\n\nimpl WatchexecFilterer {\n\t/// Create a new filterer from the given arguments\n\tpub async fn new(args: &Args) -> Result<Arc<Self>> {\n\t\tlet project_origin = args.filtering.project_origin.clone().unwrap();\n\t\tlet workdir = args.command.workdir.clone().unwrap();\n\n\t\tlet ignore_files = if args.filtering.no_discover_ignore {\n\t\t\tVec::new()\n\t\t} else {\n\t\t\tlet vcs_types = crate::dirs::vcs_types(&project_origin).await;\n\t\t\tcrate::dirs::ignores(args, &vcs_types).await?\n\t\t};\n\n\t\tlet mut ignores = Vec::new();\n\n\t\tif !args.filtering.no_default_ignore {\n\t\t\tignores.extend([\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}.DS_Store\"), None),\n\t\t\t\t(String::from(\"watchexec.*.log\"), None),\n\t\t\t\t(String::from(\"*.py[co]\"), None),\n\t\t\t\t(String::from(\"#*#\"), None),\n\t\t\t\t(String::from(\".#*\"), None),\n\t\t\t\t(String::from(\".*.kate-swp\"), None),\n\t\t\t\t(String::from(\".*.sw?\"), None),\n\t\t\t\t(String::from(\".*.sw?x\"), None),\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}.bzr{MAIN_SEPARATOR}**\"), None),\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}_darcs{MAIN_SEPARATOR}**\"), None),\n\t\t\t\t(\n\t\t\t\t\tformat!(\"**{MAIN_SEPARATOR}.fossil-settings{MAIN_SEPARATOR}**\"),\n\t\t\t\t\tNone,\n\t\t\t\t),\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}.git{MAIN_SEPARATOR}**\"), None),\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}.hg{MAIN_SEPARATOR}**\"), None),\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}.pijul{MAIN_SEPARATOR}**\"), None),\n\t\t\t\t(format!(\"**{MAIN_SEPARATOR}.svn{MAIN_SEPARATOR}**\"), None),\n\t\t\t]);\n\t\t}\n\n\t\tlet whitelist = args\n\t\t\t.filtering\n\t\t\t.paths\n\t\t\t.iter()\n\t\t\t.map(std::convert::Into::into)\n\t\t\t.filter(|p: &PathBuf| p.is_file());\n\n\t\tlet mut filters = args\n\t\t\t.filtering\n\t\t\t.filter_patterns\n\t\t\t.iter()\n\t\t\t.map(|f| (f.to_owned(), Some(workdir.clone())))\n\t\t\t.collect::<Vec<_>>();\n\n\t\tfor filter_file in &args.filtering.filter_files 
{\n\t\t\tfilters.extend(read_filter_file(filter_file).await?);\n\t\t}\n\n\t\tignores.extend(\n\t\t\targs.filtering\n\t\t\t\t.ignore_patterns\n\t\t\t\t.iter()\n\t\t\t\t.map(|f| (f.to_owned(), Some(workdir.clone()))),\n\t\t);\n\n\t\tlet exts = args\n\t\t\t.filtering\n\t\t\t.filter_extensions\n\t\t\t.iter()\n\t\t\t.map(|e| OsString::from(e.strip_prefix('.').unwrap_or(e)));\n\n\t\tinfo!(\"initialising Globset filterer\");\n\t\tOk(Arc::new(Self {\n\t\t\tinner: GlobsetFilterer::new(\n\t\t\t\tproject_origin,\n\t\t\t\tfilters,\n\t\t\t\tignores,\n\t\t\t\twhitelist,\n\t\t\t\tignore_files,\n\t\t\t\texts,\n\t\t\t)\n\t\t\t.await\n\t\t\t.into_diagnostic()?,\n\t\t\tfs_events: args.filtering.filter_fs_events.clone(),\n\t\t\tprogs: if args.filtering.filter_programs_parsed.is_empty() {\n\t\t\t\tNone\n\t\t\t} else {\n\t\t\t\tSome(progs::FilterProgs::new(args)?)\n\t\t\t},\n\t\t}))\n\t}\n}\n\nasync fn read_filter_file(path: &Path) -> Result<Vec<(String, Option<PathBuf>)>> {\n\tlet _span = trace_span!(\"loading filter file\", ?path).entered();\n\n\tlet file = tokio::fs::File::open(path).await.into_diagnostic()?;\n\n\tlet metadata_len = file\n\t\t.metadata()\n\t\t.await\n\t\t.map(|m| usize::try_from(m.len()))\n\t\t.unwrap_or(Ok(0))\n\t\t.into_diagnostic()?;\n\tlet filter_capacity = if metadata_len == 0 {\n\t\t0\n\t} else {\n\t\tmetadata_len / 20\n\t};\n\tlet mut filters = Vec::with_capacity(filter_capacity);\n\n\tlet reader = BufReader::new(file);\n\tlet mut lines = reader.lines();\n\twhile let Some(line) = lines.next_line().await.into_diagnostic()? {\n\t\tlet line = line.trim();\n\t\tif line.is_empty() || line.starts_with('#') {\n\t\t\tcontinue;\n\t\t}\n\n\t\ttrace!(?line, \"adding filter line\");\n\t\tfilters.push((line.to_owned(), Some(path.to_owned())));\n\t}\n\n\tOk(filters)\n}\n"
  },
  {
    "path": "crates/cli/src/lib.rs",
    "content": "#![deny(rust_2018_idioms)]\n#![allow(clippy::missing_const_for_fn, clippy::future_not_send)]\n\nuse std::{\n\tio::{IsTerminal, Write},\n\tprocess::{ExitCode, Stdio},\n};\n\nuse clap::CommandFactory;\nuse clap_complete::{Generator, Shell};\nuse clap_mangen::Man;\nuse miette::{IntoDiagnostic, Result};\nuse std::sync::Arc;\nuse tokio::{io::AsyncWriteExt, process::Command};\nuse tracing::{debug, info};\nuse watchexec::Watchexec;\nuse watchexec_events::{Event, Priority};\n\nuse crate::{\n\targs::{Args, ShellCompletion},\n\tfilterer::WatchexecFilterer,\n};\n\npub mod args;\nmod config;\nmod dirs;\nmod emits;\nmod filterer;\nmod socket;\nmod state;\n\nasync fn run_watchexec(args: Args, state: state::State) -> Result<()> {\n\tinfo!(version=%env!(\"CARGO_PKG_VERSION\"), \"constructing Watchexec from CLI\");\n\n\tlet config = config::make_config(&args, &state)?;\n\tconfig.filterer(WatchexecFilterer::new(&args).await?);\n\n\tinfo!(\"initialising Watchexec runtime\");\n\tlet wx = Arc::new(Watchexec::with_config(config)?);\n\n\t// Set the watchexec reference in state so it can be used for sending synthetic events\n\tstate\n\t\t.watchexec\n\t\t.set(wx.clone())\n\t\t.expect(\"watchexec reference already set\");\n\n\tif !args.events.postpone {\n\t\tdebug!(\"kicking off with empty event\");\n\t\twx.send_event(Event::default(), Priority::Urgent).await?;\n\t}\n\n\tif args.events.interactive {\n\t\teprintln!(\"[Interactive] q: quit, p: pause/unpause, r: restart\");\n\t}\n\n\tinfo!(\"running main loop\");\n\twx.main().await.into_diagnostic()??;\n\n\tif matches!(\n\t\targs.output.screen_clear,\n\t\tSome(args::output::ClearMode::Reset)\n\t) {\n\t\tconfig::reset_screen();\n\t}\n\n\tinfo!(\"done with main loop\");\n\n\tOk(())\n}\n\nasync fn run_manpage() -> Result<()> {\n\tinfo!(version=%env!(\"CARGO_PKG_VERSION\"), \"constructing manpage\");\n\n\tlet man = Man::new(Args::command().long_version(None));\n\tlet mut buffer: Vec<u8> = Default::default();\n\tman.render(&mut 
buffer).into_diagnostic()?;\n\n\tif std::io::stdout().is_terminal() && which::which(\"man\").is_ok() {\n\t\tlet mut child = Command::new(\"man\")\n\t\t\t.arg(\"-l\")\n\t\t\t.arg(\"-\")\n\t\t\t.stdin(Stdio::piped())\n\t\t\t.stdout(Stdio::inherit())\n\t\t\t.stderr(Stdio::inherit())\n\t\t\t.kill_on_drop(true)\n\t\t\t.spawn()\n\t\t\t.into_diagnostic()?;\n\t\tchild\n\t\t\t.stdin\n\t\t\t.as_mut()\n\t\t\t.unwrap()\n\t\t\t.write_all(&buffer)\n\t\t\t.await\n\t\t\t.into_diagnostic()?;\n\n\t\tif let Some(code) = child\n\t\t\t.wait()\n\t\t\t.await\n\t\t\t.into_diagnostic()?\n\t\t\t.code()\n\t\t\t.and_then(|code| if code == 0 { None } else { Some(code) })\n\t\t{\n\t\t\treturn Err(miette::miette!(\"Exited with status code {}\", code));\n\t\t}\n\t} else {\n\t\tstd::io::stdout()\n\t\t\t.lock()\n\t\t\t.write_all(&buffer)\n\t\t\t.into_diagnostic()?;\n\t}\n\n\tOk(())\n}\n\n#[allow(clippy::unused_async)]\nasync fn run_completions(shell: ShellCompletion) -> Result<()> {\n\tfn generate(generator: impl Generator) {\n\t\tlet mut cmd = Args::command();\n\t\tclap_complete::generate(generator, &mut cmd, \"watchexec\", &mut std::io::stdout());\n\t}\n\n\tinfo!(version=%env!(\"CARGO_PKG_VERSION\"), \"constructing completions\");\n\n\tmatch shell {\n\t\tShellCompletion::Bash => generate(Shell::Bash),\n\t\tShellCompletion::Elvish => generate(Shell::Elvish),\n\t\tShellCompletion::Fish => generate(Shell::Fish),\n\t\tShellCompletion::Nu => generate(clap_complete_nushell::Nushell),\n\t\tShellCompletion::Powershell => generate(Shell::PowerShell),\n\t\tShellCompletion::Zsh => generate(Shell::Zsh),\n\t}\n\n\tOk(())\n}\n\npub async fn run() -> Result<ExitCode> {\n\tlet (args, _guards) = args::get_args().await?;\n\n\tOk(if args.manual {\n\t\trun_manpage().await?;\n\t\tExitCode::SUCCESS\n\t} else if let Some(shell) = args.completions {\n\t\trun_completions(shell).await?;\n\t\tExitCode::SUCCESS\n\t} else {\n\t\tlet state = state::new(&args).await?;\n\t\trun_watchexec(args, state.clone()).await?;\n\t\tlet 
exit = *(state.exit_code.lock().unwrap());\n\t\texit\n\t})\n}\n"
  },
  {
    "path": "crates/cli/src/main.rs",
    "content": "#[cfg(feature = \"eyra\")]\nextern crate eyra;\n\nuse std::process::ExitCode;\n\nuse miette::IntoDiagnostic;\n\n#[cfg(target_env = \"musl\")]\n#[global_allocator]\nstatic GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;\n\nfn main() -> miette::Result<ExitCode> {\n\t#[cfg(feature = \"pid1\")]\n\tpid1::Pid1Settings::new()\n\t\t.enable_log(cfg!(feature = \"pid1-withlog\"))\n\t\t.launch()\n\t\t.into_diagnostic()?;\n\n\ttokio::runtime::Builder::new_multi_thread()\n\t\t.enable_all()\n\t\t.build()\n\t\t.unwrap()\n\t\t.block_on(async { watchexec_cli::run().await })\n}\n"
  },
  {
    "path": "crates/cli/src/socket/fallback.rs",
    "content": "use miette::{bail, Result};\n\nuse crate::args::command::EnvVar;\n\nuse super::{SocketSpec, Sockets};\n\n#[derive(Debug)]\npub struct SocketSet;\n\nimpl SocketSet for SocketSet {\n\tasync fn create(_: &[SocketSpec]) -> Result<Self> {\n\t\tbail!(\"--socket is not supported on your platform\")\n\t}\n\n\tfn envs(&self) -> Vec<EnvVar> {\n\t\tVec::new()\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/socket/parser.rs",
    "content": "use std::{\n\tffi::OsStr,\n\tnet::{IpAddr, Ipv4Addr, SocketAddr},\n\tnum::{IntErrorKind, NonZero},\n\tstr::FromStr,\n};\n\nuse clap::{\n\tbuilder::TypedValueParser,\n\terror::{Error, ErrorKind},\n};\nuse miette::Result;\n\nuse super::{SocketSpec, SocketType};\n\n#[derive(Clone)]\npub(crate) struct SocketSpecValueParser;\n\nimpl TypedValueParser for SocketSpecValueParser {\n\ttype Value = SocketSpec;\n\n\tfn parse_ref(\n\t\t&self,\n\t\t_cmd: &clap::Command,\n\t\t_arg: Option<&clap::Arg>,\n\t\tvalue: &OsStr,\n\t) -> Result<Self::Value, Error> {\n\t\tlet value = value\n\t\t\t.to_str()\n\t\t\t.ok_or_else(|| Error::raw(ErrorKind::ValueValidation, \"invalid UTF-8\"))?\n\t\t\t.to_ascii_lowercase();\n\n\t\tlet (socket, value) = if let Some(val) = value.strip_prefix(\"tcp::\") {\n\t\t\t(SocketType::Tcp, val)\n\t\t} else if let Some(val) = value.strip_prefix(\"udp::\") {\n\t\t\t(SocketType::Udp, val)\n\t\t} else if let Some((pre, _)) = value.split_once(\"::\") {\n\t\t\tif !pre.starts_with(\"[\") {\n\t\t\t\treturn Err(Error::raw(\n\t\t\t\t\tErrorKind::ValueValidation,\n\t\t\t\t\tformat!(\"invalid prefix {pre:?}\"),\n\t\t\t\t));\n\t\t\t}\n\n\t\t\t(SocketType::Tcp, value.as_ref())\n\t\t} else {\n\t\t\t(SocketType::Tcp, value.as_ref())\n\t\t};\n\n\t\tlet addr = if let Ok(addr) = SocketAddr::from_str(value) {\n\t\t\taddr\n\t\t} else {\n\t\t\tmatch NonZero::<u16>::from_str(value) {\n\t\t\t\tOk(port) => SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), port.get()),\n\t\t\t\tErr(err) if *err.kind() == IntErrorKind::Zero => {\n\t\t\t\t\treturn Err(Error::raw(\n\t\t\t\t\t\tErrorKind::ValueValidation,\n\t\t\t\t\t\t\"invalid port number: cannot be zero\",\n\t\t\t\t\t))\n\t\t\t\t}\n\t\t\t\tErr(err) if *err.kind() == IntErrorKind::PosOverflow => {\n\t\t\t\t\treturn Err(Error::raw(\n\t\t\t\t\t\tErrorKind::ValueValidation,\n\t\t\t\t\t\t\"invalid port number: greater than 65535\",\n\t\t\t\t\t))\n\t\t\t\t}\n\t\t\t\tErr(_) => {\n\t\t\t\t\treturn 
Err(Error::raw(\n\t\t\t\t\t\tErrorKind::ValueValidation,\n\t\t\t\t\t\t\"invalid port number\",\n\t\t\t\t\t))\n\t\t\t\t}\n\t\t\t}\n\t\t};\n\n\t\tOk(SocketSpec { socket, addr })\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/socket/test.rs",
    "content": "use crate::args::Args;\n\nuse super::*;\nuse clap::{builder::TypedValueParser, CommandFactory};\nuse std::{\n\tffi::OsStr,\n\tnet::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6},\n};\n\n#[test]\nfn parse_port_only() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"8080\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Tcp,\n\t\t\taddr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_addr_port_v4() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"1.2.3.4:38192\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Tcp,\n\t\t\taddr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 38192)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_addr_port_v6() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"[ff64::1234]:81\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Tcp,\n\t\t\taddr: SocketAddr::V6(SocketAddrV6::new(\n\t\t\t\tIpv6Addr::new(0xff64, 0, 0, 0, 0, 0, 0, 0x1234),\n\t\t\t\t81,\n\t\t\t\t0,\n\t\t\t\t0\n\t\t\t)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_port_only_explicit_tcp() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"tcp::443\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Tcp,\n\t\t\taddr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 443)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_addr_port_v4_explicit_tcp() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"tcp::1.2.3.4:38192\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Tcp,\n\t\t\taddr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 38192)),\n\t\t}\n\t);\n}\n\n#[test]\nfn 
parse_addr_port_v6_explicit_tcp() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"tcp::[ff64::1234]:81\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Tcp,\n\t\t\taddr: SocketAddr::V6(SocketAddrV6::new(\n\t\t\t\tIpv6Addr::new(0xff64, 0, 0, 0, 0, 0, 0, 0x1234),\n\t\t\t\t81,\n\t\t\t\t0,\n\t\t\t\t0\n\t\t\t)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_port_only_explicit_udp() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"udp::443\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Udp,\n\t\t\taddr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 443)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_addr_port_v4_explicit_udp() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"udp::1.2.3.4:38192\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Udp,\n\t\t\taddr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 38192)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_addr_port_v6_explicit_udp() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"udp::[ff64::1234]:81\"))\n\t\t\t.unwrap(),\n\t\tSocketSpec {\n\t\t\tsocket: SocketType::Udp,\n\t\t\taddr: SocketAddr::V6(SocketAddrV6::new(\n\t\t\t\tIpv6Addr::new(0xff64, 0, 0, 0, 0, 0, 0, 0x1234),\n\t\t\t\t81,\n\t\t\t\t0,\n\t\t\t\t0\n\t\t\t)),\n\t\t}\n\t);\n}\n\n#[test]\nfn parse_bad_prefix() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"gopher::777\"))\n\t\t\t.unwrap_err()\n\t\t\t.to_string(),\n\t\tString::from(r#\"error: invalid prefix \"gopher\"\"#),\n\t);\n}\n\n#[test]\nfn parse_bad_port_zero() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, 
OsStr::new(\"0\"))\n\t\t\t.unwrap_err()\n\t\t\t.to_string(),\n\t\tString::from(\"error: invalid port number: cannot be zero\"),\n\t);\n}\n\n#[test]\nfn parse_bad_port_high() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"100000\"))\n\t\t\t.unwrap_err()\n\t\t\t.to_string(),\n\t\tString::from(\"error: invalid port number: greater than 65535\"),\n\t);\n}\n\n#[test]\nfn parse_bad_port_alpha() {\n\tlet cmd = Args::command();\n\tassert_eq!(\n\t\tSocketSpecValueParser\n\t\t\t.parse_ref(&cmd, None, OsStr::new(\"port\"))\n\t\t\t.unwrap_err()\n\t\t\t.to_string(),\n\t\tString::from(\"error: invalid port number\"),\n\t);\n}\n"
  },
  {
    "path": "crates/cli/src/socket/unix.rs",
    "content": "use std::os::fd::{AsRawFd, OwnedFd};\n\nuse miette::{IntoDiagnostic, Result};\nuse nix::sys::socket::{\n\tbind, listen, setsockopt, socket, sockopt, AddressFamily, Backlog, SockFlag, SockType,\n\tSockaddrStorage,\n};\nuse tracing::instrument;\n\nuse crate::args::command::EnvVar;\n\nuse super::{SocketSpec, SocketType, Sockets};\n\n#[derive(Debug)]\npub struct SocketSet {\n\tfds: Vec<OwnedFd>,\n}\n\nimpl Sockets for SocketSet {\n\t#[instrument(level = \"trace\")]\n\tasync fn create(specs: &[SocketSpec]) -> Result<Self> {\n\t\tdebug_assert!(!specs.is_empty());\n\t\tspecs\n\t\t\t.into_iter()\n\t\t\t.map(SocketSpec::create)\n\t\t\t.collect::<Result<Vec<_>>>()\n\t\t\t.map(|fds| Self { fds })\n\t}\n\n\t#[instrument(level = \"trace\")]\n\tfn envs(&self) -> Vec<EnvVar> {\n\t\tvec![\n\t\t\tEnvVar {\n\t\t\t\tkey: \"LISTEN_FDS\".into(),\n\t\t\t\tvalue: self.fds.len().to_string().into(),\n\t\t\t},\n\t\t\tEnvVar {\n\t\t\t\tkey: \"LISTEN_FDS_FIRST_FD\".into(),\n\t\t\t\tvalue: self.fds.first().unwrap().as_raw_fd().to_string().into(),\n\t\t\t},\n\t\t]\n\t}\n}\n\nimpl SocketSpec {\n\tfn create(&self) -> Result<OwnedFd> {\n\t\tlet addr = SockaddrStorage::from(self.addr);\n\t\tlet fam = if self.addr.is_ipv4() {\n\t\t\tAddressFamily::Inet\n\t\t} else {\n\t\t\tAddressFamily::Inet6\n\t\t};\n\t\tlet ty = match self.socket {\n\t\t\tSocketType::Tcp => SockType::Stream,\n\t\t\tSocketType::Udp => SockType::Datagram,\n\t\t};\n\n\t\tlet sock = socket(fam, ty, SockFlag::empty(), None).into_diagnostic()?;\n\n\t\tsetsockopt(&sock, sockopt::ReuseAddr, &true).into_diagnostic()?;\n\n\t\tif matches!(fam, AddressFamily::Inet | AddressFamily::Inet6) {\n\t\t\tsetsockopt(&sock, sockopt::ReusePort, &true).into_diagnostic()?;\n\t\t}\n\n\t\tbind(sock.as_raw_fd(), &addr).into_diagnostic()?;\n\n\t\tif let SocketType::Tcp = self.socket {\n\t\t\tlisten(&sock, Backlog::new(1).unwrap()).into_diagnostic()?;\n\t\t}\n\n\t\tOk(sock)\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/socket/windows.rs",
    "content": "use std::{\n\tio::ErrorKind,\n\tnet::SocketAddr,\n\tos::windows::io::{AsRawSocket, OwnedSocket},\n\tstr::FromStr,\n\tsync::Arc,\n};\n\nuse miette::{IntoDiagnostic, Result};\nuse tokio::{\n\tio::{AsyncReadExt, AsyncWriteExt},\n\tnet::{TcpListener, TcpStream},\n\ttask::spawn,\n};\nuse tracing::instrument;\nuse uuid::Uuid;\nuse windows_sys::Win32::Networking::WinSock::{WSADuplicateSocketW, SOCKET, WSAPROTOCOL_INFOW};\n\nuse crate::args::command::EnvVar;\n\nuse super::{SocketSpec, SocketType, Sockets};\n\n#[derive(Debug)]\npub struct SocketSet {\n\tsockets: Arc<[OwnedSocket]>,\n\tsecret: Uuid,\n\tserver: Option<TcpListener>,\n\tserver_addr: SocketAddr,\n}\n\nimpl Sockets for SocketSet {\n\t#[instrument(level = \"trace\")]\n\tasync fn create(specs: &[SocketSpec]) -> Result<Self> {\n\t\tdebug_assert!(!specs.is_empty());\n\t\tlet sockets = specs\n\t\t\t.into_iter()\n\t\t\t.map(SocketSpec::create)\n\t\t\t.collect::<Result<Vec<_>>>()?;\n\n\t\tlet server = TcpListener::bind(\"127.0.0.1:0\").await.into_diagnostic()?;\n\t\tlet server_addr = server.local_addr().into_diagnostic()?;\n\n\t\tOk(Self {\n\t\t\tsockets: sockets.into(),\n\t\t\tsecret: Uuid::new_v4(),\n\t\t\tserver: Some(server),\n\t\t\tserver_addr,\n\t\t})\n\t}\n\n\t#[instrument(level = \"trace\")]\n\tfn envs(&self) -> Vec<EnvVar> {\n\t\tvec![\n\t\t\tEnvVar {\n\t\t\t\tkey: \"SYSTEMFD_SOCKET_SERVER\".into(),\n\t\t\t\tvalue: self.server_addr.to_string().into(),\n\t\t\t},\n\t\t\tEnvVar {\n\t\t\t\tkey: \"SYSTEMFD_SOCKET_SECRET\".into(),\n\t\t\t\tvalue: self.secret.to_string().into(),\n\t\t\t},\n\t\t]\n\t}\n\n\t#[instrument(level = \"trace\", skip(self))]\n\tfn serve(&mut self) {\n\t\tlet listener = self.server.take().unwrap();\n\t\tlet secret = self.secret;\n\t\tlet sockets = self.sockets.clone();\n\t\tspawn(async move {\n\t\t\tloop {\n\t\t\t\tlet Ok((stream, _)) = listener.accept().await else {\n\t\t\t\t\tbreak;\n\t\t\t\t};\n\n\t\t\t\tspawn(provide_sockets(stream, sockets.clone(), 
secret));\n\t\t\t}\n\t\t});\n\t}\n}\n\nasync fn provide_sockets(\n\tmut stream: TcpStream,\n\tsockets: Arc<[OwnedSocket]>,\n\tsecret: Uuid,\n) -> std::io::Result<()> {\n\tlet mut data = Vec::new();\n\tstream.read_to_end(&mut data).await?;\n\tlet Ok(out) = String::from_utf8(data) else {\n\t\treturn Err(ErrorKind::InvalidInput.into());\n\t};\n\n\tlet Some((challenge, pid)) = out.split_once('|') else {\n\t\treturn Err(ErrorKind::InvalidInput.into());\n\t};\n\n\tlet Ok(uuid) = Uuid::from_str(challenge) else {\n\t\treturn Err(ErrorKind::InvalidInput.into());\n\t};\n\n\tlet Ok(pid) = u32::from_str(pid) else {\n\t\treturn Err(ErrorKind::InvalidInput.into());\n\t};\n\n\tif uuid != secret {\n\t\treturn Err(ErrorKind::InvalidData.into());\n\t}\n\n\tfor socket in sockets.iter() {\n\t\tlet payload = socket_to_payload(socket, pid)?;\n\t\tstream.write_all(&payload).await?;\n\t}\n\n\tstream.shutdown().await\n}\n\nfn socket_to_payload(socket: &OwnedSocket, pid: u32) -> std::io::Result<Vec<u8>> {\n\t// SAFETY:\n\t// - we're not reading from this until it gets populated by WSADuplicateSocketW\n\t// - the struct is entirely integers and arrays of integers\n\tlet mut proto_info: WSAPROTOCOL_INFOW = unsafe { std::mem::zeroed() };\n\n\t// SAFETY: ffi\n\tif unsafe { WSADuplicateSocketW(socket.as_raw_socket() as SOCKET, pid, &mut proto_info) } != 0 {\n\t\treturn Err(ErrorKind::InvalidData.into());\n\t}\n\n\t// SAFETY:\n\t// - non-nullability, alignment, and contiguousness are taken care of by serialising a single value\n\t// - WSAPROTOCOL_INFOW is repr(C)\n\t// - we don't mutate that memory (we immediately to_vec it)\n\t// - we have its exact size\n\tOk(unsafe {\n\t\tlet bytes: *const u8 = &proto_info as *const WSAPROTOCOL_INFOW as *const _;\n\t\tstd::slice::from_raw_parts(bytes, std::mem::size_of::<WSAPROTOCOL_INFOW>())\n\t}\n\t.to_vec())\n}\n\nimpl SocketSpec {\n\tfn create(&self) -> Result<OwnedSocket> {\n\t\tuse socket2::{Domain, SockAddr, Socket, Type};\n\n\t\tlet addr = 
SockAddr::from(self.addr);\n\t\tlet dom = if self.addr.is_ipv4() {\n\t\t\tDomain::IPV4\n\t\t} else {\n\t\t\tDomain::IPV6\n\t\t};\n\t\tlet ty = match self.socket {\n\t\t\tSocketType::Tcp => Type::STREAM,\n\t\t\tSocketType::Udp => Type::DGRAM,\n\t\t};\n\n\t\tlet sock = Socket::new(dom, ty, None).into_diagnostic()?;\n\t\tsock.set_reuse_address(true).into_diagnostic()?;\n\t\tsock.bind(&addr).into_diagnostic()?;\n\n\t\tif let SocketType::Tcp = self.socket {\n\t\t\tsock.listen(1).into_diagnostic()?;\n\t\t}\n\n\t\tOk(sock.into())\n\t}\n}\n"
  },
  {
    "path": "crates/cli/src/socket.rs",
    "content": "// listen-fd code inspired by systemdfd source by @mitsuhiko (Apache-2.0)\n// https://github.com/mitsuhiko/systemfd/blob/master/src/fd.rs\n\nuse std::net::SocketAddr;\n\nuse clap::ValueEnum;\nuse miette::Result;\n\npub(crate) use imp::*;\npub(crate) use parser::SocketSpecValueParser;\n\nuse crate::args::command::EnvVar;\n\n#[cfg(unix)]\n#[path = \"socket/unix.rs\"]\nmod imp;\n#[cfg(windows)]\n#[path = \"socket/windows.rs\"]\nmod imp;\n#[cfg(not(any(unix, windows)))]\n#[path = \"socket/fallback.rs\"]\nmod imp;\nmod parser;\n#[cfg(test)]\nmod test;\n\n#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, ValueEnum)]\npub enum SocketType {\n\t#[default]\n\tTcp,\n\tUdp,\n}\n\n#[derive(Clone, Copy, Debug, PartialEq, Eq)]\npub struct SocketSpec {\n\tpub socket: SocketType,\n\tpub addr: SocketAddr,\n}\n\npub(crate) trait Sockets\nwhere\n\tSelf: Sized,\n{\n\tasync fn create(specs: &[SocketSpec]) -> Result<Self>;\n\tfn envs(&self) -> Vec<EnvVar>;\n\tfn serve(&mut self) {}\n}\n"
  },
  {
    "path": "crates/cli/src/state.rs",
    "content": "use std::{\n\tenv::var_os,\n\tio::Write,\n\tpath::PathBuf,\n\tprocess::ExitCode,\n\tsync::{Arc, Mutex, OnceLock},\n};\n\nuse watchexec::Watchexec;\n\nuse miette::{IntoDiagnostic, Result};\nuse tempfile::NamedTempFile;\n\nuse crate::{\n\targs::Args,\n\tsocket::{SocketSet, Sockets},\n};\n\npub type State = Arc<InnerState>;\n\npub async fn new(args: &Args) -> Result<State> {\n\tlet socket_set = if args.command.socket.is_empty() {\n\t\tNone\n\t} else {\n\t\tlet mut sockets = SocketSet::create(&args.command.socket).await?;\n\t\tsockets.serve();\n\t\tSome(sockets)\n\t};\n\n\tOk(Arc::new(InnerState {\n\t\temit_file: RotatingTempFile::default(),\n\t\tsocket_set,\n\t\texit_code: Mutex::new(ExitCode::SUCCESS),\n\t\twatchexec: OnceLock::new(),\n\t}))\n}\n\n#[derive(Debug)]\npub struct InnerState {\n\tpub emit_file: RotatingTempFile,\n\tpub socket_set: Option<SocketSet>,\n\tpub exit_code: Mutex<ExitCode>,\n\t/// Reference to the Watchexec instance, set after creation.\n\t/// Used to send synthetic events (e.g., to trigger immediate quit on error).\n\tpub watchexec: OnceLock<Arc<Watchexec>>,\n}\n\n#[derive(Debug, Default)]\npub struct RotatingTempFile(Mutex<Option<NamedTempFile>>);\n\nimpl RotatingTempFile {\n\tpub fn rotate(&self) -> Result<()> {\n\t\t// implicitly drops the old file\n\t\t*self.0.lock().unwrap() = Some(\n\t\t\tif let Some(dir) = var_os(\"WATCHEXEC_TMPDIR\") {\n\t\t\t\tNamedTempFile::new_in(dir)\n\t\t\t} else {\n\t\t\t\tNamedTempFile::new()\n\t\t\t}\n\t\t\t.into_diagnostic()?,\n\t\t);\n\t\tOk(())\n\t}\n\n\tpub fn write(&self, data: &[u8]) -> Result<()> {\n\t\tif let Some(file) = self.0.lock().unwrap().as_mut() {\n\t\t\tfile.write_all(data).into_diagnostic()?;\n\t\t}\n\n\t\tOk(())\n\t}\n\n\tpub fn path(&self) -> PathBuf {\n\t\tif let Some(file) = self.0.lock().unwrap().as_ref() {\n\t\t\tfile.path().to_owned()\n\t\t} else {\n\t\t\tPathBuf::new()\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/cli/tests/common/mod.rs",
    "content": "use std::path::PathBuf;\nuse std::{fs, sync::OnceLock};\n\nuse miette::{Context, IntoDiagnostic, Result};\nuse rand::Rng;\n\nstatic PLACEHOLDER_DATA: OnceLock<String> = OnceLock::new();\nfn get_placeholder_data() -> &'static str {\n\tPLACEHOLDER_DATA.get_or_init(|| \"PLACEHOLDER\\n\".repeat(500))\n}\n\n/// The amount of nesting that will be used for generated files\n#[derive(Debug, Clone, PartialEq, Eq)]\npub enum GeneratedFileNesting {\n\t/// Only one level of files\n\tFlat,\n\t/// Random, up to a certiain maximum\n\tRandomToMax(usize),\n}\n\n/// Configuration for creating testing subfolders\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct TestSubfolderConfiguration {\n\t/// The amount of nesting that will be used when folders are generated\n\tpub(crate) nesting: GeneratedFileNesting,\n\n\t/// Number of files the folder should contain\n\tpub(crate) file_count: usize,\n\n\t/// Subfolder name\n\tpub(crate) name: String,\n}\n\n/// Options for generating test files\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\npub struct GenerateTestFilesArgs {\n\t/// The path where the files should be generated\n\t/// if None, the current working directory will be used.\n\tpub(crate) path: Option<PathBuf>,\n\n\t/// Configurations for subfolders to generate\n\tpub(crate) subfolder_configs: Vec<TestSubfolderConfiguration>,\n}\n\n/// Generate test files\n///\n/// This returns the same number of paths that were requested via subfolder_configs.\npub fn generate_test_files(args: GenerateTestFilesArgs) -> Result<Vec<PathBuf>> {\n\t// Use or create a temporary directory for the test files\n\tlet tmpdir = if let Some(p) = args.path {\n\t\tp\n\t} else {\n\t\ttempfile::tempdir()\n\t\t\t.into_diagnostic()\n\t\t\t.wrap_err(\"failed to build tempdir\")?\n\t\t\t.keep()\n\t};\n\tlet mut paths = vec![tmpdir.clone()];\n\n\t// Generate subfolders matching each config\n\tfor subfolder_config in &args.subfolder_configs {\n\t\t// Create the subfolder path\n\t\tlet subfolder_path = 
tmpdir.join(&subfolder_config.name);\n\t\tfs::create_dir(&subfolder_path)\n\t\t\t.into_diagnostic()\n\t\t\t.wrap_err(format!(\n\t\t\t\t\"failed to create path for dir [{}]\",\n\t\t\t\tsubfolder_path.display()\n\t\t\t))?;\n\t\tpaths.push(subfolder_path.clone());\n\n\t\t// Fill the subfolder with files\n\t\tmatch subfolder_config.nesting {\n\t\t\tGeneratedFileNesting::Flat => {\n\t\t\t\tfor idx in 0..subfolder_config.file_count {\n\t\t\t\t\t// Write stub file contents\n\t\t\t\t\tfs::write(\n\t\t\t\t\t\tsubfolder_path.join(format!(\"stub-file-{idx}\")),\n\t\t\t\t\t\tget_placeholder_data(),\n\t\t\t\t\t)\n\t\t\t\t\t.into_diagnostic()\n\t\t\t\t\t.wrap_err(format!(\n\t\t\t\t\t\t\"failed to write temporary file in subfolder {} @ idx {idx}\",\n\t\t\t\t\t\tsubfolder_path.display()\n\t\t\t\t\t))?;\n\t\t\t\t}\n\t\t\t}\n\t\t\tGeneratedFileNesting::RandomToMax(max_depth) => {\n\t\t\t\tlet mut generator = rand::rng();\n\t\t\t\tfor idx in 0..subfolder_config.file_count {\n\t\t\t\t\t// Build a randomized path up to max depth\n\t\t\t\t\tlet mut generated_path = subfolder_path.clone();\n\t\t\t\t\tlet depth = generator.random_range(0..max_depth);\n\t\t\t\t\tfor _ in 0..depth {\n\t\t\t\t\t\tgenerated_path.push(\"stub-dir\");\n\t\t\t\t\t}\n\t\t\t\t\t// Create the path\n\t\t\t\t\tfs::create_dir_all(&generated_path)\n\t\t\t\t\t\t.into_diagnostic()\n\t\t\t\t\t\t.wrap_err(format!(\n\t\t\t\t\t\t\t\"failed to create randomly generated path [{}]\",\n\t\t\t\t\t\t\tgenerated_path.display()\n\t\t\t\t\t\t))?;\n\n\t\t\t\t\t// Write stub file contents @ the new randomized path\n\t\t\t\t\tfs::write(\n\t\t\t\t\t\tgenerated_path.join(format!(\"stub-file-{idx}\")),\n\t\t\t\t\t\tget_placeholder_data(),\n\t\t\t\t\t)\n\t\t\t\t\t.into_diagnostic()\n\t\t\t\t\t.wrap_err(format!(\n\t\t\t\t\t\t\"failed to write temporary file in subfolder {} @ idx {idx}\",\n\t\t\t\t\t\tsubfolder_path.display()\n\t\t\t\t\t))?;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tOk(paths)\n}\n"
  },
  {
    "path": "crates/cli/tests/ignore.rs",
    "content": "use std::{\n\tpath::{Path, PathBuf},\n\tprocess::Stdio,\n\ttime::Duration,\n};\n\nuse miette::{IntoDiagnostic, Result, WrapErr};\nuse tokio::{process::Command, time::Instant};\nuse tracing_test::traced_test;\nuse uuid::Uuid;\n\nmod common;\nuse common::{generate_test_files, GenerateTestFilesArgs};\n\nuse crate::common::{GeneratedFileNesting, TestSubfolderConfiguration};\n\n/// Directory name that will be sued for the dir that *should* be watched\nconst WATCH_DIR_NAME: &str = \"watch\";\n\n/// The token that watch will echo every time a match is found\nconst WATCH_TOKEN: &str = \"updated\";\n\n/// Ensure that watchexec runtime does not increase with the\n/// number of *ignored* files in a given folder\n///\n/// This test creates two separate folders, one small and the other large\n///\n/// Each folder has two subfolders:\n///   - a shallow one to be watched, with a few files of single depth (20 files)\n///   - a deep one to be ignored, with many files at varying depths (small case 200 files, large case 200,000 files)\n///\n/// watchexec, when executed on *either* folder should *not* experience a more\n/// than 10x degradation in performance, because the vast majority of the files\n/// are supposed to be ignored to begin with.\n///\n/// When running the CLI on the root folders, it should *not* take a long time to start de\n#[tokio::test]\n#[traced_test]\nasync fn e2e_ignore_many_files_200_000() -> Result<()> {\n\t// Create a tempfile so that drop will clean it up\n\tlet small_test_dir = tempfile::tempdir()\n\t\t.into_diagnostic()\n\t\t.wrap_err(\"failed to create tempdir for test use\")?;\n\n\t// Determine the watchexec bin to use & build arguments\n\tlet wexec_bin = std::env::var(\"TEST_WATCHEXEC_BIN\").unwrap_or(\n\t\toption_env!(\"CARGO_BIN_EXE_watchexec\")\n\t\t\t.map(std::string::ToString::to_string)\n\t\t\t.unwrap_or(\"watchexec\".into()),\n\t);\n\tlet token = format!(\"{WATCH_TOKEN}-{}\", Uuid::new_v4());\n\tlet args: Vec<String> = 
vec![\n\t\t\"-1\".into(), // exit as soon as watch completes\n\t\t\"--watch\".into(),\n\t\tWATCH_DIR_NAME.into(),\n\t\t\"echo\".into(),\n\t\ttoken.clone(),\n\t];\n\n\t// Generate a small directory of files containing dirs that *will* and will *not* be watched\n\tlet [ref root_dir_path, _, _] = generate_test_files(GenerateTestFilesArgs {\n\t\tpath: Some(PathBuf::from(small_test_dir.path())),\n\t\tsubfolder_configs: vec![\n\t\t\t// Shallow folder will have a small number of files and won't be watched\n\t\t\tTestSubfolderConfiguration {\n\t\t\t\tname: \"watch\".into(),\n\t\t\t\tnesting: GeneratedFileNesting::Flat,\n\t\t\t\tfile_count: 5,\n\t\t\t},\n\t\t\t// Deep folder will have *many* amll files and will be watched\n\t\t\tTestSubfolderConfiguration {\n\t\t\t\tname: \"unrelated\".into(),\n\t\t\t\tnesting: GeneratedFileNesting::RandomToMax(42),\n\t\t\t\tfile_count: 200,\n\t\t\t},\n\t\t],\n\t})?[..] else {\n\t\tpanic!(\"unexpected number of paths returned from generate_test_files\");\n\t};\n\n\t// Get the number of elapsed\n\tlet small_elapsed = run_watchexec_cmd(&wexec_bin, root_dir_path, args.clone()).await?;\n\n\t// Create a tempfile so that drop will clean it up\n\tlet large_test_dir = tempfile::tempdir()\n\t\t.into_diagnostic()\n\t\t.wrap_err(\"failed to create tempdir for test use\")?;\n\n\t// Generate a *large* directory of files\n\tlet [ref root_dir_path, _, _] = generate_test_files(GenerateTestFilesArgs {\n\t\tpath: Some(PathBuf::from(large_test_dir.path())),\n\t\tsubfolder_configs: vec![\n\t\t\t// Shallow folder will have a small number of files and won't be watched\n\t\t\tTestSubfolderConfiguration {\n\t\t\t\tname: \"watch\".into(),\n\t\t\t\tnesting: GeneratedFileNesting::Flat,\n\t\t\t\tfile_count: 5,\n\t\t\t},\n\t\t\t// Deep folder will have *many* amll files and will be watched\n\t\t\tTestSubfolderConfiguration {\n\t\t\t\tname: \"unrelated\".into(),\n\t\t\t\tnesting: GeneratedFileNesting::RandomToMax(42),\n\t\t\t\tfile_count: 
200_000,\n\t\t\t},\n\t\t],\n\t})?[..] else {\n\t\tpanic!(\"unexpected number of paths returned from generate_test_files\");\n\t};\n\n\t// Get the number of elapsed\n\tlet large_elapsed = run_watchexec_cmd(&wexec_bin, root_dir_path, args.clone()).await?;\n\n\t// We expect the ignores to not impact watchexec startup time at all\n\t// whether there are 200 files in there or 200k\n\tassert!(\n\t\tlarge_elapsed < small_elapsed * 10,\n\t\t\"200k ignore folder ({:?}) took more than 10x more time ({:?}) than 200 ignore folder ({:?})\",\n\t\tlarge_elapsed,\n\t\tsmall_elapsed * 10,\n\t\tsmall_elapsed,\n\t);\n\tOk(())\n}\n\n/// Run a watchexec command once\nasync fn run_watchexec_cmd(\n\twexec_bin: impl AsRef<str>,\n\tdir: impl AsRef<Path>,\n\targs: impl Into<Vec<String>>,\n) -> Result<Duration> {\n\t// Build the subprocess command\n\tlet mut cmd = Command::new(wexec_bin.as_ref());\n\tcmd.args(args.into());\n\tcmd.current_dir(dir);\n\tcmd.stdout(Stdio::piped());\n\tcmd.stderr(Stdio::piped());\n\n\tlet start = Instant::now();\n\tcmd.kill_on_drop(true)\n\t\t.output()\n\t\t.await\n\t\t.into_diagnostic()\n\t\t.wrap_err(\"fixed\")?;\n\n\tOk(start.elapsed())\n}\n"
  },
  {
    "path": "crates/cli/watchexec-manifest.rc",
    "content": "#define RT_MANIFEST 24\n1 RT_MANIFEST \"watchexec.exe.manifest\"\n"
  },
  {
    "path": "crates/cli/watchexec.exe.manifest",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\n<assembly xmlns=\"urn:schemas-microsoft-com:asm.v1\" manifestVersion=\"1.0\">\n\t<assemblyIdentity\n\t\ttype=\"win32\"\n\t\tname=\"Watchexec.Cli.watchexec\"\n\t\tversion=\"2.5.1.0\"\n\t/>\n\n\t<trustInfo>\n\t\t<security>\n\t\t\t<!--\n\t\t\tUAC settings:\n\t\t\t- app should run at same integrity level as calling process\n\t\t\t- app does not need to manipulate windows belonging to\n\t\t\thigher-integrity-level processes\n\t\t\t-->\n\t\t\t<requestedPrivileges>\n\t\t\t\t<requestedExecutionLevel level=\"asInvoker\" uiAccess=\"false\"/>\n\t\t\t</requestedPrivileges>\n\t\t</security>\n\t</trustInfo>\n\n\t<compatibility xmlns=\"urn:schemas-microsoft-com:compatibility.v1\">\n\t\t<application>\n\t\t\t<!-- Windows 10, 11 -->\n\t\t\t<supportedOS Id=\"{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}\"/>\n\t\t\t<!-- Windows 8.1 -->\n\t\t\t<supportedOS Id=\"{1f676c76-80e1-4239-95bb-83d0f6d0da78}\"/>\n\t\t\t<!-- Windows 8 -->\n\t\t\t<supportedOS Id=\"{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}\"/>\n\t\t</application>\n\t</compatibility>\n\n\t<application xmlns=\"urn:schemas-microsoft-com:asm.v3\">\n\t\t<windowsSettings xmlns:ws=\"http://schemas.microsoft.com/SMI/2020/WindowsSettings\">\n\t\t\t<ws:longPathAware xmlns:ws=\"http://schemas.microsoft.com/SMI/2016/WindowsSettings\">true</ws:longPathAware>\n\t\t\t<ws:activeCodePage xmlns:ws=\"http://schemas.microsoft.com/SMI/2019/WindowsSettings\">UTF-8</ws:activeCodePage>\n\t\t\t<ws:heapType xmlns:ws=\"http://schemas.microsoft.com/SMI/2020/WindowsSettings\">SegmentHeap</ws:heapType>\n\t\t</windowsSettings>\n\t</application>\n</assembly>\n"
  },
  {
    "path": "crates/events/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v6.1.0 (2026-02-22)\n\n- Add `Keyboard::Key` to describe arbitrary single-key keyboard events\n\n## v6.0.0 (2025-05-15)\n\n## v5.0.1 (2025-05-15)\n\n- Deps: remove unused dependency `nix` ([#930](https://github.com/watchexec/watchexec/pull/930))\n\n## v5.0.0 (2025-02-09)\n\n## v4.0.0 (2024-10-14)\n\n- Deps: nix 0.29\n\n## v3.0.0 (2024-04-20)\n\n- Deps: nix 0.28\n\n## v2.0.1 (2023-11-29)\n\n- Add `ProcessEnd::into_exitstatus` testing-only utility method.\n- Deps: upgrade to Notify 6.0\n- Deps: upgrade to nix 0.27\n- Deps: upgrade to watchexec-signals 2.0.0\n\n## v2.0.0 (2023-11-29)\n\nSame as 2.0.1, but yanked.\n\n## v1.1.0 (2023-11-26)\n\nSame as 2.0.1, but yanked.\n\n## v1.0.0 (2023-03-18)\n\n- Split off new `watchexec-events` crate (this one), to have a lightweight library that can parse\n  and generate events and maintain the JSON event format.\n"
  },
  {
    "path": "crates/events/Cargo.toml",
    "content": "[package]\nname = \"watchexec-events\"\nversion = \"6.1.0\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0 OR MIT\"\ndescription = \"Watchexec's event types\"\nkeywords = [\"watchexec\", \"event\", \"format\", \"json\"]\n\ndocumentation = \"https://docs.rs/watchexec-events\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.61.0\"\nedition = \"2021\"\n\n[dependencies.notify-types]\nversion = \"2.0.0\"\noptional = true\n\n[dependencies.serde]\nversion = \"1.0.183\"\noptional = true\nfeatures = [\"derive\"]\n\n[dependencies.watchexec-signals]\nversion = \"5.0.1\"\npath = \"../signals\"\ndefault-features = false\n\n[dev-dependencies]\nsnapbox = \"0.6.18\"\nserde_json = \"1.0.107\"\n\n[features]\ndefault = [\"notify\"]\nnotify = [\"dep:notify-types\"]\nserde = [\"dep:serde\", \"notify-types?/serde\", \"watchexec-signals/serde\"]\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\n"
  },
  {
    "path": "crates/events/README.md",
    "content": "# watchexec-events\n\n_Watchexec's event types._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license] or [MIT](https://passcod.mit-license.org).\n- Status: maintained.\n\n[docs]: https://docs.rs/watchexec-events\n[license]: ../../LICENSE\n\nFundamentally, events in watchexec have three purposes:\n\n1. To trigger the launch, restart, or other interruption of a process;\n2. To be filtered upon according to whatever set of criteria is desired;\n3. To carry information about what caused the event, which may be provided to the process.\n\nOutside of Watchexec, this library is particularly useful if you're building a tool that runs under\nit, and want to easily read its events (with `--emit-events-to=json-file` and `--emit-events-to=json-stdio`).\n\n```rust ,no_run\nuse std::io::{stdin, Result};\nuse watchexec_events::Event;\n\nfn main() -> Result<()> {\n    for line in stdin().lines() {\n        let event: Event = serde_json::from_str(&line?)?;\n        dbg!(event);\n    }\n\n    Ok(())\n}\n```\n\n## Features\n\n- `serde`: enables serde support.\n- `notify`: use Notify's file event types (default).\n\nIf you disable `notify`, you'll get a leaner dependency tree that's still able to parse the entire\nevents, but isn't type compatible with Notify. In most deserialisation usecases, this is fine, but\nit's not the default to avoid surprises.\n"
  },
  {
    "path": "crates/events/examples/parse-and-print.rs",
    "content": "use std::io::{stdin, Result};\nuse watchexec_events::Event;\n\nfn main() -> Result<()> {\n\tfor line in stdin().lines() {\n\t\tlet event: Event = serde_json::from_str(&line?)?;\n\t\tdbg!(event);\n\t}\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/events/release.toml",
    "content": "pre-release-commit-message = \"release: events v{{version}}\"\ntag-prefix = \"watchexec-events-\"\ntag-message = \"watchexec-events {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/events/src/event.rs",
    "content": "use std::{\n\tcollections::HashMap,\n\tfmt,\n\tpath::{Path, PathBuf},\n};\n\nuse watchexec_signals::Signal;\n\n#[cfg(feature = \"serde\")]\nuse crate::serde_formats::{SerdeEvent, SerdeTag};\n\nuse crate::{filekind::FileEventKind, FileType, Keyboard, ProcessEnd};\n\n/// An event, as far as watchexec cares about.\n#[derive(Clone, Debug, Default, Eq, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(from = \"SerdeEvent\", into = \"SerdeEvent\"))]\npub struct Event {\n\t/// Structured, classified information which can be used to filter or classify the event.\n\tpub tags: Vec<Tag>,\n\n\t/// Arbitrary other information, cannot be used for filtering.\n\tpub metadata: HashMap<String, Vec<String>>,\n}\n\n/// Something which can be used to filter or qualify an event.\n#[derive(Clone, Debug, Eq, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(from = \"SerdeTag\", into = \"SerdeTag\"))]\n#[non_exhaustive]\npub enum Tag {\n\t/// The event is about a path or file in the filesystem.\n\tPath {\n\t\t/// Path to the file or directory.\n\t\tpath: PathBuf,\n\n\t\t/// Optional file type, if known.\n\t\tfile_type: Option<FileType>,\n\t},\n\n\t/// Kind of a filesystem event (create, remove, modify, etc).\n\tFileEventKind(FileEventKind),\n\n\t/// The general source of the event.\n\tSource(Source),\n\n\t/// The event is about a keyboard input.\n\tKeyboard(Keyboard),\n\n\t/// The event was caused by a particular process.\n\tProcess(u32),\n\n\t/// The event is about a signal being delivered to the main process.\n\tSignal(Signal),\n\n\t/// The event is about a subprocess ending.\n\tProcessCompletion(Option<ProcessEnd>),\n\n\t#[cfg(feature = \"serde\")]\n\t/// The event is unknown (or not yet implemented).\n\tUnknown,\n}\n\nimpl Tag {\n\t/// The name of the variant.\n\t#[must_use]\n\tpub const fn 
discriminant_name(&self) -> &'static str {\n\t\tmatch self {\n\t\t\tSelf::Path { .. } => \"Path\",\n\t\t\tSelf::FileEventKind(_) => \"FileEventKind\",\n\t\t\tSelf::Source(_) => \"Source\",\n\t\t\tSelf::Keyboard(_) => \"Keyboard\",\n\t\t\tSelf::Process(_) => \"Process\",\n\t\t\tSelf::Signal(_) => \"Signal\",\n\t\t\tSelf::ProcessCompletion(_) => \"ProcessCompletion\",\n\t\t\t#[cfg(feature = \"serde\")]\n\t\t\tSelf::Unknown => \"Unknown\",\n\t\t}\n\t}\n}\n\n/// The general origin of the event.\n///\n/// This is set by the event source. Note that not all of these are currently used.\n#[derive(Clone, Copy, Debug, Eq, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\n#[non_exhaustive]\npub enum Source {\n\t/// Event comes from a file change.\n\tFilesystem,\n\n\t/// Event comes from a keyboard input.\n\tKeyboard,\n\n\t/// Event comes from a mouse click.\n\tMouse,\n\n\t/// Event comes from the OS.\n\tOs,\n\n\t/// Event is time based.\n\tTime,\n\n\t/// Event is internal to Watchexec.\n\tInternal,\n}\n\nimpl fmt::Display for Source {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\twrite!(\n\t\t\tf,\n\t\t\t\"{}\",\n\t\t\tmatch self {\n\t\t\t\tSelf::Filesystem => \"filesystem\",\n\t\t\t\tSelf::Keyboard => \"keyboard\",\n\t\t\t\tSelf::Mouse => \"mouse\",\n\t\t\t\tSelf::Os => \"os\",\n\t\t\t\tSelf::Time => \"time\",\n\t\t\t\tSelf::Internal => \"internal\",\n\t\t\t}\n\t\t)\n\t}\n}\n\n/// The priority of the event in the queue.\n///\n/// In the event queue, events are inserted with a priority, such that more important events are\n/// delivered ahead of others. 
This is especially important when there is a large amount of events\n/// generated and relatively slow filtering, as events can become noticeably delayed, and may give\n/// the impression of stalling.\n#[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum Priority {\n\t/// Low priority\n\t///\n\t/// Used for:\n\t/// - process completion events\n\tLow,\n\n\t/// Normal priority\n\t///\n\t/// Used for:\n\t/// - filesystem events\n\tNormal,\n\n\t/// High priority\n\t///\n\t/// Used for:\n\t/// - signals to main process, except Interrupt and Terminate\n\tHigh,\n\n\t/// Urgent events bypass filtering entirely.\n\t///\n\t/// Used for:\n\t/// - Interrupt and Terminate signals to main process\n\tUrgent,\n}\n\nimpl Default for Priority {\n\tfn default() -> Self {\n\t\tSelf::Normal\n\t}\n}\n\nimpl Event {\n\t/// Returns true if the event has an Internal source tag.\n\t#[must_use]\n\tpub fn is_internal(&self) -> bool {\n\t\tself.tags\n\t\t\t.iter()\n\t\t\t.any(|tag| matches!(tag, Tag::Source(Source::Internal)))\n\t}\n\n\t/// Returns true if the event has no tags.\n\t#[must_use]\n\tpub fn is_empty(&self) -> bool {\n\t\tself.tags.is_empty()\n\t}\n\n\t/// Return all paths in the event's tags.\n\tpub fn paths(&self) -> impl Iterator<Item = (&Path, Option<&FileType>)> {\n\t\tself.tags.iter().filter_map(|p| match p {\n\t\t\tTag::Path { path, file_type } => Some((path.as_path(), file_type.as_ref())),\n\t\t\t_ => None,\n\t\t})\n\t}\n\n\t/// Return all signals in the event's tags.\n\tpub fn signals(&self) -> impl Iterator<Item = Signal> + '_ {\n\t\tself.tags.iter().filter_map(|p| match p {\n\t\t\tTag::Signal(s) => Some(*s),\n\t\t\t_ => None,\n\t\t})\n\t}\n\n\t/// Return all process completions in the event's tags.\n\tpub fn completions(&self) -> impl Iterator<Item = Option<ProcessEnd>> + '_ 
{\n\t\tself.tags.iter().filter_map(|p| match p {\n\t\t\tTag::ProcessCompletion(s) => Some(*s),\n\t\t\t_ => None,\n\t\t})\n\t}\n}\n\nimpl fmt::Display for Event {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\twrite!(f, \"Event\")?;\n\t\tfor p in &self.tags {\n\t\t\tmatch p {\n\t\t\t\tTag::Path { path, file_type } => {\n\t\t\t\t\twrite!(f, \" path={}\", path.display())?;\n\t\t\t\t\tif let Some(ft) = file_type {\n\t\t\t\t\t\twrite!(f, \" filetype={ft}\")?;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tTag::FileEventKind(kind) => write!(f, \" kind={kind:?}\")?,\n\t\t\t\tTag::Source(s) => write!(f, \" source={s:?}\")?,\n\t\t\t\tTag::Keyboard(k) => write!(f, \" keyboard={k:?}\")?,\n\t\t\t\tTag::Process(p) => write!(f, \" process={p}\")?,\n\t\t\t\tTag::Signal(s) => write!(f, \" signal={s:?}\")?,\n\t\t\t\tTag::ProcessCompletion(None) => write!(f, \" command-completed\")?,\n\t\t\t\tTag::ProcessCompletion(Some(c)) => write!(f, \" command-completed({c:?})\")?,\n\t\t\t\t#[cfg(feature = \"serde\")]\n\t\t\t\tTag::Unknown => write!(f, \" unknown\")?,\n\t\t\t}\n\t\t}\n\n\t\tif !self.metadata.is_empty() {\n\t\t\twrite!(f, \" meta: {:?}\", self.metadata)?;\n\t\t}\n\n\t\tOk(())\n\t}\n}\n"
  },
  {
    "path": "crates/events/src/fs.rs",
    "content": "use std::fmt;\n\n/// Re-export of the Notify file event types.\n#[cfg(feature = \"notify\")]\npub mod filekind {\n\tpub use notify_types::event::{\n\t\tAccessKind, AccessMode, CreateKind, DataChange, EventKind as FileEventKind, MetadataKind,\n\t\tModifyKind, RemoveKind, RenameMode,\n\t};\n}\n\n/// Pseudo file event types without dependency on Notify.\n#[cfg(not(feature = \"notify\"))]\npub mod filekind {\n\tpub use crate::sans_notify::{\n\t\tAccessKind, AccessMode, CreateKind, DataChange, EventKind as FileEventKind, MetadataKind,\n\t\tModifyKind, RemoveKind, RenameMode,\n\t};\n}\n\n/// The type of a file.\n///\n/// This is a simplification of the [`std::fs::FileType`] type, which is not constructable and may\n/// differ on different platforms.\n#[derive(Clone, Copy, Debug, Eq, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum FileType {\n\t/// A regular file.\n\tFile,\n\n\t/// A directory.\n\tDir,\n\n\t/// A symbolic link.\n\tSymlink,\n\n\t/// Something else.\n\tOther,\n}\n\nimpl From<std::fs::FileType> for FileType {\n\tfn from(ft: std::fs::FileType) -> Self {\n\t\tif ft.is_file() {\n\t\t\tSelf::File\n\t\t} else if ft.is_dir() {\n\t\t\tSelf::Dir\n\t\t} else if ft.is_symlink() {\n\t\t\tSelf::Symlink\n\t\t} else {\n\t\t\tSelf::Other\n\t\t}\n\t}\n}\n\nimpl fmt::Display for FileType {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\tmatch self {\n\t\t\tSelf::File => write!(f, \"file\"),\n\t\t\tSelf::Dir => write!(f, \"dir\"),\n\t\t\tSelf::Symlink => write!(f, \"symlink\"),\n\t\t\tSelf::Other => write!(f, \"other\"),\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/events/src/keyboard.rs",
    "content": "#[derive(Debug, Clone, PartialEq, Eq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\n#[non_exhaustive]\n/// A keyboard input.\npub enum Keyboard {\n\t/// Event representing an 'end of file' on stdin\n\tEof,\n\n\t/// A key press in interactive mode\n\tKey {\n\t\t/// The key that was pressed.\n\t\tkey: KeyCode,\n\n\t\t/// Modifier keys held during the press.\n\t\t#[cfg_attr(\n\t\t\tfeature = \"serde\",\n\t\t\tserde(default, skip_serializing_if = \"Modifiers::is_empty\")\n\t\t)]\n\t\tmodifiers: Modifiers,\n\t},\n}\n\n/// A key code.\n#[derive(Debug, Clone, PartialEq, Eq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\n#[non_exhaustive]\npub enum KeyCode {\n\t/// A unicode character (letter, digit, symbol, space).\n\tChar(char),\n\t/// Enter / Return.\n\tEnter,\n\t/// Escape.\n\tEscape,\n}\n\n/// Modifier key flags.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\npub struct Modifiers {\n\t/// Ctrl / Control was held.\n\t#[cfg_attr(feature = \"serde\", serde(default, skip_serializing_if = \"is_false\"))]\n\tpub ctrl: bool,\n\t/// Alt / Option was held.\n\t#[cfg_attr(feature = \"serde\", serde(default, skip_serializing_if = \"is_false\"))]\n\tpub alt: bool,\n\t/// Shift was held.\n\t#[cfg_attr(feature = \"serde\", serde(default, skip_serializing_if = \"is_false\"))]\n\tpub shift: bool,\n}\n\n#[cfg(feature = \"serde\")]\nfn is_false(b: &bool) -> bool {\n\t!b\n}\n\nimpl Modifiers {\n\t/// Returns true if no modifier keys are set.\n\t#[must_use]\n\tpub fn is_empty(&self) -> bool {\n\t\t!self.ctrl && !self.alt && !self.shift\n\t}\n}\n"
  },
  {
    "path": "crates/events/src/lib.rs",
    "content": "#![doc = include_str!(\"../README.md\")]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n\n#[doc(inline)]\npub use event::*;\n\n#[doc(inline)]\npub use fs::*;\n\n#[doc(inline)]\npub use keyboard::*;\n\n#[doc(inline)]\npub use process::*;\n\nmod event;\nmod fs;\nmod keyboard;\nmod process;\n\n#[cfg(not(feature = \"notify\"))]\nmod sans_notify;\n\n#[cfg(feature = \"serde\")]\nmod serde_formats;\n"
  },
  {
    "path": "crates/events/src/process.rs",
    "content": "use std::{\n\tnum::{NonZeroI32, NonZeroI64},\n\tprocess::ExitStatus,\n};\n\nuse watchexec_signals::Signal;\n\n/// The end status of a process.\n///\n/// This is a sort-of equivalent of the [`std::process::ExitStatus`] type which, while\n/// constructible, differs on various platforms. The native type is an integer that is interpreted\n/// either through convention or via platform-dependent libc or kernel calls; our type is a more\n/// structured representation for the purpose of being clearer and transportable.\n///\n/// On Unix, one can tell whether a process dumped core from the exit status. This is not\n/// replicated in this structure; if you need it, you can obtain it manually via `libc::WCOREDUMP`\n/// and the `ExitSignal` variant.\n///\n/// On Unix and Windows, the exit status is a 32-bit integer; on Fuchsia it's a 64-bit integer. For\n/// portability, we use `i64`. On all platforms, the \"success\" value is zero, so we special-case\n/// that as a variant and use `NonZeroI*` to limit the other values.\n#[derive(Clone, Copy, Debug, Eq, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(tag = \"disposition\", content = \"code\"))]\npub enum ProcessEnd {\n\t/// The process ended successfully, with exit status = 0.\n\t#[cfg_attr(feature = \"serde\", serde(rename = \"success\"))]\n\tSuccess,\n\n\t/// The process exited with a non-zero exit status.\n\t#[cfg_attr(feature = \"serde\", serde(rename = \"error\"))]\n\tExitError(NonZeroI64),\n\n\t/// The process exited due to a signal.\n\t#[cfg_attr(feature = \"serde\", serde(rename = \"signal\"))]\n\tExitSignal(Signal),\n\n\t/// The process was stopped (but not terminated) (`libc::WIFSTOPPED`).\n\t#[cfg_attr(feature = \"serde\", serde(rename = \"stop\"))]\n\tExitStop(NonZeroI32),\n\n\t/// The process suffered an unhandled exception or warning (typically Windows only).\n\t#[cfg_attr(feature = \"serde\", serde(rename = 
\"exception\"))]\n\tException(NonZeroI32),\n\n\t/// The process was continued (`libc::WIFCONTINUED`).\n\t#[cfg_attr(feature = \"serde\", serde(rename = \"continued\"))]\n\tContinued,\n}\n\nimpl From<ExitStatus> for ProcessEnd {\n\t#[cfg(unix)]\n\tfn from(es: ExitStatus) -> Self {\n\t\tuse std::os::unix::process::ExitStatusExt;\n\n\t\tmatch (es.code(), es.signal(), es.stopped_signal()) {\n\t\t\t(Some(_), Some(_), _) => {\n\t\t\t\tunreachable!(\"exitstatus cannot both be code and signal?!\")\n\t\t\t}\n\t\t\t(Some(code), None, _) => {\n\t\t\t\tNonZeroI64::try_from(i64::from(code)).map_or(Self::Success, Self::ExitError)\n\t\t\t}\n\t\t\t(None, Some(_), Some(stopsig)) => {\n\t\t\t\tNonZeroI32::try_from(stopsig).map_or(Self::Success, Self::ExitStop)\n\t\t\t}\n\t\t\t#[cfg(not(target_os = \"vxworks\"))]\n\t\t\t(None, Some(_), _) if es.continued() => Self::Continued,\n\t\t\t(None, Some(signal), _) => Self::ExitSignal(signal.into()),\n\t\t\t(None, None, _) => Self::Success,\n\t\t}\n\t}\n\n\t#[cfg(windows)]\n\tfn from(es: ExitStatus) -> Self {\n\t\tmatch es.code().map(NonZeroI32::try_from) {\n\t\t\tNone | Some(Err(_)) => Self::Success,\n\t\t\tSome(Ok(code)) if code.get() < 0 => Self::Exception(code),\n\t\t\tSome(Ok(code)) => Self::ExitError(code.into()),\n\t\t}\n\t}\n\n\t#[cfg(not(any(unix, windows)))]\n\tfn from(es: ExitStatus) -> Self {\n\t\tif es.success() {\n\t\t\tSelf::Success\n\t\t} else {\n\t\t\tSelf::ExitError(NonZeroI64::new(1).unwrap())\n\t\t}\n\t}\n}\n\nimpl ProcessEnd {\n\t/// Convert a `ProcessEnd` to an `ExitStatus`.\n\t///\n\t/// This is a testing function only! **It will panic** if the `ProcessEnd` is not representable\n\t/// as an `ExitStatus` on Unix. This is also not guaranteed to be accurate, as the `waitpid()`\n\t/// status union is platform-specific. 
Exit codes and signals are implemented; other variants\n\t/// are not.\n\t#[cfg(unix)]\n\t#[must_use]\n\tpub fn into_exitstatus(self) -> ExitStatus {\n\t\tuse std::os::unix::process::ExitStatusExt;\n\t\tmatch self {\n\t\t\tSelf::Success => ExitStatus::from_raw(0),\n\t\t\tSelf::ExitError(code) => {\n\t\t\t\tExitStatus::from_raw(i32::from(u8::try_from(code.get()).unwrap_or_default()) << 8)\n\t\t\t}\n\t\t\tSelf::ExitSignal(signal) => {\n\t\t\t\tExitStatus::from_raw(signal.to_nix().map_or(0, |sig| sig as i32))\n\t\t\t}\n\t\t\tSelf::Continued => ExitStatus::from_raw(0xffff),\n\t\t\t_ => unimplemented!(),\n\t\t}\n\t}\n\n\t/// Convert a `ProcessEnd` to an `ExitStatus`.\n\t///\n\t/// This is a testing function only! **It will panic** if the `ProcessEnd` is not representable\n\t/// as an `ExitStatus` on Windows.\n\t#[cfg(windows)]\n\t#[must_use]\n\tpub fn into_exitstatus(self) -> ExitStatus {\n\t\tuse std::os::windows::process::ExitStatusExt;\n\t\tmatch self {\n\t\t\tSelf::Success => ExitStatus::from_raw(0),\n\t\t\tSelf::ExitError(code) => ExitStatus::from_raw(code.get().try_into().unwrap()),\n\t\t\t_ => unimplemented!(),\n\t\t}\n\t}\n\n\t/// Unimplemented on this platform.\n\t#[cfg(not(any(unix, windows)))]\n\t#[must_use]\n\tpub fn into_exitstatus(self) -> ExitStatus {\n\t\tunimplemented!()\n\t}\n}\n"
  },
  {
    "path": "crates/events/src/sans_notify.rs",
    "content": "// This file is dual-licensed under the Artistic License 2.0 as per the\n// LICENSE.ARTISTIC file, and the Creative Commons Zero 1.0 license.\n//\n// Taken verbatim from the `notify` crate, with the Event types removed.\n\nuse std::hash::Hash;\n\n#[cfg(feature = \"serde\")]\nuse serde::{Deserialize, Serialize};\n\n/// An event describing open or close operations on files.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum AccessMode {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted when the file is executed, or the folder opened.\n\tExecute,\n\n\t/// An event emitted when the file is opened for reading.\n\tRead,\n\n\t/// An event emitted when the file is opened for writing.\n\tWrite,\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event describing non-mutating access operations on files.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(tag = \"kind\", content = \"mode\"))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum AccessKind {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted when the file is read.\n\tRead,\n\n\t/// An event emitted when the file, or a handle to the file, is opened.\n\tOpen(AccessMode),\n\n\t/// An event emitted when the file, or a handle to the file, is closed.\n\tClose(AccessMode),\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event describing creation operations on files.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, 
Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(tag = \"kind\"))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum CreateKind {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event which results in the creation of a file.\n\tFile,\n\n\t/// An event which results in the creation of a folder.\n\tFolder,\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event emitted when the data content of a file is changed.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum DataChange {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted when the size of the data is changed.\n\tSize,\n\n\t/// An event emitted when the content of the data is changed.\n\tContent,\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event emitted when the metadata of a file or folder is changed.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum MetadataKind {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted when the access time of the file or folder is changed.\n\tAccessTime,\n\n\t/// An event emitted when the write or modify time of the file or folder is changed.\n\tWriteTime,\n\n\t/// An event emitted when the permissions of the file or folder are changed.\n\tPermissions,\n\n\t/// An event emitted when the ownership of the file or folder is changed.\n\tOwnership,\n\n\t/// An event emitted when an extended attribute of the file or folder is changed.\n\t///\n\t/// If the extended 
attribute's name or type is known, it should be provided in the\n\t/// `Info` event attribute.\n\tExtended,\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event emitted when the name of a file or folder is changed.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum RenameMode {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted on the file or folder resulting from a rename.\n\tTo,\n\n\t/// An event emitted on the file or folder that was renamed.\n\tFrom,\n\n\t/// A single event emitted with both the `From` and `To` paths.\n\t///\n\t/// This event should be emitted when both source and target are known. The paths should be\n\t/// provided in this exact order (from, to).\n\tBoth,\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event describing mutation of content, name, or metadata.\n#[derive(Clone, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(tag = \"kind\", content = \"mode\"))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum ModifyKind {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted when the data content of a file is changed.\n\tData(DataChange),\n\n\t/// An event emitted when the metadata of a file or folder is changed.\n\tMetadata(MetadataKind),\n\n\t/// An event emitted when the name of a file or folder is changed.\n\t#[cfg_attr(feature = \"serde\", serde(rename = \"rename\"))]\n\tName(RenameMode),\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// An event describing removal operations on 
files.\n#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(tag = \"kind\"))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum RemoveKind {\n\t/// The catch-all case, to be used when the specific kind of event is unknown.\n\tAny,\n\n\t/// An event emitted when a file is removed.\n\tFile,\n\n\t/// An event emitted when a folder is removed.\n\tFolder,\n\n\t/// An event which specific kind is known but cannot be represented otherwise.\n\tOther,\n}\n\n/// Top-level event kind.\n///\n/// This is arguably the most important classification for events. All subkinds below this one\n/// represent details that may or may not be available for any particular backend, but most tools\n/// and Notify systems will only care about which of these four general kinds an event is about.\n#[derive(Clone, Debug, Eq, Hash, PartialEq)]\n#[cfg_attr(feature = \"serde\", derive(Serialize, Deserialize))]\n#[cfg_attr(feature = \"serde\", serde(rename_all = \"kebab-case\"))]\npub enum EventKind {\n\t/// The catch-all event kind, for unsupported/unknown events.\n\t///\n\t/// This variant should be used as the \"else\" case when mapping native kernel bitmasks or\n\t/// bitmaps, such that if the mask is ever extended with new event types the backend will not\n\t/// gain bugs due to not matching new unknown event types.\n\t///\n\t/// This variant is also the default variant used when Notify is in \"imprecise\" mode.\n\tAny,\n\n\t/// An event describing non-mutating access operations on files.\n\t///\n\t/// This event is about opening and closing file handles, as well as executing files, and any\n\t/// other such event that is about accessing files, folders, or other structures rather than\n\t/// mutating them.\n\t///\n\t/// Only some platforms are capable of generating these.\n\tAccess(AccessKind),\n\n\t/// An event describing creation operations on 
files.\n\t///\n\t/// This event is about the creation of files, folders, or other structures but not about e.g.\n\t/// writing new content into them.\n\tCreate(CreateKind),\n\n\t/// An event describing mutation of content, name, or metadata.\n\t///\n\t/// This event is about the mutation of files', folders', or other structures' content, name\n\t/// (path), or associated metadata (attributes).\n\tModify(ModifyKind),\n\n\t/// An event describing removal operations on files.\n\t///\n\t/// This event is about the removal of files, folders, or other structures but not e.g. erasing\n\t/// content from them. This may also be triggered for renames/moves that move files _out of the\n\t/// watched subpath_.\n\t///\n\t/// Some editors also trigger Remove events when saving files as they may opt for removing (or\n\t/// renaming) the original then creating a new file in-place.\n\tRemove(RemoveKind),\n\n\t/// An event not fitting in any of the above four categories.\n\t///\n\t/// This may be used for meta-events about the watch itself.\n\tOther,\n}\n\nimpl EventKind {\n\t/// Indicates whether an event is an Access variant.\n\tpub fn is_access(&self) -> bool {\n\t\tmatches!(self, EventKind::Access(_))\n\t}\n\n\t/// Indicates whether an event is a Create variant.\n\tpub fn is_create(&self) -> bool {\n\t\tmatches!(self, EventKind::Create(_))\n\t}\n\n\t/// Indicates whether an event is a Modify variant.\n\tpub fn is_modify(&self) -> bool {\n\t\tmatches!(self, EventKind::Modify(_))\n\t}\n\n\t/// Indicates whether an event is a Remove variant.\n\tpub fn is_remove(&self) -> bool {\n\t\tmatches!(self, EventKind::Remove(_))\n\t}\n\n\t/// Indicates whether an event is an Other variant.\n\tpub fn is_other(&self) -> bool {\n\t\tmatches!(self, EventKind::Other)\n\t}\n}\n\nimpl Default for EventKind {\n\tfn default() -> Self {\n\t\tEventKind::Any\n\t}\n}\n"
  },
  {
    "path": "crates/events/src/serde_formats.rs",
    "content": "use std::{\n\tcollections::BTreeMap,\n\tnum::{NonZeroI32, NonZeroI64},\n\tpath::PathBuf,\n};\n\nuse serde::{Deserialize, Serialize};\nuse watchexec_signals::Signal;\n\nuse crate::{\n\tfs::filekind::{\n\t\tAccessKind, AccessMode, CreateKind, DataChange, FileEventKind as EventKind, MetadataKind,\n\t\tModifyKind, RemoveKind, RenameMode,\n\t},\n\tEvent, FileType, Keyboard, ProcessEnd, Source, Tag,\n};\n\n#[derive(Clone, Debug, Default, Serialize, Deserialize)]\npub struct SerdeTag {\n\tkind: TagKind,\n\n\t// path\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tabsolute: Option<PathBuf>,\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tfiletype: Option<FileType>,\n\n\t// fs\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tsimple: Option<FsEventKind>,\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tfull: Option<String>,\n\n\t// source\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tsource: Option<Source>,\n\n\t// keyboard\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tkeycode: Option<Keyboard>,\n\n\t// process\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tpid: Option<u32>,\n\n\t// signal\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tsignal: Option<Signal>,\n\n\t// completion\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tdisposition: Option<ProcessDisposition>,\n\t#[serde(default, skip_serializing_if = \"Option::is_none\")]\n\tcode: Option<i64>,\n}\n\n#[derive(Clone, Copy, Debug, Default, Serialize, Deserialize)]\n#[serde(rename_all = \"kebab-case\")]\npub enum TagKind {\n\t#[default]\n\tNone,\n\tPath,\n\tFs,\n\tSource,\n\tKeyboard,\n\tProcess,\n\tSignal,\n\tCompletion,\n}\n\n#[derive(Clone, Copy, Debug, Serialize, Deserialize)]\n#[serde(rename_all = \"kebab-case\")]\npub enum ProcessDisposition 
{\n\tUnknown,\n\tSuccess,\n\tError,\n\tSignal,\n\tStop,\n\tException,\n\tContinued,\n}\n\n#[derive(Clone, Copy, Debug, Serialize, Deserialize)]\n#[serde(rename_all = \"kebab-case\")]\npub enum FsEventKind {\n\tAccess,\n\tCreate,\n\tModify,\n\tRemove,\n\tOther,\n}\n\nimpl From<EventKind> for FsEventKind {\n\tfn from(value: EventKind) -> Self {\n\t\tmatch value {\n\t\t\tEventKind::Access(_) => Self::Access,\n\t\t\tEventKind::Create(_) => Self::Create,\n\t\t\tEventKind::Modify(_) => Self::Modify,\n\t\t\tEventKind::Remove(_) => Self::Remove,\n\t\t\tEventKind::Any | EventKind::Other => Self::Other,\n\t\t}\n\t}\n}\n\nimpl From<Tag> for SerdeTag {\n\tfn from(value: Tag) -> Self {\n\t\tmatch value {\n\t\t\tTag::Path { path, file_type } => Self {\n\t\t\t\tkind: TagKind::Path,\n\t\t\t\tabsolute: Some(path),\n\t\t\t\tfiletype: file_type,\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::FileEventKind(fek) => Self {\n\t\t\t\tkind: TagKind::Fs,\n\t\t\t\tfull: Some(format!(\"{fek:?}\")),\n\t\t\t\tsimple: Some(fek.into()),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::Source(source) => Self {\n\t\t\t\tkind: TagKind::Source,\n\t\t\t\tsource: Some(source),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::Keyboard(keycode) => Self {\n\t\t\t\tkind: TagKind::Keyboard,\n\t\t\t\tkeycode: Some(keycode),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::Process(pid) => Self {\n\t\t\t\tkind: TagKind::Process,\n\t\t\t\tpid: Some(pid),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::Signal(signal) => Self {\n\t\t\t\tkind: TagKind::Signal,\n\t\t\t\tsignal: Some(signal),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::ProcessCompletion(None) => Self {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Unknown),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::ProcessCompletion(Some(end)) => Self {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tcode: match &end {\n\t\t\t\t\tProcessEnd::Success | ProcessEnd::Continued | 
ProcessEnd::ExitSignal(_) => None,\n\t\t\t\t\tProcessEnd::ExitError(err) => Some(err.get()),\n\t\t\t\t\tProcessEnd::ExitStop(code) => Some(code.get().into()),\n\t\t\t\t\tProcessEnd::Exception(exc) => Some(exc.get().into()),\n\t\t\t\t},\n\t\t\t\tsignal: if let ProcessEnd::ExitSignal(sig) = &end {\n\t\t\t\t\tSome(*sig)\n\t\t\t\t} else {\n\t\t\t\t\tNone\n\t\t\t\t},\n\t\t\t\tdisposition: Some(match end {\n\t\t\t\t\tProcessEnd::Success => ProcessDisposition::Success,\n\t\t\t\t\tProcessEnd::ExitError(_) => ProcessDisposition::Error,\n\t\t\t\t\tProcessEnd::ExitSignal(_) => ProcessDisposition::Signal,\n\t\t\t\t\tProcessEnd::ExitStop(_) => ProcessDisposition::Stop,\n\t\t\t\t\tProcessEnd::Exception(_) => ProcessDisposition::Exception,\n\t\t\t\t\tProcessEnd::Continued => ProcessDisposition::Continued,\n\t\t\t\t}),\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t\tTag::Unknown => Self::default(),\n\t\t}\n\t}\n}\n\n#[allow(\n\tclippy::fallible_impl_from,\n\treason = \"this triggers due to the unwraps, which are checked by branches\"\n)]\n#[allow(\n\tclippy::too_many_lines,\n\treason = \"clearer as a single match tree than broken up\"\n)]\nimpl From<SerdeTag> for Tag {\n\tfn from(value: SerdeTag) -> Self {\n\t\tmatch value {\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Path,\n\t\t\t\tabsolute: Some(path),\n\t\t\t\tfiletype,\n\t\t\t\t..\n\t\t\t} => Self::Path {\n\t\t\t\tpath,\n\t\t\t\tfile_type: filetype,\n\t\t\t},\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Fs,\n\t\t\t\tfull: Some(full),\n\t\t\t\t..\n\t\t\t} => Self::FileEventKind(match full.as_str() {\n\t\t\t\t\"Any\" => EventKind::Any,\n\t\t\t\t\"Access(Any)\" => EventKind::Access(AccessKind::Any),\n\t\t\t\t\"Access(Read)\" => EventKind::Access(AccessKind::Read),\n\t\t\t\t\"Access(Open(Any))\" => EventKind::Access(AccessKind::Open(AccessMode::Any)),\n\t\t\t\t\"Access(Open(Execute))\" => EventKind::Access(AccessKind::Open(AccessMode::Execute)),\n\t\t\t\t\"Access(Open(Read))\" => 
EventKind::Access(AccessKind::Open(AccessMode::Read)),\n\t\t\t\t\"Access(Open(Write))\" => EventKind::Access(AccessKind::Open(AccessMode::Write)),\n\t\t\t\t\"Access(Open(Other))\" => EventKind::Access(AccessKind::Open(AccessMode::Other)),\n\t\t\t\t\"Access(Close(Any))\" => EventKind::Access(AccessKind::Close(AccessMode::Any)),\n\t\t\t\t\"Access(Close(Execute))\" => {\n\t\t\t\t\tEventKind::Access(AccessKind::Close(AccessMode::Execute))\n\t\t\t\t}\n\t\t\t\t\"Access(Close(Read))\" => EventKind::Access(AccessKind::Close(AccessMode::Read)),\n\t\t\t\t\"Access(Close(Write))\" => EventKind::Access(AccessKind::Close(AccessMode::Write)),\n\t\t\t\t\"Access(Close(Other))\" => EventKind::Access(AccessKind::Close(AccessMode::Other)),\n\t\t\t\t\"Access(Other)\" => EventKind::Access(AccessKind::Other),\n\t\t\t\t\"Create(Any)\" => EventKind::Create(CreateKind::Any),\n\t\t\t\t\"Create(File)\" => EventKind::Create(CreateKind::File),\n\t\t\t\t\"Create(Folder)\" => EventKind::Create(CreateKind::Folder),\n\t\t\t\t\"Create(Other)\" => EventKind::Create(CreateKind::Other),\n\t\t\t\t\"Modify(Any)\" => EventKind::Modify(ModifyKind::Any),\n\t\t\t\t\"Modify(Data(Any))\" => EventKind::Modify(ModifyKind::Data(DataChange::Any)),\n\t\t\t\t\"Modify(Data(Size))\" => EventKind::Modify(ModifyKind::Data(DataChange::Size)),\n\t\t\t\t\"Modify(Data(Content))\" => EventKind::Modify(ModifyKind::Data(DataChange::Content)),\n\t\t\t\t\"Modify(Data(Other))\" => EventKind::Modify(ModifyKind::Data(DataChange::Other)),\n\t\t\t\t\"Modify(Metadata(Any))\" => {\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::Any))\n\t\t\t\t}\n\t\t\t\t\"Modify(Metadata(AccessTime))\" => {\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::AccessTime))\n\t\t\t\t}\n\t\t\t\t\"Modify(Metadata(WriteTime))\" => {\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::WriteTime))\n\t\t\t\t}\n\t\t\t\t\"Modify(Metadata(Permissions))\" => 
{\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::Permissions))\n\t\t\t\t}\n\t\t\t\t\"Modify(Metadata(Ownership))\" => {\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::Ownership))\n\t\t\t\t}\n\t\t\t\t\"Modify(Metadata(Extended))\" => {\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::Extended))\n\t\t\t\t}\n\t\t\t\t\"Modify(Metadata(Other))\" => {\n\t\t\t\t\tEventKind::Modify(ModifyKind::Metadata(MetadataKind::Other))\n\t\t\t\t}\n\t\t\t\t\"Modify(Name(Any))\" => EventKind::Modify(ModifyKind::Name(RenameMode::Any)),\n\t\t\t\t\"Modify(Name(To))\" => EventKind::Modify(ModifyKind::Name(RenameMode::To)),\n\t\t\t\t\"Modify(Name(From))\" => EventKind::Modify(ModifyKind::Name(RenameMode::From)),\n\t\t\t\t\"Modify(Name(Both))\" => EventKind::Modify(ModifyKind::Name(RenameMode::Both)),\n\t\t\t\t\"Modify(Name(Other))\" => EventKind::Modify(ModifyKind::Name(RenameMode::Other)),\n\t\t\t\t\"Modify(Other)\" => EventKind::Modify(ModifyKind::Other),\n\t\t\t\t\"Remove(Any)\" => EventKind::Remove(RemoveKind::Any),\n\t\t\t\t\"Remove(File)\" => EventKind::Remove(RemoveKind::File),\n\t\t\t\t\"Remove(Folder)\" => EventKind::Remove(RemoveKind::Folder),\n\t\t\t\t\"Remove(Other)\" => EventKind::Remove(RemoveKind::Other),\n\t\t\t\t_ => EventKind::Other, // and literal \"Other\"\n\t\t\t}),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Fs,\n\t\t\t\tsimple: Some(simple),\n\t\t\t\t..\n\t\t\t} => Self::FileEventKind(match simple {\n\t\t\t\tFsEventKind::Access => EventKind::Access(AccessKind::Any),\n\t\t\t\tFsEventKind::Create => EventKind::Create(CreateKind::Any),\n\t\t\t\tFsEventKind::Modify => EventKind::Modify(ModifyKind::Any),\n\t\t\t\tFsEventKind::Remove => EventKind::Remove(RemoveKind::Any),\n\t\t\t\tFsEventKind::Other => EventKind::Other,\n\t\t\t}),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Source,\n\t\t\t\tsource: Some(source),\n\t\t\t\t..\n\t\t\t} => Self::Source(source),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Keyboard,\n\t\t\t\tkeycode: 
Some(keycode),\n\t\t\t\t..\n\t\t\t} => Self::Keyboard(keycode),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Process,\n\t\t\t\tpid: Some(pid),\n\t\t\t\t..\n\t\t\t} => Self::Process(pid),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Signal,\n\t\t\t\tsignal: Some(sig),\n\t\t\t\t..\n\t\t\t} => Self::Signal(sig),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: None | Some(ProcessDisposition::Unknown),\n\t\t\t\t..\n\t\t\t} => Self::ProcessCompletion(None),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Success),\n\t\t\t\t..\n\t\t\t} => Self::ProcessCompletion(Some(ProcessEnd::Success)),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Continued),\n\t\t\t\t..\n\t\t\t} => Self::ProcessCompletion(Some(ProcessEnd::Continued)),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Signal),\n\t\t\t\tsignal: Some(sig),\n\t\t\t\t..\n\t\t\t} => Self::ProcessCompletion(Some(ProcessEnd::ExitSignal(sig))),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Error),\n\t\t\t\tcode: Some(err),\n\t\t\t\t..\n\t\t\t} if err != 0 => Self::ProcessCompletion(Some(ProcessEnd::ExitError(unsafe {\n\t\t\t\tNonZeroI64::new_unchecked(err)\n\t\t\t}))),\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Stop),\n\t\t\t\tcode: Some(code),\n\t\t\t\t..\n\t\t\t} if code != 0 && i32::try_from(code).is_ok() => {\n\t\t\t\tSelf::ProcessCompletion(Some(ProcessEnd::ExitStop(unsafe {\n\t\t\t\t\t// SAFETY&UNWRAP: checked above\n\t\t\t\t\tNonZeroI32::new_unchecked(code.try_into().unwrap())\n\t\t\t\t})))\n\t\t\t}\n\t\t\tSerdeTag {\n\t\t\t\tkind: TagKind::Completion,\n\t\t\t\tdisposition: Some(ProcessDisposition::Exception),\n\t\t\t\tcode: Some(exc),\n\t\t\t\t..\n\t\t\t} if exc != 0 && i32::try_from(exc).is_ok() => 
{\n\t\t\t\tSelf::ProcessCompletion(Some(ProcessEnd::Exception(unsafe {\n\t\t\t\t\t// SAFETY&UNWRAP: checked above\n\t\t\t\t\tNonZeroI32::new_unchecked(exc.try_into().unwrap())\n\t\t\t\t})))\n\t\t\t}\n\t\t\t_ => Self::Unknown,\n\t\t}\n\t}\n}\n\n#[derive(Clone, Debug, Default, Serialize, Deserialize)]\npub struct SerdeEvent {\n\t#[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n\ttags: Vec<Tag>,\n\n\t// for a consistent serialization order\n\t#[serde(default, skip_serializing_if = \"BTreeMap::is_empty\")]\n\tmetadata: BTreeMap<String, Vec<String>>,\n}\n\nimpl From<Event> for SerdeEvent {\n\tfn from(Event { tags, metadata }: Event) -> Self {\n\t\tSelf {\n\t\t\ttags,\n\t\t\tmetadata: metadata.into_iter().collect(),\n\t\t}\n\t}\n}\n\nimpl From<SerdeEvent> for Event {\n\tfn from(SerdeEvent { tags, metadata }: SerdeEvent) -> Self {\n\t\tSelf {\n\t\t\ttags,\n\t\t\tmetadata: metadata.into_iter().collect(),\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/events/tests/json.rs",
    "content": "#![cfg(feature = \"serde\")]\n\nuse std::num::{NonZeroI32, NonZeroI64};\n\nuse snapbox::{assert_data_eq, file};\nuse watchexec_events::{\n\tfilekind::{CreateKind, FileEventKind as EventKind, ModifyKind, RemoveKind, RenameMode},\n\tEvent, FileType, Keyboard, ProcessEnd, Source, Tag,\n};\nuse watchexec_signals::Signal;\n\nfn parse_file(path: &str) -> Vec<Event> {\n\tserde_json::from_str(&std::fs::read_to_string(path).unwrap()).unwrap()\n}\n\n#[test]\nfn single() {\n\tlet single = Event {\n\t\ttags: vec![Tag::Source(Source::Internal)],\n\t\tmetadata: Default::default(),\n\t};\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(&single).unwrap(),\n\t\tfile![\"snapshots/single.json\"],\n\t);\n\n\tassert_eq!(\n\t\tserde_json::from_str::<Event>(\n\t\t\t&std::fs::read_to_string(\"tests/snapshots/single.json\").unwrap()\n\t\t)\n\t\t.unwrap(),\n\t\tsingle\n\t);\n}\n\n#[test]\nfn array() {\n\tlet array = &[\n\t\tEvent {\n\t\t\ttags: vec![Tag::Source(Source::Internal)],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::Success)),\n\t\t\t\tTag::Process(123),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![Tag::Keyboard(Keyboard::Eof)],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t];\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(array).unwrap(),\n\t\tfile![\"snapshots/array.json\"],\n\t);\n\n\tassert_eq!(parse_file(\"tests/snapshots/array.json\"), array);\n}\n\n#[test]\nfn metadata() {\n\tlet metadata = &[Event {\n\t\ttags: vec![Tag::Source(Source::Internal)],\n\t\tmetadata: [\n\t\t\t(\"Dafan\".into(), vec![\"Mountain\".into()]),\n\t\t\t(\"Lan\".into(), vec![\"Zhan\".into()]),\n\t\t]\n\t\t.into(),\n\t}];\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(metadata).unwrap(),\n\t\tfile![\"snapshots/metadata.json\"],\n\t);\n\n\tassert_eq!(parse_file(\"tests/snapshots/metadata.json\"), metadata);\n}\n\n#[test]\nfn asymmetric() 
{\n\t// asymmetric because these have information loss or missing fields\n\n\tassert_eq!(\n\t\tparse_file(\"tests/snapshots/asymmetric.json\"),\n\t\t&[\n\t\t\tEvent {\n\t\t\t\ttags: vec![\n\t\t\t\t\t// no filetype field\n\t\t\t\t\tTag::Path {\n\t\t\t\t\t\tpath: \"/foo/bar/baz\".into(),\n\t\t\t\t\t\tfile_type: None\n\t\t\t\t\t},\n\t\t\t\t\t// fs with only simple representation\n\t\t\t\t\tTag::FileEventKind(EventKind::Create(CreateKind::Any)),\n\t\t\t\t\t// unparsable of a known kind\n\t\t\t\t\tTag::Unknown,\n\t\t\t\t],\n\t\t\t\tmetadata: Default::default(),\n\t\t\t},\n\t\t\tEvent {\n\t\t\t\ttags: vec![\n\t\t\t\t\t// no simple field\n\t\t\t\t\tTag::FileEventKind(EventKind::Modify(ModifyKind::Other)),\n\t\t\t\t\t// no disposition field\n\t\t\t\t\tTag::ProcessCompletion(None)\n\t\t\t\t],\n\t\t\t\tmetadata: Default::default(),\n\t\t\t},\n\t\t]\n\t);\n}\n\n#[test]\nfn sources() {\n\tlet sources = vec![\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Source(Source::Filesystem),\n\t\t\t\tTag::Source(Source::Keyboard),\n\t\t\t\tTag::Source(Source::Mouse),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Source(Source::Os),\n\t\t\t\tTag::Source(Source::Time),\n\t\t\t\tTag::Source(Source::Internal),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t];\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(&sources).unwrap(),\n\t\tfile![\"snapshots/sources.json\"],\n\t);\n\n\tassert_eq!(parse_file(\"tests/snapshots/sources.json\"), sources);\n}\n\n#[test]\nfn signals() {\n\tlet signals = vec![\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Signal(Signal::Interrupt),\n\t\t\t\tTag::Signal(Signal::User1),\n\t\t\t\tTag::Signal(Signal::ForceStop),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Signal(Signal::Custom(66)),\n\t\t\t\tTag::Signal(Signal::Custom(0)),\n\t\t\t],\n\t\t\tmetadata: 
Default::default(),\n\t\t},\n\t];\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(&signals).unwrap(),\n\t\tfile![\"snapshots/signals.json\"],\n\t);\n\n\tassert_eq!(parse_file(\"tests/snapshots/signals.json\"), signals);\n}\n\n#[test]\nfn completions() {\n\tlet completions = vec![\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::ProcessCompletion(None),\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::Success)),\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::Continued)),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::ExitError(NonZeroI64::new(12).unwrap()))),\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::ExitSignal(Signal::Interrupt))),\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::ExitSignal(Signal::Custom(34)))),\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::ExitStop(NonZeroI32::new(56).unwrap()))),\n\t\t\t\tTag::ProcessCompletion(Some(ProcessEnd::Exception(NonZeroI32::new(78).unwrap()))),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t];\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(&completions).unwrap(),\n\t\tfile![\"snapshots/completions.json\"],\n\t);\n\n\tassert_eq!(parse_file(\"tests/snapshots/completions.json\"), completions);\n}\n\n#[test]\nfn paths() {\n\tlet paths = vec![\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Path {\n\t\t\t\t\tpath: \"/foo/bar/baz\".into(),\n\t\t\t\t\tfile_type: Some(FileType::Symlink),\n\t\t\t\t},\n\t\t\t\tTag::FileEventKind(EventKind::Create(CreateKind::File)),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Path {\n\t\t\t\t\tpath: \"/rename/from/this\".into(),\n\t\t\t\t\tfile_type: Some(FileType::File),\n\t\t\t\t},\n\t\t\t\tTag::Path {\n\t\t\t\t\tpath: \"/rename/into/that\".into(),\n\t\t\t\t\tfile_type: Some(FileType::Other),\n\t\t\t\t},\n\t\t\t\tTag::FileEventKind(EventKind::Modify(ModifyKind::Name(RenameMode::Both))),\n\t\t\t],\n\t\t\tmetadata: 
Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![\n\t\t\t\tTag::Path {\n\t\t\t\t\tpath: \"/delete/this\".into(),\n\t\t\t\t\tfile_type: Some(FileType::Dir),\n\t\t\t\t},\n\t\t\t\tTag::Path {\n\t\t\t\t\tpath: \"/\".into(),\n\t\t\t\t\tfile_type: None,\n\t\t\t\t},\n\t\t\t\tTag::FileEventKind(EventKind::Remove(RemoveKind::Any)),\n\t\t\t],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t];\n\n\tassert_data_eq!(\n\t\tserde_json::to_string_pretty(&paths).unwrap(),\n\t\tfile![\"snapshots/paths.json\"],\n\t);\n\n\tassert_eq!(parse_file(\"tests/snapshots/paths.json\"), paths);\n}\n"
  },
  {
    "path": "crates/events/tests/snapshots/array.json",
    "content": "[\n  {\n    \"tags\": [\n      {\n        \"kind\": \"source\",\n        \"source\": \"internal\"\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"success\"\n      },\n      {\n        \"kind\": \"process\",\n        \"pid\": 123\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"keyboard\",\n        \"keycode\": \"eof\"\n      }\n    ]\n  }\n]"
  },
  {
    "path": "crates/events/tests/snapshots/asymmetric.json",
    "content": "[\n\t{\n\t\t\"tags\": [\n\t\t\t{\n\t\t\t\t\"kind\": \"path\",\n\t\t\t\t\"absolute\": \"/foo/bar/baz\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"kind\": \"fs\",\n\t\t\t\t\"simple\": \"create\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"kind\": \"fs\"\n\t\t\t}\n\t\t]\n\t},\n\t{\n\t\t\"tags\": [\n\t\t\t{\n\t\t\t\t\"kind\": \"fs\",\n\t\t\t\t\"full\": \"Modify(Other)\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"kind\": \"completion\"\n\t\t\t}\n\t\t]\n\t}\n]\n"
  },
  {
    "path": "crates/events/tests/snapshots/completions.json",
    "content": "[\n  {\n    \"tags\": [\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"unknown\"\n      },\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"success\"\n      },\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"continued\"\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"error\",\n        \"code\": 12\n      },\n      {\n        \"kind\": \"completion\",\n        \"signal\": \"SIGINT\",\n        \"disposition\": \"signal\"\n      },\n      {\n        \"kind\": \"completion\",\n        \"signal\": 34,\n        \"disposition\": \"signal\"\n      },\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"stop\",\n        \"code\": 56\n      },\n      {\n        \"kind\": \"completion\",\n        \"disposition\": \"exception\",\n        \"code\": 78\n      }\n    ]\n  }\n]"
  },
  {
    "path": "crates/events/tests/snapshots/metadata.json",
    "content": "[\n  {\n    \"tags\": [\n      {\n        \"kind\": \"source\",\n        \"source\": \"internal\"\n      }\n    ],\n    \"metadata\": {\n      \"Dafan\": [\n        \"Mountain\"\n      ],\n      \"Lan\": [\n        \"Zhan\"\n      ]\n    }\n  }\n]"
  },
  {
    "path": "crates/events/tests/snapshots/paths.json",
    "content": "[\n  {\n    \"tags\": [\n      {\n        \"kind\": \"path\",\n        \"absolute\": \"/foo/bar/baz\",\n        \"filetype\": \"symlink\"\n      },\n      {\n        \"kind\": \"fs\",\n        \"simple\": \"create\",\n        \"full\": \"Create(File)\"\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"path\",\n        \"absolute\": \"/rename/from/this\",\n        \"filetype\": \"file\"\n      },\n      {\n        \"kind\": \"path\",\n        \"absolute\": \"/rename/into/that\",\n        \"filetype\": \"other\"\n      },\n      {\n        \"kind\": \"fs\",\n        \"simple\": \"modify\",\n        \"full\": \"Modify(Name(Both))\"\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"path\",\n        \"absolute\": \"/delete/this\",\n        \"filetype\": \"dir\"\n      },\n      {\n        \"kind\": \"path\",\n        \"absolute\": \"/\"\n      },\n      {\n        \"kind\": \"fs\",\n        \"simple\": \"remove\",\n        \"full\": \"Remove(Any)\"\n      }\n    ]\n  }\n]"
  },
  {
    "path": "crates/events/tests/snapshots/signals.json",
    "content": "[\n  {\n    \"tags\": [\n      {\n        \"kind\": \"signal\",\n        \"signal\": \"SIGINT\"\n      },\n      {\n        \"kind\": \"signal\",\n        \"signal\": \"SIGUSR1\"\n      },\n      {\n        \"kind\": \"signal\",\n        \"signal\": \"SIGKILL\"\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"signal\",\n        \"signal\": 66\n      },\n      {\n        \"kind\": \"signal\",\n        \"signal\": 0\n      }\n    ]\n  }\n]"
  },
  {
    "path": "crates/events/tests/snapshots/single.json",
    "content": "{\n  \"tags\": [\n    {\n      \"kind\": \"source\",\n      \"source\": \"internal\"\n    }\n  ]\n}"
  },
  {
    "path": "crates/events/tests/snapshots/sources.json",
    "content": "[\n  {\n    \"tags\": [\n      {\n        \"kind\": \"source\",\n        \"source\": \"filesystem\"\n      },\n      {\n        \"kind\": \"source\",\n        \"source\": \"keyboard\"\n      },\n      {\n        \"kind\": \"source\",\n        \"source\": \"mouse\"\n      }\n    ]\n  },\n  {\n    \"tags\": [\n      {\n        \"kind\": \"source\",\n        \"source\": \"os\"\n      },\n      {\n        \"kind\": \"source\",\n        \"source\": \"time\"\n      },\n      {\n        \"kind\": \"source\",\n        \"source\": \"internal\"\n      }\n    ]\n  }\n]"
  },
  {
    "path": "crates/filterer/globset/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v8.0.0 (2025-05-15)\n\n## v7.0.0 (2025-02-09)\n\n## v6.0.0 (2024-10-14)\n\n- Deps: watchexec 5\n\n## v5.0.0 (2024-10-13)\n\n- Add whitelist parameter.\n\n## v4.0.1 (2024-04-28)\n\n- Hide fmt::Debug spew from ignore crate, use `full_debug` feature to restore.\n\n## v4.0.0 (2024-04-20)\n\n- Deps: watchexec 4\n\n## v3.0.0 (2024-01-01)\n\n- Deps: `watchexec-filterer-ignore` and `ignore-files`\n\n## v2.0.1 (2023-12-09)\n\n- Depend on `watchexec-events` instead of the `watchexec` re-export.\n\n## v1.2.0 (2023-03-18)\n\n- Ditch MSRV policy. The `rust-version` indication will remain, for the minimum estimated Rust version for the code features used in the crate's own code, but dependencies may have already moved on. From now on, only latest stable is assumed and tested for. ([#510](https://github.com/watchexec/watchexec/pull/510))\n\n## v1.1.0 (2023-01-09)\n\n- MSRV: bump to 1.61.0\n\n## v1.0.1 (2022-09-07)\n\n- Deps: update miette to 5.3.0\n\n## v1.0.0 (2022-06-23)\n\n- Initial release as a separate crate.\n"
  },
  {
    "path": "crates/filterer/globset/Cargo.toml",
    "content": "[package]\nname = \"watchexec-filterer-globset\"\nversion = \"8.0.0\"\n\nauthors = [\"Matt Green <mattgreenrocks@gmail.com>\", \"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0\"\ndescription = \"Watchexec filterer component based on globset\"\nkeywords = [\"watchexec\", \"filterer\", \"globset\"]\n\ndocumentation = \"https://docs.rs/watchexec-filterer-globset\"\nhomepage = \"https://watchexec.github.io\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.61.0\"\nedition = \"2021\"\n\n[dependencies]\nignore = \"0.4.18\"\ntracing = \"0.1.40\"\n\n[dependencies.ignore-files]\nversion = \"3.0.5\"\npath = \"../../ignore-files\"\n\n[dependencies.watchexec]\nversion = \"8.2.0\"\npath = \"../../lib\"\n\n[dependencies.watchexec-events]\nversion = \"6.1.0\"\npath = \"../../events\"\n\n[dependencies.watchexec-filterer-ignore]\nversion = \"7.0.0\"\npath = \"../ignore\"\n\n[dev-dependencies]\ntracing-subscriber = \"0.3.6\"\ntempfile = \"3.16.0\"\n\n[dev-dependencies.tokio]\nversion = \"1.33.0\"\nfeatures = [\n\t\"fs\",\n\t\"io-std\",\n\t\"rt\",\n\t\"rt-multi-thread\",\n\t\"macros\",\n]\n\n[features]\ndefault = []\n\n## Don't hide ignore::gitignore::Gitignore Debug impl\nfull_debug = []\n"
  },
  {
    "path": "crates/filterer/globset/README.md",
    "content": "[![Crates.io page](https://badgen.net/crates/v/watchexec-filterer-globset)](https://crates.io/crates/watchexec-filterer-globset)\n[![API Docs](https://docs.rs/watchexec-filterer-globset/badge.svg)][docs]\n[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]\n[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)\n\n# Watchexec filterer: globset\n\n_The default filterer implementation for Watchexec._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license].\n- Status: maintained.\n\n[docs]: https://docs.rs/watchexec-filterer-globset\n[license]: ../../../LICENSE\n"
  },
  {
    "path": "crates/filterer/globset/release.toml",
    "content": "pre-release-commit-message = \"release: filterer-globset v{{version}}\"\ntag-prefix = \"watchexec-filterer-globset-\"\ntag-message = \"watchexec-filterer-globset {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/filterer/globset/src/lib.rs",
    "content": "//! A path-only Watchexec filterer based on globsets.\n//!\n//! This filterer mimics the behavior of the `watchexec` v1 filter, but does not match it exactly,\n//! due to differing internals. It is used as the default filterer in Watchexec CLI currently.\n\n#![doc(html_favicon_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![doc(html_logo_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![warn(clippy::unwrap_used, missing_docs)]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n#![deny(rust_2018_idioms)]\n\nuse std::{\n\tffi::OsString,\n\tpath::{Path, PathBuf},\n};\n\nuse ignore::gitignore::{Gitignore, GitignoreBuilder};\nuse ignore_files::{Error, IgnoreFile, IgnoreFilter};\nuse tracing::{debug, trace, trace_span};\nuse watchexec::{error::RuntimeError, filter::Filterer};\nuse watchexec_events::{Event, FileType, Priority};\nuse watchexec_filterer_ignore::IgnoreFilterer;\n\n/// A simple filterer in the style of the watchexec v1.17 filter.\n#[cfg_attr(feature = \"full_debug\", derive(Debug))]\npub struct GlobsetFilterer {\n\t#[cfg_attr(not(unix), allow(dead_code))]\n\torigin: PathBuf,\n\tfilters: Gitignore,\n\tignores: Gitignore,\n\twhitelist: Vec<PathBuf>,\n\tignore_files: IgnoreFilterer,\n\textensions: Vec<OsString>,\n}\n\n#[cfg(not(feature = \"full_debug\"))]\nimpl std::fmt::Debug for GlobsetFilterer {\n\tfn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n\t\tf.debug_struct(\"GlobsetFilterer\")\n\t\t\t.field(\"origin\", &self.origin)\n\t\t\t.field(\"filters\", &\"ignore::gitignore::Gitignore{...}\")\n\t\t\t.field(\"ignores\", &\"ignore::gitignore::Gitignore{...}\")\n\t\t\t.field(\"ignore_files\", &self.ignore_files)\n\t\t\t.field(\"extensions\", &self.extensions)\n\t\t\t.finish()\n\t}\n}\n\nimpl GlobsetFilterer {\n\t/// Create a new `GlobsetFilterer` from a project origin, allowed extensions, and lists of globs.\n\t///\n\t/// The first list is used to filter paths (only matching paths will pass 
the filter), the\n\t/// second is used to ignore paths (matching paths will fail the pattern). If the filter list is\n\t/// empty, only the ignore list will be used. If both lists are empty, the filter always passes.\n\t/// Whitelist is used to automatically accept files even if they would be filtered out\n\t/// otherwise. It is passed as an absolute path to the file that should not be filtered.\n\t///\n\t/// Ignores and filters are passed as a tuple of the glob pattern as a string and an optional\n\t/// path of the folder the pattern should apply in (e.g. the folder a gitignore file is in).\n\t/// A `None` to the latter will mark the pattern as being global.\n\t///\n\t/// The extensions list is used to filter files by extension.\n\t///\n\t/// Non-path events are always passed.\n\t#[allow(clippy::future_not_send)]\n\tpub async fn new(\n\t\torigin: impl AsRef<Path>,\n\t\tfilters: impl IntoIterator<Item = (String, Option<PathBuf>)>,\n\t\tignores: impl IntoIterator<Item = (String, Option<PathBuf>)>,\n\t\twhitelist: impl IntoIterator<Item = PathBuf>,\n\t\tignore_files: impl IntoIterator<Item = IgnoreFile>,\n\t\textensions: impl IntoIterator<Item = OsString>,\n\t) -> Result<Self, Error> {\n\t\tlet origin = origin.as_ref();\n\t\tlet mut filters_builder = GitignoreBuilder::new(origin);\n\t\tlet mut ignores_builder = GitignoreBuilder::new(origin);\n\n\t\tfor (filter, in_path) in filters {\n\t\t\ttrace!(filter=?&filter, \"add filter to globset filterer\");\n\t\t\tfilters_builder\n\t\t\t\t.add_line(in_path.clone(), &filter)\n\t\t\t\t.map_err(|err| Error::Glob { file: in_path, err })?;\n\t\t}\n\n\t\tfor (ignore, in_path) in ignores {\n\t\t\ttrace!(ignore=?&ignore, \"add ignore to globset filterer\");\n\t\t\tignores_builder\n\t\t\t\t.add_line(in_path.clone(), &ignore)\n\t\t\t\t.map_err(|err| Error::Glob { file: in_path, err })?;\n\t\t}\n\n\t\tlet filters = filters_builder\n\t\t\t.build()\n\t\t\t.map_err(|err| Error::Glob { file: None, err })?;\n\t\tlet ignores = 
ignores_builder\n\t\t\t.build()\n\t\t\t.map_err(|err| Error::Glob { file: None, err })?;\n\n\t\tlet extensions: Vec<OsString> = extensions.into_iter().collect();\n\n\t\tlet mut ignore_files =\n\t\t\tIgnoreFilter::new(origin, &ignore_files.into_iter().collect::<Vec<_>>()).await?;\n\t\tignore_files.finish();\n\t\tlet ignore_files = IgnoreFilterer(ignore_files);\n\n\t\tlet whitelist = whitelist.into_iter().collect::<Vec<_>>();\n\n\t\tdebug!(\n\t\t\t?origin,\n\t\t\tnum_filters=%filters.num_ignores(),\n\t\t\tnum_neg_filters=%filters.num_whitelists(),\n\t\t\tnum_ignores=%ignores.num_ignores(),\n\t\t\tnum_in_ignore_files=?ignore_files.0.num_ignores(),\n\t\t\tnum_neg_ignores=%ignores.num_whitelists(),\n\t\t\tnum_extensions=%extensions.len(),\n\t\t\"globset filterer built\");\n\n\t\tOk(Self {\n\t\t\torigin: origin.into(),\n\t\t\tfilters,\n\t\t\tignores,\n\t\t\twhitelist,\n\t\t\tignore_files,\n\t\t\textensions,\n\t\t})\n\t}\n}\n\nimpl Filterer for GlobsetFilterer {\n\t/// Filter an event.\n\t///\n\t/// This implementation never errors.\n\tfn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> {\n\t\tlet _span = trace_span!(\"filterer_check\").entered();\n\n\t\t{\n\t\t\ttrace!(\"checking internal whitelist\");\n\t\t\t// Ideally check path equality backwards for better perf\n\t\t\t// There could be long matching prefixes so we will exit late\n\t\t\tif event\n\t\t\t\t.paths()\n\t\t\t\t.any(|(p, _)| self.whitelist.iter().any(|w| w == p))\n\t\t\t{\n\t\t\t\ttrace!(\"internal whitelist filterer matched (success)\");\n\t\t\t\treturn Ok(true);\n\t\t\t}\n\t\t}\n\n\t\t{\n\t\t\ttrace!(\"checking internal ignore filterer\");\n\t\t\tif !self\n\t\t\t\t.ignore_files\n\t\t\t\t.check_event(event, priority)\n\t\t\t\t.expect(\"IgnoreFilterer never errors\")\n\t\t\t{\n\t\t\t\ttrace!(\"internal ignore filterer matched (fail)\");\n\t\t\t\treturn Ok(false);\n\t\t\t}\n\t\t}\n\n\t\tlet mut paths = event.paths().peekable();\n\t\tif paths.peek().is_none() 
{\n\t\t\ttrace!(\"non-path event (pass)\");\n\t\t\tOk(true)\n\t\t} else {\n\t\t\tOk(paths.any(|(path, file_type)| {\n\t\t\t\tlet _span = trace_span!(\"path\", ?path).entered();\n\t\t\t\tlet is_dir = file_type.map_or(false, |t| matches!(t, FileType::Dir));\n\n\t\t\t\tif self.ignores.matched(path, is_dir).is_ignore() {\n\t\t\t\t\ttrace!(\"ignored by globset ignore\");\n\t\t\t\t\treturn false;\n\t\t\t\t}\n\n\t\t\t\tlet mut filtered = false;\n\t\t\t\tif self.filters.num_ignores() > 0 {\n\t\t\t\t\ttrace!(\"running through glob filters\");\n\t\t\t\t\tfiltered = true;\n\n\t\t\t\t\tif self.filters.matched(path, is_dir).is_ignore() {\n\t\t\t\t\t\ttrace!(\"allowed by globset filters\");\n\t\t\t\t\t\treturn true;\n\t\t\t\t\t}\n\n\t\t\t\t\t// Watchexec 1.x bug, TODO remove at 2.0\n\t\t\t\t\t#[cfg(unix)]\n\t\t\t\t\tif let Ok(based) = path.strip_prefix(&self.origin) {\n\t\t\t\t\t\tlet rebased = {\n\t\t\t\t\t\t\tuse std::path::MAIN_SEPARATOR;\n\t\t\t\t\t\t\tlet mut b = self.origin.clone().into_os_string();\n\t\t\t\t\t\t\tb.push(PathBuf::from(String::from(MAIN_SEPARATOR)));\n\t\t\t\t\t\t\tb.push(PathBuf::from(String::from(MAIN_SEPARATOR)));\n\t\t\t\t\t\t\tb.push(based.as_os_str());\n\t\t\t\t\t\t\tb\n\t\t\t\t\t\t};\n\n\t\t\t\t\t\ttrace!(?rebased, \"testing on rebased path, 1.x bug compat (#258)\");\n\t\t\t\t\t\tif self.filters.matched(rebased, is_dir).is_ignore() {\n\t\t\t\t\t\t\ttrace!(\"allowed by globset filters, 1.x bug compat (#258)\");\n\t\t\t\t\t\t\treturn true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif !self.extensions.is_empty() {\n\t\t\t\t\ttrace!(\"running through extension filters\");\n\t\t\t\t\tfiltered = true;\n\n\t\t\t\t\tif is_dir {\n\t\t\t\t\t\ttrace!(\"failed on extension check due to being a dir\");\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\n\t\t\t\t\tif let Some(ext) = path.extension() {\n\t\t\t\t\t\tif self.extensions.iter().any(|e| e == ext) {\n\t\t\t\t\t\t\ttrace!(\"allowed by extension filter\");\n\t\t\t\t\t\t\treturn 
true;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\ttrace!(\n\t\t\t\t\t\t\t?path,\n\t\t\t\t\t\t\t\"failed on extension check due to having no extension\"\n\t\t\t\t\t\t);\n\t\t\t\t\t\treturn false;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t!filtered\n\t\t\t}))\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/filterer/globset/tests/filtering.rs",
    "content": "mod helpers;\nuse helpers::globset::*;\nuse std::io::Write;\n\n#[tokio::test]\nasync fn empty_filter_passes_everything() {\n\tlet filterer = filt(&[], &[], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/test/Cargo.toml\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.file_does_pass(\"apples/carrots/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples/carrots/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.dir_does_pass(\"apples/oranges/bananas\");\n}\n\n#[tokio::test]\nasync fn exact_filename() {\n\tlet filterer = filt(&[\"Cargo.toml\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.file_doesnt_pass(\"Gemfile.toml\");\n\tfilterer.file_doesnt_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.dir_does_pass(\"/test/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn exact_filename_in_folder() {\n\tlet filterer = filt(&[\"sub/Cargo.toml\"], &[], &[], &[], 
&[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"sub/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.file_doesnt_pass(\"Gemfile.toml\");\n\tfilterer.file_doesnt_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.dir_does_pass(\"/test/sub/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn exact_filename_in_hidden_folder() {\n\tlet filterer = filt(&[\".sub/Cargo.toml\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\".sub/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.file_doesnt_pass(\"Gemfile.toml\");\n\tfilterer.file_doesnt_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.dir_does_pass(\"/test/.sub/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn exact_filenames_multiple() {\n\tlet filterer = filt(&[\"Cargo.toml\", \"package.json\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_does_pass(\"package.json\");\n\tfilterer.file_does_pass(\"/test/foo/bar/package.json\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.file_doesnt_pass(\"package.toml\");\n\tfilterer.file_doesnt_pass(\"Gemfile.toml\");\n\tfilterer.file_doesnt_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.dir_does_pass(\"/test/Cargo.toml\");\n\tfilterer.dir_does_pass(\"/test/package.json\");\n}\n\n#[tokio::test]\nasync fn glob_single_final_ext_star() {\n\tlet filterer = filt(&[\"Cargo.*\"], &[], &[], &[], 
&[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_doesnt_pass(\"Gemfile.toml\");\n\tfilterer.file_doesnt_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.dir_does_pass(\"Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn glob_star_trailing_slash() {\n\tlet filterer = filt(&[\"Cargo.*/\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.file_doesnt_pass(\"Gemfile.toml\");\n\tfilterer.file_doesnt_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.dir_does_pass(\"Cargo.toml\");\n\tfilterer.unk_doesnt_pass(\"Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn glob_star_leading_slash() {\n\tlet filterer = filt(&[\"/Cargo.*\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.dir_does_pass(\"Cargo.toml\");\n\tfilterer.unk_does_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"foo/Cargo.toml\");\n\tfilterer.dir_doesnt_pass(\"foo/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn glob_leading_double_star() {\n\tlet filterer = filt(&[\"**/possum\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"possum\");\n\tfilterer.file_does_pass(\"foo/bar/possum\");\n\tfilterer.file_does_pass(\"/foo/bar/possum\");\n\tfilterer.dir_does_pass(\"possum\");\n\tfilterer.dir_does_pass(\"foo/bar/possum\");\n\tfilterer.dir_does_pass(\"/foo/bar/possum\");\n\tfilterer.file_doesnt_pass(\"rat\");\n\tfilterer.file_doesnt_pass(\"foo/bar/rat\");\n\tfilterer.file_doesnt_pass(\"/foo/bar/rat\");\n}\n\n#[tokio::test]\nasync fn glob_trailing_double_star() {\n\tlet filterer = filt(&[\"possum/**\"], &[], &[], &[], &[]).await;\n\n\t// these do work by expectation and in 
v1\n\tfilterer.file_does_pass(\"/test/possum/foo/bar\");\n\tfilterer.dir_doesnt_pass(\"possum\");\n\tfilterer.dir_doesnt_pass(\"foo/bar/possum\");\n\tfilterer.dir_does_pass(\"possum/foo/bar\");\n\tfilterer.file_doesnt_pass(\"rat\");\n\tfilterer.file_doesnt_pass(\"foo/bar/rat\");\n\tfilterer.file_doesnt_pass(\"/foo/bar/rat\");\n}\n\n#[tokio::test]\nasync fn glob_middle_double_star() {\n\tlet filterer = filt(&[\"apples/**/oranges\"], &[], &[], &[], &[]).await;\n\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.file_does_pass(\"apples/carrots/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_doesnt_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples/carrots/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.dir_doesnt_pass(\"apples/oranges/bananas\");\n}\n\n#[tokio::test]\nasync fn glob_double_star_trailing_slash() {\n\tlet filterer = filt(&[\"apples/**/oranges/\"], &[], &[], &[], &[]).await;\n\n\tfilterer.dir_doesnt_pass(\"/a/folder\");\n\tfilterer.file_doesnt_pass(\"apples/carrots/oranges\");\n\tfilterer.file_doesnt_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_doesnt_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_doesnt_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples/carrots/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.dir_doesnt_pass(\"apples/oranges/bananas\");\n\tfilterer.unk_doesnt_pass(\"apples/carrots/oranges\");\n\tfilterer.unk_doesnt_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.unk_doesnt_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n}\n\n#[tokio::test]\nasync fn 
ignore_exact_filename() {\n\tlet filterer = filt(&[], &[\"Cargo.toml\"], &[], &[], &[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.dir_doesnt_pass(\"/test/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn ignore_exact_filename_in_folder() {\n\tlet filterer = filt(&[], &[\"sub/Cargo.toml\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"sub/Cargo.toml\");\n\tfilterer.file_does_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.dir_doesnt_pass(\"/test/sub/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn ignore_exact_filename_in_hidden_folder() {\n\tlet filterer = filt(&[], &[\".sub/Cargo.toml\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\".sub/Cargo.toml\");\n\tfilterer.file_does_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.dir_doesnt_pass(\"/test/.sub/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn ignore_exact_filenames_multiple() {\n\tlet filterer = filt(&[], &[\"Cargo.toml\", \"package.json\"], &[], &[], 
&[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"package.json\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/package.json\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"package.toml\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.dir_doesnt_pass(\"/test/Cargo.toml\");\n\tfilterer.dir_doesnt_pass(\"/test/package.json\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_single_final_ext_star() {\n\tlet filterer = filt(&[], &[\"Cargo.*\"], &[], &[], &[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.dir_doesnt_pass(\"Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_star_trailing_slash() {\n\tlet filterer = filt(&[], &[\"Cargo.*/\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\tfilterer.file_does_pass(\"Gemfile.toml\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.dir_doesnt_pass(\"Cargo.toml\");\n\tfilterer.unk_does_pass(\"Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_star_leading_slash() {\n\tlet filterer = filt(&[], &[\"/Cargo.*\"], &[], &[], &[]).await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"Cargo.json\");\n\tfilterer.dir_doesnt_pass(\"Cargo.toml\");\n\tfilterer.unk_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"foo/Cargo.toml\");\n\tfilterer.dir_does_pass(\"foo/Cargo.toml\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_leading_double_star() {\n\tlet filterer = filt(&[], &[\"**/possum\"], &[], &[], 
&[]).await;\n\n\tfilterer.file_doesnt_pass(\"possum\");\n\tfilterer.file_doesnt_pass(\"foo/bar/possum\");\n\tfilterer.file_doesnt_pass(\"/foo/bar/possum\");\n\tfilterer.dir_doesnt_pass(\"possum\");\n\tfilterer.dir_doesnt_pass(\"foo/bar/possum\");\n\tfilterer.dir_doesnt_pass(\"/foo/bar/possum\");\n\tfilterer.file_does_pass(\"rat\");\n\tfilterer.file_does_pass(\"foo/bar/rat\");\n\tfilterer.file_does_pass(\"/foo/bar/rat\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_trailing_double_star() {\n\tlet filterer = filt(&[], &[\"possum/**\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"possum\");\n\tfilterer.file_doesnt_pass(\"possum/foo/bar\");\n\tfilterer.file_does_pass(\"/possum/foo/bar\");\n\tfilterer.file_doesnt_pass(\"/test/possum/foo/bar\");\n\tfilterer.dir_does_pass(\"possum\");\n\tfilterer.dir_does_pass(\"foo/bar/possum\");\n\tfilterer.dir_does_pass(\"/foo/bar/possum\");\n\tfilterer.dir_doesnt_pass(\"possum/foo/bar\");\n\tfilterer.dir_does_pass(\"/possum/foo/bar\");\n\tfilterer.dir_doesnt_pass(\"/test/possum/foo/bar\");\n\tfilterer.file_does_pass(\"rat\");\n\tfilterer.file_does_pass(\"foo/bar/rat\");\n\tfilterer.file_does_pass(\"/foo/bar/rat\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_middle_double_star() {\n\tlet filterer = filt(&[], &[\"apples/**/oranges\"], &[], &[], &[]).await;\n\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.file_doesnt_pass(\"apples/carrots/oranges\");\n\tfilterer.file_doesnt_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_doesnt_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_doesnt_pass(\"apples/carrots/oranges\");\n\tfilterer.dir_doesnt_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_doesnt_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.dir_does_pass(\"apples/oranges/bananas\");\n}\n\n#[tokio::test]\nasync fn ignore_glob_double_star_trailing_slash() {\n\tlet filterer = filt(&[], 
&[\"apples/**/oranges/\"], &[], &[], &[]).await;\n\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.file_does_pass(\"apples/carrots/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_doesnt_pass(\"apples/carrots/oranges\");\n\tfilterer.dir_doesnt_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_doesnt_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.dir_does_pass(\"apples/oranges/bananas\");\n\tfilterer.unk_does_pass(\"apples/carrots/oranges\");\n\tfilterer.unk_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.unk_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n}\n\n#[tokio::test]\nasync fn ignores_take_precedence() {\n\tlet filterer = filt(\n\t\t&[\"*.docx\", \"*.toml\", \"*.json\"],\n\t\t&[\"*.toml\", \"*.json\"],\n\t\t&[],\n\t\t&[],\n\t\t&[],\n\t)\n\t.await;\n\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/Cargo.toml\");\n\tfilterer.file_doesnt_pass(\"package.json\");\n\tfilterer.file_doesnt_pass(\"/test/foo/bar/package.json\");\n\tfilterer.dir_doesnt_pass(\"/test/Cargo.toml\");\n\tfilterer.dir_doesnt_pass(\"/test/package.json\");\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n}\n\n#[tokio::test]\nasync fn extensions_fail_dirs() {\n\tlet filterer = filt(&[], &[], &[], &[\"py\"], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.py\");\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.dir_doesnt_pass(\"Cargo\");\n\tfilterer.dir_doesnt_pass(\"Cargo.toml\");\n\tfilterer.dir_doesnt_pass(\"Cargo.py\");\n}\n\n#[tokio::test]\nasync fn extensions_fail_extensionless() {\n\tlet filterer = filt(&[], &[], &[], &[\"py\"], &[]).await;\n\n\tfilterer.file_does_pass(\"Cargo.py\");\n\tfilterer.file_doesnt_pass(\"Cargo\");\n}\n\n#[tokio::test]\nasync fn 
multipath_allow_on_any_one_pass() {\n\tuse watchexec::filter::Filterer;\n\tuse watchexec_events::{Event, FileType, Tag};\n\n\tlet filterer = filt(&[], &[], &[], &[\"py\"], &[]).await;\n\tlet origin = tokio::fs::canonicalize(\".\").await.unwrap();\n\n\tlet event = Event {\n\t\ttags: vec![\n\t\t\tTag::Path {\n\t\t\t\tpath: origin.join(\"Cargo.py\"),\n\t\t\t\tfile_type: Some(FileType::File),\n\t\t\t},\n\t\t\tTag::Path {\n\t\t\t\tpath: origin.join(\"Cargo.toml\"),\n\t\t\t\tfile_type: Some(FileType::File),\n\t\t\t},\n\t\t\tTag::Path {\n\t\t\t\tpath: origin.join(\"Cargo.py\"),\n\t\t\t\tfile_type: Some(FileType::Dir),\n\t\t\t},\n\t\t],\n\t\tmetadata: Default::default(),\n\t};\n\n\tassert!(filterer.check_event(&event, Priority::Normal).unwrap());\n}\n\n#[tokio::test]\nasync fn extensions_and_filters_glob() {\n\tlet filterer = filt(&[\"*/justfile\"], &[], &[], &[\"md\", \"css\"], &[]).await;\n\n\tfilterer.file_does_pass(\"foo/justfile\");\n\tfilterer.file_does_pass(\"bar.md\");\n\tfilterer.file_does_pass(\"qux.css\");\n\tfilterer.file_doesnt_pass(\"nope.py\");\n\n\t// Watchexec 1.x buggy behaviour, should not pass\n\t#[cfg(unix)]\n\tfilterer.file_does_pass(\"justfile\");\n}\n\n#[tokio::test]\nasync fn extensions_and_filters_slash() {\n\tlet filterer = filt(&[\"/justfile\"], &[], &[], &[\"md\", \"css\"], &[]).await;\n\n\tfilterer.file_does_pass(\"justfile\");\n\tfilterer.file_does_pass(\"bar.md\");\n\tfilterer.file_does_pass(\"qux.css\");\n\tfilterer.file_doesnt_pass(\"nope.py\");\n}\n\n#[tokio::test]\nasync fn leading_single_glob_file() {\n\tlet filterer = filt(&[\"*/justfile\"], &[], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"foo/justfile\");\n\tfilterer.file_doesnt_pass(\"notfile\");\n\tfilterer.file_doesnt_pass(\"not/thisfile\");\n\n\t// Watchexec 1.x buggy behaviour, should not pass\n\t#[cfg(unix)]\n\tfilterer.file_does_pass(\"justfile\");\n}\n\n#[tokio::test]\nasync fn nonpath_event_passes() {\n\tuse watchexec::filter::Filterer;\n\tuse 
watchexec_events::{Event, Source, Tag};\n\n\tlet filterer = filt(&[], &[], &[], &[\"py\"], &[]).await;\n\n\tassert!(filterer\n\t\t.check_event(\n\t\t\t&Event {\n\t\t\t\ttags: vec![Tag::Source(Source::Internal)],\n\t\t\t\tmetadata: Default::default(),\n\t\t\t},\n\t\t\tPriority::Normal\n\t\t)\n\t\t.unwrap());\n\n\tassert!(filterer\n\t\t.check_event(\n\t\t\t&Event {\n\t\t\t\ttags: vec![Tag::Source(Source::Keyboard)],\n\t\t\t\tmetadata: Default::default(),\n\t\t\t},\n\t\t\tPriority::Normal\n\t\t)\n\t\t.unwrap());\n}\n\n// The following tests replicate the \"buggy\"/\"confusing\" watchexec v1 behaviour.\n\n#[tokio::test]\nasync fn ignore_folder_incorrectly_with_bare_match() {\n\tlet filterer = filt(&[], &[\"prunes\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_doesnt_pass(\"prunes\");\n\tfilterer.dir_doesnt_pass(\"prunes\");\n\n\t// buggy behaviour (should be 
doesnt):\n\tfilterer.file_does_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n}\n\n#[tokio::test]\nasync fn ignore_folder_incorrectly_with_bare_and_leading_slash() {\n\tlet filterer = filt(&[], &[\"/prunes\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_doesnt_pass(\"prunes\");\n\tfilterer.dir_doesnt_pass(\"prunes\");\n\n\t// buggy behaviour (should be doesnt):\n\tfilterer.file_does_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n}\n\n#[tokio::test]\nasync fn ignore_folder_incorrectly_with_bare_and_trailing_slash() {\n\tlet filterer = 
filt(&[], &[\"prunes/\"], &[], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.dir_doesnt_pass(\"prunes\");\n\n\t// buggy behaviour (should be doesnt):\n\tfilterer.file_does_pass(\"prunes\");\n\tfilterer.file_does_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n}\n\n#[tokio::test]\nasync fn ignore_folder_incorrectly_with_only_double_double_glob() {\n\tlet filterer = filt(&[], &[\"**/prunes/**\"], &[], &[], 
&[]).await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_doesnt_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_doesnt_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_doesnt_pass(\"prunes/oranges/bananas\");\n\tfilterer.dir_doesnt_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_doesnt_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n\n\t// buggy behaviour (should be doesnt):\n\tfilterer.file_does_pass(\"prunes\");\n\tfilterer.dir_does_pass(\"prunes\");\n}\n\n#[tokio::test]\nasync fn ignore_folder_correctly_with_double_and_double_double_globs() {\n\tlet filterer = filt(&[], &[\"**/prunes\", \"**/prunes/**\"], &[], &[], 
&[]).await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"raw-prunes/oranges/bananas\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"raw-prunes/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_doesnt_pass(\"prunes\");\n\tfilterer.file_doesnt_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.file_doesnt_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_doesnt_pass(\"prunes/oranges/bananas\");\n\tfilterer.dir_doesnt_pass(\"prunes\");\n\tfilterer.dir_doesnt_pass(\"prunes/carrots/cauliflowers/oranges\");\n\tfilterer.dir_doesnt_pass(\"prunes/carrots/cauliflowers/artichokes/oranges\");\n}\n\n#[tokio::test]\nasync fn whitelist_overrides_ignore() {\n\tlet filterer = filt(&[], &[\"**/prunes\"], &[\"/prunes\"], &[], &[]).await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"/prunes\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"/prunes\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\n\tfilterer.file_doesnt_pass(\"apples/prunes\");\n\tfilterer.file_doesnt_pass(\"raw/prunes\");\n\tfilterer.dir_doesnt_pass(\"apples/prunes\");\n\tfilterer.dir_doesnt_pass(\"raw/prunes\");\n}\n\n#[tokio::test]\nasync fn 
whitelist_overrides_ignore_files() {\n\tlet mut ignore_file = tempfile::NamedTempFile::new().unwrap();\n\tlet _ = ignore_file.write(b\"prunes\");\n\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet whitelist = origin.join(\"prunes\").display().to_string();\n\n\tlet filterer = filt(\n\t\t&[],\n\t\t&[],\n\t\t&[&whitelist],\n\t\t&[],\n\t\t&[ignore_file.path().to_path_buf()],\n\t)\n\t.await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"prunes\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"prunes\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\n\tfilterer.file_doesnt_pass(\"apples/prunes\");\n\tfilterer.file_doesnt_pass(\"raw/prunes\");\n\tfilterer.dir_doesnt_pass(\"apples/prunes\");\n\tfilterer.dir_doesnt_pass(\"raw/prunes\");\n}\n\n#[tokio::test]\nasync fn whitelist_overrides_ignore_files_nested() {\n\tlet mut ignore_file = tempfile::NamedTempFile::new().unwrap();\n\tlet _ = ignore_file.write(b\"prunes\\n\");\n\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet whitelist = origin.join(\"prunes\").join(\"target\").display().to_string();\n\n\tlet filterer = 
filt(\n\t\t&[],\n\t\t&[],\n\t\t&[&whitelist],\n\t\t&[],\n\t\t&[ignore_file.path().to_path_buf()],\n\t)\n\t.await;\n\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_doesnt_pass(\"prunes\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_doesnt_pass(\"prunes\");\n\n\tfilterer.file_does_pass(\"raw-prunes\");\n\tfilterer.dir_does_pass(\"raw-prunes\");\n\n\tfilterer.file_doesnt_pass(\"prunes/apples\");\n\tfilterer.file_doesnt_pass(\"prunes/raw\");\n\tfilterer.dir_doesnt_pass(\"prunes/apples\");\n\tfilterer.dir_doesnt_pass(\"prunes/raw\");\n\n\tfilterer.file_doesnt_pass(\"apples/prunes\");\n\tfilterer.file_doesnt_pass(\"raw/prunes\");\n\tfilterer.dir_doesnt_pass(\"apples/prunes\");\n\tfilterer.dir_doesnt_pass(\"raw/prunes\");\n\n\tfilterer.file_does_pass(\"prunes/target\");\n\tfilterer.dir_does_pass(\"prunes/target\");\n\n\tfilterer.file_doesnt_pass(\"prunes/nested/target\");\n\tfilterer.dir_doesnt_pass(\"prunes/nested/target\");\n}\n"
  },
  {
    "path": "crates/filterer/globset/tests/helpers/mod.rs",
    "content": "use std::{\n\tffi::OsString,\n\tpath::{Path, PathBuf},\n};\n\nuse ignore_files::IgnoreFile;\nuse watchexec::{error::RuntimeError, filter::Filterer};\nuse watchexec_events::{Event, FileType, Priority, Tag};\nuse watchexec_filterer_globset::GlobsetFilterer;\nuse watchexec_filterer_ignore::IgnoreFilterer;\n\npub mod globset {\n\tpub use super::globset_filt as filt;\n\tpub use super::PathHarness;\n\tpub use watchexec_events::Priority;\n}\n\npub trait PathHarness: Filterer {\n\tfn check_path(\n\t\t&self,\n\t\tpath: PathBuf,\n\t\tfile_type: Option<FileType>,\n\t) -> std::result::Result<bool, RuntimeError> {\n\t\tlet event = Event {\n\t\t\ttags: vec![Tag::Path { path, file_type }],\n\t\t\tmetadata: Default::default(),\n\t\t};\n\n\t\tself.check_event(&event, Priority::Normal)\n\t}\n\n\tfn path_pass(&self, path: &str, file_type: Option<FileType>, pass: bool) {\n\t\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\t\tlet full_path = if let Some(suf) = path.strip_prefix(\"/test/\") {\n\t\t\torigin.join(suf)\n\t\t} else if Path::new(path).has_root() {\n\t\t\tpath.into()\n\t\t} else {\n\t\t\torigin.join(path)\n\t\t};\n\n\t\ttracing::info!(?path, ?file_type, ?pass, \"check\");\n\n\t\tassert_eq!(\n\t\t\tself.check_path(full_path, file_type).unwrap(),\n\t\t\tpass,\n\t\t\t\"{} {:?} (expected {})\",\n\t\t\tmatch file_type {\n\t\t\t\tSome(FileType::File) => \"file\",\n\t\t\t\tSome(FileType::Dir) => \"dir\",\n\t\t\t\tSome(FileType::Symlink) => \"symlink\",\n\t\t\t\tSome(FileType::Other) => \"other\",\n\t\t\t\tNone => \"path\",\n\t\t\t},\n\t\t\tpath,\n\t\t\tif pass { \"pass\" } else { \"fail\" }\n\t\t);\n\t}\n\n\tfn file_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, Some(FileType::File), true);\n\t}\n\n\tfn file_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, Some(FileType::File), false);\n\t}\n\n\tfn dir_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, Some(FileType::Dir), true);\n\t}\n\n\tfn dir_doesnt_pass(&self, path: &str) 
{\n\t\tself.path_pass(path, Some(FileType::Dir), false);\n\t}\n\n\tfn unk_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, None, true);\n\t}\n\n\tfn unk_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, None, false);\n\t}\n}\n\nimpl PathHarness for GlobsetFilterer {}\nimpl PathHarness for IgnoreFilterer {}\n\nfn tracing_init() {\n\tuse tracing_subscriber::{\n\t\tfmt::{format::FmtSpan, Subscriber},\n\t\tutil::SubscriberInitExt,\n\t\tEnvFilter,\n\t};\n\tSubscriber::builder()\n\t\t.pretty()\n\t\t.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)\n\t\t.with_env_filter(EnvFilter::from_default_env())\n\t\t.finish()\n\t\t.try_init()\n\t\t.ok();\n}\n\npub async fn globset_filt(\n\tfilters: &[&str],\n\tignores: &[&str],\n\twhitelists: &[&str],\n\textensions: &[&str],\n\tignore_files: &[PathBuf],\n) -> GlobsetFilterer {\n\tlet origin = tokio::fs::canonicalize(\".\").await.unwrap();\n\ttracing_init();\n\tGlobsetFilterer::new(\n\t\torigin,\n\t\tfilters.iter().map(|s| ((*s).to_string(), None)),\n\t\tignores.iter().map(|s| ((*s).to_string(), None)),\n\t\twhitelists.iter().map(|s| (*s).into()),\n\t\tignore_files.iter().map(|path| IgnoreFile {\n\t\t\tpath: path.clone(),\n\t\t\tapplies_in: None,\n\t\t\tapplies_to: None,\n\t\t}),\n\t\textensions.iter().map(OsString::from),\n\t)\n\t.await\n\t.expect(\"making filterer\")\n}\n"
  },
  {
    "path": "crates/filterer/ignore/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v7.0.0 (2025-05-15)\n\n- Deps: remove unused dependency `watchexec-signals` ([#930](https://github.com/watchexec/watchexec/pull/930))\n\n## v6.0.0 (2025-02-09)\n\n## v5.0.0 (2024-10-14)\n\n## v4.0.1 (2024-04-28)\n\n## v4.0.0 (2024-04-20)\n\n- Deps: watchexec 4\n\n## v3.0.1 (2024-01-04)\n\n- Normalise paths on all platforms (via `normalize-path`).\n\n## v3.0.0 (2024-01-01)\n\n- Deps: `ignore-files` 2.0.0\n\n## v2.0.1 (2023-12-09)\n\n- Depend on `watchexec-events` instead of the `watchexec` re-export.\n\n## v1.2.1 (2023-05-14)\n\n- Use IO-free dunce::simplify to normalise paths on Windows.\n- Known regression: some filtering patterns misbehave slightly on Windows with paths outside the project root.\n  - As filters were previously completely broken on Windows, this is still considered an improvement.\n\n## v1.2.0 (2023-03-18)\n\n- Ditch MSRV policy. The `rust-version` indication will remain, for the minimum estimated Rust version for the code features used in the crate's own code, but dependencies may have already moved on. From now on, only latest stable is assumed and tested for. ([#510](https://github.com/watchexec/watchexec/pull/510))\n\n## v1.1.0 (2023-01-09)\n\n- MSRV: bump to 1.61.0\n\n## v1.0.0 (2022-06-23)\n\n- Initial release as a separate crate.\n"
  },
  {
    "path": "crates/filterer/ignore/Cargo.toml",
    "content": "[package]\nname = \"watchexec-filterer-ignore\"\nversion = \"7.0.0\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0\"\ndescription = \"Watchexec filterer component for ignore files\"\nkeywords = [\"watchexec\", \"filterer\", \"ignore\"]\n\ndocumentation = \"https://docs.rs/watchexec-filterer-ignore\"\nhomepage = \"https://watchexec.github.io\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.61.0\"\nedition = \"2021\"\n\n[dependencies]\nignore = \"0.4.18\"\ndunce = \"1.0.4\"\nnormalize-path = \"0.2.1\"\ntracing = \"0.1.40\"\n\n[dependencies.ignore-files]\nversion = \"3.0.5\"\npath = \"../../ignore-files\"\n\n[dependencies.watchexec]\nversion = \"8.2.0\"\npath = \"../../lib\"\n\n[dependencies.watchexec-events]\nversion = \"6.1.0\"\npath = \"../../events\"\n\n[dev-dependencies.project-origins]\nversion = \"1.4.2\"\npath = \"../../project-origins\"\n\n[dev-dependencies.tokio]\nversion = \"1.33.0\"\nfeatures = [\n\t\"fs\",\n\t\"io-std\",\n\t\"rt\",\n\t\"rt-multi-thread\",\n\t\"macros\",\n]\n\n[dev-dependencies.tracing-subscriber]\nversion = \"0.3.6\"\nfeatures = [\"env-filter\"]\n"
  },
  {
    "path": "crates/filterer/ignore/README.md",
    "content": "[![Crates.io page](https://badgen.net/crates/v/watchexec-filterer-ignore)](https://crates.io/crates/watchexec-filterer-ignore)\n[![API Docs](https://docs.rs/watchexec-filterer-ignore/badge.svg)][docs]\n[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]\n[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)\n\n# Watchexec filterer: ignore\n\n_(Sub)filterer implementation for ignore files._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license].\n- Status: maintained.\n\nThis is mostly a thin layer above the [ignore-files](../../ignore-files) crate, and is meant to be\nused as part of another more general filterer. However, there's nothing wrong with using it\ndirectly if all that's needed is to handle ignore files.\n\n[docs]: https://docs.rs/watchexec-filterer-ignore\n[license]: ../../../LICENSE\n"
  },
  {
    "path": "crates/filterer/ignore/release.toml",
    "content": "pre-release-commit-message = \"release: filterer-ignore v{{version}}\"\ntag-prefix = \"watchexec-filterer-ignore-\"\ntag-message = \"watchexec-filterer-ignore {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/filterer/ignore/src/lib.rs",
    "content": "//! A Watchexec Filterer implementation for ignore files.\n//!\n//! This filterer is meant to be used as a backing filterer inside a more complex or complete\n//! filterer, and not as a standalone filterer.\n//!\n//! This is a fairly simple wrapper around the [`ignore_files`] crate, which is probably where you\n//! want to look for any detail or to use this outside of Watchexec.\n\n#![doc(html_favicon_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![doc(html_logo_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![warn(clippy::unwrap_used, missing_docs)]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n#![deny(rust_2018_idioms)]\n\nuse ignore::Match;\nuse ignore_files::IgnoreFilter;\nuse normalize_path::NormalizePath;\nuse tracing::{trace, trace_span};\nuse watchexec::{error::RuntimeError, filter::Filterer};\nuse watchexec_events::{Event, FileType, Priority};\n\n/// A Watchexec [`Filterer`] implementation for [`IgnoreFilter`].\n#[derive(Clone, Debug)]\npub struct IgnoreFilterer(pub IgnoreFilter);\n\nimpl Filterer for IgnoreFilterer {\n\t/// Filter an event.\n\t///\n\t/// This implementation never errors. It returns `Ok(false)` if the event is ignored according\n\t/// to the ignore files, and `Ok(true)` otherwise. 
It ignores event priority.\n\tfn check_event(&self, event: &Event, _priority: Priority) -> Result<bool, RuntimeError> {\n\t\tlet _span = trace_span!(\"filterer_check\").entered();\n\t\tlet mut pass = true;\n\n\t\tfor (path, file_type) in event.paths() {\n\t\t\tlet path = dunce::simplified(path).normalize();\n\t\t\tlet path = path.as_path();\n\t\t\tlet _span = trace_span!(\"checking_against_compiled\", ?path, ?file_type).entered();\n\t\t\tlet is_dir = file_type.map_or(false, |t| matches!(t, FileType::Dir));\n\n\t\t\tmatch self.0.match_path(path, is_dir) {\n\t\t\t\tMatch::None => {\n\t\t\t\t\ttrace!(\"no match (pass)\");\n\t\t\t\t\tpass &= true;\n\t\t\t\t}\n\t\t\t\tMatch::Ignore(glob) => {\n\t\t\t\t\tif glob.from().map_or(true, |f| path.strip_prefix(f).is_ok()) {\n\t\t\t\t\t\ttrace!(?glob, \"positive match (fail)\");\n\t\t\t\t\t\tpass &= false;\n\t\t\t\t\t} else {\n\t\t\t\t\t\ttrace!(?glob, \"positive match, but not in scope (ignore)\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tMatch::Whitelist(glob) => {\n\t\t\t\t\ttrace!(?glob, \"negative match (pass)\");\n\t\t\t\t\tpass = true;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\ttrace!(?pass, \"verdict\");\n\t\tOk(pass)\n\t}\n}\n"
  },
  {
    "path": "crates/filterer/ignore/tests/filtering.rs",
    "content": "use ignore_files::IgnoreFilter;\nuse watchexec_filterer_ignore::IgnoreFilterer;\n\nmod helpers;\nuse helpers::ignore::*;\n\n#[tokio::test]\nasync fn folders() {\n\tlet filterer = filt(\"\", &[file(\"folders\")]).await;\n\n\tfilterer.file_doesnt_pass(\"prunes\");\n\tfilterer.dir_doesnt_pass(\"prunes\");\n\tfolders_suite(&filterer, \"prunes\");\n\n\tfilterer.file_doesnt_pass(\"apricots\");\n\tfilterer.dir_doesnt_pass(\"apricots\");\n\tfolders_suite(&filterer, \"apricots\");\n\n\tfilterer.file_does_pass(\"cherries\");\n\tfilterer.dir_doesnt_pass(\"cherries\");\n\tfolders_suite(&filterer, \"cherries\");\n\n\tfilterer.file_does_pass(\"grapes\");\n\tfilterer.dir_does_pass(\"grapes\");\n\tfolders_suite(&filterer, \"grapes\");\n\n\tfilterer.file_doesnt_pass(\"feijoa\");\n\tfilterer.dir_doesnt_pass(\"feijoa\");\n\tfolders_suite(&filterer, \"feijoa\");\n}\n\nfn folders_suite(filterer: &IgnoreFilterer, name: &str) {\n\tfilterer.file_does_pass(\"apples\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.file_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\tfilterer.file_does_pass(\"apples/oranges/bananas\");\n\tfilterer.dir_does_pass(\"apples\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/oranges\");\n\tfilterer.dir_does_pass(\"apples/carrots/cauliflowers/artichokes/oranges\");\n\n\tfilterer.file_does_pass(&format!(\"raw-{name}\"));\n\tfilterer.dir_does_pass(&format!(\"raw-{name}\"));\n\tfilterer.file_does_pass(&format!(\"raw-{name}/carrots/cauliflowers/oranges\"));\n\tfilterer.file_does_pass(&format!(\"raw-{name}/oranges/bananas\"));\n\tfilterer.dir_does_pass(&format!(\"raw-{name}/carrots/cauliflowers/oranges\"));\n\tfilterer.file_does_pass(&format!(\n\t\t\"raw-{}/carrots/cauliflowers/artichokes/oranges\",\n\t\tname\n\t));\n\tfilterer.dir_does_pass(&format!(\n\t\t\"raw-{}/carrots/cauliflowers/artichokes/oranges\",\n\t\tname\n\t));\n\n\tfilterer.dir_doesnt_pass(&format!(\"{name}/carrots/cauliflowers/
oranges\"));\n\tfilterer.dir_doesnt_pass(&format!(\"{name}/carrots/cauliflowers/artichokes/oranges\"));\n\tfilterer.file_doesnt_pass(&format!(\"{name}/carrots/cauliflowers/oranges\"));\n\tfilterer.file_doesnt_pass(&format!(\"{name}/carrots/cauliflowers/artichokes/oranges\"));\n\tfilterer.file_doesnt_pass(&format!(\"{name}/oranges/bananas\"));\n}\n\n#[tokio::test]\nasync fn globs() {\n\tlet filterer = filt(\"\", &[file(\"globs\").applies_globally()]).await;\n\n\t// Unmatched\n\tfilterer.file_does_pass(\"FINAL-FINAL.docx\");\n\t#[cfg(windows)]\n\tfilterer.dir_does_pass(r\"C:\\a\\folder\");\n\t#[cfg(not(windows))]\n\tfilterer.dir_does_pass(\"/a/folder\");\n\tfilterer.file_does_pass(\"rat\");\n\tfilterer.file_does_pass(\"foo/bar/rat\");\n\t#[cfg(windows)]\n\tfilterer.file_does_pass(r\"C:\\foo\\bar\\rat\");\n\t#[cfg(not(windows))]\n\tfilterer.file_does_pass(\"/foo/bar/rat\");\n\n\t// Cargo.toml\n\tfilterer.file_doesnt_pass(\"Cargo.toml\");\n\tfilterer.dir_doesnt_pass(\"Cargo.toml\");\n\tfilterer.file_does_pass(\"Cargo.json\");\n\n\t// package.json\n\tfilterer.file_doesnt_pass(\"package.json\");\n\tfilterer.dir_doesnt_pass(\"package.json\");\n\tfilterer.file_does_pass(\"package.toml\");\n\n\t// *.gemspec\n\tfilterer.file_doesnt_pass(\"pearl.gemspec\");\n\tfilterer.dir_doesnt_pass(\"sapphire.gemspec\");\n\tfilterer.file_doesnt_pass(\".gemspec\");\n\tfilterer.file_does_pass(\"diamond.gemspecial\");\n\n\t// test-*\n\tfilterer.file_doesnt_pass(\"test-unit\");\n\tfilterer.dir_doesnt_pass(\"test-integration\");\n\tfilterer.file_does_pass(\"tester-helper\");\n\n\t// *.sw*\n\tfilterer.file_doesnt_pass(\"source.file.swa\");\n\tfilterer.file_doesnt_pass(\".source.file.swb\");\n\tfilterer.dir_doesnt_pass(\"source.folder.swd\");\n\tfilterer.file_does_pass(\"other.thing.s_w\");\n\n\t// sources.*/\n\tfilterer.file_does_pass(\"sources.waters\");\n\tfilterer.dir_doesnt_pass(\"sources.rivers\");\n\n\t// 
/output.*\n\tfilterer.file_doesnt_pass(\"output.toml\");\n\tfilterer.file_doesnt_pass(\"output.json\");\n\tfilterer.dir_doesnt_pass(\"output.toml\");\n\tfilterer.unk_doesnt_pass(\"output.toml\");\n\tfilterer.file_does_pass(\"foo/output.toml\");\n\tfilterer.dir_does_pass(\"foo/output.toml\");\n\n\t// **/possum\n\tfilterer.file_doesnt_pass(\"possum\");\n\tfilterer.file_doesnt_pass(\"foo/bar/possum\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.file_doesnt_pass(r\"C:\\foo\\bar\\possum\");\n\t#[cfg(not(windows))]\n\tfilterer.file_doesnt_pass(\"/foo/bar/possum\");\n\tfilterer.dir_doesnt_pass(\"possum\");\n\tfilterer.dir_doesnt_pass(\"foo/bar/possum\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.dir_doesnt_pass(r\"C:\\foo\\bar\\possum\");\n\t#[cfg(not(windows))]\n\tfilterer.dir_doesnt_pass(\"/foo/bar/possum\");\n\n\t// zebra/**\n\tfilterer.file_does_pass(\"zebra\");\n\tfilterer.file_doesnt_pass(\"zebra/foo/bar\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.file_does_pass(r\"C:\\zebra\\foo\\bar\");\n\t#[cfg(not(windows))]\n\tfilterer.file_does_pass(\"/zebra/foo/bar\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.file_doesnt_pass(r\"C:\\test\\zebra\\foo\\bar\");\n\t#[cfg(not(windows))]\n\tfilterer.file_doesnt_pass(\"/test/zebra/foo/bar\");\n\tfilterer.dir_does_pass(\"zebra\");\n\tfilterer.dir_does_pass(\"foo/bar/zebra\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.dir_does_pass(r\"C:\\foo\\bar\\zebra\");\n\t#[cfg(not(windows))]\n\tfilterer.dir_does_pass(\"/foo/bar/zebra\");\n\tfilterer.dir_doesnt_pass(\"zebra/foo/bar\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.dir_does_pass(r\"C:\\zebra\\foo\\bar\");\n\t#[cfg(not(windows))]\n\tfilterer.dir_does_pass(\"/zebra/foo/bar\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.dir_doesnt_pass(r\"C:\\test\\zebra\\foo\\bar\");\n\t#[cfg(not(windows))]\n\tfilterer.dir_doesnt_pass(\"/test/zebra/foo/bar\");\n\n\t// 
elep/**/hant\n\tfilterer.file_doesnt_pass(\"elep/carrots/hant\");\n\tfilterer.file_doesnt_pass(\"elep/carrots/cauliflowers/hant\");\n\tfilterer.file_doesnt_pass(\"elep/carrots/cauliflowers/artichokes/hant\");\n\tfilterer.dir_doesnt_pass(\"elep/carrots/hant\");\n\tfilterer.dir_doesnt_pass(\"elep/carrots/cauliflowers/hant\");\n\tfilterer.dir_doesnt_pass(\"elep/carrots/cauliflowers/artichokes/hant\");\n\tfilterer.file_doesnt_pass(\"elep/hant/bananas\");\n\tfilterer.dir_doesnt_pass(\"elep/hant/bananas\");\n\n\t// song/**/bird/\n\tfilterer.file_does_pass(\"song/carrots/bird\");\n\tfilterer.file_does_pass(\"song/carrots/cauliflowers/bird\");\n\tfilterer.file_does_pass(\"song/carrots/cauliflowers/artichokes/bird\");\n\tfilterer.dir_doesnt_pass(\"song/carrots/bird\");\n\tfilterer.dir_doesnt_pass(\"song/carrots/cauliflowers/bird\");\n\tfilterer.dir_doesnt_pass(\"song/carrots/cauliflowers/artichokes/bird\");\n\tfilterer.unk_does_pass(\"song/carrots/bird\");\n\tfilterer.unk_does_pass(\"song/carrots/cauliflowers/bird\");\n\tfilterer.unk_does_pass(\"song/carrots/cauliflowers/artichokes/bird\");\n\tfilterer.file_doesnt_pass(\"song/bird/bananas\");\n\tfilterer.dir_doesnt_pass(\"song/bird/bananas\");\n}\n\n#[tokio::test]\nasync fn negate() {\n\tlet filterer = filt(\"\", &[file(\"negate\")]).await;\n\n\tfilterer.file_does_pass(\"yeah\");\n\tfilterer.file_doesnt_pass(\"nah\");\n\tfilterer.file_does_pass(\"nah.yeah\");\n}\n\n#[tokio::test]\nasync fn allowlist() {\n\tlet filterer = filt(\"\", 
&[file(\"allowlist\")]).await;\n\n\tfilterer.file_does_pass(\"mod.go\");\n\tfilterer.file_does_pass(\"foo.go\");\n\tfilterer.file_does_pass(\"go.sum\");\n\tfilterer.file_does_pass(\"go.mod\");\n\tfilterer.file_does_pass(\"README.md\");\n\tfilterer.file_does_pass(\"LICENSE\");\n\tfilterer.file_does_pass(\".gitignore\");\n\n\tfilterer.file_doesnt_pass(\"evil.sum\");\n\tfilterer.file_doesnt_pass(\"evil.mod\");\n\tfilterer.file_doesnt_pass(\"gofile.gone\");\n\tfilterer.file_doesnt_pass(\"go.js\");\n\tfilterer.file_doesnt_pass(\"README.asciidoc\");\n\tfilterer.file_doesnt_pass(\"LICENSE.txt\");\n\tfilterer.file_doesnt_pass(\"foo/.gitignore\");\n}\n\n#[tokio::test]\nasync fn scopes() {\n\tlet filterer = filt(\n\t\t\"\",\n\t\t&[\n\t\t\tfile(\"scopes-global\").applies_globally(),\n\t\t\tfile(\"scopes-local\"),\n\t\t\tfile(\"scopes-sublocal\").applies_in(\"tests\"),\n\t\t\tfile(\"none-allowed\").applies_in(\"tests/child\"),\n\t\t],\n\t)\n\t.await;\n\n\tfilterer.file_doesnt_pass(\"global.a\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.file_doesnt_pass(r\"C:\\global.b\");\n\t#[cfg(not(windows))]\n\tfilterer.file_doesnt_pass(\"/global.b\");\n\tfilterer.file_doesnt_pass(\"tests/global.c\");\n\n\tfilterer.file_doesnt_pass(\"local.a\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.file_does_pass(r\"C:\\local.b\");\n\t#[cfg(not(windows))]\n\tfilterer.file_does_pass(\"/local.b\");\n\t// FIXME flaky\n\t// filterer.file_doesnt_pass(\"tests/local.c\");\n\n\tfilterer.file_does_pass(\"sublocal.a\");\n\t// #[cfg(windows)] FIXME should work\n\t// filterer.file_does_pass(r\"C:\\sublocal.b\");\n\t#[cfg(not(windows))]\n\tfilterer.file_does_pass(\"/sublocal.b\");\n\tfilterer.file_doesnt_pass(\"tests/sublocal.c\");\n\n\tfilterer.file_doesnt_pass(\"tests/child/child.txt\");\n\tfilterer.file_doesnt_pass(\"tests/child/grandchild/grandchild.c\");\n}\n\n#[tokio::test]\nasync fn self_ignored() {\n\tlet filterer = filt(\"\", 
&[file(\"self.ignore\").applies_in(\"tests/ignores\")]).await;\n\n\tfilterer.file_doesnt_pass(\"tests/ignores/self.ignore\");\n\tfilterer.file_does_pass(\"self.ignore\");\n}\n\n#[tokio::test]\nasync fn add_globs_without_any_ignore_file() {\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet mut ignore_filter = IgnoreFilter::new(&origin, &[]).await.unwrap();\n\tignore_filter\n\t\t.add_globs(&[\"other/\"], Some(&origin))\n\t\t.expect(\"Failed to add globs to ignore filter\");\n\n\tlet filterer = IgnoreFilterer(ignore_filter);\n\tfilterer.file_doesnt_pass(\"other/some/file.txt\");\n\tfilterer.file_does_pass(\"tests/ignores/self.ignore\");\n}\n\n#[tokio::test]\nasync fn add_globs_to_existing_ignore_file() {\n\tlet ignore_file = file(\"self.ignore\").applies_in(\"tests/ignores\");\n\tlet ignore_file_applies_in = ignore_file.applies_in.clone().unwrap();\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet mut ignore_filter = IgnoreFilter::new(&origin, &[ignore_file]).await.unwrap();\n\tignore_filter\n\t\t.add_globs(&[\"other/\"], Some(&ignore_file_applies_in))\n\t\t.expect(\"Failed to add globs to ignore filter\");\n\n\tlet filterer = IgnoreFilterer(ignore_filter);\n\tfilterer.file_doesnt_pass(\"tests/ignores/other/some/file.txt\");\n\tfilterer.file_doesnt_pass(\"tests/ignores/self.ignore\");\n\tfilterer.file_does_pass(\"README.md\");\n}\n\n#[tokio::test]\nasync fn add_ignore_file_without_any_preexisting_ignore_file() {\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet mut ignore_filter = IgnoreFilter::new(&origin, &[]).await.unwrap();\n\tlet new_ignore_file = file(\"self.ignore\").applies_in(\"tests/ignores\");\n\tignore_filter.add_file(&new_ignore_file).await.unwrap();\n\n\tlet filterer = IgnoreFilterer(ignore_filter);\n\tfilterer.file_doesnt_pass(\"tests/ignores/self.ignore\");\n\tfilterer.file_does_pass(\"README.md\");\n}\n\n#[tokio::test]\nasync fn add_ignore_file_to_existing_ignore_file() {\n\tlet ignore_file = 
file(\"scopes-global\").applies_in(\"tests/ignores\");\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet mut ignore_filter = IgnoreFilter::new(&origin, &[ignore_file]).await.unwrap();\n\tlet new_ignore_file = file(\"self.ignore\").applies_in(\"tests/ignores\");\n\tignore_filter.add_file(&new_ignore_file).await.unwrap();\n\n\tlet filterer = IgnoreFilterer(ignore_filter);\n\tfilterer.file_doesnt_pass(\"tests/ignores/self.ignore\");\n\tfilterer.file_doesnt_pass(\"tests/ignores/global.txt\");\n\tfilterer.file_does_pass(\"README.md\");\n}\n"
  },
  {
    "path": "crates/filterer/ignore/tests/helpers/mod.rs",
    "content": "use std::path::{Path, PathBuf};\n\nuse ignore_files::{IgnoreFile, IgnoreFilter};\nuse watchexec::{error::RuntimeError, filter::Filterer};\nuse watchexec_events::{Event, FileType, Priority, Tag};\nuse watchexec_filterer_ignore::IgnoreFilterer;\n\npub mod ignore {\n\tpub use super::ig_file as file;\n\tpub use super::ignore_filt as filt;\n\tpub use super::Applies;\n\tpub use super::PathHarness;\n}\n\npub trait PathHarness: Filterer {\n\tfn check_path(\n\t\t&self,\n\t\tpath: PathBuf,\n\t\tfile_type: Option<FileType>,\n\t) -> std::result::Result<bool, RuntimeError> {\n\t\tlet event = Event {\n\t\t\ttags: vec![Tag::Path { path, file_type }],\n\t\t\tmetadata: Default::default(),\n\t\t};\n\n\t\tself.check_event(&event, Priority::Normal)\n\t}\n\n\tfn path_pass(&self, path: &str, file_type: Option<FileType>, pass: bool) {\n\t\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\t\tlet full_path = if let Some(suf) = path.strip_prefix(\"/test/\") {\n\t\t\torigin.join(suf)\n\t\t} else if Path::new(path).has_root() {\n\t\t\tpath.into()\n\t\t} else {\n\t\t\torigin.join(path)\n\t\t};\n\n\t\ttracing::info!(?path, ?file_type, ?pass, \"check\");\n\n\t\tassert_eq!(\n\t\t\tself.check_path(full_path, file_type).unwrap(),\n\t\t\tpass,\n\t\t\t\"{} {:?} (expected {})\",\n\t\t\tmatch file_type {\n\t\t\t\tSome(FileType::File) => \"file\",\n\t\t\t\tSome(FileType::Dir) => \"dir\",\n\t\t\t\tSome(FileType::Symlink) => \"symlink\",\n\t\t\t\tSome(FileType::Other) => \"other\",\n\t\t\t\tNone => \"path\",\n\t\t\t},\n\t\t\tpath,\n\t\t\tif pass { \"pass\" } else { \"fail\" }\n\t\t);\n\t}\n\n\tfn file_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, Some(FileType::File), true);\n\t}\n\n\tfn file_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, Some(FileType::File), false);\n\t}\n\n\tfn dir_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, Some(FileType::Dir), true);\n\t}\n\n\tfn dir_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, 
Some(FileType::Dir), false);\n\t}\n\n\tfn unk_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, None, true);\n\t}\n\n\tfn unk_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, None, false);\n\t}\n}\n\nimpl PathHarness for IgnoreFilterer {}\n\nfn tracing_init() {\n\tuse tracing_subscriber::{\n\t\tfmt::{format::FmtSpan, Subscriber},\n\t\tutil::SubscriberInitExt,\n\t\tEnvFilter,\n\t};\n\tSubscriber::builder()\n\t\t.pretty()\n\t\t.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)\n\t\t.with_env_filter(EnvFilter::from_default_env())\n\t\t.finish()\n\t\t.try_init()\n\t\t.ok();\n}\n\npub async fn ignore_filt(origin: &str, ignore_files: &[IgnoreFile]) -> IgnoreFilterer {\n\ttracing_init();\n\tlet origin = tokio::fs::canonicalize(\".\").await.unwrap().join(origin);\n\tIgnoreFilterer(\n\t\tIgnoreFilter::new(origin, ignore_files)\n\t\t\t.await\n\t\t\t.expect(\"making filterer\"),\n\t)\n}\n\npub fn ig_file(name: &str) -> IgnoreFile {\n\tlet origin = std::fs::canonicalize(\".\").unwrap();\n\tlet path = origin.join(\"tests\").join(\"ignores\").join(name);\n\tIgnoreFile {\n\t\tpath,\n\t\tapplies_in: Some(origin),\n\t\tapplies_to: None,\n\t}\n}\n\npub trait Applies {\n\tfn applies_globally(self) -> Self;\n\tfn applies_in(self, origin: &str) -> Self;\n}\n\nimpl Applies for IgnoreFile {\n\tfn applies_globally(mut self) -> Self {\n\t\tself.applies_in = None;\n\t\tself\n\t}\n\n\tfn applies_in(mut self, origin: &str) -> Self {\n\t\tlet origin = std::fs::canonicalize(\".\").unwrap().join(origin);\n\t\tself.applies_in = Some(origin);\n\t\tself\n\t}\n}\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/allowlist",
    "content": "# from https://github.com/github/gitignore\n\n*\n\n!/.gitignore\n\n!*.go\n!go.sum\n!go.mod\n\n!README.md\n!LICENSE\n\n!*/\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/folders",
    "content": "prunes\n/apricots\ncherries/\n**/grapes/**\n**/feijoa\n**/feijoa/**\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/globs",
    "content": "Cargo.toml\npackage.json\n*.gemspec\ntest-*\n*.sw*\nsources.*/\n/output.*\n**/possum\nzebra/**\nelep/**/hant\nsong/**/bird/\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/negate",
    "content": "nah\n!nah.yeah\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/none-allowed",
    "content": "*\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/scopes-global",
    "content": "global.*\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/scopes-local",
    "content": "local.*\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/scopes-sublocal",
    "content": "sublocal.*\n"
  },
  {
    "path": "crates/filterer/ignore/tests/ignores/self.ignore",
    "content": "self.ignore\n"
  },
  {
    "path": "crates/ignore-files/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v3.0.5 (2026-01-20)\n\n- Deps: gix-config 0.50\n- Deps: radix-trie 0.3\n- Fix: match git's behaviour for finding ignores\n\n## v3.0.4 (2025-05-15)\n\n- Calls to `add_globs()` and `add_file()` dynamically create a new ignore entry if there isn't one at the location of the `applies_in` param. This allows users to e.g. add globs to a path that previously had no ignore files. ([#908](https://github.com/watchexec/watchexec/pull/908))\n- Deps: gix-config 0.45\n\n## v3.0.3 (2025-02-09)\n\n- Deps: gix-config 0.43\n\n## v3.0.2 (2024-10-14)\n\n- Deps: gix-config 0.40\n\n## v3.0.1 (2024-04-28)\n\n- Hide fmt::Debug spew from ignore crate, use `full_debug` feature to restore.\n\n## v3.0.0 (2024-04-20)\n\n- Deps: gix-config 0.36\n- Deps: miette 7\n\n## v2.1.0 (2024-01-04)\n\n- Normalise paths on all platforms (via `normalize-path`).\n- Require paths be normalised before discovery.\n- Add convenience APIs to `IgnoreFilesFromOriginArgs` for that purpose.\n\n## v2.0.0 (2024-01-01)\n\n- A round of optimisation by @t3hmrman, improving directory traversal to avoid crawling unneeded paths. ([#663](https://github.com/watchexec/watchexec/pull/663))\n- Respect `applies_in` scope when processing nested ignores, by @thislooksfun. 
([#746](https://github.com/watchexec/watchexec/pull/746))\n\n## v1.3.2 (2023-11-26)\n\n- Remove error diagnostic codes.\n- Deps: upgrade to gix-config 0.31.0\n- Deps: upgrade Tokio requirement to 1.33.0\n\n## v1.3.1 (2023-06-03)\n\n- Use Tokio's canonicalize instead of dunce::simplified.\n\n## v1.3.0 (2023-05-14)\n\n- Use IO-free dunce::simplify to normalise paths on Windows.\n- Handle gitignores correctly (one GitIgnoreBuilder per path).\n- Deps: update gix-config to 0.22.\n\n## v1.2.0 (2023-03-18)\n\n- Deps: update git-config to gix-config.\n- Deps: update tokio to 1.24\n- Ditch MSRV policy (only latest supported now).\n- `from_environment()` no longer looks at `WATCHEXEC_IGNORE_FILES`.\n\n## v1.1.0 (2023-01-08)\n\n- Add missing `Send` bound to async functions.\n\n## v1.0.1 (2022-09-07)\n\n- Deps: update git-config to 0.7.1\n- Deps: update miette to 5.3.0\n\n## v1.0.0 (2022-06-16)\n\n- Initial release as a separate crate.\n"
  },
  {
    "path": "crates/ignore-files/Cargo.toml",
    "content": "[package]\nname = \"ignore-files\"\nversion = \"3.0.5\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0\"\ndescription = \"Find, parse, and interpret ignore files\"\nkeywords = [\"ignore\", \"files\", \"discover\", \"find\"]\n\ndocumentation = \"https://docs.rs/ignore-files\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.70.0\"\nedition = \"2021\"\n\n[dependencies]\nfutures = \"0.3.29\"\ngix-config = \"0.50.0\"\nignore = \"0.4.18\"\nmiette = \"7.2.0\"\nnormalize-path = \"0.2.1\"\nthiserror = \"2.0.11\"\ntracing = \"0.1.40\"\nradix_trie = \"0.3.0\"\ndunce = \"1.0.4\"\n\n[dependencies.tokio]\nversion = \"1.33.0\"\ndefault-features = false\nfeatures = [\n\t\"fs\",\n\t\"macros\",\n\t\"rt\",\n]\n\n[dependencies.project-origins]\nversion = \"1.4.2\"\npath = \"../project-origins\"\n\n[dev-dependencies]\ntracing-subscriber = \"0.3.6\"\n\n[features]\ndefault = []\n\n## Don't hide ignore::gitignore::Gitignore Debug impl\nfull_debug = []\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\n"
  },
  {
    "path": "crates/ignore-files/README.md",
    "content": "[![Crates.io page](https://badgen.net/crates/v/ignore-files)](https://crates.io/crates/ignore-files)\n[![API Docs](https://docs.rs/ignore-files/badge.svg)][docs]\n[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]\n[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)\n\n# Ignore files\n\n_Find, parse, and interpret ignore files._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license].\n- Status: done.\n\n[docs]: https://docs.rs/ignore-files\n[license]: ../../LICENSE\n"
  },
  {
    "path": "crates/ignore-files/release.toml",
    "content": "pre-release-commit-message = \"release: ignore-files v{{version}}\"\ntag-prefix = \"ignore-files-\"\ntag-message = \"ignore-files {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/ignore-files/src/discover.rs",
    "content": "use std::{\n\tcollections::HashSet,\n\tenv,\n\tio::{Error, ErrorKind},\n\tpath::{Path, PathBuf},\n};\n\nuse futures::future::try_join_all;\nuse gix_config::{path::interpolate::Context as InterpolateContext, File, Path as GitPath};\nuse miette::{bail, Result};\nuse normalize_path::NormalizePath;\nuse project_origins::ProjectType;\nuse tokio::fs::{canonicalize, metadata, read_dir};\nuse tracing::{trace, trace_span};\n\nuse crate::{IgnoreFile, IgnoreFilter};\n\n/// Arguments for finding ignore files in a given directory and its subdirectories.\n#[derive(Clone, Debug, Default, PartialEq, Eq)]\n#[non_exhaustive]\npub struct IgnoreFilesFromOriginArgs {\n\t/// Origin from which the search for ignore files will start.\n\tpub origin: PathBuf,\n\n\t/// Paths that have been explicitly selected to be watched.\n\t///\n\t/// If this list is non-empty, all paths not on this list will be ignored.\n\t///\n\t/// These paths *must* be absolute and normalised (no `.` and `..` components).\n\tpub explicit_watches: Vec<PathBuf>,\n\n\t/// Paths that have been explicitly ignored.\n\t///\n\t/// If this list is non-empty, all paths on this list will be ignored.\n\t///\n\t/// These paths *must* be absolute and normalised (no `.` and `..` components).\n\tpub explicit_ignores: Vec<PathBuf>,\n}\n\nimpl IgnoreFilesFromOriginArgs {\n\t/// Check that this struct is correctly-formed.\n\tpub fn check(&self) -> Result<()> {\n\t\tif self.explicit_watches.iter().any(|p| !p.is_absolute()) {\n\t\t\tbail!(\"explicit_watches contains non-absolute paths\");\n\t\t}\n\t\tif self.explicit_watches.iter().any(|p| !p.is_normalized()) {\n\t\t\tbail!(\"explicit_watches contains non-normalised paths\");\n\t\t}\n\t\tif self.explicit_ignores.iter().any(|p| !p.is_absolute()) {\n\t\t\tbail!(\"explicit_ignores contains non-absolute paths\");\n\t\t}\n\t\tif self.explicit_ignores.iter().any(|p| !p.is_normalized()) {\n\t\t\tbail!(\"explicit_ignores contains non-normalised paths\");\n\t\t}\n\n\t\tOk(())\n\t}\n\n\t/// 
Canonicalise all paths.\n\t///\n\t/// The result is always well-formed.\n\tpub async fn canonicalise(self) -> std::io::Result<Self> {\n\t\tOk(Self {\n\t\t\torigin: canonicalize(&self.origin).await?,\n\t\t\texplicit_watches: try_join_all(self.explicit_watches.into_iter().map(canonicalize))\n\t\t\t\t.await?,\n\t\t\texplicit_ignores: try_join_all(self.explicit_ignores.into_iter().map(canonicalize))\n\t\t\t\t.await?,\n\t\t})\n\t}\n\n\t/// Create args with all fields set and check that they are correctly-formed.\n\tpub fn new(\n\t\torigin: impl AsRef<Path>,\n\t\texplicit_watches: Vec<PathBuf>,\n\t\texplicit_ignores: Vec<PathBuf>,\n\t) -> Result<Self> {\n\t\tlet this = Self {\n\t\t\torigin: PathBuf::from(origin.as_ref()),\n\t\t\texplicit_watches,\n\t\t\texplicit_ignores,\n\t\t};\n\t\tthis.check()?;\n\t\tOk(this)\n\t}\n\n\t/// Create args without checking well-formed-ness.\n\t///\n\t/// Use this only if you know that the args are well-formed, or if you are about to call\n\t/// [`canonicalise()`][IgnoreFilesFromOriginArgs::canonicalise()] on them.\n\tpub fn new_unchecked(\n\t\torigin: impl AsRef<Path>,\n\t\texplicit_watches: impl IntoIterator<Item = impl Into<PathBuf>>,\n\t\texplicit_ignores: impl IntoIterator<Item = impl Into<PathBuf>>,\n\t) -> Self {\n\t\tSelf {\n\t\t\torigin: origin.as_ref().into(),\n\t\t\texplicit_watches: explicit_watches.into_iter().map(Into::into).collect(),\n\t\t\texplicit_ignores: explicit_ignores.into_iter().map(Into::into).collect(),\n\t\t}\n\t}\n}\n\nimpl From<&Path> for IgnoreFilesFromOriginArgs {\n\tfn from(path: &Path) -> Self {\n\t\tSelf {\n\t\t\torigin: path.into(),\n\t\t\t..Default::default()\n\t\t}\n\t}\n}\n\n/// Finds all ignore files in the given directory and subdirectories.\n///\n/// This considers:\n/// - Git ignore files (`.gitignore`)\n/// - Mercurial ignore files (`.hgignore`)\n/// - Tool-generic `.ignore` files\n/// - `.git/info/exclude` files in the `path` directory only\n/// - Git configurable project ignore files (with 
`core.excludesFile` in `.git/config`)\n///\n/// Importantly, this should be called from the origin of the project, not a subfolder. This\n/// function will not discover the project origin, and will not traverse parent directories. Use the\n/// `project-origins` crate for that.\n///\n/// This function also does not distinguish between project folder types, and collects all files for\n/// all supported VCSs and other project types. Use the `applies_to` field to filter the results.\n///\n/// All errors (permissions, etc) are collected and returned alongside the ignore files: you may\n/// want to show them to the user while still using whatever ignores were successfully found. Errors\n/// from files not being found are silently ignored (the files are just not returned).\n///\n/// ## Special case: project-local git config specifying `core.excludesFile`\n///\n/// If the project's `.git/config` specifies a value for `core.excludesFile`, this function will\n/// return an `IgnoreFile { path: path/to/that/file, applies_in: None, applies_to: Some(ProjectType::Git) }`.\n/// This is the only case in which the `applies_in` field is None from this function. 
When such a file is\n/// received, the global Git ignore files found by [`from_environment()`] **should be ignored**.\n///\n/// ## Async\n///\n/// This future is not `Send` due to [`gix_config`] internals.\n///\n/// ## Panics\n///\n/// This function panics if the `args` are not correctly-formed; this can be checked beforehand\n/// without panicking with [`IgnoreFilesFromOriginArgs::check()`].\n#[expect(\n\tclippy::future_not_send,\n\treason = \"gix_config internals, if this changes: update the doc\"\n)]\n#[allow(\n\tclippy::too_many_lines,\n\treason = \"it's just the discover_file calls that explode the line count\"\n)]\npub async fn from_origin(\n\targs: impl Into<IgnoreFilesFromOriginArgs>,\n) -> (Vec<IgnoreFile>, Vec<Error>) {\n\tlet args = args.into();\n\targs.check()\n\t\t.expect(\"checking well-formedness of IgnoreFilesFromOriginArgs\");\n\n\tlet origin = &args.origin;\n\tlet mut ignore_files = args\n\t\t.explicit_ignores\n\t\t.iter()\n\t\t.map(|p| IgnoreFile {\n\t\t\tpath: p.clone(),\n\t\t\tapplies_in: Some(origin.clone()),\n\t\t\tapplies_to: None,\n\t\t})\n\t\t.collect();\n\tlet mut errors = Vec::new();\n\n\tmatch find_file(origin.join(\".git/config\")).await {\n\t\tErr(err) => errors.push(err),\n\t\tOk(None) => {}\n\t\tOk(Some(path)) => match path.parent().map(|path| File::from_git_dir(path.into())) {\n\t\t\tNone => errors.push(Error::new(\n\t\t\t\tErrorKind::Other,\n\t\t\t\t\"unreachable: .git/config must have a parent\",\n\t\t\t)),\n\t\t\tSome(Err(err)) => errors.push(Error::new(ErrorKind::Other, err)),\n\t\t\tSome(Ok(config)) => {\n\t\t\t\tlet config_excludes = config.value::<GitPath<'_>>(\"core.excludesFile\");\n\t\t\t\tif let Ok(excludes) = config_excludes {\n\t\t\t\t\tmatch excludes.interpolate(InterpolateContext {\n\t\t\t\t\t\thome_dir: env::var(\"HOME\").ok().map(PathBuf::from).as_deref(),\n\t\t\t\t\t\t..Default::default()\n\t\t\t\t\t}) {\n\t\t\t\t\t\tOk(e) => {\n\t\t\t\t\t\t\tdiscover_file(\n\t\t\t\t\t\t\t\t&mut ignore_files,\n\t\t\t\t\t\t\t\t&mut 
errors,\n\t\t\t\t\t\t\t\tNone,\n\t\t\t\t\t\t\t\tSome(ProjectType::Git),\n\t\t\t\t\t\t\t\te.into(),\n\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\terrors.push(Error::new(ErrorKind::Other, err));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t}\n\n\tdiscover_file(\n\t\t&mut ignore_files,\n\t\t&mut errors,\n\t\tSome(origin.clone()),\n\t\tSome(ProjectType::Bazaar),\n\t\torigin.join(\".bzrignore\"),\n\t)\n\t.await;\n\n\tdiscover_file(\n\t\t&mut ignore_files,\n\t\t&mut errors,\n\t\tSome(origin.clone()),\n\t\tSome(ProjectType::Darcs),\n\t\torigin.join(\"_darcs/prefs/boring\"),\n\t)\n\t.await;\n\n\tdiscover_file(\n\t\t&mut ignore_files,\n\t\t&mut errors,\n\t\tSome(origin.clone()),\n\t\tSome(ProjectType::Fossil),\n\t\torigin.join(\".fossil-settings/ignore-glob\"),\n\t)\n\t.await;\n\n\tdiscover_file(\n\t\t&mut ignore_files,\n\t\t&mut errors,\n\t\tSome(origin.clone()),\n\t\tSome(ProjectType::Git),\n\t\torigin.join(\".git/info/exclude\"),\n\t)\n\t.await;\n\n\ttrace!(\"visiting child directories for ignore files\");\n\tmatch DirTourist::new(origin, &ignore_files, &args.explicit_watches).await {\n\t\tOk(mut dirs) => {\n\t\t\tloop {\n\t\t\t\tmatch dirs.next().await {\n\t\t\t\t\tVisit::Done => break,\n\t\t\t\t\tVisit::Skip => continue,\n\t\t\t\t\tVisit::Find(dir) => {\n\t\t\t\t\t\t// Attempt to find a .ignore file in the directory\n\t\t\t\t\t\tif discover_file(\n\t\t\t\t\t\t\t&mut ignore_files,\n\t\t\t\t\t\t\t&mut errors,\n\t\t\t\t\t\t\tSome(dir.clone()),\n\t\t\t\t\t\t\tNone,\n\t\t\t\t\t\t\tdir.join(\".ignore\"),\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.await\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdirs.add_last_file_to_filter(&ignore_files, &mut errors)\n\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// Attempt to find a .gitignore file in the directory\n\t\t\t\t\t\tif discover_file(\n\t\t\t\t\t\t\t&mut ignore_files,\n\t\t\t\t\t\t\t&mut 
errors,\n\t\t\t\t\t\t\tSome(dir.clone()),\n\t\t\t\t\t\t\tSome(ProjectType::Git),\n\t\t\t\t\t\t\tdir.join(\".gitignore\"),\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.await\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdirs.add_last_file_to_filter(&ignore_files, &mut errors)\n\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// Attempt to find a .hgignore file in the directory\n\t\t\t\t\t\tif discover_file(\n\t\t\t\t\t\t\t&mut ignore_files,\n\t\t\t\t\t\t\t&mut errors,\n\t\t\t\t\t\t\tSome(dir.clone()),\n\t\t\t\t\t\t\tSome(ProjectType::Mercurial),\n\t\t\t\t\t\t\tdir.join(\".hgignore\"),\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.await\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdirs.add_last_file_to_filter(&ignore_files, &mut errors)\n\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\terrors.extend(dirs.errors);\n\t\t}\n\t\tErr(err) => {\n\t\t\terrors.push(err);\n\t\t}\n\t}\n\n\t(ignore_files, errors)\n}\n\n/// Finds all ignore files that apply to the current runtime.\n///\n/// Takes an optional `appname` for the calling application for application-specific config files.\n///\n/// This considers:\n/// - User-specific git ignore files (e.g. `~/.gitignore`)\n/// - Git configurable ignore files (e.g. with `core.excludesFile` in system or user config)\n/// - `$XDG_CONFIG_HOME/{appname}/ignore`, as well as other locations (APPDATA on Windows…)\n///\n/// All errors (permissions, etc) are collected and returned alongside the ignore files: you may\n/// want to show them to the user while still using whatever ignores were successfully found. 
Errors\n/// from files not being found are silently ignored (the files are just not returned).\n///\n/// ## Async\n///\n/// This future is not `Send` due to [`gix_config`] internals.\n#[expect(\n\tclippy::future_not_send,\n\treason = \"gix_config internals, if this changes: update the doc\"\n)]\n#[allow(clippy::too_many_lines, reason = \"clearer than broken up needlessly\")]\npub async fn from_environment(appname: Option<&str>) -> (Vec<IgnoreFile>, Vec<Error>) {\n\tlet mut files = Vec::new();\n\tlet mut errors = Vec::new();\n\n\tlet mut found_git_global = false;\n\tmatch File::from_environment_overrides().map(|mut env| {\n\t\tFile::from_globals().map(move |glo| {\n\t\t\tenv.append(glo);\n\t\t\tenv\n\t\t})\n\t}) {\n\t\tErr(err) => errors.push(Error::new(ErrorKind::Other, err)),\n\t\tOk(Err(err)) => errors.push(Error::new(ErrorKind::Other, err)),\n\t\tOk(Ok(config)) => {\n\t\t\tlet config_excludes = config.value::<GitPath<'_>>(\"core.excludesFile\");\n\t\t\tif let Ok(excludes) = config_excludes {\n\t\t\t\tmatch excludes.interpolate(InterpolateContext {\n\t\t\t\t\thome_dir: env::var(\"HOME\").ok().map(PathBuf::from).as_deref(),\n\t\t\t\t\t..Default::default()\n\t\t\t\t}) {\n\t\t\t\t\tOk(e) => {\n\t\t\t\t\t\tif discover_file(\n\t\t\t\t\t\t\t&mut files,\n\t\t\t\t\t\t\t&mut errors,\n\t\t\t\t\t\t\tNone,\n\t\t\t\t\t\t\tSome(ProjectType::Git),\n\t\t\t\t\t\t\te.into(),\n\t\t\t\t\t\t)\n\t\t\t\t\t\t.await\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfound_git_global = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\terrors.push(Error::new(ErrorKind::Other, err));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif !found_git_global {\n\t\tlet mut tries = Vec::with_capacity(3);\n\t\tif let Ok(home) = env::var(\"XDG_CONFIG_HOME\") {\n\t\t\ttries.push(Path::new(&home).join(\"git/ignore\"));\n\t\t}\n\t\tif let Ok(home) = env::var(\"HOME\") {\n\t\t\ttries.push(Path::new(&home).join(\".config/git/ignore\"));\n\t\t}\n\t\tif let Ok(home) = env::var(\"USERPROFILE\") 
{\n\t\t\ttries.push(Path::new(&home).join(\".config/git/ignore\"));\n\t\t}\n\n\t\tfor path in tries {\n\t\t\tif discover_file(&mut files, &mut errors, None, Some(ProjectType::Git), path).await {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tlet mut bzrs = Vec::with_capacity(5);\n\tif let Ok(home) = env::var(\"APPDATA\") {\n\t\tbzrs.push(Path::new(&home).join(\"Bazaar/2.0/ignore\"));\n\t}\n\tif let Ok(home) = env::var(\"HOME\") {\n\t\tbzrs.push(Path::new(&home).join(\".bazaar/ignore\"));\n\t}\n\n\tfor path in bzrs {\n\t\tif discover_file(\n\t\t\t&mut files,\n\t\t\t&mut errors,\n\t\t\tNone,\n\t\t\tSome(ProjectType::Bazaar),\n\t\t\tpath,\n\t\t)\n\t\t.await\n\t\t{\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif let Some(name) = appname {\n\t\tlet mut wgis = Vec::with_capacity(4);\n\t\tif let Ok(home) = env::var(\"XDG_CONFIG_HOME\") {\n\t\t\twgis.push(Path::new(&home).join(format!(\"{name}/ignore\")));\n\t\t}\n\t\tif let Ok(home) = env::var(\"APPDATA\") {\n\t\t\twgis.push(Path::new(&home).join(format!(\"{name}/ignore\")));\n\t\t}\n\t\tif let Ok(home) = env::var(\"USERPROFILE\") {\n\t\t\twgis.push(Path::new(&home).join(format!(\".{name}/ignore\")));\n\t\t}\n\t\tif let Ok(home) = env::var(\"HOME\") {\n\t\t\twgis.push(Path::new(&home).join(format!(\".{name}/ignore\")));\n\t\t}\n\n\t\tfor path in wgis {\n\t\t\tif discover_file(&mut files, &mut errors, None, None, path).await {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\t(files, errors)\n}\n\n// TODO: add context to these errors\n\n/// Utility function to handle looking for an ignore file and adding it to a list if found.\n///\n/// This is mostly an internal function, but it is exposed for other filterers to use.\n#[allow(clippy::future_not_send)]\n#[tracing::instrument(skip(files, errors), level = \"trace\")]\n#[inline]\npub async fn discover_file(\n\tfiles: &mut Vec<IgnoreFile>,\n\terrors: &mut Vec<Error>,\n\tapplies_in: Option<PathBuf>,\n\tapplies_to: Option<ProjectType>,\n\tpath: PathBuf,\n) -> bool {\n\tmatch find_file(path).await 
{\n\t\tErr(err) => {\n\t\t\ttrace!(?err, \"found an error\");\n\t\t\terrors.push(err);\n\t\t\tfalse\n\t\t}\n\t\tOk(None) => {\n\t\t\ttrace!(\"found nothing\");\n\t\t\tfalse\n\t\t}\n\t\tOk(Some(path)) => {\n\t\t\ttrace!(?path, \"found a file\");\n\t\t\tfiles.push(IgnoreFile {\n\t\t\t\tpath,\n\t\t\t\tapplies_in,\n\t\t\t\tapplies_to,\n\t\t\t});\n\t\t\ttrue\n\t\t}\n\t}\n}\n\nasync fn find_file(path: PathBuf) -> Result<Option<PathBuf>, Error> {\n\tmatch metadata(&path).await {\n\t\tErr(err) if err.kind() == std::io::ErrorKind::NotFound => Ok(None),\n\t\tErr(err) => Err(err),\n\t\tOk(meta) if meta.is_file() && meta.len() > 0 => Ok(Some(path)),\n\t\tOk(_) => Ok(None),\n\t}\n}\n\n#[derive(Debug)]\nstruct DirTourist {\n\tbase: PathBuf,\n\tto_visit: Vec<PathBuf>,\n\tto_skip: HashSet<PathBuf>,\n\tto_explicitly_watch: HashSet<PathBuf>,\n\tpub errors: Vec<std::io::Error>,\n\tfilter: IgnoreFilter,\n}\n\n#[derive(Debug)]\nenum Visit {\n\tFind(PathBuf),\n\tSkip,\n\tDone,\n}\n\nimpl DirTourist {\n\tpub async fn new(\n\t\tbase: &Path,\n\t\tignore_files: &[IgnoreFile],\n\t\twatch_files: &[PathBuf],\n\t) -> Result<Self, Error> {\n\t\tlet base = canonicalize(base).await?;\n\t\ttrace!(\"create IgnoreFilterer for visiting directories\");\n\t\tlet mut filter = IgnoreFilter::new(&base, ignore_files)\n\t\t\t.await\n\t\t\t.map_err(|err| Error::new(ErrorKind::Other, err))?;\n\n\t\tfilter\n\t\t\t.add_globs(\n\t\t\t\t&[\n\t\t\t\t\t\"/.git\",\n\t\t\t\t\t\"/.hg\",\n\t\t\t\t\t\"/.bzr\",\n\t\t\t\t\t\"/_darcs\",\n\t\t\t\t\t\"/.fossil-settings\",\n\t\t\t\t\t\"/.svn\",\n\t\t\t\t\t\"/.pijul\",\n\t\t\t\t],\n\t\t\t\tSome(&base),\n\t\t\t)\n\t\t\t.map_err(|err| Error::new(ErrorKind::Other, err))?;\n\n\t\tOk(Self {\n\t\t\tto_visit: vec![base.clone()],\n\t\t\tbase,\n\t\t\tto_skip: HashSet::new(),\n\t\t\tto_explicitly_watch: watch_files.iter().cloned().collect(),\n\t\t\terrors: Vec::new(),\n\t\t\tfilter,\n\t\t})\n\t}\n\n\t#[allow(clippy::future_not_send)]\n\tpub async fn next(&mut self) -> Visit {\n\t\tif let 
Some(path) = self.to_visit.pop() {\n\t\t\tself.visit_path(path).await\n\t\t} else {\n\t\t\tVisit::Done\n\t\t}\n\t}\n\n\t#[allow(clippy::future_not_send)]\n\t#[tracing::instrument(skip(self), level = \"trace\")]\n\tasync fn visit_path(&mut self, path: PathBuf) -> Visit {\n\t\tif self.must_skip(&path) {\n\t\t\ttrace!(\"in skip list\");\n\t\t\treturn Visit::Skip;\n\t\t}\n\n\t\tif !self.filter.check_dir(&path) {\n\t\t\ttrace!(?path, \"path is ignored, adding to skip list\");\n\t\t\tself.skip(path);\n\t\t\treturn Visit::Skip;\n\t\t}\n\n\t\t// If explicitly watched paths were not specified, we can include any path\n\t\t//\n\t\t// If explicitly watched paths *were* specified, then to include the path, either:\n\t\t// - the path in question starts with an explicitly included path (/a/b starting with /a)\n\t\t// - the path in question is *above* the explicitly included path (/a is above /a/b)\n\t\tif self.to_explicitly_watch.is_empty()\n\t\t\t|| self\n\t\t\t\t.to_explicitly_watch\n\t\t\t\t.iter()\n\t\t\t\t.any(|p| path.starts_with(p) || p.starts_with(&path))\n\t\t{\n\t\t\ttrace!(?path, ?self.to_explicitly_watch, \"including path; it starts with one of the explicitly watched paths\");\n\t\t} else {\n\t\t\ttrace!(?path, ?self.to_explicitly_watch, \"excluding path; it did not start with any of explicitly watched paths\");\n\t\t\tself.skip(path);\n\t\t\treturn Visit::Skip;\n\t\t}\n\n\t\tlet mut dir = match read_dir(&path).await {\n\t\t\tOk(dir) => dir,\n\t\t\tErr(err) => {\n\t\t\t\ttrace!(\"failed to read dir: {}\", err);\n\t\t\t\tself.errors.push(err);\n\t\t\t\treturn Visit::Skip;\n\t\t\t}\n\t\t};\n\n\t\twhile let Some(entry) = match dir.next_entry().await {\n\t\t\tOk(entry) => entry,\n\t\t\tErr(err) => {\n\t\t\t\ttrace!(\"failed to read dir entries: {}\", err);\n\t\t\t\tself.errors.push(err);\n\t\t\t\treturn Visit::Skip;\n\t\t\t}\n\t\t} {\n\t\t\tlet path = entry.path();\n\t\t\tlet _span = trace_span!(\"dir_entry\", ?path).entered();\n\n\t\t\tif self.must_skip(&path) 
{\n\t\t\t\ttrace!(\"in skip list\");\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tmatch entry.file_type().await {\n\t\t\t\tOk(ft) => {\n\t\t\t\t\tif ft.is_dir() {\n\t\t\t\t\t\tif !self.filter.check_dir(&path) {\n\t\t\t\t\t\t\ttrace!(\"path is ignored, adding to skip list\");\n\t\t\t\t\t\t\tself.skip(path);\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\ttrace!(\"found a dir, adding to list\");\n\t\t\t\t\t\tself.to_visit.push(path);\n\t\t\t\t\t} else {\n\t\t\t\t\t\ttrace!(\"not a dir\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tErr(err) => {\n\t\t\t\t\ttrace!(\"failed to read filetype, adding to skip list: {}\", err);\n\t\t\t\t\tself.errors.push(err);\n\t\t\t\t\tself.skip(path);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tVisit::Find(path)\n\t}\n\n\tpub fn skip(&mut self, path: PathBuf) {\n\t\tlet check_path = path.as_path();\n\t\tself.to_visit.retain(|p| !p.starts_with(check_path));\n\t\tself.to_skip.insert(path);\n\t}\n\n\tpub(crate) async fn add_last_file_to_filter(\n\t\t&mut self,\n\t\tfiles: &[IgnoreFile],\n\t\terrors: &mut Vec<Error>,\n\t) {\n\t\tif let Some(ig) = files.last() {\n\t\t\tif let Err(err) = self.filter.add_file(ig).await {\n\t\t\t\terrors.push(Error::new(ErrorKind::Other, err));\n\t\t\t}\n\t\t}\n\t}\n\n\tfn must_skip(&self, mut path: &Path) -> bool {\n\t\tif self.to_skip.contains(path) {\n\t\t\treturn true;\n\t\t}\n\t\twhile let Some(parent) = path.parent() {\n\t\t\tif parent == self.base {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif self.to_skip.contains(parent) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tpath = parent;\n\t\t}\n\n\t\tfalse\n\t}\n}\n"
  },
  {
    "path": "crates/ignore-files/src/error.rs",
    "content": "use std::path::PathBuf;\n\nuse miette::Diagnostic;\nuse thiserror::Error;\n\n#[derive(Debug, Error, Diagnostic)]\n#[non_exhaustive]\npub enum Error {\n\t/// Error received when an [`IgnoreFile`] cannot be read.\n\t///\n\t/// [`IgnoreFile`]: crate::IgnoreFile\n\t#[error(\"cannot read ignore '{file}': {err}\")]\n\tRead {\n\t\t/// The path to the erroring ignore file.\n\t\tfile: PathBuf,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: std::io::Error,\n\t},\n\n\t/// Error received when parsing a glob fails.\n\t#[error(\"cannot parse glob from ignore '{file:?}': {err}\")]\n\tGlob {\n\t\t/// The path to the erroring ignore file.\n\t\tfile: Option<PathBuf>,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: ignore::Error,\n\t\t// TODO: extract glob error into diagnostic\n\t},\n\n\t/// Multiple related [`Error`](enum@Error)s.\n\t#[error(\"multiple: {0:?}\")]\n\tMulti(#[related] Vec<Error>),\n\n\t/// Error received when trying to canonicalize a path\n\t#[error(\"cannot canonicalize '{path:?}'\")]\n\tCanonicalize {\n\t\t/// the path that cannot be canonicalized\n\t\tpath: PathBuf,\n\n\t\t/// the underlying error\n\t\t#[source]\n\t\terr: std::io::Error,\n\t},\n}\n"
  },
  {
    "path": "crates/ignore-files/src/filter.rs",
    "content": "use std::path::{Path, PathBuf};\n\nuse futures::stream::{FuturesUnordered, StreamExt};\nuse ignore::{\n\tgitignore::{Gitignore, GitignoreBuilder, Glob},\n\tMatch,\n};\nuse radix_trie::{Trie, TrieCommon};\nuse tokio::fs::{canonicalize, read_to_string};\nuse tracing::{trace, trace_span};\n\nuse crate::{simplify_path, Error, IgnoreFile};\n\n#[derive(Clone)]\n#[cfg_attr(feature = \"full_debug\", derive(Debug))]\nstruct Ignore {\n\tgitignore: Gitignore,\n\tbuilder: Option<GitignoreBuilder>,\n}\n\n#[cfg(not(feature = \"full_debug\"))]\nimpl std::fmt::Debug for Ignore {\n\tfn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n\t\tf.debug_struct(\"Ignore\")\n\t\t\t.field(\"gitignore\", &\"ignore::gitignore::Gitignore{...}\")\n\t\t\t.field(\"builder\", &\"ignore::gitignore::GitignoreBuilder{...}\")\n\t\t\t.finish()\n\t}\n}\n\n/// A mutable filter dedicated to ignore files and trees of ignore files.\n///\n/// This reads and compiles ignore files, and should be used for handling ignore files. 
It's created\n/// with a project origin and a list of ignore files, and new ignore files can be added later\n/// (unless [`finish`](IgnoreFilter::finish()) is called).\n#[derive(Clone, Debug)]\npub struct IgnoreFilter {\n\torigin: PathBuf,\n\tignores: Trie<String, Ignore>,\n}\n\nimpl IgnoreFilter {\n\t/// Create a new empty filterer.\n\t///\n\t/// Prefer [`new()`](IgnoreFilter::new()) if you have ignore files ready to use.\n\tpub fn empty(origin: impl AsRef<Path>) -> Self {\n\t\tlet origin = origin.as_ref();\n\n\t\tlet mut ignores = Trie::new();\n\t\tignores.insert(\n\t\t\torigin.display().to_string(),\n\t\t\tIgnore {\n\t\t\t\tgitignore: Gitignore::empty(),\n\t\t\t\tbuilder: Some(GitignoreBuilder::new(origin)),\n\t\t\t},\n\t\t);\n\n\t\tSelf {\n\t\t\torigin: origin.to_owned(),\n\t\t\tignores,\n\t\t}\n\t}\n\n\t/// Read ignore files from disk and load them for filtering.\n\t///\n\t/// Use [`empty()`](IgnoreFilter::empty()) if you want an empty filterer,\n\t/// or to construct one outside an async environment.\n\tpub async fn new(origin: impl AsRef<Path> + Send, files: &[IgnoreFile]) -> Result<Self, Error> {\n\t\tlet origin = origin.as_ref().to_owned();\n\t\tlet origin = canonicalize(&origin)\n\t\t\t.await\n\t\t\t.map_err(move |err| Error::Canonicalize { path: origin, err })?;\n\n\t\tlet origin = simplify_path(&origin);\n\t\tlet _span = trace_span!(\"build_filterer\", ?origin);\n\n\t\ttrace!(files=%files.len(), \"loading file contents\");\n\t\tlet (files_contents, errors): (Vec<_>, Vec<_>) = files\n\t\t\t.iter()\n\t\t\t.map(|file| async move {\n\t\t\t\ttrace!(?file, \"loading ignore file\");\n\t\t\t\tlet content = read_to_string(&file.path)\n\t\t\t\t\t.await\n\t\t\t\t\t.map_err(|err| Error::Read {\n\t\t\t\t\t\tfile: file.path.clone(),\n\t\t\t\t\t\terr,\n\t\t\t\t\t})?;\n\t\t\t\tOk((file.clone(), content))\n\t\t\t})\n\t\t\t.collect::<FuturesUnordered<_>>()\n\t\t\t.collect::<Vec<_>>()\n\t\t\t.await\n\t\t\t.into_iter()\n\t\t\t.map(|res| match res {\n\t\t\t\tOk(o) => 
(Some(o), None),\n\t\t\t\tErr(e) => (None, Some(e)),\n\t\t\t})\n\t\t\t.unzip();\n\n\t\tlet errors: Vec<Error> = errors.into_iter().flatten().collect();\n\t\tif !errors.is_empty() {\n\t\t\ttrace!(\"found {} errors\", errors.len());\n\t\t\treturn Err(Error::Multi(errors));\n\t\t}\n\n\t\t// TODO: different parser/adapter for non-git-syntax ignore files?\n\n\t\ttrace!(files=%files_contents.len(), \"building ignore list\");\n\n\t\tlet mut ignores_trie = Trie::new();\n\n\t\t// add builder for the root of the file system, so that we can handle global ignores and globs\n\t\tignores_trie.insert(\n\t\t\tprefix(&origin),\n\t\t\tIgnore {\n\t\t\t\tgitignore: Gitignore::empty(),\n\t\t\t\tbuilder: Some(GitignoreBuilder::new(&origin)),\n\t\t\t},\n\t\t);\n\n\t\tlet mut total_num_ignores = 0;\n\t\tlet mut total_num_whitelists = 0;\n\n\t\tfor (file, content) in files_contents.into_iter().flatten() {\n\t\t\tlet _span = trace_span!(\"loading ignore file\", ?file).entered();\n\n\t\t\tlet applies_in = get_applies_in_path(&origin, &file);\n\n\t\t\tlet mut builder = ignores_trie\n\t\t\t\t.get(&applies_in.display().to_string())\n\t\t\t\t.and_then(|node| node.builder.clone())\n\t\t\t\t.unwrap_or_else(|| GitignoreBuilder::new(&applies_in));\n\n\t\t\tfor line in content.lines() {\n\t\t\t\tif line.is_empty() || line.starts_with('#') {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\ttrace!(?line, \"adding ignore line\");\n\t\t\t\tbuilder\n\t\t\t\t\t.add_line(Some(applies_in.clone()), line)\n\t\t\t\t\t.map_err(|err| Error::Glob {\n\t\t\t\t\t\tfile: Some(file.path.clone()),\n\t\t\t\t\t\terr,\n\t\t\t\t\t})?;\n\t\t\t}\n\t\t\ttrace!(\"compiling globset\");\n\t\t\tlet compiled_builder = builder\n\t\t\t\t.build()\n\t\t\t\t.map_err(|err| Error::Glob { file: None, err })?;\n\n\t\t\ttotal_num_ignores += compiled_builder.num_ignores();\n\t\t\ttotal_num_whitelists += compiled_builder.num_whitelists();\n\n\t\t\tignores_trie.insert(\n\t\t\t\tapplies_in.display().to_string(),\n\t\t\t\tIgnore 
{\n\t\t\t\t\tgitignore: compiled_builder,\n\t\t\t\t\tbuilder: Some(builder),\n\t\t\t\t},\n\t\t\t);\n\t\t}\n\n\t\ttrace!(\n\t\t\tfiles=%files.len(),\n\t\t\ttrie=?ignores_trie,\n\t\t\tignores=%total_num_ignores,\n\t\t\tallows=%total_num_whitelists,\n\t\t\t\"ignore files loaded and compiled\",\n\t\t);\n\n\t\tOk(Self {\n\t\t\torigin: origin.clone(),\n\t\t\tignores: ignores_trie,\n\t\t})\n\t}\n\n\t/// Returns the number of ignores and allowlists loaded.\n\t#[must_use]\n\tpub fn num_ignores(&self) -> (u64, u64) {\n\t\tself.ignores.iter().fold((0, 0), |mut acc, (_, ignore)| {\n\t\t\tacc.0 += ignore.gitignore.num_ignores();\n\t\t\tacc.1 += ignore.gitignore.num_whitelists();\n\t\t\tacc\n\t\t})\n\t}\n\n\t/// Deletes the internal builder, to save memory.\n\t///\n\t/// This makes it impossible to add new ignore files without re-compiling the whole set.\n\tpub fn finish(&mut self) {\n\t\tlet keys = self.ignores.keys().cloned().collect::<Vec<_>>();\n\t\tfor key in keys {\n\t\t\tif let Some(ignore) = self.ignores.get_mut(&key) {\n\t\t\t\tignore.builder = None;\n\t\t\t}\n\t\t}\n\t}\n\n\t/// Reads and adds an ignore file, if the builder is available.\n\t///\n\t/// Does nothing silently otherwise.\n\tpub async fn add_file(&mut self, file: &IgnoreFile) -> Result<(), Error> {\n\t\tlet applies_in = get_applies_in_path(&self.origin, file);\n\t\tlet applies_in_str = applies_in.display().to_string();\n\n\t\tif self.ignores.get(&applies_in_str).is_none() {\n\t\t\tself.ignores.insert(\n\t\t\t\tapplies_in_str.clone(),\n\t\t\t\tIgnore {\n\t\t\t\t\tgitignore: Gitignore::empty(),\n\t\t\t\t\tbuilder: Some(GitignoreBuilder::new(&applies_in)),\n\t\t\t\t},\n\t\t\t);\n\t\t}\n\n\t\tlet Some(Ignore {\n\t\t\tbuilder: Some(ref mut builder),\n\t\t\t..\n\t\t}) = self.ignores.get_mut(&applies_in_str)\n\t\telse {\n\t\t\treturn Ok(());\n\t\t};\n\n\t\ttrace!(?file, \"reading ignore file\");\n\t\tlet content = read_to_string(&file.path)\n\t\t\t.await\n\t\t\t.map_err(|err| Error::Read {\n\t\t\t\tfile: 
file.path.clone(),\n\t\t\t\terr,\n\t\t\t})?;\n\n\t\tlet _span = trace_span!(\"loading ignore file\", ?file).entered();\n\t\tfor line in content.lines() {\n\t\t\tif line.is_empty() || line.starts_with('#') {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\ttrace!(?line, \"adding ignore line\");\n\t\t\tbuilder\n\t\t\t\t.add_line(Some(applies_in.clone()), line)\n\t\t\t\t.map_err(|err| Error::Glob {\n\t\t\t\t\tfile: Some(file.path.clone()),\n\t\t\t\t\terr,\n\t\t\t\t})?;\n\t\t}\n\n\t\tself.recompile(file)?;\n\n\t\tOk(())\n\t}\n\n\tfn recompile(&mut self, file: &IgnoreFile) -> Result<(), Error> {\n\t\tlet applies_in = get_applies_in_path(&self.origin, file)\n\t\t\t.display()\n\t\t\t.to_string();\n\n\t\tlet Some(Ignore {\n\t\t\tgitignore: compiled,\n\t\t\tbuilder: Some(builder),\n\t\t}) = self.ignores.get(&applies_in)\n\t\telse {\n\t\t\treturn Ok(());\n\t\t};\n\n\t\tlet pre_ignores = compiled.num_ignores();\n\t\tlet pre_allows = compiled.num_whitelists();\n\n\t\ttrace!(\"recompiling globset\");\n\t\tlet recompiled = builder.build().map_err(|err| Error::Glob {\n\t\t\tfile: Some(file.path.clone()),\n\t\t\terr,\n\t\t})?;\n\n\t\ttrace!(\n\t\t\tnew_ignores=%(recompiled.num_ignores() - pre_ignores),\n\t\t\tnew_allows=%(recompiled.num_whitelists() - pre_allows),\n\t\t\t\"ignore file loaded and set recompiled\",\n\t\t);\n\n\t\tself.ignores.insert(\n\t\t\tapplies_in,\n\t\t\tIgnore {\n\t\t\t\tgitignore: recompiled,\n\t\t\t\tbuilder: Some(builder.to_owned()),\n\t\t\t},\n\t\t);\n\n\t\tOk(())\n\t}\n\n\t/// Adds some globs manually, if the builder is available.\n\t///\n\t/// Does nothing silently otherwise.\n\tpub fn add_globs(&mut self, globs: &[&str], applies_in: Option<&PathBuf>) -> Result<(), Error> {\n\t\tlet virtual_ignore_file = IgnoreFile {\n\t\t\tpath: \"manual glob\".into(),\n\t\t\tapplies_in: applies_in.cloned(),\n\t\t\tapplies_to: None,\n\t\t};\n\t\tlet applies_in = get_applies_in_path(&self.origin, &virtual_ignore_file);\n\t\tlet applies_in_str = 
applies_in.display().to_string();\n\n\t\tif self.ignores.get(&applies_in_str).is_none() {\n\t\t\tself.ignores.insert(\n\t\t\t\tapplies_in_str.clone(),\n\t\t\t\tIgnore {\n\t\t\t\t\tgitignore: Gitignore::empty(),\n\t\t\t\t\tbuilder: Some(GitignoreBuilder::new(&applies_in)),\n\t\t\t\t},\n\t\t\t);\n\t\t}\n\n\t\tlet Some(Ignore {\n\t\t\tbuilder: Some(builder),\n\t\t\t..\n\t\t}) = self.ignores.get_mut(&applies_in_str)\n\t\telse {\n\t\t\treturn Ok(());\n\t\t};\n\n\t\tlet _span = trace_span!(\"loading ignore globs\", ?globs).entered();\n\t\tfor line in globs {\n\t\t\tif line.is_empty() || line.starts_with('#') {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\ttrace!(?line, \"adding ignore line\");\n\t\t\tbuilder\n\t\t\t\t.add_line(Some(applies_in.clone()), line)\n\t\t\t\t.map_err(|err| Error::Glob { file: None, err })?;\n\t\t}\n\n\t\tself.recompile(&virtual_ignore_file)?;\n\n\t\tOk(())\n\t}\n\n\t/// Match a particular path against the ignore set.\n\tpub fn match_path(&self, path: &Path, is_dir: bool) -> Match<&Glob> {\n\t\tlet path = simplify_path(path);\n\t\tlet path = path.as_path();\n\n\t\tlet mut search_path = path;\n\t\tloop {\n\t\t\tlet Some(trie_node) = self\n\t\t\t\t.ignores\n\t\t\t\t.get_ancestor(&search_path.display().to_string())\n\t\t\telse {\n\t\t\t\ttrace!(?path, ?search_path, \"no ignores for path\");\n\t\t\t\treturn Match::None;\n\t\t\t};\n\n\t\t\t// Unwrap will always succeed because every node has an entry.\n\t\t\tlet ignores = trie_node.value().unwrap();\n\n\t\t\tlet match_ = if path.strip_prefix(&self.origin).is_ok() {\n\t\t\t\ttrace!(?path, ?search_path, \"checking against path or parents\");\n\t\t\t\tignores.gitignore.matched_path_or_any_parents(path, is_dir)\n\t\t\t} else {\n\t\t\t\ttrace!(?path, ?search_path, \"checking against path only\");\n\t\t\t\tignores.gitignore.matched(path, is_dir)\n\t\t\t};\n\n\t\t\tmatch match_ {\n\t\t\t\tMatch::None => {\n\t\t\t\t\ttrace!(\n\t\t\t\t\t\t?path,\n\t\t\t\t\t\t?search_path,\n\t\t\t\t\t\t\"no match found, searching for 
parent ignores\"\n\t\t\t\t\t);\n\t\t\t\t\t// Unwrap will always succeed because every node has an entry.\n\t\t\t\t\tlet trie_path = Path::new(trie_node.key().unwrap());\n\t\t\t\t\tif let Some(trie_parent) = trie_path.parent() {\n\t\t\t\t\t\ttrace!(?path, ?search_path, \"checking parent ignore\");\n\t\t\t\t\t\tsearch_path = trie_parent;\n\t\t\t\t\t} else {\n\t\t\t\t\t\ttrace!(?path, ?search_path, \"no parent ignore found\");\n\t\t\t\t\t\treturn Match::None;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t_ => return match_,\n\t\t\t}\n\t\t}\n\t}\n\n\t/// Check a particular folder path against the ignore set.\n\t///\n\t/// Returns `false` if the folder should be ignored.\n\t///\n\t/// Note that this is a slightly different implementation than watchexec's Filterer trait, as\n\t/// the latter handles events with multiple associated paths.\n\tpub fn check_dir(&self, path: &Path) -> bool {\n\t\tlet _span = trace_span!(\"check_dir\", ?path).entered();\n\n\t\ttrace!(\"checking against compiled ignore files\");\n\t\tmatch self.match_path(path, true) {\n\t\t\tMatch::None => {\n\t\t\t\ttrace!(\"no match (pass)\");\n\t\t\t\ttrue\n\t\t\t}\n\t\t\tMatch::Ignore(glob) => {\n\t\t\t\tif glob.from().map_or(true, |f| path.strip_prefix(f).is_ok()) {\n\t\t\t\t\ttrace!(?glob, \"positive match (fail)\");\n\t\t\t\t\tfalse\n\t\t\t\t} else {\n\t\t\t\t\ttrace!(?glob, \"positive match, but not in scope (pass)\");\n\t\t\t\t\ttrue\n\t\t\t\t}\n\t\t\t}\n\t\t\tMatch::Whitelist(glob) => {\n\t\t\t\ttrace!(?glob, \"negative match (pass)\");\n\t\t\t\ttrue\n\t\t\t}\n\t\t}\n\t}\n}\n\nfn get_applies_in_path(origin: &Path, ignore_file: &IgnoreFile) -> PathBuf {\n\tlet root_path = PathBuf::from(prefix(origin));\n\tignore_file\n\t\t.applies_in\n\t\t.as_ref()\n\t\t.map_or(root_path, |p| simplify_path(p))\n}\n\n/// Gets the root component of a given path.\n///\n/// This will be `/` on unix systems, or a Drive letter (`C:`, `D:`, etc)\nfn prefix<T: AsRef<Path>>(path: T) -> String {\n\tlet path = path.as_ref();\n\n\tlet 
Some(prefix) = path.components().next() else {\n\t\treturn \"/\".into();\n\t};\n\n\tmatch prefix {\n\t\tstd::path::Component::Prefix(prefix_component) => {\n\t\t\tprefix_component.as_os_str().to_str().unwrap_or(\"/\").into()\n\t\t}\n\t\t_ => \"/\".into(),\n\t}\n}\n\n#[cfg(test)]\nmod tests {\n\tuse super::IgnoreFilter;\n\n\t#[tokio::test]\n\tasync fn handle_relative_paths() {\n\t\tlet ignore = IgnoreFilter::new(\".\", &[]).await.unwrap();\n\t\tassert!(ignore.origin.is_absolute());\n\t}\n}\n"
  },
  {
    "path": "crates/ignore-files/src/lib.rs",
    "content": "//! Find, parse, and interpret ignore files.\n//!\n//! Ignore files are files that contain ignore patterns, often following the `.gitignore` format.\n//! There may be one or more global ignore files, which apply everywhere, and one or more per-folder\n//! ignore files, which apply to a specific folder and its subfolders. Furthermore, there may be\n//! more ignore files in _these_ subfolders, and so on. Discovering and interpreting all of these in\n//! a single context is not a simple task: this is what this crate provides.\n\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n\nuse std::path::{Path, PathBuf};\n\nuse normalize_path::NormalizePath;\nuse project_origins::ProjectType;\n\n#[doc(inline)]\npub use discover::*;\nmod discover;\n\n#[doc(inline)]\npub use error::*;\nmod error;\n\n#[doc(inline)]\npub use filter::*;\nmod filter;\n\n/// An ignore file.\n///\n/// This records both the path to the ignore file and some basic metadata about it: which project\n/// type it applies to if any, and which subtree it applies in if any (`None` = global ignore file).\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub struct IgnoreFile {\n\t/// The path to the ignore file.\n\tpub path: PathBuf,\n\n\t/// The path to the subtree the ignore file applies to, or `None` for global ignores.\n\tpub applies_in: Option<PathBuf>,\n\n\t/// Which project type the ignore file applies to, or was found through.\n\tpub applies_to: Option<ProjectType>,\n}\n\npub(crate) fn simplify_path(path: &Path) -> PathBuf {\n\tdunce::simplified(path).normalize()\n}\n"
  },
  {
    "path": "crates/ignore-files/tests/filtering.rs",
    "content": "mod helpers;\n\nuse helpers::ignore_tests::*;\n\n#[tokio::test]\nasync fn globals() {\n\tlet filter = filt(\n\t\t\"tree\",\n\t\t&[\n\t\t\tfile(\"global/first\").applies_globally(),\n\t\t\tfile(\"global/second\").applies_globally(),\n\t\t],\n\t)\n\t.await;\n\n\t// Both ignores should be loaded as global\n\tfilter.agnostic_fail(\"/apples\");\n\tfilter.agnostic_fail(\"/oranges\");\n\n\t// Sanity check\n\tfilter.agnostic_pass(\"/kiwi\");\n}\n\n#[tokio::test]\nasync fn tree() {\n\tlet filter = filt(\"tree\", &[file(\"tree/base\"), file(\"tree/branch/inner\")]).await;\n\n\t// \"oranges\" is not ignored at any level\n\tfilter.agnostic_pass(\"tree/oranges\");\n\tfilter.agnostic_pass(\"tree/branch/oranges\");\n\tfilter.agnostic_pass(\"tree/branch/inner/oranges\");\n\tfilter.agnostic_pass(\"tree/other/oranges\");\n\n\t// \"apples\" should only be ignored at the root\n\tfilter.agnostic_fail(\"tree/apples\");\n\tfilter.agnostic_pass(\"tree/branch/apples\");\n\tfilter.agnostic_pass(\"tree/branch/inner/apples\");\n\tfilter.agnostic_pass(\"tree/other/apples\");\n\n\t// \"carrots\" should be ignored at any level\n\tfilter.agnostic_fail(\"tree/carrots\");\n\tfilter.agnostic_fail(\"tree/branch/carrots\");\n\tfilter.agnostic_fail(\"tree/branch/inner/carrots\");\n\tfilter.agnostic_fail(\"tree/other/carrots\");\n\n\t// \"pineapples/grapes\" should only be ignored at the root\n\tfilter.agnostic_fail(\"tree/pineapples/grapes\");\n\tfilter.agnostic_pass(\"tree/branch/pineapples/grapes\");\n\tfilter.agnostic_pass(\"tree/branch/inner/pineapples/grapes\");\n\tfilter.agnostic_pass(\"tree/other/pineapples/grapes\");\n\n\t// \"cauliflowers\" should only be ignored at the root of \"branch/\"\n\tfilter.agnostic_pass(\"tree/cauliflowers\");\n\tfilter.agnostic_fail(\"tree/branch/cauliflowers\");\n\tfilter.agnostic_pass(\"tree/branch/inner/cauliflowers\");\n\tfilter.agnostic_pass(\"tree/other/cauliflowers\");\n\n\t// \"artichokes\" should be ignored anywhere inside of 
\"branch/\"\n\tfilter.agnostic_pass(\"tree/artichokes\");\n\tfilter.agnostic_fail(\"tree/branch/artichokes\");\n\tfilter.agnostic_fail(\"tree/branch/inner/artichokes\");\n\tfilter.agnostic_pass(\"tree/other/artichokes\");\n\n\t// \"bananas/pears\" should only be ignored at the root of \"branch/\"\n\tfilter.agnostic_pass(\"tree/bananas/pears\");\n\tfilter.agnostic_fail(\"tree/branch/bananas/pears\");\n\tfilter.agnostic_pass(\"tree/branch/inner/bananas/pears\");\n\tfilter.agnostic_pass(\"tree/other/bananas/pears\");\n}\n"
  },
  {
    "path": "crates/ignore-files/tests/global/first",
    "content": "apples\n"
  },
  {
    "path": "crates/ignore-files/tests/global/second",
    "content": "oranges\n"
  },
  {
    "path": "crates/ignore-files/tests/helpers/mod.rs",
    "content": "use std::path::{Path, PathBuf};\n\nuse ignore::{gitignore::Glob, Match};\nuse ignore_files::{IgnoreFile, IgnoreFilter};\n\npub mod ignore_tests {\n\tpub use super::ig_file as file;\n\tpub use super::ignore_filt as filt;\n\tpub use super::Applies;\n\tpub use super::PathHarness;\n}\n\n/// Get the drive letter of the current working directory.\n#[cfg(windows)]\nfn drive_root() -> String {\n\tlet path = std::fs::canonicalize(\".\").unwrap();\n\n\tlet Some(prefix) = path.components().next() else {\n\t\treturn r\"C:\\\".into();\n\t};\n\n\tmatch prefix {\n\t\tstd::path::Component::Prefix(prefix_component) => prefix_component\n\t\t\t.as_os_str()\n\t\t\t.to_str()\n\t\t\t.map(|p| p.to_owned() + r\"\\\")\n\t\t\t.unwrap_or(r\"C:\\\".into()),\n\t\t_ => r\"C:\\\".into(),\n\t}\n}\n\nfn normalize_path(path: &str) -> PathBuf {\n\t#[cfg(windows)]\n\tlet path: &str = &String::from(path)\n\t\t.strip_prefix(\"/\")\n\t\t.map_or(path.into(), |p| drive_root() + p);\n\n\tlet path: PathBuf = if Path::new(path).has_root() {\n\t\tpath.into()\n\t} else {\n\t\tstd::fs::canonicalize(\".\").unwrap().join(\"tests\").join(path)\n\t};\n\n\tdunce::simplified(&path).into()\n}\n\npub trait PathHarness {\n\tfn check_path(&self, path: &Path, is_dir: bool) -> Match<&Glob>;\n\n\tfn path_pass(&self, path: &str, is_dir: bool, pass: bool) {\n\t\tlet full_path = &normalize_path(path);\n\n\t\ttracing::info!(?path, ?is_dir, ?pass, \"check\");\n\n\t\tlet result = self.check_path(full_path, is_dir);\n\n\t\tassert_eq!(\n\t\t\tmatch result {\n\t\t\t\tMatch::None => true,\n\t\t\t\tMatch::Ignore(glob) => !glob.from().map_or(true, |f| full_path.starts_with(f)),\n\t\t\t\tMatch::Whitelist(_glob) => true,\n\t\t\t},\n\t\t\tpass,\n\t\t\t\"{} {:?} (expected {}) [result: {}]\",\n\t\t\tif is_dir { \"dir\" } else { \"file\" },\n\t\t\tfull_path,\n\t\t\tif pass { \"pass\" } else { \"fail\" },\n\t\t\tmatch result {\n\t\t\t\tMatch::None => String::from(\"None\"),\n\t\t\t\tMatch::Ignore(glob) => 
format!(\n\t\t\t\t\t\"Ignore({})\",\n\t\t\t\t\tglob.from()\n\t\t\t\t\t\t.map_or(String::new(), |f| f.display().to_string())\n\t\t\t\t),\n\t\t\t\tMatch::Whitelist(glob) => format!(\n\t\t\t\t\t\"Whitelist({})\",\n\t\t\t\t\tglob.from()\n\t\t\t\t\t\t.map_or(String::new(), |f| f.display().to_string())\n\t\t\t\t),\n\t\t\t},\n\t\t);\n\t}\n\n\tfn file_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, false, true);\n\t}\n\n\tfn file_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, false, false);\n\t}\n\n\tfn dir_does_pass(&self, path: &str) {\n\t\tself.path_pass(path, true, true);\n\t}\n\n\tfn dir_doesnt_pass(&self, path: &str) {\n\t\tself.path_pass(path, true, false);\n\t}\n\n\tfn agnostic_pass(&self, path: &str) {\n\t\tself.file_does_pass(path);\n\t\tself.dir_does_pass(path);\n\t}\n\n\tfn agnostic_fail(&self, path: &str) {\n\t\tself.file_doesnt_pass(path);\n\t\tself.dir_doesnt_pass(path);\n\t}\n}\n\nimpl PathHarness for IgnoreFilter {\n\tfn check_path(&self, path: &Path, is_dir: bool) -> Match<&Glob> {\n\t\tself.match_path(path, is_dir)\n\t}\n}\n\nfn tracing_init() {\n\tuse tracing_subscriber::{\n\t\tfmt::{format::FmtSpan, Subscriber},\n\t\tutil::SubscriberInitExt,\n\t\tEnvFilter,\n\t};\n\tSubscriber::builder()\n\t\t.pretty()\n\t\t.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)\n\t\t.with_env_filter(EnvFilter::from_default_env())\n\t\t.finish()\n\t\t.try_init()\n\t\t.ok();\n}\n\npub async fn ignore_filt(origin: &str, ignore_files: &[IgnoreFile]) -> IgnoreFilter {\n\ttracing_init();\n\tlet origin = normalize_path(origin);\n\tIgnoreFilter::new(origin, ignore_files)\n\t\t.await\n\t\t.expect(\"making filterer\")\n}\n\npub fn ig_file(name: &str) -> IgnoreFile {\n\tlet path = normalize_path(name);\n\tlet parent: PathBuf = path.parent().unwrap_or(&path).into();\n\tIgnoreFile {\n\t\tpath,\n\t\tapplies_in: Some(parent),\n\t\tapplies_to: None,\n\t}\n}\n\npub trait Applies {\n\tfn applies_globally(self) -> Self;\n}\n\nimpl Applies for IgnoreFile {\n\tfn 
applies_globally(mut self) -> Self {\n\t\tself.applies_in = None;\n\t\tself\n\t}\n}\n"
  },
  {
    "path": "crates/ignore-files/tests/tree/base",
    "content": "/apples\ncarrots\npineapples/grapes\n"
  },
  {
    "path": "crates/ignore-files/tests/tree/branch/inner",
    "content": "/cauliflowers\nartichokes\nbananas/pears\n"
  },
  {
    "path": "crates/lib/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v8.2.0 (2026-03-02)\n\n- Feat: add `fs_ready` signal for watcher readiness ([#1024](https://github.com/watchexec/watchexec/pull/1024))\n\n## v8.1.2 (2026-02-24)\n\n## v8.1.1 (2026-02-22)\n\n- Fix: bug on macOS where a task in the keyboard events worker would hang after graceful quit ([#1018](https://github.com/watchexec/watchexec/pull/1018))\n\n## v8.1.0 (2026-02-22)\n\n- Augments `keyboard_events` config to emit events for all single keyboard key inputs, in addition to the existing EOF\n- `keyboard_events` now switches to raw mode (and disabling it switches back to cooked)\n\n## v8.0.1 (2025-05-15)\n\n## v8.0.0 (2025-05-15)\n\n## v7.0.0 (2025-05-15)\n\n- Deps: remove unused dependency `async-recursion` ([#930](https://github.com/watchexec/watchexec/pull/930))\n- Deps: remove unused dependency `process-wrap` ([#930](https://github.com/watchexec/watchexec/pull/930))\n- Deps: remove unused dependency `project-origins` ([#930](https://github.com/watchexec/watchexec/pull/930))\n- Deps: remove ignore-files dependency ([#929](https://github.com/watchexec/watchexec/pull/929))\n- Breaking: remove deprecated IgnoreFiles variant on RuntimeError ([#929](https://github.com/watchexec/watchexec/pull/929))\n\n## v6.0.0 (2025-02-09)\n\n## v5.0.0 (2024-10-14)\n\n- Deps: nix 0.29\n\n## v4.1.0 (2024-04-28)\n\n- Feature: non-recursive watches with `WatchedPath::non_recursive()`\n- Fix: `config.pathset()` now preserves `WatchedPath` attributes\n- Refactor: move `WatchedPath` to the root of the crate (old path remains as re-export for now)\n\n## v4.0.0 (2024-04-20)\n\n- Deps: replace command-group with process-wrap (in supervisor, but has flow-on effects)\n- Deps: miette 7\n- Deps: nix 0.28\n\n## v3.0.1 (2023-11-29)\n\n- Deps: watchexec-events and watchexec-signals after major bump and yank\n\n## v3.0.0 (2023-11-26)\n\n### General\n\n- Crate is more oriented around `Watchexec` the core experience rather than providing the 
kitchen sink / components so you could build your own from the pieces; that helps the cohesion of the whole and simplifies many patterns.\n- Deprecated items (mostly leftover from splitting out the `watchexec_events` and `watchexec_signals` crates) are removed.\n- Watchexec can now supervise multiple commands at once. See [Action](#Action) below, the [Action docs](https://docs.rs/watchexec/latest/watchexec/action/struct.Action.html), and the [Supervisor docs](https://docs.rs/watchexec-supervisor) for more.\n- Because of this new feature, the old mechanism where multiple commands could be set under a single supervisor is removed.\n- Watchexec's supervisor was split up into its own crate, [`watchexec-supervisor`](https://docs.rs/watchexec-supervisor).\n- Tokio requirement is now 1.33.\n- Notify was upgraded to 6.0.\n- Nix was upgraded to 0.27.\n\n### `Watchexec`\n\n- `Watchexec::new()` now takes the `on_action` handler. As this is the most important handler to define and Watchexec will not be functional without one, that enforces providing it first.\n- `Watchexec::with_config()` lets one provide a config upfront; otherwise the default values are used.\n- `Watchexec::default()` is mostly used to avoid boilerplate in doc comment examples, and panics on initialisation errors.\n- `Watchexec::reconfigure()` is removed. Use the public `config` field instead to access the \"live\" `Arc<Config>` (see below).\n- Completion events aren't emitted anymore. They still exist in the Event enum, but they're not generated by Watchexec itself. Use `Job#to_wait` instead. Of course you can insert them as synthetic events if you want.\n\n### Config\n\n- `InitConfig` and `RuntimeConfig` have been unified into a single `Config` struct.\n- Instead of module-specific `WorkingData` structures, all of the config is now flat in the same `Config`. 
That makes it easier to work with as all that's needed is to pass an `Arc<Config>` around, but it does mean the event sources are no longer independent.\n- Instead of using `tokio::sync::watch` for some values, and `HandlerLock` for handlers, and so on, everything is now a new `Changeable` type, specialised to `ChangeableFn` for closures and `ChangeableFilterer` for the Filterer.\n- There's now a `signal_change()` method which must be called after changes to the config; this is taken care of when using the methods on `Config`. This is required for the few places in Watchexec which need active reconfiguration rather than reading config values just-in-time.\n- The above means that instead of using `Watchexec::reconfigure()` and keeping a clone of the config around, an `Arc<Config>` is now \"live\" and changes applied to it will affect the Watchexec instance directly.\n- `command` / `commands` are removed from config. Instead use the Action handler API for creating new supervised commands.\n- `command_grouped` is removed from config. That's now an option set on `Command`.\n- `action_throttle` is renamed to `throttle` and now defaults to `50ms`, which is the default in Watchexec CLI.\n- `keyboard_emit_eof` is renamed to `keyboard_events`.\n- `pre_spawn_handler` is removed. Use `Job#set_spawn_hook` instead.\n- `post_spawn_handler` is removed. Use `Job#run` instead.\n\n### Command\n\nThe structure has been reworked to be simpler and more extensible. Instead of a Command _enum_, there's now a Command _struct_, which holds a single `Program` and behaviour-altering options. 
`Shell` has also been redone, with less special-casing.\n\nIf you had:\n\n```rust\nCommand::Exec {\n    prog: \"date\".into(),\n    args: vec![\"+%s\".into()],\n}\n```\n\nYou should now write:\n\n```rust\nCommand {\n    program: Program::Exec {\n        prog: \"date\".into(),\n        args: vec![\"+%s\".into()],\n    },\n    options: Default::default(),\n}\n```\n\nThe new `Program::Shell` field `args: Vec<String>` lets you pass (trailing) arguments to the shell invocation:\n\n```rust\nProgram::Shell {\n    shell: Shell::new(\"sh\"),\n    command: \"ls\".into(),\n    args: vec![\"--\".into(), \"movies\".into()],\n}\n```\n\nis equivalent to:\n\n```console\n$ sh -c \"ls\" -- movies\n```\n\n- The old `args` field of `Command::Shell` is now the `options` field of `Shell`.\n- `Shell` has a new field `program_option: Option<Cow<OsStr>>` which is the syntax of the option used to provide the command. Ie for most shells it's `-c` and for `CMD.EXE` it's `/C`; this makes it fully customisable (including its absence!) if you want to use weird shells or non-shell programs as shells.\n- The special-cased `Shell::Powershell` is removed.\n- On Windows, arguments are specified with [`raw_arg`](https://doc.rust-lang.org/stable/std/os/windows/process/trait.CommandExt.html#tymethod.raw_arg) instead of `arg` to avoid quoting issues.\n- `Command` can no longer take a list of programs. That was always quite a hack; now that multiple supervised commands are possible, that's how multiple programs should be handled.\n- The top-level Watchexec `command_grouped` option is now Command-level, so you can start both grouped and non-grouped programs.\n- There's a new `reset_sigmask` option to control whether commands should have their signal masks reset on Unix. 
By default the signal mask is inherited.\n\n### Errors\n\n- `RuntimeError::NoCommands`, `RuntimeError::Handler`, `RuntimeError::HandlerLockHeld`, and `CriticalError::MissingHandler` are removed as the relevant types/structures don't exist anymore.\n- `RuntimeError::CommandShellEmptyCommand` and `RuntimeError::CommandShellEmptyShell` are removed; you can construct `Shell` with an empty shell program and `Program::Shell` with an empty command; these will at best do nothing but they won't error early through Watchexec.\n- `RuntimeError::ClearScreen` is removed, as clearing the screen is now done by the consumer of Watchexec, not Watchexec itself.\n- Watchexec will now panic if locks are poisoned; we can't recover from that.\n- The filesystem watcher's \"too many files\", \"too many handles\", and other initialisation errors are removed as `RuntimeErrors`, and are now `CriticalErrors`. Having these be runtime, nominally recoverable errors instead of end-the-world failures was one of the most common pitfalls of using the library, and though recovery _is_ technically possible, it's better approached other ways.\n- The `on_error` handler is now sync only and no longer returns a `Result`; as such there's no longer the weird logic of \"if the `on_error` handler errors, it will call itself on the error once, then crash\".\n- If you were doing async work in `on_error`, you should instead use non-async calls (like `try_send()` for Tokio channels). The error handler is expected to return as fast as possible, and _not_ do blocking work if it can at all avoid it; this was always the case but is now documented more explicitly.\n- Error diagnostic codes are removed.\n\n### Action\n\nThe process supervision system is entirely reworked. Instead of \"applying `Outcome`s\", there's now a `Job` type which is a single supervised command, provided by the separate [`watchexec-supervisor`](https://docs.rs/watchexec-supervisor) crate. 
The Action handler itself can only create new jobs and list existing ones, and interaction with commands is done through the `Job` type.\n\nThe controls available on `Job` are now modeled on \"real\" supervisors like systemd, and are both more and less powerful than the old `Outcome` system. This can be seen clearly in how a \"restart\" is specified. Previously, this was an `Outcome` combinator:\n\n```rust\nOutcome::if_running(\n    Outcome::both(Outcome::stop(), Outcome::start()),\n    Outcome::start(),\n)\n```\n\nNow, it's a discrete method:\n\n```rust\njob.restart();\n```\n\nPreviously, a graceful stop was a mess:\n\n```rust\nOutcome::if_running(\n    Outcome::both(\n        Outcome::both(\n            Outcome::signal(Signal::Terminate),\n            Outcome::wait_timeout(Duration::from_secs(30)),\n        ),\n        Outcome::both(Outcome::stop(), Outcome::start()),\n    ),\n    Outcome::DoNothing,\n)\n```\n\nNow, it's again a discrete method:\n\n```rust\njob.stop_with_signal(Signal::Terminate, Duration::from_secs(30));\n```\n\nThe `stop()` and `start()` methods also do nothing if the process is already stopped or started, respectively, so you don't need to check the status of the job before calling them. The `try_restart()` method is available to do a restart only if the job is running, with the `try_restart_with_signal()` variant for graceful restarts.\n\nFurther, all of these methods are non-blocking sync (and take `&self`), but they return a `Ticket`, a future which resolves when the control has been processed. That can be dropped if you don't care about it without affecting the job, or used to perform more advanced flow control. 
The special `to_wait()` method returns a detached, cloneable, \"wait()\" future, which will resolve when the process exits, without needing to hold on to the `Job` or a reference at all.\n\nSee the [`restart_run_on_successful_build` example](./examples/restart_run_on_successful_build.rs) which starts a `cargo build`, waits for it to end, and then (re)starts `cargo run` if the build exited successfully.\n\nFinally: `Outcome::Clear` and `Outcome::Reset` are gone, and there's no equivalent on `Job`: that's because these are screen control actions, not job control. You should use the [clearscreen](https://docs.rs/clearscreen) crate directly in your action handler, in conjunction with job control, to achieve the desired effect.\n\n## v2.3.0 (2023-03-22)\n\n- New: `Outcome::Race` and `Outcome::race()` ([#548](https://github.com/watchexec/watchexec/pull/548))\n- New: `Outcome::wait_timeout()` ([#548](https://github.com/watchexec/watchexec/pull/548))\n- New: `Outcome::sequence()` ([#548](https://github.com/watchexec/watchexec/pull/548))\n- Fix: `kill_on_drop(true)` set for group commands as well as ungrouped ([#549](https://github.com/watchexec/watchexec/pull/549))\n- Some `debug!`s upgraded to `info!`s, based on experience reading logs ([#547](https://github.com/watchexec/watchexec/pull/547))\n\n## v2.2.0 (2023-03-18)\n\n- Ditch MSRV policy. The `rust-version` indication will remain, for the minimum estimated Rust version for the code features used in the crate's own code, but dependencies may have already moved on. From now on, only latest stable is assumed and tested for. ([#510](https://github.com/watchexec/watchexec/pull/510))\n- Split off `watchexec-events` and `watchexec-signals` crates.\n- Unify `SubSignal` and `MainSignal` into a new `Signal` type. 
The former types and paths exist as deprecated aliases/re-exports.\n\n## v2.1.1 (2023-02-14)\n\n## v2.1.0 (2023-01-08)\n\n- MSRV: bump to 1.61.0\n- Deps: drop explicit dependency on `libc` on Unix.\n- Internal: remove all usage of `dunce`, replaced with either Tokio's `canonicalize` (properly async) or [normalize-path](https://docs.rs/normalize-path) (performs no I/O).\n- Internal: drop support code for Fuchsia. MIO already didn't support it, so it never compiled there.\n- Add `#[must_use]` annotations to a bunch of functions.\n- Add missing `Send` bound to `HandlerLock`.\n- Add new keyboard event source; initially supports just detecting EOF on STDIN. ([#449](https://github.com/watchexec/watchexec/pull/449))\n- Fix `summarise_events_to_env` on Windows to output paths with backslashes.\n\n## v2.0.2 (2022-09-07)\n\n- Deps: upgrade to miette 5.3.0\n\n## v2.0.1 (2022-09-07)\n\n- Deps: upgrade to Notify 5.0.0\n\n## v2.0.0 (2022-06-17)\n\nFirst \"stable\" release of the library.\n\n- **Change: the library is split into even more crates**\n    - Two new low-level crates, `project-origins` and `ignore-files`, extract standalone functionality\n    - Filterers are now separate crates, so they can evolve independently of (and faster than) the main library crate\n    - These five new crates live in the watchexec monorepo, rather than being completely separate like `command-group` and `clearscreen`\n    - This makes the main library a bit less likely to change as often as it did, so it was finally time to release 2.0.0!\n\n- **Change: the Action worker now launches a set of Commands**\n    - A new type `Command` replaces and augments `Shell`, making explicit which style of calling will be used\n    - The action working data now takes a `Vec<Command>`, allowing multiple commands to be run as a set\n    - Commands in the set are run sequentially, with an error interrupting the sequence\n    - It is thus possible to run both \"shelled\" and \"raw exec\" commands in a set\n    - `PreSpawn` and 
`PostSpawn` handlers are run per Command, not per command set\n    - This new style should be preferred over sending command lines like `cmd1 && cmd2`\n\n- **Change: the event queue is now a priority queue**\n    - Shutting down the runtime is faster and more predictable. No more hanging after hitting Ctrl-C if there are tonnes of events coming in!\n    - Signals sent to the main process have higher priority\n    - Events marked \"urgent\" skip filtering entirely\n    - SIGINT, SIGTERM, and Ctrl-C on Windows are marked urgent\n        - This means it's no longer possible to accidentally filter these events out\n        - They still require handling in `on_action` to do anything\n    - The API for the `Filterer` trait changes slightly to let filterers use event priority\n\n- Improvement: the main subtasks of the runtime are now aborted on error\n- Improvement: the event queue is explicitly closed when shutting down\n- Improvement: the action worker will check if the event queue is closed more often, to shut down early\n- Improvement: `kill_on_drop` is set on Commands, which will be a little more eager to terminate processes when we're done with them\n- Feature: `Outcome::Sleep` waits for a given duration ([#79](https://github.com/watchexec/watchexec/issues/79))\n\nOther miscellaneous:\n\n- Deps: add the `log` feature to tracing so logs can be emitted to `log` subscribers\n- Deps: upgrade to Tokio 1.19\n- Deps: upgrade to Miette 4\n- Deps: upgrade to Notify 5.0.0-pre.15\n\n- Docs: fix the main example in lib.rs ([#297](https://github.com/watchexec/watchexec/pull/297))\n- Docs: describe a tuple argument in the globset filterer interface\n- Docs: the library crate gains a file-based CHANGELOG.md (and won't go in the GitHub releases tab anymore)\n- Docs: the library's readme's code block example is now checked as a doc-test\n\n- Meta: PRs are now merged by Bors\n\n## v2.0.0-pre.14 (2022-04-04)\n\n- Replace git2 dependency by git-config 
([#267](https://github.com/watchexec/watchexec/pull/267)). This makes using the library more pleasant and will also avoid library version mismatch errors when the libgit2 library updates on the system.\n\n## v2.0.0-pre.13 (2022-03-18)\n\n- Revert backend switch on mac from previous release. We'll do it a different way later ([#269](https://github.com/watchexec/watchexec/issues/269))\n\n## v2.0.0-pre.12 (2022-03-16)\n\n- Upgraded to [Notify pre.14](https://github.com/notify-rs/notify/releases/tag/5.0.0-pre.14)\n- Internal change: kqueue backend is used on mac. This _should_ reduce or eliminate some old persistent bugs on mac, and improve response times, but please report any issues you have!\n- `Watchexec::new()` now reports the library's version at debug level\n- Notify version is now specified with an exact (`=`) requirement, to avoid breakage ([#266](https://github.com/watchexec/watchexec/issues/266))\n\n## v2.0.0-pre.11 (2022-03-07)\n\n- New `error::FsWatcherError` enum split off from `RuntimeError`, and with additional variants to take advantage of targeted help text for known inotify errors on Linux\n- Help text is now carried through elevated errors properly\n- Globset filterer: `extensions` and `filters` are now cooperative rather than exclusionary. That is, a filters of `[\"Gemfile\"]` and an extensions of `[\"js\", \"rb\"]` will match _both_ `Gemfile` and `index.js` rather than matching nothing at all. This restores pre 2.0 behaviour.\n- Globset filterer: on unix, a filter of `*/file` will match both `file` and `dir/file` instead of just `dir/file`. This is a compatibility fix and is incorrect behaviour which will be removed in the future. 
Do not rely on it.\n\n## v2.0.0-pre.10 (2022-02-07)\n\n- The `on_error` handler gets an upgraded parameter which lets it upgrade (runtime) errors to critical.\n- `summarize_events_to_paths` now deduplicates paths within each variable.\n\n## v2.0.0-pre.9 (2022-01-31)\n\n- `Action`, `PreSpawn`, and `PostSpawn` structs passed to handlers now contain an `Arc<[Event]>` instead of an `Arc<Vec<Event>>`\n- `Outcome` processing (the final bit of an action) now runs concurrently, so it doesn't block further event processing ([#247](https://github.com/watchexec/watchexec/issues/247), and to a certain extent, [#241](https://github.com/watchexec/watchexec/issues/241))\n\n## v2.0.0-pre.8 (2022-01-26)\n\n- Fix: globset filterer should pass all non-path events ([#248](https://github.com/watchexec/watchexec/pull/248))\n\n## v2.0.0-pre.7 (2022-01-26) [YANKED]\n\n**Yanked for critical bug in globset filterer (fixed in pre.8) on 2022-01-26**\n\n- Fix: typo in logging/errors ([#242](https://github.com/watchexec/watchexec/pull/242))\n- Globset: an extension filter should fail all paths that are about folders ([#244](https://github.com/watchexec/watchexec/issues/244))\n- Globset: in the case of an event with multiple paths, any pass should pass the entire event\n- Removal: `filter::check_glob` and `error::GlobParseError`\n\n## v2.0.0-pre.6 (2022-01-19)\n\nFirst version of library v2 that was used in a CLI release.\n\n- Globset filterer was erroneously passing files with no extension when an extension filter was specified\n\n## v2.0.0-pre.5 (2022-01-18)\n\n- Update MSRV (to 1.58) and policy (bump incurs minor semver only)\n- Some bugfixes around canonicalisation of paths\n- Eliminate context-less IO errors\n- Move error types around\n- Prep library readme\n- Update deps\n\n## v2.0.0-pre.4 (2022-01-16)\n\n- More logging, especially around ignore file discovery and filtering\n- The const `paths::PATH_SEPARATOR` is now public, being `:` on Unix and `;` on Windows.\n- Add Subversion to 
discovered ProjectTypes\n- Add common (sub)Filterer for ignore files, so they benefit from a single consistent implementation. This also makes ignore file discovery correct and efficient by being able to interpret ignore files while searching for ignore files, or in other words, _not_ descending into directories which are ignored.\n- Integrate this new IgnoreFilterer into the GlobsetFilterer and TaggedFilterer. This does mean that some patterns in gitignores will not behave quite the same as in v1, but that was arguably always a bug. The old \"buggy\" v1 behaviour around folder filtering remains for manual filters, which are those most likely to be surprising if \"fixed\".\n\n## v2.0.0-pre.3 (2021-12-29)\n\n- [`summarise_events_to_env`](https://docs.rs/watchexec/2.0.0-pre.3/watchexec/paths/fn.summarise_events_to_env.html) used to return `COMMON_PATH`; it now returns `COMMON`, in keeping with the other variable names.\n\n## v2.0.0-pre.2 (2021-12-29)\n\n- [`summarise_events_to_env`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/paths/fn.summarise_events_to_env.html) returns a `HashMap<&str, OsString>` rather than `HashMap<&OsStr, OsString>`, because the expectation is that the variable names are processed, e.g. in the CLI: `WATCHEXEC_{}_PATH`. 
`OsStr` makes that painful for no reason (the strings are static anyway).\n- The [`Action`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/action/struct.Action.html) struct's `events` field changes to be an `Arc<Vec<Event>>` rather than a `Vec<Event>`: the intent is for the events to be immutable/read-only (and it also made it easier/cheaper to implement the next change below).\n- The [`PreSpawn`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/action/struct.PreSpawn.html) and [`PostSpawn`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/action/struct.PostSpawn.html) structs got a new `events: Arc<Vec<Event>>` field so these handlers get read-only access to the events that triggered the command.\n\n## v2.0.0-pre.1 (2021-12-21)\n\n- MSRV bumped to 1.56\n- Rust 2021 edition\n- More documentation around tagged filterer:\n\t- `==` and `!=` are case-insensitive\n\t- the mapping of matcher to tags\n\t- the mapping of matcher to auto op\n- Finished the tagged filterer:\n\t- Proper path glob matching\n\t- Signal matching\n\t- Process completion matching\n\t- Allowlisting pattern works\n\t- More matcher aliases to the parser\n\t- Negated filters\n\t- Some silly filter parsing bugs\n\t- File event kind matching\n\t- Folder filtering (main confusing behaviour in v1)\n- Lots of tests:\n\t- Globset filterer\n\t- Including the \"buggy\"/confusing behaviour of v1, for parity/compat\n\t- Tagged filterer:\n\t\t- Paths\n\t\t- Including verifying that the v1 confusing behaviour is fixed\n\t\t- Non-path filters\n\t\t- Filter parsing\n\t- Ignore files\n\t- Filter scopes\n\t- Outcomes\n\t- Change reporting in the environment\n\t\t- ...Specify behaviour a little more precisely through that process\n- Prepare the watchexec event type to be serializable\n\t- A synthetic `FileType`\n\t- A synthetic `ProcessEnd` (`ExitStatus` replacement)\n- Some ease-of-use improvements, mainly removing generics when overkill\n\n## v2.0.0-pre.0 (2021-10-17)\n\n- Placeholder release of v2 library 
(preview)\n\n## v1.17.1 (2021-07-22)\n\n- Process handling code replaced with the new [command-group](https://github.com/watchexec/command-group) crate.\n- [#158](https://github.com/watchexec/watchexec/issues/158) New option `use_process_group` (default `true`) allows disabling use of process groups.\n- [#168](https://github.com/watchexec/watchexec/issues/168) Default debounce time further decreased to 100ms.\n- Binstall configuration and transitional `cargo install watchexec` stub removed.\n\n## v1.16.1 (2021-07-10)\n\n- [#200](https://github.com/watchexec/watchexec/issues/200): Expose when the process is done running\n- [`ba26999`](https://github.com/watchexec/watchexec/commit/ba26999028cfcac410120330800a9a9026ca7274) Pin globset to 0.4.6 to avoid breakage due to a bugfix in 0.4.7\n\n## v1.16.0 (2021-05-09)\n\n- Initial release as a separate crate.\n"
  },
  {
    "path": "crates/lib/Cargo.toml",
    "content": "[package]\nname = \"watchexec\"\nversion = \"8.2.0\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\", \"Matt Green <mattgreenrocks@gmail.com>\"]\nlicense = \"Apache-2.0\"\ndescription = \"Library to execute commands in response to file modifications\"\nkeywords = [\"watcher\", \"filesystem\", \"watchexec\"]\n\ndocumentation = \"https://docs.rs/watchexec\"\nhomepage = \"https://watchexec.github.io\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.61.0\"\nedition = \"2021\"\n\n[dependencies]\nasync-priority-channel = \"0.2.0\"\natomic-take = \"1.0.0\"\nfutures = \"0.3.29\"\nmiette = \"7.2.0\"\nnotify = \"8.0.0\"\nthiserror = \"2.0.11\"\nnormalize-path = \"0.2.0\"\n\n[dependencies.watchexec-events]\nversion = \"6.1.0\"\npath = \"../events\"\n\n[dependencies.watchexec-signals]\nversion = \"5.0.1\"\npath = \"../signals\"\n\n[dependencies.watchexec-supervisor]\nversion = \"5.2.0\"\npath = \"../supervisor\"\n\n[dependencies.tokio]\nversion = \"1.33.0\"\nfeatures = [\n\t\"fs\",\n\t\"io-std\",\n\t\"process\",\n\t\"rt\",\n\t\"rt-multi-thread\",\n\t\"signal\",\n\t\"sync\",\n]\n\n[dependencies.tracing]\nversion = \"0.1.40\"\nfeatures = [\"log\"]\n\n[target.'cfg(unix)'.dependencies]\nlibc = \"0.2.74\"\n\n[target.'cfg(windows)'.dependencies.windows-sys]\nversion = \">= 0.59.0, < 0.62.0\"\nfeatures = [\"Win32_System_Console\", \"Win32_Foundation\"]\n\n[dev-dependencies.tracing-subscriber]\nversion = \"0.3.6\"\nfeatures = [\"env-filter\"]\n\n[target.'cfg(unix)'.dev-dependencies.nix]\nversion = \"0.30.1\"\nfeatures = [\"signal\"]\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\n"
  },
  {
    "path": "crates/lib/README.md",
    "content": "[![Crates.io page](https://badgen.net/crates/v/watchexec)](https://crates.io/crates/watchexec)\n[![API Docs](https://docs.rs/watchexec/badge.svg)][docs]\n[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]\n[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)\n\n# Watchexec library\n\n_The library which powers [Watchexec CLI](https://watchexec.github.io) and other tools._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license].\n- Status: maintained.\n\n[docs]: https://docs.rs/watchexec\n[license]: ../../LICENSE\n\n\n## Examples\n\nHere's a complete example showing some of the library's features:\n\n```rust ,no_run\nuse miette::{IntoDiagnostic, Result};\nuse std::{\n    sync::{Arc, Mutex},\n    time::Duration,\n};\nuse watchexec::{\n    command::{Command, Program, Shell},\n    job::CommandState,\n    Watchexec,\n};\nuse watchexec_events::{Event, Priority};\nuse watchexec_signals::Signal;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n    // this is okay to start with, but Watchexec logs a LOT of data,\n    // even at error level. you will quickly want to filter it down.\n    tracing_subscriber::fmt()\n        .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())\n        .init();\n\n    // initialise Watchexec with a simple initial action handler\n    let job = Arc::new(Mutex::new(None));\n    let wx = Watchexec::new({\n        let outerjob = job.clone();\n        move |mut action| {\n            let (_, job) = action.create_job(Arc::new(Command {\n                program: Program::Shell {\n                    shell: Shell::new(\"bash\"),\n                    command: \"\n                        echo 'Hello world'\n                        trap 'echo Not quitting yet!' 
TERM\n                        read\n                    \"\n                    .into(),\n                    args: Vec::new(),\n                },\n                options: Default::default(),\n            }));\n\n            // store the job outside this closure too\n            *outerjob.lock().unwrap() = Some(job.clone());\n\n            // block SIGINT\n            #[cfg(unix)]\n            job.set_spawn_hook(|cmd, _| {\n                use nix::sys::signal::{sigprocmask, SigSet, SigmaskHow, Signal};\n                unsafe {\n                    cmd.command_mut().pre_exec(|| {\n                        let mut newset = SigSet::empty();\n                        newset.add(Signal::SIGINT);\n                        sigprocmask(SigmaskHow::SIG_BLOCK, Some(&newset), None)?;\n                        Ok(())\n                    });\n                }\n            });\n\n            // start the command\n            job.start();\n\n            action\n        }\n    })?;\n\n    // start the engine\n    let main = wx.main();\n\n    // send an event to start\n    wx.send_event(Event::default(), Priority::Urgent)\n        .await\n        .unwrap();\n    // ^ this will cause the action handler we've defined above to run,\n    //   creating and starting our little bash program, and storing it in the mutex\n\n    // spin until we've got the job\n    while job.lock().unwrap().is_none() {\n        tokio::task::yield_now().await;\n    }\n\n    // watch the job and restart it when it exits\n    let job = job.lock().unwrap().clone().unwrap();\n    let auto_restart = tokio::spawn(async move {\n        loop {\n            job.to_wait().await;\n            job.run(|context| {\n                if let CommandState::Finished {\n                    status,\n                    started,\n                    finished,\n                } = context.current\n                {\n                    let duration = *finished - *started;\n                    eprintln!(\"[Program stopped with 
{status:?}; ran for {duration:?}]\")\n                }\n            })\n            .await;\n\n            eprintln!(\"[Restarting...]\");\n            job.start().await;\n        }\n    });\n\n    // now we change what the action does:\n    let auto_restart_abort = auto_restart.abort_handle();\n    wx.config.on_action(move |mut action| {\n        // if we get Ctrl-C on the Watchexec instance, we quit\n        if action.signals().any(|sig| sig == Signal::Interrupt) {\n            eprintln!(\"[Quitting...]\");\n            auto_restart_abort.abort();\n            action.quit_gracefully(Signal::ForceStop, Duration::ZERO);\n            return action;\n        }\n\n        // if the action was triggered by file events, gracefully stop the program\n        if action.paths().next().is_some() {\n            // watchexec can manage (\"supervise\") more than one program;\n            // here we only have one but we don't know its Id so we grab it out of the iterator\n            if let Some(job) = action.list_jobs().next().map(|(_, job)| job.clone()) {\n                eprintln!(\"[Asking program to stop...]\");\n                job.stop_with_signal(Signal::Terminate, Duration::from_secs(5));\n            }\n        }\n\n        action\n    });\n\n    // and watch all files in the current directory:\n    wx.config.pathset([\".\"]);\n\n    // then keep running until Watchexec quits!\n    let _ = main.await.into_diagnostic()?;\n    auto_restart.abort();\n    Ok(())\n}\n```\n\nOther examples:\n- [Only Commands](./examples/only_commands.rs): skip watching files, only use the supervisor.\n- [Only Events](./examples/only_events.rs): never start any processes, only print events.\n- [Restart `cargo run` only when `cargo build` succeeds](./examples/restart_run_on_successful_build.rs)\n\n\n## Kitchen sink\n\nThough not its primary usecase, the library exposes most of its relatively standalone components,\navailable to make other tools that are not Watchexec-shaped:\n\n- **Event 
sources**: [Filesystem](https://docs.rs/watchexec/3/watchexec/sources/fs/index.html),\n  [Signals](https://docs.rs/watchexec/3/watchexec/sources/signal/index.html),\n  [Keyboard](https://docs.rs/watchexec/3/watchexec/sources/keyboard/index.html).\n\n- Finding **[a common prefix](https://docs.rs/watchexec/3/watchexec/paths/fn.common_prefix.html)**\n  of a set of paths.\n\n- A **[Changeable](https://docs.rs/watchexec/3/watchexec/changeable/index.html)** type, which\n  powers the \"live\" configuration system.\n\n- And [more][docs]!\n\nFilterers are split into their own crates, so they can be evolved independently:\n\n- The **[Globset](https://docs.rs/watchexec-filterer-globset) filterer** implements the default\n  Watchexec CLI filtering, based on the regex crate's ignore mechanisms.\n\n- ~~The **[Tagged](https://docs.rs/watchexec-filterer-tagged) filterer**~~ was an experiment in\n  creating a more powerful filtering solution, which could operate on every part of events, not\n  just their paths, using a custom syntax. It is no longer maintained.\n\n- The **[Ignore](https://docs.rs/watchexec-filterer-ignore) filterer** implements ignore-file\n  semantics, and especially supports _trees_ of ignore files. 
It is used as a subfilterer in both\n  of the main filterers above.\n\nThere are also separate, standalone crates used to build Watchexec which you can tap into:\n\n- **[Supervisor](https://docs.rs/watchexec-supervisor)** is Watchexec's process supervisor and\n  command abstraction.\n\n- **[ClearScreen](https://docs.rs/clearscreen)** makes clearing the terminal screen in a\n  cross-platform way easy by default, and provides advanced options to fit your usecase.\n\n- **[Command Group](https://docs.rs/command-group)** augments the std and tokio `Command` with\n  support for process groups, portable between Unix and Windows.\n\n- **[Event types](https://docs.rs/watchexec-events)** contains the event types used by Watchexec,\n  including the JSON format used for passing event data to child processes.\n\n- **[Signal types](https://docs.rs/watchexec-signals)** contains the signal types used by Watchexec.\n\n- **[Ignore files](https://docs.rs/ignore-files)** finds, parses, and interprets ignore files.\n\n- **[Project Origins](https://docs.rs/project-origins)** finds the origin (or root) path of a\n  project, and what kind of project it is.\n\n## Rust version (MSRV)\n\nDue to the unpredictability of dependencies changing their MSRV, this library no longer tries to\nkeep to a minimum supported Rust version behind stable. Instead, it is assumed that developers use\nthe latest stable at all times.\n\nApplications that wish to support lower-than-stable Rust (as the Watchexec CLI does) should:\n- use a lock file\n- recommend the use of `--locked` when installing from source\n- provide pre-built binaries (and [Binstall](https://github.com/cargo-bins/cargo-binstall) support) for non-distro users\n- avoid using newer features until some time has passed, to let distro users catch up\n- consider recommending that distro-Rust users switch to distro `rustup` where available\n
  },
  {
    "path": "crates/lib/examples/only_commands.rs",
    "content": "use std::{\n\tsync::Arc,\n\ttime::{Duration, Instant},\n};\n\nuse miette::{IntoDiagnostic, Result};\nuse tokio::time::sleep;\nuse watchexec::{\n\tcommand::{Command, Program},\n\tWatchexec,\n};\nuse watchexec_events::{Event, Priority};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n\tlet wx = Watchexec::new(|mut action| {\n\t\t// you don't HAVE to respond to filesystem events:\n\t\t// here, we start a command every five seconds, unless we get a signal and quit\n\n\t\tif action.signals().next().is_some() {\n\t\t\teprintln!(\"[Quitting...]\");\n\t\t\taction.quit();\n\t\t} else {\n\t\t\tlet (_, job) = action.create_job(Arc::new(Command {\n\t\t\t\tprogram: Program::Exec {\n\t\t\t\t\tprog: \"echo\".into(),\n\t\t\t\t\targs: vec![\n\t\t\t\t\t\t\"Hello world!\".into(),\n\t\t\t\t\t\tformat!(\"Current time: {:?}\", Instant::now()),\n\t\t\t\t\t\t\"Press Ctrl+C to quit\".into(),\n\t\t\t\t\t],\n\t\t\t\t},\n\t\t\t\toptions: Default::default(),\n\t\t\t}));\n\t\t\tjob.start();\n\t\t}\n\n\t\taction\n\t})?;\n\n\ttokio::spawn({\n\t\tlet wx = wx.clone();\n\t\tasync move {\n\t\t\tloop {\n\t\t\t\tsleep(Duration::from_secs(5)).await;\n\t\t\t\twx.send_event(Event::default(), Priority::Urgent)\n\t\t\t\t\t.await\n\t\t\t\t\t.unwrap();\n\t\t\t}\n\t\t}\n\t});\n\n\tlet _ = wx.main().await.into_diagnostic()?;\n\tOk(())\n}\n"
  },
  {
    "path": "crates/lib/examples/only_events.rs",
    "content": "use miette::{IntoDiagnostic, Result};\nuse watchexec::Watchexec;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n\tlet wx = Watchexec::new(|mut action| {\n\t\t// you don't HAVE to spawn jobs:\n\t\t// here, we just print out the events as they come in\n\t\tfor event in action.events.iter() {\n\t\t\teprintln!(\"{event:?}\");\n\t\t}\n\n\t\t// quit when we get a signal\n\t\tif action.signals().next().is_some() {\n\t\t\teprintln!(\"[Quitting...]\");\n\t\t\taction.quit();\n\t\t}\n\n\t\taction\n\t})?;\n\n\t// start the engine\n\tlet main = wx.main();\n\n\t// and watch all files in the current directory:\n\twx.config.pathset([\".\"]);\n\n\tlet _ = main.await.into_diagnostic()?;\n\tOk(())\n}\n"
  },
  {
    "path": "crates/lib/examples/readme.rs",
    "content": "use std::{\n\tsync::{Arc, Mutex},\n\ttime::Duration,\n};\n\nuse miette::{IntoDiagnostic, Result};\nuse watchexec::{\n\tcommand::{Command, Program, Shell},\n\tjob::CommandState,\n\tWatchexec,\n};\nuse watchexec_events::{Event, Priority};\nuse watchexec_signals::Signal;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n\t// this is okay to start with, but Watchexec logs a LOT of data,\n\t// even at error level. you will quickly want to filter it down.\n\ttracing_subscriber::fmt()\n\t\t.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())\n\t\t.init();\n\n\t// initialise Watchexec with a simple initial action handler\n\tlet job = Arc::new(Mutex::new(None));\n\tlet wx = Watchexec::new({\n\t\tlet outerjob = job.clone();\n\t\tmove |mut action| {\n\t\t\tlet (_, job) = action.create_job(Arc::new(Command {\n\t\t\t\tprogram: Program::Shell {\n\t\t\t\t\tshell: Shell::new(\"bash\"),\n\t\t\t\t\tcommand: \"\n\t\t\t\t\t\techo 'Hello world'\n\t\t\t\t\t\ttrap 'echo Not quitting yet!' 
TERM\n\t\t\t\t\t\tread\n\t\t\t\t\t\"\n\t\t\t\t\t.into(),\n\t\t\t\t\targs: Vec::new(),\n\t\t\t\t},\n\t\t\t\toptions: Default::default(),\n\t\t\t}));\n\n\t\t\t// store the job outside this closure too\n\t\t\t*outerjob.lock().unwrap() = Some(job.clone());\n\n\t\t\t// block SIGINT\n\t\t\t#[cfg(unix)]\n\t\t\tjob.set_spawn_hook(|cmd, _| {\n\t\t\t\tuse nix::sys::signal::{sigprocmask, SigSet, SigmaskHow, Signal};\n\t\t\t\tunsafe {\n\t\t\t\t\tcmd.command_mut().pre_exec(|| {\n\t\t\t\t\t\tlet mut newset = SigSet::empty();\n\t\t\t\t\t\tnewset.add(Signal::SIGINT);\n\t\t\t\t\t\tsigprocmask(SigmaskHow::SIG_BLOCK, Some(&newset), None)?;\n\t\t\t\t\t\tOk(())\n\t\t\t\t\t});\n\t\t\t\t}\n\t\t\t});\n\n\t\t\t// start the command\n\t\t\tjob.start();\n\n\t\t\taction\n\t\t}\n\t})?;\n\n\t// start the engine\n\tlet main = wx.main();\n\n\t// send an event to start\n\twx.send_event(Event::default(), Priority::Urgent)\n\t\t.await\n\t\t.unwrap();\n\t// ^ this will cause the action handler we've defined above to run,\n\t//   creating and starting our little bash program, and storing it in the mutex\n\n\t// spin until we've got the job\n\twhile job.lock().unwrap().is_none() {\n\t\ttokio::task::yield_now().await;\n\t}\n\n\t// watch the job and restart it when it exits\n\tlet job = job.lock().unwrap().clone().unwrap();\n\tlet auto_restart = tokio::spawn(async move {\n\t\tloop {\n\t\t\tjob.to_wait().await;\n\t\t\tjob.run(|context| {\n\t\t\t\tif let CommandState::Finished {\n\t\t\t\t\tstatus,\n\t\t\t\t\tstarted,\n\t\t\t\t\tfinished,\n\t\t\t\t} = context.current\n\t\t\t\t{\n\t\t\t\t\tlet duration = *finished - *started;\n\t\t\t\t\teprintln!(\"[Program stopped with {status:?}; ran for {duration:?}]\");\n\t\t\t\t}\n\t\t\t})\n\t\t\t.await;\n\n\t\t\teprintln!(\"[Restarting...]\");\n\t\t\tjob.start().await;\n\t\t}\n\t});\n\n\t// now we change what the action does:\n\tlet auto_restart_abort = auto_restart.abort_handle();\n\twx.config.on_action(move |mut action| {\n\t\t// if we get Ctrl-C on the Watchexec 
instance, we quit\n\t\tif action.signals().any(|sig| sig == Signal::Interrupt) {\n\t\t\teprintln!(\"[Quitting...]\");\n\t\t\tauto_restart_abort.abort();\n\t\t\taction.quit_gracefully(Signal::ForceStop, Duration::ZERO);\n\t\t\treturn action;\n\t\t}\n\n\t\t// if the action was triggered by file events, gracefully stop the program\n\t\tif action.paths().next().is_some() {\n\t\t\t// watchexec can manage (\"supervise\") more than one program;\n\t\t\t// here we only have one but we don't know its Id so we grab it out of the iterator\n\t\t\tif let Some(job) = action.list_jobs().next().map(|(_, job)| job) {\n\t\t\t\teprintln!(\"[Asking program to stop...]\");\n\t\t\t\tjob.stop_with_signal(Signal::Terminate, Duration::from_secs(5));\n\t\t\t}\n\n\t\t\t// we could also use `action.get_or_create_job` initially and store its Id to use here,\n\t\t\t// see the CHANGELOG.md for an example under \"3.0.0 > Action\".\n\t\t}\n\n\t\taction\n\t});\n\n\t// and watch all files in the current directory:\n\twx.config.pathset([\".\"]);\n\n\t// then keep running until Watchexec quits!\n\tlet _ = main.await.into_diagnostic()?;\n\tauto_restart.abort();\n\tOk(())\n}\n"
  },
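  {
    "path": "crates/lib/examples/graceful_quit.rs",
    "content": "use std::{\n\tsync::Arc,\n\ttime::Duration,\n};\n\nuse miette::{IntoDiagnostic, Result};\nuse watchexec::{\n\tcommand::{Command, Program},\n\tId, Watchexec,\n};\nuse watchexec_events::{Event, Priority};\nuse watchexec_signals::Signal;\n\n// NOTE: this is a hypothetical example, not part of the upstream repository:\n// a minimal sketch, using only APIs shown in the other examples, of the\n// difference between `quit()` (abort everything immediately) and\n// `quit_gracefully()` (signal jobs, then wait out a grace period).\n#[tokio::main]\nasync fn main() -> Result<()> {\n\tlet id = Id::default();\n\tlet wx = Watchexec::new(move |mut action| {\n\t\tif action.signals().any(|sig| sig == Signal::Interrupt) {\n\t\t\t// on Ctrl-C, ask jobs to stop with SIGTERM and give them\n\t\t\t// up to 5 seconds before the instance shuts down\n\t\t\taction.quit_gracefully(Signal::Terminate, Duration::from_secs(5));\n\t\t\treturn action;\n\t\t}\n\n\t\tlet job = action.get_or_create_job(id, || {\n\t\t\tArc::new(Command {\n\t\t\t\tprogram: Program::Exec {\n\t\t\t\t\tprog: \"sleep\".into(),\n\t\t\t\t\targs: vec![\"3600\".into()],\n\t\t\t\t},\n\t\t\t\toptions: Default::default(),\n\t\t\t})\n\t\t});\n\t\tjob.start();\n\n\t\taction\n\t})?;\n\n\tlet main = wx.main();\n\n\t// send an event to trigger the first run of the action handler\n\twx.send_event(Event::default(), Priority::Urgent)\n\t\t.await\n\t\t.unwrap();\n\n\tlet _ = main.await.into_diagnostic()?;\n\tOk(())\n}\n"
  },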
  {
    "path": "crates/lib/examples/restart_run_on_successful_build.rs",
    "content": "use std::sync::Arc;\n\nuse miette::{IntoDiagnostic, Result};\nuse watchexec::{\n\tcommand::{Command, Program, SpawnOptions},\n\tjob::CommandState,\n\tId, Watchexec,\n};\nuse watchexec_events::{Event, Priority, ProcessEnd};\nuse watchexec_signals::Signal;\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n\tlet build_id = Id::default();\n\tlet run_id = Id::default();\n\tlet wx = Watchexec::new_async(move |mut action| {\n\t\tBox::new(async move {\n\t\t\tif action.signals().any(|sig| sig == Signal::Interrupt) {\n\t\t\t\teprintln!(\"[Quitting...]\");\n\t\t\t\taction.quit();\n\t\t\t\treturn action;\n\t\t\t}\n\n\t\t\tlet build = action.get_or_create_job(build_id, || {\n\t\t\t\tArc::new(Command {\n\t\t\t\t\tprogram: Program::Exec {\n\t\t\t\t\t\tprog: \"cargo\".into(),\n\t\t\t\t\t\targs: vec![\"build\".into()],\n\t\t\t\t\t},\n\t\t\t\t\toptions: Default::default(),\n\t\t\t\t})\n\t\t\t});\n\n\t\t\tlet run = action.get_or_create_job(run_id, || {\n\t\t\t\tArc::new(Command {\n\t\t\t\t\tprogram: Program::Exec {\n\t\t\t\t\t\tprog: \"cargo\".into(),\n\t\t\t\t\t\targs: vec![\"run\".into()],\n\t\t\t\t\t},\n\t\t\t\t\toptions: SpawnOptions {\n\t\t\t\t\t\tgrouped: true,\n\t\t\t\t\t\t..Default::default()\n\t\t\t\t\t},\n\t\t\t\t})\n\t\t\t});\n\n\t\t\tif action.paths().next().is_some()\n\t\t\t\t|| action.events.iter().any(|event| event.tags.is_empty())\n\t\t\t{\n\t\t\t\tbuild.restart().await;\n\t\t\t}\n\n\t\t\tbuild.to_wait().await;\n\t\t\tbuild\n\t\t\t\t.run(move |context| {\n\t\t\t\t\tif let CommandState::Finished {\n\t\t\t\t\t\tstatus: ProcessEnd::Success,\n\t\t\t\t\t\t..\n\t\t\t\t\t} = context.current\n\t\t\t\t\t{\n\t\t\t\t\t\trun.restart();\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\t.await;\n\n\t\t\taction\n\t\t})\n\t})?;\n\n\t// start the engine\n\tlet main = wx.main();\n\n\t// send an event to start\n\twx.send_event(Event::default(), Priority::Urgent)\n\t\t.await\n\t\t.unwrap();\n\n\t// and watch all files in cli src\n\twx.config.pathset([\"crates/cli/src\"]);\n\n\t// then 
keep running until Watchexec quits!\n\tlet _ = main.await.into_diagnostic()?;\n\tOk(())\n}\n"
  },
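  {
    "path": "crates/lib/examples/wait_for_fs_ready.rs",
    "content": "use miette::{IntoDiagnostic, Result};\nuse watchexec::Watchexec;\n\n// NOTE: this is a hypothetical example, not part of the upstream repository:\n// a minimal sketch of the pattern described on `Config::fs_ready()`, where\n// readiness is subscribed to *before* calling `pathset()` so the\n// notification for the OS watch registration cannot be missed.\n#[tokio::main]\nasync fn main() -> Result<()> {\n\tlet wx = Watchexec::new(|action| action)?;\n\n\t// subscribe first, then set the paths to watch\n\tlet mut ready = wx.config.fs_ready();\n\twx.config.pathset([\".\"]);\n\n\tlet main = wx.main();\n\n\t// resolves once the filesystem worker has applied the pathset change\n\tready.changed().await.into_diagnostic()?;\n\teprintln!(\"[filesystem watches are in place]\");\n\n\tlet _ = main.await.into_diagnostic()?;\n\tOk(())\n}\n"
  },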
  {
    "path": "crates/lib/release.toml",
    "content": "pre-release-commit-message = \"release: lib v{{version}}\"\ntag-prefix = \"watchexec-\"\ntag-message = \"watchexec {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/lib/src/action/handler.rs",
    "content": "use std::{collections::HashMap, path::Path, sync::Arc, time::Duration};\nuse tokio::task::JoinHandle;\nuse watchexec_events::{Event, FileType, ProcessEnd};\nuse watchexec_signals::Signal;\nuse watchexec_supervisor::{\n\tcommand::Command,\n\tjob::{start_job, Job},\n};\n\nuse crate::id::Id;\n\nuse super::QuitManner;\n\n/// The environment given to the action handler.\n///\n/// The action handler is the heart of a Watchexec program. Within, you decide what happens when an\n/// event successfully passes all filters. Watchexec maintains a set of Supervised [`Job`]s, which\n/// are assigned a unique [`Id`] for lightweight reference. In this action handler, you should\n/// add commands to be supervised with `create_job()`, or find an already-supervised job with\n/// `get_job()` or `list_jobs()`. You can interact with jobs directly via their handles, and can\n/// even store clones of the handles for later use outside the action handler.\n///\n/// The action handler is also given the [`Event`]s which triggered the action. These are expected\n/// to be the way to determine what to do with a job. However, in some applications you might not\n/// care about them, and that's fine too: for example, you can build a Watchexec which only does\n/// process supervision, and is triggered entirely by synthetic events. Conversely, you are also not\n/// obligated to use the job handles: you can build a Watchexec which only does something with the\n/// events, and never actually starts any processes.\n///\n/// There are some important considerations to keep in mind when writing an action handler:\n///\n/// 1. The action handler is called with the supervisor set _as of when the handler was called_.\n///    This is particularly important when multiple action handlers might be running at the same\n///    time: they might have incomplete views of the supervisor set.\n///\n/// 2. 
The way the action handler communicates with the Watchexec instance is through the return\n///    value of the handler. That is, when you add a job with `create_job()`, the job is not added\n///    to the Watchexec instance's supervisor set until the action handler returns. Similarly, when\n///    using `quit()`, the quit action is not performed until the action handler returns and the\n///    Watchexec instance is able to see it.\n///\n/// 3. The action handler blocks the action main loop. This means that if you have a long-running\n///    action handler, the Watchexec instance will not be able to process events until the handler\n///    returns. That will cause events to accumulate and then get dropped once the channel reaches\n///    capacity, which will impact your ability to receive signals (such as a Ctrl-C), and may spew\n///    [`EventChannelTrySend` errors](crate::error::RuntimeError::EventChannelTrySend).\n///\n///    If you want to do something long-running, you should either ignore that error, and accept\n///    that events may be dropped, or preferably spawn a task to do it, and return from the action\n///    handler as soon as possible.\n#[derive(Debug)]\npub struct Handler {\n\t/// The collected events which triggered the action.\n\tpub events: Arc<[Event]>,\n\textant: HashMap<Id, Job>,\n\tpub(crate) new: HashMap<Id, (Job, JoinHandle<()>)>,\n\tpub(crate) quit: Option<QuitManner>,\n}\n\nimpl Handler {\n\tpub(crate) fn new(events: Arc<[Event]>, jobs: HashMap<Id, Job>) -> Self {\n\t\tSelf {\n\t\t\tevents,\n\t\t\textant: jobs,\n\t\t\tnew: HashMap::new(),\n\t\t\tquit: None,\n\t\t}\n\t}\n\n\t/// Create a new job and return its handle.\n\t///\n\t/// This starts the [`Job`] immediately, and stores a copy of its handle and [`Id`] in this\n\t/// `Action` (and thus in the Watchexec instance, when the action handler returns).\n\tpub fn create_job(&mut self, command: Arc<Command>) -> (Id, Job) {\n\t\tlet id = Id::default();\n\t\tlet (job, task) = 
start_job(command);\n\t\tself.new.insert(id, (job.clone(), task));\n\t\t(id, job)\n\t}\n\n\t// exposing this is dangerous as it allows duplicate IDs which may leak jobs\n\tfn create_job_with_id(&mut self, id: Id, command: Arc<Command>) -> Job {\n\t\tlet (job, task) = start_job(command);\n\t\tself.new.insert(id, (job.clone(), task));\n\t\tjob\n\t}\n\n\t/// Get an existing job or create a new one given an Id.\n\t///\n\t/// This starts the [`Job`] immediately if one with the Id doesn't exist, and stores a copy of\n\t/// its handle and [`Id`] in this `Action` (and thus in the Watchexec instance, when the action\n\t/// handler returns).\n\tpub fn get_or_create_job(&mut self, id: Id, command: impl Fn() -> Arc<Command>) -> Job {\n\t\tself.get_job(id)\n\t\t\t.unwrap_or_else(|| self.create_job_with_id(id, command()))\n\t}\n\n\t/// Get a job given its Id.\n\t///\n\t/// This returns a job handle, if it existed when this handler was called.\n\t#[must_use]\n\tpub fn get_job(&self, id: Id) -> Option<Job> {\n\t\tself.extant.get(&id).cloned()\n\t}\n\n\t/// List all jobs currently supervised by Watchexec.\n\t///\n\t/// This returns an iterator over all jobs, in no particular order, as of when this handler was\n\t/// called.\n\tpub fn list_jobs(&self) -> impl Iterator<Item = (Id, Job)> + '_ {\n\t\tself.extant.iter().map(|(id, job)| (*id, job.clone()))\n\t}\n\n\t/// Shut down the Watchexec instance immediately.\n\t///\n\t/// This will kill and drop all jobs without waiting on processes, then quit.\n\t///\n\t/// Use `quit_gracefully()` to wait for processes to finish before quitting.\n\t///\n\t/// The quit is initiated once the action handler returns, not when this method is called.\n\tpub fn quit(&mut self) {\n\t\tself.quit = Some(QuitManner::Abort);\n\t}\n\n\t/// Shut down the Watchexec instance gracefully.\n\t///\n\t/// This will send graceful stops to all jobs, wait on them to finish, then reap them and quit.\n\t///\n\t/// Use `quit()` to quit more abruptly.\n\t///\n\t/// If you 
want to wait for all other actions to finish and for jobs to get cleaned up, but not\n\t/// gracefully delay for processes, you can do:\n\t///\n\t/// ```no_compile\n\t/// action.quit_gracefully(Signal::ForceStop, Duration::ZERO);\n\t/// ```\n\t///\n\t/// The quit is initiated once the action handler returns, not when this method is called.\n\tpub fn quit_gracefully(&mut self, signal: Signal, grace: Duration) {\n\t\tself.quit = Some(QuitManner::Graceful { signal, grace });\n\t}\n\n\t/// Convenience to get all signals in the event set.\n\tpub fn signals(&self) -> impl Iterator<Item = Signal> + '_ {\n\t\tself.events.iter().flat_map(Event::signals)\n\t}\n\n\t/// Convenience to get all paths in the event set.\n\t///\n\t/// An action contains a set of events, and some of those events might relate to watched\n\t/// files, and each of *those* events may have one or more paths that were affected.\n\t/// To hide this complexity this method just provides any and all paths in the event,\n\t/// along with the type of file at that path, if Watchexec knows that.\n\tpub fn paths(&self) -> impl Iterator<Item = (&Path, Option<&FileType>)> + '_ {\n\t\tself.events.iter().flat_map(Event::paths)\n\t}\n\n\t/// Convenience to get all process completions in the event set.\n\tpub fn completions(&self) -> impl Iterator<Item = Option<ProcessEnd>> + '_ {\n\t\tself.events.iter().flat_map(Event::completions)\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/action/quit.rs",
    "content": "use std::time::Duration;\nuse watchexec_signals::Signal;\n\n/// How the Watchexec instance should quit.\n#[derive(Clone, Copy, Debug, Eq, PartialEq)]\npub enum QuitManner {\n\t/// Kill all processes and drop all jobs, then quit.\n\tAbort,\n\n\t/// Gracefully stop all jobs, then quit.\n\tGraceful {\n\t\t/// Signal to send immediately\n\t\tsignal: Signal,\n\t\t/// Time to wait before forceful termination\n\t\tgrace: Duration,\n\t},\n}\n"
  },
  {
    "path": "crates/lib/src/action/return.rs",
    "content": "use std::future::Future;\n\nuse super::ActionHandler;\n\n/// The return type of an action.\n///\n/// This is the type returned by the raw action handler, used internally or when setting the action\n/// handler directly via the field on [`Config`](crate::Config). It is not used when setting the\n/// action handler via [`Config::on_action`](crate::Config::on_action) and\n/// [`Config::on_action_async`](crate::Config::on_action_async) as that takes care of wrapping the\n/// return type from the specialised signature on these methods.\npub enum ActionReturn {\n\t/// The action handler is synchronous and here's its return value.\n\tSync(ActionHandler),\n\n\t/// The action handler is asynchronous: this is the future that will resolve to its return value.\n\tAsync(Box<dyn Future<Output = ActionHandler> + Send + Sync>),\n}\n"
  },
  {
    "path": "crates/lib/src/action/worker.rs",
    "content": "use std::{\n\tcollections::HashMap,\n\tmem::take,\n\tsync::Arc,\n\ttime::{Duration, Instant},\n};\n\nuse async_priority_channel as priority;\nuse tokio::{sync::mpsc, time::timeout};\nuse tracing::{debug, trace};\nuse watchexec_events::{Event, Priority};\nuse watchexec_supervisor::job::Job;\n\nuse super::{handler::Handler, quit::QuitManner};\nuse crate::{\n\taction::ActionReturn,\n\terror::{CriticalError, RuntimeError},\n\tfilter::Filterer,\n\tid::Id,\n\tlate_join_set::LateJoinSet,\n\tConfig,\n};\n\n/// The main worker of a Watchexec process.\n///\n/// This is the main loop of the process. It receives events from the event channel, filters them,\n/// debounces them, obtains the desired outcome of an actioned event, calls the appropriate handlers\n/// and schedules processes as needed.\npub async fn worker(\n\tconfig: Arc<Config>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Receiver<Event, Priority>,\n) -> Result<(), CriticalError> {\n\tlet mut jobtasks = LateJoinSet::default();\n\tlet mut jobs = HashMap::<Id, Job>::new();\n\n\twhile let Some(mut set) = throttle_collect(\n\t\tconfig.clone(),\n\t\tevents.clone(),\n\t\terrors.clone(),\n\t\tInstant::now(),\n\t)\n\t.await?\n\t{\n\t\tlet events: Arc<[Event]> = Arc::from(take(&mut set).into_boxed_slice());\n\n\t\ttrace!(\"preparing action handler\");\n\t\tlet action = Handler::new(events.clone(), jobs.clone());\n\n\t\tdebug!(\"running action handler\");\n\t\tlet action = match config.action_handler.call(action) {\n\t\t\tActionReturn::Sync(action) => action,\n\t\t\tActionReturn::Async(action) => Box::into_pin(action).await,\n\t\t};\n\n\t\tdebug!(\"take control of new tasks\");\n\t\tfor (id, (job, task)) in action.new {\n\t\t\ttrace!(?id, \"taking control of new task\");\n\t\t\tjobtasks.insert(task);\n\t\t\tjobs.insert(id, job);\n\t\t}\n\n\t\tif let Some(manner) = action.quit {\n\t\t\tdebug!(?manner, \"quitting worker\");\n\t\t\tmatch manner {\n\t\t\t\tQuitManner::Abort => 
break,\n\t\t\t\tQuitManner::Graceful { signal, grace } => {\n\t\t\t\t\tdebug!(?signal, ?grace, \"quitting worker gracefully\");\n\t\t\t\t\tlet mut tasks = LateJoinSet::default();\n\t\t\t\t\tfor (id, job) in jobs.drain() {\n\t\t\t\t\t\ttrace!(?id, \"quitting job\");\n\t\t\t\t\t\ttasks.spawn(async move {\n\t\t\t\t\t\t\tjob.stop_with_signal(signal, grace);\n\t\t\t\t\t\t\tjob.delete().await;\n\t\t\t\t\t\t});\n\t\t\t\t\t}\n\t\t\t\t\t// TODO: spawn to process actions, and allow events to come in while\n\t\t\t\t\t//       waiting for graceful shutdown, e.g. a second Ctrl-C to hasten\n\t\t\t\t\tdebug!(\"waiting for graceful shutdown tasks\");\n\t\t\t\t\ttasks.join_all().await;\n\t\t\t\t\tdebug!(\"waiting for job tasks to end\");\n\t\t\t\t\tjobtasks.join_all().await;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tlet gc: Vec<Id> = jobs\n\t\t\t.iter()\n\t\t\t.filter_map(|(id, job)| {\n\t\t\t\tif job.is_dead() {\n\t\t\t\t\ttrace!(?id, \"job is dead, gc'ing\");\n\t\t\t\t\tSome(*id)\n\t\t\t\t} else {\n\t\t\t\t\tNone\n\t\t\t\t}\n\t\t\t})\n\t\t\t.collect();\n\t\tif !gc.is_empty() {\n\t\t\tdebug!(\"garbage collect old tasks\");\n\t\t\tfor id in gc {\n\t\t\t\tjobs.remove(&id);\n\t\t\t}\n\t\t}\n\n\t\tdebug!(\"action handler finished\");\n\t}\n\n\tdebug!(\"action worker finished\");\n\tOk(())\n}\n\npub async fn throttle_collect(\n\tconfig: Arc<Config>,\n\tevents: priority::Receiver<Event, Priority>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tmut last: Instant,\n) -> Result<Option<Vec<Event>>, CriticalError> {\n\tif events.is_closed() {\n\t\ttrace!(\"events channel closed, stopping\");\n\t\treturn Ok(None);\n\t}\n\n\tlet mut set: Vec<Event> = vec![];\n\tloop {\n\t\tlet maxtime = if set.is_empty() {\n\t\t\ttrace!(\"nothing in set, waiting forever for next event\");\n\t\t\tDuration::from_secs(u64::MAX)\n\t\t} else {\n\t\t\tconfig.throttle.get().saturating_sub(last.elapsed())\n\t\t};\n\n\t\tif maxtime.is_zero() {\n\t\t\tif set.is_empty() {\n\t\t\t\ttrace!(\"out of throttle but 
nothing to do, resetting\");\n\t\t\t\tlast = Instant::now();\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\ttrace!(\"out of throttle on recycle\");\n\t\t} else {\n\t\t\ttrace!(?maxtime, \"waiting for event\");\n\t\t\tlet maybe_event = timeout(maxtime, events.recv()).await;\n\t\t\tif events.is_closed() {\n\t\t\t\ttrace!(\"events channel closed during timeout, stopping\");\n\t\t\t\treturn Ok(None);\n\t\t\t}\n\n\t\t\tmatch maybe_event {\n\t\t\t\tErr(_timeout) => {\n\t\t\t\t\ttrace!(\"timed out, cycling\");\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tOk(Err(_empty)) => return Ok(None),\n\t\t\t\tOk(Ok((event, priority))) => {\n\t\t\t\t\ttrace!(?event, ?priority, \"got event\");\n\n\t\t\t\t\tif priority == Priority::Urgent {\n\t\t\t\t\t\ttrace!(\"urgent event, by-passing filters\");\n\t\t\t\t\t} else if event.is_empty() {\n\t\t\t\t\t\ttrace!(\"empty event, by-passing filters\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tlet filtered = config.filterer.check_event(&event, priority);\n\t\t\t\t\t\tmatch filtered {\n\t\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\t\ttrace!(%err, \"filter errored on event\");\n\t\t\t\t\t\t\t\terrors.send(err).await?;\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tOk(false) => {\n\t\t\t\t\t\t\t\ttrace!(\"filter rejected event\");\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tOk(true) => {\n\t\t\t\t\t\t\t\ttrace!(\"filter passed event\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif set.is_empty() {\n\t\t\t\t\t\ttrace!(\"event is the first, resetting throttle window\");\n\t\t\t\t\t\tlast = Instant::now();\n\t\t\t\t\t}\n\n\t\t\t\t\tset.push(event);\n\n\t\t\t\t\tif priority == Priority::Urgent {\n\t\t\t\t\t\ttrace!(\"urgent event, by-passing throttle\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tlet elapsed = last.elapsed();\n\t\t\t\t\t\tif elapsed < config.throttle.get() {\n\t\t\t\t\t\t\ttrace!(?elapsed, \"still within throttle window, cycling\");\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\treturn 
Ok(Some(set));\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/action.rs",
    "content": "//! Processor responsible for receiving events, filtering them, and scheduling actions in response.\n\n#[doc(inline)]\npub use handler::Handler as ActionHandler;\n#[doc(inline)]\npub use quit::QuitManner;\n#[doc(inline)]\npub use r#return::ActionReturn;\n#[doc(inline)]\npub use worker::worker;\n\nmod handler;\nmod quit;\nmod r#return;\nmod worker;\n"
  },
  {
    "path": "crates/lib/src/changeable.rs",
    "content": "//! Changeable values.\n\nuse std::{\n\tany::type_name,\n\tfmt,\n\tsync::{Arc, RwLock},\n};\n\n/// A shareable value that doesn't keep a lock when it is read.\n///\n/// This is essentially an `Arc<RwLock<T: Clone>>`, with the only two methods to use it as:\n/// - replace the value, which obtains a write lock\n/// - get a clone of that value, which obtains a read lock\n///\n/// but importantly because you get a clone of the value, the read lock is not held after the\n/// `get()` method returns.\n///\n/// See [`ChangeableFn`] for a specialised variant which holds an [`Fn`].\n#[derive(Clone)]\npub struct Changeable<T>(Arc<RwLock<T>>);\nimpl<T> Changeable<T>\nwhere\n\tT: Clone + Send,\n{\n\t/// Create a new Changeable.\n\t///\n\t/// If `T: Default`, prefer using `::default()`.\n\t#[must_use]\n\tpub fn new(value: T) -> Self {\n\t\tSelf(Arc::new(RwLock::new(value)))\n\t}\n\n\t/// Replace the value with a new one.\n\t///\n\t/// Panics if the lock was poisoned.\n\tpub fn replace(&self, new: T) {\n\t\t*(self.0.write().expect(\"changeable lock poisoned\")) = new;\n\t}\n\n\t/// Get a clone of the value.\n\t///\n\t/// Panics if the lock was poisoned.\n\t#[must_use]\n\tpub fn get(&self) -> T {\n\t\tself.0.read().expect(\"handler lock poisoned\").clone()\n\t}\n}\n\nimpl<T> Default for Changeable<T>\nwhere\n\tT: Clone + Send + Default,\n{\n\tfn default() -> Self {\n\t\tSelf::new(T::default())\n\t}\n}\n\n// TODO: with specialisation, write a better impl when T: Debug\nimpl<T> fmt::Debug for Changeable<T> {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\tf.debug_struct(\"Changeable\")\n\t\t\t.field(\"inner type\", &type_name::<T>())\n\t\t\t.finish_non_exhaustive()\n\t}\n}\n\n/// A shareable `Fn` that doesn't hold a lock when it is called.\n///\n/// This is a specialisation of [`Changeable`] for the `Fn` usecase.\n///\n/// As this is for Watchexec, only `Fn`s with a single argument and return value are supported\n/// here; it's simple enough to 
make your own if you want more.\npub struct ChangeableFn<T, U>(Changeable<Arc<dyn (Fn(T) -> U) + Send + Sync>>);\nimpl<T, U> ChangeableFn<T, U>\nwhere\n\tT: Send,\n\tU: Send,\n{\n\tpub(crate) fn new(f: impl (Fn(T) -> U) + Send + Sync + 'static) -> Self {\n\t\tSelf(Changeable::new(Arc::new(f)))\n\t}\n\n\t/// Replace the fn with a new one.\n\t///\n\t/// Panics if the lock was poisoned.\n\tpub fn replace(&self, new: impl (Fn(T) -> U) + Send + Sync + 'static) {\n\t\tself.0.replace(Arc::new(new));\n\t}\n\n\t/// Call the fn.\n\t///\n\t/// Panics if the lock was poisoned.\n\tpub fn call(&self, data: T) -> U {\n\t\t(self.0.get())(data)\n\t}\n}\n\n// the derive adds a T: Clone bound\nimpl<T, U> Clone for ChangeableFn<T, U> {\n\tfn clone(&self) -> Self {\n\t\tSelf(Changeable::clone(&self.0))\n\t}\n}\n\nimpl<T, U> Default for ChangeableFn<T, U>\nwhere\n\tT: Send,\n\tU: Send + Default,\n{\n\tfn default() -> Self {\n\t\tSelf::new(|_| U::default())\n\t}\n}\n\nimpl<T, U> fmt::Debug for ChangeableFn<T, U> {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\tf.debug_struct(\"ChangeableFn\")\n\t\t\t.field(\"payload type\", &type_name::<T>())\n\t\t\t.field(\"return type\", &type_name::<U>())\n\t\t\t.finish_non_exhaustive()\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/config.rs",
    "content": "//! Configuration and builders for [`crate::Watchexec`].\n\nuse std::{future::Future, pin::pin, sync::Arc, time::Duration};\n\nuse tokio::sync::{watch, Notify};\nuse tracing::{debug, trace};\n\nuse crate::{\n\taction::{ActionHandler, ActionReturn},\n\tchangeable::{Changeable, ChangeableFn},\n\tfilter::{ChangeableFilterer, Filterer},\n\tsources::fs::{WatchedPath, Watcher},\n\tErrorHook,\n};\n\n/// Configuration for [`Watchexec`][crate::Watchexec].\n///\n/// Almost every field is a [`Changeable`], such that its value can be changed from a `&self`.\n///\n/// Fields are public for advanced use, but in most cases changes should be made through the\n/// methods provided: not only are they more convenient, each calls `debug!` on the new value,\n/// providing a quick insight into what your application sets.\n///\n/// The methods also set the \"change signal\" of the Config: this notifies some parts of Watchexec\n/// they should re-read the config. If you modify values via the fields directly, you should call\n/// `signal_change()` yourself. Note that this doesn't mean that changing values _without_ calling\n/// this will prevent Watchexec changing until it's called: most parts of Watchexec take a\n/// \"just-in-time\" approach and read a config item immediately before it's needed, every time it's\n/// needed, and thus don't need to listen for the change signal.\n#[derive(Clone, Debug)]\n#[non_exhaustive]\npub struct Config {\n\t/// This is set by the change methods whenever they're called, and notifies Watchexec that it\n\t/// should read the configuration again.\n\tpub(crate) change_signal: Arc<Notify>,\n\n\t/// The main handler to define: what to do when an action is triggered.\n\t///\n\t/// This handler is called with the [`Action`] environment, look at its doc for more detail.\n\t///\n\t/// If this handler is not provided, or does nothing, Watchexec in turn will do nothing, not\n\t/// even quit. Hence, you really need to provide a handler. 
This is enforced when using\n\t/// [`Watchexec::new()`], but not when using [`Watchexec::default()`].\n\t///\n\t/// It is possible to change the handler or any other configuration inside the previous handler.\n\t/// This and other handlers are fetched \"just in time\" when needed, so changes to handlers can\n\t/// appear instant, or may lag a little depending on lock contention, but a handler being called\n\t/// does not hold its lock. A handler changing while it's being called doesn't affect the run of\n\t/// a previous version of the handler: it will neither be stopped nor retried with the new code.\n\t///\n\t/// It is important for this handler to return quickly: avoid performing blocking work in it.\n\t/// This is true for all handlers, but especially for this one, as it will block the event loop\n\t/// and you'll find that the internal event queues quickly fill up and it all grinds to a halt.\n\t/// Spawn threads or tasks, or use channels or other async primitives to communicate with your\n\t/// expensive code.\n\tpub action_handler: ChangeableFn<ActionHandler, ActionReturn>,\n\n\t/// Runtime error handler.\n\t///\n\t/// This is run on every runtime error that occurs within Watchexec. The default handler\n\t/// is a no-op.\n\t///\n\t/// # Examples\n\t///\n\t/// Set the error handler:\n\t///\n\t/// ```\n\t/// # use watchexec::{config::Config, ErrorHook};\n\t/// let mut config = Config::default();\n\t/// config.on_error(|err: ErrorHook| {\n\t///     tracing::error!(\"{}\", err.error);\n\t/// });\n\t/// ```\n\t///\n\t/// Output a critical error (which will terminate Watchexec):\n\t///\n\t/// ```\n\t/// # use watchexec::{config::Config, ErrorHook, error::{CriticalError, RuntimeError}};\n\t/// let mut config = Config::default();\n\t/// config.on_error(|err: ErrorHook| {\n\t///     tracing::error!(\"{}\", err.error);\n\t///\n\t///     if matches!(err.error, RuntimeError::FsWatcher { .. 
}) {\n\t///         err.critical(CriticalError::External(\"fs watcher failed\".into()));\n\t///     }\n\t/// });\n\t/// ```\n\t///\n\t/// Elevate a runtime error to critical (will preserve the error information):\n\t///\n\t/// ```\n\t/// # use watchexec::{config::Config, ErrorHook, error::RuntimeError};\n\t/// let mut config = Config::default();\n\t/// config.on_error(|err: ErrorHook| {\n\t///     tracing::error!(\"{}\", err.error);\n\t///\n\t///     if matches!(err.error, RuntimeError::FsWatcher { .. }) {\n\t///         err.elevate();\n\t///     }\n\t/// });\n\t/// ```\n\t///\n\t/// It is important for this to return quickly: avoid performing blocking work. Locking and\n\t/// writing to stdio is fine, but waiting on the network is a bad idea. Of course, an\n\t/// asynchronous log writer or separate UI thread is always a better idea than `println!` if you\n\t/// have that ability.\n\tpub error_handler: ChangeableFn<ErrorHook, ()>,\n\n\t/// The set of filesystem paths to be watched.\n\t///\n\t/// If this is non-empty, the filesystem event source is started and configured to provide\n\t/// events for these paths. If it becomes empty, the filesystem event source is shut down.\n\tpub pathset: Changeable<Vec<WatchedPath>>,\n\n\t/// The kind of filesystem watcher to be used.\n\tpub file_watcher: Changeable<Watcher>,\n\n\t/// Watch stdin and emit events when input comes in over the keyboard.\n\t///\n\t/// If this is true, the keyboard event source is started and stdin is switched to raw mode\n\t/// (disabling line buffering). Individual key events are emitted, as well as EOF. 
If it\n\t/// becomes false, the keyboard event source is shut down, cooked mode is restored, and stdin\n\t/// may flow to commands again.\n\t///\n\t/// This requires a TTY and is opt-in.\n\tpub keyboard_events: Changeable<bool>,\n\n\t/// How long to wait for events to build up before executing an action.\n\t///\n\t/// This is sometimes called \"debouncing.\" We debounce on the trailing edge: an action is\n\t/// triggered only after that amount of time has passed since the first event in the cycle. The\n\t/// action is called with all the collected events in the cycle.\n\t///\n\t/// Default is 50ms.\n\tpub throttle: Changeable<Duration>,\n\n\t/// The filterer implementation to use when filtering events.\n\t///\n\t/// The default is a no-op, which will always pass every event.\n\tpub filterer: ChangeableFilterer,\n\n\t/// The buffer size of the channel which carries runtime errors.\n\t///\n\t/// The default (64) is usually fine. If you expect a much larger throughput of runtime errors,\n\t/// or if your `error_handler` is slow, adjusting this value may help.\n\t///\n\t/// This is unchangeable at runtime and must be set before Watchexec instantiation.\n\tpub error_channel_size: usize,\n\n\t/// The buffer size of the channel which carries events.\n\t///\n\t/// The default (4096) is usually fine. If you expect a much larger throughput of events,\n\t/// adjusting this value may help.\n\t///\n\t/// This is unchangeable at runtime and must be set before Watchexec instantiation.\n\tpub event_channel_size: usize,\n\n\t/// Signalled by the filesystem worker after it finishes applying a pathset change\n\t/// (registering/unregistering OS watches). 
Subscribe via [`Config::fs_ready()`] **before**\n\t/// calling [`Config::pathset()`] to avoid missing the notification.\n\tpub(crate) fs_ready: watch::Sender<()>,\n}\n\nimpl Default for Config {\n\tfn default() -> Self {\n\t\tSelf {\n\t\t\tchange_signal: Default::default(),\n\t\t\taction_handler: ChangeableFn::new(ActionReturn::Sync),\n\t\t\terror_handler: Default::default(),\n\t\t\tpathset: Default::default(),\n\t\t\tfile_watcher: Default::default(),\n\t\t\tkeyboard_events: Default::default(),\n\t\t\tthrottle: Changeable::new(Duration::from_millis(50)),\n\t\t\tfilterer: Default::default(),\n\t\t\terror_channel_size: 64,\n\t\t\tevent_channel_size: 4096,\n\t\t\tfs_ready: watch::channel(()).0,\n\t\t}\n\t}\n}\n\nimpl Config {\n\t/// Signal that the configuration has changed.\n\t///\n\t/// This is called automatically by all other methods here, so most of the time calling this\n\t/// isn't needed, but it can be useful for some advanced uses.\n\t#[allow(\n\t\tclippy::must_use_candidate,\n\t\treason = \"this return can explicitly be ignored\"\n\t)]\n\tpub fn signal_change(&self) -> &Self {\n\t\tself.change_signal.notify_waiters();\n\t\tself\n\t}\n\n\t/// Watch the config for a change, but run once first.\n\t///\n\t/// This returns a [`ConfigWatched`] whose first `next()` call resolves immediately, and every\n\t/// subsequent one waits for a change signal on this Config.\n\t#[must_use]\n\tpub(crate) fn watch(&self) -> ConfigWatched {\n\t\tConfigWatched::new(self.change_signal.clone())\n\t}\n\n\t/// Subscribe to filesystem worker readiness notifications.\n\t///\n\t/// Returns a [`watch::Receiver`] that is notified each time the filesystem worker finishes\n\t/// applying a pathset change (i.e. OS watches are registered/unregistered). Signals readiness\n\t/// even if some paths failed to register; check the error handler for failures. To avoid\n\t/// missing a notification, subscribe **before** calling [`Config::pathset()`], then\n\t/// `.changed().await`.\n\tpub fn fs_ready(&self) -> watch::Receiver<()> {\n\t\tself.fs_ready.subscribe()\n\t}\n\n\t/// Set the pathset to be watched.\n\tpub fn pathset<I, P>(&self, pathset: I) -> &Self\n\twhere\n\t\tI: IntoIterator<Item = P>,\n\t\tP: Into<WatchedPath>,\n\t{\n\t\tlet pathset = pathset.into_iter().map(std::convert::Into::into).collect();\n\t\tdebug!(?pathset, \"Config: pathset\");\n\t\tself.pathset.replace(pathset);\n\t\tself.signal_change()\n\t}\n\n\t/// Set the file watcher type to use.\n\tpub fn file_watcher(&self, watcher: Watcher) -> &Self {\n\t\tdebug!(?watcher, \"Config: file watcher\");\n\t\tself.file_watcher.replace(watcher);\n\t\tself.signal_change()\n\t}\n\n\t/// Enable keyboard/stdin event source.\n\tpub fn keyboard_events(&self, enable: bool) -> &Self {\n\t\tdebug!(?enable, \"Config: keyboard\");\n\t\tself.keyboard_events.replace(enable);\n\t\tself.signal_change()\n\t}\n\n\t/// Set the throttle.\n\tpub fn throttle(&self, throttle: impl Into<Duration>) -> &Self {\n\t\tlet throttle = throttle.into();\n\t\tdebug!(?throttle, \"Config: throttle\");\n\t\tself.throttle.replace(throttle);\n\t\tself.signal_change()\n\t}\n\n\t/// Set the filterer implementation to use.\n\tpub fn filterer(&self, filterer: impl Filterer + 'static) -> &Self {\n\t\tdebug!(?filterer, \"Config: filterer\");\n\t\tself.filterer.replace(filterer);\n\t\tself.signal_change()\n\t}\n\n\t/// Set the runtime error handler.\n\tpub fn on_error(&self, handler: impl Fn(ErrorHook) + Send + Sync + 'static) -> &Self {\n\t\tdebug!(\"Config: on_error\");\n\t\tself.error_handler.replace(handler);\n\t\tself.signal_change()\n\t}\n\n\t/// Set the action handler.\n\tpub fn on_action(\n\t\t&self,\n\t\thandler: impl (Fn(ActionHandler) -> ActionHandler) + Send + Sync + 'static,\n\t) -> &Self {\n\t\tdebug!(\"Config: on_action\");\n\t\tself.action_handler\n\t\t\t.replace(move |action| ActionReturn::Sync(handler(action)));\n\t\tself.signal_change()\n\t}\n\n\t/// Set the action handler to a future-returning closure.\n\tpub fn on_action_async(\n\t\t&self,\n\t\thandler: impl (Fn(ActionHandler) -> Box<dyn Future<Output = ActionHandler> + Send + Sync>)\n\t\t\t+ Send\n\t\t\t+ Sync\n\t\t\t+ 'static,\n\t) -> &Self {\n\t\tdebug!(\"Config: on_action_async\");\n\t\tself.action_handler\n\t\t\t.replace(move |action| ActionReturn::Async(handler(action)));\n\t\tself.signal_change()\n\t}\n}\n\n#[derive(Debug)]\npub(crate) struct ConfigWatched {\n\tfirst_run: bool,\n\tnotify: Arc<Notify>,\n}\n\nimpl ConfigWatched {\n\tfn new(notify: Arc<Notify>) -> Self {\n\t\tlet notified = notify.notified();\n\t\tpin!(notified).as_mut().enable();\n\n\t\tSelf {\n\t\t\tfirst_run: true,\n\t\t\tnotify,\n\t\t}\n\t}\n\n\tpub async fn next(&mut self) {\n\t\tlet notified = self.notify.notified();\n\t\tlet mut notified = pin!(notified);\n\t\tnotified.as_mut().enable();\n\n\t\tif self.first_run {\n\t\t\ttrace!(\"ConfigWatched: first run\");\n\t\t\tself.first_run = false;\n\t\t} else {\n\t\t\ttrace!(?notified, \"ConfigWatched: waiting for change\");\n\t\t\t// there's a bit of a gotcha where any config changes made after a Notified resolves\n\t\t\t// but before a new one is issued will not be caught. not sure how to fix that yet.\n\t\t\tnotified.await;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/error/critical.rs",
    "content": "use miette::Diagnostic;\nuse thiserror::Error;\nuse tokio::{sync::mpsc, task::JoinError};\nuse watchexec_events::{Event, Priority};\n\nuse super::{FsWatcherError, RuntimeError};\nuse crate::sources::fs::Watcher;\n\n/// Errors which are not recoverable and stop watchexec execution.\n#[derive(Debug, Diagnostic, Error)]\n#[non_exhaustive]\npub enum CriticalError {\n\t/// Pseudo-error used to signal a graceful exit.\n\t#[error(\"this should never be printed (exit)\")]\n\tExit,\n\n\t/// For custom critical errors.\n\t///\n\t/// This should be used for errors by external code which are not covered by the other error\n\t/// types; watchexec-internal errors should never use this.\n\t#[error(\"external(critical): {0}\")]\n\tExternal(#[from] Box<dyn std::error::Error + Send + Sync>),\n\n\t/// For elevated runtime errors.\n\t///\n\t/// This is used for runtime errors elevated to critical.\n\t#[error(\"a runtime error is too serious for the process to continue\")]\n\tElevated {\n\t\t/// The runtime error to be elevated.\n\t\t#[source]\n\t\terr: RuntimeError,\n\n\t\t/// Some context or help for the user.\n\t\thelp: Option<String>,\n\t},\n\n\t/// A critical I/O error occurred.\n\t#[error(\"io({about}): {err}\")]\n\tIoError {\n\t\t/// What it was about.\n\t\tabout: &'static str,\n\n\t\t/// The I/O error which occurred.\n\t\t#[source]\n\t\terr: std::io::Error,\n\t},\n\n\t/// Error received when a runtime error cannot be sent to the errors channel.\n\t#[error(\"cannot send internal runtime error: {0}\")]\n\tErrorChannelSend(#[from] mpsc::error::SendError<RuntimeError>),\n\n\t/// Error received when an event cannot be sent to the events channel.\n\t#[error(\"cannot send event to internal channel: {0}\")]\n\tEventChannelSend(#[from] async_priority_channel::SendError<(Event, Priority)>),\n\n\t/// Error received when joining the main watchexec task.\n\t#[error(\"main task join: {0}\")]\n\tMainTaskJoin(#[source] JoinError),\n\n\t/// Error received when the filesystem 
watcher can't initialise.\n\t///\n\t/// In theory this is recoverable but in practice it's generally not, so we treat it as critical.\n\t#[error(\"fs: cannot initialise {kind:?} watcher\")]\n\tFsWatcherInit {\n\t\t/// The kind of watcher.\n\t\tkind: Watcher,\n\n\t\t/// The error which occurred.\n\t\t#[source]\n\t\terr: FsWatcherError,\n\t},\n}\n"
  },
  {
    "path": "crates/lib/src/error/runtime.rs",
    "content": "use miette::Diagnostic;\nuse thiserror::Error;\nuse watchexec_events::{Event, Priority};\nuse watchexec_signals::Signal;\n\nuse crate::sources::fs::Watcher;\n\n/// Errors which _may_ be recoverable, transient, or only affect a part of the operation, and should\n/// be reported to the user and/or acted upon programmatically, but will not outright stop watchexec.\n///\n/// Some errors that are classified here are spurious and may be ignored. For example,\n/// \"waiting on process\" errors should not be printed to the user by default:\n///\n/// ```\n/// # use tracing::error;\n/// # use watchexec::{Config, ErrorHook, error::RuntimeError};\n/// # let mut config = Config::default();\n/// config.on_error(|err: ErrorHook| {\n///     if let RuntimeError::IoError {\n///         about: \"waiting on process group\",\n///         ..\n///     } = err.error\n///     {\n///         error!(\"{}\", err.error);\n///         return;\n///     }\n///\n///     // ...\n/// });\n/// ```\n///\n/// On the other hand, some errors may not be fatal to this library's understanding, but will be to\n/// your application. In those cases, you should \"elevate\" these errors, which will transform them\n/// to [`CriticalError`](super::CriticalError)s:\n///\n/// ```\n/// # use watchexec::{Config, ErrorHook, error::{RuntimeError, FsWatcherError}};\n/// # let mut config = Config::default();\n/// config.on_error(|err: ErrorHook| {\n///     if let RuntimeError::FsWatcher {\n///         err:\n///             FsWatcherError::Create { .. }\n///             | FsWatcherError::TooManyWatches { .. }\n///             | FsWatcherError::TooManyHandles { .. 
},\n///         ..\n///     } = err.error {\n///         err.elevate();\n///         return;\n///     }\n///\n///     // ...\n/// });\n/// ```\n#[derive(Debug, Diagnostic, Error)]\n#[non_exhaustive]\npub enum RuntimeError {\n\t/// Pseudo-error used to signal a graceful exit.\n\t#[error(\"this should never be printed (exit)\")]\n\tExit,\n\n\t/// For custom runtime errors.\n\t///\n\t/// This should be used for errors by external code which are not covered by the other error\n\t/// types; watchexec-internal errors should never use this.\n\t#[error(\"external(runtime): {0}\")]\n\tExternal(#[from] Box<dyn std::error::Error + Send + Sync>),\n\n\t/// Generic I/O error, with some context.\n\t#[error(\"io({about}): {err}\")]\n\tIoError {\n\t\t/// What it was about.\n\t\tabout: &'static str,\n\n\t\t/// The I/O error which occurred.\n\t\t#[source]\n\t\terr: std::io::Error,\n\t},\n\n\t/// Events from the filesystem watcher event source.\n\t#[error(\"{kind:?} fs watcher error\")]\n\tFsWatcher {\n\t\t/// The kind of watcher that failed to instantiate.\n\t\tkind: Watcher,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: super::FsWatcherError,\n\t},\n\n\t/// Events from the keyboard event source\n\t#[error(\"keyboard watcher error\")]\n\tKeyboardWatcher {\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: super::KeyboardWatcherError,\n\t},\n\n\t/// Opaque internal error from a command supervisor.\n\t#[error(\"internal: command supervisor: {0}\")]\n\tInternalSupervisor(String),\n\n\t/// Error received when an event cannot be sent to the event channel.\n\t#[error(\"cannot send event from {ctx}: {err}\")]\n\tEventChannelSend {\n\t\t/// The context in which this error happened.\n\t\t///\n\t\t/// This is not stable and its value should not be relied on except for printing the error.\n\t\tctx: &'static str,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: async_priority_channel::SendError<(Event, Priority)>,\n\t},\n\n\t/// Error received when an event cannot be 
sent to the event channel.\n\t#[error(\"cannot send event from {ctx}: {err}\")]\n\tEventChannelTrySend {\n\t\t/// The context in which this error happened.\n\t\t///\n\t\t/// This is not stable and its value should not be relied on except for printing the error.\n\t\tctx: &'static str,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: async_priority_channel::TrySendError<(Event, Priority)>,\n\t},\n\n\t/// Error received when a [`Handler`][crate::handler::Handler] errors.\n\t///\n\t/// The error is completely opaque, having been flattened into a string at the error point.\n\t#[error(\"handler error while {ctx}: {err}\")]\n\tHandler {\n\t\t/// The context in which this error happened.\n\t\t///\n\t\t/// This is not stable and its value should not be relied on except for printing the error.\n\t\tctx: &'static str,\n\n\t\t/// The underlying error, as the Display representation of the original error.\n\t\terr: String,\n\t},\n\n\t/// Error received when a [`Handler`][crate::handler::Handler] which has been passed a lock has kept that lock open after the handler has completed.\n\t#[error(\"{0} handler returned while holding a lock alive\")]\n\tHandlerLockHeld(&'static str),\n\n\t/// Error received when operating on a process.\n\t#[error(\"when operating on process: {0}\")]\n\tProcess(#[source] std::io::Error),\n\n\t/// Error received when a process did not start correctly, or finished before we could even tell.\n\t#[error(\"process was dead on arrival\")]\n\tProcessDeadOnArrival,\n\n\t/// Error received when a [`Signal`] is unsupported\n\t///\n\t/// This may happen if the signal is not supported on the current platform, or if Watchexec\n\t/// doesn't support sending the signal.\n\t#[error(\"unsupported signal: {0:?}\")]\n\tUnsupportedSignal(Signal),\n\n\t/// Error received when there are no commands to run.\n\t///\n\t/// This is generally a programmer error and should be caught earlier.\n\t#[error(\"no commands to run\")]\n\tNoCommands,\n\n\t/// Error received when 
trying to render a [`Command::Shell`](crate::command::Command) that has no `command`\n\t///\n\t/// This is generally a programmer error and should be caught earlier.\n\t#[error(\"empty shelled command\")]\n\tCommandShellEmptyCommand,\n\n\t/// Error received when trying to render a [`Shell::Unix`](crate::command::Shell) with an empty shell\n\t///\n\t/// This is generally a programmer error and should be caught earlier.\n\t#[error(\"empty shell program\")]\n\tCommandShellEmptyShell,\n\n\t/// Error emitted by a [`Filterer`](crate::filter::Filterer).\n\t#[error(\"{kind} filterer: {err}\")]\n\tFilterer {\n\t\t/// The kind of filterer that failed.\n\t\t///\n\t\t/// This should be set by the filterer itself to a short name for the filterer.\n\t\t///\n\t\t/// This is not stable and its value should not be relied on except for printing the error.\n\t\tkind: &'static str,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: Box<dyn std::error::Error + Send + Sync>,\n\t},\n}\n"
  },
  {
    "path": "crates/lib/src/error/specialised.rs",
    "content": "use std::path::PathBuf;\n\nuse miette::Diagnostic;\nuse thiserror::Error;\n\n/// Errors emitted by the filesystem watcher.\n#[derive(Debug, Diagnostic, Error)]\n#[non_exhaustive]\npub enum FsWatcherError {\n\t/// Error received when creating a filesystem watcher fails.\n\t///\n\t/// Also see `TooManyWatches` and `TooManyHandles`.\n\t#[error(\"failed to instantiate\")]\n\t#[diagnostic(help(\"perhaps retry with the poll watcher\"))]\n\tCreate(#[source] notify::Error),\n\n\t/// Error received when creating or updating a filesystem watcher fails because there are too many watches.\n\t///\n\t/// This is the OS error 28 on Linux.\n\t#[error(\"failed to instantiate: too many watches\")]\n\t#[cfg_attr(target_os = \"linux\", diagnostic(help(\"you will want to increase your inotify.max_user_watches, see inotify(7) and https://watchexec.github.io/docs/inotify-limits.html\")))]\n\t#[cfg_attr(\n\t\tnot(target_os = \"linux\"),\n\t\tdiagnostic(help(\"this should not happen on your platform\"))\n\t)]\n\tTooManyWatches(#[source] notify::Error),\n\n\t/// Error received when creating or updating a filesystem watcher fails because there are too many file handles open.\n\t///\n\t/// This is the OS error 24 on Linux. 
It may also occur when the limit for inotify instances is reached.\n\t#[error(\"failed to instantiate: too many handles\")]\n\t#[cfg_attr(target_os = \"linux\", diagnostic(help(\"you will want to increase your `nofile` limit, see pam_limits(8); or increase your inotify.max_user_instances, see inotify(7) and https://watchexec.github.io/docs/inotify-limits.html\")))]\n\t#[cfg_attr(\n\t\tnot(target_os = \"linux\"),\n\t\tdiagnostic(help(\"this should not happen on your platform\"))\n\t)]\n\tTooManyHandles(#[source] notify::Error),\n\n\t/// Error received when reading a filesystem event fails.\n\t#[error(\"received an event that we could not read\")]\n\tEvent(#[source] notify::Error),\n\n\t/// Error received when adding to the pathset for the filesystem watcher fails.\n\t#[error(\"while adding {path:?}\")]\n\tPathAdd {\n\t\t/// The path that was attempted to be added.\n\t\tpath: PathBuf,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: notify::Error,\n\t},\n\n\t/// Error received when removing from the pathset for the filesystem watcher fails.\n\t#[error(\"while removing {path:?}\")]\n\tPathRemove {\n\t\t/// The path that was attempted to be removed.\n\t\tpath: PathBuf,\n\n\t\t/// The underlying error.\n\t\t#[source]\n\t\terr: notify::Error,\n\t},\n}\n\n/// Errors emitted by the keyboard watcher.\n#[derive(Debug, Diagnostic, Error)]\n#[non_exhaustive]\npub enum KeyboardWatcherError {\n\t/// Error received when shutting down stdin watcher fails.\n\t#[error(\"failed to shut down stdin watcher\")]\n\tStdinShutdown,\n}\n"
  },
  {
    "path": "crates/lib/src/error.rs",
    "content": "//! Error types for critical, runtime, and specialised errors.\n\n#[doc(inline)]\npub use critical::*;\n#[doc(inline)]\npub use runtime::*;\n#[doc(inline)]\npub use specialised::*;\n\nmod critical;\nmod runtime;\nmod specialised;\n"
  },
  {
    "path": "crates/lib/src/filter.rs",
    "content": "//! The `Filterer` trait for event filtering.\n\nuse std::{fmt, sync::Arc};\n\nuse watchexec_events::{Event, Priority};\n\nuse crate::{changeable::Changeable, error::RuntimeError};\n\n/// An interface for filtering events.\npub trait Filterer: std::fmt::Debug + Send + Sync {\n\t/// Called on (almost) every event, and should return `false` if the event is to be discarded.\n\t///\n\t/// Checking whether an event passes a filter is synchronous, should be fast, and must not block\n\t/// the thread. Do any expensive stuff upfront during construction of your filterer, or in a\n\t/// separate thread/task, as needed.\n\t///\n\t/// Returning an error will also fail the event processing, but the error will be propagated to\n\t/// the watchexec error handler. While the type signature supports any [`RuntimeError`], it's\n\t/// preferred that you create your own error type and return it wrapped in the\n\t/// [`RuntimeError::Filterer`] variant with the name of your filterer as `kind`.\n\tfn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError>;\n}\n\nimpl Filterer for () {\n\tfn check_event(&self, _event: &Event, _priority: Priority) -> Result<bool, RuntimeError> {\n\t\tOk(true)\n\t}\n}\n\nimpl<T: Filterer> Filterer for Arc<T> {\n\tfn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> {\n\t\tSelf::as_ref(self).check_event(event, priority)\n\t}\n}\n\n/// A shareable `Filterer` that doesn't hold a lock when it is called.\n///\n/// This is a specialisation of [`Changeable`] for `Filterer`.\npub struct ChangeableFilterer(Changeable<Arc<dyn Filterer>>);\nimpl ChangeableFilterer {\n\t/// Replace the filterer with a new one.\n\t///\n\t/// Panics if the lock was poisoned.\n\tpub fn replace(&self, new: impl Filterer + 'static) {\n\t\tself.0.replace(Arc::new(new));\n\t}\n}\n\nimpl Filterer for ChangeableFilterer {\n\tfn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> 
{\n\t\tArc::as_ref(&self.0.get()).check_event(event, priority)\n\t}\n}\n\n// the derive adds a T: Clone bound\nimpl Clone for ChangeableFilterer {\n\tfn clone(&self) -> Self {\n\t\tSelf(Changeable::clone(&self.0))\n\t}\n}\n\nimpl Default for ChangeableFilterer {\n\tfn default() -> Self {\n\t\tSelf(Changeable::new(Arc::new(())))\n\t}\n}\n\nimpl fmt::Debug for ChangeableFilterer {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\tf.debug_struct(\"ChangeableFilterer\")\n\t\t\t.field(\"filterer\", &format!(\"{:?}\", self.0.get()))\n\t\t\t.finish_non_exhaustive()\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/id.rs",
    "content": "use std::{cell::Cell, num::NonZeroU64};\n\n/// Unique opaque identifier.\n#[must_use]\n#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]\npub struct Id {\n\tthread: NonZeroU64,\n\tcounter: u64,\n}\n\nthread_local! {\n\tstatic COUNTER: Cell<u64> = const { Cell::new(0) };\n}\n\nimpl Default for Id {\n\tfn default() -> Self {\n\t\tlet counter = COUNTER.get();\n\t\tCOUNTER.set(counter.wrapping_add(1));\n\n\t\tSelf {\n\t\t\tthread: threadid(),\n\t\t\tcounter,\n\t\t}\n\t}\n}\n\nfn threadid() -> NonZeroU64 {\n\tuse std::hash::{Hash, Hasher};\n\n\tstruct Extractor {\n\t\tid: u64,\n\t}\n\n\timpl Hasher for Extractor {\n\t\tfn finish(&self) -> u64 {\n\t\t\tself.id\n\t\t}\n\n\t\tfn write(&mut self, _bytes: &[u8]) {}\n\t\tfn write_u64(&mut self, n: u64) {\n\t\t\tself.id = n;\n\t\t}\n\t}\n\n\tlet mut ex = Extractor { id: 0 };\n\tstd::thread::current().id().hash(&mut ex);\n\n\t// SAFETY: guaranteed to be > 0\n\t// safeguarded by the max(1), but this is already guaranteed by the thread id being a NonZeroU64\n\t// internally; as that guarantee is not stable, we do make sure, just to be on the safe side.\n\tunsafe { NonZeroU64::new_unchecked(ex.finish().max(1)) }\n}\n\n// Replace with this when the thread_id_value feature is stable\n// fn threadid() -> NonZeroU64 {\n// \tstd::thread::current().id().as_u64()\n// }\n\n#[test]\nfn test_threadid() {\n\tlet top = threadid();\n\tstd::thread::spawn(move || {\n\t\tassert_ne!(top, threadid());\n\t})\n\t.join()\n\t.expect(\"thread failed\");\n}\n"
  },
  {
    "path": "crates/lib/src/late_join_set.rs",
    "content": "use std::future::Future;\n\nuse futures::{stream::FuturesUnordered, StreamExt};\nuse tokio::task::{JoinError, JoinHandle};\n\n/// A collection of tasks spawned on a Tokio runtime.\n///\n/// This is conceptually a variant of Tokio's [`JoinSet`](tokio::task::JoinSet) which can attach\n/// tasks after they've been spawned.\n///\n/// # Examples\n///\n/// Spawn multiple tasks and wait for them.\n///\n/// ```no_compile\n/// use crate::late_join_set::LateJoinSet;\n///\n/// #[tokio::main]\n/// async fn main() {\n///     let mut set = LateJoinSet::default();\n///\n///     for i in 0..10 {\n///         set.spawn(async move { println!(\"{i}\"); });\n///     }\n///\n///     let mut seen = [false; 10];\n///     while let Some(res) = set.join_next().await {\n///         let idx = res.unwrap();\n///         seen[idx] = true;\n///     }\n///\n///     for i in 0..10 {\n///         assert!(seen[i]);\n///     }\n/// }\n/// ```\n///\n/// Attach a task to a set after it's been spawned.\n///\n/// ```no_compile\n/// use crate::late_join_set::LateJoinSet;\n///\n/// #[tokio::main]\n/// async fn main() {\n///     let mut set = LateJoinSet::default();\n///\n///     let handle = tokio::spawn(async move { println!(\"Hello, world!\"); });\n///     set.insert(handle);\n///     set.abort_all();\n/// }\n/// ```\n#[derive(Debug, Default)]\npub struct LateJoinSet {\n\ttasks: FuturesUnordered<JoinHandle<()>>,\n}\n\nimpl LateJoinSet {\n\t/// Spawn the provided task on the `LateJoinSet`.\n\t///\n\t/// The provided future will start running in the background immediately when this method is\n\t/// called, even if you don't await anything on this `LateJoinSet`.\n\t///\n\t/// # Panics\n\t///\n\t/// This method panics if called outside of a Tokio runtime.\n\t#[track_caller]\n\tpub fn spawn(&self, task: impl Future<Output = ()> + Send + 'static) {\n\t\tself.insert(tokio::spawn(task));\n\t}\n\n\t/// Insert an already-spawned task into the [`LateJoinSet`].\n\tpub fn insert(&self, task: 
JoinHandle<()>) {\n\t\tself.tasks.push(task);\n\t}\n\n\t/// Waits until one of the tasks in the set completes.\n\t///\n\t/// Returns `None` if the set is empty.\n\tpub async fn join_next(&mut self) -> Option<Result<(), JoinError>> {\n\t\tself.tasks.next().await\n\t}\n\n\t/// Waits until all the tasks in the set complete.\n\t///\n\t/// Ignores any panics in the tasks shutting down.\n\tpub async fn join_all(&mut self) {\n\t\twhile self.join_next().await.is_some() {}\n\t\tself.tasks.clear();\n\t}\n\n\t/// Aborts all tasks on this `LateJoinSet`.\n\t///\n\t/// This does not remove the tasks from the `LateJoinSet`. To wait for the tasks to complete\n\t/// cancellation, use `join_all` or call `join_next` in a loop until the `LateJoinSet` is empty.\n\tpub fn abort_all(&self) {\n\t\tself.tasks.iter().for_each(JoinHandle::abort);\n\t}\n}\n\nimpl Drop for LateJoinSet {\n\tfn drop(&mut self) {\n\t\tself.abort_all();\n\t\tself.tasks.clear();\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/lib.rs",
    "content": "//! Watchexec: a library for utilities and programs which respond to (file, signal, etc) events\n//! primarily by launching or managing other programs.\n//!\n//! Also see the CLI tool: <https://github.com/watchexec/watchexec>\n//!\n//! This library is powered by [Tokio](https://tokio.rs).\n//!\n//! The main way to use this crate involves constructing a [`Watchexec`] around a [`Config`], then\n//! running it. Handlers (defined in [`Config`]) are used to hook into Watchexec at various points.\n//! The config can be changed at any time with the `config` field on your [`Watchexec`] instance.\n//!\n//! It's recommended to use the [miette] erroring library in applications, but all errors implement\n//! [`std::error::Error`] so your favourite error handling library can of course be used.\n//!\n//! ```no_run\n//! use miette::{IntoDiagnostic, Result};\n//! use watchexec_signals::Signal;\n//! use watchexec::Watchexec;\n//!\n//! #[tokio::main]\n//! async fn main() -> Result<()> {\n//!     let wx = Watchexec::new(|mut action| {\n//!         // print any events\n//!         for event in action.events.iter() {\n//!             eprintln!(\"EVENT: {event:?}\");\n//!         }\n//!\n//!         // if Ctrl-C is received, quit\n//!         if action.signals().any(|sig| sig == Signal::Interrupt) {\n//!             action.quit();\n//!         }\n//!\n//!         action\n//!     })?;\n//!\n//!     // watch the current directory\n//!     wx.config.pathset([\".\"]);\n//!\n//!     wx.main().await.into_diagnostic()?;\n//!     Ok(())\n//! }\n//! ```\n//!\n//! Alternatively, you can use the modules exposed by the crate and the external crates such as\n//! [`notify`], [`clearscreen`](https://docs.rs/clearscreen), [`process_wrap`]... to build something\n//! more advanced, at the cost of reimplementing the glue code.\n//!\n//! Note that the library generates a _lot_ of debug messaging with [tracing]. **You should not\n//! 
enable printing even `error`-level log messages for this crate unless it's for debugging.**\n//! Instead, make use of the [`Config::on_error()`] method to define a handler for errors\n//! occurring at runtime that are _meant_ for you to handle (by printing out or otherwise).\n\n#![doc(html_favicon_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![doc(html_logo_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![warn(clippy::unwrap_used, missing_docs)]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n#![deny(rust_2018_idioms)]\n\n// the toolkit to make your own\npub mod action;\npub mod error;\npub mod filter;\npub mod paths;\npub mod sources;\n\n// the core experience\npub mod changeable;\npub mod config;\n\nmod id;\nmod late_join_set;\nmod watched_path;\nmod watchexec;\n\n#[doc(inline)]\npub use crate::{\n\tid::Id,\n\twatched_path::WatchedPath,\n\twatchexec::{ErrorHook, Watchexec},\n};\n\n#[doc(no_inline)]\npub use crate::config::Config;\n#[doc(no_inline)]\npub use watchexec_supervisor::{command, job};\n\n#[cfg(debug_assertions)]\n#[doc(hidden)]\npub mod readme_doc_check {\n\t#[doc = include_str!(\"../README.md\")]\n\tpub struct Readme;\n}\n"
  },
  {
    "path": "crates/lib/src/paths.rs",
    "content": "//! Utilities for paths and sets of paths.\n\nuse std::{\n\tcollections::{HashMap, HashSet},\n\tffi::OsString,\n\tpath::{Path, PathBuf},\n};\n\nuse watchexec_events::{Event, FileType, Tag};\n\n/// The separator for paths used in environment variables.\n#[cfg(unix)]\npub const PATH_SEPARATOR: &str = \":\";\n/// The separator for paths used in environment variables.\n#[cfg(not(unix))]\npub const PATH_SEPARATOR: &str = \";\";\n\n/// Returns the longest common prefix of all given paths.\n///\n/// This is a utility function which is useful for finding the common root of a set of origins.\n///\n/// Returns `None` if zero paths are given or paths share no common prefix.\npub fn common_prefix<I, P>(paths: I) -> Option<PathBuf>\nwhere\n\tI: IntoIterator<Item = P>,\n\tP: AsRef<Path>,\n{\n\tlet mut paths = paths.into_iter();\n\tlet first_path = paths.next().map(|p| p.as_ref().to_owned());\n\tlet mut longest_path = if let Some(ref p) = first_path {\n\t\tp.components().collect::<Vec<_>>()\n\t} else {\n\t\treturn None;\n\t};\n\n\tfor path in paths {\n\t\tlet mut greatest_distance = 0;\n\t\tfor component_pair in path.as_ref().components().zip(longest_path.iter()) {\n\t\t\tif component_pair.0 != *component_pair.1 {\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tgreatest_distance += 1;\n\t\t}\n\n\t\tif greatest_distance != longest_path.len() {\n\t\t\tlongest_path.truncate(greatest_distance);\n\t\t}\n\t}\n\n\tif longest_path.is_empty() {\n\t\tNone\n\t} else {\n\t\tlet mut result = PathBuf::new();\n\t\tfor component in longest_path {\n\t\t\tresult.push(component.as_os_str());\n\t\t}\n\t\tSome(result)\n\t}\n}\n\n/// Summarise [`Event`]s as a set of environment variables by category.\n///\n/// - `CREATED` -> `Create(_)`\n/// - `META_CHANGED` -> `Modify(Metadata(_))`\n/// - `REMOVED` -> `Remove(_)`\n/// - `RENAMED` -> `Modify(Name(_))`\n/// - `WRITTEN` -> `Modify(Data(_))`, `Access(Close(Write))`\n/// - `OTHERWISE_CHANGED` -> anything else\n/// - plus `COMMON` with the common prefix 
of all paths (even if there's only one path).\n///\n/// It ignores non-path events and pathed events without event kind. Multiple events are sorted in\n/// byte order and joined with the platform-specific path separator (`:` for unix, `;` for Windows).\npub fn summarise_events_to_env<'events>(\n\tevents: impl IntoIterator<Item = &'events Event>,\n) -> HashMap<&'static str, OsString> {\n\tlet mut all_trunks = Vec::new();\n\tlet mut kind_buckets = HashMap::new();\n\tfor event in events {\n\t\tlet (paths, trunks): (Vec<_>, Vec<_>) = event\n\t\t\t.paths()\n\t\t\t.map(|(p, ft)| {\n\t\t\t\t(\n\t\t\t\t\tp.to_owned(),\n\t\t\t\t\tmatch ft {\n\t\t\t\t\t\tSome(FileType::Dir) => None,\n\t\t\t\t\t\t_ => p.parent(),\n\t\t\t\t\t}\n\t\t\t\t\t.unwrap_or(p)\n\t\t\t\t\t.to_owned(),\n\t\t\t\t)\n\t\t\t})\n\t\t\t.unzip();\n\t\ttracing::trace!(?paths, ?trunks, \"event paths\");\n\n\t\tif paths.is_empty() {\n\t\t\tcontinue;\n\t\t}\n\n\t\tall_trunks.extend(trunks.clone());\n\n\t\t// usually there's only one but just in case\n\t\tfor kind in event.tags.iter().filter_map(|t| {\n\t\t\tif let Tag::FileEventKind(kind) = t {\n\t\t\t\tSome(kind)\n\t\t\t} else {\n\t\t\t\tNone\n\t\t\t}\n\t\t}) {\n\t\t\tkind_buckets\n\t\t\t\t.entry(kind)\n\t\t\t\t.or_insert_with(HashSet::new)\n\t\t\t\t.extend(paths.clone());\n\t\t}\n\t}\n\n\tlet common_path = common_prefix(all_trunks);\n\n\tlet mut grouped_buckets = HashMap::new();\n\tfor (kind, paths) in kind_buckets {\n\t\tuse notify::event::{AccessKind::*, AccessMode::*, EventKind::*, ModifyKind::*};\n\t\tgrouped_buckets\n\t\t\t.entry(match kind {\n\t\t\t\tModify(Data(_)) | Access(Close(Write)) => \"WRITTEN\",\n\t\t\t\tModify(Metadata(_)) => \"META_CHANGED\",\n\t\t\t\tRemove(_) => \"REMOVED\",\n\t\t\t\tCreate(_) => \"CREATED\",\n\t\t\t\tModify(Name(_)) => \"RENAMED\",\n\t\t\t\t_ => \"OTHERWISE_CHANGED\",\n\t\t\t})\n\t\t\t.or_insert_with(HashSet::new)\n\t\t\t.extend(paths.into_iter().map(|ref p| 
{\n\t\t\t\tcommon_path\n\t\t\t\t\t.as_ref()\n\t\t\t\t\t.and_then(|prefix| p.strip_prefix(prefix).ok())\n\t\t\t\t\t.map_or_else(\n\t\t\t\t\t\t|| p.clone().into_os_string(),\n\t\t\t\t\t\t|suffix| suffix.as_os_str().to_owned(),\n\t\t\t\t\t)\n\t\t\t}));\n\t}\n\n\tlet mut res: HashMap<&'static str, OsString> = grouped_buckets\n\t\t.into_iter()\n\t\t.map(|(kind, paths)| {\n\t\t\tlet mut joined =\n\t\t\t\tOsString::with_capacity(paths.iter().map(|p| p.len()).sum::<usize>() + paths.len());\n\n\t\t\tlet mut paths = paths.into_iter().collect::<Vec<_>>();\n\t\t\tpaths.sort();\n\t\t\tpaths.into_iter().enumerate().for_each(|(i, path)| {\n\t\t\t\tif i > 0 {\n\t\t\t\t\tjoined.push(PATH_SEPARATOR);\n\t\t\t\t}\n\t\t\t\tjoined.push(path);\n\t\t\t});\n\n\t\t\t(kind, joined)\n\t\t})\n\t\t.collect();\n\n\tif let Some(common_path) = common_path {\n\t\tres.insert(\"COMMON\", common_path.into_os_string());\n\t}\n\n\tres\n}\n"
  },
  {
    "path": "crates/lib/src/sources/fs.rs",
    "content": "//! Event source for changes to files and directories.\n\nuse std::{\n\tcollections::{HashMap, HashSet},\n\tfs::metadata,\n\tmem::take,\n\tsync::Arc,\n\ttime::Duration,\n};\n\nuse async_priority_channel as priority;\nuse normalize_path::NormalizePath;\nuse tokio::sync::mpsc;\nuse tracing::{debug, error, trace};\nuse watchexec_events::{Event, Priority, Source, Tag};\n\nuse crate::{\n\terror::{CriticalError, FsWatcherError, RuntimeError},\n\tConfig,\n};\n\n// re-export for compatibility, until next major version\npub use crate::WatchedPath;\n\n/// What kind of filesystem watcher to use.\n///\n/// For now only native and poll watchers are supported. In the future there may be additional\n/// watchers available on some platforms.\n#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]\n#[non_exhaustive]\npub enum Watcher {\n\t/// The Notify-recommended watcher on the platform.\n\t///\n\t/// For platforms Notify supports, that's a [native implementation][notify::RecommendedWatcher],\n\t/// for others it's polling with a default interval.\n\t#[default]\n\tNative,\n\n\t/// Notify’s [poll watcher][notify::PollWatcher] with a custom interval.\n\tPoll(Duration),\n}\n\nimpl Watcher {\n\tfn create(\n\t\tself,\n\t\tf: impl notify::EventHandler,\n\t) -> Result<Box<dyn notify::Watcher + Send>, CriticalError> {\n\t\tuse notify::{Config, Watcher as _};\n\n\t\tmatch self {\n\t\t\tSelf::Native => {\n\t\t\t\tnotify::RecommendedWatcher::new(f, Config::default()).map(|w| Box::new(w) as _)\n\t\t\t}\n\t\t\tSelf::Poll(delay) => {\n\t\t\t\tnotify::PollWatcher::new(f, Config::default().with_poll_interval(delay))\n\t\t\t\t\t.map(|w| Box::new(w) as _)\n\t\t\t}\n\t\t}\n\t\t.map_err(|err| CriticalError::FsWatcherInit {\n\t\t\tkind: self,\n\t\t\terr: if cfg!(target_os = \"linux\")\n\t\t\t\t&& (matches!(err.kind, notify::ErrorKind::MaxFilesWatch)\n\t\t\t\t\t|| matches!(err.kind, notify::ErrorKind::Io(ref ioerr) if ioerr.raw_os_error() == 
Some(28)))\n\t\t\t{\n\t\t\t\tFsWatcherError::TooManyWatches(err)\n\t\t\t} else if cfg!(target_os = \"linux\")\n\t\t\t\t&& matches!(err.kind, notify::ErrorKind::Io(ref ioerr) if ioerr.raw_os_error() == Some(24))\n\t\t\t{\n\t\t\t\tFsWatcherError::TooManyHandles(err)\n\t\t\t} else {\n\t\t\t\tFsWatcherError::Create(err)\n\t\t\t},\n\t\t})\n\t}\n}\n\n/// Launch the filesystem event worker.\n///\n/// While you can run several, you should only have one.\n///\n/// This only does a bare minimum of setup; to actually start the work, you need to set a non-empty\n/// pathset in the [`Config`].\n///\n/// Note that the paths emitted by the watcher are normalised. No guarantee is made about the\n/// implementation or output of that normalisation (it may change without notice).\n///\n/// # Examples\n///\n/// Direct usage:\n///\n/// ```no_run\n/// use async_priority_channel as priority;\n/// use tokio::sync::mpsc;\n/// use watchexec::{Config, sources::fs::worker};\n///\n/// #[tokio::main]\n/// async fn main() -> Result<(), Box<dyn std::error::Error>> {\n///     let (ev_s, _) = priority::bounded(1024);\n///     let (er_s, _) = mpsc::channel(64);\n///\n///     let config = Config::default();\n///     config.pathset([\".\"]);\n///\n///     worker(config.into(), er_s, ev_s).await?;\n///     Ok(())\n/// }\n/// ```\npub async fn worker(\n\tconfig: Arc<Config>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n) -> Result<(), CriticalError> {\n\tdebug!(\"launching filesystem worker\");\n\n\tlet mut watcher_type = Watcher::default();\n\tlet mut watcher = None;\n\tlet mut pathset = HashSet::new();\n\n\tlet mut config_watch = config.watch();\n\tloop {\n\t\tconfig_watch.next().await;\n\t\ttrace!(\"filesystem worker got a config change\");\n\n\t\tif config.pathset.get().is_empty() {\n\t\t\ttrace!(\n\t\t\t\t\"{}\",\n\t\t\t\tif pathset.is_empty() {\n\t\t\t\t\t\"no watched paths, no watcher needed\"\n\t\t\t\t} else {\n\t\t\t\t\t\"no more watched paths, dropping 
watcher\"\n\t\t\t\t}\n\t\t\t);\n\t\t\twatcher.take();\n\t\t\tpathset.clear();\n\t\t\tlet _ = config.fs_ready.send(());\n\t\t\tcontinue;\n\t\t}\n\n\t\t// now we know the watcher should be alive, so let's start it if it's not already:\n\n\t\tlet config_watcher = config.file_watcher.get();\n\t\tif watcher.is_none() || watcher_type != config_watcher {\n\t\t\tdebug!(kind=?config_watcher, \"creating new watcher\");\n\t\t\tlet n_errors = errors.clone();\n\t\t\tlet n_events = events.clone();\n\t\t\twatcher_type = config_watcher;\n\t\t\twatcher = config_watcher\n\t\t\t\t.create(move |nev: Result<notify::Event, notify::Error>| {\n\t\t\t\t\ttrace!(event = ?nev, \"receiving possible event from watcher\");\n\t\t\t\t\tif let Err(e) = process_event(nev, config_watcher, &n_events) {\n\t\t\t\t\t\tn_errors.try_send(e).ok();\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\t.map(Some)?;\n\t\t}\n\n\t\t// now let's calculate which paths we should add to the watch, and which we should drop:\n\n\t\tlet config_pathset = config.pathset.get();\n\t\ttracing::info!(?config_pathset, \"obtaining pathset\");\n\t\tlet (to_watch, to_drop) = if pathset.is_empty() {\n\t\t\t// if the current pathset is empty, we can take a shortcut\n\t\t\t(config_pathset, Vec::new())\n\t\t} else {\n\t\t\tlet mut to_watch = Vec::with_capacity(config_pathset.len());\n\t\t\tlet mut to_drop = Vec::with_capacity(pathset.len());\n\n\t\t\tfor path in &pathset {\n\t\t\t\tif !config_pathset.contains(path) {\n\t\t\t\t\tto_drop.push(path.clone()); // try dropping the clone?\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor path in config_pathset {\n\t\t\t\tif !pathset.contains(&path) {\n\t\t\t\t\tto_watch.push(path);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t(to_watch, to_drop)\n\t\t};\n\n\t\t// now apply it to the watcher\n\n\t\tlet Some(watcher) = watcher.as_mut() else {\n\t\t\tpanic!(\"BUG: watcher should exist at this point\");\n\t\t};\n\n\t\tdebug!(?to_watch, ?to_drop, \"applying changes to the watcher\");\n\n\t\tfor path in to_drop {\n\t\t\ttrace!(?path, \"removing 
path from the watcher\");\n\t\t\tif let Err(err) = watcher.unwatch(path.path.as_ref()) {\n\t\t\t\terror!(?err, \"notify unwatch() error\");\n\t\t\t\tfor e in notify_multi_path_errors(watcher_type, path, err, true) {\n\t\t\t\t\terrors.send(e).await?;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpathset.remove(&path);\n\t\t\t}\n\t\t}\n\n\t\tfor path in to_watch {\n\t\t\ttrace!(?path, \"adding path to the watcher\");\n\t\t\tif let Err(err) = watcher.watch(\n\t\t\t\tpath.path.as_ref(),\n\t\t\t\tif path.recursive {\n\t\t\t\t\tnotify::RecursiveMode::Recursive\n\t\t\t\t} else {\n\t\t\t\t\tnotify::RecursiveMode::NonRecursive\n\t\t\t\t},\n\t\t\t) {\n\t\t\t\terror!(?err, \"notify watch() error\");\n\t\t\t\tfor e in notify_multi_path_errors(watcher_type, path, err, false) {\n\t\t\t\t\terrors.send(e).await?;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpathset.insert(path);\n\t\t\t}\n\t\t}\n\n\t\tlet _ = config.fs_ready.send(());\n\t}\n}\n\nfn notify_multi_path_errors(\n\tkind: Watcher,\n\twatched_path: WatchedPath,\n\tmut err: notify::Error,\n\trm: bool,\n) -> Vec<RuntimeError> {\n\tlet mut paths = take(&mut err.paths);\n\tif paths.is_empty() {\n\t\tpaths.push(watched_path.into());\n\t}\n\n\tlet generic = err.to_string();\n\tlet mut err = Some(err);\n\n\tlet mut errs = Vec::with_capacity(paths.len());\n\tfor path in paths {\n\t\tlet e = err\n\t\t\t.take()\n\t\t\t.unwrap_or_else(|| notify::Error::generic(&generic))\n\t\t\t.add_path(path.clone());\n\n\t\terrs.push(RuntimeError::FsWatcher {\n\t\t\tkind,\n\t\t\terr: if rm {\n\t\t\t\tFsWatcherError::PathRemove { path, err: e }\n\t\t\t} else {\n\t\t\t\tFsWatcherError::PathAdd { path, err: e }\n\t\t\t},\n\t\t});\n\t}\n\n\terrs\n}\n\nfn process_event(\n\tnev: Result<notify::Event, notify::Error>,\n\tkind: Watcher,\n\tn_events: &priority::Sender<Event, Priority>,\n) -> Result<(), RuntimeError> {\n\tlet nev = nev.map_err(|err| RuntimeError::FsWatcher {\n\t\tkind,\n\t\terr: FsWatcherError::Event(err),\n\t})?;\n\n\tlet mut tags = 
Vec::with_capacity(4);\n\ttags.push(Tag::Source(Source::Filesystem));\n\ttags.push(Tag::FileEventKind(nev.kind));\n\n\tfor path in nev.paths {\n\t\t// possibly pull file_type from whatever notify (or the native driver) returns?\n\t\ttags.push(Tag::Path {\n\t\t\tfile_type: metadata(&path).ok().map(|m| m.file_type().into()),\n\t\t\tpath: path.normalize(),\n\t\t});\n\t}\n\n\tif let Some(pid) = nev.attrs.process_id() {\n\t\ttags.push(Tag::Process(pid));\n\t}\n\n\tlet mut metadata = HashMap::new();\n\n\tif let Some(uid) = nev.attrs.info() {\n\t\tmetadata.insert(\"file-event-info\".to_string(), vec![uid.to_string()]);\n\t}\n\n\tif let Some(src) = nev.attrs.source() {\n\t\tmetadata.insert(\"notify-backend\".to_string(), vec![src.to_string()]);\n\t}\n\n\tlet ev = Event { tags, metadata };\n\n\ttrace!(event = ?ev, \"processed notify event into watchexec event\");\n\tn_events\n\t\t.try_send(ev, Priority::Normal)\n\t\t.map_err(|err| RuntimeError::EventChannelTrySend {\n\t\t\tctx: \"fs watcher\",\n\t\t\terr,\n\t\t})?;\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/lib/src/sources/keyboard.rs",
    "content": "//! Event source for keyboard input and related events\nuse std::io::Read;\nuse std::sync::atomic::{AtomicBool, Ordering};\nuse std::sync::Arc;\n\nuse async_priority_channel as priority;\nuse tokio::{\n\tspawn,\n\tsync::{mpsc, oneshot},\n};\nuse tracing::trace;\nuse watchexec_events::{Event, KeyCode, Keyboard, Modifiers, Priority, Source, Tag};\n\nuse crate::{\n\terror::{CriticalError, RuntimeError},\n\tConfig,\n};\n\n/// Launch the keyboard event worker.\n///\n/// While you can run several, you should only have one.\n///\n/// Sends keyboard events via to the provided 'events' channel\npub async fn worker(\n\tconfig: Arc<Config>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n) -> Result<(), CriticalError> {\n\tlet mut send_close = None;\n\tlet mut config_watch = config.watch();\n\tloop {\n\t\tconfig_watch.next().await;\n\t\tlet want_keyboard = config.keyboard_events.get();\n\t\tmatch (want_keyboard, &send_close) {\n\t\t\t// if we want to watch stdin and we're not already watching it then spawn a task to watch it\n\t\t\t(true, None) => {\n\t\t\t\tlet (close_s, close_r) = oneshot::channel::<()>();\n\n\t\t\t\tsend_close = Some(close_s);\n\t\t\t\tspawn(watch_stdin(errors.clone(), events.clone(), close_r));\n\t\t\t}\n\t\t\t// if we don't want to watch stdin but we are already watching it then send a close signal to end\n\t\t\t// the watching\n\t\t\t(false, Some(_)) => {\n\t\t\t\t// ignore send error as if channel is closed watch is already gone\n\t\t\t\tsend_close\n\t\t\t\t\t.take()\n\t\t\t\t\t.expect(\"unreachable due to match\")\n\t\t\t\t\t.send(())\n\t\t\t\t\t.ok();\n\t\t\t}\n\t\t\t// otherwise no action is required\n\t\t\t_ => {}\n\t\t}\n\t}\n}\n\n#[cfg(unix)]\nmod raw_mode {\n\tuse std::os::fd::AsRawFd;\n\n\t/// Stored original termios to restore on drop.\n\tpub struct RawModeGuard {\n\t\tfd: i32,\n\t\toriginal: libc::termios,\n\t}\n\n\timpl RawModeGuard {\n\t\t/// Switch stdin to raw mode. 
Returns None if stdin is not a TTY.\n\t\tpub fn enter() -> Option<Self> {\n\t\t\tlet fd = std::io::stdin().as_raw_fd();\n\t\t\t// SAFETY: isatty, tcgetattr, cfmakeraw, and tcsetattr are POSIX standard\n\t\t\t// functions operating on a valid fd (stdin). We check return values before\n\t\t\t// proceeding. The original termios is saved and restored in Drop.\n\t\t\tunsafe {\n\t\t\t\tif libc::isatty(fd) == 0 {\n\t\t\t\t\treturn None;\n\t\t\t\t}\n\t\t\t\tlet mut original: libc::termios = std::mem::zeroed();\n\t\t\t\tif libc::tcgetattr(fd, &mut original) != 0 {\n\t\t\t\t\treturn None;\n\t\t\t\t}\n\t\t\t\tlet mut raw = original;\n\t\t\t\tlibc::cfmakeraw(&mut raw);\n\t\t\t\t// Re-enable output post-processing so \\n still maps to \\r\\n\n\t\t\t\traw.c_oflag |= libc::OPOST;\n\t\t\t\t// Non-blocking reads: return after 100ms if no input available.\n\t\t\t\t// This ensures the tokio blocking thread doesn't park forever,\n\t\t\t\t// allowing graceful shutdown when the close signal is received.\n\t\t\t\traw.c_cc[libc::VMIN] = 0;\n\t\t\t\traw.c_cc[libc::VTIME] = 1;\n\t\t\t\tif libc::tcsetattr(fd, libc::TCSANOW, &raw) != 0 {\n\t\t\t\t\treturn None;\n\t\t\t\t}\n\t\t\t\tSome(Self { fd, original })\n\t\t\t}\n\t\t}\n\t}\n\n\timpl Drop for RawModeGuard {\n\t\tfn drop(&mut self) {\n\t\t\t// SAFETY: restoring the original termios saved in enter() on the same fd.\n\t\t\tunsafe {\n\t\t\t\tlibc::tcsetattr(self.fd, libc::TCSANOW, &self.original);\n\t\t\t}\n\t\t}\n\t}\n}\n\n#[cfg(windows)]\nmod raw_mode {\n\tuse windows_sys::Win32::Foundation::{HANDLE, INVALID_HANDLE_VALUE};\n\tuse windows_sys::Win32::System::Console::{\n\t\tGetConsoleMode, GetStdHandle, SetConsoleMode, ENABLE_ECHO_INPUT, ENABLE_LINE_INPUT,\n\t\tENABLE_PROCESSED_INPUT, STD_INPUT_HANDLE,\n\t};\n\n\t/// Stored original console mode to restore on drop.\n\tpub struct RawModeGuard {\n\t\thandle: HANDLE,\n\t\toriginal_mode: u32,\n\t}\n\n\t// SAFETY: HANDLE is a process-global value (stdin) that is safe to use from any 
thread.\n\tunsafe impl Send for RawModeGuard {}\n\n\timpl RawModeGuard {\n\t\t/// Switch stdin to raw-like mode. Returns None if stdin is not a console.\n\t\tpub fn enter() -> Option<Self> {\n\t\t\t// SAFETY: GetStdHandle, GetConsoleMode, and SetConsoleMode are Windows Console\n\t\t\t// API functions. We check return values before proceeding. The handle is valid\n\t\t\t// for the lifetime of the process. The original mode is saved and restored in Drop.\n\t\t\tunsafe {\n\t\t\t\tlet handle = GetStdHandle(STD_INPUT_HANDLE);\n\t\t\t\tif handle == INVALID_HANDLE_VALUE || handle.is_null() {\n\t\t\t\t\treturn None;\n\t\t\t\t}\n\t\t\t\tlet mut original_mode: u32 = 0;\n\t\t\t\tif GetConsoleMode(handle, &mut original_mode) == 0 {\n\t\t\t\t\treturn None;\n\t\t\t\t}\n\t\t\t\t// Disable line input, echo, and Ctrl+C signal processing\n\t\t\t\tlet raw_mode = original_mode\n\t\t\t\t\t& !(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT);\n\t\t\t\tif SetConsoleMode(handle, raw_mode) == 0 {\n\t\t\t\t\treturn None;\n\t\t\t\t}\n\t\t\t\tSome(Self {\n\t\t\t\t\thandle,\n\t\t\t\t\toriginal_mode,\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n\n\timpl Drop for RawModeGuard {\n\t\tfn drop(&mut self) {\n\t\t\t// SAFETY: restoring the original console mode saved in enter() on the same handle.\n\t\t\tunsafe {\n\t\t\t\tSetConsoleMode(self.handle, self.original_mode);\n\t\t\t}\n\t\t}\n\t}\n}\n\nfn byte_to_keyboard(byte: u8) -> Option<Keyboard> {\n\tmatch byte {\n\t\t// Ctrl-C / Ctrl-D\n\t\t3 | 4 => Some(Keyboard::Eof),\n\t\t// Enter (byte 13, before Ctrl range to avoid overlap)\n\t\t13 => Some(Keyboard::Key {\n\t\t\tkey: KeyCode::Enter,\n\t\t\tmodifiers: Modifiers::default(),\n\t\t}),\n\t\t// Ctrl+letter (1-26 excluding 3,4,13 handled above)\n\t\tb @ 1..=26 => Some(Keyboard::Key {\n\t\t\tkey: KeyCode::Char((b + b'a' - 1) as char),\n\t\t\tmodifiers: Modifiers {\n\t\t\t\tctrl: true,\n\t\t\t\t..Default::default()\n\t\t\t},\n\t\t}),\n\t\t27 => Some(Keyboard::Key {\n\t\t\tkey: 
KeyCode::Escape,\n\t\t\tmodifiers: Modifiers::default(),\n\t\t}),\n\t\tb if char::from(b).is_ascii_graphic() || b == b' ' => Some(Keyboard::Key {\n\t\t\tkey: KeyCode::Char(char::from(b)),\n\t\t\tmodifiers: Modifiers::default(),\n\t\t}),\n\t\t_ => None,\n\t}\n}\n\nasync fn watch_stdin(\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n\tclose_r: oneshot::Receiver<()>,\n) -> Result<(), CriticalError> {\n\t// Use an AtomicBool to signal the blocking reader to stop.\n\t// This avoids tokio::io::stdin() which uses blocking threads that can't be\n\t// interrupted, causing the process to hang on shutdown (issue #1017).\n\tlet cancel = Arc::new(AtomicBool::new(false));\n\tlet cancel_clone = cancel.clone();\n\n\tlet (tx, mut rx) = mpsc::channel::<Result<Vec<u8>, ()>>(16);\n\n\t// Spawn a blocking task that reads stdin directly\n\ttokio::task::spawn_blocking(move || {\n\t\t#[cfg(any(unix, windows))]\n\t\tlet _raw_guard = raw_mode::RawModeGuard::enter();\n\n\t\tlet mut stdin = std::io::stdin().lock();\n\t\tlet mut buffer = [0u8; 10];\n\n\t\twhile !cancel_clone.load(Ordering::Relaxed) {\n\t\t\tmatch stdin.read(&mut buffer) {\n\t\t\t\tOk(0) => {\n\t\t\t\t\t// EOF or VTIME timeout with no data\n\t\t\t\t\t// With VMIN=0/VTIME=1, this is a timeout - just loop and check cancel\n\t\t\t\t\t#[cfg(any(unix, windows))]\n\t\t\t\t\tif _raw_guard.is_some() {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\t// Real EOF in non-raw mode\n\t\t\t\t\tlet _ = tx.blocking_send(Ok(vec![]));\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tOk(n) => {\n\t\t\t\t\tif tx.blocking_send(Ok(buffer[..n].to_vec())).is_err() {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tErr(_) => {\n\t\t\t\t\tlet _ = tx.blocking_send(Err(()));\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t});\n\n\t// Wait for either data from stdin or the close signal\n\ttokio::select! 
{\n\t\t_ = async {\n\t\t\t'read: while let Some(result) = rx.recv().await {\n\t\t\t\tmatch result {\n\t\t\t\t\tOk(bytes) if bytes.is_empty() => {\n\t\t\t\t\t\t// EOF\n\t\t\t\t\t\tlet _ = send_event(errors.clone(), events.clone(), Keyboard::Eof).await;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tOk(bytes) => {\n\t\t\t\t\t\tfor &byte in &bytes {\n\t\t\t\t\t\t\tif let Some(key) = byte_to_keyboard(byte) {\n\t\t\t\t\t\t\t\tlet is_eof = matches!(key, Keyboard::Eof);\n\t\t\t\t\t\t\t\tlet _ = send_event(errors.clone(), events.clone(), key).await;\n\t\t\t\t\t\t\t\tif is_eof {\n\t\t\t\t\t\t\t\t\tbreak 'read;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tErr(()) => break,\n\t\t\t\t}\n\t\t\t}\n\t\t} => {}\n\t\t_ = close_r => {}\n\t}\n\n\t// Always signal the blocking thread to stop when we exit\n\tcancel.store(true, Ordering::Relaxed);\n\n\tOk(())\n}\n\nasync fn send_event(\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n\tmsg: Keyboard,\n) -> Result<(), CriticalError> {\n\tlet tags = vec![Tag::Source(Source::Keyboard), Tag::Keyboard(msg)];\n\n\tlet event = Event {\n\t\ttags,\n\t\tmetadata: Default::default(),\n\t};\n\n\ttrace!(?event, \"processed keyboard input into event\");\n\tif let Err(err) = events.send(event, Priority::Normal).await {\n\t\terrors\n\t\t\t.send(RuntimeError::EventChannelSend {\n\t\t\t\tctx: \"keyboard\",\n\t\t\t\terr,\n\t\t\t})\n\t\t\t.await?;\n\t}\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/lib/src/sources/signal.rs",
    "content": "//! Event source for signals / notifications sent to the main process.\n\nuse std::sync::Arc;\n\nuse async_priority_channel as priority;\nuse tokio::{select, sync::mpsc};\nuse tracing::{debug, trace};\nuse watchexec_events::{Event, Priority, Source, Tag};\nuse watchexec_signals::Signal;\n\nuse crate::{\n\terror::{CriticalError, RuntimeError},\n\tConfig,\n};\n\n/// Launch the signal event worker.\n///\n/// While you _could_ run several (it won't panic), you **must** only have one (for correctness).\n/// This may be enforced later.\n///\n/// # Examples\n///\n/// Direct usage:\n///\n/// ```no_run\n/// use tokio::sync::mpsc;\n/// use async_priority_channel as priority;\n/// use watchexec::sources::signal::worker;\n///\n/// #[tokio::main]\n/// async fn main() -> Result<(), Box<dyn std::error::Error>> {\n///     let (ev_s, _) = priority::bounded(1024);\n///     let (er_s, _) = mpsc::channel(64);\n///\n///     worker(Default::default(), er_s, ev_s).await?;\n///     Ok(())\n/// }\n/// ```\npub async fn worker(\n\tconfig: Arc<Config>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n) -> Result<(), CriticalError> {\n\timp_worker(config, errors, events).await\n}\n\n#[cfg(unix)]\nasync fn imp_worker(\n\t_config: Arc<Config>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n) -> Result<(), CriticalError> {\n\tuse tokio::signal::unix::{signal, SignalKind};\n\n\tdebug!(\"launching unix signal worker\");\n\n\tmacro_rules! 
listen {\n    ($sig:ident) => {{\n        trace!(kind=%stringify!($sig), \"listening for unix signal\");\n        signal(SignalKind::$sig()).map_err(|err| CriticalError::IoError {\n        about: concat!(\"setting \", stringify!($sig), \" signal listener\"), err\n    })?\n    }}\n}\n\n\tlet mut s_hangup = listen!(hangup);\n\tlet mut s_interrupt = listen!(interrupt);\n\tlet mut s_quit = listen!(quit);\n\tlet mut s_terminate = listen!(terminate);\n\tlet mut s_user1 = listen!(user_defined1);\n\tlet mut s_user2 = listen!(user_defined2);\n\n\tloop {\n\t\tlet sig = select!(\n\t\t\t_ = s_hangup.recv() => Signal::Hangup,\n\t\t\t_ = s_interrupt.recv() => Signal::Interrupt,\n\t\t\t_ = s_quit.recv() => Signal::Quit,\n\t\t\t_ = s_terminate.recv() => Signal::Terminate,\n\t\t\t_ = s_user1.recv() => Signal::User1,\n\t\t\t_ = s_user2.recv() => Signal::User2,\n\t\t);\n\n\t\tdebug!(?sig, \"received unix signal\");\n\t\tsend_event(errors.clone(), events.clone(), sig).await?;\n\t}\n}\n\n#[cfg(windows)]\nasync fn imp_worker(\n\t_config: Arc<Config>,\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n) -> Result<(), CriticalError> {\n\tuse tokio::signal::windows::{ctrl_break, ctrl_c};\n\n\tdebug!(\"launching windows signal worker\");\n\n\tmacro_rules! 
listen {\n    ($sig:ident) => {{\n        trace!(kind=%stringify!($sig), \"listening for windows process notification\");\n        $sig().map_err(|err| CriticalError::IoError {\n            about: concat!(\"setting \", stringify!($sig), \" signal listener\"), err\n        })?\n    }}\n}\n\n\tlet mut sigint = listen!(ctrl_c);\n\tlet mut sigbreak = listen!(ctrl_break);\n\n\tloop {\n\t\tlet sig = select!(\n\t\t\t_ = sigint.recv() => Signal::Interrupt,\n\t\t\t_ = sigbreak.recv() => Signal::Terminate,\n\t\t);\n\n\t\tdebug!(?sig, \"received windows process notification\");\n\t\tsend_event(errors.clone(), events.clone(), sig).await?;\n\t}\n}\n\nasync fn send_event(\n\terrors: mpsc::Sender<RuntimeError>,\n\tevents: priority::Sender<Event, Priority>,\n\tsig: Signal,\n) -> Result<(), CriticalError> {\n\tlet tags = vec![\n\t\tTag::Source(if sig == Signal::Interrupt {\n\t\t\tSource::Keyboard\n\t\t} else {\n\t\t\tSource::Os\n\t\t}),\n\t\tTag::Signal(sig),\n\t];\n\n\tlet event = Event {\n\t\ttags,\n\t\tmetadata: Default::default(),\n\t};\n\n\ttrace!(?event, \"processed signal into event\");\n\tif let Err(err) = events\n\t\t.send(\n\t\t\tevent,\n\t\t\tmatch sig {\n\t\t\t\tSignal::Interrupt | Signal::Terminate => Priority::Urgent,\n\t\t\t\t_ => Priority::High,\n\t\t\t},\n\t\t)\n\t\t.await\n\t{\n\t\terrors\n\t\t\t.send(RuntimeError::EventChannelSend {\n\t\t\t\tctx: \"signals\",\n\t\t\t\terr,\n\t\t\t})\n\t\t\t.await?;\n\t}\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/lib/src/sources.rs",
    "content": "//! Sources of events.\n\npub mod fs;\npub mod keyboard;\npub mod signal;\n"
  },
  {
    "path": "crates/lib/src/watched_path.rs",
    "content": "use std::path::{Path, PathBuf};\n\n/// A path to watch.\n///\n/// Can be a recursive or non-recursive watch.\n#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub struct WatchedPath {\n\tpub(crate) path: PathBuf,\n\tpub(crate) recursive: bool,\n}\n\nimpl From<PathBuf> for WatchedPath {\n\tfn from(path: PathBuf) -> Self {\n\t\tSelf {\n\t\t\tpath,\n\t\t\trecursive: true,\n\t\t}\n\t}\n}\n\nimpl From<&str> for WatchedPath {\n\tfn from(path: &str) -> Self {\n\t\tSelf {\n\t\t\tpath: path.into(),\n\t\t\trecursive: true,\n\t\t}\n\t}\n}\n\nimpl From<String> for WatchedPath {\n\tfn from(path: String) -> Self {\n\t\tSelf {\n\t\t\tpath: path.into(),\n\t\t\trecursive: true,\n\t\t}\n\t}\n}\n\nimpl From<&Path> for WatchedPath {\n\tfn from(path: &Path) -> Self {\n\t\tSelf {\n\t\t\tpath: path.into(),\n\t\t\trecursive: true,\n\t\t}\n\t}\n}\n\nimpl From<WatchedPath> for PathBuf {\n\tfn from(path: WatchedPath) -> Self {\n\t\tpath.path\n\t}\n}\n\nimpl From<&WatchedPath> for PathBuf {\n\tfn from(path: &WatchedPath) -> Self {\n\t\tpath.path.clone()\n\t}\n}\n\nimpl AsRef<Path> for WatchedPath {\n\tfn as_ref(&self) -> &Path {\n\t\tself.path.as_ref()\n\t}\n}\n\nimpl WatchedPath {\n\t/// Create a new watched path, recursively descending into subdirectories.\n\tpub fn recursive(path: impl Into<PathBuf>) -> Self {\n\t\tSelf {\n\t\t\tpath: path.into(),\n\t\t\trecursive: true,\n\t\t}\n\t}\n\n\t/// Create a new watched path, not descending into subdirectories.\n\tpub fn non_recursive(path: impl Into<PathBuf>) -> Self {\n\t\tSelf {\n\t\t\tpath: path.into(),\n\t\t\trecursive: false,\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/lib/src/watchexec.rs",
    "content": "use std::{\n\tfmt,\n\tfuture::Future,\n\tsync::{Arc, OnceLock},\n};\n\nuse async_priority_channel as priority;\nuse atomic_take::AtomicTake;\nuse futures::TryFutureExt;\nuse miette::Diagnostic;\nuse tokio::{\n\tspawn,\n\tsync::{mpsc, Notify},\n\ttask::{JoinHandle, JoinSet},\n};\nuse tracing::{debug, error, trace};\nuse watchexec_events::{Event, Priority};\n\nuse crate::{\n\taction::{self, ActionHandler},\n\tchangeable::ChangeableFn,\n\terror::{CriticalError, RuntimeError},\n\tsources::{fs, keyboard, signal},\n\tConfig,\n};\n\n/// The main watchexec runtime.\n///\n/// All this really does is tie the pieces together in one convenient interface.\n///\n/// It creates the correct channels, spawns every available event sources, the action worker, the\n/// error hook, and provides an interface to change the runtime configuration during the runtime,\n/// inject synthetic events, and wait for graceful shutdown.\npub struct Watchexec {\n\t/// The configuration of this Watchexec instance.\n\t///\n\t/// Configuration can be changed at any time using the provided methods on [`Config`].\n\t///\n\t/// Treat this field as readonly: replacing it with a different instance of `Config` will not do\n\t/// anything except potentially lose you access to the actual Watchexec config. 
In normal use\n\t/// you'll have obtained `Watchexec` behind an `Arc` so that won't be an issue.\n\t///\n\t/// # Examples\n\t///\n\t/// Change the action handler:\n\t///\n\t/// ```no_run\n\t/// # use watchexec::Watchexec;\n\t/// let wx = Watchexec::default();\n\t/// wx.config.on_action(|mut action| {\n\t///     if action.signals().next().is_some() {\n\t///         action.quit();\n\t///     }\n\t///\n\t///     action\n\t/// });\n\t/// ```\n\t///\n\t/// Set paths to be watched:\n\t///\n\t/// ```no_run\n\t/// # use watchexec::Watchexec;\n\t/// let wx = Watchexec::new(|mut action| {\n\t///     if action.signals().next().is_some() {\n\t///         action.quit();\n\t///     } else {\n\t///         for event in action.events.iter() {\n\t///             println!(\"{event:?}\");\n\t///         }\n\t///     }\n\t///\n\t///     action\n\t/// }).unwrap();\n\t///\n\t/// wx.config.pathset([\".\"]);\n\t/// ```\n\tpub config: Arc<Config>,\n\tstart_lock: Arc<Notify>,\n\tevent_input: priority::Sender<Event, Priority>,\n\thandle: Arc<AtomicTake<JoinHandle<Result<(), CriticalError>>>>,\n}\n\nimpl Default for Watchexec {\n\t/// Instantiate with default config.\n\t///\n\t/// Note that this will panic if the constructor errors.\n\t///\n\t/// Prefer calling `new()` instead.\n\tfn default() -> Self {\n\t\tSelf::with_config(Default::default()).expect(\"Use Watchexec::new() to avoid this panic\")\n\t}\n}\n\nimpl fmt::Debug for Watchexec {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\tf.debug_struct(\"Watchexec\").finish_non_exhaustive()\n\t}\n}\n\nimpl Watchexec {\n\t/// Instantiates a new `Watchexec` runtime given an initial action handler.\n\t///\n\t/// Returns an [`Arc`] for convenience; use [`try_unwrap`][Arc::try_unwrap()] to get the value\n\t/// directly if needed, or use `new_with_config`.\n\t///\n\t/// Look at the [`Config`] documentation for more on the required action handler.\n\t/// Watchexec will subscribe to most signals sent to the process it runs in and 
send them, as\n\t/// [`Event`]s, to the action handler. At minimum, you should check for interrupt/ctrl-c events\n\t/// and call `action.quit()` in your handler, otherwise hitting ctrl-c will do nothing.\n\tpub fn new(\n\t\taction_handler: impl (Fn(ActionHandler) -> ActionHandler) + Send + Sync + 'static,\n\t) -> Result<Arc<Self>, CriticalError> {\n\t\tlet config = Config::default();\n\t\tconfig.on_action(action_handler);\n\t\tSelf::with_config(config).map(Arc::new)\n\t}\n\n\t/// Instantiates a new `Watchexec` runtime given an initial async action handler.\n\t///\n\t/// This is the same as [`new`](fn@Self::new) except the action handler is async.\n\tpub fn new_async(\n\t\taction_handler: impl (Fn(ActionHandler) -> Box<dyn Future<Output = ActionHandler> + Send + Sync>)\n\t\t\t+ Send\n\t\t\t+ Sync\n\t\t\t+ 'static,\n\t) -> Result<Arc<Self>, CriticalError> {\n\t\tlet config = Config::default();\n\t\tconfig.on_action_async(action_handler);\n\t\tSelf::with_config(config).map(Arc::new)\n\t}\n\n\t/// Instantiates a new `Watchexec` runtime with a config.\n\t///\n\t/// This is generally not needed: the config can be changed after instantiation (before and\n\t/// after _starting_ Watchexec with `main()`). 
The only time this should be used is to set the\n\t/// \"unchangeable\" configuration items for internal details like buffer sizes for queues, or to\n\t/// obtain Self unwrapped by an Arc like `new()` does.\n\tpub fn with_config(config: Config) -> Result<Self, CriticalError> {\n\t\tdebug!(?config, pid=%std::process::id(), version=%env!(\"CARGO_PKG_VERSION\"), \"initialising\");\n\t\tlet config = Arc::new(config);\n\t\tlet outer_config = config.clone();\n\n\t\tlet notify = Arc::new(Notify::new());\n\t\tlet start_lock = notify.clone();\n\n\t\tlet (ev_s, ev_r) =\n\t\t\tpriority::bounded(config.event_channel_size.try_into().unwrap_or(u64::MAX));\n\t\tlet event_input = ev_s.clone();\n\n\t\ttrace!(\"creating main task\");\n\t\tlet handle = spawn(async move {\n\t\t\ttrace!(\"waiting for start lock\");\n\t\t\tnotify.notified().await;\n\t\t\tdebug!(\"starting main task\");\n\n\t\t\tlet (er_s, er_r) = mpsc::channel(config.error_channel_size);\n\n\t\t\tlet mut tasks = JoinSet::new();\n\n\t\t\ttasks.spawn(action::worker(config.clone(), er_s.clone(), ev_r).map_ok(|()| \"action\"));\n\t\t\ttasks.spawn(fs::worker(config.clone(), er_s.clone(), ev_s.clone()).map_ok(|()| \"fs\"));\n\t\t\ttasks.spawn(\n\t\t\t\tsignal::worker(config.clone(), er_s.clone(), ev_s.clone()).map_ok(|()| \"signal\"),\n\t\t\t);\n\t\t\ttasks.spawn(\n\t\t\t\tkeyboard::worker(config.clone(), er_s.clone(), ev_s.clone())\n\t\t\t\t\t.map_ok(|()| \"keyboard\"),\n\t\t\t);\n\t\t\ttasks.spawn(error_hook(er_r, config.error_handler.clone()).map_ok(|()| \"error\"));\n\n\t\t\twhile let Some(Ok(res)) = tasks.join_next().await {\n\t\t\t\tmatch res {\n\t\t\t\t\tOk(\"action\") => {\n\t\t\t\t\t\tdebug!(\"action worker exited, ending watchexec\");\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tOk(task) => {\n\t\t\t\t\t\tdebug!(task, \"worker exited\");\n\t\t\t\t\t}\n\t\t\t\t\tErr(CriticalError::Exit) => {\n\t\t\t\t\t\ttrace!(\"got graceful exit request via critical error, erasing the error\");\n\t\t\t\t\t\t// Close event channel to 
signal worker task to stop\n\t\t\t\t\t\tev_s.close();\n\t\t\t\t\t}\n\t\t\t\t\tErr(e) => {\n\t\t\t\t\t\treturn Err(e);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tdebug!(\"main task graceful exit\");\n\t\t\ttasks.shutdown().await;\n\t\t\tOk(())\n\t\t});\n\n\t\ttrace!(\"done with setup\");\n\t\tOk(Self {\n\t\t\tconfig: outer_config,\n\t\t\tstart_lock,\n\t\t\tevent_input,\n\t\t\thandle: Arc::new(AtomicTake::new(handle)),\n\t\t})\n\t}\n\n\t/// Inputs an [`Event`] directly.\n\t///\n\t/// This can be useful for testing, for custom event sources, or for one-off action triggers\n\t/// (for example, on start).\n\t///\n\t/// Hint: use [`Event::default()`] to send an empty event (which won't be filtered).\n\tpub async fn send_event(&self, event: Event, priority: Priority) -> Result<(), CriticalError> {\n\t\tself.event_input.send(event, priority).await?;\n\t\tOk(())\n\t}\n\n\t/// Start watchexec and obtain the handle to its main task.\n\t///\n\t/// This must only be called once.\n\t///\n\t/// # Panics\n\t/// Panics if called twice.\n\tpub fn main(&self) -> JoinHandle<Result<(), CriticalError>> {\n\t\ttrace!(\"notifying start lock\");\n\t\tself.start_lock.notify_one();\n\n\t\tdebug!(\"handing over main task handle\");\n\t\tself.handle\n\t\t\t.take()\n\t\t\t.expect(\"Watchexec::main was called twice\")\n\t}\n}\n\nasync fn error_hook(\n\tmut errors: mpsc::Receiver<RuntimeError>,\n\thandler: ChangeableFn<ErrorHook, ()>,\n) -> Result<(), CriticalError> {\n\twhile let Some(err) = errors.recv().await {\n\t\tif matches!(err, RuntimeError::Exit) {\n\t\t\ttrace!(\"got graceful exit request via runtime error, upgrading to crit\");\n\t\t\treturn Err(CriticalError::Exit);\n\t\t}\n\n\t\terror!(%err, \"runtime error\");\n\t\tlet payload = ErrorHook::new(err);\n\t\tlet crit = payload.critical.clone();\n\t\thandler.call(payload);\n\t\tErrorHook::handle_crit(crit)?;\n\t}\n\n\tOk(())\n}\n\n/// The environment given to the error handler.\n///\n/// This deliberately does not implement Clone to make 
it hard to move it out of the handler, which\n/// you should not do.\n///\n/// The [`ErrorHook::critical()`] method should be used to send a [`CriticalError`], which will\n/// terminate watchexec. This is useful to e.g. upgrade certain errors to be fatal.\n///\n/// Note that returning errors from the error handler does not result in critical errors.\n#[derive(Debug)]\npub struct ErrorHook {\n\t/// The runtime error for which this handler was called.\n\tpub error: RuntimeError,\n\tcritical: Arc<OnceLock<CriticalError>>,\n}\n\nimpl ErrorHook {\n\tfn new(error: RuntimeError) -> Self {\n\t\tSelf {\n\t\t\terror,\n\t\t\tcritical: Default::default(),\n\t\t}\n\t}\n\n\tfn handle_crit(crit: Arc<OnceLock<CriticalError>>) -> Result<(), CriticalError> {\n\t\tmatch Arc::try_unwrap(crit) {\n\t\t\tErr(err) => {\n\t\t\t\terror!(?err, \"error handler hook has an outstanding ref\");\n\t\t\t\tOk(())\n\t\t\t}\n\t\t\tOk(crit) => crit.into_inner().map_or_else(\n\t\t\t\t|| Ok(()),\n\t\t\t\t|crit| {\n\t\t\t\t\tdebug!(%crit, \"error handler output a critical error\");\n\t\t\t\t\tErr(crit)\n\t\t\t\t},\n\t\t\t),\n\t\t}\n\t}\n\n\t/// Set a critical error to be emitted.\n\t///\n\t/// This takes `self` and `ErrorHook` is not `Clone`, so it's only possible to call it once.\n\t/// Regardless, if you _do_ manage to call it twice, it will do nothing beyond the first call.\n\tpub fn critical(self, critical: CriticalError) {\n\t\tself.critical.set(critical).ok();\n\t}\n\n\t/// Elevate the current runtime error to critical.\n\t///\n\t/// This is a shorthand method for `ErrorHook::critical(CriticalError::Elevated(error))`.\n\tpub fn elevate(self) {\n\t\tlet Self { error, critical } = self;\n\t\tcritical\n\t\t\t.set(CriticalError::Elevated {\n\t\t\t\thelp: error.help().map(|h| h.to_string()),\n\t\t\t\terr: error,\n\t\t\t})\n\t\t\t.ok();\n\t}\n}\n"
  },
  {
    "path": "crates/lib/tests/env_reporting.rs",
    "content": "use std::{collections::HashMap, ffi::OsString, path::MAIN_SEPARATOR};\n\nuse notify::event::CreateKind;\nuse watchexec::paths::summarise_events_to_env;\nuse watchexec_events::{filekind::*, Event, Tag};\n\n#[cfg(unix)]\nconst ENV_SEP: &str = \":\";\n#[cfg(not(unix))]\nconst ENV_SEP: &str = \";\";\n\nfn ospath(path: &str) -> OsString {\n\tlet root = std::fs::canonicalize(\".\").unwrap();\n\tif path.is_empty() {\n\t\troot\n\t} else {\n\t\troot.join(path)\n\t}\n\t.into()\n}\n\nfn event(path: &str, kind: FileEventKind) -> Event {\n\tEvent {\n\t\ttags: vec![\n\t\t\tTag::Path {\n\t\t\t\tpath: ospath(path).into(),\n\t\t\t\tfile_type: None,\n\t\t\t},\n\t\t\tTag::FileEventKind(kind),\n\t\t],\n\t\tmetadata: Default::default(),\n\t}\n}\n\n#[test]\nfn no_events_no_env() {\n\tlet events = Vec::<Event>::new();\n\tassert_eq!(summarise_events_to_env(&events), HashMap::new());\n}\n\n#[test]\nfn single_created() {\n\tlet events = vec![event(\"file.txt\", FileEventKind::Create(CreateKind::File))];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"CREATED\", OsString::from(\"file.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_meta() {\n\tlet events = vec![event(\n\t\t\"file.txt\",\n\t\tFileEventKind::Modify(ModifyKind::Metadata(MetadataKind::Any)),\n\t)];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"META_CHANGED\", OsString::from(\"file.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_removed() {\n\tlet events = vec![event(\"file.txt\", FileEventKind::Remove(RemoveKind::File))];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"REMOVED\", OsString::from(\"file.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_renamed() {\n\tlet events = 
vec![event(\n\t\t\"file.txt\",\n\t\tFileEventKind::Modify(ModifyKind::Name(RenameMode::Any)),\n\t)];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"RENAMED\", OsString::from(\"file.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_written() {\n\tlet events = vec![event(\n\t\t\"file.txt\",\n\t\tFileEventKind::Modify(ModifyKind::Data(DataChange::Any)),\n\t)];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"WRITTEN\", OsString::from(\"file.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_otherwise() {\n\tlet events = vec![event(\"file.txt\", FileEventKind::Any)];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"OTHERWISE_CHANGED\", OsString::from(\"file.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn all_types_once() {\n\tlet events = vec![\n\t\tevent(\"create.txt\", FileEventKind::Create(CreateKind::File)),\n\t\tevent(\n\t\t\t\"metadata.txt\",\n\t\t\tFileEventKind::Modify(ModifyKind::Metadata(MetadataKind::Any)),\n\t\t),\n\t\tevent(\"remove.txt\", FileEventKind::Remove(RemoveKind::File)),\n\t\tevent(\n\t\t\t\"rename.txt\",\n\t\t\tFileEventKind::Modify(ModifyKind::Name(RenameMode::Any)),\n\t\t),\n\t\tevent(\n\t\t\t\"modify.txt\",\n\t\t\tFileEventKind::Modify(ModifyKind::Data(DataChange::Any)),\n\t\t),\n\t\tevent(\"any.txt\", FileEventKind::Any),\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\"CREATED\", OsString::from(\"create.txt\")),\n\t\t\t(\"META_CHANGED\", OsString::from(\"metadata.txt\")),\n\t\t\t(\"REMOVED\", OsString::from(\"remove.txt\")),\n\t\t\t(\"RENAMED\", OsString::from(\"rename.txt\")),\n\t\t\t(\"WRITTEN\", OsString::from(\"modify.txt\")),\n\t\t\t(\"OTHERWISE_CHANGED\", OsString::from(\"any.txt\")),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_type_multipath() {\n\tlet events = 
vec![\n\t\tevent(\"root.txt\", FileEventKind::Create(CreateKind::File)),\n\t\tevent(\"sub/folder.txt\", FileEventKind::Create(CreateKind::File)),\n\t\tevent(\"dom/folder.txt\", FileEventKind::Create(CreateKind::File)),\n\t\tevent(\n\t\t\t\"deeper/sub/folder.txt\",\n\t\t\tFileEventKind::Create(CreateKind::File),\n\t\t),\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"CREATED\",\n\t\t\t\tOsString::from(\n\t\t\t\t\t[\n\t\t\t\t\t\tformat!(\"deeper{MAIN_SEPARATOR}sub{MAIN_SEPARATOR}folder.txt\"),\n\t\t\t\t\t\tformat!(\"dom{MAIN_SEPARATOR}folder.txt\"),\n\t\t\t\t\t\t\"root.txt\".to_string(),\n\t\t\t\t\t\tformat!(\"sub{MAIN_SEPARATOR}folder.txt\"),\n\t\t\t\t\t]\n\t\t\t\t\t.join(ENV_SEP)\n\t\t\t\t)\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn single_type_divergent_paths() {\n\tlet events = vec![\n\t\tevent(\"sub/folder.txt\", FileEventKind::Create(CreateKind::File)),\n\t\tevent(\"dom/folder.txt\", FileEventKind::Create(CreateKind::File)),\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"CREATED\",\n\t\t\t\tOsString::from(\n\t\t\t\t\t[\n\t\t\t\t\t\tformat!(\"dom{MAIN_SEPARATOR}folder.txt\"),\n\t\t\t\t\t\tformat!(\"sub{MAIN_SEPARATOR}folder.txt\"),\n\t\t\t\t\t]\n\t\t\t\t\t.join(ENV_SEP)\n\t\t\t\t)\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn multitype_multipath() {\n\tlet events = vec![\n\t\tevent(\"root.txt\", FileEventKind::Create(CreateKind::File)),\n\t\tevent(\"sibling.txt\", FileEventKind::Create(CreateKind::Any)),\n\t\tevent(\n\t\t\t\"sub/folder.txt\",\n\t\t\tFileEventKind::Modify(ModifyKind::Metadata(MetadataKind::Ownership)),\n\t\t),\n\t\tevent(\"dom/folder.txt\", FileEventKind::Remove(RemoveKind::Folder)),\n\t\tevent(\"deeper/sub/folder.txt\", 
FileEventKind::Other),\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"CREATED\",\n\t\t\t\tOsString::from([\"root.txt\", \"sibling.txt\"].join(ENV_SEP)),\n\t\t\t),\n\t\t\t(\n\t\t\t\t\"META_CHANGED\",\n\t\t\t\tOsString::from(format!(\"sub{MAIN_SEPARATOR}folder.txt\"))\n\t\t\t),\n\t\t\t(\n\t\t\t\t\"REMOVED\",\n\t\t\t\tOsString::from(format!(\"dom{MAIN_SEPARATOR}folder.txt\"))\n\t\t\t),\n\t\t\t(\n\t\t\t\t\"OTHERWISE_CHANGED\",\n\t\t\t\tOsString::from(format!(\n\t\t\t\t\t\"deeper{MAIN_SEPARATOR}sub{MAIN_SEPARATOR}folder.txt\"\n\t\t\t\t))\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn multiple_paths_in_one_event() {\n\tlet events = vec![Event {\n\t\ttags: vec![\n\t\t\tTag::Path {\n\t\t\t\tpath: ospath(\"one.txt\").into(),\n\t\t\t\tfile_type: None,\n\t\t\t},\n\t\t\tTag::Path {\n\t\t\t\tpath: ospath(\"two.txt\").into(),\n\t\t\t\tfile_type: None,\n\t\t\t},\n\t\t\tTag::FileEventKind(FileEventKind::Any),\n\t\t],\n\t\tmetadata: Default::default(),\n\t}];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"OTHERWISE_CHANGED\",\n\t\t\t\tOsString::from(String::new() + \"one.txt\" + ENV_SEP + \"two.txt\")\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn mixed_non_paths_events() {\n\tlet events = vec![\n\t\tevent(\"one.txt\", FileEventKind::Any),\n\t\tEvent {\n\t\t\ttags: vec![Tag::Process(1234)],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tevent(\"two.txt\", FileEventKind::Any),\n\t\tEvent {\n\t\t\ttags: vec![Tag::FileEventKind(FileEventKind::Any)],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"OTHERWISE_CHANGED\",\n\t\t\t\tOsString::from(String::new() + \"one.txt\" + ENV_SEP + \"two.txt\")\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn only_non_paths_events() {\n\tlet events = vec![\n\t\tEvent 
{\n\t\t\ttags: vec![Tag::Process(1234)],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t\tEvent {\n\t\t\ttags: vec![Tag::FileEventKind(FileEventKind::Any)],\n\t\t\tmetadata: Default::default(),\n\t\t},\n\t];\n\tassert_eq!(summarise_events_to_env(&events), HashMap::new());\n}\n\n#[test]\nfn multipath_is_sorted() {\n\tlet events = vec![\n\t\tevent(\"0123.txt\", FileEventKind::Any),\n\t\tevent(\"a.txt\", FileEventKind::Any),\n\t\tevent(\"b.txt\", FileEventKind::Any),\n\t\tevent(\"c.txt\", FileEventKind::Any),\n\t\tevent(\"ᄁ.txt\", FileEventKind::Any),\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"OTHERWISE_CHANGED\",\n\t\t\t\tOsString::from(\n\t\t\t\t\tString::new()\n\t\t\t\t\t\t+ \"0123.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"a.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"b.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"c.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"ᄁ.txt\"\n\t\t\t\t)\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n\n#[test]\nfn multipath_is_deduped() {\n\tlet events = vec![\n\t\tevent(\"0123.txt\", FileEventKind::Any),\n\t\tevent(\"0123.txt\", FileEventKind::Any),\n\t\tevent(\"a.txt\", FileEventKind::Any),\n\t\tevent(\"a.txt\", FileEventKind::Any),\n\t\tevent(\"b.txt\", FileEventKind::Any),\n\t\tevent(\"b.txt\", FileEventKind::Any),\n\t\tevent(\"c.txt\", FileEventKind::Any),\n\t\tevent(\"ᄁ.txt\", FileEventKind::Any),\n\t\tevent(\"ᄁ.txt\", FileEventKind::Any),\n\t];\n\tassert_eq!(\n\t\tsummarise_events_to_env(&events),\n\t\tHashMap::from([\n\t\t\t(\n\t\t\t\t\"OTHERWISE_CHANGED\",\n\t\t\t\tOsString::from(\n\t\t\t\t\tString::new()\n\t\t\t\t\t\t+ \"0123.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"a.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"b.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"c.txt\" + ENV_SEP\n\t\t\t\t\t\t+ \"ᄁ.txt\"\n\t\t\t\t)\n\t\t\t),\n\t\t\t(\"COMMON\", ospath(\"\")),\n\t\t])\n\t);\n}\n"
  },
  {
    "path": "crates/lib/tests/error_handler.rs",
    "content": "use std::time::Duration;\n\nuse miette::Result;\nuse tokio::time::sleep;\nuse watchexec::{ErrorHook, Watchexec};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n\ttracing_subscriber::fmt::init();\n\n\tlet wx = Watchexec::default();\n\twx.config.on_error(|err: ErrorHook| {\n\t\teprintln!(\"Watchexec Runtime Error: {}\", err.error);\n\t});\n\twx.main();\n\n\t// TODO: induce an error here\n\n\tsleep(Duration::from_secs(1)).await;\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/project-origins/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v1.4.2 (2025-05-15)\n\n## v1.4.1 (2025-02-09)\n\n## v1.4.0 (2024-04-28)\n\n- Add out-of-tree Git repositories (`.git` file instead of folder).\n\n## v1.3.0 (2024-01-01)\n\n- Remove `README.md` files from detection; those were causing too many false positives and were a weak signal anyway.\n- Add Node.js package manager lockfiles.\n\n## v1.2.1 (2023-11-26)\n\n- Deps: upgrade Tokio requirement to 1.33.0\n\n## v1.2.0 (2023-01-08)\n\n- Add `const` qualifier to `ProjectType::is_vcs` and `::is_soft`.\n- Use Tokio's canonicalize instead of dunce.\n- Add missing `Send` bound to `origins()` and `types()`.\n\n## v1.1.1 (2022-09-07)\n\n- Deps: update miette to 5.3.0\n\n## v1.1.0 (2022-08-24)\n\n- Add support for Go.\n- Add support for Zig.\n- Add `Pipfile` support for Pip.\n- Add detection of `CONTRIBUTING.md`.\n- Document what causes the detection of each project type.\n\n## v1.0.0 (2022-06-16)\n\n- Initial release as a separate crate.\n"
  },
  {
    "path": "crates/project-origins/Cargo.toml",
    "content": "[package]\nname = \"project-origins\"\nversion = \"1.4.2\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0\"\ndescription = \"Resolve project origins and kinds from a path\"\nkeywords = [\"project\", \"origin\", \"root\", \"git\"]\n\ndocumentation = \"https://docs.rs/project-origins\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.58.0\"\nedition = \"2021\"\n\n[dependencies]\nfutures = \"0.3.29\"\ntokio = { version = \"1.33.0\", features = [\"fs\"] }\ntokio-stream = { version = \"0.1.9\", features = [\"fs\"] }\n\n[dev-dependencies]\nmiette = \"7.2.0\"\ntracing-subscriber = \"0.3.11\"\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\n"
  },
  {
    "path": "crates/project-origins/README.md",
    "content": "[![Crates.io page](https://badgen.net/crates/v/project-origins)](https://crates.io/crates/project-origins)\n[![API Docs](https://docs.rs/project-origins/badge.svg)][docs]\n[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]\n[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)\n\n# Project origins\n\n_Resolve project origins and kinds from a path._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license].\n- Status: maintained.\n\n[docs]: https://docs.rs/project-origins\n[license]: ../../LICENSE\n"
  },
  {
    "path": "crates/project-origins/examples/find-origins.rs",
    "content": "use std::env::args;\n\nuse miette::{IntoDiagnostic, Result};\nuse project_origins::origins;\n\n// Run with: `cargo run --example find-origins [PATH]`\n#[tokio::main]\nasync fn main() -> Result<()> {\n\ttracing_subscriber::fmt::init();\n\n\tlet first_arg = args().nth(1).unwrap_or_else(|| \".\".to_string());\n\tlet path = tokio::fs::canonicalize(first_arg).await.into_diagnostic()?;\n\n\tfor origin in origins(&path).await {\n\t\tprintln!(\"{}\", origin.display());\n\t}\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/project-origins/release.toml",
    "content": "pre-release-commit-message = \"release: project-origins v{{version}}\"\ntag-prefix = \"project-origins-\"\ntag-message = \"project-origins {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/project-origins/src/lib.rs",
    "content": "//! Resolve project origins and kinds from a path.\n//!\n//! This crate originated in [Watchexec](https://docs.rs/watchexec): it is used to resolve where a\n//! project's origin (or root) is, starting either at that origin, or within a subdirectory of it.\n//!\n//! This crate also provides the kind of project it is, and defines two categories within this:\n//! version control systems, and software development environments.\n//!\n//! As it is possible to find several project origins, of different or similar kinds, from a given\n//! directory and walking up, [`origins`] returns a set, rather than a single path. Determining\n//! which of these is the \"one true origin\" (if necessary) is left to the caller.\n\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n\nuse std::{\n\tcollections::{HashMap, HashSet},\n\tfs::FileType,\n\tpath::{Path, PathBuf},\n};\n\nuse futures::StreamExt;\nuse tokio::fs::read_dir;\nuse tokio_stream::wrappers::ReadDirStream;\n\n/// Project types recognised by watchexec.\n///\n/// There are two kinds of projects: VCS and software suite. The latter is more characterised by\n/// what package manager or build system is in use. The enum is marked non-exhaustive as more types\n/// can get added in the future.\n///\n/// Do not rely on the ordering or value (e.g. with transmute) of the variants.\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\n#[non_exhaustive]\npub enum ProjectType {\n\t/// VCS: [Bazaar](https://bazaar.canonical.com/).\n\t///\n\t/// Detects when a `.bzr` folder or a `.bzrignore` file is present. 
Bazaar does not support (at\n\t/// writing, anyway) ignore files deeper than the repository origin, so this should not\n\t/// false-positive.\n\tBazaar,\n\n\t/// VCS: [Darcs](http://darcs.net/).\n\t///\n\t/// Detects when a `_darcs` folder is present.\n\tDarcs,\n\n\t/// VCS: [Fossil](https://www.fossil-scm.org/).\n\t///\n\t/// Detects when a `.fossil-settings` folder is present.\n\tFossil,\n\n\t/// VCS: [Git](https://git-scm.com/).\n\t///\n\t/// Detects when a `.git` file or folder is present, or any of the files `.gitattributes` or\n\t/// `.gitmodules`. Does _not_ check or return from the presence of `.gitignore` files, as Git\n\t/// supports nested ignores, and that would result in false-positives.\n\tGit,\n\n\t/// VCS: [Mercurial](https://www.mercurial-scm.org/).\n\t///\n\t/// Detects when a `.hg` folder is present, or any of the files `.hgignore` or `.hgtags`.\n\t/// Mercurial does not support (at writing, anyway) ignore files deeper than the repository\n\t/// origin, so this should not false-positive.\n\tMercurial,\n\n\t/// VCS: [Pijul](https://pijul.org/).\n\t///\n\t/// This is not detected at the moment.\n\tPijul,\n\n\t/// VCS: [Subversion](https://subversion.apache.org) (aka SVN).\n\t///\n\t/// Detects when a `.svn` folder is present.\n\tSubversion,\n\n\t/// Soft: [Ruby](https://www.ruby-lang.org/)’s [Bundler](https://bundler.io/).\n\t///\n\t/// Detects when a `Gemfile` file is present.\n\tBundler,\n\n\t/// Soft: the [C programming language](https://en.wikipedia.org/wiki/C_(programming_language)).\n\t///\n\t/// Detects when a `.ctags` file is present.\n\tC,\n\n\t/// Soft: [Rust](https://www.rust-lang.org/)’s [Cargo](https://doc.rust-lang.org/cargo/).\n\t///\n\t/// Detects Cargo workspaces and Cargo crates through the presence of a `Cargo.toml` file.\n\tCargo,\n\n\t/// Soft: the [Docker](https://www.docker.com/) container runtime.\n\t///\n\t/// Detects when a `Dockerfile` file is present.\n\tDocker,\n\n\t/// Soft: the [Elixir](https://elixir-lang.org/) 
language.\n\t///\n\t/// Detects when a `mix.exs` file is present.\n\tElixir,\n\n\t/// Soft: the [Go](https://golang.org) language.\n\t///\n\t/// Detects when a `go.mod` or `go.sum` file is present.\n\tGo,\n\n\t/// Soft: [Java](https://www.java.com/)’s [Gradle](https://gradle.org/).\n\t///\n\t/// Detects when a `build.gradle` file is present.\n\tGradle,\n\n\t/// Soft: [EcmaScript](https://www.ecmascript.org/) (aka JavaScript).\n\t///\n\t/// Detects when a `package.json` or `cgmanifest.json` file is present.\n\t///\n\t/// This is a catch-all for all `package.json`-based projects, and does not differentiate\n\t/// between NPM, Yarn, PNPM, Node, browser, Deno, Bun, etc.\n\tJavaScript,\n\n\t/// Soft: [Clojure](https://clojure.org/)’s [Leiningen](https://leiningen.org/).\n\t///\n\t/// Detects when a `project.clj` file is present.\n\tLeiningen,\n\n\t/// Soft: [Java](https://www.java.com/)’s [Maven](https://maven.apache.org/).\n\t///\n\t/// Detects when a `pom.xml` file is present.\n\tMaven,\n\n\t/// Soft: the [Perl](https://www.perl.org/) language.\n\t///\n\t/// Detects when a `.perltidyrc` or `Makefile.PL` file is present.\n\tPerl,\n\n\t/// Soft: the [PHP](https://www.php.net/) language.\n\t///\n\t/// Detects when a `composer.json` file is present.\n\tPHP,\n\n\t/// Soft: [Python](https://www.python.org/)’s [Pip](https://pip.pypa.io/).\n\t///\n\t/// Detects when a `requirements.txt` or `Pipfile` file is present.\n\tPip,\n\n\t/// Soft: the [V](https://vlang.io/) language.\n\t///\n\t/// Detects when a `v.mod` file is present.\n\tV,\n\n\t/// Soft: the [Zig](https://ziglang.org/) language.\n\t///\n\t/// Detects when a `build.zig` file is present.\n\tZig,\n}\n\nimpl ProjectType {\n\t/// Returns true if the project type is a VCS.\n\t#[must_use]\n\tpub const fn is_vcs(self) -> bool {\n\t\tmatches!(\n\t\t\tself,\n\t\t\tSelf::Bazaar\n\t\t\t\t| Self::Darcs\n\t\t\t\t| Self::Fossil\n\t\t\t\t| Self::Git | Self::Mercurial\n\t\t\t\t| Self::Pijul\n\t\t\t\t| 
Self::Subversion\n\t\t)\n\t}\n\n\t/// Returns true if the project type is a software suite.\n\t#[must_use]\n\tpub const fn is_soft(self) -> bool {\n\t\tmatches!(\n\t\t\tself,\n\t\t\tSelf::Bundler\n\t\t\t\t| Self::C | Self::Cargo\n\t\t\t\t| Self::Docker\n\t\t\t\t| Self::Elixir\n\t\t\t\t| Self::Go\n\t\t\t\t| Self::Gradle\n\t\t\t\t| Self::JavaScript\n\t\t\t\t| Self::Leiningen\n\t\t\t\t| Self::Maven\n\t\t\t\t| Self::Perl | Self::PHP\n\t\t\t\t| Self::Pip | Self::V\n\t\t\t\t| Self::Zig\n\t\t)\n\t}\n}\n\n/// Traverses the parents of the given path and returns _all_ that are project origins.\n///\n/// This checks for the presence of a wide range of files and directories that are likely to be\n/// present and indicative of the root or origin path of a project. It's entirely possible to have\n/// multiple such origins show up: for example, a member of a Cargo workspace will list both the\n/// member project and the workspace root as origins.\n///\n/// This looks at a wider variety of files than the [`types`] function does: something can be\n/// detected as an origin but not be able to match to any particular [`ProjectType`].\npub async fn origins(path: impl AsRef<Path> + Send) -> HashSet<PathBuf> {\n\tfn check_list(list: &DirList) -> bool {\n\t\tif list.is_empty() {\n\t\t\treturn 
false;\n\t\t}\n\n\t\t[\n\t\t\tlist.has_dir(\"_darcs\"),\n\t\t\tlist.has_dir(\".bzr\"),\n\t\t\tlist.has_dir(\".fossil-settings\"),\n\t\t\tlist.has_dir(\".git\"),\n\t\t\tlist.has_dir(\".github\"),\n\t\t\tlist.has_dir(\".hg\"),\n\t\t\tlist.has_dir(\".svn\"),\n\t\t\tlist.has_file(\".asf.yaml\"),\n\t\t\tlist.has_file(\".bzrignore\"),\n\t\t\tlist.has_file(\".codecov.yml\"),\n\t\t\tlist.has_file(\".ctags\"),\n\t\t\tlist.has_file(\".editorconfig\"),\n\t\t\tlist.has_file(\".git\"),\n\t\t\tlist.has_file(\".gitattributes\"),\n\t\t\tlist.has_file(\".gitmodules\"),\n\t\t\tlist.has_file(\".hgignore\"),\n\t\t\tlist.has_file(\".hgtags\"),\n\t\t\tlist.has_file(\".perltidyrc\"),\n\t\t\tlist.has_file(\".travis.yml\"),\n\t\t\tlist.has_file(\"appveyor.yml\"),\n\t\t\tlist.has_file(\"build.gradle\"),\n\t\t\tlist.has_file(\"build.properties\"),\n\t\t\tlist.has_file(\"build.xml\"),\n\t\t\tlist.has_file(\"Cargo.toml\"),\n\t\t\tlist.has_file(\"Cargo.lock\"),\n\t\t\tlist.has_file(\"cgmanifest.json\"),\n\t\t\tlist.has_file(\"CMakeLists.txt\"),\n\t\t\tlist.has_file(\"composer.json\"),\n\t\t\tlist.has_file(\"COPYING\"),\n\t\t\tlist.has_file(\"docker-compose.yml\"),\n\t\t\tlist.has_file(\"Dockerfile\"),\n\t\t\tlist.has_file(\"Gemfile\"),\n\t\t\tlist.has_file(\"LICENSE.txt\"),\n\t\t\tlist.has_file(\"LICENSE\"),\n\t\t\tlist.has_file(\"Makefile.am\"),\n\t\t\tlist.has_file(\"Makefile.pl\"),\n\t\t\tlist.has_file(\"Makefile.PL\"),\n\t\t\tlist.has_file(\"Makefile\"),\n\t\t\tlist.has_file(\"mix.exs\"),\n\t\t\tlist.has_file(\"moonshine-dependencies.xml\"),\n\t\t\tlist.has_file(\"package.json\"),\n\t\t\tlist.has_file(\"package-lock.json\"),\n\t\t\tlist.has_file(\"pnpm-lock.yaml\"),\n\t\t\tlist.has_file(\"yarn.lock\"),\n\t\t\tlist.has_file(\"pom.xml\"),\n\t\t\tlist.has_file(\"project.clj\"),\n\t\t\tlist.has_file(\"requirements.txt\"),\n\t\t\tlist.has_file(\"v.mod\"),\n\t\t\tlist.has_file(\"CONTRIBUTING.md\"),\n\t\t\tlist.has_file(\"go.mod\"),\n\t\t\tlist.has_file(\"go.sum\"),\n\t\t\tlist.has_file(\"Pipfile\"
),\n\t\t\tlist.has_file(\"build.zig\"),\n\t\t]\n\t\t.into_iter()\n\t\t.any(|f| f)\n\t}\n\n\tlet mut origins = HashSet::new();\n\n\tlet path = path.as_ref();\n\tlet mut current = path;\n\tif check_list(&DirList::obtain(current).await) {\n\t\torigins.insert(current.to_owned());\n\t}\n\n\twhile let Some(parent) = current.parent() {\n\t\tcurrent = parent;\n\t\tif check_list(&DirList::obtain(current).await) {\n\t\t\torigins.insert(current.to_owned());\n\t\t}\n\t}\n\n\torigins\n}\n\n/// Returns all project types detected at this given origin.\n///\n/// This should be called with a result of [`origins()`], or a project origin if already known; it\n/// will not find the origin itself.\n///\n/// The returned list may be empty.\n///\n/// Note that this only detects project types listed in the [`ProjectType`] enum, and may not detect\n/// anything for some paths returned by [`origins()`].\npub async fn types(path: impl AsRef<Path> + Send) -> HashSet<ProjectType> {\n\tlet path = path.as_ref();\n\tlet list = DirList::obtain(path).await;\n\t[\n\t\tlist.if_has_dir(\"_darcs\", ProjectType::Darcs),\n\t\tlist.if_has_dir(\".bzr\", ProjectType::Bazaar),\n\t\tlist.if_has_dir(\".fossil-settings\", ProjectType::Fossil),\n\t\tlist.if_has_dir(\".git\", ProjectType::Git),\n\t\tlist.if_has_dir(\".hg\", ProjectType::Mercurial),\n\t\tlist.if_has_dir(\".svn\", ProjectType::Subversion),\n\t\tlist.if_has_file(\".bzrignore\", ProjectType::Bazaar),\n\t\tlist.if_has_file(\".ctags\", ProjectType::C),\n\t\tlist.if_has_file(\".git\", ProjectType::Git),\n\t\tlist.if_has_file(\".gitattributes\", ProjectType::Git),\n\t\tlist.if_has_file(\".gitmodules\", ProjectType::Git),\n\t\tlist.if_has_file(\".hgignore\", ProjectType::Mercurial),\n\t\tlist.if_has_file(\".hgtags\", ProjectType::Mercurial),\n\t\tlist.if_has_file(\".perltidyrc\", ProjectType::Perl),\n\t\tlist.if_has_file(\"build.gradle\", ProjectType::Gradle),\n\t\tlist.if_has_file(\"Cargo.toml\", 
ProjectType::Cargo),\n\t\tlist.if_has_file(\"cgmanifest.json\", ProjectType::JavaScript),\n\t\tlist.if_has_file(\"composer.json\", ProjectType::PHP),\n\t\tlist.if_has_file(\"Dockerfile\", ProjectType::Docker),\n\t\tlist.if_has_file(\"Gemfile\", ProjectType::Bundler),\n\t\tlist.if_has_file(\"Makefile.PL\", ProjectType::Perl),\n\t\tlist.if_has_file(\"mix.exs\", ProjectType::Elixir),\n\t\tlist.if_has_file(\"package.json\", ProjectType::JavaScript),\n\t\tlist.if_has_file(\"pom.xml\", ProjectType::Maven),\n\t\tlist.if_has_file(\"project.clj\", ProjectType::Leiningen),\n\t\tlist.if_has_file(\"requirements.txt\", ProjectType::Pip),\n\t\tlist.if_has_file(\"v.mod\", ProjectType::V),\n\t\tlist.if_has_file(\"go.mod\", ProjectType::Go),\n\t\tlist.if_has_file(\"go.sum\", ProjectType::Go),\n\t\tlist.if_has_file(\"Pipfile\", ProjectType::Pip),\n\t\tlist.if_has_file(\"build.zig\", ProjectType::Zig),\n\t]\n\t.into_iter()\n\t.flatten()\n\t.collect()\n}\n\n#[derive(Debug, Default)]\nstruct DirList(HashMap<PathBuf, FileType>);\nimpl DirList {\n\tasync fn obtain(path: &Path) -> Self {\n\t\tif let Ok(s) = read_dir(path).await {\n\t\t\tSelf(\n\t\t\t\tReadDirStream::new(s)\n\t\t\t\t\t.filter_map(|entry| async move {\n\t\t\t\t\t\tmatch entry {\n\t\t\t\t\t\t\tErr(_) => None,\n\t\t\t\t\t\t\tOk(entry) => {\n\t\t\t\t\t\t\t\tif let (Ok(path), Ok(file_type)) =\n\t\t\t\t\t\t\t\t\t(entry.path().strip_prefix(path), entry.file_type().await)\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tSome((path.to_owned(), file_type))\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tNone\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t\t.collect::<HashMap<_, _>>()\n\t\t\t\t\t.await,\n\t\t\t)\n\t\t} else {\n\t\t\tSelf::default()\n\t\t}\n\t}\n\n\t#[inline]\n\tfn is_empty(&self) -> bool {\n\t\tself.0.is_empty()\n\t}\n\n\t#[inline]\n\tfn has_file(&self, name: impl AsRef<Path>) -> bool {\n\t\tlet name = name.as_ref();\n\t\tself.0.get(name).map_or(false, 
std::fs::FileType::is_file)\n\t}\n\n\t#[inline]\n\tfn has_dir(&self, name: impl AsRef<Path>) -> bool {\n\t\tlet name = name.as_ref();\n\t\tself.0.get(name).map_or(false, std::fs::FileType::is_dir)\n\t}\n\n\t#[inline]\n\tfn if_has_file(&self, name: impl AsRef<Path>, project: ProjectType) -> Option<ProjectType> {\n\t\tif self.has_file(name) {\n\t\t\tSome(project)\n\t\t} else {\n\t\t\tNone\n\t\t}\n\t}\n\n\t#[inline]\n\tfn if_has_dir(&self, name: impl AsRef<Path>, project: ProjectType) -> Option<ProjectType> {\n\t\tif self.has_dir(name) {\n\t\t\tSome(project)\n\t\t} else {\n\t\t\tNone\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/signals/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v5.0.1 (2026-01-20)\n\n## v5.0.0 (2025-05-15)\n\n- Deps: nix 0.30\n\n## v4.0.1 (2025-02-09)\n\n## v4.0.0 (2024-10-14)\n\n - Deps: nix 0.29\n\n## v3.0.0 (2024-04-20)\n\n- Deps: miette 7\n- Deps: nix 0.28\n\n## v2.1.0 (2023-12-09)\n\n- Derive `Hash` for `Signal`.\n\n## v2.0.0 (2023-11-29)\n\n- Deps: upgrade nix to 0.27\n\n## v1.0.1 (2023-11-26)\n\nSame as 2.0.0, but yanked.\n\n## v1.0.0 (2023-03-18)\n\n- Split off new `watchexec-signals` crate (this one), to have a lightweight library that can parse\n  and represent signals as handled by Watchexec.\n"
  },
  {
    "path": "crates/signals/Cargo.toml",
    "content": "[package]\nname = \"watchexec-signals\"\nversion = \"5.0.1\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0 OR MIT\"\ndescription = \"Watchexec's signal types\"\nkeywords = [\"watchexec\", \"signal\"]\n\ndocumentation = \"https://docs.rs/watchexec-signals\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.61.0\"\nedition = \"2021\"\n\n[dependencies.miette]\nversion = \"7.2.0\"\noptional = true\n\n[dependencies.thiserror]\nversion = \"2.0.11\"\noptional = true\n\n[dependencies.serde]\nversion = \"1.0.183\"\noptional = true\nfeatures = [\"derive\"]\n\n[target.'cfg(unix)'.dependencies.nix]\nversion = \"0.30.1\"\nfeatures = [\"signal\"]\n\n[features]\ndefault = [\"fromstr\", \"miette\"]\nfromstr = [\"dep:thiserror\"]\nmiette = [\"dep:miette\"]\nserde = [\"dep:serde\"]\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\nneedless_doctest_main = \"allow\"\n"
  },
  {
    "path": "crates/signals/README.md",
    "content": "# watchexec-signals\n\n_Watchexec's signal type._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license] or [MIT](https://passcod.mit-license.org).\n- Status: maintained.\n\n[docs]: https://docs.rs/watchexec-signals\n[license]: ../../LICENSE\n\n```rust\nuse std::str::FromStr;\nuse watchexec_signals::Signal;\n\nfn main() {\n    assert_eq!(Signal::from_str(\"SIGINT\").unwrap(), Signal::Interrupt);\n}\n```\n\n## Features\n\n- `serde`: enables serde support.\n- `fromstr`: enables `FromStr` support (default).\n- `miette`: enables miette (rich diagnostics) support (default).\n"
  },
  {
    "path": "crates/signals/release.toml",
    "content": "pre-release-commit-message = \"release: signals v{{version}}\"\ntag-prefix = \"watchexec-signals-\"\ntag-message = \"watchexec-signals {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/signals/src/lib.rs",
    "content": "#![doc = include_str!(\"../README.md\")]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n// thiserror's macro generates code that triggers this lint spuriously\n#![allow(unused_assignments)]\n\nuse std::fmt;\n\n#[cfg(feature = \"fromstr\")]\nuse std::str::FromStr;\n\n#[cfg(unix)]\nuse nix::sys::signal::Signal as NixSignal;\n\n/// A notification (signals or Windows control events) sent to a process.\n///\n/// This signal type in Watchexec is used for any of:\n/// - signals sent to the main process by some external actor,\n/// - signals received from a sub process by the main process,\n/// - signals sent to a sub process by Watchexec.\n///\n/// On Windows, only some signals are supported, as described. Others will be ignored.\n///\n/// On Unix, there are several \"first-class\" signals which have their own variants, and a generic\n/// [`Custom`][Signal::Custom] variant which can be used to send arbitrary signals.\n#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]\n#[non_exhaustive]\n#[cfg_attr(feature = \"serde\", derive(serde::Serialize, serde::Deserialize))]\n#[cfg_attr(\n\tfeature = \"serde\",\n\tserde(\n\t\tfrom = \"serde_support::SerdeSignal\",\n\t\tinto = \"serde_support::SerdeSignal\"\n\t)\n)]\npub enum Signal {\n\t/// Indicate that the terminal is disconnected.\n\t///\n\t/// On Unix, this is `SIGHUP`. On Windows, this is ignored for now but may be supported in the\n\t/// future (see [#219](https://github.com/watchexec/watchexec/issues/219)).\n\t///\n\t/// Despite its nominal purpose, on Unix this signal is often used to reload configuration files.\n\tHangup,\n\n\t/// Indicate to the kernel that the process should stop.\n\t///\n\t/// On Unix, this is `SIGKILL`. On Windows, this is `TerminateProcess`.\n\t///\n\t/// This signal is not handled by the process, but directly by the kernel, and thus cannot be\n\t/// intercepted. 
Subprocesses may exit in inconsistent states.\n\tForceStop,\n\n\t/// Indicate that the process should stop.\n\t///\n\t/// On Unix, this is `SIGINT`. On Windows, this is ignored for now but may be supported in the\n\t/// future (see [#219](https://github.com/watchexec/watchexec/issues/219)).\n\t///\n\t/// This signal generally indicates an action taken by the user, so it may be handled\n\t/// differently than a termination.\n\tInterrupt,\n\n\t/// Indicate that the process is to stop, the kernel will then dump its core.\n\t///\n\t/// On Unix, this is `SIGQUIT`. On Windows, it is ignored.\n\t///\n\t/// This is rarely used.\n\tQuit,\n\n\t/// Indicate that the process should stop.\n\t///\n\t/// On Unix, this is `SIGTERM`. On Windows, this is ignored for now but may be supported in the\n\t/// future (see [#219](https://github.com/watchexec/watchexec/issues/219)).\n\t///\n\t/// On Unix, this signal generally indicates an action taken by the system, so it may be handled\n\t/// differently than an interruption.\n\tTerminate,\n\n\t/// Indicate an application-defined behaviour should happen.\n\t///\n\t/// On Unix, this is `SIGUSR1`. On Windows, it is ignored.\n\t///\n\t/// This signal is generally used to start debugging.\n\tUser1,\n\n\t/// Indicate an application-defined behaviour should happen.\n\t///\n\t/// On Unix, this is `SIGUSR2`. On Windows, it is ignored.\n\t///\n\t/// This signal is generally used to reload configuration.\n\tUser2,\n\n\t/// Indicate using a custom signal.\n\t///\n\t/// Internally, this is converted to a [`nix::Signal`](https://docs.rs/nix/*/nix/sys/signal/enum.Signal.html)\n\t/// but for portability this variant is a raw `i32`.\n\t///\n\t/// Invalid signals on the current platform will be ignored. Does nothing on Windows.\n\t///\n\t/// The special value `0` is used to indicate an unknown signal. That is, a signal was received\n\t/// or parsed, but it is not known which. 
This is not a usual case, and should in general be\n\t/// ignored rather than hard-erroring.\n\t///\n\t/// # Examples\n\t///\n\t/// ```\n\t/// # #[cfg(unix)]\n\t/// # {\n\t/// use watchexec_signals::Signal;\n\t/// use nix::sys::signal::Signal as NixSignal;\n\t/// assert_eq!(Signal::Custom(6), Signal::from(NixSignal::SIGABRT as i32));\n\t/// # }\n\t/// ```\n\t///\n\t/// On Unix the [`from_nix`][Signal::from_nix] method should be preferred if converting from\n\t/// Nix's `Signal` type:\n\t///\n\t/// ```\n\t/// # #[cfg(unix)]\n\t/// # {\n\t/// use watchexec_signals::Signal;\n\t/// use nix::sys::signal::Signal as NixSignal;\n\t/// assert_eq!(Signal::Custom(6), Signal::from_nix(NixSignal::SIGABRT));\n\t/// # }\n\t/// ```\n\tCustom(i32),\n}\n\nimpl Signal {\n\t/// Converts to a [`nix::Signal`][NixSignal] if possible.\n\t///\n\t/// This will return `None` if the signal is not supported on the current platform (only for\n\t/// [`Custom`][Signal::Custom], as the first-class ones are always supported).\n\t#[cfg(unix)]\n\t#[must_use]\n\tpub fn to_nix(self) -> Option<NixSignal> {\n\t\tmatch self {\n\t\t\tSelf::Hangup => Some(NixSignal::SIGHUP),\n\t\t\tSelf::ForceStop => Some(NixSignal::SIGKILL),\n\t\t\tSelf::Interrupt => Some(NixSignal::SIGINT),\n\t\t\tSelf::Quit => Some(NixSignal::SIGQUIT),\n\t\t\tSelf::Terminate => Some(NixSignal::SIGTERM),\n\t\t\tSelf::User1 => Some(NixSignal::SIGUSR1),\n\t\t\tSelf::User2 => Some(NixSignal::SIGUSR2),\n\t\t\tSelf::Custom(sig) => NixSignal::try_from(sig).ok(),\n\t\t}\n\t}\n\n\t/// Converts from a [`nix::Signal`][NixSignal].\n\t#[cfg(unix)]\n\t#[allow(clippy::missing_const_for_fn)]\n\t#[must_use]\n\tpub fn from_nix(sig: NixSignal) -> Self {\n\t\tmatch sig {\n\t\t\tNixSignal::SIGHUP => Self::Hangup,\n\t\t\tNixSignal::SIGKILL => Self::ForceStop,\n\t\t\tNixSignal::SIGINT => Self::Interrupt,\n\t\t\tNixSignal::SIGQUIT => Self::Quit,\n\t\t\tNixSignal::SIGTERM => Self::Terminate,\n\t\t\tNixSignal::SIGUSR1 => Self::User1,\n\t\t\tNixSignal::SIGUSR2 => 
Self::User2,\n\t\t\tsig => Self::Custom(sig as _),\n\t\t}\n\t}\n}\n\nimpl From<i32> for Signal {\n\t/// Converts from a raw signal number.\n\t///\n\t/// This uses hardcoded numbers for the first-class signals.\n\tfn from(raw: i32) -> Self {\n\t\tmatch raw {\n\t\t\t1 => Self::Hangup,\n\t\t\t2 => Self::Interrupt,\n\t\t\t3 => Self::Quit,\n\t\t\t9 => Self::ForceStop,\n\t\t\t10 => Self::User1,\n\t\t\t12 => Self::User2,\n\t\t\t15 => Self::Terminate,\n\t\t\t_ => Self::Custom(raw),\n\t\t}\n\t}\n}\n\n#[cfg(feature = \"fromstr\")]\nimpl Signal {\n\t/// Parse the input as a unix signal.\n\t///\n\t/// This parses the input as a signal name, or a signal number, in a case-insensitive manner.\n\t/// It supports integers, the short name of the signal (like `INT`, `HUP`, `USR1`, etc), and\n\t/// the long name of the signal (like `SIGINT`, `SIGHUP`, `SIGUSR1`, etc).\n\t///\n\t/// Note that this is entirely accurate only when used on unix targets; on other targets it\n\t/// falls back to a hardcoded approximation instead of looking up signal tables (via [`nix`]).\n\t///\n\t/// ```\n\t/// # use watchexec_signals::Signal;\n\t/// assert_eq!(Signal::Hangup, Signal::from_unix_str(\"hup\").unwrap());\n\t/// assert_eq!(Signal::Interrupt, Signal::from_unix_str(\"SIGINT\").unwrap());\n\t/// assert_eq!(Signal::ForceStop, Signal::from_unix_str(\"Kill\").unwrap());\n\t/// ```\n\t///\n\t/// Using [`FromStr`] is recommended for practical use, as it will also parse Windows control\n\t/// events, see [`Signal::from_windows_str`].\n\tpub fn from_unix_str(s: &str) -> Result<Self, SignalParseError> {\n\t\tSelf::from_unix_str_impl(s)\n\t}\n\n\t#[cfg(unix)]\n\tfn from_unix_str_impl(s: &str) -> Result<Self, SignalParseError> {\n\t\tif let Ok(sig) = i32::from_str(s) {\n\t\t\tif let Ok(sig) = NixSignal::try_from(sig) {\n\t\t\t\treturn Ok(Self::from_nix(sig));\n\t\t\t}\n\t\t}\n\n\t\tif let Ok(sig) = NixSignal::from_str(&s.to_ascii_uppercase())\n\t\t\t.or_else(|_| NixSignal::from_str(&format!(\"SIG{}\", 
s.to_ascii_uppercase())))\n\t\t{\n\t\t\treturn Ok(Self::from_nix(sig));\n\t\t}\n\n\t\tErr(SignalParseError::new(s, \"unsupported signal\"))\n\t}\n\n\t#[cfg(not(unix))]\n\tfn from_unix_str_impl(s: &str) -> Result<Self, SignalParseError> {\n\t\tmatch s.to_ascii_uppercase().as_str() {\n\t\t\t\"KILL\" | \"SIGKILL\" | \"9\" => Ok(Self::ForceStop),\n\t\t\t\"HUP\" | \"SIGHUP\" | \"1\" => Ok(Self::Hangup),\n\t\t\t\"INT\" | \"SIGINT\" | \"2\" => Ok(Self::Interrupt),\n\t\t\t\"QUIT\" | \"SIGQUIT\" | \"3\" => Ok(Self::Quit),\n\t\t\t\"TERM\" | \"SIGTERM\" | \"15\" => Ok(Self::Terminate),\n\t\t\t\"USR1\" | \"SIGUSR1\" | \"10\" => Ok(Self::User1),\n\t\t\t\"USR2\" | \"SIGUSR2\" | \"12\" => Ok(Self::User2),\n\t\t\tnumber => match i32::from_str(number) {\n\t\t\t\tOk(int) => Ok(Self::Custom(int)),\n\t\t\t\tErr(_) => Err(SignalParseError::new(s, \"unsupported signal\")),\n\t\t\t},\n\t\t}\n\t}\n\n\t/// Parse the input as a windows control event.\n\t///\n\t/// This parses the input as a control event name, in a case-insensitive manner.\n\t///\n\t/// The names matched are mostly made up as there's no standard for them, but should be familiar\n\t/// to Windows users. They are mapped to the corresponding unix concepts as follows:\n\t///\n\t/// - `CTRL-CLOSE`, `CTRL+CLOSE`, or `CLOSE` for a hangup\n\t/// - `CTRL-BREAK`, `CTRL+BREAK`, or `BREAK` for a terminate\n\t/// - `CTRL-C`, `CTRL+C`, or `C` for an interrupt\n\t/// - `STOP`, `FORCE-STOP` for a forced stop. 
This is also mapped to `KILL` and `SIGKILL`.\n\t///\n\t/// ```\n\t/// # use watchexec_signals::Signal;\n\t/// assert_eq!(Signal::Hangup, Signal::from_windows_str(\"ctrl+close\").unwrap());\n\t/// assert_eq!(Signal::Interrupt, Signal::from_windows_str(\"C\").unwrap());\n\t/// assert_eq!(Signal::ForceStop, Signal::from_windows_str(\"Stop\").unwrap());\n\t/// ```\n\t///\n\t/// Using [`FromStr`] is recommended for practical use, as it will fall back to parsing as a\n\t/// unix signal, which can be helpful for portability.\n\tpub fn from_windows_str(s: &str) -> Result<Self, SignalParseError> {\n\t\tmatch s.to_ascii_uppercase().as_str() {\n\t\t\t\"CTRL-CLOSE\" | \"CTRL+CLOSE\" | \"CLOSE\" => Ok(Self::Hangup),\n\t\t\t\"CTRL-BREAK\" | \"CTRL+BREAK\" | \"BREAK\" => Ok(Self::Terminate),\n\t\t\t\"CTRL-C\" | \"CTRL+C\" | \"C\" => Ok(Self::Interrupt),\n\t\t\t\"KILL\" | \"SIGKILL\" | \"FORCE-STOP\" | \"STOP\" => Ok(Self::ForceStop),\n\t\t\t_ => Err(SignalParseError::new(s, \"unknown control name\")),\n\t\t}\n\t}\n}\n\n#[cfg(feature = \"fromstr\")]\nimpl FromStr for Signal {\n\ttype Err = SignalParseError;\n\n\tfn from_str(s: &str) -> Result<Self, Self::Err> {\n\t\tSelf::from_windows_str(s).or_else(|err| Self::from_unix_str(s).map_err(|_| err))\n\t}\n}\n\n/// Error when parsing a signal from string.\n#[cfg(feature = \"fromstr\")]\n#[cfg_attr(feature = \"miette\", derive(miette::Diagnostic))]\n#[derive(Debug, thiserror::Error)]\n#[error(\"invalid signal `{src}`: {err}\")]\npub struct SignalParseError {\n\t// The string that was parsed.\n\t#[cfg_attr(feature = \"miette\", source_code)]\n\tsrc: String,\n\n\t// The error that occurred.\n\terr: String,\n\n\t// The span of the source which is in error.\n\t#[cfg_attr(feature = \"miette\", label = \"invalid signal\")]\n\tspan: (usize, usize),\n}\n\n#[cfg(feature = \"fromstr\")]\nimpl SignalParseError {\n\t#[must_use]\n\tpub fn new(src: &str, err: &str) -> Self {\n\t\tSelf {\n\t\t\tsrc: src.to_owned(),\n\t\t\terr: 
err.to_owned(),\n\t\t\tspan: (0, src.len()),\n\t\t}\n\t}\n}\n\nimpl fmt::Display for Signal {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\twrite!(\n\t\t\tf,\n\t\t\t\"{}\",\n\t\t\tmatch (self, cfg!(windows)) {\n\t\t\t\t(Self::Hangup, false) => \"SIGHUP\",\n\t\t\t\t(Self::Hangup, true) => \"CTRL-CLOSE\",\n\t\t\t\t(Self::ForceStop, false) => \"SIGKILL\",\n\t\t\t\t(Self::ForceStop, true) => \"STOP\",\n\t\t\t\t(Self::Interrupt, false) => \"SIGINT\",\n\t\t\t\t(Self::Interrupt, true) => \"CTRL-C\",\n\t\t\t\t(Self::Quit, _) => \"SIGQUIT\",\n\t\t\t\t(Self::Terminate, false) => \"SIGTERM\",\n\t\t\t\t(Self::Terminate, true) => \"CTRL-BREAK\",\n\t\t\t\t(Self::User1, _) => \"SIGUSR1\",\n\t\t\t\t(Self::User2, _) => \"SIGUSR2\",\n\t\t\t\t(Self::Custom(n), _) => {\n\t\t\t\t\treturn write!(f, \"{n}\");\n\t\t\t\t}\n\t\t\t}\n\t\t)\n\t}\n}\n\n#[cfg(feature = \"serde\")]\nmod serde_support {\n\tuse super::Signal;\n\n\t#[derive(Clone, Copy, Debug, serde::Serialize, serde::Deserialize)]\n\t#[serde(untagged)]\n\tpub enum SerdeSignal {\n\t\tNamed(NamedSignal),\n\t\tNumber(i32),\n\t}\n\n\t#[derive(Clone, Copy, Debug, serde::Serialize, serde::Deserialize)]\n\t#[serde(rename_all = \"kebab-case\")]\n\tpub enum NamedSignal {\n\t\t#[serde(rename = \"SIGHUP\")]\n\t\tHangup,\n\t\t#[serde(rename = \"SIGKILL\")]\n\t\tForceStop,\n\t\t#[serde(rename = \"SIGINT\")]\n\t\tInterrupt,\n\t\t#[serde(rename = \"SIGQUIT\")]\n\t\tQuit,\n\t\t#[serde(rename = \"SIGTERM\")]\n\t\tTerminate,\n\t\t#[serde(rename = \"SIGUSR1\")]\n\t\tUser1,\n\t\t#[serde(rename = \"SIGUSR2\")]\n\t\tUser2,\n\t}\n\n\timpl From<Signal> for SerdeSignal {\n\t\tfn from(signal: Signal) -> Self {\n\t\t\tmatch signal {\n\t\t\t\tSignal::Hangup => Self::Named(NamedSignal::Hangup),\n\t\t\t\tSignal::Interrupt => Self::Named(NamedSignal::Interrupt),\n\t\t\t\tSignal::Quit => Self::Named(NamedSignal::Quit),\n\t\t\t\tSignal::Terminate => Self::Named(NamedSignal::Terminate),\n\t\t\t\tSignal::User1 => 
Self::Named(NamedSignal::User1),\n\t\t\t\tSignal::User2 => Self::Named(NamedSignal::User2),\n\t\t\t\tSignal::ForceStop => Self::Named(NamedSignal::ForceStop),\n\t\t\t\tSignal::Custom(number) => Self::Number(number),\n\t\t\t}\n\t\t}\n\t}\n\n\timpl From<SerdeSignal> for Signal {\n\t\tfn from(signal: SerdeSignal) -> Self {\n\t\t\tmatch signal {\n\t\t\t\tSerdeSignal::Named(NamedSignal::Hangup) => Self::Hangup,\n\t\t\t\tSerdeSignal::Named(NamedSignal::ForceStop) => Self::ForceStop,\n\t\t\t\tSerdeSignal::Named(NamedSignal::Interrupt) => Self::Interrupt,\n\t\t\t\tSerdeSignal::Named(NamedSignal::Quit) => Self::Quit,\n\t\t\t\tSerdeSignal::Named(NamedSignal::Terminate) => Self::Terminate,\n\t\t\t\tSerdeSignal::Named(NamedSignal::User1) => Self::User1,\n\t\t\t\tSerdeSignal::Named(NamedSignal::User2) => Self::User2,\n\t\t\t\tSerdeSignal::Number(number) => Self::Custom(number),\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/CHANGELOG.md",
    "content": "# Changelog\n\n## Next (YYYY-MM-DD)\n\n## v5.2.0 (2026-03-09)\n\n- Add the ability to use `spawn_with` from process-wrap (#1013)\n\n## v5.1.0 (2026-02-22)\n\n- Add `is_running()` and clarify what `is_dead()` is measuring\n\n## v5.0.2 (2026-01-20)\n\n- Deps: process-wrap 9\n- Fix: handle graceful stop when job handle dropped (#981, #982)\n\n## v5.0.1 (2025-05-15)\n\n## v5.0.0 (2025-05-15)\n\n- Deps: process-wrap 8.2.1\n\n## v4.0.0 (2025-02-09)\n\n## v3.0.0 (2024-10-14)\n\n- Deps: nix 0.29\n\n## v2.0.0 (2024-04-20)\n\n- Deps: replace command-group with process-wrap\n- Deps: nix 0.28\n\n## v1.0.3 (2023-12-19)\n\n- Fix Start executing even when the job is running.\n- Add kill-on-drop to guarantee no two processes run at the same time.\n\n## v1.0.2 (2023-12-09)\n\n- Add `trace`-level logging to Job task.\n\n## v1.0.1 (2023-11-29)\n\n- Deps: watchexec-events 2.0.1\n- Deps: watchexec-signals 2.0.0\n\n## v1.0.0 (2023-11-26)\n\n- Initial release as a separate crate.\n"
  },
  {
    "path": "crates/supervisor/Cargo.toml",
    "content": "[package]\nname = \"watchexec-supervisor\"\nversion = \"5.2.0\"\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0 OR MIT\"\ndescription = \"Watchexec's process supervisor component\"\nkeywords = [\"process\", \"command\", \"supervisor\", \"watchexec\"]\n\ndocumentation = \"https://docs.rs/watchexec-supervisor\"\nrepository = \"https://github.com/watchexec/watchexec\"\nreadme = \"README.md\"\n\nrust-version = \"1.64.0\"\nedition = \"2021\"\n\n[dependencies]\nfutures = \"0.3.29\"\ntracing = \"0.1.40\"\n\n[dependencies.process-wrap]\nversion = \"9.1.0\"\nfeatures = [\"reset-sigmask\", \"tokio1\"]\n\n[dependencies.tokio]\nversion = \"1.38.0\"\ndefault-features = false\nfeatures = [\"macros\", \"process\", \"rt\", \"sync\", \"time\"]\n\n[dependencies.watchexec-events]\nversion = \"6.1.0\"\ndefault-features = false\npath = \"../events\"\n\n[dependencies.watchexec-signals]\nversion = \"5.0.1\"\ndefault-features = false\npath = \"../signals\"\n\n[dev-dependencies]\nboxcar = \"0.2.9\"\n\n[target.'cfg(unix)'.dev-dependencies.nix]\nversion = \"0.30.1\"\nfeatures = [\"signal\"]\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\nmodule_name_repetitions = \"allow\"\nsimilar_names = \"allow\"\ncognitive_complexity = \"allow\"\ntoo_many_lines = \"allow\"\nmissing_errors_doc = \"allow\"\nmissing_panics_doc = \"allow\"\ndefault_trait_access = \"allow\"\nenum_glob_use = \"allow\"\noption_if_let_else = \"allow\"\nblocks_in_conditions = \"allow\"\n"
  },
  {
    "path": "crates/supervisor/README.md",
    "content": "[![Crates.io page](https://badgen.net/crates/v/watchexec-supervisor)](https://crates.io/crates/watchexec-supervisor)\n[![API Docs](https://docs.rs/watchexec-supervisor/badge.svg)][docs]\n[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]\n[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)\n\n# Supervisor\n\n_Watchexec's process supervisor._\n\n- **[API documentation][docs]**.\n- Licensed under [Apache 2.0][license].\n- Status: maintained.\n\n[docs]: https://docs.rs/watchexec-supervisor\n[license]: ../../LICENSE\n"
  },
  {
    "path": "crates/supervisor/release.toml",
    "content": "pre-release-commit-message = \"release: supervisor v{{version}}\"\ntag-prefix = \"watchexec-supervisor-\"\ntag-message = \"watchexec-supervisor {{version}}\"\n\n[[pre-release-replacements]]\nfile = \"CHANGELOG.md\"\nsearch = \"^## Next.*$\"\nreplace = \"## Next (YYYY-MM-DD)\\n\\n## v{{version}} ({{date}})\"\nprerelease = true\nmax = 1\n"
  },
  {
    "path": "crates/supervisor/src/command/conversions.rs",
    "content": "use std::fmt;\n\nuse process_wrap::tokio::{CommandWrap, KillOnDrop};\nuse tokio::process::Command as TokioCommand;\nuse tracing::trace;\n\nuse super::{Command, Program, SpawnOptions};\n\nimpl Command {\n\t/// Obtain a [`process_wrap::tokio::CommandWrap`].\n\tpub fn to_spawnable(&self) -> CommandWrap {\n\t\ttrace!(program=?self.program, \"constructing command\");\n\n\t\tlet cmd = match &self.program {\n\t\t\tProgram::Exec { prog, args, .. } => {\n\t\t\t\tlet mut c = TokioCommand::new(prog);\n\t\t\t\tc.args(args);\n\t\t\t\tc\n\t\t\t}\n\n\t\t\tProgram::Shell {\n\t\t\t\tshell,\n\t\t\t\targs,\n\t\t\t\tcommand,\n\t\t\t} => {\n\t\t\t\tlet mut c = TokioCommand::new(shell.prog.clone());\n\n\t\t\t\t// Avoid quoting issues on Windows by using raw_arg everywhere\n\t\t\t\t#[cfg(windows)]\n\t\t\t\t{\n\t\t\t\t\tfor opt in &shell.options {\n\t\t\t\t\t\tc.raw_arg(opt);\n\t\t\t\t\t}\n\t\t\t\t\tif let Some(progopt) = &shell.program_option {\n\t\t\t\t\t\tc.raw_arg(progopt);\n\t\t\t\t\t}\n\t\t\t\t\tc.raw_arg(command);\n\t\t\t\t\tfor arg in args {\n\t\t\t\t\t\tc.raw_arg(arg);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t#[cfg(not(windows))]\n\t\t\t\t{\n\t\t\t\t\tc.args(shell.options.clone());\n\t\t\t\t\tif let Some(progopt) = &shell.program_option {\n\t\t\t\t\t\tc.arg(progopt);\n\t\t\t\t\t}\n\t\t\t\t\tc.arg(command);\n\t\t\t\t\tfor arg in args {\n\t\t\t\t\t\tc.arg(arg);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tc\n\t\t\t}\n\t\t};\n\n\t\tlet mut cmd = CommandWrap::from(cmd);\n\t\tcmd.wrap(KillOnDrop);\n\n\t\tmatch self.options {\n\t\t\t#[cfg(unix)]\n\t\t\tSpawnOptions { session: true, .. } => {\n\t\t\t\tcmd.wrap(process_wrap::tokio::ProcessSession);\n\t\t\t}\n\t\t\t#[cfg(unix)]\n\t\t\tSpawnOptions { grouped: true, .. } => {\n\t\t\t\tcmd.wrap(process_wrap::tokio::ProcessGroup::leader());\n\t\t\t}\n\t\t\t#[cfg(windows)]\n\t\t\tSpawnOptions { grouped: true, .. } | SpawnOptions { session: true, .. 
} => {\n\t\t\t\tcmd.wrap(process_wrap::tokio::JobObject);\n\t\t\t}\n\t\t\t_ => {}\n\t\t}\n\n\t\t#[cfg(unix)]\n\t\tif self.options.reset_sigmask {\n\t\t\tcmd.wrap(process_wrap::tokio::ResetSigmask);\n\t\t}\n\n\t\tcmd\n\t}\n}\n\nimpl fmt::Display for Program {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\tmatch self {\n\t\t\tSelf::Exec { prog, args, .. } => {\n\t\t\t\twrite!(f, \"{}\", prog.display())?;\n\t\t\t\tfor arg in args {\n\t\t\t\t\twrite!(f, \" {arg}\")?;\n\t\t\t\t}\n\n\t\t\t\tOk(())\n\t\t\t}\n\t\t\tSelf::Shell { command, .. } => {\n\t\t\t\twrite!(f, \"{command}\")\n\t\t\t}\n\t\t}\n\t}\n}\n\nimpl fmt::Display for Command {\n\tfn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n\t\twrite!(f, \"{}\", self.program)\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/command/program.rs",
    "content": "use std::path::PathBuf;\n\nuse super::Shell;\n\n/// A single program call.\n#[derive(Clone, Debug, PartialEq, Eq, Hash)]\npub enum Program {\n\t/// A raw program call: the path or name of a program and its argument list.\n\tExec {\n\t\t/// Path or name of the program.\n\t\tprog: PathBuf,\n\n\t\t/// The arguments to pass.\n\t\targs: Vec<String>,\n\t},\n\n\t/// A shell program: a string which is to be executed by a shell.\n\t///\n\t/// (Tip: in general, a shell will handle its own job control, so there's no inherent need to\n\t/// set `grouped: true` at the [`Command`](super::Command) level.)\n\tShell {\n\t\t/// The shell to run.\n\t\tshell: Shell,\n\n\t\t/// The command line to pass to the shell.\n\t\tcommand: String,\n\n\t\t/// The arguments to pass to the shell invocation.\n\t\t///\n\t\t/// This may not be supported by all shells. Note that some shells require the use of `--`\n\t\t/// for disambiguation: this is not handled by Watchexec, and will need to be the first\n\t\t/// item in this vec if desired.\n\t\t///\n\t\t/// This appends the values within to the shell process invocation.\n\t\targs: Vec<String>,\n\t},\n}\n"
  },
  {
    "path": "crates/supervisor/src/command/shell.rs",
    "content": "use std::{borrow::Cow, ffi::OsStr, path::PathBuf};\n\n/// How to call the shell used to run shelled programs.\n#[derive(Clone, Debug, PartialEq, Eq, Hash)]\npub struct Shell {\n\t/// Path or name of the shell.\n\tpub prog: PathBuf,\n\n\t/// Additional options or arguments to pass to the shell.\n\t///\n\t/// These will be inserted before the `program_option` immediately preceding the program string.\n\tpub options: Vec<String>,\n\n\t/// The syntax of the option which precedes the program string.\n\t///\n\t/// For most shells, this is `-c`. On Windows, CMD.EXE prefers `/C`. If this is `None`, then no\n\t/// option is prepended; this may be useful for non-shell or non-standard shell programs.\n\tpub program_option: Option<Cow<'static, OsStr>>,\n}\n\nimpl Shell {\n\t/// Shorthand for most shells, using the `-c` convention.\n\tpub fn new(name: impl Into<PathBuf>) -> Self {\n\t\tSelf {\n\t\t\tprog: name.into(),\n\t\t\toptions: Vec::new(),\n\t\t\tprogram_option: Some(Cow::Borrowed(OsStr::new(\"-c\"))),\n\t\t}\n\t}\n\n\t#[cfg(windows)]\n\t#[must_use]\n\t/// Shorthand for the CMD.EXE shell.\n\tpub fn cmd() -> Self {\n\t\tSelf {\n\t\t\tprog: \"CMD.EXE\".into(),\n\t\t\toptions: Vec::new(),\n\t\t\tprogram_option: Some(Cow::Borrowed(OsStr::new(\"/C\"))),\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/command.rs",
    "content": "//! Command construction and configuration.\n\n#[doc(inline)]\npub use self::{program::Program, shell::Shell};\n\nmod conversions;\nmod program;\nmod shell;\n\n/// A command to execute.\n///\n/// # Example\n///\n/// ```\n/// # use watchexec_supervisor::command::{Command, Program};\n/// Command {\n///     program: Program::Exec {\n///         prog: \"make\".into(),\n///         args: vec![\"check\".into()],\n///     },\n///     options: Default::default(),\n/// };\n/// ```\n#[derive(Clone, Debug, PartialEq, Eq, Hash)]\npub struct Command {\n\t/// Program to execute for this command.\n\tpub program: Program,\n\n\t/// Options for spawning the program.\n\tpub options: SpawnOptions,\n}\n\n/// Options set when constructing or spawning a command.\n///\n/// It's recommended to use the [`Default`] implementation for this struct, and only set the options\n/// you need to change, to proof against new options being added in future.\n///\n/// # Examples\n///\n/// ```\n/// # use watchexec_supervisor::command::{Command, Program, SpawnOptions};\n/// Command {\n///     program: Program::Exec {\n///         prog: \"make\".into(),\n///         args: vec![\"check\".into()],\n///     },\n///     options: SpawnOptions {\n///         grouped: true,\n///         ..Default::default()\n///     },\n/// };\n/// ```\n#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Hash)]\npub struct SpawnOptions {\n\t/// Run the program in a new process group.\n\t///\n\t/// This will use either of Unix [process groups] or Windows [Job Objects] via the\n\t/// [`process-wrap`](process_wrap) crate.\n\t///\n\t/// [process groups]: https://en.wikipedia.org/wiki/Process_group\n\t/// [Job Objects]: https://en.wikipedia.org/wiki/Object_Manager_(Windows)\n\tpub grouped: bool,\n\n\t/// Run the program in a new session.\n\t///\n\t/// This will use Unix [sessions]. On Windows, this is not supported. 
This\n\t/// implies `grouped: true`.\n\t///\n\t/// [sessions]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/setsid.html\n\tpub session: bool,\n\n\t/// Reset the signal mask of the process before we spawn it.\n\t///\n\t/// By default, the signal mask of the process is inherited from the parent process. This means\n\t/// that if the parent process has blocked any signals, the child process will also block those\n\t/// signals. This can cause problems if the child process is expecting to receive those signals.\n\t///\n\t/// This is only supported on Unix systems.\n\tpub reset_sigmask: bool,\n}\n"
  },
  {
    "path": "crates/supervisor/src/errors.rs",
    "content": "//! Error types.\n\nuse std::{\n\tio::Error,\n\tsync::{Arc, OnceLock},\n};\n\n/// Convenience type for a [`std::io::Error`] which can be shared across threads.\npub type SyncIoError = Arc<OnceLock<Error>>;\n\n/// Make a [`SyncIoError`] from a [`std::io::Error`].\n#[must_use]\npub fn sync_io_error(err: Error) -> SyncIoError {\n\tlet lock = OnceLock::new();\n\tlock.set(err).expect(\"unreachable: lock was just created\");\n\tArc::new(lock)\n}\n"
  },
  {
    "path": "crates/supervisor/src/flag.rs",
    "content": "//! A flag that can be raised to wake a task.\n//!\n//! Copied wholesale from <https://docs.rs/futures/latest/futures/task/struct.AtomicWaker.html>\n//! unfortunately not aware of crated version!\n\nuse std::{\n\tpin::Pin,\n\tsync::{\n\t\tatomic::{AtomicBool, Ordering::Relaxed},\n\t\tArc,\n\t},\n};\n\nuse futures::{\n\tfuture::Future,\n\ttask::{AtomicWaker, Context, Poll},\n};\n\n#[derive(Debug)]\nstruct Inner {\n\twaker: AtomicWaker,\n\tset: AtomicBool,\n}\n\n#[derive(Clone, Debug)]\npub struct Flag(Arc<Inner>);\n\nimpl Default for Flag {\n\tfn default() -> Self {\n\t\tSelf::new(false)\n\t}\n}\n\nimpl Flag {\n\tpub fn new(value: bool) -> Self {\n\t\tSelf(Arc::new(Inner {\n\t\t\twaker: AtomicWaker::new(),\n\t\t\tset: AtomicBool::new(value),\n\t\t}))\n\t}\n\n\tpub fn raised(&self) -> bool {\n\t\tself.0.set.load(Relaxed)\n\t}\n\n\tpub fn raise(&self) {\n\t\tself.0.set.store(true, Relaxed);\n\t\tself.0.waker.wake();\n\t}\n}\n\nimpl Future for Flag {\n\ttype Output = ();\n\n\tfn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {\n\t\t// quick check to avoid registration if already done.\n\t\tif self.0.set.load(Relaxed) {\n\t\t\treturn Poll::Ready(());\n\t\t}\n\n\t\tself.0.waker.register(cx.waker());\n\n\t\t// Need to check condition **after** `register` to avoid a race\n\t\t// condition that would result in lost notifications.\n\t\tif self.0.set.load(Relaxed) {\n\t\t\tPoll::Ready(())\n\t\t} else {\n\t\t\tPoll::Pending\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/job.rs",
    "content": "#![allow(clippy::must_use_candidate)] // Ticket-returning methods are supposed to be used without awaiting\n\nuse std::{\n\tfuture::Future,\n\tsync::{\n\t\tatomic::{AtomicBool, Ordering},\n\t\tArc,\n\t},\n\ttime::Duration,\n};\n\nuse process_wrap::tokio::CommandWrap;\nuse watchexec_signals::Signal;\n\nuse crate::{command::Command, errors::SyncIoError, flag::Flag};\n\nuse super::{\n\tmessages::{Control, ControlMessage, Ticket},\n\tpriority::{Priority, PrioritySender},\n\tJobTaskContext,\n};\n\n/// A handle to a job task spawned in the supervisor.\n///\n/// A job is a task which manages a [`Command`]. It is responsible for spawning the command's\n/// program, for handling messages which control it, for managing the program's lifetime, and for\n/// collecting its exit status and some timing information.\n///\n/// Most of the methods here queue [`Control`]s to the job task and return [`Ticket`]s. Controls\n/// execute in order, except where noted. Tickets are futures which resolve when the corresponding\n/// control has been run. Unlike most futures, tickets don't need to be polled for controls to make\n/// progress; the future is only used to signal completion. Dropping a ticket will not drop the\n/// control, so it's safe to do so if you don't care about when the control completes.\n///\n/// Note that controls are not guaranteed to run, like if the job task stops or panics before a\n/// control is processed. If a job task stops gracefully, all pending tickets will resolve\n/// immediately. If a job task panics (outside of hooks, panics are bugs!), pending tickets will\n/// never resolve.\n///\n/// This struct is cloneable (internally it is made of Arcs). Dropping the last instance of a Job\n/// will close the job's control queue, which will cause the job task to stop gracefully. 
Note that\n/// a task graceful stop is not the same as a graceful stop of the contained command; when the job\n/// drops, the command will be dropped in turn, and forcefully terminated via `kill_on_drop`.\n#[derive(Debug, Clone)]\npub struct Job {\n\tpub(crate) command: Arc<Command>,\n\tpub(crate) control_queue: PrioritySender,\n\n\t/// Set to true when the command task has stopped gracefully.\n\tpub(crate) gone: Flag,\n\n\t/// Mirrors the command state: true when a child process is running.\n\tpub(crate) running: Arc<AtomicBool>,\n}\n\nimpl Job {\n\t/// The [`Command`] this job is managing.\n\tpub fn command(&self) -> Arc<Command> {\n\t\tself.command.clone()\n\t}\n\n\t/// If this job is dead.\n\t///\n\t/// A dead job is one where the job task has stopped entirely, not just\n\t/// a job whose command has finished. See [`is_running`](Self::is_running).\n\tpub fn is_dead(&self) -> bool {\n\t\tself.gone.raised()\n\t}\n\n\t/// If a child process is currently running.\n\t///\n\t/// This returns `false` if the command has finished, hasn't been started\n\t/// yet, or the job is dead.\n\tpub fn is_running(&self) -> bool {\n\t\tself.running.load(Ordering::Relaxed)\n\t}\n\n\tfn prepare_control(&self, control: Control) -> (Ticket, ControlMessage) {\n\t\tlet done = Flag::default();\n\t\t(\n\t\t\tTicket {\n\t\t\t\tjob_gone: self.gone.clone(),\n\t\t\t\tcontrol_done: done.clone(),\n\t\t\t},\n\t\t\tControlMessage { control, done },\n\t\t)\n\t}\n\n\tpub(crate) fn send_controls<const N: usize>(\n\t\t&self,\n\t\tcontrols: [Control; N],\n\t\tpriority: Priority,\n\t) -> Ticket {\n\t\tif N == 0 || self.gone.raised() {\n\t\t\tTicket::cancelled()\n\t\t} else if N == 1 {\n\t\t\tlet control = controls.into_iter().next().expect(\"UNWRAP: N > 0\");\n\t\t\tlet (ticket, control) = self.prepare_control(control);\n\t\t\tself.control_queue.send(control, priority);\n\t\t\tticket\n\t\t} else {\n\t\t\tlet mut last_ticket = None;\n\t\t\tfor control in controls {\n\t\t\t\tlet (ticket, control) = 
self.prepare_control(control);\n\t\t\t\tlast_ticket = Some(ticket);\n\t\t\t\tself.control_queue.send(control, priority);\n\t\t\t}\n\t\t\tlast_ticket.expect(\"UNWRAP: N > 0\")\n\t\t}\n\t}\n\n\t/// Send a control message to the command.\n\t///\n\t/// All control messages are queued in the order they're sent and processed in order.\n\t///\n\t/// In general prefer using the other methods on this struct rather than sending [`Control`]s\n\t/// directly.\n\tpub fn control(&self, control: Control) -> Ticket {\n\t\tself.send_controls([control], Priority::Normal)\n\t}\n\n\t/// Start the command if it's not running.\n\tpub fn start(&self) -> Ticket {\n\t\tself.control(Control::Start)\n\t}\n\n\t/// Stop the command if it's running and wait for completion.\n\t///\n\t/// If you don't want to wait for completion, use `signal(Signal::ForceStop)` instead.\n\tpub fn stop(&self) -> Ticket {\n\t\tself.control(Control::Stop)\n\t}\n\n\t/// Gracefully stop the command if it's running.\n\t///\n\t/// The command will be sent `signal` and then given `grace` time before being forcefully\n\t/// terminated. If `grace` is zero, that still happens, but the command is terminated forcefully\n\t/// on the next \"tick\" of the supervisor loop, which doesn't leave the process a lot of time to\n\t/// do anything.\n\tpub fn stop_with_signal(&self, signal: Signal, grace: Duration) -> Ticket {\n\t\tif cfg!(unix) {\n\t\t\tself.control(Control::GracefulStop { signal, grace })\n\t\t} else {\n\t\t\tself.stop()\n\t\t}\n\t}\n\n\t/// Restart the command if it's running, or start it if it's not.\n\tpub fn restart(&self) -> Ticket {\n\t\tself.send_controls([Control::Stop, Control::Start], Priority::Normal)\n\t}\n\n\t/// Gracefully restart the command if it's running, or start it if it's not.\n\t///\n\t/// The command will be sent `signal` and then given `grace` time before being forcefully\n\t/// terminated. 
If `grace` is zero, that still happens, but the command is terminated forcefully\n\t/// on the next \"tick\" of the supervisor loop, which doesn't leave the process a lot of time to\n\t/// do anything.\n\tpub fn restart_with_signal(&self, signal: Signal, grace: Duration) -> Ticket {\n\t\tif cfg!(unix) {\n\t\t\tself.send_controls(\n\t\t\t\t[Control::GracefulStop { signal, grace }, Control::Start],\n\t\t\t\tPriority::Normal,\n\t\t\t)\n\t\t} else {\n\t\t\tself.restart()\n\t\t}\n\t}\n\n\t/// Restart the command if it's running, but don't start it if it's not.\n\tpub fn try_restart(&self) -> Ticket {\n\t\tself.control(Control::TryRestart)\n\t}\n\n\t/// Restart the command if it's running, but don't start it if it's not.\n\t///\n\t/// The command will be sent `signal` and then given `grace` time before being forcefully\n\t/// terminated. If `grace` is zero, that still happens, but the command is terminated forcefully\n\t/// on the next \"tick\" of the supervisor loop, which doesn't leave the process a lot of time to\n\t/// do anything.\n\tpub fn try_restart_with_signal(&self, signal: Signal, grace: Duration) -> Ticket {\n\t\tif cfg!(unix) {\n\t\t\tself.control(Control::TryGracefulRestart { signal, grace })\n\t\t} else {\n\t\t\tself.try_restart()\n\t\t}\n\t}\n\n\t/// Send a signal to the command.\n\t///\n\t/// Sends a signal to the current program, if there is one. If there isn't, this is a no-op.\n\t///\n\t/// On Windows, this is a no-op for all signals but [`Signal::ForceStop`], which tries to stop\n\t/// the command like a `stop()` would, but doesn't wait for completion. 
This is because Windows\n\t/// doesn't have signals; in future [`Hangup`](Signal::Hangup), [`Interrupt`](Signal::Interrupt),\n\t/// and [`Terminate`](Signal::Terminate) may be implemented using [GenerateConsoleCtrlEvent],\n\t/// see [tracking issue #219](https://github.com/watchexec/watchexec/issues/219).\n\t///\n\t/// [GenerateConsoleCtrlEvent]: https://learn.microsoft.com/en-us/windows/console/generateconsolectrlevent\n\tpub fn signal(&self, sig: Signal) -> Ticket {\n\t\tself.control(Control::Signal(sig))\n\t}\n\n\t/// Stop the command, then mark it for garbage collection.\n\t///\n\t/// The underlying control messages are sent like normal, so they wait for all pending controls\n\t/// to process. If you want to delete the command immediately, use `delete_now()`.\n\tpub fn delete(&self) -> Ticket {\n\t\tself.send_controls([Control::Stop, Control::Delete], Priority::Normal)\n\t}\n\n\t/// Stop the command immediately, then mark it for garbage collection.\n\t///\n\t/// The underlying control messages are sent with higher priority than normal, so they bypass\n\t/// all others. 
If you want to delete after all current controls are processed, use `delete()`.\n\tpub fn delete_now(&self) -> Ticket {\n\t\tself.send_controls([Control::Stop, Control::Delete], Priority::Urgent)\n\t}\n\n\t/// Get a future which resolves when the command ends.\n\t///\n\t/// If the command is not running, the future resolves immediately.\n\t///\n\t/// The underlying control message is sent with higher priority than normal, so it targets the\n\t/// actively running command, not the one that will be running after the rest of the controls\n\t/// get done; note that this may still be racy if the command ends between the time the message is\n\t/// sent and the time it's processed.\n\tpub fn to_wait(&self) -> Ticket {\n\t\tself.send_controls([Control::NextEnding], Priority::High)\n\t}\n\n\t/// Run an arbitrary function.\n\t///\n\t/// The function is given [`&JobTaskContext`](JobTaskContext), which contains the state of the\n\t/// currently executing, next-to-start, or just-finished command, as well as the final state of\n\t/// the _previous_ run of the command.\n\t///\n\t/// Technically, some operations can be done through a `&self` shared borrow on the running\n\t/// command's [`ChildWrapper`], but this library recommends against taking advantage of this,\n\t/// and prefers using the methods here instead, so that the supervisor can keep track of\n\t/// what's going on.\n\tpub fn run(&self, fun: impl FnOnce(&JobTaskContext<'_>) + Send + Sync + 'static) -> Ticket {\n\t\tself.control(Control::SyncFunc(Box::new(fun)))\n\t}\n\n\t/// Run an arbitrary function and await the returned future.\n\t///\n\t/// The function is given [`&JobTaskContext`](JobTaskContext), which contains the state of the\n\t/// currently executing, next-to-start, or just-finished command, as well as the final state of\n\t/// the _previous_ run of the command.\n\t///\n\t/// Technically, some operations can be done through a `&self` shared borrow on the running\n\t/// command's [`ChildWrapper`], but this 
library recommends against taking advantage of this,\n\t/// and prefers using the methods here instead, so that the supervisor can keep track of\n\t/// what's going on.\n\t///\n\t/// A gotcha when using this method is that the future returned by the function can live longer\n\t/// than the `&JobTaskContext` it was given, so you can't bring the context into the async block\n\t/// and instead must clone or copy the parts you need beforehand, in the sync portion.\n\t///\n\t/// For example, this won't compile:\n\t///\n\t/// ```compile_fail\n\t/// # use std::sync::Arc;\n\t/// # use tokio::sync::mpsc;\n\t/// # use watchexec_supervisor::command::{Command, Program};\n\t/// # use watchexec_supervisor::job::{CommandState, start_job};\n\t/// #\n\t/// # let (job, _task) = start_job(Arc::new(Command { program: Program::Exec { prog: \"/bin/date\".into(), args: Vec::new() }.into(), options: Default::default() }));\n\t/// let (channel, receiver) = mpsc::channel(10);\n\t/// job.run_async(|context| Box::new(async move {\n\t///     if let CommandState::Finished { status, .. } = context.current {\n\t///         channel.send(status).await.ok();\n\t///     }\n\t/// }));\n\t/// ```\n\t///\n\t/// But this does:\n\t///\n\t/// ```no_run\n\t/// # use std::sync::Arc;\n\t/// # use tokio::sync::mpsc;\n\t/// # use watchexec_supervisor::command::{Command, Program};\n\t/// # use watchexec_supervisor::job::{CommandState, start_job};\n\t/// #\n\t/// # let (job, _task) = start_job(Arc::new(Command { program: Program::Exec { prog: \"/bin/date\".into(), args: Vec::new() }.into(), options: Default::default() }));\n\t/// let (channel, receiver) = mpsc::channel(10);\n\t/// job.run_async(|context| {\n\t///     let status = if let CommandState::Finished { status, .. 
} = context.current {\n\t///         Some(*status)\n\t///     } else {\n\t///         None\n\t///     };\n\t///\n\t///     Box::new(async move {\n\t///         if let Some(status) = status {\n\t///             channel.send(status).await.ok();\n\t///         }\n\t///     })\n\t/// });\n\t/// ```\n\tpub fn run_async(\n\t\t&self,\n\t\tfun: impl (FnOnce(&JobTaskContext<'_>) -> Box<dyn Future<Output = ()> + Send + Sync>)\n\t\t\t+ Send\n\t\t\t+ Sync\n\t\t\t+ 'static,\n\t) -> Ticket {\n\t\tself.control(Control::AsyncFunc(Box::new(fun)))\n\t}\n\n\t/// Set the spawn hook.\n\t///\n\t/// The hook will be called once per process spawned, before the process is spawned. It's given\n\t/// a mutable reference to the [`process_wrap::tokio::CommandWrap`] and some context; it\n\t/// can modify or further [wrap](process_wrap) the command as it sees fit.\n\tpub fn set_spawn_hook(\n\t\t&self,\n\t\tfun: impl Fn(&mut CommandWrap, &JobTaskContext<'_>) + Send + Sync + 'static,\n\t) -> Ticket {\n\t\tself.control(Control::SetSyncSpawnHook(Arc::new(fun)))\n\t}\n\n\t/// Set the spawn hook (async version).\n\t///\n\t/// The hook will be called once per process spawned, before the process is spawned. It's given\n\t/// a mutable reference to the [`process_wrap::tokio::CommandWrap`] and some context; it\n\t/// can modify or further [wrap](process_wrap) the command as it sees fit.\n\t///\n\t/// A gotcha when using this method is that the future returned by the function can live longer\n\t/// than the references it was given, so you can't bring the command or context into the async\n\t/// block and instead must clone or copy the parts you need beforehand, in the sync portion. 
See\n\t/// the documentation for [`run_async`](Job::run_async) for an example.\n\t///\n\t/// Fortunately, async spawn hooks should be exceedingly rare: there are very few things to do in\n\t/// spawn hooks that can't be done in the simpler sync version.\n\tpub fn set_spawn_async_hook(\n\t\t&self,\n\t\tfun: impl (Fn(&mut CommandWrap, &JobTaskContext<'_>) -> Box<dyn Future<Output = ()> + Send + Sync>)\n\t\t\t+ Send\n\t\t\t+ Sync\n\t\t\t+ 'static,\n\t) -> Ticket {\n\t\tself.control(Control::SetAsyncSpawnHook(Arc::new(fun)))\n\t}\n\n\t/// Unset any spawn hook.\n\tpub fn unset_spawn_hook(&self) -> Ticket {\n\t\tself.control(Control::UnsetSpawnHook)\n\t}\n\n\t/// Set the spawn function.\n\t///\n\t/// When set, this function is passed to\n\t/// [`CommandWrap::spawn_with()`](process_wrap::tokio::CommandWrap::spawn_with) instead of\n\t/// using the default [`CommandWrap::spawn()`]. It receives a `&mut tokio::process::Command`\n\t/// and must return the spawned [`tokio::process::Child`].\n\t///\n\t/// All process-wrap layers are still applied around the child, so this only customises the\n\t/// low-level spawn step. This is useful for delegating process spawning to a privileged\n\t/// helper (e.g. 
for Linux capability granting) while keeping the supervisor's lifecycle\n\t/// management.\n\tpub fn set_spawn_fn(\n\t\t&self,\n\t\tfun: impl Fn(&mut tokio::process::Command) -> std::io::Result<tokio::process::Child>\n\t\t\t+ Send\n\t\t\t+ Sync\n\t\t\t+ 'static,\n\t) -> Ticket {\n\t\tself.control(Control::SetSpawnFn(Arc::new(fun)))\n\t}\n\n\t/// Unset any spawn function, reverting to the default `CommandWrap::spawn()`.\n\tpub fn unset_spawn_fn(&self) -> Ticket {\n\t\tself.control(Control::ClearSpawnFn)\n\t}\n\n\t/// Set the error handler.\n\tpub fn set_error_handler(&self, fun: impl Fn(SyncIoError) + Send + Sync + 'static) -> Ticket {\n\t\tself.control(Control::SetSyncErrorHandler(Arc::new(fun)))\n\t}\n\n\t/// Set the error handler (async version).\n\tpub fn set_async_error_handler(\n\t\t&self,\n\t\tfun: impl (Fn(SyncIoError) -> Box<dyn Future<Output = ()> + Send + Sync>)\n\t\t\t+ Send\n\t\t\t+ Sync\n\t\t\t+ 'static,\n\t) -> Ticket {\n\t\tself.control(Control::SetAsyncErrorHandler(Arc::new(fun)))\n\t}\n\n\t/// Unset the error handler.\n\t///\n\t/// Errors will be silently ignored.\n\tpub fn unset_error_handler(&self) -> Ticket {\n\t\tself.control(Control::UnsetErrorHandler)\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/messages.rs",
    "content": "use std::{\n\tfuture::Future,\n\tpin::Pin,\n\ttask::{Context, Poll},\n\ttime::Duration,\n};\n\nuse futures::{future::select, FutureExt};\nuse watchexec_signals::Signal;\n\nuse crate::flag::Flag;\n\nuse super::task::{\n\tAsyncErrorHandler, AsyncFunc, AsyncSpawnHook, SyncErrorHandler, SyncFunc, SyncSpawnHook,\n\tSpawnFn,\n};\n\n/// The underlying control message types for [`Job`](super::Job).\n///\n/// You may use [`Job::control()`](super::Job::control()) to send these messages directly, but in\n/// general should prefer the higher-level methods on [`Job`](super::Job) itself.\npub enum Control {\n\t/// For [`Job::start()`](super::Job::start()).\n\tStart,\n\t/// For [`Job::stop()`](super::Job::stop()).\n\tStop,\n\t/// For [`Job::stop_with_signal()`](super::Job::stop_with_signal()).\n\tGracefulStop {\n\t\t/// Signal to send immediately\n\t\tsignal: Signal,\n\t\t/// Time to wait before forceful termination\n\t\tgrace: Duration,\n\t},\n\t/// For [`Job::try_restart()`](super::Job::try_restart()).\n\tTryRestart,\n\t/// For [`Job::try_restart_with_signal()`](super::Job::try_restart_with_signal()).\n\tTryGracefulRestart {\n\t\t/// Signal to send immediately\n\t\tsignal: Signal,\n\t\t/// Time to wait before forceful termination and restart\n\t\tgrace: Duration,\n\t},\n\t/// Internal implementation detail of [`Control::TryGracefulRestart`].\n\tContinueTryGracefulRestart,\n\t/// For [`Job::signal()`](super::Job::signal()).\n\tSignal(Signal),\n\t/// For [`Job::delete()`](super::Job::delete()) and [`Job::delete_now()`](super::Job::delete_now()).\n\tDelete,\n\n\t/// For [`Job::to_wait()`](super::Job::to_wait()).\n\tNextEnding,\n\n\t/// For [`Job::run()`](super::Job::run()).\n\tSyncFunc(SyncFunc),\n\t/// For [`Job::run_async()`](super::Job::run_async()).\n\tAsyncFunc(AsyncFunc),\n\n\t/// For [`Job::set_spawn_hook()`](super::Job::set_spawn_hook()).\n\tSetSyncSpawnHook(SyncSpawnHook),\n\t/// For 
[`Job::set_spawn_async_hook()`](super::Job::set_spawn_async_hook()).\n\tSetAsyncSpawnHook(AsyncSpawnHook),\n\t/// For [`Job::unset_spawn_hook()`](super::Job::unset_spawn_hook()).\n\tUnsetSpawnHook,\n\t/// For [`Job::set_error_handler()`](super::Job::set_error_handler()).\n\tSetSyncErrorHandler(SyncErrorHandler),\n\t/// For [`Job::set_async_error_handler()`](super::Job::set_async_error_handler()).\n\tSetAsyncErrorHandler(AsyncErrorHandler),\n\t/// For [`Job::unset_error_handler()`](super::Job::unset_error_handler()).\n\tUnsetErrorHandler,\n\n\t/// For [`Job::set_spawn_fn()`](super::Job::set_spawn_fn()).\n\tSetSpawnFn(SpawnFn),\n\t/// For [`Job::unset_spawn_fn()`](super::Job::unset_spawn_fn()).\n\tClearSpawnFn,\n}\n\nimpl std::fmt::Debug for Control {\n\tfn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n\t\tmatch self {\n\t\t\tSelf::Start => f.debug_struct(\"Start\").finish(),\n\t\t\tSelf::Stop => f.debug_struct(\"Stop\").finish(),\n\t\t\tSelf::GracefulStop { signal, grace } => f\n\t\t\t\t.debug_struct(\"GracefulStop\")\n\t\t\t\t.field(\"signal\", signal)\n\t\t\t\t.field(\"grace\", grace)\n\t\t\t\t.finish(),\n\t\t\tSelf::TryRestart => f.debug_struct(\"TryRestart\").finish(),\n\t\t\tSelf::TryGracefulRestart { signal, grace } => f\n\t\t\t\t.debug_struct(\"TryGracefulRestart\")\n\t\t\t\t.field(\"signal\", signal)\n\t\t\t\t.field(\"grace\", grace)\n\t\t\t\t.finish(),\n\t\t\tSelf::ContinueTryGracefulRestart => {\n\t\t\t\tf.debug_struct(\"ContinueTryGracefulRestart\").finish()\n\t\t\t}\n\t\t\tSelf::Signal(signal) => f.debug_struct(\"Signal\").field(\"signal\", signal).finish(),\n\t\t\tSelf::Delete => f.debug_struct(\"Delete\").finish(),\n\n\t\t\tSelf::NextEnding => f.debug_struct(\"NextEnding\").finish(),\n\n\t\t\tSelf::SyncFunc(_) => f.debug_struct(\"SyncFunc\").finish_non_exhaustive(),\n\t\t\tSelf::AsyncFunc(_) => f.debug_struct(\"AsyncFunc\").finish_non_exhaustive(),\n\n\t\t\tSelf::SetSyncSpawnHook(_) => 
f.debug_struct(\"SetSyncSpawnHook\").finish_non_exhaustive(),\n\t\t\tSelf::SetAsyncSpawnHook(_) => {\n\t\t\t\tf.debug_struct(\"SetAsyncSpawnHook\").finish_non_exhaustive()\n\t\t\t}\n\t\t\tSelf::UnsetSpawnHook => f.debug_struct(\"UnsetSpawnHook\").finish(),\n\t\t\tSelf::SetSyncErrorHandler(_) => f\n\t\t\t\t.debug_struct(\"SetSyncErrorHandler\")\n\t\t\t\t.finish_non_exhaustive(),\n\t\t\tSelf::SetAsyncErrorHandler(_) => f\n\t\t\t\t.debug_struct(\"SetAsyncErrorHandler\")\n\t\t\t\t.finish_non_exhaustive(),\n\t\t\tSelf::UnsetErrorHandler => f.debug_struct(\"UnsetErrorHandler\").finish(),\n\t\t\tSelf::SetSpawnFn(_) => f.debug_struct(\"SetSpawnFn\").finish_non_exhaustive(),\n\t\t\tSelf::ClearSpawnFn => f.debug_struct(\"ClearSpawnFn\").finish(),\n\t\t}\n\t}\n}\n\n#[derive(Debug)]\npub struct ControlMessage {\n\tpub control: Control,\n\tpub done: Flag,\n}\n\n/// Lightweight future which resolves when the corresponding control has been run.\n///\n/// Unlike most futures, tickets don't need to be polled for controls to make progress; the future\n/// is only used to signal completion. Dropping a ticket will not drop the control, so it's safe to\n/// do so if you don't care about when the control completes.\n///\n/// Tickets can be cloned, and all clones will resolve at the same time.\n#[derive(Debug, Clone)]\npub struct Ticket {\n\tpub(crate) job_gone: Flag,\n\tpub(crate) control_done: Flag,\n}\n\nimpl Ticket {\n\tpub(crate) fn cancelled() -> Self {\n\t\tSelf {\n\t\t\tjob_gone: Flag::new(true),\n\t\t\tcontrol_done: Flag::new(true),\n\t\t}\n\t}\n}\n\nimpl Future for Ticket {\n\ttype Output = ();\n\n\tfn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n\t\tPin::new(&mut select(self.job_gone.clone(), self.control_done.clone()).map(|_| ())).poll(cx)\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/priority.rs",
    "content": "use std::time::Duration;\n\nuse tokio::{\n\tselect,\n\tsync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},\n\ttime::{sleep_until, Instant, Sleep},\n};\n\nuse crate::flag::Flag;\n\nuse super::{messages::ControlMessage, Control};\n\n#[derive(Debug, Copy, Clone)]\npub enum Priority {\n\tNormal,\n\tHigh,\n\tUrgent,\n}\n\n#[derive(Debug)]\npub struct PriorityReceiver {\n\tpub normal: UnboundedReceiver<ControlMessage>,\n\tpub high: UnboundedReceiver<ControlMessage>,\n\tpub urgent: UnboundedReceiver<ControlMessage>,\n}\n\n#[derive(Debug, Clone)]\npub struct PrioritySender {\n\tpub normal: UnboundedSender<ControlMessage>,\n\tpub high: UnboundedSender<ControlMessage>,\n\tpub urgent: UnboundedSender<ControlMessage>,\n}\n\nimpl PrioritySender {\n\tpub fn send(&self, message: ControlMessage, priority: Priority) {\n\t\t// drop errors: if the channel is closed, the job is dead\n\t\tlet _ = match priority {\n\t\t\tPriority::Normal => self.normal.send(message),\n\t\t\tPriority::High => self.high.send(message),\n\t\t\tPriority::Urgent => self.urgent.send(message),\n\t\t};\n\t}\n}\n\nimpl PriorityReceiver {\n\t/// Receive a control message from the command.\n\t///\n\t/// If `stop_timer` is `Some`, normal priority messages are not received; instead, only high and\n\t/// urgent priority messages are received until the timer expires, and when the timer completes,\n\t/// a `Stop` control message is returned and the `stop_timer` is `None`d.\n\t///\n\t/// This is used to implement stop's, restart's, and try-restart's graceful stopping logic.\n\tpub async fn recv(&mut self, stop_timer: &mut Option<Timer>) -> Option<ControlMessage> {\n\t\tif stop_timer.as_ref().map_or(false, Timer::is_past) {\n\t\t\treturn stop_timer.take().map(|timer| timer.to_control());\n\t\t}\n\n\t\tif let Ok(message) = self.urgent.try_recv() {\n\t\t\treturn Some(message);\n\t\t}\n\n\t\tif let Ok(message) = self.high.try_recv() {\n\t\t\treturn Some(message);\n\t\t}\n\n\t\tif let 
Some(timer) = stop_timer.clone() {\n\t\t\tselect! {\n\t\t\t\t() = timer.to_sleep() => {\n\t\t\t\t\t*stop_timer = None;\n\t\t\t\t\tSome(timer.to_control())\n\t\t\t\t}\n\t\t\t\tmessage = self.urgent.recv() => message,\n\t\t\t\tmessage = self.high.recv() => message,\n\t\t\t}\n\t\t} else {\n\t\t\tselect! {\n\t\t\t\tmessage = self.urgent.recv() => message,\n\t\t\t\tmessage = self.high.recv() => message,\n\t\t\t\tmessage = self.normal.recv() => message,\n\t\t\t}\n\t\t}\n\t}\n}\n\npub fn new() -> (PrioritySender, PriorityReceiver) {\n\tlet (normal_tx, normal_rx) = unbounded_channel();\n\tlet (high_tx, high_rx) = unbounded_channel();\n\tlet (urgent_tx, urgent_rx) = unbounded_channel();\n\n\t(\n\t\tPrioritySender {\n\t\t\tnormal: normal_tx,\n\t\t\thigh: high_tx,\n\t\t\turgent: urgent_tx,\n\t\t},\n\t\tPriorityReceiver {\n\t\t\tnormal: normal_rx,\n\t\t\thigh: high_rx,\n\t\t\turgent: urgent_rx,\n\t\t},\n\t)\n}\n\n#[derive(Debug, Clone)]\npub struct Timer {\n\tpub until: Instant,\n\tpub done: Flag,\n\tpub is_restart: bool,\n}\n\nimpl Timer {\n\tpub fn stop(grace: Duration, done: Flag) -> Self {\n\t\tSelf {\n\t\t\tuntil: Instant::now() + grace,\n\t\t\tdone,\n\t\t\tis_restart: false,\n\t\t}\n\t}\n\n\tpub fn restart(grace: Duration, done: Flag) -> Self {\n\t\tSelf {\n\t\t\tuntil: Instant::now() + grace,\n\t\t\tdone,\n\t\t\tis_restart: true,\n\t\t}\n\t}\n\n\tfn to_sleep(&self) -> Sleep {\n\t\tsleep_until(self.until)\n\t}\n\n\tfn is_past(&self) -> bool {\n\t\tself.until <= Instant::now()\n\t}\n\n\tfn to_control(&self) -> ControlMessage {\n\t\tControlMessage {\n\t\t\tcontrol: if self.is_restart {\n\t\t\t\tControl::ContinueTryGracefulRestart\n\t\t\t} else {\n\t\t\t\tControl::Stop\n\t\t\t},\n\t\t\tdone: self.done.clone(),\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/state.rs",
    "content": "use std::{sync::Arc, time::Instant};\n\n#[cfg(not(test))]\nuse process_wrap::tokio::ChildWrapper;\nuse process_wrap::tokio::CommandWrap;\nuse tracing::trace;\nuse watchexec_events::ProcessEnd;\n\nuse crate::command::Command;\nuse super::task::SpawnFn;\n\n/// The state of the job's command / process.\n///\n/// This is used both internally to represent the current state (ready/pending, running, finished)\n/// of the command, and can be queried via the [`JobTaskContext`](super::JobTaskContext) by hooks.\n///\n/// Technically, some operations can be done through a `&self` shared borrow on the running\n/// command's [`ChildWrapper`], but this library recommends against taking advantage of this,\n/// and prefers using the methods on [`Job`](super::Job) instead, so that the job can keep track of\n/// what's going on.\n#[derive(Debug)]\n#[cfg_attr(test, derive(Clone))]\npub enum CommandState {\n\t/// The command is neither running nor finished. This is the initial state.\n\tPending,\n\n\t/// The command is currently running. Note that this is established after the process is spawned\n\t/// and not precisely synchronised with the process' aliveness: in some cases the process may be\n\t/// exited but still `Running` in this enum.\n\tRunning {\n\t\t/// The child process (test version).\n\t\t#[cfg(test)]\n\t\tchild: super::TestChild,\n\n\t\t/// The child process.\n\t\t#[cfg(not(test))]\n\t\tchild: Box<dyn ChildWrapper>,\n\n\t\t/// The time at which the process was spawned.\n\t\tstarted: Instant,\n\t},\n\n\t/// The command has completed and its status was collected.\n\tFinished {\n\t\t/// The command's exit status.\n\t\tstatus: ProcessEnd,\n\n\t\t/// The time at which the process was spawned.\n\t\tstarted: Instant,\n\n\t\t/// The time at which the process finished, or more precisely, when its status was collected.\n\t\tfinished: Instant,\n\t},\n}\n\nimpl CommandState {\n\t/// Whether the command is pending, i.e. 
not running or finished.\n\t#[must_use]\n\tpub const fn is_pending(&self) -> bool {\n\t\tmatches!(self, Self::Pending)\n\t}\n\n\t/// Whether the command is running.\n\t#[must_use]\n\tpub const fn is_running(&self) -> bool {\n\t\tmatches!(self, Self::Running { .. })\n\t}\n\n\t/// Whether the command is finished.\n\t#[must_use]\n\tpub const fn is_finished(&self) -> bool {\n\t\tmatches!(self, Self::Finished { .. })\n\t}\n\n\t#[cfg_attr(test, allow(unused_mut, unused_variables))]\n\tpub(crate) fn spawn(\n\t\t&mut self,\n\t\tcommand: Arc<Command>,\n\t\tmut spawnable: CommandWrap,\n\t\tspawn_fn: Option<&SpawnFn>,\n\t) -> std::io::Result<bool> {\n\t\tif let Self::Running { .. } = self {\n\t\t\ttrace!(\"command running, not spawning again\");\n\t\t\treturn Ok(false);\n\t\t}\n\n\t\ttrace!(?command, \"spawning command\");\n\n\t\t#[cfg(test)]\n\t\tlet child = super::TestChild::new(command)?;\n\n\t\t#[cfg(not(test))]\n\t\tlet child = if let Some(f) = spawn_fn {\n\t\t\tspawnable.spawn_with(|cmd| f(cmd))?\n\t\t} else {\n\t\t\tspawnable.spawn()?\n\t\t};\n\n\t\t*self = Self::Running {\n\t\t\tchild,\n\t\t\tstarted: Instant::now(),\n\t\t};\n\t\tOk(true)\n\t}\n\n\t#[must_use]\n\tpub(crate) fn reset(&mut self) -> Self {\n\t\ttrace!(?self, \"resetting command state\");\n\t\tmatch self {\n\t\t\tSelf::Pending => Self::Pending,\n\t\t\tSelf::Finished {\n\t\t\t\tstatus,\n\t\t\t\tstarted,\n\t\t\t\tfinished,\n\t\t\t\t..\n\t\t\t} => {\n\t\t\t\tlet copy = Self::Finished {\n\t\t\t\t\tstatus: *status,\n\t\t\t\t\tstarted: *started,\n\t\t\t\t\tfinished: *finished,\n\t\t\t\t};\n\n\t\t\t\t*self = Self::Pending;\n\t\t\t\tcopy\n\t\t\t}\n\t\t\tSelf::Running { started, .. 
} => {\n\t\t\t\tlet copy = Self::Finished {\n\t\t\t\t\tstatus: ProcessEnd::Continued,\n\t\t\t\t\tstarted: *started,\n\t\t\t\t\tfinished: Instant::now(),\n\t\t\t\t};\n\n\t\t\t\t*self = Self::Pending;\n\t\t\t\tcopy\n\t\t\t}\n\t\t}\n\t}\n\n\tpub(crate) async fn wait(&mut self) -> std::io::Result<bool> {\n\t\tif let Self::Running { child, started } = self {\n\t\t\tlet end = child.wait().await?;\n\t\t\t*self = Self::Finished {\n\t\t\t\tstatus: end.into(),\n\t\t\t\tstarted: *started,\n\t\t\t\tfinished: Instant::now(),\n\t\t\t};\n\t\t\tOk(true)\n\t\t} else {\n\t\t\tOk(false)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/task.rs",
    "content": "use std::{\n\tfuture::Future,\n\tmem::take,\n\tsync::{\n\t\tatomic::{AtomicBool, Ordering},\n\t\tArc,\n\t},\n\ttime::Instant,\n};\n\nuse process_wrap::tokio::CommandWrap;\nuse tokio::{select, task::JoinHandle};\nuse tracing::{instrument, trace, trace_span, Instrument};\nuse watchexec_signals::Signal;\n\nuse crate::{\n\tcommand::Command,\n\terrors::{sync_io_error, SyncIoError},\n\tflag::Flag,\n\tjob::priority::Timer,\n};\n\nuse super::{\n\tjob::Job,\n\tmessages::{Control, ControlMessage},\n\tpriority,\n\tstate::CommandState,\n};\n\n/// Spawn a job task and return a [`Job`] handle and a [`JoinHandle`].\n///\n/// The job task immediately starts in the background: it does not need polling.\n#[must_use]\n#[instrument(level = \"trace\")]\npub fn start_job(command: Arc<Command>) -> (Job, JoinHandle<()>) {\n\tenum Loop {\n\t\tNormally,\n\t\tSkip,\n\t\tBreak,\n\t}\n\n\tlet (sender, mut receiver) = priority::new();\n\tlet gone = Flag::default();\n\tlet done = gone.clone();\n\tlet running = Arc::new(AtomicBool::new(false));\n\tlet running_flag = running.clone();\n\n\t(\n\t\tJob {\n\t\t\tcommand: command.clone(),\n\t\t\tcontrol_queue: sender,\n\t\t\tgone,\n\t\t\trunning,\n\t\t},\n\t\ttokio::spawn(async move {\n\t\t\tlet mut error_handler = ErrorHandler::None;\n\t\t\tlet mut spawn_hook = SpawnHook::None;\n\t\t\tlet mut spawn_fn: Option<SpawnFn> = None;\n\t\t\tlet mut command_state = CommandState::Pending;\n\t\t\tlet mut previous_run = None;\n\t\t\tlet mut stop_timer = None;\n\t\t\tlet mut on_end: Vec<Flag> = Vec::new();\n\t\t\tlet mut on_end_restart: Option<Flag> = None;\n\n\t\t\t'main: loop {\n\t\t\t\trunning_flag.store(command_state.is_running(), Ordering::Relaxed);\n\t\t\t\tselect! 
{\n\t\t\t\t\tresult = command_state.wait(), if command_state.is_running() => {\n\t\t\t\t\t\ttrace!(?result, ?command_state, \"got wait result\");\n\t\t\t\t\t\tmatch async {\n\t\t\t\t\t\t\t#[cfg(test)] eprintln!(\"[{:?}] waited: {result:?}\", Instant::now());\n\n\t\t\t\t\t\t\tmatch result {\n\t\t\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\t\t\tlet fut = error_handler.call(sync_io_error(err));\n\t\t\t\t\t\t\t\t\tfut.await;\n\t\t\t\t\t\t\t\t\treturn Loop::Skip;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tOk(true) => {\n\t\t\t\t\t\t\t\t\ttrace!(existing=?stop_timer, \"erasing stop timer\");\n\t\t\t\t\t\t\t\t\tif let Some(timer) = stop_timer.take() {\n\t\t\t\t\t\t\t\t\t\ttimer.done.raise();\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\ttrace!(count=%on_end.len(), \"raising all pending end flags\");\n\t\t\t\t\t\t\t\t\tfor done in take(&mut on_end) {\n\t\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tif let Some(flag) = on_end_restart.take() {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"continuing a graceful restart\");\n\n\t\t\t\t\t\t\t\t\t\tlet mut spawnable = command.to_spawnable();\n\t\t\t\t\t\t\t\t\t\tprevious_run = Some(command_state.reset());\n\t\t\t\t\t\t\t\t\t\tspawn_hook\n\t\t\t\t\t\t\t\t\t\t\t.call(\n\t\t\t\t\t\t\t\t\t\t\t\t&mut spawnable,\n\t\t\t\t\t\t\t\t\t\t\t\t&JobTaskContext {\n\t\t\t\t\t\t\t\t\t\t\t\t\tcommand: command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tcurrent: &command_state,\n\t\t\t\t\t\t\t\t\t\t\t\t\tprevious: previous_run.as_ref(),\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t\t\t\t\tif let Err(err) = command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref()) {\n\t\t\t\t\t\t\t\t\t\t\tlet fut = error_handler.call(sync_io_error(err));\n\t\t\t\t\t\t\t\t\t\t\tfut.await;\n\t\t\t\t\t\t\t\t\t\t\treturn Loop::Skip;\n\t\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\t\ttrace!(\"raising graceful restart's flag\");\n\t\t\t\t\t\t\t\t\t\tflag.raise();\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tOk(false) 
=> {\n\t\t\t\t\t\t\t\t\ttrace!(\"child wasn't running, ignoring wait result\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tLoop::Normally\n\t\t\t\t\t\t}.instrument(trace_span!(\"handle wait result\")).await {\n\t\t\t\t\t\t\tLoop::Normally => {}\n\t\t\t\t\t\t\tLoop::Skip => {\n\t\t\t\t\t\t\t\ttrace!(\"skipping to next event\");\n\t\t\t\t\t\t\t\tcontinue 'main;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tLoop::Break => {\n\t\t\t\t\t\t\t\ttrace!(\"breaking out of main loop\");\n\t\t\t\t\t\t\t\tbreak 'main;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tSome(ControlMessage { control, done }) = receiver.recv(&mut stop_timer) => {\n\t\t\t\t\t\tmatch async {\n\t\t\t\t\t\t\ttrace!(?control, ?command_state, \"got control message\");\n\t\t\t\t\t\t\t#[cfg(test)] eprintln!(\"[{:?}] control: {control:?}\", Instant::now());\n\n\t\t\t\t\t\t\tmacro_rules! try_with_handler {\n\t\t\t\t\t\t\t\t($erroring:expr) => {\n\t\t\t\t\t\t\t\t\tmatch $erroring {\n\t\t\t\t\t\t\t\t\t\tErr(err) => {\n\t\t\t\t\t\t\t\t\t\t\tlet fut = error_handler.call(sync_io_error(err));\n\t\t\t\t\t\t\t\t\t\t\tfut.await;\n\t\t\t\t\t\t\t\t\t\t\ttrace!(\"raising done flag for this control after error\");\n\t\t\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\t\t\treturn Loop::Normally;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\tOk(value) => value,\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t};\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tmatch control {\n\t\t\t\t\t\t\t\tControl::Start => {\n\t\t\t\t\t\t\t\t\tif command_state.is_running() {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"child is running, skip\");\n\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\tlet mut spawnable = command.to_spawnable();\n\t\t\t\t\t\t\t\t\t\tprevious_run = Some(command_state.reset());\n\t\t\t\t\t\t\t\t\t\tspawn_hook\n\t\t\t\t\t\t\t\t\t\t\t.call(\n\t\t\t\t\t\t\t\t\t\t\t\t&mut spawnable,\n\t\t\t\t\t\t\t\t\t\t\t\t&JobTaskContext {\n\t\t\t\t\t\t\t\t\t\t\t\t\tcommand: command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tcurrent: 
&command_state,\n\t\t\t\t\t\t\t\t\t\t\t\t\tprevious: previous_run.as_ref(),\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref()));\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::Stop => {\n\t\t\t\t\t\t\t\t\tif let CommandState::Running { child, started, .. } = &mut command_state {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"stopping child\");\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(Box::into_pin(child.kill()).await);\n\t\t\t\t\t\t\t\t\t\ttrace!(\"waiting on child\");\n\t\t\t\t\t\t\t\t\t\tlet status = try_with_handler!(child.wait().await);\n\n\t\t\t\t\t\t\t\t\t\ttrace!(?status, \"got child end status\");\n\t\t\t\t\t\t\t\t\t\tcommand_state = CommandState::Finished {\n\t\t\t\t\t\t\t\t\t\t\tstatus: status.into(),\n\t\t\t\t\t\t\t\t\t\t\tstarted: *started,\n\t\t\t\t\t\t\t\t\t\t\tfinished: Instant::now(),\n\t\t\t\t\t\t\t\t\t\t};\n\n\t\t\t\t\t\t\t\t\t\ttrace!(count=%on_end.len(), \"raising all pending end flags\");\n\t\t\t\t\t\t\t\t\t\tfor done in take(&mut on_end) {\n\t\t\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"child isn't running, skip\");\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::GracefulStop { signal, grace } => {\n\t\t\t\t\t\t\t\t\tif let CommandState::Running { child, .. } = &mut command_state {\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(signal_child(signal, child).await);\n\n\t\t\t\t\t\t\t\t\t\ttrace!(?grace, \"setting up graceful stop timer\");\n\t\t\t\t\t\t\t\t\t\tstop_timer.replace(Timer::stop(grace, done));\n\t\t\t\t\t\t\t\t\t\treturn Loop::Skip;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\ttrace!(\"child isn't running, skip\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::TryRestart => {\n\t\t\t\t\t\t\t\t\tif let CommandState::Running { child, started, .. 
} = &mut command_state {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"stopping child\");\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(Box::into_pin(child.kill()).await);\n\t\t\t\t\t\t\t\t\t\ttrace!(\"waiting on child\");\n\t\t\t\t\t\t\t\t\t\tlet status = try_with_handler!(child.wait().await);\n\n\t\t\t\t\t\t\t\t\t\ttrace!(?status, \"got child end status\");\n\t\t\t\t\t\t\t\t\t\tcommand_state = CommandState::Finished {\n\t\t\t\t\t\t\t\t\t\t\tstatus: status.into(),\n\t\t\t\t\t\t\t\t\t\t\tstarted: *started,\n\t\t\t\t\t\t\t\t\t\t\tfinished: Instant::now(),\n\t\t\t\t\t\t\t\t\t\t};\n\t\t\t\t\t\t\t\t\t\tprevious_run = Some(command_state.reset());\n\n\t\t\t\t\t\t\t\t\t\ttrace!(count=%on_end.len(), \"raising all pending end flags\");\n\t\t\t\t\t\t\t\t\t\tfor done in take(&mut on_end) {\n\t\t\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\t\tlet mut spawnable = command.to_spawnable();\n\t\t\t\t\t\t\t\t\t\tspawn_hook\n\t\t\t\t\t\t\t\t\t\t\t.call(\n\t\t\t\t\t\t\t\t\t\t\t\t&mut spawnable,\n\t\t\t\t\t\t\t\t\t\t\t\t&JobTaskContext {\n\t\t\t\t\t\t\t\t\t\t\t\t\tcommand: command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\t\tcurrent: &command_state,\n\t\t\t\t\t\t\t\t\t\t\t\t\tprevious: previous_run.as_ref(),\n\t\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref()));\n\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"child isn't running, skip\");\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::TryGracefulRestart { signal, grace } => {\n\t\t\t\t\t\t\t\t\tif let CommandState::Running { child, .. 
} = &mut command_state {\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(signal_child(signal, child).await);\n\n\t\t\t\t\t\t\t\t\t\ttrace!(?grace, \"setting up graceful stop timer\");\n\t\t\t\t\t\t\t\t\t\tstop_timer.replace(Timer::restart(grace, done.clone()));\n\t\t\t\t\t\t\t\t\t\ttrace!(\"setting up graceful restart flag\");\n\t\t\t\t\t\t\t\t\t\ton_end_restart = Some(done);\n\t\t\t\t\t\t\t\t\t\treturn Loop::Skip;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\ttrace!(\"child isn't running, skip\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::ContinueTryGracefulRestart => {\n\t\t\t\t\t\t\t\t\ttrace!(\"continuing a graceful try-restart\");\n\n\t\t\t\t\t\t\t\t\tif let CommandState::Running { child, started, .. } = &mut command_state {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"stopping child forcefully\");\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(Box::into_pin(child.kill()).await);\n\t\t\t\t\t\t\t\t\t\ttrace!(\"waiting on child\");\n\t\t\t\t\t\t\t\t\t\tlet status = try_with_handler!(child.wait().await);\n\n\t\t\t\t\t\t\t\t\t\ttrace!(?status, \"got child end status\");\n\t\t\t\t\t\t\t\t\t\tcommand_state = CommandState::Finished {\n\t\t\t\t\t\t\t\t\t\t\tstatus: status.into(),\n\t\t\t\t\t\t\t\t\t\t\tstarted: *started,\n\t\t\t\t\t\t\t\t\t\t\tfinished: Instant::now(),\n\t\t\t\t\t\t\t\t\t\t};\n\n\t\t\t\t\t\t\t\t\t\ttrace!(count=%on_end.len(), \"raising all pending end flags\");\n\t\t\t\t\t\t\t\t\t\tfor done in take(&mut on_end) {\n\t\t\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tlet mut spawnable = command.to_spawnable();\n\t\t\t\t\t\t\t\t\tprevious_run = Some(command_state.reset());\n\t\t\t\t\t\t\t\t\tspawn_hook\n\t\t\t\t\t\t\t\t\t\t.call(\n\t\t\t\t\t\t\t\t\t\t\t&mut spawnable,\n\t\t\t\t\t\t\t\t\t\t\t&JobTaskContext {\n\t\t\t\t\t\t\t\t\t\t\t\tcommand: command.clone(),\n\t\t\t\t\t\t\t\t\t\t\t\tcurrent: &command_state,\n\t\t\t\t\t\t\t\t\t\t\t\tprevious: 
previous_run.as_ref(),\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t\t\t\ttry_with_handler!(command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref()));\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::Signal(signal) => {\n\t\t\t\t\t\t\t\t\tif let CommandState::Running { child, .. } = &mut command_state {\n\t\t\t\t\t\t\t\t\t\ttry_with_handler!(signal_child(signal, child).await);\n\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"child isn't running, skip\");\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::Delete => {\n\t\t\t\t\t\t\t\t\ttrace!(\"raising done flag immediately\");\n\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\treturn Loop::Break;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tControl::NextEnding => {\n\t\t\t\t\t\t\t\t\tif matches!(command_state, CommandState::Finished { .. }) {\n\t\t\t\t\t\t\t\t\t\ttrace!(\"child is finished, raise done flag immediately\");\n\t\t\t\t\t\t\t\t\t\tdone.raise();\n\t\t\t\t\t\t\t\t\t\treturn Loop::Normally;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\ttrace!(\"queue end flag\");\n\t\t\t\t\t\t\t\t\ton_end.push(done);\n\t\t\t\t\t\t\t\t\treturn Loop::Skip;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tControl::SyncFunc(f) => {\n\t\t\t\t\t\t\t\t\tf(&JobTaskContext {\n\t\t\t\t\t\t\t\t\t\tcommand: command.clone(),\n\t\t\t\t\t\t\t\t\t\tcurrent: &command_state,\n\t\t\t\t\t\t\t\t\t\tprevious: previous_run.as_ref(),\n\t\t\t\t\t\t\t\t\t});\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::AsyncFunc(f) => {\n\t\t\t\t\t\t\t\t\tBox::into_pin(f(&JobTaskContext {\n\t\t\t\t\t\t\t\t\t\tcommand: command.clone(),\n\t\t\t\t\t\t\t\t\t\tcurrent: &command_state,\n\t\t\t\t\t\t\t\t\t\tprevious: previous_run.as_ref(),\n\t\t\t\t\t\t\t\t\t}))\n\t\t\t\t\t\t\t\t\t.await;\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tControl::SetSyncErrorHandler(f) => {\n\t\t\t\t\t\t\t\t\ttrace!(\"setting sync error handler\");\n\t\t\t\t\t\t\t\t\terror_handler = 
ErrorHandler::Sync(f);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::SetAsyncErrorHandler(f) => {\n\t\t\t\t\t\t\t\t\ttrace!(\"setting async error handler\");\n\t\t\t\t\t\t\t\t\terror_handler = ErrorHandler::Async(f);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::UnsetErrorHandler => {\n\t\t\t\t\t\t\t\t\ttrace!(\"unsetting error handler\");\n\t\t\t\t\t\t\t\t\terror_handler = ErrorHandler::None;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::SetSyncSpawnHook(f) => {\n\t\t\t\t\t\t\t\t\ttrace!(\"setting sync spawn hook\");\n\t\t\t\t\t\t\t\t\tspawn_hook = SpawnHook::Sync(f);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::SetAsyncSpawnHook(f) => {\n\t\t\t\t\t\t\t\t\ttrace!(\"setting async spawn hook\");\n\t\t\t\t\t\t\t\t\tspawn_hook = SpawnHook::Async(f);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::UnsetSpawnHook => {\n\t\t\t\t\t\t\t\t\ttrace!(\"unsetting spawn hook\");\n\t\t\t\t\t\t\t\t\tspawn_hook = SpawnHook::None;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::SetSpawnFn(f) => {\n\t\t\t\t\t\t\t\t\ttrace!(\"setting spawn fn\");\n\t\t\t\t\t\t\t\t\tspawn_fn = Some(f);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tControl::ClearSpawnFn => {\n\t\t\t\t\t\t\t\t\ttrace!(\"clearing spawn fn\");\n\t\t\t\t\t\t\t\t\tspawn_fn = None;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\ttrace!(\"raising control done flag\");\n\t\t\t\t\t\t\tdone.raise();\n\n\t\t\t\t\t\t\tLoop::Normally\n\t\t\t\t\t}.instrument(trace_span!(\"handle control message\")).await {\n\t\t\t\t\t\tLoop::Normally => {}\n\t\t\t\t\t\tLoop::Skip => {\n\t\t\t\t\t\t\ttrace!(\"skipping to next event (without raising done flag)\");\n\t\t\t\t\t\t\tcontinue 'main;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tLoop::Break => {\n\t\t\t\t\t\t\ttrace!(\"breaking out of main loop\");\n\t\t\t\t\t\t\tbreak 'main;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\telse => {\n\t\t\t\t\ttrace!(\"all select branches disabled, exiting\");\n\t\t\t\t\tbreak 'main;\n\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\ttrace!(\"raising job done flag\");\n\t\t\trunning_flag.store(false, 
Ordering::Relaxed);\n\t\t\tdone.raise();\n\t\t}),\n\t)\n}\n\nmacro_rules! sync_async_callbox {\n\t($name:ident, $synct:ty, $asynct:ty, ($($argname:ident : $argtype:ty),*)) => {\n\t\tpub enum $name {\n\t\t\tNone,\n\t\t\tSync($synct),\n\t\t\tAsync($asynct),\n\t\t}\n\n\t\timpl $name {\n\t\t\t#[instrument(level = \"trace\", skip(self, $($argname),*))]\n\t\t\tpub async fn call(&self, $($argname: $argtype),*) {\n\t\t\t\tmatch self {\n\t\t\t\t\t$name::None => (),\n\t\t\t\t\t$name::Sync(f) => {\n\t\t\t\t\t\t::tracing::trace!(\"calling sync {:?}\", stringify!($name));\n\t\t\t\t\t\tf($($argname),*)\n\t\t\t\t\t}\n\t\t\t\t\t$name::Async(f) => {\n\t\t\t\t\t\t::tracing::trace!(\"calling async {:?}\", stringify!($name));\n\t\t\t\t\t\tBox::into_pin(f($($argname),*)).await\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t};\n}\n\n/// Job task internals exposed via hooks.\n#[derive(Debug)]\npub struct JobTaskContext<'task> {\n\t/// The job's [`Command`].\n\tpub command: Arc<Command>,\n\n\t/// The current state of the job.\n\tpub current: &'task CommandState,\n\n\t/// The state of the previous iteration of the job, if any.\n\t///\n\t/// This is generally [`CommandState::Finished`], but may be other states in rare cases.\n\tpub previous: Option<&'task CommandState>,\n}\n\npub type SyncFunc = Box<dyn FnOnce(&JobTaskContext<'_>) + Send + Sync + 'static>;\npub type AsyncFunc = Box<\n\tdyn (FnOnce(&JobTaskContext<'_>) -> Box<dyn Future<Output = ()> + Send + Sync>)\n\t\t+ Send\n\t\t+ Sync\n\t\t+ 'static,\n>;\n\npub type SyncSpawnHook = Arc<dyn Fn(&mut CommandWrap, &JobTaskContext<'_>) + Send + Sync + 'static>;\npub type AsyncSpawnHook = Arc<\n\tdyn (Fn(&mut CommandWrap, &JobTaskContext<'_>) -> Box<dyn Future<Output = ()> + Send + Sync>)\n\t\t+ Send\n\t\t+ Sync\n\t\t+ 'static,\n>;\n\n/// A function that customises how the underlying process is spawned.\n///\n/// When set on a [`Job`](super::Job), this function is passed to\n/// 
[`CommandWrap::spawn_with()`](process_wrap::tokio::CommandWrap::spawn_with) instead of using\n/// the default [`CommandWrap::spawn()`](process_wrap::tokio::CommandWrap::spawn). It receives a\n/// `&mut tokio::process::Command` and must return the spawned `tokio::process::Child`.\n///\n/// All process-wrap layers are still applied around the child, so this only customises the\n/// low-level spawn step. This is useful for delegating process spawning to a privileged helper\n/// (e.g. for Linux capability granting) while keeping the supervisor's lifecycle management.\npub type SpawnFn = Arc<\n\tdyn Fn(&mut tokio::process::Command) -> std::io::Result<tokio::process::Child>\n\t\t+ Send\n\t\t+ Sync\n\t\t+ 'static,\n>;\n\nsync_async_callbox!(SpawnHook, SyncSpawnHook, AsyncSpawnHook, (command: &mut CommandWrap, context: &JobTaskContext<'_>));\n\npub type SyncErrorHandler = Arc<dyn Fn(SyncIoError) + Send + Sync + 'static>;\npub type AsyncErrorHandler = Arc<\n\tdyn (Fn(SyncIoError) -> Box<dyn Future<Output = ()> + Send + Sync>) + Send + Sync + 'static,\n>;\n\nsync_async_callbox!(ErrorHandler, SyncErrorHandler, AsyncErrorHandler, (error: SyncIoError));\n\n#[cfg_attr(not(windows), allow(clippy::needless_pass_by_ref_mut))] // needed for start_kill()\n#[instrument(level = \"trace\")]\nasync fn signal_child(\n\tsignal: Signal,\n\t#[cfg(not(test))] child: &mut Box<dyn process_wrap::tokio::ChildWrapper>,\n\t#[cfg(test)] child: &mut super::TestChild,\n) -> std::io::Result<()> {\n\t#[cfg(unix)]\n\t{\n\t\tlet sig = signal\n\t\t\t.to_nix()\n\t\t\t.or_else(|| Signal::Terminate.to_nix())\n\t\t\t.expect(\"UNWRAP: guaranteed for Signal::Terminate default\");\n\t\ttrace!(signal=?sig, \"sending signal\");\n\t\tchild.signal(sig as _)?;\n\t}\n\n\t#[cfg(windows)]\n\tif signal == Signal::ForceStop {\n\t\ttrace!(\"starting kill, without waiting\");\n\t\tchild.start_kill()?;\n\t} else {\n\t\ttrace!(?signal, \"ignoring unsupported signal\");\n\t}\n\n\tOk(())\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/test.rs",
    "content": "#![allow(clippy::unwrap_used)]\n\nuse std::{\n\tnum::NonZeroI64,\n\tprocess::{ExitStatus, Output},\n\tsync::{\n\t\tatomic::{AtomicBool, Ordering},\n\t\tArc, Mutex,\n\t},\n\ttime::{Duration, Instant},\n};\n\nuse tokio::time::sleep;\nuse watchexec_events::ProcessEnd;\n\n#[cfg(unix)]\nuse crate::job::TestChildCall;\nuse crate::{\n\tcommand::{Command, Program},\n\tjob::{start_job, CommandState},\n};\n\nuse super::{Control, Job, Priority};\n\nconst GRACE: u64 = 10; // millis\n\nfn erroring_command() -> Arc<Command> {\n\tArc::new(Command {\n\t\tprogram: Program::Exec {\n\t\t\tprog: \"/does/not/exist\".into(),\n\t\t\targs: Vec::new(),\n\t\t},\n\t\toptions: Default::default(),\n\t})\n}\n\nfn working_command() -> Arc<Command> {\n\tArc::new(Command {\n\t\tprogram: Program::Exec {\n\t\t\tprog: \"/does/not/run\".into(),\n\t\t\targs: Vec::new(),\n\t\t},\n\t\toptions: Default::default(),\n\t})\n}\n\nfn ungraceful_command() -> Arc<Command> {\n\tArc::new(Command {\n\t\tprogram: Program::Exec {\n\t\t\tprog: \"sleep\".into(),\n\t\t\targs: vec![(GRACE * 2).to_string()],\n\t\t},\n\t\toptions: Default::default(),\n\t})\n}\n\nfn graceful_command() -> Arc<Command> {\n\tArc::new(Command {\n\t\tprogram: Program::Exec {\n\t\t\tprog: \"sleep\".into(),\n\t\t\targs: vec![(2 * GRACE / 3).to_string()],\n\t\t},\n\t\toptions: Default::default(),\n\t})\n}\n\n#[tokio::test]\nasync fn sync_error_handler() {\n\tlet (job, task) = start_job(erroring_command());\n\tlet error_handler_called = Arc::new(AtomicBool::new(false));\n\n\tjob.set_error_handler({\n\t\tlet error_handler_called = error_handler_called.clone();\n\t\tmove |_| {\n\t\t\terror_handler_called.store(true, Ordering::Relaxed);\n\t\t}\n\t})\n\t.await;\n\n\tjob.start().await;\n\n\tassert!(\n\t\terror_handler_called.load(Ordering::Relaxed),\n\t\t\"called on start\"\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn async_error_handler() {\n\tlet (job, task) = start_job(erroring_command());\n\tlet error_handler_called = 
Arc::new(AtomicBool::new(false));\n\n\tjob.set_async_error_handler({\n\t\tlet error_handler_called = error_handler_called.clone();\n\t\tmove |_| {\n\t\t\tlet error_handler_called = error_handler_called.clone();\n\t\t\tBox::new(async move {\n\t\t\t\terror_handler_called.store(true, Ordering::Relaxed);\n\t\t\t})\n\t\t}\n\t})\n\t.await;\n\n\tjob.start().await;\n\n\tassert!(\n\t\terror_handler_called.load(Ordering::Relaxed),\n\t\t\"called on start\"\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn unset_error_handler() {\n\tlet (job, task) = start_job(erroring_command());\n\tlet error_handler_called = Arc::new(AtomicBool::new(false));\n\n\tjob.set_error_handler({\n\t\tlet error_handler_called = error_handler_called.clone();\n\t\tmove |_| {\n\t\t\terror_handler_called.store(true, Ordering::Relaxed);\n\t\t}\n\t})\n\t.await;\n\n\tjob.unset_error_handler().await;\n\n\tjob.start().await;\n\n\tassert!(\n\t\t!error_handler_called.load(Ordering::Relaxed),\n\t\t\"not called even after start\"\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn queue_ordering() {\n\tlet (job, task) = start_job(working_command());\n\tlet error_handler_called = Arc::new(AtomicBool::new(false));\n\n\tjob.set_error_handler({\n\t\tlet error_handler_called = error_handler_called.clone();\n\t\tmove |_| {\n\t\t\terror_handler_called.store(true, Ordering::Relaxed);\n\t\t}\n\t});\n\n\tjob.unset_error_handler();\n\n\t// We're not awaiting until this one, but because the queue is processed in\n\t// order, it's effectively the same as waiting them all.\n\tjob.start().await;\n\n\tassert!(\n\t\t!error_handler_called.load(Ordering::Relaxed),\n\t\t\"called after queue await\"\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn sync_func() {\n\tlet (job, task) = start_job(working_command());\n\tlet func_called = Arc::new(AtomicBool::new(false));\n\n\tlet ticket = job.run({\n\t\tlet func_called = func_called.clone();\n\t\tmove |_| {\n\t\t\tfunc_called.store(true, 
Ordering::Relaxed);\n\t\t}\n\t});\n\n\tassert!(\n\t\t!func_called.load(Ordering::Relaxed),\n\t\t\"immediately after submit, likely before processed\"\n\t);\n\n\tticket.await;\n\tassert!(\n\t\tfunc_called.load(Ordering::Relaxed),\n\t\t\"after it's been processed\"\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn async_func() {\n\tlet (job, task) = start_job(working_command());\n\tlet func_called = Arc::new(AtomicBool::new(false));\n\n\tlet ticket = job.run_async({\n\t\tlet func_called = func_called.clone();\n\t\tmove |_| {\n\t\t\tlet func_called = func_called;\n\t\t\tBox::new(async move {\n\t\t\t\tfunc_called.store(true, Ordering::Relaxed);\n\t\t\t})\n\t\t}\n\t});\n\n\tassert!(\n\t\t!func_called.load(Ordering::Relaxed),\n\t\t\"immediately after submit, likely before processed\"\n\t);\n\n\tticket.await;\n\tassert!(\n\t\tfunc_called.load(Ordering::Relaxed),\n\t\t\"after it's been processed\"\n\t);\n\n\ttask.abort();\n}\n\n// TODO: figure out how to test spawn hooks\n\nasync fn refresh_state(job: &Job, state: &Arc<Mutex<Option<CommandState>>>, current: bool) {\n\tjob.send_controls(\n\t\t[Control::SyncFunc(Box::new({\n\t\t\tlet state = state.clone();\n\t\t\tmove |context| {\n\t\t\t\tif current {\n\t\t\t\t\tstate.lock().unwrap().replace(context.current.clone());\n\t\t\t\t} else {\n\t\t\t\t\t*state.lock().unwrap() = context.previous.cloned();\n\t\t\t\t}\n\t\t\t}\n\t\t}))],\n\t\tPriority::Urgent,\n\t)\n\t.await;\n}\n\nasync fn set_running_child_status(job: &Job, status: ExitStatus) {\n\tjob.send_controls(\n\t\t[Control::AsyncFunc(Box::new({\n\t\t\tmove |context| {\n\t\t\t\tlet output_lock = if let CommandState::Running { child, .. 
} = context.current {\n\t\t\t\t\tSome(child.output.clone())\n\t\t\t\t} else {\n\t\t\t\t\tNone\n\t\t\t\t};\n\n\t\t\t\tBox::new(async move {\n\t\t\t\t\tif let Some(output_lock) = output_lock {\n\t\t\t\t\t\t*output_lock.lock().await = Some(Output {\n\t\t\t\t\t\t\tstatus,\n\t\t\t\t\t\t\tstdout: Vec::new(),\n\t\t\t\t\t\t\tstderr: Vec::new(),\n\t\t\t\t\t\t});\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t}\n\t\t}))],\n\t\tPriority::Urgent,\n\t)\n\t.await;\n}\n\nmacro_rules! expect_state {\n\t($current:literal, $job:expr, $expected:pat, $reason:literal) => {\n\t\tlet state = Arc::new(Mutex::new(None));\n\t\trefresh_state(&$job, &state, $current).await;\n\t\t{\n\t\t\tlet state = state.lock().unwrap();\n\t\t\tlet reason = $reason;\n\t\t\tlet reason = if reason.is_empty() {\n\t\t\t\tString::new()\n\t\t\t} else {\n\t\t\t\tformat!(\" ({reason})\")\n\t\t\t};\n\t\t\tassert!(\n\t\t\t\tmatches!(*state, Some($expected)),\n\t\t\t\t\"expected Some({}), got {state:?}{reason}\",\n\t\t\t\tstringify!($expected),\n\t\t\t);\n\t\t}\n\t};\n\n\t($job:expr, $expected:pat, $reason:literal) => {\n\t\texpect_state!(true, $job, $expected, $reason)\n\t};\n\n\t($job:expr, $expected:pat) => {\n\t\texpect_state!(true, $job, $expected, \"\")\n\t};\n\n\t(previous: $job:expr, $expected:pat, $reason:literal) => {\n\t\texpect_state!(false, $job, $expected, $reason)\n\t};\n\n\t(previous: $job:expr, $expected:pat) => {\n\t\texpect_state!(false, $job, $expected, \"\")\n\t};\n}\n\n#[cfg(unix)]\nasync fn get_child(job: &Job) -> super::TestChild {\n\tlet state = Arc::new(Mutex::new(None));\n\trefresh_state(job, &state, true).await;\n\tlet state = state.lock().unwrap();\n\tlet state = state.as_ref().expect(\"no state\");\n\tmatch state {\n\t\tCommandState::Running { ref child, .. 
} => child.clone(),\n\t\t_ => panic!(\"get_child: expected IsRunning, got {state:?}\"),\n\t}\n}\n\n#[tokio::test]\nasync fn start() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\ttask.abort();\n}\n\n#[cfg(unix)]\n#[tokio::test]\nasync fn signal_unix() {\n\tuse nix::sys::signal::Signal;\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start();\n\tjob.signal(watchexec_signals::Signal::User1).await;\n\n\tlet calls = get_child(&job).await.calls;\n\tassert!(calls.iter().any(\n\t\t|(_, call)| matches!(call, TestChildCall::Signal(sig) if *sig == Signal::SIGUSR1 as i32)\n\t));\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn stop() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn stop_when_running() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.stop().await;\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn stop_fail() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. 
});\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn restart() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tjob.restart().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn graceful_stop() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tlet stop = job.stop_with_signal(\n\t\twatchexec_signals::Signal::Terminate,\n\t\tDuration::from_millis(GRACE),\n\t);\n\n\tsleep(Duration::from_millis(GRACE / 2)).await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished { .. },\n\t\t\"after signal but before delayed force-stop\"\n\t);\n\n\tstop.await;\n\n\texpect_state!(job, CommandState::Finished { .. 
});\n\n\ttask.abort();\n}\n\n/// Regression test for https://github.com/watchexec/watchexec/issues/981\n///\n/// When a process responds to SIGTERM gracefully and the Job handle is dropped,\n/// the task should exit cleanly without panicking.\n#[tokio::test]\nasync fn graceful_stop_with_job_dropped() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\t// Start graceful stop but don't await the ticket\n\tlet _stop = job.stop_with_signal(\n\t\twatchexec_signals::Signal::Terminate,\n\t\tDuration::from_millis(GRACE),\n\t);\n\n\t// Give the task time to process the graceful stop\n\tsleep(Duration::from_millis(GRACE / 2)).await;\n\n\t// Drop the job handle (simulating the caller losing interest)\n\t// This closes all channels to the task\n\tdrop(job);\n\n\t// The task should exit cleanly without panicking\n\t// Previously this would panic with \"all branches are disabled and there is no else branch\"\n\ttokio::time::timeout(Duration::from_millis(GRACE * 10), task)\n\t\t.await\n\t\t.expect(\"task should complete within timeout\")\n\t\t.expect(\"task should not panic\");\n}\n\n#[tokio::test]\nasync fn graceful_restart() {\n\tlet (job, task) = start_job(working_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. 
});\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tjob.restart_with_signal(\n\t\twatchexec_signals::Signal::Terminate,\n\t\tDuration::from_millis(GRACE),\n\t)\n\t.await;\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn graceful_stop_beyond_grace() {\n\tlet (job, task) = start_job(ungraceful_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tlet stop = job.stop_with_signal(\n\t\twatchexec_signals::Signal::User1,\n\t\tDuration::from_millis(GRACE),\n\t);\n\n\t#[cfg(unix)]\n\t{\n\t\tuse nix::sys::signal::Signal;\n\t\texpect_state!(\n\t\t\tjob,\n\t\t\tCommandState::Running { .. },\n\t\t\t\"after USR1 but before delayed stop\"\n\t\t);\n\n\t\tlet calls = get_child(&job).await.calls;\n\t\tassert!(calls.iter().any(|(_, call)| matches!(\n\t\t\tcall,\n\t\t\tTestChildCall::Signal(sig) if *sig == Signal::SIGUSR1 as i32\n\t\t)));\n\t}\n\n\tstop.await;\n\n\texpect_state!(job, CommandState::Finished { .. });\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn graceful_restart_beyond_grace() {\n\tlet (job, task) = start_job(ungraceful_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. 
});\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tlet restart = job.restart_with_signal(\n\t\twatchexec_signals::Signal::User1,\n\t\tDuration::from_millis(GRACE),\n\t);\n\n\t#[cfg(unix)]\n\t{\n\t\tuse nix::sys::signal::Signal;\n\t\texpect_state!(\n\t\t\tjob,\n\t\t\tCommandState::Running { .. },\n\t\t\t\"after USR1 but before delayed restart\"\n\t\t);\n\n\t\tlet calls = get_child(&job).await.calls;\n\t\tassert!(calls.iter().any(|(_, call)| matches!(\n\t\t\tcall,\n\t\t\tTestChildCall::Signal(sig) if *sig == Signal::SIGUSR1 as i32\n\t\t)));\n\t}\n\n\trestart.await;\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn try_restart() {\n\tlet (job, task) = start_job(graceful_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.try_restart().await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Pending,\n\t\t\"command still not running after try-restart\"\n\t);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tlet try_restart = job.try_restart();\n\n\teprintln!(\"[{:?}] test: await try_restart\", Instant::now());\n\ttry_restart.await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished { .. }\n\t);\n\n\texpect_state!(job, CommandState::Finished { .. 
});\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn try_graceful_restart() {\n\tlet (job, task) = start_job(graceful_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.try_restart_with_signal(\n\t\twatchexec_signals::Signal::User1,\n\t\tDuration::from_millis(GRACE),\n\t)\n\t.await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Pending,\n\t\t\"command still not running after try-graceful-restart\"\n\t);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tlet restart = job.try_restart_with_signal(\n\t\twatchexec_signals::Signal::User1,\n\t\tDuration::from_millis(GRACE),\n\t);\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\teprintln!(\"[{:?}] await restart\", Instant::now());\n\trestart.await;\n\teprintln!(\"[{:?}] awaited restart\", Instant::now());\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn try_restart_beyond_grace() {\n\tlet (job, task) = start_job(ungraceful_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.try_restart().await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Pending,\n\t\t\"command still not running after try-restart\"\n\t);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tjob.try_restart().await;\n\n\texpect_state!(job, CommandState::Running { .. 
});\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n\n#[tokio::test]\nasync fn try_graceful_restart_beyond_grace() {\n\tlet (job, task) = start_job(ungraceful_command());\n\n\texpect_state!(job, CommandState::Pending);\n\n\tjob.try_restart_with_signal(\n\t\twatchexec_signals::Signal::User1,\n\t\tDuration::from_millis(GRACE),\n\t)\n\t.await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Pending,\n\t\t\"command still not running after try-graceful-restart\"\n\t);\n\n\tjob.start().await;\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(\n\t\t&job,\n\t\tProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),\n\t)\n\t.await;\n\n\tlet restart = job.try_restart_with_signal(\n\t\twatchexec_signals::Signal::User1,\n\t\tDuration::from_millis(GRACE),\n\t);\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\trestart.await;\n\n\texpect_state!(\n\t\tprevious: job,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::ExitError(_),\n\t\t\t..\n\t\t}\n\t);\n\n\texpect_state!(job, CommandState::Running { .. });\n\n\tset_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;\n\n\tjob.stop().await;\n\n\texpect_state!(\n\t\tjob,\n\t\tCommandState::Finished {\n\t\t\tstatus: ProcessEnd::Success,\n\t\t\t..\n\t\t}\n\t);\n\n\ttask.abort();\n}\n"
  },
  {
    "path": "crates/supervisor/src/job/testchild.rs",
    "content": "use std::{\n\tfuture::Future,\n\tio::Result,\n\tpath::Path,\n\tpin::Pin,\n\tprocess::{ExitStatus, Output},\n\tsync::Arc,\n\ttime::{Duration, Instant},\n};\n\nuse tokio::{sync::Mutex, time::sleep};\nuse watchexec_events::ProcessEnd;\n\nuse crate::command::{Command, Program};\n\n/// Mock version of [`TokioChildWrapper`](process_wrap::tokio::TokioChildWrapper).\n#[derive(Debug, Clone)]\npub struct TestChild {\n\tpub grouped: bool,\n\tpub command: Arc<Command>,\n\tpub calls: Arc<boxcar::Vec<TestChildCall>>,\n\tpub output: Arc<Mutex<Option<Output>>>,\n\tpub spawned: Instant,\n}\n\nimpl TestChild {\n\tpub fn new(command: Arc<Command>) -> std::io::Result<Self> {\n\t\tif let Program::Exec { prog, .. } = &command.program {\n\t\t\tif prog == Path::new(\"/does/not/exist\") {\n\t\t\t\treturn Err(std::io::Error::new(\n\t\t\t\t\tstd::io::ErrorKind::NotFound,\n\t\t\t\t\t\"file not found\",\n\t\t\t\t));\n\t\t\t}\n\t\t}\n\n\t\tOk(Self {\n\t\t\tgrouped: command.options.grouped || command.options.session,\n\t\t\tcommand,\n\t\t\tcalls: Arc::new(boxcar::Vec::new()),\n\t\t\toutput: Arc::new(Mutex::new(None)),\n\t\t\tspawned: Instant::now(),\n\t\t})\n\t}\n}\n\n#[derive(Debug)]\npub enum TestChildCall {\n\tId,\n\tKill,\n\tStartKill,\n\tTryWait,\n\tWait,\n\t#[cfg(unix)]\n\tSignal(i32),\n}\n\n// Exact same signatures as ErasedChild\nimpl TestChild {\n\tpub fn id(&mut self) -> Option<u32> {\n\t\tself.calls.push(TestChildCall::Id);\n\t\tNone\n\t}\n\n\tpub fn kill(&mut self) -> Box<dyn Future<Output = Result<()>> + Send + '_> {\n\t\tself.calls.push(TestChildCall::Kill);\n\t\tBox::new(async { Ok(()) })\n\t}\n\n\tpub fn start_kill(&mut self) -> Result<()> {\n\t\tself.calls.push(TestChildCall::StartKill);\n\t\tOk(())\n\t}\n\n\tpub fn try_wait(&mut self) -> Result<Option<ExitStatus>> {\n\t\tself.calls.push(TestChildCall::TryWait);\n\n\t\tif let Program::Exec { prog, args } = &self.command.program {\n\t\t\tif prog == Path::new(\"sleep\") {\n\t\t\t\tif let Some(time) = 
args\n\t\t\t\t\t.first()\n\t\t\t\t\t.and_then(|arg| arg.parse().ok())\n\t\t\t\t\t.map(Duration::from_millis)\n\t\t\t\t{\n\t\t\t\t\tif self.spawned.elapsed() < time {\n\t\t\t\t\t\treturn Ok(None);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tOk(self\n\t\t\t.output\n\t\t\t.try_lock()\n\t\t\t.ok()\n\t\t\t.and_then(|o| o.as_ref().map(|o| o.status)))\n\t}\n\n\tpub fn wait(&mut self) -> Pin<Box<dyn Future<Output = Result<ExitStatus>> + Send + '_>> {\n\t\tself.calls.push(TestChildCall::Wait);\n\t\tBox::pin(async {\n\t\t\tif let Program::Exec { prog, args } = &self.command.program {\n\t\t\t\tif prog == Path::new(\"sleep\") {\n\t\t\t\t\tif let Some(time) = args\n\t\t\t\t\t\t.first()\n\t\t\t\t\t\t.and_then(|arg| arg.parse().ok())\n\t\t\t\t\t\t.map(Duration::from_millis)\n\t\t\t\t\t{\n\t\t\t\t\t\tif self.spawned.elapsed() < time {\n\t\t\t\t\t\t\tsleep(time - self.spawned.elapsed()).await;\n\t\t\t\t\t\t\tif let Ok(guard) = self.output.try_lock() {\n\t\t\t\t\t\t\t\tif let Some(output) = guard.as_ref() {\n\t\t\t\t\t\t\t\t\treturn Ok(output.status);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\treturn Ok(ProcessEnd::Success.into_exitstatus());\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tloop {\n\t\t\t\teprintln!(\"[{:?}] child: output lock\", Instant::now());\n\t\t\t\tlet output = self.output.lock().await;\n\t\t\t\tif let Some(output) = output.as_ref() {\n\t\t\t\t\treturn Ok(output.status);\n\t\t\t\t}\n\t\t\t\teprintln!(\"[{:?}] child: output unlock\", Instant::now());\n\n\t\t\t\tsleep(Duration::from_secs(1)).await;\n\t\t\t}\n\t\t})\n\t}\n\n\tpub fn wait_with_output(self) -> Box<dyn Future<Output = Result<Output>> + Send> {\n\t\tBox::new(async move {\n\t\t\tloop {\n\t\t\t\tlet mut output = self.output.lock().await;\n\t\t\t\tif let Some(output) = output.take() {\n\t\t\t\t\treturn Ok(output);\n\t\t\t\t} else {\n\t\t\t\t\tsleep(Duration::from_secs(1)).await;\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n\n\t#[cfg(unix)]\n\tpub fn signal(&self, sig: i32) -> Result<()> 
{\n\t\tself.calls.push(TestChildCall::Signal(sig));\n\t\tOk(())\n\t}\n}\n"
  },
  {
    "path": "crates/supervisor/src/job.rs",
    "content": "//! Job supervision.\n\n#[doc(inline)]\npub use self::{\n\tjob::Job,\n\tmessages::{Control, Ticket},\n\tstate::CommandState,\n\ttask::{JobTaskContext, SpawnFn},\n};\n\n#[cfg(test)]\npub(crate) use self::{priority::Priority, testchild::TestChild};\n\n#[cfg(all(unix, test))]\npub(crate) use self::testchild::TestChildCall;\n\n#[doc(inline)]\npub use task::start_job;\n\n#[allow(clippy::module_inception)]\nmod job;\nmod messages;\nmod priority;\nmod state;\nmod task;\n\n#[cfg(test)]\nmod testchild;\n\n#[cfg(test)]\nmod test;\n"
  },
  {
    "path": "crates/supervisor/src/lib.rs",
    "content": "//! Watchexec's process supervisor.\n//!\n//! This crate implements the process supervisor for Watchexec. It is responsible for spawning and\n//! managing processes, and for sending events to them.\n//!\n//! You may use this crate to implement your own process supervisor, but keep in mind its direction\n//! will always primarily be driven by the needs of Watchexec itself.\n//!\n//! # Usage\n//!\n//! There is no struct or implementation of a single supervisor, as the particular needs of the\n//! application will dictate how that is designed. Instead, this crate provides a [`Job`](job::Job)\n//! construct, which is a handle to a single [`Command`](command::Command), and manages its\n//! lifecycle. The `Job` API has been modeled after the `systemctl` set of commands for service\n//! control, with operations for starting, stopping, restarting, sending signals, waiting for the\n//! process to complete, etc.\n//!\n//! There are also methods for running hooks within the job's runtime task, and for handling errors.\n//!\n//! # Theory of Operation\n//!\n//! A [`Job`](job::Job) is, properly speaking, a handle which lets one control a Tokio task. That\n//! task is spawned on the Tokio runtime, and so runs in the background. A `Job` takes as input a\n//! [`Command`](command::Command), which describes how to start a single process, through either a\n//! shell command or a direct executable invocation, and if the process should be grouped (using\n//! [`process-wrap`](process_wrap)) or not.\n//!\n//! The job's task runs an event loop on two sources: the process's `wait()` (i.e. when the process\n//! ends) and the job's control queue. The control queue is a hybrid MPSC queue, with three priority\n//! levels and a timer. When the timer is active, the lowest (\"Normal\") priority queue is disabled.\n//! This is an internal detail which serves to implement graceful stops and restarts. The internals\n//! 
of the job's task are not available to the API user; actions and queries are performed by\n//! sending messages on this control queue.\n//!\n//! The control queue is processed in priority order, and in FIFO order within each priority.\n//! Sending a control to\n//! the task returns a [`Ticket`](job::Ticket), which is a future that resolves when the control has\n//! been processed. Dropping the ticket will not cancel the control. This provides two complementary\n//! ways to orchestrate actions: queueing controls in the desired order if there is no need for\n//! branching flow or for signaling, and sending controls or performing other actions after awaiting\n//! tickets.\n//!\n//! Do note that both of these can be used together. There is no need for the below pattern:\n//!\n//! ```no_run\n//! # #[tokio::main(flavor = \"current_thread\")] async fn main() { // single-threaded for doctest only\n//! # use std::sync::Arc;\n//! # use watchexec_supervisor::Signal;\n//! # use watchexec_supervisor::command::{Command, Program};\n//! # use watchexec_supervisor::job::{CommandState, start_job};\n//! #\n//! # let (job, task) = start_job(Arc::new(Command { program: Program::Exec { prog: \"/bin/date\".into(), args: Vec::new() }.into(), options: Default::default() }));\n//! #\n//! job.start().await;\n//! job.signal(Signal::User1).await;\n//! job.stop().await;\n//! # task.abort();\n//! # }\n//! ```\n//!\n//! Because of ordering, it behaves the same as this:\n//!\n//! ```no_run\n//! # #[tokio::main(flavor = \"current_thread\")] async fn main() { // single-threaded for doctest only\n//! # use std::sync::Arc;\n//! # use watchexec_supervisor::Signal;\n//! # use watchexec_supervisor::command::{Command, Program};\n//! # use watchexec_supervisor::job::{CommandState, start_job};\n//! #\n//! # let (job, task) = start_job(Arc::new(Command { program: Program::Exec { prog: \"/bin/date\".into(), args: Vec::new() }.into(), options: Default::default() }));\n//! #\n//! job.start();\n//! job.signal(Signal::User1);\n//! 
job.stop().await; // here, all of start(), signal(), and stop() will have run in order\n//! # task.abort();\n//! # }\n//! ```\n//!\n//! However, this is a different program:\n//!\n//! ```no_run\n//! # #[tokio::main(flavor = \"current_thread\")] async fn main() { // single-threaded for doctest only\n//! # use std::sync::Arc;\n//! # use std::time::Duration;\n//! # use tokio::time::sleep;\n//! # use watchexec_supervisor::Signal;\n//! # use watchexec_supervisor::command::{Command, Program};\n//! # use watchexec_supervisor::job::{CommandState, start_job};\n//! #\n//! # let (job, task) = start_job(Arc::new(Command { program: Program::Exec { prog: \"/bin/date\".into(), args: Vec::new() }.into(), options: Default::default() }));\n//! #\n//! job.start().await;\n//! println!(\"program started!\");\n//! sleep(Duration::from_secs(5)).await; // wait until program is fully started\n//!\n//! job.signal(Signal::User1).await;\n//! sleep(Duration::from_millis(150)).await; // wait until program has dumped stats\n//! println!(\"program stats dumped via USR1 signal!\");\n//!\n//! job.stop().await;\n//! println!(\"program stopped\");\n//! #\n//! # task.abort();\n//! # }\n//! ```\n//!\n//! # Example\n//!\n//! ```no_run\n//! # #[tokio::main(flavor = \"current_thread\")] async fn main() { // single-threaded for doctest only\n//! # use std::sync::Arc;\n//! use watchexec_supervisor::Signal;\n//! use watchexec_supervisor::command::{Command, Program};\n//! use watchexec_supervisor::job::{CommandState, start_job};\n//!\n//! let (job, task) = start_job(Arc::new(Command {\n//!     program: Program::Exec {\n//!         prog: \"/bin/date\".into(),\n//!         args: Vec::new(),\n//!     }.into(),\n//!     options: Default::default(),\n//! }));\n//!\n//! job.start().await;\n//! job.signal(Signal::User1).await;\n//! job.stop().await;\n//!\n//! job.delete_now().await;\n//!\n//! task.await; // make sure the task is fully cleaned up\n//! # }\n//! 
```\n\n#![doc(html_favicon_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![doc(html_logo_url = \"https://watchexec.github.io/logo:watchexec.svg\")]\n#![warn(clippy::unwrap_used, missing_docs, rustdoc::unescaped_backticks)]\n#![cfg_attr(not(test), warn(unused_crate_dependencies))]\n#![deny(rust_2018_idioms)]\n\n#[doc(no_inline)]\npub use watchexec_events::ProcessEnd;\n#[doc(no_inline)]\npub use watchexec_signals::Signal;\n\npub mod command;\npub mod errors;\npub mod job;\n\nmod flag;\n"
  },
  {
    "path": "crates/supervisor/tests/programs.rs",
    "content": "use watchexec_supervisor::command::{Command, Program, Shell};\n\n#[tokio::test]\n#[cfg(unix)]\nasync fn unix_shell_none() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Exec {\n\t\t\tprog: \"echo\".into(),\n\t\t\targs: vec![\"hi\".into()],\n\t\t},\n\t\toptions: Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n\n#[tokio::test]\n#[cfg(unix)]\nasync fn unix_shell_sh() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Shell {\n\t\t\tshell: Shell::new(\"sh\"),\n\t\t\tcommand: \"echo hi\".into(),\n\t\t\targs: Vec::new(),\n\t\t},\n\t\toptions: Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n\n#[tokio::test]\n#[cfg(unix)]\nasync fn unix_shell_alternate() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Shell {\n\t\t\tshell: Shell::new(\"bash\"),\n\t\t\tcommand: \"echo\".into(),\n\t\t\targs: vec![\"--\".into(), \"hi\".into()],\n\t\t},\n\t\toptions: Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n\n#[tokio::test]\n#[cfg(unix)]\nasync fn unix_shell_alternate_shopts() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Shell {\n\t\t\tshell: Shell {\n\t\t\t\toptions: vec![\"-o\".into(), \"errexit\".into()],\n\t\t\t\t..Shell::new(\"bash\")\n\t\t\t},\n\t\t\tcommand: \"echo hi\".into(),\n\t\t\targs: Vec::new(),\n\t\t},\n\t\toptions: Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n\n#[tokio::test]\n#[cfg(windows)]\nasync fn windows_shell_none() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Exec {\n\t\t\tprog: \"echo\".into(),\n\t\t\targs: vec![\"hi\".into()],\n\t\t},\n\t\toptions: 
Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n\n#[tokio::test]\n#[cfg(windows)]\nasync fn windows_shell_cmd() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Shell {\n\t\t\tshell: Shell::cmd(),\n\t\t\targs: Vec::new(),\n\t\t\tcommand: r#\"\"echo\" hi\"#.into()\n\t\t},\n\t\toptions: Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n\n#[tokio::test]\n#[cfg(windows)]\nasync fn windows_shell_powershell() -> Result<(), std::io::Error> {\n\tassert!(Command {\n\t\tprogram: Program::Shell {\n\t\t\tshell: Shell::new(\"pwsh.exe\"),\n\t\t\targs: Vec::new(),\n\t\t\tcommand: \"echo hi\".into()\n\t\t},\n\t\toptions: Default::default()\n\t}\n\t.to_spawnable()\n\t.spawn()?\n\t.wait()\n\t.await?\n\t.success());\n\tOk(())\n}\n"
  },
  {
    "path": "crates/test-socketfd/Cargo.toml",
    "content": "[package]\nname = \"test-socketfd\"\nversion = \"0.0.0\"\npublish = false\n\nauthors = [\"Félix Saparelli <felix@passcod.name>\"]\nlicense = \"Apache-2.0 OR MIT\"\ndescription = \"Test program for --socket\"\n\nedition = \"2021\"\n\n[dependencies]\nlistenfd = \"1.0.2\"\n\n[lints.clippy]\nnursery = \"warn\"\npedantic = \"warn\"\n"
  },
  {
    "path": "crates/test-socketfd/README.md",
"content": "This is a testing tool for the `--socket` option, which can also be used by third parties to check compatibility.\n\n## Install\n\n```console\ncargo install --git https://github.com/watchexec/watchexec test-socketfd\n```\n\n## Usage\n\nPrint the control env variables and the number of available sockets:\n\n```\ntest-socketfd\n```\n\nValidate that one TCP socket and one UDP socket are available, in this order:\n\n```\ntest-socketfd tcp udp\n```\n\nThe tool also supports `unix-stream`, `unix-datagram`, and `unix-raw` on unix, even if watchexec itself doesn't.\nThese correspond to the `ListenFd` methods here: https://docs.rs/listenfd/latest/listenfd/struct.ListenFd.html\n"
  },
  {
    "path": "crates/test-socketfd/src/main.rs",
    "content": "use std::{\n\tenv::{args, var},\n\tio::ErrorKind,\n};\n\nuse listenfd::ListenFd;\n\nfn main() {\n\teprintln!(\"LISTEN_FDS={:?}\", var(\"LISTEN_FDS\"));\n\teprintln!(\"LISTEN_FDS_FIRST_FD={:?}\", var(\"LISTEN_FDS_FIRST_FD\"));\n\teprintln!(\"LISTEN_PID={:?}\", var(\"LISTEN_PID\"));\n\teprintln!(\"SYSTEMFD_SOCKET_SERVER={:?}\", var(\"SYSTEMFD_SOCKET_SERVER\"));\n\teprintln!(\"SYSTEMFD_SOCKET_SECRET={:?}\", var(\"SYSTEMFD_SOCKET_SECRET\"));\n\n\tlet mut listenfd = ListenFd::from_env();\n\tprintln!(\"\\n{} sockets available\\n\", listenfd.len());\n\n\tfor (n, arg) in args().skip(1).enumerate() {\n\t\tmatch arg.as_str() {\n\t\t\t\"tcp\" => {\n\t\t\t\tif let Ok(addr) = listenfd\n\t\t\t\t\t.take_tcp_listener(n)\n\t\t\t\t\t.and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into()))\n\t\t\t\t\t.expect(&format!(\"expected TCP listener at FD#{n}\"))\n\t\t\t\t\t.local_addr()\n\t\t\t\t{\n\t\t\t\t\tprintln!(\"obtained TCP listener at FD#{n}, at addr {addr:?}\");\n\t\t\t\t} else {\n\t\t\t\t\tprintln!(\"obtained TCP listener at FD#{n}, unknown addr\");\n\t\t\t\t}\n\t\t\t}\n\t\t\t\"udp\" => {\n\t\t\t\tif let Ok(addr) = listenfd\n\t\t\t\t\t.take_udp_socket(n)\n\t\t\t\t\t.and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into()))\n\t\t\t\t\t.expect(&format!(\"expected UDP socket at FD#{n}\"))\n\t\t\t\t\t.local_addr()\n\t\t\t\t{\n\t\t\t\t\tprintln!(\"obtained UDP socket at FD#{n}, at addr {addr:?}\");\n\t\t\t\t} else {\n\t\t\t\t\tprintln!(\"obtained UDP socket at FD#{n}, unknown addr\");\n\t\t\t\t}\n\t\t\t}\n\t\t\t#[cfg(unix)]\n\t\t\t\"unix-stream\" => {\n\t\t\t\tif let Ok(addr) = listenfd\n\t\t\t\t\t.take_unix_listener(n)\n\t\t\t\t\t.and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into()))\n\t\t\t\t\t.expect(&format!(\"expected Unix stream listener at FD#{n}\"))\n\t\t\t\t\t.local_addr()\n\t\t\t\t{\n\t\t\t\t\tprintln!(\"obtained Unix stream listener at FD#{n}, at addr {addr:?}\");\n\t\t\t\t} else {\n\t\t\t\t\tprintln!(\"obtained Unix stream listener at FD#{n}, unknown 
addr\");\n\t\t\t\t}\n\t\t\t}\n\t\t\t#[cfg(unix)]\n\t\t\t\"unix-datagram\" => {\n\t\t\t\tif let Ok(addr) = listenfd\n\t\t\t\t\t.take_unix_datagram(n)\n\t\t\t\t\t.and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into()))\n\t\t\t\t\t.expect(&format!(\"expected Unix datagram socket at FD#{n}\"))\n\t\t\t\t\t.local_addr()\n\t\t\t\t{\n\t\t\t\t\tprintln!(\"obtained Unix datagram socket at FD#{n}, at addr {addr:?}\");\n\t\t\t\t} else {\n\t\t\t\t\tprintln!(\"obtained Unix datagram socket at FD#{n}, unknown addr\");\n\t\t\t\t}\n\t\t\t}\n\t\t\t#[cfg(unix)]\n\t\t\t\"unix-raw\" => {\n\t\t\t\tlet raw = listenfd\n\t\t\t\t\t.take_raw_fd(n)\n\t\t\t\t\t.and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into()))\n\t\t\t\t\t.expect(&format!(\"expected Unix raw socket at FD#{n}\"));\n\t\t\t\tprintln!(\"obtained Unix raw socket at FD#{n}: {raw}\");\n\t\t\t}\n\t\t\tother => {\n\t\t\t\tif cfg!(unix) {\n\t\t\t\t\tpanic!(\"expected one of (tcp, udp, unix-stream, unix-datagram, unix-raw), found {other}\")\n\t\t\t\t} else {\n\t\t\t\t\tpanic!(\"expected one of (tcp, udp), found {other}\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "doc/packages.md",
"content": "# Known packages of Watchexec\n\nNote that only first-party packages are maintained here.\nAnyone is welcome to create and maintain a packaging of\nWatchexec for their platform/distribution and submit it\nto their upstreams, and anyone may submit a PR to update\nthis list. To report issues with non-first-party packages\n(outside of bugs that belong to Watchexec), contact the\nrelevant packager.\n\n| Platform | Distributor | Package name | Status | Install command |\n|:-|:-|:-:|:-:|-:|\n| Linux | _n/a_ (deb) | [`watchexec-{version}-{platform}.deb`](https://github.com/watchexec/watchexec/releases) | first-party | `dpkg -i watchexec-*.deb` |\n| Linux | _n/a_ (rpm) | [`watchexec-{version}-{platform}.rpm`](https://github.com/watchexec/watchexec/releases) | first-party | `dnf install watchexec-*.rpm` |\n| Linux | _n/a_ (tarball) | [`watchexec-{version}-{platform}.tar.xz`](https://github.com/watchexec/watchexec/releases) | first-party | `tar xf watchexec-*.tar.xz` |\n| Linux | Alpine | [`watchexec`](https://pkgs.alpinelinux.org/packages?name=watchexec) | official | `apk add watchexec` |\n| Linux | ALT Sisyphus | [`watchexec`](https://packages.altlinux.org/en/sisyphus/srpms/watchexec/) | official | `apt-get install watchexec` |\n| Linux | ~~[APT repo](https://apt.cli.rs) (Debian & Ubuntu)~~ | [`watchexec-cli`](https://apt.cli.rs) | defunct | |\n| Linux | Arch | [`watchexec`](https://archlinux.org/packages/extra/x86_64/watchexec/) | official | `pacman -S watchexec` |\n| Linux | Gentoo GURU | [`watchexec`](https://gpo.zugaina.org/Overlays/guru/app-misc/watchexec) | community | `emerge -av watchexec` |\n| Linux | GNU Guix | [`watchexec`](https://packages.guix.gnu.org/packages/watchexec/) | outdated | `guix install watchexec` |\n| Linux | LiGurOS | [`watchexec`](https://gitlab.com/liguros/liguros-repo/-/tree/stable/app-misc/watchexec) | official | `emerge -av watchexec` |\n| Linux | Manjaro | [`watchexec`](https://software.manjaro.org/package/watchexec) | 
official | `pamac install watchexec` |\n| Linux | Nix | [`watchexec`](https://search.nixos.org/packages?query=watchexec) | official | `nix-shell -p watchexec` |\n| Linux | openSUSE | [`watchexec`](https://software.opensuse.org/package/watchexec) | official | `zypper install watchexec` |\n| Linux | pacstall (Ubuntu) | [`watchexec-cli`](https://pacstall.dev/packages/watchexec-bin) | community | `pacstall -I watchexec-bin` |\n| Linux | Parabola | [`watchexec`](https://www.parabola.nu/packages/?q=watchexec) | official | `pacman -S watchexec` |\n| Linux | Solus | [`watchexec`](https://github.com/getsolus/packages/blob/main/packages/w/watchexec/package.yml) | official | `eopkg install watchexec` |\n| Linux | Termux (Android) | [`watchexec`](https://github.com/termux/termux-packages/blob/master/packages/watchexec/build.sh) | official | `pkg install watchexec` |\n| Linux | Void | [`watchexec`](https://github.com/void-linux/void-packages/tree/master/srcpkgs/watchexec) | official | `xbps-install watchexec` |\n| MacOS | _n/a_ (tarball) | [`watchexec-{version}-{platform}.tar.xz`](https://github.com/watchexec/watchexec/releases) | first-party | `tar xf watchexec-*.tar.xz` |\n| MacOS | Homebrew | [`watchexec`](https://formulae.brew.sh/formula/watchexec) | official | `brew install watchexec` |\n| MacOS | MacPorts | [`watchexec`](https://ports.macports.org/port/watchexec/summary/) | official | `port install watchexec` |\n| Windows | _n/a_ (zip) | [`watchexec-{version}-{platform}.zip`](https://github.com/watchexec/watchexec/releases) | first-party | `Expand-Archive -Path watchexec-*.zip` |\n| Windows | Baulk | [`watchexec`](https://github.com/baulk/bucket/blob/master/bucket/watchexec.json) | official | `baulk install watchexec` |\n| Windows | Chocolatey | [`watchexec`](https://community.chocolatey.org/packages/watchexec) | community | `choco install watchexec` |\n| Windows | MSYS2 mingw | 
[`mingw-w64-watchexec`](https://github.com/msys2/MINGW-packages/blob/master/mingw-w64-watchexec) | official | `pacman -S mingw-w64-x86_64-watchexec` |\n| Windows | Scoop | [`watchexec`](https://github.com/ScoopInstaller/Main/blob/master/bucket/watchexec.json) | official | `scoop install watchexec` |\n| _Any_ | Crates.io | [`watchexec-cli`](https://crates.io/crates/watchexec-cli) | first-party | `cargo install --locked watchexec-cli` |\n| _Any_ | Binstall | [`watchexec-cli`](https://crates.io/crates/watchexec-cli) | first-party | `cargo binstall watchexec-cli` |\n| _Any_ | Webi | [`watchexec`](https://webinstall.dev/watchexec/) | third-party | varies (see webpage) |\n\nLegend:\n- first-party: packaged and distributed by the Watchexec developers (in this repo)\n- official: packaged and distributed by the official package team for the listed distribution\n- community: packaged by a community member or organisation, outside of the official distribution\n- third-party: a redistribution of another package (e.g. using the first-party tarballs via a non-first-party installer)\n- outdated: an official or community packaging that is severely outdated (not just a couple releases out)\n"
  },
  {
    "path": "doc/socket.md",
"content": "# `--socket`, `systemd.socket`, `systemfd`\n\nThe `--socket` option is a lightweight version of [the `systemfd` tool][systemfd], which itself is\nan implementation of [systemd's socket activation feature][systemd sockets], which itself is a\nreimagination of earlier socket activation efforts, such as inetd and launchd.\n\nAll three of these are compatible with each other in some ways.\nThis document attempts to describe the commonalities and specify minimum behaviour that additional implementations should follow to keep compatibility.\nIt does not seek to establish authority over any project.\n\n[systemfd]: https://github.com/mitsuhiko/systemfd\n[systemd sockets]: https://0pointer.de/blog/projects/socket-activation.html\n\n## Basic principle of operation\n\nThere are two programs involved: a socket provider and a socket consumer.\n\nIn systemd, the provider is systemd itself, and the consumer is the main service process.\nIn watchexec (and systemfd), the provider is watchexec itself, and the consumer is the command it runs.\n\nThe provider creates a socket, binds it to an address, and then makes it available to the consumer.\nAn optional authentication layer prevents the wrong process from attaching to the wrong socket.\nThe consumer that obtains a socket is then able to listen on it.\nWhen the consumer exits, it doesn't close the socket; the provider then makes it available to the next instance.\n\nSocket activation is an advanced behaviour, where the provider listens on the socket itself and uses that to start the consumer service.\nAs the provider controls the socket, more behaviours are possible, such as having the real address bound to a separate socket and passing data through, or providing new sockets instead of sharing a single one.\nThe important principle is that the consumer should not need to care: socket control is decoupled from application message and stream handling.\n\n## Unix\n\nThe Unix protocol was designed by 
systemd.\n\nSockets are provided to consumers through file descriptors.\n\n- The file descriptors are assigned in a contiguous block.\n- The number of socket file descriptors is passed to the consumer using the environment variable `LISTEN_FDS`.\n- The starting file descriptor is read from the environment variable `LISTEN_FDS_FIRST_FD`, or defaults to `3` if that variable is not present.\n- If the `LISTEN_PID` environment variable is present, and the process ID of the consumer process doesn't match it, it must stop and not listen on any of the file descriptors.\n- The consumer may choose to reject the sockets if the file descriptor count isn't what it expects.\n- The consumer should strip the above environment variables from any child process it starts.\n\nThe consumer side in pseudo code:\n\n```\nlet pid_check = env::get(\"LISTEN_PID\");\nif pid_check && pid_check != getpid() {\n    return;\n}\n\nlet expected_socket_count = 2;\nlet fd_count = env::get(\"LISTEN_FDS\");\nif !fd_count || fd_count != expected_socket_count {\n    return;\n}\n\nlet starting_fd = env::get(\"LISTEN_FDS_FIRST_FD\");\nif !starting_fd {\n    starting_fd = 3;\n}\n\nfor (let fd = starting_fd; fd < starting_fd + fd_count; fd += 1) {\n    configure_socket(fd);\n}\n```\n\n## Windows\n\nThe Windows protocol was designed by systemfd.\n\nSockets are provided to consumers through the [WSAPROTOCOL_INFOW] structure.\n\n- The provider starts a TCP server bound to 127.0.0.1 on a random port.\n  - It writes the server's address to the `SYSTEMFD_SOCKET_SERVER` environment variable for the consumer processes.\n- The provider generates and stores a random 128-bit value as a key for a socket set.\n  - It writes the key in UUID hex string format (e.g. `59fb60fe-2634-4ec8-aa81-038793888c8e`) to the `SYSTEMFD_SOCKET_SECRET` environment variable for the consumer processes.\n- The consumer opens a connection to the `SYSTEMFD_SOCKET_SERVER` and:\n  1. reads the key from `SYSTEMFD_SOCKET_SECRET`;\n  2. 
writes the key in the same format, then a `|` character, then its own process ID as a string (in base 10), and then EOF;\n  3. reads the response to EOF.\n- The response will be one or more `WSAPROTOCOL_INFOW` structures, with no padding or separators.\n- If the provider has no record of the key (i.e. if it doesn't match the one provided to the consumer via `SYSTEMFD_SOCKET_SECRET`), it will close the connection without sending any data.\n- Optionally, the provider can check that the consumer's PID is what it expects, and reject if it's unhappy (by closing the connection without sending any data).\n\nThe consumer side in pseudo code:\n\n```\nlet server = env::get(\"SYSTEMFD_SOCKET_SERVER\");\nlet key = env::get(\"SYSTEMFD_SOCKET_SECRET\");\n\nif !server || !key {\n    return;\n}\n\nif !valid_uuid(key) {\n    return;\n}\n\nlet (writer, reader) = TcpClient::connect(server);\nwriter.write(key);\nwriter.write(\"|\");\nwriter.write(getpid().to_string());\nwriter.close();\n\nwhile reader.has_more_data() {\n    let socket = reader.read(size_of(WSAPROTOCOL_INFOW)) as WSAPROTOCOL_INFOW;\n    configure_socket(socket);\n}\n```\n\n[WSAPROTOCOL_INFOW]: https://learn.microsoft.com/en-us/windows/win32/api/winsock2/ns-winsock2-wsaprotocol_infow\n"
  },
  {
    "path": "doc/watchexec.1",
    "content": ".ie \\n(.g .ds Aq \\(aq\n.el .ds Aq '\n.TH watchexec 1  \"watchexec 2.5.1\" \n.SH NAME\nwatchexec \\- Execute commands when watched files change\n.SH SYNOPSIS\n\\fBwatchexec\\fR [\\fB\\-\\-bell\\fR] [\\fB\\-c\\fR|\\fB\\-\\-clear\\fR] [\\fB\\-\\-completions\\fR] [\\fB\\-\\-color\\fR] [\\fB\\-d\\fR|\\fB\\-\\-debounce\\fR] [\\fB\\-\\-delay\\-run\\fR] [\\fB\\-e\\fR|\\fB\\-\\-exts\\fR] [\\fB\\-E\\fR|\\fB\\-\\-env\\fR] [\\fB\\-\\-emit\\-events\\-to\\fR] [\\fB\\-f\\fR|\\fB\\-\\-filter\\fR] [\\fB\\-\\-socket\\fR] [\\fB\\-\\-filter\\-file\\fR] [\\fB\\-j\\fR|\\fB\\-\\-filter\\-prog\\fR] [\\fB\\-\\-fs\\-events\\fR] [\\fB\\-i\\fR|\\fB\\-\\-ignore\\fR] [\\fB\\-I\\fR|\\fB\\-\\-interactive\\fR] [\\fB\\-\\-exit\\-on\\-error\\fR] [\\fB\\-\\-ignore\\-file\\fR] [\\fB\\-\\-ignore\\-nothing\\fR] [\\fB\\-\\-log\\-file\\fR] [\\fB\\-\\-manual\\fR] [\\fB\\-\\-map\\-signal\\fR] [\\fB\\-n \\fR] [\\fB\\-N\\fR|\\fB\\-\\-notify\\fR] [\\fB\\-\\-no\\-default\\-ignore\\fR] [\\fB\\-\\-no\\-discover\\-ignore\\fR] [\\fB\\-\\-no\\-process\\-group\\fR] [\\fB\\-\\-no\\-global\\-ignore\\fR] [\\fB\\-\\-no\\-meta\\fR] [\\fB\\-\\-no\\-project\\-ignore\\fR] [\\fB\\-\\-no\\-vcs\\-ignore\\fR] [\\fB\\-o\\fR|\\fB\\-\\-on\\-busy\\-update\\fR] [\\fB\\-\\-only\\-emit\\-events\\fR] [\\fB\\-\\-poll\\fR] [\\fB\\-\\-print\\-events\\fR] [\\fB\\-\\-project\\-origin\\fR] [\\fB\\-p\\fR|\\fB\\-\\-postpone\\fR] [\\fB\\-q\\fR|\\fB\\-\\-quiet\\fR] [\\fB\\-r\\fR|\\fB\\-\\-restart\\fR] [\\fB\\-s\\fR|\\fB\\-\\-signal\\fR] [\\fB\\-\\-shell\\fR] [\\fB\\-\\-stdin\\-quit\\fR] [\\fB\\-\\-stop\\-signal\\fR] [\\fB\\-\\-stop\\-timeout\\fR] [\\fB\\-\\-timeout\\fR] [\\fB\\-\\-timings\\fR] [\\fB\\-v\\fR|\\fB\\-\\-verbose\\fR]... 
[\\fB\\-w\\fR|\\fB\\-\\-watch\\fR] [\\fB\\-\\-workdir\\fR] [\\fB\\-W\\fR|\\fB\\-\\-watch\\-non\\-recursive\\fR] [\\fB\\-\\-wrap\\-process\\fR] [\\fB\\-F\\fR|\\fB\\-\\-watch\\-file\\fR] [\\fB\\-h\\fR|\\fB\\-\\-help\\fR] [\\fB\\-V\\fR|\\fB\\-\\-version\\fR] [\\fICOMMAND\\fR] \n.SH DESCRIPTION\nExecute commands when watched files change.\n.PP\nRecursively monitors the current directory for changes, executing the command when a filesystem change is detected (among other event sources). By default, watchexec uses efficient kernel\\-level mechanisms to watch for changes.\n.PP\nAt startup, the specified command is run once, and watchexec begins monitoring for changes.\n.PP\nEvents are debounced and checked using a variety of mechanisms, which you can control using the flags in the **Filtering** section. The order of execution is: internal prioritisation (signals come before everything else, and SIGINT/SIGTERM are processed even more urgently), then file event kind (`\\-\\-fs\\-events`), then files explicitly watched with `\\-w`, then ignores (`\\-\\-ignore` and co), then filters (which includes `\\-\\-exts`), then filter programs.\n.PP\nExamples:\n.PP\nRebuild a project when source files change:\n.PP\n$ watchexec make\n.PP\nWatch all HTML, CSS, and JavaScript files for changes:\n.PP\n$ watchexec \\-e html,css,js make\n.PP\nRun tests when source files change, clearing the screen each time:\n.PP\n$ watchexec \\-c make test\n.PP\nLaunch and restart a node.js server:\n.PP\n$ watchexec \\-r node app.js\n.PP\nWatch lib and src directories for changes, rebuilding each time:\n.PP\n$ watchexec \\-w lib \\-w src make\n.SH OPTIONS\n.TP\n\\fB\\-\\-completions\\fR \\fI<SHELL>\\fR\nGenerate a shell completions script\n\nProvides a completions script or configuration for the given shell. 
If Watchexec is not distributed with pre\-generated completions, you can use this to generate them yourself.\n\nSupported shells: bash, elvish, fish, nu, powershell, zsh.\n.TP\n\fB\-\-manual\fR\nShow the manual page\n\nThis shows the manual page for Watchexec, if the output is a terminal and the \*(Aqman\*(Aq program is available. If not, the manual page is printed to stdout in ROFF format (suitable for writing to a watchexec.1 file).\n.TP\n\fB\-\-only\-emit\-events\fR\nOnly emit events to stdout, run no commands.\n\nThis is a convenience option for using Watchexec as a file watcher, without running any commands. It is almost equivalent to using `cat` as the command, except that it will not spawn a new process for each event.\n\nThis option implies `\-\-emit\-events\-to=json\-stdio`; you may also use the text mode by specifying `\-\-emit\-events\-to=stdio`.\n.TP\n\fB\-h\fR, \fB\-\-help\fR\nPrint help (see a summary with \*(Aq\-h\*(Aq)\n.TP\n\fB\-V\fR, \fB\-\-version\fR\nPrint version\n.TP\n[\fICOMMAND\fR]\nCommand (program and arguments) to run on changes\n\nIt\*(Aqs run when events pass filters and the debounce period (and once at startup unless \*(Aq\-\-postpone\*(Aq is given). If you pass flags to the command, you should separate them with \-\-, though that is not strictly required.\n\nExamples:\n\n$ watchexec \-w src npm run build\n\n$ watchexec \-w src \-\- rsync \-a src dest\n\nTake care when using globs or other shell expansions in the command. Your shell may expand them before ever passing them to Watchexec, and the results may not be what you expect. 
Compare:\n\n$ watchexec echo src/*.rs\n\n$ watchexec echo \*(Aqsrc/*.rs\*(Aq\n\n$ watchexec \-\-shell=none echo \*(Aqsrc/*.rs\*(Aq\n\nBehaviour depends on the value of \*(Aq\-\-shell\*(Aq: for all except \*(Aqnone\*(Aq, every part of the command is joined together into one string with a single ASCII space character, and given to the shell as described in the help for \*(Aq\-\-shell\*(Aq. For \*(Aqnone\*(Aq, each distinct element of the command is passed as per the execvp(3) convention: the first argument is the program, as a path or searched for in the \*(AqPATH\*(Aq environment variable, and the rest are arguments.\n.SH COMMAND\n.TP\n\fB\-\-delay\-run\fR \fI<DURATION>\fR\nSleep before running the command\n\nThis option will cause Watchexec to sleep for the specified amount of time before running the command, after an event is detected. This is like using \"sleep 5 && command\" in a shell, but portable and slightly more efficient.\n\nTakes a unit\-less value in seconds, or a time span value such as \"2min 5s\". Providing a unit\-less value is deprecated and will warn; it will be an error in the future.\n.TP\n\fB\-E\fR, \fB\-\-env\fR \fI<KEY=VALUE>\fR\nAdd env vars to the command\n\nThis is a convenience option for setting environment variables for the command, without setting them for the Watchexec process itself.\n\nUse key=value syntax. Multiple variables can be set by repeating the option.\n.TP\n\fB\-\-socket\fR \fI<PORT>\fR\nProvide a socket to the command\n\nThis implements the systemd socket\-passing protocol, like with `systemfd`: sockets are opened from the watchexec process, and then passed to the commands it runs. 
This lets you keep sockets open and avoid address reuse issues or dropping packets.\n\nThis option can be supplied multiple times, to open multiple sockets.\n\nThe value can be one of `PORT` (opens a TCP listening socket at that port), `HOST:PORT` (specify a host IP address; IPv6 addresses can be specified `[bracketed]`), `TYPE::PORT` or `TYPE::HOST:PORT` (specify a socket type, `tcp` / `udp`).\n\nThis integration only provides basic support; if you want more control, you should use the `systemfd` tool from <https://github.com/mitsuhiko/systemfd>, upon which this is based. The syntax here and the spawning behaviour are identical to `systemfd`, and both watchexec and systemfd are compatible implementations of the systemd socket\-activation protocol.\n\nWatchexec does _not_ set the `LISTEN_PID` variable on unix, which means any child process of your command could accidentally bind to the sockets, unless the `LISTEN_*` variables are removed from the environment.\n.TP\n\fB\-n\fR\nShorthand for \*(Aq\-\-shell=none\*(Aq\n.TP\n\fB\-\-no\-process\-group\fR\nDon\*(Aqt use a process group\n\nBy default, Watchexec will run the command in a process group, so that signals and terminations are sent to all processes in the group. 
Sometimes that\\*(Aqs not what you want, and you can disable the behaviour with this option.\n\nDeprecated, use \\*(Aq\\-\\-wrap\\-process=none\\*(Aq instead.\n.TP\n\\fB\\-\\-shell\\fR \\fI<SHELL>\\fR\nUse a different shell\n\nBy default, Watchexec will use \\*(Aq$SHELL\\*(Aq if it\\*(Aqs defined or a default of \\*(Aqsh\\*(Aq on Unix\\-likes, and either \\*(Aqpwsh\\*(Aq, \\*(Aqpowershell\\*(Aq, or \\*(Aqcmd\\*(Aq (CMD.EXE) on Windows, depending on what Watchexec detects is the running shell.\n\nWith this option, you can override that and use a different shell, for example one with more features or one which has your custom aliases and functions.\n\nIf the value has spaces, it is parsed as a command line, and the first word used as the shell program, with the rest as arguments to the shell.\n\nThe command is run with the \\*(Aq\\-c\\*(Aq flag (except for \\*(Aqcmd\\*(Aq on Windows, where it\\*(Aqs \\*(Aq/C\\*(Aq).\n\nThe special value \\*(Aqnone\\*(Aq can be used to disable shell use entirely. In that case, the command provided to Watchexec will be parsed, with the first word being the executable and the rest being the arguments, and executed directly. 
Note that this parsing is rudimentary, and may not work as expected in all cases.\n\nUsing \\*(Aqnone\\*(Aq is a little more efficient and can enable a stricter interpretation of the input, but it also means that you can\\*(Aqt use shell features like globbing, redirection, control flow, logic, or pipes.\n\nExamples:\n\nUse without shell:\n\n$ watchexec \\-n \\-\\- zsh \\-x \\-o shwordsplit scr\n\nUse with powershell core:\n\n$ watchexec \\-\\-shell=pwsh \\-\\- Test\\-Connection localhost\n\nUse with CMD.exe:\n\n$ watchexec \\-\\-shell=cmd \\-\\- dir\n\nUse with a different unix shell:\n\n$ watchexec \\-\\-shell=bash \\-\\- \\*(Aqecho $BASH_VERSION\\*(Aq\n\nUse with a unix shell and options:\n\n$ watchexec \\-\\-shell=\\*(Aqzsh \\-x \\-o shwordsplit\\*(Aq \\-\\- scr\n.TP\n\\fB\\-\\-stop\\-signal\\fR \\fI<SIGNAL>\\fR\nSignal to send to stop the command\n\nThis is used by \\*(Aqrestart\\*(Aq and \\*(Aqsignal\\*(Aq modes of \\*(Aq\\-\\-on\\-busy\\-update\\*(Aq (unless \\*(Aq\\-\\-signal\\*(Aq is provided). The restart behaviour is to send the signal, wait for the command to exit, and if it hasn\\*(Aqt exited after some time (see \\*(Aq\\-\\-timeout\\-stop\\*(Aq), forcefully terminate it.\n\nThe default on unix is \"SIGTERM\".\n\nInput is parsed as a full signal name (like \"SIGTERM\"), a short signal name (like \"TERM\"), or a signal number (like \"15\"). All input is case\\-insensitive.\n\nOn Windows this option is technically supported but only supports the \"KILL\" event, as Watchexec cannot yet deliver other events. Windows doesn\\*(Aqt have signals as such; instead it has termination (here called \"KILL\" or \"STOP\") and \"CTRL+C\", \"CTRL+BREAK\", and \"CTRL+CLOSE\" events. 
For portability the unix signals \"SIGKILL\", \"SIGINT\", \"SIGTERM\", and \"SIGHUP\" are respectively mapped to these.\n.TP\n\\fB\\-\\-stop\\-timeout\\fR \\fI<TIMEOUT>\\fR\nTime to wait for the command to exit gracefully\n\nThis is used by the \\*(Aqrestart\\*(Aq mode of \\*(Aq\\-\\-on\\-busy\\-update\\*(Aq. After the graceful stop signal is sent, Watchexec will wait for the command to exit. If it hasn\\*(Aqt exited after this time, it is forcefully terminated.\n\nTakes a unit\\-less value in seconds, or a time span value such as \"5min 20s\". Providing a unit\\-less value is deprecated and will warn; it will be an error in the future.\n\nThe default is 10 seconds. Set to 0 to immediately force\\-kill the command.\n\nThis has no practical effect on Windows as the command is always forcefully terminated; see \\*(Aq\\-\\-stop\\-signal\\*(Aq for why.\n.TP\n\\fB\\-\\-timeout\\fR \\fI<TIMEOUT>\\fR\nKill the command if it runs longer than this duration\n\nTakes a time span value such as \"30s\", \"5min\", or \"1h 30m\".\n\nWhen the timeout is reached, the command is gracefully stopped using \\-\\-stop\\-signal, then forcefully terminated after \\-\\-stop\\-timeout if still running.\n\nEach run of the command has its own independent timeout.\n.TP\n\\fB\\-\\-workdir\\fR \\fI<DIRECTORY>\\fR\nSet the working directory\n\nBy default, the working directory of the command is the working directory of Watchexec. You can change that with this option. Note that paths may be less intuitive to use with this.\n.TP\n\\fB\\-\\-wrap\\-process\\fR \\fI<MODE>\\fR [default: group]\nConfigure how the process is wrapped\n\nBy default, Watchexec will run the command in a session on Mac, in a process group in Unix, and in a Job Object in Windows.\n\nSome Unix programs prefer running in a session, while others do not work in a process group.\n\nUse \\*(Aqgroup\\*(Aq to use a process group, \\*(Aqsession\\*(Aq to use a process session, and \\*(Aqnone\\*(Aq to run the command directly. 
On Windows, either of \\*(Aqgroup\\*(Aq or \\*(Aqsession\\*(Aq will use a Job Object.\n\nIf you find you need to specify this frequently for different kinds of programs, file an issue at <https://github.com/watchexec/watchexec/issues>. As errors of this nature are hard to debug and can be highly environment\\-dependent, reports from *multiple affected people* are more likely to be actioned promptly. Ask your friends/colleagues!\n.SH EVENTS\n.TP\n\\fB\\-d\\fR, \\fB\\-\\-debounce\\fR \\fI<TIMEOUT>\\fR\nTime to wait for new events before taking action\n\nWhen an event is received, Watchexec will wait for up to this amount of time before handling it (such as running the command). This is essential as what you might perceive as a single change may actually emit many events, and without this behaviour, Watchexec would run much too often. Additionally, it\\*(Aqs not infrequent that file writes are not atomic, and each write may emit an event, so this is a good way to avoid running a command while a file is partially written.\n\nAn alternative use is to set a high value (like \"30min\" or longer), to save power or bandwidth on intensive tasks, like an ad\\-hoc backup script. In those use cases, note that every accumulated event will build up in memory.\n\nTakes a unit\\-less value in milliseconds, or a time span value such as \"5sec 20ms\". Providing a unit\\-less value is deprecated and will warn; it will be an error in the future.\n\nThe default is 50 milliseconds. Setting to 0 is highly discouraged.\n.TP\n\\fB\\-\\-emit\\-events\\-to\\fR \\fI<MODE>\\fR\nConfigure event emission\n\nWatchexec can emit event information when running a command, which can be used by the child\nprocess to target specific changed files.\n\nOne thing to take care with is assuming inherent behaviour where there is only chance.\nNotably, it could appear as if the `RENAMED` variable contains both the original and the new\npath being renamed. 
In previous versions, it would even appear on some platforms as if the\noriginal always came before the new. However, none of this was true. It\\*(Aqs impossible to\nreliably and portably know which changed path is the old or new, \"half\" renames may appear\n(only the original, only the new), \"unknown\" renames may appear (change was a rename, but\nwhether it was the old or new isn\\*(Aqt known), rename events might split across two debouncing\nboundaries, and so on.\n\nThis option controls where that information is emitted. It defaults to \\*(Aqnone\\*(Aq, which doesn\\*(Aqt\nemit event information at all. The other options are \\*(Aqenvironment\\*(Aq (deprecated), \\*(Aqstdio\\*(Aq,\n\\*(Aqfile\\*(Aq, \\*(Aqjson\\-stdio\\*(Aq, and \\*(Aqjson\\-file\\*(Aq.\n\nThe \\*(Aqstdio\\*(Aq and \\*(Aqfile\\*(Aq modes are text\\-based: \\*(Aqstdio\\*(Aq writes absolute paths to the stdin of\nthe command, one per line, each prefixed with `create:`, `remove:`, `rename:`, `modify:`,\nor `other:`, then closes the handle; \\*(Aqfile\\*(Aq writes the same thing to a temporary file, and\nits path is given with the $WATCHEXEC_EVENTS_FILE environment variable.\n\nThere are also two JSON modes, which are based on JSON objects and can represent the full\nset of events Watchexec handles. 
Here\\*(Aqs an example of a folder being created on Linux:\n\n```json\n  {\n    \"tags\": [\n      {\n        \"kind\": \"path\",\n        \"absolute\": \"/home/user/your/new\\-folder\",\n        \"filetype\": \"dir\"\n      },\n      {\n        \"kind\": \"fs\",\n        \"simple\": \"create\",\n        \"full\": \"Create(Folder)\"\n      },\n      {\n        \"kind\": \"source\",\n        \"source\": \"filesystem\",\n      }\n    ],\n    \"metadata\": {\n      \"notify\\-backend\": \"inotify\"\n    }\n  }\n```\n\nThe fields are as follows:\n\n  \\- `tags`, structured event data.\n  \\- `tags[].kind`, which can be:\n    * \\*(Aqpath\\*(Aq, along with:\n      + `absolute`, an absolute path.\n      + `filetype`, a file type if known (\\*(Aqdir\\*(Aq, \\*(Aqfile\\*(Aq, \\*(Aqsymlink\\*(Aq, \\*(Aqother\\*(Aq).\n    * \\*(Aqfs\\*(Aq:\n      + `simple`, the \"simple\" event type (\\*(Aqaccess\\*(Aq, \\*(Aqcreate\\*(Aq, \\*(Aqmodify\\*(Aq, \\*(Aqremove\\*(Aq, or \\*(Aqother\\*(Aq).\n      + `full`, the \"full\" event type, which is too complex to fully describe here, but looks like \\*(AqGeneral(Precise(Specific))\\*(Aq.\n    * \\*(Aqsource\\*(Aq, along with:\n      + `source`, the source of the event (\\*(Aqfilesystem\\*(Aq, \\*(Aqkeyboard\\*(Aq, \\*(Aqmouse\\*(Aq, \\*(Aqos\\*(Aq, \\*(Aqtime\\*(Aq, \\*(Aqinternal\\*(Aq).\n    * \\*(Aqkeyboard\\*(Aq, along with:\n      + `keycode`. 
Currently only the value \\*(Aqeof\\*(Aq is supported.\n    * \\*(Aqprocess\\*(Aq, for events caused by processes:\n      + `pid`, the process ID.\n    * \\*(Aqsignal\\*(Aq, for signals sent to Watchexec:\n      + `signal`, the normalised signal name (\\*(Aqhangup\\*(Aq, \\*(Aqinterrupt\\*(Aq, \\*(Aqquit\\*(Aq, \\*(Aqterminate\\*(Aq, \\*(Aquser1\\*(Aq, \\*(Aquser2\\*(Aq).\n    * \\*(Aqcompletion\\*(Aq, for when a command ends:\n      + `disposition`, the exit disposition (\\*(Aqsuccess\\*(Aq, \\*(Aqerror\\*(Aq, \\*(Aqsignal\\*(Aq, \\*(Aqstop\\*(Aq, \\*(Aqexception\\*(Aq, \\*(Aqcontinued\\*(Aq).\n      + `code`, the exit, signal, stop, or exception code.\n  \\- `metadata`, additional information about the event.\n\nThe \\*(Aqjson\\-stdio\\*(Aq mode will emit JSON events to the standard input of the command, one per\nline, then close stdin. The \\*(Aqjson\\-file\\*(Aq mode will create a temporary file, write the\nevents to it, and provide the path to the file with the $WATCHEXEC_EVENTS_FILE\nenvironment variable.\n\nFinally, the \\*(Aqenvironment\\*(Aq mode was the default until 2.0. It sets environment variables\nwith the paths of the affected files, for filesystem events:\n\n$WATCHEXEC_COMMON_PATH is set to the longest common path of all of the below variables,\nand so should be prepended to each path to obtain the full/real path. 
Then:\n\n  \\- $WATCHEXEC_CREATED_PATH is set when files/folders were created\n  \\- $WATCHEXEC_REMOVED_PATH is set when files/folders were removed\n  \\- $WATCHEXEC_RENAMED_PATH is set when files/folders were renamed\n  \\- $WATCHEXEC_WRITTEN_PATH is set when files/folders were modified\n  \\- $WATCHEXEC_META_CHANGED_PATH is set when files/folders\\*(Aq metadata were modified\n  \\- $WATCHEXEC_OTHERWISE_CHANGED_PATH is set for every other kind of pathed event\n\nMultiple paths are separated by the system path separator, \\*(Aq;\\*(Aq on Windows and \\*(Aq:\\*(Aq on unix.\nWithin each variable, paths are deduplicated and sorted in binary order (i.e. neither\nUnicode nor locale aware).\n\nThis is the legacy mode, is deprecated, and will be removed in the future. The environment\nis a very restricted space, while also limited in what it can usefully represent. Large\nnumbers of files will either cause the environment to be truncated, or may error or crash\nthe process entirely. The $WATCHEXEC_COMMON_PATH is also unintuitive, as demonstrated by the\nmultiple confused queries that have landed in my inbox over the years.\n.TP\n\\fB\\-I\\fR, \\fB\\-\\-interactive\\fR\nRespond to keypresses to quit, restart, or pause\n\nIn interactive mode, Watchexec listens for keypresses and responds to them. Currently supported keys are: \\*(Aqr\\*(Aq to restart the command, \\*(Aqp\\*(Aq to toggle pausing the watch, and \\*(Aqq\\*(Aq to quit. This requires a terminal (TTY) and puts stdin into raw mode, so the child process will not receive stdin input.\n.TP\n\\fB\\-\\-exit\\-on\\-error\\fR\nExit when the command has an error\n\nBy default, Watchexec will continue to watch and re\\-run the command after the command exits, regardless of its exit status. 
With this option, it will instead exit when the command completes with any non\-success exit status.\n\nThis is useful when running Watchexec in a process manager or container, where you want the container to restart when the command fails rather than hang waiting for file changes.\n.TP\n\fB\-\-map\-signal\fR \fI<SIGNAL:SIGNAL>\fR\nTranslate signals from the OS to signals to send to the command\n\nTakes a pair of signal names, separated by a colon, such as \"TERM:INT\" to map SIGTERM to SIGINT. The first signal is the one received by watchexec, and the second is the one sent to the command. The second can be omitted to discard the first signal, such as \"TERM:\" to not do anything on SIGTERM.\n\nIf SIGINT or SIGTERM are mapped, then they no longer quit Watchexec. Besides making it hard to quit Watchexec itself, this is useful to pass a Ctrl\-C to the command without also terminating Watchexec and the underlying program with it, e.g. with \"INT:INT\".\n\nThis option can be specified multiple times to map multiple signals.\n\nSignal syntax is case\-insensitive for short names (like \"TERM\", \"USR2\") and long names (like \"SIGKILL\", \"SIGHUP\"). Signal numbers are also supported (like \"15\", \"31\"). On Windows, the forms \"STOP\", \"CTRL+C\", and \"CTRL+BREAK\" are also supported to receive, but Watchexec cannot yet deliver other \"signals\" than a STOP.\n.TP\n\fB\-o\fR, \fB\-\-on\-busy\-update\fR \fI<MODE>\fR\nWhat to do when receiving events while the command is running\n\nDefault is to \*(Aqdo\-nothing\*(Aq, which ignores events while the command is running, so that changes that occur due to the command are ignored, like compilation outputs. You can also use \*(Aqqueue\*(Aq which will run the command once again when the current run has finished if any events occur while it\*(Aqs running, or \*(Aqrestart\*(Aq, which terminates the running command and starts a new one. 
Finally, there\\*(Aqs \\*(Aqsignal\\*(Aq, which only sends a signal; this can be useful with programs that can reload their configuration without a full restart.\n\nThe signal can be specified with the \\*(Aq\\-\\-signal\\*(Aq option.\n.TP\n\\fB\\-\\-poll\\fR [\\fI<INTERVAL>\\fR]\nPoll for filesystem changes\n\nBy default, and where available, Watchexec uses the operating system\\*(Aqs native file system watching capabilities. This option disables that and instead uses a polling mechanism, which is less efficient but can work around issues with some file systems (like network shares) or edge cases.\n\nOptionally takes a unit\\-less value in milliseconds, or a time span value such as \"2s 500ms\", to use as the polling interval. If not specified, the default is 30 seconds. Providing a unit\\-less value is deprecated and will warn; it will be an error in the future.\n\nAliased as \\*(Aq\\-\\-force\\-poll\\*(Aq.\n.TP\n\\fB\\-p\\fR, \\fB\\-\\-postpone\\fR\nWait until first change before running command\n\nBy default, Watchexec will run the command once immediately. With this option, it will instead wait until an event is detected before running the command as normal.\n.TP\n\\fB\\-r\\fR, \\fB\\-\\-restart\\fR\nRestart the process if it\\*(Aqs still running\n\nThis is a shorthand for \\*(Aq\\-\\-on\\-busy\\-update=restart\\*(Aq.\n.TP\n\\fB\\-s\\fR, \\fB\\-\\-signal\\fR \\fI<SIGNAL>\\fR\nSend a signal to the process when it\\*(Aqs still running\n\nSpecify a signal to send to the process when it\\*(Aqs still running. This implies \\*(Aq\\-\\-on\\-busy\\-update=signal\\*(Aq; otherwise the signal used when that mode is \\*(Aqrestart\\*(Aq is controlled by \\*(Aq\\-\\-stop\\-signal\\*(Aq.\n\nSee the long documentation for \\*(Aq\\-\\-stop\\-signal\\*(Aq for syntax.\n\nSignals are not supported on Windows at the moment, and will always be overridden to \\*(Aqkill\\*(Aq. 
See \\*(Aq\\-\\-stop\\-signal\\*(Aq for more on Windows \"signals\".\n.TP\n\\fB\\-\\-stdin\\-quit\\fR\nExit when stdin closes\n\nThis watches the stdin file descriptor for EOF, and exits Watchexec gracefully when it is closed. This is used by some process managers to avoid leaving zombie processes around.\n.SH FILTERING\n.TP\n\\fB\\-e\\fR, \\fB\\-\\-exts\\fR \\fI<EXTENSIONS>\\fR\nFilename extensions to filter to\n\nThis is a quick filter to only emit events for files with the given extensions. Extensions can be given with or without the leading dot (e.g. \\*(Aqjs\\*(Aq or \\*(Aq.js\\*(Aq). Multiple extensions can be given by repeating the option or by separating them with commas.\n.TP\n\\fB\\-f\\fR, \\fB\\-\\-filter\\fR \\fI<PATTERN>\\fR\nFilename patterns to filter to\n\nProvide a glob\\-like filter pattern, and only events for files matching the pattern will be emitted. Multiple patterns can be given by repeating the option. Events that are not from files (e.g. signals, keyboard events) will pass through untouched.\n.TP\n\\fB\\-\\-filter\\-file\\fR \\fI<PATH>\\fR\nFiles to load filters from\n\nProvide a path to a file containing filters, one per line. Empty lines and lines starting with \\*(Aq#\\*(Aq are ignored. Uses the same pattern format as the \\*(Aq\\-\\-filter\\*(Aq option.\n\nThis can also be used via the $WATCHEXEC_FILTER_FILES environment variable.\n.TP\n\\fB\\-j\\fR, \\fB\\-\\-filter\\-prog\\fR \\fI<EXPRESSION>\\fR\nFilter programs.\n\nProvide your own custom filter programs in jaq (similar to jq) syntax. Programs are given an event in the same format as described in \\*(Aq\\-\\-emit\\-events\\-to\\*(Aq and must return a boolean. 
Invalid programs will make watchexec fail to start; use \\*(Aq\\-v\\*(Aq to see program runtime errors.\n\nIn addition to the jaq stdlib, watchexec adds some custom filter definitions:\n\n\\- \\*(Aqpath | file_meta\\*(Aq returns file metadata or null if the file does not exist.\n\n\\- \\*(Aqpath | file_size\\*(Aq returns the size of the file at path, or null if it does not exist.\n\n\\- \\*(Aqpath | file_read(bytes)\\*(Aq returns a string with the first n bytes of the file at path. If the file is smaller than n bytes, the whole file is returned. There is no filter to read the whole file at once to encourage limiting the amount of data read and processed.\n\n\\- \\*(Aqstring | hash\\*(Aq, and \\*(Aqpath | file_hash\\*(Aq return the hash of the string or file at path. No guarantee is made about the algorithm used: treat it as an opaque value.\n\n\\- \\*(Aqany | kv_store(key)\\*(Aq, \\*(Aqkv_fetch(key)\\*(Aq, and \\*(Aqkv_clear\\*(Aq provide a simple key\\-value store. Data is kept in memory only, there is no persistence. Consistency is not guaranteed.\n\n\\- \\*(Aqany | printout\\*(Aq, \\*(Aqany | printerr\\*(Aq, and \\*(Aqany | log(level)\\*(Aq will print or log any given value to stdout, stderr, or the log (levels = error, warn, info, debug, trace), and pass the value through (so \\*(Aq[1] | log(\"debug\") | .[]\\*(Aq will produce a \\*(Aq1\\*(Aq and log \\*(Aq[1]\\*(Aq).\n\nAll filtering done with such programs, and especially those using kv or filesystem access, is much slower than the other filtering methods. If filtering is too slow, events will back up and stall watchexec. Take care when designing your filters.\n\nIf the argument to this option starts with an \\*(Aq@\\*(Aq, the rest of the argument is taken to be the path to a file containing a jaq program.\n\nJaq programs are run in order, after all other filters, and short\\-circuit: if a filter (jaq or not) rejects an event, execution stops there, and no other filters are run. 
Additionally, they stop after outputting the first value, so you\\*(Aqll want to use \\*(Aqany\\*(Aq or \\*(Aqall\\*(Aq when iterating, otherwise only the first item will be processed, which can be quite confusing!\n\nFind user\\-contributed programs or submit your own useful ones at <https://github.com/watchexec/watchexec/discussions/592>.\n\n## Examples:\n\nRegexp ignore filter on paths:\n\n\\*(Aqall(.tags[] | select(.kind == \"path\"); .absolute | test(\"[.]test[.]js$\")) | not\\*(Aq\n\nPass any event that creates a file:\n\n\\*(Aqany(.tags[] | select(.kind == \"fs\"); .simple == \"create\")\\*(Aq\n\nPass events that touch executable files:\n\n\\*(Aqany(.tags[] | select(.kind == \"path\" && .filetype == \"file\"); .absolute | metadata | .executable)\\*(Aq\n\nIgnore files that start with shebangs:\n\n\\*(Aqany(.tags[] | select(.kind == \"path\" && .filetype == \"file\"); .absolute | read(2) == \"#!\") | not\\*(Aq\n.TP\n\\fB\\-\\-fs\\-events\\fR \\fI<EVENTS>\\fR\nFilesystem events to filter to\n\nThis is a quick filter to only emit events for the given types of filesystem changes. Choose from \\*(Aqaccess\\*(Aq, \\*(Aqcreate\\*(Aq, \\*(Aqremove\\*(Aq, \\*(Aqrename\\*(Aq, \\*(Aqmodify\\*(Aq, \\*(Aqmetadata\\*(Aq. Multiple types can be given by repeating the option or by separating them with commas. By default, this is all types except for \\*(Aqaccess\\*(Aq.\n\nThis may apply filtering at the kernel level when possible, which can be more efficient, but may be more confusing when reading the logs.\n.TP\n\\fB\\-i\\fR, \\fB\\-\\-ignore\\fR \\fI<PATTERN>\\fR\nFilename patterns to filter out\n\nProvide a glob\\-like filter pattern, and events for files matching the pattern will be excluded. Multiple patterns can be given by repeating the option. Events that are not from files (e.g. 
signals, keyboard events) will pass through untouched.\n.TP\n\fB\-\-ignore\-file\fR \fI<PATH>\fR\nFiles to load ignores from\n\nProvide a path to a file containing ignores, one per line. Empty lines and lines starting with \*(Aq#\*(Aq are ignored. Uses the same pattern format as the \*(Aq\-\-ignore\*(Aq option.\n\nThis can also be used via the $WATCHEXEC_IGNORE_FILES environment variable.\n.TP\n\fB\-\-ignore\-nothing\fR\nDon\*(Aqt ignore anything at all\n\nThis is a shorthand for \*(Aq\-\-no\-discover\-ignore\*(Aq, \*(Aq\-\-no\-default\-ignore\*(Aq.\n\nNote that ignores explicitly loaded via other command line options, such as \*(Aq\-\-ignore\*(Aq or \*(Aq\-\-ignore\-file\*(Aq, will still be used.\n.TP\n\fB\-\-no\-default\-ignore\fR\nDon\*(Aqt use internal default ignores\n\nWatchexec has a set of default ignore patterns, such as editor swap files, `*.pyc`, `*.pyo`, `.DS_Store`, `.bzr`, `_darcs`, `.fossil\-settings`, `.git`, `.hg`, `.pijul`, `.svn`, and Watchexec log files.\n.TP\n\fB\-\-no\-discover\-ignore\fR\nDon\*(Aqt discover ignore files at all\n\nThis is a shorthand for \*(Aq\-\-no\-global\-ignore\*(Aq, \*(Aq\-\-no\-vcs\-ignore\*(Aq, \*(Aq\-\-no\-project\-ignore\*(Aq, but even more efficient as it will skip all the ignore discovery mechanisms from the get go.\n\nNote that default ignores are still loaded; see \*(Aq\-\-no\-default\-ignore\*(Aq.\n.TP\n\fB\-\-no\-global\-ignore\fR\nDon\*(Aqt load global ignores\n\nThis disables loading of global or user ignore files, like \*(Aq~/.gitignore\*(Aq,\n\*(Aq~/.config/watchexec/ignore\*(Aq, or \*(Aq%APPDATA%\\Bazaar\\2.0\\ignore\*(Aq. 
Contrast with\n\*(Aq\-\-no\-vcs\-ignore\*(Aq and \*(Aq\-\-no\-project\-ignore\*(Aq.\n\nSupported global ignore files:\n\n  \- Git (if core.excludesFile is set): the file at that path.\n  \- Git (otherwise): the first found of $XDG_CONFIG_HOME/git/ignore, %APPDATA%/.gitignore, %USERPROFILE%/.gitignore, $HOME/.config/git/ignore, $HOME/.gitignore.\n  \- Bazaar: the first found of %APPDATA%/Bazaar/2.0/ignore, $HOME/.bazaar/ignore.\n  \- Watchexec: the first found of $XDG_CONFIG_HOME/watchexec/ignore, %APPDATA%/watchexec/ignore, %USERPROFILE%/.watchexec/ignore, $HOME/.watchexec/ignore.\n\nLike for project files, Git and Bazaar global files will only be used for the corresponding\nVCS as used in the project.\n.TP\n\fB\-\-no\-meta\fR\nDon\*(Aqt emit fs events for metadata changes\n\nThis is a shorthand for \*(Aq\-\-fs\-events create,remove,rename,modify\*(Aq. Using it alongside the \*(Aq\-\-fs\-events\*(Aq option is nonsensical and not allowed.\n.TP\n\fB\-\-no\-project\-ignore\fR\nDon\*(Aqt load project\-local ignores\n\nThis disables loading of project\-local ignore files, like \*(Aq.gitignore\*(Aq or \*(Aq.ignore\*(Aq in the\nwatched project. 
This is contrasted with \\*(Aq\\-\\-no\\-vcs\\-ignore\\*(Aq, which disables loading of Git\nand other VCS ignore files, and with \\*(Aq\\-\\-no\\-global\\-ignore\\*(Aq, which disables loading of global\nor user ignore files, like \\*(Aq~/.gitignore\\*(Aq or \\*(Aq~/.config/watchexec/ignore\\*(Aq.\n\nSupported project ignore files:\n\n  \\- Git: .gitignore at project root and child directories, .git/info/exclude, and the file pointed to by `core.excludesFile` in .git/config.\n  \\- Mercurial: .hgignore at project root and child directories.\n  \\- Bazaar: .bzrignore at project root.\n  \\- Darcs: _darcs/prefs/boring\n  \\- Fossil: .fossil\\-settings/ignore\\-glob\n  \\- Ripgrep/Watchexec/generic: .ignore at project root and child directories.\n\nVCS ignore files (Git, Mercurial, Bazaar, Darcs, Fossil) are only used if the corresponding\nVCS is discovered to be in use for the project/origin. For example, a .bzrignore in a Git\nrepository will be discarded.\n.TP\n\\fB\\-\\-no\\-vcs\\-ignore\\fR\nDon\\*(Aqt load gitignores\n\nAmong other VCS exclude files, like for Mercurial, Subversion, Bazaar, DARCS, Fossil. Note that Watchexec will detect which of these is in use, if any, and only load the relevant files. Both global (like \\*(Aq~/.gitignore\\*(Aq) and local (like \\*(Aq.gitignore\\*(Aq) files are considered.\n\nThis option is useful if you want to watch files that are ignored by Git.\n.TP\n\\fB\\-\\-project\\-origin\\fR \\fI<DIRECTORY>\\fR\nSet the project origin\n\nWatchexec will attempt to discover the project\\*(Aqs \"origin\" (or \"root\") by searching for a variety of markers, like files or directory patterns. 
It does its best but sometimes gets it wrong, and you can override that with this option.\n\nThe project origin is used to determine the path of certain ignore files, which VCS is being used, the meaning of a leading \*(Aq/\*(Aq in filtering patterns, and maybe more in the future.\n\nWhen set, Watchexec will also not bother searching, which can be significantly faster.\n.TP\n\fB\-w\fR, \fB\-\-watch\fR \fI<PATH>\fR\nWatch a specific file or directory\n\nBy default, Watchexec watches the current directory.\n\nWhen watching a single file, it\*(Aqs often better to watch the containing directory instead, and filter on the filename. Some editors may replace the file with a new one when saving, and some platforms may not detect that or further changes.\n\nUpon starting, Watchexec resolves a \"project origin\" from the watched paths. See the help for \*(Aq\-\-project\-origin\*(Aq for more information.\n\nThis option can be specified multiple times to watch multiple files or directories.\n\nThe special value \*(Aq/dev/null\*(Aq, provided as the only path watched, will cause Watchexec to not watch any paths. 
Other event sources (like signals or key events) may still be used.\n.TP\n\fB\-W\fR, \fB\-\-watch\-non\-recursive\fR \fI<PATH>\fR\nWatch a specific directory, non\-recursively\n\nUnlike \*(Aq\-w\*(Aq, folders watched with this option are not recursed into.\n\nThis option can be specified multiple times to watch multiple directories non\-recursively.\n.TP\n\fB\-F\fR, \fB\-\-watch\-file\fR \fI<PATH>\fR\nWatch files and directories from a file\n\nEach line in the file will be interpreted as if given to \*(Aq\-w\*(Aq.\n\nFor more complex uses (like watching non\-recursively), use the argfile capability: build a file containing command\-line options and pass it to watchexec with `@path/to/argfile`.\n\nThe special value \*(Aq\-\*(Aq will read from STDIN; this is incompatible with \*(Aq\-\-stdin\-quit\*(Aq.\n.SH DEBUGGING\n.TP\n\fB\-\-log\-file\fR [\fI<PATH>\fR]\nWrite diagnostic logs to a file\n\nThis writes diagnostic logs to a file, instead of the terminal, in JSON format. If a log level was not already specified, this will set it to \*(Aq\-vvv\*(Aq.\n\nIf a path is not provided, the default is the working directory. Note that with \*(Aq\-\-ignore\-nothing\*(Aq, the write events to the log will likely get picked up by Watchexec, causing a loop; prefer setting a path outside of the watched directory.\n\nIf the path provided is a directory, a file will be created in that directory. The file name will be the current date and time, in the format \*(Aqwatchexec.YYYY\-MM\-DDTHH\-MM\-SSZ.log\*(Aq.\n.TP\n\fB\-\-print\-events\fR\nPrint events that trigger actions\n\nThis prints the events that triggered the action when handling it (after debouncing), in a human readable form. 
This is useful for debugging filters.\n\nUse \*(Aq\-vvv\*(Aq instead when you need more diagnostic information.\n.TP\n\fB\-v\fR, \fB\-\-verbose\fR\nSet diagnostic log level\n\nThis enables diagnostic logging, which is useful for investigating bugs or gaining more insight into faulty filters or \"missing\" events. Use multiple times to increase verbosity.\n\nGoes up to \*(Aq\-vvvv\*(Aq. When submitting bug reports, default to a \*(Aq\-vvv\*(Aq log level.\n\nYou may want to use this with \*(Aq\-\-log\-file\*(Aq to avoid polluting your terminal.\n\nSetting $WATCHEXEC_LOG also works, and takes precedence, but is not recommended. However, using $WATCHEXEC_LOG is the only way to get logs from before these options are parsed.\n.SH OUTPUT\n.TP\n\fB\-\-bell\fR\nRing the terminal bell on command completion\n.TP\n\fB\-c\fR, \fB\-\-clear\fR [\fI<MODE>\fR]\nClear screen before running command\n\nIf this doesn\*(Aqt completely clear the screen, try \*(Aq\-\-clear=reset\*(Aq.\n.TP\n\fB\-\-color\fR \fI<MODE>\fR [default: auto]\nWhen to use terminal colours\n\nSetting the environment variable `NO_COLOR` to any value is equivalent to `\-\-color=never`.\n.TP\n\fB\-N\fR, \fB\-\-notify\fR [\fI<WHEN>\fR]\nAlert when commands start and end\n\nWith this, Watchexec will emit a desktop notification when a command starts and ends, on supported platforms. On unsupported platforms, it may silently do nothing, or log a warning.\n\nThe mode can be specified to only notify when the command `start`s, `end`s, or for `both` (which is the default).\n.TP\n\fB\-q\fR, \fB\-\-quiet\fR\nDon\*(Aqt print starting and stopping messages\n\nBy default Watchexec will print a message when the command starts and stops. 
This option disables this behaviour, so only the command\\*(Aqs output, warnings, and errors will be printed.\n.TP\n\\fB\\-\\-timings\\fR\nPrint how long the command took to run\n\nThis may not be exactly accurate, as it includes some overhead from Watchexec itself. Use the `time` utility, high\\-precision timers, or benchmarking tools for more accurate results.\n.SH EXTRA\nUse @argfile as first argument to load arguments from the file \\*(Aqargfile\\*(Aq (one argument per line) which will be inserted in place of the @argfile (further arguments on the CLI will override or add onto those in the file).\n\nDidn\\*(Aqt expect this much output? Use the short \\*(Aq\\-h\\*(Aq flag to get short help.\n.SH VERSION\nv2.5.1\n.SH AUTHORS\nFélix Saparelli <felix@passcod.name>, Matt Green <mattgreenrocks@gmail.com>\n"
  },
  {
    "path": "doc/watchexec.1.md",
    "content": "# NAME\n\nwatchexec - Execute commands when watched files change\n\n# SYNOPSIS\n\n**watchexec** \\[**\\--bell**\\] \\[**-c**\\|**\\--clear**\\]\n\\[**\\--completions**\\] \\[**\\--color**\\] \\[**-d**\\|**\\--debounce**\\]\n\\[**\\--delay-run**\\] \\[**-e**\\|**\\--exts**\\] \\[**-E**\\|**\\--env**\\]\n\\[**\\--emit-events-to**\\] \\[**-f**\\|**\\--filter**\\] \\[**\\--socket**\\]\n\\[**\\--filter-file**\\] \\[**-j**\\|**\\--filter-prog**\\]\n\\[**\\--fs-events**\\] \\[**-i**\\|**\\--ignore**\\]\n\\[**-I**\\|**\\--interactive**\\] \\[**\\--exit-on-error**\\]\n\\[**\\--ignore-file**\\] \\[**\\--ignore-nothing**\\] \\[**\\--log-file**\\]\n\\[**\\--manual**\\] \\[**\\--map-signal**\\] \\[**-n** \\]\n\\[**-N**\\|**\\--notify**\\] \\[**\\--no-default-ignore**\\]\n\\[**\\--no-discover-ignore**\\] \\[**\\--no-process-group**\\]\n\\[**\\--no-global-ignore**\\] \\[**\\--no-meta**\\]\n\\[**\\--no-project-ignore**\\] \\[**\\--no-vcs-ignore**\\]\n\\[**-o**\\|**\\--on-busy-update**\\] \\[**\\--only-emit-events**\\]\n\\[**\\--poll**\\] \\[**\\--print-events**\\] \\[**\\--project-origin**\\]\n\\[**-p**\\|**\\--postpone**\\] \\[**-q**\\|**\\--quiet**\\]\n\\[**-r**\\|**\\--restart**\\] \\[**-s**\\|**\\--signal**\\] \\[**\\--shell**\\]\n\\[**\\--stdin-quit**\\] \\[**\\--stop-signal**\\] \\[**\\--stop-timeout**\\]\n\\[**\\--timeout**\\] \\[**\\--timings**\\] \\[**-v**\\|**\\--verbose**\\]\\...\n\\[**-w**\\|**\\--watch**\\] \\[**\\--workdir**\\]\n\\[**-W**\\|**\\--watch-non-recursive**\\] \\[**\\--wrap-process**\\]\n\\[**-F**\\|**\\--watch-file**\\] \\[**-h**\\|**\\--help**\\]\n\\[**-V**\\|**\\--version**\\] \\[*COMMAND*\\]\n\n# DESCRIPTION\n\nExecute commands when watched files change.\n\nRecursively monitors the current directory for changes, executing the\ncommand when a filesystem change is detected (among other event\nsources). 
By default, watchexec uses efficient kernel-level mechanisms\nto watch for changes.\n\nAt startup, the specified command is run once, and watchexec begins\nmonitoring for changes.\n\nEvents are debounced and checked using a variety of mechanisms, which\nyou can control using the flags in the \\*\\*Filtering\\*\\* section. The\norder of execution is: internal prioritisation (signals come before\neverything else, and SIGINT/SIGTERM are processed even more urgently),\nthen file event kind (\\`\\--fs-events\\`), then files explicitly watched\nwith \\`-w\\`, then ignores (\\`\\--ignore\\` and co), then filters (which\nincludes \\`\\--exts\\`), then filter programs.\n\nExamples:\n\nRebuild a project when source files change:\n\n\\$ watchexec make\n\nWatch all HTML, CSS, and JavaScript files for changes:\n\n\\$ watchexec -e html,css,js make\n\nRun tests when source files change, clearing the screen each time:\n\n\\$ watchexec -c make test\n\nLaunch and restart a node.js server:\n\n\\$ watchexec -r node app.js\n\nWatch lib and src directories for changes, rebuilding each time:\n\n\\$ watchexec -w lib -w src make\n\n# OPTIONS\n\n**\\--completions** *\\<SHELL\\>*\n\n:   Generate a shell completions script\n\n    Provides a completions script or configuration for the given shell.\n    If Watchexec is not distributed with pre-generated completions, you\n    can use this to generate them yourself.\n\n    Supported shells: bash, elvish, fish, nu, powershell, zsh.\n\n**\\--manual**\n\n:   Show the manual page\n\n    This shows the manual page for Watchexec, if the output is a\n    terminal and the man program is available. If not, the manual page\n    is printed to stdout in ROFF format (suitable for writing to a\n    watchexec.1 file).\n\n**\\--only-emit-events**\n\n:   Only emit events to stdout, run no commands.\n\n    This is a convenience option for using Watchexec as a file watcher,\n    without running any commands. 
It is almost equivalent to using\n    \\`cat\\` as the command, except that it will not spawn a new process\n    for each event.\n\n    This option implies \\`\\--emit-events-to=json-stdio\\`; you may also\n    use the text mode by specifying \\`\\--emit-events-to=stdio\\`.\n\n**-h**, **\\--help**\n\n:   Print help (see a summary with -h)\n\n**-V**, **\\--version**\n\n:   Print version\n\n\\[*COMMAND*\\]\n\n:   Command (program and arguments) to run on changes\n\n    It's run when events pass filters and the debounce period (and once\n    at startup unless \\--postpone is given). If you pass flags to the\n    command, you should separate them with \\-- though that is not strictly\n    required.\n\n    Examples:\n\n    \\$ watchexec -w src npm run build\n\n    \\$ watchexec -w src \\-- rsync -a src dest\n\n    Take care when using globs or other shell expansions in the command.\n    Your shell may expand them before ever passing them to Watchexec,\n    and the results may not be what you expect. Compare:\n\n    \\$ watchexec echo src/\\*.rs\n\n    \\$ watchexec echo 'src/\\*.rs'\n\n    \\$ watchexec \\--shell=none echo 'src/\\*.rs'\n\n    Behaviour depends on the value of \\--shell: for all except none,\n    every part of the command is joined together into one string with a\n    single ASCII space character, and given to the shell as described in\n    the help for \\--shell. For none, each distinct element of the command\n    is passed as per the execvp(3) convention: first argument is the\n    program, as a path or searched for in the PATH environment variable,\n    rest are arguments.\n\n# COMMAND\n\n**\\--delay-run** *\\<DURATION\\>*\n\n:   Sleep before running the command\n\n    This option will cause Watchexec to sleep for the specified amount\n    of time before running the command, after an event is detected. 
This\n    is like using \\"sleep 5 && command\\" in a shell, but portable and\n    slightly more efficient.\n\n    Takes a unit-less value in seconds, or a time span value such as\n    \\"2min 5s\\". Providing a unit-less value is deprecated and will\n    warn; it will be an error in the future.\n\n**-E**, **\\--env** *\\<KEY=VALUE\\>*\n\n:   Add env vars to the command\n\n    This is a convenience option for setting environment variables for\n    the command, without setting them for the Watchexec process itself.\n\n    Use key=value syntax. Multiple variables can be set by repeating the\n    option.\n\n**\\--socket** *\\<PORT\\>*\n\n:   Provide a socket to the command\n\n    This implements the systemd socket-passing protocol, like with\n    \\`systemfd\\`: sockets are opened from the watchexec process, and\n    then passed to the commands it runs. This lets you keep sockets open\n    and avoid address reuse issues or dropping packets.\n\n    This option can be supplied multiple times, to open multiple\n    sockets.\n\n    The value can be either of \\`PORT\\` (opens a TCP listening socket at\n    that port), \\`HOST:PORT\\` (specify a host IP address; IPv6 addresses\n    can be specified \\`\\[bracketed\\]\\`), \\`TYPE::PORT\\` or\n    \\`TYPE::HOST:PORT\\` (specify a socket type, \\`tcp\\` / \\`udp\\`).\n\n    This integration only provides basic support; if you want more\n    control you should use the \\`systemfd\\` tool from\n    \\<https://github.com/mitsuhiko/systemfd\\>, upon which this is based.\n    The syntax here and the spawning behaviour are identical to\n    \\`systemfd\\`, and both watchexec and systemfd are compatible\n    implementations of the systemd socket-activation protocol.\n\n    Watchexec does \\_not\\_ set the \\`LISTEN_PID\\` variable on unix,\n    which means any child process of your command could accidentally\n    bind to the sockets, unless the \\`LISTEN\\_\\*\\` variables are removed\n    from the environment.\n\n**-n**\n\n:   Shorthand for \\--shell=none\n\n**\\--no-process-group**\n\n:   Don't use a process group\n\n    By default, Watchexec will run the command in a process group, so\n    that signals and terminations are sent to all processes in the\n    group. Sometimes that's not what you want, and you can disable the\n    behaviour with this option.\n\n    Deprecated, use \\--wrap-process=none instead.\n\n**\\--shell** *\\<SHELL\\>*\n\n:   Use a different shell\n\n    By default, Watchexec will use \\$SHELL if it's defined or a default\n    of sh on Unix-likes, and either pwsh, powershell, or cmd (CMD.EXE)\n    on Windows, depending on what Watchexec detects is the running\n    shell.\n\n    With this option, you can override that and use a different shell,\n    for example one with more features or one which has your custom\n    aliases and functions.\n\n    If the value has spaces, it is parsed as a command line, and the\n    first word used as the shell program, with the rest as arguments to\n    the shell.\n\n    The command is run with the -c flag (except for cmd on Windows,\n    where it's /C).\n\n    The special value none can be used to disable shell use entirely. In\n    that case, the command provided to Watchexec will be parsed, with\n    the first word being the executable and the rest being the\n    arguments, and executed directly. 
Note that this parsing is\n    rudimentary, and may not work as expected in all cases.\n\n    Using none is a little more efficient and can enable a stricter\n    interpretation of the input, but it also means that you can't use\n    shell features like globbing, redirection, control flow, logic, or\n    pipes.\n\n    Examples:\n\n    Use without shell:\n\n    \\$ watchexec -n \\-- zsh -x -o shwordsplit scr\n\n    Use with powershell core:\n\n    \\$ watchexec \\--shell=pwsh \\-- Test-Connection localhost\n\n    Use with CMD.exe:\n\n    \\$ watchexec \\--shell=cmd \\-- dir\n\n    Use with a different unix shell:\n\n    \\$ watchexec \\--shell=bash \\-- echo \\$BASH_VERSION\n\n    Use with a unix shell and options:\n\n    \\$ watchexec \\--shell='zsh -x -o shwordsplit' \\-- scr\n\n**\\--stop-signal** *\\<SIGNAL\\>*\n\n:   Signal to send to stop the command\n\n    This is used by restart and signal modes of \\--on-busy-update\n    (unless \\--signal is provided). The restart behaviour is to send the\n    signal, wait for the command to exit, and if it hasn't exited after\n    some time (see \\--stop-timeout), forcefully terminate it.\n\n    The default on unix is \\"SIGTERM\\".\n\n    Input is parsed as a full signal name (like \\"SIGTERM\\"), a short\n    signal name (like \\"TERM\\"), or a signal number (like \\"15\\"). All\n    input is case-insensitive.\n\n    On Windows this option is technically supported but only supports\n    the \\"KILL\\" event, as Watchexec cannot yet deliver other events.\n    Windows doesn't have signals as such; instead it has termination\n    (here called \\"KILL\\" or \\"STOP\\") and \\"CTRL+C\\", \\"CTRL+BREAK\\",\n    and \\"CTRL+CLOSE\\" events. 
For portability the unix signals\n    \\"SIGKILL\\", \\"SIGINT\\", \\"SIGTERM\\", and \\"SIGHUP\\" are\n    respectively mapped to these.\n\n**\\--stop-timeout** *\\<TIMEOUT\\>*\n\n:   Time to wait for the command to exit gracefully\n\n    This is used by the restart mode of \\--on-busy-update. After the\n    graceful stop signal is sent, Watchexec will wait for the command to\n    exit. If it hasn't exited after this time, it is forcefully\n    terminated.\n\n    Takes a unit-less value in seconds, or a time span value such as\n    \\"5min 20s\\". Providing a unit-less value is deprecated and will\n    warn; it will be an error in the future.\n\n    The default is 10 seconds. Set to 0 to immediately force-kill the\n    command.\n\n    This has no practical effect on Windows as the command is always\n    forcefully terminated; see \\--stop-signal for why.\n\n**\\--timeout** *\\<TIMEOUT\\>*\n\n:   Kill the command if it runs longer than this duration\n\n    Takes a time span value such as \\"30s\\", \\"5min\\", or \\"1h 30m\\".\n\n    When the timeout is reached, the command is gracefully stopped using\n    \\--stop-signal, then forcefully terminated after \\--stop-timeout if\n    still running.\n\n    Each run of the command has its own independent timeout.\n\n**\\--workdir** *\\<DIRECTORY\\>*\n\n:   Set the working directory\n\n    By default, the working directory of the command is the working\n    directory of Watchexec. You can change that with this option. 
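\n\n    For example (an illustrative sketch; \\`build\\` is a hypothetical\n    subdirectory), to watch the project root but run make from within\n    a subdirectory:\n\n    \\$ watchexec \\--workdir build \\-- make\n\n    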
Note\n    that paths may be less intuitive to use with this.\n\n**\\--wrap-process** *\\<MODE\\>* \\[default: group\\]\n\n:   Configure how the process is wrapped\n\n    By default, Watchexec will run the command in a session on Mac, in a\n    process group in Unix, and in a Job Object in Windows.\n\n    Some Unix programs prefer running in a session, while others do not\n    work in a process group.\n\n    Use group to use a process group, session to use a process session,\n    and none to run the command directly. On Windows, either of group or\n    session will use a Job Object.\n\n    If you find you need to specify this frequently for different kinds\n    of programs, file an issue at\n    \\<https://github.com/watchexec/watchexec/issues\\>. As errors of this\n    nature are hard to debug and can be highly environment-dependent,\n    reports from \\*multiple affected people\\* are more likely to be\n    actioned promptly. Ask your friends/colleagues!\n\n# EVENTS\n\n**-d**, **\\--debounce** *\\<TIMEOUT\\>*\n\n:   Time to wait for new events before taking action\n\n    When an event is received, Watchexec will wait for up to this amount\n    of time before handling it (such as running the command). This is\n    essential as what you might perceive as a single change may actually\n    emit many events, and without this behaviour, Watchexec would run\n    much too often. Additionally, it's not infrequent that file writes\n    are not atomic, and each write may emit an event, so this is a good\n    way to avoid running a command while a file is partially written.\n\n    An alternative use is to set a high value (like \\"30min\\" or\n    longer), to save power or bandwidth on intensive tasks, like an\n    ad-hoc backup script. In those use cases, note that every\n    accumulated event will build up in memory.\n\n    Takes a unit-less value in milliseconds, or a time span value such\n    as \\"5sec 20ms\\". 
Providing a unit-less value is deprecated and will\n    warn; it will be an error in the future.\n\n    The default is 50 milliseconds. Setting to 0 is highly discouraged.\n\n**\\--emit-events-to** *\\<MODE\\>*\n\n:   Configure event emission\n\n    Watchexec can emit event information when running a command, which\n    can be used by the child process to target specific changed files.\n\n    One thing to take care with is assuming inherent behaviour where\n    there is only chance. Notably, it could appear as if the \\`RENAMED\\`\n    variable contains both the original and the new path being renamed.\n    In previous versions, it would even appear on some platforms as if\n    the original always came before the new. However, none of this was\n    true. It's impossible to reliably and portably know which changed\n    path is the old or new, \\"half\\" renames may appear (only the\n    original, only the new), \\"unknown\\" renames may appear (change was\n    a rename, but whether it was the old or new isn't known), rename\n    events might split across two debouncing boundaries, and so on.\n\n    This option controls where that information is emitted. It defaults\n    to none, which doesn't emit event information at all. The other\n    options are environment (deprecated), stdio, file, json-stdio, and\n    json-file.\n\n    The stdio and file modes are text-based: stdio writes absolute paths\n    to the stdin of the command, one per line, each prefixed with\n    \\`create:\\`, \\`remove:\\`, \\`rename:\\`, \\`modify:\\`, or \\`other:\\`,\n    then closes the handle; file writes the same thing to a temporary\n    file, and its path is given with the \\$WATCHEXEC_EVENTS_FILE\n    environment variable.\n\n    There are also two JSON modes, which are based on JSON objects and\n    can represent the full set of events Watchexec handles. 
Here's an\n    example of a folder being created on Linux:\n\n    ```json\n    {\n      \"tags\": [\n        {\n          \"kind\": \"path\",\n          \"absolute\": \"/home/user/your/new-folder\",\n          \"filetype\": \"dir\"\n        },\n        {\n          \"kind\": \"fs\",\n          \"simple\": \"create\",\n          \"full\": \"Create(Folder)\"\n        },\n        {\n          \"kind\": \"source\",\n          \"source\": \"filesystem\"\n        }\n      ],\n      \"metadata\": {\n        \"notify-backend\": \"inotify\"\n      }\n    }\n    ```\n\n    The fields are as follows:\n\n    - \\`tags\\`, structured event data.\n      - \\`tags\\[\\].kind\\`, which can be:\n        - path, along with:\n          - \\`absolute\\`, an absolute path.\n          - \\`filetype\\`, a file type if known (dir, file, symlink, other).\n        - fs:\n          - \\`simple\\`, the \\"simple\\" event type (access, create, modify, remove, or other).\n          - \\`full\\`, the \\"full\\" event type, which is too complex to fully describe here, but looks like General(Precise(Specific)).\n        - source, along with:\n          - \\`source\\`, the source of the event (filesystem, keyboard, mouse, os, time, internal).\n        - keyboard, along with:\n          - \\`keycode\\`. Currently only the value eof is supported.\n        - process, for events caused by processes:\n          - \\`pid\\`, the process ID.\n        - signal, for signals sent to Watchexec:\n          - \\`signal\\`, the normalised signal name (hangup, interrupt, quit, terminate, user1, user2).\n        - completion, for when a command ends:\n          - \\`disposition\\`, the exit disposition (success, error, signal, stop, exception, continued).\n          - \\`code\\`, the exit, signal, stop, or exception code.\n    - \\`metadata\\`, additional information about the event.\n\n    The json-stdio mode will emit JSON events to the standard input of\n    the command, one per line, then close stdin. 
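\n\n    For example (an illustrative sketch; assumes the \\`jq\\` tool is\n    available), to print the absolute path of each changed file from\n    the per-line JSON events:\n\n    \\$ watchexec \\--emit-events-to=json-stdio \\-- jq -r '.tags\\[\\] \\| select(.kind == \\"path\\") \\| .absolute'\n\n    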
The json-file mode will\n    create a temporary file, write the events to it, and provide the\n    path to the file with the \\$WATCHEXEC_EVENTS_FILE environment\n    variable.\n\n    Finally, the environment mode was the default until 2.0. It sets\n    environment variables with the paths of the affected files, for\n    filesystem events:\n\n    \\$WATCHEXEC_COMMON_PATH is set to the longest common path of all of\n    the below variables, and so should be prepended to each path to\n    obtain the full/real path. Then:\n\n    - \\$WATCHEXEC_CREATED_PATH is set when files/folders were created\n    - \\$WATCHEXEC_REMOVED_PATH is set when files/folders were removed\n    - \\$WATCHEXEC_RENAMED_PATH is set when files/folders were renamed\n    - \\$WATCHEXEC_WRITTEN_PATH is set when files/folders were modified\n    - \\$WATCHEXEC_META_CHANGED_PATH is set when files/folders' metadata was modified\n    - \\$WATCHEXEC_OTHERWISE_CHANGED_PATH is set for every other kind of pathed event\n\n    Multiple paths are separated by the system path separator, ; on\n    Windows and : on unix. Within each variable, paths are deduplicated\n    and sorted in binary order (i.e. neither Unicode nor locale aware).\n\n    This is the legacy mode, is deprecated, and will be removed in the\n    future. The environment is a very restricted space, while also\n    limited in what it can usefully represent. Large numbers of files\n    will either cause the environment to be truncated, or may error or\n    crash the process entirely. The \\$WATCHEXEC_COMMON_PATH is also\n    unintuitive, as demonstrated by the multiple confused queries that\n    have landed in my inbox over the years.\n\n**-I**, **\\--interactive**\n\n:   Respond to keypresses to quit, restart, or pause\n\n    In interactive mode, Watchexec listens for keypresses and responds\n    to them. Currently supported keys are: r to restart the command, p\n    to toggle pausing the watch, and q to quit. 
This requires a terminal\n    (TTY) and puts stdin into raw mode, so the child process will not\n    receive stdin input.\n\n**\\--exit-on-error**\n\n:   Exit when the command has an error\n\n    By default, Watchexec will continue to watch and re-run the command\n    after the command exits, regardless of its exit status. With this\n    option, it will instead exit when the command completes with any\n    non-success exit status.\n\n    This is useful when running Watchexec in a process manager or\n    container, where you want the container to restart when the command\n    fails rather than hang waiting for file changes.\n\n**\\--map-signal** *\\<SIGNAL:SIGNAL\\>*\n\n:   Translate signals from the OS to signals to send to the command\n\n    Takes a pair of signal names, separated by a colon, such as\n    \\"TERM:INT\\" to map SIGTERM to SIGINT. The first signal is the one\n    received by watchexec, and the second is the one sent to the\n    command. The second can be omitted to discard the first signal, such\n    as \\"TERM:\\" to not do anything on SIGTERM.\n\n    If SIGINT or SIGTERM are mapped, then they no longer quit Watchexec.\n    Besides making it hard to quit Watchexec itself, this is useful to\n    pass a Ctrl-C to the command without also terminating Watchexec\n    and the underlying program with it, e.g. with \\"INT:INT\\".\n\n    This option can be specified multiple times to map multiple signals.\n\n    Signal syntax is case-insensitive for short names (like \\"TERM\\",\n    \\"USR2\\") and long names (like \\"SIGKILL\\", \\"SIGHUP\\"). Signal\n    numbers are also supported (like \\"15\\", \\"31\\"). 
On Windows, the\n    forms \\"STOP\\", \\"CTRL+C\\", and \\"CTRL+BREAK\\" are also supported to\n    receive, but Watchexec cannot yet deliver other \\"signals\\" than a\n    STOP.\n\n**-o**, **\\--on-busy-update** *\\<MODE\\>*\n\n:   What to do when receiving events while the command is running\n\n    Default is to do-nothing, which ignores events while the command is\n    running, so that changes that occur due to the command are ignored,\n    like compilation outputs. You can also use queue which will run the\n    command once again when the current run has finished if any events\n    occur while it's running, or restart, which terminates the running\n    command and starts a new one. Finally, there's signal, which only\n    sends a signal; this can be useful with programs that can reload\n    their configuration without a full restart.\n\n    The signal can be specified with the \\--signal option.\n\n**\\--poll** \\[*\\<INTERVAL\\>*\\]\n\n:   Poll for filesystem changes\n\n    By default, and where available, Watchexec uses the operating\n    system's native file system watching capabilities. This option\n    disables that and instead uses a polling mechanism, which is less\n    efficient but can work around issues with some file systems (like\n    network shares) or edge cases.\n\n    Optionally takes a unit-less value in milliseconds, or a time span\n    value such as \\"2s 500ms\\", to use as the polling interval. If not\n    specified, the default is 30 seconds. Providing a unit-less value is\n    deprecated and will warn; it will be an error in the future.\n\n    Aliased as \\--force-poll.\n\n**-p**, **\\--postpone**\n\n:   Wait until first change before running command\n\n    By default, Watchexec will run the command once immediately. 
With\n    this option, it will instead wait until an event is detected before\n    running the command as normal.\n\n**-r**, **\\--restart**\n\n:   Restart the process if it's still running\n\n    This is a shorthand for \\--on-busy-update=restart.\n\n**-s**, **\\--signal** *\\<SIGNAL\\>*\n\n:   Send a signal to the process when it's still running\n\n    Specify a signal to send to the process when it's still running. This\n    implies \\--on-busy-update=signal; otherwise the signal used when\n    that mode is restart is controlled by \\--stop-signal.\n\n    See the long documentation for \\--stop-signal for syntax.\n\n    Signals are not supported on Windows at the moment, and will always\n    be overridden to kill. See \\--stop-signal for more on Windows\n    \\"signals\\".\n\n**\\--stdin-quit**\n\n:   Exit when stdin closes\n\n    This watches the stdin file descriptor for EOF, and exits Watchexec\n    gracefully when it is closed. This is used by some process managers\n    to avoid leaving zombie processes around.\n\n# FILTERING\n\n**-e**, **\\--exts** *\\<EXTENSIONS\\>*\n\n:   Filename extensions to filter to\n\n    This is a quick filter to only emit events for files with the given\n    extensions. Extensions can be given with or without the leading dot\n    (e.g. js or .js). Multiple extensions can be given by repeating the\n    option or by separating them with commas.\n\n**-f**, **\\--filter** *\\<PATTERN\\>*\n\n:   Filename patterns to filter to\n\n    Provide a glob-like filter pattern, and only events for files\n    matching the pattern will be emitted. Multiple patterns can be given\n    by repeating the option. Events that are not from files (e.g.\n    signals, keyboard events) will pass through untouched.\n\n**\\--filter-file** *\\<PATH\\>*\n\n:   Files to load filters from\n\n    Provide a path to a file containing filters, one per line. Empty\n    lines and lines starting with \\# are ignored. 
Uses the same pattern\n    format as the \\--filter option.\n\n    This can also be used via the \\$WATCHEXEC_FILTER_FILES environment\n    variable.\n\n**-j**, **\\--filter-prog** *\\<EXPRESSION\\>*\n\n:   Filter programs.\n\n    Provide your own custom filter programs in jaq (similar to jq)\n    syntax. Programs are given an event in the same format as described\n    in \\--emit-events-to and must return a boolean. Invalid programs\n    will make watchexec fail to start; use -v to see program runtime\n    errors.\n\n    In addition to the jaq stdlib, watchexec adds some custom filter\n    definitions:\n\n    \\- path \\| file_meta returns file metadata or null if the file does\n    not exist.\n\n    \\- path \\| file_size returns the size of the file at path, or null\n    if it does not exist.\n\n    \\- path \\| file_read(bytes) returns a string with the first n bytes\n    of the file at path. If the file is smaller than n bytes, the whole\n    file is returned. There is no filter to read the whole file at once\n    to encourage limiting the amount of data read and processed.\n\n    \\- string \\| hash, and path \\| file_hash return the hash of the\n    string or file at path. No guarantee is made about the algorithm\n    used: treat it as an opaque value.\n\n    \\- any \\| kv_store(key), kv_fetch(key), and kv_clear provide a\n    simple key-value store. Data is kept in memory only, there is no\n    persistence. Consistency is not guaranteed.\n\n    \\- any \\| printout, any \\| printerr, and any \\| log(level) will\n    print or log any given value to stdout, stderr, or the log (levels =\n    error, warn, info, debug, trace), and pass the value through (so\n    \\[1\\] \\| log(\\\"debug\\\") \\| .\\[\\] will produce a 1 and log \\[1\\]).\n\n    All filtering done with such programs, and especially those using kv\n    or filesystem access, is much slower than the other filtering\n    methods. 
If filtering is too slow, events will back up and stall\n    watchexec. Take care when designing your filters.\n\n    If the argument to this option starts with an @, the rest of the\n    argument is taken to be the path to a file containing a jaq program.\n\n    Jaq programs are run in order, after all other filters, and\n    short-circuit: if a filter (jaq or not) rejects an event, execution\n    stops there, and no other filters are run. Additionally, they stop\n    after outputting the first value, so you'll want to use any or all\n    when iterating, otherwise only the first item will be processed,\n    which can be quite confusing!\n\n    Find user-contributed programs or submit your own useful ones at\n    \\<https://github.com/watchexec/watchexec/discussions/592\\>.\n\n    Examples:\n\n    Regexp ignore filter on paths:\n\n    all(.tags\\[\\] \\| select(.kind == \\"path\\"); .absolute \\|\n    test(\\"\\[.\\]test\\[.\\]js\\$\\")) \\| not\n\n    Pass any event that creates a file:\n\n    any(.tags\\[\\] \\| select(.kind == \\"fs\\"); .simple == \\"create\\")\n\n    Pass events that touch executable files:\n\n    any(.tags\\[\\] \\| select(.kind == \\"path\\" && .filetype == \\"file\\");\n    .absolute \\| metadata \\| .executable)\n\n    Ignore files that start with shebangs:\n\n    any(.tags\\[\\] \\| select(.kind == \\"path\\" && .filetype == \\"file\\");\n    .absolute \\| read(2) == \\"#!\\") \\| not\n\n**\\--fs-events** *\\<EVENTS\\>*\n\n:   Filesystem events to filter to\n\n    This is a quick filter to only emit events for the given types of\n    filesystem changes. Choose from access, create, remove, rename,\n    modify, metadata. Multiple types can be given by repeating the\n    option or by separating them with commas. 
By default, this is all\n    types except for access.\n\n    This may apply filtering at the kernel level when possible, which\n    can be more efficient, but may be more confusing when reading the\n    logs.\n\n**-i**, **\\--ignore** *\\<PATTERN\\>*\n\n:   Filename patterns to filter out\n\n    Provide a glob-like filter pattern, and events for files matching\n    the pattern will be excluded. Multiple patterns can be given by\n    repeating the option. Events that are not from files (e.g. signals,\n    keyboard events) will pass through untouched.\n\n**\\--ignore-file** *\\<PATH\\>*\n\n:   Files to load ignores from\n\n    Provide a path to a file containing ignores, one per line. Empty\n    lines and lines starting with \\# are ignored. Uses the same pattern\n    format as the \\--ignore option.\n\n    This can also be used via the \\$WATCHEXEC_IGNORE_FILES environment\n    variable.\n\n**\\--ignore-nothing**\n\n:   Don't ignore anything at all\n\n    This is a shorthand for \\--no-discover-ignore, \\--no-default-ignore.\n\n    Note that ignores explicitly loaded via other command line options,\n    such as \\--ignore or \\--ignore-file, will still be used.\n\n**\\--no-default-ignore**\n\n:   Don't use internal default ignores\n\n    Watchexec has a set of default ignore patterns, such as editor swap\n    files, \\`\\*.pyc\\`, \\`\\*.pyo\\`, \\`.DS_Store\\`, \\`.bzr\\`, \\`\\_darcs\\`,\n    \\`.fossil-settings\\`, \\`.git\\`, \\`.hg\\`, \\`.pijul\\`, \\`.svn\\`, and\n    Watchexec log files.\n\n**\\--no-discover-ignore**\n\n:   Don't discover ignore files at all\n\n    This is a shorthand for \\--no-global-ignore, \\--no-vcs-ignore,\n    \\--no-project-ignore, but even more efficient as it will skip all\n    the ignore discovery mechanisms from the get-go.\n\n    Note that default ignores are still loaded, see\n    \\--no-default-ignore.\n\n**\\--no-global-ignore**\n\n:   Don't load global ignores\n\n    This disables loading of global or user ignore files, like\n   
 \\~/.gitignore, \\~/.config/watchexec/ignore, or\n    %APPDATA%\\\\Bazzar\\\\2.0\\\\ignore. Contrast with \\--no-vcs-ignore and\n    \\--no-project-ignore.\n\n    Supported global ignore files:\n\n    - Git (if core.excludesFile is set): the file at that path\n    - Git (otherwise): the first found of \\$XDG_CONFIG_HOME/git/ignore,\n      %APPDATA%/.gitignore, %USERPROFILE%/.gitignore,\n      \\$HOME/.config/git/ignore, \\$HOME/.gitignore.\n    - Bazaar: the first found of %APPDATA%/Bazzar/2.0/ignore,\n      \\$HOME/.bazaar/ignore.\n    - Watchexec: the first found of \\$XDG_CONFIG_HOME/watchexec/ignore,\n      %APPDATA%/watchexec/ignore, %USERPROFILE%/.watchexec/ignore,\n      \\$HOME/.watchexec/ignore.\n\n    Like for project files, Git and Bazaar global files will only be\n    used for the corresponding VCS as used in the project.\n\n**\\--no-meta**\n\n:   Don't emit fs events for metadata changes\n\n    This is a shorthand for \\--fs-events create,remove,rename,modify.\n    Using it alongside the \\--fs-events option is nonsensical and not\n    allowed.\n\n**\\--no-project-ignore**\n\n:   Don't load project-local ignores\n\n    This disables loading of project-local ignore files, like .gitignore\n    or .ignore in the watched project. This is contrasted with\n    \\--no-vcs-ignore, which disables loading of Git and other VCS ignore\n    files, and with \\--no-global-ignore, which disables loading of\n    global or user ignore files, like \\~/.gitignore or\n    \\~/.config/watchexec/ignore.\n\n    Supported project ignore files:\n\n    - Git: .gitignore at project root and child directories,\n      .git/info/exclude, and the file pointed to by \\`core.excludesFile\\`\n      in .git/config.\n    - Mercurial: .hgignore at project root and child directories.\n    - Bazaar: .bzrignore at project root.\n    - Darcs: \\_darcs/prefs/boring\n    - Fossil: .fossil-settings/ignore-glob\n    - Ripgrep/Watchexec/generic: .ignore at project root and child\n      directories.\n\n    VCS ignore files (Git, Mercurial, Bazaar, Darcs, Fossil) are only\n    used if the corresponding VCS is discovered to be in use for the\n    project/origin. For example, a .bzrignore in a Git repository will\n    be discarded.\n\n**\\--no-vcs-ignore**\n\n:   Don't load gitignores\n\n    Among other VCS exclude files, like for Mercurial, Subversion,\n    Bazaar, Darcs, Fossil. Note that Watchexec will detect which of\n    these is in use, if any, and only load the relevant files. Both\n    global (like \\~/.gitignore) and local (like .gitignore) files are\n    considered.\n\n    This option is useful if you want to watch files that are ignored by\n    Git.\n\n**\\--project-origin** *\\<DIRECTORY\\>*\n\n:   Set the project origin\n\n    Watchexec will attempt to discover the project's \\"origin\\" (or\n    \\"root\\") by searching for a variety of markers, like files or\n    directory patterns. It does its best but sometimes gets it wrong,\n    and you can override that with this option.\n\n    The project origin is used to determine the path of certain ignore\n    files, which VCS is being used, the meaning of a leading / in\n    filtering patterns, and maybe more in the future.\n\n    When set, Watchexec will also not bother searching, which can be\n    significantly faster.\n\n**-w**, **\\--watch** *\\<PATH\\>*\n\n:   Watch a specific file or directory\n\n    By default, Watchexec watches the current directory.\n\n    When watching a single file, it's often better to watch the\n    containing directory instead, and filter on the filename. Some\n    editors may replace the file with a new one when saving, and some\n    platforms may not detect that or further changes.\n\n    Upon starting, Watchexec resolves a \\"project origin\\" from the\n    watched paths. 
See the help for \\--project-origin for more\n    information.\n\n    This option can be specified multiple times to watch multiple files\n    or directories.\n\n    The special value /dev/null, provided as the only path watched, will\n    cause Watchexec to not watch any paths. Other event sources (like\n    signals or key events) may still be used.\n\n**-W**, **\\--watch-non-recursive** *\\<PATH\\>*\n\n:   Watch a specific directory, non-recursively\n\n    Unlike -w, folders watched with this option are not recursed into.\n\n    This option can be specified multiple times to watch multiple\n    directories non-recursively.\n\n**-F**, **\\--watch-file** *\\<PATH\\>*\n\n:   Watch files and directories from a file\n\n    Each line in the file will be interpreted as if given to -w.\n\n    For more complex uses (like watching non-recursively), use the\n    argfile capability: build a file containing command-line options and\n    pass it to watchexec with \\`@path/to/argfile\\`.\n\n    The special value - will read from STDIN; this is incompatible with\n    \\--stdin-quit.\n\n# DEBUGGING\n\n**\\--log-file** \\[*\\<PATH\\>*\\]\n\n:   Write diagnostic logs to a file\n\n    This writes diagnostic logs to a file, instead of the terminal, in\n    JSON format. If a log level was not already specified, this will set\n    it to -vvv.\n\n    If a path is not provided, the default is the working directory.\n    Note that with \\--ignore-nothing, writes to the log file will\n    likely get picked up by Watchexec, causing a loop; prefer setting a\n    path outside of the watched directory.\n\n    If the path provided is a directory, a file will be created in that\n    directory. The file name will be the current date and time, in the\n    format watchexec.YYYY-MM-DDTHH-MM-SSZ.log.\n\n**\\--print-events**\n\n:   Print events that trigger actions\n\n    This prints the events that triggered the action when handling it\n    (after debouncing), in a human-readable form. 
This is useful for\n    debugging filters.\n\n    Use -vvv instead when you need more diagnostic information.\n\n**-v**, **\\--verbose**\n\n:   Set diagnostic log level\n\n    This enables diagnostic logging, which is useful for investigating\n    bugs or gaining more insight into faulty filters or \\\"missing\\\"\n    events. Use multiple times to increase verbosity.\n\n    Goes up to -vvvv. When submitting bug reports, default to a -vvv log\n    level.\n\n    You may want to use this with \\--log-file to avoid polluting your\n    terminal.\n\n    Setting \\$WATCHEXEC_LOG also works, and takes precedence, but is not\n    recommended. However, using \\$WATCHEXEC_LOG is the only way to get\n    logs from before these options are parsed.\n\n# OUTPUT\n\n**\\--bell**\n\n:   Ring the terminal bell on command completion\n\n**-c**, **\\--clear** \\[*\\<MODE\\>*\\]\n\n:   Clear screen before running command\n\n    If this doesn't completely clear the screen, try \\--clear=reset.\n\n**\\--color** *\\<MODE\\>* \\[default: auto\\]\n\n:   When to use terminal colours\n\n    Setting the environment variable \\`NO_COLOR\\` to any value is\n    equivalent to \\`\\--color=never\\`.\n\n**-N**, **\\--notify** \\[*\\<WHEN\\>*\\]\n\n:   Alert when commands start and end\n\n    With this, Watchexec will emit a desktop notification when a command\n    starts and ends, on supported platforms. On unsupported platforms,\n    it may silently do nothing, or log a warning.\n\n    The mode can be specified to only notify when the command\n    \\`start\\`s, \\`end\\`s, or for \\`both\\` (which is the default).\n\n**-q**, **\\--quiet**\n\n:   Don't print starting and stopping messages\n\n    By default, Watchexec will print a message when the command starts\n    and stops. 
This option disables this behaviour, so only the command's\n    output, warnings, and errors will be printed.\n\n**\\--timings**\n\n:   Print how long the command took to run\n\n    This may not be exactly accurate, as it includes some overhead from\n    Watchexec itself. Use the \\`time\\` utility, high-precision timers,\n    or benchmarking tools for more accurate results.\n\n# EXTRA\n\nUse \\@argfile as the first argument to load arguments from the file\nargfile (one argument per line), which will be inserted in place of the\n\\@argfile (further arguments on the CLI will override or add onto those\nin the file).\n\nDidn't expect this much output? Use the short -h flag to get short help.\n\n# VERSION\n\nv2.5.1\n\n# AUTHORS\n\nFélix Saparelli \\<felix@passcod.name\\>, Matt Green\n\\<mattgreenrocks@gmail.com\\>\n"
  }
]