Repository: mattgreen/watchexec Branch: main Commit: 9b7fed6a52c7 Files: 232 Total size: 756.8 KB Directory structure: gitextract_c7s6gsdt/ ├── .cargo/ │ └── config.toml ├── .editorconfig ├── .gitattributes ├── .github/ │ ├── FUNDING.yml │ ├── ISSUE_TEMPLATE/ │ │ ├── bug_report.md │ │ ├── feature_request.md │ │ └── regression.md │ ├── dependabot.yml │ └── workflows/ │ ├── clippy.yml │ ├── dist-manifest.jq │ ├── release-cli.yml │ └── tests.yml ├── .gitignore ├── .rustfmt.toml ├── CITATION.cff ├── CONTRIBUTING.md ├── Cargo.toml ├── LICENSE ├── README.md ├── bin/ │ ├── completions │ ├── dates.mjs │ ├── manpage │ └── release-notes ├── cliff.toml ├── completions/ │ ├── bash │ ├── elvish │ ├── fish │ ├── nu │ ├── powershell │ └── zsh ├── crates/ │ ├── bosion/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── examples/ │ │ │ ├── clap/ │ │ │ │ ├── Cargo.toml │ │ │ │ ├── build.rs │ │ │ │ └── src/ │ │ │ │ └── main.rs │ │ │ ├── default/ │ │ │ │ ├── Cargo.toml │ │ │ │ ├── build.rs │ │ │ │ └── src/ │ │ │ │ ├── common.rs │ │ │ │ └── main.rs │ │ │ ├── no-git/ │ │ │ │ ├── Cargo.toml │ │ │ │ ├── build.rs │ │ │ │ └── src/ │ │ │ │ └── main.rs │ │ │ ├── no-std/ │ │ │ │ ├── Cargo.toml │ │ │ │ ├── build.rs │ │ │ │ └── src/ │ │ │ │ └── main.rs │ │ │ └── snapshots/ │ │ │ ├── build_date.txt │ │ │ ├── build_datetime.txt │ │ │ ├── crate_features.txt │ │ │ ├── crate_version.txt │ │ │ ├── default_long_version.txt │ │ │ ├── default_long_version_with.txt │ │ │ ├── git_commit_date.txt │ │ │ ├── git_commit_datetime.txt │ │ │ ├── git_commit_hash.txt │ │ │ ├── git_commit_shorthash.txt │ │ │ ├── no_git_long_version.txt │ │ │ └── no_git_long_version_with.txt │ │ ├── release.toml │ │ ├── run-tests.sh │ │ └── src/ │ │ ├── info.rs │ │ └── lib.rs │ ├── cli/ │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── build.rs │ │ ├── integration/ │ │ │ ├── env-unix.sh │ │ │ ├── no-shell-unix.sh │ │ │ ├── socket.sh │ │ │ ├── stdin-quit-unix.sh │ │ │ └── trailingargfile-unix.sh │ │ ├── release.toml │ │ ├── 
run-tests.sh │ │ ├── src/ │ │ │ ├── args/ │ │ │ │ ├── command.rs │ │ │ │ ├── events.rs │ │ │ │ ├── filtering.rs │ │ │ │ ├── logging.rs │ │ │ │ └── output.rs │ │ │ ├── args.rs │ │ │ ├── config.rs │ │ │ ├── dirs.rs │ │ │ ├── emits.rs │ │ │ ├── filterer/ │ │ │ │ ├── parse.rs │ │ │ │ ├── proglib/ │ │ │ │ │ ├── file.rs │ │ │ │ │ ├── hash.rs │ │ │ │ │ ├── kv.rs │ │ │ │ │ ├── macros.rs │ │ │ │ │ └── output.rs │ │ │ │ ├── proglib.rs │ │ │ │ ├── progs.rs │ │ │ │ └── syncval.rs │ │ │ ├── filterer.rs │ │ │ ├── lib.rs │ │ │ ├── main.rs │ │ │ ├── socket/ │ │ │ │ ├── fallback.rs │ │ │ │ ├── parser.rs │ │ │ │ ├── test.rs │ │ │ │ ├── unix.rs │ │ │ │ └── windows.rs │ │ │ ├── socket.rs │ │ │ └── state.rs │ │ ├── tests/ │ │ │ ├── common/ │ │ │ │ └── mod.rs │ │ │ └── ignore.rs │ │ ├── watchexec-manifest.rc │ │ └── watchexec.exe.manifest │ ├── events/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── examples/ │ │ │ └── parse-and-print.rs │ │ ├── release.toml │ │ ├── src/ │ │ │ ├── event.rs │ │ │ ├── fs.rs │ │ │ ├── keyboard.rs │ │ │ ├── lib.rs │ │ │ ├── process.rs │ │ │ ├── sans_notify.rs │ │ │ └── serde_formats.rs │ │ └── tests/ │ │ ├── json.rs │ │ └── snapshots/ │ │ ├── array.json │ │ ├── asymmetric.json │ │ ├── completions.json │ │ ├── metadata.json │ │ ├── paths.json │ │ ├── signals.json │ │ ├── single.json │ │ └── sources.json │ ├── filterer/ │ │ ├── globset/ │ │ │ ├── CHANGELOG.md │ │ │ ├── Cargo.toml │ │ │ ├── README.md │ │ │ ├── release.toml │ │ │ ├── src/ │ │ │ │ └── lib.rs │ │ │ └── tests/ │ │ │ ├── filtering.rs │ │ │ └── helpers/ │ │ │ └── mod.rs │ │ └── ignore/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── release.toml │ │ ├── src/ │ │ │ └── lib.rs │ │ └── tests/ │ │ ├── filtering.rs │ │ ├── helpers/ │ │ │ └── mod.rs │ │ └── ignores/ │ │ ├── allowlist │ │ ├── folders │ │ ├── globs │ │ ├── negate │ │ ├── none-allowed │ │ ├── scopes-global │ │ ├── scopes-local │ │ ├── scopes-sublocal │ │ └── self.ignore │ ├── ignore-files/ │ │ ├── 
CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── release.toml │ │ ├── src/ │ │ │ ├── discover.rs │ │ │ ├── error.rs │ │ │ ├── filter.rs │ │ │ └── lib.rs │ │ └── tests/ │ │ ├── filtering.rs │ │ ├── global/ │ │ │ ├── first │ │ │ └── second │ │ ├── helpers/ │ │ │ └── mod.rs │ │ └── tree/ │ │ ├── base │ │ └── branch/ │ │ └── inner │ ├── lib/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── examples/ │ │ │ ├── only_commands.rs │ │ │ ├── only_events.rs │ │ │ ├── readme.rs │ │ │ └── restart_run_on_successful_build.rs │ │ ├── release.toml │ │ ├── src/ │ │ │ ├── action/ │ │ │ │ ├── handler.rs │ │ │ │ ├── quit.rs │ │ │ │ ├── return.rs │ │ │ │ └── worker.rs │ │ │ ├── action.rs │ │ │ ├── changeable.rs │ │ │ ├── config.rs │ │ │ ├── error/ │ │ │ │ ├── critical.rs │ │ │ │ ├── runtime.rs │ │ │ │ └── specialised.rs │ │ │ ├── error.rs │ │ │ ├── filter.rs │ │ │ ├── id.rs │ │ │ ├── late_join_set.rs │ │ │ ├── lib.rs │ │ │ ├── paths.rs │ │ │ ├── sources/ │ │ │ │ ├── fs.rs │ │ │ │ ├── keyboard.rs │ │ │ │ └── signal.rs │ │ │ ├── sources.rs │ │ │ ├── watched_path.rs │ │ │ └── watchexec.rs │ │ └── tests/ │ │ ├── env_reporting.rs │ │ └── error_handler.rs │ ├── project-origins/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── examples/ │ │ │ └── find-origins.rs │ │ ├── release.toml │ │ └── src/ │ │ └── lib.rs │ ├── signals/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── release.toml │ │ └── src/ │ │ └── lib.rs │ ├── supervisor/ │ │ ├── CHANGELOG.md │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── release.toml │ │ ├── src/ │ │ │ ├── command/ │ │ │ │ ├── conversions.rs │ │ │ │ ├── program.rs │ │ │ │ └── shell.rs │ │ │ ├── command.rs │ │ │ ├── errors.rs │ │ │ ├── flag.rs │ │ │ ├── job/ │ │ │ │ ├── job.rs │ │ │ │ ├── messages.rs │ │ │ │ ├── priority.rs │ │ │ │ ├── state.rs │ │ │ │ ├── task.rs │ │ │ │ ├── test.rs │ │ │ │ └── testchild.rs │ │ │ ├── job.rs │ │ │ └── lib.rs │ │ └── tests/ │ │ └── programs.rs │ └── test-socketfd/ │ ├── 
Cargo.toml │ ├── README.md │ └── src/ │ └── main.rs └── doc/ ├── packages.md ├── socket.md ├── watchexec.1 └── watchexec.1.md ================================================ FILE CONTENTS ================================================ ================================================ FILE: .cargo/config.toml ================================================ [target.armv7-unknown-linux-gnueabihf] linker = "arm-linux-gnueabihf-gcc" [target.armv7-unknown-linux-musleabihf] linker = "arm-linux-musleabihf-gcc" [target.aarch64-unknown-linux-gnu] linker = "aarch64-linux-gnu-gcc" [target.aarch64-unknown-linux-musl] linker = "aarch64-linux-musl-gcc" ================================================ FILE: .editorconfig ================================================ root = true [*] indent_style = tab indent_size = 4 end_of_line = lf charset = utf-8 trim_trailing_whitespace = true insert_final_newline = true [cli/tests/snapshots/*] indent_style = space trim_trailing_whitespace = false [*.{md,ronn}] indent_style = space indent_size = 4 [*.{cff,yml}] indent_size = 2 indent_style = space ================================================ FILE: .gitattributes ================================================ Cargo.lock merge=binary doc/watchexec.* merge=binary completions/* merge=binary ================================================ FILE: .github/FUNDING.yml ================================================ liberapay: passcod ================================================ FILE: .github/ISSUE_TEMPLATE/bug_report.md ================================================ --- name: Bug report about: Something is wrong title: '' labels: bug, need-info assignees: '' --- Please delete this template text before filing, but you _need_ to include the following: - Watchexec's version - The OS you're using - A log with `-vvv --log-file` (if it has sensitive info you can email it at felix@passcod.name — do that _after_ filing so you can reference the issue ID) - A sample command that you've run 
that has the issue Thank you ================================================ FILE: .github/ISSUE_TEMPLATE/feature_request.md ================================================ --- name: Feature request about: Something is missing title: '' labels: feature assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. If proposing a new CLI option, option names you think would fit. **Additional context** Add any other context about the feature request here. ================================================ FILE: .github/ISSUE_TEMPLATE/regression.md ================================================ --- name: Regression about: Something changed unexpectedly title: '' labels: '' assignees: '' --- **What used to happen** **What happens now** **Details** - Latest version that worked: - Earliest version that doesn't: (don't sweat testing earlier versions if you don't remember or have time, your current version will do) - OS: - A debug log with `-vvv --log-file`: ``` ``` ================================================ FILE: .github/dependabot.yml ================================================ # Dependabot dependency version checks / updates version: 2 updates: - package-ecosystem: "github-actions" # Workflow files stored in the # default location of `.github/workflows` directory: "/" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/cli" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/lib" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/events" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/signals" schedule: 
interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/supervisor" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/filterer/ignore" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/filterer/globset" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/bosion" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/ignore-files" schedule: interval: "weekly" - package-ecosystem: "cargo" directory: "/crates/project-origins" schedule: interval: "weekly" ================================================ FILE: .github/workflows/clippy.yml ================================================ name: Clippy on: workflow_dispatch: pull_request: push: branches: - main tags-ignore: - "*" env: CARGO_TERM_COLOR: always CARGO_UNSTABLE_SPARSE_REGISTRY: "true" concurrency: group: ${{ github.workflow }}-${{ github.ref || github.run_id }} cancel-in-progress: true jobs: clippy: strategy: fail-fast: false matrix: platform: - ubuntu - windows - macos name: Clippy on ${{ matrix.platform }} runs-on: "${{ matrix.platform }}-latest" steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install stable --profile minimal --no-self-update --component clippy rustup default stable # https://github.com/actions/cache/issues/752 - if: ${{ runner.os == 'Windows' }} name: Use GNU tar shell: cmd run: | echo "Adding GNU tar to PATH" echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%" - name: Configure caching uses: actions/cache@v5 with: path: | ~/.cargo/registry/index/ ~/.cargo/registry/cache/ ~/.cargo/git/db/ target/ key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }} restore-keys: | ${{ runner.os }}-cargo- - run: cargo clippy ================================================ FILE: .github/workflows/dist-manifest.jq ================================================ { dist_version: "0.0.2", releases: [{ app_name: "watchexec", app_version: $version, 
changelog_title: "CLI \($version)", artifacts: [ $files | split("\n") | .[] | { name: ., kind: (if (. | test("[.](deb|rpm)$")) then "installer" else "executable-zip" end), target_triples: (. | [capture("watchexec-[^-]+-(?<target>[^.]+)[.].+").target]), assets: ([[ { kind: "executable", name: (if (. | test("windows")) then "watchexec.exe" else "watchexec" end), path: "\( capture("(?<dir>watchexec-[^-]+-[^.]+)[.].+").dir )\( if (. | test("windows")) then "\\watchexec.exe" else "/watchexec" end )", }, (if (. | test("[.](deb|rpm)$")) then null else {kind: "readme", name: "README.md"} end), (if (. | test("[.](deb|rpm)$")) then null else {kind: "license", name: "LICENSE"} end) ][] | select(. != null)]) } ] }] } ================================================ FILE: .github/workflows/release-cli.yml ================================================ name: CLI Release on: workflow_dispatch: push: tags: - "v*.*.*" env: CARGO_TERM_COLOR: always CARGO_UNSTABLE_SPARSE_REGISTRY: "true" jobs: info: name: Gather info runs-on: ubuntu-latest outputs: cli_version: ${{ steps.version.outputs.cli_version }} steps: - uses: actions/checkout@v6 - name: Extract version id: version shell: bash run: | set -euxo pipefail version=$(grep -m1 -F 'version =' crates/cli/Cargo.toml | cut -d\" -f2) if [[ -z "$version" ]]; then echo "Error: no version :(" exit 1 fi echo "cli_version=$version" >> $GITHUB_OUTPUT build: strategy: matrix: include: - name: linux-amd64-gnu os: ubuntu-22.04 target: x86_64-unknown-linux-gnu cross: false experimental: false - name: linux-amd64-musl os: ubuntu-24.04 target: x86_64-unknown-linux-musl cross: false experimental: false - name: linux-i686-musl os: ubuntu-22.04 target: i686-unknown-linux-musl cross: true experimental: true - name: linux-armhf-gnu os: ubuntu-24.04 target: armv7-unknown-linux-gnueabihf cross: true experimental: false - name: linux-arm64-gnu os: ubuntu-24.04-arm target: aarch64-unknown-linux-gnu cross: false experimental: false - name: linux-arm64-musl os: 
ubuntu-24.04-arm target: aarch64-unknown-linux-musl cross: false experimental: false - name: linux-s390x-gnu os: ubuntu-24.04 target: s390x-unknown-linux-gnu cross: true experimental: true - name: linux-riscv64gc-gnu os: ubuntu-24.04 target: riscv64gc-unknown-linux-gnu cross: true experimental: true - name: linux-ppc64le-gnu os: ubuntu-24.04 target: powerpc64le-unknown-linux-gnu cross: true experimental: true - name: illumos-x86-64 os: ubuntu-24.04 target: x86_64-unknown-illumos cross: true experimental: true - name: freebsd-x86-64 os: ubuntu-24.04 target: x86_64-unknown-freebsd cross: true experimental: true - name: linux-loongarch64-gnu os: ubuntu-24.04 target: loongarch64-unknown-linux-gnu cross: true experimental: true - name: mac-x86-64 os: macos-14 target: x86_64-apple-darwin cross: false experimental: false - name: mac-arm64 os: macos-15 target: aarch64-apple-darwin cross: false experimental: false - name: windows-x86-64 os: windows-latest target: x86_64-pc-windows-msvc cross: false experimental: false #- name: windows-arm64 # os: windows-latest # target: aarch64-pc-windows-msvc # cross: true # experimental: true name: Binaries for ${{ matrix.name }} needs: info runs-on: ${{ matrix.os }} continue-on-error: ${{ matrix.experimental }} env: version: ${{ needs.info.outputs.cli_version }} dst: watchexec-${{ needs.info.outputs.cli_version }}-${{ matrix.target }} steps: - uses: actions/checkout@v6 # https://github.com/actions/cache/issues/752 - if: ${{ runner.os == 'Windows' }} name: Use GNU tar shell: cmd run: | echo "Adding GNU tar to PATH" echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%" - run: sudo apt update if: startsWith(matrix.os, 'ubuntu-') - name: Add musl tools run: sudo apt install -y musl musl-dev musl-tools if: endsWith(matrix.target, '-musl') - name: Add aarch-gnu tools run: sudo apt install -y gcc-aarch64-linux-gnu if: startsWith(matrix.target, 'aarch64-unknown-linux') - name: Add arm7hf-gnu tools run: sudo apt install -y gcc-arm-linux-gnueabihf 
if: startsWith(matrix.target, 'armv7-unknown-linux-gnueabihf') - name: Add s390x-gnu tools run: sudo apt install -y gcc-s390x-linux-gnu if: startsWith(matrix.target, 's390x-unknown-linux-gnu') - name: Add riscv64-gnu tools run: sudo apt install -y gcc-riscv64-linux-gnu if: startsWith(matrix.target, 'riscv64gc-unknown-linux-gnu') - name: Add ppc64le-gnu tools run: sudo apt install -y gcc-powerpc64le-linux-gnu if: startsWith(matrix.target, 'powerpc64le-unknown-linux-gnu') - name: Install cargo-deb if: startsWith(matrix.name, 'linux-') uses: taiki-e/install-action@v2 with: tool: cargo-deb - name: Install cargo-generate-rpm if: startsWith(matrix.name, 'linux-') uses: taiki-e/install-action@v2 with: tool: cargo-generate-rpm - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable rustup target add ${{ matrix.target }} - uses: Swatinem/rust-cache@v2 - name: Install cross if: matrix.cross uses: taiki-e/install-action@v2 with: tool: cross - name: Build shell: bash run: | ${{ matrix.cross && 'cross' || 'cargo' }} build \ -p watchexec-cli \ --release --locked \ --target ${{ matrix.target }} - name: Package shell: bash run: | set -euxo pipefail ext="" [[ "${{ matrix.name }}" == windows-* ]] && ext=".exe" bin="target/${{ matrix.target }}/release/watchexec${ext}" objcopy --compress-debug-sections "$bin" || true mkdir "$dst" mkdir -p "target/release" cp "$bin" "target/release/" # workaround for cargo-deb silliness with targets cp "$bin" "$dst/" cp -r crates/cli/README.md LICENSE completions doc/{logo.svg,watchexec.1{,.*}} "$dst/" - name: Archive (tar) if: '! 
startsWith(matrix.name, ''windows-'')' run: tar cavf "$dst.tar.xz" "$dst" - name: Archive (deb) if: startsWith(matrix.name, 'linux-') run: cargo deb -p watchexec-cli --no-build --no-strip --target ${{ matrix.target }} --output "$dst.deb" - name: Archive (rpm) if: startsWith(matrix.name, 'linux-') shell: bash run: | set -euxo pipefail shopt -s globstar cargo generate-rpm -p crates/cli --target "${{ matrix.target }}" --target-dir "target/${{ matrix.target }}" mv target/**/*.rpm "$dst.rpm" - name: Archive (zip) if: startsWith(matrix.name, 'windows-') shell: bash run: 7z a "$dst.zip" "$dst" - uses: actions/upload-artifact@v6 with: name: ${{ matrix.name }} retention-days: 1 path: | watchexec-*.tar.xz watchexec-*.tar.zst watchexec-*.deb watchexec-*.rpm watchexec-*.zip upload: needs: [build, info] name: Checksum and publish runs-on: ubuntu-latest steps: - uses: actions/checkout@v6 - name: Install b3sum uses: taiki-e/install-action@v2 with: tool: b3sum - uses: actions/download-artifact@v7 with: merge-multiple: true - name: Dist manifest run: | jq -ncf .github/workflows/dist-manifest.jq \ --arg version "${{ needs.info.outputs.cli_version }}" \ --arg files "$(ls watchexec-*)" \ > dist-manifest.json - name: Bulk checksums run: | b3sum watchexec-* | tee B3SUMS sha512sum watchexec-* | tee SHA512SUMS sha256sum watchexec-* | tee SHA256SUMS - name: File checksums run: | for file in watchexec-*; do b3sum --no-names $file > "$file.b3" sha256sum $file | cut -d ' ' -f1 > "$file.sha256" sha512sum $file | cut -d ' ' -f1 > "$file.sha512" done - uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b with: tag_name: v${{ needs.info.outputs.cli_version }} name: CLI v${{ needs.info.outputs.cli_version }} append_body: true files: | dist-manifest.json watchexec-*.tar.xz watchexec-*.tar.zst watchexec-*.deb watchexec-*.rpm watchexec-*.zip *SUMS *.b3 *.sha* env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} ================================================ FILE: 
.github/workflows/tests.yml ================================================ name: Test suite on: workflow_dispatch: pull_request: types: - opened - reopened - synchronize push: branches: - main tags-ignore: - "*" env: CARGO_TERM_COLOR: always CARGO_UNSTABLE_SPARSE_REGISTRY: "true" concurrency: group: ${{ github.workflow }}-${{ github.ref || github.run_id }} cancel-in-progress: true jobs: libs: strategy: fail-fast: false matrix: platform: - macos - ubuntu - windows name: Test libraries ${{ matrix.platform }} runs-on: "${{ matrix.platform }}-latest" steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable # https://github.com/actions/cache/issues/752 - if: ${{ runner.os == 'Windows' }} name: Use GNU tar shell: cmd run: | echo "Adding GNU tar to PATH" echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%" - uses: Swatinem/rust-cache@v2 - name: Run library test suite run: cargo test --workspace --exclude watchexec-cli --exclude watchexec-events - name: Run watchexec-events integration tests run: cargo test -p watchexec-events -F serde cli-e2e: strategy: fail-fast: false matrix: platform: - macos - ubuntu - windows name: Test CLI (e2e) ${{ matrix.platform }} runs-on: "${{ matrix.platform }}-latest" steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable # https://github.com/actions/cache/issues/752 - if: ${{ runner.os == 'Windows' }} name: Use GNU tar shell: cmd run: | echo "Adding GNU tar to PATH" echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%" - name: Install coreutils on mac if: ${{ matrix.platform == 'macos' }} run: brew install coreutils - uses: Swatinem/rust-cache@v2 - name: Build CLI programs run: cargo build - name: Run CLI integration tests run: crates/cli/run-tests.sh ${{ matrix.platform }} shell: bash env: WATCHEXEC_BIN: target/debug/watchexec 
TEST_SOCKETFD_BIN: target/debug/test-socketfd cli-docs: name: Test CLI docs runs-on: ubuntu-latest steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable - uses: Swatinem/rust-cache@v2 - name: Generate manpage run: cargo run -p watchexec-cli -- --manual > doc/watchexec.1 - name: Check that manpage is up to date run: git diff --exit-code -- doc/ - name: Generate completions run: bin/completions - name: Check that completions are up to date run: git diff --exit-code -- completions/ cli-unit: strategy: fail-fast: false matrix: platform: - macos - ubuntu - windows name: Test CLI (unit) ${{ matrix.platform }} runs-on: "${{ matrix.platform }}-latest" steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable # https://github.com/actions/cache/issues/752 - if: ${{ runner.os == 'Windows' }} name: Use GNU tar shell: cmd run: | echo "Adding GNU tar to PATH" echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%" - uses: Swatinem/rust-cache@v2 - name: Run CLI unit tests run: cargo test -p watchexec-cli bosion: strategy: fail-fast: false matrix: platform: - macos - ubuntu - windows name: Bosion integration tests on ${{ matrix.platform }} runs-on: "${{ matrix.platform }}-latest" steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable # https://github.com/actions/cache/issues/752 - if: ${{ runner.os == 'Windows' }} name: Use GNU tar shell: cmd run: | echo "Adding GNU tar to PATH" echo C:\Program Files\Git\usr\bin>>"%GITHUB_PATH%" - uses: Swatinem/rust-cache@v2 - name: Run bosion integration tests run: ./run-tests.sh working-directory: crates/bosion shell: bash cross-checks: strategy: fail-fast: false matrix: target: - x86_64-unknown-linux-musl - x86_64-unknown-freebsd name: 
Typecheck only on ${{ matrix.target }} runs-on: ubuntu-latest steps: - uses: actions/checkout@v6 - name: Configure toolchain run: | rustup toolchain install --profile minimal --no-self-update stable rustup default stable rustup target add ${{ matrix.target }} - if: matrix.target == 'x86_64-unknown-linux-musl' run: sudo apt-get install -y musl-tools - uses: Swatinem/rust-cache@v2 - run: cargo check --target ${{ matrix.target }} tests-pass: if: always() name: Tests pass needs: - bosion - cli-e2e - cli-unit - cross-checks - libs runs-on: ubuntu-latest steps: - uses: re-actors/alls-green@release/v1 with: jobs: ${{ toJSON(needs) }} ================================================ FILE: .gitignore ================================================ target /watchexec-* watchexec.*.log ================================================ FILE: .rustfmt.toml ================================================ hard_tabs = true ================================================ FILE: CITATION.cff ================================================ cff-version: 1.2.0 message: | If you use this software, please cite it using these metadata. title: "Watchexec: a tool to react to filesystem changes, and a crate ecosystem to power it" version: "2.5.1" date-released: 2026-03-30 repository-code: https://github.com/watchexec/watchexec license: Apache-2.0 authors: - family-names: Green given-names: Matt - family-names: Saparelli given-names: Félix orcid: https://orcid.org/0000-0002-2010-630X ================================================ FILE: CONTRIBUTING.md ================================================ # Contribution guidebook This is a fairly free-form project, with low contribution traffic. Maintainers: - Félix Saparelli (@passcod) (active) - Matt Green (@mattgreen) (original author, mostly checked out) There are a few anti goals: - Calling watchexec is to be a **simple** exercise that remains intuitive. As a specific point, it should not involve any piping or require xargs. 
- Watchexec will not be tied to any particular ecosystem or language. Projects that themselves use watchexec (the library) can be focused on a particular domain (for example Cargo Watch for Rust), but watchexec itself will remain generic, usable for any purpose. ## Debugging To enable verbose logging in tests, run with: ```console $ env WATCHEXEC_LOG=watchexec=trace,info RUST_TEST_THREADS=1 RUST_TEST_NOCAPTURE=1 cargo test --test testfile -- testname ``` To use [Tokio Console](https://github.com/tokio-rs/console): 1. Add `--cfg tokio_unstable` to your `RUSTFLAGS`. 2. Run the CLI with the `dev-console` feature. ## PR etiquette - Maintainers are busy or may not have the bandwidth; be patient. - Do _not_ change the version number in the PR. - Do _not_ change Cargo.toml or other project metadata, unless specifically asked for, or if that's the point of the PR (like adding a crates.io category). Apart from that, welcome and thank you for your time! ## Releasing ``` cargo release -p crate-name --execute patch # or minor, major ``` When a CLI release is done, the [release notes](https://github.com/watchexec/watchexec/releases) should be edited with the changelog. ### Release order Use this command to see the tree of workspace dependencies: ```console $ cargo tree -p watchexec-cli | rg -F '(/' --color=never | sed 's/ v[0-9].*//' ``` ## Overview The architecture of watchexec is roughly: - sources gather events - events are debounced and filtered - event(s) make it through the debounce/filters and trigger an "action" - `on_action` handler is called, returning an `Outcome` - outcome is processed into managing the command that watchexec is running - outcome can also be to exit - when a command is started, the `on_pre_spawn` and `on_post_spawn` handlers are called - commands are also a source of events, so e.g. 
"command has finished" is handled by `on_action` And this is the startup sequence: - init config sets basic immutable facts about the runtime - runtime starts: - source workers start, and are passed their runtime config - action worker starts, and is passed its runtime config - (unless `--postpone` is given) a synthetic event is injected to kickstart things ## Guides These are generic guides for implementing specific bits of functionality. ### Adding an event source - add a worker for "sourcing" events. Looking at the [signal source worker](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/signal/source.rs) is probably easiest to get started here. - because we may not always want to enable this event source, and just to be flexible, add [runtime config](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/config.rs) for the source. - for convenience, probably add [a method on the runtime config](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/config.rs) which configures the most common usecase. - because watchexec is reconfigurable, in the worker you'll need to react to config changes. Look at how the [fs worker does it](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/fs.rs) for reference. - you may need to [add to the event tag enum](https://github.com/watchexec/watchexec/blob/main/crates/lib/src/event.rs). - if you do, you should [add support to the "tagged filterer"](https://github.com/watchexec/watchexec/blob/main/crates/filterer/tagged/src/parse.rs), but this can be done in follow-up work. 
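The source-worker-to-action pipeline described in the overview (and in the guide above) can be sketched with plain `std` types. This is a minimal, simplified sketch: `Event`, `run_pipeline`, and the tag strings are hypothetical stand-ins for illustration, not watchexec's actual API.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Illustrative stand-in for watchexec's event type (hypothetical, not the real API).
#[derive(Debug, Clone)]
pub struct Event {
	pub tag: String,
}

// A "source worker" feeds events into a channel; an "action worker" drains
// them into one batch, mirroring the source -> debounce -> action flow.
pub fn run_pipeline() -> Vec<Event> {
	let (tx, rx) = mpsc::channel::<Event>();

	// Source worker: in watchexec this would be the fs, signal, or keyboard
	// source; here it just emits two synthetic events.
	let source = thread::spawn(move || {
		for tag in ["path:src/main.rs", "signal:SIGUSR1"] {
			tx.send(Event { tag: tag.to_string() }).expect("receiver alive");
			thread::sleep(Duration::from_millis(5));
		}
		// tx is dropped here, which closes the channel for the loop below.
	});

	// Action worker: collect everything until the channel closes — a very
	// crude stand-in for debouncing events into a single action.
	let mut batch = Vec::new();
	while let Ok(event) = rx.recv() {
		batch.push(event);
	}
	source.join().expect("source worker panicked");
	batch
}

fn main() {
	// The `on_action` step: here we just report the batch.
	let batch = run_pipeline();
	println!("handling {} event(s)", batch.len());
	for event in &batch {
		println!("- {}", event.tag);
	}
}
```

In the real crate the action worker is reconfigurable at runtime and reacts to config changes, which this sketch leaves out entirely.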
### Process a new event in the CLI - add an option to the [args](https://github.com/watchexec/watchexec/blob/main/crates/cli/src/args.rs) if necessary - add to the [runtime config](https://github.com/watchexec/watchexec/blob/main/crates/cli/src/config/runtime.rs) when the option is present - process relevant events [in the action handler](https://github.com/watchexec/watchexec/blob/main/crates/cli/src/config/runtime.rs) --- vim: tw=100 ================================================ FILE: Cargo.toml ================================================ [workspace] resolver = "2" members = [ "crates/lib", "crates/cli", "crates/events", "crates/signals", "crates/supervisor", "crates/filterer/globset", "crates/filterer/ignore", "crates/bosion", "crates/ignore-files", "crates/project-origins", "crates/test-socketfd", ] [workspace.dependencies] rand = "0.9.1" uuid = "1.5.0" [profile.release] lto = true debug = 1 # for stack traces codegen-units = 1 strip = "symbols" [profile.dev.build-override] opt-level = 0 codegen-units = 1024 debug = false debug-assertions = false overflow-checks = false incremental = false [profile.release.build-override] opt-level = 0 codegen-units = 1024 debug = false debug-assertions = false overflow-checks = false incremental = false ================================================ FILE: LICENSE ================================================ Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. 
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
Copyright {yyyy} {name of copyright owner}

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

================================================
FILE: README.md
================================================
[![CI status on main branch](https://github.com/watchexec/watchexec/actions/workflows/tests.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/tests.yml)

# Watchexec

Software development often involves running the same commands over and over. Boring! `watchexec` is a simple, standalone tool that watches a path and runs a command whenever it detects modifications.
Example use cases:

* Automatically run unit tests
* Run linters/syntax checkers
* Rebuild artifacts

## Features

* Simple invocation and use, does not require a cryptic command line involving `xargs`
* Runs on OS X, Linux, and Windows
* Monitors current directory and all subdirectories for changes
* Coalesces multiple filesystem events into one, for editors that use swap/backup files during saving
* Loads `.gitignore` and `.ignore` files
* Uses process groups to keep hold of forking programs
* Provides the paths that changed in environment variables or STDIN
* Does not require a language runtime, not tied to any particular language or ecosystem
* [And more!](./crates/cli/#features)

## Quick start

Watch all JavaScript, CSS and HTML files in the current directory and all subdirectories for changes, running `npm run build` when a change is detected:

    $ watchexec -e js,css,html npm run build

Call/restart `python server.py` when any Python file in the current directory (and all subdirectories) changes:

    $ watchexec -r -e py -- python server.py

More usage examples: [in the CLI README](./crates/cli/#usage-examples)!

## Install

- With [your package manager](./doc/packages.md) for Arch, Debian, Homebrew, Nix, Scoop, Chocolatey…
- From binary with [Binstall](https://github.com/cargo-bins/cargo-binstall): `cargo binstall watchexec-cli`
- As [pre-built binary package from GitHub](https://github.com/watchexec/watchexec/releases/latest)
- From source with Cargo: `cargo install --locked watchexec-cli`

All options in detail: [in the CLI README](./crates/cli/#installation), in the online help (`watchexec -h`, `watchexec --help`, or `watchexec --manual`), and [in the manual page](./doc/watchexec.1.md).
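One of the features above is that watchexec provides the changed paths to the command in environment variables. As a hedged sketch (the `WATCHEXEC_*_PATH` variable names correspond to the CLI's `--emit-events-to=environment` mode; the summary format and script name here are invented for illustration), a handler script might read them like this:

```shell
#!/bin/sh
# Hedged sketch: summarize the WATCHEXEC_*_PATH variables that watchexec
# exports in `--emit-events-to=environment` mode. Each variable holds the
# paths affected by the corresponding kind of filesystem event.
summarize_event() {
	summary=""
	[ -n "${WATCHEXEC_WRITTEN_PATH:-}" ] && summary="$summary written=$WATCHEXEC_WRITTEN_PATH"
	[ -n "${WATCHEXEC_CREATED_PATH:-}" ] && summary="$summary created=$WATCHEXEC_CREATED_PATH"
	[ -n "${WATCHEXEC_REMOVED_PATH:-}" ] && summary="$summary removed=$WATCHEXEC_REMOVED_PATH"
	# Fall back to a placeholder when no path variables are set.
	[ -z "$summary" ] && summary=" (no path details)"
	echo "event:$summary"
}

summarize_event
```

Run as `watchexec --emit-events-to=environment -- ./on-change.sh` (hypothetical script name), this would print which paths triggered each run.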
## Augment

Watchexec pairs well with:

- [checkexec](https://github.com/kurtbuilds/checkexec): to run only when source files are newer than a target file
- [just](https://github.com/casey/just): a modern alternative to `make`
- [systemfd](https://github.com/mitsuhiko/systemfd): socket-passing in development

## Extend

- [watchexec library](./crates/lib/): to create more specialised watchexec-powered tools.
- [watchexec-events](./crates/events/): event types for watchexec.
- [watchexec-signals](./crates/signals/): signal types for watchexec.
- [watchexec-supervisor](./crates/supervisor/): process lifecycle manager (the _exec_ part of watchexec).
- [clearscreen](https://github.com/watchexec/clearscreen): to clear the (terminal) screen on every platform.
- [command group](https://github.com/watchexec/command-group): to run commands in process groups.
- [ignore files](./crates/ignore-files/): to find, parse, and interpret ignore files.
- [project origins](./crates/project-origins/): to find the origin(s) directory of a project.
- [notify](https://github.com/notify-rs/notify): to respond to file modifications (third-party).

### Downstreams

Selected downstreams of watchexec and associated crates:

- [cargo watch](https://github.com/watchexec/cargo-watch): a specialised watcher for Rust/Cargo projects.
- [cargo lambda](https://github.com/cargo-lambda/cargo-lambda): a dev tool for Rust-powered AWS Lambda functions.
- [create-rust-app](https://create-rust-app.dev): a template for Rust+React web apps.
- [devenv.sh](https://github.com/cachix/devenv): a developer environment with nix-based declarative configs.
- [dotter](https://github.com/supercuber/dotter): a dotfile manager.
- [ghciwatch](https://github.com/mercurytechnologies/ghciwatch): a specialised watcher for Haskell projects.
- [tectonic](https://tectonic-typesetting.github.io/book/latest/): a TeX/LaTeX typesetting system.
================================================
FILE: bin/completions
================================================
#!/bin/sh
cargo run -p watchexec-cli $* -- --completions bash > completions/bash
cargo run -p watchexec-cli $* -- --completions elvish > completions/elvish
cargo run -p watchexec-cli $* -- --completions fish > completions/fish
cargo run -p watchexec-cli $* -- --completions nu > completions/nu
cargo run -p watchexec-cli $* -- --completions powershell > completions/powershell
cargo run -p watchexec-cli $* -- --completions zsh > completions/zsh

================================================
FILE: bin/dates.mjs
================================================
#!/usr/bin/env node

const id = Math.floor(Math.random() * 100);

let n = 0;
const m = 5;
while (n < m) {
	n += 1;
	console.log(`[${id} : ${n}/${m}] ${new Date}`);
	await new Promise(done => setTimeout(done, 2000));
}

================================================
FILE: bin/manpage
================================================
#!/bin/sh
cargo run -p watchexec-cli -- --manual > doc/watchexec.1
pandoc doc/watchexec.1 -t markdown > doc/watchexec.1.md

================================================
FILE: bin/release-notes
================================================
#!/bin/sh
exec git cliff --include-path '**/crates/cli/**/*' --count-tags 'v*' --unreleased $*

================================================
FILE: cliff.toml
================================================
[changelog]
trim = true
header = ""
footer = ""
body = """
{% if version %}\
## v{{ version | trim_start_matches(pat="v") }} ({{ timestamp | date(format="%Y-%m-%d") }})
{% else %}\
## [unreleased]
{% endif %}\
{% raw %}\n{% endraw %}\
{%- for commit in commits | sort(attribute="group") %}
{%- if commit.scope -%}
{% else -%}
- **{{commit.group | striptags | trim | upper_first}}:** \
{% if commit.breaking %} [**⚠️ breaking ⚠️**] {% endif %}\
{{ commit.message | upper_first }} - ([{{ commit.id | truncate(length=7, end="")
}}]($REPO/commit/{{ commit.id }})) {% endif -%} {% endfor -%} {% for scope, commits in commits | filter(attribute="group") | group_by(attribute="scope") %} ### {{ scope | striptags | trim | upper_first }} {% for commit in commits | sort(attribute="group") %} - **{{commit.group | striptags | trim | upper_first}}:** \ {% if commit.breaking %} [**⚠️ breaking ⚠️**] {% endif %}\ {{ commit.message | upper_first }} - ([{{ commit.id | truncate(length=7, end="") }}]($REPO/commit/{{ commit.id }})) {%- endfor -%} {% raw %}\n{% endraw %}\ {% endfor %} """ postprocessors = [ { pattern = '\$REPO', replace = "https://github.com/watchexec/watchexec" }, ] [git] conventional_commits = true filter_unconventional = true split_commits = true protect_breaking_commits = true filter_commits = true tag_pattern = "v[0-9].*" sort_commits = "oldest" link_parsers = [ { pattern = "#(\\d+)", href = "https://github.com/watchexec/watchexec/issues/$1"}, { pattern = "RFC(\\d+)", text = "ietf-rfc$1", href = "https://datatracker.ietf.org/doc/html/rfc$1"}, ] commit_parsers = [ { message = "^feat", group = "Feature" }, { message = "^fix", group = "Bugfix" }, { message = "^tweak", group = "Tweak" }, { message = "^doc", group = "Documentation" }, { message = "^perf", group = "Performance" }, { message = "^deps", group = "Deps" }, { message = "^Initial [cC]ommit$", skip = true }, { message = "^(release|merge|fmt|chore|ci|refactor|style|draft|wip|repo)", skip = true }, { body = ".*breaking", group = "Breaking" }, { body = ".*security", group = "Security" }, { message = "^revert", group = "Revert" }, ] ================================================ FILE: completions/bash ================================================ _watchexec() { local i cur prev opts cmd COMPREPLY=() if [[ "${BASH_VERSINFO[0]}" -ge 4 ]]; then cur="$2" else cur="${COMP_WORDS[COMP_CWORD]}" fi prev="$3" cmd="" opts="" for i in "${COMP_WORDS[@]:0:COMP_CWORD}" do case "${cmd},${i}" in ",$1") cmd="watchexec" ;; *) ;; esac done case "${cmd}" 
in watchexec) opts="-1 -n -E -o -r -s -d -I -p -w -W -F -e -f -j -i -v -c -N -q -h -V --manual --completions --only-emit-events --shell --no-environment --env --no-process-group --wrap-process --stop-signal --stop-timeout --timeout --delay-run --workdir --socket --on-busy-update --restart --signal --map-signal --debounce --stdin-quit --interactive --exit-on-error --postpone --poll --emit-events-to --watch --watch-non-recursive --watch-file --no-vcs-ignore --no-project-ignore --no-global-ignore --no-default-ignore --no-discover-ignore --ignore-nothing --exts --filter --filter-file --project-origin --filter-prog --ignore --ignore-file --fs-events --no-meta --verbose --log-file --print-events --clear --notify --color --timings --quiet --bell --help --version [COMMAND]..." if [[ ${cur} == -* || ${COMP_CWORD} -eq 1 ]] ; then COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") ) return 0 fi case "${prev}" in --completions) COMPREPLY=($(compgen -W "bash elvish fish nu powershell zsh" -- "${cur}")) return 0 ;; --shell) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --env) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -E) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --wrap-process) COMPREPLY=($(compgen -W "group session none" -- "${cur}")) return 0 ;; --stop-signal) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --stop-timeout) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --timeout) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --delay-run) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --workdir) COMPREPLY=() if [[ "${BASH_VERSINFO[0]}" -ge 4 ]]; then compopt -o plusdirs fi return 0 ;; --socket) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --on-busy-update) COMPREPLY=($(compgen -W "queue do-nothing restart signal" -- "${cur}")) return 0 ;; -o) COMPREPLY=($(compgen -W "queue do-nothing restart signal" -- "${cur}")) return 0 ;; --signal) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -s) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --map-signal) COMPREPLY=($(compgen -f 
"${cur}")) return 0 ;; --debounce) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -d) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --poll) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --emit-events-to) COMPREPLY=($(compgen -W "environment stdio file json-stdio json-file none" -- "${cur}")) return 0 ;; --watch) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -w) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --watch-non-recursive) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -W) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --watch-file) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -F) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --exts) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -e) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --filter) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -f) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --filter-file) local oldifs if [ -n "${IFS+x}" ]; then oldifs="$IFS" fi IFS=$'\n' COMPREPLY=($(compgen -f "${cur}")) if [ -n "${oldifs+x}" ]; then IFS="$oldifs" fi if [[ "${BASH_VERSINFO[0]}" -ge 4 ]]; then compopt -o filenames fi return 0 ;; --project-origin) COMPREPLY=() if [[ "${BASH_VERSINFO[0]}" -ge 4 ]]; then compopt -o plusdirs fi return 0 ;; --filter-prog) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -j) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --ignore) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; -i) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --ignore-file) local oldifs if [ -n "${IFS+x}" ]; then oldifs="$IFS" fi IFS=$'\n' COMPREPLY=($(compgen -f "${cur}")) if [ -n "${oldifs+x}" ]; then IFS="$oldifs" fi if [[ "${BASH_VERSINFO[0]}" -ge 4 ]]; then compopt -o filenames fi return 0 ;; --fs-events) COMPREPLY=($(compgen -W "access create remove rename modify metadata" -- "${cur}")) return 0 ;; --log-file) COMPREPLY=($(compgen -f "${cur}")) return 0 ;; --clear) COMPREPLY=($(compgen -W "clear reset" -- "${cur}")) return 0 ;; -c) COMPREPLY=($(compgen -W "clear reset" -- "${cur}")) return 0 ;; --notify) 
COMPREPLY=($(compgen -W "both start end" -- "${cur}")) return 0 ;; -N) COMPREPLY=($(compgen -W "both start end" -- "${cur}")) return 0 ;; --color) COMPREPLY=($(compgen -W "auto always never" -- "${cur}")) return 0 ;; *) COMPREPLY=() ;; esac COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") ) return 0 ;; esac } if [[ "${BASH_VERSINFO[0]}" -eq 4 && "${BASH_VERSINFO[1]}" -ge 4 || "${BASH_VERSINFO[0]}" -gt 4 ]]; then complete -F _watchexec -o nosort -o bashdefault -o default watchexec else complete -F _watchexec -o bashdefault -o default watchexec fi ================================================ FILE: completions/elvish ================================================ use builtin; use str; set edit:completion:arg-completer[watchexec] = {|@words| fn spaces {|n| builtin:repeat $n ' ' | str:join '' } fn cand {|text desc| edit:complex-candidate $text &display=$text' '(spaces (- 14 (wcswidth $text)))$desc } var command = 'watchexec' for word $words[1..-1] { if (str:has-prefix $word '-') { break } set command = $command';'$word } var completions = [ &'watchexec'= { cand --completions 'Generate a shell completions script' cand --shell 'Use a different shell' cand -E 'Add env vars to the command' cand --env 'Add env vars to the command' cand --wrap-process 'Configure how the process is wrapped' cand --stop-signal 'Signal to send to stop the command' cand --stop-timeout 'Time to wait for the command to exit gracefully' cand --timeout 'Kill the command if it runs longer than this duration' cand --delay-run 'Sleep before running the command' cand --workdir 'Set the working directory' cand --socket 'Provide a socket to the command' cand -o 'What to do when receiving events while the command is running' cand --on-busy-update 'What to do when receiving events while the command is running' cand -s 'Send a signal to the process when it''s still running' cand --signal 'Send a signal to the process when it''s still running' cand --map-signal 'Translate signals from the OS to signals to 
send to the command' cand -d 'Time to wait for new events before taking action' cand --debounce 'Time to wait for new events before taking action' cand --poll 'Poll for filesystem changes' cand --emit-events-to 'Configure event emission' cand -w 'Watch a specific file or directory' cand --watch 'Watch a specific file or directory' cand -W 'Watch a specific directory, non-recursively' cand --watch-non-recursive 'Watch a specific directory, non-recursively' cand -F 'Watch files and directories from a file' cand --watch-file 'Watch files and directories from a file' cand -e 'Filename extensions to filter to' cand --exts 'Filename extensions to filter to' cand -f 'Filename patterns to filter to' cand --filter 'Filename patterns to filter to' cand --filter-file 'Files to load filters from' cand --project-origin 'Set the project origin' cand -j 'Filter programs' cand --filter-prog 'Filter programs' cand -i 'Filename patterns to filter out' cand --ignore 'Filename patterns to filter out' cand --ignore-file 'Files to load ignores from' cand --fs-events 'Filesystem events to filter to' cand --log-file 'Write diagnostic logs to a file' cand -c 'Clear screen before running command' cand --clear 'Clear screen before running command' cand -N 'Alert when commands start and end' cand --notify 'Alert when commands start and end' cand --color 'When to use terminal colours' cand --manual 'Show the manual page' cand --only-emit-events 'Only emit events to stdout, run no commands' cand -1 'Testing only: exit Watchexec after the first run and return the command''s exit code' cand -n 'Shorthand for ''--shell=none''' cand --no-environment 'Deprecated shorthand for ''--emit-events=none''' cand --no-process-group 'Don''t use a process group' cand -r 'Restart the process if it''s still running' cand --restart 'Restart the process if it''s still running' cand --stdin-quit 'Exit when stdin closes' cand -I 'Respond to keypresses to quit, restart, or pause' cand --interactive 'Respond to 
keypresses to quit, restart, or pause' cand --exit-on-error 'Exit when the command has an error' cand -p 'Wait until first change before running command' cand --postpone 'Wait until first change before running command' cand --no-vcs-ignore 'Don''t load gitignores' cand --no-project-ignore 'Don''t load project-local ignores' cand --no-global-ignore 'Don''t load global ignores' cand --no-default-ignore 'Don''t use internal default ignores' cand --no-discover-ignore 'Don''t discover ignore files at all' cand --ignore-nothing 'Don''t ignore anything at all' cand --no-meta 'Don''t emit fs events for metadata changes' cand -v 'Set diagnostic log level' cand --verbose 'Set diagnostic log level' cand --print-events 'Print events that trigger actions' cand --timings 'Print how long the command took to run' cand -q 'Don''t print starting and stopping messages' cand --quiet 'Don''t print starting and stopping messages' cand --bell 'Ring the terminal bell on command completion' cand -h 'Print help (see more with ''--help'')' cand --help 'Print help (see more with ''--help'')' cand -V 'Print version' cand --version 'Print version' } ] $completions[$command] } ================================================ FILE: completions/fish ================================================ complete -c watchexec -l completions -d 'Generate a shell completions script' -r -f -a "bash\t'' elvish\t'' fish\t'' nu\t'' powershell\t'' zsh\t''" complete -c watchexec -l shell -d 'Use a different shell' -r complete -c watchexec -s E -l env -d 'Add env vars to the command' -r complete -c watchexec -l wrap-process -d 'Configure how the process is wrapped' -r -f -a "group\t'' session\t'' none\t''" complete -c watchexec -l stop-signal -d 'Signal to send to stop the command' -r complete -c watchexec -l stop-timeout -d 'Time to wait for the command to exit gracefully' -r complete -c watchexec -l timeout -d 'Kill the command if it runs longer than this duration' -r complete -c watchexec -l delay-run -d 
'Sleep before running the command' -r complete -c watchexec -l workdir -d 'Set the working directory' -r -f -a "(__fish_complete_directories)" complete -c watchexec -l socket -d 'Provide a socket to the command' -r complete -c watchexec -s o -l on-busy-update -d 'What to do when receiving events while the command is running' -r -f -a "queue\t'' do-nothing\t'' restart\t'' signal\t''" complete -c watchexec -s s -l signal -d 'Send a signal to the process when it\'s still running' -r complete -c watchexec -l map-signal -d 'Translate signals from the OS to signals to send to the command' -r complete -c watchexec -s d -l debounce -d 'Time to wait for new events before taking action' -r complete -c watchexec -l poll -d 'Poll for filesystem changes' -r complete -c watchexec -l emit-events-to -d 'Configure event emission' -r -f -a "environment\t'' stdio\t'' file\t'' json-stdio\t'' json-file\t'' none\t''" complete -c watchexec -s w -l watch -d 'Watch a specific file or directory' -r -F complete -c watchexec -s W -l watch-non-recursive -d 'Watch a specific directory, non-recursively' -r -F complete -c watchexec -s F -l watch-file -d 'Watch files and directories from a file' -r -F complete -c watchexec -s e -l exts -d 'Filename extensions to filter to' -r complete -c watchexec -s f -l filter -d 'Filename patterns to filter to' -r complete -c watchexec -l filter-file -d 'Files to load filters from' -r -F complete -c watchexec -l project-origin -d 'Set the project origin' -r -f -a "(__fish_complete_directories)" complete -c watchexec -s j -l filter-prog -d 'Filter programs' -r complete -c watchexec -s i -l ignore -d 'Filename patterns to filter out' -r complete -c watchexec -l ignore-file -d 'Files to load ignores from' -r -F complete -c watchexec -l fs-events -d 'Filesystem events to filter to' -r -f -a "access\t'' create\t'' remove\t'' rename\t'' modify\t'' metadata\t''" complete -c watchexec -l log-file -d 'Write diagnostic logs to a file' -r -F complete -c watchexec -s c -l 
clear -d 'Clear screen before running command' -r -f -a "clear\t'' reset\t''" complete -c watchexec -s N -l notify -d 'Alert when commands start and end' -r -f -a "both\t'Notify on both start and end' start\t'Notify only when the command starts' end\t'Notify only when the command ends'" complete -c watchexec -l color -d 'When to use terminal colours' -r -f -a "auto\t'' always\t'' never\t''" complete -c watchexec -l manual -d 'Show the manual page' complete -c watchexec -l only-emit-events -d 'Only emit events to stdout, run no commands' complete -c watchexec -s 1 -d 'Testing only: exit Watchexec after the first run and return the command\'s exit code' complete -c watchexec -s n -d 'Shorthand for \'--shell=none\'' complete -c watchexec -l no-environment -d 'Deprecated shorthand for \'--emit-events=none\'' complete -c watchexec -l no-process-group -d 'Don\'t use a process group' complete -c watchexec -s r -l restart -d 'Restart the process if it\'s still running' complete -c watchexec -l stdin-quit -d 'Exit when stdin closes' complete -c watchexec -s I -l interactive -d 'Respond to keypresses to quit, restart, or pause' complete -c watchexec -l exit-on-error -d 'Exit when the command has an error' complete -c watchexec -s p -l postpone -d 'Wait until first change before running command' complete -c watchexec -l no-vcs-ignore -d 'Don\'t load gitignores' complete -c watchexec -l no-project-ignore -d 'Don\'t load project-local ignores' complete -c watchexec -l no-global-ignore -d 'Don\'t load global ignores' complete -c watchexec -l no-default-ignore -d 'Don\'t use internal default ignores' complete -c watchexec -l no-discover-ignore -d 'Don\'t discover ignore files at all' complete -c watchexec -l ignore-nothing -d 'Don\'t ignore anything at all' complete -c watchexec -l no-meta -d 'Don\'t emit fs events for metadata changes' complete -c watchexec -s v -l verbose -d 'Set diagnostic log level' complete -c watchexec -l print-events -d 'Print events that trigger actions' 
complete -c watchexec -l timings -d 'Print how long the command took to run' complete -c watchexec -s q -l quiet -d 'Don\'t print starting and stopping messages' complete -c watchexec -l bell -d 'Ring the terminal bell on command completion' complete -c watchexec -s h -l help -d 'Print help (see more with \'--help\')' complete -c watchexec -s V -l version -d 'Print version' ================================================ FILE: completions/nu ================================================ module completions { def "nu-complete watchexec completions" [] { [ "bash" "elvish" "fish" "nu" "powershell" "zsh" ] } def "nu-complete watchexec wrap_process" [] { [ "group" "session" "none" ] } def "nu-complete watchexec on_busy_update" [] { [ "queue" "do-nothing" "restart" "signal" ] } def "nu-complete watchexec emit_events_to" [] { [ "environment" "stdio" "file" "json-stdio" "json-file" "none" ] } def "nu-complete watchexec filter_fs_events" [] { [ "access" "create" "remove" "rename" "modify" "metadata" ] } def "nu-complete watchexec screen_clear" [] { [ "clear" "reset" ] } def "nu-complete watchexec notify" [] { [ "both" "start" "end" ] } def "nu-complete watchexec color" [] { [ "auto" "always" "never" ] } # Execute commands when watched files change export extern watchexec [ --manual # Show the manual page --completions: string@"nu-complete watchexec completions" # Generate a shell completions script --only-emit-events # Only emit events to stdout, run no commands -1 # Testing only: exit Watchexec after the first run and return the command's exit code --shell: string # Use a different shell -n # Shorthand for '--shell=none' --no-environment # Deprecated shorthand for '--emit-events=none' --env(-E): string # Add env vars to the command --no-process-group # Don't use a process group --wrap-process: string@"nu-complete watchexec wrap_process" # Configure how the process is wrapped --stop-signal: string # Signal to send to stop the command --stop-timeout: string # Time to wait 
for the command to exit gracefully --timeout: string # Kill the command if it runs longer than this duration --delay-run: string # Sleep before running the command --workdir: path # Set the working directory --socket: string # Provide a socket to the command --on-busy-update(-o): string@"nu-complete watchexec on_busy_update" # What to do when receiving events while the command is running --restart(-r) # Restart the process if it's still running --signal(-s): string # Send a signal to the process when it's still running --map-signal: string # Translate signals from the OS to signals to send to the command --debounce(-d): string # Time to wait for new events before taking action --stdin-quit # Exit when stdin closes --interactive(-I) # Respond to keypresses to quit, restart, or pause --exit-on-error # Exit when the command has an error --postpone(-p) # Wait until first change before running command --poll: string # Poll for filesystem changes --emit-events-to: string@"nu-complete watchexec emit_events_to" # Configure event emission --watch(-w): path # Watch a specific file or directory --watch-non-recursive(-W): path # Watch a specific directory, non-recursively --watch-file(-F): path # Watch files and directories from a file --no-vcs-ignore # Don't load gitignores --no-project-ignore # Don't load project-local ignores --no-global-ignore # Don't load global ignores --no-default-ignore # Don't use internal default ignores --no-discover-ignore # Don't discover ignore files at all --ignore-nothing # Don't ignore anything at all --exts(-e): string # Filename extensions to filter to --filter(-f): string # Filename patterns to filter to --filter-file: path # Files to load filters from --project-origin: path # Set the project origin --filter-prog(-j): string # Filter programs --ignore(-i): string # Filename patterns to filter out --ignore-file: path # Files to load ignores from --fs-events: string@"nu-complete watchexec filter_fs_events" # Filesystem events to filter to 
--no-meta # Don't emit fs events for metadata changes --verbose(-v) # Set diagnostic log level --log-file: path # Write diagnostic logs to a file --print-events # Print events that trigger actions --clear(-c): string@"nu-complete watchexec screen_clear" # Clear screen before running command --notify(-N): string@"nu-complete watchexec notify" # Alert when commands start and end --color: string@"nu-complete watchexec color" # When to use terminal colours --timings # Print how long the command took to run --quiet(-q) # Don't print starting and stopping messages --bell # Ring the terminal bell on command completion --help(-h) # Print help (see more with '--help') --version(-V) # Print version ...program: string # Command (program and arguments) to run on changes ] } export use completions * ================================================ FILE: completions/powershell ================================================ using namespace System.Management.Automation using namespace System.Management.Automation.Language Register-ArgumentCompleter -Native -CommandName 'watchexec' -ScriptBlock { param($wordToComplete, $commandAst, $cursorPosition) $commandElements = $commandAst.CommandElements $command = @( 'watchexec' for ($i = 1; $i -lt $commandElements.Count; $i++) { $element = $commandElements[$i] if ($element -isnot [StringConstantExpressionAst] -or $element.StringConstantType -ne [StringConstantType]::BareWord -or $element.Value.StartsWith('-') -or $element.Value -eq $wordToComplete) { break } $element.Value }) -join ';' $completions = @(switch ($command) { 'watchexec' { [CompletionResult]::new('--completions', '--completions', [CompletionResultType]::ParameterName, 'Generate a shell completions script') [CompletionResult]::new('--shell', '--shell', [CompletionResultType]::ParameterName, 'Use a different shell') [CompletionResult]::new('-E', '-E ', [CompletionResultType]::ParameterName, 'Add env vars to the command') [CompletionResult]::new('--env', '--env', 
[CompletionResultType]::ParameterName, 'Add env vars to the command') [CompletionResult]::new('--wrap-process', '--wrap-process', [CompletionResultType]::ParameterName, 'Configure how the process is wrapped') [CompletionResult]::new('--stop-signal', '--stop-signal', [CompletionResultType]::ParameterName, 'Signal to send to stop the command') [CompletionResult]::new('--stop-timeout', '--stop-timeout', [CompletionResultType]::ParameterName, 'Time to wait for the command to exit gracefully') [CompletionResult]::new('--timeout', '--timeout', [CompletionResultType]::ParameterName, 'Kill the command if it runs longer than this duration') [CompletionResult]::new('--delay-run', '--delay-run', [CompletionResultType]::ParameterName, 'Sleep before running the command') [CompletionResult]::new('--workdir', '--workdir', [CompletionResultType]::ParameterName, 'Set the working directory') [CompletionResult]::new('--socket', '--socket', [CompletionResultType]::ParameterName, 'Provide a socket to the command') [CompletionResult]::new('-o', '-o', [CompletionResultType]::ParameterName, 'What to do when receiving events while the command is running') [CompletionResult]::new('--on-busy-update', '--on-busy-update', [CompletionResultType]::ParameterName, 'What to do when receiving events while the command is running') [CompletionResult]::new('-s', '-s', [CompletionResultType]::ParameterName, 'Send a signal to the process when it''s still running') [CompletionResult]::new('--signal', '--signal', [CompletionResultType]::ParameterName, 'Send a signal to the process when it''s still running') [CompletionResult]::new('--map-signal', '--map-signal', [CompletionResultType]::ParameterName, 'Translate signals from the OS to signals to send to the command') [CompletionResult]::new('-d', '-d', [CompletionResultType]::ParameterName, 'Time to wait for new events before taking action') [CompletionResult]::new('--debounce', '--debounce', [CompletionResultType]::ParameterName, 'Time to wait for new 
events before taking action') [CompletionResult]::new('--poll', '--poll', [CompletionResultType]::ParameterName, 'Poll for filesystem changes') [CompletionResult]::new('--emit-events-to', '--emit-events-to', [CompletionResultType]::ParameterName, 'Configure event emission') [CompletionResult]::new('-w', '-w', [CompletionResultType]::ParameterName, 'Watch a specific file or directory') [CompletionResult]::new('--watch', '--watch', [CompletionResultType]::ParameterName, 'Watch a specific file or directory') [CompletionResult]::new('-W', '-W ', [CompletionResultType]::ParameterName, 'Watch a specific directory, non-recursively') [CompletionResult]::new('--watch-non-recursive', '--watch-non-recursive', [CompletionResultType]::ParameterName, 'Watch a specific directory, non-recursively') [CompletionResult]::new('-F', '-F ', [CompletionResultType]::ParameterName, 'Watch files and directories from a file') [CompletionResult]::new('--watch-file', '--watch-file', [CompletionResultType]::ParameterName, 'Watch files and directories from a file') [CompletionResult]::new('-e', '-e', [CompletionResultType]::ParameterName, 'Filename extensions to filter to') [CompletionResult]::new('--exts', '--exts', [CompletionResultType]::ParameterName, 'Filename extensions to filter to') [CompletionResult]::new('-f', '-f', [CompletionResultType]::ParameterName, 'Filename patterns to filter to') [CompletionResult]::new('--filter', '--filter', [CompletionResultType]::ParameterName, 'Filename patterns to filter to') [CompletionResult]::new('--filter-file', '--filter-file', [CompletionResultType]::ParameterName, 'Files to load filters from') [CompletionResult]::new('--project-origin', '--project-origin', [CompletionResultType]::ParameterName, 'Set the project origin') [CompletionResult]::new('-j', '-j', [CompletionResultType]::ParameterName, 'Filter programs') [CompletionResult]::new('--filter-prog', '--filter-prog', [CompletionResultType]::ParameterName, 'Filter programs') 
[CompletionResult]::new('-i', '-i', [CompletionResultType]::ParameterName, 'Filename patterns to filter out') [CompletionResult]::new('--ignore', '--ignore', [CompletionResultType]::ParameterName, 'Filename patterns to filter out') [CompletionResult]::new('--ignore-file', '--ignore-file', [CompletionResultType]::ParameterName, 'Files to load ignores from') [CompletionResult]::new('--fs-events', '--fs-events', [CompletionResultType]::ParameterName, 'Filesystem events to filter to') [CompletionResult]::new('--log-file', '--log-file', [CompletionResultType]::ParameterName, 'Write diagnostic logs to a file') [CompletionResult]::new('-c', '-c', [CompletionResultType]::ParameterName, 'Clear screen before running command') [CompletionResult]::new('--clear', '--clear', [CompletionResultType]::ParameterName, 'Clear screen before running command') [CompletionResult]::new('-N', '-N ', [CompletionResultType]::ParameterName, 'Alert when commands start and end') [CompletionResult]::new('--notify', '--notify', [CompletionResultType]::ParameterName, 'Alert when commands start and end') [CompletionResult]::new('--color', '--color', [CompletionResultType]::ParameterName, 'When to use terminal colours') [CompletionResult]::new('--manual', '--manual', [CompletionResultType]::ParameterName, 'Show the manual page') [CompletionResult]::new('--only-emit-events', '--only-emit-events', [CompletionResultType]::ParameterName, 'Only emit events to stdout, run no commands') [CompletionResult]::new('-1', '-1', [CompletionResultType]::ParameterName, 'Testing only: exit Watchexec after the first run and return the command''s exit code') [CompletionResult]::new('-n', '-n', [CompletionResultType]::ParameterName, 'Shorthand for ''--shell=none''') [CompletionResult]::new('--no-environment', '--no-environment', [CompletionResultType]::ParameterName, 'Deprecated shorthand for ''--emit-events=none''') [CompletionResult]::new('--no-process-group', '--no-process-group', 
[CompletionResultType]::ParameterName, 'Don''t use a process group') [CompletionResult]::new('-r', '-r', [CompletionResultType]::ParameterName, 'Restart the process if it''s still running') [CompletionResult]::new('--restart', '--restart', [CompletionResultType]::ParameterName, 'Restart the process if it''s still running') [CompletionResult]::new('--stdin-quit', '--stdin-quit', [CompletionResultType]::ParameterName, 'Exit when stdin closes') [CompletionResult]::new('-I', '-I ', [CompletionResultType]::ParameterName, 'Respond to keypresses to quit, restart, or pause') [CompletionResult]::new('--interactive', '--interactive', [CompletionResultType]::ParameterName, 'Respond to keypresses to quit, restart, or pause') [CompletionResult]::new('--exit-on-error', '--exit-on-error', [CompletionResultType]::ParameterName, 'Exit when the command has an error') [CompletionResult]::new('-p', '-p', [CompletionResultType]::ParameterName, 'Wait until first change before running command') [CompletionResult]::new('--postpone', '--postpone', [CompletionResultType]::ParameterName, 'Wait until first change before running command') [CompletionResult]::new('--no-vcs-ignore', '--no-vcs-ignore', [CompletionResultType]::ParameterName, 'Don''t load gitignores') [CompletionResult]::new('--no-project-ignore', '--no-project-ignore', [CompletionResultType]::ParameterName, 'Don''t load project-local ignores') [CompletionResult]::new('--no-global-ignore', '--no-global-ignore', [CompletionResultType]::ParameterName, 'Don''t load global ignores') [CompletionResult]::new('--no-default-ignore', '--no-default-ignore', [CompletionResultType]::ParameterName, 'Don''t use internal default ignores') [CompletionResult]::new('--no-discover-ignore', '--no-discover-ignore', [CompletionResultType]::ParameterName, 'Don''t discover ignore files at all') [CompletionResult]::new('--ignore-nothing', '--ignore-nothing', [CompletionResultType]::ParameterName, 'Don''t ignore anything at all') 
[CompletionResult]::new('--no-meta', '--no-meta', [CompletionResultType]::ParameterName, 'Don''t emit fs events for metadata changes') [CompletionResult]::new('-v', '-v', [CompletionResultType]::ParameterName, 'Set diagnostic log level') [CompletionResult]::new('--verbose', '--verbose', [CompletionResultType]::ParameterName, 'Set diagnostic log level') [CompletionResult]::new('--print-events', '--print-events', [CompletionResultType]::ParameterName, 'Print events that trigger actions') [CompletionResult]::new('--timings', '--timings', [CompletionResultType]::ParameterName, 'Print how long the command took to run') [CompletionResult]::new('-q', '-q', [CompletionResultType]::ParameterName, 'Don''t print starting and stopping messages') [CompletionResult]::new('--quiet', '--quiet', [CompletionResultType]::ParameterName, 'Don''t print starting and stopping messages') [CompletionResult]::new('--bell', '--bell', [CompletionResultType]::ParameterName, 'Ring the terminal bell on command completion') [CompletionResult]::new('-h', '-h', [CompletionResultType]::ParameterName, 'Print help (see more with ''--help'')') [CompletionResult]::new('--help', '--help', [CompletionResultType]::ParameterName, 'Print help (see more with ''--help'')') [CompletionResult]::new('-V', '-V ', [CompletionResultType]::ParameterName, 'Print version') [CompletionResult]::new('--version', '--version', [CompletionResultType]::ParameterName, 'Print version') break } }) $completions.Where{ $_.CompletionText -like "$wordToComplete*" } | Sort-Object -Property ListItemText } ================================================ FILE: completions/zsh ================================================ #compdef watchexec autoload -U is-at-least _watchexec() { typeset -A opt_args typeset -a _arguments_options local ret=1 if is-at-least 5.2; then _arguments_options=(-s -S -C) else _arguments_options=(-s -C) fi local context curcontext="$curcontext" state line _arguments "${_arguments_options[@]}" : \ '(--manual 
--only-emit-events)--completions=[Generate a shell completions script]:SHELL:(bash elvish fish nu powershell zsh)' \ '--shell=[Use a different shell]:SHELL:_default' \ '*-E+[Add env vars to the command]:KEY=VALUE:_default' \ '*--env=[Add env vars to the command]:KEY=VALUE:_default' \ '--wrap-process=[Configure how the process is wrapped]:MODE:(group session none)' \ '--stop-signal=[Signal to send to stop the command]:SIGNAL:_default' \ '--stop-timeout=[Time to wait for the command to exit gracefully]:TIMEOUT:_default' \ '--timeout=[Kill the command if it runs longer than this duration]:TIMEOUT:_default' \ '--delay-run=[Sleep before running the command]:DURATION:_default' \ '--workdir=[Set the working directory]:DIRECTORY:_files -/' \ '*--socket=[Provide a socket to the command]:PORT:_default' \ '-o+[What to do when receiving events while the command is running]:MODE:(queue do-nothing restart signal)' \ '--on-busy-update=[What to do when receiving events while the command is running]:MODE:(queue do-nothing restart signal)' \ '(-r --restart)-s+[Send a signal to the process when it'\''s still running]:SIGNAL:_default' \ '(-r --restart)--signal=[Send a signal to the process when it'\''s still running]:SIGNAL:_default' \ '*--map-signal=[Translate signals from the OS to signals to send to the command]:SIGNAL:SIGNAL:_default' \ '-d+[Time to wait for new events before taking action]:TIMEOUT:_default' \ '--debounce=[Time to wait for new events before taking action]:TIMEOUT:_default' \ '--poll=[Poll for filesystem changes]::INTERVAL:_default' \ '--emit-events-to=[Configure event emission]:MODE:(environment stdio file json-stdio json-file none)' \ '*-w+[Watch a specific file or directory]:PATH:_files' \ '*--watch=[Watch a specific file or directory]:PATH:_files' \ '*-W+[Watch a specific directory, non-recursively]:PATH:_files' \ '*--watch-non-recursive=[Watch a specific directory, non-recursively]:PATH:_files' \ '-F+[Watch files and directories from a file]:PATH:_files' \ 
'--watch-file=[Watch files and directories from a file]:PATH:_files' \ '*-e+[Filename extensions to filter to]:EXTENSIONS:_default' \ '*--exts=[Filename extensions to filter to]:EXTENSIONS:_default' \ '*-f+[Filename patterns to filter to]:PATTERN:_default' \ '*--filter=[Filename patterns to filter to]:PATTERN:_default' \ '*--filter-file=[Files to load filters from]:PATH:_files' \ '--project-origin=[Set the project origin]:DIRECTORY:_files -/' \ '*-j+[Filter programs]:EXPRESSION:_default' \ '*--filter-prog=[Filter programs]:EXPRESSION:_default' \ '*-i+[Filename patterns to filter out]:PATTERN:_default' \ '*--ignore=[Filename patterns to filter out]:PATTERN:_default' \ '*--ignore-file=[Files to load ignores from]:PATH:_files' \ '*--fs-events=[Filesystem events to filter to]:EVENTS:(access create remove rename modify metadata)' \ '--log-file=[Write diagnostic logs to a file]::PATH:_files' \ '-c+[Clear screen before running command]::MODE:(clear reset)' \ '--clear=[Clear screen before running command]::MODE:(clear reset)' \ '-N+[Alert when commands start and end]::WHEN:((both\:"Notify on both start and end" start\:"Notify only when the command starts" end\:"Notify only when the command ends"))' \ '--notify=[Alert when commands start and end]::WHEN:((both\:"Notify on both start and end" start\:"Notify only when the command starts" end\:"Notify only when the command ends"))' \ '--color=[When to use terminal colours]:MODE:(auto always never)' \ '(--completions --only-emit-events)--manual[Show the manual page]' \ '(--completions --manual)--only-emit-events[Only emit events to stdout, run no commands]' \ '-1[Testing only\: exit Watchexec after the first run and return the command'\''s exit code]' \ '-n[Shorthand for '\''--shell=none'\'']' \ '--no-environment[Deprecated shorthand for '\''--emit-events=none'\'']' \ '--no-process-group[Don'\''t use a process group]' \ '(-o --on-busy-update)-r[Restart the process if it'\''s still running]' \ '(-o 
--on-busy-update)--restart[Restart the process if it'\''s still running]' \ '--stdin-quit[Exit when stdin closes]' \ '-I[Respond to keypresses to quit, restart, or pause]' \ '--interactive[Respond to keypresses to quit, restart, or pause]' \ '--exit-on-error[Exit when the command has an error]' \ '-p[Wait until first change before running command]' \ '--postpone[Wait until first change before running command]' \ '--no-vcs-ignore[Don'\''t load gitignores]' \ '--no-project-ignore[Don'\''t load project-local ignores]' \ '--no-global-ignore[Don'\''t load global ignores]' \ '--no-default-ignore[Don'\''t use internal default ignores]' \ '--no-discover-ignore[Don'\''t discover ignore files at all]' \ '--ignore-nothing[Don'\''t ignore anything at all]' \ '(--fs-events)--no-meta[Don'\''t emit fs events for metadata changes]' \ '*-v[Set diagnostic log level]' \ '*--verbose[Set diagnostic log level]' \ '--print-events[Print events that trigger actions]' \ '--timings[Print how long the command took to run]' \ '-q[Don'\''t print starting and stopping messages]' \ '--quiet[Don'\''t print starting and stopping messages]' \ '--bell[Ring the terminal bell on command completion]' \ '-h[Print help (see more with '\''--help'\'')]' \ '--help[Print help (see more with '\''--help'\'')]' \ '-V[Print version]' \ '--version[Print version]' \ '*::program -- Command (program and arguments) to run on changes:_cmdstring' \ && ret=0 } (( $+functions[_watchexec_commands] )) || _watchexec_commands() { local commands; commands=() _describe -t commands 'watchexec commands' commands "$@" } if [ "$funcstack[1]" = "_watchexec" ]; then _watchexec "$@" else compdef _watchexec watchexec fi ================================================ FILE: crates/bosion/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v2.0.0 (2026-01-20) - Remove `GIT_COMMIT_DESCRIPTION`. In practice this had zero usage, and dropping it means we can stop depending on gix. 
- Deps: remove gix. This drops dependencies from 327 crates to just 6. ## v1.1.3 (2025-05-15) - Deps: gix 0.72 ## v1.1.2 (2025-02-09) - Deps: gix 0.70 ## v1.1.1 (2024-10-14) - Deps: gix 0.66 ## v1.1.0 (2024-05-16) - Add `git-describe` support (#832, by @lu-zero) ## v1.0.3 (2024-04-20) - Deps: gix 0.62 ## v1.0.2 (2023-11-26) - Deps: upgrade to gix 0.55 ## v1.0.1 (2023-07-02) - Deps: upgrade to gix 0.44 ## v1.0.0 (2023-03-05) - Initial release. ================================================ FILE: crates/bosion/Cargo.toml ================================================ [package] name = "bosion" version = "2.0.0" authors = ["Félix Saparelli "] license = "Apache-2.0 OR MIT" description = "Gather build information for verbose versions flags" keywords = ["version", "git", "verbose", "long"] documentation = "https://docs.rs/bosion" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.64.0" edition = "2021" [dependencies] flate2 = { version = "1.0.35", optional = true } [dependencies.time] version = "0.3.30" features = ["macros", "formatting"] [features] default = ["git", "reproducible", "std"] ### Read from git repo, provide GIT_* vars git = ["dep:flate2"] ### Read from SOURCE_DATE_EPOCH when available reproducible = [] ### Provide a long_version_with() function to add extra info ### ### Specifically this is std support for the _using_ crate, not for the bosion crate itself. It's ### assumed that the bosion crate is always std, as it runs in build.rs. 
std = [] [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" needless_doctest_main = "allow" ================================================ FILE: crates/bosion/README.md ================================================ # Bosion _Gather build information for verbose versions flags._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license] or [MIT](https://passcod.mit-license.org). - Status: maintained. [docs]: https://docs.rs/bosion [license]: ../../LICENSE ## Quick start In your `Cargo.toml`: ```toml [build-dependencies] bosion = "2.0.0" ``` In your `build.rs`: ```rust ,no_run fn main() { bosion::gather(); } ``` In your `src/main.rs`: ```rust ,ignore include!(env!("BOSION_PATH")); fn main() { // default output, like rustc -Vv println!("{}", Bosion::LONG_VERSION); // with additional fields println!("{}", Bosion::long_version_with(&[ ("custom data", "value"), ("LLVM version", "15.0.6"), ])); // enabled features like +feature +an-other println!("{}", Bosion::CRATE_FEATURE_STRING); // the raw data println!("{}", Bosion::GIT_COMMIT_HASH); println!("{}", Bosion::GIT_COMMIT_SHORTHASH); println!("{}", Bosion::GIT_COMMIT_DATE); println!("{}", Bosion::GIT_COMMIT_DATETIME); println!("{}", Bosion::CRATE_VERSION); println!("{:?}", Bosion::CRATE_FEATURES); println!("{}", Bosion::BUILD_DATE); println!("{}", Bosion::BUILD_DATETIME); } ``` ## Advanced usage Generating a struct with public visibility: ```rust ,no_run // build.rs bosion::gather_pub(); ``` Customising the output file and struct names: ```rust ,no_run // build.rs bosion::gather_to("buildinfo.rs", "Build", /* public? 
*/ false); ``` Outputting build-time environment variables instead of source: ```rust ,ignore // build.rs bosion::gather_to_env(); // src/main.rs fn main() { println!("{}", env!("BOSION_GIT_COMMIT_HASH")); println!("{}", env!("BOSION_GIT_COMMIT_SHORTHASH")); println!("{}", env!("BOSION_GIT_COMMIT_DATE")); println!("{}", env!("BOSION_GIT_COMMIT_DATETIME")); println!("{}", env!("BOSION_BUILD_DATE")); println!("{}", env!("BOSION_BUILD_DATETIME")); println!("{}", env!("BOSION_CRATE_VERSION")); println!("{}", env!("BOSION_CRATE_FEATURES")); // comma-separated } ``` Custom env prefix: ```rust ,no_run // build.rs bosion::gather_to_env_with_prefix("MYAPP_"); ``` ## Features - `reproducible`: reads [`SOURCE_DATE_EPOCH`](https://reproducible-builds.org/docs/source-date-epoch/) (default). - `git`: enables gathering git information (default). - `std`: enables the `long_version_with` method (default). Specifically, this is about the downstream crate's std support, not Bosion's, which always requires std. ## Why not...? - [bugreport](https://github.com/sharkdp/bugreport): runtime library, for bug information. - [git-testament](https://github.com/kinnison/git-testament): shells out to the `git` CLI at build time; Bosion reads the repository directly. - [human-panic](https://github.com/rust-cli/human-panic): runtime library, for panics. - [shadow-rs](https://github.com/baoyachi/shadow-rs): uses libgit2, and doesn't rebuild on git changes. - [vergen](https://github.com/rustyhorde/vergen): shells out to the `git` CLI at build time. Bosion also requires no dependencies outside of build.rs, and was specifically made for crates installed in a variety of ways, like with `cargo install`, from pre-built binary, from source with git, or from source without git (like a tarball), on a variety of platforms. Its default output with [clap](https://clap.rs) is almost exactly like `rustc -Vv`.
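With the `reproducible` feature (on by default), the build date and datetime can be pinned through the environment rather than taken from the clock. A minimal sketch, assuming GNU `date` (the `-d @epoch` form is a GNU extension) and an arbitrary example epoch:

```shell
# Pin Bosion's BUILD_DATE/BUILD_DATETIME to a fixed instant instead of the
# current clock, per the reproducible-builds.org convention.
export SOURCE_DATE_EPOCH=1678000000

# GNU date: show which UTC day this epoch pins the build to.
date -u -d "@$SOURCE_DATE_EPOCH" +%Y-%m-%d   # 2023-03-05

# Then build as usual; the cargo invocation is unchanged:
# cargo build --release
```

Two builds of the same source with the same `SOURCE_DATE_EPOCH` then report identical `build-date` lines in their long version output.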
## Examples The [examples](./examples) directory contains a practical and runnable [clap-based example](./examples/clap/), as well as several other crates which are actually used for integration testing. Here is the output for the Watchexec CLI: ```plain watchexec 1.21.1 (5026793 2023-03-05) commit-hash: 5026793a12ff895edf2dafb92111e7bd1767650e commit-date: 2023-03-05 build-date: 2023-03-05 release: 1.21.1 features: ``` For comparison, here's `rustc -Vv`: ```plain rustc 1.67.1 (d5a82bbd2 2023-02-07) binary: rustc commit-hash: d5a82bbd26e1ad8b7401f6a718a9c57c96905483 commit-date: 2023-02-07 host: x86_64-unknown-linux-gnu release: 1.67.1 LLVM version: 15.0.6 ``` ================================================ FILE: crates/bosion/examples/clap/Cargo.toml ================================================ [package] name = "bosion-example-clap" version = "0.1.0" publish = false edition = "2021" [workspace] [features] default = ["foo"] foo = [] [build-dependencies.bosion] version = "*" path = "../.." 
[dependencies.clap] version = "4.1.8" features = ["cargo", "derive"] ================================================ FILE: crates/bosion/examples/clap/build.rs ================================================ fn main() { bosion::gather(); } ================================================ FILE: crates/bosion/examples/clap/src/main.rs ================================================ use clap::Parser; include!(env!("BOSION_PATH")); #[derive(Parser)] #[clap(version, long_version = Bosion::LONG_VERSION)] struct Args { #[clap(long)] extras: bool, #[clap(long)] features: bool, #[clap(long)] dates: bool, #[clap(long)] hashes: bool, } fn main() { let args = Args::parse(); if args.extras { println!( "{}", Bosion::long_version_with(&[("extra", "field"), ("custom", "1.2.3"),]) ); } else if args.features { println!("Features: {}", Bosion::CRATE_FEATURE_STRING); } else if args.dates { println!("commit date: {}", Bosion::GIT_COMMIT_DATE); println!("commit datetime: {}", Bosion::GIT_COMMIT_DATETIME); println!("build date: {}", Bosion::BUILD_DATE); println!("build datetime: {}", Bosion::BUILD_DATETIME); } else if args.hashes { println!("commit hash: {}", Bosion::GIT_COMMIT_HASH); println!("commit shorthash: {}", Bosion::GIT_COMMIT_SHORTHASH); } else { println!("{}", Bosion::LONG_VERSION); } } ================================================ FILE: crates/bosion/examples/default/Cargo.toml ================================================ [package] name = "bosion-test-default" version = "0.1.0" publish = false edition = "2021" [workspace] [features] default = ["foo"] foo = [] [build-dependencies.bosion] version = "*" path = "../.." 
[dependencies] leon = { version = "3.0.2", default-features = false } snapbox = "0.5.9" time = { version = "0.3.30", features = ["formatting", "macros"] } ================================================ FILE: crates/bosion/examples/default/build.rs ================================================ fn main() { bosion::gather(); } ================================================ FILE: crates/bosion/examples/default/src/common.rs ================================================ #[cfg(test)] pub(crate) fn git_commit_info(format: &str) -> String { let output = std::process::Command::new("git") .arg("show") .arg("--no-notes") .arg("--no-patch") .arg(format!("--pretty=format:{format}")) .output() .expect("git"); String::from_utf8(output.stdout) .expect("git") .trim() .to_string() } #[macro_export] macro_rules! test_snapshot { ($name:ident, $actual:expr) => { #[cfg(test)] #[test] fn $name() { use std::str::FromStr; let gittime = ::time::OffsetDateTime::from_unix_timestamp( i64::from_str(&crate::common::git_commit_info("%ct")).expect("git i64"), ) .expect("git time"); ::snapbox::Assert::new().matches( ::leon::Template::parse( std::fs::read_to_string(format!("../snapshots/{}.txt", stringify!($name))) .expect("read file") .trim(), ) .expect("leon parse") .render(&[ ( "today date".to_string(), ::time::OffsetDateTime::now_utc() .format(::time::macros::format_description!("[year]-[month]-[day]")) .unwrap(), ), ("git hash".to_string(), crate::common::git_commit_info("%H")), ( "git shorthash".to_string(), crate::common::git_commit_info("%H").chars().take(8).collect(), ), ( "git date".to_string(), gittime .format(::time::macros::format_description!("[year]-[month]-[day]")) .expect("git date format"), ), ( "git datetime".to_string(), gittime .format(::time::macros::format_description!( "[year]-[month]-[day] [hour]:[minute]:[second]" )) .expect("git time format"), ), ]) .expect("leon render"), $actual, ); } }; } ================================================ FILE: 
crates/bosion/examples/default/src/main.rs ================================================ include!(env!("BOSION_PATH")); mod common; fn main() {} test_snapshot!(crate_version, Bosion::CRATE_VERSION); test_snapshot!(crate_features, format!("{:#?}", Bosion::CRATE_FEATURES)); test_snapshot!(build_date, Bosion::BUILD_DATE); test_snapshot!(build_datetime, Bosion::BUILD_DATETIME); test_snapshot!(git_commit_hash, Bosion::GIT_COMMIT_HASH); test_snapshot!(git_commit_shorthash, Bosion::GIT_COMMIT_SHORTHASH); test_snapshot!(git_commit_date, Bosion::GIT_COMMIT_DATE); test_snapshot!(git_commit_datetime, Bosion::GIT_COMMIT_DATETIME); test_snapshot!(default_long_version, Bosion::LONG_VERSION); test_snapshot!( default_long_version_with, Bosion::long_version_with(&[("extra", "field"), ("custom", "1.2.3")]) ); ================================================ FILE: crates/bosion/examples/no-git/Cargo.toml ================================================ [package] name = "bosion-test-no-git" version = "0.1.0" publish = false edition = "2021" [workspace] [features] default = ["foo"] foo = [] [build-dependencies.bosion] version = "*" path = "../.." 
default-features = false features = ["std"] [dependencies] leon = { version = "3.0.2", default-features = false } snapbox = "0.5.9" time = { version = "0.3.30", features = ["formatting", "macros"] } ================================================ FILE: crates/bosion/examples/no-git/build.rs ================================================ fn main() { bosion::gather(); } ================================================ FILE: crates/bosion/examples/no-git/src/main.rs ================================================ include!(env!("BOSION_PATH")); #[path = "../../default/src/common.rs"] mod common; fn main() {} test_snapshot!(crate_version, Bosion::CRATE_VERSION); test_snapshot!(crate_features, format!("{:#?}", Bosion::CRATE_FEATURES)); test_snapshot!(build_date, Bosion::BUILD_DATE); test_snapshot!(build_datetime, Bosion::BUILD_DATETIME); test_snapshot!(no_git_long_version, Bosion::LONG_VERSION); test_snapshot!( no_git_long_version_with, Bosion::long_version_with(&[("extra", "field"), ("custom", "1.2.3")]) ); ================================================ FILE: crates/bosion/examples/no-std/Cargo.toml ================================================ [package] name = "bosion-test-no-std" version = "0.1.0" publish = false edition = "2021" [profile.dev] panic = "abort" [profile.release] panic = "abort" [workspace] [features] default = ["foo"] foo = [] [build-dependencies.bosion] version = "*" path = "../.." 
default-features = false [dependencies] leon = { version = "3.0.2", default-features = false } snapbox = "0.5.9" time = { version = "0.3.30", features = ["formatting", "macros"] } ================================================ FILE: crates/bosion/examples/no-std/build.rs ================================================ fn main() { bosion::gather(); } ================================================ FILE: crates/bosion/examples/no-std/src/main.rs ================================================ #![cfg_attr(not(test), no_main)] #![cfg_attr(not(test), no_std)] #[cfg(not(test))] use core::panic::PanicInfo; #[cfg(not(test))] #[panic_handler] fn panic(_panic: &PanicInfo<'_>) -> ! { loop {} } include!(env!("BOSION_PATH")); #[cfg(test)] #[path = "../../default/src/common.rs"] mod common; #[cfg(test)] mod test { use super::*; test_snapshot!(crate_version, Bosion::CRATE_VERSION); test_snapshot!(crate_features, format!("{:#?}", Bosion::CRATE_FEATURES)); test_snapshot!(build_date, Bosion::BUILD_DATE); test_snapshot!(build_datetime, Bosion::BUILD_DATETIME); test_snapshot!(no_git_long_version, Bosion::LONG_VERSION); } ================================================ FILE: crates/bosion/examples/snapshots/build_date.txt ================================================ {today date} ================================================ FILE: crates/bosion/examples/snapshots/build_datetime.txt ================================================ {today date} [..] 
================================================ FILE: crates/bosion/examples/snapshots/crate_features.txt ================================================ [ "default", "foo", ] ================================================ FILE: crates/bosion/examples/snapshots/crate_version.txt ================================================ 0.1.0 ================================================ FILE: crates/bosion/examples/snapshots/default_long_version.txt ================================================ 0.1.0 ({git shorthash} {git date}) +foo commit-hash: {git hash} commit-date: {git date} build-date: {today date} release: 0.1.0 features: default,foo ================================================ FILE: crates/bosion/examples/snapshots/default_long_version_with.txt ================================================ 0.1.0 ({git shorthash} {git date}) +foo commit-hash: {git hash} commit-date: {git date} build-date: {today date} release: 0.1.0 features: default,foo extra: field custom: 1.2.3 ================================================ FILE: crates/bosion/examples/snapshots/git_commit_date.txt ================================================ {git date} ================================================ FILE: crates/bosion/examples/snapshots/git_commit_datetime.txt ================================================ {git datetime} ================================================ FILE: crates/bosion/examples/snapshots/git_commit_hash.txt ================================================ {git hash} ================================================ FILE: crates/bosion/examples/snapshots/git_commit_shorthash.txt ================================================ {git shorthash} ================================================ FILE: crates/bosion/examples/snapshots/no_git_long_version.txt ================================================ 0.1.0 ({today date}) +foo build-date: {today date} release: 0.1.0 features: default,foo ================================================ FILE: 
crates/bosion/examples/snapshots/no_git_long_version_with.txt ================================================ 0.1.0 ({today date}) +foo build-date: {today date} release: 0.1.0 features: default,foo extra: field custom: 1.2.3 ================================================ FILE: crates/bosion/release.toml ================================================ pre-release-commit-message = "release: bosion v{{version}}" tag-prefix = "bosion-" tag-message = "bosion {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 [[pre-release-replacements]] file = "README.md" search = "^bosion = \".*\"$" replace = "bosion = \"{{version}}\"" prerelease = true max = 1 ================================================ FILE: crates/bosion/run-tests.sh ================================================ #!/bin/bash set -euo pipefail for test in examples/*; do echo "Testing $test" pushd "$test" if ! test -f Cargo.toml; then popd continue fi cargo check cargo test popd done ================================================ FILE: crates/bosion/src/info.rs ================================================ use std::{ env::var, path::{Path, PathBuf}, }; use time::{format_description::FormatItem, macros::format_description, OffsetDateTime}; /// Gathered build-time information /// /// This struct contains all the information gathered by `bosion`. It is not meant to be used /// directly under normal circumstances, but is public for documentation purposes and if you wish /// to build your own frontend for whatever reason. In that case, note that no effort has been made /// to make this usable outside of the build.rs environment. /// /// The `git` field is only available when the `git` feature is enabled, and if there is a git /// repository to read from.
The repository is discovered by walking up the directory tree until one /// is found, which means workspaces or more complex monorepos are automatically supported. If there /// are any errors reading the repository, the `git` field will be `None` and a rustc warning will /// be printed. #[derive(Debug, Clone)] pub struct Info { /// The crate version, as read from the `CARGO_PKG_VERSION` environment variable. pub crate_version: String, /// The crate features, as found by the presence of `CARGO_FEATURE_*` environment variables. /// /// These are normalised to lowercase and have underscores replaced by hyphens. pub crate_features: Vec<String>, /// The build date, in the format `YYYY-MM-DD`, at UTC. /// /// This is either current as of build time, or from the timestamp specified by the /// `SOURCE_DATE_EPOCH` environment variable, for /// [reproducible builds](https://reproducible-builds.org/). pub build_date: String, /// The build datetime, in the format `YYYY-MM-DD HH:MM:SS`, at UTC. /// /// This is either current as of build time, or from the timestamp specified by the /// `SOURCE_DATE_EPOCH` environment variable, for /// [reproducible builds](https://reproducible-builds.org/). pub build_datetime: String, /// Git repository information, if available. pub git: Option<GitInfo>, } trait ErrString<T> { fn err_string(self) -> Result<T, String>; } impl<T, E> ErrString<T> for Result<T, E> where E: std::fmt::Display, { fn err_string(self) -> Result<T, String> { self.map_err(|e| e.to_string()) } } const DATE_FORMAT: &[FormatItem<'static>] = format_description!("[year]-[month]-[day]"); const DATETIME_FORMAT: &[FormatItem<'static>] = format_description!("[year]-[month]-[day] [hour]:[minute]:[second]"); impl Info { /// Gathers build-time information /// /// This is not meant to be used directly under normal circumstances, but is public if you wish /// to build your own frontend for whatever reason. In that case, note that no effort has been /// made to make this usable outside of the build.rs environment.
pub fn gather() -> Result<Self, String> { let build_date = Self::build_date()?; Ok(Self { crate_version: var("CARGO_PKG_VERSION").err_string()?, crate_features: Self::features(), build_date: build_date.format(DATE_FORMAT).err_string()?, build_datetime: build_date.format(DATETIME_FORMAT).err_string()?, #[cfg(feature = "git")] git: GitInfo::gather() .map_err(|e| { println!("cargo:warning=git info gathering failed: {e}"); }) .ok(), #[cfg(not(feature = "git"))] git: None, }) } fn build_date() -> Result<OffsetDateTime, String> { if cfg!(feature = "reproducible") { if let Ok(date) = var("SOURCE_DATE_EPOCH") { if let Ok(date) = date.parse::<i64>() { return OffsetDateTime::from_unix_timestamp(date).err_string(); } } } Ok(OffsetDateTime::now_utc()) } fn features() -> Vec<String> { let mut features = Vec::new(); for (key, _) in std::env::vars() { if let Some(stripped) = key.strip_prefix("CARGO_FEATURE_") { features.push(stripped.replace('_', "-").to_lowercase()); } } features } pub(crate) fn set_reruns(&self) { if cfg!(feature = "reproducible") { println!("cargo:rerun-if-env-changed=SOURCE_DATE_EPOCH"); } if let Some(git) = &self.git { let git_head = git.git_root.join("HEAD"); println!("cargo:rerun-if-changed={}", git_head.display()); } } } /// Git repository information #[derive(Debug, Clone)] pub struct GitInfo { /// The absolute path to the git repository's data folder. /// /// In a normal repository, this is `.git`, _not_ the index or working directory. pub git_root: PathBuf, /// The full hash of the current commit. /// /// Note that this makes no effort to handle dirty working directories, so it may not be /// representative of the current state of the code. pub git_hash: String, /// The short hash of the current commit. /// /// This is truncated to 8 characters. pub git_shorthash: String, /// The date of the current commit, in the format `YYYY-MM-DD`, at UTC. pub git_date: String, /// The datetime of the current commit, in the format `YYYY-MM-DD HH:MM:SS`, at UTC.
pub git_datetime: String, } #[cfg(feature = "git")] impl GitInfo { fn gather() -> Result<Self, String> { let git_root = Self::find_git_dir(Path::new(".")) .ok_or_else(|| "no git repository found".to_string())?; let hash = Self::resolve_head(&git_root).ok_or_else(|| "could not resolve HEAD".to_string())?; let timestamp = Self::read_commit_timestamp(&git_root, &hash) .ok_or_else(|| "could not read commit timestamp".to_string())?; let timestamp = OffsetDateTime::from_unix_timestamp(timestamp).err_string()?; Ok(Self { git_root: git_root.canonicalize().err_string()?, git_shorthash: hash.chars().take(8).collect(), git_hash: hash, git_date: timestamp.format(DATE_FORMAT).err_string()?, git_datetime: timestamp.format(DATETIME_FORMAT).err_string()?, }) } fn find_git_dir(start: &Path) -> Option<PathBuf> { use std::fs; let mut current = start.canonicalize().ok()?; loop { let git_dir = current.join(".git"); if git_dir.is_dir() { return Some(git_dir); } // Handle git worktrees: .git can be a file containing "gitdir: " if git_dir.is_file() { let content = fs::read_to_string(&git_dir).ok()?; if let Some(path) = content.strip_prefix("gitdir: ") { return Some(PathBuf::from(path.trim())); } } if !current.pop() { return None; } } } fn resolve_head(git_dir: &Path) -> Option<String> { use std::fs; let head_content = fs::read_to_string(git_dir.join("HEAD")).ok()?; let head_content = head_content.trim(); if let Some(ref_path) = head_content.strip_prefix("ref: ") { Self::resolve_ref(git_dir, ref_path) } else { // Detached HEAD - direct commit hash Some(head_content.to_string()) } } fn resolve_ref(git_dir: &Path, ref_path: &str) -> Option<String> { use std::fs; // Try loose ref first let ref_file = git_dir.join(ref_path); if let Ok(content) = fs::read_to_string(&ref_file) { return Some(content.trim().to_string()); } // Try packed-refs let packed_refs = git_dir.join("packed-refs"); if let Ok(content) = fs::read_to_string(&packed_refs) { for line in content.lines() { if line.starts_with('#') || line.starts_with('^') { continue; }
let parts: Vec<_> = line.split_whitespace().collect(); if parts.len() >= 2 && parts[1] == ref_path { return Some(parts[0].to_string()); } } } None } fn read_commit_timestamp(git_dir: &Path, hash: &str) -> Option<i64> { // Try loose object first if let Some(timestamp) = Self::read_loose_commit_timestamp(git_dir, hash) { return Some(timestamp); } // Try packfiles Self::read_packed_commit_timestamp(git_dir, hash) } fn read_loose_commit_timestamp(git_dir: &Path, hash: &str) -> Option<i64> { use flate2::read::ZlibDecoder; use std::{fs, io::Read}; let (prefix, suffix) = hash.split_at(2); let object_path = git_dir.join("objects").join(prefix).join(suffix); let compressed = fs::read(&object_path).ok()?; let mut decoder = ZlibDecoder::new(&compressed[..]); let mut decompressed = Vec::new(); decoder.read_to_end(&mut decompressed).ok()?; Self::parse_commit_timestamp(&decompressed) } fn read_packed_commit_timestamp(git_dir: &Path, hash: &str) -> Option<i64> { use std::fs; let pack_dir = git_dir.join("objects").join("pack"); let entries = fs::read_dir(&pack_dir).ok()?; // Parse the hash into bytes for comparison let hash_bytes = Self::hex_to_bytes(hash)?; for entry in entries.flatten() { let path = entry.path(); if path.extension().and_then(|e| e.to_str()) == Some("idx") { if let Some(offset) = Self::find_object_in_index(&path, &hash_bytes) { let pack_path = path.with_extension("pack"); if let Some(data) = Self::read_pack_object(&pack_path, offset) { return Self::parse_commit_timestamp(&data); } } } } None } fn hex_to_bytes(hex: &str) -> Option<[u8; 20]> { let mut bytes = [0u8; 20]; if hex.len() != 40 { return None; } for (i, chunk) in hex.as_bytes().chunks(2).enumerate() { let s = std::str::from_utf8(chunk).ok()?; bytes[i] = u8::from_str_radix(s, 16).ok()?; } Some(bytes) } fn find_object_in_index(idx_path: &Path, hash: &[u8; 20]) -> Option<u64> { use std::{ fs::File, io::{Read, Seek, SeekFrom}, }; let mut file = File::open(idx_path).ok()?; let mut header = [0u8; 8]; file.read_exact(&mut
header).ok()?; // Check for v2 index magic: 0xff744f63 if header[0..4] != [0xff, 0x74, 0x4f, 0x63] { return None; // Only support v2 index } let version = u32::from_be_bytes([header[4], header[5], header[6], header[7]]); if version != 2 { return None; } // Read fanout table (256 * 4 bytes) let mut fanout = [0u32; 256]; for entry in &mut fanout { let mut buf = [0u8; 4]; file.read_exact(&mut buf).ok()?; *entry = u32::from_be_bytes(buf); } let total_objects = fanout[255] as usize; let first_byte = hash[0] as usize; // Find range of objects with this first byte let start = if first_byte == 0 { 0 } else { fanout[first_byte - 1] as usize }; let end = fanout[first_byte] as usize; if start >= end { return None; } // Binary search within the hash section // Hashes start at offset 8 + 256*4 = 1032 let hash_section_offset = 8 + 256 * 4; let mut left = start; let mut right = end; while left < right { let mid = left + (right - left) / 2; let hash_offset = hash_section_offset + mid * 20; file.seek(SeekFrom::Start(hash_offset as u64)).ok()?; let mut found_hash = [0u8; 20]; file.read_exact(&mut found_hash).ok()?; match found_hash.cmp(hash) { std::cmp::Ordering::Equal => { // Found! 
Now get the offset // CRC section starts after all hashes // Offset section starts after CRC section let offset_section = hash_section_offset + total_objects * 20 + total_objects * 4; let offset_entry = offset_section + mid * 4; file.seek(SeekFrom::Start(offset_entry as u64)).ok()?; let mut offset_buf = [0u8; 4]; file.read_exact(&mut offset_buf).ok()?; let offset = u32::from_be_bytes(offset_buf); // Check if this is a large offset (MSB set) if offset & 0x80000000 != 0 { // Large offset - need to read from 8-byte offset table let large_idx = (offset & 0x7fffffff) as usize; let large_offset_section = offset_section + total_objects * 4; let large_entry = large_offset_section + large_idx * 8; file.seek(SeekFrom::Start(large_entry as u64)).ok()?; let mut large_buf = [0u8; 8]; file.read_exact(&mut large_buf).ok()?; return Some(u64::from_be_bytes(large_buf)); } return Some(u64::from(offset)); } std::cmp::Ordering::Less => left = mid + 1, std::cmp::Ordering::Greater => right = mid, } } None } fn read_pack_object(pack_path: &Path, offset: u64) -> Option<Vec<u8>> { use flate2::read::ZlibDecoder; use std::{ fs::File, io::{Read, Seek, SeekFrom}, }; let mut file = File::open(pack_path).ok()?; file.seek(SeekFrom::Start(offset)).ok()?; // Read object header (variable length encoding) let mut byte = [0u8; 1]; file.read_exact(&mut byte).ok()?; let obj_type = (byte[0] >> 4) & 0x07; let mut size = u64::from(byte[0] & 0x0f); let mut shift = 4; while byte[0] & 0x80 != 0 { file.read_exact(&mut byte).ok()?; size |= u64::from(byte[0] & 0x7f) << shift; shift += 7; } // Object types: 1=commit, 2=tree, 3=blob, 4=tag, 6=ofs_delta, 7=ref_delta match obj_type { 1..=4 => { // Regular object - just decompress let mut decoder = ZlibDecoder::new(&mut file); #[allow(clippy::cast_possible_truncation)] let mut data = Vec::with_capacity(size as usize); decoder.read_to_end(&mut data).ok()?; // Add the git object header let type_name = match obj_type { 1 => "commit", 2 => "tree", 3 => "blob", 4 => "tag", _ =>
unreachable!(), }; let mut result = format!("{} {}\0", type_name, data.len()).into_bytes(); result.extend(data); Some(result) } 6 | 7 => { // Delta objects - not supported for simplicity // In practice, the HEAD commit is often a delta, but resolving // deltas requires recursive lookups which adds complexity None } _ => None, } } fn parse_commit_timestamp(data: &[u8]) -> Option<i64> { let content = std::str::from_utf8(data).ok()?; // Skip the header (e.g., "commit 123\0") let content = content.split('\0').nth(1)?; for line in content.lines() { if let Some(rest) = line.strip_prefix("committer ") { // Format: "Name timestamp timezone" let parts: Vec<_> = rest.rsplitn(3, ' ').collect(); if parts.len() >= 2 { return parts[1].parse().ok(); } } } None } } ================================================ FILE: crates/bosion/src/lib.rs ================================================ #![doc = include_str!("../README.md")] #![cfg_attr(not(test), warn(unused_crate_dependencies))] use std::{env::var, fs::File, io::Write, path::PathBuf}; pub use info::*; mod info; /// Gather build-time information for the current crate /// /// See the crate-level documentation for a guide. This function is a convenience wrapper around /// [`gather_to`] with the most common defaults: it writes to `bosion.rs` a pub(crate) struct named /// `Bosion`. pub fn gather() { gather_to("bosion.rs", "Bosion", false); } /// Gather build-time information for the current crate (public visibility) /// /// See the crate-level documentation for a guide. This function is a convenience wrapper around /// [`gather_to`]: it writes to `bosion.rs` a pub struct named `Bosion`. pub fn gather_pub() { gather_to("bosion.rs", "Bosion", true); } /// Gather build-time information for the current crate (custom output) /// /// Gathers a limited set of build-time information for the current crate and writes it to a file. /// The file is always written to the `OUT_DIR` directory, as per Cargo conventions.
It contains a /// zero-size struct with a bunch of associated constants containing the gathered information, and a /// `long_version_with` function (when the `std` feature is enabled) that takes a slice of extra /// key-value pairs to append in the same format. /// /// `public` controls whether the struct is `pub` (true) or `pub(crate)` (false). /// /// The generated code is entirely documented, and will appear in your documentation (in docs.rs, it /// only will if visibility is public). /// /// See [`Info`] for a list of gathered data. /// /// The constants include all the information from [`Info`], as well as the following: /// /// - `LONG_VERSION`: A clap-ready long version string, including the crate version, features, build /// date, and git information when available. /// - `CRATE_FEATURE_STRING`: A string containing the crate features, in the format `+feat1 +feat2`. /// /// We also instruct rustc to rerun the build script if the environment changes, as necessary. pub fn gather_to(filename: &str, structname: &str, public: bool) { let path = PathBuf::from(var("OUT_DIR").expect("bosion")).join(filename); println!("cargo:rustc-env=BOSION_PATH={}", path.display()); let info = Info::gather().expect("bosion"); info.set_reruns(); let Info { crate_version, crate_features, build_date, build_datetime, git, } = info; let crate_feature_string = crate_features .iter() .filter(|feat| *feat != "default") .map(|feat| format!("+{feat}")) .collect::<Vec<_>>() .join(" "); let crate_feature_list = crate_features.join(","); let viz = if public { "pub" } else { "pub(crate)" }; let (git_render, long_version) = if let Some(GitInfo { git_hash, git_shorthash, git_date, git_datetime, .. }) = git { (format!( " /// The git commit hash /// /// This is the full hash of the commit that was built. Note that if the repository was /// dirty, this will be the hash of the last commit, not including the changes.
pub const GIT_COMMIT_HASH: &'static str = {git_hash:?}; /// The git commit hash, shortened /// /// This is the shortened hash of the commit that was built. Same caveats as with /// `GIT_COMMIT_HASH` apply. The length of the hash is fixed at 8 characters. pub const GIT_COMMIT_SHORTHASH: &'static str = {git_shorthash:?}; /// The git commit date /// /// This is the date (`YYYY-MM-DD`) of the commit that was built. Same caveats as with /// `GIT_COMMIT_HASH` apply. pub const GIT_COMMIT_DATE: &'static str = {git_date:?}; /// The git commit date and time /// /// This is the date and time (`YYYY-MM-DD HH:MM:SS`) of the commit that was built. Same /// caveats as with `GIT_COMMIT_HASH` apply. pub const GIT_COMMIT_DATETIME: &'static str = {git_datetime:?}; " ), format!("{crate_version} ({git_shorthash} {git_date}) {crate_feature_string}\ncommit-hash: {git_hash}\ncommit-date: {git_date}\nbuild-date: {build_date}\nrelease: {crate_version}\nfeatures: {crate_feature_list}")) } else { (String::new(), format!("{crate_version} ({build_date}) {crate_feature_string}\nbuild-date: {build_date}\nrelease: {crate_version}\nfeatures: {crate_feature_list}")) }; #[cfg(feature = "std")] let long_version_with_fn = r#" /// Returns the long version string with extra information tacked on /// /// This is the same as `LONG_VERSION` but takes a slice of key-value pairs to append to the /// end in the same format. pub fn long_version_with(extra: &[(&str, &str)]) -> String { let mut output = Self::LONG_VERSION.to_string(); for (k, v) in extra { output.push_str(&format!("\n{k}: {v}")); } output } "#; #[cfg(not(feature = "std"))] let long_version_with_fn = ""; let bosion_version = env!("CARGO_PKG_VERSION"); let render = format!( r#" /// Build-time information /// /// This struct is generated by the [bosion](https://docs.rs/bosion) crate at build time. 
/// /// Bosion version: {bosion_version} #[derive(Debug, Clone, Copy)] {viz} struct {structname}; #[allow(dead_code)] impl {structname} {{ /// Clap-compatible long version string /// /// At minimum, this will be the crate version and build date. /// /// It presents as a first "summary" line like `crate_version (build_date) features`, /// followed by `key: value` pairs. This is the same format used by `rustc -Vv`. /// /// If git info is available, it also includes the git hash, short hash and commit date, /// and swaps the build date for the commit date in the summary line. pub const LONG_VERSION: &'static str = {long_version:?}; /// The crate version, as reported by Cargo /// /// You should probably prefer reading the `CARGO_PKG_VERSION` environment variable. pub const CRATE_VERSION: &'static str = {crate_version:?}; /// The crate features /// /// This is a list of the features that were enabled when this crate was built, /// lowercased and with underscores replaced by hyphens. pub const CRATE_FEATURES: &'static [&'static str] = &{crate_features:?}; /// The crate features, as a string /// /// This is in format `+feature +feature2 +feature3`, lowercased with underscores /// replaced by hyphens. pub const CRATE_FEATURE_STRING: &'static str = {crate_feature_string:?}; /// The build date /// /// This is the date that the crate was built, in the format `YYYY-MM-DD`. If the /// environment variable `SOURCE_DATE_EPOCH` was set, it's used instead of the current /// time, for [reproducible builds](https://reproducible-builds.org/). pub const BUILD_DATE: &'static str = {build_date:?}; /// The build datetime /// /// This is the date and time that the crate was built, in the format /// `YYYY-MM-DD HH:MM:SS`. If the environment variable `SOURCE_DATE_EPOCH` was set, it's /// used instead of the current time, for /// [reproducible builds](https://reproducible-builds.org/). 
pub const BUILD_DATETIME: &'static str = {build_datetime:?}; {git_render} {long_version_with_fn} }} "# ); let mut file = File::create(path).expect("bosion"); file.write_all(render.as_bytes()).expect("bosion"); } /// Gather build-time information and write it to the environment /// /// See the crate-level documentation for a guide. This function is a convenience wrapper around /// [`gather_to_env_with_prefix`] with the most common default prefix of `BOSION_`. pub fn gather_to_env() { gather_to_env_with_prefix("BOSION_"); } /// Gather build-time information and write it to the environment /// /// Gathers a limited set of build-time information for the current crate and makes it available to /// the crate as build environment variables. This is an alternative to [`include!`]ing a file which /// is generated at build time, like for [`gather`] and variants, which doesn't create any new code /// and doesn't include any information in the binary that you do not explicitly use. /// /// The environment variables are prefixed with the given string, which should be generally be /// uppercase and end with an underscore. /// /// See [`Info`] for a list of gathered data. /// /// Unlike [`gather`], there is no Clap-ready `LONG_VERSION` string, but you can of course generate /// one yourself from the environment variables. /// /// We also instruct rustc to rerun the build script if the environment changes, as necessary. pub fn gather_to_env_with_prefix(prefix: &str) { let info = Info::gather().expect("bosion"); info.set_reruns(); let Info { crate_version, crate_features, build_date, build_datetime, git, } = info; println!("cargo:rustc-env={prefix}CRATE_VERSION={crate_version}"); println!( "cargo:rustc-env={prefix}CRATE_FEATURES={}", crate_features.join(",") ); println!("cargo:rustc-env={prefix}BUILD_DATE={build_date}"); println!("cargo:rustc-env={prefix}BUILD_DATETIME={build_datetime}"); if let Some(GitInfo { git_hash, git_shorthash, git_date, git_datetime, .. 
}) = git { println!("cargo:rustc-env={prefix}GIT_COMMIT_HASH={git_hash}"); println!("cargo:rustc-env={prefix}GIT_COMMIT_SHORTHASH={git_shorthash}"); println!("cargo:rustc-env={prefix}GIT_COMMIT_DATE={git_date}"); println!("cargo:rustc-env={prefix}GIT_COMMIT_DATETIME={git_datetime}"); } } ================================================ FILE: crates/cli/Cargo.toml ================================================ [package] name = "watchexec-cli" version = "2.5.1" authors = ["Félix Saparelli ", "Matt Green "] license = "Apache-2.0" description = "Executes commands in response to file modifications" keywords = ["watcher", "filesystem", "cli", "watchexec"] categories = ["command-line-utilities"] documentation = "https://watchexec.github.io/docs/#watchexec" homepage = "https://watchexec.github.io" repository = "https://github.com/watchexec/watchexec" readme = "README.md" edition = "2021" # sets the default for the workspace default-run = "watchexec" [[bin]] name = "watchexec" path = "src/main.rs" [dependencies] argfile = "0.2.0" chrono = "0.4.31" clap_complete = "4.5.44" clap_complete_nushell = "4.4.2" clap_mangen = "0.2.15" clearscreen = "4.0.4" dashmap = "6.1.0" dirs = "6.0.0" dunce = "1.0.4" foldhash = "0.1.5" # needs to be in sync with jaq's requirement futures = "0.3.29" humantime = "2.1.0" indexmap = "2.10.0" # needs to be in sync with jaq's requirement jaq-core = "2.1.0" jaq-json = { version = "1.1.0", features = ["serde_json"] } jaq-std = "2.1.0" notify-rust = "4.11.7" serde_json = "1.0.138" tempfile = "3.16.0" termcolor = "1.4.0" tracing = "0.1.40" tracing-appender = "0.2.3" which = "8.0.0" [dependencies.blake3] version = "1.3.3" features = ["rayon"] [dependencies.clap] version = "4.4.7" features = ["cargo", "derive", "env", "wrap_help"] [dependencies.console-subscriber] version = "0.5.0" optional = true [dependencies.eyra] version = "0.22.0" features = ["log", "env_logger"] optional = true [dependencies.ignore-files] version = "3.0.5" path = "../ignore-files" 
[dependencies.miette] version = "7.5.0" features = ["fancy"] [dependencies.pid1] version = "0.1.1" optional = true [dependencies.project-origins] version = "1.4.2" path = "../project-origins" [dependencies.watchexec] version = "8.2.0" path = "../lib" [dependencies.watchexec-events] version = "6.1.0" path = "../events" features = ["serde"] [dependencies.watchexec-signals] version = "5.0.1" path = "../signals" [dependencies.watchexec-filterer-globset] version = "8.0.0" path = "../filterer/globset" [dependencies.tokio] version = "1.33.0" features = [ "fs", "io-std", "process", "net", "rt", "rt-multi-thread", "signal", "sync", ] [dependencies.tracing-subscriber] version = "0.3.6" features = [ "env-filter", "fmt", "json", "tracing-log", "ansi", ] [target.'cfg(unix)'.dependencies] libc = "0.2.74" nix = { version = "0.30.1", features = ["net"] } [target.'cfg(windows)'.dependencies] socket2 = "0.6.1" uuid = { version = "1.13.1", features = ["v4"] } windows-sys = { version = ">= 0.59.0, < 0.62.0", features = ["Win32_Networking_WinSock"] } [target.'cfg(target_env = "musl")'.dependencies] mimalloc = "0.1.39" [build-dependencies] embed-resource = "3.0.1" [build-dependencies.bosion] version = "2.0.0" path = "../bosion" [dev-dependencies] tracing-test = "0.2.4" uuid = { workspace = true, features = [ "v4", "fast-rng" ] } rand = { workspace = true } [features] default = ["pid1"] ## Build using Eyra's pure-Rust libc eyra = ["dep:eyra"] ## Enables PID1 handling. pid1 = ["dep:pid1"] ## Enables logging for PID1 handling. pid1-withlog = ["pid1"] ## For debugging only: enables the Tokio Console. 
dev-console = ["dep:console-subscriber"] [package.metadata.binstall] pkg-url = "{ repo }/releases/download/v{ version }/watchexec-{ version }-{ target }.{ archive-format }" bin-dir = "watchexec-{ version }-{ target }/{ bin }{ binary-ext }" pkg-fmt = "txz" [package.metadata.binstall.overrides.x86_64-pc-windows-msvc] pkg-fmt = "zip" [package.metadata.deb] maintainer = "Félix Saparelli " license-file = ["../../LICENSE", "0"] section = "utility" depends = "libc6, libgcc-s1" # not needed for musl, but see below # conf-files = [] # look me up when config file lands assets = [ ["../../target/release/watchexec", "usr/bin/watchexec", "755"], ["README.md", "usr/share/doc/watchexec/README", "644"], ["../../doc/watchexec.1.md", "usr/share/doc/watchexec/watchexec.1.md", "644"], ["../../doc/watchexec.1", "usr/share/man/man1/watchexec.1", "644"], ["../../completions/bash", "usr/share/bash-completion/completions/watchexec", "644"], ["../../completions/fish", "usr/share/fish/vendor_completions.d/watchexec.fish", "644"], ["../../completions/zsh", "usr/share/zsh/site-functions/_watchexec", "644"], ["../../doc/logo.svg", "usr/share/icons/hicolor/scalable/apps/watchexec.svg", "644"], ] [package.metadata.generate-rpm] assets = [ { source = "../../target/release/watchexec", dest = "/usr/bin/watchexec", mode = "755" }, { source = "README.md", dest = "/usr/share/doc/watchexec/README", mode = "644", doc = true }, { source = "../../doc/watchexec.1.md", dest = "/usr/share/doc/watchexec/watchexec.1.md", mode = "644", doc = true }, { source = "../../doc/watchexec.1", dest = "/usr/share/man/man1/watchexec.1", mode = "644" }, { source = "../../completions/bash", dest = "/usr/share/bash-completion/completions/watchexec", mode = "644" }, { source = "../../completions/fish", dest = "/usr/share/fish/vendor_completions.d/watchexec.fish", mode = "644" }, { source = "../../completions/zsh", dest = "/usr/share/zsh/site-functions/_watchexec", mode = "644" }, { source = "../../doc/logo.svg", dest = 
"/usr/share/icons/hicolor/scalable/apps/watchexec.svg", mode = "644" }, # set conf = true for config file when that lands ] auto-req = "disabled" # technically incorrect when using musl, but these are probably # present on every rpm-using system, so let's worry about it if # someone asks. [package.metadata.generate-rpm.requires] glibc = "*" libgcc = "*" [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" doc_markdown = "allow" ================================================ FILE: crates/cli/README.md ================================================ # Watchexec CLI A simple standalone tool that watches a path and runs a command whenever it detects modifications. Example use cases: * Automatically run unit tests * Run linters/syntax checkers ## Features * Simple invocation and use * Runs on Linux, Mac, Windows, and more * Monitors current directory and all subdirectories for changes * Uses efficient event polling mechanism (on Linux, Mac, Windows, BSD) * Coalesces multiple filesystem events into one, for editors that use swap/backup files during saving * By default, uses `.gitignore`, `.ignore`, and other such files to determine which files to ignore notifications for * Support for watching files with a specific extension * Support for filtering/ignoring events based on [glob patterns](https://docs.rs/globset/*/globset/#syntax) * Launches the command in a new process group (can be disabled with `--no-process-group`) * Optionally clears screen between executions * Optionally restarts the command with every modification (good for servers) * Optionally sends a desktop notification on command start and end * Does not require a language runtime * Sets the following environment variables in 
the process: `$WATCHEXEC_COMMON_PATH` is set to the longest common path of all of the below variables, and so should be prepended to each path to obtain the full/real path.

| Variable name | Event kind |
|---|---|
| `$WATCHEXEC_CREATED_PATH` | files/folders were created |
| `$WATCHEXEC_REMOVED_PATH` | files/folders were removed |
| `$WATCHEXEC_RENAMED_PATH` | files/folders were renamed |
| `$WATCHEXEC_WRITTEN_PATH` | files/folders were modified |
| `$WATCHEXEC_META_CHANGED_PATH` | files/folders' metadata were modified |
| `$WATCHEXEC_OTHERWISE_CHANGED_PATH` | every other kind of event |

These variables may contain multiple paths: these are separated by the platform's path separator, as with the `PATH` system environment variable. On Unix that is `:`, and on Windows `;`. Within each variable, paths are deduplicated and sorted in binary order (i.e. neither Unicode nor locale aware). This can be disabled with `--emit-events-to=none` or changed to JSON events on STDIN with `--emit-events-to=json-stdio`.
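As a sketch of consuming these variables from a watched command, the full paths can be rebuilt by splitting on the separator and prepending the common path (the values below are hand-written samples, and a Unix `:` separator is assumed; watchexec sets the real values):

```shell
#!/usr/bin/env bash
# Sketch: rebuilding full paths from watchexec's environment variables.
# These are hand-written sample values; watchexec sets the real ones.
WATCHEXEC_COMMON_PATH=/home/user/project
WATCHEXEC_WRITTEN_PATH=/src/main.rs:/src/lib.rs

# Split on the Unix path separator and prepend the common path.
IFS=':' read -ra paths <<< "$WATCHEXEC_WRITTEN_PATH"
for p in "${paths[@]}"; do
    echo "modified: ${WATCHEXEC_COMMON_PATH}${p}"
done
```

The same approach works for the other `$WATCHEXEC_*_PATH` variables; on Windows, split on `;` instead.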
## Anti-Features * Not tied to any particular language or ecosystem * Not tied to Git or the presence of a repository/project * Does not require a cryptic command line involving `xargs` ## Usage Examples Watch all JavaScript, CSS and HTML files in the current directory and all subdirectories for changes, running `make` when a change is detected: $ watchexec --exts js,css,html make Call `make test` when any file changes in this directory/subdirectory, except for everything below `target`: $ watchexec -i "target/**" make test Call `ls -la` when any file changes in this directory/subdirectory: $ watchexec -- ls -la Call/restart `python server.py` when any Python file in the current directory (and all subdirectories) changes: $ watchexec -e py -r python server.py Call/restart `my_server` when any file in the current directory (and all subdirectories) changes, sending `SIGKILL` to stop the command: $ watchexec -r --stop-signal SIGKILL my_server Send a SIGHUP to the command upon changes (note: with `-n` we're executing `my_server` directly, instead of wrapping it in a shell): $ watchexec -n --signal SIGHUP my_server Run `make` when any file changes, using the `.gitignore` file in the current directory to filter: $ watchexec make Run `make` when any file in `lib` or `src` changes: $ watchexec -w lib -w src make Run `bundle install` when the `Gemfile` changes: $ watchexec -w Gemfile bundle install Run two commands: $ watchexec 'date; make' Get desktop ("toast") notifications when the command starts and finishes: $ watchexec -N go build Only run when files are created: $ watchexec --fs-events create -- s3 sync . s3://my-bucket If you come from `entr`, note that the watchexec command is run in a shell by default.
You can use `-n` or `--shell=none` to not do that:

    $ watchexec -n -- echo ';' lorem ipsum

On Windows, you may prefer to use PowerShell:

    $ watchexec --shell=pwsh -- Test-Connection example.com

You can eschew running commands entirely and get a stream of events to process on your own:

```console
$ watchexec --emit-events-to=json-stdio --only-emit-events
{"tags":[{"kind":"source","source":"filesystem"},{"kind":"fs","simple":"modify","full":"Modify(Data(Any))"},{"kind":"path","absolute":"/home/code/rust/watchexec/crates/cli/README.md","filetype":"file"}]}
{"tags":[{"kind":"source","source":"filesystem"},{"kind":"fs","simple":"modify","full":"Modify(Data(Any))"},{"kind":"path","absolute":"/home/code/rust/watchexec/crates/lib/Cargo.toml","filetype":"file"}]}
{"tags":[{"kind":"source","source":"filesystem"},{"kind":"fs","simple":"modify","full":"Modify(Data(Any))"},{"kind":"path","absolute":"/home/code/rust/watchexec/crates/cli/src/args.rs","filetype":"file"}]}
```

Print the time commands take to run:

```console
$ watchexec --timings -- make
[Running: make]
...
[Command was successful, lasted 52.748081074s]
```

## Installation

### Package manager

Watchexec is in many package managers. A full list of [known packages](../../doc/packages.md) is available, and there may be more out there!
Please contribute any you find to the list :)

Common package managers:

- Alpine: `$ apk add watchexec`
- ArchLinux: `$ pacman -S watchexec`
- Nix: `$ nix-shell -p watchexec`
- Debian/Ubuntu via [apt.cli.rs](https://apt.cli.rs): `$ apt install watchexec`
- Homebrew on Mac: `$ brew install watchexec`
- Chocolatey on Windows: `#> choco install watchexec`

### [Binstall](https://github.com/cargo-bins/cargo-binstall)

    $ cargo binstall watchexec-cli

### Pre-built binaries

Use the download section on [GitHub](https://github.com/watchexec/watchexec/releases/latest) or [the website](https://watchexec.github.io/downloads/) to obtain the package appropriate for your platform and architecture, extract it, and place it in your `PATH`. There are also Debian/Ubuntu (DEB) and Fedora/RedHat (RPM) packages. Checksums and signatures are available.

### Cargo (from source)

Only the latest Rust stable is supported, but older versions may work.

    $ cargo install watchexec-cli

## Shell completions

Currently available shell completions:

- bash: `completions/bash` should be installed to `/usr/share/bash-completion/completions/watchexec`
- elvish: `completions/elvish` should be installed to `$XDG_CONFIG_HOME/elvish/completions/`
- fish: `completions/fish` should be installed to `/usr/share/fish/vendor_completions.d/watchexec.fish`
- nu: `completions/nu` should be installed to `$XDG_CONFIG_HOME/nu/completions/`
- powershell: `completions/powershell` should be installed to `$PROFILE/`
- zsh: `completions/zsh` should be installed to `/usr/share/zsh/site-functions/_watchexec`

If not bundled, you can generate completions for your shell with `watchexec --completions <shell>`.

## Manual

There's a manual page at `doc/watchexec.1`. Install it to `/usr/share/man/man1/`. If not bundled, you can generate a manual page with `watchexec --manual > /path/to/watchexec.1`, or view it inline with `watchexec --manual` (requires `man`). You can also [read a text version](../../doc/watchexec.1.md).
Note that it is automatically generated from the help text, so it is not as pretty as a carefully hand-written one.

## Advanced builds

These are additional options available with custom builds by setting features:

### PID1

If you're using Watchexec as PID1 (most frequently in containers or namespaces), and it's not doing what you expect, you can create a build with PID1 early logging: `--features pid1-withlog`. If you don't need PID1 support, or if you're doing something that conflicts with this program's PID1 support, you can disable it with `--no-default-features`.

### Eyra

[Eyra](https://github.com/sunfishcode/eyra) is a system to build Linux programs with no dependency on C code (in the libc path). To build Watchexec like this, use `--features eyra` and a Nightly compiler. This feature also lets you get early logging into program startup, with `RUST_LOG=trace`.

================================================
FILE: crates/cli/build.rs
================================================
fn main() {
	embed_resource::compile("watchexec-manifest.rc", embed_resource::NONE)
		.manifest_optional()
		.unwrap();
	bosion::gather();

	if std::env::var("CARGO_FEATURE_EYRA").is_ok() {
		println!("cargo:rustc-link-arg=-nostartfiles");
	}
}

================================================
FILE: crates/cli/integration/env-unix.sh
================================================
#!/bin/bash
set -euxo pipefail
watchexec=${WATCHEXEC_BIN:-watchexec}

$watchexec -1 --env FOO=BAR echo '$FOO' | grep BAR

================================================
FILE: crates/cli/integration/no-shell-unix.sh
================================================
#!/bin/bash
set -euxo pipefail
watchexec=${WATCHEXEC_BIN:-watchexec}

$watchexec -1 -n echo 'foo bar' | grep 'foo bar'

================================================
FILE: crates/cli/integration/socket.sh
================================================
#!/bin/bash
set -euxo pipefail
watchexec=${WATCHEXEC_BIN:-watchexec}
test_socketfd=${TEST_SOCKETFD_BIN:-test-socketfd}

$watchexec --socket 18080 -1 -- $test_socketfd tcp
$watchexec --socket udp::18080 -1 -- $test_socketfd udp
$watchexec --socket 18080 --socket 28080 -1 -- $test_socketfd tcp tcp
$watchexec --socket 18080 --socket 28080 --socket udp::38080 -1 -- $test_socketfd tcp tcp udp

if [[ "$TEST_PLATFORM" = "linux" ]]; then
	$watchexec --socket 127.0.1.1:18080 -1 -- $test_socketfd tcp
fi

================================================
FILE: crates/cli/integration/stdin-quit-unix.sh
================================================
#!/bin/bash
set -euxo pipefail
watchexec=${WATCHEXEC_BIN:-watchexec}

timeout -s9 30s sh -c "sleep 10 | $watchexec --stdin-quit echo"

================================================
FILE: crates/cli/integration/trailingargfile-unix.sh
================================================
#!/bin/bash
set -euxo pipefail
watchexec=${WATCHEXEC_BIN:-watchexec}

$watchexec -1 -- echo @trailingargfile

================================================
FILE: crates/cli/release.toml
================================================
pre-release-commit-message = "release: cli v{{version}}"
tag-prefix = ""
tag-message = "watchexec {{version}}"
pre-release-hook = ["sh", "-c", "cd ../.. && bin/completions && bin/manpage"]

[[pre-release-replacements]]
file = "watchexec.exe.manifest"
search = "^ version=\"[\\d.]+[.]0\""
replace = " version=\"{{version}}.0\""
prerelease = false
max = 1

[[pre-release-replacements]]
file = "../../CITATION.cff"
search = "^version: \"?[\\d.]+(-.+)?\"?"
replace = "version: \"{{version}}\""
prerelease = true
max = 1

[[pre-release-replacements]]
file = "../../CITATION.cff"
search = "^date-released: .+"
replace = "date-released: {{date}}"
prerelease = true
max = 1

================================================
FILE: crates/cli/run-tests.sh
================================================
#!/bin/bash
set -euo pipefail

export WATCHEXEC_BIN=$(realpath ${WATCHEXEC_BIN:-$(which watchexec)})
export TEST_SOCKETFD_BIN=$(realpath ${TEST_SOCKETFD_BIN:-$(which test-socketfd)})
export TEST_PLATFORM="${1:-linux}"

cd "$(dirname "${BASH_SOURCE[0]}")/integration"
for test in *.sh; do
	if [[ "$test" == *-unix.sh && "$TEST_PLATFORM" = "windows" ]]; then
		echo "Skipping $test as it requires unix"
		continue
	fi
	if [[ "$test" == *-win.sh && "$TEST_PLATFORM" != "windows" ]]; then
		echo "Skipping $test as it requires windows"
		continue
	fi

	echo
	echo
	echo "======= Testing $test ======="
	./$test
done

================================================
FILE: crates/cli/src/args/command.rs
================================================
use std::{
	ffi::{OsStr, OsString},
	mem::take,
	path::PathBuf,
};

use clap::{
	builder::TypedValueParser,
	error::{Error, ErrorKind},
	Parser, ValueEnum, ValueHint,
};
use miette::{IntoDiagnostic, Result};
use tracing::{info, warn};
use watchexec_signals::Signal;

use crate::socket::{SocketSpec, SocketSpecValueParser};

use super::{TimeSpan, OPTSET_COMMAND};

#[derive(Debug, Clone, Parser)]
pub struct CommandArgs {
	/// Use a different shell
	///
	/// By default, Watchexec will use '$SHELL' if it's defined or a default of 'sh' on Unix-likes,
	/// and either 'pwsh', 'powershell', or 'cmd' (CMD.EXE) on Windows, depending on what Watchexec
	/// detects is the running shell.
	///
	/// With this option, you can override that and use a different shell, for example one with more
	/// features or one which has your custom aliases and functions.
/// /// If the value has spaces, it is parsed as a command line, and the first word used as the /// shell program, with the rest as arguments to the shell. /// /// The command is run with the '-c' flag (except for 'cmd' on Windows, where it's '/C'). /// /// The special value 'none' can be used to disable shell use entirely. In that case, the /// command provided to Watchexec will be parsed, with the first word being the executable and /// the rest being the arguments, and executed directly. Note that this parsing is rudimentary, /// and may not work as expected in all cases. /// /// Using 'none' is a little more efficient and can enable a stricter interpretation of the /// input, but it also means that you can't use shell features like globbing, redirection, /// control flow, logic, or pipes. /// /// Examples: /// /// Use without shell: /// /// $ watchexec -n -- zsh -x -o shwordsplit scr /// /// Use with powershell core: /// /// $ watchexec --shell=pwsh -- Test-Connection localhost /// /// Use with CMD.exe: /// /// $ watchexec --shell=cmd -- dir /// /// Use with a different unix shell: /// /// $ watchexec --shell=bash -- 'echo $BASH_VERSION' /// /// Use with a unix shell and options: /// /// $ watchexec --shell='zsh -x -o shwordsplit' -- scr #[arg( long, help_heading = OPTSET_COMMAND, value_name = "SHELL", display_order = 190, )] pub shell: Option, /// Shorthand for '--shell=none' #[arg( short = 'n', help_heading = OPTSET_COMMAND, display_order = 140, )] pub no_shell: bool, /// Deprecated shorthand for '--emit-events=none' /// /// This is the old way to disable event emission into the environment. See '--emit-events' for /// more. Will be removed at next major release. #[arg( long, help_heading = OPTSET_COMMAND, hide = true, // deprecated )] pub no_environment: bool, /// Add env vars to the command /// /// This is a convenience option for setting environment variables for the command, without /// setting them for the Watchexec process itself. 
/// /// Use key=value syntax. Multiple variables can be set by repeating the option. #[arg( long, short = 'E', help_heading = OPTSET_COMMAND, value_name = "KEY=VALUE", value_parser = EnvVarValueParser, display_order = 50, )] pub env: Vec, /// Don't use a process group /// /// By default, Watchexec will run the command in a process group, so that signals and /// terminations are sent to all processes in the group. Sometimes that's not what you want, and /// you can disable the behaviour with this option. /// /// Deprecated, use '--wrap-process=none' instead. #[arg( long, help_heading = OPTSET_COMMAND, display_order = 141, )] pub no_process_group: bool, /// Configure how the process is wrapped /// /// By default, Watchexec will run the command in a session on Mac, in a process group in Unix, /// and in a Job Object in Windows. /// /// Some Unix programs prefer running in a session, while others do not work in a process group. /// /// Use 'group' to use a process group, 'session' to use a process session, and 'none' to run /// the command directly. On Windows, either of 'group' or 'session' will use a Job Object. /// /// If you find you need to specify this frequently for different kinds of programs, file an /// issue at . As errors of this nature are hard to /// debug and can be highly environment-dependent, reports from *multiple affected people* are /// more likely to be actioned promptly. Ask your friends/colleagues! #[arg( long, help_heading = OPTSET_COMMAND, value_name = "MODE", default_value = WRAP_DEFAULT, display_order = 231, )] pub wrap_process: WrapMode, /// Signal to send to stop the command /// /// This is used by 'restart' and 'signal' modes of '--on-busy-update' (unless '--signal' is /// provided). The restart behaviour is to send the signal, wait for the command to exit, and if /// it hasn't exited after some time (see '--timeout-stop'), forcefully terminate it. /// /// The default on unix is "SIGTERM". 
/// /// Input is parsed as a full signal name (like "SIGTERM"), a short signal name (like "TERM"), /// or a signal number (like "15"). All input is case-insensitive. /// /// On Windows this option is technically supported but only supports the "KILL" event, as /// Watchexec cannot yet deliver other events. Windows doesn't have signals as such; instead it /// has termination (here called "KILL" or "STOP") and "CTRL+C", "CTRL+BREAK", and "CTRL+CLOSE" /// events. For portability the unix signals "SIGKILL", "SIGINT", "SIGTERM", and "SIGHUP" are /// respectively mapped to these. #[arg( long, help_heading = OPTSET_COMMAND, value_name = "SIGNAL", display_order = 191, )] pub stop_signal: Option, /// Time to wait for the command to exit gracefully /// /// This is used by the 'restart' mode of '--on-busy-update'. After the graceful stop signal /// is sent, Watchexec will wait for the command to exit. If it hasn't exited after this time, /// it is forcefully terminated. /// /// Takes a unit-less value in seconds, or a time span value such as "5min 20s". /// Providing a unit-less value is deprecated and will warn; it will be an error in the future. /// /// The default is 10 seconds. Set to 0 to immediately force-kill the command. /// /// This has no practical effect on Windows as the command is always forcefully terminated; see /// '--stop-signal' for why. #[arg( long, help_heading = OPTSET_COMMAND, default_value = "10s", hide_default_value = true, value_name = "TIMEOUT", display_order = 192, )] pub stop_timeout: TimeSpan, /// Kill the command if it runs longer than this duration /// /// Takes a time span value such as "30s", "5min", or "1h 30m". /// /// When the timeout is reached, the command is gracefully stopped using --stop-signal, then /// forcefully terminated after --stop-timeout if still running. /// /// Each run of the command has its own independent timeout. 
#[arg( long, help_heading = OPTSET_COMMAND, value_name = "TIMEOUT", display_order = 193, )] pub timeout: Option, /// Sleep before running the command /// /// This option will cause Watchexec to sleep for the specified amount of time before running /// the command, after an event is detected. This is like using "sleep 5 && command" in a shell, /// but portable and slightly more efficient. /// /// Takes a unit-less value in seconds, or a time span value such as "2min 5s". /// Providing a unit-less value is deprecated and will warn; it will be an error in the future. #[arg( long, help_heading = OPTSET_COMMAND, value_name = "DURATION", display_order = 40, )] pub delay_run: Option, /// Set the working directory /// /// By default, the working directory of the command is the working directory of Watchexec. You /// can change that with this option. Note that paths may be less intuitive to use with this. #[arg( long, help_heading = OPTSET_COMMAND, value_hint = ValueHint::DirPath, value_name = "DIRECTORY", display_order = 230, )] pub workdir: Option, /// Provide a socket to the command /// /// This implements the systemd socket-passing protocol, like with `systemfd`: sockets are /// opened from the watchexec process, and then passed to the commands it runs. This lets you /// keep sockets open and avoid address reuse issues or dropping packets. /// /// This option can be supplied multiple times, to open multiple sockets. /// /// The value can be either of `PORT` (opens a TCP listening socket at that port), `HOST:PORT` /// (specify a host IP address; IPv6 addresses can be specified `[bracketed]`), `TYPE::PORT` or /// `TYPE::HOST:PORT` (specify a socket type, `tcp` / `udp`). /// /// This integration only provides basic support, if you want more control you should use the /// `systemfd` tool from , upon which this is based. 
The /// syntax here and the spawning behaviour is identical to `systemfd`, and both watchexec and /// systemfd are compatible implementations of the systemd socket-activation protocol. /// /// Watchexec does _not_ set the `LISTEN_PID` variable on unix, which means any child process of /// your command could accidentally bind to the sockets, unless the `LISTEN_*` variables are /// removed from the environment. #[arg( long, help_heading = OPTSET_COMMAND, value_name = "PORT", value_parser = SocketSpecValueParser, display_order = 60, )] pub socket: Vec, } impl CommandArgs { pub(crate) async fn normalise(&mut self) -> Result<()> { if self.no_process_group { warn!("--no-process-group is deprecated"); self.wrap_process = WrapMode::None; } let workdir = if let Some(w) = take(&mut self.workdir) { w } else { let curdir = std::env::current_dir().into_diagnostic()?; dunce::canonicalize(curdir).into_diagnostic()? }; info!(path=?workdir, "effective working directory"); self.workdir = Some(workdir); debug_assert!(self.workdir.is_some()); Ok(()) } } #[derive(Clone, Copy, Debug, Default, ValueEnum)] pub enum WrapMode { #[default] Group, Session, None, } pub const WRAP_DEFAULT: &str = if cfg!(target_os = "macos") { "session" } else { "group" }; #[derive(Clone, Debug)] pub struct EnvVar { pub key: String, pub value: OsString, } #[derive(Clone)] pub(crate) struct EnvVarValueParser; impl TypedValueParser for EnvVarValueParser { type Value = EnvVar; fn parse_ref( &self, _cmd: &clap::Command, _arg: Option<&clap::Arg>, value: &OsStr, ) -> Result { let value = value .to_str() .ok_or_else(|| Error::raw(ErrorKind::ValueValidation, "invalid UTF-8"))?; let (key, value) = value .split_once('=') .ok_or_else(|| Error::raw(ErrorKind::ValueValidation, "missing = separator"))?; Ok(EnvVar { key: key.into(), value: value.into(), }) } } ================================================ FILE: crates/cli/src/args/events.rs ================================================ use std::{ffi::OsStr, 
path::PathBuf}; use clap::{ builder::TypedValueParser, error::ErrorKind, Arg, Command, CommandFactory, Parser, ValueEnum, }; use miette::Result; use tracing::warn; use watchexec_signals::Signal; use super::{command::CommandArgs, filtering::FilteringArgs, TimeSpan, OPTSET_EVENTS}; #[derive(Debug, Clone, Parser)] pub struct EventsArgs { /// What to do when receiving events while the command is running /// /// Default is to 'do-nothing', which ignores events while the command is running, so that /// changes that occur due to the command are ignored, like compilation outputs. You can also /// use 'queue' which will run the command once again when the current run has finished if any /// events occur while it's running, or 'restart', which terminates the running command and starts /// a new one. Finally, there's 'signal', which only sends a signal; this can be useful with /// programs that can reload their configuration without a full restart. /// /// The signal can be specified with the '--signal' option. #[arg( short, long, help_heading = OPTSET_EVENTS, default_value = "do-nothing", hide_default_value = true, value_name = "MODE", display_order = 150, )] pub on_busy_update: OnBusyUpdate, /// Restart the process if it's still running /// /// This is a shorthand for '--on-busy-update=restart'. #[arg( short, long, help_heading = OPTSET_EVENTS, conflicts_with_all = ["on_busy_update"], display_order = 180, )] pub restart: bool, /// Send a signal to the process when it's still running /// /// Specify a signal to send to the process when it's still running. This implies /// '--on-busy-update=signal'; otherwise the signal used when that mode is 'restart' is /// controlled by '--stop-signal'. /// /// See the long documentation for '--stop-signal' for syntax. /// /// Signals are not supported on Windows at the moment, and will always be overridden to 'kill'. /// See '--stop-signal' for more on Windows "signals". 
#[arg(
	short,
	long,
	help_heading = OPTSET_EVENTS,
	conflicts_with_all = ["restart"],
	value_name = "SIGNAL",
	display_order = 190,
)]
pub signal: Option<Signal>,

/// Translate signals from the OS to signals to send to the command
///
/// Takes a pair of signal names, separated by a colon, such as "TERM:INT" to map SIGTERM to
/// SIGINT. The first signal is the one received by watchexec, and the second is the one sent to
/// the command. The second can be omitted to discard the first signal, such as "TERM:" to
/// not do anything on SIGTERM.
///
/// If SIGINT or SIGTERM are mapped, then they no longer quit Watchexec. Besides making it hard
/// to quit Watchexec itself, this is useful to pass a Ctrl-C to the command without also
/// terminating Watchexec and the underlying program with it, e.g. with "INT:INT".
///
/// This option can be specified multiple times to map multiple signals.
///
/// Signal syntax is case-insensitive for short names (like "TERM", "USR2") and long names (like
/// "SIGKILL", "SIGHUP"). Signal numbers are also supported (like "15", "31"). On Windows, the
/// forms "STOP", "CTRL+C", and "CTRL+BREAK" are also supported to receive, but Watchexec cannot
/// yet deliver other "signals" than a STOP.
#[arg(
	long = "map-signal",
	help_heading = OPTSET_EVENTS,
	value_name = "SIGNAL:SIGNAL",
	value_parser = SignalMappingValueParser,
	display_order = 130,
)]
pub signal_map: Vec<SignalMapping>,

/// Time to wait for new events before taking action
///
/// When an event is received, Watchexec will wait for up to this amount of time before handling
/// it (such as running the command). This is essential as what you might perceive as a single
/// change may actually emit many events, and without this behaviour, Watchexec would run much
/// too often. Additionally, it's not infrequent that file writes are not atomic, and each write
/// may emit an event, so this is a good way to avoid running a command while a file is
/// partially written.
/// /// An alternative use is to set a high value (like "30min" or longer), to save power or /// bandwidth on intensive tasks, like an ad-hoc backup script. In those use cases, note that /// every accumulated event will build up in memory. /// /// Takes a unit-less value in milliseconds, or a time span value such as "5sec 20ms". /// Providing a unit-less value is deprecated and will warn; it will be an error in the future. /// /// The default is 50 milliseconds. Setting to 0 is highly discouraged. #[arg( long, short, help_heading = OPTSET_EVENTS, default_value = "50ms", hide_default_value = true, value_name = "TIMEOUT", display_order = 40, )] pub debounce: TimeSpan<1_000_000>, /// Exit when stdin closes /// /// This watches the stdin file descriptor for EOF, and exits Watchexec gracefully when it is /// closed. This is used by some process managers to avoid leaving zombie processes around. #[arg( long, help_heading = OPTSET_EVENTS, display_order = 191, )] pub stdin_quit: bool, /// Respond to keypresses to quit, restart, or pause /// /// In interactive mode, Watchexec listens for keypresses and responds to them. Currently /// supported keys are: 'r' to restart the command, 'p' to toggle pausing the watch, and 'q' /// to quit. This requires a terminal (TTY) and puts stdin into raw mode, so the child process /// will not receive stdin input. #[arg( long, short = 'I', help_heading = OPTSET_EVENTS, display_order = 90, )] pub interactive: bool, /// Exit when the command has an error /// /// By default, Watchexec will continue to watch and re-run the command after the command /// exits, regardless of its exit status. With this option, it will instead exit when the /// command completes with any non-success exit status. /// /// This is useful when running Watchexec in a process manager or container, where you want /// the container to restart when the command fails rather than hang waiting for file changes. 
#[arg( long, help_heading = OPTSET_EVENTS, display_order = 91, )] pub exit_on_error: bool, /// Wait until first change before running command /// /// By default, Watchexec will run the command once immediately. With this option, it will /// instead wait until an event is detected before running the command as normal. #[arg( long, short, help_heading = OPTSET_EVENTS, display_order = 161, )] pub postpone: bool, /// Poll for filesystem changes /// /// By default, and where available, Watchexec uses the operating system's native file system /// watching capabilities. This option disables that and instead uses a polling mechanism, which /// is less efficient but can work around issues with some file systems (like network shares) or /// edge cases. /// /// Optionally takes a unit-less value in milliseconds, or a time span value such as "2s 500ms", /// to use as the polling interval. If not specified, the default is 30 seconds. /// Providing a unit-less value is deprecated and will warn; it will be an error in the future. /// /// Aliased as '--force-poll'. #[arg( long, help_heading = OPTSET_EVENTS, alias = "force-poll", num_args = 0..=1, default_missing_value = "30s", value_name = "INTERVAL", display_order = 160, )] pub poll: Option>, /// Configure event emission /// /// Watchexec can emit event information when running a command, which can be used by the child /// process to target specific changed files. /// /// One thing to take care with is assuming inherent behaviour where there is only chance. /// Notably, it could appear as if the `RENAMED` variable contains both the original and the new /// path being renamed. In previous versions, it would even appear on some platforms as if the /// original always came before the new. However, none of this was true. 
It's impossible to /// reliably and portably know which changed path is the old or new, "half" renames may appear /// (only the original, only the new), "unknown" renames may appear (change was a rename, but /// whether it was the old or new isn't known), rename events might split across two debouncing /// boundaries, and so on. /// /// This option controls where that information is emitted. It defaults to 'none', which doesn't /// emit event information at all. The other options are 'environment' (deprecated), 'stdio', /// 'file', 'json-stdio', and 'json-file'. /// /// The 'stdio' and 'file' modes are text-based: 'stdio' writes absolute paths to the stdin of /// the command, one per line, each prefixed with `create:`, `remove:`, `rename:`, `modify:`, /// or `other:`, then closes the handle; 'file' writes the same thing to a temporary file, and /// its path is given with the $WATCHEXEC_EVENTS_FILE environment variable. /// /// There are also two JSON modes, which are based on JSON objects and can represent the full /// set of events Watchexec handles. Here's an example of a folder being created on Linux: /// /// ```json /// { /// "tags": [ /// { /// "kind": "path", /// "absolute": "/home/user/your/new-folder", /// "filetype": "dir" /// }, /// { /// "kind": "fs", /// "simple": "create", /// "full": "Create(Folder)" /// }, /// { /// "kind": "source", /// "source": "filesystem", /// } /// ], /// "metadata": { /// "notify-backend": "inotify" /// } /// } /// ``` /// /// The fields are as follows: /// /// - `tags`, structured event data. /// - `tags[].kind`, which can be: /// * 'path', along with: /// + `absolute`, an absolute path. /// + `filetype`, a file type if known ('dir', 'file', 'symlink', 'other'). /// * 'fs': /// + `simple`, the "simple" event type ('access', 'create', 'modify', 'remove', or 'other'). /// + `full`, the "full" event type, which is too complex to fully describe here, but looks like 'General(Precise(Specific))'. 
/// * 'source', along with: /// + `source`, the source of the event ('filesystem', 'keyboard', 'mouse', 'os', 'time', 'internal'). /// * 'keyboard', along with: /// + `keycode`. Currently only the value 'eof' is supported. /// * 'process', for events caused by processes: /// + `pid`, the process ID. /// * 'signal', for signals sent to Watchexec: /// + `signal`, the normalised signal name ('hangup', 'interrupt', 'quit', 'terminate', 'user1', 'user2'). /// * 'completion', for when a command ends: /// + `disposition`, the exit disposition ('success', 'error', 'signal', 'stop', 'exception', 'continued'). /// + `code`, the exit, signal, stop, or exception code. /// - `metadata`, additional information about the event. /// /// The 'json-stdio' mode will emit JSON events to the standard input of the command, one per /// line, then close stdin. The 'json-file' mode will create a temporary file, write the /// events to it, and provide the path to the file with the $WATCHEXEC_EVENTS_FILE /// environment variable. /// /// Finally, the 'environment' mode was the default until 2.0. It sets environment variables /// with the paths of the affected files, for filesystem events: /// /// $WATCHEXEC_COMMON_PATH is set to the longest common path of all of the below variables, /// and so should be prepended to each path to obtain the full/real path. Then: /// /// - $WATCHEXEC_CREATED_PATH is set when files/folders were created /// - $WATCHEXEC_REMOVED_PATH is set when files/folders were removed /// - $WATCHEXEC_RENAMED_PATH is set when files/folders were renamed /// - $WATCHEXEC_WRITTEN_PATH is set when files/folders were modified /// - $WATCHEXEC_META_CHANGED_PATH is set when files/folders' metadata were modified /// - $WATCHEXEC_OTHERWISE_CHANGED_PATH is set for every other kind of pathed event /// /// Multiple paths are separated by the system path separator, ';' on Windows and ':' on unix. /// Within each variable, paths are deduplicated and sorted in binary order (i.e. 
neither /// Unicode nor locale aware). /// /// This is the legacy mode, is deprecated, and will be removed in the future. The environment /// is a very restricted space, while also limited in what it can usefully represent. Large /// numbers of files will either cause the environment to be truncated, or may error or crash /// the process entirely. The $WATCHEXEC_COMMON_PATH is also unintuitive, as demonstrated by the /// multiple confused queries that have landed in my inbox over the years. #[arg( long, help_heading = OPTSET_EVENTS, verbatim_doc_comment, default_value = "none", hide_default_value = true, value_name = "MODE", display_order = 50, )] pub emit_events_to: EmitEvents, } impl EventsArgs { pub(crate) fn normalise( &mut self, command: &CommandArgs, filtering: &FilteringArgs, only_emit_events: bool, ) -> Result<()> { if self.signal.is_some() { self.on_busy_update = OnBusyUpdate::Signal; } else if self.restart { self.on_busy_update = OnBusyUpdate::Restart; } if command.no_environment { warn!("--no-environment is deprecated"); self.emit_events_to = EmitEvents::None; } if only_emit_events && !matches!( self.emit_events_to, EmitEvents::JsonStdio | EmitEvents::Stdio ) { self.emit_events_to = EmitEvents::JsonStdio; } if self.stdin_quit && filtering.watch_file == Some(PathBuf::from("-")) { super::Args::command() .error( ErrorKind::InvalidValue, "stdin-quit cannot be used when --watch-file=-", ) .exit(); } if self.interactive && filtering.watch_file == Some(PathBuf::from("-")) { super::Args::command() .error( ErrorKind::InvalidValue, "interactive mode cannot be used when --watch-file=-", ) .exit(); } Ok(()) } } #[derive(Clone, Copy, Debug, Default, ValueEnum)] pub enum EmitEvents { #[default] Environment, Stdio, File, JsonStdio, JsonFile, None, } #[derive(Clone, Copy, Debug, Default, ValueEnum)] pub enum OnBusyUpdate { #[default] Queue, DoNothing, Restart, Signal, } #[derive(Clone, Copy, Debug)] pub struct SignalMapping { pub from: Signal, pub to: Option, } 
#[derive(Clone)] struct SignalMappingValueParser; impl TypedValueParser for SignalMappingValueParser { type Value = SignalMapping; fn parse_ref( &self, _cmd: &Command, _arg: Option<&Arg>, value: &OsStr, ) -> Result { let value = value .to_str() .ok_or_else(|| clap::error::Error::raw(ErrorKind::ValueValidation, "invalid UTF-8"))?; let (from, to) = value .split_once(':') .ok_or_else(|| clap::error::Error::raw(ErrorKind::ValueValidation, "missing ':'"))?; let from = from .parse::() .map_err(|sigparse| clap::error::Error::raw(ErrorKind::ValueValidation, sigparse))?; let to = if to.is_empty() { None } else { Some(to.parse::().map_err(|sigparse| { clap::error::Error::raw(ErrorKind::ValueValidation, sigparse) })?) }; Ok(Self::Value { from, to }) } } ================================================ FILE: crates/cli/src/args/filtering.rs ================================================ use std::{ collections::BTreeSet, mem::take, path::{Path, PathBuf}, }; use clap::{Parser, ValueEnum, ValueHint}; use miette::{IntoDiagnostic, Result}; use tokio::{ fs::File, io::{AsyncBufReadExt, BufReader}, }; use tracing::{debug, info}; use watchexec::{paths::PATH_SEPARATOR, WatchedPath}; use crate::filterer::parse::FilterProgram; use super::{command::CommandArgs, OPTSET_FILTERING}; #[derive(Debug, Clone, Parser)] pub struct FilteringArgs { #[doc(hidden)] #[arg(skip)] pub paths: Vec, /// Watch a specific file or directory /// /// By default, Watchexec watches the current directory. /// /// When watching a single file, it's often better to watch the containing directory instead, /// and filter on the filename. Some editors may replace the file with a new one when saving, /// and some platforms may not detect that or further changes. /// /// Upon starting, Watchexec resolves a "project origin" from the watched paths. See the help /// for '--project-origin' for more information. /// /// This option can be specified multiple times to watch multiple files or directories. 
/// /// The special value '/dev/null', provided as the only path watched, will cause Watchexec to /// not watch any paths. Other event sources (like signals or key events) may still be used. #[arg( short = 'w', long = "watch", help_heading = OPTSET_FILTERING, value_hint = ValueHint::AnyPath, value_name = "PATH", display_order = 230, )] pub recursive_paths: Vec<PathBuf>, /// Watch a specific directory, non-recursively /// /// Unlike '-w', folders watched with this option are not recursed into. /// /// This option can be specified multiple times to watch multiple directories non-recursively. #[arg( short = 'W', long = "watch-non-recursive", help_heading = OPTSET_FILTERING, value_hint = ValueHint::AnyPath, value_name = "PATH", display_order = 231, )] pub non_recursive_paths: Vec<PathBuf>, /// Watch files and directories from a file /// /// Each line in the file will be interpreted as if given to '-w'. /// /// For more complex uses (like watching non-recursively), use the argfile capability: build a /// file containing command-line options and pass it to watchexec with `@path/to/argfile`. /// /// The special value '-' will read from STDIN; this is incompatible with '--stdin-quit'. #[arg( short = 'F', long, help_heading = OPTSET_FILTERING, value_hint = ValueHint::AnyPath, value_name = "PATH", display_order = 232, )] pub watch_file: Option<PathBuf>, /// Don't load gitignores /// /// Among other VCS exclude files, like for Mercurial, Subversion, Bazaar, DARCS, Fossil. Note /// that Watchexec will detect which of these is in use, if any, and only load the relevant /// files. Both global (like '~/.gitignore') and local (like '.gitignore') files are considered. /// /// This option is useful if you want to watch files that are ignored by Git. #[arg( long, help_heading = OPTSET_FILTERING, display_order = 145, )] pub no_vcs_ignore: bool, /// Don't load project-local ignores /// /// This disables loading of project-local ignore files, like '.gitignore' or '.ignore' in the /// watched project.
This is contrasted with '--no-vcs-ignore', which disables loading of Git /// and other VCS ignore files, and with '--no-global-ignore', which disables loading of global /// or user ignore files, like '~/.gitignore' or '~/.config/watchexec/ignore'. /// /// Supported project ignore files: /// /// - Git: .gitignore at project root and child directories, .git/info/exclude, and the file pointed to by `core.excludesFile` in .git/config. /// - Mercurial: .hgignore at project root and child directories. /// - Bazaar: .bzrignore at project root. /// - Darcs: _darcs/prefs/boring /// - Fossil: .fossil-settings/ignore-glob /// - Ripgrep/Watchexec/generic: .ignore at project root and child directories. /// /// VCS ignore files (Git, Mercurial, Bazaar, Darcs, Fossil) are only used if the corresponding /// VCS is discovered to be in use for the project/origin. For example, a .bzrignore in a Git /// repository will be discarded. #[arg( long, help_heading = OPTSET_FILTERING, verbatim_doc_comment, display_order = 144, )] pub no_project_ignore: bool, /// Don't load global ignores /// /// This disables loading of global or user ignore files, like '~/.gitignore', /// '~/.config/watchexec/ignore', or '%APPDATA%\Bazaar\2.0\ignore'. Contrast with /// '--no-vcs-ignore' and '--no-project-ignore'. /// /// Supported global ignore files: /// /// - Git (if core.excludesFile is set): the file at that path /// - Git (otherwise): the first found of $XDG_CONFIG_HOME/git/ignore, %APPDATA%/.gitignore, %USERPROFILE%/.gitignore, $HOME/.config/git/ignore, $HOME/.gitignore. /// - Bazaar: the first found of %APPDATA%/Bazaar/2.0/ignore, $HOME/.bazaar/ignore. /// - Watchexec: the first found of $XDG_CONFIG_HOME/watchexec/ignore, %APPDATA%/watchexec/ignore, %USERPROFILE%/.watchexec/ignore, $HOME/.watchexec/ignore. /// /// Like for project files, Git and Bazaar global files will only be used for the corresponding /// VCS as used in the project.
#[arg( long, help_heading = OPTSET_FILTERING, verbatim_doc_comment, display_order = 142, )] pub no_global_ignore: bool, /// Don't use internal default ignores /// /// Watchexec has a set of default ignore patterns, such as editor swap files, `*.pyc`, `*.pyo`, /// `.DS_Store`, `.bzr`, `_darcs`, `.fossil-settings`, `.git`, `.hg`, `.pijul`, `.svn`, and /// Watchexec log files. #[arg( long, help_heading = OPTSET_FILTERING, display_order = 140, )] pub no_default_ignore: bool, /// Don't discover ignore files at all /// /// This is a shorthand for '--no-global-ignore', '--no-vcs-ignore', '--no-project-ignore', but /// even more efficient as it will skip all the ignore discovery mechanisms from the get-go. /// /// Note that default ignores are still loaded, see '--no-default-ignore'. #[arg( long, help_heading = OPTSET_FILTERING, display_order = 141, )] pub no_discover_ignore: bool, /// Don't ignore anything at all /// /// This is a shorthand for '--no-discover-ignore', '--no-default-ignore'. /// /// Note that ignores explicitly loaded via other command line options, such as '--ignore' or /// '--ignore-file', will still be used. #[arg( long, help_heading = OPTSET_FILTERING, display_order = 92, )] pub ignore_nothing: bool, /// Filename extensions to filter to /// /// This is a quick filter to only emit events for files with the given extensions. Extensions /// can be given with or without the leading dot (e.g. 'js' or '.js'). Multiple extensions can /// be given by repeating the option or by separating them with commas. #[arg( long = "exts", short = 'e', help_heading = OPTSET_FILTERING, value_delimiter = ',', value_name = "EXTENSIONS", display_order = 50, )] pub filter_extensions: Vec<String>, /// Filename patterns to filter to /// /// Provide a glob-like filter pattern, and only events for files matching the pattern will be /// emitted. Multiple patterns can be given by repeating the option. Events that are not from /// files (e.g.
signals, keyboard events) will pass through untouched. #[arg( long = "filter", short = 'f', help_heading = OPTSET_FILTERING, value_name = "PATTERN", display_order = 60, )] pub filter_patterns: Vec<String>, /// Files to load filters from /// /// Provide a path to a file containing filters, one per line. Empty lines and lines starting /// with '#' are ignored. Uses the same pattern format as the '--filter' option. /// /// This can also be used via the $WATCHEXEC_FILTER_FILES environment variable. #[arg( long = "filter-file", help_heading = OPTSET_FILTERING, value_delimiter = PATH_SEPARATOR.chars().next().unwrap(), value_hint = ValueHint::FilePath, value_name = "PATH", env = "WATCHEXEC_FILTER_FILES", hide_env = true, display_order = 61, )] pub filter_files: Vec<PathBuf>, /// Set the project origin /// /// Watchexec will attempt to discover the project's "origin" (or "root") by searching for a /// variety of markers, like files or directory patterns. It does its best but sometimes gets /// it wrong, and you can override that with this option. /// /// The project origin is used to determine the path of certain ignore files, which VCS is being /// used, the meaning of a leading '/' in filtering patterns, and maybe more in the future. /// /// When set, Watchexec will also not bother searching, which can be significantly faster. #[arg( long, help_heading = OPTSET_FILTERING, value_hint = ValueHint::DirPath, value_name = "DIRECTORY", display_order = 160, )] pub project_origin: Option<PathBuf>, /// Filter programs. /// /// Provide your own custom filter programs in jaq (similar to jq) syntax. Programs are given /// an event in the same format as described in '--emit-events-to' and must return a boolean. /// Invalid programs will make watchexec fail to start; use '-v' to see program runtime errors. /// /// In addition to the jaq stdlib, watchexec adds some custom filter definitions: /// /// - 'path | file_meta' returns file metadata or null if the file does not exist.
/// /// - 'path | file_size' returns the size of the file at path, or null if it does not exist. /// /// - 'path | file_read(bytes)' returns a string with the first n bytes of the file at path. /// If the file is smaller than n bytes, the whole file is returned. There is no filter to /// read the whole file at once to encourage limiting the amount of data read and processed. /// /// - 'string | hash', and 'path | file_hash' return the hash of the string or file at path. /// No guarantee is made about the algorithm used: treat it as an opaque value. /// /// - 'any | kv_store(key)', 'kv_fetch(key)', and 'kv_clear' provide a simple key-value store. /// Data is kept in memory only, there is no persistence. Consistency is not guaranteed. /// /// - 'any | printout', 'any | printerr', and 'any | log(level)' will print or log any given /// value to stdout, stderr, or the log (levels = error, warn, info, debug, trace), and /// pass the value through (so '[1] | log("debug") | .[]' will produce a '1' and log '[1]'). /// /// All filtering done with such programs, and especially those using kv or filesystem access, /// is much slower than the other filtering methods. If filtering is too slow, events will back /// up and stall watchexec. Take care when designing your filters. /// /// If the argument to this option starts with an '@', the rest of the argument is taken to be /// the path to a file containing a jaq program. /// /// Jaq programs are run in order, after all other filters, and short-circuit: if a filter (jaq /// or not) rejects an event, execution stops there, and no other filters are run. Additionally, /// they stop after outputting the first value, so you'll want to use 'any' or 'all' when /// iterating, otherwise only the first item will be processed, which can be quite confusing! /// /// Find user-contributed programs or submit your own useful ones at /// . 
/// /// ## Examples: /// /// Regexp ignore filter on paths: /// /// 'all(.tags[] | select(.kind == "path"); .absolute | test("[.]test[.]js$")) | not' /// /// Pass any event that creates a file: /// /// 'any(.tags[] | select(.kind == "fs"); .simple == "create")' /// /// Pass events that touch executable files: /// /// 'any(.tags[] | select(.kind == "path" && .filetype == "file"); .absolute | metadata | .executable)' /// /// Ignore files that start with shebangs: /// /// 'any(.tags[] | select(.kind == "path" && .filetype == "file"); .absolute | read(2) == "#!") | not' #[arg( long = "filter-prog", short = 'j', help_heading = OPTSET_FILTERING, value_name = "EXPRESSION", display_order = 62, )] pub filter_programs: Vec<String>, #[doc(hidden)] #[clap(skip)] pub filter_programs_parsed: Vec<FilterProgram>, /// Filename patterns to filter out /// /// Provide a glob-like filter pattern, and events for files matching the pattern will be /// excluded. Multiple patterns can be given by repeating the option. Events that are not from /// files (e.g. signals, keyboard events) will pass through untouched. #[arg( long = "ignore", short = 'i', help_heading = OPTSET_FILTERING, value_name = "PATTERN", display_order = 90, )] pub ignore_patterns: Vec<String>, /// Files to load ignores from /// /// Provide a path to a file containing ignores, one per line. Empty lines and lines starting /// with '#' are ignored. Uses the same pattern format as the '--ignore' option. /// /// This can also be used via the $WATCHEXEC_IGNORE_FILES environment variable. #[arg( long = "ignore-file", help_heading = OPTSET_FILTERING, value_delimiter = PATH_SEPARATOR.chars().next().unwrap(), value_hint = ValueHint::FilePath, value_name = "PATH", env = "WATCHEXEC_IGNORE_FILES", hide_env = true, display_order = 91, )] pub ignore_files: Vec<PathBuf>, /// Filesystem events to filter to /// /// This is a quick filter to only emit events for the given types of filesystem changes. Choose /// from 'access', 'create', 'remove', 'rename', 'modify', 'metadata'.
Multiple types can be /// given by repeating the option or by separating them with commas. By default, this is all /// types except for 'access'. /// /// This may apply filtering at the kernel level when possible, which can be more efficient, but /// may be more confusing when reading the logs. #[arg( long = "fs-events", help_heading = OPTSET_FILTERING, default_value = "create,remove,rename,modify,metadata", value_delimiter = ',', hide_default_value = true, value_name = "EVENTS", display_order = 63, )] pub filter_fs_events: Vec<FsEvent>, /// Don't emit fs events for metadata changes /// /// This is a shorthand for '--fs-events create,remove,rename,modify'. Using it alongside the /// '--fs-events' option is nonsensical and not allowed. #[arg( long = "no-meta", help_heading = OPTSET_FILTERING, conflicts_with = "filter_fs_events", display_order = 142, )] pub filter_fs_meta: bool, } impl FilteringArgs { pub(crate) async fn normalise(&mut self, command: &CommandArgs) -> Result<()> { if self.ignore_nothing { self.no_global_ignore = true; self.no_vcs_ignore = true; self.no_project_ignore = true; self.no_default_ignore = true; self.no_discover_ignore = true; } if self.filter_fs_meta { self.filter_fs_events = vec![ FsEvent::Create, FsEvent::Remove, FsEvent::Rename, FsEvent::Modify, ]; } if let Some(watch_file) = self.watch_file.as_ref() { if watch_file == Path::new("-") { let file = tokio::io::stdin(); let mut lines = BufReader::new(file).lines(); while let Ok(Some(line)) = lines.next_line().await { self.recursive_paths.push(line.into()); } } else { let file = File::open(watch_file).await.into_diagnostic()?; let mut lines = BufReader::new(file).lines(); while let Ok(Some(line)) = lines.next_line().await { self.recursive_paths.push(line.into()); } }; } let project_origin = if let Some(p) = take(&mut self.project_origin) { p } else { crate::dirs::project_origin(&self, command).await?
}; debug!(path=?project_origin, "resolved project origin"); let project_origin = dunce::canonicalize(project_origin).into_diagnostic()?; info!(path=?project_origin, "effective project origin"); self.project_origin = Some(project_origin.clone()); self.paths = take(&mut self.recursive_paths) .into_iter() .map(|path| { { if path.is_absolute() { Ok(path) } else { dunce::canonicalize(project_origin.join(path)).into_diagnostic() } } .map(WatchedPath::recursive) }) .chain(take(&mut self.non_recursive_paths).into_iter().map(|path| { { if path.is_absolute() { Ok(path) } else { dunce::canonicalize(project_origin.join(path)).into_diagnostic() } } .map(WatchedPath::non_recursive) })) .collect::<Result<BTreeSet<_>>>()? .into_iter() .collect(); if self.paths.len() == 1 && self .paths .first() .map_or(false, |p| p.as_ref() == Path::new("/dev/null")) { info!("only path is /dev/null, not watching anything"); self.paths = Vec::new(); } else if self.paths.is_empty() { info!("no paths, using current directory"); self.paths.push(command.workdir.as_deref().unwrap().into()); } info!(paths=?self.paths, "effective watched paths"); for (n, prog) in self.filter_programs.iter().enumerate() { if let Some(progpath) = prog.strip_prefix('@') { self.filter_programs_parsed .push(FilterProgram::new_jaq_from_file(progpath).await?); } else { self.filter_programs_parsed .push(FilterProgram::new_jaq_from_arg(n, prog.clone())?); } } debug_assert!(self.project_origin.is_some()); Ok(()) } } #[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)] pub enum FsEvent { Access, Create, Remove, Rename, Modify, Metadata, } ================================================ FILE: crates/cli/src/args/logging.rs ================================================ use std::{env::var, io::stderr, path::PathBuf}; use clap::{ArgAction, Parser, ValueHint}; use miette::{bail, Result}; use tokio::fs::metadata; use tracing::{info, warn}; use tracing_appender::{non_blocking, non_blocking::WorkerGuard, rolling}; use tracing_subscriber::{EnvFilter,
FmtSubscriber}; use super::OPTSET_DEBUGGING; #[derive(Debug, Clone, Parser)] pub struct LoggingArgs { /// Set diagnostic log level /// /// This enables diagnostic logging, which is useful for investigating bugs or gaining more /// insight into faulty filters or "missing" events. Use multiple times to increase verbosity. /// /// Goes up to '-vvvv'. When submitting bug reports, default to a '-vvv' log level. /// /// You may want to use this with '--log-file' to avoid polluting your terminal. /// /// Setting $WATCHEXEC_LOG also works, and takes precedence, but is not recommended. However, using /// $WATCHEXEC_LOG is the only way to get logs from before these options are parsed. #[arg( long, short, help_heading = OPTSET_DEBUGGING, action = ArgAction::Count, default_value = "0", num_args = 0, display_order = 220, )] pub verbose: u8, /// Write diagnostic logs to a file /// /// This writes diagnostic logs to a file, instead of the terminal, in JSON format. If a log /// level was not already specified, this will set it to '-vvv'. /// /// If a path is not provided, the default is the working directory. Note that with /// '--ignore-nothing', the writes to the log file will likely get picked up by Watchexec, /// causing a loop; prefer setting a path outside of the watched directory. /// /// If the path provided is a directory, a file will be created in that directory. The file name /// will be the current date and time, in the format 'watchexec.YYYY-MM-DDTHH-MM-SSZ.log'. #[arg( long, help_heading = OPTSET_DEBUGGING, num_args = 0..=1, default_missing_value = ".", value_hint = ValueHint::AnyPath, value_name = "PATH", display_order = 120, )] pub log_file: Option<PathBuf>, /// Print events that trigger actions /// /// This prints the events that triggered the action when handling it (after debouncing), in a /// human readable form. This is useful for debugging filters. /// /// Use '-vvv' instead when you need more diagnostic information.
#[arg( long, help_heading = OPTSET_DEBUGGING, display_order = 160, )] pub print_events: bool, } pub fn preargs() -> bool { let mut log_on = false; #[cfg(feature = "dev-console")] match console_subscriber::try_init() { Ok(_) => { warn!("dev-console enabled"); log_on = true; } Err(e) => { eprintln!("Failed to initialise tokio console, falling back to normal logging\n{e}") } } if !log_on && var("WATCHEXEC_LOG").is_ok() { let subscriber = FmtSubscriber::builder().with_env_filter(EnvFilter::from_env("WATCHEXEC_LOG")); match subscriber.try_init() { Ok(()) => { warn!(WATCHEXEC_LOG=%var("WATCHEXEC_LOG").unwrap(), "logging configured from WATCHEXEC_LOG"); log_on = true; } Err(e) => { eprintln!("Failed to initialise logging with WATCHEXEC_LOG, falling back\n{e}"); } } } log_on } pub async fn postargs(args: &LoggingArgs) -> Result<Option<WorkerGuard>> { if args.verbose == 0 { return Ok(None); } let (log_writer, guard) = if let Some(file) = &args.log_file { let is_dir = metadata(&file).await.map_or(false, |info| info.is_dir()); let (dir, filename) = if is_dir { ( file.to_owned(), PathBuf::from(format!( "watchexec.{}.log", chrono::Utc::now().format("%Y-%m-%dT%H-%M-%SZ") )), ) } else if let (Some(parent), Some(file_name)) = (file.parent(), file.file_name()) { (parent.into(), PathBuf::from(file_name)) } else { bail!("Failed to determine log file name"); }; non_blocking(rolling::never(dir, filename)) } else { non_blocking(stderr()) }; let mut builder = tracing_subscriber::fmt().with_env_filter(match args.verbose { 0 => unreachable!("checked by if earlier"), 1 => "warn", 2 => "info", 3 => "debug", _ => "trace", }); if args.verbose > 2 { use tracing_subscriber::fmt::format::FmtSpan; builder = builder.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE); } match if args.log_file.is_some() { builder.json().with_writer(log_writer).try_init() } else if args.verbose > 3 { builder.pretty().with_writer(log_writer).try_init() } else { builder.with_writer(log_writer).try_init() } { Ok(()) => info!("logging
initialised"), Err(e) => eprintln!("Failed to initialise logging, continuing with none\n{e}"), } Ok(Some(guard)) } ================================================ FILE: crates/cli/src/args/output.rs ================================================ use clap::{Parser, ValueEnum}; use miette::Result; use super::OPTSET_OUTPUT; #[derive(Debug, Clone, Parser)] pub struct OutputArgs { /// Clear screen before running command /// /// If this doesn't completely clear the screen, try '--clear=reset'. #[arg( short = 'c', long = "clear", help_heading = OPTSET_OUTPUT, num_args = 0..=1, default_missing_value = "clear", value_name = "MODE", display_order = 30, )] pub screen_clear: Option<ClearMode>, /// Alert when commands start and end /// /// With this, Watchexec will emit a desktop notification when a command starts and ends, on /// supported platforms. On unsupported platforms, it may silently do nothing, or log a warning. /// /// The mode can be specified to only notify when the command `start`s, `end`s, or for `both` /// (which is the default). #[arg( short = 'N', long, help_heading = OPTSET_OUTPUT, num_args = 0..=1, default_missing_value = "both", value_name = "WHEN", display_order = 140, )] pub notify: Option<NotifyMode>, /// When to use terminal colours /// /// Setting the environment variable `NO_COLOR` to any value is equivalent to `--color=never`. #[arg( long, help_heading = OPTSET_OUTPUT, default_value = "auto", value_name = "MODE", alias = "colour", display_order = 31, )] pub color: ColourMode, /// Print how long the command took to run /// /// This may not be exactly accurate, as it includes some overhead from Watchexec itself. Use /// the `time` utility, high-precision timers, or benchmarking tools for more accurate results. #[arg( long, help_heading = OPTSET_OUTPUT, display_order = 200, )] pub timings: bool, /// Don't print starting and stopping messages /// /// By default Watchexec will print a message when the command starts and stops.
This option /// disables this behaviour, so only the command's output, warnings, and errors will be printed. #[arg( short, long, help_heading = OPTSET_OUTPUT, display_order = 170, )] pub quiet: bool, /// Ring the terminal bell on command completion #[arg( long, help_heading = OPTSET_OUTPUT, display_order = 20, )] pub bell: bool, } impl OutputArgs { pub(crate) fn normalise(&mut self) -> Result<()> { // https://no-color.org/ if self.color == ColourMode::Auto && std::env::var("NO_COLOR").is_ok() { self.color = ColourMode::Never; } Ok(()) } } #[derive(Clone, Copy, Debug, Default, ValueEnum)] pub enum ClearMode { #[default] Clear, Reset, } #[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)] pub enum ColourMode { Auto, Always, Never, } #[derive(Clone, Copy, Debug, Default, Eq, PartialEq, ValueEnum)] pub enum NotifyMode { /// Notify on both start and end #[default] Both, /// Notify only when the command starts Start, /// Notify only when the command ends End, } impl NotifyMode { /// Whether to notify on command start pub fn on_start(self) -> bool { matches!(self, Self::Both | Self::Start) } /// Whether to notify on command end pub fn on_end(self) -> bool { matches!(self, Self::Both | Self::End) } } ================================================ FILE: crates/cli/src/args.rs ================================================ use std::{ ffi::{OsStr, OsString}, str::FromStr, time::Duration, }; use clap::{Parser, ValueEnum, ValueHint}; use miette::Result; use tracing::{debug, info, warn}; use tracing_appender::non_blocking::WorkerGuard; pub(crate) mod command; pub(crate) mod events; pub(crate) mod filtering; pub(crate) mod logging; pub(crate) mod output; const OPTSET_COMMAND: &str = "Command"; const OPTSET_DEBUGGING: &str = "Debugging"; const OPTSET_EVENTS: &str = "Events"; const OPTSET_FILTERING: &str = "Filtering"; const OPTSET_OUTPUT: &str = "Output"; include!(env!("BOSION_PATH")); /// Execute commands when watched files change. 
/// /// Recursively monitors the current directory for changes, executing the command when a filesystem /// change is detected (among other event sources). By default, watchexec uses efficient /// kernel-level mechanisms to watch for changes. /// /// At startup, the specified command is run once, and watchexec begins monitoring for changes. /// /// Events are debounced and checked using a variety of mechanisms, which you can control using /// the flags in the **Filtering** section. The order of execution is: internal prioritisation /// (signals come before everything else, and SIGINT/SIGTERM are processed even more urgently), /// then file event kind (`--fs-events`), then files explicitly watched with `-w`, then ignores /// (`--ignore` and co), then filters (which includes `--exts`), then filter programs. /// /// Examples: /// /// Rebuild a project when source files change: /// /// $ watchexec make /// /// Watch all HTML, CSS, and JavaScript files for changes: /// /// $ watchexec -e html,css,js make /// /// Run tests when source files change, clearing the screen each time: /// /// $ watchexec -c make test /// /// Launch and restart a node.js server: /// /// $ watchexec -r node app.js /// /// Watch lib and src directories for changes, rebuilding each time: /// /// $ watchexec -w lib -w src make #[derive(Debug, Clone, Parser)] #[command( name = "watchexec", bin_name = "watchexec", author, version, long_version = Bosion::LONG_VERSION, after_help = "Want more detail? Try the long '--help' flag!", after_long_help = "Use @argfile as first argument to load arguments from the file 'argfile' (one argument per line) which will be inserted in place of the @argfile (further arguments on the CLI will override or add onto those in the file).\n\nDidn't expect this much output? 
Use the short '-h' flag to get short help.", hide_possible_values = true, )] pub struct Args { /// Command (program and arguments) to run on changes /// /// It's run when events pass filters and the debounce period (and once at startup unless /// '--postpone' is given). If you pass flags to the command, you should separate them with '--', /// though that is not strictly required. /// /// Examples: /// /// $ watchexec -w src npm run build /// /// $ watchexec -w src -- rsync -a src dest /// /// Take care when using globs or other shell expansions in the command. Your shell may expand /// them before ever passing them to Watchexec, and the results may not be what you expect. /// Compare: /// /// $ watchexec echo src/*.rs /// /// $ watchexec echo 'src/*.rs' /// /// $ watchexec --shell=none echo 'src/*.rs' /// /// Behaviour depends on the value of '--shell': for all except 'none', every part of the /// command is joined together into one string with a single ASCII space character, and given to /// the shell as described in the help for '--shell'. For 'none', each distinct element of the /// command is passed as per the execvp(3) convention: first argument is the program, as a path /// or searched for in the 'PATH' environment variable, rest are arguments. #[arg( trailing_var_arg = true, num_args = 1.., value_hint = ValueHint::CommandString, value_name = "COMMAND", required_unless_present_any = ["completions", "manual", "only_emit_events"], )] pub program: Vec<String>, /// Show the manual page /// /// This shows the manual page for Watchexec, if the output is a terminal and the 'man' program /// is available. If not, the manual page is printed to stdout in ROFF format (suitable for /// writing to a watchexec.1 file). #[arg( long, conflicts_with_all = ["program", "completions", "only_emit_events"], display_order = 130, )] pub manual: bool, /// Generate a shell completions script /// /// Provides a completions script or configuration for the given shell.
If Watchexec is not /// distributed with pre-generated completions, you can use this to generate them yourself. /// /// Supported shells: bash, elvish, fish, nu, powershell, zsh. #[arg( long, value_name = "SHELL", conflicts_with_all = ["program", "manual", "only_emit_events"], display_order = 30, )] pub completions: Option<ShellCompletion>, /// Only emit events to stdout, run no commands. /// /// This is a convenience option for using Watchexec as a file watcher, without running any /// commands. It is almost equivalent to using `cat` as the command, except that it will not /// spawn a new process for each event. /// /// This option implies `--emit-events-to=json-stdio`; you may also use the text mode by /// specifying `--emit-events-to=stdio`. #[arg( long, conflicts_with_all = ["program", "completions", "manual"], display_order = 150, )] pub only_emit_events: bool, /// Testing only: exit Watchexec after the first run and return the command's exit code #[arg(short = '1', hide = true)] pub once: bool, #[command(flatten)] pub command: command::CommandArgs, #[command(flatten)] pub events: events::EventsArgs, #[command(flatten)] pub filtering: filtering::FilteringArgs, #[command(flatten)] pub logging: logging::LoggingArgs, #[command(flatten)] pub output: output::OutputArgs, } #[derive(Clone, Copy, Debug)] pub struct TimeSpan<const UNITLESS_NANOS_MULTIPLIER: u64 = 1_000_000_000>(pub Duration); impl<const UNITLESS_NANOS_MULTIPLIER: u64> FromStr for TimeSpan<UNITLESS_NANOS_MULTIPLIER> { type Err = humantime::DurationError; fn from_str(s: &str) -> Result<Self, Self::Err> { s.parse::<u64>() .map_or_else( |_| humantime::parse_duration(s), |unitless| { if unitless != 0 { eprintln!("Warning: unitless non-zero time span values are deprecated and will be removed in an upcoming version"); } Ok(Duration::from_nanos(unitless * UNITLESS_NANOS_MULTIPLIER)) }, ) .map(TimeSpan) } } fn expand_args_up_to_doubledash() -> Result<Vec<OsString>, std::io::Error> { use argfile::Argument; use std::collections::VecDeque; let args = std::env::args_os(); let mut expanded_args = Vec::with_capacity(args.size_hint().0); let mut todo: VecDeque<_> = args.map(|a|
Argument::parse(a, argfile::PREFIX)).collect(); while let Some(next) = todo.pop_front() { match next { Argument::PassThrough(arg) => { expanded_args.push(arg.clone()); if arg == "--" { break; } } Argument::Path(path) => { let content = std::fs::read_to_string(path)?; let new_args = argfile::parse_fromfile(&content, argfile::PREFIX); todo.reserve(new_args.len()); for (i, arg) in new_args.into_iter().enumerate() { todo.insert(i, arg); } } } } while let Some(next) = todo.pop_front() { expanded_args.push(match next { Argument::PassThrough(arg) => arg, Argument::Path(path) => { let path = path.as_os_str(); let mut restored = OsString::with_capacity(path.len() + 1); restored.push(OsStr::new("@")); restored.push(path); restored } }); } Ok(expanded_args) } #[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)] pub enum ShellCompletion { Bash, Elvish, Fish, Nu, Powershell, Zsh, } #[derive(Debug, Default)] pub struct Guards { _log: Option, } pub async fn get_args() -> Result<(Args, Guards)> { let prearg_logs = logging::preargs(); if prearg_logs { warn!( "⚠ WATCHEXEC_LOG environment variable set or hardcoded, logging options have no effect" ); } debug!("expanding @argfile arguments if any"); let args = expand_args_up_to_doubledash().expect("while expanding @argfile"); debug!("parsing arguments"); let mut args = Args::parse_from(args); let _log = if !prearg_logs { logging::postargs(&args.logging).await? 
} else { None }; args.output.normalise()?; args.command.normalise().await?; args.filtering.normalise(&args.command).await?; args.events .normalise(&args.command, &args.filtering, args.only_emit_events)?; info!(?args, "got arguments"); Ok((args, Guards { _log })) } #[test] fn verify_cli() { use clap::CommandFactory; Args::command().debug_assert() } ================================================ FILE: crates/cli/src/config.rs ================================================ use std::{ borrow::Cow, collections::HashMap, env::var, ffi::OsStr, fmt, fs::File, io::{IsTerminal, Write}, iter::once, process::{ExitCode, Stdio}, sync::{ atomic::{AtomicBool, AtomicU8, Ordering}, Arc, }, time::Duration, }; use clearscreen::ClearScreen; use miette::{IntoDiagnostic, Report, Result}; use notify_rust::Notification; use termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor}; use tokio::{process::Command as TokioCommand, time::sleep}; use tracing::{debug, debug_span, error, instrument, trace, trace_span, Instrument}; use watchexec::{ action::ActionHandler, command::{Command, Program, Shell, SpawnOptions}, error::RuntimeError, job::{CommandState, Job}, sources::fs::Watcher, Config, ErrorHook, Id, }; use watchexec_events::{Event, KeyCode, Keyboard, Priority, ProcessEnd, Tag}; use watchexec_signals::Signal; use crate::{ args::{ command::{EnvVar, WrapMode}, events::{EmitEvents, OnBusyUpdate, SignalMapping}, output::{ClearMode, ColourMode, NotifyMode}, Args, }, emits::events_to_simple_format, socket::Sockets, state::State, }; #[derive(Clone, Copy, Debug)] struct OutputFlags { quiet: bool, colour: ColorChoice, timings: bool, bell: bool, notify: Option, } #[derive(Clone, Copy, Debug)] struct TimeoutConfig { /// The maximum duration the command is allowed to run timeout: Option, /// Signal to send for graceful stop (used when timeout fires) stop_signal: Signal, /// Grace period after stop signal before force kill stop_timeout: Duration, } pub fn make_config(args: &Args, 
state: &State) -> Result { let _span = debug_span!("args-runtime").entered(); let config = Config::default(); config.on_error(|err: ErrorHook| { if let RuntimeError::IoError { about: "waiting on process group", .. } = err.error { // "No child processes" and such // these are often spurious, so condemn them to -v only error!("{}", err.error); return; } if cfg!(debug_assertions) { eprintln!("[[{:?}]]", err.error); } eprintln!("[[Error (not fatal)]]\n{}", Report::new(err.error)); }); config.pathset(args.filtering.paths.clone()); config.throttle(args.events.debounce.0); config.keyboard_events(args.events.stdin_quit || args.events.interactive); if let Some(interval) = args.events.poll { config.file_watcher(Watcher::Poll(interval.0)); } let once = args.once; let clear = args.output.screen_clear; let emit_events_to = args.events.emit_events_to; let state = state.clone(); if args.only_emit_events { config.on_action(move |mut action| { // if we got a terminate or interrupt signal, quit if action .signals() .any(|sig| sig == Signal::Terminate || sig == Signal::Interrupt) { // no need to be graceful as there's no commands action.quit(); return action; } // clear the screen before printing events if let Some(mode) = clear { match mode { ClearMode::Clear => { clearscreen::clear().ok(); } ClearMode::Reset => { reset_screen(); } } } match emit_events_to { EmitEvents::Stdio => { println!( "{}", events_to_simple_format(action.events.as_ref()).unwrap_or_default() ); } EmitEvents::JsonStdio => { for event in action.events.iter().filter(|e| !e.is_empty()) { println!("{}", serde_json::to_string(event).unwrap_or_default()); } } other => unreachable!( "emit_events_to should have been validated earlier: {:?}", other ), } action }); return Ok(config); } let delay_run = args.command.delay_run.map(|ts| ts.0); let on_busy = args.events.on_busy_update; let stdin_quit = args.events.stdin_quit; let interactive = args.events.interactive; let exit_on_error = args.events.exit_on_error; let signal = 
args.events.signal; let stop_signal = args.command.stop_signal; let stop_timeout = args.command.stop_timeout.0; let print_events = args.logging.print_events; let outflags = OutputFlags { quiet: args.output.quiet, colour: match args.output.color { ColourMode::Auto if !std::io::stdin().is_terminal() => ColorChoice::Never, ColourMode::Auto => ColorChoice::Auto, ColourMode::Always => ColorChoice::Always, ColourMode::Never => ColorChoice::Never, }, timings: args.output.timings, bell: args.output.bell, notify: args.output.notify, }; let timeout_config = TimeoutConfig { timeout: args.command.timeout.map(|ts| ts.0), stop_signal: stop_signal.unwrap_or(Signal::Terminate), stop_timeout, }; let workdir = Arc::new(args.command.workdir.clone()); let add_envs: Arc<[EnvVar]> = args.command.env.clone().into(); debug!( envs=?args.command.env, "additional environment variables to add to command" ); let id = Id::default(); let command = interpret_command_args(args)?; let signal_map: Arc>> = Arc::new( args.events .signal_map .iter() .copied() .map(|SignalMapping { from, to }| (from, to)) .collect(), ); let queued = Arc::new(AtomicBool::new(false)); let quit_again = Arc::new(AtomicU8::new(0)); let paused = Arc::new(AtomicBool::new(false)); let should_quit = Arc::new(AtomicBool::new(false)); config.on_action_async(move |mut action| { let add_envs = add_envs.clone(); let command = command.clone(); let state = state.clone(); let queued = queued.clone(); let quit_again = quit_again.clone(); let paused = paused.clone(); let should_quit = should_quit.clone(); let signal_map = signal_map.clone(); let workdir = workdir.clone(); Box::new( async move { trace!(events=?action.events, "handling action"); let add_envs = add_envs.clone(); let command = command.clone(); let queued = queued.clone(); let quit_again = quit_again.clone(); let paused = paused.clone(); let should_quit = should_quit.clone(); let signal_map = signal_map.clone(); let workdir = workdir.clone(); trace!("set spawn hook for workdir 
and environment variables"); let job = action.get_or_create_job(id, move || command.clone()); let events = action.events.clone(); job.set_spawn_hook({ let state = state.clone(); move |command, _| { let add_envs = add_envs.clone(); let state = state.clone(); let events = events.clone(); if let Some(ref workdir) = workdir.as_ref() { debug!(?workdir, "set command workdir"); command.command_mut().current_dir(workdir); } if let Some(ref socket_set) = state.socket_set { for env in socket_set.envs() { command.command_mut().env(env.key, env.value); } } emit_events_to_command( command.command_mut(), events, state, emit_events_to, add_envs, ); } }); let show_events = { let events = action.events.clone(); move || { if print_events { trace!("print events to stderr"); for (n, event) in events.iter().enumerate() { eprintln!("[EVENT {n}] {event}"); } } } }; let clear_screen = { let events = action.events.clone(); move || { if let Some(mode) = clear { match mode { ClearMode::Clear => { clearscreen::clear().ok(); debug!("cleared screen"); } ClearMode::Reset => { reset_screen(); debug!("hard-reset screen"); } } } // re-show events after clearing if print_events { trace!("print events to stderr"); for (n, event) in events.iter().enumerate() { eprintln!("[EVENT {n}] {event}"); } } } }; let quit = |mut action: ActionHandler| { match quit_again.fetch_add(1, Ordering::Relaxed) { 0 => { if stop_timeout > Duration::ZERO && action.list_jobs().any(|(_, job)| job.is_running()) { eprintln!("[Waiting {stop_timeout:?} for processes to exit before stopping...]"); } // eprintln!("[Waiting {stop_timeout:?} for processes to exit before stopping... 
Ctrl-C again to exit faster]"); // see TODO in action/worker.rs action.quit_gracefully( stop_signal.unwrap_or(Signal::Terminate), stop_timeout, ); } 1 => { action.quit_gracefully(Signal::ForceStop, Duration::ZERO); } _ => { action.quit(); } } action }; // Check if we should quit due to command failure (--exit-on-error) if should_quit.load(Ordering::SeqCst) { debug!("command failed with --exit-on-error, quitting"); return quit(action); } if once { debug!("debug mode: run once and quit"); show_events(); if let Some(delay) = delay_run { job.run_async(move |_| { Box::new(async move { sleep(delay).await; }) }); } // this blocks the event loop, but also this is a debug feature so i don't care job.start().await; let timed_out = if let Some(timeout) = timeout_config.timeout { tokio::select! { _ = job.to_wait() => false, _ = tokio::time::sleep(timeout) => { if cfg!(windows) { job.stop().await; } else { job.stop_with_signal(timeout_config.stop_signal, timeout_config.stop_timeout).await; } true } } } else { job.to_wait().await; false }; job.run({ let state = state.clone(); move |context| { if let Some(end) = end_of_process(context.current, outflags, timed_out) { *state.exit_code.lock().unwrap() = ExitCode::from( end.into_exitstatus() .code() .unwrap_or(0) .try_into() .unwrap_or(1), ); } } }) .await; return quit(action); } let is_keyboard_eof = action .events .iter() .any(|e| e.tags.contains(&Tag::Keyboard(Keyboard::Eof))); if stdin_quit && is_keyboard_eof { debug!("keyboard EOF, quit"); show_events(); return quit(action); } if interactive { for event in action.events.iter() { for tag in &event.tags { match tag { Tag::Keyboard(Keyboard::Eof) => { debug!("interactive: Ctrl-C/D, quit"); return quit(action); } Tag::Keyboard(Keyboard::Key { key, .. 
}) => match key { KeyCode::Char('q') => { debug!("interactive: quit"); return quit(action); } KeyCode::Char('p') => { let was_paused = paused.fetch_xor(true, Ordering::SeqCst); if was_paused { debug!("interactive: unpause"); eprintln!("[Unpaused]"); } else { debug!("interactive: pause"); eprintln!("[Paused]"); } return action; } KeyCode::Char('r') => { debug!("interactive: restart"); clear_screen(); if cfg!(windows) { job.restart(); } else { job.restart_with_signal( stop_signal.unwrap_or(Signal::Terminate), stop_timeout, ); } job.run({ let job = job.clone(); let should_quit = should_quit.clone(); let state = state.clone(); move |context| { setup_process( job.clone(), context.command.clone(), outflags, timeout_config, exit_on_error, should_quit.clone(), state.clone(), ); } }); return action; } _ => {} }, _ => {} } } } } let signals: Vec = action.signals().collect(); trace!(?signals, "received some signals"); // if we got a terminate or interrupt signal and they're not mapped, quit if (signals.contains(&Signal::Terminate) && !signal_map.contains_key(&Signal::Terminate)) || (signals.contains(&Signal::Interrupt) && !signal_map.contains_key(&Signal::Interrupt)) { debug!("unmapped terminate or interrupt signal, quit"); show_events(); return quit(action); } // pass all other signals on for signal in signals { match signal_map.get(&signal) { Some(Some(mapped)) => { debug!(?signal, ?mapped, "passing mapped signal"); job.signal(*mapped); } Some(None) => { debug!(?signal, "discarding signal"); } None => { debug!(?signal, "passing signal on"); job.signal(signal); } } } // only filesystem events below here (or empty synthetic events) if action.paths().next().is_none() && !action.events.iter().any(watchexec_events::Event::is_empty) { debug!("no filesystem or synthetic events, skip without doing more"); show_events(); return action; } if interactive && paused.load(Ordering::SeqCst) { debug!("interactive: paused, ignoring filesystem event"); return action; } show_events(); if let 
Some(delay) = delay_run { trace!("delaying run by sleeping inside the job"); job.run_async(move |_| { Box::new(async move { sleep(delay).await; }) }); } trace!("querying job state via run_async"); job.run_async({ let job = job.clone(); let should_quit = should_quit.clone(); let state = state.clone(); move |context| { let job = job.clone(); let should_quit = should_quit.clone(); let state = state.clone(); let is_running = matches!(context.current, CommandState::Running { .. }); Box::new(async move { let innerjob = job.clone(); let should_quit = should_quit.clone(); let state = state.clone(); if is_running { trace!(?on_busy, "job is running, decide what to do"); match on_busy { OnBusyUpdate::DoNothing => {} OnBusyUpdate::Signal => { job.signal(if cfg!(windows) { Signal::ForceStop } else { stop_signal.or(signal).unwrap_or(Signal::Terminate) }); } OnBusyUpdate::Restart if cfg!(windows) => { job.restart(); job.run({ let should_quit = should_quit.clone(); let state = state.clone(); move |context| { clear_screen(); setup_process( innerjob.clone(), context.command.clone(), outflags, timeout_config, exit_on_error, should_quit.clone(), state.clone(), ); } }); } OnBusyUpdate::Restart => { job.restart_with_signal( stop_signal.unwrap_or(Signal::Terminate), stop_timeout, ); job.run({ let should_quit = should_quit.clone(); let state = state.clone(); move |context| { clear_screen(); setup_process( innerjob.clone(), context.command.clone(), outflags, timeout_config, exit_on_error, should_quit.clone(), state.clone(), ); } }); } OnBusyUpdate::Queue => { let job = job.clone(); let already_queued = queued.fetch_or(true, Ordering::SeqCst); if already_queued { debug!("next start is already queued, do nothing"); } else { debug!("queueing next start of job"); tokio::spawn({ let queued = queued.clone(); let should_quit = should_quit.clone(); let state = state.clone(); async move { trace!("waiting for job to finish"); job.to_wait().await; trace!("job finished, starting queued"); job.start(); 
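// A note on the `OnBusyUpdate` branches above, which map to the CLI's
// `--on-busy-update` flag (a sketch of the observable behaviour, not
// normative documentation):
//
//     watchexec --on-busy-update=do-nothing -- make   # ignore events while the command runs
//     watchexec --on-busy-update=signal -- make       # deliver the stop signal to the running command
//     watchexec --on-busy-update=restart -- make      # stop it (signal, then grace period), then re-run
//     watchexec --on-busy-update=queue -- make        # let it finish, then run once more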
job.run({
	let should_quit = should_quit.clone();
	let state = state.clone();
	move |context| {
		clear_screen();
		setup_process(
			innerjob.clone(),
			context.command.clone(),
			outflags,
			timeout_config,
			exit_on_error,
			should_quit.clone(),
			state.clone(),
		);
	}
})
.await;
trace!("resetting queued state");
queued.store(false, Ordering::SeqCst);
}
});
}
}
}
} else {
	trace!("job is not running, start it");
	job.start();
	job.run({
		let should_quit = should_quit.clone();
		let state = state.clone();
		move |context| {
			clear_screen();
			setup_process(
				innerjob.clone(),
				context.command.clone(),
				outflags,
				timeout_config,
				exit_on_error,
				should_quit.clone(),
				state.clone(),
			);
		}
	});
}
})
}
});
action
}
.instrument(trace_span!("action handler")),
)
});
Ok(config)
}

#[instrument(level = "debug")]
fn interpret_command_args(args: &Args) -> Result<Arc<Command>> {
	let mut cmd = args.program.clone();
	assert!(!cmd.is_empty(), "(clap) Bug: command is not present");

	let shell = if args.command.no_shell {
		None
	} else {
		let shell = args.command.shell.clone().or_else(|| var("SHELL").ok());
		match shell
			.as_deref()
			.or_else(|| {
				if cfg!(not(windows)) {
					Some("sh")
				} else if var("POWERSHELL_DISTRIBUTION_CHANNEL").is_ok()
					&& (which::which("pwsh").is_ok() || which::which("pwsh.exe").is_ok())
				{
					trace!("detected pwsh");
					Some("pwsh")
				} else if var("PSModulePath").is_ok()
					&& (which::which("powershell").is_ok()
						|| which::which("powershell.exe").is_ok())
				{
					trace!("detected powershell");
					Some("powershell")
				} else {
					Some("cmd")
				}
			})
			.or(Some("default"))
		{
			Some("") => return Err(RuntimeError::CommandShellEmptyShell).into_diagnostic(),
			Some("none") | None => None,
			#[cfg(windows)]
			Some("cmd") | Some("cmd.exe") | Some("CMD") | Some("CMD.EXE") => Some(Shell::cmd()),
			Some(other) => {
				let sh = other.split_ascii_whitespace().collect::<Vec<_>>();
				// UNWRAP: checked by Some("")
				#[allow(clippy::unwrap_used)]
				let (shprog, shopts) = sh.split_first().unwrap();
				Some(Shell {
					prog: shprog.into(),
					options: shopts.iter().map(|s| (*s).to_string()).collect(),
program_option: Some(Cow::Borrowed(OsStr::new("-c"))), }) } } }; let program = if let Some(shell) = shell { Program::Shell { shell, command: cmd.join(" "), args: Vec::new(), } } else { Program::Exec { prog: cmd.remove(0).into(), args: cmd, } }; Ok(Arc::new(Command { program, options: SpawnOptions { grouped: matches!(args.command.wrap_process, WrapMode::Group), session: matches!(args.command.wrap_process, WrapMode::Session), ..Default::default() }, })) } #[instrument(level = "trace")] fn setup_process( job: Job, command: Arc, outflags: OutputFlags, timeout_config: TimeoutConfig, exit_on_error: bool, should_quit: Arc, state: State, ) { if outflags.notify.is_some_and(|m| m.on_start()) { Notification::new() .summary("Watchexec: change detected") .body(&format!("Running {command}")) .show() .map_or_else( |err| { eprintln!("[[Failed to send desktop notification: {err}]]"); }, drop, ); } if !outflags.quiet { let mut stderr = StandardStream::stderr(outflags.colour); stderr.reset().ok(); stderr .set_color(ColorSpec::new().set_fg(Some(Color::Green))) .ok(); writeln!(&mut stderr, "[Running: {command}]").ok(); stderr.reset().ok(); } let send_quit_event = Arc::new(AtomicBool::new(false)); tokio::spawn({ let send_quit_event = send_quit_event.clone(); let state_for_event = state.clone(); async move { let timed_out = if let Some(timeout) = timeout_config.timeout { tokio::select! 
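// Timeout handling pattern (same shape as in the `--once` path earlier in
// make_config): race the child's exit against a timer, and on timeout stop
// the job, gracefully on Unix (stop signal, then a grace period) or hard on
// Windows. Sketched generically, assuming some hypothetical
// `child: tokio::process::Child`:
//
//     tokio::select! {
//         _ = child.wait() => { /* exited on its own */ }
//         _ = tokio::time::sleep(timeout) => { /* timer won: stop the process */ }
//     }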
{ _ = job.to_wait() => false, _ = tokio::time::sleep(timeout) => { if cfg!(windows) { job.stop().await; } else { job.stop_with_signal(timeout_config.stop_signal, timeout_config.stop_timeout).await; } true } } } else { job.to_wait().await; false }; job.run({ let send_quit_event = send_quit_event.clone(); move |context| { if let Some(status) = end_of_process(context.current, outflags, timed_out) { // Store exit code in state *state.exit_code.lock().unwrap() = ExitCode::from( status .into_exitstatus() .code() .unwrap_or(0) .try_into() .unwrap_or(1), ); // If exit_on_error is enabled and command failed, signal quit if exit_on_error && !matches!(status, ProcessEnd::Success) { debug!("command failed, setting should_quit flag for --exit-on-error"); should_quit.store(true, Ordering::SeqCst); send_quit_event.store(true, Ordering::SeqCst); } } } }) .await; // Send a synthetic event to trigger the action handler to check should_quit // This ensures we quit immediately instead of waiting for the next file event if send_quit_event.load(Ordering::SeqCst) { if let Some(wx) = state_for_event.watchexec.get() { debug!("sending synthetic event to trigger quit"); if let Err(e) = wx.send_event(Event::default(), Priority::Urgent).await { error!("failed to send synthetic quit event: {e}"); } } } } }); } fn format_duration(duration: Duration) -> impl fmt::Display { fmt::from_fn(move |f| { let secs = duration.as_secs(); if secs > 0 { write!(f, "{secs}s") } else { write!(f, "{}ms", duration.subsec_millis()) } }) } #[instrument(level = "trace")] fn end_of_process( state: &CommandState, outflags: OutputFlags, timed_out: bool, ) -> Option { let CommandState::Finished { status, started, finished, } = state else { return None; }; let duration = *finished - *started; let duration_display = format_duration(duration); let timing = if outflags.timings { format!(", lasted {duration_display}") } else { String::new() }; // Show timeout message and return early - no need for redundant status message if 
timed_out { if outflags.notify.is_some_and(|m| m.on_end()) { Notification::new() .summary("Watchexec: command timed out") .body(&format!("Command timed out after {duration_display}")) .show() .map_or_else( |err| { eprintln!("[[Failed to send desktop notification: {err}]]"); }, drop, ); } if !outflags.quiet { let mut stderr = StandardStream::stderr(outflags.colour); stderr.reset().ok(); stderr .set_color(ColorSpec::new().set_fg(Some(Color::Yellow))) .ok(); writeln!(&mut stderr, "[Command timed out after {duration_display}]").ok(); stderr.reset().ok(); } if outflags.bell { let mut stdout = std::io::stdout(); stdout.write_all(b"\x07").ok(); stdout.flush().ok(); } return Some(*status); } let (msg, fg) = match status { ProcessEnd::ExitError(code) => (format!("Command exited with {code}{timing}"), Color::Red), ProcessEnd::ExitSignal(sig) => { (format!("Command killed by {sig:?}{timing}"), Color::Magenta) } ProcessEnd::ExitStop(sig) => (format!("Command stopped by {sig:?}{timing}"), Color::Blue), ProcessEnd::Continued => (format!("Command continued{timing}"), Color::Cyan), ProcessEnd::Exception(ex) => ( format!("Command ended by exception {ex:#x}{timing}"), Color::Yellow, ), ProcessEnd::Success => (format!("Command was successful{timing}"), Color::Green), }; if outflags.notify.is_some_and(|m| m.on_end()) { Notification::new() .summary("Watchexec: command ended") .body(&msg) .show() .map_or_else( |err| { eprintln!("[[Failed to send desktop notification: {err}]]"); }, drop, ); } if !outflags.quiet { let mut stderr = StandardStream::stderr(outflags.colour); stderr.reset().ok(); stderr.set_color(ColorSpec::new().set_fg(Some(fg))).ok(); writeln!(&mut stderr, "[{msg}]").ok(); stderr.reset().ok(); } if outflags.bell { let mut stdout = std::io::stdout(); stdout.write_all(b"\x07").ok(); stdout.flush().ok(); } Some(*status) } #[instrument(level = "trace")] fn emit_events_to_command( command: &mut TokioCommand, events: Arc<[Event]>, state: State, emit_events_to: EmitEvents, 
add_envs: Arc<[EnvVar]>, ) { use crate::emits::{emits_to_environment, emits_to_file, emits_to_json_file}; let mut stdin = None; let add_envs = add_envs.clone(); let mut envs = Box::new(add_envs.into_iter().cloned()) as Box>; match emit_events_to { EmitEvents::Environment => { envs = Box::new(envs.chain(emits_to_environment(&events))); } EmitEvents::Stdio => match emits_to_file(&state.emit_file, &events) .and_then(|path| File::open(path).into_diagnostic()) { Ok(file) => { stdin.replace(Stdio::from(file)); } Err(err) => { error!("Failed to write events to stdin, continuing without it: {err}"); } }, EmitEvents::File => match emits_to_file(&state.emit_file, &events) { Ok(path) => { envs = Box::new(envs.chain(once(EnvVar { key: "WATCHEXEC_EVENTS_FILE".into(), value: path.into(), }))); } Err(err) => { error!("Failed to write WATCHEXEC_EVENTS_FILE, continuing without it: {err}"); } }, EmitEvents::JsonStdio => match emits_to_json_file(&state.emit_file, &events) .and_then(|path| File::open(path).into_diagnostic()) { Ok(file) => { stdin.replace(Stdio::from(file)); } Err(err) => { error!("Failed to write events to stdin, continuing without it: {err}"); } }, EmitEvents::JsonFile => match emits_to_json_file(&state.emit_file, &events) { Ok(path) => { envs = Box::new(envs.chain(once(EnvVar { key: "WATCHEXEC_EVENTS_FILE".into(), value: path.into(), }))); } Err(err) => { error!("Failed to write WATCHEXEC_EVENTS_FILE, continuing without it: {err}"); } }, EmitEvents::None => {} } for var in envs { debug!(?var, "inserting environment variable"); command.env(var.key, var.value); } if let Some(stdin) = stdin { debug!("set command stdin"); command.stdin(stdin); } } pub fn reset_screen() { for cs in [ ClearScreen::WindowsCooked, ClearScreen::WindowsVt, ClearScreen::VtLeaveAlt, ClearScreen::VtWellDone, ClearScreen::default(), ] { cs.clear().ok(); } } ================================================ FILE: crates/cli/src/dirs.rs ================================================ use std::{ 
collections::HashSet, path::{Path, PathBuf}, }; use ignore_files::{IgnoreFile, IgnoreFilesFromOriginArgs}; use miette::{miette, IntoDiagnostic, Result}; use project_origins::ProjectType; use tokio::fs::canonicalize; use tracing::{debug, info, warn}; use watchexec::paths::common_prefix; use crate::args::{command::CommandArgs, filtering::FilteringArgs, Args}; pub async fn project_origin( FilteringArgs { project_origin, paths, .. }: &FilteringArgs, CommandArgs { workdir, .. }: &CommandArgs, ) -> Result { let project_origin = if let Some(origin) = project_origin { debug!(?origin, "project origin override"); canonicalize(origin).await.into_diagnostic()? } else { let homedir = match dirs::home_dir() { None => None, Some(dir) => Some(canonicalize(dir).await.into_diagnostic()?), }; debug!(?homedir, "home directory"); let homedir_requested = homedir.as_ref().map_or(false, |home| { paths .binary_search_by_key(home, |w| PathBuf::from(w.clone())) .is_ok() }); debug!( ?homedir_requested, "resolved whether the homedir is explicitly requested" ); let mut origins = HashSet::new(); for path in paths { origins.extend(project_origins::origins(path).await); } match (homedir, homedir_requested) { (Some(ref dir), false) if origins.contains(dir) => { debug!("removing homedir from origins"); origins.remove(dir); } _ => {} } if origins.is_empty() { debug!("no origins, using current directory"); origins.insert(workdir.clone().unwrap()); } debug!(?origins, "resolved all project origins"); // This canonicalize is probably redundant canonicalize( common_prefix(&origins) .ok_or_else(|| miette!("no common prefix, but this should never fail"))?, ) .await .into_diagnostic()? 
};
debug!(?project_origin, "resolved common/project origin");
Ok(project_origin)
}

pub async fn vcs_types(origin: &Path) -> Vec<ProjectType> {
	let vcs_types = project_origins::types(origin)
		.await
		.into_iter()
		.filter(|pt| pt.is_vcs())
		.collect::<Vec<_>>();
	info!(?vcs_types, "effective vcs types");
	vcs_types
}

pub async fn ignores(args: &Args, vcs_types: &[ProjectType]) -> Result<Vec<IgnoreFile>> {
	let origin = args.filtering.project_origin.clone().unwrap();

	let mut skip_git_global_excludes = false;
	let mut ignores = if args.filtering.no_project_ignore {
		Vec::new()
	} else {
		let ignore_files = args.filtering.ignore_files.iter().map(|path| {
			if path.is_absolute() {
				path.into()
			} else {
				origin.join(path)
			}
		});
		let (mut ignores, errors) = ignore_files::from_origin(
			IgnoreFilesFromOriginArgs::new_unchecked(
				&origin,
				args.filtering.paths.iter().map(PathBuf::from),
				ignore_files,
			)
			.canonicalise()
			.await
			.into_diagnostic()?,
		)
		.await;
		for err in errors {
			warn!("while discovering project-local ignore files: {}", err);
		}
		debug!(?ignores, "discovered ignore files from project origin");
		if !vcs_types.is_empty() {
			ignores = ignores
				.into_iter()
				.filter(|ig| match ig.applies_to {
					Some(pt) if pt.is_vcs() => vcs_types.contains(&pt),
					_ => true,
				})
				.inspect(|ig| {
					if let IgnoreFile {
						applies_to: Some(ProjectType::Git),
						applies_in: None,
						..
					} = ig {
						warn!("project git config overrides the global excludes");
						skip_git_global_excludes = true;
					}
				})
				.collect::<Vec<_>>();
			debug!(?ignores, "filtered ignores to only those for project vcs");
		}
		ignores
	};

	let global_ignores = if args.filtering.no_global_ignore {
		Vec::new()
	} else {
		let (mut global_ignores, errors) = ignore_files::from_environment(Some("watchexec")).await;
		for err in errors {
			warn!("while discovering global ignore files: {}", err);
		}
		debug!(?global_ignores, "discovered ignore files from environment");
		if skip_git_global_excludes {
			global_ignores = global_ignores
				.into_iter()
				.filter(|gig| {
					!matches!(
						gig,
						IgnoreFile {
							applies_to: Some(ProjectType::Git),
							applies_in: None,
							..
						}
					)
				})
				.collect::<Vec<_>>();
			debug!(
				?global_ignores,
				"filtered global ignores to exclude global git ignores"
			);
		}
		global_ignores
	};

	ignores.extend(global_ignores.into_iter().filter(|ig| match ig.applies_to {
		Some(pt) if pt.is_vcs() => vcs_types.contains(&pt),
		_ => true,
	}));
	debug!(
		?ignores,
		?vcs_types,
		"combined and applied overall vcs filter over ignores"
	);

	ignores.extend(args.filtering.ignore_files.iter().map(|ig| IgnoreFile {
		applies_to: None,
		applies_in: None,
		path: ig.clone(),
	}));
	debug!(
		?ignores,
		?args.filtering.ignore_files,
		"combined with ignore files from command line / env"
	);

	if args.filtering.no_project_ignore {
		ignores = ignores
			.into_iter()
			.filter(|ig| {
				!ig.applies_in
					.as_ref()
					.map_or(false, |p| p.starts_with(&origin))
			})
			.collect::<Vec<_>>();
		debug!(
			?ignores,
			"filtered ignores to exclude project-local ignores"
		);
	}

	if args.filtering.no_global_ignore {
		ignores = ignores
			.into_iter()
			.filter(|ig| ig.applies_in.is_some())
			.collect::<Vec<_>>();
		debug!(?ignores, "filtered ignores to exclude global ignores");
	}

	if args.filtering.no_vcs_ignore {
		ignores = ignores
			.into_iter()
			.filter(|ig| ig.applies_to.is_none())
			.collect::<Vec<_>>();
		debug!(?ignores, "filtered ignores to exclude VCS-specific ignores");
	}

	info!(files=?ignores.iter().map(|ig| ig.path.as_path()).collect::<Vec<_>>(), "found some
ignores");
	Ok(ignores)
}

================================================
FILE: crates/cli/src/emits.rs
================================================
use std::{fmt::Write, path::PathBuf};

use miette::{IntoDiagnostic, Result};
use watchexec::paths::summarise_events_to_env;
use watchexec_events::{filekind::FileEventKind, Event, Tag};

use crate::{args::command::EnvVar, state::RotatingTempFile};

pub fn emits_to_environment(events: &[Event]) -> impl Iterator<Item = EnvVar> {
	summarise_events_to_env(events.iter())
		.into_iter()
		.map(|(k, value)| EnvVar {
			key: format!("WATCHEXEC_{k}_PATH"),
			value,
		})
}

pub fn events_to_simple_format(events: &[Event]) -> Result<String> {
	let mut buf = String::new();
	for event in events {
		let feks = event
			.tags
			.iter()
			.filter_map(|tag| match tag {
				Tag::FileEventKind(kind) => Some(kind),
				_ => None,
			})
			.collect::<Vec<_>>();

		for path in event.paths().map(|(p, _)| p) {
			if feks.is_empty() {
				writeln!(&mut buf, "other:{}", path.to_string_lossy()).into_diagnostic()?;
				continue;
			}

			for fek in &feks {
				writeln!(
					&mut buf,
					"{}:{}",
					match fek {
						FileEventKind::Any | FileEventKind::Other => "other",
						FileEventKind::Access(_) => "access",
						FileEventKind::Create(_) => "create",
						FileEventKind::Modify(_) => "modify",
						FileEventKind::Remove(_) => "remove",
					},
					path.to_string_lossy()
				)
				.into_diagnostic()?;
			}
		}
	}
	Ok(buf)
}

pub fn emits_to_file(target: &RotatingTempFile, events: &[Event]) -> Result<PathBuf> {
	target.rotate()?;
	target.write(events_to_simple_format(events)?.as_bytes())?;
	Ok(target.path())
}

pub fn emits_to_json_file(target: &RotatingTempFile, events: &[Event]) -> Result<PathBuf> {
	target.rotate()?;
	for event in events {
		if event.is_empty() {
			continue;
		}
		target.write(&serde_json::to_vec(event).into_diagnostic()?)?;
		target.write(b"\n")?;
	}
	Ok(target.path())
}

================================================
FILE: crates/cli/src/filterer/parse.rs
================================================
use std::{fmt::Debug, path::PathBuf};

use jaq_core::{
	load::{Arena, File, Loader},
	Ctx, Filter, Native,
	RcIter,
};
use jaq_json::Val;
use miette::{miette, IntoDiagnostic, Result, WrapErr};
use tokio::io::AsyncReadExt;
use tracing::{debug, trace};
use watchexec_events::Event;

use super::proglib::jaq_lib;

#[derive(Clone)]
pub enum FilterProgram {
	Jaq(Filter<Native<Val>>),
}

impl Debug for FilterProgram {
	fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
		match self {
			Self::Jaq(_) => f.debug_tuple("Jaq").field(&"filter").finish(),
		}
	}
}

impl FilterProgram {
	pub(crate) async fn new_jaq_from_file(path: impl Into<PathBuf>) -> Result<Self> {
		async fn inner(path: PathBuf) -> Result<FilterProgram> {
			trace!(?path, "reading filter program from file");
			let mut progfile = tokio::fs::File::open(&path).await.into_diagnostic()?;
			let mut buf =
				String::with_capacity(progfile.metadata().await.into_diagnostic()?.len() as _);
			let bytes_read = progfile.read_to_string(&mut buf).await.into_diagnostic()?;
			debug!(?path, %bytes_read, "read filter program from file");
			FilterProgram::new_jaq(path, buf)
		}

		let path = path.into();
		let error = format!("in file {path:?}");
		inner(path).await.wrap_err(error)
	}

	pub(crate) fn new_jaq_from_arg(n: usize, arg: String) -> Result<Self> {
		let path = PathBuf::from(format!(""));
		let error = format!("in --filter-prog {n}");
		Self::new_jaq(path, arg).wrap_err(error)
	}

	fn new_jaq(path: PathBuf, code: String) -> Result<Self> {
		let user_lib_paths = [
			PathBuf::from("~/.jq"),
			PathBuf::from("$ORIGIN/../lib/jq"),
			PathBuf::from("$ORIGIN/../lib"),
		];
		let arena = Arena::default();
		let loader =
			Loader::new(jaq_std::defs().chain(jaq_json::defs())).with_std_read(&user_lib_paths);
		let modules = match loader.load(&arena, File { path, code: &code }) {
			Ok(m) => m,
			Err(errs) => {
				let errs = errs
					.into_iter()
					.map(|(_, err)| format!("{err:?}"))
					.collect::<Vec<_>>()
					.join("\n");
				return Err(miette!("{}", errs).wrap_err("failed to load filter program"));
			}
		};
		let filter = jaq_lib()
			.compile(modules)
			.map_err(|errs| miette!("Failed to compile jaq program: {:?}", errs))?;
		Ok(Self::Jaq(filter))
	}

	pub(crate) fn run(&self, event:
&Event) -> Result<bool> {
		match self {
			Self::Jaq(filter) => {
				let inputs = RcIter::new(std::iter::empty());
				let val = serde_json::to_value(event)
					.map_err(|err| miette!("failed to serialize event: {}", err))
					.map(Val::from)?;
				let mut results = filter.run((Ctx::new([], &inputs), val));
				results
					.next()
					.ok_or_else(|| miette!("returned no value"))?
					.map_err(|err| miette!("program failed: {err}"))
					.and_then(|val| match val {
						Val::Bool(b) => Ok(b),
						val => Err(miette!("returned non-boolean {val:?}")),
					})
			}
		}
	}
}

================================================
FILE: crates/cli/src/filterer/proglib/file.rs
================================================
use std::{
	fs::{metadata, File, FileType, Metadata},
	io::{BufReader, Read},
	iter::once,
	time::{SystemTime, UNIX_EPOCH},
};

use jaq_core::{Error, Native};
use jaq_json::Val;
use jaq_std::{v, Filter};
use serde_json::{json, Value};
use tracing::{debug, error};

use super::macros::return_err;

pub fn funs() -> [Filter<Native<Val>>; 3] {
	[
		(
			"file_read",
			v(0),
			Native::new({
				move |_, (mut ctx, val)| {
					let path = match &val {
						Val::Str(v) => v.to_string(),
						_ => return_err!(Err(Error::str(format!(
							"expected string (path) but got {val:?}"
						)))),
					};
					let Val::Int(bytes) = ctx.pop_var() else {
						return_err!(Err(Error::str("expected integer")));
					};
					let bytes = match u64::try_from(bytes) {
						Ok(b) => b,
						Err(err) => return_err!(Err(Error::str(format!(
							"expected positive integer; {err}"
						)))),
					};
					Box::new(once(Ok(match File::open(&path) {
						Ok(file) => {
							let buf_reader = BufReader::new(file);
							let mut limited = buf_reader.take(bytes);
							let mut buffer = String::with_capacity(bytes as _);
							match limited.read_to_string(&mut buffer) {
								Ok(read) => {
									debug!("jaq: read {read} bytes from {path:?}");
									Val::Str(buffer.into())
								}
								Err(err) => {
									error!("jaq: failed to read from {path:?}: {err:?}");
									Val::Null
								}
							}
						}
						Err(err) => {
							error!("jaq: failed to open file {path:?}: {err:?}");
							Val::Null
						}
					})))
				}
			}),
		),
		(
			"file_meta",
			v(0),
			Native::new({
				move |_, (_, val)| {
					let path = match &val {
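// The functions in this file are native jaq filters exposed to `--filter-prog`
// programs. Going by the shapes above (the path arrives as the filter's input
// value), usage in a filter program looks roughly like the following; treat
// the exact spellings as illustrative, not normative:
//
//     "Cargo.toml" | file_size    # byte length, or null if unreadable
//     "Cargo.toml" | file_meta    # object with type/size/modified/accessed/...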
Val::Str(v) => v.to_string(), _ => return_err!(Err(Error::str("expected string (path) but got {val:?}"))), }; Box::new(once(Ok(match metadata(&path) { Ok(meta) => Val::from(json_meta(meta)), Err(err) => { error!("jaq: failed to open {path:?}: {err:?}"); Val::Null } }))) } }), ), ( "file_size", v(0), Native::new({ move |_, (_, val)| { let path = match &val { Val::Str(v) => v.to_string(), _ => return_err!(Err(Error::str("expected string (path) but got {val:?}"))), }; Box::new(once(Ok(match metadata(&path) { Ok(meta) => Val::Int(meta.len() as _), Err(err) => { error!("jaq: failed to open {path:?}: {err:?}"); Val::Null } }))) } }), ), ] } fn json_meta(meta: Metadata) -> Value { let perms = meta.permissions(); #[cfg_attr(not(unix), allow(unused_mut))] let mut val = json!({ "type": filetype_str(meta.file_type()), "size": meta.len(), "modified": fs_time(meta.modified()), "accessed": fs_time(meta.accessed()), "created": fs_time(meta.created()), "dir": meta.is_dir(), "file": meta.is_file(), "symlink": meta.is_symlink(), "readonly": perms.readonly(), }); #[cfg(unix)] { use std::os::unix::fs::PermissionsExt; let map = val.as_object_mut().unwrap(); map.insert( "mode".to_string(), Value::String(format!("{:o}", perms.mode())), ); map.insert("mode_byte".to_string(), Value::from(perms.mode())); map.insert( "executable".to_string(), Value::Bool(perms.mode() & 0o111 != 0), ); } val } fn filetype_str(filetype: FileType) -> &'static str { #[cfg(unix)] { use std::os::unix::fs::FileTypeExt; if filetype.is_char_device() { return "char"; } else if filetype.is_block_device() { return "block"; } else if filetype.is_fifo() { return "fifo"; } else if filetype.is_socket() { return "socket"; } } #[cfg(windows)] { use std::os::windows::fs::FileTypeExt; if filetype.is_symlink_dir() { return "symdir"; } else if filetype.is_symlink_file() { return "symfile"; } } if filetype.is_dir() { "dir" } else if filetype.is_file() { "file" } else if filetype.is_symlink() { "symlink" } else { "unknown" } } fn 
fs_time(time: std::io::Result) -> Option { time.ok() .and_then(|time| time.duration_since(UNIX_EPOCH).ok()) .map(|dur| dur.as_secs()) } ================================================ FILE: crates/cli/src/filterer/proglib/hash.rs ================================================ use std::{fs::File, io::Read, iter::once}; use jaq_core::{Error, Native}; use jaq_json::Val; use jaq_std::{v, Filter}; use tracing::{debug, error}; use super::macros::return_err; pub fn funs() -> [Filter>; 2] { [ ( "hash", v(0), Native::new({ move |_, (_, val)| { let string = match &val { Val::Str(v) => v.to_string(), _ => return_err!(Err(Error::str("expected string but got {val:?}"))), }; Box::new(once(Ok(Val::Str( blake3::hash(string.as_bytes()).to_hex().to_string().into(), )))) } }), ), ( "file_hash", v(0), Native::new({ move |_, (_, val)| { let path = match &val { Val::Str(v) => v.to_string(), _ => return_err!(Err(Error::str("expected string but got {val:?}"))), }; Box::new(once(Ok(match File::open(&path) { Ok(mut file) => { const BUFFER_SIZE: usize = 1024 * 1024; let mut hasher = blake3::Hasher::new(); let mut buf = vec![0; BUFFER_SIZE]; while let Ok(bytes) = file.read(&mut buf) { debug!("jaq: read {bytes} bytes from {path:?}"); if bytes == 0 { break; } hasher.update(&buf[..bytes]); buf = vec![0; BUFFER_SIZE]; } Val::Str(hasher.finalize().to_hex().to_string().into()) } Err(err) => { error!("jaq: failed to open file {path:?}: {err:?}"); Val::Null } }))) } }), ), ] } ================================================ FILE: crates/cli/src/filterer/proglib/kv.rs ================================================ use std::{ iter::once, sync::{Arc, OnceLock}, }; use dashmap::DashMap; use jaq_core::Native; use jaq_json::Val; use jaq_std::{v, Filter}; use crate::filterer::syncval::SyncVal; type KvStore = Arc>; fn kv_store() -> KvStore { static KV_STORE: OnceLock = OnceLock::new(); KV_STORE.get_or_init(KvStore::default).clone() } pub fn funs() -> [Filter>; 3] { [ ( "kv_clear", v(0), Native::new({ 
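The `file_hash` filter above hashes a file in fixed-size chunks, looping on `read` until it returns 0 bytes. A std-only sketch of the same loop, with `DefaultHasher` standing in for blake3 purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
use std::io::Read;

/// Hash a reader in fixed-size chunks, mirroring the `file_hash` loop.
/// DefaultHasher is a stand-in for blake3 here; the structure is the point.
fn hash_reader<R: Read>(mut reader: R) -> std::io::Result<u64> {
    const BUFFER_SIZE: usize = 64 * 1024;
    let mut hasher = DefaultHasher::new();
    let mut buf = vec![0u8; BUFFER_SIZE];
    loop {
        let bytes = reader.read(&mut buf)?;
        if bytes == 0 {
            break; // EOF: read returned zero bytes
        }
        // Only feed the bytes actually read this round.
        hasher.write(&buf[..bytes]);
    }
    Ok(hasher.finish())
}

fn main() -> std::io::Result<()> {
    let a = hash_reader(&b"watchexec"[..])?;
    let b = hash_reader(&b"watchexec"[..])?;
    assert_eq!(a, b); // same input, same digest
    Ok(())
}
```

Unlike this sketch, the original reallocates the buffer each iteration and swallows read errors by exiting the `while let` loop.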
move |_, (_, val)| { let kv = kv_store(); kv.clear(); Box::new(once(Ok(val))) } }), ), ( "kv_store", v(1), Native::new({ move |_, (mut ctx, val)| { let kv = kv_store(); let key = ctx.pop_var().to_string(); kv.insert(key, (&val).into()); Box::new(once(Ok(val))) } }), ), ( "kv_fetch", v(1), Native::new({ move |_, (mut ctx, _)| { let kv = kv_store(); let key = ctx.pop_var().to_string(); Box::new(once(Ok(kv .get(&key) .map_or(Val::Null, |val| val.value().into())))) } }), ), ] } ================================================ FILE: crates/cli/src/filterer/proglib/macros.rs ================================================ macro_rules! return_err { ($err:expr) => { return Box::new(once($err.map_err(Into::into))) }; } pub(crate) use return_err; ================================================ FILE: crates/cli/src/filterer/proglib/output.rs ================================================ use std::iter::once; use jaq_core::{Ctx, Error, Native}; use jaq_json::Val; use jaq_std::{v, Filter}; use tracing::{debug, error, info, trace, warn}; use super::macros::return_err; macro_rules! 
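The `kv_store()` helper above lazily initialises a process-wide store behind a `OnceLock` so every filter invocation shares the same map. A std-only sketch of that pattern, with `Mutex<HashMap>` standing in for DashMap:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

type KvStore = Arc<Mutex<HashMap<String, String>>>;

/// Lazily-initialised, process-wide store, as in `kv_store()`.
/// Every caller gets a clone of the same Arc, so all see the same data.
fn kv_store() -> KvStore {
    static KV: OnceLock<KvStore> = OnceLock::new();
    KV.get_or_init(KvStore::default).clone()
}

fn main() {
    kv_store().lock().unwrap().insert("key".into(), "value".into());
    // A second call hands back the same underlying map.
    let fetched = kv_store().lock().unwrap().get("key").cloned();
    assert_eq!(fetched.as_deref(), Some("value"));
}
```

DashMap in the real code avoids the single global lock; the `OnceLock` initialisation is identical.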
log_action { ($level:expr, $val:expr) => { match $level.to_ascii_lowercase().as_str() { "trace" => trace!("jaq: {}", $val), "debug" => debug!("jaq: {}", $val), "info" => info!("jaq: {}", $val), "warn" => warn!("jaq: {}", $val), "error" => error!("jaq: {}", $val), _ => return_err!(Err(Error::str("invalid log level"))), } }; } pub fn funs() -> [Filter>; 3] { [ ( "log", v(1), Native::new(|_, (mut ctx, val): (Ctx<'_, Val>, _)| { let level = ctx.pop_var().to_string(); log_action!(level, val); // passthrough Box::new(once(Ok(val))) }) .with_update(|_, (mut ctx, val), _| { let level = ctx.pop_var().to_string(); log_action!(level, val); // passthrough Box::new(once(Ok(val))) }), ), ( "printout", v(0), Native::new(|_, (_, val)| { println!("{val}"); Box::new(once(Ok(val))) }) .with_update(|_, (_, val), _| { println!("{val}"); Box::new(once(Ok(val))) }), ), ( "printerr", v(0), Native::new(|_, (_, val)| { eprintln!("{val}"); Box::new(once(Ok(val))) }) .with_update(|_, (_, val), _| { eprintln!("{val}"); Box::new(once(Ok(val))) }), ), ] } ================================================ FILE: crates/cli/src/filterer/proglib.rs ================================================ use jaq_core::{Compiler, Native}; mod file; mod hash; mod kv; mod macros; mod output; pub fn jaq_lib<'s>() -> Compiler<&'s str, Native> { Compiler::<_, Native<_>>::default().with_funs( jaq_std::funs() .chain(jaq_json::funs()) .chain(file::funs()) .chain(hash::funs()) .chain(kv::funs()) .chain(output::funs()), ) } ================================================ FILE: crates/cli/src/filterer/progs.rs ================================================ use std::marker::PhantomData; use miette::miette; use tokio::{ sync::{mpsc, oneshot}, task::{block_in_place, spawn_blocking}, }; use tracing::{error, trace, warn}; use watchexec::error::RuntimeError; use watchexec_events::Event; use crate::args::Args; const BUFFER: usize = 128; #[derive(Debug)] pub struct FilterProgs { channel: Requester, } #[derive(Debug, Clone)] 
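The `log_action!` macro above dispatches on a lowercased level string and bails out of the filter on anything unrecognised. The same dispatch as a plain function, with `None` standing in for the macro's early error return:

```rust
#[derive(Debug, PartialEq)]
enum Level { Trace, Debug, Info, Warn, Error }

/// Mirror of the `log_action!` dispatch: case-insensitive level lookup.
/// Returns None where the macro would `return_err!` with "invalid log level".
fn parse_level(level: &str) -> Option<Level> {
    match level.to_ascii_lowercase().as_str() {
        "trace" => Some(Level::Trace),
        "debug" => Some(Level::Debug),
        "info" => Some(Level::Info),
        "warn" => Some(Level::Warn),
        "error" => Some(Level::Error),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_level("INFO"), Some(Level::Info));
    assert_eq!(parse_level("fatal"), None);
}
```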
pub struct Requester { sender: mpsc::Sender<(S, oneshot::Sender)>, _receiver: PhantomData, } impl Requester where S: Send + Sync, R: Send + Sync, { pub fn new(capacity: usize) -> (Self, mpsc::Receiver<(S, oneshot::Sender)>) { let (sender, receiver) = mpsc::channel(capacity); ( Self { sender, _receiver: PhantomData, }, receiver, ) } pub fn call(&self, value: S) -> Result { // FIXME: this should really be async with a timeout, but that needs filtering in general // to be async, which should be done at some point block_in_place(|| { let (sender, receiver) = oneshot::channel(); self.sender.blocking_send((value, sender)).map_err(|err| { RuntimeError::External(miette!("filter progs internal channel: {}", err).into()) })?; receiver .blocking_recv() .map_err(|err| RuntimeError::External(Box::new(err))) }) } } impl FilterProgs { pub fn check(&self, event: &Event) -> Result { self.channel.call(event.clone()) } pub fn new(args: &Args) -> miette::Result { let progs = args.filtering.filter_programs_parsed.clone(); let (requester, mut receiver) = Requester::::new(BUFFER); let task = spawn_blocking(move || { 'chan: while let Some((event, sender)) = receiver.blocking_recv() { for (n, prog) in progs.iter().enumerate() { trace!(?n, "trying filter program"); match prog.run(&event) { Ok(false) => { trace!( ?n, verdict = false, "filter program finished; fail so stopping there" ); sender .send(false) .unwrap_or_else(|_| warn!("failed to send filter result")); continue 'chan; } Ok(true) => { trace!( ?n, verdict = true, "filter program finished; pass so trying next" ); continue; } Err(err) => { error!(?n, error=%err, "filter program failed, so trying next"); continue; } } } trace!("all filters failed, sending pass as default"); sender .send(true) .unwrap_or_else(|_| warn!("failed to send filter result")); } Ok(()) as miette::Result<()> }); tokio::spawn(async { match task.await { Ok(Ok(())) => {} Ok(Err(err)) => error!("filter progs task failed: {}", err), Err(err) => error!("filter progs 
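`Requester` above implements a request/response protocol over a single channel: each request carries its own one-shot reply sender, so the worker can answer the exact caller that asked. A std-only, thread-based sketch of that shape (the real code uses tokio's `mpsc` + `oneshot`; a per-request `mpsc::Sender` stands in for the oneshot here, and the "contains keep" filter is invented for the demo):

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn a worker that answers each request on the reply channel
/// bundled with it, like the receiver loop in `FilterProgs::new`.
fn spawn_worker() -> mpsc::Sender<(String, mpsc::Sender<bool>)> {
    let (tx, rx) = mpsc::channel::<(String, mpsc::Sender<bool>)>();
    thread::spawn(move || {
        while let Ok((event, reply)) = rx.recv() {
            // Stand-in "filter program": pass events containing "keep".
            let verdict = event.contains("keep");
            let _ = reply.send(verdict);
        }
    });
    tx
}

/// Mirror of `Requester::call`: send the value plus a fresh reply
/// channel, then block on the answer.
fn call(tx: &mpsc::Sender<(String, mpsc::Sender<bool>)>, event: &str) -> bool {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send((event.to_string(), reply_tx)).unwrap();
    reply_rx.recv().unwrap()
}

fn main() {
    let tx = spawn_worker();
    assert!(call(&tx, "keep this"));
    assert!(!call(&tx, "drop this"));
}
```

The tokio version additionally needs `block_in_place` because `call` blocks from inside an async runtime.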
task panicked: {}", err), } }); Ok(Self { channel: requester }) } } ================================================ FILE: crates/cli/src/filterer/syncval.rs ================================================ /// Jaq's [Val](jaq_json::Val) uses Rc, but we want to use in Sync contexts. UGH! use std::{rc::Rc, sync::Arc}; use indexmap::IndexMap; use jaq_json::Val; #[derive(Clone, Debug)] pub enum SyncVal { Null, Bool(bool), Int(isize), Float(f64), Num(Arc), Str(Arc), Arr(Arc<[SyncVal]>), Obj(Arc, SyncVal>>), } impl From<&Val> for SyncVal { fn from(val: &Val) -> Self { match val { Val::Null => Self::Null, Val::Bool(b) => Self::Bool(*b), Val::Int(i) => Self::Int(*i), Val::Float(f) => Self::Float(*f), Val::Num(s) => Self::Num(s.to_string().into()), Val::Str(s) => Self::Str(s.to_string().into()), Val::Arr(a) => Self::Arr({ let mut arr = Vec::with_capacity(a.len()); for v in a.iter() { arr.push(v.into()); } arr.into() }), Val::Obj(m) => Self::Obj(Arc::new({ let mut map = IndexMap::new(); for (k, v) in m.iter() { map.insert(k.to_string().into(), v.into()); } map })), } } } impl From<&SyncVal> for Val { fn from(val: &SyncVal) -> Self { match val { SyncVal::Null => Self::Null, SyncVal::Bool(b) => Self::Bool(*b), SyncVal::Int(i) => Self::Int(*i), SyncVal::Float(f) => Self::Float(*f), SyncVal::Num(s) => Self::Num(s.to_string().into()), SyncVal::Str(s) => Self::Str(s.to_string().into()), SyncVal::Arr(a) => Self::Arr({ let mut arr = Vec::with_capacity(a.len()); for v in a.iter() { arr.push(v.into()); } arr.into() }), SyncVal::Obj(m) => Self::Obj(Rc::new({ let mut map: IndexMap<_, _, foldhash::fast::RandomState> = Default::default(); for (k, v) in m.iter() { map.insert(k.to_string().into(), v.into()); } map })), } } } ================================================ FILE: crates/cli/src/filterer.rs ================================================ use std::{ ffi::OsString, path::{Path, PathBuf, MAIN_SEPARATOR}, sync::Arc, }; use miette::{IntoDiagnostic, Result}; use 
tokio::io::{AsyncBufReadExt, BufReader}; use tracing::{info, trace, trace_span}; use watchexec::{error::RuntimeError, filter::Filterer}; use watchexec_events::{ filekind::{FileEventKind, ModifyKind}, Event, Priority, Tag, }; use watchexec_filterer_globset::GlobsetFilterer; use crate::args::{filtering::FsEvent, Args}; pub mod parse; mod proglib; mod progs; mod syncval; /// A custom filterer that combines the library's Globset filterer and a switch for --no-meta #[derive(Debug)] pub struct WatchexecFilterer { inner: GlobsetFilterer, fs_events: Vec, progs: Option, } impl Filterer for WatchexecFilterer { #[tracing::instrument(level = "trace", skip(self))] fn check_event(&self, event: &Event, priority: Priority) -> Result { for tag in &event.tags { if let Tag::FileEventKind(fek) = tag { let normalised = match fek { FileEventKind::Access(_) => FsEvent::Access, FileEventKind::Modify(ModifyKind::Name(_)) => FsEvent::Rename, FileEventKind::Modify(ModifyKind::Metadata(_)) => FsEvent::Metadata, FileEventKind::Modify(_) => FsEvent::Modify, FileEventKind::Create(_) => FsEvent::Create, FileEventKind::Remove(_) => FsEvent::Remove, _ => continue, }; trace!(allowed=?self.fs_events, this=?normalised, "check against fs event filter"); if !self.fs_events.contains(&normalised) { return Ok(false); } } } trace!("check against original event"); if !self.inner.check_event(event, priority)? { return Ok(false); } if let Some(progs) = &self.progs { trace!("check against program filters"); if !progs.check(event)? { return Ok(false); } } Ok(true) } } impl WatchexecFilterer { /// Create a new filterer from the given arguments pub async fn new(args: &Args) -> Result> { let project_origin = args.filtering.project_origin.clone().unwrap(); let workdir = args.command.workdir.clone().unwrap(); let ignore_files = if args.filtering.no_discover_ignore { Vec::new() } else { let vcs_types = crate::dirs::vcs_types(&project_origin).await; crate::dirs::ignores(args, &vcs_types).await? 
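`check_event` above first normalises each raw `FileEventKind` into the coarser `FsEvent` enum, then rejects the event if the normalised kind is missing from the `--fs-events` allow-list, passing unrecognised kinds through. A toy sketch of that gate (string kinds stand in for the `FileEventKind` variants):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum FsEvent { Access, Create, Remove, Modify, Metadata, Rename }

/// Mirror of the --fs-events gate in check_event: normalise the raw
/// kind, then require it to be on the allow-list.
fn allowed(kind: &str, allow: &[FsEvent]) -> bool {
    let normalised = match kind {
        "access" => FsEvent::Access,
        "modify/name" => FsEvent::Rename,
        "modify/metadata" => FsEvent::Metadata,
        "modify" => FsEvent::Modify,
        "create" => FsEvent::Create,
        "remove" => FsEvent::Remove,
        _ => return true, // kinds we don't recognise pass through (`continue`)
    };
    allow.contains(&normalised)
}

fn main() {
    let allow = [FsEvent::Create, FsEvent::Modify];
    assert!(allowed("create", &allow));
    assert!(!allowed("remove", &allow));
    assert!(allowed("other", &allow)); // unrecognised kinds are not filtered
}
```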
}; let mut ignores = Vec::new(); if !args.filtering.no_default_ignore { ignores.extend([ (format!("**{MAIN_SEPARATOR}.DS_Store"), None), (String::from("watchexec.*.log"), None), (String::from("*.py[co]"), None), (String::from("#*#"), None), (String::from(".#*"), None), (String::from(".*.kate-swp"), None), (String::from(".*.sw?"), None), (String::from(".*.sw?x"), None), (format!("**{MAIN_SEPARATOR}.bzr{MAIN_SEPARATOR}**"), None), (format!("**{MAIN_SEPARATOR}_darcs{MAIN_SEPARATOR}**"), None), ( format!("**{MAIN_SEPARATOR}.fossil-settings{MAIN_SEPARATOR}**"), None, ), (format!("**{MAIN_SEPARATOR}.git{MAIN_SEPARATOR}**"), None), (format!("**{MAIN_SEPARATOR}.hg{MAIN_SEPARATOR}**"), None), (format!("**{MAIN_SEPARATOR}.pijul{MAIN_SEPARATOR}**"), None), (format!("**{MAIN_SEPARATOR}.svn{MAIN_SEPARATOR}**"), None), ]); } let whitelist = args .filtering .paths .iter() .map(std::convert::Into::into) .filter(|p: &PathBuf| p.is_file()); let mut filters = args .filtering .filter_patterns .iter() .map(|f| (f.to_owned(), Some(workdir.clone()))) .collect::>(); for filter_file in &args.filtering.filter_files { filters.extend(read_filter_file(filter_file).await?); } ignores.extend( args.filtering .ignore_patterns .iter() .map(|f| (f.to_owned(), Some(workdir.clone()))), ); let exts = args .filtering .filter_extensions .iter() .map(|e| OsString::from(e.strip_prefix('.').unwrap_or(e))); info!("initialising Globset filterer"); Ok(Arc::new(Self { inner: GlobsetFilterer::new( project_origin, filters, ignores, whitelist, ignore_files, exts, ) .await .into_diagnostic()?, fs_events: args.filtering.filter_fs_events.clone(), progs: if args.filtering.filter_programs_parsed.is_empty() { None } else { Some(progs::FilterProgs::new(args)?) 
}, })) } } async fn read_filter_file(path: &Path) -> Result)>> { let _span = trace_span!("loading filter file", ?path).entered(); let file = tokio::fs::File::open(path).await.into_diagnostic()?; let metadata_len = file .metadata() .await .map(|m| usize::try_from(m.len())) .unwrap_or(Ok(0)) .into_diagnostic()?; let filter_capacity = if metadata_len == 0 { 0 } else { metadata_len / 20 }; let mut filters = Vec::with_capacity(filter_capacity); let reader = BufReader::new(file); let mut lines = reader.lines(); while let Some(line) = lines.next_line().await.into_diagnostic()? { let line = line.trim(); if line.is_empty() || line.starts_with('#') { continue; } trace!(?line, "adding filter line"); filters.push((line.to_owned(), Some(path.to_owned()))); } Ok(filters) } ================================================ FILE: crates/cli/src/lib.rs ================================================ #![deny(rust_2018_idioms)] #![allow(clippy::missing_const_for_fn, clippy::future_not_send)] use std::{ io::{IsTerminal, Write}, process::{ExitCode, Stdio}, }; use clap::CommandFactory; use clap_complete::{Generator, Shell}; use clap_mangen::Man; use miette::{IntoDiagnostic, Result}; use std::sync::Arc; use tokio::{io::AsyncWriteExt, process::Command}; use tracing::{debug, info}; use watchexec::Watchexec; use watchexec_events::{Event, Priority}; use crate::{ args::{Args, ShellCompletion}, filterer::WatchexecFilterer, }; pub mod args; mod config; mod dirs; mod emits; mod filterer; mod socket; mod state; async fn run_watchexec(args: Args, state: state::State) -> Result<()> { info!(version=%env!("CARGO_PKG_VERSION"), "constructing Watchexec from CLI"); let config = config::make_config(&args, &state)?; config.filterer(WatchexecFilterer::new(&args).await?); info!("initialising Watchexec runtime"); let wx = Arc::new(Watchexec::with_config(config)?); // Set the watchexec reference in state so it can be used for sending synthetic events state .watchexec .set(wx.clone()) .expect("watchexec 
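`read_filter_file` above trims each line, then skips blanks and `#` comments before turning the rest into filter patterns. The same line-level logic over an in-memory string (the async file I/O and the capacity heuristic are omitted):

```rust
/// Parse filter-file contents the way `read_filter_file` does:
/// trim each line, skip blanks and `#` comments, keep the rest.
fn parse_filter_lines(contents: &str) -> Vec<String> {
    contents
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .map(str::to_owned)
        .collect()
}

fn main() {
    let file = "# ignore this comment\n\n  src/**/*.rs  \n*.toml\n";
    let filters = parse_filter_lines(file);
    assert_eq!(filters, vec!["src/**/*.rs".to_string(), "*.toml".to_string()]);
}
```

In the real code each surviving line is paired with the filter file's path as its origin, so globs resolve relative to that file.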
reference already set"); if !args.events.postpone { debug!("kicking off with empty event"); wx.send_event(Event::default(), Priority::Urgent).await?; } if args.events.interactive { eprintln!("[Interactive] q: quit, p: pause/unpause, r: restart"); } info!("running main loop"); wx.main().await.into_diagnostic()??; if matches!( args.output.screen_clear, Some(args::output::ClearMode::Reset) ) { config::reset_screen(); } info!("done with main loop"); Ok(()) } async fn run_manpage() -> Result<()> { info!(version=%env!("CARGO_PKG_VERSION"), "constructing manpage"); let man = Man::new(Args::command().long_version(None)); let mut buffer: Vec = Default::default(); man.render(&mut buffer).into_diagnostic()?; if std::io::stdout().is_terminal() && which::which("man").is_ok() { let mut child = Command::new("man") .arg("-l") .arg("-") .stdin(Stdio::piped()) .stdout(Stdio::inherit()) .stderr(Stdio::inherit()) .kill_on_drop(true) .spawn() .into_diagnostic()?; child .stdin .as_mut() .unwrap() .write_all(&buffer) .await .into_diagnostic()?; if let Some(code) = child .wait() .await .into_diagnostic()? 
.code() .and_then(|code| if code == 0 { None } else { Some(code) }) { return Err(miette::miette!("Exited with status code {}", code)); } } else { std::io::stdout() .lock() .write_all(&buffer) .into_diagnostic()?; } Ok(()) } #[allow(clippy::unused_async)] async fn run_completions(shell: ShellCompletion) -> Result<()> { fn generate(generator: impl Generator) { let mut cmd = Args::command(); clap_complete::generate(generator, &mut cmd, "watchexec", &mut std::io::stdout()); } info!(version=%env!("CARGO_PKG_VERSION"), "constructing completions"); match shell { ShellCompletion::Bash => generate(Shell::Bash), ShellCompletion::Elvish => generate(Shell::Elvish), ShellCompletion::Fish => generate(Shell::Fish), ShellCompletion::Nu => generate(clap_complete_nushell::Nushell), ShellCompletion::Powershell => generate(Shell::PowerShell), ShellCompletion::Zsh => generate(Shell::Zsh), } Ok(()) } pub async fn run() -> Result { let (args, _guards) = args::get_args().await?; Ok(if args.manual { run_manpage().await?; ExitCode::SUCCESS } else if let Some(shell) = args.completions { run_completions(shell).await?; ExitCode::SUCCESS } else { let state = state::new(&args).await?; run_watchexec(args, state.clone()).await?; let exit = *(state.exit_code.lock().unwrap()); exit }) } ================================================ FILE: crates/cli/src/main.rs ================================================ #[cfg(feature = "eyra")] extern crate eyra; use std::process::ExitCode; use miette::IntoDiagnostic; #[cfg(target_env = "musl")] #[global_allocator] static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc; fn main() -> miette::Result { #[cfg(feature = "pid1")] pid1::Pid1Settings::new() .enable_log(cfg!(feature = "pid1-withlog")) .launch() .into_diagnostic()?; tokio::runtime::Builder::new_multi_thread() .enable_all() .build() .unwrap() .block_on(async { watchexec_cli::run().await }) } ================================================ FILE: crates/cli/src/socket/fallback.rs 
================================================ use miette::{bail, Result}; use crate::args::command::EnvVar; use super::{SocketSpec, Sockets}; #[derive(Debug)] pub struct SocketSet; impl SocketSet for SocketSet { async fn create(_: &[SocketSpec]) -> Result { bail!("--socket is not supported on your platform") } fn envs(&self) -> Vec { Vec::new() } } ================================================ FILE: crates/cli/src/socket/parser.rs ================================================ use std::{ ffi::OsStr, net::{IpAddr, Ipv4Addr, SocketAddr}, num::{IntErrorKind, NonZero}, str::FromStr, }; use clap::{ builder::TypedValueParser, error::{Error, ErrorKind}, }; use miette::Result; use super::{SocketSpec, SocketType}; #[derive(Clone)] pub(crate) struct SocketSpecValueParser; impl TypedValueParser for SocketSpecValueParser { type Value = SocketSpec; fn parse_ref( &self, _cmd: &clap::Command, _arg: Option<&clap::Arg>, value: &OsStr, ) -> Result { let value = value .to_str() .ok_or_else(|| Error::raw(ErrorKind::ValueValidation, "invalid UTF-8"))? 
.to_ascii_lowercase(); let (socket, value) = if let Some(val) = value.strip_prefix("tcp::") { (SocketType::Tcp, val) } else if let Some(val) = value.strip_prefix("udp::") { (SocketType::Udp, val) } else if let Some((pre, _)) = value.split_once("::") { if !pre.starts_with("[") { return Err(Error::raw( ErrorKind::ValueValidation, format!("invalid prefix {pre:?}"), )); } (SocketType::Tcp, value.as_ref()) } else { (SocketType::Tcp, value.as_ref()) }; let addr = if let Ok(addr) = SocketAddr::from_str(value) { addr } else { match NonZero::::from_str(value) { Ok(port) => SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), port.get()), Err(err) if *err.kind() == IntErrorKind::Zero => { return Err(Error::raw( ErrorKind::ValueValidation, "invalid port number: cannot be zero", )) } Err(err) if *err.kind() == IntErrorKind::PosOverflow => { return Err(Error::raw( ErrorKind::ValueValidation, "invalid port number: greater than 65535", )) } Err(_) => { return Err(Error::raw( ErrorKind::ValueValidation, "invalid port number", )) } } }; Ok(SocketSpec { socket, addr }) } } ================================================ FILE: crates/cli/src/socket/test.rs ================================================ use crate::args::Args; use super::*; use clap::{builder::TypedValueParser, CommandFactory}; use std::{ ffi::OsStr, net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6}, }; #[test] fn parse_port_only() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("8080")) .unwrap(), SocketSpec { socket: SocketType::Tcp, addr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080)), } ); } #[test] fn parse_addr_port_v4() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("1.2.3.4:38192")) .unwrap(), SocketSpec { socket: SocketType::Tcp, addr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 38192)), } ); } #[test] fn parse_addr_port_v6() { let cmd = Args::command(); 
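The value parser above accepts an optional `tcp::`/`udp::` prefix, then either a full socket address or a bare non-zero port that defaults to localhost. A std-only sketch of that two-stage parse (error handling is collapsed to strings, and the bracketed-IPv6 `::` disambiguation is omitted):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::num::NonZero;
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum SocketType { Tcp, Udp }

/// Sketch of SocketSpecValueParser: optional `tcp::`/`udp::` prefix,
/// then a full socket address, or a bare non-zero port on 127.0.0.1.
fn parse_spec(value: &str) -> Result<(SocketType, SocketAddr), String> {
    let value = value.to_ascii_lowercase();
    let (socket, rest) = if let Some(v) = value.strip_prefix("tcp::") {
        (SocketType::Tcp, v)
    } else if let Some(v) = value.strip_prefix("udp::") {
        (SocketType::Udp, v)
    } else {
        (SocketType::Tcp, value.as_str()) // TCP is the default
    };
    let addr = if let Ok(addr) = SocketAddr::from_str(rest) {
        addr
    } else {
        // NonZero rejects port 0, as the real parser does explicitly.
        let port = NonZero::<u16>::from_str(rest).map_err(|e| e.to_string())?;
        SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), port.get())
    };
    Ok((socket, addr))
}

fn main() {
    assert_eq!(
        parse_spec("8080").unwrap(),
        (SocketType::Tcp, "127.0.0.1:8080".parse().unwrap())
    );
    assert_eq!(
        parse_spec("udp::1.2.3.4:53").unwrap(),
        (SocketType::Udp, "1.2.3.4:53".parse().unwrap())
    );
    assert!(parse_spec("0").is_err());
}
```

The real parser additionally inspects `IntErrorKind` to distinguish "port zero" from "port too large" in its error messages.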
assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("[ff64::1234]:81")) .unwrap(), SocketSpec { socket: SocketType::Tcp, addr: SocketAddr::V6(SocketAddrV6::new( Ipv6Addr::new(0xff64, 0, 0, 0, 0, 0, 0, 0x1234), 81, 0, 0 )), } ); } #[test] fn parse_port_only_explicit_tcp() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("tcp::443")) .unwrap(), SocketSpec { socket: SocketType::Tcp, addr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 443)), } ); } #[test] fn parse_addr_port_v4_explicit_tcp() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("tcp::1.2.3.4:38192")) .unwrap(), SocketSpec { socket: SocketType::Tcp, addr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 38192)), } ); } #[test] fn parse_addr_port_v6_explicit_tcp() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("tcp::[ff64::1234]:81")) .unwrap(), SocketSpec { socket: SocketType::Tcp, addr: SocketAddr::V6(SocketAddrV6::new( Ipv6Addr::new(0xff64, 0, 0, 0, 0, 0, 0, 0x1234), 81, 0, 0 )), } ); } #[test] fn parse_port_only_explicit_udp() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("udp::443")) .unwrap(), SocketSpec { socket: SocketType::Udp, addr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 443)), } ); } #[test] fn parse_addr_port_v4_explicit_udp() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("udp::1.2.3.4:38192")) .unwrap(), SocketSpec { socket: SocketType::Udp, addr: SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(1, 2, 3, 4), 38192)), } ); } #[test] fn parse_addr_port_v6_explicit_udp() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("udp::[ff64::1234]:81")) .unwrap(), SocketSpec { socket: SocketType::Udp, addr: SocketAddr::V6(SocketAddrV6::new( 
Ipv6Addr::new(0xff64, 0, 0, 0, 0, 0, 0, 0x1234), 81, 0, 0 )), } ); } #[test] fn parse_bad_prefix() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("gopher::777")) .unwrap_err() .to_string(), String::from(r#"error: invalid prefix "gopher""#), ); } #[test] fn parse_bad_port_zero() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("0")) .unwrap_err() .to_string(), String::from("error: invalid port number: cannot be zero"), ); } #[test] fn parse_bad_port_high() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("100000")) .unwrap_err() .to_string(), String::from("error: invalid port number: greater than 65535"), ); } #[test] fn parse_bad_port_alpha() { let cmd = Args::command(); assert_eq!( SocketSpecValueParser .parse_ref(&cmd, None, OsStr::new("port")) .unwrap_err() .to_string(), String::from("error: invalid port number"), ); } ================================================ FILE: crates/cli/src/socket/unix.rs ================================================ use std::os::fd::{AsRawFd, OwnedFd}; use miette::{IntoDiagnostic, Result}; use nix::sys::socket::{ bind, listen, setsockopt, socket, sockopt, AddressFamily, Backlog, SockFlag, SockType, SockaddrStorage, }; use tracing::instrument; use crate::args::command::EnvVar; use super::{SocketSpec, SocketType, Sockets}; #[derive(Debug)] pub struct SocketSet { fds: Vec, } impl Sockets for SocketSet { #[instrument(level = "trace")] async fn create(specs: &[SocketSpec]) -> Result { debug_assert!(!specs.is_empty()); specs .into_iter() .map(SocketSpec::create) .collect::>>() .map(|fds| Self { fds }) } #[instrument(level = "trace")] fn envs(&self) -> Vec { vec![ EnvVar { key: "LISTEN_FDS".into(), value: self.fds.len().to_string().into(), }, EnvVar { key: "LISTEN_FDS_FIRST_FD".into(), value: self.fds.first().unwrap().as_raw_fd().to_string().into(), }, ] } } impl SocketSpec { fn 
create(&self) -> Result { let addr = SockaddrStorage::from(self.addr); let fam = if self.addr.is_ipv4() { AddressFamily::Inet } else { AddressFamily::Inet6 }; let ty = match self.socket { SocketType::Tcp => SockType::Stream, SocketType::Udp => SockType::Datagram, }; let sock = socket(fam, ty, SockFlag::empty(), None).into_diagnostic()?; setsockopt(&sock, sockopt::ReuseAddr, &true).into_diagnostic()?; if matches!(fam, AddressFamily::Inet | AddressFamily::Inet6) { setsockopt(&sock, sockopt::ReusePort, &true).into_diagnostic()?; } bind(sock.as_raw_fd(), &addr).into_diagnostic()?; if let SocketType::Tcp = self.socket { listen(&sock, Backlog::new(1).unwrap()).into_diagnostic()?; } Ok(sock) } } ================================================ FILE: crates/cli/src/socket/windows.rs ================================================ use std::{ io::ErrorKind, net::SocketAddr, os::windows::io::{AsRawSocket, OwnedSocket}, str::FromStr, sync::Arc, }; use miette::{IntoDiagnostic, Result}; use tokio::{ io::{AsyncReadExt, AsyncWriteExt}, net::{TcpListener, TcpStream}, task::spawn, }; use tracing::instrument; use uuid::Uuid; use windows_sys::Win32::Networking::WinSock::{WSADuplicateSocketW, SOCKET, WSAPROTOCOL_INFOW}; use crate::args::command::EnvVar; use super::{SocketSpec, SocketType, Sockets}; #[derive(Debug)] pub struct SocketSet { sockets: Arc<[OwnedSocket]>, secret: Uuid, server: Option, server_addr: SocketAddr, } impl Sockets for SocketSet { #[instrument(level = "trace")] async fn create(specs: &[SocketSpec]) -> Result { debug_assert!(!specs.is_empty()); let sockets = specs .into_iter() .map(SocketSpec::create) .collect::>>()?; let server = TcpListener::bind("127.0.0.1:0").await.into_diagnostic()?; let server_addr = server.local_addr().into_diagnostic()?; Ok(Self { sockets: sockets.into(), secret: Uuid::new_v4(), server: Some(server), server_addr, }) } #[instrument(level = "trace")] fn envs(&self) -> Vec { vec![ EnvVar { key: "SYSTEMFD_SOCKET_SERVER".into(), value: 
self.server_addr.to_string().into(), }, EnvVar { key: "SYSTEMFD_SOCKET_SECRET".into(), value: self.secret.to_string().into(), }, ] } #[instrument(level = "trace", skip(self))] fn serve(&mut self) { let listener = self.server.take().unwrap(); let secret = self.secret; let sockets = self.sockets.clone(); spawn(async move { loop { let Ok((stream, _)) = listener.accept().await else { break; }; spawn(provide_sockets(stream, sockets.clone(), secret)); } }); } } async fn provide_sockets( mut stream: TcpStream, sockets: Arc<[OwnedSocket]>, secret: Uuid, ) -> std::io::Result<()> { let mut data = Vec::new(); stream.read_to_end(&mut data).await?; let Ok(out) = String::from_utf8(data) else { return Err(ErrorKind::InvalidInput.into()); }; let Some((challenge, pid)) = out.split_once('|') else { return Err(ErrorKind::InvalidInput.into()); }; let Ok(uuid) = Uuid::from_str(challenge) else { return Err(ErrorKind::InvalidInput.into()); }; let Ok(pid) = u32::from_str(pid) else { return Err(ErrorKind::InvalidInput.into()); }; if uuid != secret { return Err(ErrorKind::InvalidData.into()); } for socket in sockets.iter() { let payload = socket_to_payload(socket, pid)?; stream.write_all(&payload).await?; } stream.shutdown().await } fn socket_to_payload(socket: &OwnedSocket, pid: u32) -> std::io::Result> { // SAFETY: // - we're not reading from this until it gets populated by WSADuplicateSocketW // - the struct is entirely integers and arrays of integers let mut proto_info: WSAPROTOCOL_INFOW = unsafe { std::mem::zeroed() }; // SAFETY: ffi if unsafe { WSADuplicateSocketW(socket.as_raw_socket() as SOCKET, pid, &mut proto_info) } != 0 { return Err(ErrorKind::InvalidData.into()); } // SAFETY: // - non-nullability, alignment, and contiguousness are taken care of by serialising a single value // - WSAPROTOCOL_INFOW is repr(C) // - we don't mutate that memory (we immediately to_vec it) // - we have its exact size Ok(unsafe { let bytes: *const u8 = &proto_info as *const WSAPROTOCOL_INFOW as *const 
_; std::slice::from_raw_parts(bytes, std::mem::size_of::()) } .to_vec()) } impl SocketSpec { fn create(&self) -> Result { use socket2::{Domain, SockAddr, Socket, Type}; let addr = SockAddr::from(self.addr); let dom = if self.addr.is_ipv4() { Domain::IPV4 } else { Domain::IPV6 }; let ty = match self.socket { SocketType::Tcp => Type::STREAM, SocketType::Udp => Type::DGRAM, }; let sock = Socket::new(dom, ty, None).into_diagnostic()?; sock.set_reuse_address(true).into_diagnostic()?; sock.bind(&addr).into_diagnostic()?; if let SocketType::Tcp = self.socket { sock.listen(1).into_diagnostic()?; } Ok(sock.into()) } } ================================================ FILE: crates/cli/src/socket.rs ================================================ // listen-fd code inspired by systemdfd source by @mitsuhiko (Apache-2.0) // https://github.com/mitsuhiko/systemfd/blob/master/src/fd.rs use std::net::SocketAddr; use clap::ValueEnum; use miette::Result; pub(crate) use imp::*; pub(crate) use parser::SocketSpecValueParser; use crate::args::command::EnvVar; #[cfg(unix)] #[path = "socket/unix.rs"] mod imp; #[cfg(windows)] #[path = "socket/windows.rs"] mod imp; #[cfg(not(any(unix, windows)))] #[path = "socket/fallback.rs"] mod imp; mod parser; #[cfg(test)] mod test; #[derive(Clone, Copy, Debug, Default, PartialEq, Eq, ValueEnum)] pub enum SocketType { #[default] Tcp, Udp, } #[derive(Clone, Copy, Debug, PartialEq, Eq)] pub struct SocketSpec { pub socket: SocketType, pub addr: SocketAddr, } pub(crate) trait Sockets where Self: Sized, { async fn create(specs: &[SocketSpec]) -> Result; fn envs(&self) -> Vec; fn serve(&mut self) {} } ================================================ FILE: crates/cli/src/state.rs ================================================ use std::{ env::var_os, io::Write, path::PathBuf, process::ExitCode, sync::{Arc, Mutex, OnceLock}, }; use watchexec::Watchexec; use miette::{IntoDiagnostic, Result}; use tempfile::NamedTempFile; use crate::{ args::Args, 
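`socket_to_payload` above serialises a `WSAPROTOCOL_INFOW` by viewing the struct as raw bytes with `slice::from_raw_parts`, which is sound because the struct is `repr(C)` and made entirely of integers. A cross-platform sketch of the same byte-view pattern with a toy struct (`Payload` is invented for the demo):

```rust
/// Stand-in for WSAPROTOCOL_INFOW: repr(C) and all plain integers,
/// so a raw byte view of it is well-defined.
#[repr(C)]
struct Payload {
    kind: u32,
    port: u16,
    pad: u16, // explicit padding: u32 + u16 + u16 packs with no holes
}

/// View a repr(C) value as bytes, as `socket_to_payload` does.
fn to_bytes(p: &Payload) -> Vec<u8> {
    // SAFETY:
    // - the pointer comes from a valid reference, so it is non-null and aligned
    // - we read exactly size_of::<Payload>() bytes of initialised memory
    // - we copy out immediately (to_vec), never holding the borrow
    unsafe {
        std::slice::from_raw_parts(
            (p as *const Payload).cast::<u8>(),
            std::mem::size_of::<Payload>(),
        )
    }
    .to_vec()
}

fn main() {
    let p = Payload { kind: 1, port: 0x1F90, pad: 0 };
    let bytes = to_bytes(&p);
    assert_eq!(bytes.len(), std::mem::size_of::<Payload>());
}
```

If the struct had implicit padding, those bytes would be uninitialised and the view would be undefined behaviour; that is why the original's SAFETY comment stresses the struct's layout.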
socket::{SocketSet, Sockets}, }; pub type State = Arc; pub async fn new(args: &Args) -> Result { let socket_set = if args.command.socket.is_empty() { None } else { let mut sockets = SocketSet::create(&args.command.socket).await?; sockets.serve(); Some(sockets) }; Ok(Arc::new(InnerState { emit_file: RotatingTempFile::default(), socket_set, exit_code: Mutex::new(ExitCode::SUCCESS), watchexec: OnceLock::new(), })) } #[derive(Debug)] pub struct InnerState { pub emit_file: RotatingTempFile, pub socket_set: Option, pub exit_code: Mutex, /// Reference to the Watchexec instance, set after creation. /// Used to send synthetic events (e.g., to trigger immediate quit on error). pub watchexec: OnceLock>, } #[derive(Debug, Default)] pub struct RotatingTempFile(Mutex>); impl RotatingTempFile { pub fn rotate(&self) -> Result<()> { // implicitly drops the old file *self.0.lock().unwrap() = Some( if let Some(dir) = var_os("WATCHEXEC_TMPDIR") { NamedTempFile::new_in(dir) } else { NamedTempFile::new() } .into_diagnostic()?, ); Ok(()) } pub fn write(&self, data: &[u8]) -> Result<()> { if let Some(file) = self.0.lock().unwrap().as_mut() { file.write_all(data).into_diagnostic()?; } Ok(()) } pub fn path(&self) -> PathBuf { if let Some(file) = self.0.lock().unwrap().as_ref() { file.path().to_owned() } else { PathBuf::new() } } } ================================================ FILE: crates/cli/tests/common/mod.rs ================================================ use std::path::PathBuf; use std::{fs, sync::OnceLock}; use miette::{Context, IntoDiagnostic, Result}; use rand::Rng; static PLACEHOLDER_DATA: OnceLock = OnceLock::new(); fn get_placeholder_data() -> &'static str { PLACEHOLDER_DATA.get_or_init(|| "PLACEHOLDER\n".repeat(500)) } /// The amount of nesting that will be used for generated files #[derive(Debug, Clone, PartialEq, Eq)] pub enum GeneratedFileNesting { /// Only one level of files Flat, /// Random, up to a certain maximum RandomToMax(usize), } /// Configuration for creating
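`RotatingTempFile` above keeps a `Mutex<Option<…>>` and replaces the whole option on `rotate()`, so assigning the new file implicitly drops (and deletes) the old one, while `write` and `path` degrade gracefully when no file exists yet. A std-only sketch of that shape, with plain files in the temp dir standing in for the tempfile crate's `NamedTempFile`:

```rust
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
use std::sync::Mutex;

/// Sketch of RotatingTempFile: a mutex-guarded optional file that is
/// replaced wholesale on rotate(). (Plain files here do not self-delete
/// on drop the way NamedTempFile does.)
#[derive(Default)]
struct RotatingFile(Mutex<Option<(PathBuf, File)>>);

impl RotatingFile {
    fn rotate(&self, name: &str) -> std::io::Result<()> {
        let path = std::env::temp_dir().join(name);
        let file = File::create(&path)?;
        // Assigning drops (and closes) the previous file, as in the original.
        *self.0.lock().unwrap() = Some((path, file));
        Ok(())
    }

    fn write(&self, data: &[u8]) -> std::io::Result<()> {
        if let Some((_, file)) = self.0.lock().unwrap().as_mut() {
            file.write_all(data)?;
        }
        Ok(()) // silently a no-op before the first rotate, like the original
    }

    fn path(&self) -> PathBuf {
        self.0
            .lock()
            .unwrap()
            .as_ref()
            .map_or_else(PathBuf::new, |(p, _)| p.clone())
    }
}

fn main() -> std::io::Result<()> {
    let rot = RotatingFile::default();
    rot.write(b"dropped: no file yet")?;
    rot.rotate("rotating_demo.log")?;
    rot.write(b"hello")?;
    assert!(rot.path().ends_with("rotating_demo.log"));
    std::fs::remove_file(rot.path())?;
    Ok(())
}
```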
testing subfolders #[derive(Debug, Clone, PartialEq, Eq)] pub struct TestSubfolderConfiguration { /// The amount of nesting that will be used when folders are generated pub(crate) nesting: GeneratedFileNesting, /// Number of files the folder should contain pub(crate) file_count: usize, /// Subfolder name pub(crate) name: String, } /// Options for generating test files #[derive(Debug, Clone, PartialEq, Eq, Default)] pub struct GenerateTestFilesArgs { /// The path where the files should be generated; /// if None, the current working directory will be used. pub(crate) path: Option<PathBuf>, /// Configurations for subfolders to generate pub(crate) subfolder_configs: Vec<TestSubfolderConfiguration>, } /// Generate test files /// /// This returns the root path, followed by one path per subfolder configuration. pub fn generate_test_files(args: GenerateTestFilesArgs) -> Result<Vec<PathBuf>> { // Use or create a temporary directory for the test files let tmpdir = if let Some(p) = args.path { p } else { tempfile::tempdir() .into_diagnostic() .wrap_err("failed to build tempdir")?
.keep() }; let mut paths = vec![tmpdir.clone()]; // Generate subfolders matching each config for subfolder_config in &args.subfolder_configs { // Create the subfolder path let subfolder_path = tmpdir.join(&subfolder_config.name); fs::create_dir(&subfolder_path) .into_diagnostic() .wrap_err(format!( "failed to create path for dir [{}]", subfolder_path.display() ))?; paths.push(subfolder_path.clone()); // Fill the subfolder with files match subfolder_config.nesting { GeneratedFileNesting::Flat => { for idx in 0..subfolder_config.file_count { // Write stub file contents fs::write( subfolder_path.join(format!("stub-file-{idx}")), get_placeholder_data(), ) .into_diagnostic() .wrap_err(format!( "failed to write temporary file in subfolder {} @ idx {idx}", subfolder_path.display() ))?; } } GeneratedFileNesting::RandomToMax(max_depth) => { let mut generator = rand::rng(); for idx in 0..subfolder_config.file_count { // Build a randomized path up to max depth let mut generated_path = subfolder_path.clone(); let depth = generator.random_range(0..max_depth); for _ in 0..depth { generated_path.push("stub-dir"); } // Create the path fs::create_dir_all(&generated_path) .into_diagnostic() .wrap_err(format!( "failed to create randomly generated path [{}]", generated_path.display() ))?; // Write stub file contents @ the new randomized path fs::write( generated_path.join(format!("stub-file-{idx}")), get_placeholder_data(), ) .into_diagnostic() .wrap_err(format!( "failed to write temporary file in subfolder {} @ idx {idx}", subfolder_path.display() ))?; } } } } Ok(paths) } ================================================ FILE: crates/cli/tests/ignore.rs ================================================ use std::{ path::{Path, PathBuf}, process::Stdio, time::Duration, }; use miette::{IntoDiagnostic, Result, WrapErr}; use tokio::{process::Command, time::Instant}; use tracing_test::traced_test; use uuid::Uuid; mod common; use common::{generate_test_files, GenerateTestFilesArgs}; use 
crate::common::{GeneratedFileNesting, TestSubfolderConfiguration}; /// Directory name that will be used for the dir that *should* be watched const WATCH_DIR_NAME: &str = "watch"; /// The token that watchexec will echo every time a match is found const WATCH_TOKEN: &str = "updated"; /// Ensure that watchexec runtime does not increase with the /// number of *ignored* files in a given folder /// /// This test creates two separate folders, one small and the other large /// /// Each folder has two subfolders: /// - a shallow one to be watched, with a few files at a single depth (5 files) /// - a deep one to be ignored, with many files at varying depths (small case 200 files, large case 200,000 files) /// /// watchexec, when executed on *either* folder, should *not* experience more /// than a 10x degradation in performance, because the vast majority of the files /// are supposed to be ignored to begin with. /// /// When running the CLI on the root folders, it should *not* take a long time to start. #[tokio::test] #[traced_test] async fn e2e_ignore_many_files_200_000() -> Result<()> { // Create a tempdir so that drop will clean it up let small_test_dir = tempfile::tempdir() .into_diagnostic() .wrap_err("failed to create tempdir for test use")?; // Determine the watchexec bin to use & build arguments let wexec_bin = std::env::var("TEST_WATCHEXEC_BIN").unwrap_or( option_env!("CARGO_BIN_EXE_watchexec") .map(std::string::ToString::to_string) .unwrap_or("watchexec".into()), ); let token = format!("{WATCH_TOKEN}-{}", Uuid::new_v4()); let args: Vec<String> = vec![ "-1".into(), // exit as soon as watch completes "--watch".into(), WATCH_DIR_NAME.into(), "echo".into(), token.clone(), ]; // Generate a small directory of files containing dirs that *will* and will *not* be watched let [ref root_dir_path, _, _] = generate_test_files(GenerateTestFilesArgs { path: Some(PathBuf::from(small_test_dir.path())), subfolder_configs: vec![ // Shallow folder will have a small number of files and will be watched TestSubfolderConfiguration { name: "watch".into(), nesting: GeneratedFileNesting::Flat, file_count: 5, }, // Deep folder will have *many* small files and will be ignored TestSubfolderConfiguration { name: "unrelated".into(), nesting: GeneratedFileNesting::RandomToMax(42), file_count: 200, }, ], })?[..] else { panic!("unexpected number of paths returned from generate_test_files"); }; // Measure the elapsed time for the small case let small_elapsed = run_watchexec_cmd(&wexec_bin, root_dir_path, args.clone()).await?; // Create a tempdir so that drop will clean it up let large_test_dir = tempfile::tempdir() .into_diagnostic() .wrap_err("failed to create tempdir for test use")?; // Generate a *large* directory of files let [ref root_dir_path, _, _] = generate_test_files(GenerateTestFilesArgs { path: Some(PathBuf::from(large_test_dir.path())), subfolder_configs: vec![ // Shallow folder will have a small number of files and will be watched TestSubfolderConfiguration { name: "watch".into(), nesting: GeneratedFileNesting::Flat, file_count: 5, }, // Deep folder will have *many* small files and will be ignored TestSubfolderConfiguration { name: "unrelated".into(), nesting: GeneratedFileNesting::RandomToMax(42), file_count: 200_000, }, ], })?[..]
else { panic!("unexpected number of paths returned from generate_test_files"); }; // Measure the elapsed time for the large case let large_elapsed = run_watchexec_cmd(&wexec_bin, root_dir_path, args.clone()).await?; // We expect the ignores to not impact watchexec startup time at all // whether there are 200 files in there or 200k assert!( large_elapsed < small_elapsed * 10, "200k ignore folder ({:?}) took more than 10x more time ({:?}) than 200 ignore folder ({:?})", large_elapsed, small_elapsed * 10, small_elapsed, ); Ok(()) } /// Run a watchexec command once async fn run_watchexec_cmd( wexec_bin: impl AsRef<str>, dir: impl AsRef<Path>, args: impl Into<Vec<String>>, ) -> Result<Duration> { // Build the subprocess command let mut cmd = Command::new(wexec_bin.as_ref()); cmd.args(args.into()); cmd.current_dir(dir); cmd.stdout(Stdio::piped()); cmd.stderr(Stdio::piped()); let start = Instant::now(); cmd.kill_on_drop(true) .output() .await .into_diagnostic() .wrap_err("failed to run watchexec")?; Ok(start.elapsed()) } ================================================ FILE: crates/cli/watchexec-manifest.rc ================================================ #define RT_MANIFEST 24 1 RT_MANIFEST "watchexec.exe.manifest" ================================================ FILE: crates/cli/watchexec.exe.manifest ================================================ true UTF-8 SegmentHeap ================================================ FILE: crates/events/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v6.1.0 (2026-02-22) - Add `Keyboard::Key` to describe arbitrary single-key keyboard events ## v6.0.0 (2025-05-15) ## v5.0.1 (2025-05-15) - Deps: remove unused dependency `nix` ([#930](https://github.com/watchexec/watchexec/pull/930)) ## v5.0.0 (2025-02-09) ## v4.0.0 (2024-10-14) - Deps: nix 0.29 ## v3.0.0 (2024-04-20) - Deps: nix 0.28 ## v2.0.1 (2023-11-29) - Add `ProcessEnd::into_exitstatus` testing-only utility method.
- Deps: upgrade to Notify 6.0 - Deps: upgrade to nix 0.27 - Deps: upgrade to watchexec-signals 2.0.0 ## v2.0.0 (2023-11-29) Same as 2.0.1, but yanked. ## v1.1.0 (2023-11-26) Same as 2.0.1, but yanked. ## v1.0.0 (2023-03-18) - Split off new `watchexec-events` crate (this one), to have a lightweight library that can parse and generate events and maintain the JSON event format. ================================================ FILE: crates/events/Cargo.toml ================================================ [package] name = "watchexec-events" version = "6.1.0" authors = ["Félix Saparelli "] license = "Apache-2.0 OR MIT" description = "Watchexec's event types" keywords = ["watchexec", "event", "format", "json"] documentation = "https://docs.rs/watchexec-events" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.61.0" edition = "2021" [dependencies.notify-types] version = "2.0.0" optional = true [dependencies.serde] version = "1.0.183" optional = true features = ["derive"] [dependencies.watchexec-signals] version = "5.0.1" path = "../signals" default-features = false [dev-dependencies] snapbox = "0.6.18" serde_json = "1.0.107" [features] default = ["notify"] notify = ["dep:notify-types"] serde = ["dep:serde", "notify-types?/serde", "watchexec-signals/serde"] [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" ================================================ FILE: crates/events/README.md ================================================ # watchexec-events _Watchexec's event types._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license] or [MIT](https://passcod.mit-license.org). - Status: maintained. 
[docs]: https://docs.rs/watchexec-events [license]: ../../LICENSE Fundamentally, events in watchexec have three purposes: 1. To trigger the launch, restart, or other interruption of a process; 2. To be filtered upon according to whatever set of criteria is desired; 3. To carry information about what caused the event, which may be provided to the process. Outside of Watchexec, this library is particularly useful if you're building a tool that runs under it, and want to easily read its events (with `--emit-events-to=json-file` and `--emit-events-to=json-stdio`). ```rust ,no_run use std::io::{stdin, Result}; use watchexec_events::Event; fn main() -> Result<()> { for line in stdin().lines() { let event: Event = serde_json::from_str(&line?)?; dbg!(event); } Ok(()) } ``` ## Features - `serde`: enables serde support. - `notify`: use Notify's file event types (default). If you disable `notify`, you'll get a leaner dependency tree that's still able to parse all events, but isn't type-compatible with Notify. In most deserialisation use cases, this is fine, but it's not the default to avoid surprises.
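When consuming a high-volume JSON event stream, it can be cheaper to pre-filter raw lines before full deserialisation. A minimal std-only sketch: the helper name `is_filesystem_event` is ours (not part of this crate), and it assumes the `{"kind":"source","source":"filesystem"}` tag shape produced by the `serde` feature; real filtering should inspect the parsed `Event` tags instead.

```rust
// Hypothetical pre-filter: cheaply skip non-filesystem events by
// substring-matching the raw JSON line, before paying for a full parse.
// This is a sketch only; it can misfire if the substring appears in
// metadata, so treat it as a fast path, not a correctness guarantee.
fn is_filesystem_event(raw_json_line: &str) -> bool {
    raw_json_line.contains(r#""source":"filesystem""#)
}

fn main() {
    let fs_line = r#"{"tags":[{"kind":"source","source":"filesystem"}]}"#;
    let kb_line = r#"{"tags":[{"kind":"source","source":"keyboard"}]}"#;
    assert!(is_filesystem_event(fs_line));
    assert!(!is_filesystem_event(kb_line));
}
```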
================================================ FILE: crates/events/examples/parse-and-print.rs ================================================ use std::io::{stdin, Result}; use watchexec_events::Event; fn main() -> Result<()> { for line in stdin().lines() { let event: Event = serde_json::from_str(&line?)?; dbg!(event); } Ok(()) } ================================================ FILE: crates/events/release.toml ================================================ pre-release-commit-message = "release: events v{{version}}" tag-prefix = "watchexec-events-" tag-message = "watchexec-events {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/events/src/event.rs ================================================ use std::{ collections::HashMap, fmt, path::{Path, PathBuf}, }; use watchexec_signals::Signal; #[cfg(feature = "serde")] use crate::serde_formats::{SerdeEvent, SerdeTag}; use crate::{filekind::FileEventKind, FileType, Keyboard, ProcessEnd}; /// An event, as far as watchexec cares about. #[derive(Clone, Debug, Default, Eq, PartialEq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(from = "SerdeEvent", into = "SerdeEvent"))] pub struct Event { /// Structured, classified information which can be used to filter or classify the event. pub tags: Vec<Tag>, /// Arbitrary other information, cannot be used for filtering. pub metadata: HashMap<String, Vec<String>>, } /// Something which can be used to filter or qualify an event. #[derive(Clone, Debug, Eq, PartialEq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(from = "SerdeTag", into = "SerdeTag"))] #[non_exhaustive] pub enum Tag { /// The event is about a path or file in the filesystem. Path { /// Path to the file or directory.
path: PathBuf, /// Optional file type, if known. file_type: Option<FileType>, }, /// Kind of a filesystem event (create, remove, modify, etc). FileEventKind(FileEventKind), /// The general source of the event. Source(Source), /// The event is about a keyboard input. Keyboard(Keyboard), /// The event was caused by a particular process. Process(u32), /// The event is about a signal being delivered to the main process. Signal(Signal), /// The event is about a subprocess ending. ProcessCompletion(Option<ProcessEnd>), #[cfg(feature = "serde")] /// The event is unknown (or not yet implemented). Unknown, } impl Tag { /// The name of the variant. #[must_use] pub const fn discriminant_name(&self) -> &'static str { match self { Self::Path { .. } => "Path", Self::FileEventKind(_) => "FileEventKind", Self::Source(_) => "Source", Self::Keyboard(_) => "Keyboard", Self::Process(_) => "Process", Self::Signal(_) => "Signal", Self::ProcessCompletion(_) => "ProcessCompletion", #[cfg(feature = "serde")] Self::Unknown => "Unknown", } } } /// The general origin of the event. /// /// This is set by the event source. Note that not all of these are currently used. #[derive(Clone, Copy, Debug, Eq, PartialEq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] #[non_exhaustive] pub enum Source { /// Event comes from a file change. Filesystem, /// Event comes from a keyboard input. Keyboard, /// Event comes from a mouse click. Mouse, /// Event comes from the OS. Os, /// Event is time based. Time, /// Event is internal to Watchexec. Internal, } impl fmt::Display for Source { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!( f, "{}", match self { Self::Filesystem => "filesystem", Self::Keyboard => "keyboard", Self::Mouse => "mouse", Self::Os => "os", Self::Time => "time", Self::Internal => "internal", } ) } } /// The priority of the event in the queue.
/// /// In the event queue, events are inserted with a priority, such that more important events are /// delivered ahead of others. This is especially important when there is a large number of events /// generated and relatively slow filtering, as events can become noticeably delayed, and may give /// the impression of stalling. #[derive(Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum Priority { /// Low priority /// /// Used for: /// - process completion events Low, /// Normal priority /// /// Used for: /// - filesystem events Normal, /// High priority /// /// Used for: /// - signals to main process, except Interrupt and Terminate High, /// Urgent events bypass filtering entirely. /// /// Used for: /// - Interrupt and Terminate signals to main process Urgent, } impl Default for Priority { fn default() -> Self { Self::Normal } } impl Event { /// Returns true if the event has an Internal source tag. #[must_use] pub fn is_internal(&self) -> bool { self.tags .iter() .any(|tag| matches!(tag, Tag::Source(Source::Internal))) } /// Returns true if the event has no tags. #[must_use] pub fn is_empty(&self) -> bool { self.tags.is_empty() } /// Return all paths in the event's tags. pub fn paths(&self) -> impl Iterator<Item = (&Path, Option<&FileType>)> { self.tags.iter().filter_map(|p| match p { Tag::Path { path, file_type } => Some((path.as_path(), file_type.as_ref())), _ => None, }) } /// Return all signals in the event's tags. pub fn signals(&self) -> impl Iterator<Item = Signal> + '_ { self.tags.iter().filter_map(|p| match p { Tag::Signal(s) => Some(*s), _ => None, }) } /// Return all process completions in the event's tags.
pub fn completions(&self) -> impl Iterator<Item = Option<ProcessEnd>> + '_ { self.tags.iter().filter_map(|p| match p { Tag::ProcessCompletion(s) => Some(*s), _ => None, }) } } impl fmt::Display for Event { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "Event")?; for p in &self.tags { match p { Tag::Path { path, file_type } => { write!(f, " path={}", path.display())?; if let Some(ft) = file_type { write!(f, " filetype={ft}")?; } } Tag::FileEventKind(kind) => write!(f, " kind={kind:?}")?, Tag::Source(s) => write!(f, " source={s:?}")?, Tag::Keyboard(k) => write!(f, " keyboard={k:?}")?, Tag::Process(p) => write!(f, " process={p}")?, Tag::Signal(s) => write!(f, " signal={s:?}")?, Tag::ProcessCompletion(None) => write!(f, " command-completed")?, Tag::ProcessCompletion(Some(c)) => write!(f, " command-completed({c:?})")?, #[cfg(feature = "serde")] Tag::Unknown => write!(f, " unknown")?, } } if !self.metadata.is_empty() { write!(f, " meta: {:?}", self.metadata)?; } Ok(()) } } ================================================ FILE: crates/events/src/fs.rs ================================================ use std::fmt; /// Re-export of the Notify file event types. #[cfg(feature = "notify")] pub mod filekind { pub use notify_types::event::{ AccessKind, AccessMode, CreateKind, DataChange, EventKind as FileEventKind, MetadataKind, ModifyKind, RemoveKind, RenameMode, }; } /// Pseudo file event types without dependency on Notify. #[cfg(not(feature = "notify"))] pub mod filekind { pub use crate::sans_notify::{ AccessKind, AccessMode, CreateKind, DataChange, EventKind as FileEventKind, MetadataKind, ModifyKind, RemoveKind, RenameMode, }; } /// The type of a file. /// /// This is a simplification of the [`std::fs::FileType`] type, which is not constructable and may /// differ on different platforms.
#[derive(Clone, Copy, Debug, Eq, PartialEq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum FileType { /// A regular file. File, /// A directory. Dir, /// A symbolic link. Symlink, /// Something else. Other, } impl From<std::fs::FileType> for FileType { fn from(ft: std::fs::FileType) -> Self { if ft.is_file() { Self::File } else if ft.is_dir() { Self::Dir } else if ft.is_symlink() { Self::Symlink } else { Self::Other } } } impl fmt::Display for FileType { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Self::File => write!(f, "file"), Self::Dir => write!(f, "dir"), Self::Symlink => write!(f, "symlink"), Self::Other => write!(f, "other"), } } } ================================================ FILE: crates/events/src/keyboard.rs ================================================ #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] #[non_exhaustive] /// A keyboard input. pub enum Keyboard { /// Event representing an 'end of file' on stdin Eof, /// A key press in interactive mode Key { /// The key that was pressed. key: KeyCode, /// Modifier keys held during the press. #[cfg_attr( feature = "serde", serde(default, skip_serializing_if = "Modifiers::is_empty") )] modifiers: Modifiers, }, } /// A key code. #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] #[non_exhaustive] pub enum KeyCode { /// A unicode character (letter, digit, symbol, space). Char(char), /// Enter / Return. Enter, /// Escape. Escape, } /// Modifier key flags. #[derive(Debug, Clone, Copy, Default, PartialEq, Eq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] pub struct Modifiers { /// Ctrl / Control was held.
#[cfg_attr(feature = "serde", serde(default, skip_serializing_if = "is_false"))] pub ctrl: bool, /// Alt / Option was held. #[cfg_attr(feature = "serde", serde(default, skip_serializing_if = "is_false"))] pub alt: bool, /// Shift was held. #[cfg_attr(feature = "serde", serde(default, skip_serializing_if = "is_false"))] pub shift: bool, } #[cfg(feature = "serde")] fn is_false(b: &bool) -> bool { !b } impl Modifiers { /// Returns true if no modifier keys are set. #[must_use] pub fn is_empty(&self) -> bool { !self.ctrl && !self.alt && !self.shift } } ================================================ FILE: crates/events/src/lib.rs ================================================ #![doc = include_str!("../README.md")] #![cfg_attr(not(test), warn(unused_crate_dependencies))] #[doc(inline)] pub use event::*; #[doc(inline)] pub use fs::*; #[doc(inline)] pub use keyboard::*; #[doc(inline)] pub use process::*; mod event; mod fs; mod keyboard; mod process; #[cfg(not(feature = "notify"))] mod sans_notify; #[cfg(feature = "serde")] mod serde_formats; ================================================ FILE: crates/events/src/process.rs ================================================ use std::{ num::{NonZeroI32, NonZeroI64}, process::ExitStatus, }; use watchexec_signals::Signal; /// The end status of a process. /// /// This is a sort-of equivalent of the [`std::process::ExitStatus`] type which, while /// constructable, differs on various platforms. The native type is an integer that is interpreted /// either through convention or via platform-dependent libc or kernel calls; our type is a more /// structured representation for the purpose of being clearer and transportable. /// /// On Unix, one can tell whether a process dumped core from the exit status; this is not replicated /// in this structure; if that's desirable you can obtain it manually via `libc::WCOREDUMP` and the /// `ExitSignal` variant. 
/// /// On Unix and Windows, the exit status is a 32-bit integer; on Fuchsia it's a 64-bit integer. For /// portability, we use `i64`. On all platforms, the "success" value is zero, so we special-case /// that as a variant and use `NonZeroI*` to limit the other values. #[derive(Clone, Copy, Debug, Eq, PartialEq)] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr(feature = "serde", serde(tag = "disposition", content = "code"))] pub enum ProcessEnd { /// The process ended successfully, with exit status = 0. #[cfg_attr(feature = "serde", serde(rename = "success"))] Success, /// The process exited with a non-zero exit status. #[cfg_attr(feature = "serde", serde(rename = "error"))] ExitError(NonZeroI64), /// The process exited due to a signal. #[cfg_attr(feature = "serde", serde(rename = "signal"))] ExitSignal(Signal), /// The process was stopped (but not terminated) (`libc::WIFSTOPPED`). #[cfg_attr(feature = "serde", serde(rename = "stop"))] ExitStop(NonZeroI32), /// The process suffered an unhandled exception or warning (typically Windows only). #[cfg_attr(feature = "serde", serde(rename = "exception"))] Exception(NonZeroI32), /// The process was continued (`libc::WIFCONTINUED`). 
#[cfg_attr(feature = "serde", serde(rename = "continued"))] Continued, } impl From<ExitStatus> for ProcessEnd { #[cfg(unix)] fn from(es: ExitStatus) -> Self { use std::os::unix::process::ExitStatusExt; match (es.code(), es.signal(), es.stopped_signal()) { (Some(_), Some(_), _) => { unreachable!("exitstatus cannot both be code and signal?!") } (Some(code), None, _) => { NonZeroI64::try_from(i64::from(code)).map_or(Self::Success, Self::ExitError) } (None, Some(_), Some(stopsig)) => { NonZeroI32::try_from(stopsig).map_or(Self::Success, Self::ExitStop) } #[cfg(not(target_os = "vxworks"))] (None, Some(_), _) if es.continued() => Self::Continued, (None, Some(signal), _) => Self::ExitSignal(signal.into()), (None, None, _) => Self::Success, } } #[cfg(windows)] fn from(es: ExitStatus) -> Self { match es.code().map(NonZeroI32::try_from) { None | Some(Err(_)) => Self::Success, Some(Ok(code)) if code.get() < 0 => Self::Exception(code), Some(Ok(code)) => Self::ExitError(code.into()), } } #[cfg(not(any(unix, windows)))] fn from(es: ExitStatus) -> Self { if es.success() { Self::Success } else { Self::ExitError(NonZeroI64::new(1).unwrap()) } } } impl ProcessEnd { /// Convert a `ProcessEnd` to an `ExitStatus`. /// /// This is a testing function only! **It will panic** if the `ProcessEnd` is not representable /// as an `ExitStatus` on Unix. This is also not guaranteed to be accurate, as the `waitpid()` /// status union is platform-specific. Exit codes and signals are implemented, other variants /// are not.
#[cfg(unix)] #[must_use] pub fn into_exitstatus(self) -> ExitStatus { use std::os::unix::process::ExitStatusExt; match self { Self::Success => ExitStatus::from_raw(0), Self::ExitError(code) => { ExitStatus::from_raw(i32::from(u8::try_from(code.get()).unwrap_or_default()) << 8) } Self::ExitSignal(signal) => { ExitStatus::from_raw(signal.to_nix().map_or(0, |sig| sig as i32)) } Self::Continued => ExitStatus::from_raw(0xffff), _ => unimplemented!(), } } /// Convert a `ProcessEnd` to an `ExitStatus`. /// /// This is a testing function only! **It will panic** if the `ProcessEnd` is not representable /// as an `ExitStatus` on Windows. #[cfg(windows)] #[must_use] pub fn into_exitstatus(self) -> ExitStatus { use std::os::windows::process::ExitStatusExt; match self { Self::Success => ExitStatus::from_raw(0), Self::ExitError(code) => ExitStatus::from_raw(code.get().try_into().unwrap()), _ => unimplemented!(), } } /// Unimplemented on this platform. #[cfg(not(any(unix, windows)))] #[must_use] pub fn into_exitstatus(self) -> ExitStatus { unimplemented!() } } ================================================ FILE: crates/events/src/sans_notify.rs ================================================ // This file is dual-licensed under the Artistic License 2.0 as per the // LICENSE.ARTISTIC file, and the Creative Commons Zero 1.0 license. // // Taken verbatim from the `notify` crate, with the Event types removed. use std::hash::Hash; #[cfg(feature = "serde")] use serde::{Deserialize, Serialize}; /// An event describing open or close operations on files. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum AccessMode { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted when the file is executed, or the folder opened. Execute, /// An event emitted when the file is opened for reading. 
Read, /// An event emitted when the file is opened for writing. Write, /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event describing non-mutating access operations on files. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(tag = "kind", content = "mode"))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum AccessKind { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted when the file is read. Read, /// An event emitted when the file, or a handle to the file, is opened. Open(AccessMode), /// An event emitted when the file, or a handle to the file, is closed. Close(AccessMode), /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event describing creation operations on files. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(tag = "kind"))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum CreateKind { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event which results in the creation of a file. File, /// An event which results in the creation of a folder. Folder, /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event emitted when the data content of a file is changed. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum DataChange { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted when the size of the data is changed. Size, /// An event emitted when the content of the data is changed. 
Content, /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event emitted when the metadata of a file or folder is changed. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum MetadataKind { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted when the access time of the file or folder is changed. AccessTime, /// An event emitted when the write or modify time of the file or folder is changed. WriteTime, /// An event emitted when the permissions of the file or folder are changed. Permissions, /// An event emitted when the ownership of the file or folder is changed. Ownership, /// An event emitted when an extended attribute of the file or folder is changed. /// /// If the extended attribute's name or type is known, it should be provided in the /// `Info` event attribute. Extended, /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event emitted when the name of a file or folder is changed. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum RenameMode { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted on the file or folder resulting from a rename. To, /// An event emitted on the file or folder that was renamed. From, /// A single event emitted with both the `From` and `To` paths. /// /// This event should be emitted when both source and target are known. The paths should be /// provided in this exact order (from, to). Both, /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event describing mutation of content, name, or metadata. 
#[derive(Clone, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(tag = "kind", content = "mode"))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum ModifyKind { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted when the data content of a file is changed. Data(DataChange), /// An event emitted when the metadata of a file or folder is changed. Metadata(MetadataKind), /// An event emitted when the name of a file or folder is changed. #[cfg_attr(feature = "serde", serde(rename = "rename"))] Name(RenameMode), /// An event which specific kind is known but cannot be represented otherwise. Other, } /// An event describing removal operations on files. #[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(tag = "kind"))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum RemoveKind { /// The catch-all case, to be used when the specific kind of event is unknown. Any, /// An event emitted when a file is removed. File, /// An event emitted when a folder is removed. Folder, /// An event which specific kind is known but cannot be represented otherwise. Other, } /// Top-level event kind. /// /// This is arguably the most important classification for events. All subkinds below this one /// represent details that may or may not be available for any particular backend, but most tools /// and Notify systems will only care about which of these four general kinds an event is about. #[derive(Clone, Debug, Eq, Hash, PartialEq)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] #[cfg_attr(feature = "serde", serde(rename_all = "kebab-case"))] pub enum EventKind { /// The catch-all event kind, for unsupported/unknown events. 
/// /// This variant should be used as the "else" case when mapping native kernel bitmasks or /// bitmaps, such that if the mask is ever extended with new event types the backend will not /// gain bugs due to not matching new unknown event types. /// /// This variant is also the default variant used when Notify is in "imprecise" mode. Any, /// An event describing non-mutating access operations on files. /// /// This event is about opening and closing file handles, as well as executing files, and any /// other such event that is about accessing files, folders, or other structures rather than /// mutating them. /// /// Only some platforms are capable of generating these. Access(AccessKind), /// An event describing creation operations on files. /// /// This event is about the creation of files, folders, or other structures but not about e.g. /// writing new content into them. Create(CreateKind), /// An event describing mutation of content, name, or metadata. /// /// This event is about the mutation of files', folders', or other structures' content, name /// (path), or associated metadata (attributes). Modify(ModifyKind), /// An event describing removal operations on files. /// /// This event is about the removal of files, folders, or other structures but not e.g. erasing /// content from them. This may also be triggered for renames/moves that move files _out of the /// watched subpath_. /// /// Some editors also trigger Remove events when saving files as they may opt for removing (or /// renaming) the original then creating a new file in-place. Remove(RemoveKind), /// An event not fitting in any of the above four categories. /// /// This may be used for meta-events about the watch itself. Other, } impl EventKind { /// Indicates whether an event is an Access variant. pub fn is_access(&self) -> bool { matches!(self, EventKind::Access(_)) } /// Indicates whether an event is a Create variant. 
pub fn is_create(&self) -> bool { matches!(self, EventKind::Create(_)) } /// Indicates whether an event is a Modify variant. pub fn is_modify(&self) -> bool { matches!(self, EventKind::Modify(_)) } /// Indicates whether an event is a Remove variant. pub fn is_remove(&self) -> bool { matches!(self, EventKind::Remove(_)) } /// Indicates whether an event is an Other variant. pub fn is_other(&self) -> bool { matches!(self, EventKind::Other) } } impl Default for EventKind { fn default() -> Self { EventKind::Any } } ================================================ FILE: crates/events/src/serde_formats.rs ================================================ use std::{ collections::BTreeMap, num::{NonZeroI32, NonZeroI64}, path::PathBuf, }; use serde::{Deserialize, Serialize}; use watchexec_signals::Signal; use crate::{ fs::filekind::{ AccessKind, AccessMode, CreateKind, DataChange, FileEventKind as EventKind, MetadataKind, ModifyKind, RemoveKind, RenameMode, }, Event, FileType, Keyboard, ProcessEnd, Source, Tag, }; #[derive(Clone, Debug, Default, Serialize, Deserialize)] pub struct SerdeTag { kind: TagKind, // path #[serde(default, skip_serializing_if = "Option::is_none")] absolute: Option<PathBuf>, #[serde(default, skip_serializing_if = "Option::is_none")] filetype: Option<FileType>, // fs #[serde(default, skip_serializing_if = "Option::is_none")] simple: Option<FsEventKind>, #[serde(default, skip_serializing_if = "Option::is_none")] full: Option<String>, // source #[serde(default, skip_serializing_if = "Option::is_none")] source: Option<Source>, // keyboard #[serde(default, skip_serializing_if = "Option::is_none")] keycode: Option<Keyboard>, // process #[serde(default, skip_serializing_if = "Option::is_none")] pid: Option<u32>, // signal #[serde(default, skip_serializing_if = "Option::is_none")] signal: Option<Signal>, // completion #[serde(default, skip_serializing_if = "Option::is_none")] disposition: Option<ProcessDisposition>, #[serde(default, skip_serializing_if = "Option::is_none")] code: Option<i64>, } #[derive(Clone, Copy, Debug, Default, Serialize, 
Deserialize)] #[serde(rename_all = "kebab-case")] pub enum TagKind { #[default] None, Path, Fs, Source, Keyboard, Process, Signal, Completion, } #[derive(Clone, Copy, Debug, Serialize, Deserialize)] #[serde(rename_all = "kebab-case")] pub enum ProcessDisposition { Unknown, Success, Error, Signal, Stop, Exception, Continued, } #[derive(Clone, Copy, Debug, Serialize, Deserialize)] #[serde(rename_all = "kebab-case")] pub enum FsEventKind { Access, Create, Modify, Remove, Other, } impl From<EventKind> for FsEventKind { fn from(value: EventKind) -> Self { match value { EventKind::Access(_) => Self::Access, EventKind::Create(_) => Self::Create, EventKind::Modify(_) => Self::Modify, EventKind::Remove(_) => Self::Remove, EventKind::Any | EventKind::Other => Self::Other, } } } impl From<Tag> for SerdeTag { fn from(value: Tag) -> Self { match value { Tag::Path { path, file_type } => Self { kind: TagKind::Path, absolute: Some(path), filetype: file_type, ..Default::default() }, Tag::FileEventKind(fek) => Self { kind: TagKind::Fs, full: Some(format!("{fek:?}")), simple: Some(fek.into()), ..Default::default() }, Tag::Source(source) => Self { kind: TagKind::Source, source: Some(source), ..Default::default() }, Tag::Keyboard(keycode) => Self { kind: TagKind::Keyboard, keycode: Some(keycode), ..Default::default() }, Tag::Process(pid) => Self { kind: TagKind::Process, pid: Some(pid), ..Default::default() }, Tag::Signal(signal) => Self { kind: TagKind::Signal, signal: Some(signal), ..Default::default() }, Tag::ProcessCompletion(None) => Self { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Unknown), ..Default::default() }, Tag::ProcessCompletion(Some(end)) => Self { kind: TagKind::Completion, code: match &end { ProcessEnd::Success | ProcessEnd::Continued | ProcessEnd::ExitSignal(_) => None, ProcessEnd::ExitError(err) => Some(err.get()), ProcessEnd::ExitStop(code) => Some(code.get().into()), ProcessEnd::Exception(exc) => Some(exc.get().into()), }, signal: if let 
ProcessEnd::ExitSignal(sig) = &end { Some(*sig) } else { None }, disposition: Some(match end { ProcessEnd::Success => ProcessDisposition::Success, ProcessEnd::ExitError(_) => ProcessDisposition::Error, ProcessEnd::ExitSignal(_) => ProcessDisposition::Signal, ProcessEnd::ExitStop(_) => ProcessDisposition::Stop, ProcessEnd::Exception(_) => ProcessDisposition::Exception, ProcessEnd::Continued => ProcessDisposition::Continued, }), ..Default::default() }, Tag::Unknown => Self::default(), } } } #[allow( clippy::fallible_impl_from, reason = "this triggers due to the unwraps, which are checked by branches" )] #[allow( clippy::too_many_lines, reason = "clearer as a single match tree than broken up" )] impl From<SerdeTag> for Tag { fn from(value: SerdeTag) -> Self { match value { SerdeTag { kind: TagKind::Path, absolute: Some(path), filetype, .. } => Self::Path { path, file_type: filetype, }, SerdeTag { kind: TagKind::Fs, full: Some(full), .. } => Self::FileEventKind(match full.as_str() { "Any" => EventKind::Any, "Access(Any)" => EventKind::Access(AccessKind::Any), "Access(Read)" => EventKind::Access(AccessKind::Read), "Access(Open(Any))" => EventKind::Access(AccessKind::Open(AccessMode::Any)), "Access(Open(Execute))" => EventKind::Access(AccessKind::Open(AccessMode::Execute)), "Access(Open(Read))" => EventKind::Access(AccessKind::Open(AccessMode::Read)), "Access(Open(Write))" => EventKind::Access(AccessKind::Open(AccessMode::Write)), "Access(Open(Other))" => EventKind::Access(AccessKind::Open(AccessMode::Other)), "Access(Close(Any))" => EventKind::Access(AccessKind::Close(AccessMode::Any)), "Access(Close(Execute))" => { EventKind::Access(AccessKind::Close(AccessMode::Execute)) } "Access(Close(Read))" => EventKind::Access(AccessKind::Close(AccessMode::Read)), "Access(Close(Write))" => EventKind::Access(AccessKind::Close(AccessMode::Write)), "Access(Close(Other))" => EventKind::Access(AccessKind::Close(AccessMode::Other)), "Access(Other)" => EventKind::Access(AccessKind::Other), 
"Create(Any)" => EventKind::Create(CreateKind::Any), "Create(File)" => EventKind::Create(CreateKind::File), "Create(Folder)" => EventKind::Create(CreateKind::Folder), "Create(Other)" => EventKind::Create(CreateKind::Other), "Modify(Any)" => EventKind::Modify(ModifyKind::Any), "Modify(Data(Any))" => EventKind::Modify(ModifyKind::Data(DataChange::Any)), "Modify(Data(Size))" => EventKind::Modify(ModifyKind::Data(DataChange::Size)), "Modify(Data(Content))" => EventKind::Modify(ModifyKind::Data(DataChange::Content)), "Modify(Data(Other))" => EventKind::Modify(ModifyKind::Data(DataChange::Other)), "Modify(Metadata(Any))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::Any)) } "Modify(Metadata(AccessTime))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::AccessTime)) } "Modify(Metadata(WriteTime))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::WriteTime)) } "Modify(Metadata(Permissions))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::Permissions)) } "Modify(Metadata(Ownership))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::Ownership)) } "Modify(Metadata(Extended))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::Extended)) } "Modify(Metadata(Other))" => { EventKind::Modify(ModifyKind::Metadata(MetadataKind::Other)) } "Modify(Name(Any))" => EventKind::Modify(ModifyKind::Name(RenameMode::Any)), "Modify(Name(To))" => EventKind::Modify(ModifyKind::Name(RenameMode::To)), "Modify(Name(From))" => EventKind::Modify(ModifyKind::Name(RenameMode::From)), "Modify(Name(Both))" => EventKind::Modify(ModifyKind::Name(RenameMode::Both)), "Modify(Name(Other))" => EventKind::Modify(ModifyKind::Name(RenameMode::Other)), "Modify(Other)" => EventKind::Modify(ModifyKind::Other), "Remove(Any)" => EventKind::Remove(RemoveKind::Any), "Remove(File)" => EventKind::Remove(RemoveKind::File), "Remove(Folder)" => EventKind::Remove(RemoveKind::Folder), "Remove(Other)" => EventKind::Remove(RemoveKind::Other), _ => EventKind::Other, // and 
literal "Other" }), SerdeTag { kind: TagKind::Fs, simple: Some(simple), .. } => Self::FileEventKind(match simple { FsEventKind::Access => EventKind::Access(AccessKind::Any), FsEventKind::Create => EventKind::Create(CreateKind::Any), FsEventKind::Modify => EventKind::Modify(ModifyKind::Any), FsEventKind::Remove => EventKind::Remove(RemoveKind::Any), FsEventKind::Other => EventKind::Other, }), SerdeTag { kind: TagKind::Source, source: Some(source), .. } => Self::Source(source), SerdeTag { kind: TagKind::Keyboard, keycode: Some(keycode), .. } => Self::Keyboard(keycode), SerdeTag { kind: TagKind::Process, pid: Some(pid), .. } => Self::Process(pid), SerdeTag { kind: TagKind::Signal, signal: Some(sig), .. } => Self::Signal(sig), SerdeTag { kind: TagKind::Completion, disposition: None | Some(ProcessDisposition::Unknown), .. } => Self::ProcessCompletion(None), SerdeTag { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Success), .. } => Self::ProcessCompletion(Some(ProcessEnd::Success)), SerdeTag { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Continued), .. } => Self::ProcessCompletion(Some(ProcessEnd::Continued)), SerdeTag { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Signal), signal: Some(sig), .. } => Self::ProcessCompletion(Some(ProcessEnd::ExitSignal(sig))), SerdeTag { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Error), code: Some(err), .. } if err != 0 => Self::ProcessCompletion(Some(ProcessEnd::ExitError(unsafe { NonZeroI64::new_unchecked(err) }))), SerdeTag { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Stop), code: Some(code), .. } if code != 0 && i32::try_from(code).is_ok() => { Self::ProcessCompletion(Some(ProcessEnd::ExitStop(unsafe { // SAFETY&UNWRAP: checked above NonZeroI32::new_unchecked(code.try_into().unwrap()) }))) } SerdeTag { kind: TagKind::Completion, disposition: Some(ProcessDisposition::Exception), code: Some(exc), .. 
} if exc != 0 && i32::try_from(exc).is_ok() => { Self::ProcessCompletion(Some(ProcessEnd::Exception(unsafe { // SAFETY&UNWRAP: checked above NonZeroI32::new_unchecked(exc.try_into().unwrap()) }))) } _ => Self::Unknown, } } } #[derive(Clone, Debug, Default, Serialize, Deserialize)] pub struct SerdeEvent { #[serde(default, skip_serializing_if = "Vec::is_empty")] tags: Vec<Tag>, // for a consistent serialization order #[serde(default, skip_serializing_if = "BTreeMap::is_empty")] metadata: BTreeMap<String, Vec<String>>, } impl From<Event> for SerdeEvent { fn from(Event { tags, metadata }: Event) -> Self { Self { tags, metadata: metadata.into_iter().collect(), } } } impl From<SerdeEvent> for Event { fn from(SerdeEvent { tags, metadata }: SerdeEvent) -> Self { Self { tags, metadata: metadata.into_iter().collect(), } } } ================================================ FILE: crates/events/tests/json.rs ================================================ #![cfg(feature = "serde")] use std::num::{NonZeroI32, NonZeroI64}; use snapbox::{assert_data_eq, file}; use watchexec_events::{ filekind::{CreateKind, FileEventKind as EventKind, ModifyKind, RemoveKind, RenameMode}, Event, FileType, Keyboard, ProcessEnd, Source, Tag, }; use watchexec_signals::Signal; fn parse_file(path: &str) -> Vec<Event> { serde_json::from_str(&std::fs::read_to_string(path).unwrap()).unwrap() } #[test] fn single() { let single = Event { tags: vec![Tag::Source(Source::Internal)], metadata: Default::default(), }; assert_data_eq!( serde_json::to_string_pretty(&single).unwrap(), file!["snapshots/single.json"], ); assert_eq!( serde_json::from_str::<Event>( &std::fs::read_to_string("tests/snapshots/single.json").unwrap() ) .unwrap(), single ); } #[test] fn array() { let array = &[ Event { tags: vec![Tag::Source(Source::Internal)], metadata: Default::default(), }, Event { tags: vec![ Tag::ProcessCompletion(Some(ProcessEnd::Success)), Tag::Process(123), ], metadata: Default::default(), }, Event { tags: vec![Tag::Keyboard(Keyboard::Eof)], metadata: Default::default(), }, 
]; assert_data_eq!( serde_json::to_string_pretty(array).unwrap(), file!["snapshots/array.json"], ); assert_eq!(parse_file("tests/snapshots/array.json"), array); } #[test] fn metadata() { let metadata = &[Event { tags: vec![Tag::Source(Source::Internal)], metadata: [ ("Dafan".into(), vec!["Mountain".into()]), ("Lan".into(), vec!["Zhan".into()]), ] .into(), }]; assert_data_eq!( serde_json::to_string_pretty(metadata).unwrap(), file!["snapshots/metadata.json"], ); assert_eq!(parse_file("tests/snapshots/metadata.json"), metadata); } #[test] fn asymmetric() { // asymmetric because these have information loss or missing fields assert_eq!( parse_file("tests/snapshots/asymmetric.json"), &[ Event { tags: vec![ // no filetype field Tag::Path { path: "/foo/bar/baz".into(), file_type: None }, // fs with only simple representation Tag::FileEventKind(EventKind::Create(CreateKind::Any)), // unparsable of a known kind Tag::Unknown, ], metadata: Default::default(), }, Event { tags: vec![ // no simple field Tag::FileEventKind(EventKind::Modify(ModifyKind::Other)), // no disposition field Tag::ProcessCompletion(None) ], metadata: Default::default(), }, ] ); } #[test] fn sources() { let sources = vec![ Event { tags: vec![ Tag::Source(Source::Filesystem), Tag::Source(Source::Keyboard), Tag::Source(Source::Mouse), ], metadata: Default::default(), }, Event { tags: vec![ Tag::Source(Source::Os), Tag::Source(Source::Time), Tag::Source(Source::Internal), ], metadata: Default::default(), }, ]; assert_data_eq!( serde_json::to_string_pretty(&sources).unwrap(), file!["snapshots/sources.json"], ); assert_eq!(parse_file("tests/snapshots/sources.json"), sources); } #[test] fn signals() { let signals = vec![ Event { tags: vec![ Tag::Signal(Signal::Interrupt), Tag::Signal(Signal::User1), Tag::Signal(Signal::ForceStop), ], metadata: Default::default(), }, Event { tags: vec![ Tag::Signal(Signal::Custom(66)), Tag::Signal(Signal::Custom(0)), ], metadata: Default::default(), }, ]; assert_data_eq!( 
serde_json::to_string_pretty(&signals).unwrap(), file!["snapshots/signals.json"], ); assert_eq!(parse_file("tests/snapshots/signals.json"), signals); } #[test] fn completions() { let completions = vec![ Event { tags: vec![ Tag::ProcessCompletion(None), Tag::ProcessCompletion(Some(ProcessEnd::Success)), Tag::ProcessCompletion(Some(ProcessEnd::Continued)), ], metadata: Default::default(), }, Event { tags: vec![ Tag::ProcessCompletion(Some(ProcessEnd::ExitError(NonZeroI64::new(12).unwrap()))), Tag::ProcessCompletion(Some(ProcessEnd::ExitSignal(Signal::Interrupt))), Tag::ProcessCompletion(Some(ProcessEnd::ExitSignal(Signal::Custom(34)))), Tag::ProcessCompletion(Some(ProcessEnd::ExitStop(NonZeroI32::new(56).unwrap()))), Tag::ProcessCompletion(Some(ProcessEnd::Exception(NonZeroI32::new(78).unwrap()))), ], metadata: Default::default(), }, ]; assert_data_eq!( serde_json::to_string_pretty(&completions).unwrap(), file!["snapshots/completions.json"], ); assert_eq!(parse_file("tests/snapshots/completions.json"), completions); } #[test] fn paths() { let paths = vec![ Event { tags: vec![ Tag::Path { path: "/foo/bar/baz".into(), file_type: Some(FileType::Symlink), }, Tag::FileEventKind(EventKind::Create(CreateKind::File)), ], metadata: Default::default(), }, Event { tags: vec![ Tag::Path { path: "/rename/from/this".into(), file_type: Some(FileType::File), }, Tag::Path { path: "/rename/into/that".into(), file_type: Some(FileType::Other), }, Tag::FileEventKind(EventKind::Modify(ModifyKind::Name(RenameMode::Both))), ], metadata: Default::default(), }, Event { tags: vec![ Tag::Path { path: "/delete/this".into(), file_type: Some(FileType::Dir), }, Tag::Path { path: "/".into(), file_type: None, }, Tag::FileEventKind(EventKind::Remove(RemoveKind::Any)), ], metadata: Default::default(), }, ]; assert_data_eq!( serde_json::to_string_pretty(&paths).unwrap(), file!["snapshots/paths.json"], ); assert_eq!(parse_file("tests/snapshots/paths.json"), paths); } 
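The `From<SerdeTag> for Tag` conversion in serde_formats.rs guards each `new_unchecked` with a nonzero-and-in-range check before building `ProcessEnd::ExitStop`/`Exception` values. As an aside, the same invariant can be expressed without `unsafe` via the checked constructors; a minimal std-only sketch (the `checked_code` helper is hypothetical, not part of the crate):

```rust
use std::num::NonZeroI32;

// Hypothetical helper mirroring the guarded conversion in serde_formats.rs:
// accept an i64 completion code only when it fits in i32 and is nonzero.
fn checked_code(code: i64) -> Option<NonZeroI32> {
    // try_from rejects out-of-range values; NonZeroI32::new rejects zero.
    i32::try_from(code).ok().and_then(NonZeroI32::new)
}

fn main() {
    assert_eq!(checked_code(56).map(NonZeroI32::get), Some(56));
    assert_eq!(checked_code(0), None); // zero rejected
    assert_eq!(checked_code(i64::MAX), None); // out of i32 range
    println!("ok");
}
```

Unlike the `unsafe` form, this cannot exhibit undefined behaviour if a guard and its conversion ever drift apart.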
================================================ FILE: crates/events/tests/snapshots/array.json ================================================ [ { "tags": [ { "kind": "source", "source": "internal" } ] }, { "tags": [ { "kind": "completion", "disposition": "success" }, { "kind": "process", "pid": 123 } ] }, { "tags": [ { "kind": "keyboard", "keycode": "eof" } ] } ] ================================================ FILE: crates/events/tests/snapshots/asymmetric.json ================================================ [ { "tags": [ { "kind": "path", "absolute": "/foo/bar/baz" }, { "kind": "fs", "simple": "create" }, { "kind": "fs" } ] }, { "tags": [ { "kind": "fs", "full": "Modify(Other)" }, { "kind": "completion" } ] } ] ================================================ FILE: crates/events/tests/snapshots/completions.json ================================================ [ { "tags": [ { "kind": "completion", "disposition": "unknown" }, { "kind": "completion", "disposition": "success" }, { "kind": "completion", "disposition": "continued" } ] }, { "tags": [ { "kind": "completion", "disposition": "error", "code": 12 }, { "kind": "completion", "signal": "SIGINT", "disposition": "signal" }, { "kind": "completion", "signal": 34, "disposition": "signal" }, { "kind": "completion", "disposition": "stop", "code": 56 }, { "kind": "completion", "disposition": "exception", "code": 78 } ] } ] ================================================ FILE: crates/events/tests/snapshots/metadata.json ================================================ [ { "tags": [ { "kind": "source", "source": "internal" } ], "metadata": { "Dafan": [ "Mountain" ], "Lan": [ "Zhan" ] } } ] ================================================ FILE: crates/events/tests/snapshots/paths.json ================================================ [ { "tags": [ { "kind": "path", "absolute": "/foo/bar/baz", "filetype": "symlink" }, { "kind": "fs", "simple": "create", "full": "Create(File)" } ] }, { "tags": [ { "kind": "path", 
"absolute": "/rename/from/this", "filetype": "file" }, { "kind": "path", "absolute": "/rename/into/that", "filetype": "other" }, { "kind": "fs", "simple": "modify", "full": "Modify(Name(Both))" } ] }, { "tags": [ { "kind": "path", "absolute": "/delete/this", "filetype": "dir" }, { "kind": "path", "absolute": "/" }, { "kind": "fs", "simple": "remove", "full": "Remove(Any)" } ] } ] ================================================ FILE: crates/events/tests/snapshots/signals.json ================================================ [ { "tags": [ { "kind": "signal", "signal": "SIGINT" }, { "kind": "signal", "signal": "SIGUSR1" }, { "kind": "signal", "signal": "SIGKILL" } ] }, { "tags": [ { "kind": "signal", "signal": 66 }, { "kind": "signal", "signal": 0 } ] } ] ================================================ FILE: crates/events/tests/snapshots/single.json ================================================ { "tags": [ { "kind": "source", "source": "internal" } ] } ================================================ FILE: crates/events/tests/snapshots/sources.json ================================================ [ { "tags": [ { "kind": "source", "source": "filesystem" }, { "kind": "source", "source": "keyboard" }, { "kind": "source", "source": "mouse" } ] }, { "tags": [ { "kind": "source", "source": "os" }, { "kind": "source", "source": "time" }, { "kind": "source", "source": "internal" } ] } ] ================================================ FILE: crates/filterer/globset/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v8.0.0 (2025-05-15) ## v7.0.0 (2025-02-09) ## v6.0.0 (2024-10-14) - Deps: watchexec 5 ## v5.0.0 (2024-10-13) - Add whitelist parameter. ## v4.0.1 (2024-04-28) - Hide fmt::Debug spew from ignore crate, use `full_debug` feature to restore. 
## v4.0.0 (2024-04-20) - Deps: watchexec 4 ## v3.0.0 (2024-01-01) - Deps: `watchexec-filterer-ignore` and `ignore-files` ## v2.0.1 (2023-12-09) - Depend on `watchexec-events` instead of the `watchexec` re-export. ## v1.2.0 (2023-03-18) - Ditch MSRV policy. The `rust-version` indication will remain, for the minimum estimated Rust version for the code features used in the crate's own code, but dependencies may have already moved on. From now on, only latest stable is assumed and tested for. ([#510](https://github.com/watchexec/watchexec/pull/510)) ## v1.1.0 (2023-01-09) - MSRV: bump to 1.61.0 ## v1.0.1 (2022-09-07) - Deps: update miette to 5.3.0 ## v1.0.0 (2022-06-23) - Initial release as a separate crate. ================================================ FILE: crates/filterer/globset/Cargo.toml ================================================ [package] name = "watchexec-filterer-globset" version = "8.0.0" authors = ["Matt Green ", "Félix Saparelli "] license = "Apache-2.0" description = "Watchexec filterer component based on globset" keywords = ["watchexec", "filterer", "globset"] documentation = "https://docs.rs/watchexec-filterer-globset" homepage = "https://watchexec.github.io" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.61.0" edition = "2021" [dependencies] ignore = "0.4.18" tracing = "0.1.40" [dependencies.ignore-files] version = "3.0.5" path = "../../ignore-files" [dependencies.watchexec] version = "8.2.0" path = "../../lib" [dependencies.watchexec-events] version = "6.1.0" path = "../../events" [dependencies.watchexec-filterer-ignore] version = "7.0.0" path = "../ignore" [dev-dependencies] tracing-subscriber = "0.3.6" tempfile = "3.16.0" [dev-dependencies.tokio] version = "1.33.0" features = [ "fs", "io-std", "rt", "rt-multi-thread", "macros", ] [features] default = [] ## Don't hide ignore::gitignore::Gitignore Debug impl full_debug = [] ================================================ FILE: 
crates/filterer/globset/README.md ================================================ [![Crates.io page](https://badgen.net/crates/v/watchexec-filterer-globset)](https://crates.io/crates/watchexec-filterer-globset) [![API Docs](https://docs.rs/watchexec-filterer-globset/badge.svg)][docs] [![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license] [![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml) # Watchexec filterer: globset _The default filterer implementation for Watchexec._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license]. - Status: maintained. [docs]: https://docs.rs/watchexec-filterer-globset [license]: ../../../LICENSE ================================================ FILE: crates/filterer/globset/release.toml ================================================ pre-release-commit-message = "release: filterer-globset v{{version}}" tag-prefix = "watchexec-filterer-globset-" tag-message = "watchexec-filterer-globset {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/filterer/globset/src/lib.rs ================================================ //! A path-only Watchexec filterer based on globsets. //! //! This filterer mimics the behavior of the `watchexec` v1 filter, but does not match it exactly, //! due to differing internals. It is used as the default filterer in Watchexec CLI currently. 
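The decision order implemented by `check_event` further down (whitelist, then ignore files, then ignores, then filters, then extensions) can be sketched over plain strings. This is a hypothetical std-only model of the precedence, not the crate's API; real matching goes through `ignore::gitignore::Gitignore`, and the `passes` helper here uses naive suffix matching purely for illustration:

```rust
// Hypothetical model of GlobsetFilterer's decision order:
// whitelist hit -> pass; ignore hit -> fail; filter hit -> pass;
// extension hit -> pass; filters/extensions configured but none hit -> fail.
fn passes(
    path: &str,
    whitelist: &[&str],
    ignores: &[&str],
    filters: &[&str],
    extensions: &[&str],
) -> bool {
    if whitelist.contains(&path) {
        return true; // whitelisted paths always pass
    }
    if ignores.iter().any(|i| path.ends_with(i)) {
        return false; // ignores always fail
    }
    let mut filtered = false;
    if !filters.is_empty() {
        filtered = true;
        if filters.iter().any(|f| path.ends_with(f)) {
            return true;
        }
    }
    if !extensions.is_empty() {
        filtered = true;
        if extensions.iter().any(|e| path.rsplit('.').next() == Some(*e)) {
            return true;
        }
    }
    // With no filters or extensions configured, everything passes.
    !filtered
}

fn main() {
    assert!(passes("a/Cargo.toml", &[], &[], &["Cargo.toml"], &[]));
    assert!(!passes("a/Cargo.toml", &[], &["Cargo.toml"], &["Cargo.toml"], &[]));
    assert!(passes("any/file", &[], &[], &[], &[]));
    println!("ok");
}
```

The key property mirrored from the real filterer is that ignores take precedence over filters, and that configuring any filter or extension list flips the default from pass to fail.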
#![doc(html_favicon_url = "https://watchexec.github.io/logo:watchexec.svg")] #![doc(html_logo_url = "https://watchexec.github.io/logo:watchexec.svg")] #![warn(clippy::unwrap_used, missing_docs)] #![cfg_attr(not(test), warn(unused_crate_dependencies))] #![deny(rust_2018_idioms)] use std::{ ffi::OsString, path::{Path, PathBuf}, }; use ignore::gitignore::{Gitignore, GitignoreBuilder}; use ignore_files::{Error, IgnoreFile, IgnoreFilter}; use tracing::{debug, trace, trace_span}; use watchexec::{error::RuntimeError, filter::Filterer}; use watchexec_events::{Event, FileType, Priority}; use watchexec_filterer_ignore::IgnoreFilterer; /// A simple filterer in the style of the watchexec v1.17 filter. #[cfg_attr(feature = "full_debug", derive(Debug))] pub struct GlobsetFilterer { #[cfg_attr(not(unix), allow(dead_code))] origin: PathBuf, filters: Gitignore, ignores: Gitignore, whitelist: Vec<PathBuf>, ignore_files: IgnoreFilterer, extensions: Vec<OsString>, } #[cfg(not(feature = "full_debug"))] impl std::fmt::Debug for GlobsetFilterer { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("GlobsetFilterer") .field("origin", &self.origin) .field("filters", &"ignore::gitignore::Gitignore{...}") .field("ignores", &"ignore::gitignore::Gitignore{...}") .field("ignore_files", &self.ignore_files) .field("extensions", &self.extensions) .finish() } } impl GlobsetFilterer { /// Create a new `GlobsetFilterer` from a project origin, allowed extensions, and lists of globs. /// /// The first list is used to filter paths (only matching paths will pass the filter), the /// second is used to ignore paths (matching paths will fail the pattern). If the filter list is /// empty, only the ignore list will be used. If both lists are empty, the filter always passes. /// The whitelist automatically accepts files even if they would otherwise be filtered out; /// each entry is the absolute path of a file that should never be filtered. 
/// /// Ignores and filters are passed as a tuple of the glob pattern as a string and an optional /// path of the folder the pattern should apply in (e.g. the folder a gitignore file is in). /// A `None` to the latter will mark the pattern as being global. /// /// The extensions list is used to filter files by extension. /// /// Non-path events are always passed. #[allow(clippy::future_not_send)] pub async fn new( origin: impl AsRef<Path>, filters: impl IntoIterator<Item = (String, Option<PathBuf>)>, ignores: impl IntoIterator<Item = (String, Option<PathBuf>)>, whitelist: impl IntoIterator<Item = PathBuf>, ignore_files: impl IntoIterator<Item = IgnoreFile>, extensions: impl IntoIterator<Item = OsString>, ) -> Result<Self, Error> { let origin = origin.as_ref(); let mut filters_builder = GitignoreBuilder::new(origin); let mut ignores_builder = GitignoreBuilder::new(origin); for (filter, in_path) in filters { trace!(filter=?&filter, "add filter to globset filterer"); filters_builder .add_line(in_path.clone(), &filter) .map_err(|err| Error::Glob { file: in_path, err })?; } for (ignore, in_path) in ignores { trace!(ignore=?&ignore, "add ignore to globset filterer"); ignores_builder .add_line(in_path.clone(), &ignore) .map_err(|err| Error::Glob { file: in_path, err })?; } let filters = filters_builder .build() .map_err(|err| Error::Glob { file: None, err })?; let ignores = ignores_builder .build() .map_err(|err| Error::Glob { file: None, err })?; let extensions: Vec<OsString> = extensions.into_iter().collect(); let mut ignore_files = IgnoreFilter::new(origin, &ignore_files.into_iter().collect::<Vec<_>>()).await?; ignore_files.finish(); let ignore_files = IgnoreFilterer(ignore_files); let whitelist = whitelist.into_iter().collect::<Vec<_>>(); debug!( ?origin, num_filters=%filters.num_ignores(), num_neg_filters=%filters.num_whitelists(), num_ignores=%ignores.num_ignores(), num_in_ignore_files=?ignore_files.0.num_ignores(), num_neg_ignores=%ignores.num_whitelists(), num_extensions=%extensions.len(), "globset filterer built"); Ok(Self { origin: origin.into(), filters, ignores, whitelist, ignore_files, extensions, }) } } impl 
Filterer for GlobsetFilterer { /// Filter an event. /// /// This implementation never errors. fn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> { let _span = trace_span!("filterer_check").entered(); { trace!("checking internal whitelist"); // Ideally check path equality backwards for better perf // There could be long matching prefixes so we will exit late if event .paths() .any(|(p, _)| self.whitelist.iter().any(|w| w == p)) { trace!("internal whitelist filterer matched (success)"); return Ok(true); } } { trace!("checking internal ignore filterer"); if !self .ignore_files .check_event(event, priority) .expect("IgnoreFilterer never errors") { trace!("internal ignore filterer matched (fail)"); return Ok(false); } } let mut paths = event.paths().peekable(); if paths.peek().is_none() { trace!("non-path event (pass)"); Ok(true) } else { Ok(paths.any(|(path, file_type)| { let _span = trace_span!("path", ?path).entered(); let is_dir = file_type.map_or(false, |t| matches!(t, FileType::Dir)); if self.ignores.matched(path, is_dir).is_ignore() { trace!("ignored by globset ignore"); return false; } let mut filtered = false; if self.filters.num_ignores() > 0 { trace!("running through glob filters"); filtered = true; if self.filters.matched(path, is_dir).is_ignore() { trace!("allowed by globset filters"); return true; } // Watchexec 1.x bug, TODO remove at 2.0 #[cfg(unix)] if let Ok(based) = path.strip_prefix(&self.origin) { let rebased = { use std::path::MAIN_SEPARATOR; let mut b = self.origin.clone().into_os_string(); b.push(PathBuf::from(String::from(MAIN_SEPARATOR))); b.push(PathBuf::from(String::from(MAIN_SEPARATOR))); b.push(based.as_os_str()); b }; trace!(?rebased, "testing on rebased path, 1.x bug compat (#258)"); if self.filters.matched(rebased, is_dir).is_ignore() { trace!("allowed by globset filters, 1.x bug compat (#258)"); return true; } } } if !self.extensions.is_empty() { trace!("running through extension filters"); filtered = true; if is_dir { 
trace!("failed on extension check due to being a dir"); return false; } if let Some(ext) = path.extension() { if self.extensions.iter().any(|e| e == ext) { trace!("allowed by extension filter"); return true; } } else { trace!( ?path, "failed on extension check due to having no extension" ); return false; } } !filtered })) } } } ================================================ FILE: crates/filterer/globset/tests/filtering.rs ================================================ mod helpers; use helpers::globset::*; use std::io::Write; #[tokio::test] async fn empty_filter_passes_everything() { let filterer = filt(&[], &[], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/test/Cargo.toml"); filterer.dir_does_pass("/a/folder"); filterer.file_does_pass("apples/carrots/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples/carrots/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.dir_does_pass("apples/oranges/bananas"); } #[tokio::test] async fn exact_filename() { let filterer = filt(&["Cargo.toml"], &[], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_does_pass("/test/foo/bar/Cargo.toml"); filterer.file_doesnt_pass("Cargo.json"); filterer.file_doesnt_pass("Gemfile.toml"); filterer.file_doesnt_pass("FINAL-FINAL.docx"); filterer.dir_doesnt_pass("/a/folder"); filterer.dir_does_pass("/test/Cargo.toml"); } #[tokio::test] async fn exact_filename_in_folder() { let filterer = filt(&["sub/Cargo.toml"], &[], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_does_pass("sub/Cargo.toml"); 
filterer.file_doesnt_pass("/test/foo/bar/Cargo.toml"); filterer.file_doesnt_pass("Cargo.json"); filterer.file_doesnt_pass("Gemfile.toml"); filterer.file_doesnt_pass("FINAL-FINAL.docx"); filterer.dir_doesnt_pass("/a/folder"); filterer.dir_does_pass("/test/sub/Cargo.toml"); } #[tokio::test] async fn exact_filename_in_hidden_folder() { let filterer = filt(&[".sub/Cargo.toml"], &[], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_does_pass(".sub/Cargo.toml"); filterer.file_doesnt_pass("/test/foo/bar/Cargo.toml"); filterer.file_doesnt_pass("Cargo.json"); filterer.file_doesnt_pass("Gemfile.toml"); filterer.file_doesnt_pass("FINAL-FINAL.docx"); filterer.dir_doesnt_pass("/a/folder"); filterer.dir_does_pass("/test/.sub/Cargo.toml"); } #[tokio::test] async fn exact_filenames_multiple() { let filterer = filt(&["Cargo.toml", "package.json"], &[], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_does_pass("/test/foo/bar/Cargo.toml"); filterer.file_does_pass("package.json"); filterer.file_does_pass("/test/foo/bar/package.json"); filterer.file_doesnt_pass("Cargo.json"); filterer.file_doesnt_pass("package.toml"); filterer.file_doesnt_pass("Gemfile.toml"); filterer.file_doesnt_pass("FINAL-FINAL.docx"); filterer.dir_doesnt_pass("/a/folder"); filterer.dir_does_pass("/test/Cargo.toml"); filterer.dir_does_pass("/test/package.json"); } #[tokio::test] async fn glob_single_final_ext_star() { let filterer = filt(&["Cargo.*"], &[], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.file_doesnt_pass("Gemfile.toml"); filterer.file_doesnt_pass("FINAL-FINAL.docx"); filterer.dir_doesnt_pass("/a/folder"); filterer.dir_does_pass("Cargo.toml"); } #[tokio::test] async fn glob_star_trailing_slash() { let filterer = filt(&["Cargo.*/"], &[], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_doesnt_pass("Cargo.json"); filterer.file_doesnt_pass("Gemfile.toml"); 
filterer.file_doesnt_pass("FINAL-FINAL.docx"); filterer.dir_doesnt_pass("/a/folder"); filterer.dir_does_pass("Cargo.toml"); filterer.unk_doesnt_pass("Cargo.toml"); } #[tokio::test] async fn glob_star_leading_slash() { let filterer = filt(&["/Cargo.*"], &[], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.dir_does_pass("Cargo.toml"); filterer.unk_does_pass("Cargo.toml"); filterer.file_doesnt_pass("foo/Cargo.toml"); filterer.dir_doesnt_pass("foo/Cargo.toml"); } #[tokio::test] async fn glob_leading_double_star() { let filterer = filt(&["**/possum"], &[], &[], &[], &[]).await; filterer.file_does_pass("possum"); filterer.file_does_pass("foo/bar/possum"); filterer.file_does_pass("/foo/bar/possum"); filterer.dir_does_pass("possum"); filterer.dir_does_pass("foo/bar/possum"); filterer.dir_does_pass("/foo/bar/possum"); filterer.file_doesnt_pass("rat"); filterer.file_doesnt_pass("foo/bar/rat"); filterer.file_doesnt_pass("/foo/bar/rat"); } #[tokio::test] async fn glob_trailing_double_star() { let filterer = filt(&["possum/**"], &[], &[], &[], &[]).await; // these do work by expectation and in v1 filterer.file_does_pass("/test/possum/foo/bar"); filterer.dir_doesnt_pass("possum"); filterer.dir_doesnt_pass("foo/bar/possum"); filterer.dir_does_pass("possum/foo/bar"); filterer.file_doesnt_pass("rat"); filterer.file_doesnt_pass("foo/bar/rat"); filterer.file_doesnt_pass("/foo/bar/rat"); } #[tokio::test] async fn glob_middle_double_star() { let filterer = filt(&["apples/**/oranges"], &[], &[], &[], &[]).await; filterer.dir_doesnt_pass("/a/folder"); filterer.file_does_pass("apples/carrots/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples/carrots/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); 
filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.dir_doesnt_pass("apples/oranges/bananas"); } #[tokio::test] async fn glob_double_star_trailing_slash() { let filterer = filt(&["apples/**/oranges/"], &[], &[], &[], &[]).await; filterer.dir_doesnt_pass("/a/folder"); filterer.file_doesnt_pass("apples/carrots/oranges"); filterer.file_doesnt_pass("apples/carrots/cauliflowers/oranges"); filterer.file_doesnt_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples/carrots/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.dir_doesnt_pass("apples/oranges/bananas"); filterer.unk_doesnt_pass("apples/carrots/oranges"); filterer.unk_doesnt_pass("apples/carrots/cauliflowers/oranges"); filterer.unk_doesnt_pass("apples/carrots/cauliflowers/artichokes/oranges"); } #[tokio::test] async fn ignore_exact_filename() { let filterer = filt(&[], &["Cargo.toml"], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_doesnt_pass("/test/foo/bar/Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/a/folder"); filterer.dir_doesnt_pass("/test/Cargo.toml"); } #[tokio::test] async fn ignore_exact_filename_in_folder() { let filterer = filt(&[], &["sub/Cargo.toml"], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_doesnt_pass("sub/Cargo.toml"); filterer.file_does_pass("/test/foo/bar/Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/a/folder"); filterer.dir_doesnt_pass("/test/sub/Cargo.toml"); } #[tokio::test] async fn ignore_exact_filename_in_hidden_folder() { let filterer = filt(&[], &[".sub/Cargo.toml"], &[], 
&[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_doesnt_pass(".sub/Cargo.toml"); filterer.file_does_pass("/test/foo/bar/Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/a/folder"); filterer.dir_doesnt_pass("/test/.sub/Cargo.toml"); } #[tokio::test] async fn ignore_exact_filenames_multiple() { let filterer = filt(&[], &["Cargo.toml", "package.json"], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_doesnt_pass("/test/foo/bar/Cargo.toml"); filterer.file_doesnt_pass("package.json"); filterer.file_doesnt_pass("/test/foo/bar/package.json"); filterer.file_does_pass("Cargo.json"); filterer.file_does_pass("package.toml"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/a/folder"); filterer.dir_doesnt_pass("/test/Cargo.toml"); filterer.dir_doesnt_pass("/test/package.json"); } #[tokio::test] async fn ignore_glob_single_final_ext_star() { let filterer = filt(&[], &["Cargo.*"], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_doesnt_pass("Cargo.json"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/a/folder"); filterer.dir_doesnt_pass("Cargo.toml"); } #[tokio::test] async fn ignore_glob_star_trailing_slash() { let filterer = filt(&[], &["Cargo.*/"], &[], &[], &[]).await; filterer.file_does_pass("Cargo.toml"); filterer.file_does_pass("Cargo.json"); filterer.file_does_pass("Gemfile.toml"); filterer.file_does_pass("FINAL-FINAL.docx"); filterer.dir_does_pass("/a/folder"); filterer.dir_doesnt_pass("Cargo.toml"); filterer.unk_does_pass("Cargo.toml"); } #[tokio::test] async fn ignore_glob_star_leading_slash() { let filterer = filt(&[], &["/Cargo.*"], &[], &[], &[]).await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_doesnt_pass("Cargo.json"); 
filterer.dir_doesnt_pass("Cargo.toml"); filterer.unk_doesnt_pass("Cargo.toml"); filterer.file_does_pass("foo/Cargo.toml"); filterer.dir_does_pass("foo/Cargo.toml"); } #[tokio::test] async fn ignore_glob_leading_double_star() { let filterer = filt(&[], &["**/possum"], &[], &[], &[]).await; filterer.file_doesnt_pass("possum"); filterer.file_doesnt_pass("foo/bar/possum"); filterer.file_doesnt_pass("/foo/bar/possum"); filterer.dir_doesnt_pass("possum"); filterer.dir_doesnt_pass("foo/bar/possum"); filterer.dir_doesnt_pass("/foo/bar/possum"); filterer.file_does_pass("rat"); filterer.file_does_pass("foo/bar/rat"); filterer.file_does_pass("/foo/bar/rat"); } #[tokio::test] async fn ignore_glob_trailing_double_star() { let filterer = filt(&[], &["possum/**"], &[], &[], &[]).await; filterer.file_does_pass("possum"); filterer.file_doesnt_pass("possum/foo/bar"); filterer.file_does_pass("/possum/foo/bar"); filterer.file_doesnt_pass("/test/possum/foo/bar"); filterer.dir_does_pass("possum"); filterer.dir_does_pass("foo/bar/possum"); filterer.dir_does_pass("/foo/bar/possum"); filterer.dir_doesnt_pass("possum/foo/bar"); filterer.dir_does_pass("/possum/foo/bar"); filterer.dir_doesnt_pass("/test/possum/foo/bar"); filterer.file_does_pass("rat"); filterer.file_does_pass("foo/bar/rat"); filterer.file_does_pass("/foo/bar/rat"); } #[tokio::test] async fn ignore_glob_middle_double_star() { let filterer = filt(&[], &["apples/**/oranges"], &[], &[], &[]).await; filterer.dir_does_pass("/a/folder"); filterer.file_doesnt_pass("apples/carrots/oranges"); filterer.file_doesnt_pass("apples/carrots/cauliflowers/oranges"); filterer.file_doesnt_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_doesnt_pass("apples/carrots/oranges"); filterer.dir_doesnt_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_doesnt_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.dir_does_pass("apples/oranges/bananas"); } 
#[tokio::test] async fn ignore_glob_double_star_trailing_slash() { let filterer = filt(&[], &["apples/**/oranges/"], &[], &[], &[]).await; filterer.dir_does_pass("/a/folder"); filterer.file_does_pass("apples/carrots/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_doesnt_pass("apples/carrots/oranges"); filterer.dir_doesnt_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_doesnt_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.dir_does_pass("apples/oranges/bananas"); filterer.unk_does_pass("apples/carrots/oranges"); filterer.unk_does_pass("apples/carrots/cauliflowers/oranges"); filterer.unk_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); } #[tokio::test] async fn ignores_take_precedence() { let filterer = filt( &["*.docx", "*.toml", "*.json"], &["*.toml", "*.json"], &[], &[], &[], ) .await; filterer.file_doesnt_pass("Cargo.toml"); filterer.file_doesnt_pass("/test/foo/bar/Cargo.toml"); filterer.file_doesnt_pass("package.json"); filterer.file_doesnt_pass("/test/foo/bar/package.json"); filterer.dir_doesnt_pass("/test/Cargo.toml"); filterer.dir_doesnt_pass("/test/package.json"); filterer.file_does_pass("FINAL-FINAL.docx"); } #[tokio::test] async fn extensions_fail_dirs() { let filterer = filt(&[], &[], &[], &["py"], &[]).await; filterer.file_does_pass("Cargo.py"); filterer.file_doesnt_pass("Cargo.toml"); filterer.dir_doesnt_pass("Cargo"); filterer.dir_doesnt_pass("Cargo.toml"); filterer.dir_doesnt_pass("Cargo.py"); } #[tokio::test] async fn extensions_fail_extensionless() { let filterer = filt(&[], &[], &[], &["py"], &[]).await; filterer.file_does_pass("Cargo.py"); filterer.file_doesnt_pass("Cargo"); } #[tokio::test] async fn multipath_allow_on_any_one_pass() { use watchexec::filter::Filterer; use watchexec_events::{Event, FileType, Tag}; let filterer = filt(&[], &[], 
&[], &["py"], &[]).await; let origin = tokio::fs::canonicalize(".").await.unwrap(); let event = Event { tags: vec![ Tag::Path { path: origin.join("Cargo.py"), file_type: Some(FileType::File), }, Tag::Path { path: origin.join("Cargo.toml"), file_type: Some(FileType::File), }, Tag::Path { path: origin.join("Cargo.py"), file_type: Some(FileType::Dir), }, ], metadata: Default::default(), }; assert!(filterer.check_event(&event, Priority::Normal).unwrap()); } #[tokio::test] async fn extensions_and_filters_glob() { let filterer = filt(&["*/justfile"], &[], &[], &["md", "css"], &[]).await; filterer.file_does_pass("foo/justfile"); filterer.file_does_pass("bar.md"); filterer.file_does_pass("qux.css"); filterer.file_doesnt_pass("nope.py"); // Watchexec 1.x buggy behaviour, should not pass #[cfg(unix)] filterer.file_does_pass("justfile"); } #[tokio::test] async fn extensions_and_filters_slash() { let filterer = filt(&["/justfile"], &[], &[], &["md", "css"], &[]).await; filterer.file_does_pass("justfile"); filterer.file_does_pass("bar.md"); filterer.file_does_pass("qux.css"); filterer.file_doesnt_pass("nope.py"); } #[tokio::test] async fn leading_single_glob_file() { let filterer = filt(&["*/justfile"], &[], &[], &[], &[]).await; filterer.file_does_pass("foo/justfile"); filterer.file_doesnt_pass("notfile"); filterer.file_doesnt_pass("not/thisfile"); // Watchexec 1.x buggy behaviour, should not pass #[cfg(unix)] filterer.file_does_pass("justfile"); } #[tokio::test] async fn nonpath_event_passes() { use watchexec::filter::Filterer; use watchexec_events::{Event, Source, Tag}; let filterer = filt(&[], &[], &[], &["py"], &[]).await; assert!(filterer .check_event( &Event { tags: vec![Tag::Source(Source::Internal)], metadata: Default::default(), }, Priority::Normal ) .unwrap()); assert!(filterer .check_event( &Event { tags: vec![Tag::Source(Source::Keyboard)], metadata: Default::default(), }, Priority::Normal ) .unwrap()); } // The following tests replicate the "buggy"/"confusing" 
watchexec v1 behaviour. #[tokio::test] async fn ignore_folder_incorrectly_with_bare_match() { let filterer = filt(&[], &["prunes"], &[], &[], &[]).await; filterer.file_does_pass("apples"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes/oranges/bananas"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("prunes"); filterer.dir_doesnt_pass("prunes"); // buggy behaviour (should be doesnt): filterer.file_does_pass("prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("prunes/oranges/bananas"); filterer.dir_does_pass("prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("prunes/carrots/cauliflowers/artichokes/oranges"); } #[tokio::test] async fn ignore_folder_incorrectly_with_bare_and_leading_slash() { let filterer = filt(&[], &["/prunes"], &[], &[], &[]).await; filterer.file_does_pass("apples"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes"); 
filterer.dir_does_pass("raw-prunes"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes/oranges/bananas"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("prunes"); filterer.dir_doesnt_pass("prunes"); // buggy behaviour (should be doesnt): filterer.file_does_pass("prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("prunes/oranges/bananas"); filterer.dir_does_pass("prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("prunes/carrots/cauliflowers/artichokes/oranges"); } #[tokio::test] async fn ignore_folder_incorrectly_with_bare_and_trailing_slash() { let filterer = filt(&[], &["prunes/"], &[], &[], &[]).await; filterer.file_does_pass("apples"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes/oranges/bananas"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.dir_doesnt_pass("prunes"); // buggy behaviour (should be doesnt): filterer.file_does_pass("prunes"); filterer.file_does_pass("prunes/carrots/cauliflowers/oranges"); 
filterer.file_does_pass("prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("prunes/oranges/bananas"); filterer.dir_does_pass("prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("prunes/carrots/cauliflowers/artichokes/oranges"); } #[tokio::test] async fn ignore_folder_incorrectly_with_only_double_double_glob() { let filterer = filt(&[], &["**/prunes/**"], &[], &[], &[]).await; filterer.file_does_pass("apples"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes/oranges/bananas"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("prunes/carrots/cauliflowers/oranges"); filterer.file_doesnt_pass("prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("prunes/oranges/bananas"); filterer.dir_doesnt_pass("prunes/carrots/cauliflowers/oranges"); filterer.dir_doesnt_pass("prunes/carrots/cauliflowers/artichokes/oranges"); // buggy behaviour (should be doesnt): filterer.file_does_pass("prunes"); filterer.dir_does_pass("prunes"); } #[tokio::test] async fn ignore_folder_correctly_with_double_and_double_double_globs() { let filterer = filt(&[], &["**/prunes", "**/prunes/**"], &[], &[], &[]).await; filterer.file_does_pass("apples"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); 
filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.file_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("raw-prunes/oranges/bananas"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/oranges"); filterer.dir_does_pass("raw-prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("prunes"); filterer.file_doesnt_pass("prunes/carrots/cauliflowers/oranges"); filterer.file_doesnt_pass("prunes/carrots/cauliflowers/artichokes/oranges"); filterer.file_doesnt_pass("prunes/oranges/bananas"); filterer.dir_doesnt_pass("prunes"); filterer.dir_doesnt_pass("prunes/carrots/cauliflowers/oranges"); filterer.dir_doesnt_pass("prunes/carrots/cauliflowers/artichokes/oranges"); } #[tokio::test] async fn whitelist_overrides_ignore() { let filterer = filt(&[], &["**/prunes"], &["/prunes"], &[], &[]).await; filterer.file_does_pass("apples"); filterer.file_does_pass("/prunes"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("/prunes"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_doesnt_pass("apples/prunes"); filterer.file_doesnt_pass("raw/prunes"); filterer.dir_doesnt_pass("apples/prunes"); filterer.dir_doesnt_pass("raw/prunes"); } #[tokio::test] async fn whitelist_overrides_ignore_files() { let mut ignore_file = tempfile::NamedTempFile::new().unwrap(); let _ = ignore_file.write(b"prunes"); let origin = std::fs::canonicalize(".").unwrap(); let whitelist = origin.join("prunes").display().to_string(); let filterer = filt( &[], &[], &[&whitelist], &[], 
&[ignore_file.path().to_path_buf()], ) .await; filterer.file_does_pass("apples"); filterer.file_does_pass("prunes"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("prunes"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_doesnt_pass("apples/prunes"); filterer.file_doesnt_pass("raw/prunes"); filterer.dir_doesnt_pass("apples/prunes"); filterer.dir_doesnt_pass("raw/prunes"); } #[tokio::test] async fn whitelist_overrides_ignore_files_nested() { let mut ignore_file = tempfile::NamedTempFile::new().unwrap(); let _ = ignore_file.write(b"prunes\n"); let origin = std::fs::canonicalize(".").unwrap(); let whitelist = origin.join("prunes").join("target").display().to_string(); let filterer = filt( &[], &[], &[&whitelist], &[], &[ignore_file.path().to_path_buf()], ) .await; filterer.file_does_pass("apples"); filterer.file_doesnt_pass("prunes"); filterer.dir_does_pass("apples"); filterer.dir_doesnt_pass("prunes"); filterer.file_does_pass("raw-prunes"); filterer.dir_does_pass("raw-prunes"); filterer.file_doesnt_pass("prunes/apples"); filterer.file_doesnt_pass("prunes/raw"); filterer.dir_doesnt_pass("prunes/apples"); filterer.dir_doesnt_pass("prunes/raw"); filterer.file_doesnt_pass("apples/prunes"); filterer.file_doesnt_pass("raw/prunes"); filterer.dir_doesnt_pass("apples/prunes"); filterer.dir_doesnt_pass("raw/prunes"); filterer.file_does_pass("prunes/target"); filterer.dir_does_pass("prunes/target"); filterer.file_doesnt_pass("prunes/nested/target"); filterer.dir_doesnt_pass("prunes/nested/target"); } ================================================ FILE: crates/filterer/globset/tests/helpers/mod.rs ================================================ use std::{ ffi::OsString, path::{Path, PathBuf}, }; use ignore_files::IgnoreFile; use watchexec::{error::RuntimeError, filter::Filterer}; use watchexec_events::{Event, FileType, Priority, Tag}; use watchexec_filterer_globset::GlobsetFilterer; use 
watchexec_filterer_ignore::IgnoreFilterer;

pub mod globset {
	pub use super::globset_filt as filt;
	pub use super::PathHarness;
	pub use watchexec_events::Priority;
}

pub trait PathHarness: Filterer {
	fn check_path(
		&self,
		path: PathBuf,
		file_type: Option<FileType>,
	) -> std::result::Result<bool, RuntimeError> {
		let event = Event {
			tags: vec![Tag::Path { path, file_type }],
			metadata: Default::default(),
		};
		self.check_event(&event, Priority::Normal)
	}

	fn path_pass(&self, path: &str, file_type: Option<FileType>, pass: bool) {
		let origin = std::fs::canonicalize(".").unwrap();
		let full_path = if let Some(suf) = path.strip_prefix("/test/") {
			origin.join(suf)
		} else if Path::new(path).has_root() {
			path.into()
		} else {
			origin.join(path)
		};
		tracing::info!(?path, ?file_type, ?pass, "check");
		assert_eq!(
			self.check_path(full_path, file_type).unwrap(),
			pass,
			"{} {:?} (expected {})",
			match file_type {
				Some(FileType::File) => "file",
				Some(FileType::Dir) => "dir",
				Some(FileType::Symlink) => "symlink",
				Some(FileType::Other) => "other",
				None => "path",
			},
			path,
			if pass { "pass" } else { "fail" }
		);
	}

	fn file_does_pass(&self, path: &str) {
		self.path_pass(path, Some(FileType::File), true);
	}

	fn file_doesnt_pass(&self, path: &str) {
		self.path_pass(path, Some(FileType::File), false);
	}

	fn dir_does_pass(&self, path: &str) {
		self.path_pass(path, Some(FileType::Dir), true);
	}

	fn dir_doesnt_pass(&self, path: &str) {
		self.path_pass(path, Some(FileType::Dir), false);
	}

	fn unk_does_pass(&self, path: &str) {
		self.path_pass(path, None, true);
	}

	fn unk_doesnt_pass(&self, path: &str) {
		self.path_pass(path, None, false);
	}
}

impl PathHarness for GlobsetFilterer {}
impl PathHarness for IgnoreFilterer {}

fn tracing_init() {
	use tracing_subscriber::{
		fmt::{format::FmtSpan, Subscriber},
		util::SubscriberInitExt,
		EnvFilter,
	};
	Subscriber::builder()
		.pretty()
		.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
		.with_env_filter(EnvFilter::from_default_env())
		.finish()
		.try_init()
		.ok();
}

pub async fn globset_filt( filters: &[&str], ignores:
&[&str], whitelists: &[&str], extensions: &[&str], ignore_files: &[PathBuf], ) -> GlobsetFilterer { let origin = tokio::fs::canonicalize(".").await.unwrap(); tracing_init(); GlobsetFilterer::new( origin, filters.iter().map(|s| ((*s).to_string(), None)), ignores.iter().map(|s| ((*s).to_string(), None)), whitelists.iter().map(|s| (*s).into()), ignore_files.iter().map(|path| IgnoreFile { path: path.clone(), applies_in: None, applies_to: None, }), extensions.iter().map(OsString::from), ) .await .expect("making filterer") } ================================================ FILE: crates/filterer/ignore/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v7.0.0 (2025-05-15) - Deps: remove unused dependency `watchexec-signals` ([#930](https://github.com/watchexec/watchexec/pull/930)) ## v6.0.0 (2025-02-09) ## v5.0.0 (2024-10-14) ## v4.0.1 (2024-04-28) ## v4.0.0 (2024-04-20) - Deps: watchexec 4 ## v3.0.1 (2024-01-04) - Normalise paths on all platforms (via `normalize-path`). ## v3.0.0 (2024-01-01) - Deps: `ignore-files` 2.0.0 ## v2.0.1 (2023-12-09) - Depend on `watchexec-events` instead of the `watchexec` re-export. ## v1.2.1 (2023-05-14) - Use IO-free dunce::simplify to normalise paths on Windows. - Known regression: some filtering patterns misbehave slightly on Windows with paths outside the project root. - As filters were previously completely broken on Windows, this is still considered an improvement. ## v1.2.0 (2023-03-18) - Ditch MSRV policy. The `rust-version` indication will remain, for the minimum estimated Rust version for the code features used in the crate's own code, but dependencies may have already moved on. From now on, only latest stable is assumed and tested for. ([#510](https://github.com/watchexec/watchexec/pull/510)) ## v1.1.0 (2023-01-09) - MSRV: bump to 1.61.0 ## v1.0.0 (2022-06-23) - Initial release as a separate crate. 
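The whitelist check in the globset filterer above carries a comment noting that comparing paths from the end would perform better, since watched paths often share long directory prefixes and a forward comparison only fails at the last component. A std-only sketch of that idea follows; the helper name `paths_equal_backwards` is hypothetical and not part of the watchexec API.

```rust
use std::path::Path;

/// Hypothetical helper (not watchexec API): compare two paths
/// component-by-component starting from the end, so that paths which
/// share a long directory prefix but differ in the final component
/// are rejected on the very first comparison.
fn paths_equal_backwards(a: &Path, b: &Path) -> bool {
    let mut ca = a.components().rev();
    let mut cb = b.components().rev();
    loop {
        match (ca.next(), cb.next()) {
            // both exhausted at the same time: all components matched
            (None, None) => return true,
            // matching components: keep walking towards the root
            (Some(x), Some(y)) if x == y => continue,
            // mismatch or different lengths: unequal
            _ => return false,
        }
    }
}

fn main() {
    // Differ only in the last component: detected on the first step.
    assert!(!paths_equal_backwards(
        Path::new("/very/long/shared/prefix/a.txt"),
        Path::new("/very/long/shared/prefix/b.txt"),
    ));
    // Identical paths still compare equal.
    assert!(paths_equal_backwards(
        Path::new("/very/long/shared/prefix/a.txt"),
        Path::new("/very/long/shared/prefix/a.txt"),
    ));
    // Paths of different depth are unequal.
    assert!(!paths_equal_backwards(Path::new("a/b"), Path::new("a/b/c")));
    println!("ok");
}
```

`Path::components()` is a `DoubleEndedIterator`, so the reverse walk costs nothing extra; whether it is worth it depends on how many whitelist entries share a prefix with the event path.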
================================================ FILE: crates/filterer/ignore/Cargo.toml ================================================ [package] name = "watchexec-filterer-ignore" version = "7.0.0" authors = ["Félix Saparelli "] license = "Apache-2.0" description = "Watchexec filterer component for ignore files" keywords = ["watchexec", "filterer", "ignore"] documentation = "https://docs.rs/watchexec-filterer-ignore" homepage = "https://watchexec.github.io" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.61.0" edition = "2021" [dependencies] ignore = "0.4.18" dunce = "1.0.4" normalize-path = "0.2.1" tracing = "0.1.40" [dependencies.ignore-files] version = "3.0.5" path = "../../ignore-files" [dependencies.watchexec] version = "8.2.0" path = "../../lib" [dependencies.watchexec-events] version = "6.1.0" path = "../../events" [dev-dependencies.project-origins] version = "1.4.2" path = "../../project-origins" [dev-dependencies.tokio] version = "1.33.0" features = [ "fs", "io-std", "rt", "rt-multi-thread", "macros", ] [dev-dependencies.tracing-subscriber] version = "0.3.6" features = ["env-filter"] ================================================ FILE: crates/filterer/ignore/README.md ================================================ [![Crates.io page](https://badgen.net/crates/v/watchexec-filterer-ignore)](https://crates.io/crates/watchexec-filterer-ignore) [![API Docs](https://docs.rs/watchexec-filterer-ignore/badge.svg)][docs] [![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license] [![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml) # Watchexec filterer: ignore _(Sub)filterer implementation for ignore files._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license]. - Status: maintained. 
This is mostly a thin layer above the [ignore-files](../../ignore-files) crate, and is meant to be used as part of another more general filterer. However, there's nothing wrong with using it directly if all that's needed is to handle ignore files. [docs]: https://docs.rs/watchexec-filterer-ignore [license]: ../../../LICENSE ================================================ FILE: crates/filterer/ignore/release.toml ================================================ pre-release-commit-message = "release: filterer-ignore v{{version}}" tag-prefix = "watchexec-filterer-ignore-" tag-message = "watchexec-filterer-ignore {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/filterer/ignore/src/lib.rs ================================================ //! A Watchexec Filterer implementation for ignore files. //! //! This filterer is meant to be used as a backing filterer inside a more complex or complete //! filterer, and not as a standalone filterer. //! //! This is a fairly simple wrapper around the [`ignore_files`] crate, which is probably where you //! want to look for any detail or to use this outside of Watchexec. #![doc(html_favicon_url = "https://watchexec.github.io/logo:watchexec.svg")] #![doc(html_logo_url = "https://watchexec.github.io/logo:watchexec.svg")] #![warn(clippy::unwrap_used, missing_docs)] #![cfg_attr(not(test), warn(unused_crate_dependencies))] #![deny(rust_2018_idioms)] use ignore::Match; use ignore_files::IgnoreFilter; use normalize_path::NormalizePath; use tracing::{trace, trace_span}; use watchexec::{error::RuntimeError, filter::Filterer}; use watchexec_events::{Event, FileType, Priority}; /// A Watchexec [`Filterer`] implementation for [`IgnoreFilter`]. 
#[derive(Clone, Debug)]
pub struct IgnoreFilterer(pub IgnoreFilter);

impl Filterer for IgnoreFilterer {
	/// Filter an event.
	///
	/// This implementation never errors. It returns `Ok(false)` if the event is ignored according
	/// to the ignore files, and `Ok(true)` otherwise. It ignores event priority.
	fn check_event(&self, event: &Event, _priority: Priority) -> Result<bool, RuntimeError> {
		let _span = trace_span!("filterer_check").entered();
		let mut pass = true;
		for (path, file_type) in event.paths() {
			let path = dunce::simplified(path).normalize();
			let path = path.as_path();
			let _span = trace_span!("checking_against_compiled", ?path, ?file_type).entered();
			let is_dir = file_type.map_or(false, |t| matches!(t, FileType::Dir));
			match self.0.match_path(path, is_dir) {
				Match::None => {
					trace!("no match (pass)");
					pass &= true;
				}
				Match::Ignore(glob) => {
					if glob.from().map_or(true, |f| path.strip_prefix(f).is_ok()) {
						trace!(?glob, "positive match (fail)");
						pass &= false;
					} else {
						trace!(?glob, "positive match, but not in scope (ignore)");
					}
				}
				Match::Whitelist(glob) => {
					trace!(?glob, "negative match (pass)");
					pass = true;
				}
			}
		}
		trace!(?pass, "verdict");
		Ok(pass)
	}
}

================================================ FILE: crates/filterer/ignore/tests/filtering.rs ================================================
use ignore_files::IgnoreFilter;
use watchexec_filterer_ignore::IgnoreFilterer;

mod helpers;
use helpers::ignore::*;

#[tokio::test]
async fn folders() {
	let filterer = filt("", &[file("folders")]).await;

	filterer.file_doesnt_pass("prunes");
	filterer.dir_doesnt_pass("prunes");
	folders_suite(&filterer, "prunes");

	filterer.file_doesnt_pass("apricots");
	filterer.dir_doesnt_pass("apricots");
	folders_suite(&filterer, "apricots");

	filterer.file_does_pass("cherries");
	filterer.dir_doesnt_pass("cherries");
	folders_suite(&filterer, "cherries");

	filterer.file_does_pass("grapes");
	filterer.dir_does_pass("grapes");
	folders_suite(&filterer, "grapes");

	filterer.file_doesnt_pass("feijoa");
filterer.dir_doesnt_pass("feijoa"); folders_suite(&filterer, "feijoa"); } fn folders_suite(filterer: &IgnoreFilterer, name: &str) { filterer.file_does_pass("apples"); filterer.file_does_pass("apples/carrots/cauliflowers/oranges"); filterer.file_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass("apples/oranges/bananas"); filterer.dir_does_pass("apples"); filterer.dir_does_pass("apples/carrots/cauliflowers/oranges"); filterer.dir_does_pass("apples/carrots/cauliflowers/artichokes/oranges"); filterer.file_does_pass(&format!("raw-{name}")); filterer.dir_does_pass(&format!("raw-{name}")); filterer.file_does_pass(&format!("raw-{name}/carrots/cauliflowers/oranges")); filterer.file_does_pass(&format!("raw-{name}/oranges/bananas")); filterer.dir_does_pass(&format!("raw-{name}/carrots/cauliflowers/oranges")); filterer.file_does_pass(&format!( "raw-{}/carrots/cauliflowers/artichokes/oranges", name )); filterer.dir_does_pass(&format!( "raw-{}/carrots/cauliflowers/artichokes/oranges", name )); filterer.dir_doesnt_pass(&format!("{name}/carrots/cauliflowers/oranges")); filterer.dir_doesnt_pass(&format!("{name}/carrots/cauliflowers/artichokes/oranges")); filterer.file_doesnt_pass(&format!("{name}/carrots/cauliflowers/oranges")); filterer.file_doesnt_pass(&format!("{name}/carrots/cauliflowers/artichokes/oranges")); filterer.file_doesnt_pass(&format!("{name}/oranges/bananas")); } #[tokio::test] async fn globs() { let filterer = filt("", &[file("globs").applies_globally()]).await; // Unmatched filterer.file_does_pass("FINAL-FINAL.docx"); #[cfg(windows)] filterer.dir_does_pass(r"C:\a\folder"); #[cfg(not(windows))] filterer.dir_does_pass("/a/folder"); filterer.file_does_pass("rat"); filterer.file_does_pass("foo/bar/rat"); #[cfg(windows)] filterer.file_does_pass(r"C:\foo\bar\rat"); #[cfg(not(windows))] filterer.file_does_pass("/foo/bar/rat"); // Cargo.toml filterer.file_doesnt_pass("Cargo.toml"); filterer.dir_doesnt_pass("Cargo.toml"); 
filterer.file_does_pass("Cargo.json"); // package.json filterer.file_doesnt_pass("package.json"); filterer.dir_doesnt_pass("package.json"); filterer.file_does_pass("package.toml"); // *.gemspec filterer.file_doesnt_pass("pearl.gemspec"); filterer.dir_doesnt_pass("sapphire.gemspec"); filterer.file_doesnt_pass(".gemspec"); filterer.file_does_pass("diamond.gemspecial"); // test-* filterer.file_doesnt_pass("test-unit"); filterer.dir_doesnt_pass("test-integration"); filterer.file_does_pass("tester-helper"); // *.sw* filterer.file_doesnt_pass("source.file.swa"); filterer.file_doesnt_pass(".source.file.swb"); filterer.dir_doesnt_pass("source.folder.swd"); filterer.file_does_pass("other.thing.s_w"); // sources.*/ filterer.file_does_pass("sources.waters"); filterer.dir_doesnt_pass("sources.rivers"); // /output.* filterer.file_doesnt_pass("output.toml"); filterer.file_doesnt_pass("output.json"); filterer.dir_doesnt_pass("output.toml"); filterer.unk_doesnt_pass("output.toml"); filterer.file_does_pass("foo/output.toml"); filterer.dir_does_pass("foo/output.toml"); // **/possum filterer.file_doesnt_pass("possum"); filterer.file_doesnt_pass("foo/bar/possum"); // #[cfg(windows)] FIXME should work // filterer.file_doesnt_pass(r"C:\foo\bar\possum"); #[cfg(not(windows))] filterer.file_doesnt_pass("/foo/bar/possum"); filterer.dir_doesnt_pass("possum"); filterer.dir_doesnt_pass("foo/bar/possum"); // #[cfg(windows)] FIXME should work // filterer.dir_doesnt_pass(r"C:\foo\bar\possum"); #[cfg(not(windows))] filterer.dir_doesnt_pass("/foo/bar/possum"); // zebra/** filterer.file_does_pass("zebra"); filterer.file_doesnt_pass("zebra/foo/bar"); // #[cfg(windows)] FIXME should work // filterer.file_does_pass(r"C:\zebra\foo\bar"); #[cfg(not(windows))] filterer.file_does_pass("/zebra/foo/bar"); // #[cfg(windows)] FIXME should work // filterer.file_doesnt_pass(r"C:\test\zebra\foo\bar"); #[cfg(not(windows))] filterer.file_doesnt_pass("/test/zebra/foo/bar"); filterer.dir_does_pass("zebra"); 
filterer.dir_does_pass("foo/bar/zebra"); // #[cfg(windows)] FIXME should work // filterer.dir_does_pass(r"C:\foo\bar\zebra"); #[cfg(not(windows))] filterer.dir_does_pass("/foo/bar/zebra"); filterer.dir_doesnt_pass("zebra/foo/bar"); // #[cfg(windows)] FIXME should work // filterer.dir_does_pass(r"C:\zebra\foo\bar"); #[cfg(not(windows))] filterer.dir_does_pass("/zebra/foo/bar"); // #[cfg(windows)] FIXME should work // filterer.dir_doesnt_pass(r"C:\test\zebra\foo\bar"); #[cfg(not(windows))] filterer.dir_doesnt_pass("/test/zebra/foo/bar"); // elep/**/hant filterer.file_doesnt_pass("elep/carrots/hant"); filterer.file_doesnt_pass("elep/carrots/cauliflowers/hant"); filterer.file_doesnt_pass("elep/carrots/cauliflowers/artichokes/hant"); filterer.dir_doesnt_pass("elep/carrots/hant"); filterer.dir_doesnt_pass("elep/carrots/cauliflowers/hant"); filterer.dir_doesnt_pass("elep/carrots/cauliflowers/artichokes/hant"); filterer.file_doesnt_pass("elep/hant/bananas"); filterer.dir_doesnt_pass("elep/hant/bananas"); // song/**/bird/ filterer.file_does_pass("song/carrots/bird"); filterer.file_does_pass("song/carrots/cauliflowers/bird"); filterer.file_does_pass("song/carrots/cauliflowers/artichokes/bird"); filterer.dir_doesnt_pass("song/carrots/bird"); filterer.dir_doesnt_pass("song/carrots/cauliflowers/bird"); filterer.dir_doesnt_pass("song/carrots/cauliflowers/artichokes/bird"); filterer.unk_does_pass("song/carrots/bird"); filterer.unk_does_pass("song/carrots/cauliflowers/bird"); filterer.unk_does_pass("song/carrots/cauliflowers/artichokes/bird"); filterer.file_doesnt_pass("song/bird/bananas"); filterer.dir_doesnt_pass("song/bird/bananas"); } #[tokio::test] async fn negate() { let filterer = filt("", &[file("negate")]).await; filterer.file_does_pass("yeah"); filterer.file_doesnt_pass("nah"); filterer.file_does_pass("nah.yeah"); } #[tokio::test] async fn allowlist() { let filterer = filt("", &[file("allowlist")]).await; filterer.file_does_pass("mod.go"); 
filterer.file_does_pass("foo.go"); filterer.file_does_pass("go.sum"); filterer.file_does_pass("go.mod"); filterer.file_does_pass("README.md"); filterer.file_does_pass("LICENSE"); filterer.file_does_pass(".gitignore"); filterer.file_doesnt_pass("evil.sum"); filterer.file_doesnt_pass("evil.mod"); filterer.file_doesnt_pass("gofile.gone"); filterer.file_doesnt_pass("go.js"); filterer.file_doesnt_pass("README.asciidoc"); filterer.file_doesnt_pass("LICENSE.txt"); filterer.file_doesnt_pass("foo/.gitignore"); } #[tokio::test] async fn scopes() { let filterer = filt( "", &[ file("scopes-global").applies_globally(), file("scopes-local"), file("scopes-sublocal").applies_in("tests"), file("none-allowed").applies_in("tests/child"), ], ) .await; filterer.file_doesnt_pass("global.a"); // #[cfg(windows)] FIXME should work // filterer.file_doesnt_pass(r"C:\global.b"); #[cfg(not(windows))] filterer.file_doesnt_pass("/global.b"); filterer.file_doesnt_pass("tests/global.c"); filterer.file_doesnt_pass("local.a"); // #[cfg(windows)] FIXME should work // filterer.file_does_pass(r"C:\local.b"); #[cfg(not(windows))] filterer.file_does_pass("/local.b"); // FIXME flaky // filterer.file_doesnt_pass("tests/local.c"); filterer.file_does_pass("sublocal.a"); // #[cfg(windows)] FIXME should work // filterer.file_does_pass(r"C:\sublocal.b"); #[cfg(not(windows))] filterer.file_does_pass("/sublocal.b"); filterer.file_doesnt_pass("tests/sublocal.c"); filterer.file_doesnt_pass("tests/child/child.txt"); filterer.file_doesnt_pass("tests/child/grandchild/grandchild.c"); } #[tokio::test] async fn self_ignored() { let filterer = filt("", &[file("self.ignore").applies_in("tests/ignores")]).await; filterer.file_doesnt_pass("tests/ignores/self.ignore"); filterer.file_does_pass("self.ignore"); } #[tokio::test] async fn add_globs_without_any_ignore_file() { let origin = std::fs::canonicalize(".").unwrap(); let mut ignore_filter = IgnoreFilter::new(&origin, &[]).await.unwrap(); ignore_filter 
.add_globs(&["other/"], Some(&origin)) .expect("Failed to add globs to ignore filter"); let filterer = IgnoreFilterer(ignore_filter); filterer.file_doesnt_pass("other/some/file.txt"); filterer.file_does_pass("tests/ignores/self.ignore"); } #[tokio::test] async fn add_globs_to_existing_ignore_file() { let ignore_file = file("self.ignore").applies_in("tests/ignores"); let ignore_file_applies_in = ignore_file.applies_in.clone().unwrap(); let origin = std::fs::canonicalize(".").unwrap(); let mut ignore_filter = IgnoreFilter::new(&origin, &[ignore_file]).await.unwrap(); ignore_filter .add_globs(&["other/"], Some(&ignore_file_applies_in)) .expect("Failed to add globs to ignore filter"); let filterer = IgnoreFilterer(ignore_filter); filterer.file_doesnt_pass("tests/ignores/other/some/file.txt"); filterer.file_doesnt_pass("tests/ignores/self.ignore"); filterer.file_does_pass("README.md"); } #[tokio::test] async fn add_ignore_file_without_any_preexisting_ignore_file() { let origin = std::fs::canonicalize(".").unwrap(); let mut ignore_filter = IgnoreFilter::new(&origin, &[]).await.unwrap(); let new_ignore_file = file("self.ignore").applies_in("tests/ignores"); ignore_filter.add_file(&new_ignore_file).await.unwrap(); let filterer = IgnoreFilterer(ignore_filter); filterer.file_doesnt_pass("tests/ignores/self.ignore"); filterer.file_does_pass("README.md"); } #[tokio::test] async fn add_ignore_file_to_existing_ignore_file() { let ignore_file = file("scopes-global").applies_in("tests/ignores"); let origin = std::fs::canonicalize(".").unwrap(); let mut ignore_filter = IgnoreFilter::new(&origin, &[ignore_file]).await.unwrap(); let new_ignore_file = file("self.ignore").applies_in("tests/ignores"); ignore_filter.add_file(&new_ignore_file).await.unwrap(); let filterer = IgnoreFilterer(ignore_filter); filterer.file_doesnt_pass("tests/ignores/self.ignore"); filterer.file_doesnt_pass("tests/ignores/global.txt"); filterer.file_does_pass("README.md"); } 
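The filterer exercised by the tests above folds one verdict per event path into a single pass/fail answer. Below is a minimal standalone sketch of that folding logic, using a hypothetical simplified `Verdict` enum standing in for `ignore::Match`; the real `check_event` additionally checks glob scope via `glob.from()` before failing a path:

```rust
/// Hypothetical stand-in for `ignore::Match`: no match, an ignore match,
/// or a whitelist (negated pattern) match.
#[derive(Clone, Copy)]
enum Verdict {
    None,
    Ignore,
    Whitelist,
}

/// Fold per-path verdicts into one pass/fail answer, mirroring `check_event`:
/// - `None` leaves the current verdict untouched,
/// - `Ignore` fails the event,
/// - `Whitelist` unconditionally resets it to passing.
fn fold_verdicts(verdicts: &[Verdict]) -> bool {
    let mut pass = true;
    for v in verdicts {
        match v {
            Verdict::None => {}
            Verdict::Ignore => pass = false,
            Verdict::Whitelist => pass = true,
        }
    }
    pass
}

fn main() {
    // An ignored path fails the whole event…
    assert!(!fold_verdicts(&[Verdict::None, Verdict::Ignore]));
    // …but a later whitelist match resets the verdict to passing.
    assert!(fold_verdicts(&[Verdict::Ignore, Verdict::Whitelist]));
    // No matches at all: the event passes.
    assert!(fold_verdicts(&[Verdict::None, Verdict::None]));
}
```

Note that, as in `check_event`, a whitelist match resets the verdict even if an earlier path in the same event had failed.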
================================================ FILE: crates/filterer/ignore/tests/helpers/mod.rs ================================================ use std::path::{Path, PathBuf}; use ignore_files::{IgnoreFile, IgnoreFilter}; use watchexec::{error::RuntimeError, filter::Filterer}; use watchexec_events::{Event, FileType, Priority, Tag}; use watchexec_filterer_ignore::IgnoreFilterer; pub mod ignore { pub use super::ig_file as file; pub use super::ignore_filt as filt; pub use super::Applies; pub use super::PathHarness; } pub trait PathHarness: Filterer { fn check_path( &self, path: PathBuf, file_type: Option<FileType>, ) -> std::result::Result<bool, RuntimeError> { let event = Event { tags: vec![Tag::Path { path, file_type }], metadata: Default::default(), }; self.check_event(&event, Priority::Normal) } fn path_pass(&self, path: &str, file_type: Option<FileType>, pass: bool) { let origin = std::fs::canonicalize(".").unwrap(); let full_path = if let Some(suf) = path.strip_prefix("/test/") { origin.join(suf) } else if Path::new(path).has_root() { path.into() } else { origin.join(path) }; tracing::info!(?path, ?file_type, ?pass, "check"); assert_eq!( self.check_path(full_path, file_type).unwrap(), pass, "{} {:?} (expected {})", match file_type { Some(FileType::File) => "file", Some(FileType::Dir) => "dir", Some(FileType::Symlink) => "symlink", Some(FileType::Other) => "other", None => "path", }, path, if pass { "pass" } else { "fail" } ); } fn file_does_pass(&self, path: &str) { self.path_pass(path, Some(FileType::File), true); } fn file_doesnt_pass(&self, path: &str) { self.path_pass(path, Some(FileType::File), false); } fn dir_does_pass(&self, path: &str) { self.path_pass(path, Some(FileType::Dir), true); } fn dir_doesnt_pass(&self, path: &str) { self.path_pass(path, Some(FileType::Dir), false); } fn unk_does_pass(&self, path: &str) { self.path_pass(path, None, true); } fn unk_doesnt_pass(&self, path: &str) { self.path_pass(path, None, false); } } impl PathHarness for IgnoreFilterer {} fn tracing_init() { 
use tracing_subscriber::{ fmt::{format::FmtSpan, Subscriber}, util::SubscriberInitExt, EnvFilter, }; Subscriber::builder() .pretty() .with_span_events(FmtSpan::NEW | FmtSpan::CLOSE) .with_env_filter(EnvFilter::from_default_env()) .finish() .try_init() .ok(); } pub async fn ignore_filt(origin: &str, ignore_files: &[IgnoreFile]) -> IgnoreFilterer { tracing_init(); let origin = tokio::fs::canonicalize(".").await.unwrap().join(origin); IgnoreFilterer( IgnoreFilter::new(origin, ignore_files) .await .expect("making filterer"), ) } pub fn ig_file(name: &str) -> IgnoreFile { let origin = std::fs::canonicalize(".").unwrap(); let path = origin.join("tests").join("ignores").join(name); IgnoreFile { path, applies_in: Some(origin), applies_to: None, } } pub trait Applies { fn applies_globally(self) -> Self; fn applies_in(self, origin: &str) -> Self; } impl Applies for IgnoreFile { fn applies_globally(mut self) -> Self { self.applies_in = None; self } fn applies_in(mut self, origin: &str) -> Self { let origin = std::fs::canonicalize(".").unwrap().join(origin); self.applies_in = Some(origin); self } } ================================================ FILE: crates/filterer/ignore/tests/ignores/allowlist ================================================ # from https://github.com/github/gitignore * !/.gitignore !*.go !go.sum !go.mod !README.md !LICENSE !*/ ================================================ FILE: crates/filterer/ignore/tests/ignores/folders ================================================ prunes /apricots cherries/ **/grapes/** **/feijoa **/feijoa/** ================================================ FILE: crates/filterer/ignore/tests/ignores/globs ================================================ Cargo.toml package.json *.gemspec test-* *.sw* sources.*/ /output.* **/possum zebra/** elep/**/hant song/**/bird/ ================================================ FILE: crates/filterer/ignore/tests/ignores/negate ================================================ nah !nah.yeah 
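The `negate` fixture above relies on gitignore ordering semantics: patterns apply top to bottom, the last matching pattern wins, and a leading `!` re-includes a previously ignored path. A toy illustration of last-match-wins over literal names (a hypothetical sketch, not the `ignore` crate's matcher, which handles full glob syntax and directory scoping):

```rust
/// Toy gitignore-style check: returns true if `name` is ignored under the
/// given patterns. A `!` prefix negates (re-includes); the last matching
/// pattern wins. Only literal names are matched here, unlike real globs.
fn is_ignored(patterns: &[&str], name: &str) -> bool {
    let mut ignored = false;
    for pat in patterns {
        // Split off an optional leading `!` to get (negated?, bare pattern).
        let (negated, pat) = match pat.strip_prefix('!') {
            Some(rest) => (true, rest),
            None => (false, *pat),
        };
        if name == pat {
            ignored = !negated;
        }
    }
    ignored
}

fn main() {
    // Mirrors the `negate` fixture: `nah` then `!nah.yeah`.
    let patterns = ["nah", "!nah.yeah"];
    assert!(is_ignored(&patterns, "nah"));      // ignored by `nah`
    assert!(!is_ignored(&patterns, "nah.yeah")); // re-included by `!nah.yeah`
    assert!(!is_ignored(&patterns, "yeah"));     // never matched at all
}
```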
================================================ FILE: crates/filterer/ignore/tests/ignores/none-allowed ================================================ * ================================================ FILE: crates/filterer/ignore/tests/ignores/scopes-global ================================================ global.* ================================================ FILE: crates/filterer/ignore/tests/ignores/scopes-local ================================================ local.* ================================================ FILE: crates/filterer/ignore/tests/ignores/scopes-sublocal ================================================ sublocal.* ================================================ FILE: crates/filterer/ignore/tests/ignores/self.ignore ================================================ self.ignore ================================================ FILE: crates/ignore-files/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v3.0.5 (2026-01-20) - Deps: gix-config 0.50 - Deps: radix-trie 0.3 - Fix: match git's behaviour for finding ignores ## v3.0.4 (2025-05-15) - Calls to `add_globs()` and `add_file()` dynamically create a new ignore entry if there isn't one at the location of `applies_in` param. This allows users to e.g. add globs to a path that previously has no ignore files in. ([#908](https://github.com/watchexec/watchexec/pull/908)) - Deps: gix-config 0.45 ## v3.0.3 (2025-02-09) - Deps: gix-config 0.43 ## v3.0.2 (2024-10-14) - Deps: gix-config 0.40 ## v3.0.1 (2024-04-28) - Hide fmt::Debug spew from ignore crate, use `full_debug` feature to restore. ## v3.0.0 (2024-04-20) - Deps: gix-config 0.36 - Deps: miette 7 ## v2.1.0 (2024-01-04) - Normalise paths on all platforms (via `normalize-path`). - Require paths be normalised before discovery. - Add convenience APIs to `IgnoreFilesFromOriginArgs` for that purpose. 
## v2.0.0 (2024-01-01) - A round of optimisation by @t3hmrman, improving directory traversal to avoid crawling unneeded paths. ([#663](https://github.com/watchexec/watchexec/pull/663)) - Respect `applies_in` scope when processing nested ignores, by @thislooksfun. ([#746](https://github.com/watchexec/watchexec/pull/746)) ## v1.3.2 (2023-11-26) - Remove error diagnostic codes. - Deps: upgrade to gix-config 0.31.0 - Deps: upgrade Tokio requirement to 1.33.0 ## v1.3.1 (2023-06-03) - Use Tokio's canonicalize instead of dunce::simplified. ## v1.3.0 (2023-05-14) - Use IO-free dunce::simplify to normalise paths on Windows. - Handle gitignores correctly (one GitIgnoreBuilder per path). - Deps: update gix-config to 0.22. ## v1.2.0 (2023-03-18) - Deps: update git-config to gix-config. - Deps: update tokio to 1.24 - Ditch MSRV policy (only latest supported now). - `from_environment()` no longer looks at `WATCHEXEC_IGNORE_FILES`. ## v1.1.0 (2023-01-08) - Add missing `Send` bound to async functions. ## v1.0.1 (2022-09-07) - Deps: update git-config to 0.7.1 - Deps: update miette to 5.3.0 ## v1.0.0 (2022-06-16) - Initial release as a separate crate. 
================================================ FILE: crates/ignore-files/Cargo.toml ================================================ [package] name = "ignore-files" version = "3.0.5" authors = ["Félix Saparelli "] license = "Apache-2.0" description = "Find, parse, and interpret ignore files" keywords = ["ignore", "files", "discover", "find"] documentation = "https://docs.rs/ignore-files" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.70.0" edition = "2021" [dependencies] futures = "0.3.29" gix-config = "0.50.0" ignore = "0.4.18" miette = "7.2.0" normalize-path = "0.2.1" thiserror = "2.0.11" tracing = "0.1.40" radix_trie = "0.3.0" dunce = "1.0.4" [dependencies.tokio] version = "1.33.0" default-features = false features = [ "fs", "macros", "rt", ] [dependencies.project-origins] version = "1.4.2" path = "../project-origins" [dev-dependencies] tracing-subscriber = "0.3.6" [features] default = [] ## Don't hide ignore::gitignore::Gitignore Debug impl full_debug = [] [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" ================================================ FILE: crates/ignore-files/README.md ================================================ [![Crates.io page](https://badgen.net/crates/v/ignore-files)](https://crates.io/crates/ignore-files) [![API Docs](https://docs.rs/ignore-files/badge.svg)][docs] [![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license] [![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml) # Ignore files _Find, parse, and interpret ignore files._ - **[API documentation][docs]**. 
- Licensed under [Apache 2.0][license]. - Status: done. [docs]: https://docs.rs/ignore-files [license]: ../../LICENSE ================================================ FILE: crates/ignore-files/release.toml ================================================ pre-release-commit-message = "release: ignore-files v{{version}}" tag-prefix = "ignore-files-" tag-message = "ignore-files {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/ignore-files/src/discover.rs ================================================ use std::{ collections::HashSet, env, io::{Error, ErrorKind}, path::{Path, PathBuf}, }; use futures::future::try_join_all; use gix_config::{path::interpolate::Context as InterpolateContext, File, Path as GitPath}; use miette::{bail, Result}; use normalize_path::NormalizePath; use project_origins::ProjectType; use tokio::fs::{canonicalize, metadata, read_dir}; use tracing::{trace, trace_span}; use crate::{IgnoreFile, IgnoreFilter}; /// Arguments for finding ignored files in a given directory and subdirectories #[derive(Clone, Debug, Default, PartialEq, Eq)] #[non_exhaustive] pub struct IgnoreFilesFromOriginArgs { /// Origin from which finding ignored files will start. pub origin: PathBuf, /// Paths that have been explicitly selected to be watched. /// /// If this list is non-empty, all paths not on this list will be ignored. /// /// These paths *must* be absolute and normalised (no `.` and `..` components). pub explicit_watches: Vec<PathBuf>, /// Paths that have been explicitly ignored. /// /// If this list is non-empty, all paths on this list will be ignored. /// /// These paths *must* be absolute and normalised (no `.` and `..` components). pub explicit_ignores: Vec<PathBuf>, } impl IgnoreFilesFromOriginArgs { /// Check that this struct is correctly-formed. 
pub fn check(&self) -> Result<()> { if self.explicit_watches.iter().any(|p| !p.is_absolute()) { bail!("explicit_watches contains non-absolute paths"); } if self.explicit_watches.iter().any(|p| !p.is_normalized()) { bail!("explicit_watches contains non-normalised paths"); } if self.explicit_ignores.iter().any(|p| !p.is_absolute()) { bail!("explicit_ignores contains non-absolute paths"); } if self.explicit_ignores.iter().any(|p| !p.is_normalized()) { bail!("explicit_ignores contains non-normalised paths"); } Ok(()) } /// Canonicalise all paths. /// /// The result is always well-formed. pub async fn canonicalise(self) -> std::io::Result<Self> { Ok(Self { origin: canonicalize(&self.origin).await?, explicit_watches: try_join_all(self.explicit_watches.into_iter().map(canonicalize)) .await?, explicit_ignores: try_join_all(self.explicit_ignores.into_iter().map(canonicalize)) .await?, }) } /// Create args with all fields set and check that they are correctly-formed. pub fn new( origin: impl AsRef<Path>, explicit_watches: Vec<PathBuf>, explicit_ignores: Vec<PathBuf>, ) -> Result<Self> { let this = Self { origin: PathBuf::from(origin.as_ref()), explicit_watches, explicit_ignores, }; this.check()?; Ok(this) } /// Create args without checking well-formed-ness. /// /// Use this only if you know that the args are well-formed, or if you are about to call /// [`canonicalise()`][IgnoreFilesFromOriginArgs::canonicalise()] on them. pub fn new_unchecked( origin: impl AsRef<Path>, explicit_watches: impl IntoIterator<Item = impl Into<PathBuf>>, explicit_ignores: impl IntoIterator<Item = impl Into<PathBuf>>, ) -> Self { Self { origin: origin.as_ref().into(), explicit_watches: explicit_watches.into_iter().map(Into::into).collect(), explicit_ignores: explicit_ignores.into_iter().map(Into::into).collect(), } } } impl From<&Path> for IgnoreFilesFromOriginArgs { fn from(path: &Path) -> Self { Self { origin: path.into(), ..Default::default() } } } /// Finds all ignore files in the given directory and subdirectories. 
/// /// This considers: /// - Git ignore files (`.gitignore`) /// - Mercurial ignore files (`.hgignore`) /// - Tool-generic `.ignore` files /// - `.git/info/exclude` files in the `path` directory only /// - Git configurable project ignore files (with `core.excludesFile` in `.git/config`) /// /// Importantly, this should be called from the origin of the project, not a subfolder. This /// function will not discover the project origin, and will not traverse parent directories. Use the /// `project-origins` crate for that. /// /// This function also does not distinguish between project folder types, and collects all files for /// all supported VCSs and other project types. Use the `applies_to` field to filter the results. /// /// All errors (permissions, etc) are collected and returned alongside the ignore files: you may /// want to show them to the user while still using whatever ignores were successfully found. Errors /// from files not being found are silently ignored (the files are just not returned). /// /// ## Special case: project-local git config specifying `core.excludesFile` /// /// If the project's `.git/config` specifies a value for `core.excludesFile`, this function will /// return an `IgnoreFile { path: path/to/that/file, applies_in: None, applies_to: Some(ProjectType::Git) }`. /// This is the only case in which the `applies_in` field is None from this function. When such is /// received the global Git ignore files found by [`from_environment()`] **should be ignored**. /// /// ## Async /// /// This future is not `Send` due to [`gix_config`] internals. /// /// ## Panics /// /// This function panics if the `args` are not correctly-formed; this can be checked beforehand /// without panicking with [`IgnoreFilesFromOriginArgs::check()`]. 
#[expect( clippy::future_not_send, reason = "gix_config internals, if this changes: update the doc" )] #[allow( clippy::too_many_lines, reason = "it's just the discover_file calls that explode the line count" )] pub async fn from_origin( args: impl Into<IgnoreFilesFromOriginArgs>, ) -> (Vec<IgnoreFile>, Vec<Error>) { let args = args.into(); args.check() .expect("checking well-formedness of IgnoreFilesFromOriginArgs"); let origin = &args.origin; let mut ignore_files = args .explicit_ignores .iter() .map(|p| IgnoreFile { path: p.clone(), applies_in: Some(origin.clone()), applies_to: None, }) .collect(); let mut errors = Vec::new(); match find_file(origin.join(".git/config")).await { Err(err) => errors.push(err), Ok(None) => {} Ok(Some(path)) => match path.parent().map(|path| File::from_git_dir(path.into())) { None => errors.push(Error::new( ErrorKind::Other, "unreachable: .git/config must have a parent", )), Some(Err(err)) => errors.push(Error::new(ErrorKind::Other, err)), Some(Ok(config)) => { let config_excludes = config.value::<GitPath<'_>>("core.excludesFile"); if let Ok(excludes) = config_excludes { match excludes.interpolate(InterpolateContext { home_dir: env::var("HOME").ok().map(PathBuf::from).as_deref(), ..Default::default() }) { Ok(e) => { discover_file( &mut ignore_files, &mut errors, None, Some(ProjectType::Git), e.into(), ) .await; } Err(err) => { errors.push(Error::new(ErrorKind::Other, err)); } } } } }, } discover_file( &mut ignore_files, &mut errors, Some(origin.clone()), Some(ProjectType::Bazaar), origin.join(".bzrignore"), ) .await; discover_file( &mut ignore_files, &mut errors, Some(origin.clone()), Some(ProjectType::Darcs), origin.join("_darcs/prefs/boring"), ) .await; discover_file( &mut ignore_files, &mut errors, Some(origin.clone()), Some(ProjectType::Fossil), origin.join(".fossil-settings/ignore-glob"), ) .await; discover_file( &mut ignore_files, &mut errors, Some(origin.clone()), Some(ProjectType::Git), origin.join(".git/info/exclude"), ) .await; trace!("visiting child directories for ignore 
files"); match DirTourist::new(origin, &ignore_files, &args.explicit_watches).await { Ok(mut dirs) => { loop { match dirs.next().await { Visit::Done => break, Visit::Skip => continue, Visit::Find(dir) => { // Attempt to find a .ignore file in the directory if discover_file( &mut ignore_files, &mut errors, Some(dir.clone()), None, dir.join(".ignore"), ) .await { dirs.add_last_file_to_filter(&ignore_files, &mut errors) .await; } // Attempt to find a .gitignore file in the directory if discover_file( &mut ignore_files, &mut errors, Some(dir.clone()), Some(ProjectType::Git), dir.join(".gitignore"), ) .await { dirs.add_last_file_to_filter(&ignore_files, &mut errors) .await; } // Attempt to find a .hgignore file in the directory if discover_file( &mut ignore_files, &mut errors, Some(dir.clone()), Some(ProjectType::Mercurial), dir.join(".hgignore"), ) .await { dirs.add_last_file_to_filter(&ignore_files, &mut errors) .await; } } } } errors.extend(dirs.errors); } Err(err) => { errors.push(err); } } (ignore_files, errors) } /// Finds all ignore files that apply to the current runtime. /// /// Takes an optional `appname` for the calling application for application-specific config files. /// /// This considers: /// - User-specific git ignore files (e.g. `~/.gitignore`) /// - Git configurable ignore files (e.g. with `core.excludesFile` in system or user config) /// - `$XDG_CONFIG_HOME/{appname}/ignore`, as well as other locations (APPDATA on Windows…) /// /// All errors (permissions, etc) are collected and returned alongside the ignore files: you may /// want to show them to the user while still using whatever ignores were successfully found. Errors /// from files not being found are silently ignored (the files are just not returned). /// /// ## Async /// /// This future is not `Send` due to [`gix_config`] internals. 
#[expect( clippy::future_not_send, reason = "gix_config internals, if this changes: update the doc" )] #[allow(clippy::too_many_lines, reason = "clearer than broken up needlessly")] pub async fn from_environment(appname: Option<&str>) -> (Vec<IgnoreFile>, Vec<Error>) { let mut files = Vec::new(); let mut errors = Vec::new(); let mut found_git_global = false; match File::from_environment_overrides().map(|mut env| { File::from_globals().map(move |glo| { env.append(glo); env }) }) { Err(err) => errors.push(Error::new(ErrorKind::Other, err)), Ok(Err(err)) => errors.push(Error::new(ErrorKind::Other, err)), Ok(Ok(config)) => { let config_excludes = config.value::<GitPath<'_>>("core.excludesFile"); if let Ok(excludes) = config_excludes { match excludes.interpolate(InterpolateContext { home_dir: env::var("HOME").ok().map(PathBuf::from).as_deref(), ..Default::default() }) { Ok(e) => { if discover_file( &mut files, &mut errors, None, Some(ProjectType::Git), e.into(), ) .await { found_git_global = true; } } Err(err) => { errors.push(Error::new(ErrorKind::Other, err)); } } } } } if !found_git_global { let mut tries = Vec::with_capacity(3); if let Ok(home) = env::var("XDG_CONFIG_HOME") { tries.push(Path::new(&home).join("git/ignore")); } if let Ok(home) = env::var("HOME") { tries.push(Path::new(&home).join(".config/git/ignore")); } if let Ok(home) = env::var("USERPROFILE") { tries.push(Path::new(&home).join(".config/git/ignore")); } for path in tries { if discover_file(&mut files, &mut errors, None, Some(ProjectType::Git), path).await { break; } } } let mut bzrs = Vec::with_capacity(5); if let Ok(home) = env::var("APPDATA") { bzrs.push(Path::new(&home).join("Bazaar/2.0/ignore")); } if let Ok(home) = env::var("HOME") { bzrs.push(Path::new(&home).join(".bazaar/ignore")); } for path in bzrs { if discover_file( &mut files, &mut errors, None, Some(ProjectType::Bazaar), path, ) .await { break; } } if let Some(name) = appname { let mut wgis = Vec::with_capacity(4); if let Ok(home) = env::var("XDG_CONFIG_HOME") { 
wgis.push(Path::new(&home).join(format!("{name}/ignore"))); } if let Ok(home) = env::var("APPDATA") { wgis.push(Path::new(&home).join(format!("{name}/ignore"))); } if let Ok(home) = env::var("USERPROFILE") { wgis.push(Path::new(&home).join(format!(".{name}/ignore"))); } if let Ok(home) = env::var("HOME") { wgis.push(Path::new(&home).join(format!(".{name}/ignore"))); } for path in wgis { if discover_file(&mut files, &mut errors, None, None, path).await { break; } } } (files, errors) } // TODO: add context to these errors /// Utility function to handle looking for an ignore file and adding it to a list if found. /// /// This is mostly an internal function, but it is exposed for other filterers to use. #[allow(clippy::future_not_send)] #[tracing::instrument(skip(files, errors), level = "trace")] #[inline] pub async fn discover_file( files: &mut Vec<IgnoreFile>, errors: &mut Vec<Error>, applies_in: Option<PathBuf>, applies_to: Option<ProjectType>, path: PathBuf, ) -> bool { match find_file(path).await { Err(err) => { trace!(?err, "found an error"); errors.push(err); false } Ok(None) => { trace!("found nothing"); false } Ok(Some(path)) => { trace!(?path, "found a file"); files.push(IgnoreFile { path, applies_in, applies_to, }); true } } } async fn find_file(path: PathBuf) -> Result<Option<PathBuf>, Error> { match metadata(&path).await { Err(err) if err.kind() == std::io::ErrorKind::NotFound => Ok(None), Err(err) => Err(err), Ok(meta) if meta.is_file() && meta.len() > 0 => Ok(Some(path)), Ok(_) => Ok(None), } } #[derive(Debug)] struct DirTourist { base: PathBuf, to_visit: Vec<PathBuf>, to_skip: HashSet<PathBuf>, to_explicitly_watch: HashSet<PathBuf>, pub errors: Vec<Error>, filter: IgnoreFilter, } #[derive(Debug)] enum Visit { Find(PathBuf), Skip, Done, } impl DirTourist { pub async fn new( base: &Path, ignore_files: &[IgnoreFile], watch_files: &[PathBuf], ) -> Result<Self, Error> { let base = canonicalize(base).await?; trace!("create IgnoreFilterer for visiting directories"); let mut filter = IgnoreFilter::new(&base, ignore_files) .await .map_err(|err| 
			Error::new(ErrorKind::Other, err))?;

		filter
			.add_globs(
				&[
					"/.git",
					"/.hg",
					"/.bzr",
					"/_darcs",
					"/.fossil-settings",
					"/.svn",
					"/.pijul",
				],
				Some(&base),
			)
			.map_err(|err| Error::new(ErrorKind::Other, err))?;

		Ok(Self {
			to_visit: vec![base.clone()],
			base,
			to_skip: HashSet::new(),
			to_explicitly_watch: watch_files.iter().cloned().collect(),
			errors: Vec::new(),
			filter,
		})
	}

	#[allow(clippy::future_not_send)]
	pub async fn next(&mut self) -> Visit {
		if let Some(path) = self.to_visit.pop() {
			self.visit_path(path).await
		} else {
			Visit::Done
		}
	}

	#[allow(clippy::future_not_send)]
	#[tracing::instrument(skip(self), level = "trace")]
	async fn visit_path(&mut self, path: PathBuf) -> Visit {
		if self.must_skip(&path) {
			trace!("in skip list");
			return Visit::Skip;
		}

		if !self.filter.check_dir(&path) {
			trace!(?path, "path is ignored, adding to skip list");
			self.skip(path);
			return Visit::Skip;
		}

		// If explicitly watched paths were not specified, we can include any path
		//
		// If explicitly watched paths *were* specified, then to include the path, either:
		// - the path in question starts with an explicitly included path (/a/b starting with /a)
		// - the path in question is *above* the explicitly included path (/a is above /a/b)
		if self.to_explicitly_watch.is_empty()
			|| self
				.to_explicitly_watch
				.iter()
				.any(|p| path.starts_with(p) || p.starts_with(&path))
		{
			trace!(?path, ?self.to_explicitly_watch, "including path; it starts with one of the explicitly watched paths");
		} else {
			trace!(?path, ?self.to_explicitly_watch, "excluding path; it did not start with any of explicitly watched paths");
			self.skip(path);
			return Visit::Skip;
		}

		let mut dir = match read_dir(&path).await {
			Ok(dir) => dir,
			Err(err) => {
				trace!("failed to read dir: {}", err);
				self.errors.push(err);
				return Visit::Skip;
			}
		};

		while let Some(entry) = match dir.next_entry().await {
			Ok(entry) => entry,
			Err(err) => {
				trace!("failed to read dir entries: {}", err);
				self.errors.push(err);
				return Visit::Skip;
			}
		} {
			let path = entry.path();
			let _span = trace_span!("dir_entry", ?path).entered();
			if self.must_skip(&path) {
				trace!("in skip list");
				continue;
			}

			match entry.file_type().await {
				Ok(ft) => {
					if ft.is_dir() {
						if !self.filter.check_dir(&path) {
							trace!("path is ignored, adding to skip list");
							self.skip(path);
							continue;
						}

						trace!("found a dir, adding to list");
						self.to_visit.push(path);
					} else {
						trace!("not a dir");
					}
				}
				Err(err) => {
					trace!("failed to read filetype, adding to skip list: {}", err);
					self.errors.push(err);
					self.skip(path);
				}
			}
		}

		Visit::Find(path)
	}

	pub fn skip(&mut self, path: PathBuf) {
		let check_path = path.as_path();
		self.to_visit.retain(|p| !p.starts_with(check_path));
		self.to_skip.insert(path);
	}

	pub(crate) async fn add_last_file_to_filter(
		&mut self,
		files: &[IgnoreFile],
		errors: &mut Vec<Error>,
	) {
		if let Some(ig) = files.last() {
			if let Err(err) = self.filter.add_file(ig).await {
				errors.push(Error::new(ErrorKind::Other, err));
			}
		}
	}

	fn must_skip(&self, mut path: &Path) -> bool {
		if self.to_skip.contains(path) {
			return true;
		}

		while let Some(parent) = path.parent() {
			if parent == self.base {
				break;
			}
			if self.to_skip.contains(parent) {
				return true;
			}
			path = parent;
		}

		false
	}
}

================================================
FILE: crates/ignore-files/src/error.rs
================================================
use std::path::PathBuf;

use miette::Diagnostic;
use thiserror::Error;

#[derive(Debug, Error, Diagnostic)]
#[non_exhaustive]
pub enum Error {
	/// Error received when an [`IgnoreFile`] cannot be read.
	///
	/// [`IgnoreFile`]: crate::IgnoreFile
	#[error("cannot read ignore '{file}': {err}")]
	Read {
		/// The path to the erroring ignore file.
		file: PathBuf,

		/// The underlying error.
		#[source]
		err: std::io::Error,
	},

	/// Error received when parsing a glob fails.
	#[error("cannot parse glob from ignore '{file:?}': {err}")]
	Glob {
		/// The path to the erroring ignore file.
		file: Option<PathBuf>,

		/// The underlying error.
		#[source]
		err: ignore::Error,
		// TODO: extract glob error into diagnostic
	},

	/// Multiple related [`Error`](enum@Error)s.
	#[error("multiple: {0:?}")]
	Multi(#[related] Vec<Error>),

	/// Error received when trying to canonicalize a path.
	#[error("cannot canonicalize '{path:?}'")]
	Canonicalize {
		/// the path that cannot be canonicalized
		path: PathBuf,

		/// the underlying error
		#[source]
		err: std::io::Error,
	},
}

================================================
FILE: crates/ignore-files/src/filter.rs
================================================
use std::path::{Path, PathBuf};

use futures::stream::{FuturesUnordered, StreamExt};
use ignore::{
	gitignore::{Gitignore, GitignoreBuilder, Glob},
	Match,
};
use radix_trie::{Trie, TrieCommon};
use tokio::fs::{canonicalize, read_to_string};
use tracing::{trace, trace_span};

use crate::{simplify_path, Error, IgnoreFile};

#[derive(Clone)]
#[cfg_attr(feature = "full_debug", derive(Debug))]
struct Ignore {
	gitignore: Gitignore,
	builder: Option<GitignoreBuilder>,
}

#[cfg(not(feature = "full_debug"))]
impl std::fmt::Debug for Ignore {
	fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
		f.debug_struct("Ignore")
			.field("gitignore", &"ignore::gitignore::Gitignore{...}")
			.field("builder", &"ignore::gitignore::GitignoreBuilder{...}")
			.finish()
	}
}

/// A mutable filter dedicated to ignore files and trees of ignore files.
///
/// This reads and compiles ignore files, and should be used for handling ignore files. It's created
/// with a project origin and a list of ignore files, and new ignore files can be added later
/// (unless [`finish`](IgnoreFilter::finish()) is called).
#[derive(Clone, Debug)]
pub struct IgnoreFilter {
	origin: PathBuf,
	ignores: Trie<String, Ignore>,
}

impl IgnoreFilter {
	/// Create a new empty filterer.
	///
	/// Prefer [`new()`](IgnoreFilter::new()) if you have ignore files ready to use.
	pub fn empty(origin: impl AsRef<Path>) -> Self {
		let origin = origin.as_ref();
		let mut ignores = Trie::new();
		ignores.insert(
			origin.display().to_string(),
			Ignore {
				gitignore: Gitignore::empty(),
				builder: Some(GitignoreBuilder::new(origin)),
			},
		);
		Self {
			origin: origin.to_owned(),
			ignores,
		}
	}

	/// Read ignore files from disk and load them for filtering.
	///
	/// Use [`empty()`](IgnoreFilter::empty()) if you want an empty filterer,
	/// or to construct one outside an async environment.
	pub async fn new(origin: impl AsRef<Path> + Send, files: &[IgnoreFile]) -> Result<Self, Error> {
		let origin = origin.as_ref().to_owned();
		let origin = canonicalize(&origin)
			.await
			.map_err(move |err| Error::Canonicalize { path: origin, err })?;
		let origin = simplify_path(&origin);
		let _span = trace_span!("build_filterer", ?origin);

		trace!(files=%files.len(), "loading file contents");
		let (files_contents, errors): (Vec<_>, Vec<_>) = files
			.iter()
			.map(|file| async move {
				trace!(?file, "loading ignore file");
				let content = read_to_string(&file.path)
					.await
					.map_err(|err| Error::Read {
						file: file.path.clone(),
						err,
					})?;
				Ok((file.clone(), content))
			})
			.collect::<FuturesUnordered<_>>()
			.collect::<Vec<_>>()
			.await
			.into_iter()
			.map(|res| match res {
				Ok(o) => (Some(o), None),
				Err(e) => (None, Some(e)),
			})
			.unzip();

		let errors: Vec<Error> = errors.into_iter().flatten().collect();
		if !errors.is_empty() {
			trace!("found {} errors", errors.len());
			return Err(Error::Multi(errors));
		}

		// TODO: different parser/adapter for non-git-syntax ignore files?
		trace!(files=%files_contents.len(), "building ignore list");
		let mut ignores_trie = Trie::new();

		// add builder for the root of the file system, so that we can handle global ignores and globs
		ignores_trie.insert(
			prefix(&origin),
			Ignore {
				gitignore: Gitignore::empty(),
				builder: Some(GitignoreBuilder::new(&origin)),
			},
		);

		let mut total_num_ignores = 0;
		let mut total_num_whitelists = 0;

		for (file, content) in files_contents.into_iter().flatten() {
			let _span = trace_span!("loading ignore file", ?file).entered();
			let applies_in = get_applies_in_path(&origin, &file);

			let mut builder = ignores_trie
				.get(&applies_in.display().to_string())
				.and_then(|node| node.builder.clone())
				.unwrap_or_else(|| GitignoreBuilder::new(&applies_in));

			for line in content.lines() {
				if line.is_empty() || line.starts_with('#') {
					continue;
				}

				trace!(?line, "adding ignore line");
				builder
					.add_line(Some(applies_in.clone()), line)
					.map_err(|err| Error::Glob {
						file: Some(file.path.clone()),
						err,
					})?;
			}

			trace!("compiling globset");
			let compiled_builder = builder
				.build()
				.map_err(|err| Error::Glob { file: None, err })?;

			total_num_ignores += compiled_builder.num_ignores();
			total_num_whitelists += compiled_builder.num_whitelists();

			ignores_trie.insert(
				applies_in.display().to_string(),
				Ignore {
					gitignore: compiled_builder,
					builder: Some(builder),
				},
			);
		}

		trace!(
			files=%files.len(),
			trie=?ignores_trie,
			ignores=%total_num_ignores,
			allows=%total_num_whitelists,
			"ignore files loaded and compiled",
		);

		Ok(Self {
			origin: origin.clone(),
			ignores: ignores_trie,
		})
	}

	/// Returns the number of ignores and allowlists loaded.
	#[must_use]
	pub fn num_ignores(&self) -> (u64, u64) {
		self.ignores.iter().fold((0, 0), |mut acc, (_, ignore)| {
			acc.0 += ignore.gitignore.num_ignores();
			acc.1 += ignore.gitignore.num_whitelists();
			acc
		})
	}

	/// Deletes the internal builder, to save memory.
	///
	/// This makes it impossible to add new ignore files without re-compiling the whole set.
	pub fn finish(&mut self) {
		let keys = self.ignores.keys().cloned().collect::<Vec<_>>();
		for key in keys {
			if let Some(ignore) = self.ignores.get_mut(&key) {
				ignore.builder = None;
			}
		}
	}

	/// Reads and adds an ignore file, if the builder is available.
	///
	/// Silently does nothing otherwise.
	pub async fn add_file(&mut self, file: &IgnoreFile) -> Result<(), Error> {
		let applies_in = get_applies_in_path(&self.origin, file);
		let applies_in_str = applies_in.display().to_string();

		if self.ignores.get(&applies_in_str).is_none() {
			self.ignores.insert(
				applies_in_str.clone(),
				Ignore {
					gitignore: Gitignore::empty(),
					builder: Some(GitignoreBuilder::new(&applies_in)),
				},
			);
		}

		let Some(Ignore {
			builder: Some(ref mut builder),
			..
		}) = self.ignores.get_mut(&applies_in_str)
		else {
			return Ok(());
		};

		trace!(?file, "reading ignore file");
		let content = read_to_string(&file.path)
			.await
			.map_err(|err| Error::Read {
				file: file.path.clone(),
				err,
			})?;

		let _span = trace_span!("loading ignore file", ?file).entered();
		for line in content.lines() {
			if line.is_empty() || line.starts_with('#') {
				continue;
			}

			trace!(?line, "adding ignore line");
			builder
				.add_line(Some(applies_in.clone()), line)
				.map_err(|err| Error::Glob {
					file: Some(file.path.clone()),
					err,
				})?;
		}

		self.recompile(file)?;

		Ok(())
	}

	fn recompile(&mut self, file: &IgnoreFile) -> Result<(), Error> {
		let applies_in = get_applies_in_path(&self.origin, file)
			.display()
			.to_string();

		let Some(Ignore {
			gitignore: compiled,
			builder: Some(builder),
		}) = self.ignores.get(&applies_in)
		else {
			return Ok(());
		};

		let pre_ignores = compiled.num_ignores();
		let pre_allows = compiled.num_whitelists();

		trace!("recompiling globset");
		let recompiled = builder.build().map_err(|err| Error::Glob {
			file: Some(file.path.clone()),
			err,
		})?;

		trace!(
			new_ignores=%(recompiled.num_ignores() - pre_ignores),
			new_allows=%(recompiled.num_whitelists() - pre_allows),
			"ignore file loaded and set recompiled",
		);

		self.ignores.insert(
			applies_in,
			Ignore {
				gitignore: recompiled,
				builder: Some(builder.to_owned()),
			},
		);

		Ok(())
	}

	/// Adds some globs manually, if the builder is available.
	///
	/// Silently does nothing otherwise.
	pub fn add_globs(&mut self, globs: &[&str], applies_in: Option<&PathBuf>) -> Result<(), Error> {
		let virtual_ignore_file = IgnoreFile {
			path: "manual glob".into(),
			applies_in: applies_in.cloned(),
			applies_to: None,
		};
		let applies_in = get_applies_in_path(&self.origin, &virtual_ignore_file);
		let applies_in_str = applies_in.display().to_string();

		if self.ignores.get(&applies_in_str).is_none() {
			self.ignores.insert(
				applies_in_str.clone(),
				Ignore {
					gitignore: Gitignore::empty(),
					builder: Some(GitignoreBuilder::new(&applies_in)),
				},
			);
		}

		let Some(Ignore {
			builder: Some(builder),
			..
		}) = self.ignores.get_mut(&applies_in_str)
		else {
			return Ok(());
		};

		let _span = trace_span!("loading ignore globs", ?globs).entered();
		for line in globs {
			if line.is_empty() || line.starts_with('#') {
				continue;
			}

			trace!(?line, "adding ignore line");
			builder
				.add_line(Some(applies_in.clone()), line)
				.map_err(|err| Error::Glob { file: None, err })?;
		}

		self.recompile(&virtual_ignore_file)?;

		Ok(())
	}

	/// Match a particular path against the ignore set.
	pub fn match_path(&self, path: &Path, is_dir: bool) -> Match<&Glob> {
		let path = simplify_path(path);
		let path = path.as_path();
		let mut search_path = path;
		loop {
			let Some(trie_node) = self
				.ignores
				.get_ancestor(&search_path.display().to_string())
			else {
				trace!(?path, ?search_path, "no ignores for path");
				return Match::None;
			};

			// Unwrap will always succeed because every node has an entry.
			let ignores = trie_node.value().unwrap();

			let match_ = if path.strip_prefix(&self.origin).is_ok() {
				trace!(?path, ?search_path, "checking against path or parents");
				ignores.gitignore.matched_path_or_any_parents(path, is_dir)
			} else {
				trace!(?path, ?search_path, "checking against path only");
				ignores.gitignore.matched(path, is_dir)
			};

			match match_ {
				Match::None => {
					trace!(
						?path,
						?search_path,
						"no match found, searching for parent ignores"
					);

					// Unwrap will always succeed because every node has an entry.
					let trie_path = Path::new(trie_node.key().unwrap());
					if let Some(trie_parent) = trie_path.parent() {
						trace!(?path, ?search_path, "checking parent ignore");
						search_path = trie_parent;
					} else {
						trace!(?path, ?search_path, "no parent ignore found");
						return Match::None;
					}
				}
				_ => return match_,
			}
		}
	}

	/// Check a particular folder path against the ignore set.
	///
	/// Returns `false` if the folder should be ignored.
	///
	/// Note that this is a slightly different implementation than watchexec's Filterer trait, as
	/// the latter handles events with multiple associated paths.
	pub fn check_dir(&self, path: &Path) -> bool {
		let _span = trace_span!("check_dir", ?path).entered();

		trace!("checking against compiled ignore files");
		match self.match_path(path, true) {
			Match::None => {
				trace!("no match (pass)");
				true
			}
			Match::Ignore(glob) => {
				if glob.from().map_or(true, |f| path.strip_prefix(f).is_ok()) {
					trace!(?glob, "positive match (fail)");
					false
				} else {
					trace!(?glob, "positive match, but not in scope (pass)");
					true
				}
			}
			Match::Whitelist(glob) => {
				trace!(?glob, "negative match (pass)");
				true
			}
		}
	}
}

fn get_applies_in_path(origin: &Path, ignore_file: &IgnoreFile) -> PathBuf {
	let root_path = PathBuf::from(prefix(origin));
	ignore_file
		.applies_in
		.as_ref()
		.map_or(root_path, |p| simplify_path(p))
}

/// Gets the root component of a given path.
///
/// This will be `/` on Unix systems, or a drive letter (`C:`, `D:`, etc.) on Windows.
fn prefix<T: AsRef<Path>>(path: T) -> String {
	let path = path.as_ref();
	let Some(prefix) = path.components().next() else {
		return "/".into();
	};
	match prefix {
		std::path::Component::Prefix(prefix_component) => {
			prefix_component.as_os_str().to_str().unwrap_or("/").into()
		}
		_ => "/".into(),
	}
}

#[cfg(test)]
mod tests {
	use super::IgnoreFilter;

	#[tokio::test]
	async fn handle_relative_paths() {
		let ignore = IgnoreFilter::new(".", &[]).await.unwrap();
		assert!(ignore.origin.is_absolute());
	}
}

================================================
FILE: crates/ignore-files/src/lib.rs
================================================
//! Find, parse, and interpret ignore files.
//!
//! Ignore files are files that contain ignore patterns, often following the `.gitignore` format.
//! There may be one or more global ignore files, which apply everywhere, and one or more per-folder
//! ignore files, which apply to a specific folder and its subfolders. Furthermore, there may be
//! more ignore files in _these_ subfolders, and so on. Discovering and interpreting all of these in
//! a single context is not a simple task: this is what this crate provides.

#![cfg_attr(not(test), warn(unused_crate_dependencies))]

use std::path::{Path, PathBuf};

use normalize_path::NormalizePath;
use project_origins::ProjectType;

#[doc(inline)]
pub use discover::*;
mod discover;

#[doc(inline)]
pub use error::*;
mod error;

#[doc(inline)]
pub use filter::*;
mod filter;

/// An ignore file.
///
/// This records both the path to the ignore file and some basic metadata about it: which project
/// type it applies to if any, and which subtree it applies in if any (`None` = global ignore file).
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct IgnoreFile {
	/// The path to the ignore file.
	pub path: PathBuf,

	/// The path to the subtree the ignore file applies to, or `None` for global ignores.
	pub applies_in: Option<PathBuf>,

	/// Which project type the ignore file applies to, or was found through.
	pub applies_to: Option<ProjectType>,
}

pub(crate) fn simplify_path(path: &Path) -> PathBuf {
	dunce::simplified(path).normalize()
}

================================================
FILE: crates/ignore-files/tests/filtering.rs
================================================
mod helpers;

use helpers::ignore_tests::*;

#[tokio::test]
async fn globals() {
	let filter = filt(
		"tree",
		&[
			file("global/first").applies_globally(),
			file("global/second").applies_globally(),
		],
	)
	.await;

	// Both ignores should be loaded as global
	filter.agnostic_fail("/apples");
	filter.agnostic_fail("/oranges");

	// Sanity check
	filter.agnostic_pass("/kiwi");
}

#[tokio::test]
async fn tree() {
	let filter = filt("tree", &[file("tree/base"), file("tree/branch/inner")]).await;

	// "oranges" is not ignored at any level
	filter.agnostic_pass("tree/oranges");
	filter.agnostic_pass("tree/branch/oranges");
	filter.agnostic_pass("tree/branch/inner/oranges");
	filter.agnostic_pass("tree/other/oranges");

	// "apples" should only be ignored at the root
	filter.agnostic_fail("tree/apples");
	filter.agnostic_pass("tree/branch/apples");
	filter.agnostic_pass("tree/branch/inner/apples");
	filter.agnostic_pass("tree/other/apples");

	// "carrots" should be ignored at any level
	filter.agnostic_fail("tree/carrots");
	filter.agnostic_fail("tree/branch/carrots");
	filter.agnostic_fail("tree/branch/inner/carrots");
	filter.agnostic_fail("tree/other/carrots");

	// "pineapples/grapes" should only be ignored at the root
	filter.agnostic_fail("tree/pineapples/grapes");
	filter.agnostic_pass("tree/branch/pineapples/grapes");
	filter.agnostic_pass("tree/branch/inner/pineapples/grapes");
	filter.agnostic_pass("tree/other/pineapples/grapes");

	// "cauliflowers" should only be ignored at the root of "branch/"
	filter.agnostic_pass("tree/cauliflowers");
	filter.agnostic_fail("tree/branch/cauliflowers");
	filter.agnostic_pass("tree/branch/inner/cauliflowers");
filter.agnostic_pass("tree/other/cauliflowers"); // "artichokes" should be ignored anywhere inside of "branch/" filter.agnostic_pass("tree/artichokes"); filter.agnostic_fail("tree/branch/artichokes"); filter.agnostic_fail("tree/branch/inner/artichokes"); filter.agnostic_pass("tree/other/artichokes"); // "bananas/pears" should only be ignored at the root of "branch/" filter.agnostic_pass("tree/bananas/pears"); filter.agnostic_fail("tree/branch/bananas/pears"); filter.agnostic_pass("tree/branch/inner/bananas/pears"); filter.agnostic_pass("tree/other/bananas/pears"); } ================================================ FILE: crates/ignore-files/tests/global/first ================================================ apples ================================================ FILE: crates/ignore-files/tests/global/second ================================================ oranges ================================================ FILE: crates/ignore-files/tests/helpers/mod.rs ================================================ use std::path::{Path, PathBuf}; use ignore::{gitignore::Glob, Match}; use ignore_files::{IgnoreFile, IgnoreFilter}; pub mod ignore_tests { pub use super::ig_file as file; pub use super::ignore_filt as filt; pub use super::Applies; pub use super::PathHarness; } /// Get the drive letter of the current working directory. 
#[cfg(windows)] fn drive_root() -> String { let path = std::fs::canonicalize(".").unwrap(); let Some(prefix) = path.components().next() else { return r"C:\".into(); }; match prefix { std::path::Component::Prefix(prefix_component) => prefix_component .as_os_str() .to_str() .map(|p| p.to_owned() + r"\") .unwrap_or(r"C:\".into()), _ => r"C:\".into(), } } fn normalize_path(path: &str) -> PathBuf { #[cfg(windows)] let path: &str = &String::from(path) .strip_prefix("/") .map_or(path.into(), |p| drive_root() + p); let path: PathBuf = if Path::new(path).has_root() { path.into() } else { std::fs::canonicalize(".").unwrap().join("tests").join(path) }; dunce::simplified(&path).into() } pub trait PathHarness { fn check_path(&self, path: &Path, is_dir: bool) -> Match<&Glob>; fn path_pass(&self, path: &str, is_dir: bool, pass: bool) { let full_path = &normalize_path(path); tracing::info!(?path, ?is_dir, ?pass, "check"); let result = self.check_path(full_path, is_dir); assert_eq!( match result { Match::None => true, Match::Ignore(glob) => !glob.from().map_or(true, |f| full_path.starts_with(f)), Match::Whitelist(_glob) => true, }, pass, "{} {:?} (expected {}) [result: {}]", if is_dir { "dir" } else { "file" }, full_path, if pass { "pass" } else { "fail" }, match result { Match::None => String::from("None"), Match::Ignore(glob) => format!( "Ignore({})", glob.from() .map_or(String::new(), |f| f.display().to_string()) ), Match::Whitelist(glob) => format!( "Whitelist({})", glob.from() .map_or(String::new(), |f| f.display().to_string()) ), }, ); } fn file_does_pass(&self, path: &str) { self.path_pass(path, false, true); } fn file_doesnt_pass(&self, path: &str) { self.path_pass(path, false, false); } fn dir_does_pass(&self, path: &str) { self.path_pass(path, true, true); } fn dir_doesnt_pass(&self, path: &str) { self.path_pass(path, true, false); } fn agnostic_pass(&self, path: &str) { self.file_does_pass(path); self.dir_does_pass(path); } fn agnostic_fail(&self, path: &str) { 
		self.file_doesnt_pass(path);
		self.dir_doesnt_pass(path);
	}
}

impl PathHarness for IgnoreFilter {
	fn check_path(&self, path: &Path, is_dir: bool) -> Match<&Glob> {
		self.match_path(path, is_dir)
	}
}

fn tracing_init() {
	use tracing_subscriber::{
		fmt::{format::FmtSpan, Subscriber},
		util::SubscriberInitExt,
		EnvFilter,
	};
	Subscriber::builder()
		.pretty()
		.with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
		.with_env_filter(EnvFilter::from_default_env())
		.finish()
		.try_init()
		.ok();
}

pub async fn ignore_filt(origin: &str, ignore_files: &[IgnoreFile]) -> IgnoreFilter {
	tracing_init();
	let origin = normalize_path(origin);
	IgnoreFilter::new(origin, ignore_files)
		.await
		.expect("making filterer")
}

pub fn ig_file(name: &str) -> IgnoreFile {
	let path = normalize_path(name);
	let parent: PathBuf = path.parent().unwrap_or(&path).into();
	IgnoreFile {
		path,
		applies_in: Some(parent),
		applies_to: None,
	}
}

pub trait Applies {
	fn applies_globally(self) -> Self;
}

impl Applies for IgnoreFile {
	fn applies_globally(mut self) -> Self {
		self.applies_in = None;
		self
	}
}

================================================
FILE: crates/ignore-files/tests/tree/base
================================================
/apples
carrots
pineapples/grapes

================================================
FILE: crates/ignore-files/tests/tree/branch/inner
================================================
/cauliflowers
artichokes
bananas/pears

================================================
FILE: crates/lib/CHANGELOG.md
================================================
# Changelog

## Next (YYYY-MM-DD)

## v8.2.0 (2026-03-02)

- Feat: add `fs_ready` signal for watcher readiness ([#1024](https://github.com/watchexec/watchexec/pull/1024))

## v8.1.2 (2026-02-24)

## v8.1.1 (2026-02-22)

- Fix: bug on macOS where a task in the keyboard events worker would hang after graceful quit ([#1018](https://github.com/watchexec/watchexec/pull/1018))

## v8.1.0 (2026-02-22)

- Augments `keyboard_events` config to emit events for all
  single keyboard key inputs, in addition to the existing EOF
- `keyboard_events` now switches to raw mode (and disabling it switches back to cooked)

## v8.0.1 (2025-05-15)

## v8.0.0 (2025-05-15)

## v7.0.0 (2025-05-15)

- Deps: remove unused dependency `async-recursion` ([#930](https://github.com/watchexec/watchexec/pull/930))
- Deps: remove unused dependency `process-wrap` ([#930](https://github.com/watchexec/watchexec/pull/930))
- Deps: remove unused dependency `project-origins` ([#930](https://github.com/watchexec/watchexec/pull/930))
- Deps: remove ignore-files dependency ([#929](https://github.com/watchexec/watchexec/pull/929))
- Breaking: remove deprecated IgnoreFiles variant on RuntimeError ([#929](https://github.com/watchexec/watchexec/pull/929))

## v6.0.0 (2025-02-09)

## v5.0.0 (2024-10-14)

- Deps: nix 0.29

## v4.1.0 (2024-04-28)

- Feature: non-recursive watches with `WatchedPath::non_recursive()`
- Fix: `config.pathset()` now preserves `WatchedPath` attributes
- Refactor: move `WatchedPath` to the root of the crate (old path remains as re-export for now)

## v4.0.0 (2024-04-20)

- Deps: replace command-group with process-wrap (in supervisor, but has flow-on effects)
- Deps: miette 7
- Deps: nix 0.28

## v3.0.1 (2023-11-29)

- Deps: watchexec-events and watchexec-signals after major bump and yank

## v3.0.0 (2023-11-26)

### General

- The crate is now oriented around `Watchexec` as the core experience, rather than providing kitchen-sink components from which you could build your own; that helps the cohesion of the whole and simplifies many patterns.
- Deprecated items (mostly leftover from splitting out the `watchexec_events` and `watchexec_signals` crates) are removed.
- Watchexec can now supervise multiple commands at once. See [Action](#Action) below, the [Action docs](https://docs.rs/watchexec/latest/watchexec/action/struct.Action.html), and the [Supervisor docs](https://docs.rs/watchexec-supervisor) for more.
- Because of this new feature, the old behaviour where multiple commands could be set under the one supervisor is removed.
- Watchexec's supervisor was split up into its own crate, [`watchexec-supervisor`](https://docs.rs/watchexec-supervisor).
- Tokio requirement is now 1.33.
- Notify was upgraded to 6.0.
- Nix was upgraded to 0.27.

### `Watchexec`

- `Watchexec::new()` now takes the `on_action` handler. As this is the most important handler to define and Watchexec will not be functional without one, that enforces providing it first.
- `Watchexec::with_config()` lets one provide a config upfront, otherwise the default values are used.
- `Watchexec::default()` is mostly used to avoid boilerplate in doc comment examples, and panics on initialisation errors.
- `Watchexec::reconfigure()` is removed. Use the public `config` field instead to access the "live" `Arc<Config>` (see below).
- Completion events aren't emitted anymore. They still exist in the Event enum, but they're not generated by Watchexec itself. Use `Job#to_wait` instead. Of course you can insert them as synthetic events if you want.

### Config

- `InitConfig` and `RuntimeConfig` have been unified into a single `Config` struct.
- Instead of module-specific `WorkingData` structures, all of the config is now flat in the same `Config`. That makes it easier to work with, as all that's needed is to pass an `Arc<Config>` around, but it does mean the event sources are no longer independent.
- Instead of using `tokio::sync::watch` for some values, and `HandlerLock` for handlers, and so on, everything is now a new `Changeable` type, specialised to `ChangeableFn` for closures and `ChangeableFilterer` for the Filterer.
- There's now a `signal_change()` method which must be called after changes to the config; this is taken care of when using the methods on `Config`. This is required for the few places in Watchexec which need active reconfiguration rather than reading config values just-in-time.
- The above means that instead of using `Watchexec::reconfigure()` and keeping a clone of the config around, an `Arc<Config>` is now "live" and changes applied to it will affect the Watchexec instance directly.
- `command` / `commands` are removed from config. Instead use the Action handler API for creating new supervised commands.
- `command_grouped` is removed from config. That's now an option set on `Command`.
- `action_throttle` is renamed to `throttle` and now defaults to `50ms`, which is the default in Watchexec CLI.
- `keyboard_emit_eof` is renamed to `keyboard_events`.
- `pre_spawn_handler` is removed. Use `Job#set_spawn_hook` instead.
- `post_spawn_handler` is removed. Use `Job#run` instead.

### Command

The structure has been reworked to be simpler and more extensible. Instead of a Command _enum_, there's now a Command _struct_, which holds a single `Program` and behaviour-altering options. `Shell` has also been redone, with less special-casing.

If you had:

```rust
Command::Exec {
	prog: "date".into(),
	args: vec!["+%s".into()],
}
```

You should now write:

```rust
Command {
	program: Program::Exec {
		prog: "date".into(),
		args: vec!["+%s".into()],
	},
	options: Default::default(),
}
```

The new `Program::Shell` field `args: Vec<String>` lets you pass (trailing) arguments to the shell invocation:

```rust
Program::Shell {
	shell: Shell::new("sh"),
	command: "ls".into(),
	args: vec!["--".into(), "movies".into()],
}
```

is equivalent to:

```console
$ sh -c "ls" -- movies
```

- The old `args` field of `Command::Shell` is now the `options` field of `Shell`.
- `Shell` has a new field `program_option: Option<Cow<'static, OsStr>>`, which is the syntax of the option used to provide the command. I.e. for most shells it's `-c`, and for `CMD.EXE` it's `/C`; this makes it fully customisable (including its absence!) if you want to use weird shells or non-shell programs as shells.
- The special-cased `Shell::Powershell` is removed.
- On Windows, arguments are specified with [`raw_arg`](https://doc.rust-lang.org/stable/std/os/windows/process/trait.CommandExt.html#tymethod.raw_arg) instead of `arg` to avoid quoting issues. - `Command` can no longer take a list of programs. That was always quite a hack; now that multiple supervised commands are possible, that's how multiple programs should be handled. - The top-level Watchexec `command_grouped` option is now Command-level, so you can start both grouped and non-grouped programs. - There's a new `reset_sigmask` option to control whether commands should have their signal masks reset on Unix. By default the signal mask is inherited. ### Errors - `RuntimeError::NoCommands`, `RuntimeError::Handler`, `RuntimeError::HandlerLockHeld`, and `CriticalError::MissingHandler` are removed as the relevant types/structures don't exist anymore. - `RuntimeError::CommandShellEmptyCommand` and `RuntimeError::CommandShellEmptyShell` are removed; you can construct `Shell` with empty shell program and `Program::Shell` with an empty command, these will at best do nothing but they won't error early through Watchexec. - `RuntimeError::ClearScreen` is removed, as clearing the screen is now done by the consumer of Watchexec, not Watchexec itself. - Watchexec will now panic if locks are poisoned; we can't recover from that. - The filesystem watcher's "too many files", "too many handles", and other initialisation errors are removed as `RuntimeErrors`, and are now `CriticalErrors`. These being runtime, nominally recoverable errors instead of end-the-world failures is one of the most common pitfalls of using the library, and though recovery _is_ technically possible, it's better approached other ways. - The `on_error` handler is now sync only and no longer returns a `Result`; as such there's no longer the weird logic of "if the `on_error` handler errors, it will call itself on the error once, then crash". 
- If you were doing async work in `on_error`, you should instead use non-async calls (like `try_send()` for Tokio channels). The error handler is expected to return as fast as possible, and _not_ do blocking work if it can at all avoid it; this was always the case but is now documented more explicitly. - Error diagnostic codes are removed. ### Action The process supervision system is entirely reworked. Instead of "applying `Outcome`s", there's now a `Job` type which is a single supervised command, provided by the separate [`watchexec-supervisor`](https://docs.rs/watchexec-supervisor) crate. The Action handler itself can only create new jobs and list existing ones, and interaction with commands is done through the `Job` type. The controls available on `Job` are now modeled on "real" supervisors like systemd, and are both more and less powerful than the old `Outcome` system. This can be seen clearly in how a "restart" is specified. Previously, this was an `Outcome` combinator: ```rust Outcome::if_running( Outcome::both(Outcome::stop(), Outcome::start()), Outcome::start(), ) ``` Now, it's a discrete method: ```rust job.restart(); ``` Previously, a graceful stop was a mess: ```rust Outcome::if_running( Outcome::both( Outcome::both( Outcome::signal(Signal::Terminate), Outcome::wait_timeout(Duration::from_secs(30)), ), Outcome::both(Outcome::stop(), Outcome::start()), ), Outcome::DoNothing, ) ``` Now, it's again a discrete method: ```rust job.stop_with_signal(Signal::Terminate, Duration::from_secs(30)); ``` The `stop()` and `start()` methods also do nothing if the process is already stopped or started, respectively, so you don't need to check the status of the job before calling them. The `try_restart()` method is available to do a restart only if the job is running, with the `try_restart_with_signal()` variant for graceful restarts. 
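The semantics above (idempotent `start()`/`stop()`, and `try_restart()` acting only on a running job) can be sketched with a tiny std-only state machine. This is purely illustrative: `ToyJob` and its fields are invented for this example, and the real `Job` in `watchexec-supervisor` is asynchronous and far richer.

```rust
/// Illustrative only: mimics the *semantics* described above,
/// not the actual watchexec-supervisor implementation.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum JobState {
	Stopped,
	Running,
}

struct ToyJob {
	state: JobState,
	restarts: u32,
}

impl ToyJob {
	fn new() -> Self {
		Self { state: JobState::Stopped, restarts: 0 }
	}

	/// Does nothing if the job is already running.
	fn start(&mut self) {
		self.state = JobState::Running;
	}

	/// Does nothing if the job is already stopped.
	fn stop(&mut self) {
		self.state = JobState::Stopped;
	}

	/// Stop then start, unconditionally.
	fn restart(&mut self) {
		self.stop();
		self.start();
		self.restarts += 1;
	}

	/// Restart only if currently running.
	fn try_restart(&mut self) {
		if self.state == JobState::Running {
			self.restart();
		}
	}
}

fn main() {
	let mut job = ToyJob::new();
	job.try_restart(); // stopped: no-op
	assert_eq!(job.restarts, 0);
	job.start();
	job.start(); // idempotent: still just running
	job.try_restart(); // running: performs a restart
	assert_eq!(job.restarts, 1);
	assert_eq!(job.state, JobState::Running);
}
```

Because the real methods are similarly safe to call regardless of current state, action handlers don't need to query job status before issuing controls.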
Further, all of these methods are non-blocking sync (and take `&self`), but they return a `Ticket`, a future which resolves when the control has been processed. That can be dropped if you don't care about it without affecting the job, or used to perform more advanced flow control. The special `to_wait()` method returns a detached, cloneable, "wait()" future, which will resolve when the process exits, without needing to hold on to the `Job` or a reference at all. See the [`restart_run_on_successful_build` example](./examples/restart_run_on_successful_build.rs) which starts a `cargo build`, waits for it to end, and then (re)starts `cargo run` if the build exited successfully. Finally: `Outcome::Clear` and `Outcome::Reset` are gone, and there's no equivalent on `Job`: that's because these are screen control actions, not job control. You should use the [clearscreen](https://docs.rs/clearscreen) crate directly in your action handler, in conjunction with job control, to achieve the desired effect. ## v2.3.0 (2023-03-22) - New: `Outcome::Race` and `Outcome::race()` ([#548](https://github.com/watchexec/watchexec/pull/548)) - New: `Outcome::wait_timeout()` ([#548](https://github.com/watchexec/watchexec/pull/548)) - New: `Outcome::sequence()` ([#548](https://github.com/watchexec/watchexec/pull/548)) - Fix: `kill_on_drop(true)` set for group commands as well as ungrouped ([#549](https://github.com/watchexec/watchexec/pull/549)) - Some `debug!`s upgraded to `info!`s, based on experience reading logs ([#547](https://github.com/watchexec/watchexec/pull/547)) ## v2.2.0 (2023-03-18) - Ditch MSRV policy. The `rust-version` indication will remain, for the minimum estimated Rust version for the code features used in the crate's own code, but dependencies may have already moved on. From now on, only latest stable is assumed and tested for. ([#510](https://github.com/watchexec/watchexec/pull/510)) - Split off `watchexec-events` and `watchexec-signals` crates. 
- Unify `SubSignal` and `MainSignal` into a new `Signal` type. The former types and paths exist as deprecated aliases/re-exports. ## v2.1.1 (2023-02-14) ## v2.1.0 (2023-01-08) - MSRV: bump to 1.61.0 - Deps: drop explicit dependency on `libc` on Unix. - Internal: remove all usage of `dunce`, replaced with either Tokio's `canonicalize` (properly async) or [normalize-path](https://docs.rs/normalize-path) (performs no I/O). - Internal: drop support code for Fuchsia. MIO already didn't support it, so it never compiled there. - Add `#[must_use]` annotations to a bunch of functions. - Add missing `Send` bound to `HandlerLock`. - Add new keyboard event source; initially supports just detecting EOF on STDIN. ([#449](https://github.com/watchexec/watchexec/pull/449)) - Fix `summarise_events_to_env` on Windows to output paths with backslashes. ## v2.0.2 (2022-09-07) - Deps: upgrade to miette 5.3.0 ## v2.0.1 (2022-09-07) - Deps: upgrade to Notify 5.0.0 ## v2.0.0 (2022-06-17) First "stable" release of the library. - **Change: the library is split into even more crates** - Two new low-level crates, `project-origins` and `ignore-files`, extract standalone functionality - Filterers are now separate crates, so they can evolve independently of (and faster than) the main library crate - These five new crates live in the watchexec monorepo, rather than being completely separate like `command-group` and `clearscreen` - This makes the main library a bit less likely to change as often as it did, so it was finally time to release 2.0.0! 
- **Change: the Action worker now launches a set of Commands** - A new type `Command` replaces and augments `Shell`, making explicit which style of calling will be used - The action working data now takes a `Vec<Command>`, so multiple commands can be run as a set - Commands in the set are run sequentially, with an error interrupting the sequence - It is thus possible to run both "shelled" and "raw exec" commands in a set - `PreSpawn` and `PostSpawn` handlers are run per Command, not per command set - This new style should be preferred over sending command lines like `cmd1 && cmd2` - **Change: the event queue is now a priority queue** - Shutting down the runtime is faster and more predictable. No more hanging after hitting Ctrl-C if there are tonnes of events coming in! - Signals sent to the main process have higher priority - Events marked "urgent" skip filtering entirely - SIGINT, SIGTERM, and Ctrl-C on Windows are marked urgent - This means it's no longer possible to accidentally filter these events out - They still require handling in `on_action` to do anything - The API for the `Filterer` trait changes slightly to let filterers use event priority - Improvement: the main subtasks of the runtime are now aborted on error - Improvement: the event queue is explicitly closed when shutting down - Improvement: the action worker will check if the event queue is closed more often, to shut down early - Improvement: `kill_on_drop` is set on Commands, which will be a little more eager to terminate processes when we're done with them - Feature: `Outcome::Sleep` waits for a given duration ([#79](https://github.com/watchexec/watchexec/issues/79)) Other miscellaneous: - Deps: add the `log` feature to tracing so logs can be emitted to `log` subscribers - Deps: upgrade to Tokio 1.19 - Deps: upgrade to Miette 4 - Deps: upgrade to Notify 5.0.0-pre.15 - Docs: fix the main example in lib.rs ([#297](https://github.com/watchexec/watchexec/pull/297)) - Docs: describe a tuple argument in the globset 
filterer interface - Docs: the library crate gains a file-based CHANGELOG.md (and won't go in the Github releases tab anymore) - Docs: the library's readme's code block example is now checked as a doc-test - Meta: PRs are now merged by Bors ## v2.0.0-pre.14 (2022-04-04) - Replace git2 dependency by git-config ([#267](https://github.com/watchexec/watchexec/pull/267)). This makes using the library more pleasant and will also avoid library version mismatch errors when the libgit2 library updates on the system. ## v2.0.0-pre.13 (2022-03-18) - Revert backend switch on mac from previous release. We'll do it a different way later ([#269](https://github.com/watchexec/watchexec/issues/269)) ## v2.0.0-pre.12 (2022-03-16) - Upgraded to [Notify pre.14](https://github.com/notify-rs/notify/releases/tag/5.0.0-pre.14) - Internal change: kqueue backend is used on mac. This _should_ reduce or eliminate some old persistent bugs on mac, and improve response times, but please report any issues you have! - `Watchexec::new()` now reports the library's version at debug level - Notify version is now specified with an exact (`=`) requirement, to avoid breakage ([#266](https://github.com/watchexec/watchexec/issues/266)) ## v2.0.0-pre.11 (2022-03-07) - New `error::FsWatcherError` enum split off from `RuntimeError`, and with additional variants to take advantage of targeted help text for known inotify errors on Linux - Help text is now carried through elevated errors properly - Globset filterer: `extensions` and `filters` are now cooperative rather than exclusionary. That is, a filters of `["Gemfile"]` and an extensions of `["js", "rb"]` will match _both_ `Gemfile` and `index.js` rather than matching nothing at all. This restores pre 2.0 behaviour. - Globset filterer: on unix, a filter of `*/file` will match both `file` and `dir/file` instead of just `dir/file`. This is a compatibility fix and is incorrect behaviour which will be removed in the future. Do not rely on it. 
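The cooperative `filters`/`extensions` behaviour can be illustrated with a tiny predicate. This is only a sketch of the rule described above, not the actual globset matching code (which uses real glob patterns):

```rust
// Sketch of "filters OR extensions" cooperation: a path passes if it matches
// either list. Real matching uses globs and is more involved.
fn passes(path: &str, filters: &[&str], extensions: &[&str]) -> bool {
    let filter_match = filters
        .iter()
        .any(|f| path == *f || path.ends_with(&format!("/{f}")));
    let extension_match = extensions
        .iter()
        .any(|e| path.ends_with(&format!(".{e}")));
    filter_match || extension_match
}
```

With `filters` of `["Gemfile"]` and `extensions` of `["js", "rb"]`, both `Gemfile` and `index.js` pass, matching the cooperative behaviour described above.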
## v2.0.0-pre.10 (2022-02-07) - The `on_error` handler gets an upgraded parameter which lets it upgrade (runtime) errors to critical. - `summarize_events_to_paths` now deduplicates paths within each variable. ## v2.0.0-pre.9 (2022-01-31) - `Action`, `PreSpawn`, and `PostSpawn` structs passed to handlers now contain an `Arc<[Event]>` instead of an `Arc<Vec<Event>>` - `Outcome` processing (the final bit of an action) now runs concurrently, so it doesn't block further event processing ([#247](https://github.com/watchexec/watchexec/issues/247), and to a certain extent, [#241](https://github.com/watchexec/watchexec/issues/241)) ## v2.0.0-pre.8 (2022-01-26) - Fix: globset filterer should pass all non-path events ([#248](https://github.com/watchexec/watchexec/pull/248)) ## v2.0.0-pre.7 (2022-01-26) [YANKED] **Yanked for critical bug in globset filterer (fixed in pre.8) on 2022-01-26** - Fix: typo in logging/errors ([#242](https://github.com/watchexec/watchexec/pull/242)) - Globset: an extension filter should fail all paths that are about folders ([#244](https://github.com/watchexec/watchexec/issues/244)) - Globset: in the case of an event with multiple paths, any pass should pass the entire event - Removal: `filter::check_glob` and `error::GlobParseError` ## v2.0.0-pre.6 (2022-01-19) First version of library v2 that was used in a CLI release. - Globset filterer was erroneously passing files with no extension when an extension filter was specified ## v2.0.0-pre.5 (2022-01-18) - Update MSRV (to 1.58) and policy (bump incurs minor semver only) - Some bugfixes around canonicalisation of paths - Eliminate context-less IO errors - Move error types around - Prep library readme - Update deps ## v2.0.0-pre.4 (2022-01-16) - More logging, especially around ignore file discovery and filtering - The const `paths::PATH_SEPARATOR` is now public, being `:` on Unix and `;` on Windows. 
- Add Subversion to discovered ProjectTypes - Add common (sub)Filterer for ignore files, so they benefit from a single consistent implementation. This also makes ignore file discovery correct and efficient by being able to interpret ignore files while searching for ignore files, or in other words, _not_ descending into directories which are ignored. - Integrate this new IgnoreFilterer into the GlobsetFilterer and TaggedFilterer. This does mean that some patterns in gitignores will not behave quite the same as in v1, but that was arguably always a bug. The old "buggy" v1 behaviour around folder filtering remains for manual filters, which are those most likely to be surprising if "fixed". ## v2.0.0-pre.3 (2021-12-29) - [`summarise_events_to_env`](https://docs.rs/watchexec/2.0.0-pre.3/watchexec/paths/fn.summarise_events_to_env.html) used to return `COMMON_PATH`; it now returns `COMMON`, in keeping with the other variable names. ## v2.0.0-pre.2 (2021-12-29) - [`summarise_events_to_env`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/paths/fn.summarise_events_to_env.html) returns a `HashMap<&str, OsString>` rather than `HashMap<&OsStr, OsString>`, because the expectation is that the variable names are processed, e.g. in the CLI: `WATCHEXEC_{}_PATH`. `OsStr` makes that painful for no reason (the strings are static anyway). - The [`Action`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/action/struct.Action.html) struct's `events` field changes to be an `Arc<Vec<Event>>` rather than a `Vec<Event>`: the intent is for the events to be immutable/read-only (and it also made it easier/cheaper to implement the next change below). - The [`PreSpawn`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/action/struct.PreSpawn.html) and [`PostSpawn`](https://docs.rs/watchexec/2.0.0-pre.2/watchexec/action/struct.PostSpawn.html) structs got a new `events: Arc<Vec<Event>>` field so these handlers get read-only access to the events that triggered the command. 
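The kind of processing those `&str` keys enable might look like this (a sketch following the `WATCHEXEC_{}_PATH` example above; the exact variable names the CLI emits may differ):

```rust
use std::collections::HashMap;
use std::ffi::OsString;

// Sketch: format summary keys into environment variable names. With &str
// keys this is a plain format!(); &OsStr keys would need conversions first.
fn to_env(summary: HashMap<&str, OsString>) -> Vec<(String, OsString)> {
    summary
        .into_iter()
        .map(|(name, paths)| (format!("WATCHEXEC_{name}_PATH"), paths))
        .collect()
}
```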
## v2.0.0-pre.1 (2021-12-21) - MSRV bumped to 1.56 - Rust 2021 edition - More documentation around tagged filterer: - `==` and `!=` are case-insensitive - the mapping of matcher to tags - the mapping of matcher to auto op - Finished the tagged filterer: - Proper path glob matching - Signal matching - Process completion matching - Allowlisting pattern works - More matcher aliases to the parser - Negated filters - Some silly filter parsing bugs - File event kind matching - Folder filtering (main confusing behaviour in v1) - Lots of tests: - Globset filterer - Including the "buggy"/confusing behaviour of v1, for parity/compat - Tagged filterer: - Paths - Including verifying that the v1 confusing behaviour is fixed - Non-path filters - Filter parsing - Ignore files - Filter scopes - Outcomes - Change reporting in the environment - ...Specify behaviour a little more precisely through that process - Prepare the watchexec event type to be serializable - A synthetic `FileType` - A synthetic `ProcessEnd` (`ExitStatus` replacement) - Some ease-of-use improvements, mainly removing generics when overkill ## v2.0.0-pre.0 (2021-10-17) - Placeholder release of v2 library (preview) ## v1.17.1 (2021-07-22) - Process handling code replaced with the new [command-group](https://github.com/watchexec/command-group) crate. - [#158](https://github.com/watchexec/watchexec/issues/158) New option `use_process_group` (default `true`) allows disabling use of process groups. - [#168](https://github.com/watchexec/watchexec/issues/168) Default debounce time further decreased to 100ms. - Binstall configuration and transitional `cargo install watchexec` stub removed. 
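The debounce mentioned above groups events that arrive close together. A simplified, synchronous sketch of that idea (watchexec's actual implementation is asynchronous and channel-driven; this only shows the windowing):

```rust
use std::time::{Duration, Instant};

// Group timestamped events into batches: each batch collects events arriving
// within `window` of the batch's first event. Illustrative only.
fn batch<'a>(events: &[(Instant, &'a str)], window: Duration) -> Vec<Vec<&'a str>> {
    let mut batches: Vec<Vec<&str>> = Vec::new();
    let mut batch_start: Option<Instant> = None;
    for &(time, name) in events {
        match batch_start {
            Some(start) if time.duration_since(start) < window => {
                batches.last_mut().unwrap().push(name);
            }
            _ => {
                batch_start = Some(time);
                batches.push(vec![name]);
            }
        }
    }
    batches
}
```

With a 100ms window, two events 50ms apart land in one batch, while an event 200ms later starts a new one.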
## v1.16.1 (2021-07-10) - [#200](https://github.com/watchexec/watchexec/issues/200): Expose when the process is done running - [`ba26999`](https://github.com/watchexec/watchexec/commit/ba26999028cfcac410120330800a9a9026ca7274) Pin globset to 0.4.6 to avoid breakage due to a bugfix in 0.4.7 ## v1.16.0 (2021-05-09) - Initial release as a separate crate. ================================================ FILE: crates/lib/Cargo.toml ================================================ [package] name = "watchexec" version = "8.2.0" authors = ["Félix Saparelli ", "Matt Green "] license = "Apache-2.0" description = "Library to execute commands in response to file modifications" keywords = ["watcher", "filesystem", "watchexec"] documentation = "https://docs.rs/watchexec" homepage = "https://watchexec.github.io" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.61.0" edition = "2021" [dependencies] async-priority-channel = "0.2.0" atomic-take = "1.0.0" futures = "0.3.29" miette = "7.2.0" notify = "8.0.0" thiserror = "2.0.11" normalize-path = "0.2.0" [dependencies.watchexec-events] version = "6.1.0" path = "../events" [dependencies.watchexec-signals] version = "5.0.1" path = "../signals" [dependencies.watchexec-supervisor] version = "5.2.0" path = "../supervisor" [dependencies.tokio] version = "1.33.0" features = [ "fs", "io-std", "process", "rt", "rt-multi-thread", "signal", "sync", ] [dependencies.tracing] version = "0.1.40" features = ["log"] [target.'cfg(unix)'.dependencies] libc = "0.2.74" [target.'cfg(windows)'.dependencies.windows-sys] version = ">= 0.59.0, < 0.62.0" features = ["Win32_System_Console", "Win32_Foundation"] [dev-dependencies.tracing-subscriber] version = "0.3.6" features = ["env-filter"] [target.'cfg(unix)'.dev-dependencies.nix] version = "0.30.1" features = ["signal"] [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" 
too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" ================================================ FILE: crates/lib/README.md ================================================ [![Crates.io page](https://badgen.net/crates/v/watchexec)](https://crates.io/crates/watchexec) [![API Docs](https://docs.rs/watchexec/badge.svg)][docs] [![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license] [![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml) # Watchexec library _The library which powers [Watchexec CLI](https://watchexec.github.io) and other tools._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license]. - Status: maintained. [docs]: https://docs.rs/watchexec [license]: ../../LICENSE ## Examples Here's a complete example showing some of the library's features: ```rust ,no_run use miette::{IntoDiagnostic, Result}; use std::{ sync::{Arc, Mutex}, time::Duration, }; use watchexec::{ command::{Command, Program, Shell}, job::CommandState, Watchexec, }; use watchexec_events::{Event, Priority}; use watchexec_signals::Signal; #[tokio::main] async fn main() -> Result<()> { // this is okay to start with, but Watchexec logs a LOT of data, // even at error level. you will quickly want to filter it down. tracing_subscriber::fmt() .with_env_filter(tracing_subscriber::EnvFilter::from_default_env()) .init(); // initialise Watchexec with a simple initial action handler let job = Arc::new(Mutex::new(None)); let wx = Watchexec::new({ let outerjob = job.clone(); move |mut action| { let (_, job) = action.create_job(Arc::new(Command { program: Program::Shell { shell: Shell::new("bash"), command: " echo 'Hello world' trap 'echo Not quitting yet!' 
TERM read " .into(), args: Vec::new(), }, options: Default::default(), })); // store the job outside this closure too *outerjob.lock().unwrap() = Some(job.clone()); // block SIGINT #[cfg(unix)] job.set_spawn_hook(|cmd, _| { use nix::sys::signal::{sigprocmask, SigSet, SigmaskHow, Signal}; unsafe { cmd.command_mut().pre_exec(|| { let mut newset = SigSet::empty(); newset.add(Signal::SIGINT); sigprocmask(SigmaskHow::SIG_BLOCK, Some(&newset), None)?; Ok(()) }); } }); // start the command job.start(); action } })?; // start the engine let main = wx.main(); // send an event to start wx.send_event(Event::default(), Priority::Urgent) .await .unwrap(); // ^ this will cause the action handler we've defined above to run, // creating and starting our little bash program, and storing it in the mutex // spin until we've got the job while job.lock().unwrap().is_none() { tokio::task::yield_now().await; } // watch the job and restart it when it exits let job = job.lock().unwrap().clone().unwrap(); let auto_restart = tokio::spawn(async move { loop { job.to_wait().await; job.run(|context| { if let CommandState::Finished { status, started, finished, } = context.current { let duration = *finished - *started; eprintln!("[Program stopped with {status:?}; ran for {duration:?}]") } }) .await; eprintln!("[Restarting...]"); job.start().await; } }); // now we change what the action does: let auto_restart_abort = auto_restart.abort_handle(); wx.config.on_action(move |mut action| { // if we get Ctrl-C on the Watchexec instance, we quit if action.signals().any(|sig| sig == Signal::Interrupt) { eprintln!("[Quitting...]"); auto_restart_abort.abort(); action.quit_gracefully(Signal::ForceStop, Duration::ZERO); return action; } // if the action was triggered by file events, gracefully stop the program if action.paths().next().is_some() { // watchexec can manage ("supervise") more than one program; // here we only have one but we don't know its Id so we grab it out of the iterator if let Some(job) = 
action.list_jobs().next().map(|(_, job)| job.clone()) { eprintln!("[Asking program to stop...]"); job.stop_with_signal(Signal::Terminate, Duration::from_secs(5)); } } action }); // and watch all files in the current directory: wx.config.pathset(["."]); // then keep running until Watchexec quits! let _ = main.await.into_diagnostic()?; auto_restart.abort(); Ok(()) } ``` Other examples: - [Only Commands](./examples/only_commands.rs): skip watching files, only use the supervisor. - [Only Events](./examples/only_events.rs): never start any processes, only print events. - [Restart `cargo run` only when `cargo build` succeeds](./examples/restart_run_on_successful_build.rs) ## Kitchen sink Though not its primary usecase, the library exposes most of its relatively standalone components, available to make other tools that are not Watchexec-shaped: - **Event sources**: [Filesystem](https://docs.rs/watchexec/3/watchexec/sources/fs/index.html), [Signals](https://docs.rs/watchexec/3/watchexec/sources/signal/index.html), [Keyboard](https://docs.rs/watchexec/3/watchexec/sources/keyboard/index.html). - Finding **[a common prefix](https://docs.rs/watchexec/3/watchexec/paths/fn.common_prefix.html)** of a set of paths. - A **[Changeable](https://docs.rs/watchexec/3/watchexec/changeable/index.html)** type, which powers the "live" configuration system. - And [more][docs]! Filterers are split into their own crates, so they can be evolved independently: - The **[Globset](https://docs.rs/watchexec-filterer-globset) filterer** implements the default Watchexec CLI filtering, based on the regex crate's ignore mechanisms. - ~~The **[Tagged](https://docs.rs/watchexec-filterer-tagged) filterer**~~ was an experiment in creating a more powerful filtering solution, which could operate on every part of events, not just their paths, using a custom syntax. It is no longer maintained. 
- The **[Ignore](https://docs.rs/watchexec-filterer-ignore) filterer** implements ignore-file semantics, and especially supports _trees_ of ignore files. It is used as a subfilterer in both of the main filterers above. There are also separate, standalone crates used to build Watchexec which you can tap into: - **[Supervisor](https://docs.rs/watchexec-supervisor)** is Watchexec's process supervisor and command abstraction. - **[ClearScreen](https://docs.rs/clearscreen)** makes clearing the terminal screen in a cross-platform way easy by default, and provides advanced options to fit your usecase. - **[Command Group](https://docs.rs/command-group)** augments the std and tokio `Command` with support for process groups, portable between Unix and Windows. - **[Event types](https://docs.rs/watchexec-events)** contains the event types used by Watchexec, including the JSON format used for passing event data to child processes. - **[Signal types](https://docs.rs/watchexec-signals)** contains the signal types used by Watchexec. - **[Ignore files](https://docs.rs/ignore-files)** finds, parses, and interprets ignore files. - **[Project Origins](https://docs.rs/project-origins)** finds the origin (or root) path of a project, and what kind of project it is. ## Rust version (MSRV) Due to the unpredictability of dependencies changing their MSRV, this library no longer tries to keep to a minimum supported Rust version behind stable. Instead, it is assumed that developers use the latest stable at all times. 
Applications that wish to support lower-than-stable Rust (such as the Watchexec CLI does) should: - use a lock file - recommend the use of `--locked` when installing from source - provide pre-built binaries (and [Binstall](https://github.com/cargo-bins/cargo-binstall) support) for non-distro users - avoid using newer features until some time has passed, to let distro users catch up - consider recommending that distro-Rust users switch to distro `rustup` where available ================================================ FILE: crates/lib/examples/only_commands.rs ================================================ use std::{ sync::Arc, time::{Duration, Instant}, }; use miette::{IntoDiagnostic, Result}; use tokio::time::sleep; use watchexec::{ command::{Command, Program}, Watchexec, }; use watchexec_events::{Event, Priority}; #[tokio::main] async fn main() -> Result<()> { let wx = Watchexec::new(|mut action| { // you don't HAVE to respond to filesystem events: // here, we start a command every five seconds, unless we get a signal and quit if action.signals().next().is_some() { eprintln!("[Quitting...]"); action.quit(); } else { let (_, job) = action.create_job(Arc::new(Command { program: Program::Exec { prog: "echo".into(), args: vec![ "Hello world!".into(), format!("Current time: {:?}", Instant::now()), "Press Ctrl+C to quit".into(), ], }, options: Default::default(), })); job.start(); } action })?; tokio::spawn({ let wx = wx.clone(); async move { loop { sleep(Duration::from_secs(5)).await; wx.send_event(Event::default(), Priority::Urgent) .await .unwrap(); } } }); let _ = wx.main().await.into_diagnostic()?; Ok(()) } ================================================ FILE: crates/lib/examples/only_events.rs ================================================ use miette::{IntoDiagnostic, Result}; use watchexec::Watchexec; #[tokio::main] async fn main() -> Result<()> { let wx = Watchexec::new(|mut action| { // you don't HAVE to spawn jobs: // here, we just print out the events 
as they come in for event in action.events.iter() { eprintln!("{event:?}"); } // quit when we get a signal if action.signals().next().is_some() { eprintln!("[Quitting...]"); action.quit(); } action })?; // start the engine let main = wx.main(); // and watch all files in the current directory: wx.config.pathset(["."]); let _ = main.await.into_diagnostic()?; Ok(()) } ================================================ FILE: crates/lib/examples/readme.rs ================================================ use std::{ sync::{Arc, Mutex}, time::Duration, }; use miette::{IntoDiagnostic, Result}; use watchexec::{ command::{Command, Program, Shell}, job::CommandState, Watchexec, }; use watchexec_events::{Event, Priority}; use watchexec_signals::Signal; #[tokio::main] async fn main() -> Result<()> { // this is okay to start with, but Watchexec logs a LOT of data, // even at error level. you will quickly want to filter it down. tracing_subscriber::fmt() .with_env_filter(tracing_subscriber::EnvFilter::from_default_env()) .init(); // initialise Watchexec with a simple initial action handler let job = Arc::new(Mutex::new(None)); let wx = Watchexec::new({ let outerjob = job.clone(); move |mut action| { let (_, job) = action.create_job(Arc::new(Command { program: Program::Shell { shell: Shell::new("bash"), command: " echo 'Hello world' trap 'echo Not quitting yet!' 
TERM read " .into(), args: Vec::new(), }, options: Default::default(), })); // store the job outside this closure too *outerjob.lock().unwrap() = Some(job.clone()); // block SIGINT #[cfg(unix)] job.set_spawn_hook(|cmd, _| { use nix::sys::signal::{sigprocmask, SigSet, SigmaskHow, Signal}; unsafe { cmd.command_mut().pre_exec(|| { let mut newset = SigSet::empty(); newset.add(Signal::SIGINT); sigprocmask(SigmaskHow::SIG_BLOCK, Some(&newset), None)?; Ok(()) }); } }); // start the command job.start(); action } })?; // start the engine let main = wx.main(); // send an event to start wx.send_event(Event::default(), Priority::Urgent) .await .unwrap(); // ^ this will cause the action handler we've defined above to run, // creating and starting our little bash program, and storing it in the mutex // spin until we've got the job while job.lock().unwrap().is_none() { tokio::task::yield_now().await; } // watch the job and restart it when it exits let job = job.lock().unwrap().clone().unwrap(); let auto_restart = tokio::spawn(async move { loop { job.to_wait().await; job.run(|context| { if let CommandState::Finished { status, started, finished, } = context.current { let duration = *finished - *started; eprintln!("[Program stopped with {status:?}; ran for {duration:?}]"); } }) .await; eprintln!("[Restarting...]"); job.start().await; } }); // now we change what the action does: let auto_restart_abort = auto_restart.abort_handle(); wx.config.on_action(move |mut action| { // if we get Ctrl-C on the Watchexec instance, we quit if action.signals().any(|sig| sig == Signal::Interrupt) { eprintln!("[Quitting...]"); auto_restart_abort.abort(); action.quit_gracefully(Signal::ForceStop, Duration::ZERO); return action; } // if the action was triggered by file events, gracefully stop the program if action.paths().next().is_some() { // watchexec can manage ("supervise") more than one program; // here we only have one but we don't know its Id so we grab it out of the iterator if let Some(job) = 
action.list_jobs().next().map(|(_, job)| job) { eprintln!("[Asking program to stop...]"); job.stop_with_signal(Signal::Terminate, Duration::from_secs(5)); } // we could also use `action.get_or_create_job` initially and store its Id to use here, // see the CHANGELOG.md for an example under "3.0.0 > Action". } action }); // and watch all files in the current directory: wx.config.pathset(["."]); // then keep running until Watchexec quits! let _ = main.await.into_diagnostic()?; auto_restart.abort(); Ok(()) } ================================================ FILE: crates/lib/examples/restart_run_on_successful_build.rs ================================================ use std::sync::Arc; use miette::{IntoDiagnostic, Result}; use watchexec::{ command::{Command, Program, SpawnOptions}, job::CommandState, Id, Watchexec, }; use watchexec_events::{Event, Priority, ProcessEnd}; use watchexec_signals::Signal; #[tokio::main] async fn main() -> Result<()> { let build_id = Id::default(); let run_id = Id::default(); let wx = Watchexec::new_async(move |mut action| { Box::new(async move { if action.signals().any(|sig| sig == Signal::Interrupt) { eprintln!("[Quitting...]"); action.quit(); return action; } let build = action.get_or_create_job(build_id, || { Arc::new(Command { program: Program::Exec { prog: "cargo".into(), args: vec!["build".into()], }, options: Default::default(), }) }); let run = action.get_or_create_job(run_id, || { Arc::new(Command { program: Program::Exec { prog: "cargo".into(), args: vec!["run".into()], }, options: SpawnOptions { grouped: true, ..Default::default() }, }) }); if action.paths().next().is_some() || action.events.iter().any(|event| event.tags.is_empty()) { build.restart().await; } build.to_wait().await; build .run(move |context| { if let CommandState::Finished { status: ProcessEnd::Success, .. 
} = context.current { run.restart(); } }) .await; action }) })?; // start the engine let main = wx.main(); // send an event to start wx.send_event(Event::default(), Priority::Urgent) .await .unwrap(); // and watch all files in cli src wx.config.pathset(["crates/cli/src"]); // then keep running until Watchexec quits! let _ = main.await.into_diagnostic()?; Ok(()) } ================================================ FILE: crates/lib/release.toml ================================================ pre-release-commit-message = "release: lib v{{version}}" tag-prefix = "watchexec-" tag-message = "watchexec {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/lib/src/action/handler.rs ================================================ use std::{collections::HashMap, path::Path, sync::Arc, time::Duration}; use tokio::task::JoinHandle; use watchexec_events::{Event, FileType, ProcessEnd}; use watchexec_signals::Signal; use watchexec_supervisor::{ command::Command, job::{start_job, Job}, }; use crate::id::Id; use super::QuitManner; /// The environment given to the action handler. /// /// The action handler is the heart of a Watchexec program. Within, you decide what happens when an /// event successfully passes all filters. Watchexec maintains a set of Supervised [`Job`]s, which /// are assigned a unique [`Id`] for lightweight reference. In this action handler, you should /// add commands to be supervised with `create_job()`, or find an already-supervised job with /// `get_job()` or `list_jobs()`. You can interact with jobs directly via their handles, and can /// even store clones of the handles for later use outside the action handler. /// /// The action handler is also given the [`Event`]s which triggered the action. These are expected /// to be the way to determine what to do with a job. 
However, in some applications you might not /// care about them, and that's fine too: for example, you can build a Watchexec which only does /// process supervision, and is triggered entirely by synthetic events. Conversely, you are also not /// obligated to use the job handles: you can build a Watchexec which only does something with the /// events, and never actually starts any processes. /// /// There are some important considerations to keep in mind when writing an action handler: /// /// 1. The action handler is called with the supervisor set _as of when the handler was called_. /// This is particularly important when multiple action handlers might be running at the same /// time: they might have incomplete views of the supervisor set. /// /// 2. The way the action handler communicates with the Watchexec handler is through the return /// value of the handler. That is, when you add a job with `create_job()`, the job is not added /// to the Watchexec instance's supervisor set until the action handler returns. Similarly, when /// using `quit()`, the quit action is not performed until the action handler returns and the /// Watchexec instance is able to see it. /// /// 3. The action handler blocks the action main loop. This means that if you have a long-running /// action handler, the Watchexec instance will not be able to process events until the handler /// returns. That will cause events to accumulate and then get dropped once the channel reaches /// capacity, which will impact your ability to receive signals (such as a Ctrl-C), and may spew /// [`EventChannelTrySend` errors](crate::error::RuntimeError::EventChannelTrySend). /// /// If you want to do something long-running, you should either ignore that error, and accept /// that events may be dropped, or preferably spawn a task to do it, and return from the action /// handler as soon as possible. #[derive(Debug)] pub struct Handler { /// The collected events which triggered the action. 
	pub events: Arc<[Event]>,

	extant: HashMap<Id, Job>,
	pub(crate) new: HashMap<Id, (Job, JoinHandle<()>)>,
	pub(crate) quit: Option<QuitManner>,
}

impl Handler {
	pub(crate) fn new(events: Arc<[Event]>, jobs: HashMap<Id, Job>) -> Self {
		Self {
			events,
			extant: jobs,
			new: HashMap::new(),
			quit: None,
		}
	}

	/// Create a new job and return its handle.
	///
	/// This starts the [`Job`] immediately, and stores a copy of its handle and [`Id`] in this
	/// `Action` (and thus in the Watchexec instance, when the action handler returns).
	pub fn create_job(&mut self, command: Arc<Command>) -> (Id, Job) {
		let id = Id::default();
		let (job, task) = start_job(command);
		self.new.insert(id, (job.clone(), task));
		(id, job)
	}

	// exposing this is dangerous as it allows duplicate IDs which may leak jobs
	fn create_job_with_id(&mut self, id: Id, command: Arc<Command>) -> Job {
		let (job, task) = start_job(command);
		self.new.insert(id, (job.clone(), task));
		job
	}

	/// Get an existing job or create a new one given an Id.
	///
	/// This starts the [`Job`] immediately if one with the Id doesn't exist, and stores a copy of
	/// its handle and [`Id`] in this `Action` (and thus in the Watchexec instance, when the action
	/// handler returns).
	pub fn get_or_create_job(&mut self, id: Id, command: impl Fn() -> Arc<Command>) -> Job {
		self.get_job(id)
			.unwrap_or_else(|| self.create_job_with_id(id, command()))
	}

	/// Get a job given its Id.
	///
	/// This returns a job handle, if it existed when this handler was called.
	#[must_use]
	pub fn get_job(&self, id: Id) -> Option<Job> {
		self.extant.get(&id).cloned()
	}

	/// List all jobs currently supervised by Watchexec.
	///
	/// This returns an iterator over all jobs, in no particular order, as of when this handler was
	/// called.
	pub fn list_jobs(&self) -> impl Iterator<Item = (Id, Job)> + '_ {
		self.extant.iter().map(|(id, job)| (*id, job.clone()))
	}

	/// Shut down the Watchexec instance immediately.
	///
	/// This will kill and drop all jobs without waiting on processes, then quit.
	///
	/// Use `quit_gracefully()` to wait for processes to finish before quitting.
	///
	/// The quit is initiated once the action handler returns, not when this method is called.
	pub fn quit(&mut self) {
		self.quit = Some(QuitManner::Abort);
	}

	/// Shut down the Watchexec instance gracefully.
	///
	/// This will send graceful stops to all jobs, wait on them to finish, then reap them and quit.
	///
	/// Use `quit()` to quit more abruptly.
	///
	/// If you want to wait for all other actions to finish and for jobs to get cleaned up, but not
	/// gracefully delay for processes, you can do:
	///
	/// ```no_compile
	/// action.quit_gracefully(Signal::ForceStop, Duration::ZERO);
	/// ```
	///
	/// The quit is initiated once the action handler returns, not when this method is called.
	pub fn quit_gracefully(&mut self, signal: Signal, grace: Duration) {
		self.quit = Some(QuitManner::Graceful { signal, grace });
	}

	/// Convenience to get all signals in the event set.
	pub fn signals(&self) -> impl Iterator<Item = Signal> + '_ {
		self.events.iter().flat_map(Event::signals)
	}

	/// Convenience to get all paths in the event set.
	///
	/// An action contains a set of events, and some of those events might relate to watched
	/// files, and each of *those* events may have one or more paths that were affected.
	/// To hide this complexity this method just provides any and all paths in the event,
	/// along with the type of file at that path, if Watchexec knows that.
	pub fn paths(&self) -> impl Iterator<Item = (&Path, Option<FileType>)> + '_ {
		self.events.iter().flat_map(Event::paths)
	}

	/// Convenience to get all process completions in the event set.
	pub fn completions(&self) -> impl Iterator<Item = Option<ProcessEnd>> + '_ {
		self.events.iter().flat_map(Event::completions)
	}
}

================================================
FILE: crates/lib/src/action/quit.rs
================================================
use std::time::Duration;

use watchexec_signals::Signal;

/// How the Watchexec instance should quit.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum QuitManner {
	/// Kill all processes and drop all jobs, then quit.
	Abort,

	/// Gracefully stop all jobs, then quit.
	Graceful {
		/// Signal to send immediately
		signal: Signal,

		/// Time to wait before forceful termination
		grace: Duration,
	},
}

================================================
FILE: crates/lib/src/action/return.rs
================================================
use std::future::Future;

use super::ActionHandler;

/// The return type of an action.
///
/// This is the type returned by the raw action handler, used internally or when setting the action
/// handler directly via the field on [`Config`](crate::Config). It is not used when setting the
/// action handler via [`Config::on_action`](crate::Config::on_action) and
/// [`Config::on_action_async`](crate::Config::on_action_async) as that takes care of wrapping the
/// return type from the specialised signature on these methods.
pub enum ActionReturn {
	/// The action handler is synchronous and here's its return value.
	Sync(ActionHandler),

	/// The action handler is asynchronous: this is the future that will resolve to its return value.
	Async(Box<dyn Future<Output = ActionHandler> + Send + Sync>),
}

================================================
FILE: crates/lib/src/action/worker.rs
================================================
use std::{
	collections::HashMap,
	mem::take,
	sync::Arc,
	time::{Duration, Instant},
};

use async_priority_channel as priority;
use tokio::{sync::mpsc, time::timeout};
use tracing::{debug, trace};
use watchexec_events::{Event, Priority};
use watchexec_supervisor::job::Job;

use super::{handler::Handler, quit::QuitManner};
use crate::{
	action::ActionReturn,
	error::{CriticalError, RuntimeError},
	filter::Filterer,
	id::Id,
	late_join_set::LateJoinSet,
	Config,
};

/// The main worker of a Watchexec process.
///
/// This is the main loop of the process. It receives events from the event channel, filters them,
/// debounces them, obtains the desired outcome of an actioned event, calls the appropriate handlers
/// and schedules processes as needed.
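The trailing-edge debounce performed by this loop (in `throttle_collect`) can be illustrated with a small std-only helper. This is a sketch of the same arithmetic as `config.throttle.get().saturating_sub(last.elapsed())`, not code from this crate; the function name is hypothetical.

```rust
use std::time::{Duration, Instant};

/// How much longer to wait before firing the action, given when the first
/// event of the current cycle arrived and the configured throttle. Saturates
/// at zero, mirroring the worker's `saturating_sub` of elapsed time.
fn remaining_throttle(first_event: Instant, throttle: Duration) -> Duration {
	throttle.saturating_sub(first_event.elapsed())
}

fn main() {
	let throttle = Duration::from_millis(50);
	let first_event = Instant::now();

	// Right after the first event, at most the full window remains.
	assert!(remaining_throttle(first_event, throttle) <= throttle);

	// Once the window has fully elapsed, the remainder is zero and the
	// worker fires the action with everything collected so far.
	let stale = first_event - Duration::from_millis(100);
	assert_eq!(remaining_throttle(stale, throttle), Duration::ZERO);
}
```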
pub async fn worker(
	config: Arc<Config>,
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Receiver<Event, Priority>,
) -> Result<(), CriticalError> {
	let mut jobtasks = LateJoinSet::default();
	let mut jobs = HashMap::<Id, Job>::new();

	while let Some(mut set) = throttle_collect(
		config.clone(),
		events.clone(),
		errors.clone(),
		Instant::now(),
	)
	.await?
	{
		let events: Arc<[Event]> = Arc::from(take(&mut set).into_boxed_slice());

		trace!("preparing action handler");
		let action = Handler::new(events.clone(), jobs.clone());

		debug!("running action handler");
		let action = match config.action_handler.call(action) {
			ActionReturn::Sync(action) => action,
			ActionReturn::Async(action) => Box::into_pin(action).await,
		};

		debug!("take control of new tasks");
		for (id, (job, task)) in action.new {
			trace!(?id, "taking control of new task");
			jobtasks.insert(task);
			jobs.insert(id, job);
		}

		if let Some(manner) = action.quit {
			debug!(?manner, "quitting worker");
			match manner {
				QuitManner::Abort => break,
				QuitManner::Graceful { signal, grace } => {
					debug!(?signal, ?grace, "quitting worker gracefully");
					let mut tasks = LateJoinSet::default();
					for (id, job) in jobs.drain() {
						trace!(?id, "quitting job");
						tasks.spawn(async move {
							job.stop_with_signal(signal, grace);
							job.delete().await;
						});
					}

					// TODO: spawn to process actions, and allow events to come in while
					// waiting for graceful shutdown, e.g.
a second Ctrl-C to hasten
					debug!("waiting for graceful shutdown tasks");
					tasks.join_all().await;

					debug!("waiting for job tasks to end");
					jobtasks.join_all().await;
					break;
				}
			}
		}

		let gc: Vec<Id> = jobs
			.iter()
			.filter_map(|(id, job)| {
				if job.is_dead() {
					trace!(?id, "job is dead, gc'ing");
					Some(*id)
				} else {
					None
				}
			})
			.collect();

		if !gc.is_empty() {
			debug!("garbage collect old tasks");
			for id in gc {
				jobs.remove(&id);
			}
		}

		debug!("action handler finished");
	}

	debug!("action worker finished");
	Ok(())
}

pub async fn throttle_collect(
	config: Arc<Config>,
	events: priority::Receiver<Event, Priority>,
	errors: mpsc::Sender<RuntimeError>,
	mut last: Instant,
) -> Result<Option<Vec<Event>>, CriticalError> {
	if events.is_closed() {
		trace!("events channel closed, stopping");
		return Ok(None);
	}

	let mut set: Vec<Event> = vec![];
	loop {
		let maxtime = if set.is_empty() {
			trace!("nothing in set, waiting forever for next event");
			Duration::from_secs(u64::MAX)
		} else {
			config.throttle.get().saturating_sub(last.elapsed())
		};

		if maxtime.is_zero() {
			if set.is_empty() {
				trace!("out of throttle but nothing to do, resetting");
				last = Instant::now();
				continue;
			}

			trace!("out of throttle on recycle");
		} else {
			trace!(?maxtime, "waiting for event");
			let maybe_event = timeout(maxtime, events.recv()).await;
			if events.is_closed() {
				trace!("events channel closed during timeout, stopping");
				return Ok(None);
			}

			match maybe_event {
				Err(_timeout) => {
					trace!("timed out, cycling");
					continue;
				}
				Ok(Err(_empty)) => return Ok(None),
				Ok(Ok((event, priority))) => {
					trace!(?event, ?priority, "got event");
					if priority == Priority::Urgent {
						trace!("urgent event, by-passing filters");
					} else if event.is_empty() {
						trace!("empty event, by-passing filters");
					} else {
						let filtered = config.filterer.check_event(&event, priority);
						match filtered {
							Err(err) => {
								trace!(%err, "filter errored on event");
								errors.send(err).await?;
								continue;
							}
							Ok(false) => {
								trace!("filter rejected event");
								continue;
							}
							Ok(true) => {
								trace!("filter passed event");
							}
						}
					}

					if set.is_empty() {
						trace!("event is the
first, resetting throttle window");
						last = Instant::now();
					}
					set.push(event);

					if priority == Priority::Urgent {
						trace!("urgent event, by-passing throttle");
					} else {
						let elapsed = last.elapsed();
						if elapsed < config.throttle.get() {
							trace!(?elapsed, "still within throttle window, cycling");
							continue;
						}
					}
				}
			}
		}

		return Ok(Some(set));
	}
}

================================================
FILE: crates/lib/src/action.rs
================================================
//! Processor responsible for receiving events, filtering them, and scheduling actions in response.

#[doc(inline)]
pub use handler::Handler as ActionHandler;
#[doc(inline)]
pub use quit::QuitManner;
#[doc(inline)]
pub use r#return::ActionReturn;
#[doc(inline)]
pub use worker::worker;

mod handler;
mod quit;
mod r#return;
mod worker;

================================================
FILE: crates/lib/src/changeable.rs
================================================
//! Changeable values.

use std::{
	any::type_name,
	fmt,
	sync::{Arc, RwLock},
};

/// A shareable value that doesn't keep a lock when it is read.
///
/// This is essentially an `Arc<RwLock<T>>`, with only two methods to use it:
/// - replace the value, which obtains a write lock
/// - get a clone of that value, which obtains a read lock
///
/// but importantly because you get a clone of the value, the read lock is not held after the
/// `get()` method returns.
///
/// See [`ChangeableFn`] for a specialised variant which holds an [`Fn`].
#[derive(Clone)]
pub struct Changeable<T>(Arc<RwLock<T>>);

impl<T> Changeable<T>
where
	T: Clone + Send,
{
	/// Create a new Changeable.
	///
	/// If `T: Default`, prefer using `::default()`.
	#[must_use]
	pub fn new(value: T) -> Self {
		Self(Arc::new(RwLock::new(value)))
	}

	/// Replace the value with a new one.
	///
	/// Panics if the lock was poisoned.
	pub fn replace(&self, new: T) {
		*(self.0.write().expect("changeable lock poisoned")) = new;
	}

	/// Get a clone of the value.
	///
	/// Panics if the lock was poisoned.
	#[must_use]
	pub fn get(&self) -> T {
		self.0.read().expect("handler lock poisoned").clone()
	}
}

impl<T> Default for Changeable<T>
where
	T: Clone + Send + Default,
{
	fn default() -> Self {
		Self::new(T::default())
	}
}

// TODO: with specialisation, write a better impl when T: Debug
impl<T> fmt::Debug for Changeable<T> {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("Changeable")
			.field("inner type", &type_name::<T>())
			.finish_non_exhaustive()
	}
}

/// A shareable `Fn` that doesn't hold a lock when it is called.
///
/// This is a specialisation of [`Changeable`] for the `Fn` usecase.
///
/// As this is for Watchexec, only `Fn`s with a single argument and return value are supported
/// here; it's simple enough to make your own if you want more.
pub struct ChangeableFn<T, U>(Changeable<Arc<dyn (Fn(T) -> U) + Send + Sync>>);

impl<T, U> ChangeableFn<T, U>
where
	T: Send,
	U: Send,
{
	pub(crate) fn new(f: impl (Fn(T) -> U) + Send + Sync + 'static) -> Self {
		Self(Changeable::new(Arc::new(f)))
	}

	/// Replace the fn with a new one.
	///
	/// Panics if the lock was poisoned.
	pub fn replace(&self, new: impl (Fn(T) -> U) + Send + Sync + 'static) {
		self.0.replace(Arc::new(new));
	}

	/// Call the fn.
	///
	/// Panics if the lock was poisoned.
	pub fn call(&self, data: T) -> U {
		(self.0.get())(data)
	}
}

// the derive adds a T: Clone bound
impl<T, U> Clone for ChangeableFn<T, U> {
	fn clone(&self) -> Self {
		Self(Changeable::clone(&self.0))
	}
}

impl<T, U> Default for ChangeableFn<T, U>
where
	T: Send,
	U: Send + Default,
{
	fn default() -> Self {
		Self::new(|_| U::default())
	}
}

impl<T, U> fmt::Debug for ChangeableFn<T, U> {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("ChangeableFn")
			.field("payload type", &type_name::<T>())
			.field("return type", &type_name::<U>())
			.finish_non_exhaustive()
	}
}

================================================
FILE: crates/lib/src/config.rs
================================================
//! Configuration and builders for [`crate::Watchexec`].
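The clone-out-of-a-lock pattern used by `Changeable` above can be demonstrated std-only. This is a simplified sketch (no `Send` bounds, no `ChangeableFn` specialisation), not the crate's actual type: `get()` returns a clone, so the read lock is released before the caller ever touches the value and writers are never blocked by past readers.

```rust
use std::sync::{Arc, RwLock};

/// Simplified std-only sketch of the `Changeable` pattern.
#[derive(Clone, Default)]
struct Changeable<T>(Arc<RwLock<T>>);

impl<T: Clone> Changeable<T> {
	/// Replace the value, taking a write lock briefly.
	fn replace(&self, new: T) {
		*self.0.write().expect("lock poisoned") = new;
	}

	/// Clone the value out; the read lock is dropped before returning.
	fn get(&self) -> T {
		self.0.read().expect("lock poisoned").clone()
	}
}

fn main() {
	let throttle = Changeable::<u64>::default();
	throttle.replace(50);
	let value = throttle.get(); // no lock held from here on
	throttle.replace(200); // the earlier read cannot block this write
	assert_eq!(value, 50);
	assert_eq!(throttle.get(), 200);
}
```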
use std::{future::Future, pin::pin, sync::Arc, time::Duration};

use tokio::sync::{watch, Notify};
use tracing::{debug, trace};

use crate::{
	action::{ActionHandler, ActionReturn},
	changeable::{Changeable, ChangeableFn},
	filter::{ChangeableFilterer, Filterer},
	sources::fs::{WatchedPath, Watcher},
	ErrorHook,
};

/// Configuration for [`Watchexec`][crate::Watchexec].
///
/// Almost every field is a [`Changeable`], such that its value can be changed from a `&self`.
///
/// Fields are public for advanced use, but in most cases changes should be made through the
/// methods provided: not only are they more convenient, each calls `debug!` on the new value,
/// providing a quick insight into what your application sets.
///
/// The methods also set the "change signal" of the Config: this notifies some parts of Watchexec
/// they should re-read the config. If you modify values via the fields directly, you should call
/// `signal_change()` yourself. Note that this doesn't mean that changing values _without_ calling
/// this will prevent Watchexec changing until it's called: most parts of Watchexec take a
/// "just-in-time" approach and read a config item immediately before it's needed, every time it's
/// needed, and thus don't need to listen for the change signal.
#[derive(Clone, Debug)]
#[non_exhaustive]
pub struct Config {
	/// This is set by the change methods whenever they're called, and notifies Watchexec that it
	/// should read the configuration again.
	pub(crate) change_signal: Arc<Notify>,

	/// The main handler to define: what to do when an action is triggered.
	///
	/// This handler is called with the [`Action`] environment, look at its doc for more detail.
	///
	/// If this handler is not provided, or does nothing, Watchexec in turn will do nothing, not
	/// even quit. Hence, you really need to provide a handler. This is enforced when using
	/// [`Watchexec::new()`], but not when using [`Watchexec::default()`].
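A minimal handler satisfying the requirement above might look like the following. This is a hedged sketch against the crate's public API (it needs the watchexec and watchexec-signals crates, and a `?`-compatible calling context), not code from this file:

```rust
// Sketch: a do-nothing handler means Watchexec never quits, so at a minimum
// handle the interrupt signal (Ctrl-C). Error handling elided.
use watchexec::Watchexec;
use watchexec_signals::Signal;

let wx = Watchexec::new(|mut action| {
	if action.signals().any(|sig| sig == Signal::Interrupt) {
		action.quit();
	}
	action
})?;
```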
	///
	/// It is possible to change the handler or any other configuration inside the previous handler.
	/// This and other handlers are fetched "just in time" when needed, so changes to handlers can
	/// appear instant, or may lag a little depending on lock contention, but a handler being called
	/// does not hold its lock. A handler changing while it's being called doesn't affect the run of
	/// a previous version of the handler: it will neither be stopped nor retried with the new code.
	///
	/// It is important for this handler to return quickly: avoid performing blocking work in it.
	/// This is true for all handlers, but especially for this one, as it will block the event loop
	/// and you'll find that the internal event queues quickly fill up and it all grinds to a halt.
	/// Spawn threads or tasks, or use channels or other async primitives to communicate with your
	/// expensive code.
	pub action_handler: ChangeableFn<ActionHandler, ActionReturn>,

	/// Runtime error handler.
	///
	/// This is run on every runtime error that occurs within Watchexec. The default handler
	/// is a no-op.
	///
	/// # Examples
	///
	/// Set the error handler:
	///
	/// ```
	/// # use watchexec::{config::Config, ErrorHook};
	/// let mut config = Config::default();
	/// config.on_error(|err: ErrorHook| {
	///     tracing::error!("{}", err.error);
	/// });
	/// ```
	///
	/// Output a critical error (which will terminate Watchexec):
	///
	/// ```
	/// # use watchexec::{config::Config, ErrorHook, error::{CriticalError, RuntimeError}};
	/// let mut config = Config::default();
	/// config.on_error(|err: ErrorHook| {
	///     tracing::error!("{}", err.error);
	///
	///     if matches!(err.error, RuntimeError::FsWatcher { ..
 }) {
	///         err.critical(CriticalError::External("fs watcher failed".into()));
	///     }
	/// });
	/// ```
	///
	/// Elevate a runtime error to critical (will preserve the error information):
	///
	/// ```
	/// # use watchexec::{config::Config, ErrorHook, error::RuntimeError};
	/// let mut config = Config::default();
	/// config.on_error(|err: ErrorHook| {
	///     tracing::error!("{}", err.error);
	///
	///     if matches!(err.error, RuntimeError::FsWatcher { .. }) {
	///         err.elevate();
	///     }
	/// });
	/// ```
	///
	/// It is important for this to return quickly: avoid performing blocking work. Locking and
	/// writing to stdio is fine, but waiting on the network is a bad idea. Of course, an
	/// asynchronous log writer or separate UI thread is always a better idea than `println!` if
	/// you have that ability.
	pub error_handler: ChangeableFn<ErrorHook, ()>,

	/// The set of filesystem paths to be watched.
	///
	/// If this is non-empty, the filesystem event source is started and configured to provide
	/// events for these paths. If it becomes empty, the filesystem event source is shut down.
	pub pathset: Changeable<Vec<WatchedPath>>,

	/// The kind of filesystem watcher to be used.
	pub file_watcher: Changeable<Watcher>,

	/// Watch stdin and emit events when input comes in over the keyboard.
	///
	/// If this is true, the keyboard event source is started and stdin is switched to raw mode
	/// (disabling line buffering). Individual key events are emitted, as well as EOF. If it
	/// becomes false, the keyboard event source is shut down, cooked mode is restored, and stdin
	/// may flow to commands again.
	///
	/// This requires a TTY and is opt-in.
	pub keyboard_events: Changeable<bool>,

	/// How long to wait for events to build up before executing an action.
	///
	/// This is sometimes called "debouncing." We debounce on the trailing edge: an action is
	/// triggered only after that amount of time has passed since the first event in the cycle. The
	/// action is called with all the collected events in the cycle.
	///
	/// Default is 50ms.
	pub throttle: Changeable<Duration>,

	/// The filterer implementation to use when filtering events.
	///
	/// The default is a no-op, which will always pass every event.
	pub filterer: ChangeableFilterer,

	/// The buffer size of the channel which carries runtime errors.
	///
	/// The default (64) is usually fine. If you expect a much larger throughput of runtime errors,
	/// or if your `error_handler` is slow, adjusting this value may help.
	///
	/// This is unchangeable at runtime and must be set before Watchexec instantiation.
	pub error_channel_size: usize,

	/// The buffer size of the channel which carries events.
	///
	/// The default (4096) is usually fine. If you expect a much larger throughput of events,
	/// adjusting this value may help.
	///
	/// This is unchangeable at runtime and must be set before Watchexec instantiation.
	pub event_channel_size: usize,

	/// Signalled by the filesystem worker after it finishes applying a pathset change
	/// (registering/unregistering OS watches). Subscribe via [`Config::fs_ready()`] **before**
	/// calling [`Config::pathset()`] to avoid missing the notification.
	pub(crate) fs_ready: watch::Sender<()>,
}

impl Default for Config {
	fn default() -> Self {
		Self {
			change_signal: Default::default(),
			action_handler: ChangeableFn::new(ActionReturn::Sync),
			error_handler: Default::default(),
			pathset: Default::default(),
			file_watcher: Default::default(),
			keyboard_events: Default::default(),
			throttle: Changeable::new(Duration::from_millis(50)),
			filterer: Default::default(),
			error_channel_size: 64,
			event_channel_size: 4096,
			fs_ready: watch::channel(()).0,
		}
	}
}

impl Config {
	/// Signal that the configuration has changed.
	///
	/// This is called automatically by all other methods here, so most of the time calling this
	/// isn't needed, but it can be useful for some advanced uses.
	#[allow(
		clippy::must_use_candidate,
		reason = "this return can explicitly be ignored"
	)]
	pub fn signal_change(&self) -> &Self {
		self.change_signal.notify_waiters();
		self
	}

	/// Watch the config for a change, but run once first.
	///
	/// This returns a Stream where the first value is available immediately, and then every
	/// subsequent one is from a change signal for this Config.
	#[must_use]
	pub(crate) fn watch(&self) -> ConfigWatched {
		ConfigWatched::new(self.change_signal.clone())
	}

	/// Subscribe to filesystem worker readiness notifications.
	///
	/// Returns a [`watch::Receiver`] that is notified each time the filesystem worker finishes
	/// applying a pathset change (i.e. OS watches are registered/unregistered). Signals readiness
	/// even if some paths failed to register; check the error handler for failures. To avoid
	/// missing a notification, subscribe **before** calling [`Config::pathset()`], then
	/// `.changed().await`.
	pub fn fs_ready(&self) -> watch::Receiver<()> {
		self.fs_ready.subscribe()
	}

	/// Set the pathset to be watched.
	pub fn pathset<I, P>(&self, pathset: I) -> &Self
	where
		I: IntoIterator<Item = P>,
		P: Into<WatchedPath>,
	{
		let pathset = pathset.into_iter().map(std::convert::Into::into).collect();
		debug!(?pathset, "Config: pathset");
		self.pathset.replace(pathset);
		self.signal_change()
	}

	/// Set the file watcher type to use.
	pub fn file_watcher(&self, watcher: Watcher) -> &Self {
		debug!(?watcher, "Config: file watcher");
		self.file_watcher.replace(watcher);
		self.signal_change()
	}

	/// Enable keyboard/stdin event source.
	pub fn keyboard_events(&self, enable: bool) -> &Self {
		debug!(?enable, "Config: keyboard");
		self.keyboard_events.replace(enable);
		self.signal_change()
	}

	/// Set the throttle.
	pub fn throttle(&self, throttle: impl Into<Duration>) -> &Self {
		let throttle = throttle.into();
		debug!(?throttle, "Config: throttle");
		self.throttle.replace(throttle);
		self.signal_change()
	}

	/// Set the filterer implementation to use.
	pub fn filterer(&self, filterer: impl Filterer + 'static) -> &Self {
		debug!(?filterer, "Config: filterer");
		self.filterer.replace(filterer);
		self.signal_change()
	}

	/// Set the runtime error handler.
	pub fn on_error(&self, handler: impl Fn(ErrorHook) + Send + Sync + 'static) -> &Self {
		debug!("Config: on_error");
		self.error_handler.replace(handler);
		self.signal_change()
	}

	/// Set the action handler.
	pub fn on_action(
		&self,
		handler: impl (Fn(ActionHandler) -> ActionHandler) + Send + Sync + 'static,
	) -> &Self {
		debug!("Config: on_action");
		self.action_handler
			.replace(move |action| ActionReturn::Sync(handler(action)));
		self.signal_change()
	}

	/// Set the action handler to a future-returning closure.
	pub fn on_action_async(
		&self,
		handler: impl (Fn(ActionHandler) -> Box<dyn Future<Output = ActionHandler> + Send + Sync>)
			+ Send
			+ Sync
			+ 'static,
	) -> &Self {
		debug!("Config: on_action_async");
		self.action_handler
			.replace(move |action| ActionReturn::Async(handler(action)));
		self.signal_change()
	}
}

#[derive(Debug)]
pub(crate) struct ConfigWatched {
	first_run: bool,
	notify: Arc<Notify>,
}

impl ConfigWatched {
	fn new(notify: Arc<Notify>) -> Self {
		let notified = notify.notified();
		pin!(notified).as_mut().enable();
		Self {
			first_run: true,
			notify,
		}
	}

	pub async fn next(&mut self) {
		let notified = self.notify.notified();
		let mut notified = pin!(notified);
		notified.as_mut().enable();

		if self.first_run {
			trace!("ConfigWatched: first run");
			self.first_run = false;
		} else {
			trace!(?notified, "ConfigWatched: waiting for change");
			// there's a bit of a gotcha where any config changes made after a Notified resolves
			// but before a new one is issued will not be caught. not sure how to fix that yet.
			notified.await;
		}
	}
}

================================================
FILE: crates/lib/src/error/critical.rs
================================================
use miette::Diagnostic;
use thiserror::Error;
use tokio::{sync::mpsc, task::JoinError};
use watchexec_events::{Event, Priority};

use super::{FsWatcherError, RuntimeError};
use crate::sources::fs::Watcher;

/// Errors which are not recoverable and stop watchexec execution.
#[derive(Debug, Diagnostic, Error)]
#[non_exhaustive]
pub enum CriticalError {
	/// Pseudo-error used to signal a graceful exit.
	#[error("this should never be printed (exit)")]
	Exit,

	/// For custom critical errors.
	///
	/// This should be used for errors by external code which are not covered by the other error
	/// types; watchexec-internal errors should never use this.
	#[error("external(critical): {0}")]
	External(#[from] Box<dyn std::error::Error + Send + Sync>),

	/// For elevated runtime errors.
	///
	/// This is used for runtime errors elevated to critical.
	#[error("a runtime error is too serious for the process to continue")]
	Elevated {
		/// The runtime error to be elevated.
		#[source]
		err: RuntimeError,

		/// Some context or help for the user.
		help: Option<String>,
	},

	/// A critical I/O error occurred.
	#[error("io({about}): {err}")]
	IoError {
		/// What it was about.
		about: &'static str,

		/// The I/O error which occurred.
		#[source]
		err: std::io::Error,
	},

	/// Error received when a runtime error cannot be sent to the errors channel.
	#[error("cannot send internal runtime error: {0}")]
	ErrorChannelSend(#[from] mpsc::error::SendError<RuntimeError>),

	/// Error received when an event cannot be sent to the events channel.
	#[error("cannot send event to internal channel: {0}")]
	EventChannelSend(#[from] async_priority_channel::SendError<(Event, Priority)>),

	/// Error received when joining the main watchexec task.
	#[error("main task join: {0}")]
	MainTaskJoin(#[source] JoinError),

	/// Error received when the filesystem watcher can't initialise.
/// /// In theory this is recoverable but in practice it's generally not, so we treat it as critical. #[error("fs: cannot initialise {kind:?} watcher")] FsWatcherInit { /// The kind of watcher. kind: Watcher, /// The error which occurred. #[source] err: FsWatcherError, }, } ================================================ FILE: crates/lib/src/error/runtime.rs ================================================ use miette::Diagnostic; use thiserror::Error; use watchexec_events::{Event, Priority}; use watchexec_signals::Signal; use crate::sources::fs::Watcher; /// Errors which _may_ be recoverable, transient, or only affect a part of the operation, and should /// be reported to the user and/or acted upon programmatically, but will not outright stop watchexec. /// /// Some errors that are classified here are spurious and may be ignored. For example, /// "waiting on process" errors should not be printed to the user by default: /// /// ``` /// # use tracing::error; /// # use watchexec::{Config, ErrorHook, error::RuntimeError}; /// # let mut config = Config::default(); /// config.on_error(|err: ErrorHook| { /// if let RuntimeError::IoError { /// about: "waiting on process group", /// .. /// } = err.error /// { /// error!("{}", err.error); /// return; /// } /// /// // ... /// }); /// ``` /// /// On the other hand, some errors may not be fatal to this library's understanding, but will be to /// your application. In those cases, you should "elevate" these errors, which will transform them /// to [`CriticalError`](super::CriticalError)s: /// /// ``` /// # use watchexec::{Config, ErrorHook, error::{RuntimeError, FsWatcherError}}; /// # let mut config = Config::default(); /// config.on_error(|err: ErrorHook| { /// if let RuntimeError::FsWatcher { /// err: /// FsWatcherError::Create { .. } /// | FsWatcherError::TooManyWatches { .. } /// | FsWatcherError::TooManyHandles { .. }, /// .. /// } = err.error { /// err.elevate(); /// return; /// } /// /// // ... 
/// });
/// ```
#[derive(Debug, Diagnostic, Error)]
#[non_exhaustive]
pub enum RuntimeError {
	/// Pseudo-error used to signal a graceful exit.
	#[error("this should never be printed (exit)")]
	Exit,

	/// For custom runtime errors.
	///
	/// This should be used for errors by external code which are not covered by the other error
	/// types; watchexec-internal errors should never use this.
	#[error("external(runtime): {0}")]
	External(#[from] Box<dyn std::error::Error + Send + Sync>),

	/// Generic I/O error, with some context.
	#[error("io({about}): {err}")]
	IoError {
		/// What it was about.
		about: &'static str,

		/// The I/O error which occurred.
		#[source]
		err: std::io::Error,
	},

	/// Events from the filesystem watcher event source.
	#[error("{kind:?} fs watcher error")]
	FsWatcher {
		/// The kind of watcher that failed to instantiate.
		kind: Watcher,

		/// The underlying error.
		#[source]
		err: super::FsWatcherError,
	},

	/// Events from the keyboard event source
	#[error("keyboard watcher error")]
	KeyboardWatcher {
		/// The underlying error.
		#[source]
		err: super::KeyboardWatcherError,
	},

	/// Opaque internal error from a command supervisor.
	#[error("internal: command supervisor: {0}")]
	InternalSupervisor(String),

	/// Error received when an event cannot be sent to the event channel.
	#[error("cannot send event from {ctx}: {err}")]
	EventChannelSend {
		/// The context in which this error happened.
		///
		/// This is not stable and its value should not be relied on except for printing the error.
		ctx: &'static str,

		/// The underlying error.
		#[source]
		err: async_priority_channel::SendError<(Event, Priority)>,
	},

	/// Error received when an event cannot be sent to the event channel.
	#[error("cannot send event from {ctx}: {err}")]
	EventChannelTrySend {
		/// The context in which this error happened.
		///
		/// This is not stable and its value should not be relied on except for printing the error.
		ctx: &'static str,

		/// The underlying error.
#[source] err: async_priority_channel::TrySendError<(Event, Priority)>, }, /// Error received when a [`Handler`][crate::handler::Handler] errors. /// /// The error is completely opaque, having been flattened into a string at the error point. #[error("handler error while {ctx}: {err}")] Handler { /// The context in which this error happened. /// /// This is not stable and its value should not be relied on except for printing the error. ctx: &'static str, /// The underlying error, as the Display representation of the original error. err: String, }, /// Error received when a [`Handler`][crate::handler::Handler] which has been passed a lock has kept that lock open after the handler has completed. #[error("{0} handler returned while holding a lock alive")] HandlerLockHeld(&'static str), /// Error received when operating on a process. #[error("when operating on process: {0}")] Process(#[source] std::io::Error), /// Error received when a process did not start correctly, or finished before we could even tell. #[error("process was dead on arrival")] ProcessDeadOnArrival, /// Error received when a [`Signal`] is unsupported /// /// This may happen if the signal is not supported on the current platform, or if Watchexec /// doesn't support sending the signal. #[error("unsupported signal: {0:?}")] UnsupportedSignal(Signal), /// Error received when there are no commands to run. /// /// This is generally a programmer error and should be caught earlier. #[error("no commands to run")] NoCommands, /// Error received when trying to render a [`Command::Shell`](crate::command::Command) that has no `command` /// /// This is generally a programmer error and should be caught earlier. #[error("empty shelled command")] CommandShellEmptyCommand, /// Error received when trying to render a [`Shell::Unix`](crate::command::Shell) with an empty shell /// /// This is generally a programmer error and should be caught earlier. 
	#[error("empty shell program")]
	CommandShellEmptyShell,

	/// Error emitted by a [`Filterer`](crate::filter::Filterer).
	#[error("{kind} filterer: {err}")]
	Filterer {
		/// The kind of filterer that failed.
		///
		/// This should be set by the filterer itself to a short name for the filterer.
		///
		/// This is not stable and its value should not be relied on except for printing the error.
		kind: &'static str,

		/// The underlying error.
		#[source]
		err: Box<dyn std::error::Error + Send + Sync>,
	},
}

================================================
FILE: crates/lib/src/error/specialised.rs
================================================
use std::path::PathBuf;

use miette::Diagnostic;
use thiserror::Error;

/// Errors emitted by the filesystem watcher.
#[derive(Debug, Diagnostic, Error)]
#[non_exhaustive]
pub enum FsWatcherError {
	/// Error received when creating a filesystem watcher fails.
	///
	/// Also see `TooManyWatches` and `TooManyHandles`.
	#[error("failed to instantiate")]
	#[diagnostic(help("perhaps retry with the poll watcher"))]
	Create(#[source] notify::Error),

	/// Error received when creating or updating a filesystem watcher fails because there are too many watches.
	///
	/// This is the OS error 28 on Linux.
	#[error("failed to instantiate: too many watches")]
	#[cfg_attr(target_os = "linux", diagnostic(help("you will want to increase your inotify.max_user_watches, see inotify(7) and https://watchexec.github.io/docs/inotify-limits.html")))]
	#[cfg_attr(
		not(target_os = "linux"),
		diagnostic(help("this should not happen on your platform"))
	)]
	TooManyWatches(#[source] notify::Error),

	/// Error received when creating or updating a filesystem watcher fails because there are too many file handles open.
	///
	/// This is the OS error 24 on Linux. It may also occur when the limit for inotify instances is reached.
#[error("failed to instantiate: too many handles")] #[cfg_attr(target_os = "linux", diagnostic(help("you will want to increase your `nofile` limit, see pam_limits(8); or increase your inotify.max_user_instances, see inotify(7) and https://watchexec.github.io/docs/inotify-limits.html")))] #[cfg_attr( not(target_os = "linux"), diagnostic(help("this should not happen on your platform")) )] TooManyHandles(#[source] notify::Error), /// Error received when reading a filesystem event fails. #[error("received an event that we could not read")] Event(#[source] notify::Error), /// Error received when adding to the pathset for the filesystem watcher fails. #[error("while adding {path:?}")] PathAdd { /// The path that was attempted to be added. path: PathBuf, /// The underlying error. #[source] err: notify::Error, }, /// Error received when removing from the pathset for the filesystem watcher fails. #[error("while removing {path:?}")] PathRemove { /// The path that was attempted to be removed. path: PathBuf, /// The underlying error. #[source] err: notify::Error, }, } /// Errors emitted by the keyboard watcher. #[derive(Debug, Diagnostic, Error)] #[non_exhaustive] pub enum KeyboardWatcherError { /// Error received when shutting down stdin watcher fails. #[error("failed to shut down stdin watcher")] StdinShutdown, } ================================================ FILE: crates/lib/src/error.rs ================================================ //! Error types for critical, runtime, and specialised errors. #[doc(inline)] pub use critical::*; #[doc(inline)] pub use runtime::*; #[doc(inline)] pub use specialised::*; mod critical; mod runtime; mod specialised; ================================================ FILE: crates/lib/src/filter.rs ================================================ //! The `Filterer` trait for event filtering. 
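The pattern this module defines — a synchronous, fallible yes/no check per event, with a pass-through default and `Arc` sharing — can be sketched with plain std types. The `Event`, error type, and `DenySubstring` filterer below are simplified stand-ins for illustration, not the real watchexec API:

```rust
use std::fmt::Debug;
use std::sync::Arc;

// Hypothetical, simplified stand-in for watchexec's Event type.
#[derive(Debug)]
pub struct Event(pub String);

// The trait shape: fast, synchronous, returns false to discard an event.
pub trait Filterer: Debug + Send + Sync {
    fn check_event(&self, event: &Event) -> Result<bool, String>;
}

// The unit type passes everything; handy as a default filterer.
impl Filterer for () {
    fn check_event(&self, _event: &Event) -> Result<bool, String> {
        Ok(true)
    }
}

// Blanket impl so filterers can be shared behind an Arc.
impl<T: Filterer + ?Sized> Filterer for Arc<T> {
    fn check_event(&self, event: &Event) -> Result<bool, String> {
        self.as_ref().check_event(event)
    }
}

// A concrete filterer: discard events whose payload contains a substring.
#[derive(Debug)]
pub struct DenySubstring(pub &'static str);

impl Filterer for DenySubstring {
    fn check_event(&self, event: &Event) -> Result<bool, String> {
        Ok(!event.0.contains(self.0))
    }
}

fn main() {
    let f: Arc<dyn Filterer> = Arc::new(DenySubstring("target/"));
    assert!(f.check_event(&Event("src/main.rs".into())).unwrap());
    assert!(!f.check_event(&Event("target/debug/app".into())).unwrap());
}
```

The `Result<bool, _>` return lets a filterer report its own failures without panicking in the event hot path, which is why the real trait routes errors to the watchexec error handler rather than aborting.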
use std::{fmt, sync::Arc};

use watchexec_events::{Event, Priority};

use crate::{changeable::Changeable, error::RuntimeError};

/// An interface for filtering events.
pub trait Filterer: std::fmt::Debug + Send + Sync {
	/// Called on (almost) every event, and should return `false` if the event is to be discarded.
	///
	/// Checking whether an event passes a filter is synchronous, should be fast, and must not block
	/// the thread. Do any expensive stuff upfront during construction of your filterer, or in a
	/// separate thread/task, as needed.
	///
	/// Returning an error will also fail the event processing, but the error will be propagated to
	/// the watchexec error handler. While the type signature supports any [`RuntimeError`], it's
	/// preferred that you create your own error type and return it wrapped in the
	/// [`RuntimeError::Filterer`] variant with the name of your filterer as `kind`.
	fn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError>;
}

impl Filterer for () {
	fn check_event(&self, _event: &Event, _priority: Priority) -> Result<bool, RuntimeError> {
		Ok(true)
	}
}

impl<T: Filterer> Filterer for Arc<T> {
	fn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> {
		Self::as_ref(self).check_event(event, priority)
	}
}

/// A shareable `Filterer` that doesn't hold a lock when it is called.
///
/// This is a specialisation of [`Changeable`] for `Filterer`.
pub struct ChangeableFilterer(Changeable<Arc<dyn Filterer>>);

impl ChangeableFilterer {
	/// Replace the filterer with a new one.
	///
	/// Panics if the lock was poisoned.
	pub fn replace(&self, new: impl Filterer + 'static) {
		self.0.replace(Arc::new(new));
	}
}

impl Filterer for ChangeableFilterer {
	fn check_event(&self, event: &Event, priority: Priority) -> Result<bool, RuntimeError> {
		Arc::as_ref(&self.0.get()).check_event(event, priority)
	}
}

// the derive adds a T: Clone bound
impl Clone for ChangeableFilterer {
	fn clone(&self) -> Self {
		Self(Changeable::clone(&self.0))
	}
}

impl Default for ChangeableFilterer {
	fn default() -> Self {
		Self(Changeable::new(Arc::new(())))
	}
}

impl fmt::Debug for ChangeableFilterer {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("ChangeableFilterer")
			.field("filterer", &format!("{:?}", self.0.get()))
			.finish_non_exhaustive()
	}
}

================================================
FILE: crates/lib/src/id.rs
================================================

use std::{cell::Cell, num::NonZeroU64};

/// Unique opaque identifier.
#[must_use]
#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]
pub struct Id {
	thread: NonZeroU64,
	counter: u64,
}

thread_local! {
	static COUNTER: Cell<u64> = const { Cell::new(0) };
}

impl Default for Id {
	fn default() -> Self {
		let counter = COUNTER.get();
		COUNTER.set(counter.wrapping_add(1));
		Self {
			thread: threadid(),
			counter,
		}
	}
}

fn threadid() -> NonZeroU64 {
	use std::hash::{Hash, Hasher};

	struct Extractor {
		id: u64,
	}

	impl Hasher for Extractor {
		fn finish(&self) -> u64 {
			self.id
		}

		fn write(&mut self, _bytes: &[u8]) {}

		fn write_u64(&mut self, n: u64) {
			self.id = n;
		}
	}

	let mut ex = Extractor { id: 0 };
	std::thread::current().id().hash(&mut ex);

	// SAFETY: guaranteed to be > 0
	// safeguarded by the max(1), but this is already guaranteed by the thread id being a NonZeroU64
	// internally; as that guarantee is not stable, we do make sure, just to be on the safe side.
	unsafe { NonZeroU64::new_unchecked(ex.finish().max(1)) }
}

// Replace with this when the thread_id_value feature is stable
// fn threadid() -> NonZeroU64 {
// 	std::thread::current().id().as_u64()
// }

#[test]
fn test_threadid() {
	let top = threadid();
	std::thread::spawn(move || {
		assert_ne!(top, threadid());
	})
	.join()
	.expect("thread failed");
}

================================================
FILE: crates/lib/src/late_join_set.rs
================================================

use std::future::Future;

use futures::{stream::FuturesUnordered, StreamExt};
use tokio::task::{JoinError, JoinHandle};

/// A collection of tasks spawned on a Tokio runtime.
///
/// This is conceptually a variant of Tokio's [`JoinSet`](tokio::task::JoinSet) which can attach
/// tasks after they've been spawned.
///
/// # Examples
///
/// Spawn multiple tasks and wait for them.
///
/// ```no_compile
/// use crate::late_join_set::LateJoinSet;
///
/// #[tokio::main]
/// async fn main() {
///     let mut set = LateJoinSet::default();
///
///     for i in 0..10 {
///         set.spawn(async move { println!("{i}"); });
///     }
///
///     let mut seen = [false; 10];
///     while let Some(res) = set.join_next().await {
///         let idx = res.unwrap();
///         seen[idx] = true;
///     }
///
///     for i in 0..10 {
///         assert!(seen[i]);
///     }
/// }
/// ```
///
/// Attach a task to a set after it's been spawned.
///
/// ```no_compile
/// use crate::late_join_set::LateJoinSet;
///
/// #[tokio::main]
/// async fn main() {
///     let mut set = LateJoinSet::default();
///
///     let handle = tokio::spawn(async move { println!("Hello, world!"); });
///     set.insert(handle);
///     set.abort_all();
/// }
/// ```
#[derive(Debug, Default)]
pub struct LateJoinSet {
	tasks: FuturesUnordered<JoinHandle<()>>,
}

impl LateJoinSet {
	/// Spawn the provided task on the `LateJoinSet`.
	///
	/// The provided future will start running in the background immediately when this method is
	/// called, even if you don't await anything on this `LateJoinSet`.
	///
	/// # Panics
	///
	/// This method panics if called outside of a Tokio runtime.
	#[track_caller]
	pub fn spawn(&self, task: impl Future<Output = ()> + Send + 'static) {
		self.insert(tokio::spawn(task));
	}

	/// Insert an already-spawned task into the [`LateJoinSet`].
	pub fn insert(&self, task: JoinHandle<()>) {
		self.tasks.push(task);
	}

	/// Waits until one of the tasks in the set completes.
	///
	/// Returns `None` if the set is empty.
	pub async fn join_next(&mut self) -> Option<Result<(), JoinError>> {
		self.tasks.next().await
	}

	/// Waits until all the tasks in the set complete.
	///
	/// Ignores any panics in the tasks shutting down.
	pub async fn join_all(&mut self) {
		while self.join_next().await.is_some() {}
		self.tasks.clear();
	}

	/// Aborts all tasks on this `LateJoinSet`.
	///
	/// This does not remove the tasks from the `LateJoinSet`. To wait for the tasks to complete
	/// cancellation, use `join_all` or call `join_next` in a loop until the `LateJoinSet` is empty.
	pub fn abort_all(&self) {
		self.tasks.iter().for_each(JoinHandle::abort);
	}
}

impl Drop for LateJoinSet {
	fn drop(&mut self) {
		self.abort_all();
		self.tasks.clear();
	}
}

================================================
FILE: crates/lib/src/lib.rs
================================================

//! Watchexec: a library for utilities and programs which respond to (file, signal, etc) events
//! primarily by launching or managing other programs.
//!
//! Also see the CLI tool: <https://watchexec.github.io/>
//!
//! This library is powered by [Tokio](https://tokio.rs).
//!
//! The main way to use this crate involves constructing a [`Watchexec`] around a [`Config`], then
//! running it. Handlers (defined in [`Config`]) are used to hook into Watchexec at various points.
//! The config can be changed at any time with the `config` field on your [`Watchexec`] instance.
//!
//! It's recommended to use the [miette] erroring library in applications, but all errors implement
//! [`std::error::Error`] so your favourite error handling library can of course be used.
//!
//! ```no_run
//!
use miette::{IntoDiagnostic, Result};
//! use watchexec_signals::Signal;
//! use watchexec::Watchexec;
//!
//! #[tokio::main]
//! async fn main() -> Result<()> {
//!     let wx = Watchexec::new(|mut action| {
//!         // print any events
//!         for event in action.events.iter() {
//!             eprintln!("EVENT: {event:?}");
//!         }
//!
//!         // if Ctrl-C is received, quit
//!         if action.signals().any(|sig| sig == Signal::Interrupt) {
//!             action.quit();
//!         }
//!
//!         action
//!     })?;
//!
//!     // watch the current directory
//!     wx.config.pathset(["."]);
//!
//!     wx.main().await.into_diagnostic()?;
//!     Ok(())
//! }
//! ```
//!
//! Alternatively, you can use the modules exposed by the crate and the external crates such as
//! [`notify`], [`clearscreen`](https://docs.rs/clearscreen), [`process_wrap`]... to build something
//! more advanced, at the cost of reimplementing the glue code.
//!
//! Note that the library generates a _lot_ of debug messaging with [tracing]. **You should not
//! enable printing even `error`-level log messages for this crate unless it's for debugging.**
//! Instead, make use of the [`Config::on_error()`] method to define a handler for errors
//! occurring at runtime that are _meant_ for you to handle (by printing out or otherwise).
#![doc(html_favicon_url = "https://watchexec.github.io/logo:watchexec.svg")] #![doc(html_logo_url = "https://watchexec.github.io/logo:watchexec.svg")] #![warn(clippy::unwrap_used, missing_docs)] #![cfg_attr(not(test), warn(unused_crate_dependencies))] #![deny(rust_2018_idioms)] // the toolkit to make your own pub mod action; pub mod error; pub mod filter; pub mod paths; pub mod sources; // the core experience pub mod changeable; pub mod config; mod id; mod late_join_set; mod watched_path; mod watchexec; #[doc(inline)] pub use crate::{ id::Id, watched_path::WatchedPath, watchexec::{ErrorHook, Watchexec}, }; #[doc(no_inline)] pub use crate::config::Config; #[doc(no_inline)] pub use watchexec_supervisor::{command, job}; #[cfg(debug_assertions)] #[doc(hidden)] pub mod readme_doc_check { #[doc = include_str!("../README.md")] pub struct Readme; } ================================================ FILE: crates/lib/src/paths.rs ================================================ //! Utilities for paths and sets of paths. use std::{ collections::{HashMap, HashSet}, ffi::OsString, path::{Path, PathBuf}, }; use watchexec_events::{Event, FileType, Tag}; /// The separator for paths used in environment variables. #[cfg(unix)] pub const PATH_SEPARATOR: &str = ":"; /// The separator for paths used in environment variables. #[cfg(not(unix))] pub const PATH_SEPARATOR: &str = ";"; /// Returns the longest common prefix of all given paths. /// /// This is a utility function which is useful for finding the common root of a set of origins. /// /// Returns `None` if zero paths are given or paths share no common prefix. 
pub fn common_prefix<I, P>(paths: I) -> Option<PathBuf>
where
	I: IntoIterator<Item = P>,
	P: AsRef<Path>,
{
	let mut paths = paths.into_iter();
	let first_path = paths.next().map(|p| p.as_ref().to_owned());
	let mut longest_path = if let Some(ref p) = first_path {
		p.components().collect::<Vec<_>>()
	} else {
		return None;
	};

	for path in paths {
		let mut greatest_distance = 0;
		for component_pair in path.as_ref().components().zip(longest_path.iter()) {
			if component_pair.0 != *component_pair.1 {
				break;
			}

			greatest_distance += 1;
		}

		if greatest_distance != longest_path.len() {
			longest_path.truncate(greatest_distance);
		}
	}

	if longest_path.is_empty() {
		None
	} else {
		let mut result = PathBuf::new();
		for component in longest_path {
			result.push(component.as_os_str());
		}

		Some(result)
	}
}

/// Summarise [`Event`]s as a set of environment variables by category.
///
/// - `CREATED` -> `Create(_)`
/// - `META_CHANGED` -> `Modify(Metadata(_))`
/// - `REMOVED` -> `Remove(_)`
/// - `RENAMED` -> `Modify(Name(_))`
/// - `WRITTEN` -> `Modify(Data(_))`, `Access(Close(Write))`
/// - `OTHERWISE_CHANGED` -> anything else
/// - plus `COMMON` with the common prefix of all paths (even if there's only one path).
///
/// It ignores non-path events and pathed events without event kind. Multiple events are sorted in
/// byte order and joined with the platform-specific path separator (`:` for unix, `;` for Windows).
pub fn summarise_events_to_env<'events>(
	events: impl IntoIterator<Item = &'events Event>,
) -> HashMap<&'static str, OsString> {
	let mut all_trunks = Vec::new();
	let mut kind_buckets = HashMap::new();
	for event in events {
		let (paths, trunks): (Vec<_>, Vec<_>) = event
			.paths()
			.map(|(p, ft)| {
				(
					p.to_owned(),
					match ft {
						Some(FileType::Dir) => None,
						_ => p.parent(),
					}
					.unwrap_or(p)
					.to_owned(),
				)
			})
			.unzip();

		tracing::trace!(?paths, ?trunks, "event paths");

		if paths.is_empty() {
			continue;
		}

		all_trunks.extend(trunks.clone());

		// usually there's only one but just in case
		for kind in event.tags.iter().filter_map(|t| {
			if let Tag::FileEventKind(kind) = t {
				Some(kind)
			} else {
				None
			}
		}) {
			kind_buckets
				.entry(kind)
				.or_insert_with(HashSet::new)
				.extend(paths.clone());
		}
	}

	let common_path = common_prefix(all_trunks);

	let mut grouped_buckets = HashMap::new();
	for (kind, paths) in kind_buckets {
		use notify::event::{AccessKind::*, AccessMode::*, EventKind::*, ModifyKind::*};
		grouped_buckets
			.entry(match kind {
				Modify(Data(_)) | Access(Close(Write)) => "WRITTEN",
				Modify(Metadata(_)) => "META_CHANGED",
				Remove(_) => "REMOVED",
				Create(_) => "CREATED",
				Modify(Name(_)) => "RENAMED",
				_ => "OTHERWISE_CHANGED",
			})
			.or_insert_with(HashSet::new)
			.extend(paths.into_iter().map(|ref p| {
				common_path
					.as_ref()
					.and_then(|prefix| p.strip_prefix(prefix).ok())
					.map_or_else(
						|| p.clone().into_os_string(),
						|suffix| suffix.as_os_str().to_owned(),
					)
			}));
	}

	let mut res: HashMap<&'static str, OsString> = grouped_buckets
		.into_iter()
		.map(|(kind, paths)| {
			let mut joined =
				OsString::with_capacity(paths.iter().map(|p| p.len()).sum::<usize>() + paths.len());
			let mut paths = paths.into_iter().collect::<Vec<_>>();
			paths.sort();
			paths.into_iter().enumerate().for_each(|(i, path)| {
				if i > 0 {
					joined.push(PATH_SEPARATOR);
				}
				joined.push(path);
			});
			(kind, joined)
		})
		.collect();

	if let Some(common_path) = common_path {
		res.insert("COMMON", common_path.into_os_string());
	}

	res
}

================================================ FILE:
crates/lib/src/sources/fs.rs ================================================ //! Event source for changes to files and directories. use std::{ collections::{HashMap, HashSet}, fs::metadata, mem::take, sync::Arc, time::Duration, }; use async_priority_channel as priority; use normalize_path::NormalizePath; use tokio::sync::mpsc; use tracing::{debug, error, trace}; use watchexec_events::{Event, Priority, Source, Tag}; use crate::{ error::{CriticalError, FsWatcherError, RuntimeError}, Config, }; // re-export for compatibility, until next major version pub use crate::WatchedPath; /// What kind of filesystem watcher to use. /// /// For now only native and poll watchers are supported. In the future there may be additional /// watchers available on some platforms. #[derive(Clone, Copy, Debug, Default, PartialEq, Eq)] #[non_exhaustive] pub enum Watcher { /// The Notify-recommended watcher on the platform. /// /// For platforms Notify supports, that's a [native implementation][notify::RecommendedWatcher], /// for others it's polling with a default interval. #[default] Native, /// Notify’s [poll watcher][notify::PollWatcher] with a custom interval. 
	Poll(Duration),
}

impl Watcher {
	fn create(
		self,
		f: impl notify::EventHandler,
	) -> Result<Box<dyn notify::Watcher + Send>, CriticalError> {
		use notify::{Config, Watcher as _};
		match self {
			Self::Native => {
				notify::RecommendedWatcher::new(f, Config::default()).map(|w| Box::new(w) as _)
			}
			Self::Poll(delay) => {
				notify::PollWatcher::new(f, Config::default().with_poll_interval(delay))
					.map(|w| Box::new(w) as _)
			}
		}
		.map_err(|err| CriticalError::FsWatcherInit {
			kind: self,
			err: if cfg!(target_os = "linux")
				&& (matches!(err.kind, notify::ErrorKind::MaxFilesWatch)
					|| matches!(err.kind, notify::ErrorKind::Io(ref ioerr) if ioerr.raw_os_error() == Some(28)))
			{
				FsWatcherError::TooManyWatches(err)
			} else if cfg!(target_os = "linux")
				&& matches!(err.kind, notify::ErrorKind::Io(ref ioerr) if ioerr.raw_os_error() == Some(24))
			{
				FsWatcherError::TooManyHandles(err)
			} else {
				FsWatcherError::Create(err)
			},
		})
	}
}

/// Launch the filesystem event worker.
///
/// While you can run several, you should only have one.
///
/// This only does a bare minimum of setup; to actually start the work, you need to set a non-empty
/// pathset in the [`Config`].
///
/// Note that the paths emitted by the watcher are normalised. No guarantee is made about the
/// implementation or output of that normalisation (it may change without notice).
///
/// # Examples
///
/// Direct usage:
///
/// ```no_run
/// use async_priority_channel as priority;
/// use tokio::sync::mpsc;
/// use watchexec::{Config, sources::fs::worker};
///
/// #[tokio::main]
/// async fn main() -> Result<(), Box<dyn std::error::Error>> {
///     let (ev_s, _) = priority::bounded(1024);
///     let (er_s, _) = mpsc::channel(64);
///
///     let config = Config::default();
///     config.pathset(["."]);
///
///     worker(config.into(), er_s, ev_s).await?;
///     Ok(())
/// }
/// ```
pub async fn worker(
	config: Arc<Config>,
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
) -> Result<(), CriticalError> {
	debug!("launching filesystem worker");

	let mut watcher_type = Watcher::default();
	let mut watcher = None;
	let mut pathset = HashSet::new();

	let mut config_watch = config.watch();
	loop {
		config_watch.next().await;
		trace!("filesystem worker got a config change");

		if config.pathset.get().is_empty() {
			trace!(
				"{}",
				if pathset.is_empty() {
					"no watched paths, no watcher needed"
				} else {
					"no more watched paths, dropping watcher"
				}
			);

			watcher.take();
			pathset.clear();
			let _ = config.fs_ready.send(());
			continue;
		}

		// now we know the watcher should be alive, so let's start it if it's not already:
		let config_watcher = config.file_watcher.get();
		if watcher.is_none() || watcher_type != config_watcher {
			debug!(kind=?config_watcher, "creating new watcher");
			let n_errors = errors.clone();
			let n_events = events.clone();
			watcher_type = config_watcher;
			watcher = config_watcher
				.create(move |nev: Result<notify::Event, notify::Error>| {
					trace!(event = ?nev, "receiving possible event from watcher");
					if let Err(e) = process_event(nev, config_watcher, &n_events) {
						n_errors.try_send(e).ok();
					}
				})
				.map(Some)?;
		}

		// now let's calculate which paths we should add to the watch, and which we should drop:
		let config_pathset = config.pathset.get();
		tracing::info!(?config_pathset, "obtaining pathset");
		let (to_watch, to_drop) = if pathset.is_empty() {
			// if the current pathset is empty, we can take a shortcut
			(config_pathset, Vec::new())
		} else {
			let mut
to_watch = Vec::with_capacity(config_pathset.len());
			let mut to_drop = Vec::with_capacity(pathset.len());

			for path in &pathset {
				if !config_pathset.contains(path) {
					to_drop.push(path.clone()); // try dropping the clone?
				}
			}

			for path in config_pathset {
				if !pathset.contains(&path) {
					to_watch.push(path);
				}
			}

			(to_watch, to_drop)
		};

		// now apply it to the watcher
		let Some(watcher) = watcher.as_mut() else {
			panic!("BUG: watcher should exist at this point");
		};

		debug!(?to_watch, ?to_drop, "applying changes to the watcher");

		for path in to_drop {
			trace!(?path, "removing path from the watcher");
			if let Err(err) = watcher.unwatch(path.path.as_ref()) {
				error!(?err, "notify unwatch() error");
				for e in notify_multi_path_errors(watcher_type, path, err, true) {
					errors.send(e).await?;
				}
			} else {
				pathset.remove(&path);
			}
		}

		for path in to_watch {
			trace!(?path, "adding path to the watcher");
			if let Err(err) = watcher.watch(
				path.path.as_ref(),
				if path.recursive {
					notify::RecursiveMode::Recursive
				} else {
					notify::RecursiveMode::NonRecursive
				},
			) {
				error!(?err, "notify watch() error");
				for e in notify_multi_path_errors(watcher_type, path, err, false) {
					errors.send(e).await?;
				}
			} else {
				pathset.insert(path);
			}
		}

		let _ = config.fs_ready.send(());
	}
}

fn notify_multi_path_errors(
	kind: Watcher,
	watched_path: WatchedPath,
	mut err: notify::Error,
	rm: bool,
) -> Vec<RuntimeError> {
	let mut paths = take(&mut err.paths);
	if paths.is_empty() {
		paths.push(watched_path.into());
	}

	let generic = err.to_string();
	let mut err = Some(err);

	let mut errs = Vec::with_capacity(paths.len());
	for path in paths {
		let e = err
			.take()
			.unwrap_or_else(|| notify::Error::generic(&generic))
			.add_path(path.clone());

		errs.push(RuntimeError::FsWatcher {
			kind,
			err: if rm {
				FsWatcherError::PathRemove { path, err: e }
			} else {
				FsWatcherError::PathAdd { path, err: e }
			},
		});
	}

	errs
}

fn process_event(
	nev: Result<notify::Event, notify::Error>,
	kind: Watcher,
	n_events: &priority::Sender<Event, Priority>,
) -> Result<(), RuntimeError> {
	let nev = nev.map_err(|err|
RuntimeError::FsWatcher { kind, err: FsWatcherError::Event(err), })?; let mut tags = Vec::with_capacity(4); tags.push(Tag::Source(Source::Filesystem)); tags.push(Tag::FileEventKind(nev.kind)); for path in nev.paths { // possibly pull file_type from whatever notify (or the native driver) returns? tags.push(Tag::Path { file_type: metadata(&path).ok().map(|m| m.file_type().into()), path: path.normalize(), }); } if let Some(pid) = nev.attrs.process_id() { tags.push(Tag::Process(pid)); } let mut metadata = HashMap::new(); if let Some(uid) = nev.attrs.info() { metadata.insert("file-event-info".to_string(), vec![uid.to_string()]); } if let Some(src) = nev.attrs.source() { metadata.insert("notify-backend".to_string(), vec![src.to_string()]); } let ev = Event { tags, metadata }; trace!(event = ?ev, "processed notify event into watchexec event"); n_events .try_send(ev, Priority::Normal) .map_err(|err| RuntimeError::EventChannelTrySend { ctx: "fs watcher", err, })?; Ok(()) } ================================================ FILE: crates/lib/src/sources/keyboard.rs ================================================ //! Event source for keyboard input and related events use std::io::Read; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; use async_priority_channel as priority; use tokio::{ spawn, sync::{mpsc, oneshot}, }; use tracing::trace; use watchexec_events::{Event, KeyCode, Keyboard, Modifiers, Priority, Source, Tag}; use crate::{ error::{CriticalError, RuntimeError}, Config, }; /// Launch the keyboard event worker. /// /// While you can run several, you should only have one. 
///
/// Sends keyboard events to the provided `events` channel.
pub async fn worker(
	config: Arc<Config>,
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
) -> Result<(), CriticalError> {
	let mut send_close = None;

	let mut config_watch = config.watch();
	loop {
		config_watch.next().await;

		let want_keyboard = config.keyboard_events.get();
		match (want_keyboard, &send_close) {
			// if we want to watch stdin and we're not already watching it then spawn a task to watch it
			(true, None) => {
				let (close_s, close_r) = oneshot::channel::<()>();
				send_close = Some(close_s);
				spawn(watch_stdin(errors.clone(), events.clone(), close_r));
			}
			// if we don't want to watch stdin but we are already watching it then send a close signal to end
			// the watching
			(false, Some(_)) => {
				// ignore send error as if channel is closed watch is already gone
				send_close
					.take()
					.expect("unreachable due to match")
					.send(())
					.ok();
			}
			// otherwise no action is required
			_ => {}
		}
	}
}

#[cfg(unix)]
mod raw_mode {
	use std::os::fd::AsRawFd;

	/// Stored original termios to restore on drop.
	pub struct RawModeGuard {
		fd: i32,
		original: libc::termios,
	}

	impl RawModeGuard {
		/// Switch stdin to raw mode. Returns None if stdin is not a TTY.
		pub fn enter() -> Option<Self> {
			let fd = std::io::stdin().as_raw_fd();
			// SAFETY: isatty, tcgetattr, cfmakeraw, and tcsetattr are POSIX standard
			// functions operating on a valid fd (stdin). We check return values before
			// proceeding. The original termios is saved and restored in Drop.
			unsafe {
				if libc::isatty(fd) == 0 {
					return None;
				}

				let mut original: libc::termios = std::mem::zeroed();
				if libc::tcgetattr(fd, &mut original) != 0 {
					return None;
				}

				let mut raw = original;
				libc::cfmakeraw(&mut raw);
				// Re-enable output post-processing so \n still maps to \r\n
				raw.c_oflag |= libc::OPOST;

				// Non-blocking reads: return after 100ms if no input available.
				// This ensures the tokio blocking thread doesn't park forever,
				// allowing graceful shutdown when the close signal is received.
				raw.c_cc[libc::VMIN] = 0;
				raw.c_cc[libc::VTIME] = 1;

				if libc::tcsetattr(fd, libc::TCSANOW, &raw) != 0 {
					return None;
				}

				Some(Self { fd, original })
			}
		}
	}

	impl Drop for RawModeGuard {
		fn drop(&mut self) {
			// SAFETY: restoring the original termios saved in enter() on the same fd.
			unsafe {
				libc::tcsetattr(self.fd, libc::TCSANOW, &self.original);
			}
		}
	}
}

#[cfg(windows)]
mod raw_mode {
	use windows_sys::Win32::Foundation::{HANDLE, INVALID_HANDLE_VALUE};
	use windows_sys::Win32::System::Console::{
		GetConsoleMode, GetStdHandle, SetConsoleMode, ENABLE_ECHO_INPUT, ENABLE_LINE_INPUT,
		ENABLE_PROCESSED_INPUT, STD_INPUT_HANDLE,
	};

	/// Stored original console mode to restore on drop.
	pub struct RawModeGuard {
		handle: HANDLE,
		original_mode: u32,
	}

	// SAFETY: HANDLE is a process-global value (stdin) that is safe to use from any thread.
	unsafe impl Send for RawModeGuard {}

	impl RawModeGuard {
		/// Switch stdin to raw-like mode. Returns None if stdin is not a console.
		pub fn enter() -> Option<Self> {
			// SAFETY: GetStdHandle, GetConsoleMode, and SetConsoleMode are Windows Console
			// API functions. We check return values before proceeding. The handle is valid
			// for the lifetime of the process. The original mode is saved and restored in Drop.
			unsafe {
				let handle = GetStdHandle(STD_INPUT_HANDLE);
				if handle == INVALID_HANDLE_VALUE || handle.is_null() {
					return None;
				}

				let mut original_mode: u32 = 0;
				if GetConsoleMode(handle, &mut original_mode) == 0 {
					return None;
				}

				// Disable line input, echo, and Ctrl+C signal processing
				let raw_mode = original_mode
					& !(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT);

				if SetConsoleMode(handle, raw_mode) == 0 {
					return None;
				}

				Some(Self {
					handle,
					original_mode,
				})
			}
		}
	}

	impl Drop for RawModeGuard {
		fn drop(&mut self) {
			// SAFETY: restoring the original console mode saved in enter() on the same handle.
			unsafe {
				SetConsoleMode(self.handle, self.original_mode);
			}
		}
	}
}

fn byte_to_keyboard(byte: u8) -> Option<Keyboard> {
	match byte {
		// Ctrl-C / Ctrl-D
		3 | 4 => Some(Keyboard::Eof),
		// Enter (byte 13, before Ctrl range to avoid overlap)
		13 => Some(Keyboard::Key {
			key: KeyCode::Enter,
			modifiers: Modifiers::default(),
		}),
		// Ctrl+letter (1-26 excluding 3,4,13 handled above)
		b @ 1..=26 => Some(Keyboard::Key {
			key: KeyCode::Char((b + b'a' - 1) as char),
			modifiers: Modifiers {
				ctrl: true,
				..Default::default()
			},
		}),
		27 => Some(Keyboard::Key {
			key: KeyCode::Escape,
			modifiers: Modifiers::default(),
		}),
		b if char::from(b).is_ascii_graphic() || b == b' ' => Some(Keyboard::Key {
			key: KeyCode::Char(char::from(b)),
			modifiers: Modifiers::default(),
		}),
		_ => None,
	}
}

async fn watch_stdin(
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
	close_r: oneshot::Receiver<()>,
) -> Result<(), CriticalError> {
	// Use an AtomicBool to signal the blocking reader to stop.
	// This avoids tokio::io::stdin() which uses blocking threads that can't be
	// interrupted, causing the process to hang on shutdown (issue #1017).
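The cancellation scheme this comment describes — a dedicated blocking reader thread that checks a shared flag between reads and forwards chunks over a channel — can be sketched with plain std threads. The reader here is generic so a `Cursor` can stand in for stdin; the names are illustrative, not the watchexec API:

```rust
use std::io::Read;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;

// Reads chunks from `input` on a dedicated thread until `cancel` is set or the
// source is exhausted, forwarding each chunk over a channel. Mirrors the shape
// of the stdin watcher above, with a generic reader instead of stdin.
pub fn spawn_reader(
    mut input: impl Read + Send + 'static,
    cancel: Arc<AtomicBool>,
) -> mpsc::Receiver<Vec<u8>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut buffer = [0u8; 10];
        while !cancel.load(Ordering::Relaxed) {
            match input.read(&mut buffer) {
                Ok(0) | Err(_) => break, // EOF or error ends the loop
                Ok(n) => {
                    if tx.send(buffer[..n].to_vec()).is_err() {
                        break; // receiver gone, stop reading
                    }
                }
            }
        }
        // tx drops here, so the receiver's iterator terminates
    });
    rx
}

fn main() {
    let cancel = Arc::new(AtomicBool::new(false));
    let rx = spawn_reader(std::io::Cursor::new(b"hello world".to_vec()), cancel.clone());
    let bytes: Vec<u8> = rx.iter().flatten().collect();
    assert_eq!(bytes, b"hello world");
    // Setting the flag asks any still-running reader to stop at its next iteration.
    cancel.store(true, Ordering::Relaxed);
}
```

The flag only takes effect between reads, which is why the real code pairs it with VMIN=0/VTIME=1 on unix: each `read` returns within ~100ms even with no input, so the loop gets regular chances to observe the cancel flag.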
	let cancel = Arc::new(AtomicBool::new(false));
	let cancel_clone = cancel.clone();
	let (tx, mut rx) = mpsc::channel::<Result<Vec<u8>, ()>>(16);

	// Spawn a blocking task that reads stdin directly
	tokio::task::spawn_blocking(move || {
		#[cfg(any(unix, windows))]
		let _raw_guard = raw_mode::RawModeGuard::enter();

		let mut stdin = std::io::stdin().lock();
		let mut buffer = [0u8; 10];

		while !cancel_clone.load(Ordering::Relaxed) {
			match stdin.read(&mut buffer) {
				Ok(0) => {
					// EOF or VTIME timeout with no data
					// With VMIN=0/VTIME=1, this is a timeout - just loop and check cancel
					#[cfg(any(unix, windows))]
					if _raw_guard.is_some() {
						continue;
					}

					// Real EOF in non-raw mode
					let _ = tx.blocking_send(Ok(vec![]));
					break;
				}
				Ok(n) => {
					if tx.blocking_send(Ok(buffer[..n].to_vec())).is_err() {
						break;
					}
				}
				Err(_) => {
					let _ = tx.blocking_send(Err(()));
					break;
				}
			}
		}
	});

	// Wait for either data from stdin or the close signal
	tokio::select! {
		_ = async {
			'read: while let Some(result) = rx.recv().await {
				match result {
					Ok(bytes) if bytes.is_empty() => {
						// EOF
						let _ = send_event(errors.clone(), events.clone(), Keyboard::Eof).await;
						break;
					}
					Ok(bytes) => {
						for &byte in &bytes {
							if let Some(key) = byte_to_keyboard(byte) {
								let is_eof = matches!(key, Keyboard::Eof);
								let _ = send_event(errors.clone(), events.clone(), key).await;
								if is_eof {
									break 'read;
								}
							}
						}
					}
					Err(()) => break,
				}
			}
		} => {}
		_ = close_r => {}
	}

	// Always signal the blocking thread to stop when we exit
	cancel.store(true, Ordering::Relaxed);

	Ok(())
}

async fn send_event(
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
	msg: Keyboard,
) -> Result<(), CriticalError> {
	let tags = vec![Tag::Source(Source::Keyboard), Tag::Keyboard(msg)];
	let event = Event {
		tags,
		metadata: Default::default(),
	};

	trace!(?event, "processed keyboard input into event");
	if let Err(err) = events.send(event, Priority::Normal).await {
		errors
			.send(RuntimeError::EventChannelSend {
				ctx: "keyboard",
				err,
			})
			.await?;
	}

	Ok(())
}

================================================ FILE:
crates/lib/src/sources/signal.rs
================================================

//! Event source for signals / notifications sent to the main process.

use std::sync::Arc;

use async_priority_channel as priority;
use tokio::{select, sync::mpsc};
use tracing::{debug, trace};
use watchexec_events::{Event, Priority, Source, Tag};
use watchexec_signals::Signal;

use crate::{
	error::{CriticalError, RuntimeError},
	Config,
};

/// Launch the signal event worker.
///
/// While you _could_ run several (it won't panic), you **must** only have one (for correctness).
/// This may be enforced later.
///
/// # Examples
///
/// Direct usage:
///
/// ```no_run
/// use tokio::sync::mpsc;
/// use async_priority_channel as priority;
/// use watchexec::sources::signal::worker;
///
/// #[tokio::main]
/// async fn main() -> Result<(), Box<dyn std::error::Error>> {
///     let (ev_s, _) = priority::bounded(1024);
///     let (er_s, _) = mpsc::channel(64);
///
///     worker(Default::default(), er_s, ev_s).await?;
///     Ok(())
/// }
/// ```
pub async fn worker(
	config: Arc<Config>,
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
) -> Result<(), CriticalError> {
	imp_worker(config, errors, events).await
}

#[cfg(unix)]
async fn imp_worker(
	_config: Arc<Config>,
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
) -> Result<(), CriticalError> {
	use tokio::signal::unix::{signal, SignalKind};

	debug!("launching unix signal worker");

	macro_rules! listen {
		($sig:ident) => {{
			trace!(kind=%stringify!($sig), "listening for unix signal");
			signal(SignalKind::$sig()).map_err(|err| CriticalError::IoError {
				about: concat!("setting ", stringify!($sig), " signal listener"),
				err
			})?
		}}
	}

	let mut s_hangup = listen!(hangup);
	let mut s_interrupt = listen!(interrupt);
	let mut s_quit = listen!(quit);
	let mut s_terminate = listen!(terminate);
	let mut s_user1 = listen!(user_defined1);
	let mut s_user2 = listen!(user_defined2);

	loop {
		let sig = select!(
			_ = s_hangup.recv() => Signal::Hangup,
			_ = s_interrupt.recv() => Signal::Interrupt,
			_ = s_quit.recv() => Signal::Quit,
			_ = s_terminate.recv() => Signal::Terminate,
			_ = s_user1.recv() => Signal::User1,
			_ = s_user2.recv() => Signal::User2,
		);

		debug!(?sig, "received unix signal");
		send_event(errors.clone(), events.clone(), sig).await?;
	}
}

#[cfg(windows)]
async fn imp_worker(
	_config: Arc<Config>,
	errors: mpsc::Sender<RuntimeError>,
	events: priority::Sender<Event, Priority>,
) -> Result<(), CriticalError> {
	use tokio::signal::windows::{ctrl_break, ctrl_c};

	debug!("launching windows signal worker");

	macro_rules! listen {
		($sig:ident) => {{
			trace!(kind=%stringify!($sig), "listening for windows process notification");
			$sig().map_err(|err| CriticalError::IoError {
				about: concat!("setting ", stringify!($sig), " signal listener"),
				err
			})?
}} } let mut sigint = listen!(ctrl_c); let mut sigbreak = listen!(ctrl_break); loop { let sig = select!( _ = sigint.recv() => Signal::Interrupt, _ = sigbreak.recv() => Signal::Terminate, ); debug!(?sig, "received windows process notification"); send_event(errors.clone(), events.clone(), sig).await?; } } async fn send_event( errors: mpsc::Sender, events: priority::Sender, sig: Signal, ) -> Result<(), CriticalError> { let tags = vec![ Tag::Source(if sig == Signal::Interrupt { Source::Keyboard } else { Source::Os }), Tag::Signal(sig), ]; let event = Event { tags, metadata: Default::default(), }; trace!(?event, "processed signal into event"); if let Err(err) = events .send( event, match sig { Signal::Interrupt | Signal::Terminate => Priority::Urgent, _ => Priority::High, }, ) .await { errors .send(RuntimeError::EventChannelSend { ctx: "signals", err, }) .await?; } Ok(()) } ================================================ FILE: crates/lib/src/sources.rs ================================================ //! Sources of events. pub mod fs; pub mod keyboard; pub mod signal; ================================================ FILE: crates/lib/src/watched_path.rs ================================================ use std::path::{Path, PathBuf}; /// A path to watch. /// /// Can be a recursive or non-recursive watch. 
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct WatchedPath {
	pub(crate) path: PathBuf,
	pub(crate) recursive: bool,
}

impl From<PathBuf> for WatchedPath {
	fn from(path: PathBuf) -> Self {
		Self {
			path,
			recursive: true,
		}
	}
}

impl From<&str> for WatchedPath {
	fn from(path: &str) -> Self {
		Self {
			path: path.into(),
			recursive: true,
		}
	}
}

impl From<String> for WatchedPath {
	fn from(path: String) -> Self {
		Self {
			path: path.into(),
			recursive: true,
		}
	}
}

impl From<&Path> for WatchedPath {
	fn from(path: &Path) -> Self {
		Self {
			path: path.into(),
			recursive: true,
		}
	}
}

impl From<WatchedPath> for PathBuf {
	fn from(path: WatchedPath) -> Self {
		path.path
	}
}

impl From<&WatchedPath> for PathBuf {
	fn from(path: &WatchedPath) -> Self {
		path.path.clone()
	}
}

impl AsRef<Path> for WatchedPath {
	fn as_ref(&self) -> &Path {
		self.path.as_ref()
	}
}

impl WatchedPath {
	/// Create a new watched path, recursively descending into subdirectories.
	pub fn recursive(path: impl Into<PathBuf>) -> Self {
		Self {
			path: path.into(),
			recursive: true,
		}
	}

	/// Create a new watched path, not descending into subdirectories.
	pub fn non_recursive(path: impl Into<PathBuf>) -> Self {
		Self {
			path: path.into(),
			recursive: false,
		}
	}
}

================================================
FILE: crates/lib/src/watchexec.rs
================================================
use std::{
	fmt,
	future::Future,
	sync::{Arc, OnceLock},
};

use async_priority_channel as priority;
use atomic_take::AtomicTake;
use futures::TryFutureExt;
use miette::Diagnostic;
use tokio::{
	spawn,
	sync::{mpsc, Notify},
	task::{JoinHandle, JoinSet},
};
use tracing::{debug, error, trace};
use watchexec_events::{Event, Priority};

use crate::{
	action::{self, ActionHandler},
	changeable::ChangeableFn,
	error::{CriticalError, RuntimeError},
	sources::{fs, keyboard, signal},
	Config,
};

/// The main watchexec runtime.
///
/// All this really does is tie the pieces together in one convenient interface.
///
/// It creates the correct channels, spawns every available event source, the action worker, and
/// the error hook, and provides an interface to change the runtime configuration at runtime,
/// inject synthetic events, and wait for graceful shutdown.
pub struct Watchexec {
	/// The configuration of this Watchexec instance.
	///
	/// Configuration can be changed at any time using the provided methods on [`Config`].
	///
	/// Treat this field as readonly: replacing it with a different instance of `Config` will not do
	/// anything except potentially lose you access to the actual Watchexec config. In normal use
	/// you'll have obtained `Watchexec` behind an `Arc` so that won't be an issue.
	///
	/// # Examples
	///
	/// Change the action handler:
	///
	/// ```no_run
	/// # use watchexec::Watchexec;
	/// let wx = Watchexec::default();
	/// wx.config.on_action(|mut action| {
	///     if action.signals().next().is_some() {
	///         action.quit();
	///     }
	///
	///     action
	/// });
	/// ```
	///
	/// Set paths to be watched:
	///
	/// ```no_run
	/// # use watchexec::Watchexec;
	/// let wx = Watchexec::new(|mut action| {
	///     if action.signals().next().is_some() {
	///         action.quit();
	///     } else {
	///         for event in action.events.iter() {
	///             println!("{event:?}");
	///         }
	///     }
	///
	///     action
	/// }).unwrap();
	///
	/// wx.config.pathset(["."]);
	/// ```
	pub config: Arc<Config>,

	start_lock: Arc<Notify>,
	event_input: priority::Sender<Event, Priority>,
	handle: Arc<AtomicTake<JoinHandle<Result<(), CriticalError>>>>,
}

impl Default for Watchexec {
	/// Instantiate with default config.
	///
	/// Note that this will panic if the constructor errors.
	///
	/// Prefer calling `new()` instead.
	fn default() -> Self {
		Self::with_config(Default::default()).expect("Use Watchexec::new() to avoid this panic")
	}
}

impl fmt::Debug for Watchexec {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("Watchexec").finish_non_exhaustive()
	}
}

impl Watchexec {
	/// Instantiates a new `Watchexec` runtime given an initial action handler.
	///
	/// Returns an [`Arc`] for convenience; use [`try_unwrap`][Arc::try_unwrap()] to get the value
	/// directly if needed, or use `with_config`.
	///
	/// Look at the [`Config`] documentation for more on the required action handler.
	/// Watchexec will subscribe to most signals sent to the process it runs in and send them, as
	/// [`Event`]s, to the action handler. At minimum, you should check for interrupt/ctrl-c events
	/// and call `action.quit()` in your handler, otherwise hitting ctrl-c will do nothing.
	pub fn new(
		action_handler: impl (Fn(ActionHandler) -> ActionHandler) + Send + Sync + 'static,
	) -> Result<Arc<Self>, CriticalError> {
		let config = Config::default();
		config.on_action(action_handler);

		Self::with_config(config).map(Arc::new)
	}

	/// Instantiates a new `Watchexec` runtime given an initial async action handler.
	///
	/// This is the same as [`new`](fn@Self::new) except the action handler is async.
	pub fn new_async(
		action_handler: impl (Fn(ActionHandler) -> Box<dyn Future<Output = ActionHandler> + Send + Sync>)
			+ Send
			+ Sync
			+ 'static,
	) -> Result<Arc<Self>, CriticalError> {
		let config = Config::default();
		config.on_action_async(action_handler);

		Self::with_config(config).map(Arc::new)
	}

	/// Instantiates a new `Watchexec` runtime with a config.
	///
	/// This is generally not needed: the config can be changed after instantiation (before and
	/// after _starting_ Watchexec with `main()`). The only time this should be used is to set the
	/// "unchangeable" configuration items for internal details like buffer sizes for queues, or to
	/// obtain Self unwrapped by an Arc like `new()` does.
	pub fn with_config(config: Config) -> Result<Self, CriticalError> {
		debug!(?config, pid=%std::process::id(), version=%env!("CARGO_PKG_VERSION"), "initialising");
		let config = Arc::new(config);
		let outer_config = config.clone();

		let notify = Arc::new(Notify::new());
		let start_lock = notify.clone();

		let (ev_s, ev_r) =
			priority::bounded(config.event_channel_size.try_into().unwrap_or(u64::MAX));
		let event_input = ev_s.clone();

		trace!("creating main task");
		let handle = spawn(async move {
			trace!("waiting for start lock");
			notify.notified().await;
			debug!("starting main task");

			let (er_s, er_r) = mpsc::channel(config.error_channel_size);

			let mut tasks = JoinSet::new();
			tasks.spawn(action::worker(config.clone(), er_s.clone(), ev_r).map_ok(|()| "action"));
			tasks.spawn(fs::worker(config.clone(), er_s.clone(), ev_s.clone()).map_ok(|()| "fs"));
			tasks.spawn(
				signal::worker(config.clone(), er_s.clone(), ev_s.clone()).map_ok(|()| "signal"),
			);
			tasks.spawn(
				keyboard::worker(config.clone(), er_s.clone(), ev_s.clone())
					.map_ok(|()| "keyboard"),
			);
			tasks.spawn(error_hook(er_r, config.error_handler.clone()).map_ok(|()| "error"));

			while let Some(Ok(res)) = tasks.join_next().await {
				match res {
					Ok("action") => {
						debug!("action worker exited, ending watchexec");
						break;
					}
					Ok(task) => {
						debug!(task, "worker exited");
					}
					Err(CriticalError::Exit) => {
						trace!("got graceful exit request via critical error, erasing the error");
						// Close event channel to signal worker task to stop
						ev_s.close();
					}
					Err(e) => {
						return Err(e);
					}
				}
			}

			debug!("main task graceful exit");
			tasks.shutdown().await;
			Ok(())
		});

		trace!("done with setup");
		Ok(Self {
			config: outer_config,
			start_lock,
			event_input,
			handle: Arc::new(AtomicTake::new(handle)),
		})
	}

	/// Inputs an [`Event`] directly.
	///
	/// This can be useful for testing, for custom event sources, or for one-off action triggers
	/// (for example, on start).
	///
	/// Hint: use [`Event::default()`] to send an empty event (which won't be filtered).
	pub async fn send_event(&self, event: Event, priority: Priority) -> Result<(), CriticalError> {
		self.event_input.send(event, priority).await?;
		Ok(())
	}

	/// Start watchexec and obtain the handle to its main task.
	///
	/// This must only be called once.
	///
	/// # Panics
	/// Panics if called twice.
	pub fn main(&self) -> JoinHandle<Result<(), CriticalError>> {
		trace!("notifying start lock");
		self.start_lock.notify_one();

		debug!("handing over main task handle");
		self.handle
			.take()
			.expect("Watchexec::main was called twice")
	}
}

async fn error_hook(
	mut errors: mpsc::Receiver<RuntimeError>,
	handler: ChangeableFn<ErrorHook>,
) -> Result<(), CriticalError> {
	while let Some(err) = errors.recv().await {
		if matches!(err, RuntimeError::Exit) {
			trace!("got graceful exit request via runtime error, upgrading to crit");
			return Err(CriticalError::Exit);
		}

		error!(%err, "runtime error");
		let payload = ErrorHook::new(err);
		let crit = payload.critical.clone();
		handler.call(payload);
		ErrorHook::handle_crit(crit)?;
	}

	Ok(())
}

/// The environment given to the error handler.
///
/// This deliberately does not implement Clone to make it hard to move it out of the handler, which
/// you should not do.
///
/// The [`ErrorHook::critical()`] method should be used to send a [`CriticalError`], which will
/// terminate watchexec. This is useful to e.g. upgrade certain errors to be fatal.
///
/// Note that returning errors from the error handler does not result in critical errors.
#[derive(Debug)]
pub struct ErrorHook {
	/// The runtime error for which this handler was called.
	pub error: RuntimeError,
	critical: Arc<OnceLock<CriticalError>>,
}

impl ErrorHook {
	fn new(error: RuntimeError) -> Self {
		Self {
			error,
			critical: Default::default(),
		}
	}

	fn handle_crit(crit: Arc<OnceLock<CriticalError>>) -> Result<(), CriticalError> {
		match Arc::try_unwrap(crit) {
			Err(err) => {
				error!(?err, "error handler hook has an outstanding ref");
				Ok(())
			}
			Ok(crit) => crit.into_inner().map_or_else(
				|| Ok(()),
				|crit| {
					debug!(%crit, "error handler output a critical error");
					Err(crit)
				},
			),
		}
	}

	/// Set a critical error to be emitted.
	///
	/// This takes `self` and `ErrorHook` is not `Clone`, so it's only possible to call it once.
	/// Regardless, if you _do_ manage to call it twice, it will do nothing beyond the first call.
	pub fn critical(self, critical: CriticalError) {
		self.critical.set(critical).ok();
	}

	/// Elevate the current runtime error to critical.
	///
	/// This is a shorthand method for `ErrorHook::critical(CriticalError::Elevated(error))`.
	pub fn elevate(self) {
		let Self { error, critical } = self;

		critical
			.set(CriticalError::Elevated {
				help: error.help().map(|h| h.to_string()),
				err: error,
			})
			.ok();
	}
}

================================================
FILE: crates/lib/tests/env_reporting.rs
================================================
use std::{collections::HashMap, ffi::OsString, path::MAIN_SEPARATOR};

use notify::event::CreateKind;
use watchexec::paths::summarise_events_to_env;
use watchexec_events::{filekind::*, Event, Tag};

#[cfg(unix)]
const ENV_SEP: &str = ":";
#[cfg(not(unix))]
const ENV_SEP: &str = ";";

fn ospath(path: &str) -> OsString {
	let root = std::fs::canonicalize(".").unwrap();
	if path.is_empty() {
		root
	} else {
		root.join(path)
	}
	.into()
}

fn event(path: &str, kind: FileEventKind) -> Event {
	Event {
		tags: vec![
			Tag::Path {
				path: ospath(path).into(),
				file_type: None,
			},
			Tag::FileEventKind(kind),
		],
		metadata: Default::default(),
	}
}

#[test]
fn no_events_no_env() {
	let events = Vec::<Event>::new();
	assert_eq!(summarise_events_to_env(&events), HashMap::new());
}

#[test]
fn single_created()
{
	let events = vec![event("file.txt", FileEventKind::Create(CreateKind::File))];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("CREATED", OsString::from("file.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_meta() {
	let events = vec![event(
		"file.txt",
		FileEventKind::Modify(ModifyKind::Metadata(MetadataKind::Any)),
	)];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("META_CHANGED", OsString::from("file.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_removed() {
	let events = vec![event("file.txt", FileEventKind::Remove(RemoveKind::File))];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("REMOVED", OsString::from("file.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_renamed() {
	let events = vec![event(
		"file.txt",
		FileEventKind::Modify(ModifyKind::Name(RenameMode::Any)),
	)];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("RENAMED", OsString::from("file.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_written() {
	let events = vec![event(
		"file.txt",
		FileEventKind::Modify(ModifyKind::Data(DataChange::Any)),
	)];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("WRITTEN", OsString::from("file.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_otherwise() {
	let events = vec![event("file.txt", FileEventKind::Any)];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("OTHERWISE_CHANGED", OsString::from("file.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn all_types_once() {
	let events = vec![
		event("create.txt", FileEventKind::Create(CreateKind::File)),
		event(
			"metadata.txt",
			FileEventKind::Modify(ModifyKind::Metadata(MetadataKind::Any)),
		),
		event("remove.txt", FileEventKind::Remove(RemoveKind::File)),
		event(
			"rename.txt",
			FileEventKind::Modify(ModifyKind::Name(RenameMode::Any)),
		),
		event(
			"modify.txt",
			FileEventKind::Modify(ModifyKind::Data(DataChange::Any)),
		),
		event("any.txt", FileEventKind::Any),
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			("CREATED", OsString::from("create.txt")),
			("META_CHANGED", OsString::from("metadata.txt")),
			("REMOVED", OsString::from("remove.txt")),
			("RENAMED", OsString::from("rename.txt")),
			("WRITTEN", OsString::from("modify.txt")),
			("OTHERWISE_CHANGED", OsString::from("any.txt")),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_type_multipath() {
	let events = vec![
		event("root.txt", FileEventKind::Create(CreateKind::File)),
		event("sub/folder.txt", FileEventKind::Create(CreateKind::File)),
		event("dom/folder.txt", FileEventKind::Create(CreateKind::File)),
		event(
			"deeper/sub/folder.txt",
			FileEventKind::Create(CreateKind::File),
		),
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"CREATED",
				OsString::from(
					[
						format!("deeper{MAIN_SEPARATOR}sub{MAIN_SEPARATOR}folder.txt"),
						format!("dom{MAIN_SEPARATOR}folder.txt"),
						"root.txt".to_string(),
						format!("sub{MAIN_SEPARATOR}folder.txt"),
					]
					.join(ENV_SEP)
				)
			),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn single_type_divergent_paths() {
	let events = vec![
		event("sub/folder.txt", FileEventKind::Create(CreateKind::File)),
		event("dom/folder.txt", FileEventKind::Create(CreateKind::File)),
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"CREATED",
				OsString::from(
					[
						format!("dom{MAIN_SEPARATOR}folder.txt"),
						format!("sub{MAIN_SEPARATOR}folder.txt"),
					]
					.join(ENV_SEP)
				)
			),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn multitype_multipath() {
	let events = vec![
		event("root.txt", FileEventKind::Create(CreateKind::File)),
		event("sibling.txt", FileEventKind::Create(CreateKind::Any)),
		event(
			"sub/folder.txt",
			FileEventKind::Modify(ModifyKind::Metadata(MetadataKind::Ownership)),
		),
		event("dom/folder.txt", FileEventKind::Remove(RemoveKind::Folder)),
		event("deeper/sub/folder.txt", FileEventKind::Other),
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"CREATED",
				OsString::from(["root.txt", "sibling.txt"].join(ENV_SEP)),
			),
			(
				"META_CHANGED",
				OsString::from(format!("sub{MAIN_SEPARATOR}folder.txt"))
			),
			(
				"REMOVED",
				OsString::from(format!("dom{MAIN_SEPARATOR}folder.txt"))
			),
			(
				"OTHERWISE_CHANGED",
				OsString::from(format!(
					"deeper{MAIN_SEPARATOR}sub{MAIN_SEPARATOR}folder.txt"
				))
			),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn multiple_paths_in_one_event() {
	let events = vec![Event {
		tags: vec![
			Tag::Path {
				path: ospath("one.txt").into(),
				file_type: None,
			},
			Tag::Path {
				path: ospath("two.txt").into(),
				file_type: None,
			},
			Tag::FileEventKind(FileEventKind::Any),
		],
		metadata: Default::default(),
	}];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"OTHERWISE_CHANGED",
				OsString::from(String::new() + "one.txt" + ENV_SEP + "two.txt")
			),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn mixed_non_paths_events() {
	let events = vec![
		event("one.txt", FileEventKind::Any),
		Event {
			tags: vec![Tag::Process(1234)],
			metadata: Default::default(),
		},
		event("two.txt", FileEventKind::Any),
		Event {
			tags: vec![Tag::FileEventKind(FileEventKind::Any)],
			metadata: Default::default(),
		},
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"OTHERWISE_CHANGED",
				OsString::from(String::new() + "one.txt" + ENV_SEP + "two.txt")
			),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn only_non_paths_events() {
	let events = vec![
		Event {
			tags: vec![Tag::Process(1234)],
			metadata: Default::default(),
		},
		Event {
			tags: vec![Tag::FileEventKind(FileEventKind::Any)],
			metadata: Default::default(),
		},
	];
	assert_eq!(summarise_events_to_env(&events), HashMap::new());
}

#[test]
fn multipath_is_sorted() {
	let events = vec![
		event("0123.txt", FileEventKind::Any),
		event("a.txt", FileEventKind::Any),
		event("b.txt", FileEventKind::Any),
		event("c.txt", FileEventKind::Any),
		event("ᄁ.txt", FileEventKind::Any),
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"OTHERWISE_CHANGED",
				OsString::from(
					String::new()
						+ "0123.txt" + ENV_SEP + "a.txt"
						+ ENV_SEP + "b.txt" + ENV_SEP
						+ "c.txt" + ENV_SEP + "ᄁ.txt"
				)
			),
			("COMMON", ospath("")),
		])
	);
}

#[test]
fn multipath_is_deduped() {
	let events = vec![
		event("0123.txt", FileEventKind::Any),
		event("0123.txt", FileEventKind::Any),
		event("a.txt", FileEventKind::Any),
		event("a.txt", FileEventKind::Any),
		event("b.txt", FileEventKind::Any),
		event("b.txt", FileEventKind::Any),
		event("c.txt", FileEventKind::Any),
		event("ᄁ.txt", FileEventKind::Any),
		event("ᄁ.txt", FileEventKind::Any),
	];
	assert_eq!(
		summarise_events_to_env(&events),
		HashMap::from([
			(
				"OTHERWISE_CHANGED",
				OsString::from(
					String::new()
						+ "0123.txt" + ENV_SEP + "a.txt"
						+ ENV_SEP + "b.txt" + ENV_SEP
						+ "c.txt" + ENV_SEP + "ᄁ.txt"
				)
			),
			("COMMON", ospath("")),
		])
	);
}

================================================
FILE: crates/lib/tests/error_handler.rs
================================================
use std::time::Duration;

use miette::Result;
use tokio::time::sleep;
use watchexec::{ErrorHook, Watchexec};

#[tokio::main]
async fn main() -> Result<()> {
	tracing_subscriber::fmt::init();

	let wx = Watchexec::default();
	wx.config.on_error(|err: ErrorHook| {
		eprintln!("Watchexec Runtime Error: {}", err.error);
	});

	wx.main();

	// TODO: induce an error here
	sleep(Duration::from_secs(1)).await;

	Ok(())
}

================================================
FILE: crates/project-origins/CHANGELOG.md
================================================
# Changelog

## Next (YYYY-MM-DD)

## v1.4.2 (2025-05-15)

## v1.4.1 (2025-02-09)

## v1.4.0 (2024-04-28)

- Add out-of-tree Git repositories (`.git` file instead of folder).

## v1.3.0 (2024-01-01)

- Remove `README.md` files from detection; those were causing too many false positives and were a weak signal anyway.
- Add Node.js package manager lockfiles.

## v1.2.1 (2023-11-26)

- Deps: upgrade Tokio requirement to 1.33.0

## v1.2.0 (2023-01-08)

- Add `const` qualifier to `ProjectType::is_vcs` and `::is_soft`.
- Use Tokio's canonicalize instead of dunce.
- Add missing `Send` bound to `origins()` and `types()`.
## v1.1.1 (2022-09-07)

- Deps: update miette to 5.3.0

## v1.1.0 (2022-08-24)

- Add support for Go.
- Add support for Zig.
- Add `Pipfile` support for Pip.
- Add detection of `CONTRIBUTING.md`.
- Document what causes the detection of each project type.

## v1.0.0 (2022-06-16)

- Initial release as a separate crate.

================================================
FILE: crates/project-origins/Cargo.toml
================================================
[package]
name = "project-origins"
version = "1.4.2"
authors = ["Félix Saparelli"]
license = "Apache-2.0"
description = "Resolve project origins and kinds from a path"
keywords = ["project", "origin", "root", "git"]
documentation = "https://docs.rs/project-origins"
repository = "https://github.com/watchexec/watchexec"
readme = "README.md"
rust-version = "1.58.0"
edition = "2021"

[dependencies]
futures = "0.3.29"
tokio = { version = "1.33.0", features = ["fs"] }
tokio-stream = { version = "0.1.9", features = ["fs"] }

[dev-dependencies]
miette = "7.2.0"
tracing-subscriber = "0.3.11"

[lints.clippy]
nursery = "warn"
pedantic = "warn"
module_name_repetitions = "allow"
similar_names = "allow"
cognitive_complexity = "allow"
too_many_lines = "allow"
missing_errors_doc = "allow"
missing_panics_doc = "allow"
default_trait_access = "allow"
enum_glob_use = "allow"
option_if_let_else = "allow"
blocks_in_conditions = "allow"

================================================
FILE: crates/project-origins/README.md
================================================
[![Crates.io page](https://badgen.net/crates/v/project-origins)](https://crates.io/crates/project-origins)
[![API Docs](https://docs.rs/project-origins/badge.svg)][docs]
[![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license]
[![CI status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml)

# Project origins

_Resolve project origins and kinds from a path._
- **[API documentation][docs]**.
- Licensed under [Apache 2.0][license].
- Status: maintained.

[docs]: https://docs.rs/project-origins
[license]: ../../LICENSE

================================================
FILE: crates/project-origins/examples/find-origins.rs
================================================
use std::env::args;

use miette::{IntoDiagnostic, Result};
use project_origins::origins;

// Run with: `cargo run --example find-origins [PATH]`
#[tokio::main]
async fn main() -> Result<()> {
	tracing_subscriber::fmt::init();

	let first_arg = args().nth(1).unwrap_or_else(|| ".".to_string());
	let path = tokio::fs::canonicalize(first_arg).await.into_diagnostic()?;

	for origin in origins(&path).await {
		println!("{}", origin.display());
	}

	Ok(())
}

================================================
FILE: crates/project-origins/release.toml
================================================
pre-release-commit-message = "release: project-origins v{{version}}"
tag-prefix = "project-origins-"
tag-message = "project-origins {{version}}"

[[pre-release-replacements]]
file = "CHANGELOG.md"
search = "^## Next.*$"
replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})"
prerelease = true
max = 1

================================================
FILE: crates/project-origins/src/lib.rs
================================================
//! Resolve project origins and kinds from a path.
//!
//! This crate originated in [Watchexec](https://docs.rs/watchexec): it is used to resolve where a
//! project's origin (or root) is, starting either at that origin, or within a subdirectory of it.
//!
//! This crate also provides the kind of project it is, and defines two categories within this:
//! version control systems, and software development environments.
//!
//! As it is possible to find several project origins, of different or similar kinds, from a given
//! directory and walking up, [`origins`] returns a set, rather than a single path. Determining
//! which of these is the "one true origin" (if necessary) is left to the caller.

#![cfg_attr(not(test), warn(unused_crate_dependencies))]

use std::{
	collections::{HashMap, HashSet},
	fs::FileType,
	path::{Path, PathBuf},
};

use futures::StreamExt;
use tokio::fs::read_dir;
use tokio_stream::wrappers::ReadDirStream;

/// Project types recognised by watchexec.
///
/// There are two kinds of projects: VCS and software suite. The latter is more characterised by
/// what package manager or build system is in use. The enum is marked non-exhaustive as more types
/// can get added in the future.
///
/// Do not rely on the ordering or value (e.g. with transmute) of the variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
#[non_exhaustive]
pub enum ProjectType {
	/// VCS: [Bazaar](https://bazaar.canonical.com/).
	///
	/// Detects when a `.bzr` folder or a `.bzrignore` file is present. Bazaar does not support (at
	/// writing, anyway) ignore files deeper than the repository origin, so this should not
	/// false-positive.
	Bazaar,

	/// VCS: [Darcs](http://darcs.net/).
	///
	/// Detects when a `_darcs` folder is present.
	Darcs,

	/// VCS: [Fossil](https://www.fossil-scm.org/).
	///
	/// Detects when a `.fossil-settings` folder is present.
	Fossil,

	/// VCS: [Git](https://git-scm.com/).
	///
	/// Detects when a `.git` file or folder is present, or any of the files `.gitattributes` or
	/// `.gitmodules`. Does _not_ check or return from the presence of `.gitignore` files, as Git
	/// supports nested ignores, and that would result in false-positives.
	Git,

	/// VCS: [Mercurial](https://www.mercurial-scm.org/).
	///
	/// Detects when a `.hg` folder is present, or any of the files `.hgignore` or `.hgtags`.
	/// Mercurial does not support (at writing, anyway) ignore files deeper than the repository
	/// origin, so this should not false-positive.
	Mercurial,

	/// VCS: [Pijul](https://pijul.org/).
	///
	/// This is not detected at the moment.
	Pijul,

	/// VCS: [Subversion](https://subversion.apache.org) (aka SVN).
	///
	/// Detects when a `.svn` folder is present.
	Subversion,

	/// Soft: [Ruby](https://www.ruby-lang.org/)’s [Bundler](https://bundler.io/).
	///
	/// Detects when a `Gemfile` file is present.
	Bundler,

	/// Soft: the [C programming language](https://en.wikipedia.org/wiki/C_(programming_language)).
	///
	/// Detects when a `.ctags` file is present.
	C,

	/// Soft: [Rust](https://www.rust-lang.org/)’s [Cargo](https://doc.rust-lang.org/cargo/).
	///
	/// Detects Cargo workspaces and Cargo crates through the presence of a `Cargo.toml` file.
	Cargo,

	/// Soft: the [Docker](https://www.docker.com/) container runtime.
	///
	/// Detects when a `Dockerfile` file is present.
	Docker,

	/// Soft: the [Elixir](https://elixir-lang.org/) language.
	///
	/// Detects when a `mix.exs` file is present.
	Elixir,

	/// Soft: the [Go](https://golang.net) language.
	///
	/// Detects when a `go.mod` or `go.sum` file is present.
	Go,

	/// Soft: [Java](https://www.java.com/)’s [Gradle](https://gradle.org/).
	///
	/// Detects when a `build.gradle` file is present.
	Gradle,

	/// Soft: [EcmaScript](https://www.ecmascript.org/) (aka JavaScript).
	///
	/// Detects when a `package.json` or `cgmanifest.json` file is present.
	///
	/// This is a catch-all for all `package.json`-based projects, and does not differentiate
	/// between NPM, Yarn, PNPM, Node, browser, Deno, Bun, etc.
	JavaScript,

	/// Soft: [Clojure](https://clojure.org/)’s [Leiningen](https://leiningen.org/).
	///
	/// Detects when a `project.clj` file is present.
	Leiningen,

	/// Soft: [Java](https://www.java.com/)’s [Maven](https://maven.apache.org/).
	///
	/// Detects when a `pom.xml` file is present.
	Maven,

	/// Soft: the [Perl](https://www.perl.org/) language.
	///
	/// Detects when a `.perltidyrc` or `Makefile.PL` file is present.
	Perl,

	/// Soft: the [PHP](https://www.php.net/) language.
	///
	/// Detects when a `composer.json` file is present.
	PHP,

	/// Soft: [Python](https://www.python.org/)’s [Pip](https://www.pip.org/).
	///
	/// Detects when a `requirements.txt` or `Pipfile` file is present.
	Pip,

	/// Soft: the [V](https://www.v-lang.org/) language.
	///
	/// Detects when a `v.mod` file is present.
	V,

	/// Soft: the [Zig](https://ziglang.org/) language.
	///
	/// Detects when a `build.zig` file is present.
	Zig,
}

impl ProjectType {
	/// Returns true if the project type is a VCS.
	#[must_use]
	pub const fn is_vcs(self) -> bool {
		matches!(
			self,
			Self::Bazaar
				| Self::Darcs
				| Self::Fossil
				| Self::Git
				| Self::Mercurial
				| Self::Pijul
				| Self::Subversion
		)
	}

	/// Returns true if the project type is a software suite.
	#[must_use]
	pub const fn is_soft(self) -> bool {
		matches!(
			self,
			Self::Bundler
				| Self::C
				| Self::Cargo
				| Self::Docker
				| Self::Elixir
				| Self::Gradle
				| Self::JavaScript
				| Self::Leiningen
				| Self::Maven
				| Self::Perl
				| Self::PHP
				| Self::Pip
				| Self::V
		)
	}
}

/// Traverses the parents of the given path and returns _all_ that are project origins.
///
/// This checks for the presence of a wide range of files and directories that are likely to be
/// present and indicative of the root or origin path of a project. It's entirely possible to have
/// multiple such origins show up: for example, a member of a Cargo workspace will list both the
/// member project and the workspace root as origins.
///
/// This looks at a wider variety of files than the [`types`] function does: something can be
/// detected as an origin but not be able to match to any particular [`ProjectType`].
pub async fn origins(path: impl AsRef + Send) -> HashSet { fn check_list(list: &DirList) -> bool { if list.is_empty() { return false; } [ list.has_dir("_darcs"), list.has_dir(".bzr"), list.has_dir(".fossil-settings"), list.has_dir(".git"), list.has_dir(".github"), list.has_dir(".hg"), list.has_dir(".svn"), list.has_file(".asf.yaml"), list.has_file(".bzrignore"), list.has_file(".codecov.yml"), list.has_file(".ctags"), list.has_file(".editorconfig"), list.has_file(".git"), list.has_file(".gitattributes"), list.has_file(".gitmodules"), list.has_file(".hgignore"), list.has_file(".hgtags"), list.has_file(".perltidyrc"), list.has_file(".travis.yml"), list.has_file("appveyor.yml"), list.has_file("build.gradle"), list.has_file("build.properties"), list.has_file("build.xml"), list.has_file("Cargo.toml"), list.has_file("Cargo.lock"), list.has_file("cgmanifest.json"), list.has_file("CMakeLists.txt"), list.has_file("composer.json"), list.has_file("COPYING"), list.has_file("docker-compose.yml"), list.has_file("Dockerfile"), list.has_file("Gemfile"), list.has_file("LICENSE.txt"), list.has_file("LICENSE"), list.has_file("Makefile.am"), list.has_file("Makefile.pl"), list.has_file("Makefile.PL"), list.has_file("Makefile"), list.has_file("mix.exs"), list.has_file("moonshine-dependencies.xml"), list.has_file("package.json"), list.has_file("package-lock.json"), list.has_file("pnpm-lock.yaml"), list.has_file("yarn.lock"), list.has_file("pom.xml"), list.has_file("project.clj"), list.has_file("requirements.txt"), list.has_file("v.mod"), list.has_file("CONTRIBUTING.md"), list.has_file("go.mod"), list.has_file("go.sum"), list.has_file("Pipfile"), list.has_file("build.zig"), ] .into_iter() .any(|f| f) } let mut origins = HashSet::new(); let path = path.as_ref(); let mut current = path; if check_list(&DirList::obtain(current).await) { origins.insert(current.to_owned()); } while let Some(parent) = current.parent() { current = parent; if check_list(&DirList::obtain(current).await) { 
origins.insert(current.to_owned()); } } origins } /// Returns all project types detected at this given origin. /// /// This should be called with a result of [`origins()`], or a project origin if already known; it /// will not find the origin itself. /// /// The returned list may be empty. /// /// Note that this only detects project types listed in the [`ProjectType`] enum, and may not detect /// anything for some paths returned by [`origins()`]. pub async fn types(path: impl AsRef + Send) -> HashSet { let path = path.as_ref(); let list = DirList::obtain(path).await; [ list.if_has_dir("_darcs", ProjectType::Darcs), list.if_has_dir(".bzr", ProjectType::Bazaar), list.if_has_dir(".fossil-settings", ProjectType::Fossil), list.if_has_dir(".git", ProjectType::Git), list.if_has_dir(".hg", ProjectType::Mercurial), list.if_has_dir(".svn", ProjectType::Subversion), list.if_has_file(".bzrignore", ProjectType::Bazaar), list.if_has_file(".ctags", ProjectType::C), list.if_has_file(".git", ProjectType::Git), list.if_has_file(".gitattributes", ProjectType::Git), list.if_has_file(".gitmodules", ProjectType::Git), list.if_has_file(".hgignore", ProjectType::Mercurial), list.if_has_file(".hgtags", ProjectType::Mercurial), list.if_has_file(".perltidyrc", ProjectType::Perl), list.if_has_file("build.gradle", ProjectType::Gradle), list.if_has_file("Cargo.toml", ProjectType::Cargo), list.if_has_file("cgmanifest.json", ProjectType::JavaScript), list.if_has_file("composer.json", ProjectType::PHP), list.if_has_file("Dockerfile", ProjectType::Docker), list.if_has_file("Gemfile", ProjectType::Bundler), list.if_has_file("Makefile.PL", ProjectType::Perl), list.if_has_file("mix.exs", ProjectType::Elixir), list.if_has_file("package.json", ProjectType::JavaScript), list.if_has_file("pom.xml", ProjectType::Maven), list.if_has_file("project.clj", ProjectType::Leiningen), list.if_has_file("requirements.txt", ProjectType::Pip), list.if_has_file("v.mod", ProjectType::V), list.if_has_file("go.mod", 
ProjectType::Go), list.if_has_file("go.sum", ProjectType::Go), list.if_has_file("Pipfile", ProjectType::Pip), list.if_has_file("build.zig", ProjectType::Zig), ] .into_iter() .flatten() .collect() } #[derive(Debug, Default)] struct DirList(HashMap<PathBuf, FileType>); impl DirList { async fn obtain(path: &Path) -> Self { if let Ok(s) = read_dir(path).await { Self( ReadDirStream::new(s) .filter_map(|entry| async move { match entry { Err(_) => None, Ok(entry) => { if let (Ok(path), Ok(file_type)) = (entry.path().strip_prefix(path), entry.file_type().await) { Some((path.to_owned(), file_type)) } else { None } } } }) .collect::<HashMap<_, _>>() .await, ) } else { Self::default() } } #[inline] fn is_empty(&self) -> bool { self.0.is_empty() } #[inline] fn has_file(&self, name: impl AsRef<Path>) -> bool { let name = name.as_ref(); self.0.get(name).map_or(false, std::fs::FileType::is_file) } #[inline] fn has_dir(&self, name: impl AsRef<Path>) -> bool { let name = name.as_ref(); self.0.get(name).map_or(false, std::fs::FileType::is_dir) } #[inline] fn if_has_file(&self, name: impl AsRef<Path>, project: ProjectType) -> Option<ProjectType> { if self.has_file(name) { Some(project) } else { None } } #[inline] fn if_has_dir(&self, name: impl AsRef<Path>, project: ProjectType) -> Option<ProjectType> { if self.has_dir(name) { Some(project) } else { None } } } ================================================ FILE: crates/signals/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v5.0.1 (2026-01-20) ## v5.0.0 (2025-05-15) - Deps: nix 0.30 ## v4.0.1 (2025-02-09) ## v4.0.0 (2024-10-14) - Deps: nix 0.29 ## v3.0.0 (2024-04-20) - Deps: miette 7 - Deps: nix 0.28 ## v2.1.0 (2023-12-09) - Derive `Hash` for `Signal`. ## v2.0.0 (2023-11-29) - Deps: upgrade nix to 0.27 ## v1.0.1 (2023-11-26) Same as 2.0.0, but yanked. ## v1.0.0 (2023-03-18) - Split off new `watchexec-signals` crate (this one), to have a lightweight library that can parse and represent signals as handled by Watchexec.
================================================ FILE: crates/signals/Cargo.toml ================================================ [package] name = "watchexec-signals" version = "5.0.1" authors = ["Félix Saparelli "] license = "Apache-2.0 OR MIT" description = "Watchexec's signal types" keywords = ["watchexec", "signal"] documentation = "https://docs.rs/watchexec-signals" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.61.0" edition = "2021" [dependencies.miette] version = "7.2.0" optional = true [dependencies.thiserror] version = "2.0.11" optional = true [dependencies.serde] version = "1.0.183" optional = true features = ["derive"] [target.'cfg(unix)'.dependencies.nix] version = "0.30.1" features = ["signal"] [features] default = ["fromstr", "miette"] fromstr = ["dep:thiserror"] miette = ["dep:miette"] serde = ["dep:serde"] [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" needless_doctest_main = "allow" ================================================ FILE: crates/signals/README.md ================================================ # watchexec-signals _Watchexec's signal type._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license] or [MIT](https://passcod.mit-license.org). - Status: maintained. [docs]: https://docs.rs/watchexec-signals [license]: ../../LICENSE ```rust use std::str::FromStr; use watchexec_signals::Signal; fn main() { assert_eq!(Signal::from_str("SIGINT").unwrap(), Signal::Interrupt); } ``` ## Features - `serde`: enables serde support. - `fromstr`: enables `FromStr` support (default). - `miette`: enables miette (rich diagnostics) support (default). 
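As the feature list above implies, downstream users can trim the defaults. For example, a dependent crate's Cargo.toml might enable serde support while dropping the default miette diagnostics (a sketch; pin the version as appropriate):

```toml
[dependencies.watchexec-signals]
version = "5"
default-features = false
features = ["fromstr", "serde"]
```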
================================================ FILE: crates/signals/release.toml ================================================ pre-release-commit-message = "release: signals v{{version}}" tag-prefix = "watchexec-signals-" tag-message = "watchexec-signals {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/signals/src/lib.rs ================================================ #![doc = include_str!("../README.md")] #![cfg_attr(not(test), warn(unused_crate_dependencies))] // thiserror's macro generates code that triggers this lint spuriously #![allow(unused_assignments)] use std::fmt; #[cfg(feature = "fromstr")] use std::str::FromStr; #[cfg(unix)] use nix::sys::signal::Signal as NixSignal; /// A notification (signals or Windows control events) sent to a process. /// /// This signal type in Watchexec is used for any of: /// - signals sent to the main process by some external actor, /// - signals received from a sub process by the main process, /// - signals sent to a sub process by Watchexec. /// /// On Windows, only some signals are supported, as described. Others will be ignored. /// /// On Unix, there are several "first-class" signals which have their own variants, and a generic /// [`Custom`][Signal::Custom] variant which can be used to send arbitrary signals. #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] #[non_exhaustive] #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] #[cfg_attr( feature = "serde", serde( from = "serde_support::SerdeSignal", into = "serde_support::SerdeSignal" ) )] pub enum Signal { /// Indicate that the terminal is disconnected. /// /// On Unix, this is `SIGHUP`. On Windows, this is ignored for now but may be supported in the /// future (see [#219](https://github.com/watchexec/watchexec/issues/219)). 
/// /// Despite its nominal purpose, on Unix this signal is often used to reload configuration files. Hangup, /// Indicate to the kernel that the process should stop. /// /// On Unix, this is `SIGKILL`. On Windows, this is `TerminateProcess`. /// /// This signal is not handled by the process, but directly by the kernel, and thus cannot be /// intercepted. Subprocesses may exit in inconsistent states. ForceStop, /// Indicate that the process should stop. /// /// On Unix, this is `SIGINT`. On Windows, this is ignored for now but may be supported in the /// future (see [#219](https://github.com/watchexec/watchexec/issues/219)). /// /// This signal generally indicates an action taken by the user, so it may be handled /// differently than a termination. Interrupt, /// Indicate that the process is to stop, the kernel will then dump its core. /// /// On Unix, this is `SIGQUIT`. On Windows, it is ignored. /// /// This is rarely used. Quit, /// Indicate that the process should stop. /// /// On Unix, this is `SIGTERM`. On Windows, this is ignored for now but may be supported in the /// future (see [#219](https://github.com/watchexec/watchexec/issues/219)). /// /// On Unix, this signal generally indicates an action taken by the system, so it may be handled /// differently than an interruption. Terminate, /// Indicate an application-defined behaviour should happen. /// /// On Unix, this is `SIGUSR1`. On Windows, it is ignored. /// /// This signal is generally used to start debugging. User1, /// Indicate an application-defined behaviour should happen. /// /// On Unix, this is `SIGUSR2`. On Windows, it is ignored. /// /// This signal is generally used to reload configuration. User2, /// Indicate using a custom signal. /// /// Internally, this is converted to a [`nix::Signal`](https://docs.rs/nix/*/nix/sys/signal/enum.Signal.html) /// but for portability this variant is a raw `i32`. /// /// Invalid signals on the current platform will be ignored. Does nothing on Windows. 
/// /// The special value `0` is used to indicate an unknown signal. That is, a signal was received /// or parsed, but it is not known which. This is not a usual case, and should in general be /// ignored rather than hard-erroring. /// /// # Examples /// /// ``` /// # #[cfg(unix)] /// # { /// use watchexec_signals::Signal; /// use nix::sys::signal::Signal as NixSignal; /// assert_eq!(Signal::Custom(6), Signal::from(NixSignal::SIGABRT as i32)); /// # } /// ``` /// /// On Unix the [`from_nix`][Signal::from_nix] method should be preferred if converting from /// Nix's `Signal` type: /// /// ``` /// # #[cfg(unix)] /// # { /// use watchexec_signals::Signal; /// use nix::sys::signal::Signal as NixSignal; /// assert_eq!(Signal::Custom(6), Signal::from_nix(NixSignal::SIGABRT)); /// # } /// ``` Custom(i32), } impl Signal { /// Converts to a [`nix::Signal`][NixSignal] if possible. /// /// This will return `None` if the signal is not supported on the current platform (only for /// [`Custom`][Signal::Custom], as the first-class ones are always supported). #[cfg(unix)] #[must_use] pub fn to_nix(self) -> Option<NixSignal> { match self { Self::Hangup => Some(NixSignal::SIGHUP), Self::ForceStop => Some(NixSignal::SIGKILL), Self::Interrupt => Some(NixSignal::SIGINT), Self::Quit => Some(NixSignal::SIGQUIT), Self::Terminate => Some(NixSignal::SIGTERM), Self::User1 => Some(NixSignal::SIGUSR1), Self::User2 => Some(NixSignal::SIGUSR2), Self::Custom(sig) => NixSignal::try_from(sig).ok(), } } /// Converts from a [`nix::Signal`][NixSignal].
#[cfg(unix)] #[allow(clippy::missing_const_for_fn)] #[must_use] pub fn from_nix(sig: NixSignal) -> Self { match sig { NixSignal::SIGHUP => Self::Hangup, NixSignal::SIGKILL => Self::ForceStop, NixSignal::SIGINT => Self::Interrupt, NixSignal::SIGQUIT => Self::Quit, NixSignal::SIGTERM => Self::Terminate, NixSignal::SIGUSR1 => Self::User1, NixSignal::SIGUSR2 => Self::User2, sig => Self::Custom(sig as _), } } } impl From<i32> for Signal { /// Converts from a raw signal number. /// /// This uses hardcoded numbers for the first-class signals. fn from(raw: i32) -> Self { match raw { 1 => Self::Hangup, 2 => Self::Interrupt, 3 => Self::Quit, 9 => Self::ForceStop, 10 => Self::User1, 12 => Self::User2, 15 => Self::Terminate, _ => Self::Custom(raw), } } } #[cfg(feature = "fromstr")] impl Signal { /// Parse the input as a unix signal. /// /// This parses the input as a signal name, or a signal number, in a case-insensitive manner. /// It supports integers, the short name of the signal (like `INT`, `HUP`, `USR1`, etc), and /// the long name of the signal (like `SIGINT`, `SIGHUP`, `SIGUSR1`, etc). /// /// Note that this is entirely accurate only when used on unix targets; on other targets it /// falls back to a hardcoded approximation instead of looking up signal tables (via [`nix`]). /// /// ``` /// # use watchexec_signals::Signal; /// assert_eq!(Signal::Hangup, Signal::from_unix_str("hup").unwrap()); /// assert_eq!(Signal::Interrupt, Signal::from_unix_str("SIGINT").unwrap()); /// assert_eq!(Signal::ForceStop, Signal::from_unix_str("Kill").unwrap()); /// ``` /// /// Using [`FromStr`] is recommended for practical use, as it will also parse Windows control /// events, see [`Signal::from_windows_str`].
pub fn from_unix_str(s: &str) -> Result<Self, SignalParseError> { Self::from_unix_str_impl(s) } #[cfg(unix)] fn from_unix_str_impl(s: &str) -> Result<Self, SignalParseError> { if let Ok(sig) = i32::from_str(s) { if let Ok(sig) = NixSignal::try_from(sig) { return Ok(Self::from_nix(sig)); } } if let Ok(sig) = NixSignal::from_str(&s.to_ascii_uppercase()) .or_else(|_| NixSignal::from_str(&format!("SIG{}", s.to_ascii_uppercase()))) { return Ok(Self::from_nix(sig)); } Err(SignalParseError::new(s, "unsupported signal")) } #[cfg(not(unix))] fn from_unix_str_impl(s: &str) -> Result<Self, SignalParseError> { match s.to_ascii_uppercase().as_str() { "KILL" | "SIGKILL" | "9" => Ok(Self::ForceStop), "HUP" | "SIGHUP" | "1" => Ok(Self::Hangup), "INT" | "SIGINT" | "2" => Ok(Self::Interrupt), "QUIT" | "SIGQUIT" | "3" => Ok(Self::Quit), "TERM" | "SIGTERM" | "15" => Ok(Self::Terminate), "USR1" | "SIGUSR1" | "10" => Ok(Self::User1), "USR2" | "SIGUSR2" | "12" => Ok(Self::User2), number => match i32::from_str(number) { Ok(int) => Ok(Self::Custom(int)), Err(_) => Err(SignalParseError::new(s, "unsupported signal")), }, } } /// Parse the input as a windows control event. /// /// This parses the input as a control event name, in a case-insensitive manner. /// /// The names matched are mostly made up as there's no standard for them, but should be familiar /// to Windows users. They are mapped to the corresponding unix concepts as follows: /// /// - `CTRL-CLOSE`, `CTRL+CLOSE`, or `CLOSE` for a hangup /// - `CTRL-BREAK`, `CTRL+BREAK`, or `BREAK` for a terminate /// - `CTRL-C`, `CTRL+C`, or `C` for an interrupt /// - `STOP`, `FORCE-STOP` for a forced stop. This is also mapped to `KILL` and `SIGKILL`.
/// /// ``` /// # use watchexec_signals::Signal; /// assert_eq!(Signal::Hangup, Signal::from_windows_str("ctrl+close").unwrap()); /// assert_eq!(Signal::Interrupt, Signal::from_windows_str("C").unwrap()); /// assert_eq!(Signal::ForceStop, Signal::from_windows_str("Stop").unwrap()); /// ``` /// /// Using [`FromStr`] is recommended for practical use, as it will fall back to parsing as a /// unix signal, which can be helpful for portability. pub fn from_windows_str(s: &str) -> Result<Self, SignalParseError> { match s.to_ascii_uppercase().as_str() { "CTRL-CLOSE" | "CTRL+CLOSE" | "CLOSE" => Ok(Self::Hangup), "CTRL-BREAK" | "CTRL+BREAK" | "BREAK" => Ok(Self::Terminate), "CTRL-C" | "CTRL+C" | "C" => Ok(Self::Interrupt), "KILL" | "SIGKILL" | "FORCE-STOP" | "STOP" => Ok(Self::ForceStop), _ => Err(SignalParseError::new(s, "unknown control name")), } } } #[cfg(feature = "fromstr")] impl FromStr for Signal { type Err = SignalParseError; fn from_str(s: &str) -> Result<Self, Self::Err> { Self::from_windows_str(s).or_else(|err| Self::from_unix_str(s).map_err(|_| err)) } } /// Error when parsing a signal from string. #[cfg(feature = "fromstr")] #[cfg_attr(feature = "miette", derive(miette::Diagnostic))] #[derive(Debug, thiserror::Error)] #[error("invalid signal `{src}`: {err}")] pub struct SignalParseError { // The string that was parsed. #[cfg_attr(feature = "miette", source_code)] src: String, // The error that occurred. err: String, // The span of the source which is in error.
#[cfg_attr(feature = "miette", label = "invalid signal")] span: (usize, usize), } #[cfg(feature = "fromstr")] impl SignalParseError { #[must_use] pub fn new(src: &str, err: &str) -> Self { Self { src: src.to_owned(), err: err.to_owned(), span: (0, src.len()), } } } impl fmt::Display for Signal { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!( f, "{}", match (self, cfg!(windows)) { (Self::Hangup, false) => "SIGHUP", (Self::Hangup, true) => "CTRL-CLOSE", (Self::ForceStop, false) => "SIGKILL", (Self::ForceStop, true) => "STOP", (Self::Interrupt, false) => "SIGINT", (Self::Interrupt, true) => "CTRL-C", (Self::Quit, _) => "SIGQUIT", (Self::Terminate, false) => "SIGTERM", (Self::Terminate, true) => "CTRL-BREAK", (Self::User1, _) => "SIGUSR1", (Self::User2, _) => "SIGUSR2", (Self::Custom(n), _) => { return write!(f, "{n}"); } } ) } } #[cfg(feature = "serde")] mod serde_support { use super::Signal; #[derive(Clone, Copy, Debug, serde::Serialize, serde::Deserialize)] #[serde(untagged)] pub enum SerdeSignal { Named(NamedSignal), Number(i32), } #[derive(Clone, Copy, Debug, serde::Serialize, serde::Deserialize)] #[serde(rename_all = "kebab-case")] pub enum NamedSignal { #[serde(rename = "SIGHUP")] Hangup, #[serde(rename = "SIGKILL")] ForceStop, #[serde(rename = "SIGINT")] Interrupt, #[serde(rename = "SIGQUIT")] Quit, #[serde(rename = "SIGTERM")] Terminate, #[serde(rename = "SIGUSR1")] User1, #[serde(rename = "SIGUSR2")] User2, } impl From<Signal> for SerdeSignal { fn from(signal: Signal) -> Self { match signal { Signal::Hangup => Self::Named(NamedSignal::Hangup), Signal::Interrupt => Self::Named(NamedSignal::Interrupt), Signal::Quit => Self::Named(NamedSignal::Quit), Signal::Terminate => Self::Named(NamedSignal::Terminate), Signal::User1 => Self::Named(NamedSignal::User1), Signal::User2 => Self::Named(NamedSignal::User2), Signal::ForceStop => Self::Named(NamedSignal::ForceStop), Signal::Custom(number) => Self::Number(number), } } } impl From<SerdeSignal> for Signal { fn
from(signal: SerdeSignal) -> Self { match signal { SerdeSignal::Named(NamedSignal::Hangup) => Self::Hangup, SerdeSignal::Named(NamedSignal::ForceStop) => Self::ForceStop, SerdeSignal::Named(NamedSignal::Interrupt) => Self::Interrupt, SerdeSignal::Named(NamedSignal::Quit) => Self::Quit, SerdeSignal::Named(NamedSignal::Terminate) => Self::Terminate, SerdeSignal::Named(NamedSignal::User1) => Self::User1, SerdeSignal::Named(NamedSignal::User2) => Self::User2, SerdeSignal::Number(number) => Self::Custom(number), } } } } ================================================ FILE: crates/supervisor/CHANGELOG.md ================================================ # Changelog ## Next (YYYY-MM-DD) ## v5.2.0 (2026-03-09) - Add the ability to use `spawn_with` from process-wrap (#1013) ## v5.1.0 (2026-02-22) - Add `is_running()` and clarify what `is_dead()` is measuring ## v5.0.2 (2026-01-20) - Deps: process-wrap 9 - Fix: handle graceful stop when job handle dropped (#981, #982) ## v5.0.1 (2025-05-15) ## v5.0.0 (2025-05-15) - Deps: process-wrap 8.2.1 ## v4.0.0 (2025-02-09) ## v3.0.0 (2024-10-14) - Deps: nix 0.29 ## v2.0.0 (2024-04-20) - Deps: replace command-group with process-wrap - Deps: nix 0.28 ## v1.0.3 (2023-12-19) - Fix Start executing even when the job is running. - Add kill-on-drop to guarantee no two processes run at the same time. ## v1.0.2 (2023-12-09) - Add `trace`-level logging to Job task. ## v1.0.1 (2023-11-29) - Deps: watchexec-events 2.0.1 - Deps: watchexec-signals 2.0.0 ## v1.0.0 (2023-11-26) - Initial release as a separate crate. 
================================================ FILE: crates/supervisor/Cargo.toml ================================================ [package] name = "watchexec-supervisor" version = "5.2.0" authors = ["Félix Saparelli "] license = "Apache-2.0 OR MIT" description = "Watchexec's process supervisor component" keywords = ["process", "command", "supervisor", "watchexec"] documentation = "https://docs.rs/watchexec-supervisor" repository = "https://github.com/watchexec/watchexec" readme = "README.md" rust-version = "1.64.0" edition = "2021" [dependencies] futures = "0.3.29" tracing = "0.1.40" [dependencies.process-wrap] version = "9.1.0" features = ["reset-sigmask", "tokio1"] [dependencies.tokio] version = "1.38.0" default-features = false features = ["macros", "process", "rt", "sync", "time"] [dependencies.watchexec-events] version = "6.1.0" default-features = false path = "../events" [dependencies.watchexec-signals] version = "5.0.1" default-features = false path = "../signals" [dev-dependencies] boxcar = "0.2.9" [target.'cfg(unix)'.dev-dependencies.nix] version = "0.30.1" features = ["signal"] [lints.clippy] nursery = "warn" pedantic = "warn" module_name_repetitions = "allow" similar_names = "allow" cognitive_complexity = "allow" too_many_lines = "allow" missing_errors_doc = "allow" missing_panics_doc = "allow" default_trait_access = "allow" enum_glob_use = "allow" option_if_let_else = "allow" blocks_in_conditions = "allow" ================================================ FILE: crates/supervisor/README.md ================================================ [![Crates.io page](https://badgen.net/crates/v/watchexec-supervisor)](https://crates.io/crates/watchexec-supervisor) [![API Docs](https://docs.rs/watchexec-supervisor/badge.svg)][docs] [![Crate license: Apache 2.0](https://badgen.net/badge/license/Apache%202.0)][license] [![CI 
status](https://github.com/watchexec/watchexec/actions/workflows/check.yml/badge.svg)](https://github.com/watchexec/watchexec/actions/workflows/check.yml) # Supervisor _Watchexec's process supervisor._ - **[API documentation][docs]**. - Licensed under [Apache 2.0][license]. - Status: maintained. [docs]: https://docs.rs/watchexec-supervisor [license]: ../../LICENSE ================================================ FILE: crates/supervisor/release.toml ================================================ pre-release-commit-message = "release: supervisor v{{version}}" tag-prefix = "watchexec-supervisor-" tag-message = "watchexec-supervisor {{version}}" [[pre-release-replacements]] file = "CHANGELOG.md" search = "^## Next.*$" replace = "## Next (YYYY-MM-DD)\n\n## v{{version}} ({{date}})" prerelease = true max = 1 ================================================ FILE: crates/supervisor/src/command/conversions.rs ================================================ use std::fmt; use process_wrap::tokio::{CommandWrap, KillOnDrop}; use tokio::process::Command as TokioCommand; use tracing::trace; use super::{Command, Program, SpawnOptions}; impl Command { /// Obtain a [`process_wrap::tokio::CommandWrap`]. pub fn to_spawnable(&self) -> CommandWrap { trace!(program=?self.program, "constructing command"); let cmd = match &self.program { Program::Exec { prog, args, .. 
} => { let mut c = TokioCommand::new(prog); c.args(args); c } Program::Shell { shell, args, command, } => { let mut c = TokioCommand::new(shell.prog.clone()); // Avoid quoting issues on Windows by using raw_arg everywhere #[cfg(windows)] { for opt in &shell.options { c.raw_arg(opt); } if let Some(progopt) = &shell.program_option { c.raw_arg(progopt); } c.raw_arg(command); for arg in args { c.raw_arg(arg); } } #[cfg(not(windows))] { c.args(shell.options.clone()); if let Some(progopt) = &shell.program_option { c.arg(progopt); } c.arg(command); for arg in args { c.arg(arg); } } c } }; let mut cmd = CommandWrap::from(cmd); cmd.wrap(KillOnDrop); match self.options { #[cfg(unix)] SpawnOptions { session: true, .. } => { cmd.wrap(process_wrap::tokio::ProcessSession); } #[cfg(unix)] SpawnOptions { grouped: true, .. } => { cmd.wrap(process_wrap::tokio::ProcessGroup::leader()); } #[cfg(windows)] SpawnOptions { grouped: true, .. } | SpawnOptions { session: true, .. } => { cmd.wrap(process_wrap::tokio::JobObject); } _ => {} } #[cfg(unix)] if self.options.reset_sigmask { cmd.wrap(process_wrap::tokio::ResetSigmask); } cmd } } impl fmt::Display for Program { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Self::Exec { prog, args, .. } => { write!(f, "{}", prog.display())?; for arg in args { write!(f, " {arg}")?; } Ok(()) } Self::Shell { command, .. } => { write!(f, "{command}") } } } } impl fmt::Display for Command { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{}", self.program) } } ================================================ FILE: crates/supervisor/src/command/program.rs ================================================ use std::path::PathBuf; use super::Shell; /// A single program call. #[derive(Clone, Debug, PartialEq, Eq, Hash)] pub enum Program { /// A raw program call: the path or name of a program and its argument list. Exec { /// Path or name of the program. prog: PathBuf, /// The arguments to pass. 
args: Vec<String>, }, /// A shell program: a string which is to be executed by a shell. /// /// (Tip: in general, a shell will handle its own job control, so there's no inherent need to /// set `grouped: true` at the [`Command`](super::Command) level.) Shell { /// The shell to run. shell: Shell, /// The command line to pass to the shell. command: String, /// The arguments to pass to the shell invocation. /// /// This may not be supported by all shells. Note that some shells require the use of `--` /// for disambiguation: this is not handled by Watchexec, and will need to be the first /// item in this vec if desired. /// /// This appends the values within to the shell process invocation. args: Vec<String>, }, } ================================================ FILE: crates/supervisor/src/command/shell.rs ================================================ use std::{borrow::Cow, ffi::{OsStr, OsString}, path::PathBuf}; /// How to call the shell used to run shelled programs. #[derive(Clone, Debug, PartialEq, Eq, Hash)] pub struct Shell { /// Path or name of the shell. pub prog: PathBuf, /// Additional options or arguments to pass to the shell. /// /// These will be inserted before the `program_option` immediately preceding the program string. pub options: Vec<OsString>, /// The syntax of the option which precedes the program string. /// /// For most shells, this is `-c`. On Windows, CMD.EXE prefers `/C`. If this is `None`, then no /// option is prepended; this may be useful for non-shell or non-standard shell programs. pub program_option: Option<Cow<'static, OsStr>>, } impl Shell { /// Shorthand for most shells, using the `-c` convention. pub fn new(name: impl Into<PathBuf>) -> Self { Self { prog: name.into(), options: Vec::new(), program_option: Some(Cow::Borrowed(OsStr::new("-c"))), } } #[cfg(windows)] #[must_use] /// Shorthand for the CMD.EXE shell.
pub fn cmd() -> Self { Self { prog: "CMD.EXE".into(), options: Vec::new(), program_option: Some(Cow::Borrowed(OsStr::new("/C"))), } } } ================================================ FILE: crates/supervisor/src/command.rs ================================================ //! Command construction and configuration. #[doc(inline)] pub use self::{program::Program, shell::Shell}; mod conversions; mod program; mod shell; /// A command to execute. /// /// # Example /// /// ``` /// # use watchexec_supervisor::command::{Command, Program}; /// Command { /// program: Program::Exec { /// prog: "make".into(), /// args: vec!["check".into()], /// }, /// options: Default::default(), /// }; /// ``` #[derive(Clone, Debug, PartialEq, Eq, Hash)] pub struct Command { /// Program to execute for this command. pub program: Program, /// Options for spawning the program. pub options: SpawnOptions, } /// Options set when constructing or spawning a command. /// /// It's recommended to use the [`Default`] implementation for this struct, and only set the options /// you need to change, to proof against new options being added in future. /// /// # Examples /// /// ``` /// # use watchexec_supervisor::command::{Command, Program, SpawnOptions}; /// Command { /// program: Program::Exec { /// prog: "make".into(), /// args: vec!["check".into()], /// }, /// options: SpawnOptions { /// grouped: true, /// ..Default::default() /// }, /// }; /// ``` #[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Hash)] pub struct SpawnOptions { /// Run the program in a new process group. /// /// This will use either of Unix [process groups] or Windows [Job Objects] via the /// [`process-wrap`](process_wrap) crate. /// /// [process groups]: https://en.wikipedia.org/wiki/Process_group /// [Job Objects]: https://en.wikipedia.org/wiki/Object_Manager_(Windows) pub grouped: bool, /// Run the program in a new session. /// /// This will use Unix [sessions]. On Windows, this is not supported. 
This /// implies `grouped: true`. /// /// [sessions]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/setsid.html pub session: bool, /// Reset the signal mask of the process before we spawn it. /// /// By default, the signal mask of the process is inherited from the parent process. This means /// that if the parent process has blocked any signals, the child process will also block those /// signals. This can cause problems if the child process is expecting to receive those signals. /// /// This is only supported on Unix systems. pub reset_sigmask: bool, } ================================================ FILE: crates/supervisor/src/errors.rs ================================================ //! Error types. use std::{ io::Error, sync::{Arc, OnceLock}, }; /// Convenience type for a [`std::io::Error`] which can be shared across threads. pub type SyncIoError = Arc<OnceLock<Error>>; /// Make a [`SyncIoError`] from a [`std::io::Error`]. #[must_use] pub fn sync_io_error(err: Error) -> SyncIoError { let lock = OnceLock::new(); lock.set(err).expect("unreachable: lock was just created"); Arc::new(lock) } ================================================ FILE: crates/supervisor/src/flag.rs ================================================ //! A flag that can be raised to wake a task. //! //! Copied wholesale from //! unfortunately not aware of crated version!
use std::{ pin::Pin, sync::{ atomic::{AtomicBool, Ordering::Relaxed}, Arc, }, }; use futures::{ future::Future, task::{AtomicWaker, Context, Poll}, }; #[derive(Debug)] struct Inner { waker: AtomicWaker, set: AtomicBool, } #[derive(Clone, Debug)] pub struct Flag(Arc); impl Default for Flag { fn default() -> Self { Self::new(false) } } impl Flag { pub fn new(value: bool) -> Self { Self(Arc::new(Inner { waker: AtomicWaker::new(), set: AtomicBool::new(value), })) } pub fn raised(&self) -> bool { self.0.set.load(Relaxed) } pub fn raise(&self) { self.0.set.store(true, Relaxed); self.0.waker.wake(); } } impl Future for Flag { type Output = (); fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> { // quick check to avoid registration if already done. if self.0.set.load(Relaxed) { return Poll::Ready(()); } self.0.waker.register(cx.waker()); // Need to check condition **after** `register` to avoid a race // condition that would result in lost notifications. if self.0.set.load(Relaxed) { Poll::Ready(()) } else { Poll::Pending } } } ================================================ FILE: crates/supervisor/src/job/job.rs ================================================ #![allow(clippy::must_use_candidate)] // Ticket-returning methods are supposed to be used without awaiting use std::{ future::Future, sync::{ atomic::{AtomicBool, Ordering}, Arc, }, time::Duration, }; use process_wrap::tokio::CommandWrap; use watchexec_signals::Signal; use crate::{command::Command, errors::SyncIoError, flag::Flag}; use super::{ messages::{Control, ControlMessage, Ticket}, priority::{Priority, PrioritySender}, JobTaskContext, }; /// A handle to a job task spawned in the supervisor. /// /// A job is a task which manages a [`Command`]. It is responsible for spawning the command's /// program, for handling messages which control it, for managing the program's lifetime, and for /// collecting its exit status and some timing information. 
/// /// Most of the methods here queue [`Control`]s to the job task and return [`Ticket`]s. Controls /// execute in order, except where noted. Tickets are futures which resolve when the corresponding /// control has been run. Unlike most futures, tickets don't need to be polled for controls to make /// progress; the future is only used to signal completion. Dropping a ticket will not drop the /// control, so it's safe to do so if you don't care about when the control completes. /// /// Note that controls are not guaranteed to run, like if the job task stops or panics before a /// control is processed. If a job task stops gracefully, all pending tickets will resolve /// immediately. If a job task panics (outside of hooks, panics are bugs!), pending tickets will /// never resolve. /// /// This struct is cloneable (internally it is made of Arcs). Dropping the last instance of a Job /// will close the job's control queue, which will cause the job task to stop gracefully. Note that /// a task graceful stop is not the same as a graceful stop of the contained command; when the job /// drops, the command will be dropped in turn, and forcefully terminated via `kill_on_drop`. #[derive(Debug, Clone)] pub struct Job { pub(crate) command: Arc<Command>, pub(crate) control_queue: PrioritySender, /// Set to true when the command task has stopped gracefully. pub(crate) gone: Flag, /// Mirrors the command state: true when a child process is running. pub(crate) running: Arc<AtomicBool>, } impl Job { /// The [`Command`] this job is managing. pub fn command(&self) -> Arc<Command> { self.command.clone() } /// If this job is dead. /// /// A dead job is one where the job task has stopped entirely, not just /// a job whose command has finished. See [`is_running`](Self::is_running). pub fn is_dead(&self) -> bool { self.gone.raised() } /// If a child process is currently running. /// /// This returns `false` if the command has finished, hasn't been started /// yet, or the job is dead.
pub fn is_running(&self) -> bool { self.running.load(Ordering::Relaxed) } fn prepare_control(&self, control: Control) -> (Ticket, ControlMessage) { let done = Flag::default(); ( Ticket { job_gone: self.gone.clone(), control_done: done.clone(), }, ControlMessage { control, done }, ) } pub(crate) fn send_controls<const N: usize>( &self, controls: [Control; N], priority: Priority, ) -> Ticket { if N == 0 || self.gone.raised() { Ticket::cancelled() } else if N == 1 { let control = controls.into_iter().next().expect("UNWRAP: N > 0"); let (ticket, control) = self.prepare_control(control); self.control_queue.send(control, priority); ticket } else { let mut last_ticket = None; for control in controls { let (ticket, control) = self.prepare_control(control); last_ticket = Some(ticket); self.control_queue.send(control, priority); } last_ticket.expect("UNWRAP: N > 0") } } /// Send a control message to the command. /// /// All control messages are queued in the order they're sent and processed in order. /// /// In general prefer using the other methods on this struct rather than sending [`Control`]s /// directly. pub fn control(&self, control: Control) -> Ticket { self.send_controls([control], Priority::Normal) } /// Start the command if it's not running. pub fn start(&self) -> Ticket { self.control(Control::Start) } /// Stop the command if it's running and wait for completion. /// /// If you don't want to wait for completion, use `signal(Signal::ForceStop)` instead. pub fn stop(&self) -> Ticket { self.control(Control::Stop) } /// Gracefully stop the command if it's running. /// /// The command will be sent `signal` and then given `grace` time before being forcefully /// terminated. If `grace` is zero, that still happens, but the command is terminated forcefully /// on the next "tick" of the supervisor loop, which doesn't leave the process a lot of time to /// do anything.
	pub fn stop_with_signal(&self, signal: Signal, grace: Duration) -> Ticket {
		if cfg!(unix) {
			self.control(Control::GracefulStop { signal, grace })
		} else {
			self.stop()
		}
	}

	/// Restart the command if it's running, or start it if it's not.
	pub fn restart(&self) -> Ticket {
		self.send_controls([Control::Stop, Control::Start], Priority::Normal)
	}

	/// Gracefully restart the command if it's running, or start it if it's not.
	///
	/// The command will be sent `signal` and then given `grace` time before being forcefully
	/// terminated. If `grace` is zero, that still happens, but the command is terminated forcefully
	/// on the next "tick" of the supervisor loop, which doesn't leave the process a lot of time to
	/// do anything.
	pub fn restart_with_signal(&self, signal: Signal, grace: Duration) -> Ticket {
		if cfg!(unix) {
			self.send_controls(
				[Control::GracefulStop { signal, grace }, Control::Start],
				Priority::Normal,
			)
		} else {
			self.restart()
		}
	}

	/// Restart the command if it's running, but don't start it if it's not.
	pub fn try_restart(&self) -> Ticket {
		self.control(Control::TryRestart)
	}

	/// Gracefully restart the command if it's running, but don't start it if it's not.
	///
	/// The command will be sent `signal` and then given `grace` time before being forcefully
	/// terminated. If `grace` is zero, that still happens, but the command is terminated forcefully
	/// on the next "tick" of the supervisor loop, which doesn't leave the process a lot of time to
	/// do anything.
	pub fn try_restart_with_signal(&self, signal: Signal, grace: Duration) -> Ticket {
		if cfg!(unix) {
			self.control(Control::TryGracefulRestart { signal, grace })
		} else {
			self.try_restart()
		}
	}

	/// Send a signal to the command.
	///
	/// Sends a signal to the current program, if there is one. If there isn't, this is a no-op.
	///
	/// On Windows, this is a no-op for all signals but [`Signal::ForceStop`], which tries to stop
	/// the command like a `stop()` would, but doesn't wait for completion.
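The graceful stop/restart methods above compile both the Unix and non-Unix paths on every platform and choose at runtime with `cfg!(unix)`. A minimal sketch of that dispatch shape, using a hypothetical `StopPlan` in place of the real `Control` messages:

```rust
#[derive(Debug, PartialEq)]
pub enum StopPlan {
    // Signal first, then force-kill after the grace period elapses.
    Graceful { grace_ms: u64 },
    // No signals available (e.g. Windows): go straight to a forceful stop.
    Forceful,
}

pub fn plan_stop(grace_ms: u64) -> StopPlan {
    // cfg!(unix) expands to a boolean constant, so both arms stay
    // type-checked on every platform, unlike #[cfg(unix)] which removes
    // the code entirely.
    if cfg!(unix) {
        StopPlan::Graceful { grace_ms }
    } else {
        StopPlan::Forceful
    }
}
```

The trade-off of `cfg!` over `#[cfg]` here is that both branches must compile everywhere, which catches bitrot in the rarely-exercised fallback path.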
	/// This is because Windows doesn't have signals; in the future, [`Hangup`](Signal::Hangup),
	/// [`Interrupt`](Signal::Interrupt), and [`Terminate`](Signal::Terminate) may be implemented
	/// using [GenerateConsoleCtrlEvent], see [tracking issue #219](https://github.com/watchexec/watchexec/issues/219).
	///
	/// [GenerateConsoleCtrlEvent]: https://learn.microsoft.com/en-us/windows/console/generateconsolectrlevent
	pub fn signal(&self, sig: Signal) -> Ticket {
		self.control(Control::Signal(sig))
	}

	/// Stop the command, then mark it for garbage collection.
	///
	/// The underlying control messages are sent like normal, so they wait for all pending controls
	/// to process. If you want to delete the command immediately, use `delete_now()`.
	pub fn delete(&self) -> Ticket {
		self.send_controls([Control::Stop, Control::Delete], Priority::Normal)
	}

	/// Stop the command immediately, then mark it for garbage collection.
	///
	/// The underlying control messages are sent with higher priority than normal, so they bypass
	/// all others. If you want to delete after all current controls are processed, use `delete()`.
	pub fn delete_now(&self) -> Ticket {
		self.send_controls([Control::Stop, Control::Delete], Priority::Urgent)
	}

	/// Get a future which resolves when the command ends.
	///
	/// If the command is not running, the future resolves immediately.
	///
	/// The underlying control message is sent with higher priority than normal, so it targets the
	/// actively running command, not the one that will be running after the rest of the controls
	/// are done; note this may still be racy if the command ends between the time the message is
	/// sent and the time it's processed.
	pub fn to_wait(&self) -> Ticket {
		self.send_controls([Control::NextEnding], Priority::High)
	}

	/// Run an arbitrary function.
	///
	/// The function is given [`&JobTaskContext`](JobTaskContext), which contains the state of the
	/// currently executing, next-to-start, or just-finished command, as well as the final state of
	/// the _previous_ run of the command.
	///
	/// Technically, some operations can be done through a `&self` shared borrow on the running
	/// command's [`ChildWrapper`], but this library recommends against taking advantage of that;
	/// prefer using the methods on here instead, so that the supervisor can keep track of
	/// what's going on.
	pub fn run(&self, fun: impl FnOnce(&JobTaskContext<'_>) + Send + Sync + 'static) -> Ticket {
		self.control(Control::SyncFunc(Box::new(fun)))
	}

	/// Run an arbitrary function and await the returned future.
	///
	/// The function is given [`&JobTaskContext`](JobTaskContext), which contains the state of the
	/// currently executing, next-to-start, or just-finished command, as well as the final state of
	/// the _previous_ run of the command.
	///
	/// Technically, some operations can be done through a `&self` shared borrow on the running
	/// command's [`ChildWrapper`], but this library recommends against taking advantage of that;
	/// prefer using the methods on here instead, so that the supervisor can keep track of
	/// what's going on.
	///
	/// A gotcha when using this method is that the future returned by the function can live longer
	/// than the `&JobTaskContext` it was given, so you can't bring the context into the async block;
	/// instead you must clone or copy the parts you need beforehand, in the sync portion.
	///
	/// For example, this won't compile:
	///
	/// ```compile_fail
	/// # use std::sync::Arc;
	/// # use tokio::sync::mpsc;
	/// # use watchexec_supervisor::command::{Command, Program};
	/// # use watchexec_supervisor::job::{CommandState, start_job};
	/// #
	/// # let (job, _task) = start_job(Arc::new(Command { program: Program::Exec { prog: "/bin/date".into(), args: Vec::new() }.into(), options: Default::default() }));
	/// let (channel, receiver) = mpsc::channel(10);
	/// job.run_async(|context| Box::new(async move {
	///     if let CommandState::Finished { status, .. } = context.current {
	///         channel.send(status).await.ok();
	///     }
	/// }));
	/// ```
	///
	/// But this does:
	///
	/// ```no_run
	/// # use std::sync::Arc;
	/// # use tokio::sync::mpsc;
	/// # use watchexec_supervisor::command::{Command, Program};
	/// # use watchexec_supervisor::job::{CommandState, start_job};
	/// #
	/// # let (job, _task) = start_job(Arc::new(Command { program: Program::Exec { prog: "/bin/date".into(), args: Vec::new() }.into(), options: Default::default() }));
	/// let (channel, receiver) = mpsc::channel(10);
	/// job.run_async(|context| {
	///     let status = if let CommandState::Finished { status, .. } = context.current {
	///         Some(*status)
	///     } else {
	///         None
	///     };
	///
	///     Box::new(async move {
	///         if let Some(status) = status {
	///             channel.send(status).await.ok();
	///         }
	///     })
	/// });
	/// ```
	pub fn run_async(
		&self,
		fun: impl (FnOnce(&JobTaskContext<'_>) -> Box<dyn Future<Output = ()> + Send + Sync>)
			+ Send
			+ Sync
			+ 'static,
	) -> Ticket {
		self.control(Control::AsyncFunc(Box::new(fun)))
	}

	/// Set the spawn hook.
	///
	/// The hook will be called once per process spawned, before the process is spawned. It's given
	/// a mutable reference to the [`process_wrap::tokio::CommandWrap`] and some context; it
	/// can modify or further [wrap](process_wrap) the command as it sees fit.
	pub fn set_spawn_hook(
		&self,
		fun: impl Fn(&mut CommandWrap, &JobTaskContext<'_>) + Send + Sync + 'static,
	) -> Ticket {
		self.control(Control::SetSyncSpawnHook(Arc::new(fun)))
	}

	/// Set the spawn hook (async version).
	///
	/// The hook will be called once per process spawned, before the process is spawned. It's given
	/// a mutable reference to the [`process_wrap::tokio::CommandWrap`] and some context; it
	/// can modify or further [wrap](process_wrap) the command as it sees fit.
	///
	/// A gotcha when using this method is that the future returned by the function can live longer
	/// than the references it was given, so you can't bring the command or context into the async
	/// block; instead you must clone or copy the parts you need beforehand, in the sync portion.
	/// See the documentation for [`run_async`](Job::run_async) for an example.
	///
	/// Fortunately, async spawn hooks should be exceedingly rare: there are very few things to do
	/// in spawn hooks that can't be done in the simpler sync version.
	pub fn set_spawn_async_hook(
		&self,
		fun: impl (Fn(&mut CommandWrap, &JobTaskContext<'_>) -> Box<dyn Future<Output = ()> + Send + Sync>)
			+ Send
			+ Sync
			+ 'static,
	) -> Ticket {
		self.control(Control::SetAsyncSpawnHook(Arc::new(fun)))
	}

	/// Unset any spawn hook.
	pub fn unset_spawn_hook(&self) -> Ticket {
		self.control(Control::UnsetSpawnHook)
	}

	/// Set the spawn function.
	///
	/// When set, this function is passed to
	/// [`CommandWrap::spawn_with()`](process_wrap::tokio::CommandWrap::spawn_with) instead of
	/// using the default [`CommandWrap::spawn()`](process_wrap::tokio::CommandWrap::spawn). It
	/// receives a `&mut tokio::process::Command` and must return the spawned
	/// [`tokio::process::Child`].
	///
	/// All process-wrap layers are still applied around the child, so this only customises the
	/// low-level spawn step. This is useful for delegating process spawning to a privileged
	/// helper (e.g. for Linux capability granting) while keeping the supervisor's lifecycle
	/// management.
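A sync spawn hook is, in essence, a function that mutates the command just before it is spawned. A std-only sketch, with `std::process::Command` standing in for the `process_wrap` `CommandWrap` the real hook receives (an assumption for illustration):

```rust
use std::process::Command;

// Hypothetical hook type: called once per spawn, before spawning.
pub type SpawnHookFn = Box<dyn Fn(&mut Command) + Send + Sync>;

// Apply the hook (if any) to the command about to be spawned.
pub fn apply_hook(cmd: &mut Command, hook: Option<&SpawnHookFn>) {
    if let Some(hook) = hook {
        hook(cmd);
    }
}
```

Typical hook bodies set environment variables, working directories, or stdio, all of which can be observed on the `Command` without actually spawning a process.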
	pub fn set_spawn_fn(
		&self,
		fun: impl Fn(&mut tokio::process::Command) -> std::io::Result<tokio::process::Child>
			+ Send
			+ Sync
			+ 'static,
	) -> Ticket {
		self.control(Control::SetSpawnFn(Arc::new(fun)))
	}

	/// Unset any spawn function, reverting to the default `CommandWrap::spawn()`.
	pub fn unset_spawn_fn(&self) -> Ticket {
		self.control(Control::ClearSpawnFn)
	}

	/// Set the error handler.
	pub fn set_error_handler(&self, fun: impl Fn(SyncIoError) + Send + Sync + 'static) -> Ticket {
		self.control(Control::SetSyncErrorHandler(Arc::new(fun)))
	}

	/// Set the error handler (async version).
	pub fn set_async_error_handler(
		&self,
		fun: impl (Fn(SyncIoError) -> Box<dyn Future<Output = ()> + Send + Sync>)
			+ Send
			+ Sync
			+ 'static,
	) -> Ticket {
		self.control(Control::SetAsyncErrorHandler(Arc::new(fun)))
	}

	/// Unset the error handler.
	///
	/// Errors will be silently ignored.
	pub fn unset_error_handler(&self) -> Ticket {
		self.control(Control::UnsetErrorHandler)
	}
}

================================================
FILE: crates/supervisor/src/job/messages.rs
================================================
use std::{
	future::Future,
	pin::Pin,
	task::{Context, Poll},
	time::Duration,
};

use futures::{future::select, FutureExt};
use watchexec_signals::Signal;

use crate::flag::Flag;

use super::task::{
	AsyncErrorHandler, AsyncFunc, AsyncSpawnHook, SyncErrorHandler, SyncFunc, SyncSpawnHook,
	SpawnFn,
};

/// The underlying control message types for [`Job`](super::Job).
///
/// You may use [`Job::control()`](super::Job::control()) to send these messages directly, but in
/// general should prefer the higher-level methods on [`Job`](super::Job) itself.
pub enum Control {
	/// For [`Job::start()`](super::Job::start()).
	Start,

	/// For [`Job::stop()`](super::Job::stop()).
	Stop,

	/// For [`Job::stop_with_signal()`](super::Job::stop_with_signal()).
	GracefulStop {
		/// Signal to send immediately
		signal: Signal,

		/// Time to wait before forceful termination
		grace: Duration,
	},

	/// For [`Job::try_restart()`](super::Job::try_restart()).
	TryRestart,

	/// For [`Job::try_restart_with_signal()`](super::Job::try_restart_with_signal()).
	TryGracefulRestart {
		/// Signal to send immediately
		signal: Signal,

		/// Time to wait before forceful termination and restart
		grace: Duration,
	},

	/// Internal implementation detail of [`Control::TryGracefulRestart`].
	ContinueTryGracefulRestart,

	/// For [`Job::signal()`](super::Job::signal()).
	Signal(Signal),

	/// For [`Job::delete()`](super::Job::delete()) and [`Job::delete_now()`](super::Job::delete_now()).
	Delete,

	/// For [`Job::to_wait()`](super::Job::to_wait()).
	NextEnding,

	/// For [`Job::run()`](super::Job::run()).
	SyncFunc(SyncFunc),

	/// For [`Job::run_async()`](super::Job::run_async()).
	AsyncFunc(AsyncFunc),

	/// For [`Job::set_spawn_hook()`](super::Job::set_spawn_hook()).
	SetSyncSpawnHook(SyncSpawnHook),

	/// For [`Job::set_spawn_async_hook()`](super::Job::set_spawn_async_hook()).
	SetAsyncSpawnHook(AsyncSpawnHook),

	/// For [`Job::unset_spawn_hook()`](super::Job::unset_spawn_hook()).
	UnsetSpawnHook,

	/// For [`Job::set_error_handler()`](super::Job::set_error_handler()).
	SetSyncErrorHandler(SyncErrorHandler),

	/// For [`Job::set_async_error_handler()`](super::Job::set_async_error_handler()).
	SetAsyncErrorHandler(AsyncErrorHandler),

	/// For [`Job::unset_error_handler()`](super::Job::unset_error_handler()).
	UnsetErrorHandler,

	/// For [`Job::set_spawn_fn()`](super::Job::set_spawn_fn()).
	SetSpawnFn(SpawnFn),

	/// For [`Job::unset_spawn_fn()`](super::Job::unset_spawn_fn()).
	ClearSpawnFn,
}

impl std::fmt::Debug for Control {
	fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
		match self {
			Self::Start => f.debug_struct("Start").finish(),
			Self::Stop => f.debug_struct("Stop").finish(),
			Self::GracefulStop { signal, grace } => f
				.debug_struct("GracefulStop")
				.field("signal", signal)
				.field("grace", grace)
				.finish(),
			Self::TryRestart => f.debug_struct("TryRestart").finish(),
			Self::TryGracefulRestart { signal, grace } => f
				.debug_struct("TryGracefulRestart")
				.field("signal", signal)
				.field("grace", grace)
				.finish(),
			Self::ContinueTryGracefulRestart => {
				f.debug_struct("ContinueTryGracefulRestart").finish()
			}
			Self::Signal(signal) => f.debug_struct("Signal").field("signal", signal).finish(),
			Self::Delete => f.debug_struct("Delete").finish(),
			Self::NextEnding => f.debug_struct("NextEnding").finish(),
			Self::SyncFunc(_) => f.debug_struct("SyncFunc").finish_non_exhaustive(),
			Self::AsyncFunc(_) => f.debug_struct("AsyncFunc").finish_non_exhaustive(),
			Self::SetSyncSpawnHook(_) => f.debug_struct("SetSyncSpawnHook").finish_non_exhaustive(),
			Self::SetAsyncSpawnHook(_) => {
				f.debug_struct("SetAsyncSpawnHook").finish_non_exhaustive()
			}
			Self::UnsetSpawnHook => f.debug_struct("UnsetSpawnHook").finish(),
			Self::SetSyncErrorHandler(_) => f
				.debug_struct("SetSyncErrorHandler")
				.finish_non_exhaustive(),
			Self::SetAsyncErrorHandler(_) => f
				.debug_struct("SetAsyncErrorHandler")
				.finish_non_exhaustive(),
			Self::UnsetErrorHandler => f.debug_struct("UnsetErrorHandler").finish(),
			Self::SetSpawnFn(_) => f.debug_struct("SetSpawnFn").finish_non_exhaustive(),
			Self::ClearSpawnFn => f.debug_struct("ClearSpawnFn").finish(),
		}
	}
}

#[derive(Debug)]
pub struct ControlMessage {
	pub control: Control,
	pub done: Flag,
}

/// Lightweight future which resolves when the corresponding control has been run.
///
/// Unlike most futures, tickets don't need to be polled for controls to make progress; the future
/// is only used to signal completion.
/// Dropping a ticket will not drop the control, so it's safe to
/// do so if you don't care about when the control completes.
///
/// Tickets can be cloned, and all clones will resolve at the same time.
#[derive(Debug, Clone)]
pub struct Ticket {
	pub(crate) job_gone: Flag,
	pub(crate) control_done: Flag,
}

impl Ticket {
	pub(crate) fn cancelled() -> Self {
		Self {
			job_gone: Flag::new(true),
			control_done: Flag::new(true),
		}
	}
}

impl Future for Ticket {
	type Output = ();

	fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
		Pin::new(&mut select(self.job_gone.clone(), self.control_done.clone()).map(|_| ())).poll(cx)
	}
}

================================================
FILE: crates/supervisor/src/job/priority.rs
================================================
use std::time::Duration;

use tokio::{
	select,
	sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},
	time::{sleep_until, Instant, Sleep},
};

use crate::flag::Flag;

use super::{messages::ControlMessage, Control};

#[derive(Debug, Copy, Clone)]
pub enum Priority {
	Normal,
	High,
	Urgent,
}

#[derive(Debug)]
pub struct PriorityReceiver {
	pub normal: UnboundedReceiver<ControlMessage>,
	pub high: UnboundedReceiver<ControlMessage>,
	pub urgent: UnboundedReceiver<ControlMessage>,
}

#[derive(Debug, Clone)]
pub struct PrioritySender {
	pub normal: UnboundedSender<ControlMessage>,
	pub high: UnboundedSender<ControlMessage>,
	pub urgent: UnboundedSender<ControlMessage>,
}

impl PrioritySender {
	pub fn send(&self, message: ControlMessage, priority: Priority) {
		// drop errors: if the channel is closed, the job is dead
		let _ = match priority {
			Priority::Normal => self.normal.send(message),
			Priority::High => self.high.send(message),
			Priority::Urgent => self.urgent.send(message),
		};
	}
}

impl PriorityReceiver {
	/// Receive a control message from the command.
	///
	/// If `stop_timer` is `Some`, normal priority messages are not received; instead, only high and
	/// urgent priority messages are received until the timer expires, and when the timer completes,
	/// a `Stop` control message is returned and the `stop_timer` is set back to `None`.
	///
	/// This is used to implement the graceful stopping logic of stop, restart, and try-restart.
	pub async fn recv(&mut self, stop_timer: &mut Option<Timer>) -> Option<ControlMessage> {
		if stop_timer.as_ref().map_or(false, Timer::is_past) {
			return stop_timer.take().map(|timer| timer.to_control());
		}

		if let Ok(message) = self.urgent.try_recv() {
			return Some(message);
		}

		if let Ok(message) = self.high.try_recv() {
			return Some(message);
		}

		if let Some(timer) = stop_timer.clone() {
			select! {
				() = timer.to_sleep() => {
					*stop_timer = None;
					Some(timer.to_control())
				}
				message = self.urgent.recv() => message,
				message = self.high.recv() => message,
			}
		} else {
			select! {
				message = self.urgent.recv() => message,
				message = self.high.recv() => message,
				message = self.normal.recv() => message,
			}
		}
	}
}

pub fn new() -> (PrioritySender, PriorityReceiver) {
	let (normal_tx, normal_rx) = unbounded_channel();
	let (high_tx, high_rx) = unbounded_channel();
	let (urgent_tx, urgent_rx) = unbounded_channel();

	(
		PrioritySender {
			normal: normal_tx,
			high: high_tx,
			urgent: urgent_tx,
		},
		PriorityReceiver {
			normal: normal_rx,
			high: high_rx,
			urgent: urgent_rx,
		},
	)
}

#[derive(Debug, Clone)]
pub struct Timer {
	pub until: Instant,
	pub done: Flag,
	pub is_restart: bool,
}

impl Timer {
	pub fn stop(grace: Duration, done: Flag) -> Self {
		Self {
			until: Instant::now() + grace,
			done,
			is_restart: false,
		}
	}

	pub fn restart(grace: Duration, done: Flag) -> Self {
		Self {
			until: Instant::now() + grace,
			done,
			is_restart: true,
		}
	}

	fn to_sleep(&self) -> Sleep {
		sleep_until(self.until)
	}

	fn is_past(&self) -> bool {
		self.until <= Instant::now()
	}

	fn to_control(&self) -> ControlMessage {
		ControlMessage {
			control: if self.is_restart {
				Control::ContinueTryGracefulRestart
			} else {
Control::Stop }, done: self.done.clone(), } } } ================================================ FILE: crates/supervisor/src/job/state.rs ================================================ use std::{sync::Arc, time::Instant}; #[cfg(not(test))] use process_wrap::tokio::ChildWrapper; use process_wrap::tokio::CommandWrap; use tracing::trace; use watchexec_events::ProcessEnd; use crate::command::Command; use super::task::SpawnFn; /// The state of the job's command / process. /// /// This is used both internally to represent the current state (ready/pending, running, finished) /// of the command, and can be queried via the [`JobTaskContext`](super::JobTaskContext) by hooks. /// /// Technically, some operations can be done through a `&self` shared borrow on the running /// command's [`ChildWrapper`], but this library recommends against taking advantage of this, /// and prefer using the methods on [`Job`](super::Job) instead, so that the job can keep track of /// what's going on. #[derive(Debug)] #[cfg_attr(test, derive(Clone))] pub enum CommandState { /// The command is neither running nor has finished. This is the initial state. Pending, /// The command is currently running. Note that this is established after the process is spawned /// and not precisely synchronised with the process' aliveness: in some cases the process may be /// exited but still `Running` in this enum. Running { /// The child process (test version). #[cfg(test)] child: super::TestChild, /// The child process. #[cfg(not(test))] child: Box, /// The time at which the process was spawned. started: Instant, }, /// The command has completed and its status was collected. Finished { /// The command's exit status. status: ProcessEnd, /// The time at which the process was spawned. started: Instant, /// The time at which the process finished, or more precisely, when its status was collected. finished: Instant, }, } impl CommandState { /// Whether the command is pending, i.e. not running or finished. 
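The Pending → Running → Finished lifecycle and the `reset()` snapshot semantics can be sketched std-only. Assumptions for illustration: a plain `i32` exit code stands in for `ProcessEnd`, there is no child handle, and an abandoned `Running` state is closed out with a placeholder status (the real code uses `ProcessEnd::Continued`):

```rust
use std::time::Instant;

// Hypothetical simplified mirror of CommandState.
#[derive(Debug, Clone, PartialEq)]
pub enum State {
    Pending,
    Running { started: Instant },
    Finished { status: i32, started: Instant },
}

impl State {
    // Like CommandState::reset: return a snapshot of the run that just
    // ended, moving the live state back to Pending. A still-Running state
    // is closed out as Finished, since the supervisor is abandoning it.
    pub fn reset(&mut self) -> State {
        let snapshot = match *self {
            State::Pending => State::Pending,
            State::Finished { status, started } => State::Finished { status, started },
            State::Running { started } => State::Finished { status: 0, started },
        };
        *self = State::Pending;
        snapshot
    }
}
```

The returned snapshot is what the supervisor stores as `previous_run` and exposes to hooks as `JobTaskContext::previous`.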
#[must_use] pub const fn is_pending(&self) -> bool { matches!(self, Self::Pending) } /// Whether the command is running. #[must_use] pub const fn is_running(&self) -> bool { matches!(self, Self::Running { .. }) } /// Whether the command is finished. #[must_use] pub const fn is_finished(&self) -> bool { matches!(self, Self::Finished { .. }) } #[cfg_attr(test, allow(unused_mut, unused_variables))] pub(crate) fn spawn( &mut self, command: Arc, mut spawnable: CommandWrap, spawn_fn: Option<&SpawnFn>, ) -> std::io::Result { if let Self::Running { .. } = self { trace!("command running, not spawning again"); return Ok(false); } trace!(?command, "spawning command"); #[cfg(test)] let child = super::TestChild::new(command)?; #[cfg(not(test))] let child = if let Some(f) = spawn_fn { spawnable.spawn_with(|cmd| f(cmd))? } else { spawnable.spawn()? }; *self = Self::Running { child, started: Instant::now(), }; Ok(true) } #[must_use] pub(crate) fn reset(&mut self) -> Self { trace!(?self, "resetting command state"); match self { Self::Pending => Self::Pending, Self::Finished { status, started, finished, .. } => { let copy = Self::Finished { status: *status, started: *started, finished: *finished, }; *self = Self::Pending; copy } Self::Running { started, .. 
} => { let copy = Self::Finished { status: ProcessEnd::Continued, started: *started, finished: Instant::now(), }; *self = Self::Pending; copy } } } pub(crate) async fn wait(&mut self) -> std::io::Result { if let Self::Running { child, started } = self { let end = child.wait().await?; *self = Self::Finished { status: end.into(), started: *started, finished: Instant::now(), }; Ok(true) } else { Ok(false) } } } ================================================ FILE: crates/supervisor/src/job/task.rs ================================================ use std::{ future::Future, mem::take, sync::{ atomic::{AtomicBool, Ordering}, Arc, }, time::Instant, }; use process_wrap::tokio::CommandWrap; use tokio::{select, task::JoinHandle}; use tracing::{instrument, trace, trace_span, Instrument}; use watchexec_signals::Signal; use crate::{ command::Command, errors::{sync_io_error, SyncIoError}, flag::Flag, job::priority::Timer, }; use super::{ job::Job, messages::{Control, ControlMessage}, priority, state::CommandState, }; /// Spawn a job task and return a [`Job`] handle and a [`JoinHandle`]. /// /// The job task immediately starts in the background: it does not need polling. #[must_use] #[instrument(level = "trace")] pub fn start_job(command: Arc) -> (Job, JoinHandle<()>) { enum Loop { Normally, Skip, Break, } let (sender, mut receiver) = priority::new(); let gone = Flag::default(); let done = gone.clone(); let running = Arc::new(AtomicBool::new(false)); let running_flag = running.clone(); ( Job { command: command.clone(), control_queue: sender, gone, running, }, tokio::spawn(async move { let mut error_handler = ErrorHandler::None; let mut spawn_hook = SpawnHook::None; let mut spawn_fn: Option = None; let mut command_state = CommandState::Pending; let mut previous_run = None; let mut stop_timer = None; let mut on_end: Vec = Vec::new(); let mut on_end_restart: Option = None; 'main: loop { running_flag.store(command_state.is_running(), Ordering::Relaxed); select! 
{ result = command_state.wait(), if command_state.is_running() => { trace!(?result, ?command_state, "got wait result"); match async { #[cfg(test)] eprintln!("[{:?}] waited: {result:?}", Instant::now()); match result { Err(err) => { let fut = error_handler.call(sync_io_error(err)); fut.await; return Loop::Skip; } Ok(true) => { trace!(existing=?stop_timer, "erasing stop timer"); if let Some(timer) = stop_timer.take() { timer.done.raise(); } trace!(count=%on_end.len(), "raising all pending end flags"); for done in take(&mut on_end) { done.raise(); } if let Some(flag) = on_end_restart.take() { trace!("continuing a graceful restart"); let mut spawnable = command.to_spawnable(); previous_run = Some(command_state.reset()); spawn_hook .call( &mut spawnable, &JobTaskContext { command: command.clone(), current: &command_state, previous: previous_run.as_ref(), }, ) .await; if let Err(err) = command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref()) { let fut = error_handler.call(sync_io_error(err)); fut.await; return Loop::Skip; } trace!("raising graceful restart's flag"); flag.raise(); } } Ok(false) => { trace!("child wasn't running, ignoring wait result"); } } Loop::Normally }.instrument(trace_span!("handle wait result")).await { Loop::Normally => {} Loop::Skip => { trace!("skipping to next event"); continue 'main; } Loop::Break => { trace!("breaking out of main loop"); break 'main; } } } Some(ControlMessage { control, done }) = receiver.recv(&mut stop_timer) => { match async { trace!(?control, ?command_state, "got control message"); #[cfg(test)] eprintln!("[{:?}] control: {control:?}", Instant::now()); macro_rules! 
try_with_handler { ($erroring:expr) => { match $erroring { Err(err) => { let fut = error_handler.call(sync_io_error(err)); fut.await; trace!("raising done flag for this control after error"); done.raise(); return Loop::Normally; } Ok(value) => value, } }; } match control { Control::Start => { if command_state.is_running() { trace!("child is running, skip"); } else { let mut spawnable = command.to_spawnable(); previous_run = Some(command_state.reset()); spawn_hook .call( &mut spawnable, &JobTaskContext { command: command.clone(), current: &command_state, previous: previous_run.as_ref(), }, ) .await; try_with_handler!(command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref())); } } Control::Stop => { if let CommandState::Running { child, started, .. } = &mut command_state { trace!("stopping child"); try_with_handler!(Box::into_pin(child.kill()).await); trace!("waiting on child"); let status = try_with_handler!(child.wait().await); trace!(?status, "got child end status"); command_state = CommandState::Finished { status: status.into(), started: *started, finished: Instant::now(), }; trace!(count=%on_end.len(), "raising all pending end flags"); for done in take(&mut on_end) { done.raise(); } } else { trace!("child isn't running, skip"); } } Control::GracefulStop { signal, grace } => { if let CommandState::Running { child, .. } = &mut command_state { try_with_handler!(signal_child(signal, child).await); trace!(?grace, "setting up graceful stop timer"); stop_timer.replace(Timer::stop(grace, done)); return Loop::Skip; } trace!("child isn't running, skip"); } Control::TryRestart => { if let CommandState::Running { child, started, .. 
} = &mut command_state { trace!("stopping child"); try_with_handler!(Box::into_pin(child.kill()).await); trace!("waiting on child"); let status = try_with_handler!(child.wait().await); trace!(?status, "got child end status"); command_state = CommandState::Finished { status: status.into(), started: *started, finished: Instant::now(), }; previous_run = Some(command_state.reset()); trace!(count=%on_end.len(), "raising all pending end flags"); for done in take(&mut on_end) { done.raise(); } let mut spawnable = command.to_spawnable(); spawn_hook .call( &mut spawnable, &JobTaskContext { command: command.clone(), current: &command_state, previous: previous_run.as_ref(), }, ) .await; try_with_handler!(command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref())); } else { trace!("child isn't running, skip"); } } Control::TryGracefulRestart { signal, grace } => { if let CommandState::Running { child, .. } = &mut command_state { try_with_handler!(signal_child(signal, child).await); trace!(?grace, "setting up graceful stop timer"); stop_timer.replace(Timer::restart(grace, done.clone())); trace!("setting up graceful restart flag"); on_end_restart = Some(done); return Loop::Skip; } trace!("child isn't running, skip"); } Control::ContinueTryGracefulRestart => { trace!("continuing a graceful try-restart"); if let CommandState::Running { child, started, .. 
} = &mut command_state { trace!("stopping child forcefully"); try_with_handler!(Box::into_pin(child.kill()).await); trace!("waiting on child"); let status = try_with_handler!(child.wait().await); trace!(?status, "got child end status"); command_state = CommandState::Finished { status: status.into(), started: *started, finished: Instant::now(), }; trace!(count=%on_end.len(), "raising all pending end flags"); for done in take(&mut on_end) { done.raise(); } } let mut spawnable = command.to_spawnable(); previous_run = Some(command_state.reset()); spawn_hook .call( &mut spawnable, &JobTaskContext { command: command.clone(), current: &command_state, previous: previous_run.as_ref(), }, ) .await; try_with_handler!(command_state.spawn(command.clone(), spawnable, spawn_fn.as_ref())); } Control::Signal(signal) => { if let CommandState::Running { child, .. } = &mut command_state { try_with_handler!(signal_child(signal, child).await); } else { trace!("child isn't running, skip"); } } Control::Delete => { trace!("raising done flag immediately"); done.raise(); return Loop::Break; } Control::NextEnding => { if matches!(command_state, CommandState::Finished { .. 
}) { trace!("child is finished, raise done flag immediately"); done.raise(); return Loop::Normally; } trace!("queue end flag"); on_end.push(done); return Loop::Skip; } Control::SyncFunc(f) => { f(&JobTaskContext { command: command.clone(), current: &command_state, previous: previous_run.as_ref(), }); } Control::AsyncFunc(f) => { Box::into_pin(f(&JobTaskContext { command: command.clone(), current: &command_state, previous: previous_run.as_ref(), })) .await; } Control::SetSyncErrorHandler(f) => { trace!("setting sync error handler"); error_handler = ErrorHandler::Sync(f); } Control::SetAsyncErrorHandler(f) => { trace!("setting async error handler"); error_handler = ErrorHandler::Async(f); } Control::UnsetErrorHandler => { trace!("unsetting error handler"); error_handler = ErrorHandler::None; } Control::SetSyncSpawnHook(f) => { trace!("setting sync spawn hook"); spawn_hook = SpawnHook::Sync(f); } Control::SetAsyncSpawnHook(f) => { trace!("setting async spawn hook"); spawn_hook = SpawnHook::Async(f); } Control::UnsetSpawnHook => { trace!("unsetting spawn hook"); spawn_hook = SpawnHook::None; } Control::SetSpawnFn(f) => { trace!("setting spawn fn"); spawn_fn = Some(f); } Control::ClearSpawnFn => { trace!("clearing spawn fn"); spawn_fn = None; } } trace!("raising control done flag"); done.raise(); Loop::Normally }.instrument(trace_span!("handle control message")).await { Loop::Normally => {} Loop::Skip => { trace!("skipping to next event (without raising done flag)"); continue 'main; } Loop::Break => { trace!("breaking out of main loop"); break 'main; } } } else => { trace!("all select branches disabled, exiting"); break 'main; } } } trace!("raising job done flag"); running_flag.store(false, Ordering::Relaxed); done.raise(); }), ) } macro_rules! 
sync_async_callbox { ($name:ident, $synct:ty, $asynct:ty, ($($argname:ident : $argtype:ty),*)) => { pub enum $name { None, Sync($synct), Async($asynct), } impl $name { #[instrument(level = "trace", skip(self, $($argname),*))] pub async fn call(&self, $($argname: $argtype),*) { match self { $name::None => (), $name::Sync(f) => { ::tracing::trace!("calling sync {:?}", stringify!($name)); f($($argname),*) } $name::Async(f) => { ::tracing::trace!("calling async {:?}", stringify!($name)); Box::into_pin(f($($argname),*)).await } } } } }; } /// Job task internals exposed via hooks. #[derive(Debug)] pub struct JobTaskContext<'task> { /// The job's [`Command`]. pub command: Arc, /// The current state of the job. pub current: &'task CommandState, /// The state of the previous iteration of the job, if any. /// /// This is generally [`CommandState::Finished`], but may be other states in rare cases. pub previous: Option<&'task CommandState>, } pub type SyncFunc = Box) + Send + Sync + 'static>; pub type AsyncFunc = Box< dyn (FnOnce(&JobTaskContext<'_>) -> Box + Send + Sync>) + Send + Sync + 'static, >; pub type SyncSpawnHook = Arc) + Send + Sync + 'static>; pub type AsyncSpawnHook = Arc< dyn (Fn(&mut CommandWrap, &JobTaskContext<'_>) -> Box + Send + Sync>) + Send + Sync + 'static, >; /// A function that customises how the underlying process is spawned. /// /// When set on a [`Job`](super::Job), this function is passed to /// [`CommandWrap::spawn_with()`](process_wrap::tokio::CommandWrap::spawn_with) instead of using /// the default [`CommandWrap::spawn()`](process_wrap::tokio::CommandWrap::spawn). It receives a /// `&mut tokio::process::Command` and must return the spawned `tokio::process::Child`. /// /// All process-wrap layers are still applied around the child, so this only customises the /// low-level spawn step. This is useful for delegating process spawning to a privileged helper /// (e.g. for Linux capability granting) while keeping the supervisor's lifecycle management. 
pub type SpawnFn = Arc<
	dyn Fn(&mut tokio::process::Command) -> std::io::Result<tokio::process::Child>
		+ Send
		+ Sync
		+ 'static,
>;

sync_async_callbox!(SpawnHook, SyncSpawnHook, AsyncSpawnHook, (command: &mut CommandWrap, context: &JobTaskContext<'_>));

pub type SyncErrorHandler = Arc<dyn Fn(SyncIoError) + Send + Sync + 'static>;
pub type AsyncErrorHandler = Arc<
	dyn (Fn(SyncIoError) -> Box<dyn Future<Output = ()> + Send + Sync>) + Send + Sync + 'static,
>;

sync_async_callbox!(ErrorHandler, SyncErrorHandler, AsyncErrorHandler, (error: SyncIoError));

#[cfg_attr(not(windows), allow(clippy::needless_pass_by_ref_mut))] // needed for start_kill()
#[instrument(level = "trace")]
async fn signal_child(
	signal: Signal,
	#[cfg(not(test))] child: &mut Box<dyn TokioChildWrapper>,
	#[cfg(test)] child: &mut super::TestChild,
) -> std::io::Result<()> {
	#[cfg(unix)]
	{
		let sig = signal
			.to_nix()
			.or_else(|| Signal::Terminate.to_nix())
			.expect("UNWRAP: guaranteed for Signal::Terminate default");
		trace!(signal=?sig, "sending signal");
		child.signal(sig as _)?;
	}

	#[cfg(windows)]
	if signal == Signal::ForceStop {
		trace!("starting kill, without waiting");
		child.start_kill()?;
	} else {
		trace!(?signal, "ignoring unsupported signal");
	}

	Ok(())
}


================================================
FILE: crates/supervisor/src/job/test.rs
================================================
#![allow(clippy::unwrap_used)]

use std::{
	num::NonZeroI64,
	process::{ExitStatus, Output},
	sync::{
		atomic::{AtomicBool, Ordering},
		Arc, Mutex,
	},
	time::{Duration, Instant},
};

use tokio::time::sleep;
use watchexec_events::ProcessEnd;

#[cfg(unix)]
use crate::job::TestChildCall;
use crate::{
	command::{Command, Program},
	job::{start_job, CommandState},
};

use super::{Control, Job, Priority};

const GRACE: u64 = 10; // millis

fn erroring_command() -> Arc<Command> {
	Arc::new(Command {
		program: Program::Exec {
			prog: "/does/not/exist".into(),
			args: Vec::new(),
		},
		options: Default::default(),
	})
}

fn working_command() -> Arc<Command> {
	Arc::new(Command {
		program: Program::Exec {
			prog: "/does/not/run".into(),
			args: Vec::new(),
		},
		options: Default::default(),
	})
}

fn ungraceful_command() -> Arc<Command> {
	Arc::new(Command {
		program: Program::Exec {
			prog: "sleep".into(),
			args: vec![(GRACE * 2).to_string()],
		},
		options: Default::default(),
	})
}

fn graceful_command() -> Arc<Command> {
	Arc::new(Command {
		program: Program::Exec {
			prog: "sleep".into(),
			args: vec![(2 * GRACE / 3).to_string()],
		},
		options: Default::default(),
	})
}

#[tokio::test]
async fn sync_error_handler() {
	let (job, task) = start_job(erroring_command());

	let error_handler_called = Arc::new(AtomicBool::new(false));
	job.set_error_handler({
		let error_handler_called = error_handler_called.clone();
		move |_| {
			error_handler_called.store(true, Ordering::Relaxed);
		}
	})
	.await;

	job.start().await;
	assert!(
		error_handler_called.load(Ordering::Relaxed),
		"called on start"
	);

	task.abort();
}

#[tokio::test]
async fn async_error_handler() {
	let (job, task) = start_job(erroring_command());

	let error_handler_called = Arc::new(AtomicBool::new(false));
	job.set_async_error_handler({
		let error_handler_called = error_handler_called.clone();
		move |_| {
			let error_handler_called = error_handler_called.clone();
			Box::new(async move {
				error_handler_called.store(true, Ordering::Relaxed);
			})
		}
	})
	.await;

	job.start().await;
	assert!(
		error_handler_called.load(Ordering::Relaxed),
		"called on start"
	);

	task.abort();
}

#[tokio::test]
async fn unset_error_handler() {
	let (job, task) = start_job(erroring_command());

	let error_handler_called = Arc::new(AtomicBool::new(false));
	job.set_error_handler({
		let error_handler_called = error_handler_called.clone();
		move |_| {
			error_handler_called.store(true, Ordering::Relaxed);
		}
	})
	.await;

	job.unset_error_handler().await;

	job.start().await;
	assert!(
		!error_handler_called.load(Ordering::Relaxed),
		"not called even after start"
	);

	task.abort();
}

#[tokio::test]
async fn queue_ordering() {
	let (job, task) = start_job(working_command());

	let error_handler_called = Arc::new(AtomicBool::new(false));
	job.set_error_handler({
		let error_handler_called = error_handler_called.clone();
		move |_| {
			error_handler_called.store(true, Ordering::Relaxed);
		}
	});
	job.unset_error_handler();

	// We're not awaiting until this one, but because the queue is processed in
	// order, it's effectively the same as waiting on them all.
	job.start().await;

	assert!(
		!error_handler_called.load(Ordering::Relaxed),
		"called after queue await"
	);

	task.abort();
}

#[tokio::test]
async fn sync_func() {
	let (job, task) = start_job(working_command());

	let func_called = Arc::new(AtomicBool::new(false));
	let ticket = job.run({
		let func_called = func_called.clone();
		move |_| {
			func_called.store(true, Ordering::Relaxed);
		}
	});

	assert!(
		!func_called.load(Ordering::Relaxed),
		"immediately after submit, likely before processed"
	);

	ticket.await;

	assert!(
		func_called.load(Ordering::Relaxed),
		"after it's been processed"
	);

	task.abort();
}

#[tokio::test]
async fn async_func() {
	let (job, task) = start_job(working_command());

	let func_called = Arc::new(AtomicBool::new(false));
	let ticket = job.run_async({
		let func_called = func_called.clone();
		move |_| {
			let func_called = func_called;
			Box::new(async move {
				func_called.store(true, Ordering::Relaxed);
			})
		}
	});

	assert!(
		!func_called.load(Ordering::Relaxed),
		"immediately after submit, likely before processed"
	);

	ticket.await;

	assert!(
		func_called.load(Ordering::Relaxed),
		"after it's been processed"
	);

	task.abort();
}

// TODO: figure out how to test spawn hooks

async fn refresh_state(job: &Job, state: &Arc<Mutex<Option<CommandState>>>, current: bool) {
	job.send_controls(
		[Control::SyncFunc(Box::new({
			let state = state.clone();
			move |context| {
				if current {
					state.lock().unwrap().replace(context.current.clone());
				} else {
					*state.lock().unwrap() = context.previous.cloned();
				}
			}
		}))],
		Priority::Urgent,
	)
	.await;
}

async fn set_running_child_status(job: &Job, status: ExitStatus) {
	job.send_controls(
		[Control::AsyncFunc(Box::new({
			move |context| {
				let output_lock = if let CommandState::Running {
					child,
					..
} = context.current { Some(child.output.clone()) } else { None }; Box::new(async move { if let Some(output_lock) = output_lock { *output_lock.lock().await = Some(Output { status, stdout: Vec::new(), stderr: Vec::new(), }); } }) } }))], Priority::Urgent, ) .await; } macro_rules! expect_state { ($current:literal, $job:expr, $expected:pat, $reason:literal) => { let state = Arc::new(Mutex::new(None)); refresh_state(&$job, &state, $current).await; { let state = state.lock().unwrap(); let reason = $reason; let reason = if reason.is_empty() { String::new() } else { format!(" ({reason})") }; assert!( matches!(*state, Some($expected)), "expected Some({}), got {state:?}{reason}", stringify!($expected), ); } }; ($job:expr, $expected:pat, $reason:literal) => { expect_state!(true, $job, $expected, $reason) }; ($job:expr, $expected:pat) => { expect_state!(true, $job, $expected, "") }; (previous: $job:expr, $expected:pat, $reason:literal) => { expect_state!(false, $job, $expected, $reason) }; (previous: $job:expr, $expected:pat) => { expect_state!(false, $job, $expected, "") }; } #[cfg(unix)] async fn get_child(job: &Job) -> super::TestChild { let state = Arc::new(Mutex::new(None)); refresh_state(job, &state, true).await; let state = state.lock().unwrap(); let state = state.as_ref().expect("no state"); match state { CommandState::Running { ref child, .. } => child.clone(), _ => panic!("get_child: expected IsRunning, got {state:?}"), } } #[tokio::test] async fn start() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. 
}); task.abort(); } #[cfg(unix)] #[tokio::test] async fn signal_unix() { use nix::sys::signal::Signal; let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start(); job.signal(watchexec_signals::Signal::User1).await; let calls = get_child(&job).await.calls; assert!(calls.iter().any( |(_, call)| matches!(call, TestChildCall::Signal(sig) if *sig == Signal::SIGUSR1 as i32) )); task.abort(); } #[tokio::test] async fn stop() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; job.stop().await; expect_state!( job, CommandState::Finished { status: ProcessEnd::Success, .. } ); task.abort(); } #[tokio::test] async fn stop_when_running() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.stop().await; expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); task.abort(); } #[tokio::test] async fn stop_fail() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status( &job, ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(), ) .await; job.stop().await; expect_state!( job, CommandState::Finished { status: ProcessEnd::ExitError(_), .. } ); task.abort(); } #[tokio::test] async fn restart() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status( &job, ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(), ) .await; job.restart().await; expect_state!(job, CommandState::Running { .. 
}); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; job.stop().await; expect_state!( previous: job, CommandState::Finished { status: ProcessEnd::ExitError(_), .. } ); expect_state!( job, CommandState::Finished { status: ProcessEnd::Success, .. } ); task.abort(); } #[tokio::test] async fn graceful_stop() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; let stop = job.stop_with_signal( watchexec_signals::Signal::Terminate, Duration::from_millis(GRACE), ); sleep(Duration::from_millis(GRACE / 2)).await; expect_state!( job, CommandState::Finished { .. }, "after signal but before delayed force-stop" ); stop.await; expect_state!(job, CommandState::Finished { .. }); task.abort(); } /// Regression test for https://github.com/watchexec/watchexec/issues/981 /// /// When a process responds to SIGTERM gracefully and the Job handle is dropped, /// the task should exit cleanly without panicking. #[tokio::test] async fn graceful_stop_with_job_dropped() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. 
}); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; // Start graceful stop but don't await the ticket let _stop = job.stop_with_signal( watchexec_signals::Signal::Terminate, Duration::from_millis(GRACE), ); // Give the task time to process the graceful stop sleep(Duration::from_millis(GRACE / 2)).await; // Drop the job handle (simulating the caller losing interest) // This closes all channels to the task drop(job); // The task should exit cleanly without panicking // Previously this would panic with "all branches are disabled and there is no else branch" tokio::time::timeout(Duration::from_millis(GRACE * 10), task) .await .expect("task should complete within timeout") .expect("task should not panic"); } #[tokio::test] async fn graceful_restart() { let (job, task) = start_job(working_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status( &job, ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(), ) .await; job.restart_with_signal( watchexec_signals::Signal::Terminate, Duration::from_millis(GRACE), ) .await; set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; job.stop().await; expect_state!( previous: job, CommandState::Finished { status: ProcessEnd::ExitError(_), .. } ); expect_state!( job, CommandState::Finished { status: ProcessEnd::Success, .. } ); task.abort(); } #[tokio::test] async fn graceful_stop_beyond_grace() { let (job, task) = start_job(ungraceful_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; let stop = job.stop_with_signal( watchexec_signals::Signal::User1, Duration::from_millis(GRACE), ); #[cfg(unix)] { use nix::sys::signal::Signal; expect_state!( job, CommandState::Running { .. 
}, "after USR1 but before delayed stop" ); let calls = get_child(&job).await.calls; assert!(calls.iter().any(|(_, call)| matches!( call, TestChildCall::Signal(sig) if *sig == Signal::SIGUSR1 as i32 ))); } stop.await; expect_state!(job, CommandState::Finished { .. }); task.abort(); } #[tokio::test] async fn graceful_restart_beyond_grace() { let (job, task) = start_job(ungraceful_command()); expect_state!(job, CommandState::Pending); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status( &job, ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(), ) .await; let restart = job.restart_with_signal( watchexec_signals::Signal::User1, Duration::from_millis(GRACE), ); #[cfg(unix)] { use nix::sys::signal::Signal; expect_state!( job, CommandState::Running { .. }, "after USR1 but before delayed restart" ); let calls = get_child(&job).await.calls; assert!(calls.iter().any(|(_, call)| matches!( call, TestChildCall::Signal(sig) if *sig == Signal::SIGUSR1 as i32 ))); } restart.await; set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; job.stop().await; expect_state!( previous: job, CommandState::Finished { status: ProcessEnd::ExitError(_), .. } ); expect_state!( job, CommandState::Finished { status: ProcessEnd::Success, .. } ); task.abort(); } #[tokio::test] async fn try_restart() { let (job, task) = start_job(graceful_command()); expect_state!(job, CommandState::Pending); job.try_restart().await; expect_state!( job, CommandState::Pending, "command still not running after try-restart" ); job.start().await; expect_state!(job, CommandState::Running { .. }); let try_restart = job.try_restart(); eprintln!("[{:?}] test: await try_restart", Instant::now()); try_restart.await; expect_state!(job, CommandState::Running { .. }); job.stop().await; expect_state!( previous: job, CommandState::Finished { .. } ); expect_state!(job, CommandState::Finished { .. 
}); task.abort(); } #[tokio::test] async fn try_graceful_restart() { let (job, task) = start_job(graceful_command()); expect_state!(job, CommandState::Pending); job.try_restart_with_signal( watchexec_signals::Signal::User1, Duration::from_millis(GRACE), ) .await; expect_state!( job, CommandState::Pending, "command still not running after try-graceful-restart" ); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status( &job, ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(), ) .await; let restart = job.try_restart_with_signal( watchexec_signals::Signal::User1, Duration::from_millis(GRACE), ); expect_state!(job, CommandState::Running { .. }); eprintln!("[{:?}] await restart", Instant::now()); restart.await; eprintln!("[{:?}] awaited restart", Instant::now()); expect_state!( previous: job, CommandState::Finished { status: ProcessEnd::ExitError(_), .. } ); expect_state!(job, CommandState::Running { .. }); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; job.stop().await; expect_state!( job, CommandState::Finished { status: ProcessEnd::Success, .. } ); task.abort(); } #[tokio::test] async fn try_restart_beyond_grace() { let (job, task) = start_job(ungraceful_command()); expect_state!(job, CommandState::Pending); job.try_restart().await; expect_state!( job, CommandState::Pending, "command still not running after try-restart" ); job.start().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status( &job, ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(), ) .await; job.try_restart().await; expect_state!(job, CommandState::Running { .. }); set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await; job.stop().await; expect_state!( previous: job, CommandState::Finished { status: ProcessEnd::ExitError(_), .. } ); expect_state!( job, CommandState::Finished { status: ProcessEnd::Success, .. 
	);

	task.abort();
}

#[tokio::test]
async fn try_graceful_restart_beyond_grace() {
	let (job, task) = start_job(ungraceful_command());
	expect_state!(job, CommandState::Pending);

	job.try_restart_with_signal(
		watchexec_signals::Signal::User1,
		Duration::from_millis(GRACE),
	)
	.await;
	expect_state!(
		job,
		CommandState::Pending,
		"command still not running after try-graceful-restart"
	);

	job.start().await;
	expect_state!(job, CommandState::Running { .. });

	set_running_child_status(
		&job,
		ProcessEnd::ExitError(NonZeroI64::new(1).unwrap()).into_exitstatus(),
	)
	.await;

	let restart = job.try_restart_with_signal(
		watchexec_signals::Signal::User1,
		Duration::from_millis(GRACE),
	);
	expect_state!(job, CommandState::Running { .. });

	restart.await;
	expect_state!(
		previous: job,
		CommandState::Finished {
			status: ProcessEnd::ExitError(_),
			..
		}
	);
	expect_state!(job, CommandState::Running { .. });

	set_running_child_status(&job, ProcessEnd::Success.into_exitstatus()).await;
	job.stop().await;
	expect_state!(
		job,
		CommandState::Finished {
			status: ProcessEnd::Success,
			..
		}
	);

	task.abort();
}


================================================
FILE: crates/supervisor/src/job/testchild.rs
================================================
use std::{
	future::Future,
	io::Result,
	path::Path,
	pin::Pin,
	process::{ExitStatus, Output},
	sync::Arc,
	time::{Duration, Instant},
};

use tokio::{sync::Mutex, time::sleep};
use watchexec_events::ProcessEnd;

use crate::command::{Command, Program};

/// Mock version of [`TokioChildWrapper`](process_wrap::tokio::TokioChildWrapper).
#[derive(Debug, Clone)]
pub struct TestChild {
	pub grouped: bool,
	pub command: Arc<Command>,
	pub calls: Arc<boxcar::Vec<TestChildCall>>,
	pub output: Arc<Mutex<Option<Output>>>,
	pub spawned: Instant,
}

impl TestChild {
	pub fn new(command: Arc<Command>) -> std::io::Result<Self> {
		if let Program::Exec {
			prog,
			..
		} = &command.program
		{
			if prog == Path::new("/does/not/exist") {
				return Err(std::io::Error::new(
					std::io::ErrorKind::NotFound,
					"file not found",
				));
			}
		}

		Ok(Self {
			grouped: command.options.grouped || command.options.session,
			command,
			calls: Arc::new(boxcar::Vec::new()),
			output: Arc::new(Mutex::new(None)),
			spawned: Instant::now(),
		})
	}
}

#[derive(Debug)]
pub enum TestChildCall {
	Id,
	Kill,
	StartKill,
	TryWait,
	Wait,
	#[cfg(unix)]
	Signal(i32),
}

// Exact same signatures as ErasedChild
impl TestChild {
	pub fn id(&mut self) -> Option<u32> {
		self.calls.push(TestChildCall::Id);
		None
	}

	pub fn kill(&mut self) -> Box<dyn Future<Output = Result<()>> + Send + '_> {
		self.calls.push(TestChildCall::Kill);
		Box::new(async { Ok(()) })
	}

	pub fn start_kill(&mut self) -> Result<()> {
		self.calls.push(TestChildCall::StartKill);
		Ok(())
	}

	pub fn try_wait(&mut self) -> Result<Option<ExitStatus>> {
		self.calls.push(TestChildCall::TryWait);

		if let Program::Exec { prog, args } = &self.command.program {
			if prog == Path::new("sleep") {
				if let Some(time) = args
					.first()
					.and_then(|arg| arg.parse().ok())
					.map(Duration::from_millis)
				{
					if self.spawned.elapsed() < time {
						return Ok(None);
					}
				}
			}
		}

		Ok(self
			.output
			.try_lock()
			.ok()
			.and_then(|o| o.as_ref().map(|o| o.status)))
	}

	pub fn wait(&mut self) -> Pin<Box<dyn Future<Output = Result<ExitStatus>> + Send + '_>> {
		self.calls.push(TestChildCall::Wait);
		Box::pin(async {
			if let Program::Exec { prog, args } = &self.command.program {
				if prog == Path::new("sleep") {
					if let Some(time) = args
						.first()
						.and_then(|arg| arg.parse().ok())
						.map(Duration::from_millis)
					{
						if self.spawned.elapsed() < time {
							sleep(time - self.spawned.elapsed()).await;

							if let Ok(guard) = self.output.try_lock() {
								if let Some(output) = guard.as_ref() {
									return Ok(output.status);
								}
							}

							return Ok(ProcessEnd::Success.into_exitstatus());
						}
					}
				}
			}

			loop {
				eprintln!("[{:?}] child: output lock", Instant::now());
				let output = self.output.lock().await;
				if let Some(output) = output.as_ref() {
					return Ok(output.status);
				}
				eprintln!("[{:?}] child: output unlock", Instant::now());
				sleep(Duration::from_secs(1)).await;
			}
		})
	}

	pub fn wait_with_output(self) -> Box<dyn Future<Output = Result<Output>> + Send> {
		Box::new(async move {
			loop {
				let mut output = self.output.lock().await;
				if let Some(output) = output.take() {
					return Ok(output);
				} else {
					sleep(Duration::from_secs(1)).await;
				}
			}
		})
	}

	#[cfg(unix)]
	pub fn signal(&self, sig: i32) -> Result<()> {
		self.calls.push(TestChildCall::Signal(sig));
		Ok(())
	}
}


================================================
FILE: crates/supervisor/src/job.rs
================================================
//! Job supervision.

#[doc(inline)]
pub use self::{
	job::Job,
	messages::{Control, Ticket},
	state::CommandState,
	task::{JobTaskContext, SpawnFn},
};

#[cfg(test)]
pub(crate) use self::{priority::Priority, testchild::TestChild};

#[cfg(all(unix, test))]
pub(crate) use self::testchild::TestChildCall;

#[doc(inline)]
pub use task::start_job;

#[allow(clippy::module_inception)]
mod job;
mod messages;
mod priority;
mod state;
mod task;

#[cfg(test)]
mod testchild;

#[cfg(test)]
mod test;


================================================
FILE: crates/supervisor/src/lib.rs
================================================
//! Watchexec's process supervisor.
//!
//! This crate implements the process supervisor for Watchexec. It is responsible for spawning and
//! managing processes, and for sending events to them.
//!
//! You may use this crate to implement your own process supervisor, but keep in mind its direction
//! will always primarily be driven by the needs of Watchexec itself.
//!
//! # Usage
//!
//! There is no struct or implementation of a single supervisor, as the particular needs of the
//! application will dictate how that is designed. Instead, this crate provides a [`Job`](job::Job)
//! construct, which is a handle to a single [`Command`](command::Command), and manages its
//! lifecycle. The `Job` API has been modeled after the `systemctl` set of commands for service
//! control, with operations for starting, stopping, restarting, sending signals, waiting for the
//! process to complete, etc.
//!
//! There are also methods for running hooks within the job's runtime task, and for handling errors.
//!
//! # Theory of Operation
//!
//! A [`Job`](job::Job) is, properly speaking, a handle which lets one control a Tokio task. That
//! task is spawned on the Tokio runtime, and so runs in the background. A `Job` takes as input a
//! [`Command`](command::Command), which describes how to start a single process, through either a
//! shell command or a direct executable invocation, and whether the process should be grouped
//! (using [`process-wrap`](process_wrap)) or not.
//!
//! The job's task runs an event loop over two sources: the process's `wait()` (i.e. when the
//! process ends) and the job's control queue. The control queue is a hybrid MPSC queue, with three
//! priority levels and a timer. When the timer is active, the lowest ("Normal") priority queue is
//! disabled. This is an internal detail which serves to implement graceful stops and restarts.
//! The internals of the job's task are not available to the API user; actions and queries are
//! performed by sending messages on this control queue.
//!
//! The control queue is processed in priority order, and in order within each priority. Sending a
//! control to the task returns a [`Ticket`](job::Ticket), which is a future that resolves when the
//! control has been processed. Dropping the ticket will not cancel the control. This provides two
//! complementary ways to orchestrate actions: queueing controls in the desired order if there is
//! no need for branching flow or for signaling, and sending controls or performing other actions
//! after awaiting tickets.
//!
//! Do note that both of these can be used together. There is no need for the below pattern:
//!
//! ```no_run
//! # #[tokio::main(flavor = "current_thread")] async fn main() { // single-threaded for doctest only
//! # use std::sync::Arc;
//! # use watchexec_supervisor::Signal;
//! # use watchexec_supervisor::command::{Command, Program};
//! # use watchexec_supervisor::job::{CommandState, start_job};
//! #
//! # let (job, task) = start_job(Arc::new(Command { program: Program::Exec { prog: "/bin/date".into(), args: Vec::new() }.into(), options: Default::default() }));
//! #
//! job.start().await;
//! job.signal(Signal::User1).await;
//! job.stop().await;
//! # task.abort();
//! # }
//! ```
//!
//! Because of ordering, it behaves the same as this:
//!
//! ```no_run
//! # #[tokio::main(flavor = "current_thread")] async fn main() { // single-threaded for doctest only
//! # use std::sync::Arc;
//! # use watchexec_supervisor::Signal;
//! # use watchexec_supervisor::command::{Command, Program};
//! # use watchexec_supervisor::job::{CommandState, start_job};
//! #
//! # let (job, task) = start_job(Arc::new(Command { program: Program::Exec { prog: "/bin/date".into(), args: Vec::new() }.into(), options: Default::default() }));
//! #
//! job.start();
//! job.signal(Signal::User1);
//! job.stop().await; // here, all of start(), signal(), and stop() will have run in order
//! # task.abort();
//! # }
//! ```
//!
//! However, this is a different program:
//!
//! ```no_run
//! # #[tokio::main(flavor = "current_thread")] async fn main() { // single-threaded for doctest only
//! # use std::sync::Arc;
//! # use std::time::Duration;
//! # use tokio::time::sleep;
//! # use watchexec_supervisor::Signal;
//! # use watchexec_supervisor::command::{Command, Program};
//! # use watchexec_supervisor::job::{CommandState, start_job};
//! #
//! # let (job, task) = start_job(Arc::new(Command { program: Program::Exec { prog: "/bin/date".into(), args: Vec::new() }.into(), options: Default::default() }));
//! #
//! job.start().await;
//! println!("program started!");
//! sleep(Duration::from_secs(5)).await; // wait until program is fully started
//!
//! job.signal(Signal::User1).await;
//! sleep(Duration::from_millis(150)).await; // wait until program has dumped stats
//! println!("program stats dumped via USR1 signal!");
//!
//! job.stop().await;
//! println!("program stopped");
//! #
//! # task.abort();
//! # }
//! ```
//!
//! # Example
//!
//! ```no_run
//! # #[tokio::main(flavor = "current_thread")] async fn main() { // single-threaded for doctest only
//! # use std::sync::Arc;
//! use watchexec_supervisor::Signal;
//! use watchexec_supervisor::command::{Command, Program};
//! use watchexec_supervisor::job::{CommandState, start_job};
//!
//! let (job, task) = start_job(Arc::new(Command {
//!     program: Program::Exec {
//!         prog: "/bin/date".into(),
//!         args: Vec::new(),
//!     }.into(),
//!     options: Default::default(),
//! }));
//!
//! job.start().await;
//! job.signal(Signal::User1).await;
//! job.stop().await;
//!
//! job.delete_now().await;
//!
//! task.await; // make sure the task is fully cleaned up
//! # }
//! ```

#![doc(html_favicon_url = "https://watchexec.github.io/logo:watchexec.svg")]
#![doc(html_logo_url = "https://watchexec.github.io/logo:watchexec.svg")]
#![warn(clippy::unwrap_used, missing_docs, rustdoc::unescaped_backticks)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![deny(rust_2018_idioms)]

#[doc(no_inline)]
pub use watchexec_events::ProcessEnd;
#[doc(no_inline)]
pub use watchexec_signals::Signal;

pub mod command;
pub mod errors;
pub mod job;

mod flag;


================================================
FILE: crates/supervisor/tests/programs.rs
================================================
use watchexec_supervisor::command::{Command, Program, Shell};

#[tokio::test]
#[cfg(unix)]
async fn unix_shell_none() -> Result<(), std::io::Error> {
	assert!(Command {
		program: Program::Exec {
			prog: "echo".into(),
			args: vec!["hi".into()],
		},
		options: Default::default()
	}
	.to_spawnable()
	.spawn()?
	.wait()
	.await?
.success()); Ok(()) } #[tokio::test] #[cfg(unix)] async fn unix_shell_sh() -> Result<(), std::io::Error> { assert!(Command { program: Program::Shell { shell: Shell::new("sh"), command: "echo hi".into(), args: Vec::new(), }, options: Default::default() } .to_spawnable() .spawn()? .wait() .await? .success()); Ok(()) } #[tokio::test] #[cfg(unix)] async fn unix_shell_alternate() -> Result<(), std::io::Error> { assert!(Command { program: Program::Shell { shell: Shell::new("bash"), command: "echo".into(), args: vec!["--".into(), "hi".into()], }, options: Default::default() } .to_spawnable() .spawn()? .wait() .await? .success()); Ok(()) } #[tokio::test] #[cfg(unix)] async fn unix_shell_alternate_shopts() -> Result<(), std::io::Error> { assert!(Command { program: Program::Shell { shell: Shell { options: vec!["-o".into(), "errexit".into()], ..Shell::new("bash") }, command: "echo hi".into(), args: Vec::new(), }, options: Default::default() } .to_spawnable() .spawn()? .wait() .await? .success()); Ok(()) } #[tokio::test] #[cfg(windows)] async fn windows_shell_none() -> Result<(), std::io::Error> { assert!(Command { program: Program::Exec { prog: "echo".into(), args: vec!["hi".into()], }, options: Default::default() } .to_spawnable() .spawn()? .wait() .await? .success()); Ok(()) } #[tokio::test] #[cfg(windows)] async fn windows_shell_cmd() -> Result<(), std::io::Error> { assert!(Command { program: Program::Shell { shell: Shell::cmd(), args: Vec::new(), command: r#""echo" hi"#.into() }, options: Default::default() } .to_spawnable() .spawn()? .wait() .await? .success()); Ok(()) } #[tokio::test] #[cfg(windows)] async fn windows_shell_powershell() -> Result<(), std::io::Error> { assert!(Command { program: Program::Shell { shell: Shell::new("pwsh.exe"), args: Vec::new(), command: "echo hi".into() }, options: Default::default() } .to_spawnable() .spawn()? .wait() .await? 
.success()); Ok(()) } ================================================ FILE: crates/test-socketfd/Cargo.toml ================================================ [package] name = "test-socketfd" version = "0.0.0" publish = false authors = ["Félix Saparelli "] license = "Apache-2.0 OR MIT" description = "Test program for --socket" edition = "2021" [dependencies] listenfd = "1.0.2" [lints.clippy] nursery = "warn" pedantic = "warn" ================================================ FILE: crates/test-socketfd/README.md ================================================ This is a testing tool for the `--socket` option, which can also be used by third-parties to check compatibility. ## Install ```console cargo install --git https://github.com/watchexec/watchexec test-socketfd ``` ## Usage Print the control env variables and the number of available sockets: ``` test-socketfd ``` Validate that one TCP socket and one UDP socket are available, in this order: ``` test-socketfd tcp udp ``` The tool also supports `unix-stream`, `unix-datagram`, and `unix-raw` on unix, even if watchexec itself doesn't. 
These correspond to the `ListenFd` methods here: https://docs.rs/listenfd/latest/listenfd/struct.ListenFd.html ================================================ FILE: crates/test-socketfd/src/main.rs ================================================ use std::{ env::{args, var}, io::ErrorKind, }; use listenfd::ListenFd; fn main() { eprintln!("LISTEN_FDS={:?}", var("LISTEN_FDS")); eprintln!("LISTEN_FDS_FIRST_FD={:?}", var("LISTEN_FDS_FIRST_FD")); eprintln!("LISTEN_PID={:?}", var("LISTEN_PID")); eprintln!("SYSTEMFD_SOCKET_SERVER={:?}", var("SYSTEMFD_SOCKET_SERVER")); eprintln!("SYSTEMFD_SOCKET_SECRET={:?}", var("SYSTEMFD_SOCKET_SECRET")); let mut listenfd = ListenFd::from_env(); println!("\n{} sockets available\n", listenfd.len()); for (n, arg) in args().skip(1).enumerate() { match arg.as_str() { "tcp" => { if let Ok(addr) = listenfd .take_tcp_listener(n) .and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into())) .expect(&format!("expected TCP listener at FD#{n}")) .local_addr() { println!("obtained TCP listener at FD#{n}, at addr {addr:?}"); } else { println!("obtained TCP listener at FD#{n}, unknown addr"); } } "udp" => { if let Ok(addr) = listenfd .take_udp_socket(n) .and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into())) .expect(&format!("expected UDP socket at FD#{n}")) .local_addr() { println!("obtained UDP socket at FD#{n}, at addr {addr:?}"); } else { println!("obtained UDP socket at FD#{n}, unknown addr"); } } #[cfg(unix)] "unix-stream" => { if let Ok(addr) = listenfd .take_unix_listener(n) .and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into())) .expect(&format!("expected Unix stream listener at FD#{n}")) .local_addr() { println!("obtained Unix stream listener at FD#{n}, at addr {addr:?}"); } else { println!("obtained Unix stream listener at FD#{n}, unknown addr"); } } #[cfg(unix)] "unix-datagram" => { if let Ok(addr) = listenfd .take_unix_datagram(n) .and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into())) .expect(&format!("expected Unix datagram 
socket at FD#{n}")) .local_addr() { println!("obtained Unix datagram socket at FD#{n}, at addr {addr:?}"); } else { println!("obtained Unix datagram socket at FD#{n}, unknown addr"); } } #[cfg(unix)] "unix-raw" => { let raw = listenfd .take_raw_fd(n) .and_then(|l| l.ok_or_else(|| ErrorKind::NotFound.into())) .expect(&format!("expected Unix raw socket at FD#{n}")); println!("obtained Unix raw socket at FD#{n}: {raw}"); } other => { if cfg!(unix) { panic!("expected one of (tcp, udp, unix-stream, unix-datagram, unix-raw), found {other}") } else { panic!("expected one of (tcp, udp), found {other}") } } } } } ================================================ FILE: doc/packages.md ================================================ # Known packages of Watchexec Note that only first-party packages are maintained here. Anyone is welcome to create and maintain a packaging of Watchexec for their platform/distribution and submit it to their upstreams, and anyone may submit a PR to update this list. To report issues with non-first-party packages (outside of bugs that belong to Watchexec), contact the relevant packager. 
| Platform | Distributor | Package name | Status | Install command |
|:-|:-|:-:|:-:|-:|
| Linux | _n/a_ (deb) | [`watchexec-{version}-{platform}.deb`](https://github.com/watchexec/watchexec/releases) | first-party | `dpkg -i watchexec-*.deb` |
| Linux | _n/a_ (rpm) | [`watchexec-{version}-{platform}.rpm`](https://github.com/watchexec/watchexec/releases) | first-party | `dnf install watchexec-*.rpm` |
| Linux | _n/a_ (tarball) | [`watchexec-{version}-{platform}.tar.xz`](https://github.com/watchexec/watchexec/releases) | first-party | `tar xf watchexec-*.tar.xz` |
| Linux | Alpine | [`watchexec`](https://pkgs.alpinelinux.org/packages?name=watchexec) | official | `apk add watchexec` |
| Linux | ALT Sisyphus | [`watchexec`](https://packages.altlinux.org/en/sisyphus/srpms/watchexec/) | official | `apt-get install watchexec` |
| Linux | ~~[APT repo](https://apt.cli.rs) (Debian & Ubuntu)~~ | [`watchexec-cli`](https://apt.cli.rs) | defunct | |
| Linux | Arch | [`watchexec`](https://archlinux.org/packages/extra/x86_64/watchexec/) | official | `pacman -S watchexec` |
| Linux | Gentoo GURU | [`watchexec`](https://gpo.zugaina.org/Overlays/guru/app-misc/watchexec) | community | `emerge -av watchexec` |
| Linux | GNU Guix | [`watchexec`](https://packages.guix.gnu.org/packages/watchexec/) | outdated | `guix install watchexec` |
| Linux | LiGurOS | [`watchexec`](https://gitlab.com/liguros/liguros-repo/-/tree/stable/app-misc/watchexec) | official | `emerge -av watchexec` |
| Linux | Manjaro | [`watchexec`](https://software.manjaro.org/package/watchexec) | official | `pamac install watchexec` |
| Linux | Nix | [`watchexec`](https://search.nixos.org/packages?query=watchexec) | official | `nix-shell -p watchexec` |
| Linux | openSUSE | [`watchexec`](https://software.opensuse.org/package/watchexec) | official | `zypper install watchexec` |
| Linux | pacstall (Ubuntu) | [`watchexec-cli`](https://pacstall.dev/packages/watchexec-bin) | community | `pacstall -I watchexec-bin` |
| Linux | Parabola | [`watchexec`](https://www.parabola.nu/packages/?q=watchexec) | official | `pacman -S watchexec` |
| Linux | Solus | [`watchexec`](https://github.com/getsolus/packages/blob/main/packages/w/watchexec/package.yml) | official | `eopkg install watchexec` |
| Linux | Termux (Android) | [`watchexec`](https://github.com/termux/termux-packages/blob/master/packages/watchexec/build.sh) | official | `pkg install watchexec` |
| Linux | Void | [`watchexec`](https://github.com/void-linux/void-packages/tree/master/srcpkgs/watchexec) | official | `xbps-install watchexec` |
| MacOS | _n/a_ (tarball) | [`watchexec-{version}-{platform}.tar.xz`](https://github.com/watchexec/watchexec/releases) | first-party | `tar xf watchexec-*.tar.xz` |
| MacOS | Homebrew | [`watchexec`](https://formulae.brew.sh/formula/watchexec) | official | `brew install watchexec` |
| MacOS | MacPorts | [`watchexec`](https://ports.macports.org/port/watchexec/summary/) | official | `port install watchexec` |
| Windows | _n/a_ (zip) | [`watchexec-{version}-{platform}.zip`](https://github.com/watchexec/watchexec/releases) | first-party | `Expand-Archive -Path watchexec-*.zip` |
| Windows | Baulk | [`watchexec`](https://github.com/baulk/bucket/blob/master/bucket/watchexec.json) | official | `baulk install watchexec` |
| Windows | Chocolatey | [`watchexec`](https://community.chocolatey.org/packages/watchexec) | community | `choco install watchexec` |
| Windows | MSYS2 mingw | [`mingw-w64-watchexec`](https://github.com/msys2/MINGW-packages/blob/master/mingw-w64-watchexec) | official | `pacman -S mingw-w64-x86_64-watchexec` |
| Windows | Scoop | [`watchexec`](https://github.com/ScoopInstaller/Main/blob/master/bucket/watchexec.json) | official | `scoop install watchexec` |
| _Any_ | Crates.io | [`watchexec-cli`](https://crates.io/crates/watchexec-cli) | first-party | `cargo install --locked watchexec-cli` |
| _Any_ | Binstall | [`watchexec-cli`](https://crates.io/crates/watchexec-cli) | first-party | `cargo binstall watchexec-cli` |
| _Any_ | Webi | [`watchexec`](https://webinstall.dev/watchexec/) | third-party | varies (see webpage) |

Legend:

- first-party: packaged and distributed by the Watchexec developers (in this repo)
- official: packaged and distributed by the official package team for the listed distribution
- community: packaged by a community member or organisation, outside of the official distribution
- third-party: a redistribution of another package (e.g. using the first-party tarballs via a non-first-party installer)
- outdated: an official or community packaging that is severely outdated (not just a couple releases out)

================================================
FILE: doc/socket.md
================================================
# `--socket`, `systemd.socket`, `systemfd`

The `--socket` option is a lightweight version of [the `systemfd` tool][systemfd], which itself is an implementation of [systemd's socket activation feature][systemd sockets], which itself is a reimagination of earlier socket activation efforts, such as inetd and launchd. All three of these are compatible with each other in some ways. This document attempts to describe the commonalities and specify minimum behaviour that additional implementations should follow to keep compatibility. It does not seek to establish authority over any project.

[systemfd]: https://github.com/mitsuhiko/systemfd
[systemd sockets]: https://0pointer.de/blog/projects/socket-activation.html

## Basic principle of operation

There are two programs involved: a socket provider, and a socket consumer. In systemd, the provider is systemd itself, and the consumer is the main service process. In watchexec (and systemfd), the provider is watchexec itself, and the consumer is the command it runs.

The provider creates a socket, binds it to an address, and then makes it available to the consumer. There is an optional authentication layer to prevent the wrong process from attaching to the wrong socket.
The consumer that obtains a socket is then able to listen on it. When the consumer exits, it doesn't close the socket; the provider then makes it available to the next instance.

Socket activation is an advanced behaviour, where the provider listens on the socket itself and uses that to start the consumer service. As the provider controls the socket, more behaviours are possible, such as having the real address bound to a separate socket and passing data through, or providing new sockets instead of sharing a single one. The important principle is that the consumer should not need to care: socket control is decoupled from application message and stream handling.

## Unix

The Unix protocol was designed by systemd. Sockets are provided to consumers through file descriptors.

- The file descriptors are assigned in a contiguous block.
- The number of socket file descriptors is passed to the consumer using the environment variable `LISTEN_FDS`.
- The starting file descriptor is read from the environment variable `LISTEN_FDS_FIRST_FD`, or defaults to `3` if that variable is not present.
- If the `LISTEN_PID` environment variable is present, and the process ID of the consumer process doesn't match it, it must stop and not listen on any of the file descriptors.
- The consumer may choose to reject the sockets if the file descriptor count isn't what it expects.
- The consumer should strip the above environment variables from any child process it starts.

The consumer side in pseudo code:

```
let pid_check = env::get("LISTEN_PID");
if pid_check && pid_check != getpid() {
    return;
}

let expected_socket_count = 2;
let fd_count = env::get("LISTEN_FDS");
if !fd_count || fd_count != expected_socket_count {
    return;
}

let starting_fd = env::get("LISTEN_FDS_FIRST_FD");
if !starting_fd {
    starting_fd = 3;
}

for (let fd = starting_fd; fd < starting_fd + fd_count; fd += 1) {
    configure_socket(fd);
}
```

## Windows

The Windows protocol was designed by systemfd.
Sockets are provided to consumers through the [WSAPROTOCOL_INFOW] structure.

- The provider starts a TCP server bound to 127.0.0.1 on a random port.
- It writes the address of the server to the `SYSTEMFD_SOCKET_SERVER` environment variable for the consumer processes.
- The provider generates and stores a random 128-bit value as a key for a socket set.
- It writes the key in UUID hex string format (e.g. `59fb60fe-2634-4ec8-aa81-038793888c8e`) to the `SYSTEMFD_SOCKET_SECRET` environment variable for the consumer processes.
- The consumer opens a connection to the `SYSTEMFD_SOCKET_SERVER` and:
  1. reads the key from `SYSTEMFD_SOCKET_SECRET`;
  2. writes the key in the same format, then a `|` character, then its own process ID as a string (in base 10), and then EOF;
  3. reads the response to EOF.
- The response will be one or more `WSAPROTOCOL_INFOW` structures, with no padding or separators.
- If the provider has no record of the key (i.e. if it doesn't match the one provided to the consumer via `SYSTEMFD_SOCKET_SECRET`), it will close the connection without sending any data.
- Optionally, the provider can check that the consumer's PID is what it expects, and reject the connection if it isn't (by closing it without sending any data).
The consumer side in pseudo code: ``` let server = env::get("SYSTEMFD_SOCKET_SERVER"); let key = env::get("SYSTEMFD_SOCKET_SECRET"); if !server || !key { return; } if !valid_uuid(key) { return; } let (writer, reader) = TcpClient::connect(server); writer.write(key); writer.write("|"); writer.write(getpid().to_string()); writer.close(); while reader.has_more_data() { let socket = reader.read(size_of(WSAPROTOCOL_INFOW)) as WSAPROTOCOL_INFOW; configure_socket(socket); } ``` [WSAPROTOCOL_INFOW]: https://learn.microsoft.com/en-us/windows/win32/api/winsock2/ns-winsock2-wsaprotocol_infow ================================================ FILE: doc/watchexec.1 ================================================ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .TH watchexec 1 "watchexec 2.5.1" .SH NAME watchexec \- Execute commands when watched files change .SH SYNOPSIS \fBwatchexec\fR [\fB\-\-bell\fR] [\fB\-c\fR|\fB\-\-clear\fR] [\fB\-\-completions\fR] [\fB\-\-color\fR] [\fB\-d\fR|\fB\-\-debounce\fR] [\fB\-\-delay\-run\fR] [\fB\-e\fR|\fB\-\-exts\fR] [\fB\-E\fR|\fB\-\-env\fR] [\fB\-\-emit\-events\-to\fR] [\fB\-f\fR|\fB\-\-filter\fR] [\fB\-\-socket\fR] [\fB\-\-filter\-file\fR] [\fB\-j\fR|\fB\-\-filter\-prog\fR] [\fB\-\-fs\-events\fR] [\fB\-i\fR|\fB\-\-ignore\fR] [\fB\-I\fR|\fB\-\-interactive\fR] [\fB\-\-exit\-on\-error\fR] [\fB\-\-ignore\-file\fR] [\fB\-\-ignore\-nothing\fR] [\fB\-\-log\-file\fR] [\fB\-\-manual\fR] [\fB\-\-map\-signal\fR] [\fB\-n \fR] [\fB\-N\fR|\fB\-\-notify\fR] [\fB\-\-no\-default\-ignore\fR] [\fB\-\-no\-discover\-ignore\fR] [\fB\-\-no\-process\-group\fR] [\fB\-\-no\-global\-ignore\fR] [\fB\-\-no\-meta\fR] [\fB\-\-no\-project\-ignore\fR] [\fB\-\-no\-vcs\-ignore\fR] [\fB\-o\fR|\fB\-\-on\-busy\-update\fR] [\fB\-\-only\-emit\-events\fR] [\fB\-\-poll\fR] [\fB\-\-print\-events\fR] [\fB\-\-project\-origin\fR] [\fB\-p\fR|\fB\-\-postpone\fR] [\fB\-q\fR|\fB\-\-quiet\fR] [\fB\-r\fR|\fB\-\-restart\fR] [\fB\-s\fR|\fB\-\-signal\fR] [\fB\-\-shell\fR] [\fB\-\-stdin\-quit\fR] 
[\fB\-\-stop\-signal\fR] [\fB\-\-stop\-timeout\fR] [\fB\-\-timeout\fR] [\fB\-\-timings\fR] [\fB\-v\fR|\fB\-\-verbose\fR]... [\fB\-w\fR|\fB\-\-watch\fR] [\fB\-\-workdir\fR] [\fB\-W\fR|\fB\-\-watch\-non\-recursive\fR] [\fB\-\-wrap\-process\fR] [\fB\-F\fR|\fB\-\-watch\-file\fR] [\fB\-h\fR|\fB\-\-help\fR] [\fB\-V\fR|\fB\-\-version\fR] [\fICOMMAND\fR] .SH DESCRIPTION Execute commands when watched files change. .PP Recursively monitors the current directory for changes, executing the command when a filesystem change is detected (among other event sources). By default, watchexec uses efficient kernel\-level mechanisms to watch for changes. .PP At startup, the specified command is run once, and watchexec begins monitoring for changes. .PP Events are debounced and checked using a variety of mechanisms, which you can control using the flags in the **Filtering** section. The order of execution is: internal prioritisation (signals come before everything else, and SIGINT/SIGTERM are processed even more urgently), then file event kind (`\-\-fs\-events`), then files explicitly watched with `\-w`, then ignores (`\-\-ignore` and co), then filters (which includes `\-\-exts`), then filter programs. .PP Examples: .PP Rebuild a project when source files change: .PP $ watchexec make .PP Watch all HTML, CSS, and JavaScript files for changes: .PP $ watchexec \-e html,css,js make .PP Run tests when source files change, clearing the screen each time: .PP $ watchexec \-c make test .PP Launch and restart a node.js server: .PP $ watchexec \-r node app.js .PP Watch lib and src directories for changes, rebuilding each time: .PP $ watchexec \-w lib \-w src make .SH OPTIONS .TP \fB\-\-completions\fR \fI\fR Generate a shell completions script Provides a completions script or configuration for the given shell. If Watchexec is not distributed with pre\-generated completions, you can use this to generate them yourself. Supported shells: bash, elvish, fish, nu, powershell, zsh. 
.TP
\fB\-\-manual\fR
Show the manual page
This shows the manual page for Watchexec, if the output is a terminal and the \*(Aqman\*(Aq program is available. If not, the manual page is printed to stdout in ROFF format (suitable for writing to a watchexec.1 file).
.TP
\fB\-\-only\-emit\-events\fR
Only emit events to stdout, run no commands.
This is a convenience option for using Watchexec as a file watcher, without running any commands. It is almost equivalent to using `cat` as the command, except that it will not spawn a new process for each event.
This option implies `\-\-emit\-events\-to=json\-stdio`; you may also use the text mode by specifying `\-\-emit\-events\-to=stdio`.
.TP
\fB\-h\fR, \fB\-\-help\fR
Print help (see a summary with \*(Aq\-h\*(Aq)
.TP
\fB\-V\fR, \fB\-\-version\fR
Print version
.TP
[\fICOMMAND\fR]
Command (program and arguments) to run on changes
It\*(Aqs run when events pass filters and the debounce period (and once at startup unless \*(Aq\-\-postpone\*(Aq is given). If you pass flags to the command, you should separate it with \-\- though that is not strictly required.
Examples:
$ watchexec \-w src npm run build
$ watchexec \-w src \-\- rsync \-a src dest
Take care when using globs or other shell expansions in the command. Your shell may expand them before ever passing them to Watchexec, and the results may not be what you expect. Compare:
$ watchexec echo src/*.rs
$ watchexec echo \*(Aqsrc/*.rs\*(Aq
$ watchexec \-\-shell=none echo \*(Aqsrc/*.rs\*(Aq
Behaviour depends on the value of \*(Aq\-\-shell\*(Aq: for all except \*(Aqnone\*(Aq, every part of the command is joined together into one string with a single ascii space character, and given to the shell as described in the help for \*(Aq\-\-shell\*(Aq. For \*(Aqnone\*(Aq, each distinct element of the command is passed as per the execvp(3) convention: first argument is the program, as a path or searched for in the \*(AqPATH\*(Aq environment variable, rest are arguments.
.SH COMMAND .TP \fB\-\-delay\-run\fR \fI\fR Sleep before running the command This option will cause Watchexec to sleep for the specified amount of time before running the command, after an event is detected. This is like using "sleep 5 && command" in a shell, but portable and slightly more efficient. Takes a unit\-less value in seconds, or a time span value such as "2min 5s". Providing a unit\-less value is deprecated and will warn; it will be an error in the future. .TP \fB\-E\fR, \fB\-\-env\fR \fI\fR Add env vars to the command This is a convenience option for setting environment variables for the command, without setting them for the Watchexec process itself. Use key=value syntax. Multiple variables can be set by repeating the option. .TP \fB\-\-socket\fR \fI\fR Provide a socket to the command This implements the systemd socket\-passing protocol, like with `systemfd`: sockets are opened from the watchexec process, and then passed to the commands it runs. This lets you keep sockets open and avoid address reuse issues or dropping packets. This option can be supplied multiple times, to open multiple sockets. The value can be either of `PORT` (opens a TCP listening socket at that port), `HOST:PORT` (specify a host IP address; IPv6 addresses can be specified `[bracketed]`), `TYPE::PORT` or `TYPE::HOST:PORT` (specify a socket type, `tcp` / `udp`). This integration only provides basic support, if you want more control you should use the `systemfd` tool from , upon which this is based. The syntax here and the spawning behaviour is identical to `systemfd`, and both watchexec and systemfd are compatible implementations of the systemd socket\-activation protocol. Watchexec does _not_ set the `LISTEN_PID` variable on unix, which means any child process of your command could accidentally bind to the sockets, unless the `LISTEN_*` variables are removed from the environment. 
.TP \fB\-n\fR Shorthand for \*(Aq\-\-shell=none\*(Aq .TP \fB\-\-no\-process\-group\fR Don\*(Aqt use a process group By default, Watchexec will run the command in a process group, so that signals and terminations are sent to all processes in the group. Sometimes that\*(Aqs not what you want, and you can disable the behaviour with this option. Deprecated, use \*(Aq\-\-wrap\-process=none\*(Aq instead. .TP \fB\-\-shell\fR \fI\fR Use a different shell By default, Watchexec will use \*(Aq$SHELL\*(Aq if it\*(Aqs defined or a default of \*(Aqsh\*(Aq on Unix\-likes, and either \*(Aqpwsh\*(Aq, \*(Aqpowershell\*(Aq, or \*(Aqcmd\*(Aq (CMD.EXE) on Windows, depending on what Watchexec detects is the running shell. With this option, you can override that and use a different shell, for example one with more features or one which has your custom aliases and functions. If the value has spaces, it is parsed as a command line, and the first word used as the shell program, with the rest as arguments to the shell. The command is run with the \*(Aq\-c\*(Aq flag (except for \*(Aqcmd\*(Aq on Windows, where it\*(Aqs \*(Aq/C\*(Aq). The special value \*(Aqnone\*(Aq can be used to disable shell use entirely. In that case, the command provided to Watchexec will be parsed, with the first word being the executable and the rest being the arguments, and executed directly. Note that this parsing is rudimentary, and may not work as expected in all cases. Using \*(Aqnone\*(Aq is a little more efficient and can enable a stricter interpretation of the input, but it also means that you can\*(Aqt use shell features like globbing, redirection, control flow, logic, or pipes. 
Examples:
Use without shell:
$ watchexec \-n \-\- zsh \-x \-o shwordsplit scr
Use with powershell core:
$ watchexec \-\-shell=pwsh \-\- Test\-Connection localhost
Use with CMD.exe:
$ watchexec \-\-shell=cmd \-\- dir
Use with a different unix shell:
$ watchexec \-\-shell=bash \-\- \*(Aqecho $BASH_VERSION\*(Aq
Use with a unix shell and options:
$ watchexec \-\-shell=\*(Aqzsh \-x \-o shwordsplit\*(Aq \-\- scr
.TP
\fB\-\-stop\-signal\fR \fI\fR
Signal to send to stop the command
This is used by \*(Aqrestart\*(Aq and \*(Aqsignal\*(Aq modes of \*(Aq\-\-on\-busy\-update\*(Aq (unless \*(Aq\-\-signal\*(Aq is provided). The restart behaviour is to send the signal, wait for the command to exit, and if it hasn\*(Aqt exited after some time (see \*(Aq\-\-stop\-timeout\*(Aq), forcefully terminate it.
The default on unix is "SIGTERM". Input is parsed as a full signal name (like "SIGTERM"), a short signal name (like "TERM"), or a signal number (like "15"). All input is case\-insensitive.
On Windows this option is technically supported but only supports the "KILL" event, as Watchexec cannot yet deliver other events. Windows doesn\*(Aqt have signals as such; instead it has termination (here called "KILL" or "STOP") and "CTRL+C", "CTRL+BREAK", and "CTRL+CLOSE" events. For portability the unix signals "SIGKILL", "SIGINT", "SIGTERM", and "SIGHUP" are respectively mapped to these.
.TP
\fB\-\-stop\-timeout\fR \fI\fR
Time to wait for the command to exit gracefully
This is used by the \*(Aqrestart\*(Aq mode of \*(Aq\-\-on\-busy\-update\*(Aq. After the graceful stop signal is sent, Watchexec will wait for the command to exit. If it hasn\*(Aqt exited after this time, it is forcefully terminated.
Takes a unit\-less value in seconds, or a time span value such as "5min 20s". Providing a unit\-less value is deprecated and will warn; it will be an error in the future.
The default is 10 seconds. Set to 0 to immediately force\-kill the command.
This has no practical effect on Windows as the command is always forcefully terminated; see \*(Aq\-\-stop\-signal\*(Aq for why. .TP \fB\-\-timeout\fR \fI\fR Kill the command if it runs longer than this duration Takes a time span value such as "30s", "5min", or "1h 30m". When the timeout is reached, the command is gracefully stopped using \-\-stop\-signal, then forcefully terminated after \-\-stop\-timeout if still running. Each run of the command has its own independent timeout. .TP \fB\-\-workdir\fR \fI\fR Set the working directory By default, the working directory of the command is the working directory of Watchexec. You can change that with this option. Note that paths may be less intuitive to use with this. .TP \fB\-\-wrap\-process\fR \fI\fR [default: group] Configure how the process is wrapped By default, Watchexec will run the command in a session on Mac, in a process group in Unix, and in a Job Object in Windows. Some Unix programs prefer running in a session, while others do not work in a process group. Use \*(Aqgroup\*(Aq to use a process group, \*(Aqsession\*(Aq to use a process session, and \*(Aqnone\*(Aq to run the command directly. On Windows, either of \*(Aqgroup\*(Aq or \*(Aqsession\*(Aq will use a Job Object. If you find you need to specify this frequently for different kinds of programs, file an issue at . As errors of this nature are hard to debug and can be highly environment\-dependent, reports from *multiple affected people* are more likely to be actioned promptly. Ask your friends/colleagues! .SH EVENTS .TP \fB\-d\fR, \fB\-\-debounce\fR \fI\fR Time to wait for new events before taking action When an event is received, Watchexec will wait for up to this amount of time before handling it (such as running the command). This is essential as what you might perceive as a single change may actually emit many events, and without this behaviour, Watchexec would run much too often. 
Additionally, it\*(Aqs not infrequent that file writes are not atomic, and each write may emit an event, so this is a good way to avoid running a command while a file is partially written. An alternative use is to set a high value (like "30min" or longer), to save power or bandwidth on intensive tasks, like an ad\-hoc backup script. In those use cases, note that every accumulated event will build up in memory. Takes a unit\-less value in milliseconds, or a time span value such as "5sec 20ms". Providing a unit\-less value is deprecated and will warn; it will be an error in the future. The default is 50 milliseconds. Setting to 0 is highly discouraged. .TP \fB\-\-emit\-events\-to\fR \fI\fR Configure event emission Watchexec can emit event information when running a command, which can be used by the child process to target specific changed files. One thing to take care with is assuming inherent behaviour where there is only chance. Notably, it could appear as if the `RENAMED` variable contains both the original and the new path being renamed. In previous versions, it would even appear on some platforms as if the original always came before the new. However, none of this was true. It\*(Aqs impossible to reliably and portably know which changed path is the old or new, "half" renames may appear (only the original, only the new), "unknown" renames may appear (change was a rename, but whether it was the old or new isn\*(Aqt known), rename events might split across two debouncing boundaries, and so on. This option controls where that information is emitted. It defaults to \*(Aqnone\*(Aq, which doesn\*(Aqt emit event information at all. The other options are \*(Aqenvironment\*(Aq (deprecated), \*(Aqstdio\*(Aq, \*(Aqfile\*(Aq, \*(Aqjson\-stdio\*(Aq, and \*(Aqjson\-file\*(Aq. 
The \*(Aqstdio\*(Aq and \*(Aqfile\*(Aq modes are text\-based: \*(Aqstdio\*(Aq writes absolute paths to the stdin of the command, one per line, each prefixed with `create:`, `remove:`, `rename:`, `modify:`, or `other:`, then closes the handle; \*(Aqfile\*(Aq writes the same thing to a temporary file, and its path is given with the $WATCHEXEC_EVENTS_FILE environment variable. There are also two JSON modes, which are based on JSON objects and can represent the full set of events Watchexec handles. Here\*(Aqs an example of a folder being created on Linux: ```json { "tags": [ { "kind": "path", "absolute": "/home/user/your/new\-folder", "filetype": "dir" }, { "kind": "fs", "simple": "create", "full": "Create(Folder)" }, { "kind": "source", "source": "filesystem", } ], "metadata": { "notify\-backend": "inotify" } } ``` The fields are as follows: \- `tags`, structured event data. \- `tags[].kind`, which can be: * \*(Aqpath\*(Aq, along with: + `absolute`, an absolute path. + `filetype`, a file type if known (\*(Aqdir\*(Aq, \*(Aqfile\*(Aq, \*(Aqsymlink\*(Aq, \*(Aqother\*(Aq). * \*(Aqfs\*(Aq: + `simple`, the "simple" event type (\*(Aqaccess\*(Aq, \*(Aqcreate\*(Aq, \*(Aqmodify\*(Aq, \*(Aqremove\*(Aq, or \*(Aqother\*(Aq). + `full`, the "full" event type, which is too complex to fully describe here, but looks like \*(AqGeneral(Precise(Specific))\*(Aq. * \*(Aqsource\*(Aq, along with: + `source`, the source of the event (\*(Aqfilesystem\*(Aq, \*(Aqkeyboard\*(Aq, \*(Aqmouse\*(Aq, \*(Aqos\*(Aq, \*(Aqtime\*(Aq, \*(Aqinternal\*(Aq). * \*(Aqkeyboard\*(Aq, along with: + `keycode`. Currently only the value \*(Aqeof\*(Aq is supported. * \*(Aqprocess\*(Aq, for events caused by processes: + `pid`, the process ID. * \*(Aqsignal\*(Aq, for signals sent to Watchexec: + `signal`, the normalised signal name (\*(Aqhangup\*(Aq, \*(Aqinterrupt\*(Aq, \*(Aqquit\*(Aq, \*(Aqterminate\*(Aq, \*(Aquser1\*(Aq, \*(Aquser2\*(Aq). 
* \*(Aqcompletion\*(Aq, for when a command ends: + `disposition`, the exit disposition (\*(Aqsuccess\*(Aq, \*(Aqerror\*(Aq, \*(Aqsignal\*(Aq, \*(Aqstop\*(Aq, \*(Aqexception\*(Aq, \*(Aqcontinued\*(Aq). + `code`, the exit, signal, stop, or exception code. \- `metadata`, additional information about the event. The \*(Aqjson\-stdio\*(Aq mode will emit JSON events to the standard input of the command, one per line, then close stdin. The \*(Aqjson\-file\*(Aq mode will create a temporary file, write the events to it, and provide the path to the file with the $WATCHEXEC_EVENTS_FILE environment variable. Finally, the \*(Aqenvironment\*(Aq mode was the default until 2.0. It sets environment variables with the paths of the affected files, for filesystem events: $WATCHEXEC_COMMON_PATH is set to the longest common path of all of the below variables, and so should be prepended to each path to obtain the full/real path. Then: \- $WATCHEXEC_CREATED_PATH is set when files/folders were created \- $WATCHEXEC_REMOVED_PATH is set when files/folders were removed \- $WATCHEXEC_RENAMED_PATH is set when files/folders were renamed \- $WATCHEXEC_WRITTEN_PATH is set when files/folders were modified \- $WATCHEXEC_META_CHANGED_PATH is set when files/folders\*(Aq metadata were modified \- $WATCHEXEC_OTHERWISE_CHANGED_PATH is set for every other kind of pathed event Multiple paths are separated by the system path separator, \*(Aq;\*(Aq on Windows and \*(Aq:\*(Aq on unix. Within each variable, paths are deduplicated and sorted in binary order (i.e. neither Unicode nor locale aware). This is the legacy mode, is deprecated, and will be removed in the future. The environment is a very restricted space, while also limited in what it can usefully represent. Large numbers of files will either cause the environment to be truncated, or may error or crash the process entirely. 
The $WATCHEXEC_COMMON_PATH is also unintuitive, as demonstrated by the multiple confused queries that have landed in my inbox over the years.
.TP
\fB\-I\fR, \fB\-\-interactive\fR
Respond to keypresses to quit, restart, or pause
In interactive mode, Watchexec listens for keypresses and responds to them. Currently supported keys are: \*(Aqr\*(Aq to restart the command, \*(Aqp\*(Aq to toggle pausing the watch, and \*(Aqq\*(Aq to quit.
This requires a terminal (TTY) and puts stdin into raw mode, so the child process will not receive stdin input.
.TP
\fB\-\-exit\-on\-error\fR
Exit when the command has an error
By default, Watchexec will continue to watch and re\-run the command after the command exits, regardless of its exit status. With this option, it will instead exit when the command completes with any non\-success exit status.
This is useful when running Watchexec in a process manager or container, where you want the container to restart when the command fails rather than hang waiting for file changes.
.TP
\fB\-\-map\-signal\fR \fI\fR
Translate signals from the OS to signals to send to the command
Takes a pair of signal names, separated by a colon, such as "TERM:INT" to map SIGTERM to SIGINT. The first signal is the one received by watchexec, and the second is the one sent to the command. The second can be omitted to discard the first signal, such as "TERM:" to not do anything on SIGTERM.
If SIGINT or SIGTERM are mapped, then they no longer quit Watchexec. Besides making it hard to quit Watchexec itself, this is useful to pass a Ctrl\-C to the command without also terminating Watchexec and the underlying program with it, e.g. with "INT:INT".
This option can be specified multiple times to map multiple signals. Signal syntax is case\-insensitive for short names (like "TERM", "USR2") and long names (like "SIGKILL", "SIGHUP"). Signal numbers are also supported (like "15", "31").
On Windows, the forms "STOP", "CTRL+C", and "CTRL+BREAK" are also supported to receive, but Watchexec cannot yet deliver other "signals" than a STOP. .TP \fB\-o\fR, \fB\-\-on\-busy\-update\fR \fI\fR What to do when receiving events while the command is running Default is to \*(Aqdo\-nothing\*(Aq, which ignores events while the command is running, so that changes that occur due to the command are ignored, like compilation outputs. You can also use \*(Aqqueue\*(Aq which will run the command once again when the current run has finished if any events occur while it\*(Aqs running, or \*(Aqrestart\*(Aq, which terminates the running command and starts a new one. Finally, there\*(Aqs \*(Aqsignal\*(Aq, which only sends a signal; this can be useful with programs that can reload their configuration without a full restart. The signal can be specified with the \*(Aq\-\-signal\*(Aq option. .TP \fB\-\-poll\fR [\fI\fR] Poll for filesystem changes By default, and where available, Watchexec uses the operating system\*(Aqs native file system watching capabilities. This option disables that and instead uses a polling mechanism, which is less efficient but can work around issues with some file systems (like network shares) or edge cases. Optionally takes a unit\-less value in milliseconds, or a time span value such as "2s 500ms", to use as the polling interval. If not specified, the default is 30 seconds. Providing a unit\-less value is deprecated and will warn; it will be an error in the future. Aliased as \*(Aq\-\-force\-poll\*(Aq. .TP \fB\-p\fR, \fB\-\-postpone\fR Wait until first change before running command By default, Watchexec will run the command once immediately. With this option, it will instead wait until an event is detected before running the command as normal. .TP \fB\-r\fR, \fB\-\-restart\fR Restart the process if it\*(Aqs still running This is a shorthand for \*(Aq\-\-on\-busy\-update=restart\*(Aq. 
.TP
\fB\-s\fR, \fB\-\-signal\fR \fI\fR
Send a signal to the process when it\*(Aqs still running
Specify a signal to send to the process when it\*(Aqs still running. This implies \*(Aq\-\-on\-busy\-update=signal\*(Aq; otherwise the signal used when that mode is \*(Aqrestart\*(Aq is controlled by \*(Aq\-\-stop\-signal\*(Aq.
See the long documentation for \*(Aq\-\-stop\-signal\*(Aq for syntax.
Signals are not supported on Windows at the moment, and will always be overridden to \*(Aqkill\*(Aq. See \*(Aq\-\-stop\-signal\*(Aq for more on Windows "signals".
.TP
\fB\-\-stdin\-quit\fR
Exit when stdin closes
This watches the stdin file descriptor for EOF, and exits Watchexec gracefully when it is closed.
This is used by some process managers to avoid leaving zombie processes around.
.SH FILTERING
.TP
\fB\-e\fR, \fB\-\-exts\fR \fI\fR
Filename extensions to filter to
This is a quick filter to only emit events for files with the given extensions. Extensions can be given with or without the leading dot (e.g. \*(Aqjs\*(Aq or \*(Aq.js\*(Aq).
Multiple extensions can be given by repeating the option or by separating them with commas.
.TP
\fB\-f\fR, \fB\-\-filter\fR \fI\fR
Filename patterns to filter to
Provide a glob\-like filter pattern, and only events for files matching the pattern will be emitted. Multiple patterns can be given by repeating the option.
Events that are not from files (e.g. signals, keyboard events) will pass through untouched.
.TP
\fB\-\-filter\-file\fR \fI\fR
Files to load filters from
Provide a path to a file containing filters, one per line. Empty lines and lines starting with \*(Aq#\*(Aq are ignored. Uses the same pattern format as the \*(Aq\-\-filter\*(Aq option.
This can also be used via the $WATCHEXEC_FILTER_FILES environment variable.
.TP
\fB\-j\fR, \fB\-\-filter\-prog\fR \fI\fR
Filter programs.
Provide your own custom filter programs in jaq (similar to jq) syntax.
Programs are given an event in the same format as described in \*(Aq\-\-emit\-events\-to\*(Aq and must return a boolean. Invalid programs will make watchexec fail to start; use \*(Aq\-v\*(Aq to see program runtime errors.
In addition to the jaq stdlib, watchexec adds some custom filter definitions:
\- \*(Aqpath | file_meta\*(Aq returns file metadata or null if the file does not exist.
\- \*(Aqpath | file_size\*(Aq returns the size of the file at path, or null if it does not exist.
\- \*(Aqpath | file_read(bytes)\*(Aq returns a string with the first n bytes of the file at path. If the file is smaller than n bytes, the whole file is returned. There is no filter to read the whole file at once, to encourage limiting the amount of data read and processed.
\- \*(Aqstring | hash\*(Aq, and \*(Aqpath | file_hash\*(Aq return the hash of the string or file at path. No guarantee is made about the algorithm used: treat it as an opaque value.
\- \*(Aqany | kv_store(key)\*(Aq, \*(Aqkv_fetch(key)\*(Aq, and \*(Aqkv_clear\*(Aq provide a simple key\-value store. Data is kept in memory only; there is no persistence. Consistency is not guaranteed.
\- \*(Aqany | printout\*(Aq, \*(Aqany | printerr\*(Aq, and \*(Aqany | log(level)\*(Aq will print or log any given value to stdout, stderr, or the log (levels = error, warn, info, debug, trace), and pass the value through (so \*(Aq[1] | log("debug") | .[]\*(Aq will produce a \*(Aq1\*(Aq and log \*(Aq[1]\*(Aq).
All filtering done with such programs, and especially those using kv or filesystem access, is much slower than the other filtering methods. If filtering is too slow, events will back up and stall watchexec. Take care when designing your filters.
If the argument to this option starts with an \*(Aq@\*(Aq, the rest of the argument is taken to be the path to a file containing a jaq program.
Jaq programs are run in order, after all other filters, and short\-circuit: if a filter (jaq or not) rejects an event, execution stops there, and no other filters are run.
Additionally, they stop after outputting the first value, so you\*(Aqll want to use \*(Aqany\*(Aq or \*(Aqall\*(Aq when iterating, otherwise only the first item will be processed, which can be quite confusing!
Find user\-contributed programs or submit your own useful ones at .
## Examples:
Regexp ignore filter on paths:
\*(Aqall(.tags[] | select(.kind == "path"); .absolute | test("[.]test[.]js$")) | not\*(Aq
Pass any event that creates a file:
\*(Aqany(.tags[] | select(.kind == "fs"); .simple == "create")\*(Aq
Pass events that touch executable files:
\*(Aqany(.tags[] | select(.kind == "path" && .filetype == "file"); .absolute | metadata | .executable)\*(Aq
Ignore files that start with shebangs:
\*(Aqany(.tags[] | select(.kind == "path" && .filetype == "file"); .absolute | read(2) == "#!") | not\*(Aq
.TP
\fB\-\-fs\-events\fR \fI\fR
Filesystem events to filter to
This is a quick filter to only emit events for the given types of filesystem changes. Choose from \*(Aqaccess\*(Aq, \*(Aqcreate\*(Aq, \*(Aqremove\*(Aq, \*(Aqrename\*(Aq, \*(Aqmodify\*(Aq, \*(Aqmetadata\*(Aq. Multiple types can be given by repeating the option or by separating them with commas. By default, this is all types except for \*(Aqaccess\*(Aq.
This may apply filtering at the kernel level when possible, which can be more efficient, but may be more confusing when reading the logs.
.TP
\fB\-i\fR, \fB\-\-ignore\fR \fI\fR
Filename patterns to filter out
Provide a glob\-like filter pattern, and events for files matching the pattern will be excluded. Multiple patterns can be given by repeating the option.
Events that are not from files (e.g. signals, keyboard events) will pass through untouched.
.TP
\fB\-\-ignore\-file\fR \fI\fR
Files to load ignores from
Provide a path to a file containing ignores, one per line.
Empty lines and lines starting with \*(Aq#\*(Aq are ignored. Uses the same pattern format as the \*(Aq\-\-ignore\*(Aq option.
This can also be used via the $WATCHEXEC_IGNORE_FILES environment variable.
.TP
\fB\-\-ignore\-nothing\fR
Don\*(Aqt ignore anything at all
This is a shorthand for \*(Aq\-\-no\-discover\-ignore\*(Aq, \*(Aq\-\-no\-default\-ignore\*(Aq.
Note that ignores explicitly loaded via other command line options, such as \*(Aq\-\-ignore\*(Aq or \*(Aq\-\-ignore\-file\*(Aq, will still be used.
.TP
\fB\-\-no\-default\-ignore\fR
Don\*(Aqt use internal default ignores
Watchexec has a set of default ignore patterns, such as editor swap files, `*.pyc`, `*.pyo`, `.DS_Store`, `.bzr`, `_darcs`, `.fossil\-settings`, `.git`, `.hg`, `.pijul`, `.svn`, and Watchexec log files.
.TP
\fB\-\-no\-discover\-ignore\fR
Don\*(Aqt discover ignore files at all
This is a shorthand for \*(Aq\-\-no\-global\-ignore\*(Aq, \*(Aq\-\-no\-vcs\-ignore\*(Aq, \*(Aq\-\-no\-project\-ignore\*(Aq, but even more efficient as it will skip all the ignore discovery mechanisms from the get go.
Note that default ignores are still loaded, see \*(Aq\-\-no\-default\-ignore\*(Aq.
.TP
\fB\-\-no\-global\-ignore\fR
Don\*(Aqt load global ignores
This disables loading of global or user ignore files, like \*(Aq~/.gitignore\*(Aq, \*(Aq~/.config/watchexec/ignore\*(Aq, or \*(Aq%APPDATA%\\Bazaar\\2.0\\ignore\*(Aq.
Contrast with \*(Aq\-\-no\-vcs\-ignore\*(Aq and \*(Aq\-\-no\-project\-ignore\*(Aq.
Supported global ignore files:
\- Git (if core.excludesFile is set): the file at that path
\- Git (otherwise): the first found of $XDG_CONFIG_HOME/git/ignore, %APPDATA%/.gitignore, %USERPROFILE%/.gitignore, $HOME/.config/git/ignore, $HOME/.gitignore.
\- Bazaar: the first found of %APPDATA%/Bazaar/2.0/ignore, $HOME/.bazaar/ignore.
\- Watchexec: the first found of $XDG_CONFIG_HOME/watchexec/ignore, %APPDATA%/watchexec/ignore, %USERPROFILE%/.watchexec/ignore, $HOME/.watchexec/ignore.
Like for project files, Git and Bazaar global files will only be used for the corresponding VCS as used in the project.
.TP
\fB\-\-no\-meta\fR
Don\*(Aqt emit fs events for metadata changes
This is a shorthand for \*(Aq\-\-fs\-events create,remove,rename,modify\*(Aq. Using it alongside the \*(Aq\-\-fs\-events\*(Aq option is nonsensical and not allowed.
.TP
\fB\-\-no\-project\-ignore\fR
Don\*(Aqt load project\-local ignores
This disables loading of project\-local ignore files, like \*(Aq.gitignore\*(Aq or \*(Aq.ignore\*(Aq in the watched project.
This is contrasted with \*(Aq\-\-no\-vcs\-ignore\*(Aq, which disables loading of Git and other VCS ignore files, and with \*(Aq\-\-no\-global\-ignore\*(Aq, which disables loading of global or user ignore files, like \*(Aq~/.gitignore\*(Aq or \*(Aq~/.config/watchexec/ignore\*(Aq.
Supported project ignore files:
\- Git: .gitignore at project root and child directories, .git/info/exclude, and the file pointed to by `core.excludesFile` in .git/config.
\- Mercurial: .hgignore at project root and child directories.
\- Bazaar: .bzrignore at project root.
\- Darcs: _darcs/prefs/boring
\- Fossil: .fossil\-settings/ignore\-glob
\- Ripgrep/Watchexec/generic: .ignore at project root and child directories.
VCS ignore files (Git, Mercurial, Bazaar, Darcs, Fossil) are only used if the corresponding VCS is discovered to be in use for the project/origin. For example, a .bzrignore in a Git repository will be discarded.
.TP
\fB\-\-no\-vcs\-ignore\fR
Don\*(Aqt load gitignores
Among other VCS exclude files, like for Mercurial, Subversion, Bazaar, Darcs, Fossil. Note that Watchexec will detect which of these is in use, if any, and only load the relevant files.
Both global (like \*(Aq~/.gitignore\*(Aq) and local (like \*(Aq.gitignore\*(Aq) files are considered.
This option is useful if you want to watch files that are ignored by Git.
.TP
\fB\-\-project\-origin\fR \fI\fR
Set the project origin
Watchexec will attempt to discover the project\*(Aqs "origin" (or "root") by searching for a variety of markers, like files or directory patterns. It does its best but sometimes gets it wrong, and you can override that with this option.
The project origin is used to determine the path of certain ignore files, which VCS is being used, the meaning of a leading \*(Aq/\*(Aq in filtering patterns, and maybe more in the future.
When set, Watchexec will also not bother searching, which can be significantly faster.
.TP
\fB\-w\fR, \fB\-\-watch\fR \fI\fR
Watch a specific file or directory
By default, Watchexec watches the current directory.
When watching a single file, it\*(Aqs often better to watch the containing directory instead, and filter on the filename. Some editors may replace the file with a new one when saving, and some platforms may not detect that or further changes.
Upon starting, Watchexec resolves a "project origin" from the watched paths. See the help for \*(Aq\-\-project\-origin\*(Aq for more information.
This option can be specified multiple times to watch multiple files or directories.
The special value \*(Aq/dev/null\*(Aq, provided as the only path watched, will cause Watchexec to not watch any paths. Other event sources (like signals or key events) may still be used.
.TP
\fB\-W\fR, \fB\-\-watch\-non\-recursive\fR \fI\fR
Watch a specific directory, non\-recursively
Unlike \*(Aq\-w\*(Aq, folders watched with this option are not recursed into.
This option can be specified multiple times to watch multiple directories non\-recursively.
.TP
\fB\-F\fR, \fB\-\-watch\-file\fR \fI\fR
Watch files and directories from a file
Each line in the file will be interpreted as if given to \*(Aq\-w\*(Aq.
For more complex uses (like watching non\-recursively), use the argfile capability: build a file containing command\-line options and pass it to watchexec with `@path/to/argfile`.
The special value \*(Aq\-\*(Aq will read from STDIN; this is incompatible with \*(Aq\-\-stdin\-quit\*(Aq.
.SH DEBUGGING
.TP
\fB\-\-log\-file\fR [\fI\fR]
Write diagnostic logs to a file
This writes diagnostic logs to a file, instead of the terminal, in JSON format. If a log level was not already specified, this will set it to \*(Aq\-vvv\*(Aq.
If a path is not provided, the default is the working directory. Note that with \*(Aq\-\-ignore\-nothing\*(Aq, writes to the log file will likely get picked up by Watchexec, causing a loop; prefer setting a path outside of the watched directory.
If the path provided is a directory, a file will be created in that directory. The file name will be the current date and time, in the format \*(Aqwatchexec.YYYY\-MM\-DDTHH\-MM\-SSZ.log\*(Aq.
.TP
\fB\-\-print\-events\fR
Print events that trigger actions
This prints the events that triggered the action when handling it (after debouncing), in a human readable form. This is useful for debugging filters.
Use \*(Aq\-vvv\*(Aq instead when you need more diagnostic information.
.TP
\fB\-v\fR, \fB\-\-verbose\fR
Set diagnostic log level
This enables diagnostic logging, which is useful for investigating bugs or gaining more insight into faulty filters or "missing" events. Use multiple times to increase verbosity.
Goes up to \*(Aq\-vvvv\*(Aq.
When submitting bug reports, default to a \*(Aq\-vvv\*(Aq log level. You may want to use with \*(Aq\-\-log\-file\*(Aq to avoid polluting your terminal.
Setting $WATCHEXEC_LOG also works, and takes precedence, but is not recommended. However, using $WATCHEXEC_LOG is the only way to get logs from before these options are parsed.
.SH OUTPUT
.TP
\fB\-\-bell\fR
Ring the terminal bell on command completion
.TP
\fB\-c\fR, \fB\-\-clear\fR [\fI\fR]
Clear screen before running command
If this doesn\*(Aqt completely clear the screen, try \*(Aq\-\-clear=reset\*(Aq.
.TP
\fB\-\-color\fR \fI\fR [default: auto]
When to use terminal colours
Setting the environment variable `NO_COLOR` to any value is equivalent to `\-\-color=never`.
.TP
\fB\-N\fR, \fB\-\-notify\fR [\fI\fR]
Alert when commands start and end
With this, Watchexec will emit a desktop notification when a command starts and ends, on supported platforms. On unsupported platforms, it may silently do nothing, or log a warning.
The mode can be specified to only notify when the command `start`s, `end`s, or for `both` (which is the default).
.TP
\fB\-q\fR, \fB\-\-quiet\fR
Don\*(Aqt print starting and stopping messages
By default Watchexec will print a message when the command starts and stops. This option disables this behaviour, so only the command\*(Aqs output, warnings, and errors will be printed.
.TP
\fB\-\-timings\fR
Print how long the command took to run
This may not be exactly accurate, as it includes some overhead from Watchexec itself. Use the `time` utility, high\-precision timers, or benchmarking tools for more accurate results.
.SH EXTRA
Use @argfile as first argument to load arguments from the file \*(Aqargfile\*(Aq (one argument per line) which will be inserted in place of the @argfile (further arguments on the CLI will override or add onto those in the file).
Didn\*(Aqt expect this much output? Use the short \*(Aq\-h\*(Aq flag to get short help.
.SH VERSION
v2.5.1
.SH AUTHORS
Félix Saparelli , Matt Green

================================================
FILE: doc/watchexec.1.md
================================================

# NAME

watchexec - Execute commands when watched files change

# SYNOPSIS

**watchexec** \[**\--bell**\] \[**-c**\|**\--clear**\] \[**\--completions**\] \[**\--color**\] \[**-d**\|**\--debounce**\] \[**\--delay-run**\] \[**-e**\|**\--exts**\] \[**-E**\|**\--env**\] \[**\--emit-events-to**\] \[**-f**\|**\--filter**\] \[**\--socket**\] \[**\--filter-file**\] \[**-j**\|**\--filter-prog**\] \[**\--fs-events**\] \[**-i**\|**\--ignore**\] \[**-I**\|**\--interactive**\] \[**\--exit-on-error**\] \[**\--ignore-file**\] \[**\--ignore-nothing**\] \[**\--log-file**\] \[**\--manual**\] \[**\--map-signal**\] \[**-n** \] \[**-N**\|**\--notify**\] \[**\--no-default-ignore**\] \[**\--no-discover-ignore**\] \[**\--no-process-group**\] \[**\--no-global-ignore**\] \[**\--no-meta**\] \[**\--no-project-ignore**\] \[**\--no-vcs-ignore**\] \[**-o**\|**\--on-busy-update**\] \[**\--only-emit-events**\] \[**\--poll**\] \[**\--print-events**\] \[**\--project-origin**\] \[**-p**\|**\--postpone**\] \[**-q**\|**\--quiet**\] \[**-r**\|**\--restart**\] \[**-s**\|**\--signal**\] \[**\--shell**\] \[**\--stdin-quit**\] \[**\--stop-signal**\] \[**\--stop-timeout**\] \[**\--timeout**\] \[**\--timings**\] \[**-v**\|**\--verbose**\]\... \[**-w**\|**\--watch**\] \[**\--workdir**\] \[**-W**\|**\--watch-non-recursive**\] \[**\--wrap-process**\] \[**-F**\|**\--watch-file**\] \[**-h**\|**\--help**\] \[**-V**\|**\--version**\] \[*COMMAND*\]

# DESCRIPTION

Execute commands when watched files change.

Recursively monitors the current directory for changes, executing the command when a filesystem change is detected (among other event sources). By default, watchexec uses efficient kernel-level mechanisms to watch for changes.

At startup, the specified command is run once, and watchexec begins monitoring for changes.
Events are debounced and checked using a variety of mechanisms, which you can control using the flags in the \*\*Filtering\*\* section. The order of execution is: internal prioritisation (signals come before everything else, and SIGINT/SIGTERM are processed even more urgently), then file event kind (\`\--fs-events\`), then files explicitly watched with \`-w\`, then ignores (\`\--ignore\` and co), then filters (which includes \`\--exts\`), then filter programs.

Examples:

Rebuild a project when source files change:

\$ watchexec make

Watch all HTML, CSS, and JavaScript files for changes:

\$ watchexec -e html,css,js make

Run tests when source files change, clearing the screen each time:

\$ watchexec -c make test

Launch and restart a node.js server:

\$ watchexec -r node app.js

Watch lib and src directories for changes, rebuilding each time:

\$ watchexec -w lib -w src make

# OPTIONS

**\--completions** *\*
: Generate a shell completions script

Provides a completions script or configuration for the given shell. If Watchexec is not distributed with pre-generated completions, you can use this to generate them yourself.

Supported shells: bash, elvish, fish, nu, powershell, zsh.

**\--manual**
: Show the manual page

This shows the manual page for Watchexec, if the output is a terminal and the man program is available. If not, the manual page is printed to stdout in ROFF format (suitable for writing to a watchexec.1 file).

**\--only-emit-events**
: Only emit events to stdout, run no commands.

This is a convenience option for using Watchexec as a file watcher, without running any commands. It is almost equivalent to using \`cat\` as the command, except that it will not spawn a new process for each event.

This option implies \`\--emit-events-to=json-stdio\`; you may also use the text mode by specifying \`\--emit-events-to=stdio\`.
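In that mode, events arrive on stdout as one JSON object per line. A minimal sketch of a line-oriented consumer follows; the event line is simulated with `printf` so the example runs without watchexec (in real use the input would come from `watchexec --only-emit-events | this-script`):

```shell
# Reading a json-stdio event stream line by line.
# The single event below is simulated, not produced by watchexec.
printf '%s\n' '{"tags":[{"kind":"fs","simple":"create"}]}' |
while IFS= read -r event; do
  echo "received: $event"
done
```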
**-h**, **\--help**
: Print help (see a summary with -h)

**-V**, **\--version**
: Print version

\[*COMMAND*\]
: Command (program and arguments) to run on changes

It's run when events pass filters and the debounce period (and once at startup unless \--postpone is given). If you pass flags to the command, you should separate them with \-- though that is not strictly required.

Examples:

\$ watchexec -w src npm run build

\$ watchexec -w src \-- rsync -a src dest

Take care when using globs or other shell expansions in the command. Your shell may expand them before ever passing them to Watchexec, and the results may not be what you expect. Compare:

\$ watchexec echo src/\*.rs

\$ watchexec echo 'src/\*.rs'

\$ watchexec \--shell=none echo 'src/\*.rs'

Behaviour depends on the value of \--shell: for all except none, every part of the command is joined together into one string with a single ascii space character, and given to the shell as described in the help for \--shell. For none, each distinct element of the command is passed as per the execvp(3) convention: first argument is the program, as a path or searched for in the PATH environment variable, rest are arguments.

# COMMAND

**\--delay-run** *\*
: Sleep before running the command

This option will cause Watchexec to sleep for the specified amount of time before running the command, after an event is detected. This is like using \"sleep 5 && command\" in a shell, but portable and slightly more efficient.

Takes a unit-less value in seconds, or a time span value such as \"2min 5s\". Providing a unit-less value is deprecated and will warn; it will be an error in the future.

**-E**, **\--env** *\*
: Add env vars to the command

This is a convenience option for setting environment variables for the command, without setting them for the Watchexec process itself.

Use key=value syntax. Multiple variables can be set by repeating the option.
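As a rough sketch of the effect, `-E KEY=VALUE` amounts to prefixing the child's environment, much like the standard `env` utility does (this example is self-contained and does not involve watchexec; `MYVAR` is an illustrative name):

```shell
# What -E MYVAR=hello amounts to for the child process,
# emulated with the standard `env` utility:
env MYVAR=hello sh -c 'echo "$MYVAR"'
# prints: hello
```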
**\--socket** *\*
: Provide a socket to the command

This implements the systemd socket-passing protocol, like with \`systemfd\`: sockets are opened from the watchexec process, and then passed to the commands it runs. This lets you keep sockets open and avoid address reuse issues or dropping packets.

This option can be supplied multiple times, to open multiple sockets.

The value can be either of \`PORT\` (opens a TCP listening socket at that port), \`HOST:PORT\` (specify a host IP address; IPv6 addresses can be specified \`\[bracketed\]\`), \`TYPE::PORT\` or \`TYPE::HOST:PORT\` (specify a socket type, \`tcp\` / \`udp\`).

This integration only provides basic support; if you want more control you should use the \`systemfd\` tool from \, upon which this is based. The syntax here and the spawning behaviour is identical to \`systemfd\`, and both watchexec and systemfd are compatible implementations of the systemd socket-activation protocol.

Watchexec does \_not\_ set the \`LISTEN_PID\` variable on unix, which means any child process of your command could accidentally bind to the sockets, unless the \`LISTEN\_\*\` variables are removed from the environment.

**-n**
: Shorthand for \--shell=none

**\--no-process-group**
: Don't use a process group

By default, Watchexec will run the command in a process group, so that signals and terminations are sent to all processes in the group. Sometimes that's not what you want, and you can disable the behaviour with this option.

Deprecated, use \--wrap-process=none instead.

**\--shell** *\*
: Use a different shell

By default, Watchexec will use \$SHELL if it's defined or a default of sh on Unix-likes, and either pwsh, powershell, or cmd (CMD.EXE) on Windows, depending on what Watchexec detects is the running shell.

With this option, you can override that and use a different shell, for example one with more features or one which has your custom aliases and functions.
If the value has spaces, it is parsed as a command line, and the first word used as the shell program, with the rest as arguments to the shell.

The command is run with the -c flag (except for cmd on Windows, where it's /C).

The special value none can be used to disable shell use entirely. In that case, the command provided to Watchexec will be parsed, with the first word being the executable and the rest being the arguments, and executed directly. Note that this parsing is rudimentary, and may not work as expected in all cases.

Using none is a little more efficient and can enable a stricter interpretation of the input, but it also means that you can't use shell features like globbing, redirection, control flow, logic, or pipes.

Examples:

Use without shell:

\$ watchexec -n \-- zsh -x -o shwordsplit scr

Use with powershell core:

\$ watchexec \--shell=pwsh \-- Test-Connection localhost

Use with CMD.exe:

\$ watchexec \--shell=cmd \-- dir

Use with a different unix shell:

\$ watchexec \--shell=bash \-- echo \$BASH_VERSION

Use with a unix shell and options:

\$ watchexec \--shell=zsh -x -o shwordsplit \-- scr

**\--stop-signal** *\*
: Signal to send to stop the command

This is used by restart and signal modes of \--on-busy-update (unless \--signal is provided). The restart behaviour is to send the signal, wait for the command to exit, and if it hasn't exited after some time (see \--stop-timeout), forcefully terminate it.

The default on unix is \"SIGTERM\".

Input is parsed as a full signal name (like \"SIGTERM\"), a short signal name (like \"TERM\"), or a signal number (like \"15\"). All input is case-insensitive.

On Windows this option is technically supported but only supports the \"KILL\" event, as Watchexec cannot yet deliver other events. Windows doesn't have signals as such; instead it has termination (here called \"KILL\" or \"STOP\") and \"CTRL+C\", \"CTRL+BREAK\", and \"CTRL+CLOSE\" events.
For portability the unix signals \"SIGKILL\", \"SIGINT\", \"SIGTERM\", and \"SIGHUP\" are respectively mapped to these.

**\--stop-timeout** *\*
: Time to wait for the command to exit gracefully

This is used by the restart mode of \--on-busy-update. After the graceful stop signal is sent, Watchexec will wait for the command to exit. If it hasn't exited after this time, it is forcefully terminated.

Takes a unit-less value in seconds, or a time span value such as \"5min 20s\". Providing a unit-less value is deprecated and will warn; it will be an error in the future.

The default is 10 seconds. Set to 0 to immediately force-kill the command.

This has no practical effect on Windows as the command is always forcefully terminated; see \--stop-signal for why.

**\--timeout** *\*
: Kill the command if it runs longer than this duration

Takes a time span value such as \"30s\", \"5min\", or \"1h 30m\". When the timeout is reached, the command is gracefully stopped using \--stop-signal, then forcefully terminated after \--stop-timeout if still running.

Each run of the command has its own independent timeout.

**\--workdir** *\*
: Set the working directory

By default, the working directory of the command is the working directory of Watchexec. You can change that with this option. Note that paths may be less intuitive to use with this.

**\--wrap-process** *\* \[default: group\]
: Configure how the process is wrapped

By default, Watchexec will run the command in a session on Mac, in a process group in Unix, and in a Job Object in Windows.

Some Unix programs prefer running in a session, while others do not work in a process group.

Use group to use a process group, session to use a process session, and none to run the command directly. On Windows, either of group or session will use a Job Object.

If you find you need to specify this frequently for different kinds of programs, file an issue at \.
As errors of this nature are hard to debug and can be highly environment-dependent, reports from \*multiple affected people\* are more likely to be actioned promptly. Ask your friends/colleagues!

# EVENTS

**-d**, **\--debounce** *\*
: Time to wait for new events before taking action

When an event is received, Watchexec will wait for up to this amount of time before handling it (such as running the command). This is essential as what you might perceive as a single change may actually emit many events, and without this behaviour, Watchexec would run much too often. Additionally, it's not infrequent that file writes are not atomic, and each write may emit an event, so this is a good way to avoid running a command while a file is partially written.

An alternative use is to set a high value (like \"30min\" or longer), to save power or bandwidth on intensive tasks, like an ad-hoc backup script. In those use cases, note that every accumulated event will build up in memory.

Takes a unit-less value in milliseconds, or a time span value such as \"5sec 20ms\". Providing a unit-less value is deprecated and will warn; it will be an error in the future.

The default is 50 milliseconds. Setting to 0 is highly discouraged.

**\--emit-events-to** *\*
: Configure event emission

Watchexec can emit event information when running a command, which can be used by the child process to target specific changed files.

One thing to take care with is assuming inherent behaviour where there is only chance. Notably, it could appear as if the \`RENAMED\` variable contains both the original and the new path being renamed. In previous versions, it would even appear on some platforms as if the original always came before the new. However, none of this was true.
It's impossible to reliably and portably know which changed path is the old or new, \"half\" renames may appear (only the original, only the new), \"unknown\" renames may appear (change was a rename, but whether it was the old or new isn't known), rename events might split across two debouncing boundaries, and so on.

This option controls where that information is emitted. It defaults to none, which doesn't emit event information at all. The other options are environment (deprecated), stdio, file, json-stdio, and json-file.

The stdio and file modes are text-based: stdio writes absolute paths to the stdin of the command, one per line, each prefixed with \`create:\`, \`remove:\`, \`rename:\`, \`modify:\`, or \`other:\`, then closes the handle; file writes the same thing to a temporary file, and its path is given with the \$WATCHEXEC_EVENTS_FILE environment variable.

There are also two JSON modes, which are based on JSON objects and can represent the full set of events Watchexec handles. Here's an example of a folder being created on Linux:

\`\`\`json
{
  \"tags\": \[
    { \"kind\": \"path\", \"absolute\": \"/home/user/your/new-folder\", \"filetype\": \"dir\" },
    { \"kind\": \"fs\", \"simple\": \"create\", \"full\": \"Create(Folder)\" },
    { \"kind\": \"source\", \"source\": \"filesystem\" }
  \],
  \"metadata\": { \"notify-backend\": \"inotify\" }
}
\`\`\`

The fields are as follows:

\- \`tags\`, structured event data.
- \`tags\[\].kind\`, which can be:
  \* path, along with:
    + \`absolute\`, an absolute path.
    + \`filetype\`, a file type if known (dir, file, symlink, other).
  \* fs:
    + \`simple\`, the \"simple\" event type (access, create, modify, remove, or other).
    + \`full\`, the \"full\" event type, which is too complex to fully describe here, but looks like General(Precise(Specific)).
  \* source, along with:
    + \`source\`, the source of the event (filesystem, keyboard, mouse, os, time, internal).
  \* keyboard, along with:
    + \`keycode\`. Currently only the value eof is supported.
  \* process, for events caused by processes:
    + \`pid\`, the process ID.
  \* signal, for signals sent to Watchexec:
    + \`signal\`, the normalised signal name (hangup, interrupt, quit, terminate, user1, user2).
  \* completion, for when a command ends:
    + \`disposition\`, the exit disposition (success, error, signal, stop, exception, continued).
    + \`code\`, the exit, signal, stop, or exception code.
- \`metadata\`, additional information about the event.

The json-stdio mode will emit JSON events to the standard input of the command, one per line, then close stdin. The json-file mode will create a temporary file, write the events to it, and provide the path to the file with the \$WATCHEXEC_EVENTS_FILE environment variable.

Finally, the environment mode was the default until 2.0. It sets environment variables with the paths of the affected files, for filesystem events:

\$WATCHEXEC_COMMON_PATH is set to the longest common path of all of the below variables, and so should be prepended to each path to obtain the full/real path. Then:

\- \$WATCHEXEC_CREATED_PATH is set when files/folders were created
- \$WATCHEXEC_REMOVED_PATH is set when files/folders were removed
- \$WATCHEXEC_RENAMED_PATH is set when files/folders were renamed
- \$WATCHEXEC_WRITTEN_PATH is set when files/folders were modified
- \$WATCHEXEC_META_CHANGED_PATH is set when files/folders' metadata were modified
- \$WATCHEXEC_OTHERWISE_CHANGED_PATH is set for every other kind of pathed event

Multiple paths are separated by the system path separator, ; on Windows and : on unix. Within each variable, paths are deduplicated and sorted in binary order (i.e. neither Unicode nor locale aware).

This is the legacy mode, is deprecated, and will be removed in the future. The environment is a very restricted space, while also limited in what it can usefully represent. Large numbers of files will either cause the environment to be truncated, or may error or crash the process entirely.
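To make the common-path mechanics concrete, here is a sketch of splitting one of these variables on the path separator and prepending \$WATCHEXEC_COMMON_PATH. The variable values are simulated, since watchexec would normally set them on the child:

```shell
# Rebuilding full paths in the deprecated `environment` mode.
# Both variables are simulated here; watchexec sets the real ones.
WATCHEXEC_COMMON_PATH=/home/user/project
WATCHEXEC_WRITTEN_PATH=/src/a.rs:/src/b.rs
old_ifs=$IFS
IFS=:   # unix path separator; it is ; on Windows
for rel in $WATCHEXEC_WRITTEN_PATH; do
  echo "changed: $WATCHEXEC_COMMON_PATH$rel"
done
IFS=$old_ifs
```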
The \$WATCHEXEC_COMMON_PATH is also unintuitive, as demonstrated by the multiple confused queries that have landed in my inbox over the years.

**-I**, **\--interactive**
: Respond to keypresses to quit, restart, or pause

In interactive mode, Watchexec listens for keypresses and responds to them. Currently supported keys are: r to restart the command, p to toggle pausing the watch, and q to quit.

This requires a terminal (TTY) and puts stdin into raw mode, so the child process will not receive stdin input.

**\--exit-on-error**
: Exit when the command has an error

By default, Watchexec will continue to watch and re-run the command after the command exits, regardless of its exit status. With this option, it will instead exit when the command completes with any non-success exit status.

This is useful when running Watchexec in a process manager or container, where you want the container to restart when the command fails rather than hang waiting for file changes.

**\--map-signal** *\*
: Translate signals from the OS to signals to send to the command

Takes a pair of signal names, separated by a colon, such as \"TERM:INT\" to map SIGTERM to SIGINT. The first signal is the one received by watchexec, and the second is the one sent to the command. The second can be omitted to discard the first signal, such as \"TERM:\" to not do anything on SIGTERM.

If SIGINT or SIGTERM are mapped, then they no longer quit Watchexec. Besides making it hard to quit Watchexec itself, this is useful to pass a Ctrl-C to the command without also terminating Watchexec and the underlying program with it, e.g. with \"INT:INT\".

This option can be specified multiple times to map multiple signals. Signal syntax is case-insensitive for short names (like \"TERM\", \"USR2\") and long names (like \"SIGKILL\", \"SIGHUP\"). Signal numbers are also supported (like \"15\", \"31\").
On Windows, the forms \"STOP\", \"CTRL+C\", and \"CTRL+BREAK\" are also supported for receiving, but Watchexec cannot yet deliver any \"signal\" other than a STOP.

**-o**, **\--on-busy-update** *\*

: What to do when receiving events while the command is running

The default is do-nothing, which ignores events while the command is running, so that changes that occur due to the command, like compilation outputs, are ignored. You can also use queue, which will run the command once again when the current run has finished if any events occurred while it was running, or restart, which terminates the running command and starts a new one. Finally, there's signal, which only sends a signal; this can be useful with programs that can reload their configuration without a full restart. The signal can be specified with the \--signal option.

**\--poll** \[*\*\]

: Poll for filesystem changes

By default, and where available, Watchexec uses the operating system's native file system watching capabilities. This option disables that and instead uses a polling mechanism, which is less efficient but can work around issues with some file systems (like network shares) or edge cases.

Optionally takes a unit-less value in milliseconds, or a time span value such as \"2s 500ms\", to use as the polling interval. If not specified, the default is 30 seconds. Providing a unit-less value is deprecated and will warn; it will be an error in the future.

Aliased as \--force-poll.

**-p**, **\--postpone**

: Wait until first change before running command

By default, Watchexec will run the command once immediately. With this option, it will instead wait until an event is detected before running the command as normal.

**-r**, **\--restart**

: Restart the process if it's still running

This is a shorthand for \--on-busy-update=restart.

**-s**, **\--signal** *\*

: Send a signal to the process when it's still running

Specify a signal to send to the process when it's still running.
This implies \--on-busy-update=signal; otherwise, the signal used when that mode is restart is controlled by \--stop-signal.

See the long documentation for \--stop-signal for syntax.

Signals are not supported on Windows at the moment, and will always be overridden to kill. See \--stop-signal for more on Windows \"signals\".

**\--stdin-quit**

: Exit when stdin closes

This watches the stdin file descriptor for EOF, and exits Watchexec gracefully when it is closed. This is used by some process managers to avoid leaving zombie processes around.

# FILTERING

**-e**, **\--exts** *\*

: Filename extensions to filter to

This is a quick filter to only emit events for files with the given extensions. Extensions can be given with or without the leading dot (e.g. js or .js). Multiple extensions can be given by repeating the option or by separating them with commas.

**-f**, **\--filter** *\*

: Filename patterns to filter to

Provide a glob-like filter pattern, and only events for files matching the pattern will be emitted. Multiple patterns can be given by repeating the option. Events that are not from files (e.g. signals, keyboard events) will pass through untouched.

**\--filter-file** *\*

: Files to load filters from

Provide a path to a file containing filters, one per line. Empty lines and lines starting with \# are ignored. Uses the same pattern format as the \--filter option.

This can also be used via the \$WATCHEXEC_FILTER_FILES environment variable.

**-j**, **\--filter-prog** *\*

: Filter programs

Provide your own custom filter programs in jaq (similar to jq) syntax. Programs are given an event in the same format as described in \--emit-events-to and must return a boolean. Invalid programs will make watchexec fail to start; use -v to see program runtime errors.

In addition to the jaq stdlib, watchexec adds some custom filter definitions:

\- path \| file_meta returns file metadata or null if the file does not exist.
\- path \| file_size returns the size of the file at path, or null if it does not exist.

\- path \| file_read(bytes) returns a string with the first n bytes of the file at path. If the file is smaller than n bytes, the whole file is returned. There is no filter to read the whole file at once, to encourage limiting the amount of data read and processed.

\- string \| hash, and path \| file_hash return the hash of the string or file at path. No guarantee is made about the algorithm used: treat it as an opaque value.

\- any \| kv_store(key), kv_fetch(key), and kv_clear provide a simple key-value store. Data is kept in memory only; there is no persistence. Consistency is not guaranteed.

\- any \| printout, any \| printerr, and any \| log(level) will print or log any given value to stdout, stderr, or the log (levels = error, warn, info, debug, trace), and pass the value through (so \[1\] \| log(\"debug\") \| .\[\] will produce a 1 and log \[1\]).

All filtering done with such programs, and especially those using kv or filesystem access, is much slower than the other filtering methods. If filtering is too slow, events will back up and stall watchexec. Take care when designing your filters.

If the argument to this option starts with an @, the rest of the argument is taken to be the path to a file containing a jaq program.

Jaq programs are run in order, after all other filters, and short-circuit: if a filter (jaq or not) rejects an event, execution stops there, and no other filters are run. Additionally, they stop after outputting the first value, so you'll want to use any or all when iterating; otherwise only the first item will be processed, which can be quite confusing!

Find user-contributed programs or submit your own useful ones at \.
\## Examples:

Regexp ignore filter on paths:

all(.tags\[\] \| select(.kind == \"path\"); .absolute \| test(\"\[.\]test\[.\]js\$\")) \| not

Pass any event that creates a file:

any(.tags\[\] \| select(.kind == \"fs\"); .simple == \"create\")

Pass events that touch executable files:

any(.tags\[\] \| select(.kind == \"path\" && .filetype == \"file\"); .absolute \| metadata \| .executable)

Ignore files that start with shebangs:

any(.tags\[\] \| select(.kind == \"path\" && .filetype == \"file\"); .absolute \| read(2) == \"#!\") \| not

**\--fs-events** *\*

: Filesystem events to filter to

This is a quick filter to only emit events for the given types of filesystem changes. Choose from access, create, remove, rename, modify, metadata. Multiple types can be given by repeating the option or by separating them with commas. By default, this is all types except for access.

This may apply filtering at the kernel level when possible, which can be more efficient, but may be more confusing when reading the logs.

**-i**, **\--ignore** *\*

: Filename patterns to filter out

Provide a glob-like filter pattern, and events for files matching the pattern will be excluded. Multiple patterns can be given by repeating the option. Events that are not from files (e.g. signals, keyboard events) will pass through untouched.

**\--ignore-file** *\*

: Files to load ignores from

Provide a path to a file containing ignores, one per line. Empty lines and lines starting with \# are ignored. Uses the same pattern format as the \--ignore option.

This can also be used via the \$WATCHEXEC_IGNORE_FILES environment variable.

**\--ignore-nothing**

: Don't ignore anything at all

This is a shorthand for \--no-discover-ignore, \--no-default-ignore.

Note that ignores explicitly loaded via other command line options, such as \--ignore or \--ignore-file, will still be used.
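The \--filter-file and \--ignore-file formats described above are simple enough to sketch: one pattern per line, skipping empty lines and \# comments. Here is a minimal illustrative parser in Python, not Watchexec's actual implementation (whether surrounding whitespace is trimmed is an assumption of this sketch):

```python
def parse_pattern_file(text):
    """Parse the --filter-file / --ignore-file format: one glob per line.

    Empty lines and lines starting with '#' are skipped; everything
    else is taken as a pattern in the same format as --filter/--ignore.
    """
    patterns = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        patterns.append(line)
    return patterns
```

For instance, a file containing a \# comment, a blank line, and the patterns target/** and *.tmp yields just those two patterns.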
**\--no-default-ignore**

: Don't use internal default ignores

Watchexec has a set of default ignore patterns, such as editor swap files, \`\*.pyc\`, \`\*.pyo\`, \`.DS_Store\`, \`.bzr\`, \`\_darcs\`, \`.fossil-settings\`, \`.git\`, \`.hg\`, \`.pijul\`, \`.svn\`, and Watchexec log files.

**\--no-discover-ignore**

: Don't discover ignore files at all

This is a shorthand for \--no-global-ignore, \--no-vcs-ignore, \--no-project-ignore, but even more efficient, as it skips all the ignore discovery mechanisms from the get-go.

Note that default ignores are still loaded; see \--no-default-ignore.

**\--no-global-ignore**

: Don't load global ignores

This disables loading of global or user ignore files, like \~/.gitignore, \~/.config/watchexec/ignore, or %APPDATA%\\Bazaar\\2.0\\ignore. Contrast with \--no-vcs-ignore and \--no-project-ignore.

Supported global ignore files:

\- Git (if core.excludesFile is set): the file at that path
\- Git (otherwise): the first found of \$XDG_CONFIG_HOME/git/ignore, %APPDATA%/.gitignore, %USERPROFILE%/.gitignore, \$HOME/.config/git/ignore, \$HOME/.gitignore.
\- Bazaar: the first found of %APPDATA%/Bazaar/2.0/ignore, \$HOME/.bazaar/ignore.
\- Watchexec: the first found of \$XDG_CONFIG_HOME/watchexec/ignore, %APPDATA%/watchexec/ignore, %USERPROFILE%/.watchexec/ignore, \$HOME/.watchexec/ignore.

As with project files, Git and Bazaar global files will only be used if the corresponding VCS is in use in the project.

**\--no-meta**

: Don't emit fs events for metadata changes

This is a shorthand for \--fs-events create,remove,rename,modify. Using it alongside the \--fs-events option is nonsensical and not allowed.

**\--no-project-ignore**

: Don't load project-local ignores

This disables loading of project-local ignore files, like .gitignore or .ignore in the watched project.
This is contrasted with \--no-vcs-ignore, which disables loading of Git and other VCS ignore files, and with \--no-global-ignore, which disables loading of global or user ignore files, like \~/.gitignore or \~/.config/watchexec/ignore.

Supported project ignore files:

\- Git: .gitignore at project root and child directories, .git/info/exclude, and the file pointed to by \`core.excludesFile\` in .git/config.
\- Mercurial: .hgignore at project root and child directories.
\- Bazaar: .bzrignore at project root.
\- Darcs: \_darcs/prefs/boring
\- Fossil: .fossil-settings/ignore-glob
\- Ripgrep/Watchexec/generic: .ignore at project root and child directories.

VCS ignore files (Git, Mercurial, Bazaar, Darcs, Fossil) are only used if the corresponding VCS is discovered to be in use for the project/origin. For example, a .bzrignore in a Git repository will be discarded.

**\--no-vcs-ignore**

: Don't load gitignores

Among other VCS exclude files, like for Mercurial, Subversion, Bazaar, Darcs, Fossil. Note that Watchexec will detect which of these is in use, if any, and only load the relevant files. Both global (like \~/.gitignore) and local (like .gitignore) files are considered.

This option is useful if you want to watch files that are ignored by Git.

**\--project-origin** *\*

: Set the project origin

Watchexec will attempt to discover the project's \"origin\" (or \"root\") by searching for a variety of markers, like files or directory patterns. It does its best, but sometimes gets it wrong, and you can override that with this option.

The project origin is used to determine the path of certain ignore files, which VCS is being used, the meaning of a leading / in filtering patterns, and maybe more in the future.

When set, Watchexec will also not bother searching, which can be significantly faster.

**-w**, **\--watch** *\*

: Watch a specific file or directory

By default, Watchexec watches the current directory.
When watching a single file, it's often better to watch the containing directory instead, and filter on the filename. Some editors may replace the file with a new one when saving, and some platforms may not detect that or further changes.

Upon starting, Watchexec resolves a \"project origin\" from the watched paths. See the help for \--project-origin for more information.

This option can be specified multiple times to watch multiple files or directories.

The special value /dev/null, provided as the only path watched, will cause Watchexec to not watch any paths. Other event sources (like signals or key events) may still be used.

**-W**, **\--watch-non-recursive** *\*

: Watch a specific directory, non-recursively

Unlike -w, folders watched with this option are not recursed into.

This option can be specified multiple times to watch multiple directories non-recursively.

**-F**, **\--watch-file** *\*

: Watch files and directories from a file

Each line in the file will be interpreted as if given to -w.

For more complex uses (like watching non-recursively), use the argfile capability: build a file containing command-line options and pass it to watchexec with \`@path/to/argfile\`.

The special value - will read from STDIN; this is incompatible with \--stdin-quit.

# DEBUGGING

**\--log-file** \[*\*\]

: Write diagnostic logs to a file

This writes diagnostic logs to a file, instead of the terminal, in JSON format. If a log level was not already specified, this will set it to -vvv.

If a path is not provided, the default is the working directory. Note that with \--ignore-nothing, the write events to the log will likely get picked up by Watchexec, causing a loop; prefer setting a path outside of the watched directory.

If the path provided is a directory, a file will be created in that directory. The file name will be the current date and time, in the format watchexec.YYYY-MM-DDTHH-MM-SSZ.log.
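The argfile capability mentioned under \--watch-file (and in the EXTRA section) can be modelled roughly as follows: each line of the file becomes one argument, spliced in where the @argfile token appeared. This Python sketch is an illustrative model only, not Watchexec's actual argument parser; skipping empty lines is an assumption here.

```python
def expand_argfile(argv, read_file):
    """Splice arguments from a leading '@file' token into argv.

    `read_file` maps a path to its text content; each non-empty line
    of the file becomes one argument, replacing the '@file' token.
    Remaining CLI arguments follow, so they can override or extend
    those from the file.
    """
    if argv and argv[0].startswith("@") and len(argv[0]) > 1:
        lines = read_file(argv[0][1:]).splitlines()
        return [line for line in lines if line] + argv[1:]
    return list(argv)
```

For example, an argfile containing the lines -W and src/ expands @argfile into those two arguments before the rest of the command line.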
**\--print-events**

: Print events that trigger actions

This prints the events that triggered the action when handling it (after debouncing), in a human-readable form. This is useful for debugging filters.

Use -vvv instead when you need more diagnostic information.

**-v**, **\--verbose**

: Set diagnostic log level

This enables diagnostic logging, which is useful for investigating bugs or gaining more insight into faulty filters or \"missing\" events. Use multiple times to increase verbosity. Goes up to -vvvv.

When submitting bug reports, default to a -vvv log level. You may want to use this with \--log-file to avoid polluting your terminal.

Setting \$WATCHEXEC_LOG also works, and takes precedence, but is not recommended. However, using \$WATCHEXEC_LOG is the only way to get logs from before these options are parsed.

# OUTPUT

**\--bell**

: Ring the terminal bell on command completion

**-c**, **\--clear** \[*\*\]

: Clear screen before running command

If this doesn't completely clear the screen, try \--clear=reset.

**\--color** *\* \[default: auto\]

: When to use terminal colours

Setting the environment variable \`NO_COLOR\` to any value is equivalent to \`\--color=never\`.

**-N**, **\--notify** \[*\*\]

: Alert when commands start and end

With this, Watchexec will emit a desktop notification when a command starts and ends, on supported platforms. On unsupported platforms, it may silently do nothing, or log a warning.

The mode can be specified to only notify when the command \`start\`s, \`end\`s, or for \`both\` (which is the default).

**-q**, **\--quiet**

: Don't print starting and stopping messages

By default, Watchexec will print a message when the command starts and stops. This option disables this behaviour, so only the command's output, warnings, and errors will be printed.

**\--timings**

: Print how long the command took to run

This may not be exactly accurate, as it includes some overhead from Watchexec itself.
Use the \`time\` utility, high-precision timers, or benchmarking tools for more accurate results.

# EXTRA

Use \@argfile as the first argument to load arguments from the file argfile (one argument per line), which will be inserted in place of the \@argfile (further arguments on the CLI will override or add onto those in the file).

Didn't expect this much output? Use the short -h flag to get short help.

# VERSION

v2.5.1

# AUTHORS

Félix Saparelli \, Matt Green \