Repository: tokio-rs/tokio-metrics
Branch: main
Commit: 54942a38c602
Files: 23
Total size: 309.8 KB
Directory structure:
gitextract_igc6ri6v/
├── .github/
│   └── workflows/
│       ├── ci.yml
│       └── release.yml
├── .gitignore
├── CHANGELOG.md
├── CONTRIBUTING.md
├── Cargo.toml
├── LICENSE
├── README.md
├── benches/
│   └── poll_overhead.rs
├── examples/
│   ├── axum.rs
│   ├── runtime.rs
│   ├── stream.rs
│   └── task.rs
├── release-plz.toml
├── src/
│   ├── derived_metrics.rs
│   ├── lib.rs
│   ├── metrics_rs.rs
│   ├── runtime/
│   │   ├── metrics_rs_integration.rs
│   │   └── poll_time_histogram.rs
│   ├── runtime.rs
│   ├── task/
│   │   └── metrics_rs_integration.rs
│   └── task.rs
└── tests/
    └── auto_metrics.rs
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/ci.yml
================================================
name: CI
on:
  push:
    branches:
      - main
  pull_request: {}

env:
  RUSTFLAGS: -Dwarnings
  RUST_BACKTRACE: 1
  # Change to specific Rust release to pin
  rust_stable: stable
  rust_clippy: 1.52.0
  rust_min: 1.49.0

jobs:
  check:
    # Run `cargo check` first to ensure that the pushed code at least compiles.
    runs-on: ubuntu-latest
    env:
      RUSTFLAGS: --cfg tokio_unstable -Dwarnings
    steps:
      - uses: actions/checkout@master
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ env.rust_stable }}
          override: true
          profile: minimal
          components: clippy, rustfmt
      - uses: Swatinem/rust-cache@v1
      - name: Check
        uses: actions-rs/cargo@v1
        with:
          command: clippy
          args: --all --all-targets --all-features
      - name: rustfmt
        uses: actions-rs/cargo@v1
        with:
          command: fmt
          args: --all -- --check

  check-docs:
    runs-on: ubuntu-latest
    env:
      RUSTDOCFLAGS: -D broken-intra-doc-links --cfg tokio_unstable
      RUSTFLAGS: --cfg tokio_unstable -Dwarnings
    steps:
      - uses: actions/checkout@master
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ env.rust_stable }}
          override: true
          profile: minimal
      - uses: Swatinem/rust-cache@v1
      - name: cargo doc
        run: cargo doc --all-features --no-deps

  cargo-hack:
    runs-on: ubuntu-latest
    env:
      RUSTFLAGS: --cfg tokio_unstable -Dwarnings
    steps:
      - uses: actions/checkout@master
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ env.rust_stable }}
          override: true
          profile: minimal
      - uses: Swatinem/rust-cache@v1
      - name: Install cargo-hack
        run: |
          curl -LsSf https://github.com/taiki-e/cargo-hack/releases/latest/download/cargo-hack-x86_64-unknown-linux-gnu.tar.gz | tar xzf - -C ~/.cargo/bin
      - name: cargo hack check
        run: cargo hack check --each-feature --no-dev-deps --all

  test-versions:
    name: test-version (${{ matrix.name }})
    needs: check
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - rustflags: "--cfg tokio_unstable -Dwarnings"
            name: "tokio-unstable"
          - rustflags: "-Dwarnings"
            name: "stable"
    env:
      RUSTFLAGS: ${{ matrix.rustflags }}
    steps:
      - uses: actions/checkout@master
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ env.rust_stable }}
          override: true
          profile: minimal
      - uses: Swatinem/rust-cache@v1
      - name: Run tests
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --all --all-features --all-targets

  test-docs:
    needs: check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ env.rust_stable }}
          override: true
          profile: minimal
      - uses: Swatinem/rust-cache@v1
      - name: Run doc tests
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --all-features --doc
        env:
          RUSTDOCFLAGS: --cfg tokio_unstable
          RUSTFLAGS: --cfg tokio_unstable -Dwarnings

  semver:
    name: semver
    needs: check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check `tokio-metrics` semver with only default features
        uses: obi1kenobi/cargo-semver-checks-action@v2
        with:
          rust-toolchain: ${{ env.rust_stable }}
          package: tokio-metrics
          feature-group: default-features
      - name: Check `tokio-metrics` semver with all features & tokio_unstable RUSTFLAG
        uses: obi1kenobi/cargo-semver-checks-action@v2
        with:
          rust-toolchain: ${{ env.rust_stable }}
          package: tokio-metrics
          feature-group: all-features
        env:
          RUSTFLAGS: --cfg tokio_unstable -Dwarnings
================================================
FILE: .github/workflows/release.yml
================================================
name: Publish release
permissions:
  pull-requests: write
  contents: write
  id-token: write # Required for OIDC token exchange / trusted publishing

on:
  push:
    branches:
      - main

jobs:
  release-plz-release:
    if: github.repository_owner == 'tokio-rs'
    name: Release-plz release
    runs-on: ubuntu-latest
    environment: release
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable
      - name: Authenticate to crates.io
        uses: rust-lang/crates-io-auth-action@v1
        id: auth
      - name: Run release-plz
        uses: release-plz/action@v0.5.102
        with:
          command: release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CARGO_REGISTRY_TOKEN: ${{ steps.auth.outputs.token }}
================================================
FILE: .gitignore
================================================
/target
Cargo.lock
.vscode
================================================
FILE: CHANGELOG.md
================================================
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.5.0](https://github.com/tokio-rs/tokio-metrics/compare/v0.4.9...v0.5.0) - 2026-04-09
### Breaking Changes
- `RuntimeMetrics::poll_time_histogram` is now a `PollTimeHistogram` instead of `Vec<u64>`. Each bucket carries its duration range alongside the count. ([#121](https://github.com/tokio-rs/tokio-metrics/pull/121))
### Added
- Add `metrique-integration` feature to use `RuntimeMetrics` as a metrique unit of work ([#121](https://github.com/tokio-rs/tokio-metrics/pull/121))
### Other
- Fix doctests failing after Tokio v1.51 ([#122](https://github.com/tokio-rs/tokio-metrics/pull/122))
## [0.4.9](https://github.com/tokio-rs/tokio-metrics/compare/v0.4.8...v0.4.9) - 2026-02-23
### Added
- *(task)* Expose a static-friendly TaskMonitorCore without inner Arc ([#115](https://github.com/tokio-rs/tokio-metrics/pull/115))
### Other
- Fix doctest feature gates and relax rt requirement for task metrics reporter ([#118](https://github.com/tokio-rs/tokio-metrics/pull/118))
## [0.4.8](https://github.com/tokio-rs/tokio-metrics/compare/v0.4.7...v0.4.8) - 2026-02-16
### Added
- publicly export task `TaskIntervals` type ([#112](https://github.com/tokio-rs/tokio-metrics/pull/112))
### Fixed
- use saturating_sub to prevent overflow panics in runtime metrics ([#114](https://github.com/tokio-rs/tokio-metrics/pull/114))
# 0.4.7 (January 15, 2025)
- docs: fix typos in `TaskMetrics` ([#103])
- rt: integrate derived metrics with metrics.rs ([#104])
- fix: indentation in task.rs ([#105])
- docs: update readme and crate documentation ([#107])
- rt: make `live_tasks_count` (`num_alive_tasks()`) stable ([#108])
- docs: move `live_tasks_count` to stable metrics in README ([#109])
[#103]: https://github.com/tokio-rs/tokio-metrics/pull/103
[#104]: https://github.com/tokio-rs/tokio-metrics/pull/104
[#105]: https://github.com/tokio-rs/tokio-metrics/pull/105
[#107]: https://github.com/tokio-rs/tokio-metrics/pull/107
[#108]: https://github.com/tokio-rs/tokio-metrics/pull/108
[#109]: https://github.com/tokio-rs/tokio-metrics/pull/109
# 0.4.6 (December 3rd, 2025)
- add metrics_rs integration to task metrics ([#100])
- readme: add max_idle_duration to readme ([#98])
- readme: keep default features ([#29])
[#29]: https://github.com/tokio-rs/tokio-metrics/pull/29
[#98]: https://github.com/tokio-rs/tokio-metrics/pull/98
[#100]: https://github.com/tokio-rs/tokio-metrics/pull/100
# 0.4.5 (September 4th, 2025)
- Add max_idle_duration ([#95])
[#95]: https://github.com/tokio-rs/tokio-metrics/pull/95
# 0.4.4 (August 5th, 2025)
### Added
- fix: Add TaskIntervals struct ([#91])
- chore: update dev-dependencies ([#92])
[#91]: https://github.com/tokio-rs/tokio-metrics/pull/91
[#92]: https://github.com/tokio-rs/tokio-metrics/pull/92
# 0.4.3 (July 3rd, 2025)
### Added
- rt: partially stabilize `RuntimeMonitor` and related metrics ([#87])
[#87]: https://github.com/tokio-rs/tokio-metrics/pull/87
# 0.4.2 (April 30th, 2025)
### Fixed
- docs: specify metrics-rs-integration feature dependency for relevant APIs ([#78])
- docs: fix links ([#79])
[#78]: https://github.com/tokio-rs/tokio-metrics/pull/78
[#79]: https://github.com/tokio-rs/tokio-metrics/pull/79
# 0.4.1 (April 20th, 2025)
### Added
- rt: add support for `blocking_queue_depth`, `live_task_count`, `blocking_threads_count`,
`idle_blocking_threads_count` ([#49], [#74])
- rt: add integration with metrics.rs ([#68])
[#49]: https://github.com/tokio-rs/tokio-metrics/pull/49
[#68]: https://github.com/tokio-rs/tokio-metrics/pull/68
[#74]: https://github.com/tokio-rs/tokio-metrics/pull/74
# 0.4.0 (November 26th, 2024)
The core Tokio crate has renamed some of the metrics and this breaking release
uses the new names. The minimum required Tokio is bumped to 1.41, and the MSRV
is bumped to 1.70 to match.
- runtime: use new names for poll time histogram ([#66])
- runtime: rename injection queue to global queue ([#66])
- doc: various doc fixes ([#66], [#65])
[#65]: https://github.com/tokio-rs/tokio-metrics/pull/65
[#66]: https://github.com/tokio-rs/tokio-metrics/pull/66
# 0.3.1 (October 12th, 2023)
### Fixed
- task: fix doc error in idle definition ([#54])
- chore: support tokio 1.33 without stats feature ([#55])
[#54]: https://github.com/tokio-rs/tokio-metrics/pull/54
[#55]: https://github.com/tokio-rs/tokio-metrics/pull/55
# 0.3.0 (August 14th, 2023)
### Added
- rt: add support for mean task poll time ([#50])
- rt: add support for task poll count histogram ([#52])
[#50]: https://github.com/tokio-rs/tokio-metrics/pull/50
[#52]: https://github.com/tokio-rs/tokio-metrics/pull/52
# 0.2.2 (April 13th, 2023)
### Added
- task: add TaskMonitorBuilder ([#46])
### Fixed
- task: fix default long delay threshold ([#46])
[#46]: https://github.com/tokio-rs/tokio-metrics/pull/46
# 0.2.1 (April 5th, 2023)
### Added
- task: add short and long delay metrics ([#44])
[#44]: https://github.com/tokio-rs/tokio-metrics/pull/44
# 0.2.0 (March 6th, 2023)
### Added
- Add `Debug` implementations. ([#28])
- rt: add concrete `RuntimeIntervals` iterator type ([#26])
- rt: add budget_forced_yield_count metric ([#39])
- rt: add io_driver_ready_count metric ([#40])
- rt: add steal_operations metric ([#37])
- task: also instrument streams ([#31])
### Documented
- doc: fix count in `TaskMonitor` docstring ([#24])
- doc: the description of steal_count ([#35])
[#24]: https://github.com/tokio-rs/tokio-metrics/pull/24
[#26]: https://github.com/tokio-rs/tokio-metrics/pull/26
[#28]: https://github.com/tokio-rs/tokio-metrics/pull/28
[#31]: https://github.com/tokio-rs/tokio-metrics/pull/31
[#35]: https://github.com/tokio-rs/tokio-metrics/pull/35
[#37]: https://github.com/tokio-rs/tokio-metrics/pull/37
[#39]: https://github.com/tokio-rs/tokio-metrics/pull/39
[#40]: https://github.com/tokio-rs/tokio-metrics/pull/40
================================================
FILE: CONTRIBUTING.md
================================================
## Doing releases
There is a `.github/workflows/release.yml` workflow that will publish a crates.io release and create a GitHub release every time the version in `Cargo.toml` changes on `main`. The workflow is authorized to publish via [trusted publishing](https://rust-lang.github.io/rfcs/3691-trusted-publishing-cratesio.html), no further authorization is needed.
To prepare a release, use [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/), and in a clean git repo run:
```
cargo install release-plz --locked
git checkout main && release-plz update
# review the changes to Cargo.toml and CHANGELOG.md
git commit -a
```
Then open a PR for the release and get it approved. Even if you have bypass permissions on branch protection, always use a PR so CI runs before the release publishes. Once merged, the release workflow will automatically publish to crates.io and create a GitHub release.
## How to test docs.rs changes
Set up your local docs.rs environment as per official README:
https://github.com/rust-lang/docs.rs?tab=readme-ov-file#getting-started
Make sure you have:
- Your .env contents exported to your local ENVs
- docker-compose stack for db and s3 running
- The web server running via local (or pure docker-compose approach)
- If on a remote machine, port 3000 (or whatever your webserver is listening on) forwarded
Invoke the cargo build command against your local path to your `tokio-metrics` workspace:
```
# you could also invoke the built `cratesfyi` binary from outside of your cargo workspace,
# though you'll still need the right ENVs exported
cargo run -- build crate --local ../tokio-metrics
```
Then, you can view the generated documentation for `tokio-metrics` in your browser. If you figure
out how to get CSS working, update this guide :)
================================================
FILE: Cargo.toml
================================================
[package]
name = "tokio-metrics"
version = "0.5.0"
edition = "2021"
rust-version = "1.70.0"
authors = ["Tokio Contributors <team@tokio.rs>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/tokio-rs/tokio-metrics"
homepage = "https://tokio.rs"
description = """
Runtime and task level metrics for Tokio applications.
"""
categories = ["asynchronous", "network-programming"]
keywords = ["async", "futures", "metrics", "debugging"]

[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(tokio_unstable)'] }

[features]
default = ["rt"]
metrics-rs-integration = ["dep:metrics"]
metrique-integration = ["dep:metrique"]
rt = ["tokio"]

[dependencies]
tokio-stream = "0.1.11"
futures-util = "0.3.19"
pin-project-lite = "0.2.7"
tokio = { version = "1.45.1", features = ["rt", "time", "net"], optional = true }
metrics = { version = "0.24", optional = true }
metrique = { version = "0.1.23", default-features = false, optional = true }

[dev-dependencies]
metrique = { version = "0.1.23", features = ["test-util"] }
axum = "0.8"
criterion = "0.7"
futures = "0.3.21"
num_cpus = "1.13.1"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = "1.0.79"
tokio = { version = "1.45.1", features = ["full", "rt", "time", "macros", "test-util"] }
metrics-util = { version = "0.20", features = ["debugging"] }
metrics = { version = "0.24" }
metrics-exporter-prometheus = { version = "0.17", features = ["uds-listener"] }

[[example]]
name = "runtime"
required-features = ["rt"]

[[bench]]
name = "poll_overhead"
harness = false

[package.metadata.docs.rs]
all-features = true
# enable unstable features in the documentation
rustdoc-args = ["--cfg", "docsrs", "--cfg", "tokio_unstable"]
# it's necessary to _also_ pass `--cfg tokio_unstable` to rustc, or else
# dependencies will not be enabled, and the docs build will fail.
rustc-args = ["--cfg", "tokio_unstable"]
================================================
FILE: LICENSE
================================================
Copyright (c) 2022 Tokio Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
================================================
FILE: README.md
================================================
# Tokio Metrics
[![Crates.io][crates-badge]][crates-url]
[![Documentation][docs-badge]][docs-url]
[![MIT licensed][mit-badge]][mit-url]
[![Build Status][actions-badge]][actions-url]
[![Discord chat][discord-badge]][discord-url]
[crates-badge]: https://img.shields.io/crates/v/tokio-metrics.svg
[crates-url]: https://crates.io/crates/tokio-metrics
[docs-badge]: https://docs.rs/tokio-metrics/badge.svg
[docs-url]: https://docs.rs/tokio-metrics
[mit-badge]: https://img.shields.io/badge/license-MIT-blue.svg
[mit-url]: https://github.com/tokio-rs/tokio-metrics/blob/master/LICENSE
[actions-badge]: https://github.com/tokio-rs/tokio-metrics/workflows/CI/badge.svg
[actions-url]: https://github.com/tokio-rs/tokio-metrics/actions?query=workflow%3ACI+branch%3Amain
[discord-badge]: https://img.shields.io/discord/500028886025895936.svg?logo=discord&style=flat-square
[discord-url]: https://discord.gg/tokio
Provides utilities for collecting metrics from a Tokio application, including
runtime and per-task metrics.
```toml
[dependencies]
tokio-metrics = "0.5"
```
## Getting Started With Task Metrics
Use `TaskMonitor` to instrument tasks before spawning them, and to observe
metrics for those tasks. All tasks instrumented with a given `TaskMonitor`
aggregate their metrics together. To split out metrics for different tasks, use
separate `TaskMonitor` instances.
```rust
// construct a TaskMonitor
let monitor = tokio_metrics::TaskMonitor::new();

// print task metrics every 500ms
{
    let frequency = std::time::Duration::from_millis(500);
    let monitor = monitor.clone();
    tokio::spawn(async move {
        for metrics in monitor.intervals() {
            println!("{:?}", metrics);
            tokio::time::sleep(frequency).await;
        }
    });
}

// instrument some tasks and spawn them
loop {
    tokio::spawn(monitor.instrument(do_work()));
}
```
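
Besides streaming intervals, a one-off sample can be taken with `TaskMonitor::cumulative`, which reports everything recorded since the monitor was constructed. A minimal sketch (`do_work()` stands in for your own task, as above):

```rust
// take a single cumulative sample instead of streaming intervals
let monitor = tokio_metrics::TaskMonitor::new();
tokio::spawn(monitor.instrument(do_work()));

// ... later, inspect everything recorded since the monitor was built
let metrics = monitor.cumulative();
println!("instrumented: {}", metrics.instrumented_count);
println!("total polls:  {}", metrics.total_poll_count);
```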
### Task Metrics
#### Base Metrics
- **[`instrumented_count`]**
The number of tasks instrumented.
- **[`dropped_count`]**
The number of tasks dropped.
- **[`first_poll_count`]**
The number of tasks polled for the first time.
- **[`total_first_poll_delay`]**
The total duration elapsed between the instant tasks are instrumented, and the instant they are first polled.
- **[`total_idled_count`]**
The total number of times that tasks idled, waiting to be awoken.
- **[`total_idle_duration`]**
The total duration that tasks idled.
- **[`max_idle_duration`]**
The maximum idle duration that a task took.
- **[`total_scheduled_count`]**
The total number of times that tasks were awoken (and then, presumably, scheduled for execution).
- **[`total_scheduled_duration`]**
The total duration that tasks spent waiting to be polled after awakening.
- **[`total_poll_count`]**
The total number of times that tasks were polled.
- **[`total_poll_duration`]**
The total duration elapsed during polls.
- **[`total_fast_poll_count`]**
The total number of times that polling tasks completed swiftly.
- **[`total_fast_poll_duration`]**
The total duration of fast polls.
- **[`total_slow_poll_count`]**
The total number of times that polling tasks completed slowly.
- **[`total_slow_poll_duration`]**
The total duration of slow polls.
- **[`total_short_delay_count`]**
The total count of short scheduling delays.
- **[`total_short_delay_duration`]**
The total duration of short scheduling delays.
- **[`total_long_delay_count`]**
The total count of long scheduling delays.
- **[`total_long_delay_duration`]**
The total duration of long scheduling delays.
#### Derived Metrics
- **[`mean_first_poll_delay`]**
The mean duration elapsed between the instant tasks are instrumented, and the instant they are first polled.
- **[`mean_idle_duration`]**
The mean duration of idles.
- **[`mean_scheduled_duration`]**
The mean duration that tasks spent waiting to be executed after awakening.
- **[`mean_poll_duration`]**
The mean duration of polls.
- **[`slow_poll_ratio`]**
The ratio between the number of polls categorized as slow and the total number of polls (fast and slow).
- **[`long_delay_ratio`]**
The ratio between the number of long scheduling delays and the number of total schedules.
- **[`mean_fast_poll_duration`]**
The mean duration of fast polls.
- **[`mean_slow_poll_duration`]**
The mean duration of slow polls.
- **[`mean_short_delay_duration`]**
The mean duration of short schedules.
- **[`mean_long_delay_duration`]**
The mean duration of long schedules.
[`instrumented_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.instrumented_count
[`dropped_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.dropped_count
[`first_poll_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.first_poll_count
[`total_first_poll_delay`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_first_poll_delay
[`total_idled_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_idled_count
[`total_idle_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_idle_duration
[`max_idle_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.max_idle_duration
[`total_scheduled_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_scheduled_count
[`total_scheduled_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_scheduled_duration
[`total_poll_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_poll_count
[`total_poll_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_poll_duration
[`total_fast_poll_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_fast_poll_count
[`total_fast_poll_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_fast_poll_duration
[`total_slow_poll_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_slow_poll_count
[`total_slow_poll_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_slow_poll_duration
[`total_short_delay_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_short_delay_count
[`total_short_delay_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_short_delay_duration
[`total_long_delay_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_long_delay_count
[`total_long_delay_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#structfield.total_long_delay_duration
[`mean_first_poll_delay`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_first_poll_delay
[`mean_idle_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_idle_duration
[`mean_scheduled_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_scheduled_duration
[`mean_poll_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_poll_duration
[`slow_poll_ratio`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.slow_poll_ratio
[`long_delay_ratio`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.long_delay_ratio
[`mean_fast_poll_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_fast_poll_duration
[`mean_slow_poll_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_slow_poll_duration
[`mean_short_delay_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_short_delay_duration
[`mean_long_delay_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetrics.html#method.mean_long_delay_duration
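
The derived metrics above are exposed as methods on a `TaskMetrics` sample rather than as fields; a sketch, assuming a `monitor` as in the earlier example:

```rust
// `metrics` can come from `monitor.cumulative()` or from one item
// of `monitor.intervals()`
let metrics = monitor.cumulative();

// derived metrics are computed on demand from the base counters
println!("mean poll duration:      {:?}", metrics.mean_poll_duration());
println!("mean scheduled duration: {:?}", metrics.mean_scheduled_duration());
println!("slow poll ratio:         {:.3}", metrics.slow_poll_ratio());
```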
## Getting Started With Runtime Metrics
Not all runtime metrics are stable. Using unstable metrics requires the
`tokio_unstable` cfg in addition to the `rt` crate feature. To enable it,
pass `--cfg tokio_unstable` to `rustc` when compiling. You can do this by
setting the `RUSTFLAGS` environment variable before compiling your application; e.g.:
```sh
RUSTFLAGS="--cfg tokio_unstable" cargo build
```
Or, by creating the file `.cargo/config.toml` in the root directory of your crate.
If you're using a workspace, put this file in the root directory of your workspace instead.
```toml
[build]
rustflags = ["--cfg", "tokio_unstable"]
rustdocflags = ["--cfg", "tokio_unstable"]
```
Note that placing a `.cargo/config.toml` below the workspace or crate root may cause
tools such as Rust-Analyzer or VS Code to ignore it: they invoke cargo from the
workspace or crate root, and cargo only looks for the `.cargo` directory in the
current and parent directories, ignoring configurations in child directories.
More information about where cargo looks for configuration files can be found
[here](https://doc.rust-lang.org/cargo/reference/config.html).
Missing this configuration file during compilation will cause the unstable tokio-metrics
functionality to not work, and alternating between building with and without it will
trigger full rebuilds of your project.
### Collecting Runtime Metrics directly
The `rt` feature of `tokio-metrics` is on by default; simply check that you do
not set `default-features = false` when declaring it as a dependency; e.g.:
```toml
[dependencies]
tokio-metrics = "0.5"
```
From within a Tokio runtime, use `RuntimeMonitor` to monitor key metrics of
that runtime.
```rust
let handle = tokio::runtime::Handle::current();
let runtime_monitor = tokio_metrics::RuntimeMonitor::new(&handle);

// print runtime metrics every 500ms
let frequency = std::time::Duration::from_millis(500);
tokio::spawn(async move {
    for metrics in runtime_monitor.intervals() {
        println!("Metrics = {:?}", metrics);
        tokio::time::sleep(frequency).await;
    }
});

// run some tasks
tokio::spawn(do_work());
tokio::spawn(do_work());
tokio::spawn(do_work());
```
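
For a one-off reading rather than a periodic loop, the interval iterator can also be advanced manually; a sketch:

```rust
let handle = tokio::runtime::Handle::current();
let runtime_monitor = tokio_metrics::RuntimeMonitor::new(&handle);

// each `next()` yields the metrics accumulated since the previous
// call (the first interval spans back to monitor creation)
let mut intervals = runtime_monitor.intervals();
if let Some(metrics) = intervals.next() {
    println!("workers: {}", metrics.workers_count);
    println!("busy:    {:?}", metrics.total_busy_duration);
}
```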
### Runtime Metrics
#### Stable Base Metrics
- **[`workers_count`]**
The number of worker threads used by the runtime.
- **[`total_park_count`]**
The number of times worker threads parked.
- **[`max_park_count`]**
The maximum number of times any worker thread parked.
- **[`min_park_count`]**
The minimum number of times any worker thread parked.
- **[`total_busy_duration`]**
The amount of time worker threads were busy.
- **[`max_busy_duration`]**
The maximum amount of time a worker thread was busy.
- **[`min_busy_duration`]**
The minimum amount of time a worker thread was busy.
- **[`global_queue_depth`]**
The number of tasks currently scheduled in the runtime's global queue.
- **[`elapsed`]**
Total amount of time elapsed since observing runtime metrics.
- **[`live_tasks_count`]**
The current number of alive tasks in the runtime.
#### Unstable Base Metrics
- **[`mean_poll_duration`](https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.mean_poll_duration)**
The average duration of a single invocation of poll on a task.
- **[`mean_poll_duration_worker_min`]**
The average duration of a single invocation of poll on a task on the worker with the lowest value.
- **[`mean_poll_duration_worker_max`]**
The average duration of a single invocation of poll on a task on the worker with the highest value.
- **[`poll_time_histogram`]**
A histogram of task polls since the previous probe grouped by poll times.
- **[`total_noop_count`]**
The number of times worker threads unparked but performed no work before parking again.
- **[`max_noop_count`]**
The maximum number of times any worker thread unparked but performed no work before parking again.
- **[`min_noop_count`]**
The minimum number of times any worker thread unparked but performed no work before parking again.
- **[`total_steal_count`]**
The number of tasks worker threads stole from another worker thread.
- **[`max_steal_count`]**
The maximum number of tasks any worker thread stole from another worker thread.
- **[`min_steal_count`]**
The minimum number of tasks any worker thread stole from another worker thread.
- **[`total_steal_operations`]**
The number of times worker threads stole tasks from another worker thread.
- **[`max_steal_operations`]**
The maximum number of times any worker thread stole tasks from another worker thread.
- **[`min_steal_operations`]**
The minimum number of times any worker thread stole tasks from another worker thread.
- **[`num_remote_schedules`]**
The number of tasks scheduled from outside of the runtime.
- **[`total_local_schedule_count`]**
The number of tasks scheduled from worker threads.
- **[`max_local_schedule_count`]**
The maximum number of tasks scheduled from any one worker thread.
- **[`min_local_schedule_count`]**
The minimum number of tasks scheduled from any one worker thread.
- **[`total_overflow_count`]**
The number of times worker threads saturated their local queues.
- **[`max_overflow_count`]**
The maximum number of times any one worker saturated its local queue.
- **[`min_overflow_count`]**
The minimum number of times any one worker saturated its local queue.
- **[`total_polls_count`]**
The number of tasks that have been polled across all worker threads.
- **[`max_polls_count`]**
The maximum number of tasks that have been polled in any worker thread.
- **[`min_polls_count`]**
The minimum number of tasks that have been polled in any worker thread.
- **[`total_local_queue_depth`]**
The total number of tasks currently scheduled in workers' local queues.
- **[`max_local_queue_depth`]**
The maximum number of tasks currently scheduled in any worker's local queue.
- **[`min_local_queue_depth`]**
The minimum number of tasks currently scheduled in any worker's local queue.
- **[`blocking_queue_depth`]**
The number of tasks currently waiting to be executed in the blocking threadpool.
- **[`blocking_threads_count`]**
The number of additional threads spawned by the runtime.
- **[`idle_blocking_threads_count`]**
The number of idle threads spawned by the runtime for `spawn_blocking` calls.
- **[`budget_forced_yield_count`]**
The number of times that a task was forced to yield because it exhausted its budget.
- **[`io_driver_ready_count`]**
The number of ready events received from the I/O driver.
#### Stable Derived Metrics
- **[`busy_ratio`]**
The ratio between the amount of time worker threads were busy and the total time elapsed since observing runtime metrics.
#### Unstable Derived Metrics
- **[`mean_polls_per_park`]**
The ratio of the number of tasks that have been polled and the number of times worker threads unparked but performed no work before parking again.
[`workers_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.workers_count
[`total_park_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_park_count
[`max_park_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_park_count
[`min_park_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_park_count
[`total_busy_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_busy_duration
[`max_busy_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_busy_duration
[`min_busy_duration`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_busy_duration
[`global_queue_depth`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.global_queue_depth
[`elapsed`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.elapsed
[`mean_poll_duration_worker_min`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.mean_poll_duration_worker_min
[`mean_poll_duration_worker_max`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.mean_poll_duration_worker_max
[`poll_time_histogram`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.poll_time_histogram
[`total_noop_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_noop_count
[`max_noop_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_noop_count
[`min_noop_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_noop_count
[`total_steal_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_steal_count
[`max_steal_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_steal_count
[`min_steal_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_steal_count
[`total_steal_operations`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_steal_operations
[`max_steal_operations`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_steal_operations
[`min_steal_operations`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_steal_operations
[`num_remote_schedules`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.num_remote_schedules
[`total_local_schedule_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_local_schedule_count
[`max_local_schedule_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_local_schedule_count
[`min_local_schedule_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_local_schedule_count
[`total_overflow_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_overflow_count
[`max_overflow_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_overflow_count
[`min_overflow_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_overflow_count
[`total_polls_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_polls_count
[`max_polls_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_polls_count
[`min_polls_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_polls_count
[`injection_queue_depth`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.injection_queue_depth
[`total_local_queue_depth`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.total_local_queue_depth
[`max_local_queue_depth`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.max_local_queue_depth
[`min_local_queue_depth`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.min_local_queue_depth
[`blocking_queue_depth`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.blocking_queue_depth
[`live_tasks_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.live_tasks_count
[`blocking_threads_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.blocking_threads_count
[`idle_blocking_threads_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.idle_blocking_threads_count
[`budget_forced_yield_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.budget_forced_yield_count
[`io_driver_ready_count`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#structfield.io_driver_ready_count
[`busy_ratio`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#method.busy_ratio
[`mean_polls_per_park`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetrics.html#method.mean_polls_per_park
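To build intuition for the derived metrics above, here is a plain-`std` sketch of the arithmetic behind a busy ratio. The sampled values are hypothetical and this is not the crate's implementation; the real values come from a `RuntimeMetrics` interval.

```rust
use std::time::Duration;

fn main() {
    // Hypothetical values sampled over one metrics interval.
    let total_busy_duration = Duration::from_millis(750); // summed across all workers
    let elapsed = Duration::from_millis(500); // wall-clock length of the interval

    // The busy ratio compares worker busy time against elapsed time; because
    // busy time is summed across workers, the ratio can exceed 1.0.
    let busy_ratio = total_busy_duration.as_secs_f64() / elapsed.as_secs_f64();
    assert!((busy_ratio - 1.5).abs() < 1e-9);
    println!("busy_ratio = {busy_ratio}");
}
```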
## Collecting Metrics via metrics.rs
If you also enable the `metrics-rs-integration` feature, you can use [metrics.rs] exporters to export metrics
outside of your process. `metrics.rs` supports a variety of exporters, including [Prometheus].
By default, each metric is exported under its field name prefixed with `tokio_`: for example,
`tokio_workers_count` for the [`workers_count`] metric and `tokio_instrumented_count` for the
[`instrumented_count`] metric. The names can be customized with the
[`RuntimeMetricsReporterBuilder::with_metrics_transformer`] and [`TaskMetricsReporterBuilder::new`] functions.
If you want to use [Prometheus], you could have this `Cargo.toml`:
[Prometheus]: https://prometheus.io
[`RuntimeMetricsReporterBuilder::with_metrics_transformer`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.RuntimeMetricsReporterBuilder.html#method.with_metrics_transformer
[`TaskMetricsReporterBuilder::new`]: https://docs.rs/tokio-metrics/latest/tokio_metrics/struct.TaskMetricsReporterBuilder.html#method.new
```toml
[dependencies]
tokio-metrics = { version = "0.5", features = ["metrics-rs-integration"] }
metrics = "0.24"
# You don't actually need to use the Prometheus exporter with uds-listener enabled,
# it's just here as an example.
metrics-exporter-prometheus = { version = "0.16", features = ["uds-listener"] }
```
Then, you can launch a metrics exporter:
```rust
use metrics::Key;

// This makes metrics visible via a local Unix socket named prometheus.sock.
// You probably want to configure this differently in production.
//
// If you use this exporter, you can inspect the metrics for debugging
// by running `curl --unix-socket prometheus.sock localhost`.
metrics_exporter_prometheus::PrometheusBuilder::new()
    .with_http_uds_listener("prometheus.sock")
    .install()
    .unwrap();

// This launches the runtime reporter that monitors the Tokio runtime and exports its metrics.
tokio::task::spawn(
    tokio_metrics::RuntimeMetricsReporterBuilder::default().describe_and_run(),
);

// This creates a task monitor.
let task_monitor = tokio_metrics::TaskMonitor::new();

// This launches the task reporter that exports the task metrics.
tokio::task::spawn(
    tokio_metrics::TaskMetricsReporterBuilder::new(|name| {
        let name = name.replacen("tokio_", "my_task_", 1);
        Key::from_parts(name, &[("application", "my_app")])
    })
    .describe_and_run(task_monitor.clone()),
);

// Run some tasks; only the middle task is monitored by the task monitor.
tokio::spawn(do_work());
tokio::spawn(task_monitor.instrument(do_work()));
tokio::spawn(do_work());
```
Of course, it will work with any other [metrics.rs] exporter.
[metrics.rs]: https://docs.rs/metrics
## Relation to Tokio Console
Currently, Tokio Console is primarily intended for **local** debugging. Tokio
metrics is intended to enable reporting of metrics in production to your
preferred tools. Longer term, it is likely that `tokio-metrics` will merge with
Tokio Console.
## License
This project is licensed under the [MIT license].
[MIT license]: LICENSE
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in tokio-metrics by you, shall be licensed as MIT, without any
additional terms or conditions.
================================================
FILE: benches/poll_overhead.rs
================================================
use criterion::{criterion_group, criterion_main, Criterion};
use futures::task;
use std::future::Future;
use std::hint::black_box;
use std::iter;
use std::pin::Pin;
use std::sync::{Arc, Barrier};
use std::task::{Context, Poll};
use std::thread;
use std::time::{Duration, Instant};
use tokio_metrics::TaskMonitor;

pub struct TestFuture;

impl Future for TestFuture {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        cx.waker().wake_by_ref();
        Poll::Pending
    }
}

fn bench_poll(c: &mut Criterion) {
    c.bench_function("poll", move |b| {
        b.iter_custom(|iters| {
            let monitor = TaskMonitor::new();
            let num_cpus = num_cpus::get();
            let start = Arc::new(Barrier::new(num_cpus + 1));
            let stop = Arc::new(Barrier::new(num_cpus + 1));
            let mut workers: Vec<_> = iter::repeat((monitor, start.clone(), stop.clone()))
                .take(num_cpus)
                .map(|(monitor, start, stop)| {
                    thread::spawn(move || {
                        let waker = task::noop_waker();
                        let mut cx = Context::from_waker(&waker);
                        let mut instrumented = Box::pin(monitor.instrument(TestFuture));
                        start.wait();
                        let start_time = Instant::now();
                        for _i in 0..iters {
                            let _ = black_box(instrumented.as_mut().poll(&mut cx));
                        }
                        let stop_time = Instant::now();
                        stop.wait();
                        stop_time - start_time
                    })
                })
                .collect();
            start.wait();
            stop.wait();
            let elapsed: Duration = workers.drain(..).map(|w| w.join().unwrap()).sum();
            elapsed / (num_cpus as u32)
        })
    });
}

criterion_group!(benches, bench_poll);
criterion_main!(benches);
================================================
FILE: examples/axum.rs
================================================
#[tokio::main]
async fn main() {
    // construct a TaskMonitor for each endpoint
    let monitor_root = tokio_metrics::TaskMonitor::new();
    let monitor_create_user = CreateUserMonitors {
        // monitor for the entire endpoint
        route: tokio_metrics::TaskMonitor::new(),
        // monitor for database insertion subtask
        insert: tokio_metrics::TaskMonitor::new(),
    };

    // build our application with two instrumented endpoints
    let app = axum::Router::new()
        // `GET /` goes to `root`
        .route(
            "/",
            axum::routing::get({
                let monitor = monitor_root.clone();
                move || monitor.instrument(async { "Hello, World!" })
            }),
        )
        // `POST /users` goes to `create_user`
        .route(
            "/users",
            axum::routing::post({
                let monitors = monitor_create_user.clone();
                let route = monitors.route.clone();
                move |payload| route.instrument(create_user(payload, monitors))
            }),
        );

    // print task metrics for each endpoint every 1s
    let metrics_frequency = std::time::Duration::from_secs(1);
    tokio::spawn(async move {
        let root_intervals = monitor_root.intervals();
        let create_user_route_intervals = monitor_create_user.route.intervals();
        let create_user_insert_intervals = monitor_create_user.insert.intervals();
        let create_user_intervals = create_user_route_intervals.zip(create_user_insert_intervals);
        let intervals = root_intervals.zip(create_user_intervals);
        for (root_route, (create_user_route, create_user_insert)) in intervals {
            println!("root_route = {root_route:#?}");
            println!("create_user_route = {create_user_route:#?}");
            println!("create_user_insert = {create_user_insert:#?}");
            tokio::time::sleep(metrics_frequency).await;
        }
    });

    // run the server
    let addr = std::net::SocketAddr::from(([127, 0, 0, 1], 3000));
    let listener = tokio::net::TcpListener::bind(&addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn create_user(
    axum::Json(payload): axum::Json<CreateUser>,
    monitors: CreateUserMonitors,
) -> impl axum::response::IntoResponse {
    let user = User {
        id: 1337,
        username: payload.username,
    };
    // instrument inserting the user into the db:
    monitors.insert.instrument(insert_user(user.clone())).await;
    (axum::http::StatusCode::CREATED, axum::Json(user))
}

#[derive(Clone)]
struct CreateUserMonitors {
    // monitor for the entire endpoint
    route: tokio_metrics::TaskMonitor,
    // monitor for database insertion subtask
    insert: tokio_metrics::TaskMonitor,
}

#[derive(serde::Deserialize)]
struct CreateUser {
    username: String,
}

#[derive(Clone, serde::Serialize)]
struct User {
    id: u64,
    username: String,
}

// insert the user into the database
async fn insert_user(_: User) {
    /* talk to database */
    tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
================================================
FILE: examples/runtime.rs
================================================
use std::time::Duration;
use tokio_metrics::RuntimeMonitor;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let handle = tokio::runtime::Handle::current();

    // print runtime metrics every 500ms
    {
        let runtime_monitor = RuntimeMonitor::new(&handle);
        tokio::spawn(async move {
            for interval in runtime_monitor.intervals() {
                // pretty-print the metric interval
                println!("{interval:?}");
                // wait 500ms
                tokio::time::sleep(Duration::from_millis(500)).await;
            }
        });
    }

    // await some tasks
    tokio::join![do_work(), do_work(), do_work(),];

    Ok(())
}

async fn do_work() {
    for _ in 0..25 {
        tokio::task::yield_now().await;
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
}
================================================
FILE: examples/stream.rs
================================================
use std::time::Duration;

use futures::{stream::FuturesUnordered, StreamExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let metrics_monitor = tokio_metrics::TaskMonitor::new();

    // print task metrics every 500ms
    {
        let metrics_monitor = metrics_monitor.clone();
        tokio::spawn(async move {
            for deltas in metrics_monitor.intervals() {
                // pretty-print the metric deltas
                println!("{deltas:?}");
                // wait 500ms
                tokio::time::sleep(Duration::from_millis(500)).await;
            }
        })
    };

    // instrument a stream and await it
    let mut stream =
        metrics_monitor.instrument((0..3).map(|_| do_work()).collect::<FuturesUnordered<_>>());
    while stream.next().await.is_some() {}
    println!("{:?}", metrics_monitor.cumulative());

    Ok(())
}

async fn do_work() {
    for _ in 0..25 {
        tokio::task::yield_now().await;
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
}
================================================
FILE: examples/task.rs
================================================
use std::time::Duration;
use tokio_metrics::{TaskMonitor, TaskMonitorCore};

/// It's usually the right choice to use a static [`tokio_metrics::TaskMonitorCore`].
///
/// If you need to dynamically generate task monitors at runtime,
/// [`tokio_metrics::TaskMonitor`] will be more ergonomic.
///
/// See the [`tokio_metrics::TaskMonitorCore`] documentation for more discussion.
static STATIC_MONITOR: TaskMonitorCore = TaskMonitorCore::new();

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // spawn a task that prints out from the static monitor on a loop
    tokio::spawn(async {
        for deltas in TaskMonitorCore::intervals(&STATIC_MONITOR) {
            // pretty print
            println!("{deltas:?}");
            tokio::time::sleep(Duration::from_millis(500)).await;
        }
    });

    tokio::join![
        STATIC_MONITOR.instrument(do_work()),
        STATIC_MONITOR.instrument(do_work()),
        STATIC_MONITOR.instrument(do_work()),
    ];

    // imagine we wanted to generate a task monitor to keep track of all tasks
    // and child tasks spawned by a given request
    for i in 0..5 {
        // roughly equivalent to Arc::new(TaskMonitorCore::new())
        let metrics_monitor = TaskMonitor::new();
        // instrument some tasks and await them
        tokio::join![
            // roughly equivalent to TaskMonitorCore::instrument_with(do_work(), metrics_monitor.clone())
            metrics_monitor.instrument(do_work()),
            metrics_monitor.instrument(do_work()),
            metrics_monitor.instrument(do_work())
        ];
        let cumulative = metrics_monitor.cumulative();
        println!("{i}: {cumulative:?}");
    }

    Ok(())
}

async fn do_work() {
    for _ in 0..25 {
        tokio::task::yield_now().await;
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
}
================================================
FILE: release-plz.toml
================================================
[workspace]
git_release_enable = false
changelog_update = false
[[package]]
name = "tokio-metrics"
changelog_path = "./CHANGELOG.md"
changelog_update = true
git_release_enable = true
================================================
FILE: src/derived_metrics.rs
================================================
macro_rules! derived_metrics {
    (
        [$metrics_name:ty] {
            stable {
                $(
                    $(#[$($attributes:tt)*])*
                    $vis:vis fn $name:ident($($args:tt)*) -> $ty:ty $body:block
                )*
            }
            unstable {
                $(
                    $(#[$($unstable_attributes:tt)*])*
                    $unstable_vis:vis fn $unstable_name:ident($($unstable_args:tt)*) -> $unstable_ty:ty $unstable_body:block
                )*
            }
        }
    ) => {
        impl $metrics_name {
            $(
                $(#[$($attributes)*])*
                $vis fn $name($($args)*) -> $ty $body
            )*
            $(
                $(#[$($unstable_attributes)*])*
                #[cfg(tokio_unstable)]
                $unstable_vis fn $unstable_name($($unstable_args)*) -> $unstable_ty $unstable_body
            )*

            #[cfg(all(test, feature = "metrics-rs-integration"))]
            const DERIVED_METRICS: &[&str] = &[$(stringify!($name),)*];

            #[cfg(all(test, tokio_unstable, feature = "metrics-rs-integration"))]
            const UNSTABLE_DERIVED_METRICS: &[&str] = &[$(stringify!($unstable_name),)*];
        }
    };
}

pub(crate) use derived_metrics;
================================================
FILE: src/lib.rs
================================================
#![warn(
clippy::arithmetic_side_effects,
missing_debug_implementations,
missing_docs,
rust_2018_idioms,
unreachable_pub
)]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, allow(unused_attributes))]
//! Monitor key metrics of tokio tasks and runtimes.
//!
//! ### Monitoring task metrics
//! [Monitor][TaskMonitor] key [metrics][TaskMetrics] of tokio tasks.
//!
//! In the below example, a [`TaskMonitor`] is [constructed][TaskMonitor::new] and used to
//! [instrument][TaskMonitor::instrument] three worker tasks; meanwhile, a fourth task
//! prints [metrics][TaskMetrics] in 500ms [intervals][TaskMonitor::intervals]:
//! ```
//! use std::time::Duration;
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
//! // construct a metrics taskmonitor
//! let metrics_monitor = tokio_metrics::TaskMonitor::new();
//!
//! // print task metrics every 500ms
//! {
//! let metrics_monitor = metrics_monitor.clone();
//! tokio::spawn(async move {
//! for interval in metrics_monitor.intervals() {
//! // pretty-print the metric interval
//! println!("{:?}", interval);
//! // wait 500ms
//! tokio::time::sleep(Duration::from_millis(500)).await;
//! }
//! });
//! }
//!
//! // instrument some tasks and await them
//! // note that the same taskmonitor can be used for multiple tasks
//! tokio::join![
//! metrics_monitor.instrument(do_work()),
//! metrics_monitor.instrument(do_work()),
//! metrics_monitor.instrument(do_work())
//! ];
//!
//! Ok(())
//! }
//!
//! async fn do_work() {
//! for _ in 0..25 {
//! tokio::task::yield_now().await;
//! tokio::time::sleep(Duration::from_millis(100)).await;
//! }
//! }
//! ```
#![cfg_attr(
feature = "rt",
doc = r##"
### Monitoring runtime metrics
[Monitor][RuntimeMonitor] key [metrics][RuntimeMetrics] of a tokio runtime.
**This functionality requires crate feature `rt` and some metrics require `tokio_unstable`.**
In the below example, a [`RuntimeMonitor`] is [constructed][RuntimeMonitor::new] and
three tasks are spawned and awaited; meanwhile, a fourth task prints [metrics][RuntimeMetrics]
in 500ms [intervals][RuntimeMonitor::intervals]:
```
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let handle = tokio::runtime::Handle::current();
// construct the runtime metrics monitor
let runtime_monitor = tokio_metrics::RuntimeMonitor::new(&handle);
// print runtime metrics every 500ms
{
tokio::spawn(async move {
for interval in runtime_monitor.intervals() {
// pretty-print the metric interval
println!("{:?}", interval);
// wait 500ms
tokio::time::sleep(Duration::from_millis(500)).await;
}
});
}
// await some tasks
tokio::join![
do_work(),
do_work(),
do_work(),
];
Ok(())
}
async fn do_work() {
for _ in 0..25 {
tokio::task::yield_now().await;
tokio::time::sleep(Duration::from_millis(100)).await;
}
}
```
"##
)]
//! ### Monitoring and publishing metrics
//!
//! If the `metrics-rs-integration` feature is additionally enabled, this crate allows
//! publishing metrics externally via [metrics-rs](metrics) exporters.
//!
//! For example, you can use [metrics_exporter_prometheus] to make metrics visible
//! to [Prometheus]. You can see the [metrics_exporter_prometheus] and [metrics-rs](metrics)
//! docs for guidance on configuring exporters.
//!
//! The published metrics are the same as the fields and methods of
#![cfg_attr(feature = "rt", doc = "[RuntimeMetrics] and")]
//! [TaskMetrics], but with a "tokio_" prefix added, for example
#![cfg_attr(feature = "rt", doc = "`tokio_workers_count` and")]
//! `tokio_instrumented_count`.
//!
//! [metrics_exporter_prometheus]: https://docs.rs/metrics_exporter_prometheus
#![cfg_attr(feature = "rt", doc = "[RuntimeMetrics]: crate::RuntimeMetrics")]
//! [Prometheus]: https://prometheus.io
//! [TaskMetrics]: crate::TaskMetrics
//!
//! This example exports [Prometheus] metrics by listening on a local Unix socket
//! called `prometheus.sock`, which you can access for debugging by
//! `curl --unix-socket prometheus.sock localhost`.
//!
//! ```
//! use std::time::Duration;
//!
//! #[tokio::main]
//! async fn main() {
//! metrics_exporter_prometheus::PrometheusBuilder::new()
//! .with_http_uds_listener("prometheus.sock")
//! .install()
//! .unwrap();
#![cfg_attr(
all(feature = "rt", feature = "metrics-rs-integration"),
doc = r##"
// This line launches the runtime reporter that monitors the Tokio runtime and exports the metrics.
tokio::task::spawn(
tokio_metrics::RuntimeMetricsReporterBuilder::default().describe_and_run(),
);
"##
)]
//! let monitor = tokio_metrics::TaskMonitor::new();
#![cfg_attr(
all(feature = "rt", feature = "metrics-rs-integration"),
doc = r##"
use metrics::Key;
// This line launches the task reporter that monitors Tokio tasks and exports the metrics.
tokio::task::spawn(
tokio_metrics::TaskMetricsReporterBuilder::new(|name| {
let name = name.replacen("tokio_", "my_task_", 1);
Key::from_parts(name, &[("application", "my_app")])
})
.describe_and_run(monitor.clone()),
);
"##
)]
//! // Run some code.
//! tokio::task::spawn(monitor.instrument(async move {
//! for _ in 0..1000 {
//! tokio::time::sleep(Duration::from_millis(10)).await;
//! }
//! }))
//! .await
//! .unwrap();
//! }
//! ```
macro_rules! cfg_rt {
    ($($item:item)*) => {
        $(
            #[cfg(feature = "rt")]
            #[cfg_attr(docsrs, doc(cfg(feature = "rt")))]
            $item
        )*
    };
}

cfg_rt! {
    mod runtime;

    pub use runtime::{
        RuntimeIntervals,
        RuntimeMetrics,
        RuntimeMonitor,
    };
}

#[cfg(all(feature = "rt", tokio_unstable))]
pub use runtime::{HistogramBucket, PollTimeHistogram};

#[cfg(all(feature = "rt", feature = "metrics-rs-integration"))]
#[cfg_attr(
    docsrs,
    doc(cfg(all(feature = "rt", feature = "metrics-rs-integration")))
)]
pub use runtime::metrics_rs_integration::{RuntimeMetricsReporter, RuntimeMetricsReporterBuilder};

mod derived_metrics;
#[cfg(feature = "metrics-rs-integration")]
mod metrics_rs;
mod task;

#[cfg(feature = "metrics-rs-integration")]
#[cfg_attr(docsrs, doc(cfg(feature = "metrics-rs-integration")))]
pub use task::metrics_rs_integration::{TaskMetricsReporter, TaskMetricsReporterBuilder};

pub use task::{
    Instrumented, TaskIntervals, TaskMetrics, TaskMonitor, TaskMonitorCore, TaskMonitorCoreBuilder,
};
================================================
FILE: src/metrics_rs.rs
================================================
use std::time::Duration;

pub(crate) const DEFAULT_METRIC_SAMPLING_INTERVAL: Duration = Duration::from_secs(30);

macro_rules! kind_to_type {
    (Counter) => {
        metrics::Counter
    };
    (Gauge) => {
        metrics::Gauge
    };
    (PollTimeHistogram) => {
        metrics::Histogram
    };
}

macro_rules! metric_key {
    ($transform_fn:ident, $name:ident) => {
        $transform_fn(concat!("tokio_", stringify!($name)))
    };
}

// calling `trim` since /// inserts spaces into docs
macro_rules! describe_metric_ref {
    ($transform_fn:ident, $doc:expr, $name:ident: Counter<$unit:ident> []) => {
        metrics::describe_counter!(
            crate::metrics_rs::metric_key!($transform_fn, $name)
                .name()
                .to_owned(),
            metrics::Unit::$unit,
            $doc.trim()
        )
    };
    ($transform_fn:ident, $doc:expr, $name:ident: Gauge<$unit:ident> []) => {
        metrics::describe_gauge!(
            crate::metrics_rs::metric_key!($transform_fn, $name)
                .name()
                .to_owned(),
            metrics::Unit::$unit,
            $doc.trim()
        )
    };
    ($transform_fn:ident, $doc:expr, $name:ident: PollTimeHistogram<$unit:ident> []) => {
        metrics::describe_histogram!(
            crate::metrics_rs::metric_key!($transform_fn, $name)
                .name()
                .to_owned(),
            metrics::Unit::$unit,
            $doc.trim()
        )
    };
}

macro_rules! capture_metric_ref {
    ($transform_fn:ident, $name:ident: Counter []) => {{
        let (name, labels) = crate::metrics_rs::metric_key!($transform_fn, $name).into_parts();
        metrics::counter!(name, labels)
    }};
    ($transform_fn:ident, $name:ident: Gauge []) => {{
        let (name, labels) = crate::metrics_rs::metric_key!($transform_fn, $name).into_parts();
        metrics::gauge!(name, labels)
    }};
    ($transform_fn:ident, $name:ident: PollTimeHistogram []) => {{
        let (name, labels) = crate::metrics_rs::metric_key!($transform_fn, $name).into_parts();
        metrics::histogram!(name, labels)
    }};
}
macro_rules! metric_refs {
    (
        [$struct_name:ident] [$($ignore:ident),* $(,)?] [$metrics_name:ty] [$emit_arg_type:ty] {
            stable {
                $(
                    #[doc = $doc:tt]
                    $name:ident: $kind:tt <$unit:ident> $opts:tt
                ),*
                $(,)?
            }
            stable_derived {
                $(
                    #[doc = $derived_doc:tt]
                    $derived_name:ident: $derived_kind:tt <$derived_unit:ident> $derived_opts:tt
                ),*
                $(,)?
            }
            unstable {
                $(
                    #[doc = $unstable_doc:tt]
                    $unstable_name:ident: $unstable_kind:tt <$unstable_unit:ident> $unstable_opts:tt
                ),*
                $(,)?
            }
            unstable_derived {
                $(
                    #[doc = $unstable_derived_doc:tt]
                    $unstable_derived_name:ident: $unstable_derived_kind:tt <$unstable_derived_unit:ident> $unstable_derived_opts:tt
                ),*
                $(,)?
            }
        }
    ) => {
        struct $struct_name {
            $(
                $name: crate::metrics_rs::kind_to_type!($kind),
            )*
            $(
                $derived_name: crate::metrics_rs::kind_to_type!($derived_kind),
            )*
            $(
                #[cfg(tokio_unstable)]
                $unstable_name: crate::metrics_rs::kind_to_type!($unstable_kind),
            )*
            $(
                #[cfg(tokio_unstable)]
                $unstable_derived_name: crate::metrics_rs::kind_to_type!($unstable_derived_kind),
            )*
        }

        impl $struct_name {
            fn capture(transform_fn: &mut dyn FnMut(&'static str) -> metrics::Key) -> Self {
                Self {
                    $(
                        $name: crate::metrics_rs::capture_metric_ref!(transform_fn, $name: $kind $opts),
                    )*
                    $(
                        $derived_name: crate::metrics_rs::capture_metric_ref!(transform_fn, $derived_name: $derived_kind $derived_opts),
                    )*
                    $(
                        #[cfg(tokio_unstable)]
                        $unstable_name: crate::metrics_rs::capture_metric_ref!(transform_fn, $unstable_name: $unstable_kind $unstable_opts),
                    )*
                    $(
                        #[cfg(tokio_unstable)]
                        $unstable_derived_name: crate::metrics_rs::capture_metric_ref!(transform_fn, $unstable_derived_name: $unstable_derived_kind $unstable_derived_opts),
                    )*
                }
            }

            fn emit(&self, metrics: $metrics_name, emit_arg: $emit_arg_type) {
                // Emit derived metrics before base metrics because emitting base metrics may move
                // out of `$metrics`.
                $(
                    crate::metrics_rs::MyMetricOp::op((&self.$derived_name, metrics.$derived_name()), emit_arg);
                )*
                $(
                    #[cfg(tokio_unstable)]
                    crate::metrics_rs::MyMetricOp::op((&self.$unstable_derived_name, metrics.$unstable_derived_name()), emit_arg);
                )*
                $(
                    crate::metrics_rs::MyMetricOp::op((&self.$name, metrics.$name), emit_arg);
                )*
                $(
                    #[cfg(tokio_unstable)]
                    crate::metrics_rs::MyMetricOp::op((&self.$unstable_name, metrics.$unstable_name), emit_arg);
                )*
            }

            fn describe(transform_fn: &mut dyn FnMut(&'static str) -> metrics::Key) {
                $(
                    crate::metrics_rs::describe_metric_ref!(transform_fn, $doc, $name: $kind<$unit> $opts);
                )*
                $(
                    crate::metrics_rs::describe_metric_ref!(transform_fn, $derived_doc, $derived_name: $derived_kind<$derived_unit> $derived_opts);
                )*
                $(
                    #[cfg(tokio_unstable)]
                    crate::metrics_rs::describe_metric_ref!(transform_fn, $unstable_doc, $unstable_name: $unstable_kind<$unstable_unit> $unstable_opts);
                )*
                $(
                    #[cfg(tokio_unstable)]
                    crate::metrics_rs::describe_metric_ref!(transform_fn, $unstable_derived_doc, $unstable_derived_name: $unstable_derived_kind<$unstable_derived_unit> $unstable_derived_opts);
                )*
            }
        }

        #[test]
        fn test_no_fields_missing() {
            // test that no fields are missing. We can't use exhaustive matching here
            // since the metrics structs are #[non_exhaustive], so use a debug impl
            let debug = format!("{:#?}", <$metrics_name>::default());
            for line in debug.lines() {
                // Only look at top-level field lines: exactly 4 spaces of
                // indentation and containing a `:` (field name separator).
                // This skips the struct header/footer and any nested
                // struct/vec Debug output from complex field types.
                let is_top_level_field = line.starts_with("    ")
                    && !line.starts_with("        ")
                    && line.contains(':');
                if !is_top_level_field {
                    continue
                }
                $(
                    let expected = format!("    {}:", stringify!($ignore));
                    if line.contains(&expected) {
                        continue
                    }
                );*
                $(
                    let expected = format!("    {}:", stringify!($name));
                    eprintln!("{}", expected);
                    if line.contains(&expected) {
                        continue
                    }
                );*
                $(
                    let expected = format!("    {}:", stringify!($unstable_name));
                    eprintln!("{}", expected);
                    if line.contains(&expected) {
                        continue
                    }
                );*
                panic!("missing metric {:?}", line);
            }
        }

        #[test]
        fn test_no_derived_metrics_missing() {
            // test that no derived metrics are missing.
            for derived_metric in <$metrics_name>::DERIVED_METRICS {
                $(
                    if *derived_metric == stringify!($derived_name) {
                        continue
                    }
                );*
                panic!("missing metric {:?}", derived_metric);
            }
            #[cfg(tokio_unstable)]
            for unstable_derived_metric in <$metrics_name>::UNSTABLE_DERIVED_METRICS {
                $(
                    if *unstable_derived_metric == stringify!($unstable_derived_name) {
                        continue
                    }
                );*
                panic!("missing metric {:?}", unstable_derived_metric);
            }
        }
    }
}
pub(crate) use capture_metric_ref;
pub(crate) use describe_metric_ref;
pub(crate) use kind_to_type;
pub(crate) use metric_key;
pub(crate) use metric_refs;
pub(crate) trait MyMetricOp<T> {
    fn op(self, t: T);
}

impl<T> MyMetricOp<T> for (&metrics::Counter, Duration) {
    fn op(self, _: T) {
        self.0
            .increment(self.1.as_micros().try_into().unwrap_or(u64::MAX));
    }
}

impl<T> MyMetricOp<T> for (&metrics::Counter, u64) {
    fn op(self, _t: T) {
        self.0.increment(self.1);
    }
}

impl<T> MyMetricOp<T> for (&metrics::Gauge, Duration) {
    fn op(self, _t: T) {
        self.0.set(self.1.as_micros() as f64);
    }
}

impl<T> MyMetricOp<T> for (&metrics::Gauge, u64) {
    fn op(self, _: T) {
        self.0.set(self.1 as f64);
    }
}

impl<T> MyMetricOp<T> for (&metrics::Gauge, usize) {
    fn op(self, _t: T) {
        self.0.set(self.1 as f64);
    }
}

impl<T> MyMetricOp<T> for (&metrics::Gauge, f64) {
    fn op(self, _t: T) {
        self.0.set(self.1);
    }
}

#[cfg(all(feature = "rt", tokio_unstable))]
impl<T> MyMetricOp<T> for (&metrics::Histogram, crate::runtime::PollTimeHistogram) {
    fn op(self, _: T) {
        for bucket in self.1.buckets() {
            if bucket.count() > 0 {
                // Use range.start as the representative value; the metrics-rs
                // histogram handles its own bucketing from these raw values.
                self.0.record_many(
                    bucket.range_start().as_micros() as f64,
                    bucket.count() as usize,
                );
            }
        }
    }
}
================================================
FILE: src/runtime/metrics_rs_integration.rs
================================================
use std::{fmt, time::Duration};
use tokio::runtime::Handle;
use super::{RuntimeIntervals, RuntimeMetrics, RuntimeMonitor};
use crate::metrics_rs::{metric_refs, DEFAULT_METRIC_SAMPLING_INTERVAL};
/// A builder for the [`RuntimeMetricsReporter`] that wraps the RuntimeMonitor, periodically
/// reporting RuntimeMetrics to any configured [metrics-rs] recorder.
///
/// ### Published Metrics
///
/// The published metrics are the fields of [RuntimeMetrics], but with the
/// `tokio_` prefix added, for example, `tokio_workers_count`. If desired, you
/// can use the [`with_metrics_transformer`] function to customize the metric names.
///
/// ### Usage
///
/// To upload metrics via [metrics-rs], you need to set up an exporter, which
/// is what actually exports the metrics out of the program. You must set
/// up the exporter before you call [`describe_and_run`].
///
/// You can find exporters within the [metrics-rs] docs. One such exporter
/// is [metrics_exporter_prometheus], which makes metrics visible
/// through Prometheus.
///
/// For example, you can use it to export Prometheus metrics over a local Unix socket
/// named `prometheus.sock` (which you can query for debugging with
/// `curl --unix-socket prometheus.sock localhost`), as follows:
///
/// ```
/// use std::time::Duration;
///
/// #[tokio::main]
/// async fn main() {
/// metrics_exporter_prometheus::PrometheusBuilder::new()
/// .with_http_uds_listener("prometheus.sock")
/// .install()
/// .unwrap();
/// tokio::task::spawn(
/// tokio_metrics::RuntimeMetricsReporterBuilder::default()
/// // the default metric sampling interval is 30 seconds, which is
/// // too long for quick tests, so have it be 1 second.
/// .with_interval(std::time::Duration::from_secs(1))
/// .describe_and_run(),
/// );
/// // Run some code
/// tokio::task::spawn(async move {
/// for _ in 0..1000 {
/// tokio::time::sleep(Duration::from_millis(10)).await;
/// }
/// })
/// .await
/// .unwrap();
/// }
/// ```
///
/// [`describe_and_run`]: RuntimeMetricsReporterBuilder::describe_and_run
/// [`with_metrics_transformer`]: RuntimeMetricsReporterBuilder::with_metrics_transformer
/// [metrics-rs]: metrics
/// [metrics_exporter_prometheus]: https://docs.rs/metrics_exporter_prometheus
pub struct RuntimeMetricsReporterBuilder {
interval: Duration,
metrics_transformer: Box<dyn FnMut(&'static str) -> metrics::Key + Send>,
}
impl fmt::Debug for RuntimeMetricsReporterBuilder {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("RuntimeMetricsReporterBuilder")
.field("interval", &self.interval)
// skip metrics_transformer field
.finish()
}
}
impl Default for RuntimeMetricsReporterBuilder {
fn default() -> Self {
RuntimeMetricsReporterBuilder {
interval: DEFAULT_METRIC_SAMPLING_INTERVAL,
metrics_transformer: Box::new(metrics::Key::from_static_name),
}
}
}
impl RuntimeMetricsReporterBuilder {
/// Set the metric sampling interval, default: 30 seconds.
///
/// Note that this is the interval at which metrics are *sampled* from
/// the Tokio runtime and then set on the [metrics-rs] recorder. Uploading the
/// metrics upstream is handled by the exporter set up in the
/// application, and normally runs on a different period.
///
/// For example, if metrics are exported via Prometheus, which
/// normally operates in a pull-based fashion, the actual collection
/// period is controlled by the Prometheus server, which periodically polls the
/// application's Prometheus exporter to get the latest value of the metrics.
///
/// [metrics-rs]: metrics
pub fn with_interval(mut self, interval: Duration) -> Self {
self.interval = interval;
self
}
/// Set a custom "metrics transformer", which is used during `build` to transform the metric
/// names into metric keys, for example to add dimensions. The string metric names used by this reporter
/// all start with `tokio_`. The default transformer is just [`metrics::Key::from_static_name`].
///
/// For example, to attach a dimension named "application" with value "my_app", and to replace
/// `tokio_` with `my_app_`:
/// ```
/// # use metrics::Key;
///
/// #[tokio::main]
/// async fn main() {
/// metrics_exporter_prometheus::PrometheusBuilder::new()
/// .with_http_uds_listener("prometheus.sock")
/// .install()
/// .unwrap();
/// tokio::task::spawn(
/// tokio_metrics::RuntimeMetricsReporterBuilder::default().with_metrics_transformer(|name| {
/// let name = name.replacen("tokio_", "my_app_", 1);
/// Key::from_parts(name, &[("application", "my_app")])
/// })
/// .describe_and_run()
/// );
/// }
/// ```
pub fn with_metrics_transformer(
mut self,
transformer: impl FnMut(&'static str) -> metrics::Key + Send + 'static,
) -> Self {
self.metrics_transformer = Box::new(transformer);
self
}
/// Build the [`RuntimeMetricsReporter`] for the current Tokio runtime. This function will capture
/// the [`Counter`]s, [`Gauge`]s and [`Histogram`]s from the current [metrics-rs] recorder,
/// so if you are using [`with_local_recorder`], you should wrap this function and [`describe`] with it.
///
/// For example:
/// ```
/// # use std::sync::Arc;
///
/// #[tokio::main]
/// async fn main() {
/// let builder = tokio_metrics::RuntimeMetricsReporterBuilder::default();
/// let recorder = Arc::new(metrics_util::debugging::DebuggingRecorder::new());
/// let metrics_reporter = metrics::with_local_recorder(&recorder, || builder.describe().build());
///
/// // no need to wrap `run()`, since the metrics are already captured
/// tokio::task::spawn(metrics_reporter.run());
/// }
/// ```
///
///
/// [`Counter`]: metrics::Counter
/// [`Gauge`]: metrics::Gauge
/// [`Histogram`]: metrics::Histogram
/// [metrics-rs]: metrics
/// [`with_local_recorder`]: metrics::with_local_recorder
/// [`describe`]: Self::describe
#[must_use = "reporter does nothing unless run"]
pub fn build(self) -> RuntimeMetricsReporter {
self.build_with_monitor(RuntimeMonitor::new(&Handle::current()))
}
/// Build the [`RuntimeMetricsReporter`] with a specific [`RuntimeMonitor`]. This function will capture
/// the [`Counter`]s, [`Gauge`]s and [`Histogram`]s from the current [metrics-rs] recorder,
/// so if you are using [`with_local_recorder`], you should wrap this function and [`describe`]
/// with it.
///
/// [`Counter`]: metrics::Counter
/// [`Gauge`]: metrics::Gauge
/// [`Histogram`]: metrics::Histogram
/// [metrics-rs]: metrics
/// [`with_local_recorder`]: metrics::with_local_recorder
/// [`describe`]: Self::describe
#[must_use = "reporter does nothing unless run"]
pub fn build_with_monitor(mut self, monitor: RuntimeMonitor) -> RuntimeMetricsReporter {
RuntimeMetricsReporter {
interval: self.interval,
intervals: monitor.intervals(),
emitter: RuntimeMetricRefs::capture(&mut self.metrics_transformer),
}
}
/// Call [`describe_counter`] etc. to describe the emitted metrics.
///
/// Describing metrics makes the reporter attach descriptions and units to them,
/// which makes them easier to use. However, some reporters don't support
/// describing the same metric name more than once, so it is generally a good
/// idea to only call this function once per metric reporter.
///
/// [`describe_counter`]: metrics::describe_counter
/// [metrics-rs]: metrics
pub fn describe(mut self) -> Self {
RuntimeMetricRefs::describe(&mut self.metrics_transformer);
self
}
/// Runs the reporter (within the returned future), [describing] the metrics beforehand.
///
/// Describing metrics makes the reporter attach descriptions and units to them,
/// which makes them easier to use. However, some reporters don't support
/// describing the same metric name more than once. If you are emitting multiple
/// metrics via a single reporter, try to call [`describe`] once and [`run`] for each
/// runtime metrics reporter.
///
/// ### Working with a custom reporter
///
/// If you want to set a local metrics recorder, you shouldn't be calling this method,
/// but you should instead call `.describe().build()` within [`with_local_recorder`] and then
/// call `run` (see the docs on [`build`]).
///
/// [describing]: Self::describe
/// [`describe`]: Self::describe
/// [`build`]: Self::build
/// [`run`]: RuntimeMetricsReporter::run
/// [`with_local_recorder`]: metrics::with_local_recorder
pub async fn describe_and_run(self) {
self.describe().build().run().await;
}
/// Runs the reporter (within the returned future), not describing the metrics beforehand.
///
/// ### Working with a custom reporter
///
/// If you want to set a local metrics recorder, you shouldn't be calling this method,
/// but you should instead call `.describe().build()` within [`with_local_recorder`] and then
/// call [`run`] (see the docs on [`build`]).
///
/// [`build`]: Self::build
/// [`run`]: RuntimeMetricsReporter::run
/// [`with_local_recorder`]: metrics::with_local_recorder
pub async fn run_without_describing(self) {
self.build().run().await;
}
}
/// Collects metrics from a Tokio runtime and uploads them to [metrics_rs](metrics).
pub struct RuntimeMetricsReporter {
interval: Duration,
intervals: RuntimeIntervals,
emitter: RuntimeMetricRefs,
}
metric_refs! {
[RuntimeMetricRefs] [elapsed] [RuntimeMetrics] [&tokio::runtime::RuntimeMetrics] {
stable {
/// The number of worker threads used by the runtime
workers_count: Gauge<Count> [],
/// The current number of alive tasks in the runtime.
live_tasks_count: Gauge<Count> [],
/// The maximum number of times any worker thread parked
max_park_count: Gauge<Count> [],
/// The minimum number of times any worker thread parked
min_park_count: Gauge<Count> [],
/// The number of times worker threads parked
total_park_count: Gauge<Count> [],
/// The amount of time worker threads were busy
total_busy_duration: Counter<Microseconds> [],
/// The maximum amount of time a worker thread was busy
max_busy_duration: Counter<Microseconds> [],
/// The minimum amount of time a worker thread was busy
min_busy_duration: Counter<Microseconds> [],
/// The number of tasks currently scheduled in the runtime's global queue
global_queue_depth: Gauge<Count> [],
}
stable_derived {
/// The ratio of the [`RuntimeMetrics::total_busy_duration`] to the [`RuntimeMetrics::elapsed`].
busy_ratio: Gauge<Percent> [],
}
unstable {
/// The average duration of a single invocation of poll on a task
mean_poll_duration: Gauge<Microseconds> [],
/// The average duration of a single invocation of poll on a task on the worker with the lowest value
mean_poll_duration_worker_min: Gauge<Microseconds> [],
/// The average duration of a single invocation of poll on a task on the worker with the highest value
mean_poll_duration_worker_max: Gauge<Microseconds> [],
/// A histogram of task polls since the previous probe grouped by poll times
poll_time_histogram: PollTimeHistogram<Microseconds> [],
/// The number of times worker threads unparked but performed no work before parking again
total_noop_count: Counter<Count> [],
/// The maximum number of times any worker thread unparked but performed no work before parking again
max_noop_count: Counter<Count> [],
/// The minimum number of times any worker thread unparked but performed no work before parking again
min_noop_count: Counter<Count> [],
/// The number of tasks worker threads stole from another worker thread
total_steal_count: Counter<Count> [],
/// The maximum number of tasks any worker thread stole from another worker thread.
max_steal_count: Counter<Count> [],
/// The minimum number of tasks any worker thread stole from another worker thread
min_steal_count: Counter<Count> [],
/// The number of times worker threads stole tasks from another worker thread
total_steal_operations: Counter<Count> [],
/// The maximum number of times any worker thread stole tasks from another worker thread
max_steal_operations: Counter<Count> [],
/// The minimum number of times any worker thread stole tasks from another worker thread
min_steal_operations: Counter<Count> [],
/// The number of tasks scheduled from **outside** of the runtime
num_remote_schedules: Counter<Count> [],
/// The number of tasks scheduled from worker threads
total_local_schedule_count: Counter<Count> [],
/// The maximum number of tasks scheduled from any one worker thread
max_local_schedule_count: Counter<Count> [],
/// The minimum number of tasks scheduled from any one worker thread
min_local_schedule_count: Counter<Count> [],
/// The number of times worker threads saturated their local queues
total_overflow_count: Counter<Count> [],
/// The maximum number of times any one worker saturated its local queue
max_overflow_count: Counter<Count> [],
/// The minimum number of times any one worker saturated its local queue
min_overflow_count: Counter<Count> [],
/// The number of tasks that have been polled across all worker threads
total_polls_count: Counter<Count> [],
/// The maximum number of tasks that have been polled in any worker thread
max_polls_count: Counter<Count> [],
/// The minimum number of tasks that have been polled in any worker thread
min_polls_count: Counter<Count> [],
/// The total number of tasks currently scheduled in workers' local queues
total_local_queue_depth: Gauge<Count> [],
/// The maximum number of tasks currently scheduled in any worker's local queue
max_local_queue_depth: Gauge<Count> [],
/// The minimum number of tasks currently scheduled in any worker's local queue
min_local_queue_depth: Gauge<Count> [],
/// The number of tasks currently waiting to be executed in the runtime's blocking threadpool.
blocking_queue_depth: Gauge<Count> [],
/// The number of additional threads spawned by the runtime.
blocking_threads_count: Gauge<Count> [],
/// The number of idle threads that have been spawned by the runtime for `spawn_blocking` calls.
idle_blocking_threads_count: Gauge<Count> [],
/// The number of times tasks have been forced to yield back to the scheduler after exhausting their task budgets
budget_forced_yield_count: Counter<Count> [],
/// The number of ready events processed by the runtime's I/O driver
io_driver_ready_count: Counter<Count> [],
}
unstable_derived {
/// The mean number of task polls per worker park, derived from [`RuntimeMetrics::total_polls_count`] and the worker park counts.
mean_polls_per_park: Gauge<Percent> [],
}
}
}
impl fmt::Debug for RuntimeMetricsReporter {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("RuntimeMetricsReporter")
.field("interval", &self.interval)
// skip intervals field
.finish()
}
}
impl RuntimeMetricsReporter {
/// Collect and publish metrics once to the configured [metrics_rs](metrics) reporter.
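///
/// ##### Examples
/// An illustrative sketch: drive sampling on your own schedule instead of calling [`run`](Self::run):
/// ```
/// #[tokio::main]
/// async fn main() {
/// let mut reporter = tokio_metrics::RuntimeMetricsReporterBuilder::default()
/// .describe()
/// .build();
/// // publish a single sample to the installed recorder (a no-op if none is installed)
/// reporter.run_once();
/// }
/// ```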
pub fn run_once(&mut self) {
let metrics = self
.intervals
.next()
.expect("RuntimeIntervals::next never returns None");
self.emitter.emit(metrics, &self.intervals.runtime);
}
/// Collect and publish metrics periodically to the configured [metrics_rs](metrics) reporter.
///
/// You probably want to run this within its own task (using [`tokio::task::spawn`])
pub async fn run(mut self) {
loop {
self.run_once();
tokio::time::sleep(self.interval).await;
}
}
}
================================================
FILE: src/runtime/poll_time_histogram.rs
================================================
use std::time::Duration;
/// A histogram of task poll durations, pairing each bucket's count with its
/// time range from the runtime configuration.
///
/// This type is returned as part of [`RuntimeMetrics`][super::RuntimeMetrics]
/// when the runtime has poll time histograms enabled via
/// [`enable_metrics_poll_time_histogram`][tokio::runtime::Builder::enable_metrics_poll_time_histogram].
///
/// Each bucket contains the [`Duration`] range configured for that bucket and
/// the count of task polls that fell into that range during the sampling
/// interval.
#[derive(Debug, Clone, Default)]
#[non_exhaustive]
pub struct PollTimeHistogram {
buckets: Vec<HistogramBucket>,
}
impl PollTimeHistogram {
pub(crate) fn new(buckets: Vec<HistogramBucket>) -> Self {
Self { buckets }
}
/// Returns the histogram buckets.
pub fn buckets(&self) -> &[HistogramBucket] {
&self.buckets
}
pub(crate) fn buckets_mut(&mut self) -> &mut [HistogramBucket] {
&mut self.buckets
}
/// Returns just the bucket counts as a `Vec<u64>`.
pub fn as_counts(&self) -> Vec<u64> {
self.buckets.iter().map(|b| b.count).collect()
}
}
/// A single bucket in a [`PollTimeHistogram`].
#[derive(Debug, Clone, Copy, Default)]
#[non_exhaustive]
pub struct HistogramBucket {
range_start: Duration,
range_end: Duration,
count: u64,
}
impl HistogramBucket {
pub(crate) fn new(range_start: Duration, range_end: Duration, count: u64) -> Self {
Self { range_start, range_end, count }
}
/// The start of the time range for this bucket (inclusive).
pub fn range_start(&self) -> Duration {
self.range_start
}
/// The end of the time range for this bucket (exclusive).
pub fn range_end(&self) -> Duration {
self.range_end
}
/// Returns the poll count for this bucket during the interval.
pub fn count(&self) -> u64 {
self.count
}
/// Adds to the count of this bucket.
pub(crate) fn add_count(&mut self, delta: u64) {
self.count = self.count.saturating_add(delta);
}
}
#[cfg(feature = "metrique-integration")]
impl metrique::writer::Value for PollTimeHistogram {
fn write(&self, writer: impl metrique::writer::ValueWriter) {
use metrique::writer::unit::NegativeScale;
use metrique::writer::{MetricFlags, Observation, Unit};
// Use the bucket midpoint as the representative value.
// Tokio's last bucket has range_end of Duration::from_nanos(u64::MAX),
// so use range_start for it since the midpoint wouldn't be representative.
const LAST_BUCKET_END: Duration = Duration::from_nanos(u64::MAX);
writer.metric(
self.buckets.iter().filter(|b| b.count > 0).map(|b| {
let value_us = if b.range_end == LAST_BUCKET_END {
b.range_start.as_micros() as f64
} else {
#[allow(clippy::incompatible_msrv)] // metrique-integration requires 1.89+
f64::midpoint(
b.range_start.as_micros() as f64,
b.range_end.as_micros() as f64,
)
};
Observation::Repeated {
total: value_us * b.count as f64,
occurrences: b.count,
}
}),
Unit::Second(NegativeScale::Micro),
[],
MetricFlags::empty(),
);
}
}
#[cfg(feature = "metrique-integration")]
impl metrique::CloseValue for PollTimeHistogram {
type Closed = Self;
fn close(self) -> Self {
self
}
}
#[cfg(all(test, feature = "metrique-integration"))]
mod tests {
use super::*;
use crate::runtime::RuntimeMetrics;
use metrique::CloseValue;
use metrique::test_util::test_metric;
#[test]
fn poll_time_histogram_close_value() {
let hist = PollTimeHistogram::new(vec![
HistogramBucket::new(Duration::from_micros(0), Duration::from_micros(100), 5),
HistogramBucket::new(Duration::from_micros(100), Duration::from_micros(200), 0),
HistogramBucket::new(Duration::from_micros(200), Duration::from_micros(500), 3),
]);
let closed = hist.close();
let buckets = closed.buckets();
assert_eq!(buckets.len(), 3);
assert_eq!(buckets[0].count(), 5);
assert_eq!(buckets[0].range_start(), Duration::from_micros(0));
assert_eq!(buckets[0].range_end(), Duration::from_micros(100));
assert_eq!(buckets[1].count(), 0);
assert_eq!(buckets[2].count(), 3);
assert_eq!(buckets[2].range_start(), Duration::from_micros(200));
assert_eq!(buckets[2].range_end(), Duration::from_micros(500));
}
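// Additional sketch: `as_counts` should preserve bucket order, including empty buckets.
#[test]
fn poll_time_histogram_as_counts() {
let hist = PollTimeHistogram::new(vec![
HistogramBucket::new(Duration::from_micros(0), Duration::from_micros(100), 5),
HistogramBucket::new(Duration::from_micros(100), Duration::from_micros(200), 0),
HistogramBucket::new(Duration::from_micros(200), Duration::from_micros(500), 3),
]);
assert_eq!(hist.as_counts(), vec![5, 0, 3]);
}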
#[test]
fn poll_time_histogram_last_bucket_uses_range_start() {
let last_bucket_start = Duration::from_millis(500);
let metrics = RuntimeMetrics {
poll_time_histogram: PollTimeHistogram::new(vec![
HistogramBucket::new(Duration::from_micros(0), Duration::from_micros(100), 0),
HistogramBucket::new(last_bucket_start, Duration::from_nanos(u64::MAX), 2),
]),
..Default::default()
};
let entry = test_metric(metrics);
let hist = &entry.metrics["poll_time_histogram"];
assert_eq!(hist.distribution.len(), 1);
match hist.distribution[0] {
metrique::writer::Observation::Repeated { total, occurrences } => {
assert_eq!(occurrences, 2);
let expected = last_bucket_start.as_micros() as f64 * 2.0;
assert!((total - expected).abs() < 0.01);
}
other => panic!("expected Repeated, got {other:?}"),
}
}
}
================================================
FILE: src/runtime.rs
================================================
use crate::derived_metrics::derived_metrics;
#[cfg(tokio_unstable)]
use std::ops::Range;
use std::time::{Duration, Instant};
use tokio::runtime;
#[cfg(tokio_unstable)]
mod poll_time_histogram;
#[cfg(tokio_unstable)]
pub use poll_time_histogram::{HistogramBucket, PollTimeHistogram};
#[cfg(feature = "metrics-rs-integration")]
pub(crate) mod metrics_rs_integration;
/// Monitors key metrics of the tokio runtime.
///
/// ### Usage
/// ```
/// use std::time::Duration;
/// use tokio_metrics::RuntimeMonitor;
///
/// #[tokio::main]
/// async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
/// let handle = tokio::runtime::Handle::current();
///
/// // print runtime metrics every 500ms
/// {
/// let runtime_monitor = RuntimeMonitor::new(&handle);
/// tokio::spawn(async move {
/// for interval in runtime_monitor.intervals() {
/// // pretty-print the metric interval
/// println!("{:?}", interval);
/// // wait 500ms
/// tokio::time::sleep(Duration::from_millis(500)).await;
/// }
/// });
/// }
///
/// // await some tasks
/// tokio::join![
/// do_work(),
/// do_work(),
/// do_work(),
/// ];
///
/// Ok(())
/// }
///
/// async fn do_work() {
/// for _ in 0..25 {
/// tokio::task::yield_now().await;
/// tokio::time::sleep(Duration::from_millis(100)).await;
/// }
/// }
/// ```
#[derive(Debug)]
pub struct RuntimeMonitor {
/// Handle to the runtime
runtime: runtime::RuntimeMetrics,
}
macro_rules! define_runtime_metrics {
(
stable {
$(
$(#[$($attributes:tt)*])*
$vis:vis $name:ident: $ty:ty
),*
$(,)?
}
unstable {
$(
$(#[$($unstable_attributes:tt)*])*
$unstable_vis:vis $unstable_name:ident: $unstable_ty:ty
),*
$(,)?
}
) => {
/// Key runtime metrics.
#[non_exhaustive]
#[cfg_attr(feature = "metrique-integration", metrique::unit_of_work::metrics(subfield_owned))]
#[derive(Default, Debug, Clone)]
pub struct RuntimeMetrics {
$(
$(#[$($attributes)*])*
#[cfg_attr(docsrs, doc(cfg(feature = "rt")))]
$vis $name: $ty,
)*
$(
$(#[$($unstable_attributes)*])*
#[cfg(tokio_unstable)]
#[cfg_attr(docsrs, doc(cfg(all(feature = "rt", tokio_unstable))))]
$unstable_vis $unstable_name: $unstable_ty,
)*
}
};
}
define_runtime_metrics! {
stable {
/// The number of worker threads used by the runtime.
///
/// This metric is static for a runtime.
///
/// This metric is always equal to [`tokio::runtime::RuntimeMetrics::num_workers`].
/// When using the `current_thread` runtime, the return value is always `1`.
///
/// The number of workers is set by configuring
/// [`worker_threads`][`tokio::runtime::Builder::worker_threads`] with
/// [`tokio::runtime::Builder`], or by parameterizing [`tokio::main`].
///
/// ##### Examples
/// In the below example, the number of workers is set by parameterizing [`tokio::main`]:
/// ```
/// use tokio::runtime::Handle;
///
/// #[tokio::main(flavor = "multi_thread", worker_threads = 10)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// assert_eq!(next_interval().workers_count, 10);
/// }
/// ```
///
/// [`tokio::main`]: https://docs.rs/tokio/latest/tokio/attr.main.html
///
/// When using the `current_thread` runtime, the return value is always `1`; e.g.:
/// ```
/// use tokio::runtime::Handle;
///
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// assert_eq!(next_interval().workers_count, 1);
/// }
/// ```
///
/// This metric is always equal to [`tokio::runtime::RuntimeMetrics::num_workers`]; e.g.:
/// ```
/// use tokio::runtime::Handle;
///
/// #[tokio::main]
/// async fn main() {
/// let handle = Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// assert_eq!(next_interval().workers_count, handle.metrics().num_workers());
/// }
/// ```
pub workers_count: usize,
/// The current number of alive tasks in the runtime.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::num_alive_tasks`].
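///
/// ##### Examples
/// An illustrative sketch (a spawned task that never completes remains alive):
/// ```
/// #[tokio::main]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let _task = tokio::spawn(std::future::pending::<()>());
/// assert!(next_interval().live_tasks_count >= 1);
/// }
/// ```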
pub live_tasks_count: usize,
/// The number of times worker threads parked.
///
/// The worker park count increases by one each time the worker parks the thread waiting for
/// new inbound events to process. This usually means the worker has processed all pending work
/// and is currently idle.
///
/// ##### Definition
/// This metric is derived from the sum of [`tokio::runtime::RuntimeMetrics::worker_park_count`]
/// across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::max_park_count`]
/// - [`RuntimeMetrics::min_park_count`]
///
/// ##### Examples
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of interval 1
/// assert_eq!(interval.total_park_count, 0);
///
/// induce_parks().await;
///
/// let interval = next_interval(); // end of interval 2
/// assert!(interval.total_park_count >= 1); // usually 1 or 2 parks
/// }
///
/// async fn induce_parks() {
/// let _ = tokio::time::timeout(std::time::Duration::ZERO, async {
/// loop { tokio::task::yield_now().await; }
/// }).await;
/// }
/// ```
pub total_park_count: u64,
/// The maximum number of times any worker thread parked.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_park_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_park_count`]
/// - [`RuntimeMetrics::min_park_count`]
pub max_park_count: u64,
/// The minimum number of times any worker thread parked.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_park_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_park_count`]
/// - [`RuntimeMetrics::max_park_count`]
pub min_park_count: u64,
/// The amount of time worker threads were busy.
///
/// The worker busy duration increases whenever the worker is spending time processing work.
/// Using this value can indicate the total load of workers.
///
/// ##### Definition
/// This metric is derived from the sum of
/// [`tokio::runtime::RuntimeMetrics::worker_total_busy_duration`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::min_busy_duration`]
/// - [`RuntimeMetrics::max_busy_duration`]
///
/// ##### Examples
/// In the below example, tasks spend a total of 4s busy:
/// ```
/// use tokio::time::Duration;
///
/// fn main() {
/// let start = tokio::time::Instant::now();
///
/// let rt = tokio::runtime::Builder::new_current_thread()
/// .enable_all()
/// .build()
/// .unwrap();
///
/// let handle = rt.handle();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let delay_1s = Duration::from_secs(1);
/// let delay_3s = Duration::from_secs(3);
///
/// rt.block_on(async {
/// // keep the main task busy for 1s
/// spin_for(delay_1s);
///
/// // spawn a task and keep it busy for 3s
/// let _ = tokio::spawn(async move {
/// spin_for(delay_3s);
/// }).await;
/// });
///
/// // flush metrics
/// drop(rt);
///
/// let elapsed = start.elapsed();
///
/// let interval = next_interval(); // end of interval 1
/// assert!(interval.total_busy_duration >= delay_1s + delay_3s);
/// assert!(interval.total_busy_duration <= elapsed);
/// }
///
/// fn time<F>(task: F) -> Duration
/// where
/// F: Fn() -> ()
/// {
/// let start = tokio::time::Instant::now();
/// task();
/// start.elapsed()
/// }
///
/// /// Block the current thread for a given `duration`.
/// fn spin_for(duration: Duration) {
/// let start = tokio::time::Instant::now();
/// while start.elapsed() <= duration {}
/// }
/// ```
///
/// Busy times may not accumulate as the above example suggests (FIXME: Why?); e.g., if we
/// remove the three second delay, the time spent busy falls to mere microseconds:
/// ```should_panic
/// use tokio::time::Duration;
///
/// fn main() {
/// let rt = tokio::runtime::Builder::new_current_thread()
/// .enable_all()
/// .build()
/// .unwrap();
///
/// let handle = rt.handle();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let delay_1s = Duration::from_secs(1);
///
/// let elapsed = time(|| rt.block_on(async {
/// // keep the main task busy for 1s
/// spin_for(delay_1s);
/// }));
///
/// // flush metrics
/// drop(rt);
///
/// let interval = next_interval(); // end of interval 1
/// assert!(interval.total_busy_duration >= delay_1s); // FAIL
/// assert!(interval.total_busy_duration <= elapsed);
/// }
///
/// fn time<F>(task: F) -> Duration
/// where
/// F: Fn() -> ()
/// {
/// let start = tokio::time::Instant::now();
/// task();
/// start.elapsed()
/// }
///
/// /// Block the current thread for a given `duration`.
/// fn spin_for(duration: Duration) {
/// let start = tokio::time::Instant::now();
/// while start.elapsed() <= duration {}
/// }
/// ```
pub total_busy_duration: Duration,
/// The maximum amount of time a worker thread was busy.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_total_busy_duration`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_busy_duration`]
/// - [`RuntimeMetrics::min_busy_duration`]
pub max_busy_duration: Duration,
/// The minimum amount of time a worker thread was busy.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_total_busy_duration`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_busy_duration`]
/// - [`RuntimeMetrics::max_busy_duration`]
pub min_busy_duration: Duration,
/// The number of tasks currently scheduled in the runtime's global queue.
///
/// Tasks that are spawned or notified from a non-runtime thread are scheduled using the
/// runtime's global queue. This metric returns the **current** number of tasks pending in
/// the global queue. As such, the returned value may increase or decrease as new tasks are
/// scheduled and processed.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::global_queue_depth`].
///
/// ##### Example
/// ```
/// # let current_thread = tokio::runtime::Builder::new_current_thread()
/// # .enable_all()
/// # .build()
/// # .unwrap();
/// #
/// # let multi_thread = tokio::runtime::Builder::new_multi_thread()
/// # .worker_threads(2)
/// # .enable_all()
/// # .build()
/// # .unwrap();
/// #
/// # for runtime in [current_thread, multi_thread] {
/// let handle = runtime.handle().clone();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of interval 1
/// # #[cfg(tokio_unstable)]
/// assert_eq!(interval.num_remote_schedules, 0);
///
/// // spawn a system thread outside of the runtime
/// std::thread::spawn(move || {
/// // spawn two tasks from this non-runtime thread
/// handle.spawn(async {});
/// handle.spawn(async {});
/// }).join().unwrap();
///
/// // flush metrics
/// drop(runtime);
///
/// let interval = next_interval(); // end of interval 2
/// # #[cfg(tokio_unstable)]
/// assert_eq!(interval.num_remote_schedules, 2);
/// # }
/// ```
pub global_queue_depth: usize,
/// Total amount of time elapsed since observing runtime metrics.
pub elapsed: Duration,
}
unstable {
/// The average duration of a single invocation of poll on a task.
///
/// This average is an exponentially-weighted moving average of the duration
/// of task polls on all runtime workers.
///
/// ##### Examples
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval();
/// println!("mean task poll duration is {:?}", interval.mean_poll_duration);
/// }
/// ```
pub mean_poll_duration: Duration,
/// The average duration of a single invocation of poll on a task on the
/// worker with the lowest value.
///
/// This average is an exponentially-weighted moving average of the duration
/// of task polls on the runtime worker with the lowest value.
///
/// ##### Examples
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval();
/// println!("min mean task poll duration is {:?}", interval.mean_poll_duration_worker_min);
/// }
/// ```
pub mean_poll_duration_worker_min: Duration,
/// The average duration of a single invocation of poll on a task on the
/// worker with the highest value.
///
/// This average is an exponentially-weighted moving average of the duration
/// of task polls on the runtime worker with the highest value.
///
/// ##### Examples
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval();
/// println!("max mean task poll duration is {:?}", interval.mean_poll_duration_worker_max);
/// }
/// ```
pub mean_poll_duration_worker_max: Duration,
/// A histogram of task polls since the previous probe grouped by poll
/// times.
///
/// Each bucket contains the configured [`Duration`] range and the count
/// of task polls that fell into that range during the interval. Use
/// [`PollTimeHistogram::as_counts`] to get just the raw counts as a
/// `Vec<u64>`.
///
/// This metric must be explicitly enabled when creating the runtime with
/// [`enable_metrics_poll_time_histogram`][tokio::runtime::Builder::enable_metrics_poll_time_histogram].
/// Bucket sizes are fixed and configured at the runtime level. See
/// configuration options on
/// [`runtime::Builder`][tokio::runtime::Builder::enable_metrics_poll_time_histogram].
///
/// ##### Examples
/// ```
/// use tokio::runtime::HistogramConfiguration;
/// use std::time::Duration;
///
/// let config = HistogramConfiguration::linear(Duration::from_micros(50), 12);
///
/// let rt = tokio::runtime::Builder::new_multi_thread()
/// .enable_metrics_poll_time_histogram()
/// .metrics_poll_time_histogram_configuration(config)
/// .build()
/// .unwrap();
///
/// rt.block_on(async {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval();
/// for bucket in interval.poll_time_histogram.buckets() {
/// println!("{:?}..{:?} => {} polls", bucket.range_start(), bucket.range_end(), bucket.count());
/// }
/// });
/// ```
pub poll_time_histogram: PollTimeHistogram,
/// The number of times worker threads unparked but performed no work before parking again.
///
/// The worker no-op count increases by one each time the worker unparks the thread but finds
/// no new work and goes back to sleep. This indicates a false-positive wake up.
///
/// ##### Definition
/// This metric is derived from the sum of [`tokio::runtime::RuntimeMetrics::worker_noop_count`]
/// across all worker threads.
///
/// ##### Examples
/// Unfortunately, there isn't a great way to reliably induce no-op parks, as they occur as
/// false-positive events under concurrency.
///
/// The below example triggers at least one park, though not necessarily any no-op parks, in the single-threaded runtime:
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// assert_eq!(next_interval().total_park_count, 0);
///
/// async {
/// tokio::time::sleep(std::time::Duration::from_millis(1)).await;
/// }.await;
///
/// assert!(next_interval().total_park_count > 0);
/// }
/// ```
///
/// The below example triggers at least one no-op park in the multi-threaded runtime:
/// ```
/// #[tokio::main(flavor = "multi_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// async {
/// tokio::time::sleep(std::time::Duration::from_millis(1)).await;
/// }.await;
///
/// assert!(next_interval().total_noop_count > 0);
/// }
/// ```
pub total_noop_count: u64,
/// The maximum number of times any worker thread unparked but performed no work before parking
/// again.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_noop_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_noop_count`]
/// - [`RuntimeMetrics::min_noop_count`]
pub max_noop_count: u64,
/// The minimum number of times any worker thread unparked but performed no work before parking
/// again.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_noop_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_noop_count`]
/// - [`RuntimeMetrics::max_noop_count`]
pub min_noop_count: u64,
/// The number of tasks worker threads stole from another worker thread.
///
/// The worker steal count increases by the number of stolen tasks each time the worker
/// has processed its scheduled queue and successfully steals more pending tasks from another
/// worker.
///
/// This metric only applies to the **multi-threaded** runtime and will always return `0` when
/// using the current thread runtime.
///
/// ##### Definition
/// This metric is derived from the sum of [`tokio::runtime::RuntimeMetrics::worker_steal_count`] for
/// all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::min_steal_count`]
/// - [`RuntimeMetrics::max_steal_count`]
///
/// ##### Examples
/// In the below example, a blocking channel is used to back up one worker thread:
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of first sampling interval
/// assert_eq!(interval.total_steal_count, 0);
/// assert_eq!(interval.min_steal_count, 0);
/// assert_eq!(interval.max_steal_count, 0);
///
/// // induce a steal
/// async {
/// let (tx, rx) = std::sync::mpsc::channel();
/// // Move to the runtime.
/// tokio::spawn(async move {
/// // Spawn the task that sends to the channel
/// tokio::spawn(async move {
/// tx.send(()).unwrap();
/// });
/// // Spawn a task that bumps the previous task out of the "next
/// // scheduled" slot.
/// tokio::spawn(async {});
/// // Blocking receive on the channel.
/// rx.recv().unwrap();
/// flush_metrics().await;
/// }).await.unwrap();
/// flush_metrics().await;
/// }.await;
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 2
/// println!("total={}; min={}; max={}", interval.total_steal_count, interval.min_steal_count, interval.max_steal_count);
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 3
/// println!("total={}; min={}; max={}", interval.total_steal_count, interval.min_steal_count, interval.max_steal_count);
/// }
///
/// async fn flush_metrics() {
/// let _ = tokio::time::sleep(std::time::Duration::ZERO).await;
/// }
/// ```
pub total_steal_count: u64,
/// The maximum number of tasks any worker thread stole from another worker thread.
///
/// ##### Definition
/// This metric is derived from the maximum of [`tokio::runtime::RuntimeMetrics::worker_steal_count`]
/// across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_steal_count`]
/// - [`RuntimeMetrics::min_steal_count`]
pub max_steal_count: u64,
/// The minimum number of tasks any worker thread stole from another worker thread.
///
/// ##### Definition
/// This metric is derived from the minimum of [`tokio::runtime::RuntimeMetrics::worker_steal_count`]
/// across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_steal_count`]
/// - [`RuntimeMetrics::max_steal_count`]
pub min_steal_count: u64,
/// The number of times worker threads stole tasks from another worker thread.
///
/// The worker steal-operation count increases by one each time the worker has processed its
/// scheduled queue and successfully steals more pending tasks from another worker.
///
/// This metric only applies to the **multi-threaded** runtime and will always return `0` when
/// using the current thread runtime.
///
/// ##### Definition
/// This metric is derived from the sum of [`tokio::runtime::RuntimeMetrics::worker_steal_operations`]
/// for all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::min_steal_operations`]
/// - [`RuntimeMetrics::max_steal_operations`]
///
/// ##### Examples
/// In the below example, a blocking channel is used to back up one worker thread:
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of first sampling interval
/// assert_eq!(interval.total_steal_operations, 0);
/// assert_eq!(interval.min_steal_operations, 0);
/// assert_eq!(interval.max_steal_operations, 0);
///
/// // induce a steal
/// async {
/// let (tx, rx) = std::sync::mpsc::channel();
/// // Move to the runtime.
/// tokio::spawn(async move {
/// // Spawn the task that sends to the channel
/// tokio::spawn(async move {
/// tx.send(()).unwrap();
/// });
/// // Spawn a task that bumps the previous task out of the "next
/// // scheduled" slot.
/// tokio::spawn(async {});
/// // Blocking receive on the channel.
/// rx.recv().unwrap();
/// flush_metrics().await;
/// }).await.unwrap();
/// flush_metrics().await;
/// }.await;
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 2
/// println!("total={}; min={}; max={}", interval.total_steal_operations, interval.min_steal_operations, interval.max_steal_operations);
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 3
/// println!("total={}; min={}; max={}", interval.total_steal_operations, interval.min_steal_operations, interval.max_steal_operations);
/// }
///
/// async fn flush_metrics() {
/// let _ = tokio::time::sleep(std::time::Duration::ZERO).await;
/// }
/// ```
pub total_steal_operations: u64,
/// The maximum number of times any worker thread stole tasks from another worker thread.
///
/// ##### Definition
/// This metric is derived from the maximum of [`tokio::runtime::RuntimeMetrics::worker_steal_operations`]
/// across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_steal_operations`]
/// - [`RuntimeMetrics::min_steal_operations`]
pub max_steal_operations: u64,
/// The minimum number of times any worker thread stole tasks from another worker thread.
///
/// ##### Definition
/// This metric is derived from the minimum of [`tokio::runtime::RuntimeMetrics::worker_steal_operations`]
/// across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_steal_operations`]
/// - [`RuntimeMetrics::max_steal_operations`]
pub min_steal_operations: u64,
/// The number of tasks scheduled from **outside** of the runtime.
///
/// The remote schedule count increases by one each time a task is woken from **outside** of
/// the runtime. This usually means that a task is spawned or notified from a non-runtime
/// thread and must be queued using the Runtime's global queue, which tends to be slower.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::remote_schedule_count`].
///
/// ##### Examples
/// In the below example, a remote schedule is induced by spawning a system thread, then
/// spawning a tokio task from that system thread:
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of first sampling interval
/// assert_eq!(interval.num_remote_schedules, 0);
///
/// // spawn a non-runtime thread
/// std::thread::spawn(move || {
/// // spawn two tasks from this non-runtime thread; these spawns happen
/// // outside the runtime and are therefore remote schedules
/// handle.spawn(async {});
/// handle.spawn(async {});
/// }).join().unwrap();
///
/// let interval = next_interval(); // end of second sampling interval
/// assert_eq!(interval.num_remote_schedules, 2);
///
/// let interval = next_interval(); // end of third sampling interval
/// assert_eq!(interval.num_remote_schedules, 0);
/// }
/// ```
pub num_remote_schedules: u64,
/// The number of tasks scheduled from worker threads.
///
/// The local schedule count increases by one each time a task is woken from **inside** of the
/// runtime. This usually means that a task is spawned or notified from within a runtime thread
/// and will be queued on the worker-local queue.
///
/// ##### Definition
/// This metric is derived from the sum of
/// [`tokio::runtime::RuntimeMetrics::worker_local_schedule_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::min_local_schedule_count`]
/// - [`RuntimeMetrics::max_local_schedule_count`]
///
/// ##### Examples
/// ###### With `current_thread` runtime
/// In the below example, two tasks are spawned from the context of a third tokio task:
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = { flush_metrics().await; next_interval() }; // end interval 1
/// assert_eq!(interval.total_local_schedule_count, 0);
///
/// let task = async {
/// tokio::spawn(async {}); // local schedule 1
/// tokio::spawn(async {}); // local schedule 2
/// };
///
/// let handle = tokio::spawn(task); // local schedule 3
///
/// let interval = { flush_metrics().await; next_interval() }; // end interval 2
/// assert_eq!(interval.total_local_schedule_count, 3);
///
/// let _ = handle.await;
///
/// let interval = { flush_metrics().await; next_interval() }; // end interval 3
/// assert_eq!(interval.total_local_schedule_count, 0);
/// }
///
/// async fn flush_metrics() {
/// tokio::task::yield_now().await;
/// }
/// ```
///
/// ###### With `multi_thread` runtime
/// In the below example, 100 tasks are spawned:
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of interval 1
/// assert_eq!(interval.total_local_schedule_count, 0);
///
/// use std::sync::atomic::{AtomicBool, Ordering};
/// static SPINLOCK: AtomicBool = AtomicBool::new(true);
///
/// // block the other worker thread
/// tokio::spawn(async {
/// while SPINLOCK.load(Ordering::SeqCst) {}
/// });
///
/// // spawn from inside a worker thread: with the multi-threaded runtime,
/// // the main task runs via `block_on` outside the workers, so tasks
/// // spawned directly from it would not count as *local* schedules
/// let _ = tokio::spawn(async {
/// // spawn 100 tasks
/// for _ in 0..100 {
/// tokio::spawn(async {});
/// }
/// // this spawns 1 more task
/// flush_metrics().await;
/// }).await;
///
/// // unblock the other worker thread
/// SPINLOCK.store(false, Ordering::SeqCst);
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 2
/// assert_eq!(interval.total_local_schedule_count, 100 + 1);
/// }
///
/// async fn flush_metrics() {
/// let _ = tokio::time::sleep(std::time::Duration::ZERO).await;
/// }
/// ```
pub total_local_schedule_count: u64,
/// The maximum number of tasks scheduled from any one worker thread.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_local_schedule_count`] for all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_local_schedule_count`]
/// - [`RuntimeMetrics::min_local_schedule_count`]
pub max_local_schedule_count: u64,
/// The minimum number of tasks scheduled from any one worker thread.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_local_schedule_count`] for all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_local_schedule_count`]
/// - [`RuntimeMetrics::max_local_schedule_count`]
pub min_local_schedule_count: u64,
/// The number of times worker threads saturated their local queues.
///
/// The worker overflow count increases by one each time the worker attempts to schedule a task
/// locally, but its local queue is full. When this happens, half of the
/// local queue is moved to the global queue.
///
/// This metric only applies to the **multi-threaded** scheduler.
///
/// ##### Definition
/// This metric is derived from the sum of
/// [`tokio::runtime::RuntimeMetrics::worker_overflow_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::min_overflow_count`]
/// - [`RuntimeMetrics::max_overflow_count`]
///
/// ##### Examples
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 1)]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of interval 1
/// assert_eq!(interval.total_overflow_count, 0);
///
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// // spawn a ton of tasks
/// let _ = tokio::spawn(async {
/// // we do this in a `tokio::spawn` because it is impossible to
/// // overflow the main task
/// for _ in 0..300 {
/// tokio::spawn(async {});
/// }
/// }).await;
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 2
/// assert_eq!(interval.total_overflow_count, 1);
/// }
///
/// async fn flush_metrics() {
/// let _ = tokio::time::sleep(std::time::Duration::from_millis(1)).await;
/// }
/// ```
pub total_overflow_count: u64,
/// The maximum number of times any one worker saturated its local queue.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_overflow_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_overflow_count`]
/// - [`RuntimeMetrics::min_overflow_count`]
pub max_overflow_count: u64,
/// The minimum number of times any one worker saturated its local queue.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_overflow_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_overflow_count`]
/// - [`RuntimeMetrics::max_overflow_count`]
pub min_overflow_count: u64,
/// The number of tasks that have been polled across all worker threads.
///
/// The worker poll count increases by one each time a worker polls a scheduled task.
///
/// ##### Definition
/// This metric is derived from the sum of
/// [`tokio::runtime::RuntimeMetrics::worker_poll_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::min_polls_count`]
/// - [`RuntimeMetrics::max_polls_count`]
///
/// ##### Examples
/// In the below example, 42 tasks are spawned and polled:
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 1
/// assert_eq!(interval.total_polls_count, 0);
/// assert_eq!(interval.min_polls_count, 0);
/// assert_eq!(interval.max_polls_count, 0);
///
/// const N: u64 = 42;
///
/// for _ in 0..N {
/// let _ = tokio::spawn(async {}).await;
/// }
///
/// let interval = { flush_metrics().await; next_interval() }; // end of interval 2
/// assert_eq!(interval.total_polls_count, N);
/// assert_eq!(interval.min_polls_count, N);
/// assert_eq!(interval.max_polls_count, N);
/// }
///
/// async fn flush_metrics() {
/// let _ = tokio::task::yield_now().await;
/// }
/// ```
pub total_polls_count: u64,
/// The maximum number of tasks that have been polled in any worker thread.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_poll_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_polls_count`]
/// - [`RuntimeMetrics::min_polls_count`]
pub max_polls_count: u64,
/// The minimum number of tasks that have been polled in any worker thread.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_poll_count`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_polls_count`]
/// - [`RuntimeMetrics::max_polls_count`]
pub min_polls_count: u64,
/// The total number of tasks currently scheduled in workers' local queues.
///
/// Tasks that are spawned or notified from within a runtime thread are scheduled using that
/// worker's local queue. This metric returns the **current** number of tasks pending in all
/// workers' local queues. As such, the returned value may increase or decrease as new tasks
/// are scheduled and processed.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::worker_local_queue_depth`].
///
/// ##### See also
/// - [`RuntimeMetrics::min_local_queue_depth`]
/// - [`RuntimeMetrics::max_local_queue_depth`]
///
/// ##### Example
///
/// ###### With `current_thread` runtime
/// The below example spawns 100 tasks:
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// const N: usize = 100;
///
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let mut next_interval = || intervals.next().unwrap();
///
/// let interval = next_interval(); // end of interval 1
/// assert_eq!(interval.total_local_queue_depth, 0);
///
/// for _ in 0..N {
/// tokio::spawn(async {});
/// }
/// let interval = next_interval(); // end of interval 2
/// assert_eq!(interval.total_local_queue_depth, N);
/// }
/// ```
///
/// ###### With `multi_thread` runtime
/// The below example spawns 100 tasks and observes them in the
/// local queue:
/// ```
/// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
/// async fn main() {
/// use std::sync::mpsc;
/// use tokio::sync::oneshot;
///
/// const N: usize = 100;
///
/// let handle = tokio::runtime::Handle::current();
///
/// // block one worker so the other is the only one running
/// let (block_tx, block_rx) = mpsc::channel::<()>();
/// let (started_tx, started_rx) = oneshot::channel();
/// tokio::spawn(async move {
/// let _ = started_tx.send(());
/// let _ = block_rx.recv();
/// });
/// let _ = started_rx.await;
///
/// // spawn + sample from the free worker thread
/// let (depth_tx, depth_rx) = oneshot::channel();
/// tokio::spawn(async move {
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
/// let _ = intervals.next().unwrap(); // baseline
///
/// for _ in 0..N {
/// tokio::spawn(async {});
/// }
///
/// let depth = intervals.next().unwrap().total_local_queue_depth;
/// let _ = depth_tx.send(depth);
/// });
///
/// let depth = depth_rx.await.unwrap();
///
/// // Tokio may place one spawned task in a LIFO slot rather than the
/// // local queue, which may not be reflected in `worker_local_queue_depth`,
/// // so accept N or N - 1.
/// assert!(depth == N || depth == N - 1, "depth = {depth}");
///
/// let _ = block_tx.send(());
/// }
/// ```
pub total_local_queue_depth: usize,
/// The maximum number of tasks currently scheduled in any worker's local queue.
///
/// ##### Definition
/// This metric is derived from the maximum of
/// [`tokio::runtime::RuntimeMetrics::worker_local_queue_depth`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_local_queue_depth`]
/// - [`RuntimeMetrics::min_local_queue_depth`]
pub max_local_queue_depth: usize,
/// The minimum number of tasks currently scheduled in any worker's local queue.
///
/// ##### Definition
/// This metric is derived from the minimum of
/// [`tokio::runtime::RuntimeMetrics::worker_local_queue_depth`] across all worker threads.
///
/// ##### See also
/// - [`RuntimeMetrics::total_local_queue_depth`]
/// - [`RuntimeMetrics::max_local_queue_depth`]
pub min_local_queue_depth: usize,
/// The number of tasks currently waiting to be executed in the runtime's blocking threadpool.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::blocking_queue_depth`].
pub blocking_queue_depth: usize,
/// The number of additional threads spawned by the runtime.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::num_blocking_threads`].
pub blocking_threads_count: usize,
/// The number of idle threads spawned by the runtime for `spawn_blocking` calls.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::num_idle_blocking_threads`].
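///
/// ##### Examples
/// A sketch (counts are timing-dependent, so they are printed rather than
/// asserted): after a `spawn_blocking` closure completes, the thread it ran
/// on is typically kept around idle for reuse.
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
///
/// // run (and finish) a task on the blocking threadpool
/// tokio::task::spawn_blocking(|| {}).await.unwrap();
///
/// let interval = intervals.next().unwrap();
/// # #[cfg(tokio_unstable)]
/// println!(
/// "blocking threads: {} ({} idle)",
/// interval.blocking_threads_count, interval.idle_blocking_threads_count,
/// );
/// }
/// ```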
pub idle_blocking_threads_count: usize,
/// The number of times that tasks have been forced to yield back to the scheduler after exhausting their task budgets.
///
/// This count starts at zero when the runtime is created and increases by one each time a task yields due to exhausting its budget.
///
/// The counter is monotonically increasing. It is never decremented or reset to zero.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::budget_forced_yield_count`].
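///
/// ##### Examples
/// A sketch of exhausting a task's budget (assumes the default cooperative
/// budget; the exact yield count is unspecified, so it is printed rather than
/// asserted): draining many already-ready channel messages in a single task
/// consumes that task's budget and forces it to yield.
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
///
/// let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();
/// for i in 0..1000 { tx.send(i).unwrap(); }
/// drop(tx);
///
/// // each ready `recv` consumes coop budget; exhaustion forces a yield
/// tokio::spawn(async move { while rx.recv().await.is_some() {} })
/// .await
/// .unwrap();
///
/// let interval = intervals.next().unwrap();
/// # #[cfg(tokio_unstable)]
/// println!("budget-forced yields: {}", interval.budget_forced_yield_count);
/// }
/// ```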
pub budget_forced_yield_count: u64,
/// The number of ready events processed by the runtime's I/O driver.
///
/// ##### Definition
/// This metric is derived from [`tokio::runtime::RuntimeMetrics::io_driver_ready_count`].
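///
/// ##### Examples
/// A sketch using loopback TCP to generate I/O readiness events (assumes
/// tokio's `net` feature; the event count is platform-dependent, so it is
/// printed rather than asserted):
/// ```
/// #[tokio::main(flavor = "current_thread")]
/// async fn main() {
/// use tokio::io::{AsyncReadExt, AsyncWriteExt};
///
/// let handle = tokio::runtime::Handle::current();
/// let monitor = tokio_metrics::RuntimeMonitor::new(&handle);
/// let mut intervals = monitor.intervals();
///
/// let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
/// let addr = listener.local_addr().unwrap();
///
/// // connect and accept concurrently, then pass one message through
/// let (client, server) = tokio::join!(
/// tokio::net::TcpStream::connect(addr),
/// listener.accept(),
/// );
/// let (mut client, (mut server, _)) = (client.unwrap(), server.unwrap());
/// client.write_all(b"ping").await.unwrap();
/// let mut buf = [0u8; 4];
/// server.read_exact(&mut buf).await.unwrap();
///
/// let interval = intervals.next().unwrap();
/// # #[cfg(tokio_unstable)]
/// println!("io driver ready events: {}", interval.io_driver_ready_count);
/// }
/// ```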
pub io_driver_ready_count: u64,
}
}
// Defines a struct whose `stable` fields are always present and whose
// `unstable` fields are gated behind `cfg(tokio_unstable)`.
macro_rules! define_semi_stable {
(
$(#[$($attributes:tt)*])*
$vis:vis struct $name:ident {
stable {
$($stable_name:ident: $stable_ty:ty),*
$(,)?
}
$(,)?
unstable {
$($unstable_name:ident: $unstable_ty:ty),*
$(,)?
}
}
) => {
$(#[$($attributes)*])*
$vis struct $name {
$(
$stable_name: $stable_ty,
)*
$(
#[cfg(tokio_unstable)]
#[cfg_attr(docsrs, doc(cfg(all(feature = "rt", tokio_unstable))))]
$unstable_name: $unstable_ty,
)*
}
};
}
define_semi_stable! {
/// Snapshot of per-worker metrics
#[derive(Debug, Default)]
struct Worker {
stable {
worker: usize,
total_park_count: u64,
total_busy_duration: Duration,
}
unstable {
total_noop_count: u64,
total_steal_count: u64,
total_steal_operations: u64,
total_local_schedule_count: u64,
total_overflow_count: u64,
total_polls_count: u64,
poll_time_histogram: Vec<u64>,
}
}
}
define_semi_stable! {
/// Iterator returned by [`RuntimeMonitor::intervals`].
///
/// See that method's documentation for more details.
#[derive(Debug)]
pub struct RuntimeIntervals {
stable {
runtime: runtime::RuntimeMetrics,
started_at: Instant,
workers: Vec<Worker>,
}
unstable {
// Number of tasks scheduled from *outside* of the runtime
num_remote_schedules: u64,
budget_forced_yield_count: u64,
io_driver_ready_count: u64,
// Cached bucket ranges, static config that doesn't change after runtime creation.
bucket_ranges: Vec<Range<Duration>>,
}
}
}
impl RuntimeIntervals {
fn probe(&mut self) -> RuntimeMetrics {
let now = Instant::now();
let mut metrics = RuntimeMetrics {
workers_count: self.runtime.num_workers(),
live_tasks_count: self.runtime.num_alive_tasks(),
elapsed: now.saturating_duration_since(self.started_at),
global_queue_depth: self.runtime.global_queue_depth(),
min_park_count: u64::MAX,
min_busy_duration: Duration::from_secs(1000000000),
..Default::default()
};
#[cfg(tokio_unstable)]
{
let num_remote_schedules = self.runtime.remote_schedule_count();
let budget_forced_yields = self.runtime.budget_forced_yield_count();
let io_driver_ready_events = self.runtime.io_driver_ready_count();
metrics.num_remote_schedules = num_remote_schedules.saturating_sub(self.num_remote_schedules);
metrics.min_noop_count = u64::MAX;
metrics.min_steal_count = u64::MAX;
metrics.min_local_schedule_count = u64::MAX;
metrics.min_overflow_count = u64::MAX;
metrics.min_polls_count = u64::MAX;
metrics.min_local_queue_depth = usize::MAX;
metrics.mean_poll_duration_worker_min = Duration::MAX;
metrics.poll_time_histogram = PollTimeHistogram::new(
self.bucket_ranges
.iter()
.map(|range| HistogramBucket::new(range.start, range.end, 0))
.collect(),
);
metrics.budget_forced_yield_count =
budget_forced_yields.saturating_sub(self.budget_forced_yield_count);
metrics.io_driver_ready_count = io_driver_ready_events.saturating_sub(self.io_driver_ready_count);
self.num_remote_schedules = num_remote_schedules;
self.budget_forced_yield_count = budget_forced_yields;
self.io_driver_ready_count = io_driver_ready_events;
}
self.started_at = now;
for worker in &mut self.workers {
worker.probe(&self.runtime, &mut metrics);
}
#[cfg(tokio_unstable)]
{
if metrics.total_polls_count == 0 {
debug_assert_eq!(metrics.mean_poll_duration, Duration::default());
metrics.mean_poll_duration_worker_max = Duration::default();
metrics.mean_poll_duration_worker_min = Duration::default();
}
}
metrics
}
}
impl Iterator for RuntimeIntervals {
type Item = RuntimeMetrics;
fn next(&mut self) -> Option<RuntimeMetrics> {
Some(self.probe())
}
}
impl RuntimeMonitor {
/// Creates a new [`RuntimeMonitor`].
pub fn new(runtime: &runtime::Handle) -> RuntimeMonitor {
let runtime = runtime.metrics();
RuntimeMonitor { runtime }
}
/// Produces an unending iterator of [`RuntimeMetrics`].
///
/// Each sampling interval is defined by the time elapsed between advancements of the iterator
/// produced by [`RuntimeMonitor::intervals`]. The item type of this iterator is [`RuntimeMetrics`],
/// which is a bundle of runtime metrics that describe *only* changes occurring within that sampling
/// interval.
///
/// # Example
///
/// ```
/// use std::time::Duration;
///
/// #[tokio::main]
/// async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
/// let handle = tokio::runtime::Handle::current();
/// // construct the runtime metrics monitor
/// let runtime_monitor = tokio_metrics::RuntimeMonitor::new(&handle);
///
/// // print runtime metrics every 500ms
/// {
/// tokio::spawn(async move {
/// for interval in runtime_monitor.intervals() {
/// // pretty-print the metric interval
/// println!("{:?}", interval);
/// // wait 500ms
/// tokio::time::sleep(Duration::from_millis(500)).await;
/// }
/// });
/// }
///
/// // await some tasks
/// tokio::join![
/// do_work(),
/// do_work(),
/// do_work(),
/// ];
///
/// Ok(())
/// }
///
/// async fn do_work() {
/// for _ in 0..25 {
/// tokio::task::yield_now().await;
/// tokio::time::sleep(Duration::from_millis(100)).await;
/// }
/// }
/// ```
pub fn intervals(&self) -> RuntimeIntervals {
let started_at = Instant::now();
let workers = (0..self.runtime.num_workers())
.map(|worker| Worker::new(worker, &self.runtime))
.collect();
RuntimeIntervals {
runtime: self.runtime.clone(),
started_at,
workers,
#[cfg(tokio_unstable)]
num_remote_schedules: self.runtime.remote_schedule_count(),
#[cfg(tokio_unstable)]
budget_forced_yield_count: self.runtime.budget_forced_yield_count(),
#[cfg(tokio_unstable)]
io_driver_ready_count: self.runtime.io_driver_ready_count(),
#[cfg(tokio_unstable)]
bucket_ranges: (0..self.runtime.poll_time_histogram_num_buckets())
.map(|i| self.runtime.poll_time_histogram_bucket_range(i))
.collect(),
}
}
}
impl Worker {
fn new(worker: usize, rt: &runtime::RuntimeMetrics) -> Worker {
#[allow(unused_mut, clippy::needless_update)]
let mut wrk = Worker {
worker,
total_park_count: rt.worker_park_count(worker),
total_busy_duration: rt.worker_total_busy_duration(worker),
..Default::default()
};
#[cfg(tokio_unstable)]
{
let poll_time_histogram = if rt.poll_time_histogram_enabled() {
vec![0; rt.poll_time_histogram_num_buckets()]
} else {
vec![]
};
wrk.total_noop_count = rt.worker_noop_count(worker);
wrk.total_steal_count = rt.worker_steal_count(worker);
wrk.total_steal_operations = rt.worker_steal_operations(worker);
wrk.total_local_schedule_count = rt.worker_local_schedule_count(worker);
wrk.total_overflow_count = rt.worker_overflow_count(worker);
wrk.total_polls_count = rt.worker_poll_count(worker);
wrk.poll_time_histogram = poll_time_histogram;
};
wrk
}
fn probe(&mut self, rt: &runtime::RuntimeMetrics, metrics: &mut RuntimeMetrics) {
macro_rules! metric {
( $sum:ident, $max:ident, $min:ident, $probe:ident ) => {{
let val = rt.$probe(self.worker);
let delta = val - self.$sum;
self.$sum = val;
metrics.$sum += delta;
if delta > metrics.$max {
metrics.$max = delta;
}
if delta < metrics.$min {
metrics.$min = delta;
}
}};
}
metric!(
total_park_count,
max_park_count,
min_park_count,
worker_park_count
);
metric!(
total_busy_duration,
max_busy_duration,
min_busy_duration,
worker_total_busy_duration
);
#[cfg(tokio_unstable)]
{
let mut worker_polls_count = self.total_polls_count;
let total_polls_count = metrics.total_polls_count;
metric!(
total_noop_count,
max_noop_count,
min_noop_count,
worker_noop_count
);
metric!(
total_steal_count,
max_steal_count,
min_steal_count,
worker_steal_count
);
metric!(
total_steal_operations,
max_steal_operations,
min_steal_operations,
worker_steal_operations
);
metric!(
total_local_schedule_count,
max_local_schedule_count,
min_local_schedule_count,
worker_local_schedule_count
);
metric!(
total_overflow_count,
max_overflow_count,
min_overflow_count,
worker_overflow_count
);
metric!(
total_polls_count,
max_polls_count,
min_polls_count,
worker_poll_count
);
// Get the number of polls since last probe
worker_polls_count = self.total_polls_count.saturating_sub(worker_polls_count);
// Update the mean task poll duration if there were polls
if worker_polls_count > 0 {
let val = rt.worker_mean_poll_time(self.worker);
if val > metrics.mean_poll_duration_worker_max {
metrics.mean_poll_duration_worker_max = val;
}
if val < metrics.mean_poll_duration_worker_min {
metrics.mean_poll_duration_worker_min = val;
}
// First, scale the current value down
let ratio = total_polls_count as f64 / metrics.total_polls_count as f64;
let mut mean = metrics.mean_poll_duration.as_nanos() as f64 * ratio;
// Add the scaled current worker's mean poll duration
let ratio = worker_polls_count as f64 / metrics.total_polls_count as f64;
mean += val.as_nanos() as f64 * ratio;
metrics.mean_poll_duration = Duration::from_nanos(mean as u64);
}
// Update the histogram counts if there were polls since last count
if worker_polls_count > 0 {
for (bucket, entry) in metrics.poll_time_histogram.buckets_mut().iter_mut().enumerate() {
let new = rt.poll_time_histogram_bucket_count(self.worker, bucket);
let delta = new.saturating_sub(self.poll_time_histogram[bucket]);
self.poll_time_histogram[bucket] = new;
entry.add_count(delta);
}
}
// The local queue depth is an absolute value, not a delta
let local_scheduled_tasks = rt.worker_local_queue_depth(self.worker);
metrics.total_local_queue_depth = metrics.total_local_queue_depth.saturating_add(local_scheduled_tasks);
if local_scheduled_tasks > metrics.max_local_queue_depth {
metrics.max_local_queue_depth = local_scheduled_tasks;
}
if local_scheduled_tasks < metrics.min_local_queue_depth {
metrics.min_local_queue_depth = local_scheduled_tasks;
}
// Blocking queue depth is an absolute value too
metrics.blocking_queue_depth = rt.blocking_queue_depth();
metrics.blocking_threads_count = rt.num_blocking_threads();
metrics.idle_blocking_threads_count = rt.num_idle_blocking_threads();
}
}
}
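`Worker::probe` folds each worker's mean poll time into the interval's `mean_poll_duration` as a poll-count-weighted running mean. A standalone sketch of that update rule with made-up numbers (the function name is illustrative, not part of this crate):

```rust
// Fold one worker's mean poll duration (ns) into a running mean that
// already covers `prior_polls` polls, weighting each side by its share
// of the combined poll count -- the same scheme `Worker::probe` uses.
fn fold_mean(running_mean_ns: f64, prior_polls: u64, worker_mean_ns: f64, worker_polls: u64) -> f64 {
    let total = (prior_polls + worker_polls) as f64;
    // Scale the running mean down by its share of the new total...
    let mut mean = running_mean_ns * (prior_polls as f64 / total);
    // ...then add the worker's mean, scaled by its share.
    mean += worker_mean_ns * (worker_polls as f64 / total);
    mean
}

fn main() {
    // Worker A already folded in: 10 polls at 100ns mean.
    // Worker B arrives: 30 polls at 300ns mean.
    // Combined: (10 * 100 + 30 * 300) / 40 = 250ns.
    assert_eq!(fold_mean(100.0, 10, 300.0, 30), 250.0);
}
```

Up to floating-point rounding, repeated folding yields the same result as a single weighted mean over all workers, regardless of probe order.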
derived_metrics!(
[RuntimeMetrics] {
stable {
/// Returns the ratio of the [`RuntimeMetrics::total_busy_duration`] to the [`RuntimeMetrics::elapsed`].
pub fn busy_ratio(&self) -> f64 {
self.total_busy_duration.as_nanos() as f64 / self.elapsed.as_nanos() as f64
}
}
unstable {
/// Returns the mean number of [polls][`RuntimeMetrics::total_polls_count`] per
/// [park][`RuntimeMetrics::total_park_count`], excluding [noop parks][`RuntimeMetrics::total_noop_count`].
pub fn mean_polls_per_park(&self) -> f64 {
let total_park_count = self.total_park_count.saturating_sub(self.total_noop_count);
if total_park_count == 0 {
0.0
} else {
self.total_polls_count as f64 / total_park_count as f64
}
}
}
}
);
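As a numeric sanity check of the two derived metrics above (the figures are invented for illustration): a runtime busy for 750ms of a 1s interval has a `busy_ratio` of 0.75, and 300 polls spread over 6 parks, 2 of which were noops, average 75 polls per non-noop park:

```rust
use std::time::Duration;

fn main() {
    // busy_ratio: total busy time divided by the interval's elapsed time.
    let busy = Duration::from_millis(750);
    let elapsed = Duration::from_secs(1);
    let busy_ratio = busy.as_nanos() as f64 / elapsed.as_nanos() as f64;
    assert_eq!(busy_ratio, 0.75);

    // mean_polls_per_park: polls divided by the parks that did real work
    // (total parks minus noop parks), guarding against division by zero.
    let (total_polls, total_parks, noop_parks) = (300u64, 6u64, 2u64);
    let parks = total_parks.saturating_sub(noop_parks);
    let polls_per_park = if parks == 0 {
        0.0
    } else {
        total_polls as f64 / parks as f64
    };
    assert_eq!(polls_per_park, 75.0);
}
```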
#[cfg(all(test, tokio_unstable, feature = "metrique-integration"))]
mod metrique_integration_tests {
use super::*;
use metrique::test_util::test_metric;
/// Compile-time regression: if a field is added whose type doesn't
/// implement `CloseValue`, this will fail to compile.
#[test]
fn metrique_integration_produces_expected_fields() {
let metrics = RuntimeMetrics {
workers_count: 4,
total_park_count: 100,
poll_time_histogram: PollTimeHistogram::new(vec![
HistogramBucket::new(Duration::from_micros(0), Duration::from_micros(100), 10),
HistogramBucket::new(Duration::from_micros(100), Duration::from_micros(200), 0),
HistogramBucket::new(Duration::from_micros(200), Duration::from_micros(500), 3),
]),
..Default::default()
};
let entry = test_metric(metrics);
// Stable fields
assert_eq!(entry.metrics["workers_count"], 4);
assert_eq!(entry.metrics["total_park_count"], 100);
assert_eq!(entry.metrics["elapsed"].as_f64(), 0.0);
assert_eq!(entry.metrics["total_busy_duration"].as_f64(), 0.0);
assert_eq!(entry.metrics["global_queue_depth"].as_u64(), 0);
// Unstable fields
assert_eq!(entry.metrics["mean_poll_duration"].as_f64(), 0.0);
assert_eq!(entry.metrics["total_steal_count"].as_u64(), 0);
assert_eq!(entry.metrics["total_polls_count"].as_u64(), 0);
// 2 non-zero buckets (count 10 and 3) should produce 2 observations
let hist = &entry.metrics["poll_time_histogram"];
assert_eq!(hist.distribution.len(), 2, "expected 2 non-zero buckets");
// midpoint of 0..100µs = 50µs, count = 10
match hist.distribution[0] {
metrique::writer::Observation::Repeated { total, occurrences } => {
assert_eq!(occurrences, 10);
assert!((total - 500.0).abs() < 0.01, "expected 50 * 10 = 500, got {total}");
}
other => panic!("expected Repeated, got {other:?}"),
}
// midpoint of 200..500µs = 350µs, count = 3
match hist.distribution[1] {
metrique::writer::Observation::Repeated { total, occurrences } => {
assert_eq!(occurrences, 3);
assert!((total - 1050.0).abs() < 0.01, "expected 350 * 3 = 1050, got {total}");
}
other => panic!("expected Repeated, got {other:?}"),
}
}
/// Collect `RuntimeMetrics` from a live Tokio runtime and verify the pipeline produces valid output.
#[cfg(feature = "rt")]
#[test]
fn metrique_end_to_end() {
let rt = tokio::runtime::Builder::new_current_thread()
.enable_all()
.enable_metrics_poll_time_histogram()
.build()
.unwrap();
rt.block_on(async {
let handle = tokio::runtime::Handle::current();
let monitor = RuntimeMonitor::new(&handle);
let mut intervals = monitor.intervals();
let _ = intervals.next().unwrap();
// Spawn tasks to create some work for the runtime to poll.
let mut metrics_with_polls = None;
for _ in 0..4 {
for _ in 0..25 {
tokio::spawn(async {
tokio::task::yield_now().await;
})
.await
.unwrap();
}
// Slow poll (>900µs) to land in the last histogram bucket.
tokio::spawn(async {
std::thread::sleep(Duration::from_millis(1));
})
.await
.unwrap();
let metrics = intervals.next().unwrap();
let total_polls: u64 = metrics.poll_time_histogram.buckets().iter().map(|b| b.count()).sum();
if total_polls > 0 {
metrics_with_polls = Some(metrics);
break;
}
}
let metrics = metrics_with_polls.expect("expected polls to be recorded within 4 sampled intervals");
let expected_workers_count = metrics.workers_count;
let expected_non_zero_buckets = metrics
.poll_time_histogram
.buckets()
.iter()
.filter(|b| b.count() > 0)
.count();
let expected_total_polls: u64 = metrics.poll_time_histogram.buckets().iter().map(|b| b.count()).sum();
assert!(expected_workers_count > 0);
assert!(expected_total_polls > 0);
let last_bucket = metrics.poll_time_histogram.buckets().last().unwrap();
// Sanity check: Tokio's last histogram bucket ends at Duration::from_nanos(u64::MAX)
assert_eq!(last_bucket.range_end(), Duration::from_nanos(u64::MAX));
assert!(last_bucket.count() > 0, "expected slow poll to land in last bucket");
let last_bucket_start_us = last_bucket.range_start().as_micros() as f64;
let last_bucket_count = last_bucket.count();
let entry = test_metric(metrics);
assert_eq!(entry.metrics["workers_count"], expected_workers_count as u64);
assert!(entry.metrics["elapsed"].as_f64() >= 0.0);
assert!(entry.metrics["total_busy_duration"].as_f64() >= 0.0);
let hist = &entry.metrics["poll_time_histogram"];
assert_eq!(hist.distribution.len(), expected_non_zero_buckets);
let observed_total_occurrences: u64 = hist
.distribution
.iter()
.map(|obs| match obs {
metrique::writer::Observation::Repeated { occurrences, .. } => *occurrences,
other => panic!("expected Repeated, got {other:?}"),
})
.sum();
assert_eq!(observed_total_occurrences, expected_total_polls);
// The last observation corresponds to the last histogram bucket.
// Verify it uses range_start as the representative value instead of a midpoint,
// since the last bucket range_end is Duration::from_nanos(u64::MAX).
let last_obs = hist.distribution.last().unwrap();
match last_obs {
metrique::writer::Observation::Repeated { total, occurrences } => {
assert_eq!(*occurrences, last_bucket_count);
let expected_total = last_bucket_start_us * last_bucket_count as f64;
assert!(
(total - expected_total).abs() < 0.01,
"last bucket should use range_start ({last_bucket_start_us}µs) as representative value, \
expected total={expected_total}, got {total}"
);
}
other => panic!("expected Repeated, got {other:?}"),
}
});
}
}
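The representative-value rule that the end-to-end test verifies can be distilled into a few lines: a bounded bucket reports its midpoint, while the unbounded last bucket (whose end is `Duration::from_nanos(u64::MAX)`) falls back to its start. The helper below is a hypothetical sketch, not part of this crate's API:

```rust
use std::time::Duration;

// Pick a representative duration for a histogram bucket: the midpoint for
// a bounded bucket, or the start when the bucket is effectively unbounded.
fn representative(start: Duration, end: Duration) -> Duration {
    if end == Duration::from_nanos(u64::MAX) {
        start
    } else {
        (start + end) / 2
    }
}

fn main() {
    // A bounded 0..100µs bucket is represented by its 50µs midpoint.
    assert_eq!(
        representative(Duration::from_micros(0), Duration::from_micros(100)),
        Duration::from_micros(50)
    );
    // The unbounded last bucket is represented by its start.
    assert_eq!(
        representative(Duration::from_micros(900), Duration::from_nanos(u64::MAX)),
        Duration::from_micros(900)
    );
}
```

Using the start instead of the midpoint avoids a wildly inflated observation for the catch-all bucket.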
================================================
FILE: src/task/metrics_rs_integration.rs
================================================
use std::{fmt, time::Duration};
use super::{TaskIntervals, TaskMetrics, TaskMonitor};
use crate::metrics_rs::{metric_refs, DEFAULT_METRIC_SAMPLING_INTERVAL};
/// A builder for the [`TaskMetricsReporter`] that wraps the [`TaskMonitor`], periodically
/// reporting [`TaskMetrics`] to any configured [metrics-rs] recorder.
///
/// ### Published Metrics
///
/// The published metrics are the fields of [`TaskMetrics`], but with the
/// `tokio_` prefix added, for example, `tokio_instrumented_count`. If you have multiple
/// [`TaskMonitor`]s then it is strongly recommended to give each [`TaskMonitor`] a unique metric
/// name or dimension value.
///
/// ### Usage
///
/// To upload metrics via [metrics-rs], you need to set up an exporter, which
/// is what actually exports the metrics out of the program. You must set
/// up the exporter before you call [`describe_and_run`].
///
/// You can find exporters in the [metrics-rs] docs. One such exporter
/// is [metrics_exporter_prometheus], which makes metrics visible
/// through Prometheus.
///
/// For example, you can export Prometheus metrics by listening on a local
/// Unix socket called `prometheus.sock` (which you can query for debugging
/// with `curl --unix-socket prometheus.sock localhost`), as follows:
///
/// ```
/// use std::time::Duration;
///
/// use metrics::Key;
///
/// #[tokio::main]
/// async fn main() {
/// metrics_exporter_prometheus::PrometheusBuilder::new()
/// .with_http_uds_listener("prometheus.sock")
/// .install()
/// .unwrap();
/// let monitor = tokio_metrics::TaskMonitor::new();
/// tokio::task::spawn(
/// tokio_metrics::TaskMetricsReporterBuilder::new(|name| {
/// let name = name.replacen("tokio_", "my_task_", 1);
/// Key::from_parts(name, &[("application", "my_app")])
/// })
/// // the default metric sampling interval is 30 seconds, which is
/// // too long for quick tests, so have it be 1 second.
/// .with_interval(std::time::Duration::from_secs(1))
/// .describe_and_run(monitor.clone()),
/// );
/// // Run some code
/// tokio::task::spawn(monitor.instrument(async move {
/// for _ in 0..1000 {
/// tokio::time::sleep(Duration::from_millis(10)).await;
/// }
/// }))
/// .await
/// .unwrap();
/// }
/// ```
///
/// [`describe_and_run`]: TaskMetricsReporterBuilder::describe_and_run
/// [metrics-rs]: metrics
/// [metrics_exporter_prometheus]: https://docs.rs/metrics_exporter_prometheus
pub struct TaskMetricsReporterBuilder {
interval: Duration,
metrics_transformer: Box<dyn FnMut(&'static str) -> metrics::Key + Send>,
}
impl fmt::Debug for TaskMetricsReporterBuilder {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("TaskMetricsReporterBuilder")
.field("interval", &self.interval)
// skip metrics_transformer field
.finish()
}
}
impl TaskMetricsReporterBuilder {
/// Creates a new [`TaskMetricsReporterBuilder`] with a custom "metrics transformer". The custom
/// transformer is used during [`build_with_monitor`](Self::build_with_monitor) to transform the
/// metric names into metric keys, for example to add dimensions. The string metric names used by
/// this reporter all start with `tokio_`. The default transformer is just
/// [`metrics::Key::from_static_name`].
///
/// For example, to attach a dimension named "application" with value "my_app", and to replace
/// `tokio_` with `my_task_`:
/// ```
/// # use metrics::Key;
///
/// #[tokio::main]
/// async fn main() {
/// metrics_exporter_prometheus::PrometheusBuilder::new()
/// .with_http_uds_listener("prometheus.sock")
/// .install()
/// .unwrap();
/// let monitor = tokio_metrics::TaskMonitor::new();
/// tokio::task::spawn(
/// tokio_metrics::TaskMetricsReporterBuilder::new(|name| {
/// let name = name.replacen("tokio_", "my_task_", 1);
/// Key::from_parts(name, &[("application", "my_app")])
/// })
/// .describe_and_run(monitor)
/// );
/// }
/// ```
pub fn new(transformer: impl FnMut(&'static str) -> metrics::Key + Send + 'static) -> Self {
TaskMetricsReporterBuilder {
interval: DEFAULT_METRIC_SAMPLING_INTERVAL,
metrics_transformer: Box::new(transformer),
}
}
/// Sets the metric sampling interval (default: 30 seconds).
///
/// Note that this is the interval at which metrics are *sampled* from
/// the Tokio task and then set on the [metrics-rs] recorder. Uploading the
/// metrics upstream is controlled by the exporter set up in the
/// application, and normally happens on a different period.
///
/// For example, if metrics are exported via Prometheus, collection
/// normally operates in a pull-based fashion: the actual collection
/// period is controlled by the Prometheus server, which periodically polls the
/// application's Prometheus exporter for the latest values of the metrics.
///
/// [metrics-rs]: metrics
pub fn with_interval(mut self, interval: Duration) -> Self {
self.interval = interval;
self
}
/// Build the [`TaskMetricsReporter`] with a specific [`TaskMonitor`]. This function will capture
/// the [`Counter`]s and [`Gauge`]s from the current [metrics-rs] recorder,
/// so if you are using [`with_local_recorder`], you should wrap this function and [`describe`]
/// with it.
///
/// [`Counter`]: metrics::Counter
/// [`Gauge`]: metrics::Gauge
/// [`Histogram`]: metrics::Histogram
/// [metrics-rs]: metrics
/// [`with_local_recorder`]: metrics::with_local_recorder
/// [`describe`]: Self::describe
#[must_use = "reporter does nothing unless run"]
pub fn build_with_monitor(mut self, monitor: TaskMonitor) -> TaskMetricsReporter {
TaskMetricsReporter {
interval: self.interval,
intervals: monitor.intervals(),
emitter: TaskMetricRefs::capture(&mut self.metrics_transformer),
}
}
/// Call [`describe_counter`] etc. to describe the emitted metrics.
///
/// Describing metrics makes the reporter attach descriptions and units to them,
/// which makes them easier to use. However, some reporters don't support
/// describing the same metric name more than once, so it is generally a good
/// idea to only call this function once per metric reporter.
///
/// [`describe_counter`]: metrics::describe_counter
/// [metrics-rs]: metrics
pub fn describe(mut self) -> Self {
TaskMetricRefs::describe(&mut self.metrics_transformer);
self
}
/// Runs the reporter (within the returned future), [describing] the metrics beforehand.
///
/// Describing metrics makes the reporter attach descriptions and units to them,
/// which makes them easier to use. However, some reporters don't support
/// describing the same metric name more than once. If you are emitting multiple
/// metrics via a single reporter, try to call [`describe`] once and [`run`] for each
/// task metrics reporter.
///
/// ### Working with a custom reporter
///
/// If you want to use a local metrics recorder, you shouldn't call this method;
/// instead, call `.describe().build_with_monitor(monitor)` within [`with_local_recorder`]
/// and then call [`run`] (see the docs on [`build_with_monitor`]).
///
/// [describing]: Self::describe
/// [`describe`]: Self::describe
/// [`build_with_monitor`]: Self::build_with_monitor
/// [`run`]: TaskMetricsReporter::run
/// [`with_local_recorder`]: metrics::with_local_recorder
#[cfg(feature = "rt")]
pub async fn describe_and_run(self, monitor: TaskMonitor) {
self.describe().build_with_monitor(monitor).run().await;
}
/// Runs the reporter (within the returned future), not describing the metrics beforehand.
///
/// ### Working with a custom reporter
///
/// If you want to use a local metrics recorder, you shouldn't call this method;
/// instead, call `.describe().build_with_monitor(monitor)` within [`with_local_recorder`]
/// and then call [`run`] (see the docs on [`build_with_monitor`]).
///
/// [`build_with_monitor`]: Self::build_with_monitor
/// [`run`]: TaskMetricsReporter::run
/// [`with_local_recorder`]: metrics::with_local_recorder
#[cfg(feature = "rt")]
pub async fn run_without_describing(self, monitor: TaskMonitor) {
self.build_with_monitor(monitor).run().await;
}
}
/// Collects metrics from a Tokio task and uploads them to [metrics_rs](metrics).
pub struct TaskMetricsReporter {
interval: Duration,
intervals: TaskIntervals,
emitter: TaskMetricRefs,
}
metric_refs! {
[TaskMetricRefs] [elapsed] [TaskMetrics] [()] {
stable {
/// The number of tasks instrumented.
instrumented_count: Gauge<Count> [],
/// The number of tasks dropped.
dropped_count: Gauge<Count> [],
/// The number of tasks polled for the first time.
first_poll_count: Gauge<Count> [],
/// The total duration elapsed between the instant tasks are instrumented, and the instant they are first polled.
total_first_poll_delay: Counter<Microseconds> [],
/// The total number of times that tasks idled, waiting to be awoken.
total_idled_count: Gauge<Count> [],
/// The total duration that tasks idled.
total_idle_duration: Counter<Microseconds> [],
/// The maximum idle duration that a task took.
max_idle_duration: Counter<Microseconds> [],
/// The total number of times that tasks were awoken (and then, presumably, scheduled for execution).
total_scheduled_count: Gauge<Count> [],
/// The total duration that tasks spent waiting to be polled after awakening.
total_scheduled_duration: Counter<Microseconds> [],
/// The total number of times that tasks were polled.
total_poll_count: Gauge<Count> [],
/// The total duration elapsed during polls.
total_poll_duration: Counter<Microseconds> [],
/// The total number of times that polling tasks completed swiftly.
total_fast_poll_count: Gauge<Count> [],
/// The total duration of fast polls.
total_fast_poll_duration: Counter<Microseconds> [],
/// The total number of times that polling tasks completed slowly.
total_slow_poll_count: Gauge<Count> [],
/// The total duration of slow polls.
total_slow_poll_duration: Counter<Microseconds> [],
/// The total count of tasks with short scheduling delays.
total_short_delay_count: Gauge<Count> [],
/// The total count of tasks with long scheduling delays.
total_long_delay_count: Gauge<Count> [],
/// The total duration of tasks with short scheduling delays.
total_short_delay_duration: Counter<Microseconds> [],
/// The total number of times that a task had a long scheduling duration.
total_long_delay_duration: Counter<Microseconds> [],
}
stable_derived {
/// The mean duration elapsed between the instant tasks are instrumented, and the instant they are first polled.
mean_first_poll_delay: Counter<Microseconds> [],
/// The mean duration of idles.
mean_idle_duration: Counter<Microseconds> [],
/// The mean duration that tasks spent waiting to be executed after awakening.
mean_scheduled_duration: Counter<Microseconds> [],
/// The mean duration of polls.
mean_poll_duration: Counter<Microseconds> [],
/// The ratio between the number of polls categorized as slow and fast.
slow_poll_ratio: Gauge<Percent> [],
/// The ratio of tasks exceeding [`long_delay_threshold`][TaskMonitor::long_delay_threshold].
long_delay_ratio: Gauge<Percent> [],
/// The mean duration of fast polls.
mean_fast_poll_duration: Counter<Microseconds> [],
/// The average time taken for a task with a short scheduling delay to be executed after being scheduled.
mean_short_delay_duration: Counter<Microseconds> [],
/// The mean duration of slow polls.
mean_slow_poll_duration: Counter<Microseconds> [],
/// The average scheduling delay for a task which takes a long time to start executing after being scheduled.
mean_long_delay_duration: Counter<Microseconds> [],
}
unstable {}
unstable_derived {}
}
}
impl fmt::Debug for TaskMetricsReporter {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("TaskMetricsReporter")
.field("interval", &self.interval)
// skip intervals field
.finish()
}
}
impl TaskMetricsReporter {
/// Collect and publish metrics once to the configured [metrics_rs](metrics) reporter.
pub fn run_once(&mut self) {
let metrics = self
.intervals
.next()
.expect("TaskIntervals::next never returns None");
self.emitter.emit(metrics, ());
}
/// Collect and publish metrics periodically to the configured [metrics_rs](metrics) reporter.
///
/// You probably want to run this within its own task (using [`tokio::task::spawn`]).
#[cfg(feature = "rt")]
pub async fn run(mut self) {
loop {
self.run_once();
tokio::time::sleep(self.interval).await;
}
}
}
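The `run` loop above couples one sample to one sleep. That sampling cadence is independent of a pull-based exporter's scrape period; the toy loop below (pure `std`, with invented tick counts) shows how a slower scraper only ever observes the most recent sample:

```rust
fn main() {
    let sample_every = 1; // reporter cadence, in ticks
    let scrape_every = 5; // exporter (e.g. Prometheus scrape) cadence, in ticks
    let mut latest_sample = 0;
    let mut scrapes = Vec::new();
    for tick in 1..=10 {
        if tick % sample_every == 0 {
            latest_sample = tick; // the reporter sets the metric's latest value
        }
        if tick % scrape_every == 0 {
            scrapes.push(latest_sample); // the exporter reads whatever is current
        }
    }
    // Ten samples were taken, but the exporter only observed two of them.
    assert_eq!(scrapes, vec![5, 10]);
}
```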
================================================
FILE: src/task.rs
================================================
use futures_util::task::{ArcWake, AtomicWaker};
use pin_project_lite::pin_project;
use std::future::Future;
use std::ops::Deref;
use std::pin::Pin;
use std::sync::atomic::{AtomicU64, Ordering::SeqCst};
use std::sync::Arc;
use std::task::{Context, Poll};
use tokio_stream::Stream;
#[cfg(feature = "rt")]
use tokio::time::{Duration, Instant};
use crate::derived_metrics::derived_metrics;
#[cfg(not(feature = "rt"))]
use std::time::{Duration, Instant};
#[cfg(feature = "metrics-rs-integration")]
pub(crate) mod metrics_rs_integration;
/// Monitors key metrics of instrumented tasks.
///
/// This struct is preferred for generating a variable number of monitors at runtime.
/// If you can construct a fixed count of `static` monitors instead, see [`TaskMonitorCore`].
///
/// ### Basic Usage
/// A [`TaskMonitor`] tracks key [metrics][TaskMetrics] of async tasks that have been
/// [instrumented][`TaskMonitor::instrument`] with the monitor.
///
/// In the below example, a [`TaskMonitor`] is [constructed][TaskMonitor::new] and used to
/// [instrument][TaskMonitor::instrument] three worker tasks; meanwhile, a fourth task
/// prints [metrics][TaskMetrics] in 500ms [intervals][TaskMonitor::intervals].
/// ```
/// use std::time::Duration;
///
/// #[tokio::main]
/// async fn main() {
/// // construct a metrics monitor
/// let metrics_monitor = tokio_metrics::TaskMonitor::new();
///
/// // print task metrics every 500ms
/// {
/// let metrics_monitor = metrics_monitor.clone();
/// tokio::spawn(async move {
/// for interval in metrics_monitor.intervals() {
/// // pretty-print the metric interval
/// println!("{:?}", interval);
/// // wait 500ms
/// tokio::time::sleep(Duration::from_millis(500)).await;
/// }
/// });
/// }
///
/// // instrument some tasks and await them
/// // note that the same TaskMonitor can be used for multiple tasks
/// tokio::join![
/// metrics_monitor.instrument(do_work()),
/// metrics_monitor.instrument(do_work()),
/// metrics_monitor.instrument(do_work())
/// ];
/// }
///
/// async fn do_work() {
/// for _ in 0..25 {
/// tokio::task::yield_now().await;
/// tokio::time::sleep(Duration::from_millis(100)).await;
/// }
/// }
/// ```
///
/// ### What should I instrument?
/// In most cases, you should construct a *distinct* [`TaskMonitor`] for each kind of key task.
///
/// #### Instrumenting a web application
/// For instance, a web service should have a distinct [`TaskMonitor`] for each endpoint. Within
/// each endpoint, it's prudent to additionally instrument major sub-tasks, each with their own
/// distinct [`TaskMonitor`]s. [*Why are my tasks slow?*](#why-are-my-tasks-slow) explores a
/// debugging scenario for a web service that takes this approach to instrumentation. This
/// approach is exemplified in the below example:
/// ```no_run
/// // The unabridged version of this snippet is in the examples directory of this crate.
///
/// #[tokio::main]
/// async fn main() {
/// // construct a TaskMonitor for root endpoint
/// let monitor_root = tokio_metrics::TaskMonitor::new();
///
/// // construct TaskMonitors for create_users endpoint
/// let monitor_create_user = CreateUserMonitors {
/// // monitor for the entire endpoint
/// route: tokio_metrics::TaskMonitor::new(),
/// // monitor for database insertion subtask
/// insert: tokio_metrics::TaskMonitor::new(),
/// };
///
/// // build our application with two instrumented endpoints
/// let app = axum::Router::new()
/// // `GET /` goes to `root`
/// .route("/", axum::routing::get({
/// let monitor = monitor_root.clone();
/// move || monitor.instrument(async { "Hello, World!" })
/// }))
/// // `POST /users` goes to `create_user`
/// .route("/users", axum::routing::post({
/// let monitors = monitor_create_user.clone();
/// let route = monitors.route.clone();
/// move |payload| {
/// route.instrument(create_user(payload, monitors))
/// }
/// }));
///
/// // print task metrics for each endpoint every 1s
/// let metrics_frequency = std::time::Duration::from_secs(1);
/// tokio::spawn(async move {
/// let root_intervals = monitor_root.intervals();
/// let create_user_route_intervals =
/// monitor_create_user.route.intervals();
/// let create_user_insert_intervals =
/// monitor_create_user.insert.intervals();
/// let create_user_intervals =
/// create_user_route_intervals.zip(create_user_insert_intervals);
///
/// let intervals = root_intervals.zip(create_user_intervals);
/// for (root_route, (create_user_route, create_user_insert)) in intervals {
/// println!("root_route = {:#?}", root_route);
/// println!("create_user_route = {:#?}", create_user_route);
/// println!("create_user_insert = {:#?}", create_user_insert);
/// tokio::time::sleep(metrics_frequency).await;
/// }
/// });
///
/// // run the server
/// let addr = std::net::SocketAddr::from(([127, 0, 0, 1], 3000));
/// let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
/// axum::serve(listener, app)
/// .await
/// .unwrap();
/// }
///
/// async fn create_user(
/// axum::Json(payload): axum::Json<CreateUser>,
/// monitors: CreateUserMonitors,
/// ) -> impl axum::response::IntoResponse {
/// let user = User { id: 1337, username: payload.username, };
/// // instrument inserting the user into the db:
/// let _ = monitors.insert.instrument(insert_user(user.clone())).await;
/// (axum::http::StatusCode::CREATED, axum::Json(user))
/// }
///
/// /* definitions of CreateUserMonitors, CreateUser and User omitted for brevity */
///
/// #
/// # #[derive(Clone)]
/// # struct CreateUserMonitors {
/// # // monitor for the entire endpoint
/// # route: tokio_metrics::TaskMonitor,
/// # // monitor for database insertion subtask
/// # insert: tokio_metrics::TaskMonitor,
/// # }
/// #
/// # #[derive(serde::Deserialize)] struct CreateUser { username: String, }
/// # #[derive(Clone, serde::Serialize)] struct User { id: u64, username: String, }
/// #
/// // insert the user into the database
/// async fn insert_user(_: User) {
/// /* implementation details elided */
/// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
/// }
/// ```
///
/// ### Why are my tasks slow?
/// **Scenario:** You track key, high-level metrics about customer response time. An alarm warns
/// you that P90 latency for an endpoint exceeds your targets. What is causing the increase?
///
/// #### Identifying the high-level culprits
/// A set of tasks will appear to execute more slowly if:
/// - they are taking longer to poll (i.e., they consume too much CPU time)
/// - they are waiting longer to be polled (e.g., they're waiting longer in tokio's scheduling
/// queues)
/// - they are waiting longer on external events to complete (e.g., asynchronous network requests)
///
/// The culprits, at a high level, may be some combination of these sources of latency. Fortunately,
/// you have instrumented the key tasks of each of your endpoints with distinct [`TaskMonitor`]s.
/// Using the monitors on the endpoint experiencing elevated latency, you begin by answering:
/// - [*Are my tasks taking longer to poll?*](#are-my-tasks-taking-longer-to-poll)
/// - [*Are my tasks spending more time waiting to be polled?*](#are-my-tasks-spending-more-time-waiting-to-be-polled)
/// - [*Are my tasks spending more time waiting on external events to complete?*](#are-my-tasks-spending-more-time-waiting-on-external-events-to-complete)
///
/// ##### Are my tasks taking longer to poll?
/// - **Did [`mean_poll_duration`][TaskMetrics::mean_poll_duration] increase?**
/// This metric reflects the mean poll duration. If it increased, it means that, on average,
/// individual polls tended to take longer. However, this does not necessarily imply increased
/// task latency: An increase in poll durations could be offset by fewer polls.
/// - **Did [`slow_poll_ratio`][TaskMetrics::slow_poll_ratio] increase?**
/// This metric reflects the proportion of polls that were 'slow'. If it increased, it means that
/// a greater proportion of polls performed excessive computation before yielding. This does not
/// necessarily imply increased task latency: An increase in the proportion of slow polls could be
/// offset by fewer or faster polls.
/// - **Did [`mean_slow_poll_duration`][TaskMetrics::mean_slow_poll_duration] increase?**
/// This metric reflects the mean duration of slow polls. If it increased, it means that, on
/// average, slow polls got slower. This does not necessarily imply increased task latency: An
/// increase in average slow poll duration could be offset by fewer or faster polls.
///
/// If so, [*why are my tasks taking longer to poll?*](#why-are-my-tasks-taking-longer-to-poll)
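///
/// As a rough sketch of how these ratio metrics relate to the underlying counters
/// ([`slow_poll_ratio`][TaskMetrics::slow_poll_ratio] is the number of slow polls divided by the
/// total number of polls; the counts below are invented for illustration):
/// ```
/// // hypothetical interval counters (illustrative values only)
/// let total_poll_count: u64 = 1_000; // all polls observed in the interval
/// let total_slow_poll_count: u64 = 25; // polls exceeding the slow-poll threshold
/// let total_slow_poll_duration_ns: u64 = 75_000_000; // 75ms spent in slow polls
///
/// // the proportion of polls that were slow
/// let slow_poll_ratio = total_slow_poll_count as f64 / total_poll_count as f64;
/// assert_eq!(slow_poll_ratio, 0.025);
///
/// // the mean duration of a slow poll, in nanoseconds
/// let mean_slow_poll_duration_ns = total_slow_poll_duration_ns / total_slow_poll_count;
/// assert_eq!(mean_slow_poll_duration_ns, 3_000_000); // 3ms per slow poll
/// ```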
///
/// ##### Are my tasks spending more time waiting to be polled?
/// - **Did [`mean_first_poll_delay`][TaskMetrics::mean_first_poll_delay] increase?**
/// This metric reflects the mean delay between the instant a task is first instrumented and the
/// instant it is first polled. If it increases, it means that, on average, tasks spent longer
/// waiting to be initially run.
/// - **Did [`mean_scheduled_duration`][TaskMetrics::mean_scheduled_duration] increase?**
/// This metric reflects the mean duration that tasks spent in the scheduled state. The
/// 'scheduled' state of a task is the duration between the instant a task is awoken and the
/// instant it is subsequently polled. If this metric increases, it means that, on average, tasks
/// spent longer in tokio's queues before being polled.
/// - **Did [`long_delay_ratio`][TaskMetrics::long_delay_ratio] increase?**
/// This metric reflects the proportion of scheduling delays which were 'long'. If it increased,
/// it means that a greater proportion of tasks experienced excessive delays before they could
/// execute after being woken. This does not necessarily indicate an increase in latency, as this
/// could be offset by fewer or faster task polls.
/// - **Did [`mean_long_delay_duration`][TaskMetrics::mean_long_delay_duration] increase?**
/// This metric reflects the mean duration of long delays. If it increased, it means that, on
/// average, long delays got even longer. This does not necessarily imply increased task latency:
/// an increase in average long delay duration could be offset by fewer or faster polls or more
/// short schedules.
///
/// If so, [*why are my tasks spending more time waiting to be polled?*](#why-are-my-tasks-spending-more-time-waiting-to-be-polled)
///
/// ##### Are my tasks spending more time waiting on external events to complete?
/// - **Did [`mean_idle_duration`][TaskMetrics::mean_idle_duration] increase?**
/// This metric reflects the mean duration that tasks spent in the idle state. The idle state is
/// the duration spanning the instant a task completes a poll, and the instant that it is next
/// awoken. Tasks inhabit this state when they are waiting for task-external events to complete
/// (e.g., an asynchronous sleep, a network request, file I/O, etc.). If this metric increases,
/// tasks, in aggregate, spent more time waiting for task-external events to complete.
///
/// If so, [*why are my tasks spending more time waiting on external events to complete?*](#why-are-my-tasks-spending-more-time-waiting-on-external-events-to-complete)
///
/// #### Digging deeper
/// Having [established the high-level culprits](#identifying-the-high-level-culprits), you now
/// search for further explanation...
///
/// ##### Why are my tasks taking longer to poll?
/// You observed that [your tasks are taking longer to poll](#are-my-tasks-taking-longer-to-poll).
/// The culprit is likely some combination of:
/// - **Your tasks are accidentally blocking.** Common culprits include:
///   1. Using the Rust standard library's [filesystem](https://doc.rust-lang.org/std/fs/) or
///      [networking](https://doc.rust-lang.org/std/net/) APIs.
///      These APIs are synchronous; use tokio's [filesystem](https://docs.rs/tokio/latest/tokio/fs/)
///      and [networking](https://docs.rs/tokio/latest/tokio/net/) APIs instead.
///   2. Calling [`block_on`](https://docs.rs/tokio/latest/tokio/runtime/struct.Handle.html#method.block_on).
///   3. Invoking `println!` or other synchronous logging routines.
///      Invocations of `println!` involve acquiring an exclusive lock on stdout, followed by a
///      synchronous write to stdout.
/// - **Your tasks are computationally expensive.** Common culprits include:
///   1. TLS/cryptographic routines
///   2. doing a lot of processing on bytes
///   3. synchronously calling non-tokio resources
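///
/// When the culprit is blocking or CPU-heavy work, one common remedy is moving that work onto
/// tokio's blocking thread pool so polls of async tasks stay short. A sketch (not taken from this
/// crate's examples; `expensive_hash` is a hypothetical stand-in for the expensive routine):
/// ```
/// fn expensive_hash(bytes: &[u8]) -> u64 {
///     // stand-in for TLS, compression, or other CPU-heavy byte processing
///     bytes.iter().map(|&b| b as u64).sum()
/// }
///
/// #[tokio::main]
/// async fn main() {
///     let payload = vec![7u8; 1024];
///     // run the expensive computation on the blocking pool, keeping the
///     // async worker threads free to poll other tasks promptly
///     let digest = tokio::task::spawn_blocking(move || expensive_hash(&payload))
///         .await
///         .unwrap();
///     assert_eq!(digest, 7 * 1024);
/// }
/// ```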
///
/// ##### Why are my tasks spending more time waiting to be polled?
/// You observed that [your tasks are spending more time waiting to be polled](#are-my-tasks-spending-more-time-waiting-to-be-polled),
/// suggesting some combination of:
/// - Your application is inflating the time elapsed between instrumentation and first poll.
/// - Your tasks are being scheduled into tokio's global queue.
/// - Other tasks are spending too long without yielding, thus backing up tokio's queues.
///
/// Start by asking: [*Is time-to-first-poll unusually high?*](#is-time-to-first-poll-unusually-high)
///
/// ##### Why are my tasks spending more time waiting on external events to complete?
/// You observed that [your tasks are spending more time waiting on external events to
/// complete](#are-my-tasks-spending-more-time-waiting-on-external-events-to-complete). But which
/// event? Fortunately, within the task experiencing increased idle times, you monitored several
/// sub-tasks with distinct [`TaskMonitor`]s. For each of these sub-tasks, [*you try to identify
/// the performance culprits...*](#identifying-the-high-level-culprits)
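///
/// Idle time can then be attributed to a specific dependency by instrumenting each awaited
/// sub-operation with its own monitor. A sketch (`fetch_user` and `fetch_page` are hypothetical):
/// ```ignore
/// async fn handle_request(
///     db_monitor: &tokio_metrics::TaskMonitor,
///     cache_monitor: &tokio_metrics::TaskMonitor,
/// ) {
///     // each external dependency gets its own monitor, so a rise in
///     // `mean_idle_duration` can be narrowed down to the dependency at fault
///     let user = db_monitor.instrument(fetch_user()).await;
///     let _page = cache_monitor.instrument(fetch_page(user)).await;
/// }
/// ```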
///
/// #### Digging even deeper
///
/// ##### Is time-to-first-poll unusually high?
/// Contrast these two metrics:
/// - **[`mean_first_poll_delay`][TaskMetrics::mean_first_poll_delay]**
/// This metric reflects the mean delay between the instant a task is first instrumented and the
/// instant it is *first* polled.
/// - **[`mean_scheduled_duration`][TaskMetrics::mean_scheduled_duration]**
/// This metric reflects the mean delay between the instant when tasks were awoken and the
/// instant they were subsequently polled.
///
/// If the former metric exceeds the latter (or increased unexpectedly more than the latter), then
/// start by investigating [*if your application is artificially delaying the time-to-first-poll*](#is-my-application-delaying-the-time-to-first-poll).
///
/// Otherwise, investigate [*if other tasks are polling too long without yielding*](#are-other-tasks-polling-too-long-without-yielding).
///
/// ##### Is my application delaying the time-to-first-poll?
/// You observed that [`mean_first_poll_delay`][TaskMetrics::mean_first_poll_delay] increased more
/// than [`mean_scheduled_duration`][TaskMetrics::mean_scheduled_duration] did. Your application may be
/// needlessly inflating the time elapsed between instrumentation and first poll. Are you
/// constructing (and instrumenting) tasks separately from awaiting or spawning them?
///
/// For instance, in the below example, the application induces 1 second delay between when `task`
/// is instrumented and when it is awaited:
/// ```rust
/// #[tokio::main]
/// async fn main() {
/// use tokio::time::Duration;
/// let monitor = tokio_metrics::TaskMonitor::new();
///
/// let task = monitor.instrument(async move {});
///
/// let one_sec = Duration::from_secs(1);
/// tokio::time::sleep(one_sec).await;
///
/// let _ = tokio::spawn(task).await;
///
/// assert!(monitor.cumulative().total_first_poll_delay >= one_sec);
/// }
/// ```
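///
/// By contrast, instrumenting at the moment of spawning introduces no artificial delay, so
/// time-to-first-poll reflects only scheduling time (a minimal sketch):
/// ```
/// #[tokio::main]
/// async fn main() {
///     use tokio::time::Duration;
///     let monitor = tokio_metrics::TaskMonitor::new();
///
///     // instrument and spawn in one step
///     let _ = tokio::spawn(monitor.instrument(async move {})).await;
///
///     // the observed first-poll delay is merely the scheduling delay
///     assert!(monitor.cumulative().total_first_poll_delay < Duration::from_secs(1));
/// }
/// ```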
///
/// Otherwise, [`mean_first_poll_delay`][TaskMetrics::mean_first_poll_delay] might be unusually high
/// because [*your application is spawning key tasks into tokio's global queue...*](#is-my-application-spawning-more-tasks-into-tokios-global-queue)
///
/// ##### Is my application spawning more tasks into tokio's global queue?
/// Tasks awoken from threads *not* managed by the tokio runtime are scheduled onto a slower,
/// global "injection" queue.
///
/// You may be notifying runtime tasks from off-runtime. For instance, given the following:
/// ```ignore
/// use tokio::sync::oneshot;
///
/// #[tokio::main]
/// async fn main() {
///     for _ in 0..100 {
///         let (tx, rx) = oneshot::channel();
///         tokio::spawn(async move {
///             let _ = tx.send(());
///         });
///
///         let _ = rx.await;
///     }
/// }
/// ```
/// One would expect this to run efficiently; however, the main task runs *off* of the runtime's
/// worker threads while the spawned tasks run *on* them, which means the snippet will run much
/// slower than:
/// ```ignore
/// use tokio::sync::oneshot;
///
/// #[tokio::main]
/// async fn main() {
///     tokio::spawn(async {
///         for _ in 0..100 {
///             let (tx, rx) = oneshot::channel();
///             tokio::spawn(async move {
///                 let _ = tx.send(());
///             });
///
///             let _ = rx.await;
///         }
///     })
///     .await
///     .unwrap();
/// }
/// ```
/// The slowdown is caused by the increased delay between the `rx` task being notified (in
/// `tx.send()`) and that task subsequently being polled.
///
/// ##### Are other tasks polling too long without yielding?
/// You suspect that your tasks are slow because they're backed up in tokio's scheduling queues. For
/// *each* of your application's [`TaskMonitor`]s you check to see [*if their associated tasks are
/// taking longer to poll...*](#are-my-tasks-taking-longer-to-poll)
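///
/// In practice this check can be a small loop over all of the application's monitors (a sketch;
/// the monitor names are hypothetical):
/// ```ignore
/// for (name, monitor) in [("db", &db_monitor), ("render", &render_monitor)] {
///     let metrics = monitor.cumulative();
///     // a rising mean poll duration on *any* monitor can back up the
///     // scheduling queues for every task on the runtime
///     println!("{name}: mean_poll_duration = {:?}", metrics.mean_poll_duration());
/// }
/// ```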
///
/// ### Limitations
/// The [`TaskMetrics`] type uses [`u64`] to represent both event counters and durations (measured
/// in nanoseconds). Consequently, event counters are accurate for ≤ [`u64::MAX`] events, and
/// durations are accurate for ≤ [`u64::MAX`] nanoseconds.
///
/// The counters and durations of [`TaskMetrics`] produced by [`TaskMonitor::cumulative`] increase
/// monotonically with each successive invocation of [`TaskMonitor::cumulative`]. Upon overflow,
/// counters and durations wrap.
///
/// The counters and durations of [`TaskMetrics`] produced by [`TaskMonitor::intervals`] are
/// calculated by computing the difference of metrics in successive invocations of
/// [`TaskMonitor::cumulative`]. If, within a monitoring interval, an event occurs more than
/// [`u64::MAX`] times, or a monitored duration exceeds [`u64::MAX`] nanoseconds, the metrics for
/// that interval will overflow and not be accurate.
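///
/// For intuition about scale: `u64::MAX` nanoseconds is on the order of centuries, so duration
/// overflow is rare in practice. A quick stdlib-only check (not part of this crate's API):
/// ```
/// // counters wrap on overflow
/// assert_eq!(u64::MAX.wrapping_add(1), 0);
///
/// // u64::MAX nanoseconds expressed in (Julian) years
/// let years = (u64::MAX as f64) / 1e9 / 60.0 / 60.0 / 24.0 / 365.25;
/// assert!(years > 584.0 && years < 585.0); // roughly 584.5 years
/// ```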
///
/// ##### Examples at the limits
/// Consider the [`TaskMetrics::total_first_poll_delay`] metric. This metric accurately reflects
/// delays between instrumentation and first-poll ≤ [`u64::MAX`] nanoseconds:
/// ```
/// use tokio::time::Duration;
///
/// #[tokio::main(flavor = "current_thread", start_paused = true)]
/// async fn main() {
/// let monitor = tokio_metrics::TaskMonitor::new();
/// let mut interval = monitor.intervals();
/// let mut next_interval = || interval.next().unwrap();
///
/// // construct and instrument a task, but do not `await` it
/// let task = monitor.instrument(async {});
///
/// // this is the maximum duration representable by tokio_metrics
/// let max_duration = Duration::from_nanos(u64::MAX);
///
/// // let's advance the clock by this amount and await `task`
/// tokio::time::advance(max_duration).await;
/// task.await;
///
/// // a delay of up to `u64::MAX` nanoseconds is accurately reflected
/// assert_eq!(next_interval().total_first_poll_delay, max_duration);
/// assert_eq!(monitor.cumulative().total_first_poll_delay, max_duration);
/// }
/// ```