Full Code of azat/chdig for AI

Repository: azat/chdig
Branch: main
Commit: 7394b22c63a3
Files: 89
Total size: 747.5 KB

Directory structure:
gitextract_a1a8yrqt/

├── .cargo/
│   ├── audit.toml
│   └── config.toml
├── .exrc
├── .github/
│   └── workflows/
│       ├── build.yml
│       ├── pre_release.yml
│       ├── pull_request.yml
│       └── release.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .yamllint
├── Cargo.toml
├── Documentation/
│   ├── Actions.md
│   ├── Bugs.md
│   ├── Developers.md
│   └── FAQ.md
├── LICENSE
├── Makefile
├── README.md
├── chdig-nfpm.yaml
├── rustfmt.toml
├── src/
│   ├── actions.rs
│   ├── bin.rs
│   ├── common/
│   │   ├── mod.rs
│   │   ├── relative_date_time.rs
│   │   ├── sparkline.rs
│   │   └── stopwatch.rs
│   ├── interpreter/
│   │   ├── background_runner.rs
│   │   ├── clickhouse.rs
│   │   ├── clickhouse_quirks.rs
│   │   ├── context.rs
│   │   ├── debug_metrics.rs
│   │   ├── flamegraph.rs
│   │   ├── mod.rs
│   │   ├── options.rs
│   │   ├── perfetto.rs
│   │   ├── query.rs
│   │   └── worker.rs
│   ├── lib.rs
│   ├── main.rs
│   ├── pastila.rs
│   ├── utils.rs
│   └── view/
│       ├── log_view.rs
│       ├── mod.rs
│       ├── navigation.rs
│       ├── provider.rs
│       ├── providers/
│       │   ├── asynchronous_inserts.rs
│       │   ├── background_schedule_pool.rs
│       │   ├── background_schedule_pool_log.rs
│       │   ├── backups.rs
│       │   ├── client.rs
│       │   ├── dictionaries.rs
│       │   ├── errors.rs
│       │   ├── logger_names.rs
│       │   ├── merges.rs
│       │   ├── mod.rs
│       │   ├── mutations.rs
│       │   ├── object_storage_queue.rs
│       │   ├── part_log.rs
│       │   ├── queries.rs
│       │   ├── replicas.rs
│       │   ├── replicated_fetches.rs
│       │   ├── replication_queue.rs
│       │   ├── server_logs.rs
│       │   ├── table_parts.rs
│       │   └── tables.rs
│       ├── queries_view.rs
│       ├── query_view.rs
│       ├── registry.rs
│       ├── search_history.rs
│       ├── settings_view.rs
│       ├── sql_query_view.rs
│       ├── summary_view.rs
│       ├── table_view.rs
│       ├── text_log_view.rs
│       └── utils.rs
├── tests/
│   └── configs/
│       ├── accept_invalid_certificate.yaml
│       ├── basic.xml
│       ├── basic.yaml
│       ├── chdig_basic.yaml
│       ├── chdig_empty.yaml
│       ├── chdig_partial.yaml
│       ├── connections.yaml
│       ├── empty.xml
│       ├── empty.yaml
│       ├── tls.xml
│       ├── tls.yaml
│       ├── unknown_directives.xml
│       └── unknown_directives.yaml
└── typos.toml

================================================
FILE CONTENTS
================================================

================================================
FILE: .cargo/audit.toml
================================================
# https://docs.rs/crate/cargo-audit/0.10.0/source/audit.toml.example
[advisories]
ignore = [
    # time: Potential segfault in the time crate
    # chdig should not be affected by this, waiting for upstream.
    "RUSTSEC-2020-0071",
    # ansi_term is Unmaintained
    "RUSTSEC-2021-0139",
    # term_size is Unmaintained
    "RUSTSEC-2020-0163",
    # stdweb is unmaintained
    "RUSTSEC-2020-0056",

    # Waiting for upstream
    # owning_ref: Multiple soundness issues in `owning_ref`
    "RUSTSEC-2022-0040",
    # nix: Out-of-bounds write in nix::unistd::getgrouplist
    "RUSTSEC-2021-0119",
    # rustc-serialize: Stack overflow in rustc_serialize when parsing deeply nested JSON
    "RUSTSEC-2022-0004",
    # atty: Potential unaligned read
    "RUSTSEC-2021-0145",
]


================================================
FILE: .cargo/config.toml
================================================
[build]
rustflags = ["--cfg", "tokio_unstable"]


================================================
FILE: .exrc
================================================
"
" Add this into your .vimrc, to allow vim handle this file.
"
" set exrc
" set secure " even after this this is kind of dangerous
"

set tabstop=4
set softtabstop=4
set shiftwidth=4
set expandtab

let detectindent_preferred_indent=4
let g:detectindent_preferred_expandtab=1


================================================
FILE: .github/workflows/build.yml
================================================
---
name: Build chdig

on:
  workflow_call:
    inputs: {}

env:
  CARGO_TERM_COLOR: always

jobs:
  lint:
    name: Run linters
    runs-on: ubuntu-22.04

    steps:
    - uses: actions/checkout@v3
      with:
        persist-credentials: false
    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true
    - name: cargo check
      run: cargo check
    - name: cargo clippy
      run: cargo clippy

  build-linux:
    name: Build Linux (x86_64)
    runs-on: ubuntu-22.04

    steps:
    - uses: actions/checkout@v3
      with:
        # To fetch tags, but can this be improved using blobless checkout?
        # [1]. But anyway, right now it is not important, and unlikely ever will be,
        # since the repository is small.
        #
        #   [1]: https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
        fetch-depth: 0
        persist-credentials: false

    # Workaround for https://github.com/actions/checkout/issues/882
    - name: Fix tags for release
      # will break on a lightweight tag
      run: git fetch origin +refs/tags/*:refs/tags/*

    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true

    - name: Install dependencies
      run: |
        # nfpm
        curl -sS -Lo /tmp/nfpm.deb "https://github.com/goreleaser/nfpm/releases/download/v2.43.4/nfpm_2.43.4_amd64.deb"
        sudo dpkg -i /tmp/nfpm.deb
        # for building cityhash for clickhouse-rs
        sudo apt-get install -y musl-tools
        # gcc cannot do cross compile, and there is no musl-g++ in musl-tools
        sudo ln -srf /usr/bin/clang /usr/bin/musl-g++
        # musl for static binaries
        rustup target add x86_64-unknown-linux-musl

    - name: Run tests
      run: make test

    - name: Build
      run: |
        set -x
        make packages target=x86_64-unknown-linux-musl
        ls -l
        declare -A mapping
        mapping[chdig*.x86_64.rpm]=chdig-latest.x86_64.rpm
        mapping[chdig*-x86_64.pkg.tar.zst]=chdig-latest-x86_64.pkg.tar.zst
        mapping[chdig*-x86_64.tar.gz]=chdig-latest-x86_64.tar.gz
        mapping[chdig*_amd64.deb]=chdig-latest_amd64.deb
        mapping[target/chdig]=chdig-amd64
        for pattern in "${!mapping[@]}"; do
            cp $pattern ${mapping[$pattern]}
        done

    - name: Check package
      run: |
        sudo dpkg -i chdig-latest_amd64.deb
        chdig --help

    - name: Archive Packages
      uses: actions/upload-artifact@v4
      with:
        name: linux-packages-amd64
        path: |
          chdig-amd64
          *.deb
          *.rpm
          *.tar.*

  build-linux-no-features:
    name: Build Linux (no features)
    runs-on: ubuntu-22.04

    steps:
    - uses: actions/checkout@v3
      with:
        persist-credentials: false
    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true
    - name: Run tests
      run: make test
    - name: Build
      run: |
        cargo build --no-default-features
    - name: Check package
      run: |
        cargo run --no-default-features -- --help

  build-macos-x86_64:
    name: Build MacOS (x86_64)
    runs-on: macos-15-intel

    steps:
    - uses: actions/checkout@v3
      with:
        # To fetch tags, but can this be improved using blobless checkout?
        # [1]. But anyway, right now it is not important, and unlikely ever will be,
        # since the repository is small.
        #
        #   [1]: https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
        fetch-depth: 0
        persist-credentials: false

    # Workaround for https://github.com/actions/checkout/issues/882
    - name: Fix tags for release
      # will break on a lightweight tag
      run: git fetch origin +refs/tags/*:refs/tags/*

    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true

    - name: Worker info
      run: |
        # SDK versions
        ls -al /Library/Developer/CommandLineTools/SDKs/

    - name: Build
      run: |
        set -x
        make deploy-binary
        cp target/chdig chdig-macos-x86_64

    - name: Check package
      run: |
        ./chdig-macos-x86_64 --help

    - name: Archive Packages
      uses: actions/upload-artifact@v4
      with:
        name: macos-packages-x86_64
        path: |
          chdig-macos-x86_64

  build-macos-arm64:
    name: Build MacOS (arm64)
    runs-on: macos-26

    steps:
    - uses: actions/checkout@v3
      with:
        # To fetch tags, but can this be improved using blobless checkout?
        # [1]. But anyway, right now it is not important, and unlikely ever will be,
        # since the repository is small.
        #
        #   [1]: https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
        fetch-depth: 0
        persist-credentials: false

    # Workaround for https://github.com/actions/checkout/issues/882
    - name: Fix tags for release
      # will break on a lightweight tag
      run: git fetch origin +refs/tags/*:refs/tags/*

    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true

    - name: Worker info
      run: |
        # SDK versions
        ls -al /Library/Developer/CommandLineTools/SDKs/

    - name: Build
      run: |
        set -x
        make deploy-binary
        cp target/chdig chdig-macos-arm64

    - name: Check package
      run: |
        ./chdig-macos-arm64 --help

    - name: Archive Packages
      uses: actions/upload-artifact@v4
      with:
        name: macos-packages-arm64
        path: |
          chdig-macos-arm64

  build-windows:
    name: Build Windows
    runs-on: windows-latest

    steps:
    - uses: actions/checkout@v3
      with:
        # To fetch tags, but can this be improved using blobless checkout?
        # [1]. But anyway, right now it is not important, and unlikely ever will be,
        # since the repository is small.
        #
        #   [1]: https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
        fetch-depth: 0
        persist-credentials: false

    # Workaround for https://github.com/actions/checkout/issues/882
    - name: Fix tags for release
      # will break on a lightweight tag
      run: git fetch origin +refs/tags/*:refs/tags/*

    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true

    - name: Build
      run: |
        make deploy-binary
        cp target/chdig.exe chdig-windows-x86_64.exe

    - name: Archive Packages
      uses: actions/upload-artifact@v4
      with:
        name: windows-packages-x86_64
        path: |
          chdig-windows-x86_64.exe

  build-linux-aarch64:
    name: Build Linux (aarch64)
    runs-on: ubuntu-22.04-arm

    steps:
    - uses: actions/checkout@v3
      with:
        # To fetch tags, but can this be improved using blobless checkout?
        # [1]. But anyway, right now it is not important, and unlikely ever will be,
        # since the repository is small.
        #
        #   [1]: https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
        fetch-depth: 0
        persist-credentials: false

    # Workaround for https://github.com/actions/checkout/issues/882
    - name: Fix tags for release
      # will break on a lightweight tag
      run: git fetch origin +refs/tags/*:refs/tags/*

    - uses: Swatinem/rust-cache@v2
      with:
        cache-on-failure: true

    - name: Install dependencies
      run: |
        # nfpm
        curl -sS -Lo /tmp/nfpm.deb "https://github.com/goreleaser/nfpm/releases/download/v2.43.4/nfpm_2.43.4_arm64.deb"
        sudo dpkg -i /tmp/nfpm.deb
        # for building cityhash for clickhouse-rs
        sudo apt-get install -y musl-tools
        # gcc cannot do cross compile, and there is no musl-g++ in musl-tools
        sudo ln -srf /usr/bin/clang /usr/bin/musl-g++
        # "Compiler family detection failed due to error: ToolNotFound: failed to find tool "aarch64-linux-musl-g++": No such file or directory"
        sudo ln -srf /usr/bin/clang /usr/bin/aarch64-linux-musl-g++
        # musl for static binaries
        rustup target add aarch64-unknown-linux-musl

    - name: Run tests
      run: make test

    - name: Build
      run: |
        set -x
        make packages target=aarch64-unknown-linux-musl
        ls -l
        declare -A mapping
        mapping[chdig*.aarch64.rpm]=chdig-latest.aarch64.rpm
        mapping[chdig*-aarch64.pkg.tar.zst]=chdig-latest-aarch64.pkg.tar.zst
        mapping[chdig*-aarch64.tar.gz]=chdig-latest-aarch64.tar.gz
        mapping[chdig*_arm64.deb]=chdig-latest_arm64.deb
        mapping[target/chdig]=chdig-aarch64
        for pattern in "${!mapping[@]}"; do
            cp $pattern ${mapping[$pattern]}
        done

    - name: Check package
      run: |
        sudo dpkg -i chdig-latest_arm64.deb
        chdig --help

    - name: Archive Packages
      uses: actions/upload-artifact@v4
      with:
        name: linux-packages-aarch64
        path: |
          chdig-aarch64
          *.deb
          *.rpm
          *.tar.*


================================================
FILE: .github/workflows/pre_release.yml
================================================
---
name: pre-release

on:
  push:
    branches:
    - main

jobs:
  build:
    uses: ./.github/workflows/build.yml

  publish-pre-release:
    name: Publish Pre Release
    runs-on: ubuntu-22.04

    permissions:
      contents: write

    needs:
    - build

    steps:
    - name: Download artifacts
      uses: actions/download-artifact@v4
    - uses: "marvinpinto/action-automatic-releases@latest"
      with:
        repo_token: "${{ secrets.GITHUB_TOKEN }}"
        prerelease: true
        automatic_release_tag: "latest"
        title: "Development Build"
        files: |
          macos-packages-x86_64/*
          macos-packages-arm64/*
          windows-packages-x86_64/*
          linux-packages-amd64/*
          linux-packages-aarch64/*


================================================
FILE: .github/workflows/pull_request.yml
================================================
---
name: pull_request

on:
  pull_request:
    types:
    - synchronize
    - reopened
    - opened
    branches:
    - main
    paths-ignore:
    - '**.md'
    - 'Documentation/**'

jobs:
  spellcheck:
    name: Spell Check with Typos
    runs-on: ubuntu-latest
    steps:
    - name: Checkout Actions Repository
      uses: actions/checkout@v4

    - name: Spell Check Repo
      uses: crate-ci/typos@v1.31.1
      with:
        config: typos.toml

  build:
    needs: spellcheck
    uses: ./.github/workflows/build.yml


================================================
FILE: .github/workflows/release.yml
================================================
---
name: release

on:
  push:
    tags:
    - "v*"

jobs:
  build:
    uses: ./.github/workflows/build.yml

  publish-release:
    name: Publish Release
    runs-on: ubuntu-22.04

    permissions:
      contents: write

    needs:
    - build

    steps:
    - name: Download artifacts
      uses: actions/download-artifact@v4
    - uses: "marvinpinto/action-automatic-releases@latest"
      with:
        repo_token: "${{ secrets.GITHUB_TOKEN }}"
        prerelease: false
        files: |
          macos-packages-x86_64/*
          macos-packages-arm64/*
          windows-packages-x86_64/*
          linux-packages-amd64/*
          linux-packages-aarch64/*

    - name: Generate PKGBUILD
      run: |
        set -x

        VERSION="${GITHUB_REF##*/}"
        VERSION="${VERSION#v}"
        SHA256_x86_64=$(sha256sum linux-packages-amd64/chdig-$VERSION-1-x86_64.pkg.tar.zst | cut -d' ' -f1)
        SHA256_aarch64=$(sha256sum linux-packages-aarch64/chdig-$VERSION-1-aarch64.pkg.tar.zst | cut -d' ' -f1)

        cat > PKGBUILD <<EOL
        # shellcheck disable=SC2034,SC2154
        # - SC2034 - appears unused.
        # - SC2154 - pkgdir is referenced but not assigned.

        # Maintainer: Azat Khuzhin <a3at.mail@gmail.com>
        pkgname=chdig-bin
        pkgver=$VERSION
        pkgrel=1
        pkgdesc="Dig into ClickHouse with TUI interface (binaries for latest stable version)"
        arch=('x86_64' 'aarch64')
        conflicts=("chdig")
        provides=("chdig")
        url="https://github.com/azat/chdig"
        license=('MIT')
        source_x86_64=("https://github.com/azat/chdig/releases/download/v\$pkgver/chdig-\$pkgver-1-x86_64.pkg.tar.zst")
        source_aarch64=("https://github.com/azat/chdig/releases/download/v\$pkgver/chdig-\$pkgver-1-aarch64.pkg.tar.zst")
        sha256sums_x86_64=('$SHA256_x86_64')
        sha256sums_aarch64=('$SHA256_aarch64')

        package() {
            tar -C "\$pkgdir" -xvf chdig-\$pkgver-1-\$(uname -m).pkg.tar.zst
            rm -f "\$pkgdir/.PKGINFO"
            rm -f "\$pkgdir/.MTREE"
        }
        # vim set: ts=4 sw=4 et
        EOL
        cat PKGBUILD
    - name: Publish to the AUR
      uses: KSXGitHub/github-actions-deploy-aur@v4.1.3
      if: ${{ github.event.repository.fork == false }}
      with:
        pkgname: chdig-bin
        pkgbuild: PKGBUILD
        commit_username: Azat Khuzhin
        commit_email: a3at.mail@gmail.com
        ssh_private_key: ${{ secrets.AUR_SSH_PRIVATE_KEY }}
        commit_message: Release ${{ github.ref_name }}
        # force_push: 'true'


================================================
FILE: .gitignore
================================================
# cargo
target
/vendor
# distribution
dist
# packages
*.deb
*.tar.*
*.tar
*.rpm
# intellij
.idea/


================================================
FILE: .pre-commit-config.yaml
================================================
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v4.5.0
  hooks:
  - id: check-byte-order-marker
  - id: check-yaml
  - id: end-of-file-fixer
  - id: mixed-line-ending
  - id: trailing-whitespace
- repo: https://github.com/pre-commit/pre-commit
  rev: v3.6.0
  hooks:
  - id: validate_manifest
- repo: https://github.com/doublify/pre-commit-rust
  rev: v1.0
  hooks:
  - id: fmt
    pass_filenames: false
  - id: cargo-check
  - id: clippy
- repo: https://github.com/adrienverge/yamllint.git
  rev: v1.35.1
  hooks:
  - id: yamllint


================================================
FILE: .yamllint
================================================
# vi: ft=yaml
---
extends: default

rules:
  indentation:
    spaces: 2
    level: error
    indent-sequences: false
  line-length:
    max: 250
  braces:
    max-spaces-inside: 1
  truthy:
    allowed-values: ['true', 'false', 'yes', 'no']
    check-keys: true
  comments:
    # this is useful to distinguish commented code from comments
    require-starting-space: false


================================================
FILE: Cargo.toml
================================================
[package]
name = "chdig"
authors = ["Azat Khuzhin <a3at.mail@gmail.com>"]
homepage = "https://github.com/azat/chdig"
repository = "https://github.com/azat/chdig"
readme = "README.md"
description = "Dig into ClickHouse with TUI interface"
license = "MIT"
version = "26.4.3"
edition = "2024"

[lib]
name = "chdig"
crate-type = ["staticlib", "lib"]
path = "src/lib.rs"

[[bin]]
name = "chdig"
path = "src/main.rs"

[features]
default = ["tls"]
tls = ["clickhouse-rs/tls-rustls"]
tokio-console = ["dep:console-subscriber", "tokio/tracing"]

[patch.crates-io]
cursive = { git = "https://github.com/azat-rust/cursive", branch = "chdig-next" }
cursive_core = { git = "https://github.com/azat-rust/cursive", branch = "chdig-next" }

[dependencies]
# Basic
anyhow = { version = "*", default-features = false, features = ["std"] }
libc = { version = "*", default-features = false }
size = { version = "*", default-features = false, features = ["std"] }
tempfile = { version = "*", default-features = false }
url = { version = "*", default-features = false }
humantime = { version = "*", default-features = false }
backtrace = { version = "*", default-features = false, features = ["std"] }
futures = { version = "*", default-features = false, features = ["std"] }
strfmt = { version = "*", default-features = false }
fuzzy-matcher = { version = "*", default-features = false }
# chrono/chrono-tz should match clickhouse-rs
chrono = { version = "0.4", default-features = false, features = ["std", "clock"] }
chrono-tz = { version = "0.8", default-features = false }
flexi_logger = { version = "0.27", default-features = false }
log = { version = "0.4", default-features = false }
futures-util = { version = "*", default-features = false }
semver = { version = "*", default-features = false }
serde = { version = "*", features = ["derive"] }
serde_json = { version = "*", default-features = false, features = ["std"] }
serde_yaml = { version = "*", default-features = false }
quick-xml = { version = "*", features = ["serialize"] }
percent-encoding = { version = "*", default-features = false }
regex = { version = "*", default-features = false, features = ["std"] }
# CLI
clap = { version = "*", default-features = false, features = ["derive", "env", "help", "usage", "std", "color", "error-context", "suggestions"] }
clap_complete = { version = "*", default-features = false }
# UI
cursive = { version = "*", default-features = false, features = ["crossterm-backend"] }
cursive-syntect = { version = "*", default-features = true }
unicode-width = "0.1"
cursive-flexi-logger-view = { git = "https://github.com/azat-rust/cursive-flexi-logger-view", branch = "next", default-features = false }
syntect = { version = "*", default-features = false, features = ["default-syntaxes", "default-themes"] }
arboard = { version = "*", default-features = false }
clickhouse-rs = { git = "https://github.com/azat-rust/clickhouse-rs", branch = "next", default-features = false, features = ["tokio_io"] }
tokio = { version = "*", default-features = false, features = ["macros"] }
console-subscriber = { version = "*", default-features = false, optional = true }
# Flamegraphs
flamelens = { git = "https://github.com/azat-rust/flamelens", branch = "diff-mode", default-features = false }
ratatui = { version = "0.29.0", features = ["unstable-rendered-line-info"] }
# Should **only** be used with flamelens, since cursive re-exports it, while flamelens does not
crossterm = { version = "0.28.1", features = ["use-dev-tty"] }
# Perfetto
perfetto_protos = { version = "*", default-features = false }
protobuf = { version = "3", default-features = false }
tiny_http = { version = "*", default-features = false }
# Sharing
aes-gcm = { version = "0.10", default-features = false, features = ["aes", "alloc"] }
rand = { version = "0.8", default-features = false, features = ["std", "std_rng"] }
base64 = { version = "0.22", default-features = false, features = ["std"] }

[dev-dependencies]
pretty_assertions = { version = "*", default-features = false, features = ["alloc"] }

[profile.release]
# Too slow and not worth it
lto = false

[lints.clippy]
needless_return = "allow"
type_complexity = "allow"
uninlined_format_args = "allow"

[lints.rust]
elided_lifetimes_in_paths = "deny"


================================================
FILE: Documentation/Actions.md
================================================
### Actions

`chdig` supports lots of actions; some have shortcuts, while others are
available only via `Ctrl-P` (fuzzy search over all actions). There are also
`F8` for query actions and `F2` for global actions, if you prefer the old school way.

### Shortcuts

Here is a list of the available shortcuts:

| Category        | Shortcut      | Description                                   |
|-----------------|---------------|-----------------------------------------------|
| Global Shortcuts| **F1**        | Show help                                     |
|                 | **F2**        | Views                                         |
|                 | **F8**        | Show actions                                  |
|                 | **Ctrl-p**    | Fuzzy actions                                 |
|                 | **F**         | CPU Server Flamegraph                         |
|                 |               | Real Server Flamegraph                        |
|                 |               | Memory Server Flamegraph                      |
|                 |               | Memory Sample Server Flamegraph               |
|                 |               | Jemalloc Sample Server Flamegraph             |
|                 |               | Events Server Flamegraph                      |
|                 |               | Live Server Flamegraph                        |
|                 |               | CPU Server Flamegraph in speedscope           |
|                 |               | Real Server Flamegraph in speedscope          |
|                 |               | Memory Server Flamegraph in speedscope        |
|                 |               | Memory Sample Server Flamegraph in speedscope |
|                 |               | Jemalloc Sample Server Flamegraph in speedscope |
|                 |               | Events Server Flamegraph in speedscope        |
|                 |               | Live Server Flamegraph in speedscope          |
| Actions         | **<Space>**   | Select                                        |
|                 | **-**         | Show all queries                              |
|                 | **+**         | Show queries on shards                        |
|                 | **/**         | Filter                                        |
|                 |               | Query details                                 |
|                 |               | Query profile events                          |
|                 | **P**         | Query processors                              |
|                 | **v**         | Query views                                   |
|                 | **C**         | Show CPU flamegraph                           |
|                 | **R**         | Show Real flamegraph                          |
|                 | **M**         | Show memory flamegraph                        |
|                 |               | Show memory sample flamegraph                 |
|                 |               | Show jemalloc sample flamegraph               |
|                 |               | Show events flamegraph                        |
|                 | **L**         | Show live flamegraph                          |
|                 |               | Show CPU flamegraph in speedscope             |
|                 |               | Show Real flamegraph in speedscope            |
|                 |               | Show memory flamegraph in speedscope          |
|                 |               | Show memory sample flamegraph in speedscope   |
|                 |               | Show jemalloc sample flamegraph in speedscope |
|                 |               | Show events flamegraph in speedscope          |
|                 |               | Show live flamegraph in speedscope            |
|                 | **Alt+E**     | Edit query and execute                        |
|                 | **S**         | Show query                                    |
|                 | **y**         | Copy query to clipboard                       |
|                 | **s**         | `EXPLAIN SYNTAX`                              |
|                 | **e**         | `EXPLAIN PLAN`                                |
|                 | **E**         | `EXPLAIN PIPELINE`                            |
|                 | **G**         | `EXPLAIN PIPELINE graph=1` (open in browser)  |
|                 | **I**         | `EXPLAIN INDEXES`                             |
|                 | **K**         | `KILL` query                                  |
|                 | **l**         | Show query logs                               |
|                 | **(**         | Increase the number of queries to render by 20 |
|                 | **)**         | Decrease the number of queries to render by 20 |
| Logs            | **-**         | Turn ON/OFF options:                          |
|                 |               | - `S` - toggle wrap mode                      |
|                 | **/**         | Forward search                                |
|                 | **?**         | Reverse search                                |
|                 | **s**         | Save logs to file                             |
|                 | **n**/**N**   | Move to next/previous match                   |
| Basic navigation| **j**/**k**   | Down/Up                                       |
|                 | **G**/**g**   | Move to the end/Move to the beginning         |
|                 | **PageDown**/**PageUp** | Move to the end/Move to the beginning |
|                 | **Home**      | Reset selection/follow item in table          |
| chdig controls  | **Esc**       | Back/Quit                                     |
|                 | **q**         | Back/Quit                                     |
|                 | **Q**         | Quit forcefully                               |
|                 | **Backspace** | Back                                          |
|                 | **p**         | Toggle pause                                  |
|                 | **r**         | Refresh                                       |
|                 | **T**         | Seek 10 mins backward                         |
|                 | **t**         | Seek 10 mins forward                          |
|                 | **Alt+t**     | Set time interval                             |
|                 | **~**         | chdig debug console                           |


================================================
FILE: Documentation/Bugs.md
================================================
### `--history` is broken in some versions

The reason is that in some ClickHouse versions the `merge()` function ignores aliases.


================================================
FILE: Documentation/Developers.md
================================================
## Developer Documentation

### Debugging async code with tokio-console

chdig supports [tokio-console](https://github.com/tokio-rs/console) for debugging async tasks and runtime behavior.

To enable tokio console support:

1. Build with the `tokio-console` feature:
   ```bash
   cargo build --features tokio-console
   ```

2. Run chdig:
   ```bash
   cargo run --features tokio-console
   ```

3. In a separate terminal, start tokio-console:
   ```bash
   # Install if needed
   cargo install tokio-console

   # Connect to the running application
   tokio-console
   ```
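
Note: `console-subscriber` only has an effect when the crate is compiled with
the `tokio_unstable` cfg. This repository already enables it via
`.cargo/config.toml`:

```toml
[build]
rustflags = ["--cfg", "tokio_unstable"]
```

If you build chdig outside of the repository checkout (so that this config is
not picked up), pass the flag explicitly, e.g.
`RUSTFLAGS="--cfg tokio_unstable" cargo build --features tokio-console`.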


================================================
FILE: Documentation/FAQ.md
================================================
### What is format of the URL accepted by `chdig`?

The simplest form is just **`localhost`**.

For a secure connection with a user and password _(note: passing the password on
the command line is not safe)_, use:

```sh
chdig -u 'user:password@clickhouse-host.com/?secure=true'
```

A full list of supported connection options is available [here](https://github.com/azat-rust/clickhouse-rs/?tab=readme-ov-file#dns).

_Note: This link currently points to my fork, as some changes have not yet been accepted upstream._

### Environment variables

A safer way to pass the password is via environment variables:


```sh
export CLICKHOUSE_USER='user'
export CLICKHOUSE_PASSWORD='password'
chdig -u 'clickhouse-host.com/?secure=true'
# or specify the port explicitly
chdig -u 'clickhouse-host.com:9440/?secure=true'
```

### What is --config (`CLICKHOUSE_CONFIG`)?

This is the standard configuration file for the [ClickHouse client](https://clickhouse.com/docs/interfaces/cli#configuration_files), e.g.:

```yaml
user: foo
password: bar
host: play
secure: true
```

_See also some examples and possible advanced use cases [here](/tests/configs)_

### What is --connection?

`--connection` allows you to use predefined connections, a feature supported by
`clickhouse-client` ([1], [2]).

Here is an example in `XML` format:

```xml
<clickhouse>
    <connections_credentials>
        <connection>
            <name>prod</name>
            <hostname>prod</hostname>
            <user>default</user>
            <password>secret</password>
            <!-- <secure>false</secure> -->
            <!-- <skip_verify>false</skip_verify> -->
            <!-- <ca_certificate></ca_certificate> -->
            <!-- <client_certificate></client_certificate> -->
            <!-- <client_private_key></client_private_key> -->
        </connection>
    </connections_credentials>
</clickhouse>
```

Or in `YAML`:

```yaml
---
connections_credentials:
  prod:
    name: prod
    hostname: prod
    user: default
    password: secret
    # secure: false
    # skip_verify: false
    # ca_certificate:
    # client_certificate:
    # client_private_key:
```

Later, instead of specifying `--url` (with the password in plain text, which is
highly discouraged), you can use `chdig --connection prod`.

  [1]: https://github.com/ClickHouse/ClickHouse/pull/45715
  [2]: https://github.com/ClickHouse/ClickHouse/pull/46480

### What is Perfetto export?

Pressing `X` in the queries view exports a timeline visualization to
[Perfetto UI](https://ui.perfetto.dev) — an open-source trace viewer that
provides a zoomable timeline, flamegraph visualization, and SQL-queryable trace
data. It runs entirely in the browser.

An embedded HTTP server starts on port 9001 (lazily, on first export) and serves
the binary protobuf trace. The browser opens automatically.

The export includes data from multiple ClickHouse system tables (when available):

| Source table | What it shows |
|---|---|
| In-memory queries | Query duration slices grouped by host/user |
| `system.opentelemetry_span_log` | Processor pipeline spans |
| `system.trace_log` (ProfileEvent) | Per-thread counter increments |
| `system.trace_log` (CPU/Real/Memory) | Stack trace samples (flamegraph in Perfetto) |
| `system.text_log` | Query log messages grouped by level |
| `system.query_metric_log` | Per-query metric snapshots |
| `system.part_log` | Part lifecycle events (NewPart, MergeParts, etc.) |
| `system.query_thread_log` | Per-thread execution with ProfileEvents |

Tables that don't exist are silently skipped — the export works with whatever
data is available.

When queries are selected with `Space`, only those queries are exported.

To get the richest traces, enable these ClickHouse settings for the queries you
want to analyze:

```sql
SET
    opentelemetry_start_trace_probability = 1,
    opentelemetry_trace_processors = 1,
    opentelemetry_trace_cpu_scheduling = 1,
    log_query_threads = 1,
    trace_profile_events = 1,
    query_metric_log_interval = 0
```

- `opentelemetry_start_trace_probability` / `opentelemetry_trace_processors` /
  `opentelemetry_trace_cpu_scheduling` — enable OpenTelemetry spans for the
  query execution pipeline (populates `system.opentelemetry_span_log`)
- `log_query_threads` — log per-thread execution info
  (populates `system.query_thread_log`)
- `trace_profile_events` — record ProfileEvent counter increments with
  timestamps into `system.trace_log`, giving precise per-event timelines
- `query_metric_log_interval` — controls periodic metric snapshots in
  `system.query_metric_log` (sampled every N milliseconds). Set to `0` to
  disable if you prefer the more accurate `trace_profile_events`. Set to e.g.
  `1000` (1 second) if you want periodic snapshots — note that these are
  sampled and less precise than `trace_profile_events`, but lighter on overhead

### What is a flamegraph?

It is best to start with [Brendan Gregg's site](https://www.brendangregg.com/flamegraphs.html) for a solid introduction to flamegraphs.

Below is a description of the various types of flamegraphs available in `chdig`:

- `Real` - Traces are captured at regular intervals (defined by [`query_profiler_real_time_period_ns`](https://clickhouse.com/docs/operations/settings/settings#query_profiler_real_time_period_ns)/[`global_profiler_real_time_period_ns`](https://clickhouse.com/docs/operations/server-configuration-parameters/settings#global_profiler_real_time_period_ns)) for each thread, regardless of whether the thread is actively running on the CPU
- `CPU` - Traces are captured only when a thread is actively executing on the CPU, based on the interval specified in [`query_profiler_cpu_time_period_ns`](https://clickhouse.com/docs/operations/settings/settings#query_profiler_cpu_time_period_ns)/[`global_profiler_cpu_time_period_ns`](https://clickhouse.com/docs/operations/server-configuration-parameters/settings#global_profiler_cpu_time_period_ns)
- `Memory` - Traces are captured after each [`memory_profiler_step`](https://clickhouse.com/docs/operations/settings/settings#memory_profiler_step)/[`total_memory_profiler_step`](https://clickhouse.com/docs/operations/server-configuration-parameters/settings#total_memory_profiler_step) bytes are allocated by the query or server
- `Live` - Real-time visualization of what the server is doing right now, from [`system.stack_trace`](https://clickhouse.com/docs/operations/system-tables/stack_trace)

See also:
- [Sampling Query Profiler](https://clickhouse.com/docs/operations/optimizing-performance/sampling-query-profiler)

_Note: for `Memory` `chdig` uses `memory_profiler_step` over `memory_profiler_sample_probability`, since the latter is disabled by default_
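
As a hedged illustration, the per-query profiler settings linked above (their
server-wide counterparts live in the server configuration) can be tuned like
this; the values are examples, not recommendations:

```sql
SET
    query_profiler_real_time_period_ns = 10000000, -- Real: sample every 10ms of wall-clock time
    query_profiler_cpu_time_period_ns = 10000000,  -- CPU: sample every 10ms of on-CPU time
    memory_profiler_step = 4194304;                -- Memory: sample every 4 MiB allocated
```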

### Why is IO wait reported as zero?

- You should ensure that ClickHouse uses one of the taskstats gathering methods:
  - procfs
  - netlink

- Also, on Linux 5.14 and newer you should enable the `kernel.task_delayacct` sysctl.
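
To verify whether taskstats are collected at all, one hedged check (assuming
`system.query_log` is enabled) is to look at the IO-wait ProfileEvent of recent
queries; non-zero values for IO-heavy queries indicate that collection works:

```sql
SELECT query_id, ProfileEvents['OSIOWaitMicroseconds'] AS io_wait_us
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 10;
```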

### How to copy text from `chdig`

By default `chdig` runs with terminal mouse mode enabled, which prevents normal
text selection. However, terminals provide a way to bypass it temporarily while
a key is held (usually some combination of `Alt`, `Shift` and/or `Ctrl`), so
find the one your terminal uses, hold it, and copy.

---

See also [bugs list](Bugs.md)


================================================
FILE: LICENSE
================================================
Copyright 2023 Azat Khuzhin

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the “Software”), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: Makefile
================================================
debug ?=
target ?= $(shell rustc -vV | sed -n 's|host: ||p')
# Parse the target (i.e. aarch64-unknown-linux-musl)
target_os := $(shell echo $(target) | cut -d'-' -f3)
target_libc := $(shell echo $(target) | cut -d'-' -f4)
target_arch := $(shell echo $(target) | cut -d'-' -f1)
host_arch := $(shell uname -m)

# Version normalization for deb/rpm:
# - trim "v" prefix
# - first "-" replace with "+"
# - second "-" replace with "~"
#
# Refs: https://www.debian.org/doc/debian-policy/ch-controlfields.html
CHDIG_VERSION=$(shell git describe | sed -e 's/^v//' -e 's/-/+/' -e 's/-/~/')
# Refs: https://wiki.archlinux.org/title/Arch_package_guidelines#Package_versioning
CHDIG_VERSION_ARCH=$(shell git describe | sed -e 's/^v//' -e 's/-/./g')

$(info DESTDIR = $(DESTDIR))
$(info CHDIG_VERSION = $(CHDIG_VERSION))
$(info CHDIG_VERSION_ARCH = $(CHDIG_VERSION_ARCH))
$(info debug = $(debug))
$(info target = $(target))
$(info host_arch = $(host_arch))

ifdef debug
  cargo_build_opts :=
  target_type := debug
else
  cargo_build_opts := --release
  target_type = release
endif

ifneq ($(target),)
  cargo_build_opts += --target $(target)
endif

# Normalize architecture names
norm_target_arch := $(shell echo $(target_arch) | sed -e 's/^aarch64$$/arm64/' -e 's/^x86_64$$/amd64/')
norm_host_arch := $(shell echo $(host_arch) | sed -e 's/^aarch64$$/arm64/' -e 's/^x86_64$$/amd64/')

$(info Normalized target arch: $(norm_target_arch))
$(info Normalized host arch: $(norm_host_arch))

# Cross compilation requires some tricks:
# - use lld linker
# - explicitly specify path for libstdc++
# (Also some packages that you can find in the GitHub Actions manifests)
#
# TODO: allow to use clang/gcc from PATH
ifneq ($(norm_host_arch),$(norm_target_arch))
  $(info Cross compilation for $(target_arch))

  # Detect the latest lld
  LLD := $(shell ls /usr/bin/ld.lld /usr/bin/ld.lld-* 2>/dev/null | sort -V | tail -n1)
  $(info LLD = $(LLD))
  # Detect the latest clang
  CLANG := $(shell ls /usr/bin/clang /usr/bin/clang-* 2>/dev/null | grep -e '/clang$$' -e '/clang-[0-9]\+$$' | sort -V | tail -n1)
  $(info CLANG = $(CLANG))
  CLANG_CXX := $(shell ls /usr/bin/clang++ /usr/bin/clang++-* 2>/dev/null | grep -e '/clang++$$' -e '/clang++-[0-9]\+$$' | sort -V | tail -n1)
  $(info CLANG_CXX = $(CLANG_CXX))

  export CC := $(CLANG)
  export CXX := $(CLANG_CXX)
  export RUSTFLAGS := -C linker=$(LLD)

  # /usr/aarch64-linux-gnu/lib64/ (archlinux aarch64-linux-gnu-gcc)
  prefix := /usr/$(target_arch)-$(target_os)-gnu/lib
  ifneq ($(wildcard $(prefix)),)
    export RUSTFLAGS := $(RUSTFLAGS) -C link-args=-L$(prefix)
  endif
  prefix := /usr/$(target_arch)-$(target_os)-gnu/lib64
  ifneq ($(wildcard $(prefix)),)
    export RUSTFLAGS := $(RUSTFLAGS) -C link-args=-L$(prefix)
  endif

  # /usr/lib/gcc-cross/aarch64-linux-gnu/$gcc (ubuntu)
  latest_gcc_cross_version := $(shell ls -d /usr/lib/gcc-cross/$(target_arch)-$(target_os)-gnu/* 2>/dev/null | sort -V | tail -n1 | xargs -I{} basename {})
  prefix := /usr/lib/gcc-cross/$(target_arch)-$(target_os)-gnu/$(latest_gcc_cross_version)
  ifneq ($(wildcard $(prefix)),)
    export RUSTFLAGS := $(RUSTFLAGS) -C link-args=-L$(prefix)
  endif

  # NOTE: there is also https://musl.cc/aarch64-linux-musl-cross.tgz

  $(info RUSTFLAGS = $(RUSTFLAGS))
endif

.PHONY: build build_completion deploy-binary chdig install run \
	deb rpm archlinux tar packages

# This should be the first target (since ".DEFAULT_GOAL" is supported only since 3.80+)
default: build
.DEFAULT_GOAL: default

chdig:
	cargo build $(cargo_build_opts)

run: chdig
	cargo run $(cargo_build_opts)

build: chdig deploy-binary

test:
	@if command -v cargo-nextest >/dev/null 2>&1; then \
		cargo nextest run $(cargo_build_opts); \
	else \
		cargo test $(cargo_build_opts); \
	fi

build_completion: chdig
	cargo run $(cargo_build_opts) -- --completion bash > target/chdig.bash-completion

install: chdig build_completion
	install -m755 -D -t $(DESTDIR)/bin target/$(target)/$(target_type)/chdig
	install -m644 -D -t $(DESTDIR)/share/bash-completion/completions target/chdig.bash-completion

deploy-binary: chdig
	cp target/$(target)/$(target_type)/chdig target/chdig

packages: build build_completion deb rpm archlinux tar

deb: build
	CHDIG_VERSION=${CHDIG_VERSION} CHDIG_ARCH=${norm_target_arch} nfpm package --config chdig-nfpm.yaml --packager deb
rpm: build
	CHDIG_VERSION=${CHDIG_VERSION} CHDIG_ARCH=${target_arch} nfpm package --config chdig-nfpm.yaml --packager rpm
archlinux: build
	CHDIG_VERSION=${CHDIG_VERSION_ARCH} CHDIG_ARCH=${target_arch} nfpm package --config chdig-nfpm.yaml --packager archlinux
.ONESHELL:
tar: archlinux
	CHDIG_VERSION=${CHDIG_VERSION_ARCH} CHDIG_ARCH=${target_arch} nfpm package --config chdig-nfpm.yaml --packager archlinux
	tmp_dir=$(shell mktemp -d /tmp/chdig-${CHDIG_VERSION}.XXXXXX)
	echo "Temporary directory for tar package: $$tmp_dir"
	tar -C $$tmp_dir -vxf chdig-${CHDIG_VERSION_ARCH}-1-${target_arch}.pkg.tar.zst usr
	# Strip /tmp/chdig-${CHDIG_VERSION}.XXXXXX and replace it with chdig-${CHDIG_VERSION}
	# (and we need to remove leading slash)
	tar --show-transformed-names --transform "s#^$${tmp_dir#/}#chdig-${CHDIG_VERSION}-${target_arch}#" -vczf chdig-${CHDIG_VERSION}-${target_arch}.tar.gz $$tmp_dir
	echo rm -fr $$tmp_dir

help:
	@echo "Usage: make [debug=1] [target=<TRIPLE>]"


================================================
FILE: README.md
================================================
### chdig

Dig into [ClickHouse](https://github.com/ClickHouse/ClickHouse/) with TUI interface.

### Installation

`chdig` is also available as part of `clickhouse` - `clickhouse chdig`, but
that version may be slightly outdated.

Pre-built packages (`.deb`, `.rpm`, `archlinux`, `.tar.gz`) and standalone
binaries for `Linux` and `macOS` are available for both `x86_64` and `aarch64`
architectures.

The latest [unstable release can be found on GitHub](https://github.com/azat/chdig/releases/tag/latest).

*See also the complete list of [releases](https://github.com/azat/chdig/releases).*

<details>

<summary>Package repositories (AUR, Scoop, Homebrew)</summary>

#### archlinux user repository (aur)

There are also AUR packages for Arch Linux:
- [**chdig-latest-bin**](https://aur.archlinux.org/packages/chdig-latest-bin) - binary artifact of the latest upstream build
- [chdig-git](https://aur.archlinux.org/packages/chdig-git) - build from sources
- [chdig-bin](https://aur.archlinux.org/packages/chdig-bin) - binary of the latest stable version

*Note: `chdig-latest-bin` is recommended, since it is the latest available version and does not require a toolchain to compile*

#### scoop (windows)

```
scoop bucket add extras
scoop install extras/chdig
```

#### brew (macos)

```
brew install chdig
```

</details>

### Demo

[![asciicast](https://github.com/azat/chdig/releases/download/v26.1.1/chdig-v26.1.1.gif)](https://asciinema.org/a/OvQIBpQCAtFU8AyF)

### Motivation

The idea came from everyday digging into various ClickHouse issues.

ClickHouse has practically a whole universe of introspection tools, and it is
easy to forget some of them. At first I came up with some
[slides](https://azat.sh/presentations/2022-know-your-clickhouse/) and a
picture (to attract your attention), by analogy with what [Brendan
Gregg](https://www.brendangregg.com/linuxperf.html) did for Linux:

[![Know Your ClickHouse](https://azat.sh/presentations/2022-know-your-clickhouse/Know-Your-ClickHouse.png)](https://azat.sh/presentations/2022-know-your-clickhouse/Know-Your-ClickHouse.png)

*Note: the picture and the presentation were made at the beginning of 2022,
so they may not include some newer introspection tools*.

But this requires you to dig into lots of places, and even though you will
learn a lot along the way, it does not solve the problem of forgetfulness. So I
came up with this simple TUI interface that tries to make the process simpler.

`chdig` can be used not only to debug problems, but also for regular
introspection, like `top` on Linux.

### Features

- `top` like interface (or [`csysdig`](https://github.com/draios/sysdig) to be more precise)
- [Flamegraphs](Documentation/FAQ.md#what-is-flamegraph) (CPU/Real/Memory/Live) in TUI (thanks to [flamelens](https://github.com/ys-l/flamelens))
- [Perfetto support](Documentation/FAQ.md#what-is-perfetto-export)
- Share flamegraphs (using [pastila.nl](https://pastila.nl/) and [speedscope](https://www.speedscope.app/))
- Share logs via [pastila.nl](https://pastila.nl/)
- Share query pipelines (using [viz.js](https://github.com/mdaines/viz-js) and [pastila.nl](https://pastila.nl/))
- Cluster support (`--cluster`) - aggregate data from all hosts in the cluster
- Historical support (`--history`) - includes rotated `system.*_log_*` tables
- `clickhouse-client` compatibility (including `--connection`) for options and configuration files

And there is a huge bunch of [ideas](https://github.com/azat/chdig/issues).

**Note: this is in a pre-alpha stage, so everything may change (keyboard
shortcuts, views, color scheme and, of course, features)**

### Requirements

If something does not work (e.g. your version of `ClickHouse` is too old), consider upgrading.

*Note: the oldest version that has been tested is 21.2 (at some point in time)*

### Build from sources

```
cargo build
```

> [!NOTE]
> If you see an error like `failed to authenticate when downloading repository: git@github.com:azat-rust/cursive`,
> it is likely because your local Git config is rewriting `https://github.com/` to `git@github.com:`:
>
> ```
> [url "git@github.com:"]
>     insteadOf = https://github.com/
> ```
>
> Cargo's built-in Git library does not handle this case gracefully.
> You can either remove that config entry or tell Cargo to use the system Git client instead:
>
> ```toml
> # ~/.cargo/config.toml
> [net]
> git-fetch-with-cli = true
> ```

For development and debugging information, see [Documentation/Developers.md](Documentation/Developers.md).

## References

- [FAQ](Documentation/FAQ.md)
- [Bugs list](Documentation/Bugs.md)
- [Shortcuts](Documentation/Actions.md#shortcuts)
- [Developers](Documentation/Developers.md)


================================================
FILE: chdig-nfpm.yaml
================================================
---
name: "chdig"
arch: "${CHDIG_ARCH}"
platform: "linux"
version: "${CHDIG_VERSION}"
homepage: "https://github.com/azat/chdig"
license: "Apache"
priority: "optional"
maintainer: "Azat Khuzhin <a3at.mail@gmail.com>"
description: |
  Dig into ClickHouse queries with TUI interface.

contents:
- src: target/chdig
  dst: /usr/bin/chdig
  file_info:
    mode: 0755
- src: target/chdig.bash-completion
  dst: /usr/share/bash-completion/completions/chdig
  file_info:
    mode: 0644
- src: README.md
  dst: /usr/share/doc/chdig/README.md
  file_info:
    mode: 0644


================================================
FILE: rustfmt.toml
================================================
edition = "2018"


================================================
FILE: src/actions.rs
================================================
use cursive::{event::Event, theme::Effect, utils::markup::StyledString};

#[derive(Clone)]
pub struct ActionDescription {
    pub text: &'static str,
    pub event: Event,
}

impl ActionDescription {
    pub fn event_string(&self) -> String {
        match self.event {
            Event::Char(c) => {
                // - It is hard to understand that nothing is a space
                // - And it overlaps with no shortcut actions
                if c == ' ' {
                    return "<Space>".to_string();
                } else {
                    return c.to_string();
                }
            }
            Event::CtrlChar(c) => {
                return format!("Ctrl+{}", c);
            }
            Event::AltChar(c) => {
                return format!("Alt+{}", c);
            }
            Event::Key(k) => {
                return format!("{:?}", k);
            }
            Event::Unknown(_) => {
                return "".to_string();
            }
            _ => panic!("{:?} is not supported", self.event),
        }
    }
    pub fn preview_styled(&self) -> StyledString {
        let mut text = StyledString::default();
        text.append_styled(format!("{:>10}", self.event_string()), Effect::Bold);
        text.append_plain(format!(" - {}\n", self.text));
        return text;
    }
}


================================================
FILE: src/bin.rs
================================================
use anyhow::{Result, anyhow};
use backtrace::Backtrace;
use flexi_logger::{FileSpec, LogSpecification, Logger};
use std::ffi::OsString;
use std::panic::{self, PanicHookInfo};
use std::sync::Arc;

use cursive::view::Resizable;

use crate::{
    interpreter::{ClickHouse, Context, ContextArc, options},
    view::Navigation,
};

// NOTE: hyper also has trace_span() which will not be overwritten
//
// FIXME: should be initialized before options, but options parsing prints the completion, which
// should be done before the terminal is switched to raw mode.
const DEFAULT_RUST_LOG: &str = "trace,cursive=info,clickhouse_rs=info,hyper=info,rustls=info";

fn panic_hook(info: &PanicHookInfo<'_>) {
    let location = info.location().unwrap();

    let msg = if let Some(s) = info.payload().downcast_ref::<&'static str>() {
        *s
    } else if let Some(s) = info.payload().downcast_ref::<String>() {
        &s[..]
    } else {
        "Box<Any>"
    };

    // NOTE: we need to add \r since the terminal is in raw mode.
    // (another option is to restore the terminal state with termios)
    let stacktrace: String = format!("{:?}", Backtrace::new()).replace('\n', "\n\r");

    print!(
        "\n\rthread '<unnamed>' panicked at '{}', {}\n\r{}",
        msg, location, stacktrace
    );
}

pub async fn chdig_main_async<I, T>(itr: I) -> Result<()>
where
    I: IntoIterator<Item = T>,
    T: Into<OsString> + Clone,
{
    let options = options::parse_from(itr)?;

    let mut logger_handle = None;
    // We start logging to file earlier for better introspection.
    if let Some(log) = &options.service.log {
        logger_handle = Some(
            Logger::try_with_env_or_str(DEFAULT_RUST_LOG)?
                .log_to_file(FileSpec::try_from(log)?)
                .format(flexi_logger::with_thread)
                .start()?,
        );
    }

    // Initialize it before any backends (otherwise backend will prepare terminal for TUI app, and
    // panic hook will clear the screen).
    let clickhouse = Arc::new(ClickHouse::new(options.clickhouse.clone()).await?);

    let server_warnings = match clickhouse.get_warnings().await {
        Ok(w) => w,
        Err(e) => {
            log::warn!("Failed to fetch system.warnings: {}", e);
            Vec::new()
        }
    };

    panic::set_hook(Box::new(|info| {
        panic_hook(info);
    }));

    let backend = cursive::backends::try_default().map_err(|e| anyhow!(e.to_string()))?;
    let mut siv = cursive::CursiveRunner::new(cursive::Cursive::new(), backend);

    if options.service.log.is_none() {
        logger_handle = Some(
            Logger::try_with_env_or_str(DEFAULT_RUST_LOG)?
                .log_to_writer(cursive_flexi_logger_view::cursive_flexi_logger(&siv))
                .format(flexi_logger::colored_with_thread)
                .start()?,
        );
    }

    // FIXME: should be initialized before cursive, otherwise on error it clears the terminal.
    let context: ContextArc = Context::new(options, clickhouse, siv.cb_sink().clone()).await?;

    siv.chdig(context.clone());

    if !server_warnings.is_empty() {
        let text = server_warnings.join("\n");
        siv.add_layer(
            cursive::views::Dialog::around(cursive::views::ScrollView::new(
                cursive::views::TextView::new(text),
            ))
            .title("Server warnings")
            .button("OK", |s| {
                s.pop_layer();
            })
            .max_width(80),
        );
    }

    log::info!("chdig started");
    siv.run();

    if let Some(logger_handle) = logger_handle {
        // Suppress the error from cursive_flexi_logger_view - "cursive callback sink is closed!"
        // Note, cursive_flexi_logger_view does not implement shutdown(), so it will not help.
        logger_handle.set_new_spec(LogSpecification::parse("none")?);
    }

    return Ok(());
}

fn collect_args(argc: c_int, argv: *const *const c_char) -> Vec<OsString> {
    use std::ffi::CStr;
    unsafe {
        std::slice::from_raw_parts(argv, argc as usize)
            .iter()
            .map(|&ptr| {
                let c_str = CStr::from_ptr(ptr);
                let string = c_str.to_string_lossy().into_owned();
                OsString::from(string)
            })
            .collect()
    }
}

use std::os::raw::{c_char, c_int};
#[unsafe(no_mangle)]
pub extern "C" fn chdig_main(argc: c_int, argv: *const *const c_char) -> c_int {
    #[cfg(feature = "tokio-console")]
    console_subscriber::init();

    tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(chdig_main_async(collect_args(argc, argv)))
        .unwrap_or_else(|e| {
            eprintln!("{}", e);
            std::process::exit(1);
        });
    return 0;
}


================================================
FILE: src/common/mod.rs
================================================
mod relative_date_time;
pub mod sparkline;
mod stopwatch;

pub use relative_date_time::RelativeDateTime;
pub use relative_date_time::parse_datetime_or_date;
pub use stopwatch::Stopwatch;


================================================
FILE: src/common/relative_date_time.rs
================================================
use chrono::{DateTime, Local, NaiveDate, NaiveDateTime, TimeDelta};
use std::{
    fmt::Display,
    ops::{AddAssign, SubAssign},
    str::FromStr,
};

pub fn parse_datetime_or_date(value: &str) -> Result<DateTime<Local>, String> {
    let mut errors = Vec::new();
    // Parse without timezone
    match value.parse::<NaiveDateTime>() {
        Ok(datetime) => return Ok(datetime.and_local_timezone(Local).unwrap()),
        Err(err) => errors.push(err),
    }
    // Parse *with* timezone
    match value.parse::<DateTime<Local>>() {
        Ok(datetime) => return Ok(datetime),
        Err(err) => errors.push(err),
    }
    // Parse as date
    match value.parse::<NaiveDate>() {
        Ok(date) => {
            return Ok(date
                .and_hms_opt(0, 0, 0)
                .unwrap()
                .and_local_timezone(Local)
                .unwrap());
        }
        Err(err) => errors.push(err),
    }
    return Err(format!(
        "Expected a valid RFC3339-formatted (YYYY-MM-DDTHH:MM:SS[.ssssss][±hh:mm|Z]) datetime or date while parsing '{}':\n{}",
        value,
        errors
            .iter()
            .map(|e| e.to_string())
            .collect::<Vec<String>>()
            .join("\n")
    ));
}

#[derive(Clone, Debug)]
pub struct RelativeDateTime {
    date_time: Option<DateTime<Local>>,
    // Always subtracted
    offset: Option<TimeDelta>,
}

impl RelativeDateTime {
    pub fn new(offset: Option<TimeDelta>) -> Self {
        Self {
            date_time: None,
            offset,
        }
    }

    pub fn get_date_time(&self) -> Option<DateTime<Local>> {
        self.date_time
    }

    pub fn to_editable_string(&self) -> String {
        match (&self.date_time, &self.offset) {
            (None, Some(offset)) => {
                humantime::format_duration(offset.to_std().unwrap_or_default()).to_string()
            }
            (Some(dt), _) => dt.format("%Y-%m-%dT%H:%M:%S").to_string(),
            (None, None) => String::new(),
        }
    }

    pub fn to_sql_datetime_64(&self) -> Option<String> {
        match (self.date_time, self.offset) {
            (Some(date_time), Some(offset)) => Some(format!(
                "fromUnixTimestamp64Nano({}) - INTERVAL {} NANOSECOND",
                date_time.timestamp_nanos_opt()?,
                offset.num_nanoseconds()?
            )),
            (None, Some(offset)) => Some(format!(
                "now() - INTERVAL {} NANOSECOND",
                offset.num_nanoseconds()?
            )),
            (Some(date_time), None) => Some(format!(
                "fromUnixTimestamp64Nano({})",
                date_time.timestamp_nanos_opt()?
            )),
            (None, None) => Some("now()".to_string()),
        }
    }
}

impl From<DateTime<Local>> for RelativeDateTime {
    fn from(value: DateTime<Local>) -> Self {
        RelativeDateTime {
            date_time: Some(value),
            offset: None,
        }
    }
}

impl From<Option<DateTime<Local>>> for RelativeDateTime {
    fn from(value: Option<DateTime<Local>>) -> Self {
        RelativeDateTime {
            date_time: value,
            offset: None,
        }
    }
}

impl FromStr for RelativeDateTime {
    type Err = anyhow::Error;

    fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
        // Empty string is a special case for relative "now"
        // (i.e. it will be always calculated from current time)
        if s.is_empty() {
            Ok(RelativeDateTime {
                date_time: None,
                offset: None,
            })
        } else if let Ok(datetime) = parse_datetime_or_date(s) {
            Ok(RelativeDateTime {
                date_time: Some(datetime),
                offset: None,
            })
        } else {
            Ok(RelativeDateTime {
                date_time: None,
                offset: Some(TimeDelta::from_std(
                    s.parse::<humantime::Duration>()?.into(),
                )?),
            })
        }
    }
}

impl From<RelativeDateTime> for DateTime<Local> {
    fn from(value: RelativeDateTime) -> Self {
        let mut date_time = value.date_time.unwrap_or(Local::now());
        if let Some(offset) = value.offset {
            date_time -= offset;
        }
        return date_time;
    }
}

impl Display for RelativeDateTime {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_fmt(format_args!(
            "{:?} (offset={:?})",
            self.date_time, self.offset
        ))
    }
}

impl AddAssign<TimeDelta> for RelativeDateTime {
    fn add_assign(&mut self, rhs: TimeDelta) {
        self.offset = Some(rhs);
    }
}

impl SubAssign<TimeDelta> for RelativeDateTime {
    fn sub_assign(&mut self, rhs: TimeDelta) {
        self.offset = Some(rhs);
    }
}


================================================
FILE: src/common/sparkline.rs
================================================
use std::collections::VecDeque;

const BLOCKS: &[char] = &['▁', '▂', '▃', '▄', '▅', '▆', '▇', '█'];

pub struct SparklineBuffer {
    data: VecDeque<f64>,
    capacity: usize,
}

impl SparklineBuffer {
    pub fn new(capacity: usize) -> Self {
        Self {
            data: VecDeque::with_capacity(capacity),
            capacity,
        }
    }

    pub fn push(&mut self, value: f64) {
        if self.data.len() == self.capacity {
            self.data.pop_front();
        }
        self.data.push_back(value);
    }

    pub fn render(&self, width: usize) -> String {
        if self.data.is_empty() {
            return String::new();
        }

        let samples: Vec<f64> = self
            .data
            .iter()
            .rev()
            .take(width)
            .copied()
            .collect::<Vec<_>>()
            .into_iter()
            .rev()
            .collect();

        let min = samples.iter().copied().fold(f64::INFINITY, f64::min);
        let max = samples.iter().copied().fold(f64::NEG_INFINITY, f64::max);
        let range = max - min;

        samples
            .iter()
            .map(|&v| {
                if range == 0.0 {
                    BLOCKS[BLOCKS.len() / 2]
                } else {
                    let idx = ((v - min) / range * (BLOCKS.len() - 1) as f64).round() as usize;
                    BLOCKS[idx.min(BLOCKS.len() - 1)]
                }
            })
            .collect()
    }
}


================================================
FILE: src/common/stopwatch.rs
================================================
/// Stupid and simple implementation of stopwatch.
use std::time::{Duration, Instant};

pub struct Stopwatch {
    start_time: Instant,
}

impl Stopwatch {
    pub fn start_new() -> Stopwatch {
        Stopwatch {
            start_time: Instant::now(),
        }
    }

    pub fn elapsed_ms(&self) -> u64 {
        return self.elapsed().as_millis() as u64;
    }

    pub fn elapsed(&self) -> Duration {
        return self.start_time.elapsed();
    }
}


================================================
FILE: src/interpreter/background_runner.rs
================================================
use std::sync::{Arc, Condvar, Mutex, atomic};
use std::thread;
use std::time::Duration;

/// Runs periodic tasks in a background thread.
///
/// It is OK to suppress the unused warning for such a field, since the thread is joined
/// correctly in drop(), for example:
///
/// ```text
/// pub struct SomeView {
///     #[allow(unused)]
///     bg_runner: BackgroundRunner,
/// }
/// ```
///
pub struct BackgroundRunner {
    interval: Duration,
    thread: Option<thread::JoinHandle<()>>,
    force: Arc<atomic::AtomicBool>,
    exit: Arc<Mutex<bool>>,
    cv: Arc<(Mutex<()>, Condvar)>,
}

impl Drop for BackgroundRunner {
    fn drop(&mut self) {
        log::debug!("Stopping updates");
        *self.exit.lock().unwrap() = true;
        self.cv.1.notify_all();
        self.thread.take().unwrap().join().unwrap();
        log::debug!("Updates stopped");
    }
}

impl BackgroundRunner {
    pub fn new(
        interval: Duration,
        cv: Arc<(Mutex<()>, Condvar)>,
        force: Arc<atomic::AtomicBool>,
    ) -> Self {
        return Self {
            interval,
            thread: None,
            force,
            exit: Arc::new(Mutex::new(false)),
            cv,
        };
    }

    pub fn start<C: Fn(bool) + std::marker::Send + 'static>(&mut self, callback: C) {
        let interval = self.interval;
        let cv = self.cv.clone();
        let exit = self.exit.clone();
        let force = self.force.clone();
        self.thread = Some(std::thread::spawn(move || {
            loop {
                let was_force = force.swap(false, atomic::Ordering::SeqCst);
                callback(was_force);

                if *exit.lock().unwrap() {
                    break;
                }

                let _ = cv.1.wait_timeout(cv.0.lock().unwrap(), interval).unwrap();
                if *exit.lock().unwrap() {
                    break;
                }
            }
        }));
        // Explicitly trigger at least one update with force
        self.schedule();
    }

    pub fn schedule(&mut self) {
        self.force.store(true, atomic::Ordering::SeqCst);
        self.cv.1.notify_all();
    }
}


================================================
FILE: src/interpreter/clickhouse.rs
================================================
use crate::{
    common::RelativeDateTime,
    interpreter::{
        ClickHouseAvailableQuirks, ClickHouseQuirks,
        options::{ClickHouseOptions, LogsOrder},
    },
};
use anyhow::{Error, Result};
use chrono::{DateTime, Local};
use chrono_tz::Tz;
use clickhouse_rs::{
    Block, Options, Pool,
    types::{Complex, FromSql},
};
use futures_util::StreamExt;
use std::collections::HashMap;
use std::str::FromStr;

// TODO:
// - implement parsing using serde
// - replace clickhouse_rs::client_info::write() (with extend crate) to change the client name
// - escape parameters

pub type Columns = Block<Complex>;

pub struct ClickHouse {
    pub quirks: ClickHouseQuirks,
    // Server has use_shared_merge_tree_log_pipeline enabled (SharedMergeTree-backed system.*_log).
    // When true, system.*_log reads do not need clusterAllReplicas(): one replica sees all rows.
    shared_log_pipeline: bool,
    options: ClickHouseOptions,
    pool: Pool,
}

#[derive(Debug, PartialEq, Clone)]
#[allow(clippy::upper_case_acronyms)]
pub enum TraceType {
    CPU,
    Real,
    Memory,
    MemorySample,
    JemallocSample,
    ProfileEvent,
    MemoryAllocatedWithoutCheck,
}

#[derive(Debug, Clone)]
pub struct TextLogArguments {
    pub query_ids: Option<Vec<String>>,
    pub logger_names: Option<Vec<String>>,
    pub hostname: Option<String>,
    pub message_filter: Option<String>,
    pub max_level: Option<String>,
    pub start: DateTime<Local>,
    pub end: RelativeDateTime,
}

#[derive(Default)]
pub struct ClickHouseServerCPU {
    pub count: u64,
    pub user: u64,
    pub system: u64,
}
/// NOTE: Likely misses threads for IO
#[derive(Default)]
pub struct ClickHouseServerThreadPools {
    pub merges_mutations: u64,
    pub fetches: u64,
    pub common: u64,
    pub moves: u64,
    pub schedule: u64,
    pub buffer_flush: u64,
    pub distributed: u64,
    pub message_broker: u64,
    pub backups: u64,
    pub io: u64,
    pub remote_io: u64,
    pub queries: u64,
}
#[derive(Default)]
pub struct ClickHouseServerThreads {
    pub os_total: u64,
    pub os_runnable: u64,
    pub tcp: u64,
    pub http: u64,
    pub interserver: u64,
    pub pools: ClickHouseServerThreadPools,
}
#[derive(Default)]
pub struct ClickHouseServerMemory {
    pub os_total: u64,
    pub resident: u64,

    pub tracked: u64,
    pub tables: u64,
    pub caches: u64,
    pub queries: u64,
    pub merges_mutations: u64,
    pub active_merges: u64,
    pub async_inserts: u64,
    pub dictionaries: u64,
    pub primary_keys: u64,
    pub fragmentation: u64,
    pub index_granularity: u64,
    pub io: u64,
}
/// May include duplicated accounting (e.g. due to bridges)
#[derive(Default)]
pub struct ClickHouseServerNetwork {
    pub send_bytes: u64,
    pub receive_bytes: u64,
}
#[derive(Default)]
pub struct ClickHouseServerUptime {
    pub _os: u64,
    pub server: u64,
}
/// May not take some block devices into account (due to the sd*/nvme*/vd* name filter)
#[derive(Default)]
pub struct ClickHouseServerBlockDevices {
    pub read_bytes: u64,
    pub write_bytes: u64,
}
#[derive(Default)]
pub struct ClickHouseServerStorages {
    pub buffer_bytes: u64,
    // Replace with bytes once [1] is merged.
    //
    //   [1]: https://github.com/ClickHouse/ClickHouse/pull/50238
    pub distributed_insert_files: u64,
    pub total_rows: u64,
    pub total_bytes: u64,
}
#[derive(Default)]
pub struct ClickHouseServerRows {
    pub selected: u64,
    pub inserted: u64,
}
#[derive(Default)]
pub struct ClickHouseServerSummary {
    pub queries: u64,
    pub merges: u64,
    pub mutations: u64,
    pub replication_queue: u64,
    pub replication_queue_tries: u64,
    pub fetches: u64,
    pub servers: u64,
    pub rows: ClickHouseServerRows,
    pub storages: ClickHouseServerStorages,
    pub uptime: ClickHouseServerUptime,
    pub memory: ClickHouseServerMemory,
    pub cpu: ClickHouseServerCPU,
    pub threads: ClickHouseServerThreads,
    pub network: ClickHouseServerNetwork,
    pub blkdev: ClickHouseServerBlockDevices,
    pub update_interval: u64,
}

pub struct QueryMetricRow {
    pub host_name: String,
    pub timestamp_ns: u64,
    pub memory_usage: i64,
    pub peak_memory_usage: i64,
    pub profile_events: HashMap<String, u64>,
}

pub struct MetricLogRow {
    pub timestamp_ns: u64,
    pub profile_events: HashMap<String, u64>,
    pub current_metrics: HashMap<String, i64>,
}

fn collect_values<'b, T: FromSql<'b>>(block: &'b Columns, column: &str) -> Vec<T> {
    return (0..block.row_count())
        .map(|i| block.get(i, column).unwrap())
        .collect();
}

const CHDIG_CLIENT_NAME: [&str; 2] = ["chdig", env!("CARGO_PKG_VERSION")];
fn get_client_name() -> String {
    return CHDIG_CLIENT_NAME.join("-");
}

impl ClickHouse {
    pub async fn new(options: ClickHouseOptions) -> Result<Self> {
        let url = format!(
            "{}&client_name={}",
            options.url.clone().unwrap(),
            get_client_name()
        );
        let connect_options: Options = Options::from_str(&url)?
            .with_setting(
                "storage_system_stack_trace_pipe_read_timeout_ms",
                1000,
                /* is_important= */ false,
            )
            // FIXME: ClickHouse's analyzer does not handle ProfileEvents.Names (and similar), it throws:
            //
            //   Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got LowCardinality(String)
            //
            .with_setting("allow_experimental_analyzer", false, true)
            // TODO: add support of Map type for LowCardinality in the driver
            .with_setting("low_cardinality_allow_in_native_format", false, true);
        let pool = Pool::new(connect_options);

        let mut handle = pool.get_handle().await.map_err(|e| {
            Error::msg(format!(
                "Cannot connect to ClickHouse at {} ({})",
                options.url_safe, e
            ))
        })?;

        let version = if let Some(override_version) = &options.server_version {
            override_version.clone()
        } else {
            let version = handle
                .query("SELECT version()")
                .fetch_all()
                .await?
                .get::<String, _>(0, 0)?;

            // Get VERSION_DESCRIBE from system.build_options for the full version info (only
            // build_options includes the version suffix, i.e. -stable/-testing)
            handle
                .query("SELECT value FROM system.build_options WHERE name = 'VERSION_DESCRIBE'")
                .fetch_all()
                .await?
                .get::<String, _>(0, 0)
                .unwrap_or_else(|_| version.clone())
        };

        let quirks = ClickHouseQuirks::new(version);

        // SMT-backed system.*_log (ClickHouse Cloud) exposes all replicas' rows through any single
        // replica, so clusterAllReplicas() is pure overhead there. The setting is off by default
        // and on self-hosted clusters, so we silently fall back to the cluster-wrapped path.
        let shared_log_pipeline = handle
            .query(
                "SELECT value FROM system.server_settings \
                 WHERE name = 'use_shared_merge_tree_log_pipeline'",
            )
            .fetch_all()
            .await
            .ok()
            .filter(|block| block.row_count() > 0)
            .and_then(|block| block.get::<String, _>(0, 0).ok())
            .map(|v| v == "1" || v.eq_ignore_ascii_case("true"))
            .unwrap_or(false);
        if shared_log_pipeline {
            log::info!(
                "SharedMergeTree log pipeline detected, skipping clusterAllReplicas() for system.*_log"
            );
        }

        return Ok(ClickHouse {
            quirks,
            shared_log_pipeline,
            options,
            pool,
        });
    }

    pub fn version(&self) -> String {
        return self.quirks.get_version();
    }

    pub async fn get_slow_query_log(
        &self,
        filter: &String,
        start: RelativeDateTime,
        end: RelativeDateTime,
        limit: u64,
        selected_host: Option<&String>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "query_log");
        let host_filter = self.get_log_host_filter_clause(selected_host);
        return self
            .execute(
                format!(
                    r#"
                    WITH
                        {start} AS start_,
                        {end}   AS end_,
                        slow_queries_ids AS (
                            SELECT DISTINCT initial_query_id
                            FROM {db_table}
                            WHERE
                                event_date BETWEEN toDate(start_) AND toDate(end_) AND
                                event_time BETWEEN toDateTime(start_) AND toDateTime(end_) AND
                                is_initial_query AND
                                /* To make query faster */
                                query_duration_ms > 1e3
                                {filter}
                                {internal}
                                {host_filter}
                            ORDER BY query_duration_ms DESC
                            LIMIT {limit}
                        )
                    SELECT
                        ProfileEvents.Names,
                        ProfileEvents.Values,
                        Settings.Names,
                        Settings.Values,
                        {peak_threads_usage} AS peak_threads_usage,
                        // Compatibility with system.processlist
                        memory_usage::Int64 AS peak_memory_usage,
                        query_duration_ms/1e3 AS elapsed,
                        user,
                        is_initial_query,
                        (exception_code = 394)::UInt8 AS is_cancelled,
                        initial_query_id,
                        query_id,
                        hostname as host_name,
                        current_database,
                        query_start_time_microseconds,
                        event_time_microseconds AS query_end_time_microseconds,
                        toValidUTF8(query) AS original_query,
                        normalizeQuery(query) AS normalized_query
                    FROM {db_table}
                    PREWHERE
                        event_date BETWEEN toDate(start_) AND toDate(end_) AND
                        event_time BETWEEN toDateTime(start_) AND toDateTime(end_) AND
                        type != 'QueryStart' AND
                        initial_query_id GLOBAL IN slow_queries_ids
                "#,
                    start = start.to_sql_datetime_64().ok_or(Error::msg("Invalid start"))?,
                    end = end.to_sql_datetime_64().ok_or(Error::msg("Invalid end"))?,
                    db_table = dbtable,
                    peak_threads_usage = if self.quirks.has(ClickHouseAvailableQuirks::QueryLogPeakThreadsUsage) {
                        "peak_threads_usage"
                    } else {
                        "length(thread_ids)"
                    },
                    internal = if self.options.internal_queries {
                        "".to_string()
                    } else {
                        format!("AND client_name != '{}'", get_client_name())
                    },
                    filter = if !filter.is_empty() {
                        format!("AND (client_hostname LIKE '{0}' OR log_comment LIKE '{0}' OR os_user LIKE '{0}' OR user LIKE '{0}' OR initial_user LIKE '{0}' OR client_name LIKE '{0}' OR query_id LIKE '{0}' OR query LIKE '{0}')", &filter)
                    } else {
                        "".to_string()
                    },
                    host_filter = host_filter,
                )
                .as_str(),
            )
            .await;
    }

    pub async fn get_last_query_log(
        &self,
        filter: &String,
        start: RelativeDateTime,
        end: RelativeDateTime,
        limit: u64,
        selected_host: Option<&String>,
    ) -> Result<Columns> {
        // TODO:
        // - propagate sort order from the table
        // - distributed_group_by_no_merge=2 is broken for this query with WINDOW function
        let dbtable = self.get_log_table_name("system", "query_log");
        let host_filter = self.get_log_host_filter_clause(selected_host);
        return self
            .execute(
                format!(
                    r#"
                    WITH
                        {start} AS start_,
                        {end}   AS end_,
                        last_queries_ids AS (
                            SELECT DISTINCT initial_query_id
                            FROM {db_table}
                            WHERE
                                event_date BETWEEN toDate(start_) AND toDate(end_) AND
                                event_time BETWEEN toDateTime(start_) AND toDateTime(end_) AND
                                is_initial_query
                                {filter}
                                {internal}
                                {host_filter}
                            ORDER BY event_date DESC, event_time DESC
                            LIMIT {limit}
                        )
                    SELECT
                        ProfileEvents.Names,
                        ProfileEvents.Values,
                        Settings.Names,
                        Settings.Values,
                        {peak_threads_usage} AS peak_threads_usage,
                        // Compatibility with system.processlist
                        memory_usage::Int64 AS peak_memory_usage,
                        query_duration_ms/1e3 AS elapsed,
                        user,
                        is_initial_query,
                        (exception_code = 394)::UInt8 AS is_cancelled,
                        initial_query_id,
                        query_id,
                        hostname as host_name,
                        current_database,
                        query_start_time_microseconds,
                        event_time_microseconds AS query_end_time_microseconds,
                        toValidUTF8(query) AS original_query,
                        normalizeQuery(query) AS normalized_query
                    FROM {db_table}
                    PREWHERE
                        event_date BETWEEN toDate(start_) AND toDate(end_) AND
                        event_time BETWEEN toDateTime(start_) AND toDateTime(end_) AND
                        type != 'QueryStart' AND
                        initial_query_id GLOBAL IN last_queries_ids
                "#,
                    start = start.to_sql_datetime_64().ok_or(Error::msg("Invalid start"))?,
                    end = end.to_sql_datetime_64().ok_or(Error::msg("Invalid end"))?,
                    db_table = dbtable,
                    peak_threads_usage = if self.quirks.has(ClickHouseAvailableQuirks::QueryLogPeakThreadsUsage) {
                        "peak_threads_usage"
                    } else {
                        "length(thread_ids)"
                    },
                    internal = if self.options.internal_queries {
                        "".to_string()
                    } else {
                        format!("AND client_name != '{}'", get_client_name())
                    },
                    filter = if !filter.is_empty() {
                        format!("AND (client_hostname LIKE '{0}' OR log_comment LIKE '{0}' OR os_user LIKE '{0}' OR user LIKE '{0}' OR initial_user LIKE '{0}' OR client_name LIKE '{0}' OR query_id LIKE '{0}' OR query LIKE '{0}')", &filter)
                    } else {
                        "".to_string()
                    },
                    host_filter = host_filter,
                )
                .as_str(),
            )
            .await;
    }

    pub async fn get_processlist(
        &self,
        filter: String,
        limit: u64,
        selected_host: Option<&String>,
    ) -> Result<Columns> {
        let dbtable = self.get_table_name_no_history("system", "processes");
        let host_filter = self.get_host_filter_clause(selected_host);
        return self
            .execute(
                format!(
                    r#"
                    SELECT
                        ProfileEvents.Names,
                        ProfileEvents.Values,
                        Settings.Names,
                        Settings.Values,
                        {peak_threads_usage} AS peak_threads_usage,
                        peak_memory_usage,
                        elapsed / {q} AS elapsed,
                        user,
                        is_initial_query,
                        is_cancelled,
                        initial_query_id,
                        query_id,
                        hostName() AS host_name,
                        {current_database} AS current_database,
                        /* NOTE: now64()/elapsed does not have enough precision to handle starting
                         * time properly, while this column is used for querying system.text_log,
                         * and it should be the smallest time that we are looking for */
                        (now64(6) - elapsed - 1) AS query_start_time_microseconds,
                        now64(6) AS query_end_time_microseconds,
                        toValidUTF8(query) AS original_query,
                        normalizeQuery(query) AS normalized_query
                    FROM {}
                    WHERE 1
                    {filter}
                    {internal}
                    {host_filter}
                    LIMIT {limit}
                "#,
                    dbtable,
                    q = if self.quirks.has(ClickHouseAvailableQuirks::ProcessesElapsed) {
                        10
                    } else {
                        1
                    },
                    current_database = if self.quirks.has(ClickHouseAvailableQuirks::ProcessesCurrentDatabase) {
                        // current_database is required for EXPLAIN, but the column is only
                        // available since 20.6, so on older servers EXPLAIN with a non-default
                        // current_database will be broken in the processes view.
                        "'default'"
                    } else {
                        "current_database"
                    },
                    internal = if self.options.internal_queries {
                        "".to_string()
                    } else {
                            format!("AND client_name != '{}'", get_client_name())
                        },
                    filter = if !filter.is_empty() {
                        format!("AND (client_hostname LIKE '{0}' OR Settings['log_comment'] LIKE '{0}' OR os_user LIKE '{0}' OR user LIKE '{0}' OR initial_user LIKE '{0}' OR client_name LIKE '{0}' OR query_id LIKE '{0}' OR query LIKE '{0}')", &filter)
                    } else {
                        "".to_string()
                    },
                    peak_threads_usage = if self.quirks.has(ClickHouseAvailableQuirks::ProcessesPeakThreadsUsage) {
                        "peak_threads_usage"
                    } else {
                        "length(thread_ids)"
                    },
                    host_filter = host_filter,
                )
                .as_str(),
            )
            .await;
    }

    pub async fn get_summary(
        &self,
        selected_host: Option<&String>,
    ) -> Result<ClickHouseServerSummary> {
        let host_filter = self.get_host_filter_clause(selected_host);
        let host_where = if host_filter.is_empty() {
            String::new()
        } else {
            format!(" WHERE {}", &host_filter[4..]) // Remove leading "AND "
        };

        let memory_index_granularity_trait = if self.quirks.has(ClickHouseAvailableQuirks::AsynchronousMetricsTotalIndexGranularityBytesInMemoryAllocated) {
            format!("(SELECT sum(index_granularity_bytes_in_memory_allocated) FROM {}{}) AS memory_index_granularity_", self.get_table_name_no_history("system", "parts"), host_where)
        } else {
            "0::UInt64 AS memory_index_granularity_".to_string()
        };

        // NOTE: some (but not all) metrics are deltas, so chdig does not need to reimplement this logic itself.
        let block = self
            .execute(
                &format!(
                    r#"
                    WITH
                        -- memory detalization
                        (SELECT sum(CAST(value AS UInt64)) FROM {metrics} WHERE metric = 'MemoryTracking' {host_filter_and}) AS memory_tracked_,
                        (SELECT sum(CAST(value AS UInt64)) FROM {metrics} WHERE metric = 'MergesMutationsMemoryTracking' {host_filter_and}) AS memory_merges_mutations_,
                        (SELECT sum(total_bytes) FROM {tables} WHERE engine IN ('Join','Memory','Buffer','Set') {host_filter_and}) AS memory_tables_,
                        (SELECT sum(CAST(value AS UInt64)) FROM {asynchronous_metrics} WHERE metric LIKE '%CacheBytes' AND metric NOT LIKE '%Filesystem%' {host_filter_and}) AS memory_async_metrics_caches_,
                        (SELECT sum(CAST(value AS UInt64)) FROM {metrics} WHERE
                            metric NOT LIKE '%Filesystem%' AND
                            (metric LIKE '%CacheBytes' OR metric IN ('IcebergMetadataFilesCacheSize', 'VectorSimilarityIndexCacheSize'))
                            {host_filter_and}
                        ) AS memory_metrics_caches_,
                        (SELECT sum(CAST(memory_usage AS UInt64)) FROM {processes} {host_filter_where})                              AS memory_queries_,
                        (SELECT sum(CAST(memory_usage AS UInt64)) FROM {merges} {host_filter_where})                                 AS memory_active_merges_,
                        (SELECT sum(bytes_allocated) FROM {dictionaries} {host_filter_where})                                        AS memory_dictionaries_,
                        (SELECT sum(total_bytes) FROM {async_inserts} {host_filter_where})                                           AS memory_async_inserts_,
                        {memory_index_granularity_trait},
                        (SELECT count() FROM {one} {host_filter_where})                                                              AS servers_,
                        (SELECT count() FROM {replication_queue} {host_filter_where})                                                AS replication_queue_,
                        (SELECT sum(num_tries) FROM {replication_queue} {host_filter_where})                                         AS replication_queue_tries_,
                        (SELECT [sum(total_rows), sum(total_bytes)] FROM (
                            SELECT
                                if(engine LIKE 'Shared%', max(total_rows), sum(total_rows)) AS total_rows,
                                if(engine LIKE 'Shared%', max(total_bytes), sum(total_bytes)) AS total_bytes
                            FROM {tables}
                            WHERE has_own_data = 1 {host_filter_and}
                            GROUP BY database, name, engine
                        )) AS storage_totals_
                    SELECT
                        assumeNotNull(memory_tracked_)                           AS memory_tracked,
                        assumeNotNull(memory_merges_mutations_)                  AS memory_merges_mutations,
                        assumeNotNull(memory_tables_)                            AS memory_tables,
                        assumeNotNull(memory_async_metrics_caches_) + assumeNotNull(memory_metrics_caches_) AS memory_caches,
                        assumeNotNull(memory_queries_)                           AS memory_queries,
                        assumeNotNull(memory_active_merges_)                     AS memory_active_merges,
                        assumeNotNull(memory_dictionaries_)                      AS memory_dictionaries,
                        assumeNotNull(memory_async_inserts_)                     AS memory_async_inserts,
                        assumeNotNull(servers_)                                  AS servers,
                        assumeNotNull(replication_queue_)                        AS replication_queue,
                        assumeNotNull(replication_queue_tries_)                  AS replication_queue_tries,
                        assumeNotNull(storage_totals_[1])::UInt64               AS storage_total_rows,
                        assumeNotNull(storage_totals_[2])::UInt64              AS storage_total_bytes,

                        max2(assumeNotNull(memory_index_granularity_), asynchronous_metrics.memory_index_granularity)::UInt64 AS memory_index_granularity,

                        asynchronous_metrics.*,
                        events.*,
                        metrics.*
                    FROM
                    (
                        WITH
                            -- exclude MD/LVM
                            metric LIKE '%_sd%' OR metric LIKE '%_nvme%' OR metric LIKE '%_vd%' AS is_disk,
                            metric LIKE '%vlan%' AS is_vlan
                        -- NOTE: cast should be after aggregation function since the type is Float64
                        SELECT
                            CAST(minIf(value, metric == 'OSUptime') AS UInt64)       AS os_uptime,
                            CAST(min(uptime()) AS UInt64)                            AS uptime,
                            -- memory
                            CAST(coalesce(sumIfOrNull(value, metric == 'CGroupMemoryTotal' and value > 0), sumIf(value, metric == 'OSMemoryTotal')) AS UInt64) AS os_memory_total,
                            CAST(sumIf(value, metric == 'MemoryResident') AS UInt64) AS memory_resident,
                            -- May differ from primary_key_bytes_in_memory_allocated in
                            -- system.parts, since it takes only active parts into account
                            CAST(sumIf(value,
                                metric == 'TotalPrimaryKeyBytesInMemoryAllocated'
                                OR metric == 'TotalProjectionPrimaryKeyBytesInMemoryAllocated'
                            ) AS UInt64) AS memory_primary_keys,
                            CAST(sumIf(value,
                                metric == 'TotalIndexGranularityBytesInMemoryAllocated'
                                OR metric == 'TotalProjectionIndexGranularityBytesInMemoryAllocated'
                            ) AS UInt64) AS memory_index_granularity,
                            CAST((
                                sumIf(value, metric == 'jemalloc.resident') -
                                sumIf(value, metric == 'jemalloc.allocated')
                            ) AS UInt64) AS memory_fragmentation,
                            -- cpu
                            CAST(
                                max2(
                                    countIf(metric LIKE 'CPUFrequencyMHz%'),
                                    sumIf(value, metric = 'CGroupMaxCPU')
                                )
                            AS UInt64) AS cpu_count,
                            CAST(
                                max2(
                                    sumIf(value, metric LIKE 'OSUserTimeCPU%'),
                                    sumIf(value, metric = 'OSUserTime')
                                )
                            AS UInt64) AS cpu_user,
                            CAST(
                                max2(
                                    sumIf(value, metric LIKE 'OSSystemTimeCPU%'),
                                    sumIf(value, metric = 'OSSystemTime')
                                )
                            AS UInt64) AS cpu_system,
                            -- threads detalization
                            CAST(sumIf(value, metric = 'HTTPThreads') AS UInt64)             AS threads_http,
                            CAST(sumIf(value, metric = 'TCPThreads') AS UInt64)              AS threads_tcp,
                            CAST(sumIf(value, metric = 'OSThreadsTotal') AS UInt64)          AS threads_os_total,
                            CAST(sumIf(value, metric = 'OSThreadsRunnable') AS UInt64)       AS threads_os_runnable,
                            CAST(sumIf(value, metric = 'InterserverThreads') AS UInt64)      AS threads_interserver,
                            -- network
                            CAST(sumIf(value, metric LIKE 'NetworkSendBytes%' AND NOT is_vlan) AS UInt64)    AS net_send_bytes,
                            CAST(sumIf(value, metric LIKE 'NetworkReceiveBytes%' AND NOT is_vlan) AS UInt64) AS net_receive_bytes,
                            -- block devices
                            CAST(sumIf(value, metric LIKE 'BlockReadBytes%' AND is_disk) AS UInt64)      AS block_read_bytes,
                            CAST(sumIf(value, metric LIKE 'BlockWriteBytes%' AND is_disk) AS UInt64)     AS block_write_bytes,
                            -- update intervals
                            CAST(anyLastIf(value, metric == 'AsynchronousMetricsUpdateInterval') AS UInt64) AS metrics_update_interval
                        FROM {asynchronous_metrics}
                        {host_filter_where}
                    ) as asynchronous_metrics,
                    (
                        SELECT
                            sumIf(CAST(value AS UInt64), event == 'IOBufferAllocBytes') AS memory_io,
                            sumIf(CAST(value AS UInt64), event == 'SelectedRows') AS selected_rows,
                            sumIf(CAST(value AS UInt64), event == 'InsertedRows') AS inserted_rows
                        FROM {events}
                        {host_filter_where}
                    ) as events,
                    (
                        SELECT
                            sumIf(CAST(value AS UInt64), metric == 'Query') AS queries,
                            sumIf(CAST(value AS UInt64), metric == 'Merge') AS merges,
                            sumIf(CAST(value AS UInt64), metric == 'PartMutation') AS mutations,
                            sumIf(CAST(value AS UInt64), metric == 'ReplicatedFetch') AS fetches,

                            sumIf(CAST(value AS UInt64), metric == 'StorageBufferBytes') AS storage_buffer_bytes,
                            sumIf(CAST(value AS UInt64), metric == 'DistributedFilesToInsert') AS storage_distributed_insert_files,

                            sumIf(CAST(value AS UInt64), metric == 'BackgroundMergesAndMutationsPoolTask')    AS threads_merges_mutations,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundFetchesPoolTask')               AS threads_fetches,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundCommonPoolTask')                AS threads_common,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundMovePoolTask')                  AS threads_moves,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundSchedulePoolTask')              AS threads_schedule,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundBufferFlushSchedulePoolTask')   AS threads_buffer_flush,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundDistributedSchedulePoolTask')   AS threads_distributed,
                            sumIf(CAST(value AS UInt64), metric == 'BackgroundMessageBrokerSchedulePoolTask') AS threads_message_broker,
                            sumIf(CAST(value AS UInt64), metric IN (
                                'BackupThreadsActive',
                                'RestoreThreadsActive',
                                'BackupsIOThreadsActive'
                            )) AS threads_backups,
                            sumIf(CAST(value AS UInt64), metric IN (
                                'DiskObjectStorageAsyncThreadsActive',
                                'ThreadPoolRemoteFSReaderThreadsActive',
                                'StorageS3ThreadsActive'
                            )) AS threads_remote_io,
                            sumIf(CAST(value AS UInt64), metric IN (
                                'IOThreadsActive',
                                'IOWriterThreadsActive',
                                'IOPrefetchThreadsActive',
                                'MarksLoaderThreadsActive'
                            )) AS threads_io,
                            sumIf(CAST(value AS UInt64), metric IN (
                                'QueryPipelineExecutorThreadsActive',
                                'QueryThread',
                                'AggregatorThreadsActive',
                                'StorageDistributedThreadsActive',
                                'DestroyAggregatesThreadsActive'
                            )) AS threads_queries
                        FROM {metrics}
                        {host_filter_where}
                    ) as metrics
                    SETTINGS enable_global_with_statement=0
                "#,
                    metrics=self.get_table_name_no_history("system", "metrics"),
                    events=self.get_table_name_no_history("system", "events"),
                    tables=self.get_table_name_no_history("system", "tables"),
                    processes=self.get_table_name_no_history("system", "processes"),
                    merges=self.get_table_name_no_history("system", "merges"),
                    async_inserts=self.get_table_name_no_history("system", "asynchronous_inserts"),
                    replication_queue=self.get_table_name_no_history("system", "replication_queue"),
                    dictionaries=self.get_table_name_no_history("system", "dictionaries"),
                    asynchronous_metrics=self.get_table_name_no_history("system", "asynchronous_metrics"),
                    one=self.get_table_name_no_history("system", "one"),

                    memory_index_granularity_trait=memory_index_granularity_trait,
                    host_filter_where=host_where,
                    host_filter_and=host_filter,
                )
            )
            .await?;

        let get = |key: &str| {
            // First try the fully qualified "subquery.column" name,
            // e.g. get("metrics.queries") looks up the "metrics.queries" column
            if let Ok(value) = block.get::<u64, _>(0, key) {
                return value;
            }

            let parts = key.split('.').collect::<Vec<&str>>();
            assert!(parts.len() <= 2);
            // Fall back to the bare column name (e.g. "queries")
            return block.get::<u64, _>(0, parts[parts.len() - 1]).expect(key);
        };

        return Ok(ClickHouseServerSummary {
            queries: get("metrics.queries"),
            merges: get("metrics.merges"),
            mutations: get("metrics.mutations"),
            replication_queue: get("replication_queue"),
            replication_queue_tries: get("replication_queue_tries"),
            fetches: get("metrics.fetches"),
            servers: get("servers"),

            uptime: ClickHouseServerUptime {
                _os: get("asynchronous_metrics.os_uptime"),
                server: get("asynchronous_metrics.uptime"),
            },

            rows: ClickHouseServerRows {
                selected: get("events.selected_rows"),
                inserted: get("events.inserted_rows"),
            },

            storages: ClickHouseServerStorages {
                buffer_bytes: get("metrics.storage_buffer_bytes"),
                distributed_insert_files: get("metrics.storage_distributed_insert_files"),
                total_rows: get("storage_total_rows"),
                total_bytes: get("storage_total_bytes"),
            },

            memory: ClickHouseServerMemory {
                os_total: get("asynchronous_metrics.os_memory_total"),
                resident: get("asynchronous_metrics.memory_resident"),

                tracked: get("memory_tracked"),
                merges_mutations: get("memory_merges_mutations"),
                tables: get("memory_tables"),
                caches: get("memory_caches"),
                queries: get("memory_queries"),
                active_merges: get("memory_active_merges"),
                async_inserts: get("memory_async_inserts"),
                dictionaries: get("memory_dictionaries"),
                primary_keys: get("asynchronous_metrics.memory_primary_keys"),
                fragmentation: get("asynchronous_metrics.memory_fragmentation"),
                index_granularity: get("memory_index_granularity"),
                io: get("events.memory_io"),
            },

            cpu: ClickHouseServerCPU {
                count: get("asynchronous_metrics.cpu_count"),
                user: get("asynchronous_metrics.cpu_user"),
                system: get("asynchronous_metrics.cpu_system"),
            },

            threads: ClickHouseServerThreads {
                os_total: get("asynchronous_metrics.threads_os_total"),
                os_runnable: get("asynchronous_metrics.threads_os_runnable"),
                http: get("asynchronous_metrics.threads_http"),
                tcp: get("asynchronous_metrics.threads_tcp"),
                interserver: get("asynchronous_metrics.threads_interserver"),
                pools: ClickHouseServerThreadPools {
                    merges_mutations: get("metrics.threads_merges_mutations"),
                    fetches: get("metrics.threads_fetches"),
                    common: get("metrics.threads_common"),
                    moves: get("metrics.threads_moves"),
                    schedule: get("metrics.threads_schedule"),
                    buffer_flush: get("metrics.threads_buffer_flush"),
                    distributed: get("metrics.threads_distributed"),
                    message_broker: get("metrics.threads_message_broker"),
                    backups: get("metrics.threads_backups"),
                    io: get("metrics.threads_io"),
                    remote_io: get("metrics.threads_remote_io"),
                    queries: get("metrics.threads_queries"),
                },
            },

            network: ClickHouseServerNetwork {
                send_bytes: get("asynchronous_metrics.net_send_bytes"),
                receive_bytes: get("asynchronous_metrics.net_receive_bytes"),
            },

            blkdev: ClickHouseServerBlockDevices {
                read_bytes: get("asynchronous_metrics.block_read_bytes"),
                write_bytes: get("asynchronous_metrics.block_write_bytes"),
            },

            update_interval: get("asynchronous_metrics.metrics_update_interval"),
        });
    }

    pub async fn kill_query(&self, query_id: &str) -> Result<()> {
        // NOTE: query_id is client-controlled, so escape single quotes
        let query_id = query_id.replace('\'', "''");
        let query = if let Some(cluster) = &self.options.cluster {
            format!(
                "KILL QUERY ON CLUSTER {} WHERE query_id = '{}' SYNC",
                cluster, query_id
            )
        } else {
            format!("KILL QUERY WHERE query_id = '{}' SYNC", query_id)
        };
        return self.execute_simple(&query).await;
    }

    pub async fn execute_query(&self, database: &str, query: &str) -> Result<()> {
        self.execute_simple(&format!("USE {}", database)).await?;
        return self.execute_simple(query).await;
    }

    pub async fn explain_syntax(
        &self,
        database: &str,
        query: &str,
        settings: &HashMap<String, String>,
    ) -> Result<Vec<String>> {
        return self
            .explain("SYNTAX", database, query, Some(settings))
            .await;
    }

    pub async fn explain_plan(&self, database: &str, query: &str) -> Result<Vec<String>> {
        return self.explain("PLAN actions=1", database, query, None).await;
    }

    pub async fn explain_pipeline(&self, database: &str, query: &str) -> Result<Vec<String>> {
        return self.explain("PIPELINE", database, query, None).await;
    }

    pub async fn explain_pipeline_graph(&self, database: &str, query: &str) -> Result<Vec<String>> {
        return self
            .explain("PIPELINE graph=1", database, query, None)
            .await;
    }

    // NOTE: can we benefit from json=1?
    pub async fn explain_plan_indexes(&self, database: &str, query: &str) -> Result<Vec<String>> {
        return self.explain("PLAN indexes=1", database, query, None).await;
    }

    pub async fn show_create_table(&self, database: &str, table: &str) -> Result<String> {
        let result = self
            .execute(&format!("SHOW CREATE TABLE {}.{}", database, table))
            .await?;
        let statement: String = collect_values(&result, "statement")
            .into_iter()
            .next()
            .unwrap_or_default();
        return Ok(statement);
    }

    // TODO: copy all settings from the query
    async fn explain(
        &self,
        what: &str,
        database: &str,
        query: &str,
        settings: Option<&HashMap<String, String>>,
    ) -> Result<Vec<String>> {
        self.execute_simple(&format!("USE {}", database)).await?;

        if let Some(settings) = settings {
            // NOTE: EXPLAIN handles queries that already contain a SETTINGS clause
            // incorrectly, i.e. for a query like:
            //
            //     SELECT 1 SETTINGS max_threads=1
            //
            //     EXPLAIN SYNTAX SELECT 1 SETTINGS max_threads=1 SETTINGS max_threads=1, max_insert_threads=1 ->
            //     SELECT 1 SETTINGS max_threads=1
            //
            // This can be fixed in two ways:
            // - in ClickHouse
            // - by passing the settings via the protocol
            if !settings.is_empty() {
                return Ok(collect_values(
                    &self
                        .execute(&format!(
                            "EXPLAIN {} {} SETTINGS {}",
                            what,
                            query,
                            settings
                                .iter()
                                .map(|kv| format!("{}='{}'", kv.0, kv.1.replace('\'', "\\\'")))
                                .collect::<Vec<String>>()
                                .join(",")
                        ))
                        .await?,
                    "explain",
                ));
            }
        }

        return Ok(collect_values(
            &self.execute(&format!("EXPLAIN {} {}", what, query)).await?,
            "explain",
        ));
    }

    pub async fn get_query_logs(&self, args: &TextLogArguments) -> Result<Columns> {
        // TODO:
        // - optional flush, but right now it gives "blocks should not be empty." error
        //   self.execute("SYSTEM FLUSH LOGS").await;
        // - configure time interval
        //
        // NOTE:
        // - we cannot use LIVE VIEW, since
        //   a) they are pretty complex
        //   b) they do not work when monitoring the whole cluster

        let dbtable = self.get_log_table_name("system", "text_log");
        let order = if self.options.logs_order == LogsOrder::Desc {
            "DESC"
        } else {
            "ASC"
        };
        return self
            .execute(
                format!(
                    r#"
                    WITH
                        fromUnixTimestamp64Nano({}) AS start_time_,
                        {} AS end_time_
                    SELECT
                        hostname AS host_name,
                        event_time,
                        event_time_microseconds,
                        thread_id,
                        level::String AS level,
                        logger_name::String AS logger_name,
                        query_id::String AS query_id,
                        message
                    FROM {}
                    WHERE
                            event_date >= toDate(start_time_) AND event_time >= toDateTime(start_time_) AND event_time_microseconds > start_time_
                        AND event_date <= toDate(end_time_)   AND event_time <= toDateTime(end_time_)   AND event_time_microseconds <= end_time_
                        {}
                        {}
                        {}
                        {}
                        {}
                    ORDER BY event_date {order}, event_time {order}, event_time_microseconds {order}
                    LIMIT {}
                    "#,
                    args.start
                        .timestamp_nanos_opt()
                        .ok_or(Error::msg("Invalid start time"))?,
                    args.end.to_sql_datetime_64().ok_or(Error::msg("Invalid end time"))?,
                    dbtable,
                    if let Some(query_ids) = &args.query_ids {
                        format!("AND query_id IN ('{}')", query_ids.join("','"))
                    } else {
                        "".into()
                    },
                    if let Some(logger_names) = &args.logger_names {
                        format!("AND ({})", logger_names.iter().map(|l| format!("logger_name LIKE '{}'", l)).collect::<Vec<_>>().join(" OR "))
                    } else {
                        "".into()
                    },
                    if let Some(hostname) = &args.hostname {
                        format!("AND (hostName() = '{0}' OR hostname = '{0}')", hostname.replace('\'', "''"))
                    } else {
                        "".into()
                    },
                    if let Some(message_filter) = &args.message_filter {
                        format!("AND message LIKE '%{}%'", message_filter.replace('\'', "''"))
                    } else {
                        "".into()
                    },
                    if let Some(max_level) = &args.max_level {
                        format!("AND level <= '{}'", max_level)
                    } else {
                        "".into()
                    },
                    self.options.limit,
                )
                .as_str(),
            )
            .await;
    }

    /// Returns the query flamegraph in py-spy (collapsed stacks) format for flameshow.
    /// It is the same format as TSV, but with a ' ' delimiter between the stack
    /// (frames joined by ';') and the weight, i.e. "frame1;frame2;frame3 <weight>".
    pub async fn get_flamegraph(
        &self,
        trace_type: TraceType,
        query_ids: Option<&[String]>,
        start_microseconds: Option<DateTime<Local>>,
        end_microseconds: Option<DateTime<Local>>,
        selected_host: Option<&String>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "trace_log");
        let host_filter = self.get_log_host_filter_clause(selected_host);
        return self
            .execute(&format!(
                r#"
            WITH
                {} AS start_time_,
                {} AS end_time_
            SELECT
              {} AS human_trace,
              {} weight
            FROM {}
            WHERE
                    event_date >= toDate(start_time_) AND event_time >= toDateTime(start_time_) AND event_time_microseconds > start_time_
                AND event_date <= toDate(end_time_)   AND event_time <= toDateTime(end_time_)   AND event_time_microseconds <= end_time_
                AND trace_type = '{:?}'
                {}
                {}
            GROUP BY human_trace
            SETTINGS allow_introspection_functions=1
            "#,
                match start_microseconds {
                    Some(time) => format!(
                        "fromUnixTimestamp64Nano({})",
                        time.timestamp_nanos_opt()
                            .ok_or(Error::msg("Invalid start time"))?
                    ),
                    None => "toDateTime64(now() - INTERVAL 1 HOUR, 6)".to_string(),
                },
                match end_microseconds {
                    Some(time) => format!(
                        "fromUnixTimestamp64Nano({})",
                        time.timestamp_nanos_opt()
                            .ok_or(Error::msg("Invalid end time"))?
                    ),
                    None => "toDateTime64(now(), 6)".to_string(),
                },
                if self.quirks.has(ClickHouseAvailableQuirks::TraceLogHasSymbols) {
                    r#"
                        if(empty(symbols),
                           arrayStringConcat(arrayMap(
                             addr -> demangle(addressToSymbol(addr)),
                             arrayReverse(trace)
                           ), ';'),
                           arrayStringConcat(arrayReverse(symbols), ';')
                        )
                    "#
                } else {
                    r#"
                        arrayStringConcat(arrayMap(
                          addr -> demangle(addressToSymbol(addr)),
                          arrayReverse(trace)
                        ), ';')
                    "#
                },
                match trace_type {
                    TraceType::Memory => "abs(sum(size))",
                    TraceType::MemorySample => "abs(sum(size))",
                    TraceType::JemallocSample => "abs(sum(size))",
                    TraceType::MemoryAllocatedWithoutCheck => "abs(sum(size))",
                    _ => "count()",
                },
                dbtable,
                trace_type,
                if let Some(ids) = query_ids {
                    format!("AND query_id IN ('{}')", ids.join("','"))
                } else {
                    String::new()
                },
                host_filter,
            ))
            .await;
    }

    /// Returns the jemalloc flamegraph in py-spy (collapsed stacks) format.
    /// It is the same format as TSV, but with a ' ' delimiter between the stack
    /// (frames joined by ';') and the weight in bytes.
    pub async fn get_jemalloc_flamegraph(&self, selected_host: Option<&String>) -> Result<Columns> {
        let dbtable = self.get_table_name("system", "jemalloc_profile_text");
        let host_filter = if let Some(host) = selected_host {
            if !host.is_empty() && self.options.cluster.is_some() {
                format!("AND hostName() = '{}'", host.replace('\'', "''"))
            } else {
                String::new()
            }
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
            WITH splitByChar(' ', line) AS parts
            SELECT
                arrayStringConcat(arraySlice(parts, 1, -1), ' ') AS symbols,
                parts[-1]::UInt64 AS bytes
            FROM {}
            WHERE 1 {}
            SETTINGS jemalloc_profile_text_output_format='collapsed'
            "#,
                dbtable, host_filter,
            ))
            .await;
    }

    pub async fn get_live_query_flamegraph(
        &self,
        query_ids: &Option<Vec<String>>,
        selected_host: Option<&String>,
    ) -> Result<Columns> {
        let dbtable = self.get_table_name_no_history("system", "stack_trace");
        let host_filter = self.get_host_filter_clause(selected_host);
        let where_clause = match (query_ids.as_ref(), host_filter.is_empty()) {
            (Some(v), true) => format!("query_id IN ('{}')", v.join("','")),
            (Some(v), false) => format!("query_id IN ('{}') {}", v.join("','"), host_filter),
            (None, false) => format!("1 {}", host_filter),
            (None, true) => "1".to_string(),
        };
        return self
            .execute(&format!(
                r#"
            SELECT
              arrayStringConcat(arrayMap(
                addr -> demangle(addressToSymbol(addr)),
                arrayReverse(trace)
              ), ';') AS human_trace,
              count() weight
            FROM {}
            WHERE {}
            GROUP BY human_trace
            SETTINGS allow_introspection_functions=1
            "#,
                dbtable, where_clause
            ))
            .await;
    }

    pub async fn get_background_schedule_pool_query_ids(
        &self,
        log_name: Option<String>,
        database: String,
        table: String,
        start: RelativeDateTime,
        end: RelativeDateTime,
        selected_host: Option<&String>,
    ) -> Result<Vec<String>> {
        let dbtable = self.get_log_table_name("system", "background_schedule_pool_log");

        let start_sql = start
            .to_sql_datetime_64()
            .ok_or_else(|| Error::msg("Invalid start"))?;
        let end_sql = end
            .to_sql_datetime_64()
            .ok_or_else(|| Error::msg("Invalid end"))?;

        let host_filter = self.get_log_host_filter_clause(selected_host);

        let query = if let Some(ref log_name) = log_name {
            format!(
                r#"
                WITH {start} AS start_, {end} AS end_
                SELECT DISTINCT query_id
                FROM {dbtable}
                WHERE
                    event_date BETWEEN toDate(start_) AND toDate(end_) AND
                    event_time BETWEEN toDateTime(start_) AND toDateTime(end_) AND
                    log_name = '{log_name}' AND
                    database = '{database}' AND
                    table = '{table}'
                    {host_filter}
                LIMIT 1000
                "#,
                start = start_sql,
                end = end_sql,
                dbtable = dbtable,
                log_name = log_name.replace('\'', "''"),
                database = database.replace('\'', "''"),
                table = table.replace('\'', "''"),
                host_filter = host_filter,
            )
        } else {
            format!(
                r#"
                WITH {start} AS start_, {end} AS end_
                SELECT DISTINCT query_id
                FROM {dbtable}
                WHERE
                    event_date BETWEEN toDate(start_) AND toDate(end_) AND
                    event_time BETWEEN toDateTime(start_) AND toDateTime(end_) AND
                    database = '{database}' AND
                    table = '{table}'
                    {host_filter}
                LIMIT 1000
                "#,
                start = start_sql,
                end = end_sql,
                dbtable = dbtable,
                database = database.replace('\'', "''"),
                table = table.replace('\'', "''"),
                host_filter = host_filter,
            )
        };

        let columns = self.execute(&query).await?;
        let mut query_ids = Vec::new();
        for i in 0..columns.row_count() {
            if let Ok(query_id) = columns.get::<String, _>(i, "query_id") {
                query_ids.push(query_id);
            }
        }

        Ok(query_ids)
    }

    pub async fn get_otel_spans_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "opentelemetry_span_log");
        let start_us = start.timestamp_micros();
        let end_us = end.timestamp_micros();
        let query_id_filter = if let Some(ids) = query_ids {
            format!(
                "AND attribute['clickhouse.query_id'] IN ('{}')",
                ids.join("','")
            )
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
                    SELECT
                        operation_name,
                        start_time_us,
                        finish_time_us,
                        attribute['clickhouse.query_id'] AS query_id,
                        {host_expr} AS host_name
                    FROM {dbtable}
                    WHERE start_time_us BETWEEN {start_us} AND {end_us}
                      {query_id_filter}
                    ORDER BY start_time_us
                    "#,
                dbtable = dbtable,
                start_us = start_us,
                end_us = end_us,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await;
    }

    pub async fn get_trace_log_counters_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "trace_log");
        let query_id_filter = if let Some(ids) = query_ids {
            format!("AND query_id IN ('{}')", ids.join("','"))
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        query_id,
                        event,
                        increment,
                        event_time_microseconds,
                        {host_expr} AS host_name
                    FROM {dbtable}
                    WHERE trace_type = 'ProfileEvent' AND increment != 0
                      {query_id_filter}
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await;
    }

    pub async fn get_query_metrics_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Vec<QueryMetricRow>> {
        let dbtable = self.get_log_table_name("system", "query_metric_log");
        let query_id_filter = if let Some(ids) = query_ids {
            format!("AND query_id IN ('{}')", ids.join("','"))
        } else {
            String::new()
        };
        let block = self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        query_id,
                        event_time_microseconds,
                        memory_usage,
                        peak_memory_usage,
                        {host_expr} AS host_name,
                        COLUMNS('ProfileEvent_')
                    FROM {dbtable}
                    WHERE 1
                      {query_id_filter}
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await?;

        let pe_columns: Vec<String> = block
            .columns()
            .iter()
            .map(|c| c.name().to_string())
            .filter(|name| name.starts_with("ProfileEvent_"))
            .collect();

        let mut rows = Vec::with_capacity(block.row_count());
        for i in 0..block.row_count() {
            let mut profile_events = HashMap::new();
            for col in &pe_columns {
                let value: u64 = block.get(i, col.as_str()).unwrap_or(0);
                if value != 0 {
                    let name = col.strip_prefix("ProfileEvent_").unwrap();
                    profile_events.insert(name.to_string(), value);
                }
            }
            let ts_ns = match block.get::<DateTime<Tz>, _>(i, "event_time_microseconds") {
                Ok(dt) => dt.with_timezone(&Local).timestamp_nanos_opt().unwrap_or(0) as u64,
                Err(e) => {
                    log::warn!(
                        "Perfetto: query_metric_log row {} event_time_microseconds: {}",
                        i,
                        e
                    );
                    continue;
                }
            };
            rows.push(QueryMetricRow {
                host_name: block.get(i, "host_name").unwrap_or_default(),
                timestamp_ns: ts_ns,
                memory_usage: block.get(i, "memory_usage").unwrap_or(0),
                peak_memory_usage: block.get(i, "peak_memory_usage").unwrap_or(0),
                profile_events,
            });
        }
        Ok(rows)
    }

    pub async fn get_part_log_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "part_log");
        let query_id_filter = if let Some(ids) = query_ids {
            // Escape single quotes so arbitrary query IDs cannot break the IN list.
            let ids: Vec<String> = ids.iter().map(|id| id.replace('\'', "''")).collect();
            format!("AND query_id IN ('{}')", ids.join("','"))
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        event_type,
                        event_time_microseconds,
                        duration_ms,
                        database,
                        table,
                        part_name,
                        query_id,
                        rows,
                        size_in_bytes,
                        {host_expr} AS host_name
                    FROM {dbtable}
                    WHERE event_type NOT IN ('MergePartsStart', 'MutatePartStart')
                      {query_id_filter}
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await;
    }

    pub async fn get_stack_traces_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "trace_log");
        let symbol_expr = if self
            .quirks
            .has(ClickHouseAvailableQuirks::TraceLogHasSymbols)
        {
            r#"arrayReverse(if(empty(symbols),
                arrayMap(addr -> demangle(addressToSymbol(addr)), trace),
                symbols))"#
        } else {
            "arrayReverse(arrayMap(addr -> demangle(addressToSymbol(addr)), trace))"
        };
        let query_id_filter = if let Some(ids) = query_ids {
            // Escape single quotes so arbitrary query IDs cannot break the IN list.
            let ids: Vec<String> = ids.iter().map(|id| id.replace('\'', "''")).collect();
            format!("AND query_id IN ('{}')", ids.join("','"))
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        event_time_microseconds,
                        thread_id,
                        trace_type::String AS trace_type,
                        {symbol_expr} AS stack,
                        size,
                        query_id,
                        {host_expr} AS host_name
                    FROM {dbtable}
                    WHERE trace_type IN ('CPU', 'Real', 'Memory')
                      {query_id_filter}
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    SETTINGS allow_introspection_functions=1
                    "#,
                dbtable = dbtable,
                symbol_expr = symbol_expr,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await;
    }

    pub async fn get_text_log_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "text_log");
        let query_id_filter = if let Some(ids) = query_ids {
            // Escape single quotes so arbitrary query IDs cannot break the IN list.
            let ids: Vec<String> = ids.iter().map(|id| id.replace('\'', "''")).collect();
            format!("AND query_id IN ('{}')", ids.join("','"))
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        event_time_microseconds,
                        level::String AS level,
                        logger_name::String AS logger_name,
                        message,
                        query_id,
                        {host_expr} AS host_name
                    FROM {dbtable}
                    WHERE 1
                      {query_id_filter}
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await;
    }

    pub async fn get_query_thread_log_for_perfetto(
        &self,
        query_ids: Option<&[String]>,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "query_thread_log");
        let query_id_filter = if let Some(ids) = query_ids {
            // Escape single quotes so arbitrary query IDs cannot break the IN list.
            let ids: Vec<String> = ids.iter().map(|id| id.replace('\'', "''")).collect();
            format!("AND query_id IN ('{}')", ids.join("','"))
        } else {
            String::new()
        };
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        query_id,
                        thread_name,
                        event_time_microseconds,
                        query_duration_ms,
                        ProfileEvents.Names,
                        ProfileEvents.Values,
                        peak_memory_usage,
                        {host_expr} AS host_name
                    FROM {dbtable}
                    WHERE 1
                      {query_id_filter}
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                query_id_filter = query_id_filter,
                host_expr = self.get_log_hostname_column(),
            ))
            .await;
    }

    pub async fn get_queries_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "query_log");
        return self
            .execute(
                format!(
                    r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        ProfileEvents.Names,
                        ProfileEvents.Values,
                        Settings.Names,
                        Settings.Values,
                        {peak_threads_usage} AS peak_threads_usage,
                        memory_usage::Int64 AS peak_memory_usage,
                        query_duration_ms/1e3 AS elapsed,
                        user,
                        is_initial_query,
                        initial_query_id,
                        query_id,
                        hostname AS host_name,
                        current_database,
                        query_start_time_microseconds,
                        event_time_microseconds AS query_end_time_microseconds,
                        toValidUTF8(query) AS original_query,
                        normalizeQuery(query) AS normalized_query
                    FROM {dbtable}
                    WHERE type != 'QueryStart'
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    "#,
                    start = start
                        .timestamp_nanos_opt()
                        .ok_or(Error::msg("Invalid start"))?,
                    end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
                    dbtable = dbtable,
                    peak_threads_usage = if self
                        .quirks
                        .has(ClickHouseAvailableQuirks::QueryLogPeakThreadsUsage)
                    {
                        "peak_threads_usage"
                    } else {
                        "length(thread_ids)"
                    },
                )
                .as_str(),
            )
            .await;
    }

    pub async fn get_metric_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Vec<MetricLogRow>> {
        let dbtable = self.get_log_table_name("system", "metric_log");
        let block = self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        event_time_microseconds,
                        COLUMNS('ProfileEvent_'),
                        COLUMNS('CurrentMetric_')
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await?;

        let pe_columns: Vec<String> = block
            .columns()
            .iter()
            .map(|c| c.name().to_string())
            .filter(|name| name.starts_with("ProfileEvent_"))
            .collect();
        let cm_columns: Vec<String> = block
            .columns()
            .iter()
            .map(|c| c.name().to_string())
            .filter(|name| name.starts_with("CurrentMetric_"))
            .collect();

        let mut rows = Vec::with_capacity(block.row_count());
        for i in 0..block.row_count() {
            let ts_ns = match block.get::<DateTime<Tz>, _>(i, "event_time_microseconds") {
                Ok(dt) => dt.with_timezone(&Local).timestamp_nanos_opt().unwrap_or(0) as u64,
                Err(e) => {
                    log::warn!(
                        "Perfetto: metric_log row {} event_time_microseconds: {}",
                        i,
                        e
                    );
                    continue;
                }
            };
            let mut profile_events = HashMap::new();
            for col in &pe_columns {
                let value: u64 = block.get(i, col.as_str()).unwrap_or(0);
                if value != 0 {
                    let name = col.strip_prefix("ProfileEvent_").unwrap();
                    profile_events.insert(name.to_string(), value);
                }
            }
            let mut current_metrics = HashMap::new();
            for col in &cm_columns {
                let value: i64 = block.get(i, col.as_str()).unwrap_or(0);
                if value != 0 {
                    let name = col.strip_prefix("CurrentMetric_").unwrap();
                    current_metrics.insert(name.to_string(), value);
                }
            }
            rows.push(MetricLogRow {
                timestamp_ns: ts_ns,
                profile_events,
                current_metrics,
            });
        }
        Ok(rows)
    }

    pub async fn get_asynchronous_metric_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "asynchronous_metric_log");
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        metric,
                        value,
                        event_time_microseconds
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_asynchronous_insert_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "asynchronous_insert_log");
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        database,
                        table,
                        format,
                        status,
                        bytes,
                        exception,
                        event_time_microseconds,
                        flush_time_microseconds,
                        query_id
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_error_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "error_log");
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        error,
                        code,
                        value,
                        remote,
                        last_error_message,
                        event_time
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_s3_queue_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "s3queue_log");
        return self
            .execute(&format!(
                r#"
                    SELECT
                        file_name,
                        rows_processed,
                        status,
                        processing_start_time,
                        processing_end_time,
                        exception
                    FROM {dbtable}
                    WHERE processing_start_time >= toDateTime(fromUnixTimestamp64Nano({start}))
                      AND processing_start_time <= toDateTime(fromUnixTimestamp64Nano({end}))
                    ORDER BY processing_start_time
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_azure_queue_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "azure_queue_log");
        return self
            .execute(&format!(
                r#"
                    SELECT
                        database,
                        table,
                        file_name,
                        rows_processed,
                        status,
                        processing_start_time,
                        processing_end_time,
                        exception
                    FROM {dbtable}
                    WHERE processing_start_time >= toDateTime(fromUnixTimestamp64Nano({start}))
                      AND processing_start_time <= toDateTime(fromUnixTimestamp64Nano({end}))
                    ORDER BY processing_start_time
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_blob_storage_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "blob_storage_log");
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        event_type,
                        query_id,
                        disk_name,
                        bucket,
                        remote_path,
                        data_size,
                        error,
                        event_time_microseconds
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_background_schedule_pool_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "background_schedule_pool_log");
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        log_name,
                        database,
                        table,
                        query_id,
                        duration_ms,
                        error,
                        exception,
                        event_time_microseconds
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_session_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "session_log");
        return self
            .execute(&format!(
                r#"
                    WITH
                        fromUnixTimestamp64Nano({start}) AS start_,
                        fromUnixTimestamp64Nano({end}) AS end_
                    SELECT
                        type::String AS type,
                        user,
                        auth_type::String AS auth_type,
                        interface::String AS interface,
                        toString(client_address) AS client_address,
                        client_name,
                        failure_reason,
                        event_time_microseconds
                    FROM {dbtable}
                    WHERE 1
                      AND event_date >= toDate(start_) AND event_time >= toDateTime(start_)
                      AND event_date <= toDate(end_)   AND event_time <= toDateTime(end_)
                    ORDER BY event_time_microseconds
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_aggregated_zookeeper_log_for_perfetto(
        &self,
        start: DateTime<Local>,
        end: DateTime<Local>,
    ) -> Result<Columns> {
        let dbtable = self.get_log_table_name("system", "aggregated_zookeeper_log");
        return self
            .execute(&format!(
                r#"
                    SELECT
                        event_time,
                        session_id,
                        parent_path,
                        operation::String AS operation,
                        count,
                        mapKeys(errors) AS error_names,
                        mapValues(errors) AS error_counts,
                        average_latency,
                        component
                    FROM {dbtable}
                    WHERE event_time >= toDateTime(fromUnixTimestamp64Nano({start}))
                      AND event_time <= toDateTime(fromUnixTimestamp64Nano({end}))
                    ORDER BY event_time
                    "#,
                dbtable = dbtable,
                start = start
                    .timestamp_nanos_opt()
                    .ok_or(Error::msg("Invalid start"))?,
                end = end.timestamp_nanos_opt().ok_or(Error::msg("Invalid end"))?,
            ))
            .await;
    }

    pub async fn get_warnings(&self) -> Result<Vec<String>> {
        let table_exists: u64 = self
            .execute(
                "SELECT count() FROM system.tables WHERE database = 'system' AND name = 'warnings'",
            )
            .await?
            .get(0, "count()")?;
        if table_exists == 0 {
            return Ok(Vec::new());
        }

        let block = self.execute("SELECT message FROM system.warnings").await?;
        let warnings: Vec<String> = collect_values(&block, "message");
        let filtered: Vec<String> = warnings
            .into_iter()
            .filter(|w| !w.contains("transparent_hugepage") && !w.starts_with("Obsolete settings"))
            .collect();
        Ok(filtered)
    }

    pub async fn execute(&self, query: &str) -> Result<Columns> {
        let columns = self
            .pool
            .get_handle()
            .await?
            .query(query)
            .fetch_all()
            .await?;
        log::trace!("Received {} rows for query: {}", columns.row_count(), query);
        Ok(columns)
    }

    async fn execute_simple(&self, query: &str) -> Result<()> {
        let mut client = self.pool.get_handle().await?;
        let mut stream = client.query(query).stream_blocks();
        // Polling the first block is enough to surface a server-side error, if any.
        match stream.next().await {
            Some(Err(err)) => Err(Error::new(err)),
            _ => Ok(()),
        }
    }

    pub async fn get_cluster_hosts(&self) -> Result<Vec<String>> {
        let cluster = self.options.cluster.clone().unwrap_or_default();
        if cluster.is_empty() {
            return Ok(Vec::new());
        }

        let query = format!(
            "SELECT DISTINCT hostName() AS host FROM clusterAllReplicas('{}', system.one) ORDER BY host",
            cluster
        );

        let columns = self.execute(&query).await?;
        let mut hosts = Vec::new();
        for i in 0..columns.row_count() {
            if let Ok(host) = columns.get::<String, _>(i, "host") {
                hosts.push(host);
            }
        }

        Ok(hosts)
    }

    pub fn get_host_filter_clause(&self, selected_host: Option<&String>) -> String {
        if let Some(host) = selected_host
            && !host.is_empty()
            && self.options.cluster.is_some()
        {
            return format!("AND hostName() = '{}'", host.replace('\'', "''"));
        }
        String::new()
    }

    // Filter for system.*_log reads. Without clusterAllReplicas(), hostName() collapses to the
    // executor node, so we match on the persisted `hostname` column instead.
    pub fn get_log_host_filter_clause(&self, selected_host: Option<&String>) -> String {
        if let Some(host) = selected_host
            && !host.is_empty()
            && self.options.cluster.is_some()
        {
            let col = if self.shared_log_pipeline {
                "hostname"
            } else {
                "hostName()"
            };
            return format!("AND {} = '{}'", col, host.replace('\'', "''"));
        }
        String::new()
    }

    // SELECT-side hostname expression for system.*_log reads. Pairs with get_log_host_filter_clause.
    pub fn get_log_hostname_column(&self) -> &'static str {
        if self.shared_log_pipeline {
            "hostname"
        } else {
            "hostName()"
        }
    }

    pub fn get_table_name(&self, database: &str, table: &str) -> String {
        let cluster = self.options.cluster.clone().unwrap_or_default();
        let history = self.options.history;

        return match (history, cluster.is_empty()) {
            (false, true) => format!("{}.{}", database, table),
            (true, false) => format!(
                "clusterAllReplicas('{}', merge('{}', '^{}'))",
                cluster, database, table
            ),
            (true, true) => format!("merge('{}', '^{}')", database, table),
            (false, false) => format!(
                "clusterAllReplicas('{}', '{}', '{}')",
                cluster, database, table
            ),
        };
    }

    // Variant for system.*_log tables. With use_shared_merge_tree_log_pipeline we can skip
    // clusterAllReplicas() entirely — a single replica observes the whole cluster's rows.
    pub fn get_log_table_name(&self, database: &str, table: &str) -> String {
        if self.shared_log_pipeline {
            let history = self.options.history;
            return if history {
                format!("merge('{}', '^{}')", database, table)
            } else {
                format!("{}.{}", database, table)
            };
        }
        self.get_table_name(database, table)
    }

    pub fn get_table_name_no_history(&self, database: &str, table: &str) -> String {
        let cluster = self.options.cluster.clone().unwrap_or_default();
        return match cluster.is_empty() {
            true => format!("{}.{}", database, table),
            false => format!(
                "clusterAllReplicas('{}', '{}', '{}')",
                cluster, database, table
            ),
        };
    }
}
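
The `(history, cluster)` dispatch in `get_table_name` above can be read as a four-way decision table. The sketch below is a hypothetical, standalone mirror of that logic (the real method reads both flags from `self.options`; `table_expr` and its signature are illustrative only):

```rust
// Hypothetical simplified mirror of ClickHouseWorker::get_table_name:
// picks the FROM expression based on whether a cluster is configured and
// whether historical access (merge of rotated *_log tables) is requested.
fn table_expr(database: &str, table: &str, cluster: Option<&str>, history: bool) -> String {
    match (history, cluster) {
        // Plain local table.
        (false, None) => format!("{database}.{table}"),
        // All rotated variants of the table, on every replica.
        (true, Some(c)) => format!("clusterAllReplicas('{c}', merge('{database}', '^{table}'))"),
        // All rotated variants, local node only.
        (true, None) => format!("merge('{database}', '^{table}')"),
        // Current table, on every replica.
        (false, Some(c)) => format!("clusterAllReplicas('{c}', '{database}', '{table}')"),
    }
}

fn main() {
    assert_eq!(table_expr("system", "query_log", None, false), "system.query_log");
    assert_eq!(
        table_expr("system", "query_log", Some("prod"), false),
        "clusterAllReplicas('prod', 'system', 'query_log')"
    );
    assert_eq!(
        table_expr("system", "query_log", None, true),
        "merge('system', '^query_log')"
    );
    println!("ok");
}
```

`get_log_table_name` is the same table with one extra shortcut: when the shared log pipeline is enabled, the `cluster` dimension is forced to `None`, since a single replica already observes the whole cluster's rows.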


================================================
FILE: src/interpreter/clickhouse_quirks.rs
================================================
use semver::{Version, VersionReq};

// Bit flags OR-ed into ClickHouseQuirks::mask; each value must be a distinct power of two.
#[derive(Debug, Clone, Copy)]
pub enum ClickHouseAvailableQuirks {
    ProcessesElapsed = 1,
    ProcessesCurrentDatabase = 2,
    AsynchronousMetricsTotalIndexGranularityBytesInMemoryAllocated = 4,
    TraceLogHasSymbols = 8,
    SystemReplicasUUID = 16,
    QueryLogPeakThreadsUsage = 32,
    ProcessesPeakThreadsUsage = 64,
    SystemBackgroundSchedulePool = 128,
}

// List of quirks (that requires workaround) or new features.
const QUIRKS: [(&str, ClickHouseAvailableQuirks); 8] = [
    // https://github.com/ClickHouse/ClickHouse/pull/46047
    //
    // NOTE: I use 22.13 here because I have this version in production, and it is more or
    // less the same as 23.1
    (
        ">=22.13, <23.2",
        ClickHouseAvailableQuirks::ProcessesElapsed,
    ),
    // https://github.com/ClickHouse/ClickHouse/pull/22365
    ("<21.4", ClickHouseAvailableQuirks::ProcessesCurrentDatabase),
    // https://github.com/ClickHouse/ClickHouse/pull/80861
    (
        ">=24.11, <25.6",
        ClickHouseAvailableQuirks::AsynchronousMetricsTotalIndexGranularityBytesInMemoryAllocated,
    ),
    (">=25.1", ClickHouseAvailableQuirks::TraceLogHasSymbols),
    (">=25.11", ClickHouseAvailableQuirks::SystemReplicasUUID),
    // peak_threads_usage is available in system.query_log since 23.8
    (
        ">=23.8",
        ClickHouseAvailableQuirks::QueryLogPeakThreadsUsage,
    ),
    // peak_threads_usage is available in system.processes since 25.11
    (
        ">=25.11",
        ClickHouseAvailableQuirks::ProcessesPeakThreadsUsage,
    ),
    // system.background_schedule_pool is available since 25.12
    (
        ">=25.12",
        ClickHouseAvailableQuirks::SystemBackgroundSchedulePool,
    ),
];

pub struct ClickHouseQuirks {
    // Return more verbose version for the UI
    version_string: String,
    mask: u64,
}

// Custom matcher that properly handles pre-release versions.
// https://github.com/dtolnay/semver/issues/323#issuecomment-2432169904
fn version_matches(version: &semver::Version, req: &semver::VersionReq) -> bool {
    if req.matches(version) {
        return true;
    }

    // This custom matching logic is needed because semver cannot compare versions
    // that carry pre-release tags against plain requirements
    let mut version_without_pre = version.clone();
    version_without_pre.pre = "".parse().unwrap();
    for comp in &req.comparators {
        if comp.matches(version) {
            continue;
        }

        // If major & minor & patch are the same (or omitted),
        // this means there is a mismatch on the pre-release tag
        if comp.major == version.major
            && comp.minor.is_none_or(|m| m == version.minor)
            && comp.patch.is_none_or(|p| p == version.patch)
        {
            return false;
        }

        // Otherwise, compare without pre-release tags
        let mut comp_without_pre = comp.clone();
        comp_without_pre.pre = "".parse().unwrap();
        if !comp_without_pre.matches(&version_without_pre) {
            return false;
        }
    }
    true
}

impl ClickHouseQuirks {
    pub fn new(version_string: String) -> Self {
        // Version::parse() supports only x.y.z and nothing more, but that is all we
        // need: only major.minor can introduce new features.
        let components = version_string
            .strip_prefix('v')
            .unwrap_or(&version_string)
            .split('.')
            .collect::<Vec<&str>>();
        let mut ver_maj_min_patch_pre = components[0..3].join(".");
        let version_pre = components.last().unwrap_or(&"-testing");
        if !version_pre.ends_with("-stable") {
            log::warn!(
                "Non-stable version detected ({}), treating as older/development version",
                version_string
            );
            ver_maj_min_patch_pre.push_str(&format!(
                "-{}",
                version_pre
                    .split('-')
                    .collect::<Vec<&str>>()
                    .last()
                    .unwrap_or(&"alpha")
            ));
        }
        log::debug!("Version (maj.min.patch.pre): {}", ver_maj_min_patch_pre);

        let version = Version::parse(ver_maj_min_patch_pre.as_str())
            .unwrap_or_else(|_| panic!("Cannot parse version: {}", ver_maj_min_patch_pre));
        log::debug!("Version: {}", version);

        let mut mask: u64 = 0;
        for quirk in &QUIRKS {
            let version_requirement = VersionReq::parse(quirk.0)
                .unwrap_or_else(|_| panic!("Cannot parse version requirements for {:?}", quirk.1));
            if version_matches(&version, &version_requirement) {
                mask |= quirk.1 as u64;
                log::warn!("Apply quirk {:?}", quirk.1);
            }
        }

        return Self {
            version_string,
            mask,
        };
    }

    pub fn get_version(&self) -> String {
        return self.version_string.clone();
    }

    pub fn has(&self, quirk: ClickHouseAvailableQuirks) -> bool {
        return (self.mask & quirk as u64) != 0;
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_stable_version() {
        let quirks = ClickHouseQuirks::new("25.11.1.1-stable".to_string());
        assert_eq!(quirks.get_version(), "25.11.1.1-stable");
        assert!(quirks.has(ClickHouseAvailableQuirks::SystemReplicasUUID));
        assert!(quirks.has(ClickHouseAvailableQuirks::ProcessesPeakThreadsUsage));
        assert!(quirks.has(ClickHouseAvailableQuirks::TraceLogHasSymbols));
    }

    #[test]
    fn test_testing_version() {
        let quirks = ClickHouseQuirks::new("25.11.1.1-testing".to_string());
        assert_eq!(quirks.get_version(), "25.11.1.1-testing");
        assert!(!quirks.has(ClickHouseAvailableQuirks::SystemReplicasUUID));
        assert!(!quirks.has(ClickHouseAvailableQuirks::ProcessesPeakThreadsUsage));
    }

    #[test]
    fn test_next_testing_prerelease_version() {
        let quirks = ClickHouseQuirks::new("25.12.1.1-testing".to_string());
        assert_eq!(quirks.get_version(), "25.12.1.1-testing");
        assert!(quirks.has(ClickHouseAvailableQuirks::SystemReplicasUUID));
        assert!(quirks.has(ClickHouseAvailableQuirks::ProcessesPeakThreadsUsage));
    }

    #[test]
    fn test_version_with_v_prefix() {
        let quirks = ClickHouseQuirks::new("v25.11.1.1-stable".to_string());
        assert_eq!(quirks.get_version(), "v25.11.1.1-stable");
        assert!(quirks.has(ClickHouseAvailableQuirks::SystemReplicasUUID));
    }
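
    // Sanity check: each quirk must map to a distinct bit in the mask,
    // otherwise has() would report false positives for unrelated quirks.
    #[test]
    fn test_quirk_bits_are_distinct() {
        let mut mask = 0u64;
        for quirk in &QUIRKS {
            let bit = quirk.1 as u64;
            assert_eq!(bit.count_ones(), 1, "{:?} is not a single bit", quirk.1);
            assert_eq!(mask & bit, 0, "{:?} overlaps another quirk", quirk.1);
            mask |= bit;
        }
    }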

    // Only version_matches() has dedicated tests here; for everything else we
    // rely on semver's own test suite.
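    // Illustrative cases for the pre-release handling described above; they
    // rely on semver's documented rule that plain comparators never match
    // pre-release versions.
    #[test]
    fn test_version_matches_prerelease() {
        let req = VersionReq::parse(">=25.11").unwrap();
        // A stable version goes through plain semver matching.
        assert!(version_matches(&Version::parse("25.11.1").unwrap(), &req));
        // A pre-release of the boundary version itself is treated as older.
        assert!(!version_matches(
            &Version::parse("25.11.1-testing").unwrap(),
            &req
        ));
        // A pre-release of a later minor already satisfies the requirement.
        assert!(version_matches(
            &Version::parse("25.12.1-testing").unwrap(),
            &req
        ));
    }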
}


================================================
FILE: src/interpreter/context.rs
================================================
use crate::actions::ActionDescription;
use crate::interpreter::{
    ClickHouse, Worker,
    debug_metrics::DebugMetrics,
    options::{ChDigOptions, ChDigViews},
    perfetto::PerfettoServer,
};
use anyhow::Result;
use chrono::Duration;
use cursive::{Cursive, View, event::Event, event::EventResult, views::Dialog, views::OnEventView};
use std::sync::{Arc, Condvar, Mutex, atomic};

pub type ContextArc = Arc<Mutex<Context>>;

type GlobalActionCallback = Arc<Box<dyn Fn(&mut Cursive) + Send + Sync>>;
pub struct GlobalAction {
    pub description: ActionDescription,
    pub callback: GlobalActionCallback,
}

type ViewActionCallback =
    Arc<Box<dyn Fn(&mut dyn View) -> Result<Option<EventResult>> + Send + Sync>>;
pub struct ViewAction {
    pub description: ActionDescription,
    pub callback: ViewActionCallback,
}

pub struct Context {
    pub options: ChDigOptions,

    pub clickhouse: Arc<ClickHouse>,
    pub server_version: String,
    pub worker: Worker,
    pub background_runner_cv: Arc<(Mutex<()>, Condvar)>,
    pub background_runner_force: Arc<atomic::AtomicBool>,
    pub background_runner_summary_force: Arc<atomic::AtomicBool>,

    pub cb_sink: cursive::CbSink,

    pub global_actions: Vec<GlobalAction>,
    pub views_menu_actions: Vec<GlobalAction>,
    pub view_actions: Vec<ViewAction>,

    pub pending_view_callback: Option<ViewActionCallback>,
    pub view_registry: crate::view::ViewRegistry,

    pub search_history: crate::view::search_history::SearchHistory,

    pub selected_host: Option<String>,
    pub current_view: Option<ChDigViews>,

    pub perfetto_server: Option<Arc<PerfettoServer>>,

    pub queries_filter: Arc<Mutex<String>>,
    pub queries_limit: Arc<Mutex<u64>>,

    pub debug_metrics: Arc<DebugMetrics>,
}

impl Context {
    pub async fn new(
        options: ChDigOptions,
        clickhouse: Arc<ClickHouse>,
        cb_sink: cursive::CbSink,
    ) -> Result<ContextArc> {
        let server_version = clickhouse.version();
        let debug_metrics = DebugMetrics::new();
        let worker = Worker::new();
        let background_runner_cv = Arc::new((Mutex::new(()), Condvar::new()));
        let background_runner_force = Arc::new(atomic::AtomicBool::new(false));
        let background_runner_summary_force = Arc::new(atomic::AtomicBool::new(false));

        let view_registry = crate::view::ViewRegistry::new();

        let queries_filter = Arc::new(Mutex::new(String::new()));
        let queries_limit = Arc::new(Mutex::new(options.view.queries_limit));

        // Metrics are always collected; display is toggled with `!`. The refresh thread
        // sleeps when hidden, so this is free when unused.
        debug_metrics.spawn_refresh(cb_sink.clone(), std::time::Duration::from_millis(500));

        let context = Arc::new(Mutex::new(Context {
            options,
            clickhouse,
            server_version,
            worker,
            background_runner_cv,
            background_runner_force,
            background_runner_summary_force,
            cb_sink,
            global_actions: Vec::new(),
            views_menu_actions: Vec::new(),
            view_actions: Vec::new(),
            pending_view_callback: None,
            view_registry,
            search_history: crate::view::search_history::SearchHistory::new(),
            selected_host: None,
            current_view: None,
            perfetto_server: None,
            queries_filter,
            queries_limit,
            debug_metrics,
        }));

        context.lock().unwrap().worker.start(context.clone());

        return Ok(context);
    }

    pub fn add_global_action<F, E>(
        &mut self,
        siv: &mut Cursive,
        text: &'static str,
        event: E,
        cb: F,
    ) where
        F: Fn(&mut Cursive) + Send + Sync + Copy + 'static,
        E: Into<Event>,
    {
        let event = event.into();
        let action = GlobalAction {
            description: ActionDescription { text, event },
            callback: Arc::new(Box::new(cb)),
        };
        siv.add_global_callback(action.description.event.clone(), cb);
        self.global_actions.push(action);
    }
    pub fn add_global_action_without_shortcut<F>(
        &mut self,
        siv: &mut Cursive,
        text: &'static str,
        cb: F,
    ) where
        F: Fn(&mut Cursive) + Send + Sync + Copy + 'static,
    {
        return self.add_global_action(siv, text, Event::Unknown(Vec::from([0u8])), cb);
    }

    pub fn add_view<F>(&mut self, text: &'static str, cb: F)
    where
        F: Fn(&mut Cursive) + Send + Sync + 'static,
    {
        let action = GlobalAction {
            description: ActionDescription {
                text,
                event: Event::Unknown(Vec::from([0u8])),
            },
            callback: Arc::new(Box::new(cb)),
        };
        self.views_menu_actions.push(action);
    }

    pub fn register_provider(&mut self, provider: Arc<dyn crate::view::ViewProvider>) {
        let name = provider.name();
        self.view_registry.register(provider);
        self.add_view(name, move |siv| {
            let context = siv.user_data::<ContextArc>().unwrap().clone();
            let provider = context.lock().unwrap().view_registry.get(name);
            {
                let mut ctx = context.lock().unwrap();
                ctx.current_view = Some(provider.view_type());
            }
            provider.show(siv, context.clone());
        });
    }

    pub fn add_view_action<F, E, V>(
        &mut self,
        view: &mut OnEventView<V>,
        text: &'static str,
        event: E,
        cb: F,
    ) where
        F: Fn(&mut dyn View) -> Result<Option<EventResult>> + Send + Sync + Copy + 'static,
        E: Into<Event>,
        V: View,
    {
        let event = event.into();
        let action = ViewAction {
            description: ActionDescription { text, event },
            callback: Arc::new(Box::new(cb)),
        };
        let event = action.description.event.clone();
        let cb = action.callback.clone();
        view.set_on_event_inner(event, move |sub_view, _event| {
            let result = cb.as_ref()(sub_view);
            match result {
                Err(err) => {
                    return Some(EventResult::with_cb_once(move |siv: &mut Cursive| {
                        siv.add_layer(Dialog::info(err.to_string()));
                    }));
                }
                Ok(event) => return event,
            }
        });
        self.view_actions.push(action);
    }

    pub fn add_view_action_without_shortcut<F, V>(
        &mut self,
        view: &mut OnEventView<V>,
        text: &'static str,
        cb: F,
    ) where
        F: Fn(&mut dyn View) -> Result<Option<EventResult>> + Send + Sync + Copy + 'static,
        V: View,
    {
        return self.add_view_action(view, text, Event::Unknown(Vec::from([0u8])), cb);
    }

    pub fn get_or_start_perfetto_server(&mut self) -> Arc<PerfettoServer> {
        if let Some(ref server) = self.perfetto_server {
            return server.clone();
        }
        let server = Arc::new(PerfettoServer::new());
        self.perfetto_server = Some(server.clone());
        server
    }

    pub fn trigger_view_refresh(&self) {
        self.background_runner_force
            .store(true, atomic::Ordering::SeqCst);
        self.background_runner_summary_force
            .store(true, atomic::Ordering::SeqCst);
        self.background_runner_cv.1.notify_all();
    }

    pub fn shift_time_interval(&mut self, is_sub: bool, minutes: i64) {
        let delta = Duration::try_minutes(minutes).unwrap();
        let start = &mut self.options.view.start;
        let end = &mut self.options.view.end;

        if is_sub {
            *start -= delta;
            *end -= delta;
        } else {
            *start += delta;
            *end += delta;
        }
        log::debug!(
            "Set time frame to ({}, {}) ({} minutes {})",
            start,
            end,
            minutes,
            if is_sub { "backward" } else { "forward" }
        );
    }
}


================================================
FILE: src/interpreter/debug_metrics.rs
================================================
//! Internal chdig observability counters, rendered into the status bar when toggled with `!`.
//!
//! Metrics are recorded unconditionally — the cost is two atomic ops per worker event plus a
//! lock-and-push on a ~256-entry ring buffer. Display is gated on a toggle flag: when off
//! the refresh thread sleeps and does not ping the event loop, so there is no UI cost either.
//!
//! Design choices:
//! - Nearest-rank percentile over a fixed-size [`Histogram`] (O(N log N) per snapshot,
//!   N≤256). Simpler than an online estimator (t-digest, HDR histogram) and accurate enough
//!   for a status bar at a few Hz.
//! - Event-loop latency is measured as a `cb_sink` round-trip, not frame render time.
//!   Cursive does not expose per-frame hooks; round-trip drift is the quantity the user
//!   actually perceives as "responsiveness". Tracked as a histogram (not a single latest
//!   value) so transient spikes don't get hidden behind whatever the most recent ping saw.
//! - [`InFlightGuard`] is an RAII guard so early returns and panics in the worker cannot
//!   leak the counter.

use std::collections::VecDeque;
use std::fmt;
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

use cursive::CbSink;

const SAMPLES_CAPACITY: usize = 256;

/// Fixed-capacity ring-buffer histogram over `Duration` samples. Thread-safe via an
/// internal `Mutex` — contention is negligible at the rates we record (≤ a few Hz).
pub struct Histogram {
    samples: Mutex<VecDeque<Duration>>,
}

impl Histogram {
    fn new() -> Self {
        Histogram {
            samples: Mutex::new(VecDeque::with_capacity(SAMPLES_CAPACITY)),
        }
    }

    pub fn record(&self, d: Duration) {
        let mut s = self.samples.lock().unwrap();
        if s.len() == SAMPLES_CAPACITY {
            s.pop_front();
        }
        s.push_back(d);
    }

    /// Nearest-rank (p50, p90, p99). Returns zeros on an empty histogram.
    pub fn percentiles(&self) -> (Duration, Duration, Duration) {
        let s = self.samples.lock().unwrap();
        if s.is_empty() {
            return (Duration::ZERO, Duration::ZERO, Duration::ZERO);
        }
        let mut v: Vec<Duration> = s.iter().copied().collect();
        v.sort_unstable();
        (percentile(&v, 50), percentile(&v, 90), percentile(&v, 99))
    }
}

pub struct DebugMetrics {
    shown: AtomicBool,
    in_flight: AtomicU64,
    /// `cb_sink` round-trip latency — proxy for "how responsive does chdig feel".
    ui_lag: Histogram,
    /// Per-worker-event processing duration (a worker event is one ClickHouse query /
    /// action chdig issued).
    event: Histogram,
}

#[must_use = "Drop decrements the in-flight counter; hold this for the duration of work"]
pub struct InFlightGuard(Arc<DebugMetrics>);

impl Drop for InFlightGuard {
    fn drop(&mut self) {
        self.0.in_flight.fetch_sub(1, Ordering::Relaxed);
    }
}

impl DebugMetrics {
    pub fn new() -> Arc<Self> {
        Arc::new(DebugMetrics {
            shown: AtomicBool::new(false),
            in_flight: AtomicU64::new(0),
            ui_lag: Histogram::new(),
            event: Histogram::new(),
        })
    }

    pub fn is_shown(&self) -> bool {
        self.shown.load(Ordering::Relaxed)
    }

    /// Flips visibility and returns the new state.
    pub fn toggle_shown(&self) -> bool {
        !self.shown.fetch_xor(true, Ordering::Relaxed)
    }

    pub fn track_in_flight(self: &Arc<Self>) -> InFlightGuard {
        self.in_flight.fetch_add(1, Ordering::Relaxed);
        InFlightGuard(Arc::clone(self))
    }

    pub fn record_event(&self, d: Duration) {
        self.event.record(d);
    }

    pub fn record_ui_lag(&self, d: Duration) {
        self.ui_lag.record(d);
    }

    pub fn snapshot(&self) -> MetricsSnapshot {
        let (lag_p50, lag_p90, lag_p99) = self.ui_lag.percentiles();
        let (evt_p50, evt_p90, evt_p99) = self.event.percentiles();
        MetricsSnapshot {
            in_flight: self.in_flight.load(Ordering::Relaxed),
            lag_p50,
            lag_p90,
            lag_p99,
            evt_p50,
            evt_p90,
            evt_p99,
        }
    }

    /// Spawn a background thread that, *while visibility is on*, probes event-loop lag
    /// via a `cb_sink` round-trip and pushes the latest snapshot into the status bar.
    /// When visibility is off the thread sleeps, so the hidden cost is just a dormant
    /// thread (no cb_sink traffic, no redraws). Exits when the sink is closed.
    pub fn spawn_refresh(self: &Arc<Self>, cb_sink: CbSink, interval: Duration) {
        let metrics = Arc::clone(self);
        thread::Builder::new()
            .name("chdig-debug-metrics".into())
            .spawn(move || refresh_loop(metrics, cb_sink, interval))
            .expect("spawn chdig-debug-metrics");
    }
}

fn refresh_loop(metrics: Arc<DebugMetrics>, cb_sink: CbSink, interval: Duration) {
    loop {
        thread::sleep(interval);
        if !metrics.is_shown() {
            continue;
        }
        let sent_at = Instant::now();
        let metrics = Arc::clone(&metrics);
        let send_result = cb_sink.send(Box::new(move |siv: &mut cursive::Cursive| {
            metrics.record_ui_lag(sent_at.elapsed());
            let text = metrics.snapshot().to_string();
            crate::view::Navigation::set_statusbar_debug(siv, text);
        }));
        if send_result.is_err() {
            break;
        }
    }
}

#[derive(Default, Clone, Copy)]
pub struct MetricsSnapshot {
    pub in_flight: u64,
    pub lag_p50: Duration,
    pub lag_p90: Duration,
    pub lag_p99: Duration,
    pub evt_p50: Duration,
    pub evt_p90: Duration,
    pub evt_p99: Duration,
}

impl fmt::Display for MetricsSnapshot {
    /// Status-bar line; written to be readable without a legend:
    ///   * `UI lag`   – cb_sink round-trip percentiles (event loop responsiveness)
    ///   * `Active`   – worker events currently being processed
    ///   * `Event`    – worker-event processing-time percentiles (one per ClickHouse query)
    ///
    /// All triples are `p50/p90/p99`, nearest-rank over the last [`SAMPLES_CAPACITY`]
    /// samples of each kind.
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "UI lag p50/p90/p99: {}/{}/{} ms  Active: {}  Event p50/p90/p99: {}/{}/{} ms",
            self.lag_p50.as_millis(),
            self.lag_p90.as_millis(),
            self.lag_p99.as_millis(),
            self.in_flight,
            self.evt_p50.as_millis(),
            self.evt_p90.as_millis(),
            self.evt_p99.as_millis(),
        )
    }
}

/// Nearest-rank percentile; q ∈ 0..=100. Undefined on an empty slice — callers must guard.
fn percentile<T: Copy>(sorted: &[T], q: u32) -> T {
    debug_assert!(q <= 100);
    debug_assert!(!sorted.is_empty());
    let rank = (q as usize * sorted.len()).div_ceil(100).max(1);
    sorted[rank - 1]
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn percentile_integer_ranks() {
        let v: Vec<u64> = (1..=10).collect();
        assert_eq!(percentile(&v, 50), 5);
        assert_eq!(percentile(&v, 90), 9);
        assert_eq!(percentile(&v, 99), 10);
        assert_eq!(percentile(&v, 100), 10);
    }

    #[test]
    fn percentile_single_element() {
        assert_eq!(percentile(&[42u64], 50), 42);
        assert_eq!(percentile(&[42u64], 99), 42);
    }

    #[test]
    fn histogram_caps_at_capacity() {
        let h = Histogram::new();
        // Feed monotonic samples well past capacity and assert that the p99 reflects
        // only the most recent SAMPLES_CAPACITY values (earliest ones were evicted).
        let total = SAMPLES_CAPACITY + 50;
        for i in 0..total {
            h.record(Duration::from_millis(i as u64));
        }
        let (_p50, _p90, p99) = h.percentiles();
        // Oldest retained = total - SAMPLES_CAPACITY = 50; newest = total - 1 = 305.
        // Nearest-rank p99: rank = ceil(99 * 256 / 100) = 254; value = 50 + (254-1) = 303.
        assert_eq!(p99, Duration::from_millis(303));
    }

    #[test]
    fn histogram_empty_returns_zero() {
        let h = Histogram::new();
        assert_eq!(
            h.percentiles(),
            (Duration::ZERO, Duration::ZERO, Duration::ZERO)
        );
    }

    #[test]
    fn ui_lag_and_event_are_independent() {
        let m = DebugMetrics::new();
        m.record_ui_lag(Duration::from_millis(5));
        m.record_event(Duration::from_millis(500));
        let s = m.snapshot();
        assert_eq!(s.lag_p50, Duration::from_millis(5));
        assert_eq!(s.evt_p50, Duration::from_millis(500));
    }

    #[test]
    fn in_flight_guard_is_raii() {
        let m = DebugMetrics::new();
        assert_eq!(m.snapshot().in_flight, 0);
        let g1 = m.track_in_flight();
        let g2 = m.track_in_flight();
        assert_eq!(m.snapshot().in_flight, 2);
        drop(g1);
        assert_eq!(m.snapshot().in_flight, 1);
        drop(g2);
        assert_eq!(m.snapshot().in_flight, 0);
    }

    #[test]
    fn toggle_shown_returns_new_state() {
        let m = DebugMetrics::new();
        assert!(!m.is_shown());
        assert!(m.toggle_shown());
        assert!(m.is_shown());
        assert!(!m.toggle_shown());
        assert!(!m.is_shown());
    }

    #[test]
    fn display_format_is_readable() {
        let s = MetricsSnapshot {
            in_flight: 3,
            lag_p50: Duration::from_millis(1),
            lag_p90: Duration::from_millis(4),
            lag_p99: Duration::from_millis(12),
            evt_p50: Duration::from_millis(12),
            evt_p90: Duration::from_millis(87),
            evt_p99: Duration::from_millis(420),
        };
        let rendered = s.to_string();
        assert!(rendered.contains("UI lag p50/p90/p99: 1/4/12 ms"));
        assert!(rendered.contains("Active: 3"));
        assert!(rendered.contains("Event p50/p90/p99: 12/87/420 ms"));
    }
}


================================================
FILE: src/interpreter/flamegraph.rs
================================================
use crate::interpreter::clickhouse::Columns;
use crate::pastila;
use anyhow::{Error, Result};
use crossterm::event::{self, Event as CrosstermEvent, KeyEventKind};
use flamelens::app::{App, AppResult};
use flamelens::flame::FlameGraph;
use flamelens::handler::handle_key_events;
use flamelens::ui;
use ratatui::Terminal;
use ratatui::backend::CrosstermBackend;
use std::io;

/// Convert a (stack, count) result block into the folded ("collapsed") stack
/// format expected by flamegraph tools: one "stack count" line per row.
pub fn block_to_folded(block: &Columns) -> String {
    block
        .rows()
        .map(|x| {
            [
                x.get::<String, _>(0).unwrap(),
                x.get::<u64, _>(1).unwrap().to_string(),
            ]
            .join(" ")
        })
        .collect::<Vec<String>>()
        .join("\n")
}

fn run_flamelens(mut app: App) -> AppResult<()> {
    let backend = CrosstermBackend::new(io::stderr());
    let mut terminal = Terminal::new(backend)?;
    let timeout = std::time::Duration::from_secs(1);

    terminal.clear()?;

    // Start the main loop.
    while app.running {
        terminal.draw(|frame| {
            ui::render(&mut app, frame);
            if let Some(input_buffer) = &app.input_buffer
                && let Some(cursor) = input_buffer.cursor
            {
                frame.set_cursor_position((cursor.0, cursor.1));
            }
        })?;

        // FIXME: note, right now I cannot use EventHandle with Tui, since EventHandle is not
        // terminated gracefully
        if event::poll(timeout).expect("failed to poll new events") {
            match event::read().expect("unable to read event") {
                CrosstermEvent::Key(e) => {
                    if e.kind == KeyEventKind::Press {
                        handle_key_events(e, &mut app)?
                    }
                }
                CrosstermEvent::Mouse(_e) => {}
                CrosstermEvent::Resize(_w, _h) => {}
                CrosstermEvent::FocusGained => {}
                CrosstermEvent::FocusLost => {}
                CrosstermEvent::Paste(_) => {}
            }
        }
    }

    terminal.clear()?;
    // ratatui's Terminal::drop may show the cursor; re-hide it for cursive
    drop(terminal);
    crossterm::execute!(io::stderr(), crossterm::cursor::Hide)?;

    Ok(())
}

pub fn show(title: &'static str, data: String) -> AppResult<()> {
    if data.trim().is_empty() {
        return Err(Error::msg("Flamegraph is empty").into());
    }

    let flamegraph = FlameGraph::from_string(data, true);
    run_flamelens(App::with_flamegraph(title, flamegraph))
}

/// Show a differential flamegraph: `after` rendered with per-frame coloring
/// against the `before` baseline (handled by flamelens's `diff_mode`).
pub fn show_diff(title: &'static str, before: String, after: String) -> AppResult<()> {
    if before.trim().is_empty() && after.trim().is_empty() {
        return Err(Error::msg("Flamegraph diff is empty (both queries have no samples)").into());
    }

    let before_fg = FlameGraph::from_string(before, true);
    let mut after_fg = FlameGraph::from_string(after, true);
    after_fg.set_diff_against(&before_fg);
    run_flamelens(App::with_flamegraph(title, after_fg))
}

pub async fn share(
    data: String,
    pastila_clickhouse_host: &str,
    pastila_url: &str,
) -> Result<String> {
    if data.trim().is_empty() {
        return Err(Error::msg("Flamegraph is empty"));
    }

    let pastila_url =
        pastila::upload_encrypted(&data, pastila_clickhouse_host, pastila_url).await?;
    return Ok(format!(
        "https://www.speedscope.app/#profileURL={}",
        pastila_url
    ));
}


================================================
FILE: src/interpreter/mod.rs
================================================
// pub for clickhouse::Columns
mod background_runner;
pub mod clickhouse;
mod clickhouse_quirks;
mod context;
pub mod debug_metrics;
mod query;
mod worker;
// only functions
pub mod flamegraph;
pub mod options;
pub mod perfetto;

pub use clickhouse::ClickHouse;
pub use clickhouse::TextLogArguments;
pub use clickhouse_quirks::ClickHouseAvailableQuirks;
pub use clickhouse_quirks::ClickHouseQuirks;
pub use context::Context;
pub use context::ContextArc;
pub use worker::Worker;

pub type WorkerEvent = worker::Event;
pub type Query = query::Query;
pub type BackgroundRunner = background_runner::BackgroundRunner;


================================================
FILE: src/interpreter/options.rs
================================================
use crate::common::RelativeDateTime;
use anyhow::{Result, anyhow};
use clap::{ArgAction, Args, CommandFactory, Parser, Subcommand, ValueEnum, builder::ArgPredicate};
use clap_complete::{Shell, generate};
use percent_encoding::{NON_ALPHANUMERIC, utf8_percent_encode};
use quick_xml::de::Deserializer as XmlDeserializer;
use serde::Deserialize;
use serde_yaml::Deserializer as YamlDeserializer;
use std::collections::HashMap;
use std::env;
use std::ffi::OsString;
use std::fs;
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};
use std::path;
use std::process;
use std::str::FromStr;
use std::time;

#[derive(Deserialize, Debug, PartialEq)]
struct ClickHouseClientConfigOpenSSLClient {
    #[serde(rename = "verificationMode")]
    verification_mode: Option<String>,
    #[serde(rename = "certificateFile")]
    certificate_file: Option<String>,
    #[serde(rename = "privateKeyFile")]
    private_key_file: Option<String>,
    #[serde(rename = "caConfig")]
    ca_config: Option<String>,
}
#[derive(Deserialize, Debug, PartialEq)]
struct ClickHouseClientConfigOpenSSL {
    client: Option<ClickHouseClientConfigOpenSSLClient>,
}

#[derive(Deserialize, Debug, PartialEq)]
struct ClickHouseClientConfigConnectionsCredentials {
    name: String,
    hostname: Option<String>,
    port: Option<u16>,
    user: Option<String>,
    password: Option<String>,
    secure: Option<bool>,
    // chdig analog for accept_invalid_certificate
    skip_verify: Option<bool>,
    #[serde(rename = "accept-invalid-certificate")]
    accept_invalid_certificate: Option<bool>,
    ca_certificate: Option<String>,
    client_certificate: Option<String>,
    client_private_key: Option<String>,
    history_file: Option<String>,
}
#[derive(Deserialize, Default, Debug, PartialEq)]
struct ClickHouseClientConfig {
    user: Option<String>,
    password: Option<String>,
    secure: Option<bool>,
    // chdig analog for accept_invalid_certificate
    skip_verify: Option<bool>,
    #[serde(rename = "accept-invalid-certificate")]
    accept_invalid_certificate: Option<bool>,
    open_ssl: Option<ClickHouseClientConfigOpenSSL>,
    history_file: Option<String>,
    connections_credentials: Vec<ClickHouseClientConfigConnectionsCredentials>,
}

#[derive(Deserialize, Default)]
struct XmlClickHouseClientConfigConnectionsCredentialsConnection {
    connection: Option<Vec<ClickHouseClientConfigConnectionsCredentials>>,
}
#[derive(Deserialize)]
struct XmlClickHouseClientConfig {
    user: Option<String>,
    password: Option<String>,
    secure: Option<bool>,
    // chdig analog for accept_invalid_certificate
    skip_verify: Option<bool>,
    #[serde(rename = "accept-invalid-certificate")]
    accept_invalid_certificate: Option<bool>,
    #[serde(rename = "openSSL")]
    open_ssl: Option<ClickHouseClientConfigOpenSSL>,
    history_file: Option<String>,
    connections_credentials: Option<XmlClickHouseClientConfigConnectionsCredentialsConnection>,
}

#[derive(Deserialize)]
struct YamlClickHouseClientConfig {
    user: Option<String>,
    password: Option<String>,
    secure: Option<bool>,
    // chdig analog for accept_invalid_certificate
    skip_verify: Option<bool>,
    #[serde(rename = "accept-invalid-certificate")]
    accept_invalid_certificate: Option<bool>,
    #[serde(rename = "openSSL")]
    open_ssl: Option<ClickHouseClientConfigOpenSSL>,
    history_file: Option<String>,
    connections_credentials: Option<HashMap<String, ClickHouseClientConfigConnectionsCredentials>>,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Subcommand)]
pub enum ChDigViews {
    /// Show currently running queries (from system.processes)
    Queries,
    /// Show recently run queries (from system.query_log)
    LastQueries,
    /// Show slow (slower than 1 second, ordered by duration) queries (from system.query_log)
    SlowQueries,
    /// Show merges for MergeTree engine (system.merges)
    Merges,
    /// Show S3 Queue (system.s3queue_metadata_cache)
    S3Queue,
    /// Show Azure Queue (system.azure_queue_metadata_cache)
    AzureQueue,
    /// Show mutations for MergeTree engine (system.mutations)
    Mutations,
    /// Show replication queue for ReplicatedMergeTree engine (system.replication_queue)
    ReplicationQueue,
    /// Show fetches for ReplicatedMergeTree engine (system.replicated_fetches)
    ReplicatedFetches,
    /// Show information about replicas (system.replicas)
    Replicas,
    /// Tables
    Tables,
    /// Show all errors that happened on the server since startup (system.errors)
    Errors,
    /// Show information about backups (system.backups)
    Backups,
    /// Show information about dictionaries (system.dictionaries)
    Dictionaries,
    /// Show server logs (system.text_log)
    ServerLogs,
    /// Show loggers (system.text_log)
    Loggers,
    /// Show background schedule pool tasks (system.background_schedule_pool)
    BackgroundSchedulePool,
    /// Show background schedule pool logs (system.background_schedule_pool_log)
    BackgroundSchedulePoolLog,
    /// Show table parts (system.parts)
    TableParts,
    /// Show asynchronous inserts (system.asynchronous_inserts)
    AsynchronousInserts,
    /// Show part log (system.part_log)
    PartLog,
    /// Spawn client inside chdig
    Client,
}

#[derive(Parser, Clone)]
#[command(name = "chdig")]
#[command(author, version, about, long_about = None)]
pub struct ChDigOptions {
    #[command(flatten)]
    pub clickhouse: ClickHouseOptions,
    #[command(flatten)]
    pub view: ViewOptions,
    #[command(subcommand)]
    pub start_view: Option<ChDigViews>,
    #[command(flatten)]
    pub service: ServiceOptions,
    #[clap(skip)]
    pub perfetto: ChDigPerfettoConfig,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Default, ValueEnum, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum LogsOrder {
    #[default]
    Asc,
    Desc,
}
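The interplay between `LogsOrder` and `--limit` is worth spelling out: with `Asc` the oldest `limit` rows survive the cut, while `Desc` keeps the newest `limit` rows. A minimal std-only sketch (not chdig's actual query code, which pushes the ordering into SQL) of that difference:

```rust
// Hypothetical sketch: how a row limit interacts with log ordering.
// Asc keeps the oldest `limit` rows; Desc keeps the newest `limit`
// rows (and can re-sort them ascending for display).
enum LogsOrder {
    Asc,
    Desc,
}

fn apply_limit(mut timestamps: Vec<u64>, order: LogsOrder, limit: usize) -> Vec<u64> {
    match order {
        LogsOrder::Asc => {
            timestamps.sort_unstable(); // ORDER BY event_time ASC LIMIT n
            timestamps.truncate(limit);
        }
        LogsOrder::Desc => {
            timestamps.sort_unstable_by(|a, b| b.cmp(a)); // ORDER BY event_time DESC LIMIT n
            timestamps.truncate(limit);
            timestamps.sort_unstable(); // re-sort ascending for display
        }
    }
    timestamps
}

fn main() {
    let rows = vec![30, 10, 20, 40];
    assert_eq!(apply_limit(rows.clone(), LogsOrder::Asc, 2), vec![10, 20]);
    assert_eq!(apply_limit(rows, LogsOrder::Desc, 2), vec![30, 40]);
}
```

This is why the `--logs-order` help below notes that `desc` is useful for long-running operations: the newest rows are the ones that fit under the limit.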

#[derive(Args, Clone, Default)]
pub struct ClickHouseOptions {
    #[arg(short('u'), long, value_name = "URL", env = "CHDIG_URL")]
    pub url: Option<String>,
    /// Overrides host in --url (for clickhouse-client compatibility)
    #[arg(long, env = "CLICKHOUSE_HOST")]
    pub host: Option<String>,
    /// Overrides port in --url (for clickhouse-client compatibility)
    #[arg(long)]
    pub port: Option<u16>,
    /// Overrides user in --url (for clickhouse-client compatibility)
    #[arg(long, env = "CLICKHOUSE_USER")]
    pub user: Option<String>,
    /// Overrides password in --url (for clickhouse-client compatibility)
    #[arg(long, env = "CLICKHOUSE_PASSWORD")]
    pub password: Option<String>,
    /// Overrides secure=1 in --url (for clickhouse-client compatibility)
    #[arg(long, action = ArgAction::SetTrue)]
    pub secure: bool,
    /// ClickHouse like config (with some advanced features)
    #[arg(long, env = "CLICKHOUSE_CONFIG")]
    pub config: Option<String>,
    #[arg(short('C'), long)]
    pub connection: Option<String>,
    // Sanitized version of "url" (safe to show in the UI)
    #[clap(skip)]
    pub url_safe: String,
    #[arg(short('c'), long)]
    pub cluster: Option<String>,
    /// Aggregate system.*_log historical data, using merge()
    #[arg(long, action = ArgAction::SetTrue)]
    pub history: bool,
    #[arg(long, action = ArgAction::SetTrue, overrides_with = "history")]
    pub no_history: bool,
    /// Do not hide internal (spawned by chdig) queries
    #[arg(long, action = ArgAction::SetTrue)]
    pub internal_queries: bool,
    #[arg(long, action = ArgAction::SetTrue, overrides_with = "internal_queries")]
    pub no_internal_queries: bool,
    /// Limit for logs
    #[arg(long, default_value_t = 100000)]
    pub limit: u64,
    /// Sort order for logs (desc returns the newest --limit rows, useful for long backups)
    #[arg(long, value_enum, default_value_t = LogsOrder::Asc)]
    pub logs_order: LogsOrder,
    /// Override server version (for dev builds with features already available). Should include
    /// at least three components (maj.min.patch)
    #[arg(long, hide = true)]
    pub server_version: Option<String>,
    /// Skip unavailable shards in distributed queries
    #[arg(long, action = ArgAction::SetTrue)]
    pub skip_unavailable_shards: bool,
    #[clap(skip)]
    pub history_file: Option<String>,
}

impl ClickHouseOptions {
    pub fn connection_info(&self) -> String {
        if let Some(ref connection) = self.connection {
            connection.clone()
        } else if let Ok(url) = url::Url::parse(&self.url_safe) {
            url.host_str().unwrap_or("localhost").to_string()
        } else {
            self.url_safe.clone()
        }
    }
}
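`connection_info()` above picks the UI display name with a three-step fallback: a named `--connection` wins, then the host of the sanitized URL, then the raw sanitized URL. A std-only sketch of the same precedence (chdig itself uses the `url` crate; `extract_host` here is a simplified, hypothetical stand-in):

```rust
// Hypothetical sketch of the display-name precedence, with naive
// host extraction instead of a real URL parser.
fn extract_host(url: &str) -> Option<&str> {
    let rest = url.split_once("://").map(|(_, r)| r)?;
    // authority is everything up to the first path or query separator
    let authority = rest.split(|c| c == '/' || c == '?').next().unwrap_or(rest);
    // strip a user[:password]@ prefix and a :port suffix
    let host = authority.rsplit('@').next().unwrap_or(authority);
    let host = host.split(':').next().unwrap_or(host);
    if host.is_empty() { None } else { Some(host) }
}

fn connection_info(connection: Option<&str>, url_safe: &str) -> String {
    if let Some(name) = connection {
        name.to_string() // 1. a named connection wins
    } else if let Some(host) = extract_host(url_safe) {
        host.to_string() // 2. else the host part of the sanitized URL
    } else {
        url_safe.to_string() // 3. else the raw sanitized value
    }
}

fn main() {
    assert_eq!(connection_info(Some("prod"), "tcp://host:9000"), "prod");
    assert_eq!(connection_info(None, "tcp://user@host:9000/db"), "host");
    assert_eq!(connection_info(None, "not-a-url"), "not-a-url");
}
```

Note the real method additionally falls back to `"localhost"` when the URL parses but has no host; the sketch folds that case into step 3.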

#[derive(Args, Clone)]
pub struct ViewOptions {
    #[arg(
        short('d'),
        long,
        value_parser = |arg: &str| -> Result<time::Duration> {Ok(time::Duration::from_millis(arg.parse()?))},
        default_value = "30000",
    )]
    pub delay_interval: time::Duration,

    #[arg(short('g'), long, action = ArgAction::SetTrue, default_value_if("cluster", ArgPredicate::IsPresent, Some("true")))]
    /// Group distributed queries (enabled by default in --cluster mode)
    pub group_by: bool,
    #[arg(short('G'), long, action = ArgAction::SetTrue, overrides_with = "group_by")]
    no_group_by: bool,

    #[arg(long, action = ArgAction::SetTrue)]
    /// Do not accumulate metrics for subqueries in the initial query
    pub no_subqueries: bool,

    /// Short option -b matches atop(1)
    #[arg(long, short('b'), default_value = "1hour")]
    /// Start of the time interval to look at
    pub start: RelativeDateTime,
    #[arg(long, short('e'), default_value = "")]
    /// End of the time interval
    pub end: RelativeDateTime,

    /// Wrap long lines
    #[arg(long, action = ArgAction::SetTrue)]
    pub wrap: bool,

    /// Disable stripping common hostname prefix and suffix in queries and logs views
    #[arg(long, action = ArgAction::SetTrue)]
    pub no_strip_hostname_suffix: bool,

    /// Limit for number of queries to render in queries views
    #[arg(long, default_value_t = 10000)]
    pub queries_limit: u64,
    // TODO: --mouse/--no-mouse (see EXIT_MOUSE_SEQUENCE in termion)
}
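The `--start` default of `"1hour"` is parsed as a relative offset via `RelativeDateTime` (see src/common/relative_date_time.rs). A minimal, hypothetical std-only sketch of that kind of parsing; the unit spellings accepted here are assumptions, not chdig's actual grammar:

```rust
use std::time::Duration;

// Hypothetical sketch: parse "<number><unit>" (e.g. "1hour", "30min")
// into a Duration. The real parser in chdig supports more forms.
fn parse_relative(s: &str) -> Option<Duration> {
    let digits: String = s.chars().take_while(|c| c.is_ascii_digit()).collect();
    let n: u64 = digits.parse().ok()?;
    let secs_per_unit = match &s[digits.len()..] {
        "s" | "sec" | "second" | "seconds" => 1,
        "m" | "min" | "minute" | "minutes" => 60,
        "h" | "hour" | "hours" => 3600,
        "d" | "day" | "days" => 86400,
        _ => return None,
    };
    Some(Duration::from_secs(n * secs_per_unit))
}

fn main() {
    assert_eq!(parse_relative("1hour"), Some(Duration::from_secs(3600)));
    assert_eq!(parse_relative("30min"), Some(Duration::from_secs(1800)));
    assert_eq!(parse_relative("oops"), None);
}
```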

#[derive(Args, Clone)]
pub struct ServiceOptions {
    #[arg(long, value_enum)]
    completion: Option<Shell>,
    #[arg(long)]
    /// Log (for debugging chdig itself)
    pub log: Option<String>,
    #[arg(
        long,
        default_value = "https://uzg8q0g12h.eu-central-1.aws.clickhouse.cloud/?user=paste"
    )]
    /// Pastila ClickHouse backend for uploading and sharing flamegraphs
    pub pastila_clickhouse_host: String,
    #[arg(long, default_value = "https://pastila.nl/")]
    /// pastila.nl URL (only to show direct link to pastila in logs)
    pub pastila_url: String,
    /// Path to chdig config file (YAML)
    #[arg(long, env = "CHDIG_CONFIG")]
    pub chdig_config: Option<String>,
}

#[derive(Deserialize, Clone)]
#[serde(default)]
pub struct ChDigPerfettoConfig {
    pub opentelemetry_span_log: bool,
    pub trace_log: bool,
    pub query_metric_log: bool,
    pub part_log: bool,
    pub query_thread_log: bool,
    pub text_log: bool,
    pub text_log_android: bool,
    pub per_server: bool,
    pub metric_log: bool,
    pub asynchronous_metric_log: bool,
    pub asynchronous_insert_log: bool,
    pub error_log: bool,
    pub s3_queue_log: bool,
    pub azure_queue_log: bool,
    pub blob_storage_log: bool,
    pub background_schedule_pool_log: bool,
    pub session_log: bool,
    pub aggregated_zookeeper_log: bool,
}

impl Default for ChDigPerfettoConfig {
    fn default() -> Self {
        Self {
            opentelemetry_span_log: true,
            trace_log: true,
            query_metric_log: false,
            part_log: true,
            query_thread_log: true,
            text_log: true,
            text_log_android: true,
            per_server: true,
            metric_log: true,
            asynchronous_metric_log: false,
            asynchronous_insert_log: true,
            error_log: true,
            s3_queue_log: true,
            azure_queue_log: true,
            blob_storage_log: true,
            background_schedule_pool_log: true,
            session_log: true,
            aggregated_zookeeper_log: false,
        }
    }
}
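Because `ChDigPerfettoConfig` carries `#[serde(default)]` and a hand-written `Default`, a config file only needs to list the flags it wants to flip; every omitted field keeps the default above. A std-only sketch of that merge semantics (serde does this internally; `PartialPerfettoConfig` and `merge` here are hypothetical stand-ins, trimmed to two fields):

```rust
// Hypothetical sketch of #[serde(default)] semantics: fields absent
// from the file fall back to Default::default().
#[derive(Debug, PartialEq)]
struct PerfettoConfig {
    trace_log: bool,
    metric_log: bool,
}

impl Default for PerfettoConfig {
    fn default() -> Self {
        Self { trace_log: true, metric_log: true }
    }
}

// Stand-in for deserialization: only fields present in the file are Some.
struct PartialPerfettoConfig {
    trace_log: Option<bool>,
    metric_log: Option<bool>,
}

fn merge(partial: PartialPerfettoConfig) -> PerfettoConfig {
    let d = PerfettoConfig::default();
    PerfettoConfig {
        trace_log: partial.trace_log.unwrap_or(d.trace_log),
        metric_log: partial.metric_log.unwrap_or(d.metric_log),
    }
}

fn main() {
    // A file that only sets `trace_log: false` leaves metric_log at its default.
    let partial = PartialPerfettoConfig { trace_log: Some(false), metric_log: None };
    assert_eq!(merge(partial), PerfettoConfig { trace_log: false, metric_log: true });
}
```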

#[derive(Deserialize, Default)]
#[serde(default)]
struct ChDigConfig {
    clickhouse: ChDigClickHouseConfig,
    view: ChDigViewConfig,
    service: ChDigServiceConfig,
    perfetto: ChDigPerfettoConfig,
}

#[derive(Deserialize, Default)]
#[serde(default)]
struct ChDigClickHouseConfig {
    url: Option<String>,
    host: Option<String>,
    port: Option<u16>,
    user: Option<String>,
    password: Option<String>,
    secure: Option<bool>,
    config: Option<String>,
    connection: Option<String>,
    cluster: Option<String>,
    history: Option<bool>,
    internal_queries: Option<bool>,
    limit: Option<u64>,
    logs_order: Option<LogsOrder>,
    skip_unavailable_shards: Option<bool>,
}

#[derive(Deserialize, Default)]
#[serde(default)]
struct ChDigViewConfig {
    delay_interval: Option<u64>,
    group_by: Option<bool>,
    no_subqueries: Option<bool>,
    start: Option<String>,
    end: Option<String>,
    wrap: Option<bool>,
    no_strip_hostname_suffix: Option<bool>,
    queries_limit: Option<u64>,
}

#[derive(Deserialize, Default)]
#[serde(default)]
struct ChDigServiceConfig {
    log: Option<String>,
    pastila_clickhouse_host: Option<String>,
    pastila_url: Option<String>,
}

fn read_yaml_clickhouse_client_config(path: &str) -> Result<ClickHouseClientConfig> {
    let file = fs::File::open(path)?;
    let reader = io::BufReader::new(file);
    let doc = YamlDeserializer::from_reader(reader);
    let yaml_config = YamlClickHouseClientConfig::deserialize(doc)?;

    let config = ClickHouseClientConfig {
        user: yaml_config.user,
        password: yaml_config.password,
        secure: yaml_config.secure,
        skip_verify: yaml_config.skip_verify,
        accept_invalid_certificate: yaml_config.accept_invalid_certificate,
        open_ssl: yaml_config.open_ssl,
        history_file: yaml_config.history_file,
        connections_credentials: yaml_config
            .connections_credentials
            .unwrap_or_default()
            .into_values()
            .collect(),
    };
    return Ok(config);
}
fn read_xml_clickhouse_client_config(path: &str) -> Result<ClickHouseClientConfig> {
    let file = fs::File::open(path)?;
    let reader = io::BufReader::new(file);
    let mut doc = XmlDeserializer::from_reader(reader);
    let xml_config = XmlClickHouseClientConfig::deserialize(&mut doc)?;

    let config = ClickHouseClientConfig {
        user: xml_config.user,
        password: xml_config.password,
        secure: xml_config.secure,
        skip_verify: xml_config.skip_verify,
        accept_invalid_certificate: xml_config.accept_invalid_certificate,
        open_ssl: xml_config.open_ssl,
        history_file: xml_config.history_file,
        connections_credentials: xml_config
            .connections_credentials
            .unwrap_or_default()
            .connection
            .unwrap_or_default(),
    };
    return Ok(config);
}
SYMBOL INDEX (813 symbols across 52 files)

FILE: src/actions.rs
  type ActionDescription (line 4) | pub struct ActionDescription {
    method event_string (line 10) | pub fn event_string(&self) -> String {
    method preview_styled (line 36) | pub fn preview_styled(&self) -> StyledString {

FILE: src/bin.rs
  constant DEFAULT_RUST_LOG (line 19) | const DEFAULT_RUST_LOG: &str = "trace,cursive=info,clickhouse_rs=info,hy...
  function panic_hook (line 21) | fn panic_hook(info: &PanicHookInfo<'_>) {
  function chdig_main_async (line 42) | pub async fn chdig_main_async<I, T>(itr: I) -> Result<()>
  function collect_args (line 119) | fn collect_args(argc: c_int, argv: *const *const c_char) -> Vec<OsString> {
  function chdig_main (line 135) | pub extern "C" fn chdig_main(argc: c_int, argv: *const *const c_char) ->...

FILE: src/common/relative_date_time.rs
  function parse_datetime_or_date (line 8) | pub fn parse_datetime_or_date(value: &str) -> Result<DateTime<Local>, St...
  type RelativeDateTime (line 43) | pub struct RelativeDateTime {
    method new (line 50) | pub fn new(offset: Option<TimeDelta>) -> Self {
    method get_date_time (line 57) | pub fn get_date_time(&self) -> Option<DateTime<Local>> {
    method to_editable_string (line 61) | pub fn to_editable_string(&self) -> String {
    method to_sql_datetime_64 (line 71) | pub fn to_sql_datetime_64(&self) -> Option<String> {
    method from (line 92) | fn from(value: DateTime<Local>) -> Self {
    method from (line 101) | fn from(value: Option<DateTime<Local>>) -> Self {
    method add_assign (line 156) | fn add_assign(&mut self, rhs: TimeDelta) {
    method sub_assign (line 162) | fn sub_assign(&mut self, rhs: TimeDelta) {
  type Err (line 110) | type Err = anyhow::Error;
  method from_str (line 112) | fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
  function from (line 137) | fn from(value: RelativeDateTime) -> Self {
  method fmt (line 147) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {

FILE: src/common/sparkline.rs
  constant BLOCKS (line 3) | const BLOCKS: &[char] = &['▁', '▂', '▃', '▄', '▅', '▆', '▇', '█'];
  type SparklineBuffer (line 5) | pub struct SparklineBuffer {
    method new (line 11) | pub fn new(capacity: usize) -> Self {
    method push (line 18) | pub fn push(&mut self, value: f64) {
    method render (line 25) | pub fn render(&self, width: usize) -> String {

FILE: src/common/stopwatch.rs
  type Stopwatch (line 4) | pub struct Stopwatch {
    method start_new (line 9) | pub fn start_new() -> Stopwatch {
    method elapsed_ms (line 15) | pub fn elapsed_ms(&self) -> u64 {
    method elapsed (line 19) | pub fn elapsed(&self) -> Duration {

FILE: src/interpreter/background_runner.rs
  type BackgroundRunner (line 17) | pub struct BackgroundRunner {
    method new (line 36) | pub fn new(
    method start (line 50) | pub fn start<C: Fn(bool) + std::marker::Send + 'static>(&mut self, cal...
    method schedule (line 74) | pub fn schedule(&mut self) {
  method drop (line 26) | fn drop(&mut self) {

FILE: src/interpreter/clickhouse.rs
  type Columns (line 24) | pub type Columns = Block<Complex>;
  type ClickHouse (line 26) | pub struct ClickHouse {
    method new (line 185) | pub async fn new(options: ClickHouseOptions) -> Result<Self> {
    method version (line 263) | pub fn version(&self) -> String {
    method get_slow_query_log (line 267) | pub async fn get_slow_query_log(
    method get_last_query_log (line 351) | pub async fn get_last_query_log(
    method get_processlist (line 436) | pub async fn get_processlist(
    method get_summary (line 512) | pub async fn get_summary(
    method kill_query (line 814) | pub async fn kill_query(&self, query_id: &str) -> Result<()> {
    method execute_query (line 826) | pub async fn execute_query(&self, database: &str, query: &str) -> Resu...
    method explain_syntax (line 831) | pub async fn explain_syntax(
    method explain_plan (line 842) | pub async fn explain_plan(&self, database: &str, query: &str) -> Resul...
    method explain_pipeline (line 846) | pub async fn explain_pipeline(&self, database: &str, query: &str) -> R...
    method explain_pipeline_graph (line 850) | pub async fn explain_pipeline_graph(&self, database: &str, query: &str...
    method explain_plan_indexes (line 857) | pub async fn explain_plan_indexes(&self, database: &str, query: &str) ...
    method show_create_table (line 861) | pub async fn show_create_table(&self, database: &str, table: &str) -> ...
    method explain (line 873) | async fn explain(
    method get_query_logs (line 918) | pub async fn get_query_logs(&self, args: &TextLogArguments) -> Result<...
    method get_flamegraph (line 1002) | pub async fn get_flamegraph(
    method get_jemalloc_flamegraph (line 1086) | pub async fn get_jemalloc_flamegraph(&self, selected_host: Option<&Str...
    method get_live_query_flamegraph (line 1113) | pub async fn get_live_query_flamegraph(
    method get_background_schedule_pool_query_ids (line 1145) | pub async fn get_background_schedule_pool_query_ids(
    method get_otel_spans_for_perfetto (line 1222) | pub async fn get_otel_spans_for_perfetto(
    method get_trace_log_counters_for_perfetto (line 1262) | pub async fn get_trace_log_counters_for_perfetto(
    method get_query_metrics_for_perfetto (line 1304) | pub async fn get_query_metrics_for_perfetto(
    method get_part_log_for_perfetto (line 1385) | pub async fn get_part_log_for_perfetto(
    method get_stack_traces_for_perfetto (line 1432) | pub async fn get_stack_traces_for_perfetto(
    method get_text_log_for_perfetto (line 1488) | pub async fn get_text_log_for_perfetto(
    method get_query_thread_log_for_perfetto (line 1531) | pub async fn get_query_thread_log_for_perfetto(
    method get_queries_for_perfetto (line 1576) | pub async fn get_queries_for_perfetto(
    method get_metric_log_for_perfetto (line 1631) | pub async fn get_metric_log_for_perfetto(
    method get_asynchronous_metric_log_for_perfetto (line 1712) | pub async fn get_asynchronous_metric_log_for_perfetto(
    method get_asynchronous_insert_log_for_perfetto (line 1743) | pub async fn get_asynchronous_insert_log_for_perfetto(
    method get_error_log_for_perfetto (line 1780) | pub async fn get_error_log_for_perfetto(
    method get_s3_queue_log_for_perfetto (line 1814) | pub async fn get_s3_queue_log_for_perfetto(
    method get_azure_queue_log_for_perfetto (line 1844) | pub async fn get_azure_queue_log_for_perfetto(
    method get_blob_storage_log_for_perfetto (line 1876) | pub async fn get_blob_storage_log_for_perfetto(
    method get_background_schedule_pool_log_for_perfetto (line 1912) | pub async fn get_background_schedule_pool_log_for_perfetto(
    method get_session_log_for_perfetto (line 1948) | pub async fn get_session_log_for_perfetto(
    method get_aggregated_zookeeper_log_for_perfetto (line 1984) | pub async fn get_aggregated_zookeeper_log_for_perfetto(
    method get_warnings (line 2017) | pub async fn get_warnings(&self) -> Result<Vec<String>> {
    method execute (line 2037) | pub async fn execute(&self, query: &str) -> Result<Columns> {
    method execute_simple (line 2049) | async fn execute_simple(&self, query: &str) -> Result<()> {
    method get_cluster_hosts (line 2060) | pub async fn get_cluster_hosts(&self) -> Result<Vec<String>> {
    method get_host_filter_clause (line 2082) | pub fn get_host_filter_clause(&self, selected_host: Option<&String>) -...
    method get_log_host_filter_clause (line 2094) | pub fn get_log_host_filter_clause(&self, selected_host: Option<&String...
    method get_log_hostname_column (line 2110) | pub fn get_log_hostname_column(&self) -> &'static str {
    method get_table_name (line 2118) | pub fn get_table_name(&self, database: &str, table: &str) -> String {
    method get_log_table_name (line 2138) | pub fn get_log_table_name(&self, database: &str, table: &str) -> String {
    method get_table_name_no_history (line 2150) | pub fn get_table_name_no_history(&self, database: &str, table: &str) -...
  type TraceType (line 37) | pub enum TraceType {
  type TextLogArguments (line 48) | pub struct TextLogArguments {
  type ClickHouseServerCPU (line 59) | pub struct ClickHouseServerCPU {
  type ClickHouseServerThreadPools (line 66) | pub struct ClickHouseServerThreadPools {
  type ClickHouseServerThreads (line 81) | pub struct ClickHouseServerThreads {
  type ClickHouseServerMemory (line 90) | pub struct ClickHouseServerMemory {
  type ClickHouseServerNetwork (line 109) | pub struct ClickHouseServerNetwork {
  type ClickHouseServerUptime (line 114) | pub struct ClickHouseServerUptime {
  type ClickHouseServerBlockDevices (line 120) | pub struct ClickHouseServerBlockDevices {
  type ClickHouseServerStorages (line 125) | pub struct ClickHouseServerStorages {
  type ClickHouseServerRows (line 135) | pub struct ClickHouseServerRows {
  type ClickHouseServerSummary (line 140) | pub struct ClickHouseServerSummary {
  type QueryMetricRow (line 159) | pub struct QueryMetricRow {
  type MetricLogRow (line 167) | pub struct MetricLogRow {
  function collect_values (line 173) | fn collect_values<'b, T: FromSql<'b>>(block: &'b Columns, column: &str) ...
  constant CHDIG_CLIENT_NAME (line 179) | const CHDIG_CLIENT_NAME: [&str; 2] = ["chdig", env!("CARGO_PKG_VERSION")];
  function get_client_name (line 180) | fn get_client_name() -> String {

FILE: src/interpreter/clickhouse_quirks.rs
  type ClickHouseAvailableQuirks (line 4) | pub enum ClickHouseAvailableQuirks {
  constant QUIRKS (line 16) | const QUIRKS: [(&str, ClickHouseAvailableQuirks); 8] = [
  type ClickHouseQuirks (line 51) | pub struct ClickHouseQuirks {
    method new (line 92) | pub fn new(version_string: String) -> Self {
    method get_version (line 138) | pub fn get_version(&self) -> String {
    method has (line 142) | pub fn has(&self, quirk: ClickHouseAvailableQuirks) -> bool {
  function version_matches (line 59) | fn version_matches(version: &semver::Version, req: &semver::VersionReq) ...
  function test_stable_version (line 152) | fn test_stable_version() {
  function test_testing_version (line 161) | fn test_testing_version() {
  function test_next_testing_prerelease_version (line 169) | fn test_next_testing_prerelease_version() {
  function test_version_with_v_prefix (line 177) | fn test_version_with_v_prefix() {

FILE: src/interpreter/context.rs
  type ContextArc (line 13) | pub type ContextArc = Arc<Mutex<Context>>;
  type GlobalActionCallback (line 15) | type GlobalActionCallback = Arc<Box<dyn Fn(&mut Cursive) + Send + Sync>>;
  type GlobalAction (line 16) | pub struct GlobalAction {
  type ViewActionCallback (line 21) | type ViewActionCallback =
  type ViewAction (line 23) | pub struct ViewAction {
  type Context (line 28) | pub struct Context {
    method new (line 61) | pub async fn new(
    method add_global_action (line 110) | pub fn add_global_action<F, E>(
    method add_global_action_without_shortcut (line 128) | pub fn add_global_action_without_shortcut<F>(
    method add_view (line 139) | pub fn add_view<F>(&mut self, text: &'static str, cb: F)
    method register_provider (line 153) | pub fn register_provider(&mut self, provider: Arc<dyn crate::view::Vie...
    method add_view_action (line 167) | pub fn add_view_action<F, E, V>(
    method add_view_action_without_shortcut (line 199) | pub fn add_view_action_without_shortcut<F, V>(
    method get_or_start_perfetto_server (line 211) | pub fn get_or_start_perfetto_server(&mut self) -> Arc<PerfettoServer> {
    method trigger_view_refresh (line 220) | pub fn trigger_view_refresh(&self) {
    method shift_time_interval (line 228) | pub fn shift_time_interval(&mut self, is_sub: bool, minutes: i64) {

FILE: src/interpreter/debug_metrics.rs
  constant SAMPLES_CAPACITY (line 27) | const SAMPLES_CAPACITY: usize = 256;
  type Histogram (line 31) | pub struct Histogram {
    method new (line 36) | fn new() -> Self {
    method record (line 42) | pub fn record(&self, d: Duration) {
    method percentiles (line 51) | pub fn percentiles(&self) -> (Duration, Duration, Duration) {
  type DebugMetrics (line 62) | pub struct DebugMetrics {
    method new (line 82) | pub fn new() -> Arc<Self> {
    method is_shown (line 91) | pub fn is_shown(&self) -> bool {
    method toggle_shown (line 96) | pub fn toggle_shown(&self) -> bool {
    method track_in_flight (line 100) | pub fn track_in_flight(self: &Arc<Self>) -> InFlightGuard {
    method record_event (line 105) | pub fn record_event(&self, d: Duration) {
    method record_ui_lag (line 109) | pub fn record_ui_lag(&self, d: Duration) {
    method snapshot (line 113) | pub fn snapshot(&self) -> MetricsSnapshot {
    method spawn_refresh (line 131) | pub fn spawn_refresh(self: &Arc<Self>, cb_sink: CbSink, interval: Dura...
  type InFlightGuard (line 73) | pub struct InFlightGuard(Arc<DebugMetrics>);
  method drop (line 76) | fn drop(&mut self) {
  function refresh_loop (line 140) | fn refresh_loop(metrics: Arc<DebugMetrics>, cb_sink: CbSink, interval: D...
  type MetricsSnapshot (line 160) | pub struct MetricsSnapshot {
    method fmt (line 178) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
  function percentile (line 194) | fn percentile<T: Copy>(sorted: &[T], q: u32) -> T {
  function percentile_integer_ranks (line 206) | fn percentile_integer_ranks() {
  function percentile_single_element (line 215) | fn percentile_single_element() {
  function histogram_caps_at_capacity (line 221) | fn histogram_caps_at_capacity() {
  function histogram_empty_returns_zero (line 236) | fn histogram_empty_returns_zero() {
  function ui_lag_and_event_are_independent (line 245) | fn ui_lag_and_event_are_independent() {
  function in_flight_guard_is_raii (line 255) | fn in_flight_guard_is_raii() {
  function toggle_shown_returns_new_state (line 268) | fn toggle_shown_returns_new_state() {
  function display_format_is_readable (line 278) | fn display_format_is_readable() {

FILE: src/interpreter/flamegraph.rs
  function block_to_folded (line 13) | pub fn block_to_folded(block: &Columns) -> String {
  function run_flamelens (line 27) | fn run_flamelens(mut app: App) -> AppResult<()> {
  function show (line 71) | pub fn show(title: &'static str, data: String) -> AppResult<()> {
  function show_diff (line 82) | pub fn show_diff(title: &'static str, before: String, after: String) -> ...
  function share (line 93) | pub async fn share(

FILE: src/interpreter/mod.rs
  type WorkerEvent (line 22) | pub type WorkerEvent = worker::Event;
  type Query (line 23) | pub type Query = query::Query;
  type BackgroundRunner (line 24) | pub type BackgroundRunner = background_runner::BackgroundRunner;

FILE: src/interpreter/options.rs
  type ClickHouseClientConfigOpenSSLClient (line 21) | struct ClickHouseClientConfigOpenSSLClient {
  type ClickHouseClientConfigOpenSSL (line 32) | struct ClickHouseClientConfigOpenSSL {
  type ClickHouseClientConfigConnectionsCredentials (line 37) | struct ClickHouseClientConfigConnectionsCredentials {
  type ClickHouseClientConfig (line 54) | struct ClickHouseClientConfig {
  type XmlClickHouseClientConfigConnectionsCredentialsConnection (line 68) | struct XmlClickHouseClientConfigConnectionsCredentialsConnection {
  type XmlClickHouseClientConfig (line 72) | struct XmlClickHouseClientConfig {
  type YamlClickHouseClientConfig (line 87) | struct YamlClickHouseClientConfig {
  type ChDigViews (line 102) | pub enum ChDigViews {
  type ChDigOptions (line 152) | pub struct ChDigOptions {
  type LogsOrder (line 167) | pub enum LogsOrder {
  type ClickHouseOptions (line 174) | pub struct ClickHouseOptions {
    method connection_info (line 230) | pub fn connection_info(&self) -> String {
  type ViewOptions (line 242) | pub struct ViewOptions {
  type ServiceOptions (line 284) | pub struct ServiceOptions {
  type ChDigPerfettoConfig (line 306) | pub struct ChDigPerfettoConfig {
  method default (line 328) | fn default() -> Self {
  type ChDigConfig (line 354) | struct ChDigConfig {
  type ChDigClickHouseConfig (line 363) | struct ChDigClickHouseConfig {
  type ChDigViewConfig (line 382) | struct ChDigViewConfig {
  type ChDigServiceConfig (line 395) | struct ChDigServiceConfig {
  function read_yaml_clickhouse_client_config (line 401) | fn read_yaml_clickhouse_client_config(path: &str) -> Result<ClickHouseCl...
  function read_xml_clickhouse_client_config (line 423) | fn read_xml_clickhouse_client_config(path: &str) -> Result<ClickHouseCli...
  function try_default_clickhouse_client_config (line 461) | fn try_default_clickhouse_client_config() -> Option<Result<ClickHouseCli...
  function read_chdig_config (line 490) | fn read_chdig_config(path: &str) -> Result<ChDigConfig> {
  function try_default_chdig_config (line 507) | fn try_default_chdig_config() -> Option<Result<ChDigConfig>> {
  function apply_chdig_config (line 527) | fn apply_chdig_config(options: &mut ChDigOptions, config: &ChDigConfig) {
  function parse_url (line 638) | fn parse_url(options: &ClickHouseOptions) -> Result<url::Url> {
  function is_cloud_host (line 650) | pub fn is_cloud_host(host: &str) -> bool {
  function is_local_address (line 657) | fn is_local_address(host: &str) -> bool {
  function set_password_from_opt (line 674) | fn set_password_from_opt(url: &mut url::Url, password: Option<String>, f...
  function clickhouse_url_defaults (line 686) | fn clickhouse_url_defaults(
  function adjust_defaults (line 930) | fn adjust_defaults(options: &mut ChDigOptions) -> Result<()> {
  function parse_from (line 965) | pub fn parse_from<I, T>(itr: I) -> Result<ChDigOptions>
  function test_url_parse_no_proto (line 991) | fn test_url_parse_no_proto() {
  function test_url_parse_user (line 999) | fn test_url_parse_user() {
  function test_url_parse_password (line 1013) | fn test_url_parse_password() {
  function test_url_parse_password_with_special_chars (line 1029) | fn test_url_parse_password_with_special_chars() {
  function test_url_parse_port (line 1044) | fn test_url_parse_port() {
  function test_url_parse_secure (line 1058) | fn test_url_parse_secure() {
  function test_config_empty (line 1074) | fn test_config_empty() {
  function test_config_unknown_directives (line 1086) | fn test_config_unknown_directives() {
  function test_config_basic (line 1098) | fn test_config_basic() {
  function test_config_tls (line 1111) | fn test_config_tls() {
  function test_config_tls_applying_config_to_connection_url (line 1131) | fn test_config_tls_applying_config_to_connection_url() {
  function test_config_connections_applying_config_to_connection_url_play (line 1148) | fn test_config_connections_applying_config_to_connection_url_play() {
  function test_config_connections_applying_config_to_connection_url_play_tls (line 1164) | fn test_config_connections_applying_config_to_connection_url_play_tls() {
  function test_config_connections_host (line 1183) | fn test_config_connections_host() {
  function test_config_apply_accept_invalid_certificate (line 1198) | fn test_config_apply_accept_invalid_certificate() {
  function test_cloud_defaults (line 1215) | fn test_cloud_defaults() {
  function test_chdig_config_empty (line 1246) | fn test_chdig_config_empty() {
  function test_chdig_config_basic (line 1255) | fn test_chdig_config_basic() {
  function test_chdig_config_partial (line 1294) | fn test_chdig_config_partial() {
  function test_chdig_config_apply_clickhouse (line 1311) | fn test_chdig_config_apply_clickhouse() {
  function test_chdig_config_apply_view (line 1329) | fn test_chdig_config_apply_view() {
  function test_chdig_config_perfetto (line 1352) | fn test_chdig_config_perfetto() {
  function test_chdig_config_perfetto_defaults (line 1371) | fn test_chdig_config_perfetto_defaults() {
  function test_chdig_config_cli_overrides_config (line 1383) | fn test_chdig_config_cli_overrides_config() {

FILE: src/interpreter/perfetto.rs
  constant SEQUENCE_ID (line 29) | const SEQUENCE_ID: u32 = 1;
  constant CLOCK_ID_UNIXTIME (line 43) | const CLOCK_ID_UNIXTIME: u32 = 128;
  type Sample (line 45) | struct Sample {
  type PerfettoTraceBuilder (line 50) | pub struct PerfettoTraceBuilder {
    method new (line 69) | pub fn new(per_server: bool, text_log_android: bool) -> Self {
    method alloc_uuid (line 88) | fn alloc_uuid(&mut self) -> u64 {
    method make_packet (line 94) | fn make_packet(&mut self) -> TracePacket {
    method make_event_packet (line 106) | fn make_event_packet(&mut self, ts_ns: u64) -> TracePacket {
    method add_process_track (line 113) | fn add_process_track(&mut self, uuid: u64, name: &str) {
    method add_child_track (line 122) | fn add_child_track(&mut self, uuid: u64, parent_uuid: u64, name: &str) {
    method add_counter_track (line 132) | fn add_counter_track(&mut self, uuid: u64, parent_uuid: u64, name: &st...
    method add_slice_begin (line 145) | fn add_slice_begin(
    method add_slice_end (line 162) | fn add_slice_end(&mut self, track_uuid: u64, ts_ns: u64) {
    method add_instant (line 171) | fn add_instant(
    method add_counter_value (line 188) | fn add_counter_value(&mut self, track_uuid: u64, ts_ns: u64, value: i6...
    method unit_for_event (line 201) | fn unit_for_event(name: &str) -> (Unit, i64) {
    method make_annotation_str (line 215) | fn make_annotation_str(name: &str, value: &str) -> DebugAnnotation {
    method make_annotation_int (line 222) | fn make_annotation_int(name: &str, value: i64) -> DebugAnnotation {
    method datetime_to_ns (line 229) | fn datetime_to_ns(dt: &DateTime<Local>) -> Option<u64> {
    method log_level_to_prio (line 233) | fn log_level_to_prio(level: &str) -> AndroidLogPriority {
    method add_queries (line 246) | pub fn add_queries(&mut self, queries: &[Query]) {
    method get_or_create_host_uuid (line 299) | fn get_or_create_host_uuid(&mut self, host_name: &str) -> u64 {
    method get_host_category_track (line 309) | fn get_host_category_track(&mut self, host_name: &str, category: &'sta...
    method add_otel_spans (line 325) | pub fn add_otel_spans(&mut self, columns: &Columns) {
    method add_trace_log_counters (line 391) | pub fn add_trace_log_counters(&mut self, columns: &Columns) {
    method add_query_metrics (line 449) | pub fn add_query_metrics(&mut self, rows: &[QueryMetricRow]) {
    method add_part_log (line 520) | pub fn add_part_log(&mut self, columns: &Columns) {
    method add_query_thread_log (line 596) | pub fn add_query_thread_log(&mut self, columns: &Columns) {
    method add_text_logs (line 672) | pub fn add_text_logs(&mut self, columns: &Columns) {
    method add_metric_log (line 753) | pub fn add_metric_log(&mut self, rows: &[MetricLogRow]) {
    method add_asynchronous_metric_log (line 792) | pub fn add_asynchronous_metric_log(&mut self, columns: &Columns) {
    method add_asynchronous_insert_log (line 828) | pub fn add_asynchronous_insert_log(&mut self, columns: &Columns) {
    method add_error_log (line 886) | pub fn add_error_log(&mut self, columns: &Columns) {
    method add_s3_queue_log (line 933) | pub fn add_s3_queue_log(&mut self, columns: &Columns) {
    method add_azure_queue_log (line 973) | pub fn add_azure_queue_log(&mut self, columns: &Columns) {
    method add_blob_storage_log (line 1021) | pub fn add_blob_storage_log(&mut self, columns: &Columns) {
    method add_background_pool_log (line 1073) | pub fn add_background_pool_log(&mut self, columns: &Columns) {
    method add_session_log (line 1128) | pub fn add_session_log(&mut self, columns: &Columns) {
    method add_aggregated_zookeeper_log (line 1181) | pub fn add_aggregated_zookeeper_log(&mut self, columns: &Columns) {
    method alloc_intern_id (line 1253) | fn alloc_intern_id(&mut self) -> u64 {
    method add_stack_traces (line 1274) | pub fn add_stack_traces(&mut self, columns: &Columns) {
    method emit_streaming_profile (line 1412) | fn emit_streaming_profile(
    method make_clock_snapshot (line 1480) | fn make_clock_snapshot() -> ClockSnapshot {
    method build (line 1499) | pub fn build(self) -> Vec<u8> {
  type PerfettoServer (line 1519) | pub struct PerfettoServer {
    method new (line 1526) | pub fn new() -> Self {
    method set_trace (line 1613) | pub fn set_trace(&self, data: Vec<u8>) {
    method get_perfetto_url (line 1617) | pub fn get_perfetto_url(&self) -> String {
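The `alloc_uuid` / `get_or_create_host_uuid` pair above suggests a common Perfetto-builder pattern: a monotonically increasing uuid counter plus a per-host cache so every host maps to exactly one track. A minimal sketch of that pattern (field and method names mirror the listing; the internals are an assumption, not chdig's actual code):

```rust
use std::collections::HashMap;

// Sketch of the track-uuid allocation pattern implied by
// `alloc_uuid`/`get_or_create_host_uuid`: never reuse a uuid, and cache
// the uuid per host so repeated lookups return the same track.
struct TrackAllocator {
    next_uuid: u64,
    host_tracks: HashMap<String, u64>,
}

impl TrackAllocator {
    fn new() -> Self {
        Self { next_uuid: 1, host_tracks: HashMap::new() }
    }

    // Hand out a fresh, never-reused uuid.
    fn alloc_uuid(&mut self) -> u64 {
        let uuid = self.next_uuid;
        self.next_uuid += 1;
        uuid
    }

    // Return the cached uuid for a host, allocating one on first sight.
    fn get_or_create_host_uuid(&mut self, host: &str) -> u64 {
        if let Some(&uuid) = self.host_tracks.get(host) {
            return uuid;
        }
        let uuid = self.alloc_uuid();
        self.host_tracks.insert(host.to_string(), uuid);
        uuid
    }
}

fn main() {
    let mut alloc = TrackAllocator::new();
    let a = alloc.get_or_create_host_uuid("host-a");
    let b = alloc.get_or_create_host_uuid("host-b");
    // Re-requesting a known host returns the cached uuid.
    assert_eq!(a, alloc.get_or_create_host_uuid("host-a"));
    assert_ne!(a, b);
    println!("host-a => {a}, host-b => {b}");
}
```

In a real Perfetto trace each uuid would be registered once via a `TrackDescriptor` packet before events reference it, which is why caching matters.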

FILE: src/interpreter/query.rs
  function map_from_arrays (line 11) | fn map_from_arrays<K, V>(keys: Vec<K>, values: Vec<V>) -> HashMap<K, V>
  type Query (line 23) | pub struct Query {
    method from_clickhouse_block (line 57) | pub fn from_clickhouse_block(
    method cpu (line 108) | pub fn cpu(&self) -> f64 {
    method io_wait (line 141) | pub fn io_wait(&self) -> f64 {
    method cpu_wait (line 174) | pub fn cpu_wait(&self) -> f64 {
    method net_io (line 207) | pub fn net_io(&self) -> f64 {
    method disk_io (line 216) | pub fn disk_io(&self) -> f64 {
    method io (line 225) | pub fn io(&self) -> f64 {
    method get_profile_events_multi (line 236) | fn get_profile_events_multi(&self, names: &[&'static str]) -> u64 {
    method get_prev_profile_events_multi (line 243) | fn get_prev_profile_events_multi(&self, names: &[&'static str]) -> u64 {
    method get_per_second_rate_events_multi (line 256) | fn get_per_second_rate_events_multi(&self, events: &[&'static str]) ->...
    method fmt (line 278) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
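The `map_from_arrays` helper listed above has a self-describing signature: ClickHouse returns `Map` columns as parallel key/value arrays, and the helper zips them into a `HashMap`. A plausible one-liner matching that signature (trait bounds are a guess):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Sketch of `map_from_arrays`: zip two parallel vectors (as returned for
// ClickHouse Map columns, e.g. ProfileEvents) into a lookup table.
fn map_from_arrays<K: Eq + Hash, V>(keys: Vec<K>, values: Vec<V>) -> HashMap<K, V> {
    keys.into_iter().zip(values).collect()
}

fn main() {
    let events = map_from_arrays(
        vec!["OSCPUVirtualTimeMicroseconds", "NetworkSendBytes"],
        vec![1_200u64, 4_096],
    );
    // Per-event lookups now work by name.
    assert_eq!(events["NetworkSendBytes"], 4_096);
    println!("{} events", events.len());
}
```

The accessor methods listed after it (`get_profile_events_multi`, `get_per_second_rate_events_multi`) would then sum or rate-compute over such a map.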

FILE: src/interpreter/worker.rs
  type Event (line 26) | pub enum Event {
    method enum_key (line 103) | fn enum_key(&self) -> String {
  type ReceiverArc (line 134) | type ReceiverArc = Arc<Mutex<mpsc::Receiver<Event>>>;
  type Sender (line 135) | type Sender = mpsc::Sender<Event>;
  type Worker (line 137) | pub struct Worker {
    method new (line 147) | pub fn new() -> Self {
    method start (line 169) | pub fn start(&mut self, context: ContextArc) {
    method toggle_pause (line 177) | pub fn toggle_pause(&mut self) {
    method is_paused (line 185) | pub fn is_paused(&self) -> bool {
    method send (line 190) | pub fn send(&mut self, force: bool, event: Event) {
  function start_tokio (line 217) | async fn start_tokio(context: ContextArc, receiver: ReceiverArc) {
  function render_or_share_flamegraph (line 326) | async fn render_or_share_flamegraph(
  function fetch_and_populate_perfetto_trace (line 368) | async fn fetch_and_populate_perfetto_trace(
  function fetch_server_perfetto_sources (line 493) | async fn fetch_server_perfetto_sources(
  function serve_perfetto_trace (line 672) | fn serve_perfetto_trace(
  function process_event (line 712) | async fn process_event(context: ContextArc, event: Event, need_clear: &m...
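The `ReceiverArc = Arc<Mutex<mpsc::Receiver<Event>>>` alias above spells out the worker wiring: events flow through an mpsc channel whose receiver is shared behind `Arc<Mutex<..>>` so a background task can drain it. A minimal thread-based sketch of that shape (the `Event` variants here are illustrative, not chdig's real ones, and chdig runs this on tokio rather than a bare thread):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Illustrative events; the real enum has many variants (see `Event` above).
#[derive(Debug, PartialEq)]
enum Event {
    UpdateProcessList,
    Quit,
}

// Spawn a worker that drains the shared receiver until it sees Quit,
// returning how many events it processed.
fn start_worker(receiver: Arc<Mutex<mpsc::Receiver<Event>>>) -> thread::JoinHandle<usize> {
    thread::spawn(move || {
        let mut processed = 0;
        loop {
            let event = receiver.lock().unwrap().recv().unwrap();
            if event == Event::Quit {
                break;
            }
            processed += 1; // a real worker would dispatch on the variant
        }
        processed
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = start_worker(Arc::new(Mutex::new(rx)));
    tx.send(Event::UpdateProcessList).unwrap();
    tx.send(Event::UpdateProcessList).unwrap();
    tx.send(Event::Quit).unwrap();
    assert_eq!(handle.join().unwrap(), 2);
}
```

Wrapping the receiver in `Arc<Mutex<..>>` is what lets the UI thread keep the `Sender` while the async runtime owns consumption.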

FILE: src/main.rs
  function main (line 6) | async fn main() -> Result<()> {

FILE: src/pastila.rs
  type ClickHouseSipHash (line 15) | pub struct ClickHouseSipHash {
    method new (line 26) | pub fn new() -> Self {
    method sipround (line 39) | fn sipround(&mut self) {
    method write (line 59) | pub fn write(&mut self, data: &[u8]) {
    method finish128 (line 78) | pub fn finish128(mut self) -> u128 {
  function calculate_hash (line 100) | pub fn calculate_hash(text: &str) -> String {
  function get_fingerprint (line 107) | pub fn get_fingerprint(text: &str) -> String {
  function encrypt_content (line 133) | fn encrypt_content(content: &str, key: &[u8; 16]) -> Result<String> {
  function get_pastila_client (line 144) | async fn get_pastila_client(pastila_clickhouse_host: &str) -> Result<cli...
  function upload_encrypted (line 175) | pub async fn upload_encrypted(

FILE: src/utils.rs
  type TerminalRawModeGuard (line 26) | pub struct TerminalRawModeGuard {
    method leave (line 33) | pub fn leave() -> Self {
    method do_restore (line 47) | fn do_restore() -> std::io::Result<()> {
    method restore (line 57) | pub fn restore(&mut self) -> std::io::Result<()> {
  method drop (line 64) | fn drop(&mut self) {
  function fuzzy_actions (line 71) | pub fn fuzzy_actions<F>(siv: &mut Cursive, actions: Vec<ActionDescriptio...
  function fuzzy_select_strings (line 85) | pub fn fuzzy_select_strings<F>(
  function highlight_sql (line 206) | pub fn highlight_sql(text: &str) -> Result<StyledString> {
  function get_query (line 220) | pub fn get_query(query: &str, settings: &HashMap<String, String>) -> Str...
  function edit_query (line 250) | pub fn edit_query(query: &str, settings: &HashMap<String, String>) -> Re...
  function open_url_command (line 282) | pub fn open_url_command(url: &str) -> Command {
  function share_graph (line 301) | pub async fn share_graph(
  function find_common_hostname_prefix_and_suffix (line 355) | pub fn find_common_hostname_prefix_and_suffix<'a, I>(hostnames: I) -> (S...

FILE: src/view/log_view.rs
  function hash_to_color (line 26) | fn hash_to_color(hash: u64) -> Color {
  function get_level_color (line 47) | fn get_level_color(level: &str) -> Color {
  function int_hash_64 (line 72) | fn int_hash_64(value: u64) -> u64 {
  function string_hash (line 78) | fn string_hash(s: &str) -> u64 {
  type LogEntry (line 85) | pub struct LogEntry {
    method to_styled_string (line 104) | fn to_styled_string(&self, cluster: bool) -> StyledString {
    method to_styled_string_with_identifiers (line 108) | fn to_styled_string_with_identifiers(
  type IdentifierMaps (line 96) | struct IdentifierMaps {
  type FilterType (line 201) | enum FilterType {
  type LogViewBase (line 208) | pub struct LogViewBase {
    method get_visible_log (line 280) | fn get_visible_log(&self, visible_idx: usize) -> Option<&LogEntry> {
    method visible_log_count (line 291) | fn visible_log_count(&self) -> usize {
    method get_identifier_maps (line 300) | fn get_identifier_maps(&self) -> Option<IdentifierMaps> {
    method display_row_to_log (line 338) | fn display_row_to_log(&self, display_row: usize) -> Option<(usize, usi...
    method log_to_display_row (line 365) | fn log_to_display_row(&self, log_idx: usize) -> usize {
    method extract_identifiers (line 376) | fn extract_identifiers(&mut self) {
    method rebuild_content_with_highlights (line 430) | fn rebuild_content_with_highlights(&mut self) {
    method rebuild_content_normal (line 436) | fn rebuild_content_normal(&mut self) {
    method apply_filter (line 442) | fn apply_filter(&mut self) {
    method search_in_direction (line 463) | fn search_in_direction(&mut self, forward: bool) -> bool {
    method search_log (line 499) | fn search_log(
    method search_row (line 547) | fn search_row(
    method update_search_forward (line 576) | fn update_search_forward(&mut self) -> bool {
    method update_search_reverse (line 580) | fn update_search_reverse(&mut self) -> bool {
    method update_search (line 584) | fn update_search(&mut self) -> bool {
    method set_options (line 598) | fn set_options(&mut self, options: &str) -> Result<()> {
    method push_logs (line 609) | fn push_logs(&mut self, mut logs: Vec<LogEntry>) {
    method compute_rows (line 693) | fn compute_rows(&mut self) {
    method rows_are_valid (line 780) | fn rows_are_valid(&mut self, size: Vec2) -> bool {
    method layout_content (line 790) | fn layout_content(&mut self, size: Vec2) {
    method inner_required_size (line 807) | fn inner_required_size(&mut self, mut req: Vec2) -> Vec2 {
    method draw_content (line 816) | fn draw_content(&self, printer: &Printer<'_, '_>) {
    method write_plain_text (line 903) | fn write_plain_text<W: Write>(&self, writer: &mut W) -> Result<()> {
  method default (line 247) | fn default() -> Self {
  function show_filtered_logs_popup (line 923) | fn show_filtered_logs_popup(siv: &mut Cursive) {
  type LogView (line 1042) | pub struct LogView {
    method new (line 1047) | pub fn new(
    method push_logs (line 1367) | pub fn push_logs(&mut self, logs: Vec<LogEntry>) {
  method draw (line 1373) | fn draw(&self, printer: &Printer<'_, '_>) {
  method layout (line 1377) | fn layout(&mut self, size: Vec2) {
  method wrap_required_size (line 1412) | fn wrap_required_size(&mut self, mut req: Vec2) -> Vec2 {
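The `string_hash` / `int_hash_64` / `hash_to_color` trio above implements a familiar log-view trick: hash an identifier (query id, thread id, logger name) deterministically and map it onto a small palette so the same identifier always renders in the same color. A sketch of the idea using FNV-1a (the specific hash and palette are assumptions; chdig's functions may use different constants):

```rust
// Deterministic 64-bit FNV-1a hash of a string (stand-in for `string_hash`).
fn fnv1a_64(s: &str) -> u64 {
    let mut hash: u64 = 0xcbf29ce484222325;
    for byte in s.bytes() {
        hash ^= byte as u64;
        hash = hash.wrapping_mul(0x100000001b3);
    }
    hash
}

// Illustrative palette; the real `hash_to_color` returns cursive Colors.
const PALETTE: &[&str] = &["red", "green", "yellow", "blue", "magenta", "cyan"];

// Map an identifier onto the palette (stand-in for `hash_to_color`).
fn color_for(id: &str) -> &'static str {
    PALETTE[(fnv1a_64(id) % PALETTE.len() as u64) as usize]
}

fn main() {
    // Same identifier always maps to the same color across renders.
    assert_eq!(color_for("query-42"), color_for("query-42"));
    println!("query-42 -> {}", color_for("query-42"));
}
```

Determinism is the property that matters: colors stay stable as log lines stream in, without storing any per-identifier state.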

FILE: src/view/navigation.rs
  function toggle_debug_metrics (line 19) | fn toggle_debug_metrics(siv: &mut Cursive) {
  function make_menu_text (line 32) | fn make_menu_text() -> StyledString {
  type Navigation (line 51) | pub trait Navigation {
    method has_view (line 52) | fn has_view(&mut self, name: &str) -> bool;
    method make_theme_from_therminal (line 54) | fn make_theme_from_therminal(&mut self) -> Theme;
    method pop_ui (line 55) | fn pop_ui(&mut self, exit: bool);
    method toggle_pause_updates (line 56) | fn toggle_pause_updates(&mut self, reason: Option<&str>);
    method refresh_view (line 57) | fn refresh_view(&mut self);
    method seek_time_frame (line 58) | fn seek_time_frame(&mut self, is_sub: bool);
    method select_time_frame (line 59) | fn select_time_frame(&mut self);
    method initialize_global_shortcuts (line 61) | fn initialize_global_shortcuts(&mut self, context: ContextArc);
    method initialize_views_menu (line 62) | fn initialize_views_menu(&mut self, context: ContextArc);
    method chdig (line 63) | fn chdig(&mut self, context: ContextArc);
    method show_help_dialog (line 65) | fn show_help_dialog(&mut self);
    method show_settings_dialog (line 66) | fn show_settings_dialog(&mut self);
    method show_views (line 67) | fn show_views(&mut self);
    method show_actions (line 68) | fn show_actions(&mut self);
    method show_fuzzy_actions (line 69) | fn show_fuzzy_actions(&mut self);
    method show_server_flamegraph (line 70) | fn show_server_flamegraph(&mut self, tui: bool, trace_type: Option<Tra...
    method show_jemalloc_flamegraph (line 71) | fn show_jemalloc_flamegraph(&mut self, tui: bool);
    method show_server_perfetto (line 72) | fn show_server_perfetto(&mut self);
    method show_connection_dialog (line 73) | fn show_connection_dialog(&mut self);
    method drop_main_view (line 75) | fn drop_main_view(&mut self);
    method set_main_view (line 76) | fn set_main_view<V: IntoBoxedView + 'static>(&mut self, view: V);
    method set_statusbar_version (line 78) | fn set_statusbar_version(&mut self, main_content: impl Into<SpannedStr...
    method set_statusbar_content (line 79) | fn set_statusbar_content(&mut self, content: impl Into<SpannedString<S...
    method set_statusbar_connection (line 80) | fn set_statusbar_connection(&mut self, content: impl Into<SpannedStrin...
    method set_statusbar_debug (line 81) | fn set_statusbar_debug(&mut self, content: impl Into<SpannedString<Sty...
    method call_on_name_or_render_error (line 84) | fn call_on_name_or_render_error<V, F>(&mut self, name: &str, callback: F)
    method has_view (line 91) | fn has_view(&mut self, name: &str) -> bool {
    method make_theme_from_therminal (line 95) | fn make_theme_from_therminal(&mut self) -> Theme {
    method pop_ui (line 106) | fn pop_ui(&mut self, exit: bool) {
    method toggle_pause_updates (line 132) | fn toggle_pause_updates(&mut self, reason: Option<&str>) {
    method refresh_view (line 155) | fn refresh_view(&mut self) {
    method seek_time_frame (line 161) | fn seek_time_frame(&mut self, is_sub: bool) {
    method select_time_frame (line 167) | fn select_time_frame(&mut self) {
    method chdig (line 219) | fn chdig(&mut self, context: ContextArc) {
    method initialize_global_shortcuts (line 274) | fn initialize_global_shortcuts(&mut self, context: ContextArc) {
    method initialize_views_menu (line 330) | fn initialize_views_menu(&mut self, context: ContextArc) {
    method show_help_dialog (line 360) | fn show_help_dialog(&mut self) {
    method show_settings_dialog (line 401) | fn show_settings_dialog(&mut self) {
    method show_views (line 405) | fn show_views(&mut self) {
    method show_actions (line 474) | fn show_actions(&mut self) {
    method show_fuzzy_actions (line 546) | fn show_fuzzy_actions(&mut self) {
    method show_server_flamegraph (line 613) | fn show_server_flamegraph(&mut self, tui: bool, trace_type: Option<Tra...
    method show_jemalloc_flamegraph (line 629) | fn show_jemalloc_flamegraph(&mut self, tui: bool) {
    method show_server_perfetto (line 636) | fn show_server_perfetto(&mut self) {
    method show_connection_dialog (line 709) | fn show_connection_dialog(&mut self) {
    method drop_main_view (line 796) | fn drop_main_view(&mut self) {
    method set_main_view (line 813) | fn set_main_view<V: IntoBoxedView + 'static>(&mut self, view: V) {
    method set_statusbar_version (line 819) | fn set_statusbar_version(&mut self, main_content: impl Into<SpannedStr...
    method set_statusbar_content (line 830) | fn set_statusbar_content(&mut self, content: impl Into<SpannedString<S...
    method set_statusbar_connection (line 837) | fn set_statusbar_connection(&mut self, content: impl Into<SpannedStrin...
    method set_statusbar_debug (line 844) | fn set_statusbar_debug(&mut self, content: impl Into<SpannedString<Sty...
    method call_on_name_or_render_error (line 861) | fn call_on_name_or_render_error<V, F>(&mut self, name: &str, callback: F)

FILE: src/view/provider.rs
  type ViewProvider (line 6) | pub trait ViewProvider: Send + Sync {
    method name (line 8) | fn name(&self) -> &'static str;
    method view_type (line 11) | fn view_type(&self) -> ChDigViews;
    method show (line 14) | fn show(&self, siv: &mut Cursive, context: ContextArc);
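The three-method `ViewProvider` trait above, together with the many `*ViewProvider` structs in the files that follow, is a classic trait-object registry: each view implements the trait and the navigation layer dispatches through boxed trait objects. A stubbed sketch of that shape (`Cursive` and `ContextArc` are omitted here; provider names are illustrative):

```rust
// Simplified `ViewProvider` trait; the real `show` takes
// `&mut Cursive` and `ContextArc`.
trait ViewProvider {
    fn name(&self) -> &'static str;
    fn show(&self);
}

struct MergesViewProvider;
impl ViewProvider for MergesViewProvider {
    fn name(&self) -> &'static str { "merges" }
    fn show(&self) { println!("rendering merges view"); }
}

struct ReplicasViewProvider;
impl ViewProvider for ReplicasViewProvider {
    fn name(&self) -> &'static str { "replicas" }
    fn show(&self) { println!("rendering replicas view"); }
}

// Look a provider up by name in a registry of boxed trait objects.
fn find_provider<'a>(
    providers: &'a [Box<dyn ViewProvider>],
    name: &str,
) -> Option<&'a dyn ViewProvider> {
    providers.iter().find(|p| p.name() == name).map(|b| b.as_ref())
}

fn main() {
    let providers: Vec<Box<dyn ViewProvider>> =
        vec![Box::new(MergesViewProvider), Box::new(ReplicasViewProvider)];
    let p = find_provider(&providers, "replicas").expect("registered");
    assert_eq!(p.name(), "replicas");
    p.show();
}
```

The `Send + Sync` bound on the real trait lets the registry be shared across the UI and worker threads.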

FILE: src/view/providers/asynchronous_inserts.rs
  type AsynchronousInsertsViewProvider (line 12) | pub struct AsynchronousInsertsViewProvider;
  method name (line 15) | fn name(&self) -> &'static str {
  method view_type (line 19) | fn view_type(&self) -> ChDigViews {
  method show (line 23) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  function build_query (line 28) | fn build_query(
  function get_columns (line 87) | fn get_columns(is_dialog: bool) -> (Vec<&'static str>, Vec<&'static str>) {
  function show_insert_details (line 104) | fn show_insert_details(siv: &mut Cursive, columns: Vec<&'static str>, ro...
  function show_asynchronous_inserts (line 123) | pub fn show_asynchronous_inserts(
  function show_asynchronous_inserts_dialog (line 164) | pub fn show_asynchronous_inserts_dialog(

FILE: src/view/providers/background_schedule_pool.rs
  type BackgroundSchedulePoolViewProvider (line 15) | pub struct BackgroundSchedulePoolViewProvider;
  method name (line 18) | fn name(&self) -> &'static str {
  method view_type (line 22) | fn view_type(&self) -> ChDigViews {
  method show (line 26) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  function show_background_schedule_pool_actions (line 101) | fn show_background_schedule_pool_actions(
  function show_tasks_logs (line 131) | fn show_tasks_logs(siv: &mut Cursive, columns: Vec<&'static str>, row: v...
  function show_tasks_summary (line 164) | fn show_tasks_summary(siv: &mut Cursive, columns: Vec<&'static str>, row...
  function show_background_schedule_pool_dialog (line 183) | pub fn show_background_schedule_pool_dialog(

FILE: src/view/providers/background_schedule_pool_log.rs
  type BackgroundSchedulePoolLogViewProvider (line 12) | pub struct BackgroundSchedulePoolLogViewProvider;
  method name (line 15) | fn name(&self) -> &'static str {
  method view_type (line 19) | fn view_type(&self) -> ChDigViews {
  method show (line 23) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  type FilterParams (line 28) | struct FilterParams {
    method build_where_clauses (line 35) | fn build_where_clauses(&self) -> Vec<String> {
    method build_title (line 54) | fn build_title(&self, for_dialog: bool) -> String {
    method generate_view_name (line 88) | fn generate_view_name(&self) -> String {
  function build_query (line 98) | fn build_query(context: &ContextArc, filters: &FilterParams) -> String {
  function get_columns (line 145) | fn get_columns() -> (Vec<&'static str>, Vec<&'static str>) {
  function show_task_logs (line 160) | fn show_task_logs(siv: &mut Cursive, columns: Vec<&'static str>, row: vi...
  function show_background_schedule_pool_log (line 208) | pub fn show_background_schedule_pool_log(
  function show_background_schedule_pool_log_dialog (line 249) | pub fn show_background_schedule_pool_log_dialog(

FILE: src/view/providers/backups.rs
  type BackupsViewProvider (line 13) | pub struct BackupsViewProvider;
  method name (line 16) | fn name(&self) -> &'static str {
  method view_type (line 20) | fn view_type(&self) -> ChDigViews {
  method show (line 24) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/client.rs
  function parse_duration_us (line 14) | fn parse_duration_us(s: &str) -> Option<u64> {
  type ClientViewProvider (line 24) | pub struct ClientViewProvider;
    method spawn_and_wait (line 28) | fn spawn_and_wait(cmd: &mut Command) -> std::io::Result<std::process::...
    method spawn_and_wait (line 50) | fn spawn_and_wait(cmd: &mut Command) -> std::io::Result<std::process::...
  method name (line 56) | fn name(&self) -> &'static str {
  method view_type (line 60) | fn view_type(&self) -> ChDigViews {
  method show (line 64) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/dictionaries.rs
  type DictionariesViewProvider (line 8) | pub struct DictionariesViewProvider;
  method name (line 11) | fn name(&self) -> &'static str {
  method view_type (line 15) | fn view_type(&self) -> ChDigViews {
  method show (line 19) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/errors.rs
  type ErrorsViewProvider (line 14) | pub struct ErrorsViewProvider;
  method name (line 17) | fn name(&self) -> &'static str {
  method view_type (line 21) | fn view_type(&self) -> ChDigViews {
  method show (line 25) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/logger_names.rs
  type LoggerNamesViewProvider (line 13) | pub struct LoggerNamesViewProvider;
  method name (line 16) | fn name(&self) -> &'static str {
  method view_type (line 20) | fn view_type(&self) -> ChDigViews {
  method show (line 24) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/merges.rs
  type MergesViewProvider (line 12) | pub struct MergesViewProvider;
  method name (line 15) | fn name(&self) -> &'static str {
  method view_type (line 19) | fn view_type(&self) -> ChDigViews {
  method show (line 23) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  function get_columns (line 28) | fn get_columns(is_dialog: bool) -> Vec<&'static str> {
  function build_query (line 62) | fn build_query(
  function get_merges_logs_callback (line 101) | fn get_merges_logs_callback()
  function show_merges (line 139) | fn show_merges(
  function show_merges_dialog (line 176) | pub fn show_merges_dialog(

FILE: src/view/providers/mod.rs
  type TableFilterParams (line 53) | pub struct TableFilterParams {
    method new (line 63) | pub fn new(
    method with_table_prefix (line 79) | pub fn with_table_prefix(mut self, prefix: &'static str) -> Self {
    method build_where_clauses (line 84) | pub fn build_where_clauses(&self) -> Vec<String> {
    method build_title (line 105) | pub fn build_title(&self, for_dialog: bool) -> String {
    method generate_view_name (line 132) | pub fn generate_view_name(&self) -> String {
  function is_valid_identifier_begin (line 142) | fn is_valid_identifier_begin(c: char) -> bool {
  function is_word_char_ascii (line 146) | fn is_word_char_ascii(c: char) -> bool {
  function is_valid_identifier (line 150) | fn is_valid_identifier(s: &str) -> bool {
  function backquote_if_needed (line 173) | fn backquote_if_needed(s: &str) -> String {
  function escape_for_like (line 185) | fn escape_for_like(s: &str) -> String {
  function query_result_show_logs_for_row (line 191) | pub fn query_result_show_logs_for_row(
  type ClickHouseSettingValue (line 243) | pub trait ClickHouseSettingValue {
    method format_for_query (line 244) | fn format_for_query(&self) -> String;
    method format_for_query (line 248) | fn format_for_query(&self) -> String {
    method format_for_query (line 254) | fn format_for_query(&self) -> String {
    method format_for_query (line 260) | fn format_for_query(&self) -> String {
    method format_for_query (line 266) | fn format_for_query(&self) -> String {
    method format_for_query (line 272) | fn format_for_query(&self) -> String {
    method format_for_query (line 278) | fn format_for_query(&self) -> String {
  type RenderFromClickHouseQueryArguments (line 283) | pub struct RenderFromClickHouseQueryArguments<F, T> {
  function render_from_clickhouse_query (line 295) | pub fn render_from_clickhouse_query<F, T>(
  function query_result_show_row (line 391) | pub fn query_result_show_row(siv: &mut Cursive, columns: Vec<&'static st...
  function test_backquote_if_needed_valid_identifiers (line 409) | fn test_backquote_if_needed_valid_identifiers() {
  function test_backquote_if_needed_reserved_keywords (line 419) | fn test_backquote_if_needed_reserved_keywords() {
  function test_backquote_if_needed_special_characters (line 433) | fn test_backquote_if_needed_special_characters() {
  function test_backquote_if_needed_backtick_escaping (line 444) | fn test_backquote_if_needed_backtick_escaping() {
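The `is_valid_identifier` / `backquote_if_needed` helpers above (and their tests) imply ClickHouse-style identifier quoting: leave plain `[A-Za-z_][A-Za-z0-9_]*` names alone, otherwise wrap in backticks and double any embedded backtick. A sketch consistent with that behavior (chdig's real rules may additionally quote reserved keywords, per `test_backquote_if_needed_reserved_keywords`):

```rust
// True for plain identifiers that need no quoting.
fn is_valid_identifier(s: &str) -> bool {
    let mut chars = s.chars();
    match chars.next() {
        Some(c) if c.is_ascii_alphabetic() || c == '_' => {}
        _ => return false,
    }
    chars.all(|c| c.is_ascii_alphanumeric() || c == '_')
}

// Backquote an identifier for safe interpolation into SQL,
// doubling embedded backticks.
fn backquote_if_needed(s: &str) -> String {
    if is_valid_identifier(s) {
        s.to_string()
    } else {
        format!("`{}`", s.replace('`', "``"))
    }
}

fn main() {
    assert_eq!(backquote_if_needed("events"), "events");
    assert_eq!(backquote_if_needed("1table"), "`1table`");
    assert_eq!(backquote_if_needed("we`ird"), "`we``ird`");
    println!("ok");
}
```

Quoting at the boundary like this is what lets the provider views build `WHERE database = ...` clauses from arbitrary user-visible table names.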

FILE: src/view/providers/mutations.rs
  type MutationsViewProvider (line 11) | pub struct MutationsViewProvider;
  method name (line 14) | fn name(&self) -> &'static str {
  method view_type (line 18) | fn view_type(&self) -> ChDigViews {
  method show (line 22) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  function get_columns (line 27) | fn get_columns(is_dialog: bool) -> Vec<&'static str> {
  function build_query (line 53) | fn build_query(
  function show_mutations (line 84) | fn show_mutations(
  function show_mutations_dialog (line 123) | pub fn show_mutations_dialog(

FILE: src/view/providers/object_storage_queue.rs
  function show_queue (line 8) | fn show_queue(siv: &mut Cursive, context: ContextArc, table: &'static [&...
  type S3QueueViewProvider (line 34) | pub struct S3QueueViewProvider;
  method name (line 37) | fn name(&self) -> &'static str {
  method view_type (line 41) | fn view_type(&self) -> ChDigViews {
  method show (line 45) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  type AzureQueueViewProvider (line 50) | pub struct AzureQueueViewProvider;
  method name (line 53) | fn name(&self) -> &'static str {
  method view_type (line 57) | fn view_type(&self) -> ChDigViews {
  method show (line 61) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/part_log.rs
  type PartLogViewProvider (line 16) | pub struct PartLogViewProvider;
  method name (line 19) | fn name(&self) -> &'static str {
  method view_type (line 23) | fn view_type(&self) -> ChDigViews {
  method show (line 27) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  type FilterParams (line 32) | struct FilterParams {
    method build_where_clauses (line 39) | fn build_where_clauses(&self) -> Vec<String> {
    method build_title (line 60) | fn build_title(&self, for_dialog: bool) -> String {
    method generate_view_name (line 87) | fn generate_view_name(&self) -> String {
  function build_query (line 97) | fn build_query(context: &ContextArc, filters: &FilterParams, is_dialog: ...
  function get_columns (line 173) | fn get_columns(is_dialog: bool) -> (Vec<&'static str>, Vec<&'static str>) {
  function show_part_logs (line 209) | fn show_part_logs(siv: &mut Cursive, columns: Vec<&'static str>, row: vi...
  function show_part_details (line 244) | fn show_part_details(siv: &mut Cursive, columns: Vec<&'static str>, row:...
  function part_log_action_callback (line 263) | fn part_log_action_callback(
  function show_part_log (line 293) | pub fn show_part_log(
  function show_part_log_dialog (line 334) | pub fn show_part_log_dialog(

FILE: src/view/providers/queries.rs
  type ProcessesViewProvider (line 10) | pub struct ProcessesViewProvider;
  method name (line 13) | fn name(&self) -> &'static str {
  method view_type (line 17) | fn view_type(&self) -> ChDigViews {
  method show (line 21) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  type SlowQueryLogViewProvider (line 41) | pub struct SlowQueryLogViewProvider;
  method name (line 44) | fn name(&self) -> &'static str {
  method view_type (line 48) | fn view_type(&self) -> ChDigViews {
  method show (line 52) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  type LastQueryLogViewProvider (line 72) | pub struct LastQueryLogViewProvider;
  method name (line 75) | fn name(&self) -> &'static str {
  method view_type (line 79) | fn view_type(&self) -> ChDigViews {
  method show (line 83) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/replicas.rs
  type ReplicasViewProvider (line 10) | pub struct ReplicasViewProvider;
  method name (line 13) | fn name(&self) -> &'static str {
  method view_type (line 17) | fn view_type(&self) -> ChDigViews {
  method show (line 21) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/replicated_fetches.rs
  type ReplicatedFetchesViewProvider (line 8) | pub struct ReplicatedFetchesViewProvider;
  method name (line 11) | fn name(&self) -> &'static str {
  method view_type (line 15) | fn view_type(&self) -> ChDigViews {
  method show (line 19) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/replication_queue.rs
  type ReplicationQueueViewProvider (line 8) | pub struct ReplicationQueueViewProvider;
  method name (line 11) | fn name(&self) -> &'static str {
  method view_type (line 15) | fn view_type(&self) -> ChDigViews {
  method show (line 19) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/server_logs.rs
  type ServerLogsViewProvider (line 12) | pub struct ServerLogsViewProvider;
  method name (line 15) | fn name(&self) -> &'static str {
  method view_type (line 19) | fn view_type(&self) -> ChDigViews {
  method show (line 23) | fn show(&self, siv: &mut Cursive, context: ContextArc) {

FILE: src/view/providers/table_parts.rs
  type TablePartsViewProvider (line 16) | pub struct TablePartsViewProvider;
  method name (line 19) | fn name(&self) -> &'static str {
  method view_type (line 23) | fn view_type(&self) -> ChDigViews {
  method show (line 27) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  function build_query (line 32) | fn build_query(
  function get_columns (line 105) | fn get_columns(is_dialog: bool) -> (Vec<&'static str>, Vec<&'static str>) {
  function show_part_logs (line 137) | fn show_part_logs(siv: &mut Cursive, columns: Vec<&'static str>, row: vi...
  function show_part_details (line 172) | fn show_part_details(siv: &mut Cursive, columns: Vec<&'static str>, row:...
  function table_parts_action_callback (line 191) | fn table_parts_action_callback(
  function show_table_parts (line 221) | pub fn show_table_parts(
  function show_table_parts_dialog (line 258) | pub fn show_table_parts_dialog(

FILE: src/view/providers/tables.rs
  type TablesViewProvider (line 14) | pub struct TablesViewProvider;
  method name (line 17) | fn name(&self) -> &'static str {
  method view_type (line 21) | fn view_type(&self) -> ChDigViews {
  method show (line 25) | fn show(&self, siv: &mut Cursive, context: ContextArc) {
  function show_table_actions (line 129) | fn show_table_actions(
  function show_create_table (line 219) | fn show_create_table(siv: &mut Cursive, columns: Vec<&'static str>, row:...
  function show_table_logs (line 241) | fn show_table_logs(
  function show_table_background_tasks (line 250) | fn show_table_background_tasks(
  function show_table_background_tasks_logs (line 272) | fn show_table_background_tasks_logs(
  function show_table_parts (line 294) | fn show_table_parts(siv: &mut Cursive, columns: Vec<&'static str>, row: ...
  function show_table_asynchronous_inserts (line 316) | fn show_table_asynchronous_inserts(
  function show_table_merges (line 342) | fn show_table_merges(siv: &mut Cursive, columns: Vec<&'static str>, row:...
  function show_table_mutations (line 358) | fn show_table_mutations(siv: &mut Cursive, columns: Vec<&'static str>, r...
  function show_table_part_log (line 374) | fn show_table_part_log(siv: &mut Cursive, columns: Vec<&'static str>, ro...

FILE: src/view/queries_view.rs
  constant QUERY_TIME_DRIFT_BUFFER_SECONDS (line 36) | const QUERY_TIME_DRIFT_BUFFER_SECONDS: i64 = 1;
  type QueryKey (line 39) | type QueryKey = (String, String);
  function query_key (line 41) | fn query_key(q: &Query) -> QueryKey {
  function queries_count_subqueries (line 45) | fn queries_count_subqueries(queries: &mut HashMap<QueryKey, Query>) {
  function sum_map (line 57) | fn sum_map<K, V>(m1: &HashMap<K, V>, m2: &HashMap<K, V>) -> HashMap<K, V>
  function queries_sum_profile_events (line 73) | fn queries_sum_profile_events(queries: &mut HashMap<QueryKey, Query>) {
  type QueriesColumn (line 94) | pub enum QueriesColumn {
  method eq (line 114) | fn eq(&self, other: &Self) -> bool {
  method to_column (line 120) | fn to_column(&self, column: QueriesColumn) -> String {
  method cmp (line 174) | fn cmp(&self, other: &Self, column: QueriesColumn) -> Ordering
  method to_column_styled (line 201) | fn to_column_styled(&self, column: QueriesColumn) -> StyledString {
  type QueriesView (line 212) | pub struct QueriesView {
    method update (line 245) | pub fn update(&mut self, processes: Columns) -> Result<()> {
    method update_view (line 279) | fn update_view(&mut self) {
    method show_flamegraph (line 347) | fn show_flamegraph(&mut self, tui: bool, trace_type: Option<TraceType>...
    method show_flamegraph_diff (line 371) | fn show_flamegraph_diff(&mut self, trace_type: TraceType) -> Result<()> {
    method get_selected_query (line 398) | fn get_selected_query(&self) -> Result<Query> {
    method get_query_ids (line 407) | fn get_query_ids(&self) -> Result<(Vec<String>, DateTime<Local>, Optio...
    method get_query_id_groups (line 475) | fn get_query_id_groups(
    method update_limit (line 528) | pub fn update_limit(&mut self, is_sub: bool) {
    method action_show_query_logs (line 538) | fn action_show_query_logs(&mut self) -> Result<Option<EventResult>> {
    method action_show_flamegraph (line 574) | fn action_show_flamegraph(
    method action_show_flamegraph_diff (line 583) | fn action_show_flamegraph_diff(
    method action_query_profile_events (line 591) | fn action_query_profile_events(&mut self) -> Result<Option<EventResult...
    method action_query_details (line 636) | fn action_query_details(&mut self) -> Result<Option<EventResult>> {
    method action_edit_query_and_execute (line 645) | fn action_edit_query_and_execute(&mut self) -> Result<Option<EventResu...
    method action_show_query (line 662) | fn action_show_query(&mut self) -> Result<Option<EventResult>> {
    method action_copy_query (line 688) | fn action_copy_query(&mut self) -> Result<Option<EventResult>> {
    method action_explain_syntax (line 718) | fn action_explain_syntax(&mut self) -> Result<Option<EventResult>> {
    method action_explain_plan (line 730) | fn action_explain_plan(&mut self) -> Result<Option<EventResult>> {
    method action_explain_pipeline (line 741) | fn action_explain_pipeline(&mut self) -> Result<Option<EventResult>> {
    method action_select (line 752) | fn action_select(&mut self) -> Result<Option<EventResult>> {
    method action_show_all_queries (line 766) | fn action_show_all_queries(&mut self) -> Result<Option<EventResult>> {
    method action_show_queries_on_shards (line 772) | fn action_show_queries_on_shards(&mut self) -> Result<Option<EventResu...
    method action_explain_indexes (line 782) | fn action_explain_indexes(&mut self) -> Result<Option<EventResult>> {
    method action_explain_pipeline_graph (line 793) | fn action_explain_pipeline_graph(&mut self) -> Result<Option<EventResu...
    method action_kill_query (line 805) | fn action_kill_query(&mut self) -> Result<Option<EventResult>> {
    method action_export_perfetto (line 837) | fn action_export_perfetto(&mut self) -> Result<Option<EventResult>> {
    method action_increase_limit (line 862) | fn action_increase_limit(&mut self) -> Result<Option<EventResult>> {
    method action_decrease_limit (line 868) | fn action_decrease_limit(&mut self) -> Result<Option<EventResult>> {
    method action_query_processors (line 874) | fn action_query_processors(&mut self) -> Result<Option<EventResult>> {
    method action_query_views (line 954) | fn action_query_views(&mut self) -> Result<Option<EventResult>> {
    method new (line 1026) | pub fn new(
  type Type (line 236) | pub enum Type {
  method drop (line 1279) | fn drop(&mut self) {
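The `queries_view.rs` helpers above aggregate per-query ProfileEvents across subqueries; `sum_map` (line 57) merges two maps by summing the values of shared keys, and `queries_sum_profile_events` applies it per query group. A minimal sketch of that merge, assuming generic bounds of `Add + Copy + Default` (the actual bounds in the file may differ):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::ops::Add;

// Merge two maps, summing values that share a key.
// A sketch of what sum_map in queries_view.rs likely does; the real
// trait bounds are assumptions here.
fn sum_map<K, V>(m1: &HashMap<K, V>, m2: &HashMap<K, V>) -> HashMap<K, V>
where
    K: Hash + Eq + Clone,
    V: Add<Output = V> + Copy + Default,
{
    // Start from a copy of the first map, then fold the second in.
    let mut result = m1.clone();
    for (k, v) in m2 {
        let entry = result.entry(k.clone()).or_default();
        *entry = *entry + *v;
    }
    result
}
```

With ProfileEvents-style data this would sum counters like `SelectedRows` across a query and its subqueries while keeping keys present in only one map.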

FILE: src/view/query_view.rs
  type QueryDetailsColumn (line 16) | pub enum QueryDetailsColumn {
  type QueryProcessDetails (line 24) | pub struct QueryProcessDetails {
    method eq (line 35) | fn eq(&self, other: &Self) -> bool {
    method format_value (line 46) | fn format_value(&self, value: u64) -> String {
    method format_rate (line 69) | fn format_rate(&self, rate: f64) -> String {
    method to_column (line 94) | fn to_column(&self, column: QueryDetailsColumn) -> String {
    method cmp (line 109) | fn cmp(&self, other: &Self, column: QueryDetailsColumn) -> Ordering
    method to_column_styled (line 125) | fn to_column_styled(&self, column: QueryDetailsColumn) -> StyledString {
  type QueryView (line 169) | pub struct QueryView {
    method apply_filter (line 176) | fn apply_filter(&mut self) {
    method new (line 193) | pub fn new(query: Query, view_name: &'static str) -> NamedView<OnEvent...
    method new_diff (line 197) | pub fn new_diff(queries: Vec<Query>, view_name: &'static str) -> Named...
    method new_internal (line 201) | fn new_internal(queries: Vec<Query>, view_name: &'static str) -> Named...

FILE: src/view/registry.rs
  type ViewRegistry (line 5) | pub struct ViewRegistry {
    method new (line 10) | pub fn new() -> Self {
    method register (line 16) | pub fn register(&mut self, provider: Arc<dyn ViewProvider>) {
    method get (line 21) | pub fn get(&self, name: &str) -> Arc<dyn ViewProvider> {
    method get_by_view_type (line 29) | pub fn get_by_view_type(&self, view_type: ChDigViews) -> Arc<dyn ViewP...
  method default (line 39) | fn default() -> Self {
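`registry.rs` keys view providers by name and also resolves them by `ChDigViews` variant. A condensed sketch of the name-keyed half, with a hypothetical `QueriesProvider` stand-in for illustration; since `get` returns `Arc<dyn ViewProvider>` rather than an `Option`, a missing name presumably panics (an assumption):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Minimal stand-in for the real ViewProvider trait from provider.rs;
// only the name() accessor is reproduced here.
trait ViewProvider {
    fn name(&self) -> &'static str;
}

// Hypothetical provider, for illustration only.
struct QueriesProvider;
impl ViewProvider for QueriesProvider {
    fn name(&self) -> &'static str {
        "queries"
    }
}

// A sketch of ViewRegistry: providers keyed by their own name().
#[derive(Default)]
struct ViewRegistry {
    providers: HashMap<&'static str, Arc<dyn ViewProvider>>,
}

impl ViewRegistry {
    fn register(&mut self, provider: Arc<dyn ViewProvider>) {
        self.providers.insert(provider.name(), provider);
    }

    fn get(&self, name: &str) -> Arc<dyn ViewProvider> {
        self.providers
            .get(name)
            .unwrap_or_else(|| panic!("no view provider named {name}"))
            .clone()
    }
}
```

Keying on `provider.name()` means registration order is irrelevant and each provider self-describes its lookup key.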

FILE: src/view/search_history.rs
  type SearchHistory (line 5) | pub struct SearchHistory {
    method new (line 12) | pub fn new() -> Self {
    method add_entry (line 20) | pub fn add_entry(&self, entry: String) {
    method reset_index (line 35) | pub fn reset_index(&self) {
    method navigate_up (line 39) | pub fn navigate_up(&self, current_content: &str) -> Option<String> {
    method navigate_down (line 67) | pub fn navigate_down(&self) -> Option<String> {
  method default (line 89) | fn default() -> Self {
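Notably, `SearchHistory`'s mutating methods (`add_entry`, `navigate_up`, `navigate_down`) take `&self`, which implies interior mutability inside the struct. A sketch of shell-style history navigation under that assumption, using `Mutex` (the real fields, duplicate handling, and the use of `current_content` are assumptions):

```rust
use std::sync::Mutex;

// Shell-style search history; &self methods mutate via Mutex.
pub struct SearchHistory {
    entries: Mutex<Vec<String>>,
    // Cursor while navigating; None means "not navigating yet".
    index: Mutex<Option<usize>>,
}

impl SearchHistory {
    pub fn new() -> Self {
        Self {
            entries: Mutex::new(Vec::new()),
            index: Mutex::new(None),
        }
    }

    pub fn add_entry(&self, entry: String) {
        let mut entries = self.entries.lock().unwrap();
        // Skip consecutive duplicates, like shell history does.
        if entries.last() != Some(&entry) {
            entries.push(entry);
        }
    }

    pub fn reset_index(&self) {
        *self.index.lock().unwrap() = None;
    }

    // Step toward older entries; the real navigate_up also receives the
    // current edit-line content, unused in this sketch.
    pub fn navigate_up(&self, _current_content: &str) -> Option<String> {
        let entries = self.entries.lock().unwrap();
        let mut index = self.index.lock().unwrap();
        let next = match *index {
            None => entries.len().checked_sub(1)?,
            Some(0) => 0,
            Some(i) => i - 1,
        };
        *index = Some(next);
        entries.get(next).cloned()
    }

    // Step back toward newer entries; None once past the newest.
    pub fn navigate_down(&self) -> Option<String> {
        let entries = self.entries.lock().unwrap();
        let mut index = self.index.lock().unwrap();
        match *index {
            Some(i) if i + 1 < entries.len() => {
                *index = Some(i + 1);
                entries.get(i + 1).cloned()
            }
            _ => {
                *index = None;
                None
            }
        }
    }
}
```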

FILE: src/view/settings_view.rs
  function apply_settings (line 13) | fn apply_settings(siv: &mut Cursive, context: &ContextArc) {
  function show_settings_dialog (line 203) | pub fn show_settings_dialog(siv: &mut Cursive) {

FILE: src/view/sql_query_view.rs
  type Field (line 19) | pub enum Field {
    method as_datetime (line 37) | pub fn as_datetime(&self) -> Option<DateTime<Local>> {
    method fmt (line 47) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
  type Row (line 87) | pub struct Row(pub Vec<Field>, Vec<usize>);
    method eq (line 90) | fn eq(&self, other: &Self) -> bool {
    method to_column (line 101) | fn to_column(&self, column: u8) -> String {
    method cmp (line 105) | fn cmp(&self, other: &Self, column: u8) -> Ordering
  type RowCallback (line 116) | type RowCallback = Arc<dyn Fn(&mut Cursive, Vec<&'static str>, Row) + Se...
  type BarColumnConfig (line 119) | type BarColumnConfig = (&'static str, &'static str);
  constant BAR_WIDTH (line 121) | const BAR_WIDTH: usize = 10;
  constant BAR_FILLED (line 122) | const BAR_FILLED: char = '█';
  constant BAR_EMPTY (line 123) | const BAR_EMPTY: char = '░';
  function render_bar (line 125) | fn render_bar(value: f64, max: f64) -> String {
  function field_to_f64 (line 136) | fn field_to_f64(field: &Field) -> f64 {
  type SQLQueryView (line 152) | pub struct SQLQueryView {
    method set_title (line 171) | pub fn set_title<S: Into<String>>(&mut self, title: S) {
    method update (line 175) | pub fn update(&mut self, block: Columns) -> Result<()> {
    method apply_filter (line 219) | fn apply_filter(&mut self) {
    method set_bar_columns (line 241) | pub fn set_bar_columns(&mut self, configs: Vec<BarColumnConfig>) {
    method compute_bars (line 245) | fn compute_bars(&mut self) {
    method set_on_submit (line 274) | pub fn set_on_submit<F>(&mut self, cb: F)
    method new (line 281) | pub fn new(
  function parse_columns (line 395) | fn parse_columns(columns: &[&'static str]) -> Vec<&'static str> {
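`sql_query_view.rs` renders inline bar columns from `BAR_WIDTH`, `BAR_FILLED`, and `BAR_EMPTY` (lines 121-123). A sketch of `render_bar` built from those constants; the rounding mode and the `max == 0` fallback are assumptions:

```rust
// Constants copied from sql_query_view.rs (lines 121-123).
const BAR_WIDTH: usize = 10;
const BAR_FILLED: char = '█';
const BAR_EMPTY: char = '░';

// Render a fixed-width unicode bar proportional to value / max.
// A sketch; the real function may clamp or round differently.
fn render_bar(value: f64, max: f64) -> String {
    let ratio = if max > 0.0 {
        (value / max).clamp(0.0, 1.0)
    } else {
        0.0
    };
    let filled = (ratio * BAR_WIDTH as f64).round() as usize;
    let mut bar = String::new();
    bar.extend(std::iter::repeat(BAR_FILLED).take(filled));
    bar.extend(std::iter::repeat(BAR_EMPTY).take(BAR_WIDTH - filled));
    bar
}
```

`compute_bars` would then call something like this per row, with `field_to_f64` coercing each `Field` value and the column maximum as `max`.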

FILE: src/view/summary_view.rs
  constant SPARKLINE_CAPACITY (line 20) | const SPARKLINE_CAPACITY: usize = 60;
  constant SPARKLINE_WIDTH (line 21) | const SPARKLINE_WIDTH: usize = 8;
  type SparklineSet (line 23) | struct SparklineSet {
    method new (line 31) | fn new() -> Self {
  type SummaryView (line 41) | pub struct SummaryView {
    method new (line 80) | pub fn new(context: ContextArc) -> Self {
    method set_view_content (line 219) | pub fn set_view_content<S>(&mut self, view_name: &str, content: S)
    method update (line 228) | pub fn update(&mut self, summary: ClickHouseServerSummary) {
  function get_color_for_ratio (line 52) | fn get_color_for_ratio(used: u64, total: u64) -> cursive::theme::Color {
  function get_color_for_bytes (line 63) | fn get_color_for_bytes(bytes: u64) -> cursive::theme::Color {
  method draw (line 551) | fn draw(&self, printer: &Printer<'_, '_>) {
  method needs_relayout (line 555) | fn needs_relayout(&self) -> bool {
  method layout (line 559) | fn layout(&mut self, size: Vec2) {
  method required_size (line 563) | fn required_size(&mut self, req: Vec2) -> Vec2 {
  method on_event (line 567) | fn on_event(&mut self, event: Event) -> EventResult {
  method call_on_any (line 571) | fn call_on_any(&mut self, selector: &Selector<'_>, callback: AnyCb<'_>) {

FILE: src/view/table_view.rs
  type TableViewItem (line 48) | pub trait TableViewItem<H>: Clone + Sized
    method to_column (line 54) | fn to_column(&self, column: H) -> String;
    method cmp (line 57) | fn cmp(&self, other: &Self, column: H) -> Ordering
    method to_column_styled (line 63) | fn to_column_styled(&self, column: H) -> StyledString {
  type OnSortCallback (line 73) | type OnSortCallback<H> = Arc<dyn Fn(&mut Cursive, H, Ordering) + Send + ...
  type IndexCallback (line 78) | type IndexCallback = Arc<dyn Fn(&mut Cursive, Option<usize>, Option<usiz...
  type TableView (line 136) | pub struct TableView<T, H> {
  method default (line 179) | fn default() -> Self {
  function set_items_stable (line 196) | pub fn set_items_stable(&mut self, items: Vec<T>) {
  function new (line 217) | pub fn new() -> Self {
  function column (line 250) | pub fn column<S: Into<String>, C: FnOnce(TableColumn<H>) -> TableColumn<...
  function add_column (line 265) | pub fn add_column<S: Into<String>, C: FnOnce(TableColumn<H>) -> TableCol...
  function remove_column (line 275) | pub fn remove_column(&mut self, i: usize) {
  function insert_column (line 291) | pub fn insert_column<S: Into<String>, C: FnOnce(TableColumn<H>) -> Table...
  function default_column (line 315) | pub fn default_column(mut self, column: H) -> Self {
  function set_default_column (line 321) | pub fn set_default_column(&mut self, column: H) {
  function sort_by (line 336) | pub fn sort_by(&mut self, column: H, order: Ordering) {
  function sort (line 354) | pub fn sort(&mut self) {
  function order (line 365) | pub fn order(&self) -> Option<(H, Ordering)> {
  function disable (line 377) | pub fn disable(&mut self) {
  function enable (line 382) | pub fn enable(&mut self) {
  function set_enabled (line 387) | pub fn set_enabled(&mut self, enabled: bool) {
  function is_enabled (line 392) | pub fn is_enabled(&self) -> bool {
  function set_on_sort (line 406) | pub fn set_on_sort<F>(&mut self, cb: F)
  function on_sort (line 425) | pub fn on_sort<F>(self, cb: F) -> Self
  function set_on_submit (line 445) | pub fn set_on_submit<F>(&mut self, cb: F)
  function on_submit (line 467) | pub fn on_submit<F>(self, cb: F) -> Self
  function set_on_select (line 486) | pub fn set_on_select<F>(&mut self, cb: F)
  function on_select (line 507) | pub fn on_select<F>(self, cb: F) -> Self
  function clear (line 515) | pub fn clear(&mut self) {
  function len (line 523) | pub fn len(&self) -> usize {
  function is_empty (line 528) | pub fn is_empty(&self) -> bool {
  function row (line 533) | pub fn row(&self) -> Option<usize> {
  function set_selected_row (line 542) | pub fn set_selected_row(&mut self, row_index: usize) {
  function selected_row (line 550) | pub fn selected_row(self, row_index: usize) -> Self {
  function set_items (line 558) | pub fn set_items(&mut self, items: Vec<T>) {
  function set_items_and_focus (line 562) | fn set_items_and_focus(&mut self, items: Vec<T>, new_location: Option<us...
  function calculate_content_widths (line 592) | fn calculate_content_widths(&mut self) {
  function items (line 619) | pub fn items(self, items: Vec<T>) -> Self {
  function title (line 624) | pub fn title<S: Into<String>>(mut self, title: S) -> Self {
  function set_title (line 630) | pub fn set_title<S: Into<String>>(&mut self, title: S) {
  function borrow_item (line 636) | pub fn borrow_item(&self, index: usize) -> Option<&T> {
  function borrow_item_mut (line 642) | pub fn borrow_item_mut(&mut self, index: usize) -> Option<&mut T> {
  function borrow_items (line 647) | pub fn borrow_items(&mut self) -> &[T] {
  function borrow_items_mut (line 654) | pub fn borrow_items_mut(&mut self) -> &mut [T] {
  function item (line 661) | pub fn item(&self) -> Option<usize> {
  function set_selected_item (line 671) | pub fn set_selected_item(&mut self, item_index: usize) {
  function selected_item (line 688) | pub fn selected_item(self, item_index: usize) -> Self {
  function insert_item (line 698) | pub fn insert_item(&mut self, item: T) {
  function insert_item_at (line 712) | pub fn insert_item_at(&mut self, index: usize, item: T) {
  function remove_item (line 726) | pub fn remove_item(&mut self, item_index: usize) -> Option<T> {
  function take_items (line 754) | pub fn take_items(&mut self) -> Vec<T> {
  function title_height (line 767) | fn title_height(&self) -> usize {
  function draw_columns (line 771) | fn draw_columns<C: Fn(&Printer<'_, '_>, &TableColumn<H>)>(
  function sort_items (line 796) | fn sort_items(&mut self, column: H, order: Ordering) {
  function draw_item (line 816) | fn draw_item(&self, printer: &Printer<'_, '_>, i: usize) {
  function on_focus_change (line 823) | fn on_focus_change(&self) -> EventResult {
  function focus_up (line 833) | fn focus_up(&mut self, n: usize) {
  function focus_down (line 837) | fn focus_down(&mut self, n: usize) {
  function active_column (line 842) | fn active_column(&self) -> usize {
  function column_cancel (line 846) | fn column_cancel(&mut self) {
  function column_next (line 853) | fn column_next(&mut self) -> bool {
  function column_prev (line 864) | fn column_prev(&mut self) -> bool {
  function column_select (line 875) | fn column_select(&mut self) -> EventResult {
  function column_for_x (line 906) | fn column_for_x(&self, mut x: usize) -> Option<usize> {
  function column_boundary_at (line 918) | fn column_boundary_at(&self, x: usize) -> Option<(usize, usize)> {
  function draw_content (line 936) | fn draw_content(&self, printer: &Printer<'_, '_>) {
  function layout_content (line 960) | fn layout_content(&mut self, size: Vec2) {
  function content_required_size (line 1045) | fn content_required_size(&mut self, req: Vec2) -> Vec2 {
  function on_inner_event (line 1049) | fn on_inner_event(&mut self, event: Event) -> EventResult {
  function inner_important_area (line 1144) | fn inner_important_area(&self, size: Vec2) -> Rect {
  function on_submit_event (line 1148) | fn on_submit_event(&mut self) -> EventResult {
  method draw (line 1164) | fn draw(&self, printer: &Printer<'_, '_>) {
  method layout (line 1206) | fn layout(&mut self, size: Vec2) {
  method take_focus (line 1218) | fn take_focus(&mut self, _: Direction) -> Result<EventResult, CannotFocu...
  method on_event (line 1222) | fn on_event(&mut self, event: Event) -> EventResult {
  method important_area (line 1354) | fn important_area(&self, size: Vec2) -> Rect {
  type TableColumn (line 1362) | pub struct TableColumn<H> {
  type TableColumnWidth (line 1373) | enum TableColumnWidth {
  function ordering (line 1385) | pub fn ordering(mut self, order: Ordering) -> Self {
  function align (line 1391) | pub fn align(mut self, alignment: HAlign) -> Self {
  function width (line 1397) | pub fn width(mut self, width: usize) -> Self {
  function width_percent (line 1404) | pub fn width_percent(mut self, width: usize) -> Self {
  function width_min (line 1411) | pub fn width_min(mut self, min: usize) -> Self {
  function width_min_max (line 1418) | pub fn width_min_max(mut self, min: usize, max: usize) -> Self {
  function new (line 1423) | fn new(column: H, title: String) -> Self {
  function draw_header (line 1436) | fn draw_header(&self, printer: &Printer<'_, '_>) {
  function draw_row (line 1467) | fn draw_row(&self, printer: &Printer<'_, '_>, value: &StyledString) {
  type SimpleColumn (line 1509) | enum SimpleColumn {
    method as_str (line 1515) | fn as_str(&self) -> &str {
  type SimpleItem (line 1523) | struct SimpleItem {
    method to_column (line 1528) | fn to_column(&self, column: SimpleColumn) -> String {
    method cmp (line 1534) | fn cmp(&self, other: &Self, column: SimpleColumn) -> Ordering
  function setup_test_table (line 1544) | fn setup_test_table() -> TableView<SimpleItem, SimpleColumn> {
  function should_insert_into_existing_table (line 1550) | fn should_insert_into_existing_table() {
  function should_insert_into_empty_table (line 1573) | fn should_insert_into_empty_table() {
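`table_view.rs` drives every table in chdig through the `TableViewItem<H>` trait (line 48): an item renders itself per column and compares itself per column, exactly the pattern the `SimpleItem` test fixture above exercises. A condensed sketch of the trait and one implementation; the `StyledString`-based `to_column_styled` default method is omitted, and the field names here are assumptions:

```rust
use std::cmp::Ordering;

// Condensed form of the TableViewItem trait from table_view.rs;
// the cursive-backed to_column_styled default is left out.
pub trait TableViewItem<H>: Clone + Sized {
    fn to_column(&self, column: H) -> String;
    fn cmp(&self, other: &Self, column: H) -> Ordering;
}

#[derive(Clone, Copy)]
enum SimpleColumn {
    Name,
    Size,
}

// An item akin to the SimpleItem test fixture (fields assumed).
#[derive(Clone)]
struct SimpleItem {
    name: String,
    size: u64,
}

impl TableViewItem<SimpleColumn> for SimpleItem {
    fn to_column(&self, column: SimpleColumn) -> String {
        match column {
            SimpleColumn::Name => self.name.clone(),
            SimpleColumn::Size => self.size.to_string(),
        }
    }

    fn cmp(&self, other: &Self, column: SimpleColumn) -> Ordering {
        match column {
            SimpleColumn::Name => self.name.cmp(&other.name),
            SimpleColumn::Size => self.size.cmp(&other.size),
        }
    }
}
```

Keeping sort order (`cmp`) separate from display (`to_column`) is what lets `sort_by` reorder rows by any column header without re-parsing the rendered strings.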

FILE: src/view/text_log_view.rs
  type DateTime64 (line 15) | pub type DateTime64 = DateTime<Local>;
  type DateTimeArc (line 16) | pub type DateTimeArc = Arc<Mutex<DateTime64>>;
  type TextLogView (line 18) | pub struct TextLogView {
    method new (line 30) | pub fn new(view_name: &'static str, context: ContextArc, args: TextLog...
    method update (line 136) | pub fn update(&mut self, logs_block: Columns) -> Result<()> {
  constant FLUSH_INTERVAL_MILLISECONDS (line 27) | const FLUSH_INTERVAL_MILLISECONDS: i64 = 7500;

FILE: src/view/utils.rs
  function show_bottom_prompt (line 19) | pub fn show_bottom_prompt<F>(siv: &mut Cursive, prefix: &'static str, on...
Condensed preview — 89 files, each entry showing path, character count, and a content snippet (the full structured content is 797K chars; preview strings are truncated verbatim).
[
  {
    "path": ".cargo/audit.toml",
    "chars": 777,
    "preview": "# https://docs.rs/crate/cargo-audit/0.10.0/source/audit.toml.example\n[advisories]\nignore = [\n    # time: Potential segfa"
  },
  {
    "path": ".cargo/config.toml",
    "chars": 48,
    "preview": "[build]\nrustflags = [\"--cfg\", \"tokio_unstable\"]\n"
  },
  {
    "path": ".exrc",
    "chars": 276,
    "preview": "\"\n\" Add this into your .vimrc, to allow vim handle this file.\n\"\n\" set exrc\n\" set secure \" even after this this is kind o"
  },
  {
    "path": ".github/workflows/build.yml",
    "chars": 9027,
    "preview": "---\nname: Build chdig\n\non:\n  workflow_call:\n    inputs: {}\n\nenv:\n  CARGO_TERM_COLOR: always\n\njobs:\n  lint:\n    name: Run"
  },
  {
    "path": ".github/workflows/pre_release.yml",
    "chars": 753,
    "preview": "---\nname: pre-release\n\non:\n  push:\n    branches:\n    - main\n\njobs:\n  build:\n    uses: ./.github/workflows/build.yml\n\n  p"
  },
  {
    "path": ".github/workflows/pull_request.yml",
    "chars": 523,
    "preview": "---\nname: pull_request\n\non:\n  pull_request:\n    types:\n    - synchronize\n    - reopened\n    - opened\n    branches:\n    -"
  },
  {
    "path": ".github/workflows/release.yml",
    "chars": 2568,
    "preview": "---\nname: release\n\non:\n  push:\n    tags:\n    - \"v*\"\n\njobs:\n  build:\n    uses: ./.github/workflows/build.yml\n\n  publish-r"
  },
  {
    "path": ".gitignore",
    "chars": 98,
    "preview": "# cargo\ntarget\n/vendor\n# distribution\ndist\n# packages\n*.deb\n*.tar.*\n*.tar\n*.rpm\n# intellij\n.idea/\n"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 557,
    "preview": "---\nrepos:\n- repo: https://github.com/pre-commit/pre-commit-hooks\n  rev: v4.5.0\n  hooks:\n  - id: check-byte-order-marker"
  },
  {
    "path": ".yamllint",
    "chars": 373,
    "preview": "# vi: ft=yaml\n---\nextends: default\n\nrules:\n  indentation:\n    spaces: 2\n    level: error\n    indent-sequences: false\n  l"
  },
  {
    "path": "Cargo.toml",
    "chars": 4250,
    "preview": "[package]\nname = \"chdig\"\nauthors = [\"Azat Khuzhin <a3at.mail@gmail.com>\"]\nhomepage = \"https://github.com/azat/chdig\"\nrep"
  },
  {
    "path": "Documentation/Actions.md",
    "chars": 6496,
    "preview": "### Actions\n\n`chdig` supports lots of actions, some has shortcut, others available only in\n`Ctlr-P` (fuzzy search by all"
  },
  {
    "path": "Documentation/Bugs.md",
    "chars": 124,
    "preview": "### `--history` is broken in some versions\n\nThe reason is that in some ClickHouse versions merge() function ignore alias"
  },
  {
    "path": "Documentation/Developers.md",
    "chars": 575,
    "preview": "## Developer Documentation\n\n### Debugging async code with tokio-console\n\nchdig supports [tokio-console](https://github.c"
  },
  {
    "path": "Documentation/FAQ.md",
    "chars": 7268,
    "preview": "### What is format of the URL accepted by `chdig`?\n\nThe simplest form is just - **`localhost`**\n\nFor a secure connection"
  },
  {
    "path": "LICENSE",
    "chars": 1052,
    "preview": "Copyright 2023 Azat Khuzhin\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis softwa"
  },
  {
    "path": "Makefile",
    "chars": 5318,
    "preview": "debug ?=\ntarget ?= $(shell rustc -vV | sed -n 's|host: ||p')\n# Parse the target (i.e. aarch64-unknown-linux-musl)\ntarget"
  },
  {
    "path": "README.md",
    "chars": 4699,
    "preview": "### chdig\n\nDig into [ClickHouse](https://github.com/ClickHouse/ClickHouse/) with TUI interface.\n\n### Installation\n\n`chdi"
  },
  {
    "path": "chdig-nfpm.yaml",
    "chars": 561,
    "preview": "---\nname: \"chdig\"\narch: \"${CHDIG_ARCH}\"\nplatform: \"linux\"\nversion: \"${CHDIG_VERSION}\"\nhomepage: \"https://github.com/azat"
  },
  {
    "path": "rustfmt.toml",
    "chars": 17,
    "preview": "edition = \"2018\"\n"
  },
  {
    "path": "src/actions.rs",
    "chars": 1325,
    "preview": "use cursive::{event::Event, theme::Effect, utils::markup::StyledString};\n\n#[derive(Clone)]\npub struct ActionDescription "
  },
  {
    "path": "src/bin.rs",
    "chars": 4781,
    "preview": "use anyhow::{Result, anyhow};\nuse backtrace::Backtrace;\nuse flexi_logger::{FileSpec, LogSpecification, Logger};\nuse std:"
  },
  {
    "path": "src/common/mod.rs",
    "chars": 187,
    "preview": "mod relative_date_time;\npub mod sparkline;\nmod stopwatch;\n\npub use relative_date_time::RelativeDateTime;\npub use relativ"
  },
  {
    "path": "src/common/relative_date_time.rs",
    "chars": 4794,
    "preview": "use chrono::{DateTime, Local, NaiveDate, NaiveDateTime, TimeDelta};\nuse std::{\n    fmt::Display,\n    ops::{AddAssign, Su"
  },
  {
    "path": "src/common/sparkline.rs",
    "chars": 1459,
    "preview": "use std::collections::VecDeque;\n\nconst BLOCKS: &[char] = &['▁', '▂', '▃', '▄', '▅', '▆', '▇', '█'];\n\npub struct Sparklin"
  },
  {
    "path": "src/common/stopwatch.rs",
    "chars": 456,
    "preview": "/// Stupid and simple implementation of stopwatch.\nuse std::time::{Duration, Instant};\n\npub struct Stopwatch {\n    start"
  },
  {
    "path": "src/interpreter/background_runner.rs",
    "chars": 2120,
    "preview": "use std::sync::{Arc, Condvar, Mutex, atomic};\nuse std::thread;\nuse std::time::Duration;\n\n/// Runs periodic tasks in back"
  },
  {
    "path": "src/interpreter/clickhouse.rs",
    "chars": 91186,
    "preview": "use crate::{\n    common::RelativeDateTime,\n    interpreter::{\n        ClickHouseAvailableQuirks, ClickHouseQuirks,\n     "
  },
  {
    "path": "src/interpreter/clickhouse_quirks.rs",
    "chars": 6645,
    "preview": "use semver::{Version, VersionReq};\n\n#[derive(Debug, Clone, Copy)]\npub enum ClickHouseAvailableQuirks {\n    ProcessesElap"
  },
  {
    "path": "src/interpreter/context.rs",
    "chars": 8358,
    "preview": "use crate::actions::ActionDescription;\nuse crate::interpreter::{\n    ClickHouse, Worker,\n    debug_metrics::DebugMetrics"
  },
  {
    "path": "src/interpreter/debug_metrics.rs",
    "chars": 10009,
    "preview": "//! Internal chdig observability counters, rendered into the status bar when toggled with `!`.\n//!\n//! Metrics are recor"
  },
  {
    "path": "src/interpreter/flamegraph.rs",
    "chars": 3520,
    "preview": "use crate::interpreter::clickhouse::Columns;\nuse crate::pastila;\nuse anyhow::{Error, Result};\nuse crossterm::event::{sel"
  },
  {
    "path": "src/interpreter/mod.rs",
    "chars": 613,
    "preview": "// pub for clickhouse::Columns\nmod background_runner;\npub mod clickhouse;\nmod clickhouse_quirks;\nmod context;\npub mod de"
  },
  {
    "path": "src/interpreter/options.rs",
    "chars": 50393,
    "preview": "use crate::common::RelativeDateTime;\nuse anyhow::{Result, anyhow};\nuse clap::{ArgAction, Args, CommandFactory, Parser, S"
  },
  {
    "path": "src/interpreter/perfetto.rs",
    "chars": 67961,
    "preview": "use crate::interpreter::Query;\nuse crate::interpreter::clickhouse::{Columns, MetricLogRow, QueryMetricRow};\nuse chrono::"
  },
  {
    "path": "src/interpreter/query.rs",
    "chars": 12015,
    "preview": "use anyhow::Result;\nuse chrono::{DateTime, Local};\nuse chrono_tz::Tz;\nuse size::{Base, SizeFormatter, Style};\nuse std::c"
  },
  {
    "path": "src/interpreter/worker.rs",
    "chars": 46258,
    "preview": "use crate::{\n    common::{RelativeDateTime, Stopwatch},\n    interpreter::{\n        ContextArc, Query,\n        clickhouse"
  },
  {
    "path": "src/lib.rs",
    "chars": 142,
    "preview": "mod actions;\nmod common;\nmod interpreter;\nmod pastila;\nmod utils;\nmod view;\n\nmod bin;\npub use bin::chdig_main;\npub use b"
  },
  {
    "path": "src/main.rs",
    "chars": 266,
    "preview": "use anyhow::Result;\nuse chdig::chdig_main_async;\nuse std::env::args_os;\n\n#[tokio::main(flavor = \"current_thread\")]\nasync"
  },
  {
    "path": "src/pastila.rs",
    "chars": 6285,
    "preview": "use aes_gcm::{\n    Aes128Gcm, KeyInit, Nonce,\n    aead::{Aead, generic_array::GenericArray},\n};\nuse anyhow::{Result, any"
  },
  {
    "path": "src/utils.rs",
    "chars": 12747,
    "preview": "use crate::actions::ActionDescription;\nuse crate::pastila;\nuse crate::view::Navigation;\nuse anyhow::{Context, Error, Res"
  },
  {
    "path": "src/view/log_view.rs",
    "chars": 52214,
    "preview": "use anyhow::{Error, Result};\nuse chrono::{DateTime, Datelike, Duration, Local, Timelike};\nuse cursive::{\n    Cursive, Pr"
  },
  {
    "path": "src/view/mod.rs",
    "chars": 734,
    "preview": "mod log_view;\nmod navigation;\nmod provider;\npub mod providers;\nmod queries_view;\nmod query_view;\nmod registry;\npub mod s"
  },
  {
    "path": "src/view/navigation.rs",
    "chars": 36687,
    "preview": "use crate::utils::{fuzzy_actions, fuzzy_select_strings};\nuse crate::{\n    common::parse_datetime_or_date,\n    interprete"
  },
  {
    "path": "src/view/provider.rs",
    "chars": 542,
    "preview": "use crate::interpreter::{ContextArc, options::ChDigViews};\nuse cursive::Cursive;\n\n/// Trait for providing views in the a"
  },
  {
    "path": "src/view/providers/asynchronous_inserts.rs",
    "chars": 5324,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, Navigation, ViewProvider},\n};\nuse curs"
  },
  {
    "path": "src/view/providers/background_schedule_pool.rs",
    "chars": 8413,
    "preview": "use crate::{\n    actions::ActionDescription,\n    interpreter::{ContextArc, WorkerEvent, options::ChDigViews},\n    utils:"
  },
  {
    "path": "src/view/providers/background_schedule_pool_log.rs",
    "chars": 8510,
    "preview": "use crate::{\n    interpreter::{ContextArc, clickhouse::TextLogArguments, options::ChDigViews},\n    view::{self, Navigati"
  },
  {
    "path": "src/view/providers/backups.rs",
    "chars": 2926,
    "preview": "use crate::{\n    common::RelativeDateTime,\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, TextLogV"
  },
  {
    "path": "src/view/providers/client.rs",
    "chars": 8533,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    utils::TerminalRawModeGuard,\n    view::ViewProvider"
  },
  {
    "path": "src/view/providers/dictionaries.rs",
    "chars": 1330,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::ViewProvider,\n};\nuse cursive::Cursive;\nuse st"
  },
  {
    "path": "src/view/providers/errors.rs",
    "chars": 4847,
    "preview": "use crate::{\n    common::RelativeDateTime,\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, QueryRes"
  },
  {
    "path": "src/view/providers/logger_names.rs",
    "chars": 5970,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, Navigation, TextLogView, ViewProvider}"
  },
  {
    "path": "src/view/providers/merges.rs",
    "chars": 6426,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, Navigation, TextLogView, ViewProvider}"
  },
  {
    "path": "src/view/providers/mod.rs",
    "chars": 13881,
    "preview": "pub mod asynchronous_inserts;\nmod background_schedule_pool;\nmod background_schedule_pool_log;\nmod backups;\nmod client;\nm"
  },
  {
    "path": "src/view/providers/mutations.rs",
    "chars": 4098,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, Navigation, ViewProvider},\n};\nuse curs"
  },
  {
    "path": "src/view/providers/object_storage_queue.rs",
    "chars": 1634,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::ViewProvider,\n};\nuse cursive::Cursive;\nuse st"
  },
  {
    "path": "src/view/providers/part_log.rs",
    "chars": 10757,
    "preview": "use crate::{\n    actions::ActionDescription,\n    common::RelativeDateTime,\n    interpreter::{ContextArc, TextLogArgument"
  },
  {
    "path": "src/view/providers/queries.rs",
    "chars": 2432,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{self, Navigation, ProcessesType, ViewProvide"
  },
  {
    "path": "src/view/providers/replicas.rs",
    "chars": 3370,
    "preview": "use crate::{\n    interpreter::{ClickHouseAvailableQuirks, ContextArc, options::ChDigViews},\n    view::{self, Navigation,"
  },
  {
    "path": "src/view/providers/replicated_fetches.rs",
    "chars": 1302,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::ViewProvider,\n};\nuse cursive::Cursive;\nuse st"
  },
  {
    "path": "src/view/providers/replication_queue.rs",
    "chars": 1408,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::ViewProvider,\n};\nuse cursive::Cursive;\nuse st"
  },
  {
    "path": "src/view/providers/server_logs.rs",
    "chars": 1787,
    "preview": "use crate::{\n    interpreter::{ContextArc, options::ChDigViews},\n    view::{Navigation, TextLogView, ViewProvider},\n};\nu"
  },
  {
    "path": "src/view/providers/table_parts.rs",
    "chars": 8575,
    "preview": "use crate::{\n    actions::ActionDescription,\n    common::RelativeDateTime,\n    interpreter::{ContextArc, TextLogArgument"
  },
  {
    "path": "src/view/providers/tables.rs",
    "chars": 12744,
    "preview": "use crate::{\n    actions::ActionDescription,\n    interpreter::{ClickHouseAvailableQuirks, ContextArc, WorkerEvent, optio"
  },
  {
    "path": "src/view/queries_view.rs",
    "chars": 53838,
    "preview": "use anyhow::{Error, Result};\nuse chrono::{DateTime, Local, TimeDelta};\nuse cursive::view::Scrollable;\nuse std::cmp::Orde"
  },
  {
    "path": "src/view/query_view.rs",
    "chars": 11220,
    "preview": "use crate::interpreter::Query;\nuse crate::view::TableViewItem;\nuse crate::view::table_view::TableView;\nuse cursive::them"
  },
  {
    "path": "src/view/registry.rs",
    "chars": 1006,
    "preview": "use super::provider::ViewProvider;\nuse crate::interpreter::options::ChDigViews;\nuse std::sync::Arc;\n\npub struct ViewRegi"
  },
  {
    "path": "src/view/search_history.rs",
    "chars": 2550,
    "preview": "use std::collections::VecDeque;\nuse std::sync::{Arc, Mutex};\n\n#[derive(Clone)]\npub struct SearchHistory {\n    history: A"
  },
  {
    "path": "src/view/settings_view.rs",
    "chars": 15451,
    "preview": "use crate::interpreter::{ContextArc, options::ChDigViews};\nuse cursive::{\n    Cursive,\n    event::{Event, Key},\n    them"
  },
  {
    "path": "src/view/sql_query_view.rs",
    "chars": 13557,
    "preview": "use std::cmp::Ordering;\nuse std::sync::{Arc, Mutex};\n\nuse anyhow::{Result, anyhow};\nuse size::{Base, SizeFormatter, Styl"
  },
  {
    "path": "src/view/summary_view.rs",
    "chars": 22293,
    "preview": "use chrono::{DateTime, Local};\nuse cursive::{\n    Printer, Vec2,\n    event::{AnyCb, Event, EventResult},\n    theme::Base"
  },
  {
    "path": "src/view/table_view.rs",
    "chars": 52653,
    "preview": "//\n// Copied from https://github.com/BonsaiDen/cursive_table_view\n//\n// And extended to support:\n// - Adopt to recent cu"
  },
  {
    "path": "src/view/text_log_view.rs",
    "chars": 6456,
    "preview": "use anyhow::Result;\nuse std::sync::{Arc, Mutex};\n\nuse chrono::{DateTime, Duration, Local};\nuse chrono_tz::Tz;\nuse cursiv"
  },
  {
    "path": "src/view/utils.rs",
    "chars": 3039,
    "preview": "use crate::interpreter::ContextArc;\nuse cursive::event::Key;\nuse cursive::theme::{ColorStyle, PaletteColor};\nuse cursive"
  },
  {
    "path": "tests/configs/accept_invalid_certificate.yaml",
    "chars": 33,
    "preview": "accept-invalid-certificate: true\n"
  },
  {
    "path": "tests/configs/basic.xml",
    "chars": 73,
    "preview": "<clickhouse>\n  <user>foo</user>\n  <password>bar</password>\n</clickhouse>\n"
  },
  {
    "path": "tests/configs/basic.yaml",
    "chars": 28,
    "preview": "---\nuser: foo\npassword: bar\n"
  },
  {
    "path": "tests/configs/chdig_basic.yaml",
    "chars": 755,
    "preview": "clickhouse:\n  url: \"tcp://config-host:9000\"\n  host: \"config-host\"\n  port: 9440\n  user: \"config_user\"\n  password: \"config"
  },
  {
    "path": "tests/configs/chdig_empty.yaml",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/configs/chdig_partial.yaml",
    "chars": 89,
    "preview": "clickhouse:\n  host: \"partial-host\"\n  user: \"partial_user\"\n\nview:\n  delay_interval: 10000\n"
  },
  {
    "path": "tests/configs/connections.yaml",
    "chars": 288,
    "preview": "---\nconnections_credentials:\n  play:\n    name: play\n    hostname: play.clickhouse.com\n    secure: true\n\n  play-tls:\n    "
  },
  {
    "path": "tests/configs/empty.xml",
    "chars": 27,
    "preview": "<clickhouse>\n</clickhouse>\n"
  },
  {
    "path": "tests/configs/empty.yaml",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "tests/configs/tls.xml",
    "chars": 272,
    "preview": "<clickhouse>\n  <secure>true</secure>\n  <openSSL>\n    <client>\n      <verificationMode>strict</verificationMode>\n      <c"
  },
  {
    "path": "tests/configs/tls.yaml",
    "chars": 132,
    "preview": "---\nsecure: true\nopenSSL:\n  client:\n    verificationMode: strict\n    certificateFile: cert\n    privateKeyFile: key\n    c"
  },
  {
    "path": "tests/configs/unknown_directives.xml",
    "chars": 44,
    "preview": "<clickhouse>\n  <foo>bar</foo>\n</clickhouse>\n"
  },
  {
    "path": "tests/configs/unknown_directives.yaml",
    "chars": 13,
    "preview": "---\nfoo: bar\n"
  },
  {
    "path": "typos.toml",
    "chars": 251,
    "preview": "# typos.toml\n\n[default.extend-identifiers]\nratatui = \"ratatui\"\nthr = \"thr\"\n\n[default.extend-words]\n# Used in imported co"
  }
]

About this extraction

This page contains the full source code of the azat/chdig GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 89 files (747.5 KB, approximately 168.9k tokens) and includes a symbol index of 813 extracted functions, classes, methods, constants, and types. The output can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.
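Each entry in the manifest above shares the same shape: a `path`, a `chars` size, and a truncated `preview`. As a minimal sketch of consuming that structure (the inline sample below is a hypothetical three-entry excerpt, not the full 89-entry manifest), one could sort the index by size to find the largest files in the extraction:

```python
import json

# Hypothetical excerpt mirroring the manifest's entry shape
# ({"path", "chars", "preview"}); the real extraction has 89 entries.
manifest = json.loads("""
[
  {"path": "src/view/queries_view.rs", "chars": 53838, "preview": "use anyhow::{Error, Result};"},
  {"path": "src/view/table_view.rs", "chars": 52653, "preview": "// Copied from cursive_table_view"},
  {"path": "tests/configs/empty.yaml", "chars": 0, "preview": ""}
]
""")

# Sort entries by character count, largest first.
largest = sorted(manifest, key=lambda entry: entry["chars"], reverse=True)
for entry in largest[:2]:
    print(f"{entry['path']}: {entry['chars']} chars")
```

Running this prints `src/view/queries_view.rs: 53838 chars` and `src/view/table_view.rs: 52653 chars`, matching the two largest files listed in the manifest.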

Extracted by GitExtract, a GitHub-repository-to-text converter for AI, built by Nikandr Surkov.
