[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n\n**Desktop (please complete the following information):**\n - OS: [e.g. iOS]\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]\n\n**Smartphone (please complete the following information):**\n - Device: [e.g. iPhone6]\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 22]\n\n**Additional context**\nAdd any other context about the problem here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/r2cn.md",
    "content": "---\nname: r2cn\nabout: r2cn 任务模板\ntitle: \"[r2cn] \"\nlabels: r2cn\nassignees: \"\"\n---\n\n[__任务__]\n\n[__任务分值__] 4 分\n\n[__背景描述__]\n\n[__需求描述__]\n\n[__代码标准__]\n\n1. 所有 **PR** 提交必须签署 `Signed-off-by` 和 使用 `GPG` 签名，即提交代码时（使用 `git commit` 命令时）至少使用 `-s -S` 两个参数，参考 [Contributing Guide](https://github.com/genmeta/dquic/blob/main/docs/contributing.md)；\n2. 所有 **PR** 提交必须通过 `GitHub Actions` 自动化测试，提交 **PR** 后请关注 `GitHub Actions` 结果；\n3. 代码注释均需要使用英文;\n\n[__PR 提交地址__] 提交到 [dquic](https://github.com/genmeta/dquic) 仓库的 `main` 分支 `` 目录；\n\n[__开发指导__]\n\n1. 认领任务参考 [r2cn 开源实习计划 - 任务认领与确认](https://r2cn.dev/docs/student/assign);\n\n[__导师及邮箱__] 请申请此题目的同学使用邮件联系导师，或加入到 [R2CN Discord](https://discord.gg/WRp4TKv6rh) 后在 `#p-meta` 频道和导师交流。\n\n1. Peng Zhang <zhangpeng@genmeta.net>\n\n[__备注__]\n\n1. **认领实习任务的同学，必须完成测试任务和注册流程，请参考：** [r2cn 开源实习计划 - 测试任务](https://r2cn.dev/docs/student/pre-task) 和 [r2cn 开源实习计划 - 学生注册与审核](https://r2cn.dev/docs/student/signup)\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "# To get started with Dependabot version updates, you'll need to specify which\n# package ecosystems to update and where the package manifests are located.\n# Please see the documentation for all configuration options:\n# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates\n\nversion: 2\nupdates:\n  - package-ecosystem: \"cargo\" # See documentation for possible values\n    directory: \"/\" # Location of package manifests\n    schedule:\n      interval: \"weekly\"\n\n"
  },
  {
    "path": ".github/workflows/benchmark.yml",
    "content": "name: Benchmarks\n\non:\n  workflow_dispatch:  # Allows manual triggering\n  schedule:\n    - cron: '0 2 * * *'  # UTC 2:00 AM = Beijing 10:00 AM\n\njobs:\n\n  prepare-matrix:\n    runs-on: ubuntu-latest\n    outputs: \n      runners: ${{ steps.prepare-runners.outputs.runners }}\n    steps:\n      - uses: actions/checkout@v4\n      - id: prepare-runners\n        run: |\n          runners=\"$(python3 benchmark/launch.py runners -q)\"\n          echo \"runners=$runners\" >> $GITHUB_OUTPUT\n\n  run-benchmarks:\n    strategy:\n      fail-fast: false\n      matrix:\n        runner: ${{ fromJson(needs.prepare-matrix.outputs.runners) }}\n        target: [ubuntu,macos,]\n    runs-on: ${{ matrix.target }}-latest\n    needs: prepare-matrix\n    steps:\n      - uses: actions/checkout@v4\n      - name: Install latest rust stable toolchain\n        uses: actions-rust-lang/setup-rust-toolchain@v1\n        with:\n          rustflags: \"\" # tquic use deprecated function, and this action set rustflags to \"-D warnings\" by default\n      - name: Install go for macos runner\n        if: matrix.target=='macos' && matrix.runner=='quic-go'\n        run: brew install go\n      - name: Run benchmarks\n        run: |\n          which openssl\n          python3 benchmark/launch.py run ${{ matrix.runner }} --no-plot\n      - name: Rename benchmark results dir\n        run: mv benchmark/output benchmark-output-${{ matrix.target }}-${{ matrix.runner }}\n      - name: Upload benchmark results\n        uses: actions/upload-artifact@v4\n        with:\n          path: benchmark-output-${{ matrix.target }}-${{ matrix.runner }}\n          name: benchmark-output-${{ matrix.target }}-${{ matrix.runner }}\n  \n  summary-results:\n    runs-on: ubuntu-latest\n    needs: [run-benchmarks]\n    strategy:\n      fail-fast: false\n      matrix:\n        target: [ubuntu, macos]\n    steps:\n      - uses: actions/checkout@v4\n      - name: Install matplotlib\n        run: |\n          sudo apt 
update\n          sudo apt install -y python3-matplotlib\n      - name: Download outputs\n        uses: actions/download-artifact@v4\n        with:\n          pattern: benchmark-output-${{ matrix.target }}-*\n      - name: Summary ${{ matrix.target }}\n        run: |\n          # Collect all results.json paths and create a space-separated list\n          results_files=$(find . -name \"results.json\" | tr '\\n' ' ')\n          echo \"Results files: $results_files\"\n\n          # Pass all results files to the plot command\n          python3 benchmark/launch.py plot $results_files\n\n          # Collect logs\n          cp -r */logs benchmark/output/\n          mv benchmark/output/ benchmark-output-${{ matrix.target }}\n\n      - name: Upload benchmark results\n        uses: actions/upload-artifact@v4\n        with:\n          path: benchmark-output-${{ matrix.target }}\n          name: benchmark-output-${{ matrix.target }}\n\n"
  },
  {
    "path": ".github/workflows/codecov.yml",
    "content": "name: Coverage\n\non:\n  push:\n    branches: [\"main\"]\n  pull_request:\n    branches: ['main']\n\njobs:\n  coverage:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - uses: dtolnay/rust-toolchain@stable\n      - uses: taiki-e/install-action@cargo-llvm-cov\n      # Limit test parallelism to 1 thread to avoid resource contention\n      - run: cargo llvm-cov --all-features --workspace --lcov --output-path lcov.info -- --test-threads=1\n          \n      - name: Upload coverage to Codecov\n        uses: codecov/codecov-action@v4\n        with:\n          token: ${{ secrets.CODECOV_TOKEN }}\n          files: lcov.info\n          fail_ci_if_error: true\n"
  },
  {
    "path": ".github/workflows/commitlint.yml",
    "content": "name: Commitlint\n\non:\n  push:\n    branches: [\"main\"]\n  pull_request:\n    branches: [\"main\"]\n\njobs:\n  commitlint:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - uses: wagoid/commitlint-github-action@v5\n"
  },
  {
    "path": ".github/workflows/feishu-bot.yml",
    "content": "name: feishu bot\n\non:\n  branch_protection_rule:\n    types: [created, deleted]\n  check_run:\n    types: [rerequested, completed]\n  check_suite:\n    types: [completed]\n  create:\n  delete:\n  deployment_status:\n  discussion:\n    types: [created, edited, answered]\n  discussion_comment:\n    types: [created, deleted]\n  fork:\n  gollum:\n  issues:\n    types: [opened, edited, milestoned, pinned, reopened]\n  issue_comment:\n    types: [created, deleted]\n  label:\n    types: [created, deleted]\n  merge_group:\n    types: [checks_requested]\n  milestone:\n    types: [opened, deleted]\n  page_build:\n  project:\n    types: [created, deleted, reopened]\n  project_card:\n    types: [created, deleted]\n  project_column:\n    types: [created, deleted]\n  public:\n  pull_request:\n    branches: [\"main\"]\n    types: [opened, reopened]\n  pull_request_review:\n    types: [edited, dismissed, submitted]\n  pull_request_review_comment:\n    types: [created, edited, deleted]\n  pull_request_target:\n    types: [assigned, opened, synchronize, reopened]\n  push:\n    branches: [\"main\"]\n  registry_package:\n    types: [published]\n  release:\n    types: [published]\n  status:\n  watch:\n    types: [started]\n  # schedule:\n  #   - cron: \"30 2 * * *\"\n\njobs:\n  send-event:\n    name: Webhook\n    runs-on: ubuntu-latest\n    steps:\n      - uses: KaminariOS/feishu-bot-webhook-action@main\n        with:\n          webhook: ${{ secrets.FEISHU_BOT_WEBHOOK }}\n          signkey: ${{ secrets.FEISHU_BOT_SIGNKEY }}\n"
  },
  {
    "path": ".github/workflows/rust.yml",
    "content": "name: Rust\n\non:\n  push:\n    branches: [\"main\"]\n  pull_request:\n    branches: [\"main\"]\n\nenv:\n  CARGO_TERM_COLOR: always\n\njobs:\n  build:\n    strategy:\n      matrix:\n        target: [ubuntu, macos, windows]\n      fail-fast: false\n    runs-on: ${{ matrix.target }}-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Install latest rust stable toolchain\n        uses: actions-rust-lang/setup-rust-toolchain@v1\n      - name: Build\n        run: cargo build --verbose\n      - name: Run tests\n        # Limit test parallelism to 1 thread to avoid resource contention\n        # GitHub runners have limited cores (ubuntu/windows=2, macos=3)\n        run: cargo test --workspace --verbose -- --test-threads=1\n\n  format:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Install latest rust nightly toolchain and rustfmt\n        uses: actions-rust-lang/setup-rust-toolchain@v1\n        with:\n          toolchain: nightly\n          components: rustfmt\n      - name: Run rustfmt\n        run: cargo +nightly fmt --all -- --check\n  clippy:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Install latest rust nightly toolchain with clippy\n        uses: actions-rust-lang/setup-rust-toolchain@v1\n        with:\n          toolchain: nightly\n          components: clippy\n      - name: Run clippy\n        run: cargo +nightly clippy --all-targets --all-features -- -Dwarnings\n  doc:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Install latest rust nightly toolchain\n        uses: actions-rust-lang/setup-rust-toolchain@v1\n        with:\n          toolchain: nightly\n      - name: Run doc\n        run: RUSTDOCFLAGS=\"-D warnings\" cargo +nightly doc --no-deps\n\n  msrv:\n    strategy:\n      matrix:\n        target: [ubuntu, macos, windows]\n      fail-fast: false\n    runs-on: ${{ matrix.target }}-latest\n    
steps:\n      - uses: actions/checkout@v4\n      - name: Install msrv toolchain\n        uses: actions-rust-lang/setup-rust-toolchain@v1\n        with:\n          toolchain: 1.88.0\n      - name: Build with msrv\n        run: cargo build --workspace --release\n"
  },
  {
    "path": ".github/workflows/traversal.yml",
    "content": "name: Traversal\n\non:\n  push:\n    branches: [\"main\", \"build/*\"]\n  pull_request:\n    branches: [\"main\"]\n  workflow_dispatch:\n\nenv:\n  CARGO_TERM_COLOR: always\n  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true\n\njobs:\n  nat-detection:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n\n      - name: Set up Docker Buildx\n        uses: docker/setup-buildx-action@v3\n\n      - name: Build Docker image with cache\n        uses: docker/build-push-action@v6\n        with:\n          context: .\n          file: qtraversal/tools/dockerfile\n          tags: dquic-traversal-test:latest\n          load: true\n          cache-from: type=gha\n          cache-to: type=gha,mode=max\n\n      - name: Create cargo cache volume\n        run: docker volume create cargo-cache\n\n      - name: Compile tests and get NAT detection test list\n        run: |\n          docker run --rm --privileged \\\n            -v ${{ github.workspace }}:/dquic \\\n            -v cargo-cache:/usr/local/cargo/registry \\\n            dquic-traversal-test:latest \\\n            /bin/bash -c \"\n              set -e\n              cd /dquic\n              cargo build --example stun_server --release\n              cargo test --package qtraversal test_detect --no-run\n              cargo test --package qtraversal test_detect -- --list\n            \" | grep \": test$\" | awk '{print $1}' | sed 's/:$//' > /tmp/nat_tests.txt\n          cat /tmp/nat_tests.txt\n\n      - name: Run NAT detection tests serially\n        run: |\n          mapfile -t NAT_TESTS < /tmp/nat_tests.txt\n\n          for test in \"${NAT_TESTS[@]}\"; do\n            if [ -z \"$test\" ]; then\n              continue\n            fi\n            echo \"========================================\"\n            echo \"Running NAT detection: $test\"\n            echo \"========================================\"\n\n            docker run --rm --privileged \\\n              -v ${{ github.workspace 
}}:/dquic \\\n              -v cargo-cache:/usr/local/cargo/registry \\\n              dquic-traversal-test:latest \\\n              /bin/bash -c \"\n                set -e\n                cd /dquic\n                bash qtraversal/tools/run_stun.sh\n                echo 'DEBUG: Running test [$test]'\n                ip netns exec nsa cargo test --package qtraversal '$test' -- --nocapture --include-ignored\n              \"\n\n            echo \"Completed: $test\"\n            echo \"\"\n          done\n\n  punch:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n\n      - name: Set up Docker Buildx\n        uses: docker/setup-buildx-action@v3\n\n      - name: Build Docker image with cache\n        uses: docker/build-push-action@v6\n        with:\n          context: .\n          file: qtraversal/tools/dockerfile\n          tags: dquic-traversal-test:latest\n          load: true\n          cache-from: type=gha\n          cache-to: type=gha,mode=max\n\n      - name: Create cargo cache volume\n        run: docker volume create cargo-cache\n\n      - name: Compile tests and get hole punching test list\n        run: |\n          docker run --rm --privileged \\\n            -v ${{ github.workspace }}:/dquic \\\n            -v cargo-cache:/usr/local/cargo/registry \\\n            dquic-traversal-test:latest \\\n            /bin/bash -c \"\n              set -e\n              cd /dquic\n              cargo build --example stun_server --release\n              cargo test --test traversal --no-run\n              cargo test --test traversal -- --list\n            \" | grep \": test$\" | awk '{print $1}' | sed 's/:$//' > /tmp/hp_tests.txt\n          cat /tmp/hp_tests.txt\n\n      - name: Run hole punching tests serially\n        run: |\n          mapfile -t HP_TESTS < /tmp/hp_tests.txt\n\n          for test in \"${HP_TESTS[@]}\"; do\n            if [ -z \"$test\" ]; then\n              continue\n            fi\n            echo 
\"========================================\"\n            echo \"Running hole punching: $test\"\n            echo \"========================================\"\n\n            docker run --rm --privileged \\\n              -v ${{ github.workspace }}:/dquic \\\n              -v cargo-cache:/usr/local/cargo/registry \\\n              dquic-traversal-test:latest \\\n              /bin/bash -c \"\n                set -e\n                cd /dquic\n                bash qtraversal/tools/run_stun.sh\n                echo 'DEBUG: Running test [$test]'\n                ip netns exec nsa cargo test --test traversal '$test' -- --include-ignored --nocapture\n              \"\n\n            echo \"Completed: $test\"\n            echo \"\"\n          done\n"
  },
  {
    "path": ".gitignore",
    "content": "# Generated by Cargo\n# will have compiled files and executables\ndebug/\ntarget/\n\n# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries\n# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html\nCargo.lock\n\n# These are backup files generated by rustfmt\n**/*.rs.bk\n\n# Exclude benchmark temp files\n/benchmark/*\n!/benchmark/launch.py\n\n# cago-tarpaulin (coverage tool) generates this\ntarpaulin-report.html\n\n# MSVC Windows builds of rustc generate these, which store debugging information\n*.pdb\n.vscode/*\n.idea/\n*.log\nlog\n.DS_Store\n*.sqlog\n.cargo/config.toml\n\n# Local agent instructions\nAGENTS.md\n"
  },
  {
    "path": ".pre-commit-config.yaml",
    "content": "repos:\n  - hooks:\n      - id: commitizen\n        stages:\n          - commit-msg\n    repo: https://github.com/commitizen-tools/commitizen\n    rev: v2.24.0\n  - hooks:\n      - id: fmt\n      #- id: cargo-check\n      #- id: clippy\n    repo: https://github.com/doublify/pre-commit-rust\n    rev: v1.0\n  - repo: https://github.com/alessandrojcm/commitlint-pre-commit-hook\n    rev: v9.5.0\n    hooks:\n      - id: commitlint\n        stages: [commit-msg]\n        additional_dependencies: [\"@commitlint/config-conventional\"]\n"
  },
  {
    "path": ".rustfmt.toml",
    "content": "imports_granularity = \"Crate\"\ngroup_imports = \"StdExternalCrate\"\nstyle_edition = \"2024\"\n"
  },
  {
    "path": ".rusty-hook.toml",
    "content": "[hooks]\n#pre-commit = \"cargo check && cargo clippy --all-targets --all -- -D warnings\"\n#pre-push = \"cargo check && cargo clippy --all-targets --all -- -D warnings && cargo test -- --test-threads=1\"\npre-push = \"cargo build\"\n#post-commit = \"echo yay\"\n\n[logging]\nverbose = true\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, religion, or sexual identity\nand orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\n  and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the\n  overall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\n  advances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\n  address, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n  professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, 
and other contributions that are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at\n[quic_team@genmeta.net].\nAll complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\n\n## Enforcement Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they deem in violation of this Code of Conduct:\n\n### 1. Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n\n### 2. Warning\n\n**Community Impact**: A violation through a single incident or series\nof actions.\n\n**Consequence**: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or\npermanent ban.\n\n### 3. 
Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including\nsustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n\n### 4. Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior,  harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within\nthe community.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.0, available at\nhttps://www.contributor-covenant.org/version/2/0/code_of_conduct.html.\n\nCommunity Impact Guidelines were inspired by [Mozilla's code of conduct\nenforcement ladder](https://github.com/mozilla/diversity).\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see the FAQ at\nhttps://www.contributor-covenant.org/faq. Translations are available at\nhttps://www.contributor-covenant.org/translations.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to dquic\n\nWelcome all feedback and PRs, including bug reports, feature requests, documentation improvements, and code refactoring. However, please note that dquic has extremely strict quality requirements for code and documentation. The quality of code and documentation will undergo rigorous review before being merged. Contributors must understand and patiently address all feedback before merging.\n\nIf you are unsure about the reasonableness of a feature or its implementation, please first create an issue in the [issue list](https://github.com/genmeta/dquic/issues) for discussion to ensure that the feature is reasonable and has a good implementation plan.\n"
  },
  {
    "path": "Cargo.toml",
    "content": "[workspace]\nresolver = \"2\"\nmembers = [\n    \"qmacro\",\n    \"qbase\",\n    \"qevent\",\n    \"qrecovery\",\n    \"qcongestion\",\n    \"qudp\",\n    \"qinterface\",\n    \"qprotocol\",\n    \"qdatagram\",\n    \"qconnection\",\n    \"dquic\",\n    \"h3-shim\",\n    \"qtraversal\",\n    \"qresolve\",\n]\ndefault-members = [\n    \"qmacro\",\n    \"qbase\",\n    \"qevent\",\n    \"qrecovery\",\n    \"qcongestion\",\n    \"qinterface\",\n    \"qprotocol\",\n    \"qconnection\",\n    \"dquic\",\n    \"h3-shim\",\n    \"qtraversal\",\n]\n\n[workspace.package]\nversion = \"0.5.0\"\nedition = \"2024\"\nreadme = \"README.md\"\nrepository = \"https://github.com/genmeta/dquic\"\nlicense = \"Apache-2.0\"\nkeywords = [\"async\", \"quic\", \"http3\"]\ncategories = [\"network-programming\", \"asynchronous\"]\nrust-version = \"1.88.0\"\n\n[workspace.dependencies]\narc-swap = \"1\"\nasync-trait = \"0.1.88\"\nbitflags = \"2\"\nbon = \"3\"\nbytes = \"1\"\ncfg-if = \"1\"\ndashmap = \"6\"\nderive_builder = \"0.20\"\nderive_more = \"2\"\nenum_dispatch = \"0.3\"\nfutures = \"0.3\"\ngetset = \"0.1\"\nnetdev = \"0.42\"\nnom = \"8\"\nnetwatcher = \"0.4\"\npin-project-lite = \"0.2\"\nrand = \"0.10\"\nring = \"0.17\"\nrustls = { version = \"0.23\", default-features = false, features = [\"std\"] }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nserde_with = \"3\"\nsmallvec = { version = \"1\", features = [\n    \"union\",\n    \"const_generics\",\n    \"const_new\",\n] }\nsocket2 = { version = \"0.6\", features = [\"all\"] }\nsnafu = \"0.8\"\nthiserror = \"2\"\ntokio = { version = \"1\" }\ntokio-util = { version = \"0.7\" }\ntracing = \"0.1\"\nx509-parser = \"0.18\"\nurl = \"2.5.7\"\n\n# h3 for h3-shim only , windows-sys, nix and libc for qudp only\n# they are not the default members of the workspace\n# windows-sys = \"?\"\n# libc = \"0.2\"\n# nix = \"?\"\n\n# dev-dependencies, for examples\nclap = { version = \"4\", features = [\"derive\"] 
}\nh3 = \"0.0.8\"\nh3-datagram = \"0.0.2\"\nhttp = \"1\"\nindicatif = { version = \"0.18\", features = [\"tokio\"] }\nparking_lot = \"0.12\"\npostcard = { version = \"1\", features = [\"use-std\"] }\nrustls-native-certs = \"0.8\"\ntracing-subscriber = \"0.3\"\ntracing-appender = \"0.2\"\n\n# members\nqmacro = { path = \"./qmacro\", version = \"0.5.0\" }\nqbase = { path = \"./qbase\", version = \"0.5.0\" }\nqevent = { path = \"./qevent\", version = \"0.5.0\" }\nqudp = { path = \"./qudp\", version = \"0.5.0\" }\nqinterface = { path = \"./qinterface\", version = \"0.5.0\" }\nqdatagram = { path = \"./qdatagram\", version = \"0.5.0\" }\nqresolve = { path = \"./qresolve\", version = \"0.5.0\" }\nqrecovery = { path = \"./qrecovery\", version = \"0.5.0\" }\nqtraversal = { path = \"./qtraversal\", version = \"0.5.0\" }\nqcongestion = { path = \"./qcongestion\", version = \"0.5.0\" }\nqconnection = { path = \"./qconnection\", version = \"0.5.0\" }\ndquic = { path = \"./dquic\", version = \"0.5.0\" }\nh3-shim = { path = \"./h3-shim\", version = \"0.5.0\" }\n\n\n[profile.bench]\ndebug = true\n\n[profile.release]\ndebug = true\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "README.md",
    "content": "# dquic\n\n[![License: Apache-2.0](https://img.shields.io/github/license/genmeta/dquic)](https://www.apache.org/licenses/LICENSE-2.0)\n[![Build Status](https://img.shields.io/github/actions/workflow/status/genmeta/dquic/rust.yml)](https://github.com/genmeta/dquic/actions/workflows/rust.yml)\n[![codecov](https://codecov.io/gh/genmeta/dquic/graph/badge.svg)](https://codecov.io/gh/genmeta/dquic)\n[![crates.io](https://img.shields.io/crates/v/dquic.svg)](https://crates.io/crates/dquic)\n[![Documentation](https://docs.rs/dquic/badge.svg)](https://docs.rs/dquic/)\n[![Dependencies](https://img.shields.io/deps-rs/repo/github/genmeta/dquic)](https://github.com/genmeta/dquic/network/dependencies)\n![MSRV](https://img.shields.io/crates/msrv/dquic)\n\nEnglish | [中文](README_CN.md)\n\nThe QUIC protocol is an important infrastructure for the next generation Internet, and `dquic` is a native asynchronous Rust implementation of the QUIC protocol, an efficient and scalable [RFC 9000][1] implementation with excellent engineering quality.\n`dquic` not only implements the standard QUIC protocol but also includes additional extensions such as [RFC 9221 (Unreliable Datagram Extension)][3] and [qlog (QUIC event logging)][2].\n\nAs widely recognized, QUIC possesses numerous advanced features and unparalleled security, making it highly suitable for applications in:\n\n**High-performance data transmission:**\n\n- Achieves 0-RTT connection establishment to minimize latency.\n- Utilizes multiplexed streams to eliminate head-of-line blocking and improve throughput.\n- Multi-path transmission to improve transmission capacity.\n- Efficient transmission control algorithms such as BBR ensure low latency and high bandwidth utilization.\n\n**Data privacy and security:**\n\n- Integrates TLS 1.3 encryption by default for end-to-end security.\n- Implements forward-secure keys and authenticated packet headers to resist tampering.\n\n**IoT and edge computing:**\n\n- Supports connection 
migration to maintain sessions across network changes (e.g., Wi-Fi to cellular).\n- Enables lightweight communication with unreliable datagrams (RFC 9221) for real-time IoT scenarios.\n\nThese characteristics position QUIC as a transformative protocol for modern networks, combining performance optimizations with robust cryptographic guarantees.\n\n## Design\n\nThe QUIC protocol is a rather complex, IO-intensive protocol, which makes it an excellent fit for asynchronous programming.\nThe basic events in asynchronous IO are read, write, and timers. Throughout a QUIC implementation, however, the internal events are intricate and can be bewildering.\nIf you examine the protocol carefully, certain structures become evident, revealing that the core of the QUIC protocol is driven by layers of underlying IO events that progressively shape application-layer behavior.\nFor example, when a stream's received data becomes contiguous, that is an event which wakes the corresponding application-layer task to read;\nsimilarly, when the Initial data exchange completes and the Handshake keys are obtained, that is another event which wakes the task that processes Handshake packets.\nThese events illustrate the classic Reactor pattern.\n`dquic` refines and encapsulates these various internal Reactors of QUIC, making each module more independent, clarifying the cooperation between the system's modules, and thereby making the overall design more user-friendly.\n\nIt is noticeable that the QUIC protocol has multiple layers. 
In the transport layer, there are many functions such as opening new connections, receiving, sending, reading, writing, and accepting new connections, most of which are asynchronous.\nHere, we call these functions functors, and each layer has its own functors.\nWith these layers in place, it becomes clear that the `Accept Functor` and the `Read Functor`, or the `Write Functor`, do not belong to the same layer, which is quite interesting.\n\n![image](https://github.com/genmeta/dquic/blob/main/images/arch.png?raw=true)\n\n## Overview\n\n- **qbase**: Core structures of the QUIC protocol, including variable integer encoding (VarInt), connection ID management, stream IDs, various frame and packet type definitions, and asynchronous keys.\n- **qrecovery**: The reliable transport part of QUIC, encompassing the state machine evolution of the sender/receiver, and the internal logic interaction between the application layer and the transport layer.\n- **qcongestion**: Congestion control in QUIC, which abstracts a unified congestion control interface and implements BBRv1. In the future, it will also implement more transport control algorithms such as Cubic.\n- **qinterface**: QUIC's packet routing and the definition of the underlying I/O interface (`QuicIO`), enabling dquic to run in various environments. Contains an optional qudp-based `QuicIO` implementation.\n- **qdatagram**: The extension for unreliable datagram transmission based on QUIC, which offers transmission control mechanisms and enhanced security compared to directly sending unreliable datagrams over UDP. See [RFC 9221][3].\n- **qconnection**: Encapsulation of QUIC connections, linking the necessary components and tasks within a QUIC connection to ensure smooth operation.\n- **dquic**: The top-level encapsulation of the QUIC protocol, including interfaces for both the QUIC client and server.\n- **qudp**: High-performance UDP encapsulation for QUIC. 
Ordinary UDP incurs a system call for each packet sent or received, resulting in poor performance; qudp uses techniques such as GSO and GRO to optimize UDP performance.\n- **qevent**: The implementation of [qlog][2] supports logging internal activities of individual QUIC connections in JSON format, maintains compatibility with qlog 3, and enables visualization analysis through [qvis][4]. However, it is important to note that enabling qlog can significantly impact performance despite its utility in troubleshooting.\n\n![image](https://github.com/genmeta/dquic/blob/main/images/qvis.png?raw=true)\n\n## Usage\n\n#### Demos\n\nRun an h3 server:\n\n```shell\ncargo run --example h3-server --package h3-shim -- --dir ./h3-shim\n```\n\nSend an h3 request:\n\n```shell\ncargo run --example h3-client --package h3-shim -- https://localhost:4433/examples/h3-server.rs\n```\n\nFor more complete examples, please refer to the `examples` folders under the `h3-shim` and `dquic` folders.\n\n#### API\n\n`dquic` provides user-friendly interfaces for creating client and server connections, while also supporting additional features that meet modern network requirements.\n\nIn addition to binding an IP address and port, `dquic` can also bind a network interface, dynamically adapting to actual address changes, which gives dquic good mobility.\n\nThe QUIC client not only provides configuration options specified by the QUIC protocol's Parameters and optional 0-RTT functionality, but also includes some additional advanced options. For example, the QUIC client can set its own certificate for server verification, and can also set its own Token manager to manage Tokens issued by various servers for future connections with these servers.\n\nThe QUIC client supports multipath handshaking: it can simultaneously connect to a server's IPv4 and IPv6 addresses. 
Even if some paths are unreachable, as long as one path is reachable, the connection can be established.\n\nThe following is a simple example; please refer to the documentation for more details.\n\n```rust\nuse std::path::PathBuf;\nuse std::sync::Arc;\nuse dquic::prelude::{handy::ToCertificate, *};\n\nasync fn client() -> Result<(), Box<dyn std::error::Error>> {\n    // Set up root certificate store\n    let mut roots = rustls::RootCertStore::empty();\n\n    // Load system certificates\n    roots.add_parsable_certificates(rustls_native_certs::load_native_certs().certs);\n    // Load custom certificates (can be used independently of system certificates)\n    roots.add_parsable_certificates(PathBuf::from(\"/path/to/ca.cert\").to_certificate());  // Load at runtime\n    // roots.add_parsable_certificates(include_bytes!(\"/path/to/ca.cert\").to_certificate()); // Embed at compile time\n\n    // Build the QUIC client\n    let quic_client = Arc::new(QuicClient::builder()\n        .with_root_certificates(roots)\n        .without_cert()                                      // Client certificate verification is typically not required\n        // .with_parameters(your_parameters)                 // Custom transport parameters\n        // .bind([\"iface://v4.eth0:0\", \"iface://v6.eth0:0\"]) // Bind to specific network interfaces\n        // .enable_0rtt()                                    // Enable 0-RTT\n        // .enable_sslkeylog()                               // Enable SSL key logging\n        // .with_qlog(Arc::new(handy::LegacySeqLogger::new(\n        //     PathBuf::from(\"/path/to/qlog_dir\"),\n        // )))                                               // Enable qlog for visualization with the qvis tool\n        .build());\n\n    // Connect to the server\n    let connection = quic_client.connect(\"localhost\").await?;\n\n    // Start using the QUIC connection!\n    // For more usage examples, see dquic/examples and h3-shim/examples\n\n    Ok(())\n}\n```\n\nThe QUIC 
server is represented as `QuicListeners`, which supports SNI (Server Name Indication), allowing multiple Servers to be started in one process, each with its own certificates and keys. Each server can also bind to multiple addresses, and multiple Servers can bind to the same address. Clients must correctly connect to the corresponding interface of the corresponding Server; otherwise, the connection will be rejected.\n\nQuicListeners supports verifying client identity through various methods, including the `client_name` transport parameter, the contents of client certificates, and more. QuicListeners also supports anti-port-scanning functionality, only responding after a preliminary verification of the client's identity.\n\n```rust\nuse std::path::PathBuf;\nuse dquic::prelude::*;\n\nasync fn server() -> Result<(), Box<dyn std::error::Error>> {\n    let quic_listeners = QuicListeners::builder()\n        .without_client_cert_verifier()         // Client certificate verification is typically not required\n        // .with_parameters(your_parameters)    // Custom transport parameters\n        // .enable_0rtt()                       // Enable 0-RTT for servers\n        // .enable_anti_port_scan()             // Anti-port scanning protection\n        .listen(8192)?;                         // Start listening with a backlog (similar to Unix listen)\n\n    // Add a server that clients can connect to\n    quic_listeners.add_server(\n        \"localhost\",\n        // Certificate and key files as byte arrays or paths\n        PathBuf::from(\"/path/to/server.cert\").as_path(),\n        PathBuf::from(\"/path/to/server.key\").as_path(),\n        [\n            \"192.168.1.108:4433\",   // Bind to the IPv4 address\n            \"iface://v6.eth0:4433\", // Bind to eth0's IPv6 address\n        ],\n        None, // ocsp\n    ).await?;\n\n    // Continue calling `quic_listeners.add_server()` to add more servers\n    // Call `quic_listeners.remove_server()` to remove a server\n\n    // Accept 
trusted new connections\n    while let Ok((connection, server_name, pathway, link)) = quic_listeners.accept().await {\n        // Handle the incoming QUIC connection!\n        // You can refer to examples in dquic/examples and h3-shim/examples\n    }\n\n    Ok(())\n}\n```\n\nThere is an asynchronous interface for creating unidirectional or bidirectional QUIC streams from a QUIC Connection, or for listening to incoming streams from the other side of a QUIC Connection. This interface is almost identical to the one in [`hyperium/h3`](https://github.com/hyperium/h3/blob/master/docs/PROPOSAL.md#5-quic-transport).\n\nFor reading and writing data from QUIC streams, the standard **`AsyncRead`** and **`AsyncWrite`** interfaces are implemented for QUIC streams, making them very convenient to use.\n\n## Performance\n\nGitHub Actions periodically runs [benchmark tests][5]. The results show that dquic, quiche, tquic and quinn all deliver excellent performance, with each excelling in different benchmark testing scenarios. It should be noted that transmission performance is also greatly related to congestion control algorithms. dquic's performance will continue to be optimized in the coming period. 
If you want higher performance, dquic provides abstract interfaces so that DPDK or XDP can replace UdpSocket!\n\n<img src=\"https://github.com/genmeta/dquic/blob/main/images/benchmark_15KB.png?raw=true\" width=33% height=33%><img src=\"https://github.com/genmeta/dquic/blob/main/images/benchmark_30KB.png?raw=true\" width=33% height=33%><img src=\"https://github.com/genmeta/dquic/blob/main/images/benchmark_2048KB.png?raw=true\" width=33% height=33%>\n\n## Contribution\n\nAll feedback and PRs are welcome, including bug reports, feature requests, documentation improvements, code refactoring, and more.\n\nIf you are unsure whether a feature or its implementation is reasonable, please first create an issue in the [issue list](https://github.com/genmeta/dquic/issues) for discussion.\nThis ensures the feature is reasonable and has a solid implementation plan.\n\n## Community\n\n- [Official Community](https://github.com/genmeta/dquic/discussions)\n- Chat group: [send an email](mailto:quic_team@genmeta.net) introducing your contribution, and we will reply with an invitation link and a QR code to join the group.\n\n[1]: https://www.rfc-editor.org/rfc/rfc9000.html\n[2]: https://datatracker.ietf.org/doc/draft-ietf-quic-qlog-quic-events/\n[3]: https://datatracker.ietf.org/doc/html/rfc9221\n[4]: https://qvis.quictools.info/#/files\n[5]: https://github.com/genmeta/dquic/actions\n"
  },
  {
    "path": "README_CN.md",
    "content": "# dquic\n\n[![License: Apache-2.0](https://img.shields.io/github/license/genmeta/dquic)](https://www.apache.org/licenses/LICENSE-2.0)\n[![Build Status](https://img.shields.io/github/actions/workflow/status/genmeta/dquic/rust.yml)](https://github.com/genmeta/dquic/actions/workflows/rust.yml)\n[![codecov](https://codecov.io/gh/genmeta/dquic/graph/badge.svg)](https://codecov.io/gh/genmeta/dquic)\n[![crates.io](https://img.shields.io/crates/v/dquic.svg)](https://crates.io/crates/dquic)\n[![Documentation](https://docs.rs/dquic/badge.svg)](https://docs.rs/dquic/)\n[![Dependencies](https://img.shields.io/deps-rs/repo/github/genmeta/dquic)](https://github.com/genmeta/dquic/network/dependencies)\n![MSRV](https://img.shields.io/crates/msrv/dquic)\n\n[English](README.md) | 中文\n\nQUIC协议是下一代互联网重要的基础设施，而`dquic`则是一个原生异步Rust的QUIC协议实现，一个高效的、可扩展的[RFC 9000][1]实现，同时工程质量优良。\n`dquic`不仅实现了标准QUIC协议，还额外实现了[RFC 9221 (Unreliable Datagram Extension)][3]、[qlog (QUIC event logging)][2]等扩展。\n\n众所周知，QUIC拥有许多优良特性，以及极致的安全性，十分适合在高性能传输、数据隐私安全、物联网领域推广使用:\n\n**高性能数据传输：**\n\n- 0-RTT握手，最小化建连时延\n- 流的多路复用，消除了头端阻塞，提升吞吐率\n- 多路径传输，提升传输能力\n- BBR等高效的传输控制算法，保证低时延、高带宽利用率\n\n**数据隐私安全：**\n\n- 默认集成TLS 1.3端到端加密\n- 实现前向安全密钥和经过身份验证的数据包头，以抵御篡改。\n\n**IoT和边缘计算：**\n\n- 支持连接迁移，以便在网络变化（例如从Wi-Fi切换到蜂窝网络）时保持会话。\n- 实现轻量级通信，支持不可靠数据报（RFC 9221），适用于实时物联网场景。\n\n## 设计原则\n\nQUIC协议可谓一个相当复杂的、IO密集型的协议，因此正是适合异步大显身手的地方。异步IO中最基本的事件有数据可读、可写，以及定时器，但纵观整个QUIC协议实现，内部的事件错综复杂、令人眼花缭乱。然而，仔细探查之下还是能发现一些结构，会发现QUIC协议核心是由一层层底层IO事件逐步向上驱动应用层行为的。比如当一个流接收数据至连续，这也是一个事件，将唤醒对应的应用层来读；再比如，当Initial数据交互完毕获得Handshake密钥之后，这也是一个事件，将唤醒Handshake数据包任务的处理。以上这些事件就是经典的Reactor模式，`dquic`正是对这些QUIC内部形形色色的Reactor的拆分细化和封装，让各个模块更加独立，让整个系统各模块配合的更加清晰，进而整体设计也更加人性化。\n\n注意到QUIC协议内部，还能分出很多层。在传输层，有很多功能比如打开新连接、接收、发送、读取、写入、Accept新连接，它们大都是异步的，在这里称之为各种“算子”，且每层都有自己的算子，有了这些分层之后，就会发现，其实Accept算子和Read算子、Write算子根本不在同一层，很有意思。\n\n![image](https://github.com/genmeta/dquic/blob/main/images/arch.png)\n\n## 概览\n\n- **qbase**: QUIC协议的基础结构，包括可变整型编码VarInt、连接ID管理、流ID、各种帧以及包类型定义、异步密钥等\n- 
**qrecovery**: QUIC的可靠传输部分，包括发送端/接收端的状态机演变、应用层与传输层的内部逻辑交互等\n- **qcongestion**: QUIC的拥塞控制，抽象了统一的拥塞控制接口，并实现了BBRv1，未来还会实现Cubic、ETC等更多的传输控制算法\n- **qinterface**: QUIC的数据包路由和对底层I/O接口(`QuicIO`)的定义，令dquic可以运行在各种环境。内含一个可选的基于qudp的`QuicIO`实现\n- **qconnection**： QUIC连接封装，将QUIC连接内部所需的各组件、任务串联起来，最终能够完美运行\n- **dquic**: QUIC协议的顶层封装，包括QUIC客户端和服务端2部分的接口\n- **qudp**： QUIC的高性能UDP封装，使用GSO、GRO等手段极致优化UDP的性能\n- **qdatagram**: 基于QUIC的不可靠数据报传输的扩展，相比于直接用UDP发送不可靠数据报，该扩展拥有QUIC的传输控制和极致安全性。详情参考[RFC 9221][3]\n- **qevent**: [qlog][2]的实现，支持以json形式记录单个quic连接内部活动，兼容qlog 3，支持[qvis][4]可视化分析。请注意，开启qlog虽有助于分析问题，但相当影响性能\n\n![image](https://github.com/genmeta/dquic/blob/main/images/qvis.png?raw=true)\n\n## 使用方式\n\n#### 样例演示\n\n本仓库提供了三组样例：\n\n- `echo-client`和`echo-server`: 位于`dquic/examples/`文件夹下，展示了dquic的基本使用方法。\n- `http-client`和`http-server`: 位于`dquic/examples/`文件夹下，展示了在dquic上运行HTTP/0.9协议。\n- `h3-client`和`h3-server`: 位于`h3-shim/examples/`文件夹下，展示了在dquic上运行HTTP/3协议。\n\n以H3为例，运行一个H3服务器:\n\n```shell\ncargo run --example h3-server --package h3-shim -- --dir ./h3-shim\n```\n\n发起一个H3请求:\n\n```shell\ncargo run --example h3-client --package h3-shim -- https://localhost:4433/examples/h3-server.rs\n```\n\n#### API简介\n\n`dquic`提供了人性化的接口创建客户端和服务端的连接，同时还支持一些符合现代网络需求的附加功能设置。\n\n除了可以绑定到ip地址+端口，`dquic`还支持绑定到网络接口上，以动态地适应实际地址变化，这使得`dquic`拥有了良好的移动性。\n\nQUIC客户端不仅提供了QUIC协议所规定的Parameters选项配置，可选的0RTT功能，还有一些额外的高级选项，比如QUIC客户端可设置自己的证书以供服务端验证，也可设置自己的Token管理器，管理着各服务器颁发的Token，以便未来和这些服务器再次连接时用的上。\n\nQUIC客户端支持多路径握手，即同时尝试连接到服务器的IPv4和IPv6地址，即使某些路径不可达，但只要有一条路径能够联通，连接就可以建立。如果对端的实现同样是dquic，则还支持多路径传输。\n\n以下为简单示例，更多细节请参阅文档。\n\n```rust\nuse std::path::PathBuf;\nuse std::sync::Arc;\nuse dquic::prelude::{handy::ToCertificate, *};\n\nasync fn client() -> Result<(), Box<dyn std::error::Error>> {\n    // 设置根证书存储\n    let mut roots = rustls::RootCertStore::empty();\n\n    // 加载系统证书\n    roots.add_parsable_certificates(rustls_native_certs::load_native_certs().certs);\n\n    // 加载自定义证书（可与系统证书独立使用）\n    
roots.add_parsable_certificates(PathBuf::from(\"path/to/your/cert.pem\").to_certificate()); // 运行时加载\n    // roots.add_parsable_certificates(include_bytes!(\"path/to/your/cert.pem\").to_certificate()); // 编译时嵌入\n\n    // 构建QUIC客户端\n    let quic_client = Arc::new(QuicClient::builder()\n        .with_root_certificates(roots)\n        .without_cert()                                      // 通常不需要客户端证书验证\n        // .with_parameters(your_parameters)                 // 自定义传输参数\n        // .bind([\"iface://v4.eth0:0\", \"iface://v6.eth0:0\"]) // 绑定到指定网络接口eth0的IPv4和IPv6地址\n        // .enable_0rtt()                                    // 启用0-RTT\n        // .enable_sslkeylog()                               // 启用SSL密钥日志\n        // .with_qlog(Arc::new(handy::LegacySeqLogger::new(\n        //     PathBuf::from(\"/path/to/qlog_dir\"),\n        // )))                                               // 启用qlog，可用qvis工具可视化\n        .build());\n\n    // 连接到服务器\n    let connection = quic_client.connect(\"localhost\").await?;\n\n    // 开始使用QUIC连接！\n    // 更多使用示例请参考 dquic/examples 和 h3-shim/examples\n\n    Ok(())\n}\n```\n\nQUIC服务端表现为`QuicListeners`，支持SNI（Server Name Indication），在一个进程启动多个Server，分别有自己的证书和密钥，每个服务端又可以绑定到多个地址上，支持多个Server绑定同一个地址。Client必须正确连接到对应的Server的对应接口上，否则连接会被自动拒绝。\n\nQuicListeners支持通过多种方法验证客户端的身份，包括通过`client_name`传输参数，验证客户端证书的内容等。QuicListeners还支持抗端口扫描功能，只有在初步验证客户端的身份后才会做出响应。\n\n```rust\n// 创建QUIC监听器（每个程序只能有一个实例）\nuse std::path::PathBuf;\nuse dquic::prelude::*;\n\nasync fn server() -> Result<(), Box<dyn std::error::Error>> {\n    let quic_listeners = QuicListeners::builder()\n        .without_client_cert_verifier()         // 通常不需要客户端证书验证\n        // .with_parameters(your_parameters)    // 自定义传输参数\n        // .enable_0rtt()                       // 为服务器启用0-RTT\n        // .enable_anti_port_scan()             // 抗端口扫描保护\n        .listen(8192)?;                         // 开始监听，设置积压队列（类似Unix listen）\n\n    // 添加可连接的服务器\n    quic_listeners.add_server(\n        
\"localhost\",\n        // 证书和密钥文件的字节数组或路径\n        PathBuf::from(\"/path/to/server.crt\").as_path(),\n        PathBuf::from(\"/path/to/server.key\").as_path(),\n        [\n            \"192.168.1.106:4433\",   // 绑定到此IPv4地址\n            \"iface://v6.eth0:4433\", // 绑定到eth0的IPv6地址\n        ],\n        None, // ocsp\n    ).await?;\n\n    // 继续调用 `quic_listeners.add_server()` 来添加更多Server\n    // 调用 `quic_listeners.remove_server()` 来移除一个Server\n\n    // 接受可信的新连接\n    while let Ok((connection, server_name, pathway, link)) = quic_listeners.accept().await {\n        // 处理传入的QUIC连接！\n        // 可以参考 dquic/examples 和 h3-shim/examples 中的示例\n    }\n\n    Ok(())\n}\n```\n\n关于如何从QUIC Connection中创建单向QUIC流，或者双向QUIC流，抑或是从QUIC Connection监听来自对方的流，都有一套异步的接口，这套接口几乎与[`hyperium/h3`](https://github.com/hyperium/h3/blob/master/docs/PROPOSAL.md#5-quic-transport)的接口相同。\n\n至于如何从QUIC流中读写数据，则为QUIC流实现了标准的 **`AsyncRead`** 、 **`AsyncWrite`** 接口，可以很方便地使用。\n\n## 性能\n\nGitHub Actions会定期运行[基准测试][5]，效果如下。dquic和quiche、tquic、quinn都具备优良性能，在三种基准测试场景下互有千秋。须知传输性能跟传输控制算法也有很大关系，dquic的性能在未来一段时间还会持续优化，如果想获得更高性能，dquic提供了抽象接口，可使用DPDK或者XDP代替UdpSocket！\n\n<img src=\"https://github.com/genmeta/dquic/blob/main/images/benchmark_15KB.png?raw=true\" width=33% height=33%><img src=\"https://github.com/genmeta/dquic/blob/main/images/benchmark_30KB.png?raw=true\" width=33% height=33%><img src=\"https://github.com/genmeta/dquic/blob/main/images/benchmark_2048KB.png?raw=true\" width=33% height=33%>\n\n## 贡献\n\n欢迎所有反馈和PR，包括bug反馈、功能请求、文档修缮、代码重构等。\n\n如果不确定一个功能或者其实现是否合理，请首先在[issue列表](https://github.com/genmeta/dquic/issues)中创建一个issue，大家一起讨论，以确保功能是合理的，并有一个良好的实现方案。\n\n## 社区交流\n\n- [用户论坛](https://github.com/genmeta/dquic/discussions)\n- 聊天群：[发送邮件](mailto:quic_team@genmeta.net)介绍一下您的贡献，我们将邮件回复您加群链接及群二维码。\n\n[1]: https://www.rfc-editor.org/rfc/rfc9000.html\n[2]: https://datatracker.ietf.org/doc/draft-ietf-quic-qlog-quic-events/\n[3]: https://datatracker.ietf.org/doc/html/rfc9221\n[4]: https://qvis.quictools.info/#/files\n[5]: 
https://github.com/genmeta/dquic/actions\n"
  },
  {
    "path": "SECURITY.md",
    "content": "# Security Policy\n\n## Supported Versions\n\nUse this section to tell people about which versions of your project are\ncurrently being supported with security updates.\n\n| Version | Supported          |\n| ------- | ------------------ |\n| 0.5.x   | :white_check_mark: |\n\n## Reporting a Vulnerability\n\nUse this section to tell people how to report a vulnerability.\n\nTell them where to go, how often they can expect to get an update on a\nreported vulnerability, what to expect if the vulnerability is accepted or\ndeclined, etc.\n"
  },
  {
    "path": "benchmark/launch.py",
    "content": "#!/usr/bin/env python3\n\n\nimport os\nimport subprocess\nimport re\nimport json\nimport logging\nimport shutil\nimport argparse\n\n\nclass ServerRunner:\n    name: str\n    launch_server: list[str]\n    listen_port: int\n\n    def __init__(self, impl_name: str, launch_server: list[str], listen_port: int):\n        self.name = impl_name\n        self.listen_port = listen_port\n        self.launch_server = launch_server\n\n    def run(self, log) -> subprocess.Popen:\n        # 在后台运行server\n        return subprocess.Popen(\n            self.launch_server,\n            cwd=rand_files.path,\n            stdout=log,\n            stderr=log,\n            env={**os.environ, \"RUST_LOG\": \"off\"}\n        )\n\n\nclass Result:\n    success: int\n    duration: float\n    qps: float\n\n    def __init__(self, success: int, duration: float):\n        self.success, self.duration = success, duration\n        self.qps = success / duration if duration > 0 else 0\n\n    def __str__(self):\n        return f\"success={self.success}, duration={self.duration}, qps={self.qps}\"\n\n    def __repr__(self):\n        return self.__str__()\n\n    @staticmethod\n    def average(results: list['Result']) -> 'Result':\n        total_success, total_duration = 0, 0\n        total_success = sum(result.success for result in results)\n        total_duration = sum(result.duration for result in results)\n        return Result(total_success, total_duration)\n\n\nroot = os.path.join(os.path.dirname(__file__))\n\n\nclass RandomFiles:\n    path = os.path.join(root, \"rand_files\")\n\n    def __init__(self):\n        if not os.path.exists(self.path):\n            os.makedirs(self.path)\n\n    def gen(self, file_size: int) -> str:\n        file_name = f\"rand_file_{file_size}.bin\"\n        logging.info(f\"Generating {file_name}...\")\n        file_path = os.path.join(self.path, file_name)\n        if not os.path.exists(file_path):\n            with open(file_path, \"wb\") as f:\n             
   f.write(os.urandom(int(file_size) * 1024))\n        return file_name\n\n\nclass Certs:\n    path = os.path.join(root, \"certs\")\n\n    root_cert = os.path.join(path, \"root_cert.pem\")\n    root_key = os.path.join(path, \"root_key.pem\")\n    server_cert = os.path.join(path, \"server_cert.pem\")\n    server_key = os.path.join(path, \"server_key.pem\")\n    server_csr = os.path.join(path, \"server_csr.pem\")\n    server_ext = os.path.join(path, \"server.ext\")\n    server_cert_der = os.path.join(path, \"server_cert.der\")\n    server_key_der = os.path.join(path, \"server_key.der\")\n\n    def __init__(self):\n        pass\n\n    def gen(self):\n        if not os.path.exists(self.path):\n            logging.info(\"Generating certs...\")\n            os.makedirs(self.path)\n\n            # CA\n            subprocess.run(\n                [\"openssl\", \"ecparam\", \"-name\", \"prime256v1\",\n                    \"-genkey\", \"-out\", self.root_key], check=True)\n            subprocess.run(\n                [\"openssl\", \"req\", \"-new\", \"-x509\", \"-key\", self.root_key, \"-out\", self.root_cert,\n                 \"-days\", \"3650\", \"-subj\", \"/CN=localhost\", \"-addext\", \"subjectAltName=DNS:localhost\"], check=True)\n            # Server\n\n            subprocess.run(\n                [\"openssl\", \"ecparam\", \"-name\", \"prime256v1\", \"-genkey\", \"-out\", self.server_key], check=True)\n            subprocess.run(\n                [\"openssl\", \"req\", \"-new\", \"-key\", self.server_key, \"-out\",\n                 self.server_csr,  \"-subj\", \"/CN=localhost\", \"-addext\", \"subjectAltName=DNS:localhost\"], check=True)\n            # use server ext to add subjectAltName, openssl binary on macos CI doesnot support `-copy-extensions copy` parameter\n            with open(self.server_ext, \"w\") as f:\n                f.write(\"subjectAltName=DNS:localhost\\n\")\n                f.flush()\n            subprocess.run(\n                [\"openssl\", 
\"x509\", \"-req\", \"-in\", self.server_csr, \"-CA\", self.root_cert,\n                 \"-CAkey\", self.root_key, \"-CAcreateserial\", \"-out\", self.server_cert,\n                 \"-days\", \"365\", \"-extfile\", self.server_ext], check=True)\n            # Convert pem to der\n            subprocess.run(\n                [\"openssl\", \"x509\", \"-in\", self.server_cert, \"-outform\", \"der\",\n                 \"-out\", self.server_cert_der], check=True)\n            subprocess.run(\n                [\"openssl\", \"ec\", \"-in\", self.server_key, \"-outform\", \"der\",\n                 \"-out\", self.server_key_der], check=True)\n\n\nrand_files = RandomFiles()\necc_certs = Certs()\n\nquic_go_dir = os.path.join(root, \"go-quic-demo\")\ndquic_dir = os.path.join(root, \"..\")\ntquic_dir = os.path.join(root, \"tquic\")\nquinn_dir = os.path.join(root, \"h3\")\nquiche_dir = os.path.join(root, \"quiche\")\n\n\ndef git_clone(owner: str, repo: str, branch: str) -> None:\n    if not os.path.exists(os.path.join(root, repo)):\n        logging.info(f\"Cloning {owner}/{repo}...\")\n        subprocess.run(\n            [\"git\", \"clone\", \"--recursive\", \"--branch\", branch,\n             f\"https://github.com/{owner}/{repo}\"],\n            cwd=root,\n        )\n\n\ndef go_quic_runner() -> ServerRunner:\n    logging.info(\"Building quic-go server...\")\n\n    git_clone(\"eareimu\", \"go-quic-demo\", \"main\")\n\n    # 编译\n    subprocess.run(\n        [\"go\", \"get\", \"example/quic-server\",],\n        cwd=quic_go_dir,\n        check=True\n    )\n    subprocess.run(\n        [\"go\", \"build\", \"-ldflags=-s -w\", \"-trimpath\", \"-o\", \"quic_server\"],\n        cwd=quic_go_dir,\n        check=True\n    )\n\n    binary = os.path.join(quic_go_dir, \"quic_server\")\n    launch = [\n        binary,\n        \"-c\", ecc_certs.server_cert,\n        \"-k\", ecc_certs.server_key,\n        \"-a\", \"[::1]:4430\",\n    ]\n\n    return ServerRunner('quic-go', launch, 
4430)\n\n\ndef dquic_runner() -> ServerRunner:\n    logging.info(\"Building dquic server...\")\n\n    # git_clone(\"genmeta\", \"dquic\", \"main\")\n\n    # 编译\n    subprocess.run(\n        [\"cargo\", \"build\", \"--release\", \"--package\",\n            \"h3-shim\", \"--example\", \"h3-server\"],\n        cwd=dquic_dir,\n        check=True\n    )\n\n    launch = [\n        os.path.join(dquic_dir,\n                     \"target\", \"release\", \"examples\", \"h3-server\"),\n        \"-c\", ecc_certs.server_cert,\n        \"-k\", ecc_certs.server_key,\n        \"-b\", \"4096\",  # 设置backlog\n        \"-l\", \"[::1]:4431\"\n    ]\n\n    return ServerRunner('dquic', launch, 4431)\n\n\ndef dquic_multi_path_runner() -> ServerRunner:\n    logging.info(\"Building dquic server...\")\n\n    # git_clone(\"genmeta\", \"dquic\", \"main\")\n\n    # 编译\n    subprocess.run(\n        [\"cargo\", \"build\", \"--release\", \"--package\",\n            \"h3-shim\", \"--example\", \"h3-server\"],\n        cwd=dquic_dir,\n        check=True\n    )\n\n    launch = [\n        os.path.join(dquic_dir,\n                     \"target\", \"release\", \"examples\", \"h3-server\"),\n        \"-c\", ecc_certs.server_cert,\n        \"-k\", ecc_certs.server_key,\n        \"-b\", \"4096\",  # 设置backlog\n        \"-l\", \"[::1]:4435\",\n        \"-l\", \"127.0.0.1:4435\"\n    ]\n\n    return ServerRunner('dquic-multi-path', launch, 4435)\n\n\ndef tquic_runner() -> ServerRunner:\n    logging.info(\"Building tquic server...\")\n\n    git_clone(\"Tencent\", \"tquic\", \"v1.6.0\")\n\n    subprocess.run(\n        [\"cargo\", \"build\", \"--release\", \"--package\",\n            \"tquic_tools\", \"--bin\", \"tquic_server\"],\n        cwd=tquic_dir,\n        check=True\n    )\n\n    launch = [\n        os.path.join(tquic_dir, \"target\", \"release\", \"tquic_server\"),\n        \"-c\", ecc_certs.server_cert,\n        \"-k\", ecc_certs.server_key,\n        \"-l\", \"[::1]:4432\",\n        \"--log-level\", 
\"OFF\",\n    ]\n\n    return ServerRunner('tquic', launch, 4432)\n\n\ndef quinn_runner() -> ServerRunner:\n    logging.info(\"Building quinn server...\")\n\n    git_clone(\"hyperium\", \"h3\", \"h3-quinn-v0.0.9\")\n\n    subprocess.run(\n        [\"cargo\", \"build\", \"--release\", \"--example\", \"server\"],\n        cwd=quinn_dir,\n        check=True\n    )\n\n    launch = [\n        os.path.join(quinn_dir,\n                     \"target\", \"release\", \"examples\", \"server\"),\n        \"-c\", ecc_certs.server_cert_der,\n        \"-k\", ecc_certs.server_key_der,\n        \"-l\", \"[::1]:4433\",\n        \"-d\", \".\"  # 实际上是rand-files\n    ]\n\n    return ServerRunner('quinn', launch, 4433)\n\n\ndef cf_quiche_runner() -> ServerRunner:\n    logging.info(\"Building cloudflare-quiche server...\")\n\n    git_clone(\"cloudflare\", \"quiche\", \"0.23.4\")\n\n    subprocess.run(\n        [\"cargo\", \"build\", \"--release\", \"--bin\", \"quiche-server\"],\n        cwd=quiche_dir,\n        check=True\n    )\n\n    launch = [\n        os.path.join(quiche_dir,\n                     \"target\", \"release\", \"quiche-server\"),\n        \"--key\", ecc_certs.server_key,\n        \"--cert\", ecc_certs.server_cert,\n        \"--listen\", \"[::1]:4434\",\n        \"--root\", \".\",\n        \"--no-retry\"\n    ]\n\n    return ServerRunner('cloudflare quiche', launch, 4434)\n\n\nclass H3Client:\n    stress: int\n    requests: int\n    progress: bool\n\n    def __init__(self, stress: int = 1024*30, requests: int = 8, progress: bool = False):\n        logging.info(\"Building dquic client\")\n        subprocess.run(\n            [\n                \"cargo\", \"build\", \"--package\", \"h3-shim\",\n                \"--release\", \"--example\", \"h3-client\",\n            ],\n            check=True\n        )\n        self.stress = stress\n        self.requests = requests\n        self.progress = progress\n\n    def run_once(self, server_runner: ServerRunner, file_size: int, seq: 
int = 0) -> Result:\n        logging.info(f\"Launch {server_runner.name} server and client\")\n        # Launch the server in the background\n        log_dir = os.path.join(output_dir, \"logs\")\n        if not os.path.exists(log_dir):\n            os.makedirs(log_dir)\n\n        client_log = f\"client_{server_runner.name}_{file_size}KB_{seq}.log\"\n        client_log = open(os.path.join(log_dir, client_log), \"w+\")\n        server_log = f\"server_{server_runner.name}_{file_size}KB_{seq}.log\"\n        server_log = open(os.path.join(log_dir, server_log), \"w+\")\n\n        server = server_runner.run(server_log)\n        launch_client = [\n            \"cargo\", \"run\", \"--package\", \"h3-shim\",\n            \"--release\", \"--example\", \"h3-client\", \"--\",\n            \"--conns\", str(self.stress // file_size),\n            \"--reqs\", str(self.requests),\n            \"--roots\", ecc_certs.root_cert,\n            \"--progress\", \"true\" if self.progress else \"false\",\n            \"--ansi\", \"false\",\n            f'https://localhost:{server_runner.listen_port}/{rand_files.gen(file_size)}'\n        ]\n\n        try:\n            subprocess.run(\n                launch_client,\n                cwd=dquic_dir,\n                env={**os.environ, \"RUST_LOG\": \"counting\"},\n                stdout=client_log,\n                text=True,\n                timeout=15\n            )\n        except subprocess.TimeoutExpired:\n            server.kill()\n            server.wait()\n            logging.warning(\n                f\"Timeout expired for running {server_runner.name} {file_size}KB\")\n            client_log.close()\n            server_log.close()\n            return Result(success=0, duration=0)\n\n        server.kill()\n        server.wait()\n        client_log.seek(0)\n        output = client_log.read()\n        client_log.close()\n        server_log.close()\n\n        # Extract total_time 
and success_queries using regex\n        match = re.search(\n            r\"success_queries=(\\d+).*?total_time=(\\d+\\.?\\d*)\", output)\n        if match:\n            success_queries = int(match.group(1))\n            total_time = float(match.group(2))\n            return Result(success=success_queries, duration=total_time)\n        else:\n            logging.error(f\"Failed to parse benchmark output: {output}\")\n            return Result(success=0, duration=0)\n\n    def run_many(self, server_runner: ServerRunner, file_size: int, times: int = 3) -> list[Result]:\n        results = []\n        for seq in range(times):\n            once = self.run_once(server_runner, file_size, seq)\n            logging.info(\n                f\"Run {server_runner.name} {file_size}KB complete: {once}\")\n            results.append(once)\n        return results\n\n\ndef run(*runners: ServerRunner) -> dict[str, dict[int, list[Result]]]:\n    ecc_certs.gen()\n    client = H3Client(stress=2048*15, requests=8, progress=True)\n\n    return {\n        runner.name: {\n            file_size: client.run_many(runner, file_size, times=10)\n            for file_size in [15, 30, 2048]\n        }\n        for runner in runners\n    }\n\n\noutput_dir = os.path.join(root, \"output\")\n\n\ndef plot_results(results: dict[str, dict[str, list[Result]]]):\n    import matplotlib.pyplot as plt\n    # results: {implementation name: {file size: results of repeated runs}}\n    plot_out_dir = os.path.join(output_dir, \"plots\")\n    if not os.path.exists(plot_out_dir):\n        os.makedirs(plot_out_dir)\n\n    implementations = sorted(results.keys())\n    file_sizes = sorted(results[implementations[0]].keys())\n\n    for file_size in file_sizes:\n        plt.figure(figsize=(10, 6))\n        # bar chart of the average QPS across runs\n        qps_values = [\n            Result.average(results[impl][file_size]).qps\n            for impl in implementations\n        ]\n        bars = plt.bar(implementations, qps_values)\n\n        plt.title(f\"file size {file_size}KB\")\n
        plt.xlabel(\"Implementations\")\n        plt.ylabel(\"QPS\")\n        plt.xticks(rotation=45)\n\n        for bar in bars:\n            height = bar.get_height()\n            plt.text(bar.get_x() + bar.get_width()/2, height,\n                     round(height, 2), ha='center', va='bottom')\n\n        plt.tight_layout()\n        plt.savefig(os.path.join(plot_out_dir, f\"benchmark_{file_size}KB.png\"))\n        plt.close()\n\n        # per-implementation chart of individual runs\n        for impl in implementations:\n            plt.figure(figsize=(10, 6))\n            qps_values = [result.qps for result in results[impl][file_size]]\n            bars = plt.bar(range(len(qps_values)), qps_values)\n\n            plt.title(f\"{impl} file size {file_size}KB\")\n            plt.xlabel(\"Runs\")\n            plt.ylabel(\"QPS\")\n            plt.xticks(rotation=45)\n\n            for bar in bars:\n                height = bar.get_height()\n                plt.text(bar.get_x() + bar.get_width()/2, height,\n                         round(height, 2), ha='center', va='bottom')\n\n            plt.tight_layout()\n            plt.savefig(\n                os.path.join(plot_out_dir, f\"{impl}_{file_size}KB.png\"))\n            plt.close()\n\n\ndef save_results(results: dict[str, dict[str, list[Result]]]):\n    \"\"\"save results to json file\"\"\"\n    if not os.path.exists(output_dir):\n        os.makedirs(output_dir)\n\n    with open(os.path.join(output_dir, \"results.json\"), \"w\") as f:\n        json.dump({\n            impl: {\n                size: [r.__dict__ for r in results_list]\n                for size, results_list in sizes_results.items()\n            }\n            for impl, sizes_results in results.items()\n        }, f, indent=2)\n\n\ndef load_results(*paths: str) -> dict[str, dict[str, list[Result]]]:\n    \"\"\"load and merge results from multiple json files\"\"\"\n    merged = {}\n    for path in paths:\n        with open(path, \"r\") as f:\n            results = {\n
                impl: {\n                    size: [Result(r[\"success\"], r[\"duration\"]) for r in runs]\n                    for size, runs in sizes_results.items()\n                }\n                for impl, sizes_results in json.load(f).items()\n            }\n            for impl, sizes in results.items():\n                if impl not in merged:\n                    merged[impl] = {}\n                for size, runs in sizes.items():\n                    if size not in merged[impl]:\n                        merged[impl][size] = []\n                    merged[impl][size].extend(runs)\n    return merged\n\n\nif __name__ == \"__main__\":\n    logging.root.setLevel(logging.INFO)\n\n    parser = argparse.ArgumentParser(\n        description='QUIC implementation benchmark')\n    subparsers = parser.add_subparsers(dest='command', required=True)\n\n    runners = {\n        'quic-go': go_quic_runner,\n        'dquic': dquic_runner,\n        'tquic': tquic_runner,\n        'quinn': quinn_runner,\n        'cf-quiche': cf_quiche_runner,\n        'dquic-multi-path': dquic_multi_path_runner,\n    }\n\n    # Display runners\n    runners_parser = subparsers.add_parser(\n        'runners', help='List available implementations')\n    runners_parser.add_argument('-q', '--quiet', action='store_true',\n                                help='Only display implementation names')\n\n    # Run command\n    run_parser = subparsers.add_parser(\n        'run', help='Run benchmark and save results')\n    run_parser.add_argument('implementations', nargs='*',\n                            choices=list(runners.keys()),\n                            help='Implementations to benchmark')\n    run_parser.add_argument('--no-plot', action='store_true',\n                            help='Skip plotting results')\n\n    # Plot command\n    plot_parser = subparsers.add_parser(\n        'plot', help='Load and plot results from files')\n    plot_parser.add_argument('files', nargs='*', default=[os.path.join(output_dir,
\"results.json\")],\n                             help='Results JSON file paths')\n\n    # Clean command\n    clean_parser = subparsers.add_parser('clean', help='Clean generated files')\n    clean_parser.add_argument('--all', action='store_true',\n                              help='Also remove git cloned implementations')\n\n    args = parser.parse_args()\n\n    if args.command == 'runners':\n        if not args.quiet:\n            print(\"Available implementations:\")\n            for impl in runners.keys():\n                print(f\"- {impl}\")\n        else:\n            print(\n                '[' + ', '.join(f'\"{impl}\"' for impl in runners.keys()) + ']'\n            )\n        exit(0)\n    elif args.command == 'run':\n        selected_runners = [\n            runners[impl]() for impl in args.implementations] if args.implementations else [r() for r in runners.values()]\n        results = run(*selected_runners)\n        save_results(results)\n        if args.no_plot:\n            exit(0)\n    elif args.command == 'plot':\n        results = load_results(*args.files)\n    elif args.command == 'clean':\n        paths = [rand_files.path, ecc_certs.path, output_dir]\n        if args.all:\n            paths.extend([quic_go_dir, tquic_dir, quinn_dir, quiche_dir])\n        for path in paths:\n            if os.path.exists(path):\n                shutil.rmtree(path)\n        exit(0)\n\n    plot_results(results)\n\n    print(results)\n"
  },
  {
    "path": "codecov.yml",
    "content": "coverage:\n  status:\n    patch: off\n    project: off\n  range: \"70..100\"\n"
  },
  {
    "path": "commitlint.config.js",
    "content": "module.exports = {\n  extends: ['@commitlint/config-conventional'],\n  rules: {\n    'header-max-length': [2, 'always', 160],\n    'body-max-line-length': [2, 'always', 160],\n    'footer-max-line-length': [2, 'always', 160],\n  },\n}\n"
  },
  {
    "path": "dquic/Cargo.toml",
    "content": "[package]\nname = \"dquic\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"An IETF quic transport protocol implemented natively using async Rust\"\nreadme = \"README.md\"\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\narc-swap = { workspace = true }\nbytes = { workspace = true }\ndashmap = { workspace = true }\nderive_more = { workspace = true, features = [\"deref\"] }\nfutures = { workspace = true }\nqconnection = { workspace = true }\nqresolve = { workspace = true }\nrustls = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true }\ntokio-util = { workspace = true, features = [\"rt\"] }\ntracing = { workspace = true }\n\n[dev-dependencies]\nclap = { workspace = true }\nhttp = { workspace = true }\nindicatif = { workspace = true }\npostcard = { workspace = true }\nqevent = { workspace = true, features = [\"telemetry\"] }\nqtraversal = { workspace = true, features = [\"test-ttl\"] }\nrustls = { workspace = true, features = [\"ring\"] }\nrustls-native-certs = { workspace = true }\nserde = { workspace = true }\ntokio = { workspace = true, features = [\"fs\", \"io-std\", \"rt-multi-thread\"] }\ntokio-util = { workspace = true, features = [\"rt\"] }\ntracing-appender = { workspace = true }\nx509-parser = { workspace = true }\n\n# console-subscriber = \"0.4\"\n\n[features]\ndefault = [\"datagram\"]\ntelemetry = [\"qconnection/telemetry\"]\ndatagram = [\"qconnection/datagram\"]\n\n[dev-dependencies.tracing-subscriber]\nworkspace = true\nfeatures = [\"env-filter\", \"time\"]\n"
  },
  {
    "path": "dquic/examples/echo-client.rs",
    "content": "use std::{\n    borrow::Cow,\n    path::{Path, PathBuf},\n    sync::Arc,\n    time::Duration,\n};\n\nuse clap::Parser;\nuse dquic::prelude::{handy::ToCertificate, *};\nuse http::uri::Authority;\nuse indicatif::{MultiProgress, ProgressBar, ProgressDrawTarget, ProgressStyle};\nuse qevent::telemetry::handy::{LegacySeqLogger, NoopLogger};\nuse rustls::RootCertStore;\nuse tokio::{\n    fs,\n    io::{self, AsyncBufReadExt, AsyncWrite, AsyncWriteExt},\n    task::JoinSet,\n};\nuse tracing_subscriber::prelude::*;\n\n#[derive(Parser, Debug)]\n#[command(name = \"server\")]\nstruct Options {\n    #[arg(long, help = \"Save the qlog to a dir\", value_name = \"PATH\")]\n    qlog: Option<PathBuf>,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        default_value = \"tests/keychain/localhost/ca.cert\",\n        help = \"Certificates of CA who issues the server certificate\"\n    )]\n    roots: Vec<PathBuf>,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        help = \"files that will be sent to server, if not present, stdin will be used\"\n    )]\n    files: Vec<PathBuf>,\n    #[arg(\n        long,\n        short = 'p',\n        action = clap::ArgAction::Set,\n        help = \"enable progress bar\",\n        default_value = \"false\",\n        value_enum\n    )]\n    progress: bool,\n    #[arg(\n        long,\n        default_value = \"true\",\n        action = clap::ArgAction::Set,\n        help = \"Enable ANSI color output in logs\"\n    )]\n    ansi: bool,\n    #[arg(default_value = \"localhost:4433\", help = \"Host and port to connect to\")]\n    auth: Authority,\n}\n\n#[tokio::main]\nasync fn main() {\n    let options = Options::parse();\n    let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::stdout());\n    tracing_subscriber::registry()\n        // .with(\n        //     console_subscriber::ConsoleLayer::builder()\n        //         
.server_addr(\"127.0.0.1:6670\".parse::<SocketAddr>().unwrap())\n        //         .spawn(),\n        // )\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_writer(non_blocking)\n                .with_ansi(options.ansi)\n                .with_filter(\n                    tracing_subscriber::EnvFilter::builder()\n                        .with_default_directive(match options.progress {\n                            true => tracing::level_filters::LevelFilter::OFF.into(),\n                            false => tracing::level_filters::LevelFilter::INFO.into(),\n                        })\n                        .from_env_lossy(),\n                ),\n        )\n        .init();\n\n    if let Err(error) = run(options).await {\n        tracing::error!(?error);\n        std::process::exit(1);\n    };\n}\n\ntype Error = Box<dyn std::error::Error + Send + Sync>;\n\nasync fn run(options: Options) -> Result<(), Error> {\n    let qlogger: Arc<dyn qevent::telemetry::QLog + Send + Sync> = match options.qlog {\n        Some(dir) => Arc::new(LegacySeqLogger::new(dir)),\n        None => Arc::new(NoopLogger),\n    };\n\n    let mut roots = RootCertStore::empty();\n    roots.add_parsable_certificates(rustls_native_certs::load_native_certs().certs);\n    roots.add_parsable_certificates(options.roots.iter().flat_map(|path| path.to_certificate()));\n\n    let client = Arc::new(\n        QuicClient::builder()\n            .with_root_certificates(roots)\n            .without_cert()\n            .with_parameters(handy::client_parameters())\n            .with_qlog(qlogger)\n            .defer_idle_timeout(Duration::from_secs(60))\n            .enable_sslkeylog()\n            .enable_0rtt()\n            .build(),\n    );\n\n    match options.files {\n        files if files.is_empty() => process(&client, &options.auth, options.progress).await,\n        files => {\n            let files = files.iter().map(|p| p.as_path());\n            
send_and_verify_files(&client, options.auth, files, options.progress).await\n        }\n    }\n}\n\nasync fn send_and_verify_files(\n    client: &Arc<QuicClient>,\n    auth: Authority,\n    files: impl Iterator<Item = &Path>,\n    progress: bool,\n) -> Result<(), Error> {\n    let pbs = MultiProgress::new();\n    if !progress {\n        pbs.set_draw_target(ProgressDrawTarget::hidden());\n    }\n    let total_tx = pbs.add(new_pb(\"Total ↑\", 0));\n    let total_rx = pbs.add(new_pb(\"Total ↓\", 0));\n\n    let mut echos = JoinSet::new();\n\n    for path in files {\n        let data = fs::read(path).await?;\n        let (total_tx, total_rx) = (total_tx.clone(), total_rx.clone());\n        total_tx.inc_length(data.len() as u64);\n        total_rx.inc_length(data.len() as u64);\n\n        let client = client.clone();\n        let auth = auth.clone();\n\n        let tx_pb = pbs.insert_before(&total_tx, new_pb(\"↑\", data.len() as u64));\n        let rx_pb = pbs.insert_before(&total_rx, new_pb(\"↓\", data.len() as u64));\n        echos.spawn(async move {\n            let mut back = vec![];\n            send_and_verify_echo(&client, &auth, &data, tx_pb, rx_pb, &mut back).await?;\n            assert_eq!(back, data);\n            total_tx.inc(data.len() as u64);\n            total_rx.inc(data.len() as u64);\n            Result::<(), Error>::Ok(())\n        });\n    }\n\n    echos\n        .join_all()\n        .await\n        .into_iter()\n        .collect::<Result<(), Error>>()?;\n\n    total_tx.finish();\n    total_rx.finish();\n\n    Ok(())\n}\n\nasync fn process(client: &Arc<QuicClient>, auth: &Authority, progress: bool) -> Result<(), Error> {\n    eprintln!(\n        \"Entering interactive mode. Type a line and press Enter; the server will echo it back. 
Input `exit` or `quit` to quit.\"\n    );\n\n    let mut stdin = io::BufReader::new(io::stdin());\n    let mut stdout = io::stdout();\n\n    loop {\n        stdout.write_all(b\"\\n>\").await?;\n        stdout.flush().await?;\n\n        let mut line = String::new();\n        stdin.read_line(&mut line).await?;\n        let line = line.trim();\n\n        if line == \"exit\" || line == \"quit\" {\n            break Ok(());\n        }\n\n        let tx_pb = new_pb(\"↑\", line.len() as u64);\n        let rx_pb = new_pb(\"↓️\", line.len() as u64);\n        if !progress {\n            tx_pb.set_draw_target(ProgressDrawTarget::hidden());\n            rx_pb.set_draw_target(ProgressDrawTarget::hidden());\n        }\n        send_and_verify_echo(client, auth, line.as_bytes(), tx_pb, rx_pb, &mut stdout).await?;\n    }\n}\n\nfn new_pb(prefix: impl Into<Cow<'static, str>>, len: u64) -> ProgressBar {\n    let style = ProgressStyle::default_bar()\n        .template(\"{prefix} {wide_bar} {percent_precise}% {decimal_bytes_per_sec} ETA: {eta} {msg}\")\n        .unwrap();\n    ProgressBar::new(len).with_style(style).with_prefix(prefix)\n}\n\nasync fn send_and_verify_echo(\n    client: &Arc<QuicClient>,\n    auth: &Authority,\n    data: &[u8],\n    tx_pb: ProgressBar,\n    rx_pb: ProgressBar,\n    dst: &mut (impl AsyncWrite + Unpin),\n) -> Result<(), Error> {\n    let connection = client.connect(auth.host()).await?;\n\n    let (sid, (reader, writer)) = connection.open_bi_stream().await?.unwrap();\n    tracing::debug!(%sid, \"opened stream\");\n\n    let mut reader = rx_pb.wrap_async_read(reader);\n    let mut writer = tx_pb.wrap_async_write(writer);\n\n    tokio::try_join!(\n        async {\n            writer.write_all(data).await?;\n            writer.shutdown().await?;\n            tx_pb.finish();\n            Result::<(), Error>::Ok(())\n        },\n        async {\n            io::copy(&mut reader, dst).await?;\n            dst.flush().await?;\n            rx_pb.finish();\n         
   Result::<(), Error>::Ok(())\n        }\n    )\n    .map(|_| ())\n}\n"
  },
  {
    "path": "dquic/examples/echo-server.rs",
    "content": "use std::{path::PathBuf, sync::Arc, time::Duration};\n\nuse clap::Parser;\nuse dquic::{prelude::*, qinterface::io::IO};\nuse qevent::telemetry::handy::{LegacySeqLogger, NoopLogger};\nuse tokio::io::{self, AsyncWriteExt};\nuse tracing::info;\nuse tracing_subscriber::prelude::*;\n\n#[derive(Parser, Debug)]\n#[command(name = \"server\")]\nstruct Options {\n    #[arg(long, help = \"Save the qlog to a dir\", value_name = \"PATH\")]\n    qlog: Option<PathBuf>,\n    #[arg(\n        short,\n        long,\n        value_delimiter = ',',\n        default_values = [\"127.0.0.1:4433\", \"[::1]:4433\"],\n        help = \"What BindUris to listen for new connections\",\n    )]\n    listen: Vec<BindUri>,\n    #[arg(\n        long,\n        short,\n        default_value = \"4096\",\n        help = \"Maximum number of requests in the backlog. \\\n                If the backlog is full, new connections will be refused.\"\n    )]\n    backlog: usize,\n    #[arg(\n        long,\n        default_value = \"true\",\n        action = clap::ArgAction::Set,\n        help = \"Enable ANSI color output in logs\"\n    )]\n    ansi: bool,\n    #[command(flatten)]\n    certs: Certs,\n}\n\n#[derive(Parser, Debug)]\nstruct Certs {\n    #[arg(long, short, default_value = \"localhost\", help = \"Server name.\")]\n    server_name: String,\n    #[arg(\n        long,\n        short,\n        default_value = \"tests/keychain/localhost/server.cert\",\n        help = \"Certificate for TLS. 
If present, `--key` is mandatory.\"\n    )]\n    cert: PathBuf,\n    #[arg(\n        long,\n        short,\n        default_value = \"tests/keychain/localhost/server.key\",\n        help = \"Private key for the certificate.\"\n    )]\n    key: PathBuf,\n}\n\n#[tokio::main]\nasync fn main() {\n    let options = Options::parse();\n    let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::stdout());\n    tracing_subscriber::registry()\n        // .with(console_subscriber::spawn())\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_writer(non_blocking)\n                .with_ansi(options.ansi)\n                .with_filter(\n                    tracing_subscriber::EnvFilter::builder()\n                        .with_default_directive(tracing::level_filters::LevelFilter::INFO.into())\n                        .from_env_lossy(),\n                ),\n        )\n        .init();\n\n    if let Err(error) = run(options).await {\n        tracing::info!(?error);\n        std::process::exit(1);\n    }\n}\n\nasync fn run(options: Options) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {\n    let qlogger: Arc<dyn qevent::telemetry::QLog + Send + Sync> = match options.qlog {\n        Some(dir) => Arc::new(LegacySeqLogger::new(dir)),\n        None => Arc::new(NoopLogger),\n    };\n\n    let listeners = QuicListeners::builder()\n        .without_client_cert_verifier()\n        .with_parameters(handy::server_parameters())\n        .with_qlog(qlogger)\n        .defer_idle_timeout(Duration::from_secs(0))\n        .enable_0rtt()\n        .listen(options.backlog)?;\n    listeners\n        .add_server(\n            options.certs.server_name.as_str(),\n            options.certs.cert.as_path(),\n            options.certs.key.as_path(),\n            options.listen,\n            None,\n        )\n        .await?;\n\n    tracing::info!(\n        \"Listening on {}\",\n        listeners\n            
.get_server(options.certs.server_name.as_str())\n            .unwrap()\n            .bind_interfaces()\n            .iter()\n            .next()\n            .unwrap()\n            .1\n            .borrow()\n            .bound_addr()?\n    );\n\n    serve_echo(listeners).await?;\n    Ok(())\n}\n\nasync fn serve_echo(listeners: Arc<QuicListeners>) -> Result<(), ListenersShutdown> {\n    async fn handle_stream(mut reader: StreamReader, mut writer: StreamWriter) -> io::Result<()> {\n        io::copy(&mut reader, &mut writer).await?;\n        writer.shutdown().await?;\n        tracing::debug!(\"stream copy done\");\n\n        io::Result::Ok(())\n    }\n\n    loop {\n        let (connection, _server, pathway, ..) = listeners.accept().await?;\n        info!(source = ?pathway.remote(), \"accepted new connection\");\n        tokio::spawn(async move {\n            while let Ok((_sid, (reader, writer))) = connection.accept_bi_stream().await {\n                tokio::spawn(handle_stream(reader, writer));\n            }\n        });\n    }\n}\n"
  },
  {
    "path": "dquic/examples/http-client.rs",
    "content": "use std::{path::PathBuf, sync::Arc};\n\nuse clap::Parser;\nuse dquic::prelude::{handy::ToCertificate, *};\nuse http::{Uri, uri::Parts};\nuse qevent::telemetry::handy::{LegacySeqLogger, NoopLogger};\nuse tokio::{\n    fs,\n    io::{self, AsyncBufReadExt, AsyncWriteExt, BufReader},\n};\nuse tracing_subscriber::prelude::*;\n\n#[derive(Parser, Debug)]\n#[command(version, about, long_about = None)]\nstruct Options {\n    #[arg(long, help = \"Save the qlog to a dir\", value_name = \"PATH\")]\n    qlog: Option<PathBuf>,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        default_value = \"tests/keychain/localhost/ca.cert\",\n        help = \"Certificates of CA who issues the server certificate\"\n    )]\n    roots: Vec<PathBuf>,\n    #[arg(long, help = \"Skip verification of server certificate\")]\n    skip_verify: bool,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        default_value = \"quic\",\n        help = \"ALPNs to use for the connection\"\n    )]\n    alpns: Vec<Vec<u8>>,\n    #[arg(\n        long,\n        default_value = \"true\",\n        action = clap::ArgAction::Set,\n        help = \"Enable ANSI color output in logs\"\n    )]\n    ansi: bool,\n    #[arg(long, help = \"Save the response to a dir\", value_name = \"PATH\")]\n    save: Option<PathBuf>,\n    #[arg(\n        value_delimiter = ',',\n        default_value = \"http://localhost:4433/\",\n        help = \"Uri to request. 
If only one uri is present and path is not specified, enter process mode\"\n    )]\n    uris: Vec<Uri>,\n}\n\n#[tokio::main]\nasync fn main() {\n    let options = Options::parse();\n    let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::stdout());\n    tracing_subscriber::registry()\n        // .with(\n        //     console_subscriber::ConsoleLayer::builder()\n        //         .server_addr(\"127.0.0.1:6670\".parse::<SocketAddr>().unwrap())\n        //         .spawn(),\n        // )\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_writer(non_blocking)\n                .with_ansi(options.ansi)\n                .with_filter(\n                    tracing_subscriber::EnvFilter::builder()\n                        .with_default_directive(tracing::level_filters::LevelFilter::INFO.into())\n                        .from_env_lossy(),\n                ),\n        )\n        .init();\n\n    if let Err(error) = run(options).await {\n        tracing::error!(?error);\n        std::process::exit(1);\n    }\n}\n\ntype Error = Box<dyn std::error::Error + Send + Sync>;\n\nasync fn run(options: Options) -> Result<(), Error> {\n    if options.uris.is_empty() {\n        return Err(\"no uri specified\".into());\n    }\n\n    let qlogger: Arc<dyn qevent::telemetry::QLog + Send + Sync> = match options.qlog {\n        Some(dir) => Arc::new(LegacySeqLogger::new(dir)),\n        None => Arc::new(NoopLogger),\n    };\n\n    let client_builder = if options.skip_verify {\n        tracing::warn!(\"skip server verify\");\n        QuicClient::builder().without_verifier()\n    } else {\n        tracing::info!(\"load ca certs\");\n        let mut roots = rustls::RootCertStore::empty();\n        roots.add_parsable_certificates(rustls_native_certs::load_native_certs().certs);\n        roots\n            .add_parsable_certificates(options.roots.iter().flat_map(|path| path.to_certificate()));\n        
QuicClient::builder().with_root_certificates(roots)\n    };\n\n    let client = Arc::new(\n        client_builder\n            .with_qlog(qlogger)\n            .without_cert()\n            .with_parameters(handy::client_parameters())\n            .with_alpns(options.alpns)\n            .enable_sslkeylog()\n            .build(),\n    );\n\n    if options.uris.len() == 1 && options.uris[0].path() == \"/\" {\n        return process(&client, &options.uris[0], options.save).await;\n    } else {\n        for uri in options.uris {\n            download(&client, uri, options.save.as_ref()).await?;\n        }\n    }\n\n    Ok(())\n}\n\nasync fn process(\n    client: &Arc<QuicClient>,\n    base_uri: &Uri,\n    save: Option<PathBuf>,\n) -> Result<(), Error> {\n    let mut stdin = BufReader::new(io::stdin());\n    eprintln!(\n        \"Enter interactive mode. Input content to request (e.g: Cargo.toml), input `exit` or `quit` to quit.\"\n    );\n    loop {\n        let mut input = String::new();\n        _ = stdin.read_line(&mut input).await?;\n\n        let content = input.trim();\n        if content.is_empty() {\n            continue;\n        }\n\n        if content == \"exit\" || content == \"quit\" {\n            return Ok(());\n        }\n\n        let mut uri_parts = Parts::default();\n        uri_parts.scheme = base_uri.scheme().cloned();\n        uri_parts.authority = base_uri.authority().cloned();\n        uri_parts.path_and_query = Some(format!(\"/{content}\").parse()?);\n        download(client, Uri::from_parts(uri_parts)?, save.as_ref()).await?;\n    }\n}\n\nasync fn download(client: &Arc<QuicClient>, uri: Uri, save: Option<&PathBuf>) -> Result<(), Error> {\n    let authority = uri.authority().ok_or(\"authority must be present in uri\")?;\n\n    let file_path = uri.path().strip_prefix('/');\n    let file_path = file_path.ok_or_else(|| format!(\"invalid path `{}`\", uri.path()))?;\n\n    let connection = client.connect(authority.host()).await?;\n    let (_sid, (mut 
response, mut request)) = connection\n        .open_bi_stream()\n        .await?\n        .expect(\"very very hard to exhaust the available stream ids\");\n    request\n        .write_all(format!(\"GET /{file_path}\").as_bytes())\n        .await?;\n    request.shutdown().await?;\n\n    match save.map(|dir| dir.join(file_path)) {\n        Some(path) => {\n            io::copy(&mut response, &mut fs::File::create(&path).await?).await?;\n            tracing::info!(\"Saved response to {}\", path.display());\n        }\n        None => {\n            io::copy(&mut response, &mut io::stdout()).await?;\n        }\n    }\n\n    _ = connection.close(\"Bye bye\", 0);\n\n    Ok(())\n}\n"
  },
  {
    "path": "dquic/examples/http-server.rs",
    "content": "use std::{path::PathBuf, sync::Arc};\n\nuse clap::Parser;\nuse dquic::{prelude::*, qinterface::io::IO};\nuse tokio::{\n    fs,\n    io::{self, AsyncReadExt, AsyncWriteExt},\n};\nuse tracing_subscriber::prelude::*;\n\n#[derive(Parser, Debug)]\n#[command(name = \"server\")]\nstruct Options {\n    #[arg(\n        name = \"dir\",\n        short,\n        long,\n        help = \"Root directory of the files to serve. \\\n                If omitted, server will respond OK.\",\n        default_value = \"./\"\n    )]\n    root: PathBuf,\n    #[arg(long, help = \"Save the qlog to a dir\", value_name = \"PATH\")]\n    qlog: Option<PathBuf>,\n    #[arg(\n        short,\n        long,\n        value_delimiter = ',',\n        default_values = [\"127.0.0.1:4433\", \"[::1]:4433\"],\n        help = \"What BindUris to listen for new connections\"\n    )]\n    listen: Vec<BindUri>,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        default_value = \"quic\",\n        help = \"ALPNs to use for the connection\"\n    )]\n    alpns: Vec<Vec<u8>>,\n    #[arg(\n        long,\n        short,\n        default_value = \"4096\",\n        help = \"Maximum number of requests in the backlog. \\\n                If the backlog is full, new connections will be refused.\"\n    )]\n    backlog: usize,\n    #[arg(\n        long,\n        default_value = \"true\",\n        action = clap::ArgAction::Set,\n        help = \"Enable ANSI color output in logs\"\n    )]\n    ansi: bool,\n    #[command(flatten)]\n    certs: Certs,\n}\n\n#[derive(Parser, Debug)]\nstruct Certs {\n    #[arg(long, short, default_value = \"localhost\", help = \"Server name.\")]\n    server_name: String,\n    #[arg(\n        long,\n        short,\n        default_value = \"tests/keychain/localhost/server.cert\",\n        help = \"Certificate for TLS. 
If present, `--key` is mandatory.\"\n    )]\n    cert: PathBuf,\n    #[arg(\n        long,\n        short,\n        default_value = \"tests/keychain/localhost/server.key\",\n        help = \"Private key for the certificate.\"\n    )]\n    key: PathBuf,\n}\n\ntype Error = Box<dyn std::error::Error + Send + Sync>;\n\nfn main() {\n    let options = Options::parse();\n    let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::stdout());\n    tracing_subscriber::registry()\n        // .with(console_subscriber::spawn())\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_writer(non_blocking)\n                .with_ansi(options.ansi)\n                .with_filter(\n                    tracing_subscriber::EnvFilter::builder()\n                        .with_default_directive(tracing::level_filters::LevelFilter::INFO.into())\n                        .from_env_lossy(),\n                ),\n        )\n        .init();\n\n    let rt = tokio::runtime::Builder::new_current_thread()\n        .enable_all()\n        // default value 512 exceeds the macos ulimit\n        .max_blocking_threads(256)\n        .build()\n        .expect(\"failed to build tokio runtime\");\n\n    if let Err(error) = rt.block_on(run(options)) {\n        tracing::error!(?error);\n        std::process::exit(1);\n    }\n}\n\nasync fn run(options: Options) -> Result<(), Error> {\n    let qlogger: Arc<dyn qevent::telemetry::QLog + Send + Sync> = match options.qlog {\n        Some(dir) => Arc::new(handy::LegacySeqLogger::new(dir)),\n        None => Arc::new(handy::NoopLogger),\n    };\n\n    let listeners = QuicListeners::builder()\n        .with_qlog(qlogger)\n        .without_client_cert_verifier()\n        .with_parameters(handy::server_parameters())\n        .with_alpns(options.alpns)\n        .listen(options.backlog)?;\n    listeners\n        .add_server(\n            options.certs.server_name.as_str(),\n            options.certs.cert.as_path(),\n            options.certs.key.as_path(),\n            options.listen,\n            None,\n        )\n        .await?;\n\n    tracing::info!(\n        \"Listening on {}\",\n        listeners\n            .get_server(options.certs.server_name.as_str())\n            .unwrap()\n            .bind_interfaces()\n            .iter()\n            .next()\n            .unwrap()\n            .1\n            .borrow()\n            .bound_addr()?\n    );\n\n    loop {\n        let (connection, _server, _pathway, _link) = listeners.accept().await?;\n        tokio::spawn(serve_files(connection, options.root.clone()));\n    }\n}\n\nasync fn serve_files(connection: Connection, root: PathBuf) -> Result<(), Error> {\n    async fn serve_file(root: PathBuf, mut reader: StreamReader, mut writer: StreamWriter) -> Result<(), Error> {\n        let mut request = String::new();\n        reader.read_to_string(&mut request).await?;\n        tracing::info!(\"received request: {request}\");\n\n        // HTTP/0.9 is very simple - just a GET request with a path\n        let serve = async {\n            match request.trim().strip_prefix(\"GET /\") {\n                Some(path) => {\n                    tracing::debug!(?path, \"Received HTTP/0.9 request\");\n                    let mut file = fs::File::open(root.join(path)).await?;\n                    io::copy(&mut file, &mut writer).await.map(|_| ())\n                }\n                None => Err(io::Error::other(format!(\n                    \"Invalid HTTP/0.9 request: {request}\",\n                ))),\n            }\n        };\n\n        if let Err(error) = serve.await {\n            tracing::warn!(\"failed to serve request: {}\", error);\n        }\n\n        _ = writer.shutdown().await;\n\n        Ok(())\n    }\n\n    loop {\n        let (_sid, (reader, writer)) = connection.accept_bi_stream().await?;\n        tokio::spawn(serve_file(root.clone(), reader, writer));\n    }\n}\n"
  },
  {
    "path": "dquic/examples/traversal-client.rs",
    "content": "// use std::{io, net::SocketAddr};\n\n// use clap::Parser;\n// use dquic::{\n//     prelude::{\n//         Connection, EndpointAddr, ParameterId, QuicClient, EndpointAddr, handy::ToCertificate,\n//     },\n//     qbase::param::ClientParameters,\n//     qtraversal::iface::TraversalFactory,\n// };\n// use rustls::RootCertStore;\n// use tokio::{\n//     io::{AsyncReadExt, AsyncWriteExt},\n//     task::JoinSet,\n// };\n// use tracing::{info, warn};\n// use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};\n\n// #[derive(Parser)]\n// struct Options {\n//     #[arg(long)]\n//     bind1: SocketAddr,\n//     #[arg(long)]\n//     bind2: SocketAddr,\n//     #[arg(long)]\n//     server_outer: SocketAddr,\n//     #[arg(long)]\n//     server_agent: SocketAddr,\n//     #[arg(long, default_value = \"nat.genmeta.net:20004\")]\n//     stun_server: String,\n// }\n\n// pub type Error = Box<dyn std::error::Error + Send + Sync>;\n\n// #[tokio::main]\n// pub async fn main() -> io::Result<()> {\n//     init_logger()?;\n//     let default_panic = std::panic::take_hook();\n//     std::panic::set_hook(Box::new(move |info| {\n//         default_panic(info);\n//         info!(\"panic: {}\", info);\n//         std::process::exit(1);\n//     }));\n//     let ops = Options::parse();\n//     let server_ep = EndpointAddr::Agent {\n//         agent: ops.server_agent,\n//         outer: ops.server_outer,\n//     };\n\n//     let mut roots = RootCertStore::empty();\n//     roots.add_parsable_certificates(\n//         include_bytes!(\"../../../tests/keychain/localhost/ca.cert\").to_certificate(),\n//     );\n\n//     let stun_servers: Vec<SocketAddr> = tokio::net::lookup_host(&ops.stun_server).await?.collect();\n//     if stun_servers.is_empty() {\n//         return Err(io::Error::other(\"failed to resolve stun server\"));\n//     }\n\n//     let factory = TraversalFactory::initialize_global(stun_servers).unwrap();\n//     let client = QuicClient::builder()\n//       
  .with_root_certificates(roots)\n//         .without_cert()\n//         .enable_sslkeylog()\n//         // .with_qlog(Arc::new(DefaultSeqLogger::new(PathBuf::from(\"qlog\"))))\n//         .with_iface_factory(factory.as_ref().clone())\n//         .with_parameters(client_stream_unlimited_parameters())\n//         .bind(&[ops.bind1, ops.bind2][..])\n//         .await\n//         .build();\n\n//     let mut handle_set = JoinSet::new();\n//     for _ in 0..1 {\n//         info!(\n//             \"server ep {:?}, bind {} {}\",\n//             server_ep, ops.bind1, ops.bind2\n//         );\n//         let connection = client\n//             .connected_to(\"localhost\", [server_ep])\n//             .await\n//             .map_err(io::Error::other)?;\n\n//         const DATA: &[u8] = include_bytes!(\"./client.rs\");\n//         handle_set.spawn(async move {\n//             send_and_verify_echo(&connection, DATA).await.unwrap();\n//             // 等待打洞结束\n//             tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;\n//             warn!(\"finish one connection\");\n//         });\n//     }\n//     let _et = handle_set.join_all().await;\n//     Ok(())\n// }\n\n// async fn send_and_verify_echo(connection: &Connection, data: &[u8]) -> Result<(), Error> {\n//     let (_sid, (mut reader, mut writer)) = connection.open_bi_stream().await?.unwrap();\n//     tracing::debug!(\"stream opened\");\n\n//     let mut back = Vec::new();\n//     tokio::try_join!(\n//         async {\n//             writer.write_all(data).await?;\n//             writer.shutdown().await?;\n//             tracing::info!(\"xxxxx write done\");\n//             Result::<(), Error>::Ok(())\n//         },\n//         async {\n//             reader.read_to_end(&mut back).await?;\n//             assert_eq!(back, data);\n//             tracing::info!(\"xxxx read done\");\n//             Result::<(), Error>::Ok(())\n//         }\n//     )\n//     .map(|_| ())\n// }\n\n// fn 
client_stream_unlimited_parameters() -> ClientParameters {\n//     let mut params = ClientParameters::default();\n//     _ = params.set(ParameterId::ActiveConnectionIdLimit, 10u32);\n//     _ = params.set(ParameterId::InitialMaxData, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamDataBidiLocal, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamDataBidiRemote, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamDataUni, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamsBidi, 100u32);\n//     _ = params.set(ParameterId::InitialMaxStreamsUni, 100u32);\n//     params\n// }\n\n// pub fn init_logger() -> std::io::Result<()> {\n//     let filter = tracing_subscriber::filter::filter_fn(|metadata| {\n//         !metadata.target().contains(\"netlink_packet_route\")\n//     });\n\n//     let _ = tracing_subscriber::registry()\n//         .with(tracing_subscriber::Layer::with_filter(\n//             tracing_subscriber::fmt::layer()\n//                 .with_target(true)\n//                 .with_ansi(false)\n//                 .with_file(true)\n//                 .with_line_number(true),\n//             filter,\n//         ))\n//         .try_init();\n//     Ok(())\n// }\n\nfn main() {}\n"
  },
  {
    "path": "dquic/examples/traversal-server.rs",
    "content": "// use std::{io, net::SocketAddr, sync::Arc};\n\n// use clap::Parser;\n// use dquic::{\n//     prelude::{Connection, ParameterId, QuicListeners, StreamReader, StreamWriter},\n//     qbase::param::ServerParameters,\n//     qtraversal,\n// };\n// use qtraversal::iface::TraversalFactory;\n// use tokio::io::AsyncWriteExt;\n// use tracing::{Instrument, info, info_span};\n// use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};\n\n// #[derive(clap::Parser)]\n// struct Options {\n//     #[arg(long, default_value = \"192.168.1.4:6000\")]\n//     bind1: SocketAddr,\n//     #[arg(long, default_value = \"[2409:8a00:1850:be40:1037:3cbd:ec40:11c6]:6000\")]\n//     bind2: SocketAddr,\n//     #[arg(long, default_value = \"nat.genmeta.net:20004\")]\n//     stun_server: String,\n// }\n\n// #[tokio::main]\n// pub async fn main() -> io::Result<()> {\n//     init_logger()?;\n//     let default_panic = std::panic::take_hook();\n//     std::panic::set_hook(Box::new(move |info| {\n//         default_panic(info);\n//         info!(\"panic: {}\", info);\n//         std::process::exit(1);\n//     }));\n//     let ops = Options::parse();\n\n//     let stun_servers: Vec<SocketAddr> = tokio::net::lookup_host(&ops.stun_server).await?.collect();\n//     if stun_servers.is_empty() {\n//         return Err(io::Error::other(\"failed to resolve stun server\"));\n//     }\n\n//     let factory = TraversalFactory::initialize_global(stun_servers).unwrap();\n//     let server = QuicListeners::builder()?\n//         // .with_single_cert(\n//         //     include_bytes!(\"../../../tests/keychain/localhost/server.cert\"),\n//         //     include_bytes!(\"../../../tests/keychain/localhost/server.key\"),\n//         // )\n//         .with_iface_factory(factory.as_ref().clone())\n//         .with_parameters(server_stream_unlimited_parameters())\n//         .without_client_cert_verifier()\n//         .listen(1000);\n\n//     server\n//         .add_server(\n//            
 \"localhost\",\n//             include_bytes!(\"../../../tests/keychain/localhost/server.cert\"),\n//             include_bytes!(\"../../../tests/keychain/localhost/server.key\"),\n//             [ops.bind1],\n//             None,\n//         )\n//         .await?;\n\n//     launch(server).await?;\n\n//     Ok(())\n// }\n\n// pub fn server_stream_unlimited_parameters() -> ServerParameters {\n//     let mut params = ServerParameters::default();\n//     _ = params.set(ParameterId::ActiveConnectionIdLimit, 10u32);\n//     _ = params.set(ParameterId::InitialMaxData, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamDataBidiLocal, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamDataBidiRemote, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamDataUni, 1u32 << 20);\n//     _ = params.set(ParameterId::InitialMaxStreamsBidi, 100u32);\n//     _ = params.set(ParameterId::InitialMaxStreamsUni, 100u32);\n//     params\n// }\n\n// pub async fn launch(server: Arc<QuicListeners>) -> io::Result<()> {\n//     async fn handle_connection(conn: Arc<Connection>) -> io::Result<()> {\n//         loop {\n//             let (sid, (reader, writer)) = conn.accept_bi_stream().await?;\n//             tokio::spawn(\n//                 handle_stream(reader, writer).instrument(info_span!(\"handle_stream\",%sid)),\n//             );\n//         }\n//     }\n\n//     async fn handle_stream(mut reader: StreamReader, mut writer: StreamWriter) -> io::Result<()> {\n//         tokio::io::copy(&mut reader, &mut writer).await?;\n//         writer.shutdown().await?;\n//         tracing::info!(\"stream copy done\");\n\n//         io::Result::Ok(())\n//     }\n\n//     loop {\n//         let (connection, _name, pathway, _link) = server\n//             .accept()\n//             .await\n//             .map_err(|_e| io::Error::other(\"accept error\"))?;\n//         info!(source = ?pathway.remote(), \"accepted new connection\");\n//         
tokio::spawn(handle_connection(Arc::new(connection)));\n//     }\n// }\n\n// pub fn init_logger() -> std::io::Result<()> {\n//     let filter = tracing_subscriber::filter::filter_fn(|metadata| {\n//         !metadata.target().contains(\"netlink_packet_route\")\n//     });\n\n//     let _ = tracing_subscriber::registry()\n//         .with(tracing_subscriber::Layer::with_filter(\n//             tracing_subscriber::fmt::layer()\n//                 .with_target(true)\n//                 .with_ansi(false)\n//                 .with_file(true)\n//                 .with_line_number(true),\n//             filter,\n//         ))\n//         .try_init();\n//     Ok(())\n// }\n\nfn main() {}\n"
  },
  {
    "path": "dquic/src/cert.rs",
    "content": "use std::path::Path;\n\nuse rustls::pki_types::{CertificateDer, PrivateKeyDer, pem::PemObject};\n\npub trait ToCertificate {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>>;\n}\n\nimpl ToCertificate for Vec<CertificateDer<'static>> {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        self\n    }\n}\n\nimpl ToCertificate for &[CertificateDer<'static>] {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        self.to_vec().to_certificate()\n    }\n}\n\nimpl<const N: usize> ToCertificate for [CertificateDer<'static>; N] {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        self.to_vec().to_certificate()\n    }\n}\n\nimpl ToCertificate for CertificateDer<'static> {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        vec![self]\n    }\n}\n\nimpl ToCertificate for &Path {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        let data = std::fs::read(self).expect(\"Failed to read certificate file\");\n        if let Ok(certs) = CertificateDer::pem_slice_iter(&data).collect::<Result<Vec<_>, _>>()\n            && !certs.is_empty()\n        {\n            return certs;\n        }\n\n        vec![CertificateDer::from(data)]\n    }\n}\n\nimpl ToCertificate for &[u8] {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        if let Ok(certs) = CertificateDer::pem_slice_iter(self).collect::<Result<Vec<_>, _>>()\n            && !certs.is_empty()\n        {\n            return certs;\n        }\n\n        vec![CertificateDer::from(self.to_vec())]\n    }\n}\n\nimpl<const N: usize> ToCertificate for &[u8; N] {\n    fn to_certificate(self) -> Vec<CertificateDer<'static>> {\n        <&[u8]>::to_certificate(self)\n    }\n}\n\npub trait ToPrivateKey {\n    fn to_private_key(self) -> PrivateKeyDer<'static>;\n}\n\nimpl ToPrivateKey for PrivateKeyDer<'static> {\n    fn to_private_key(self) -> PrivateKeyDer<'static> {\n        self\n    }\n}\n\nimpl 
ToPrivateKey for &PrivateKeyDer<'static> {\n    fn to_private_key(self) -> PrivateKeyDer<'static> {\n        self.clone_key()\n    }\n}\n\nimpl ToPrivateKey for &Path {\n    fn to_private_key(self) -> PrivateKeyDer<'static> {\n        let data = std::fs::read(self).expect(\"failed to read private key file\");\n        if let Ok(key) = PrivateKeyDer::from_pem_slice(&data) {\n            return key;\n        }\n\n        PrivateKeyDer::try_from(data)\n            .expect(\"failed to parse private key file as pem or der format\")\n    }\n}\n\nimpl ToPrivateKey for &[u8] {\n    fn to_private_key(self) -> PrivateKeyDer<'static> {\n        if let Ok(key) = PrivateKeyDer::from_pem_slice(self) {\n            return key;\n        }\n\n        PrivateKeyDer::try_from(self.to_vec())\n            .expect(\"failed to parse private key file as pem or der format\")\n    }\n}\n\nimpl<const N: usize> ToPrivateKey for &[u8; N] {\n    fn to_private_key(self) -> PrivateKeyDer<'static> {\n        <&[u8]>::to_private_key(self)\n    }\n}\n"
  },
  {
    "path": "dquic/src/client.rs",
    "content": "use std::{\n    collections::HashMap,\n    io,\n    net::SocketAddr,\n    str::FromStr,\n    sync::{\n        Arc,\n        atomic::{AtomicBool, Ordering},\n    },\n    time::Duration,\n};\n\nuse dashmap::DashMap;\nuse futures::StreamExt;\nuse qbase::{net::Family, param::ClientParameters, token::TokenSink};\nuse qconnection::{\n    self,\n    qbase::net::AddrFamily,\n    qinterface::{component::location::Locations, io::IO},\n};\nuse qevent::telemetry::QLog;\nuse qinterface::{\n    BindInterface, Interface, bind_uri::BindUri, component::route::QuicRouter, device::Devices,\n    io::ProductIO, manager::InterfaceManager,\n};\nuse qresolve::Source;\nuse rustls::{\n    ConfigBuilder, WantsVerifier,\n    client::{ResolvesClientCert, WantsClientCert},\n};\nuse thiserror::Error;\n\nuse crate::{prelude::*, *};\n\ntype TlsClientConfig = rustls::ClientConfig;\ntype TlsClientConfigBuilder<T> = ConfigBuilder<TlsClientConfig, T>;\n\n/// A QUIC client for initiating connections to servers.\n///\n/// ## Creating Clients\n///\n/// Use [`QuicClient::builder`] to configure and create a client instance.\n/// Configure interfaces, TLS settings, and connection behavior before building.\n///\n/// ## Interface Management\n///\n/// - **Automatic binding**: If no interfaces are bound, the client automatically binds to system-assigned addresses\n/// - **Manual binding**: Use [`QuicClientBuilder::bind`] to bind specific interfaces\n///\n/// ## Connection Handling\n///\n/// Call [`QuicClient::connect`] to establish connections. 
The client supports:\n/// - **Automatic interface selection**: Matches interface with server endpoint address\n#[derive(Clone)]\npub struct QuicClient {\n    network: common::Network,\n    bind_ifaces: DashMap<BindUri, BindInterface>,\n    manual_bind: Arc<AtomicBool>,\n\n    // quic config(in initialize order)\n    _prefer_versions: Vec<u32>,\n    token_sink: Arc<dyn TokenSink>,\n    parameters: ClientParameters,\n    tls_config: TlsClientConfig,\n    stream_strategy_factory: Arc<dyn ProductStreamsConcurrencyController>,\n    defer_idle_timeout: Duration,\n    qlogger: Arc<dyn QLog + Send + Sync>,\n}\n\n#[derive(Debug, Error)]\npub enum ConnectServerError {\n    #[error(\"DNS lookup failed\")]\n    Dns {\n        #[from]\n        source: io::Error,\n    },\n    #[error(\"Failed to bind interface for client connection\")]\n    BindInterface {\n        #[from]\n        source: BindInterfaceError,\n    },\n}\n\n#[derive(Debug, Error)]\n#[error(\n    \"Failed to bind interface `{}` for client connection\",\n    bind_uri.as_ref().map_or(String::from(\"<no bind uri generated>\"), |bind_uri| bind_uri.to_string())\n)]\npub struct BindInterfaceError {\n    bind_uri: Option<BindUri>,\n    #[source]\n    bind_error: io::Error,\n}\n\nimpl QuicClient {\n    #[inline]\n    pub fn bind_ifaces(&self) -> HashMap<BindUri, BindInterface> {\n        self.bind_ifaces\n            .iter()\n            .map(|entry| (entry.key().clone(), entry.value().clone()))\n            .collect()\n    }\n\n    pub async fn bind(&self, bind_uri: impl Into<BindUri>) -> BindInterface {\n        let bind_interface = self.network.bind(bind_uri.into()).await;\n        self.bind_ifaces\n            .insert(bind_interface.bind_uri(), bind_interface.clone());\n        self.manual_bind.store(true, Ordering::Relaxed);\n        bind_interface\n    }\n\n    #[inline]\n    pub fn unbind(&self, bind_uri: &BindUri) -> Option<BindInterface> {\n        self.bind_ifaces.remove(bind_uri).map(|(_, iface)| iface)\n    
}\n\n    /// Creates a new QUIC connection to the specified server without any initial paths.\n    ///\n    /// This method initializes the connection state but does not start the handshake\n    /// because no network paths are established yet. You must manually add paths\n    /// using [`Connection::add_path`] to initiate communication.\n    ///\n    /// This is useful for advanced scenarios where you need fine-grained control\n    /// over which interfaces and paths are used for the connection.\n    pub fn new_connection(&self, server_name: impl Into<String>) -> Connection {\n        Connection::new_client(server_name.into(), self.token_sink.clone())\n            .with_parameters(self.parameters.clone())\n            .with_tls_config(self.tls_config.clone())\n            .with_streams_concurrency_strategy(self.stream_strategy_factory.as_ref())\n            .with_zero_rtt(self.tls_config.enable_early_data)\n            .with_iface_factory(self.network.iface_factory.clone())\n            .with_iface_manager(self.network.iface_manager.clone())\n            .with_quic_router(self.network.quic_router.clone())\n            .with_locations(self.network.locations.clone())\n            // todo\n            // .with_stun_servers()\n            .with_defer_idle_timeout(self.defer_idle_timeout)\n            .with_cids(ConnectionId::random_gen(8))\n            .with_qlog(self.qlogger.clone())\n            .run()\n    }\n\n    /// Builds a [`BindUri`] from the DNS [`Source`] and endpoint address.\n    ///\n    /// - For [`Source::Mdns`]: binds to the discovering NIC (e.g., `iface://v4.en0:0`).\n    /// - For other sources: binds to a wildcard address matching the endpoint family.\n    fn bind_uri_for(source: &Source, ep: &EndpointAddr) -> BindUri {\n        match source {\n            Source::Mdns { nic, family } => {\n                let f = match family {\n                    Family::V4 => \"v4\",\n                    Family::V6 => \"v6\",\n                };\n               
 BindUri::from_str(&format!(\"iface://{f}.{nic}:0\"))\n                    .expect(\"iface URI should be valid\")\n                    .alloc_port()\n            }\n            _ => match ep.family() {\n                Family::V4 => BindUri::from_str(\"inet://0.0.0.0:0\")\n                    .expect(\"URL should be valid\")\n                    .alloc_port(),\n                Family::V6 => BindUri::from_str(\"inet://[::]:0\")\n                    .expect(\"URL should be valid\")\n                    .alloc_port(),\n            },\n        }\n    }\n\n    /// Ensures at least one interface exists for the given endpoint.\n    async fn ensure_iface_for(&self, source: &Source, ep: &EndpointAddr) {\n        if self.manual_bind.load(Ordering::Relaxed) {\n            return;\n        }\n        if self.bind_ifaces.is_empty() {\n            let bind_uri = Self::bind_uri_for(source, ep);\n            let iface = self.network.bind(bind_uri).await;\n            self.bind_ifaces.insert(iface.bind_uri(), iface);\n        }\n    }\n\n    /// Returns matching bound interfaces or auto-binds a new one.\n    async fn select_or_bind_ifaces(\n        &self,\n        source: &Source,\n        ep: &EndpointAddr,\n    ) -> Result<Vec<(SocketAddr, Interface)>, BindInterfaceError> {\n        let iface_matches_source =\n            |iface: &Interface| match source {\n                Source::Mdns { nic, family } => iface.bind_uri().as_iface_bind_uri().is_some_and(\n                    |(iface_family, iface_name, _)| {\n                        iface_family == *family && iface_name == nic.as_ref()\n                    },\n                ),\n                _ => true,\n            };\n\n        if self.manual_bind.load(Ordering::Relaxed) {\n            let ifaces = self\n                .bind_ifaces\n                .iter()\n                .map(|entry| entry.value().borrow())\n                .filter(|iface| iface_matches_source(iface))\n                .filter_map(|iface| 
Some((iface.bound_addr().ok()?, iface)))\n                .filter(|(addr, _)| addr.family() == ep.family())\n                .collect::<Vec<_>>();\n            Ok(ifaces)\n        } else {\n            let ifaces = self\n                .bind_ifaces\n                .iter()\n                .map(|entry| entry.value().borrow())\n                .filter(|iface| iface_matches_source(iface))\n                .filter_map(|iface| Some((iface.bound_addr().ok()?, iface)))\n                .filter(|(addr, _)| addr.family() == ep.family())\n                .collect::<Vec<_>>();\n            if !ifaces.is_empty() {\n                return Ok(ifaces);\n            }\n            let bind_uri = Self::bind_uri_for(source, ep);\n            let iface = self.network.bind(bind_uri.clone()).await.borrow();\n            let bound_addr = iface.bound_addr().map_err(|source| BindInterfaceError {\n                bind_uri: Some(bind_uri),\n                bind_error: source,\n            })?;\n            Ok(vec![(bound_addr, iface)])\n        }\n    }\n\n    /// Probes and generates potential network paths to the given server endpoints.\n    ///\n    /// Each endpoint is paired with its DNS [`Source`] so that the correct network\n    /// interface can be selected:\n    ///\n    /// - **Direct endpoints**: selects matching bound interfaces or auto-binds a new one,\n    ///   then constructs [`Link`] and [`Pathway`] for each.\n    /// - **Agent endpoints**: ensures an interface exists but does **not** build a path —\n    ///   the puncher system handles Agent paths after STUN discovery.\n    ///\n    /// Returns a list of `(Interface, Link, Pathway)` tuples for Direct endpoints only.\n    ///\n    /// ### Example\n    ///\n    /// ```no_run\n    /// # use dquic::prelude::*;\n    /// # use dquic::qresolve::Source;\n    /// # async fn example(quic_client: &QuicClient) -> Result<(), Box<dyn std::error::Error>> {\n    /// let server_addresses: Vec<_> = 
tokio::net::lookup_host(\"genmeta.net:443\")\n    ///     .await?\n    ///     .map(|addr| (Source::System, addr.into()))\n    ///     .collect();\n    /// let paths = quic_client.probe(server_addresses).await?;\n    /// let connection = quic_client.new_connection(\"genmeta.net\");\n    /// for (iface, link, pathway) in paths {\n    ///     connection.add_path(iface.bind_uri(), link, pathway)?;\n    /// }\n    /// # Ok(())\n    /// # }\n    /// ```\n    pub async fn probe(\n        &self,\n        server_eps: impl IntoIterator<Item = (Source, EndpointAddr)>,\n    ) -> Result<Vec<(Interface, Link, Pathway)>, BindInterfaceError> {\n        let server_eps = server_eps.into_iter().collect::<Vec<_>>();\n\n        let mut paths = vec![];\n        for (source, server_ep) in server_eps {\n            if matches!(server_ep, EndpointAddr::Agent { .. }) {\n                self.ensure_iface_for(&source, &server_ep).await;\n            } else {\n                let ifaces = self.select_or_bind_ifaces(&source, &server_ep).await?;\n\n                paths.extend(ifaces.into_iter().map(move |(bound_addr, iface)| {\n                    let dst = *server_ep;\n                    let link = Link::new(bound_addr, dst);\n                    let pathway = Pathway::new(bound_addr.into(), server_ep);\n                    (iface, link, pathway)\n                }));\n            }\n        }\n\n        Ok(paths)\n    }\n\n    /// Processes a single server endpoint for the given connection:\n    /// 1. Registers the peer endpoint (with its DNS source) in the connection's address book.\n    /// 2. Probes for immediate paths (Direct endpoints) or ensures an interface\n    ///    is bound (Agent endpoints).  See [`Self::probe`] for details.\n    /// 3. 
Adds any resulting Direct paths to the connection.\n    ///\n    /// Returns `true` if at least one Direct path was added.\n    async fn setup_server_endpoint(\n        &self,\n        connection: &Connection,\n        source: Source,\n        server_ep: EndpointAddr,\n    ) -> Result<bool, BindInterfaceError> {\n        // Register the peer endpoint with its DNS source — the puncher will\n        // only auto-create paths with local endpoints matching the source constraint\n        // (e.g. mDNS endpoints are restricted to the discovering NIC).\n        _ = connection.add_peer_endpoint(server_ep, source.clone());\n\n        // probe() handles both Direct and Agent uniformly:\n        //   Direct → select/bind interface, construct Link & Pathway, return paths.\n        //   Agent  → ensure an interface is bound, return empty paths.\n        let paths = self.probe([(source, server_ep)]).await?;\n        let has_direct_path = !paths.is_empty();\n        for (iface, link, pathway) in paths {\n            _ = connection.add_path(iface.bind_uri(), link, pathway);\n        }\n        Ok(has_direct_path)\n    }\n\n    /// Connects to a server using specific endpoint addresses.\n    ///\n    /// This method combines [`QuicClient::probe`] and [`QuicClient::new_connection`].\n    /// It creates a connection and automatically adds paths for all the provided\n    /// server endpoints.\n    ///\n    /// The returned [`Connection`] may not have completed the handshake yet.\n    /// However, any asynchronous operations on the connection (like opening streams)\n    /// will automatically wait for the handshake to complete.\n    ///\n    /// If `server_eps` is empty, this is equivalent to calling [`QuicClient::new_connection`]\n    /// and the connection will remain idle until paths are added.\n    ///\n    /// This variant preserves the DNS [`Source`] so that the correct network interface\n    /// is selected for each endpoint (e.g., mDNS endpoints bind to the discovering NIC).\n  
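  ///\n    /// ### Example\n    ///\n    /// A sketch, mirroring the [`QuicClient::probe`] example above (the host name is illustrative):\n    ///\n    /// ```no_run\n    /// # use dquic::prelude::*;\n    /// # use dquic::qresolve::Source;\n    /// # async fn example(quic_client: &QuicClient) -> Result<(), Box<dyn std::error::Error>> {\n    /// let server_eps: Vec<_> = tokio::net::lookup_host(\"genmeta.net:443\")\n    ///     .await?\n    ///     .map(|addr| (Source::System, addr.into()))\n    ///     .collect();\n    /// let connection = quic_client\n    ///     .connected_to_with_source(\"genmeta.net\", server_eps)\n    ///     .await?;\n    /// # Ok(())\n    /// # }\n    /// ```\n  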
  pub async fn connected_to_with_source(\n        &self,\n        server_name: impl Into<String>,\n        server_eps: impl IntoIterator<Item = (Source, EndpointAddr)>,\n    ) -> Result<Connection, ConnectServerError> {\n        let connection = self.new_connection(server_name);\n        _ = connection.subscribe_local_address();\n        for (source, server_ep) in server_eps {\n            self.setup_server_endpoint(&connection, source, server_ep)\n                .await\n                .map_err(|source| ConnectServerError::BindInterface { source })?;\n        }\n        Ok(connection)\n    }\n\n    /// Connects to a server by its hostname and optional port.\n    ///\n    /// This is the most convenient way to establish a connection. It performs the following steps:\n    /// 1. Parses the server string (e.g., \"example.com\" or \"example.com:443\").\n    ///    Defaults to port 443 if not specified.\n    /// 2. Performs an asynchronous DNS lookup to resolve the hostname to IP addresses.\n    /// 3. 
Calls [`QuicClient::connected_to_with_source`] with the resolved addresses.\n    ///\n    /// The returned [`Connection`] may not have completed the handshake yet.\n    /// Asynchronous operations on the connection will wait for the handshake.\n    pub async fn connect(self: &Arc<Self>, server: &str) -> Result<Connection, ConnectServerError> {\n        let mut server_eps = self\n            .network\n            .resolver\n            .lookup(server)\n            .await\n            .map_err(|source| ConnectServerError::Dns { source })?;\n\n        let connection = self.new_connection(server);\n        if connection.subscribe_local_address().is_err() {\n            // connection already closed, return immediately (not connect error)\n            return Ok(connection);\n        }\n\n        let mut last_error: Option<ConnectServerError> = None;\n\n        // Consume the DNS stream until we get at least one Direct path,\n        // or exhaust all endpoints (Agent-only is acceptable).\n        //\n        // `last_error` doubles as a \"no viable endpoint yet\" sentinel:\n        // - On `Ok(false)` (Agent registered): clear it — we have a viable fallback.\n        // - On `Err`: set/keep it — probe failure, keep looking.\n        // - On stream exhaustion: if still `Some`, nothing viable → propagate error.\n        while let Some((source, server_ep)) = server_eps.next().await {\n            match self\n                .setup_server_endpoint(&connection, source, server_ep)\n                .await\n            {\n                Ok(true) => {\n                    last_error = None; // Got a Direct path, proceed.\n                    break;\n                }\n                Ok(false) => {\n                    // Agent endpoint registered — even if later Direct probes fail,\n                    // the puncher can still establish paths asynchronously.\n                    last_error = None;\n                }\n                Err(error) => {\n                    
last_error.get_or_insert(error.into());\n                }\n            }\n        }\n        if let Some(error) = last_error {\n            return Err(error);\n        }\n\n        // Background task: keep consuming the DNS stream for late-arriving endpoints.\n        tokio::spawn({\n            let connection = connection.clone();\n            let client = self.clone();\n            async move {\n                while let Some((source, server_ep)) = server_eps.next().await {\n                    _ = client\n                        .setup_server_endpoint(&connection, source, server_ep)\n                        .await;\n                }\n            }\n        });\n\n        Ok(connection)\n    }\n}\n\n/// Builder for [`QuicClient`].\n#[derive(Clone)]\npub struct QuicClientBuilder<T> {\n    network: common::Network,\n\n    // client\n    bind_ifaces: DashMap<BindUri, BindInterface>,\n    manual_bind: bool,\n    // client: quic config(in initialize order)\n    prefer_versions: Vec<u32>,\n    token_sink: Arc<dyn TokenSink>,\n    parameters: ClientParameters,\n    tls_config: T,\n    stream_strategy_factory: Arc<dyn ProductStreamsConcurrencyController>,\n    defer_idle_timeout: Duration,\n    qlogger: Arc<dyn QLog + Send + Sync>,\n}\n\nimpl QuicClient {\n    /// Create a new [`QuicClient`] builder.\n    pub fn builder() -> QuicClientBuilder<TlsClientConfigBuilder<WantsVerifier>> {\n        Self::builder_with_tls(TlsClientConfig::builder_with_protocol_versions(&[\n            &rustls::version::TLS13,\n        ]))\n    }\n\n    /// Create a [`QuicClient`] builder with custom crypto provider.\n    pub fn builder_with_crypto_provider(\n        provider: Arc<rustls::crypto::CryptoProvider>,\n    ) -> QuicClientBuilder<TlsClientConfigBuilder<WantsVerifier>> {\n        Self::builder_with_tls(\n            TlsClientConfig::builder_with_provider(provider)\n                .with_protocol_versions(&[&rustls::version::TLS13])\n                .unwrap(),\n        )\n    }\n\n    
/// Start to build a QuicClient with the given TLS configuration.\n    ///\n    /// This is useful when you want to customize the TLS configuration, or integrate dquic with other crates.\n    pub fn builder_with_tls<T>(tls_config: T) -> QuicClientBuilder<T> {\n        QuicClientBuilder {\n            // network\n            network: common::Network::default(),\n\n            // client\n            bind_ifaces: DashMap::new(),\n            manual_bind: false,\n            // client: quic config(in initialize order)\n            prefer_versions: vec![1],\n            token_sink: Arc::new(handy::NoopTokenRegistry),\n            parameters: handy::client_parameters(),\n            tls_config,\n            stream_strategy_factory: Arc::new(handy::ConsistentConcurrency::new),\n            defer_idle_timeout: Duration::ZERO,\n            qlogger: Arc::new(handy::NoopLogger),\n        }\n    }\n}\n\nimpl<T> QuicClientBuilder<T> {\n    pub fn with_resolver(mut self, resolver: Arc<dyn Resolve + Send + Sync>) -> Self {\n        self.network.resolver = resolver;\n        self\n    }\n\n    pub fn physical_ifaces(mut self, physical_ifaces: &'static Devices) -> Self {\n        self.network.devices = physical_ifaces;\n        self\n    }\n\n    /// Specify how the client binds interfaces.\n    ///\n    /// The given factory will be used by [`Self::bind`],\n    /// and/or [`QuicClient::connect`] if no interface is bound when the client is built.\n    ///\n    /// The default quic interface is provided by [`handy::DEFAULT_IO_FACTORY`].\n    /// For Unix and Windows targets, this is a high-performance UDP library supporting GSO and GRO\n    /// provided by the `qudp` crate. 
For other platforms, please specify your own factory.\n    pub fn with_iface_factory(mut self, iface_factory: Arc<dyn ProductIO>) -> Self {\n        self.network.iface_factory = iface_factory;\n        self\n    }\n\n    /// Specify the interface manager for the client.\n    pub fn with_iface_manager(mut self, iface_manager: Arc<InterfaceManager>) -> Self {\n        self.network.iface_manager = iface_manager;\n        self\n    }\n\n    pub fn with_router(mut self, router: Arc<QuicRouter>) -> Self {\n        self.network.quic_router = router;\n        self\n    }\n\n    pub fn with_stun(mut self, server: impl Into<Arc<str>>) -> Self {\n        self.network.stun_server = Some(server.into());\n        self\n    }\n\n    /// Specify the locations for interface sharing.\n    ///\n    /// The given locations are shared by all connections created by this client.\n    pub fn with_locations(mut self, locations: Arc<Locations>) -> Self {\n        self.network.locations = locations;\n        self\n    }\n\n    /// Create quic interfaces bound on the given addresses.\n    ///\n    /// If binding fails, the error will be returned immediately.\n    ///\n    /// The default quic interface is provided by [`handy::DEFAULT_IO_FACTORY`].\n    /// For Unix and Windows targets, this is a high-performance UDP library supporting GSO and GRO\n    /// provided by the `qudp` crate. 
For other platforms, please specify your own factory with\n    /// [`QuicClientBuilder::with_iface_factory`].\n    ///\n    /// If you don't bind any address, each time the client initiates a new connection,\n    /// the client will bind a new interface on an address and port dynamically assigned by the system.\n    ///\n    /// To learn more about how the client selects the interface when it initiates a new connection,\n    /// read [`QuicClient::connect`].\n    ///\n    /// If you call this multiple times, only the last set of interfaces will be used;\n    /// previously bound interfaces will be freed immediately.\n    ///\n    /// If all interfaces are closed, clients will no longer be able to initiate new connections.\n    pub async fn bind(mut self, bind_uris: impl IntoIterator<Item = impl Into<BindUri>>) -> Self {\n        self.bind_ifaces = self\n            .network\n            .bind_many(bind_uris)\n            .await\n            .map(|bind_iface| (bind_iface.bind_uri(), bind_iface))\n            .collect()\n            .await;\n        self.manual_bind = true;\n        self\n    }\n\n    /// (WIP) Specify the quic versions that the client prefers.\n    ///\n    /// If you call this multiple times, only the last call will take effect.\n    pub fn prefer_versions(mut self, versions: impl IntoIterator<Item = u32>) -> Self {\n        self.prefer_versions.clear();\n        self.prefer_versions.extend(versions);\n        self\n    }\n\n    /// Specify the token sink for the client.\n    ///\n    /// The token sink is used to store the tokens that the client received from the server. The client will use the\n    /// tokens to prove itself to the server when it reconnects to the server. 
Read [address verification] in the QUIC RFC\n    /// for more information.\n    ///\n    /// [address verification]: https://www.rfc-editor.org/rfc/rfc9000.html#name-address-validation\n    pub fn with_token_sink(self, token_sink: Arc<dyn TokenSink>) -> Self {\n        Self { token_sink, ..self }\n    }\n\n    /// Specify the [transport parameters] for the client.\n    ///\n    /// If you call this multiple times, only the last `parameters` will be used.\n    ///\n    /// Usually, you don't need to call this method, because the client will use a set of default parameters.\n    ///\n    /// [transport parameters]: https://www.rfc-editor.org/rfc/rfc9000.html#name-transport-parameter-definit\n    pub fn with_parameters(self, parameters: ClientParameters) -> Self {\n        Self { parameters, ..self }\n    }\n\n    fn map_tls<T1>(self, f: impl FnOnce(T) -> T1) -> QuicClientBuilder<T1> {\n        QuicClientBuilder {\n            network: self.network,\n            bind_ifaces: self.bind_ifaces,\n            manual_bind: self.manual_bind,\n            prefer_versions: self.prefer_versions,\n            token_sink: self.token_sink,\n            parameters: self.parameters,\n            tls_config: f(self.tls_config),\n            stream_strategy_factory: self.stream_strategy_factory,\n            defer_idle_timeout: self.defer_idle_timeout,\n            qlogger: self.qlogger,\n        }\n    }\n\n    pub fn with_name(mut self, name: impl Into<String>) -> Self {\n        self.parameters\n            .set(ParameterId::ClientName, name.into())\n            .expect(\"parameter 0xffee belongs to the client and has type String\");\n        self\n    }\n\n    /// Provide an option to defer an idle timeout.\n    ///\n    /// This facility could be used when the application wishes to avoid losing\n    /// state that has been associated with an open connection but does not expect\n    /// to exchange application data for some time.\n    ///\n    /// See [Deferring Idle 
Timeout](https://datatracker.ietf.org/doc/html/rfc9000#name-deferring-idle-timeout)\n    /// of [RFC 9000](https://datatracker.ietf.org/doc/html/rfc9000)\n    /// for more information.\n    pub fn defer_idle_timeout(mut self, duration: Duration) -> Self {\n        self.defer_idle_timeout = duration;\n        self\n    }\n\n    /// Specify the streams concurrency strategy controller for the client.\n    ///\n    /// The streams controller is used to control the concurrency of data streams. `stream_strategy_factory` accepts\n    /// (initial maximum number of bidirectional streams, initial maximum number of unidirectional streams) configured in\n    /// [transport parameters] and returns a `ControlConcurrency` object.\n    ///\n    /// If you call this multiple times, only the last `stream_strategy_factory` will be used.\n    ///\n    /// [transport parameters]: https://www.rfc-editor.org/rfc/rfc9000.html#name-transport-parameter-definit\n    pub fn with_streams_concurrency_strategy(\n        self,\n        stream_strategy_factory: Arc<dyn ProductStreamsConcurrencyController>,\n    ) -> Self {\n        Self {\n            stream_strategy_factory,\n            ..self\n        }\n    }\n\n    /// Specify the qlog collector for client connections.\n    ///\n    /// If you call this multiple times, only the last `logger` will be used.\n    ///\n    /// Pre-implemented loggers:\n    /// - [`LegacySeqLogger`]: Generates qlog files compatible with [qvis] visualization.\n    ///   - `LegacySeqLogger::new(PathBuf::from(\"/dir\"))`: Write to files `{connection_id}_{role}.sqlog` in `dir`\n    ///   - `LegacySeqLogger::new(tokio::io::stdout())`: Stream to stdout\n    ///   - `LegacySeqLogger::new(tokio::io::stderr())`: Stream to stderr\n    ///\n    ///   Output format: JSON-SEQ ([RFC7464]), one JSON event per line.\n    ///\n    /// - [`handy::NoopLogger`] (default): Ignores all qlog events (recommended for production).\n    ///\n    /// [qvis]: https://qvis.quictools.info/\n    /// 
[RFC7464]: https://www.rfc-editor.org/rfc/rfc7464\n    /// [`LegacySeqLogger`]: qevent::telemetry::handy::LegacySeqLogger\n    pub fn with_qlog(self, qlogger: Arc<dyn QLog + Send + Sync>) -> Self {\n        Self { qlogger, ..self }\n    }\n}\n\nimpl QuicClientBuilder<TlsClientConfigBuilder<WantsVerifier>> {\n    /// Choose how to verify server certificates.\n    ///\n    /// Read [TlsClientConfigBuilder::with_root_certificates] for more information.\n    pub fn with_root_certificates(\n        self,\n        root_store: impl Into<Arc<rustls::RootCertStore>>,\n    ) -> QuicClientBuilder<TlsClientConfigBuilder<WantsClientCert>> {\n        self.map_tls(|tls_config_builder| tls_config_builder.with_root_certificates(root_store))\n    }\n\n    /// Choose how to verify server certificates using a webpki verifier.\n    ///\n    /// Read [TlsClientConfigBuilder::with_webpki_verifier] for more information.\n    pub fn with_webpki_verifier(\n        self,\n        verifier: Arc<rustls::client::WebPkiServerVerifier>,\n    ) -> QuicClientBuilder<TlsClientConfigBuilder<WantsClientCert>> {\n        self.map_tls(|tls_config_builder| tls_config_builder.with_webpki_verifier(verifier))\n    }\n\n    /// Replace the default server certificate verifier with a custom one.\n    ///\n    /// This exposes rustls' low-level custom verifier hook. 
The provided\n    /// verifier becomes fully responsible for server certificate validation,\n    /// including any WebPKI, OCSP, pinning, or private PKI checks you require.\n    pub fn with_custom_server_cert_verifier(\n        self,\n        verifier: Arc<dyn rustls::client::danger::ServerCertVerifier>,\n    ) -> QuicClientBuilder<TlsClientConfigBuilder<WantsClientCert>> {\n        self.map_tls(|tls_config_builder| {\n            tls_config_builder\n                .dangerous()\n                .with_custom_certificate_verifier(verifier)\n        })\n    }\n\n    /// Dangerously disable server certificate verification.\n    pub fn without_verifier(self) -> QuicClientBuilder<TlsClientConfigBuilder<WantsClientCert>> {\n        #[derive(Debug)]\n        struct DangerousServerCertVerifier;\n\n        impl rustls::client::danger::ServerCertVerifier for DangerousServerCertVerifier {\n            fn verify_server_cert(\n                &self,\n                _: &rustls::pki_types::CertificateDer<'_>,\n                _: &[rustls::pki_types::CertificateDer<'_>],\n                _: &rustls::pki_types::ServerName<'_>,\n                _: &[u8],\n                _: rustls::pki_types::UnixTime,\n            ) -> Result<rustls::client::danger::ServerCertVerified, rustls::Error> {\n                Ok(rustls::client::danger::ServerCertVerified::assertion())\n            }\n\n            fn verify_tls12_signature(\n                &self,\n                _: &[u8],\n                _: &rustls::pki_types::CertificateDer<'_>,\n                _: &rustls::DigitallySignedStruct,\n            ) -> Result<rustls::client::danger::HandshakeSignatureValid, rustls::Error>\n            {\n                Ok(rustls::client::danger::HandshakeSignatureValid::assertion())\n            }\n\n            fn verify_tls13_signature(\n                &self,\n                _: &[u8],\n                _: &rustls::pki_types::CertificateDer<'_>,\n                _: &rustls::DigitallySignedStruct,\n     
       ) -> Result<rustls::client::danger::HandshakeSignatureValid, rustls::Error>\n            {\n                Ok(rustls::client::danger::HandshakeSignatureValid::assertion())\n            }\n\n            fn supported_verify_schemes(&self) -> Vec<rustls::SignatureScheme> {\n                vec![\n                    rustls::SignatureScheme::RSA_PKCS1_SHA1,\n                    rustls::SignatureScheme::ECDSA_SHA1_Legacy,\n                    rustls::SignatureScheme::RSA_PKCS1_SHA256,\n                    rustls::SignatureScheme::ECDSA_NISTP256_SHA256,\n                    rustls::SignatureScheme::RSA_PKCS1_SHA384,\n                    rustls::SignatureScheme::ECDSA_NISTP384_SHA384,\n                    rustls::SignatureScheme::RSA_PKCS1_SHA512,\n                    rustls::SignatureScheme::ECDSA_NISTP521_SHA512,\n                    rustls::SignatureScheme::RSA_PSS_SHA256,\n                    rustls::SignatureScheme::RSA_PSS_SHA384,\n                    rustls::SignatureScheme::RSA_PSS_SHA512,\n                    rustls::SignatureScheme::ED25519,\n                    rustls::SignatureScheme::ED448,\n                ]\n            }\n        }\n\n        self.map_tls(|tls_config_builder| {\n            tls_config_builder\n                .dangerous()\n                .with_custom_certificate_verifier(Arc::new(DangerousServerCertVerifier))\n        })\n    }\n}\n\nimpl QuicClientBuilder<TlsClientConfigBuilder<WantsClientCert>> {\n    /// Sets a single certificate chain and matching private key for use\n    /// in client authentication.\n    ///\n    /// Read [TlsClientConfigBuilder::with_single_cert] for more information.\n    pub fn with_cert(\n        self,\n        cert: impl handy::ToCertificate,\n        key: impl handy::ToPrivateKey,\n    ) -> QuicClientBuilder<TlsClientConfig> {\n        self.map_tls(|tls_config_builder| {\n            tls_config_builder\n                .with_client_auth_cert(cert.to_certificate(), key.to_private_key())\n                
.expect(\"The private key was wrongly encoded or failed validation\")\n        })\n    }\n\n    /// Do not support client authentication.\n    pub fn without_cert(self) -> QuicClientBuilder<TlsClientConfig> {\n        self.map_tls(|tls_config_builder| tls_config_builder.with_no_client_auth())\n    }\n\n    /// Sets a custom [`ResolvesClientCert`].\n    pub fn with_cert_resolver(\n        self,\n        cert_resolver: Arc<dyn ResolvesClientCert>,\n    ) -> QuicClientBuilder<TlsClientConfig> {\n        self.map_tls(|tls_config_builder| {\n            tls_config_builder.with_client_cert_resolver(cert_resolver)\n        })\n    }\n}\n\nimpl QuicClientBuilder<TlsClientConfig> {\n    /// Specify the [alpn-protocol-ids] that will be sent in `ClientHello`.\n    ///\n    /// By default, it's empty and the ALPN extension won't be sent.\n    ///\n    /// If you call this multiple times, all the `alpn_protocol`s will be used.\n    ///\n    /// [alpn-protocol-ids]: https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids\n    pub fn with_alpns(mut self, alpns: impl IntoIterator<Item = impl Into<Vec<u8>>>) -> Self {\n        self.tls_config\n            .alpn_protocols\n            .extend(alpns.into_iter().map(Into::into));\n        self\n    }\n\n    /// Enable the `keylog` feature.\n    ///\n    /// This is useful when you want to debug the TLS connection.\n    ///\n    /// The keylog will be written to the file that the environment variable `SSLKEYLOGFILE` points to.\n    ///\n    /// Read [`rustls::KeyLogFile`] for more information.\n    pub fn enable_sslkeylog(mut self) -> Self {\n        self.tls_config.key_log = Arc::new(rustls::KeyLogFile::new());\n        self\n    }\n\n    pub fn enable_0rtt(mut self) -> Self {\n        self.tls_config.enable_early_data = true;\n        self\n    }\n\n    /// Build the QuicClient, ready to initiate connections to servers.\n    pub fn build(self) -> QuicClient {\n        QuicClient {\n            network: 
self.network,\n            bind_ifaces: self.bind_ifaces,\n            manual_bind: Arc::new(AtomicBool::new(self.manual_bind)),\n            _prefer_versions: self.prefer_versions,\n            token_sink: self.token_sink,\n            parameters: self.parameters,\n            tls_config: self.tls_config,\n            stream_strategy_factory: self.stream_strategy_factory,\n            defer_idle_timeout: self.defer_idle_timeout,\n            qlogger: self.qlogger,\n        }\n    }\n}\n"
  },
  {
    "path": "dquic/src/common.rs",
    "content": "use std::{net::SocketAddr, sync::Arc};\n\nuse futures::{Stream, StreamExt, stream};\nuse qconnection::{\n    prelude::{EndpointAddr, handy},\n    qinterface::{\n        BindInterface, Interface,\n        bind_uri::BindUri,\n        component::{\n            Components,\n            alive::RebindOnNetworkChangedComponent,\n            location::{Locations, LocationsComponent},\n            route::{QuicRouter, QuicRouterComponent},\n        },\n        device::Devices,\n        io::ProductIO,\n        manager::InterfaceManager,\n    },\n    qtraversal::{\n        nat::{client::StunClientsComponent, router::StunRouterComponent},\n        route::{ForwardersComponent, ReceiveAndDeliverPacketComponent},\n    },\n};\nuse qresolve::{Family, Resolve, SystemResolver};\n\n#[derive(Clone)]\npub struct Network {\n    pub resolver: Arc<dyn Resolve + Send + Sync>,\n    pub devices: &'static Devices,\n    pub iface_factory: Arc<dyn ProductIO>,\n    pub iface_manager: Arc<InterfaceManager>,\n    pub quic_router: Arc<QuicRouter>,\n    pub stun_server: Option<Arc<str>>,\n    pub locations: Arc<Locations>,\n}\n\nimpl Default for Network {\n    fn default() -> Self {\n        Self {\n            resolver: Arc::new(SystemResolver),\n            devices: Devices::global(),\n            iface_factory: Arc::new(handy::DEFAULT_IO_FACTORY),\n            iface_manager: InterfaceManager::global().clone(),\n            quic_router: QuicRouter::global().clone(),\n            stun_server: None,\n            locations: Arc::new(Locations::new()),\n        }\n    }\n}\n\nimpl Network {\n    /// Returns as soon as the first available STUN agent is resolved; the\n    /// StunClientsComponent will automatically top the list up to MIN_AGENTS afterwards.\n    async fn lookup_first_agent(\n        &self,\n        stun_server: &str,\n        family: Family,\n    ) -> Option<Vec<SocketAddr>> {\n        let stream = self.resolver.lookup(stun_server).await.ok()?;\n        let mut stream = std::pin::pin!(stream);\n        while let Some((_source, ep)) = stream.next().await {\n        
    let EndpointAddr::Direct { addr } = ep else {\n                continue;\n            };\n            if match family {\n                Family::V4 => addr.is_ipv4(),\n                Family::V6 => addr.is_ipv6(),\n            } {\n                tracing::trace!(\"resolved first stun agent for {stun_server}: {addr}\");\n                return Some(vec![addr]);\n            }\n        }\n        None\n    }\n\n    fn init_iface_components(\n        &self,\n        bind_iface: &BindInterface,\n        stun_agent: Option<(Arc<str>, Vec<SocketAddr>)>,\n    ) {\n        bind_iface.with_components_mut(move |components: &mut Components, iface: &Interface| {\n            // rebind interface on network changed\n            components.init_with(|| RebindOnNetworkChangedComponent::new(iface, self.devices));\n            // quic packet router\n            let quic_router = components\n                .init_with(|| QuicRouterComponent::new(self.quic_router.clone()))\n                .router();\n\n            let locations = components\n                .init_with(|| LocationsComponent::new(iface.downgrade(), self.locations.clone()))\n                .clone();\n\n            match stun_agent {\n                // stun enabled:\n                Some((stun_server, stun_agents)) => {\n                    // initial stun router\n                    let stun_router = components\n                        .init_with(|| StunRouterComponent::new(iface.downgrade()))\n                        .router();\n                    // initial stun clients (automatically topped up to MIN_AGENTS later)\n                    let clients = components\n                        .init_with(|| {\n                            StunClientsComponent::new(\n                                iface.downgrade(),\n                                stun_router.clone(),\n                                self.resolver.clone(),\n                                stun_server,\n                                stun_agents,\n                             
   Some(locations.clone()),\n                            )\n                        })\n                        .clone();\n                    // initial forwarder\n                    let relay = bind_iface\n                        .bind_uri()\n                        .relay()\n                        .and_then(|r| r.parse::<SocketAddr>().ok());\n\n                    let forwarder = if let Some(relay) = relay {\n                        components\n                            .init_with(|| ForwardersComponent::new_server(relay))\n                            .forwarder()\n                    } else {\n                        components\n                            .init_with(|| ForwardersComponent::new_client(clients))\n                            .forwarder()\n                    };\n\n                    // initial receive and deliver packet component(quic, stun and forwarder)\n                    components.init_with(|| {\n                        ReceiveAndDeliverPacketComponent::builder(iface.downgrade())\n                            .quic_router(quic_router)\n                            .stun_router(stun_router)\n                            .forwarder(forwarder)\n                            .init()\n                    });\n                }\n                // no stun: receive and deliver quic only\n                None => {\n                    components.init_with(|| {\n                        ReceiveAndDeliverPacketComponent::builder(iface.downgrade())\n                            .quic_router(quic_router)\n                            .init()\n                    });\n                }\n            };\n        });\n    }\n\n    pub async fn bind(&self, bind_uri: BindUri) -> BindInterface {\n        let stun_server = if let Some(server) = bind_uri.stun_server() {\n            Some(Arc::from(server))\n        } else if let Some(\"false\") = bind_uri.prop(BindUri::STUN_PROP).as_deref() {\n            None\n        } else {\n            
self.stun_server.clone()\n        };\n\n        let family = bind_uri.family();\n        let stun_agents = match &stun_server {\n            Some(server) => self\n                .lookup_first_agent(server.as_ref(), family)\n                .await\n                .unwrap_or_default(),\n            None => vec![],\n        };\n\n        let factory = self.iface_factory.clone();\n        let bind_iface = self.iface_manager.bind(bind_uri, factory).await;\n        self.init_iface_components(&bind_iface, stun_server.map(|s| (s, stun_agents)));\n\n        bind_iface\n    }\n\n    pub async fn bind_many(\n        &self,\n        bind_uris: impl IntoIterator<Item = impl Into<BindUri>>,\n    ) -> impl Stream<Item = BindInterface> {\n        stream::iter(bind_uris).then(async |bind_uri| self.bind(bind_uri.into()).await)\n    }\n}\n"
  },
  {
    "path": "dquic/src/lib.rs",
    "content": "#![doc=include_str!(\"../README.md\")]\n\npub mod prelude {\n    pub use ::qconnection;\n    pub use qconnection::prelude::*;\n    pub use qresolve::Resolve;\n\n    pub use crate::{\n        client::{BindInterfaceError, ConnectServerError, QuicClient},\n        server::{ListenError, ListenersShutdown, QuicListeners, Server, ServerError},\n    };\n\n    pub mod handy {\n        pub use qconnection::prelude::handy::*;\n        pub use qresolve::SystemResolver;\n\n        pub use crate::cert::{ToCertificate, ToPrivateKey};\n    }\n}\n\npub mod builder {\n    pub use qconnection::builder::*;\n\n    pub use crate::{client::QuicClientBuilder, server::QuicListenersBuilder};\n}\n\n// Hidden modules used to integrate the code examples from the README into the cargo test\nmod doc {\n    #[doc=include_str!(\"../README_CN.md\")]\n    mod zh {}\n\n    // Omitted: Duplicate with crate documentation\n    // #[doc=include_str!(\"../../README.md\")]\n    // mod en {}\n}\n\npub use ::qconnection::{self, qbase, qdatagram, qevent, qinterface, qrecovery, qtraversal};\npub use ::qresolve;\n\nmod cert;\nmod client;\nmod common;\nmod server;\n"
  },
  {
    "path": "dquic/src/server.rs",
    "content": "use std::{\n    collections::HashMap,\n    fmt::Debug,\n    io,\n    ops::{Deref, DerefMut},\n    pin::pin,\n    sync::Arc,\n    time::Duration,\n};\n\nuse arc_swap::ArcSwap;\nuse dashmap::DashMap;\nuse futures::StreamExt;\nuse qbase::{\n    packet::{DataHeader, GetDcid, Packet, long::DataHeader as LongHeader},\n    param::ServerParameters,\n    token::TokenProvider,\n    util::BoundQueue,\n};\nuse qconnection::{\n    self,\n    qinterface::{self, bind_uri::BindUri, component::location::Locations, device::Devices},\n    tls::AcceptAllClientAuther,\n};\nuse qevent::telemetry::QLog;\nuse qinterface::{\n    BindInterface,\n    component::route::{QuicRouter, Way},\n    io::ProductIO,\n    manager::InterfaceManager,\n};\nuse rustls::{\n    ConfigBuilder, ServerConfig as TlsServerConfig, WantsVerifier,\n    server::{NoClientAuth, ResolvesServerCert, danger::ClientCertVerifier},\n    sign::CertifiedKey,\n};\nuse thiserror::Error;\nuse tokio::sync::{OwnedSemaphorePermit, Semaphore};\nuse tracing::Instrument;\n\nuse crate::{prelude::*, *};\n\n/// Errors that can occur during server management operations.\n#[derive(Debug, thiserror::Error)]\npub enum ServerError {\n    /// The server with the specified name already exists.\n    #[error(\"Server '{server}' already exists\")]\n    ServerAlreadyExists { server: String },\n\n    /// The server with the specified name was not found.\n    #[error(\"Server '{server}' not found\")]\n    ServerNotFound { server: String },\n\n    /// Failed to load the private key for the server.\n    #[error(\"Failed to load private key for server '{server}': {source}\")]\n    InvalidCertOrKey {\n        server: String,\n        #[source]\n        source: rustls::Error,\n    },\n}\n\nimpl From<ServerError> for io::Error {\n    fn from(error: ServerError) -> Self {\n        let kind = match &error {\n            ServerError::ServerAlreadyExists { .. } => io::ErrorKind::AlreadyExists,\n            ServerError::ServerNotFound { .. 
} => io::ErrorKind::NotFound,\n            ServerError::InvalidCertOrKey { .. } => io::ErrorKind::InvalidInput,\n        };\n        io::Error::new(kind, error)\n    }\n}\n\n/// Errors that can occur during QuicListeners builder creation.\n#[derive(Debug, thiserror::Error)]\npub enum ListenError {\n    /// A QuicListeners instance is already running globally.\n    #[error(\"A QuicListeners is already running on the router\")]\n    AlreadyRunning,\n}\n\nimpl From<ListenError> for io::Error {\n    fn from(error: ListenError) -> Self {\n        match error {\n            ListenError::AlreadyRunning => io::Error::new(io::ErrorKind::AlreadyExists, error),\n        }\n    }\n}\n\ntype TlsServerConfigBuilder<T> = ConfigBuilder<TlsServerConfig, T>;\n\n#[derive(Debug, Default)]\npub struct VirtualHosts(Arc<DashMap<String, Server>>);\n\nimpl ResolvesServerCert for VirtualHosts {\n    fn resolve(&self, client_hello: rustls::server::ClientHello) -> Option<Arc<CertifiedKey>> {\n        self.0\n            .get(client_hello.server_name()?)\n            .map(|server| server.certified_key())\n    }\n}\n\npub struct Server {\n    network: common::Network,\n    bind_ifaces: DashMap<BindUri, BindInterface>,\n    // todo: [update] change to LocalAgent\n    certified_key: ArcSwap<CertifiedKey>,\n}\n\nimpl std::fmt::Debug for Server {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Server\")\n            .field(\"bind_ifaces\", &self.bind_ifaces)\n            .field(\"certified_key\", &self.certified_key())\n            .finish()\n    }\n}\n\nimpl Server {\n    pub fn bind_interfaces(&self) -> HashMap<BindUri, BindInterface> {\n        self.bind_ifaces\n            .iter()\n            .map(|entry| (entry.key().clone(), entry.value().clone()))\n            .collect()\n    }\n\n    pub async fn bind(&self, bind_uris: impl IntoIterator<Item = impl Into<BindUri>>) {\n        let mut bind_ifaces = 
pin!(self.network.bind_many(bind_uris).await);\n        while let Some(bind_iface) = bind_ifaces.next().await {\n            self.bind_ifaces.insert(bind_iface.bind_uri(), bind_iface);\n        }\n    }\n\n    pub fn get_iface(&self, bind_uri: &BindUri) -> Option<BindInterface> {\n        self.bind_ifaces\n            .get(bind_uri)\n            .map(|iface| iface.value().clone())\n    }\n\n    pub fn remove_iface(&self, bind_uri: &BindUri) -> Option<BindInterface> {\n        self.bind_ifaces.remove(bind_uri).map(|entry| entry.1)\n    }\n\n    pub fn certified_key(&self) -> Arc<CertifiedKey> {\n        self.certified_key.load_full()\n    }\n\n    pub fn update_ocsp(&self, ocsp: Option<Vec<u8>>) {\n        self.certified_key.rcu(|current| CertifiedKey {\n            cert: current.cert.clone(),\n            key: current.key.clone(),\n            ocsp: ocsp.clone(),\n        });\n    }\n}\n\ntype Incomings = BoundQueue<((Connection, String, Pathway, Link), OwnedSemaphorePermit)>;\n\n/// A QUIC listener that can serve multiple virtual servers, accepting incoming connections.\n///\n/// ## Creating Listeners\n///\n/// Use [`QuicListenersBuilder`] to configure the listener, then call [`QuicListenersBuilder::listen`]\n/// to start accepting connections.\n///\n/// **Note**: Only one [`QuicListeners`] instance can run at a time globally.\n/// To stop the listeners, call [`QuicListeners::shutdown`] or drop all references to the [`Arc<QuicListeners>`].\n///\n/// ## Managing Servers\n///\n/// Add multiple virtual servers by calling [`QuicListeners::add_server`] multiple times.\n/// Each server is identified by its server name (SNI) and handles connections independently.\n///\n/// - Servers can share the same network interfaces\n/// - Servers can be added without initially binding to any interface\n///\n/// ## Connection Handling\n///\n/// Call [`QuicListeners::accept`] to receive incoming connections. 
The listener automatically:\n/// - Routes connections to the appropriate server based on SNI (Server Name Indication)\n/// - Rejects connections if the target server isn't listening on the receiving interface\n/// - Returns connections that may still be completing their QUIC handshake\n#[derive(Clone)]\npub struct QuicListeners {\n    network: common::Network,\n\n    // server\n    servers: Arc<DashMap<String, Server>>, // must be empty while building\n    incomings: Arc<Incomings>,             // identify the building QuicListeners\n    backlog: Arc<Semaphore>,               // limit the number of concurrent connections\n    // server: quic config(in initialize order)\n    _supported_versions: Vec<u32>,\n    token_provider: Arc<dyn TokenProvider>,\n    parameters: ServerParameters,\n    anti_port_scan: bool,\n    client_auther: Arc<dyn AuthClient>,\n    tls_config: TlsServerConfig,\n    stream_strategy_factory: Arc<dyn ProductStreamsConcurrencyController>,\n    defer_idle_timeout: Duration,\n    qlogger: Arc<dyn QLog + Send + Sync>,\n}\n\nimpl QuicListeners {\n    /// Add a virtual server with its certificate chain and private key.\n    ///\n    /// Creates a new virtual host identified by its server name (SNI). The server will use the\n    /// certificate chain and private key that matches the SNI in the client's `ClientHello` message.\n    /// If no matching server is found, the connection will be rejected.\n    ///\n    /// A server can be added without binding to any interface initially, but will not accept\n    /// connections until interfaces are added via [`bind`]. 
This allows flexible\n    /// server configuration and hot-swapping of network bindings.\n    ///\n    /// [`bind`]: Server::bind\n    pub async fn add_server(\n        &self,\n        server_name: impl Into<String>,\n        cert_chain: impl handy::ToCertificate,\n        private_key: impl handy::ToPrivateKey,\n        bind_uris: impl IntoIterator<Item = impl Into<BindUri>>,\n        ocsp: impl Into<Option<Vec<u8>>>,\n    ) -> Result<(), ServerError> {\n        let server = server_name.into();\n\n        let server_entry = match self.servers.entry(server.clone()) {\n            dashmap::Entry::Vacant(entry) => entry,\n            dashmap::Entry::Occupied(..) => {\n                return Err(ServerError::ServerAlreadyExists { server });\n            }\n        };\n\n        let cert = cert_chain.to_certificate();\n        let key = self\n            .tls_config\n            .crypto_provider()\n            .key_provider\n            .load_private_key(private_key.to_private_key())\n            .map_err(|e| ServerError::InvalidCertOrKey {\n                server: server.clone(),\n                source: e,\n            })?;\n        let ocsp = ocsp.into();\n        let certified_key = CertifiedKey { cert, key, ocsp };\n\n        certified_key\n            .keys_match()\n            .map_err(|source| ServerError::InvalidCertOrKey {\n                server: server.clone(),\n                source,\n            })?;\n        let certified_key = Arc::new(certified_key);\n\n        let bind_uris = bind_uris.into_iter();\n\n        let server = Server {\n            network: self.network.clone(),\n            bind_ifaces: DashMap::with_capacity(bind_uris.size_hint().0),\n            certified_key: ArcSwap::new(certified_key),\n        };\n        server.bind(bind_uris).await;\n        server_entry.insert(server);\n\n        Ok(())\n    }\n\n    /// Remove a virtual server and all its associated interfaces.\n    ///\n    /// Completely removes a server from the listeners, 
including all network interfaces\n    /// it was bound to (if the interface is not used by other servers).\n    /// This is the inverse operation of [`add_server`] and provides a clean\n    /// way to decommission a virtual host.\n    ///\n    /// Returns `true` if the server existed and was removed, `false` if no server with the\n    /// specified name was found. You must remove an existing server before adding a new\n    /// one with the same name.\n    ///\n    /// [`add_server`]: QuicListeners::add_server\n    pub fn remove_server(&self, server_name: &str) -> bool {\n        self.servers.remove(server_name).is_some()\n    }\n\n    /// Get the server by its name.\n    pub fn get_server<'l>(&'l self, server_name: &str) -> Option<impl Deref<Target = Server> + 'l> {\n        self.servers.get(server_name)\n    }\n\n    /// Get a mutable reference to the server by its name.\n    pub fn get_server_mut<'l>(\n        &'l self,\n        server_name: &str,\n    ) -> Option<impl DerefMut<Target = Server> + 'l> {\n        self.servers.get_mut(server_name)\n    }\n\n    pub fn servers(&self) -> Vec<String> {\n        self.servers\n            .iter()\n            .map(|entry| entry.key().clone())\n            .collect()\n    }\n}\n\n#[derive(Debug, Error, Clone, Copy)]\n#[error(\"Listeners shutdown\")]\npub struct ListenersShutdown;\n\nimpl QuicListeners {\n    /// Accept an incoming QUIC connection from the queue.\n    ///\n    /// Returns the connection, connected server name, and network path information.\n    /// Connections are automatically routed based on SNI (Server Name Indication).\n    ///\n    /// The connection queue size is limited by the `backlog` parameter in [`QuicListenersBuilder::listen`].\n    /// When the queue is full, new incoming packets may be dropped at the network level.\n    pub async fn accept(&self) -> Result<(Connection, String, Pathway, Link), ListenersShutdown> {\n        self.incomings\n            .recv()\n            .await\n            
.ok_or(ListenersShutdown)\n            .map(|(i, ..)| i)\n    }\n\n    /// Close the QuicListeners and stop accepting new connections.\n    ///\n    /// Unaccepted connections will be closed.\n    pub fn shutdown(&self) {\n        self.incomings.close();\n        self.backlog.close();\n    }\n}\n\nimpl Drop for QuicListeners {\n    fn drop(&mut self) {\n        self.shutdown();\n    }\n}\n\nstruct ServerAuther {\n    anti_port_scan: bool,\n    iface: BindUri,\n    servers: Arc<DashMap<String, Server>>,\n}\n\nimpl AuthClient for ServerAuther {\n    fn verify_client_name(\n        &self,\n        server_agent: &LocalAgent,\n        _: Option<&str>,\n    ) -> ClientNameVerifyResult {\n        match self\n            .servers\n            .get(server_agent.name())\n            .is_some_and(|server| server.bind_ifaces.contains_key(&self.iface))\n        {\n            true => ClientNameVerifyResult::Accept,\n            false if self.anti_port_scan => ClientNameVerifyResult::SilentRefuse(\"\".to_owned()),\n            false => ClientNameVerifyResult::Refuse(\"\".to_owned()),\n        }\n    }\n\n    fn verify_client_agent(&self, _: &LocalAgent, _: &RemoteAgent) -> ClientAgentVerifyResult {\n        ClientAgentVerifyResult::Accept\n    }\n}\n\n// internal methods\nimpl QuicListeners {\n    #[tracing::instrument(\n        target = \"quic_listeners\", level = \"debug\", skip_all, \n        fields(%bind_uri, %pathway, %link, odcid=tracing::field::Empty, server_name=tracing::field::Empty)\n    )]\n    pub(crate) fn try_accept_connection(&self, packet: Packet, (bind_uri, pathway, link): Way) {\n        let origin_dcid = match &packet {\n            Packet::Data(data_packet) => match &data_packet.header {\n                DataHeader::Long(LongHeader::Initial(hdr)) => *hdr.dcid(),\n                DataHeader::Long(LongHeader::ZeroRtt(hdr)) => *hdr.dcid(),\n                _ => return,\n            },\n            _ => return,\n        };\n        
tracing::Span::current().record(\"odcid\", origin_dcid.to_string());\n\n        if origin_dcid.is_empty() {\n            tracing::debug!(target: \"quic_listeners\", \"Received an initial/0rtt packet with empty destination CID, ignoring it\");\n            return;\n        }\n\n        // Acquire a permit from the backlog semaphore to limit the number of concurrent connections.\n        let Ok(premit) = self.backlog.clone().try_acquire_owned() else {\n            tracing::debug!(target: \"quic_listeners\", \"Backlog full, dropping incoming packet\");\n            return;\n        };\n\n        let server_auther = ServerAuther {\n            anti_port_scan: self.anti_port_scan,\n            iface: bind_uri.clone(),\n            servers: self.servers.clone(),\n        };\n\n        let connection = Connection::new_server(self.token_provider.clone())\n            .with_parameters(self.parameters.clone())\n            .with_client_auther(Box::new((server_auther, self.client_auther.clone())))\n            .with_tls_config(self.tls_config.clone())\n            .with_streams_concurrency_strategy(self.stream_strategy_factory.as_ref())\n            .with_zero_rtt(self.tls_config.max_early_data_size == 0xffffffff)\n            .with_defer_idle_timeout(self.defer_idle_timeout)\n            .with_iface_factory(self.network.iface_factory.clone())\n            .with_iface_manager(self.network.iface_manager.clone())\n            .with_quic_router(self.network.quic_router.clone())\n            .with_locations(self.network.locations.clone())\n            // todo\n            // .with_stun_servers()\n            .with_cids(origin_dcid)\n            .with_qlog(self.qlogger.clone())\n            .run();\n\n        let incomings = self.incomings.clone();\n        let quic_router = self.network.quic_router.clone();\n\n        let try_accept_connection = async move {\n            quic_router.deliver(packet, (bind_uri, pathway, link)).await;\n\n            match 
connection.server_name().await {\n                Ok(server_name) => {\n                    tracing::Span::current().record(\"server_name\", &server_name);\n                    _ = connection.subscribe_local_address();\n                    let incoming = (connection, server_name, pathway, link);\n                    match incomings.send((incoming, premit)).await {\n                        Ok(..) => {\n                            tracing::debug!(target: \"quic_listeners\", \"Accepted incoming connection\")\n                        }\n                        Err(..) => {\n                            tracing::debug!(target: \"quic_listeners\", \"Listeners is shutdown, closing incoming connection\")\n                        }\n                    }\n                }\n                Err(error) => {\n                    tracing::debug!(\n                        target: \"quic_listeners\",\n                        \"Failed to accept connection: {error}\",\n                    );\n                }\n            }\n        };\n        // Task completes after a single accept-notify cycle; no explicit join needed.\n        tokio::spawn(try_accept_connection.in_current_span());\n    }\n}\n\n/// The builder for the quic listeners.\n#[derive(Clone)]\npub struct QuicListenersBuilder<T> {\n    // network\n    network: common::Network,\n\n    // server\n    servers: Arc<DashMap<String, Server>>, // must be empty while building\n    incomings: Arc<Incomings>,             // identify the building QuicListeners\n    // server: quic config(in initialize order)\n    supported_versions: Vec<u32>,\n    token_provider: Arc<dyn TokenProvider>,\n    parameters: ServerParameters,\n    anti_port_scan: bool,\n    client_auther: Arc<dyn AuthClient>,\n    tls_config: T,\n    stream_strategy_factory: Arc<dyn ProductStreamsConcurrencyController>,\n    defer_idle_timeout: Duration,\n    qlogger: Arc<dyn QLog + Send + Sync>,\n}\n\nimpl QuicListeners {\n    /// Start to build a [`QuicListeners`].\n  
  pub fn builder() -> QuicListenersBuilder<TlsServerConfigBuilder<WantsVerifier>> {\n        Self::builder_with_tls(TlsServerConfig::builder_with_protocol_versions(&[\n            &rustls::version::TLS13,\n        ]))\n    }\n\n    /// Start to build a [`QuicListeners`] with the given TLS crypto provider.\n    pub fn builder_with_crypto_provider(\n        provider: Arc<rustls::crypto::CryptoProvider>,\n    ) -> Result<QuicListenersBuilder<TlsServerConfigBuilder<WantsVerifier>>, rustls::Error> {\n        Ok(Self::builder_with_tls(\n            TlsServerConfig::builder_with_provider(provider)\n                .with_protocol_versions(&[&rustls::version::TLS13])?,\n        ))\n    }\n\n    /// Start to build a [`QuicListeners`] with the given TLS configuration.\n    ///\n    /// This is useful when you want to customize the TLS configuration, or integrate dquic with other crates.\n    pub fn builder_with_tls<T>(tls_config: T) -> QuicListenersBuilder<T> {\n        QuicListenersBuilder {\n            // network\n            network: common::Network::default(),\n\n            // server\n            servers: Arc::new(DashMap::new()), // must be empty while building\n            incomings: Arc::new(BoundQueue::new(8)), // identify the building QuicListeners\n            // server: quic config(in initialize order)\n            supported_versions: vec![1],\n            token_provider: Arc::new(handy::NoopTokenRegistry),\n            parameters: handy::server_parameters(),\n            anti_port_scan: false,\n            client_auther: Arc::new(AcceptAllClientAuther),\n            tls_config,\n            stream_strategy_factory: Arc::new(handy::ConsistentConcurrency::new),\n            defer_idle_timeout: Duration::ZERO,\n            qlogger: Arc::new(handy::NoopLogger),\n        }\n    }\n}\n\nimpl<T> QuicListenersBuilder<T> {\n    pub fn with_resolver(mut self, resolver: Arc<dyn Resolve + Send + Sync>) -> Self {\n        self.network.resolver = resolver;\n        self\n    }\n\n 
   pub fn with_physical_ifaces(mut self, physical_ifaces: &'static Devices) -> Self {\n        self.network.devices = physical_ifaces;\n        self\n    }\n\n    /// Specify how hosts bind to the interface.\n    ///\n    /// If you call this multiple times, only the last `factory` will be used.\n    ///\n    /// The default quic interface is provided by [`handy::DEFAULT_IO_FACTORY`].\n    /// For Unix and Windows targets, this is a high-performance UDP library supporting GSO and GRO\n    /// provided by the `qudp` crate. For other platforms, please specify your own factory.\n    pub fn with_iface_factory(mut self, iface_factory: Arc<dyn ProductIO + 'static>) -> Self {\n        self.network.iface_factory = iface_factory;\n        self\n    }\n\n    pub fn with_iface_manager(mut self, iface_manager: Arc<InterfaceManager>) -> Self {\n        self.network.iface_manager = iface_manager;\n        self\n    }\n\n    /// Specify the router to use for the listeners.\n    ///\n    /// Packets received from the interfaces bound to the server will be delivered to this router;\n    /// connectless packets (possibly new incoming client connections) will be delivered to the QuicListeners.\n    ///\n    /// A router can only be listened to by one QuicListeners instance,\n    /// otherwise [`QuicListenersBuilder::listen`] will fail.\n    pub fn with_router(mut self, router: Arc<QuicRouter>) -> Self {\n        self.network.quic_router = router;\n        self\n    }\n\n    pub fn with_stun(mut self, stun_server: impl Into<Arc<str>>) -> Self {\n        self.network.stun_server = Some(stun_server.into());\n        self\n    }\n\n    /// Specify the locations for interface sharing.\n    ///\n    /// The given locations are shared by all connections created by these listeners.\n    pub fn with_locations(mut self, locations: Arc<Locations>) -> Self {\n        self.network.locations = locations;\n        self\n    }\n\n    /// (WIP) Specify the supported QUIC versions.\n    ///\n    /// If you call this multiple times, only 
the last call will take effect.\n    pub fn with_supported_versions(mut self, versions: impl IntoIterator<Item = u32>) -> Self {\n        self.supported_versions.clear();\n        self.supported_versions.extend(versions);\n        self\n    }\n\n    /// Specify how the server creates and verifies the client's token during [address verification].\n    ///\n    /// If you call this multiple times, only the last `token_provider` will be used.\n    ///\n    /// [address verification]: https://www.rfc-editor.org/rfc/rfc9000.html#name-address-validation\n    pub fn with_token_provider(self, token_provider: Arc<dyn TokenProvider>) -> Self {\n        Self {\n            token_provider,\n            ..self\n        }\n    }\n\n    /// Specify the [transport parameters] for the server connections.\n    ///\n    /// If you call this multiple times, only the last `parameters` will be used.\n    ///\n    /// Usually, you don't need to call this method, because the server will use a set of default parameters.\n    ///\n    /// [transport parameters]: https://www.rfc-editor.org/rfc/rfc9000.html#name-transport-parameter-definit\n    pub fn with_parameters(mut self, parameters: ServerParameters) -> Self {\n        self.parameters = parameters;\n        self\n    }\n\n    /// Enable anti-port scanning protection.\n    ///\n    /// When anti-port scanning protection is enabled, the server will silently drop connections\n    /// that fail validation (e.g., invalid ClientHello, authentication failures)\n    /// without sending any response packets.\n    ///\n    /// This security feature provides the following benefits:\n    /// - Prevents attackers from detecting server presence through port scanning\n    /// - Reduces the attack surface by not revealing server configuration details\n    /// - Protects against network reconnaissance and probing attacks\n    /// - Makes the server appear \"offline\" to unauthorized connection attempts\n    ///\n    /// **Security Note:** This feature should be 
used carefully as it may make\n    /// debugging connection issues more difficult. Consider using it in production\n    /// environments where security is prioritized over observability.\n    ///\n    /// **Tip:** For enhanced security, combine this with [`with_client_auther`] to implement\n    /// custom authentication logic while maintaining stealth behavior for failed connections.\n    ///\n    /// Default: disabled\n    ///\n    /// [`with_client_auther`]: QuicListenersBuilder::with_client_auther\n    pub fn enable_anti_port_scan(mut self) -> Self {\n        self.anti_port_scan = true;\n        self\n    }\n\n    /// Specify custom client authentication handlers for the server.\n    ///\n    /// Client authers are used to perform additional validation beyond standard TLS\n    /// certificate verification. They can verify client names and client agents\n    /// according to custom business logic.\n    ///\n    /// Each [`AuthClient`] implementation provides two verification methods:\n    /// - `verify_client_name()`: Validates the client name (if any) presented to the local server agent\n    /// - `verify_client_agent()`: Validates the remote client agent connecting to the local server agent\n    ///\n    /// All provided authers must approve the connection for it to be accepted.\n    /// If any auther rejects the connection, it will be dropped.\n    ///\n    /// If you call this multiple times, only the last `client_auther` will be used.\n    ///\n    /// **Security Enhancement:** When combined with [`enable_anti_port_scan`],\n    /// failed authentication attempts will be silently dropped without any response,\n    /// providing enhanced security against reconnaissance attacks.\n    ///\n    /// **TLS Protocol Note:** Certificate verification failures during the TLS handshake\n    /// will still send error responses to clients, as the server has already sent\n    /// its `ServerHello` message at that point. 
The stealth behavior only applies to\n    /// earlier validation failures that occur before the TLS handshake begins.\n    ///\n    /// **Built-in Validation:** The server automatically verifies that the interface\n    /// receiving the client connection is configured to listen for the requested\n    /// server name (SNI). This built-in validation ensures proper routing of\n    /// connections to their intended hosts.\n    ///\n    /// Default: empty (only built-in host and interface validation)\n    ///\n    /// [`AuthClient`]: qconnection::tls::AuthClient\n    /// [`enable_anti_port_scan`]: QuicListenersBuilder::enable_anti_port_scan\n    pub fn with_client_auther(mut self, client_auther: impl AuthClient + 'static) -> Self {\n        self.client_auther = Arc::new(client_auther);\n        self\n    }\n\n    fn map_tls<T1>(self, f: impl FnOnce(T) -> T1) -> QuicListenersBuilder<T1> {\n        QuicListenersBuilder {\n            network: self.network,\n            servers: self.servers,\n            incomings: self.incomings,\n            supported_versions: self.supported_versions,\n            token_provider: self.token_provider,\n            parameters: self.parameters,\n            anti_port_scan: self.anti_port_scan,\n            client_auther: self.client_auther,\n            tls_config: f(self.tls_config),\n            stream_strategy_factory: self.stream_strategy_factory,\n            defer_idle_timeout: self.defer_idle_timeout,\n            qlogger: self.qlogger,\n        }\n    }\n\n    /// Specify the factory that produces the streams concurrency strategy controller for the server.\n    ///\n    /// The streams controller is used to control the concurrency of data streams.\n    /// Take a look at [`ControlStreamsConcurrency`] for more information.\n    ///\n    /// If you call this multiple times, only the last `stream_strategy_factory` will be used.\n    pub fn with_streams_concurrency_strategy(\n        self,\n        stream_strategy_factory: Arc<dyn 
ProductStreamsConcurrencyController>,\n    ) -> Self {\n        Self {\n            stream_strategy_factory,\n            ..self\n        }\n    }\n\n    /// Provide an option to defer an idle timeout.\n    ///\n    /// This facility could be used when the application wishes to avoid losing\n    /// state that has been associated with an open connection but does not expect\n    /// to exchange application data for some time.\n    ///\n    /// See [Deferring Idle Timeout](https://datatracker.ietf.org/doc/html/rfc9000#name-deferring-idle-timeout)\n    /// of [RFC 9000](https://datatracker.ietf.org/doc/html/rfc9000)\n    /// for more information.\n    pub fn defer_idle_timeout(mut self, duration: Duration) -> Self {\n        self.defer_idle_timeout = duration;\n        self\n    }\n\n    /// Specify the qlog collector for server connections.\n    ///\n    /// If you call this multiple times, only the last `qlogger` will be used.\n    ///\n    /// Pre-implemented loggers:\n    /// - [`LegacySeqLogger`]: Generates qlog files compatible with [qvis] visualization.\n    ///   - `LegacySeqLogger::new(PathBuf::from(\"/dir\"))`: Write to files `{connection_id}_{role}.sqlog` in `dir`\n    ///   - `LegacySeqLogger::new(tokio::io::stdout())`: Stream to stdout\n    ///   - `LegacySeqLogger::new(tokio::io::stderr())`: Stream to stderr\n    ///\n    ///   Output format: JSON-SEQ ([RFC7464]), one JSON event per line.\n    ///\n    /// - [`handy::NoopLogger`] (default): Ignores all qlog events; recommended for production.\n    ///\n    /// [qvis]: https://qvis.quictools.info/\n    /// [RFC7464]: https://www.rfc-editor.org/rfc/rfc7464\n    /// [`LegacySeqLogger`]: qevent::telemetry::handy::LegacySeqLogger\n    pub fn with_qlog(self, qlogger: Arc<dyn QLog + Send + Sync>) -> Self {\n        Self { qlogger, ..self }\n    }\n}\n\nimpl QuicListenersBuilder<TlsServerConfigBuilder<WantsVerifier>> {\n    /// Choose how to verify client certificates.\n    pub fn 
with_client_cert_verifier(\n        self,\n        client_cert_verifier: Arc<dyn ClientCertVerifier>,\n    ) -> QuicListenersBuilder<TlsServerConfig> {\n        let virtual_servers = Arc::new(VirtualHosts(self.servers.clone()));\n        self.map_tls(|tls_config_builder| {\n            tls_config_builder\n                .with_client_cert_verifier(client_cert_verifier)\n                .with_cert_resolver(virtual_servers)\n        })\n    }\n\n    /// Disable client authentication.\n    pub fn without_client_cert_verifier(self) -> QuicListenersBuilder<TlsServerConfig> {\n        let virtual_servers = Arc::new(VirtualHosts(self.servers.clone()));\n        self.map_tls(|tls_config_builder| {\n            tls_config_builder\n                .with_client_cert_verifier(Arc::new(NoClientAuth))\n                .with_cert_resolver(virtual_servers)\n        })\n    }\n}\n\nimpl QuicListenersBuilder<TlsServerConfig> {\n    /// Specify the [alpn-protocol-ids] that the server supports.\n    ///\n    /// If you call this multiple times, all the given `alpn` protocols will be used.\n    ///\n    /// If you never call this method, the server will not perform ALPN negotiation with the client.\n    ///\n    /// [alpn-protocol-ids]: https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids\n    pub fn with_alpns(mut self, alpn: impl IntoIterator<Item = impl Into<Vec<u8>>>) -> Self {\n        self.tls_config\n            .alpn_protocols\n            .extend(alpn.into_iter().map(Into::into));\n        self\n    }\n\n    pub fn enable_0rtt(mut self) -> Self {\n        // The TLS early_data extension in the NewSessionTicket message is defined to convey (in the\n        // max_early_data_size parameter) the amount of TLS 0-RTT data the server is willing to accept. QUIC does not\n        // use TLS early data. QUIC uses 0-RTT packets to carry early data. 
Accordingly, the max_early_data_size\n        // parameter is repurposed to hold a sentinel value 0xffffffff to indicate that the server is willing to accept QUIC\n        // 0-RTT data. To indicate that the server does not accept 0-RTT data, the early_data extension is omitted from\n        // the NewSessionTicket. The amount of data that the client can send in QUIC 0-RTT is controlled by the\n        // initial_max_data transport parameter supplied by the server.\n        self.tls_config.max_early_data_size = 0xffffffff;\n        self\n    }\n\n    /// Start listening for incoming connections.\n    ///\n    /// The `backlog` parameter has the same meaning as the `backlog` parameter of the UNIX `listen` function:\n    /// it is the maximum number of pending connections that can be queued.\n    /// If the queue is full, new initial packets may be dropped.\n    ///\n    /// Panics if `backlog` is 0.\n    pub fn listen(self, backlog: usize) -> Result<Arc<QuicListeners>, ListenError> {\n        assert!(backlog > 0, \"backlog must be greater than 0\");\n        debug_assert!(self.servers.is_empty());\n\n        let quic_router = self.network.quic_router.clone();\n\n        let quic_listeners = Arc::new(QuicListeners {\n            network: self.network,\n            servers: self.servers,\n            incomings: self.incomings,\n            backlog: Arc::new(Semaphore::new(backlog)),\n            _supported_versions: self.supported_versions,\n            token_provider: self.token_provider,\n            parameters: self.parameters,\n            anti_port_scan: self.anti_port_scan,\n            client_auther: self.client_auther,\n            tls_config: self.tls_config,\n            stream_strategy_factory: self.stream_strategy_factory,\n            defer_idle_timeout: self.defer_idle_timeout,\n            qlogger: self.qlogger,\n        });\n\n        // TODO: optimize init order\n        let listeners = quic_listeners.clone();\n        if 
!quic_router.on_connectless_packets(move |packet, way| {\n            listeners.try_accept_connection(packet, way);\n        }) {\n            return Err(ListenError::AlreadyRunning);\n        }\n\n        Ok(quic_listeners)\n    }\n}\n"
  },
  {
    "path": "dquic/tests/auth.rs",
    "content": "use std::{future::Future, sync::Arc, time::Duration};\n\nuse dquic::{\n    prelude::{handy::*, *},\n    qbase,\n    qresolve::Source,\n};\nuse qbase::param::ServerParameters;\nuse qconnection::qinterface::{bind_uri::BindUri, component::route::QuicRouter};\nuse rustls::{\n    pki_types::{CertificateDer, pem::PemObject},\n    server::WebPkiClientVerifier,\n};\nuse tokio::{\n    io::{AsyncReadExt, AsyncWriteExt},\n    time,\n};\nuse tokio_util::task::AbortOnDropHandle;\n\nmod common;\nuse common::*;\nmod echo_common;\nuse echo_common::*;\n\n#[test]\nfn client_without_verify() -> Result<(), BoxError> {\n    run(async {\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n\n        let client = {\n            let parameters = client_parameters();\n            let client = QuicClient::builder()\n                .with_router(router)\n                .without_verifier()\n                .with_parameters(parameters)\n                .without_cert()\n                .with_qlog(qlogger())\n                .enable_sslkeylog()\n                .build();\n            Arc::new(client)\n        };\n\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        send_and_verify_echo(&connection, TEST_DATA).await?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\nstruct ClientNameAuther<const SILENT_REFUSE: bool>;\n\nimpl<const SILENT: bool> AuthClient for ClientNameAuther<SILENT> {\n    fn verify_client_name(\n        &self,\n        _: &LocalAgent,\n        client_name: Option<&str>,\n    ) -> ClientNameVerifyResult {\n        match matches!(client_name, Some(\"client\")) {\n            true => 
ClientNameVerifyResult::Accept,\n            false if !SILENT => ClientNameVerifyResult::Refuse(\"Client name mismatch\".to_owned()),\n            false => ClientNameVerifyResult::SilentRefuse(\"Client name mismatch\".to_owned()),\n        }\n    }\n\n    fn verify_client_agent(&self, _: &LocalAgent, _: &RemoteAgent) -> ClientAgentVerifyResult {\n        ClientAgentVerifyResult::Accept\n    }\n}\n\nasync fn launch_client_auth_test_server<const SILENT_REFUSE: bool>(\n    quic_router: Arc<QuicRouter>,\n    server_parameters: ServerParameters,\n) -> Result<(Arc<QuicListeners>, impl Future<Output: Send>), BoxError> {\n    let mut roots = rustls::RootCertStore::empty();\n    roots.add_parsable_certificates(CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap));\n    let listeners = QuicListeners::builder()\n        .with_router(quic_router)\n        .with_client_cert_verifier(\n            WebPkiClientVerifier::builder(Arc::new(roots))\n                .build()\n                .unwrap(),\n        )\n        .with_client_auther(ClientNameAuther::<SILENT_REFUSE>)\n        .with_parameters(server_parameters)\n        .with_qlog(qlogger())\n        .listen(128)?;\n    listeners\n        .add_server(\n            \"localhost\",\n            SERVER_CERT,\n            SERVER_KEY,\n            [BindUri::from(\"inet://127.0.0.1:0\").alloc_port()],\n            None,\n        )\n        .await?;\n    Ok((listeners.clone(), serve_echo(listeners)))\n}\n\n#[test]\nfn auth_client_name() -> Result<(), BoxError> {\n    run(async {\n        const SILENT_REFUSE: bool = false;\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_client_auth_test_server::<SILENT_REFUSE>(router.clone(), server_parameters())\n                .await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n        let server_addr = get_server_addr(&listeners);\n\n        let client = {\n            let mut roots = 
rustls::RootCertStore::empty();\n            roots.add_parsable_certificates(\n                CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap),\n            );\n            let client = QuicClient::builder()\n                .with_router(router)\n                .with_root_certificates(roots)\n                .with_parameters(client_parameters())\n                .with_cert(CLIENT_CERT, CLIENT_KEY)\n                .with_name(\"client\")\n                .with_qlog(qlogger())\n                .enable_sslkeylog()\n                .build();\n\n            Arc::new(client)\n        };\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        send_and_verify_echo(&connection, TEST_DATA).await?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn auth_client_name_incorrect_name() -> Result<(), BoxError> {\n    run(async {\n        const SILENT_REFUSE: bool = false;\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_client_auth_test_server::<SILENT_REFUSE>(router.clone(), server_parameters())\n                .await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n        let server_addr = get_server_addr(&listeners);\n\n        let client = {\n            let mut roots = rustls::RootCertStore::empty();\n            roots.add_parsable_certificates(\n                CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap),\n            );\n            let client = QuicClient::builder()\n                .with_router(router)\n                .with_root_certificates(roots)\n                .with_parameters(client_parameters())\n                .with_cert(CLIENT_CERT, CLIENT_KEY)\n                .with_name(\"wrong_name\")\n                .with_qlog(qlogger())\n                .enable_sslkeylog()\n                .build();\n\n            
Arc::new(client)\n        };\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        let error = connection.terminated().await;\n        // TODO: occasionally ends with NoViablePath instead; the cause needs investigation\n        assert_eq!(error.kind(), ErrorKind::ConnectionRefused);\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn auth_client_refuse() -> Result<(), BoxError> {\n    run(async {\n        const SILENT_REFUSE: bool = false;\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_client_auth_test_server::<SILENT_REFUSE>(router.clone(), server_parameters())\n                .await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n        let server_addr = get_server_addr(&listeners);\n\n        let client = {\n            let parameters = client_parameters();\n            // no CLIENT_NAME\n\n            let mut roots = rustls::RootCertStore::empty();\n            roots.add_parsable_certificates(\n                CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap),\n            );\n            let client = QuicClient::builder()\n                .with_router(router)\n                .with_root_certificates(roots)\n                .with_parameters(parameters)\n                .with_cert(CLIENT_CERT, CLIENT_KEY)\n                .with_qlog(qlogger())\n                .enable_sslkeylog()\n                .build();\n\n            Arc::new(client)\n        };\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n\n        let error = connection.terminated().await;\n        // TODO: occasionally ends with NoViablePath instead; the cause needs investigation\n        assert_eq!(error.kind(), ErrorKind::ConnectionRefused);\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn auth_client_refuse_silently() -> Result<(), 
BoxError> {\n    run(async {\n        const SILENT_REFUSE: bool = true;\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_client_auth_test_server::<SILENT_REFUSE>(router.clone(), server_parameters())\n                .await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n        let server_addr = get_server_addr(&listeners);\n\n        let client = {\n            let parameters = client_parameters();\n            // no CLIENT_NAME\n\n            let mut roots = rustls::RootCertStore::empty();\n            roots.add_parsable_certificates(\n                CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap),\n            );\n            let client = QuicClient::builder()\n                .with_router(router)\n                .with_root_certificates(roots)\n                .with_parameters(parameters)\n                .with_cert(CLIENT_CERT, CLIENT_KEY)\n                .with_qlog(qlogger())\n                .enable_sslkeylog()\n                .build();\n\n            Arc::new(client)\n        };\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n\n        // Silent refuse means server doesn't send CCF, so client should either:\n        // 1. Timeout waiting for handshake\n        // 2. 
Fail with NoViablePath when path times out\n        let result = time::timeout(Duration::from_secs(1), connection.handshaked()).await;\n        match result {\n            Err(_timeout) => {}                                     // Expected: timeout\n            Ok(Err(e)) if e.kind() == ErrorKind::NoViablePath => {} // Also acceptable: path timeout\n            Ok(other) => panic!(\"Expected timeout or NoViablePath, got {:?}\", other),\n        }\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[derive(serde::Serialize, serde::Deserialize)]\nstruct Message {\n    data: Vec<u8>,\n    sign: Vec<u8>,\n}\n\nconst SIGNATURE_SCHEME: rustls::SignatureScheme = rustls::SignatureScheme::ECDSA_NISTP256_SHA256;\n\nasync fn send_and_verify_echo_with_sign_verify(\n    connection: &Connection,\n    data: &[u8],\n) -> Result<(), BoxError> {\n    let local_agent = connection.local_agent().await.unwrap().unwrap();\n    let remote_agent = connection.remote_agent().await.unwrap().unwrap();\n    let (_sid, (mut reader, mut writer)) = connection.open_bi_stream().await?.unwrap();\n    tracing::debug!(\"stream opened\");\n\n    let write = async {\n        let data = data.to_vec();\n        let sign = local_agent.sign(SIGNATURE_SCHEME, &data).unwrap();\n        let message = postcard::to_stdvec(&Message { data, sign }).unwrap();\n        writer.write_all(&message).await?;\n        writer.shutdown().await?;\n        tracing::info!(\"write done\");\n        Result::<(), BoxError>::Ok(())\n    };\n    let read = async {\n        let mut message = Vec::new();\n        reader.read_to_end(&mut message).await?;\n        let message: Message = postcard::from_bytes(&message).unwrap();\n        remote_agent\n            .verify(SIGNATURE_SCHEME, &message.data, &message.sign)\n            .unwrap();\n        assert_eq!(message.data, data);\n        tracing::info!(\"read done\");\n        Result::<(), BoxError>::Ok(())\n    };\n\n    tokio::try_join!(read, write).map(|_| 
())\n}\n\nasync fn echo_stream_with_sign_verify(\n    local_agent: LocalAgent,\n    remote_agent: RemoteAgent,\n    mut reader: StreamReader,\n    mut writer: StreamWriter,\n) {\n    let mut message = Vec::new();\n    reader.read_to_end(&mut message).await.unwrap();\n    let Message { data, sign } = postcard::from_bytes(&message).unwrap();\n    remote_agent.verify(SIGNATURE_SCHEME, &data, &sign).unwrap();\n    tracing::debug!(\"message received and verified\");\n\n    let sign = local_agent.sign(SIGNATURE_SCHEME, &data).unwrap();\n    let message = postcard::to_stdvec(&Message { data, sign }).unwrap();\n    writer.write_all(&message).await.unwrap();\n    writer.shutdown().await.unwrap();\n    tracing::debug!(\"signed echo sent\");\n}\n\npub async fn serve_echo_with_sign_verify(listeners: Arc<QuicListeners>) {\n    while let Ok((connection, server, pathway, _link)) = listeners.accept().await {\n        assert_eq!(server, \"localhost\");\n        let local_agent = connection.local_agent().await.unwrap().unwrap();\n        let remote_agent = connection.remote_agent().await.unwrap().unwrap();\n        tracing::info!(source = ?pathway.remote(),\"accepted new connection\");\n        tokio::spawn(async move {\n            while let Ok((_sid, (reader, writer))) = connection.accept_bi_stream().await {\n                tokio::spawn(echo_stream_with_sign_verify(\n                    local_agent.clone(),\n                    remote_agent.clone(),\n                    reader,\n                    writer,\n                ));\n            }\n        });\n    }\n}\n\nasync fn launch_echo_with_sign_verify_server(\n    quic_router: Arc<QuicRouter>,\n    parameters: ServerParameters,\n) -> Result<(Arc<QuicListeners>, impl Future<Output: Send>), BoxError> {\n    let mut roots = rustls::RootCertStore::empty();\n    roots.add_parsable_certificates(CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap));\n    let listeners = QuicListeners::builder()\n        
.with_router(quic_router)\n        .with_client_cert_verifier(\n            WebPkiClientVerifier::builder(Arc::new(roots))\n                .build()\n                .unwrap(),\n        )\n        .with_parameters(parameters)\n        .with_qlog(qlogger())\n        .listen(128)?;\n    listeners\n        .add_server(\n            \"localhost\",\n            SERVER_CERT,\n            SERVER_KEY,\n            [BindUri::from(\"inet://127.0.0.1:0\").alloc_port()],\n            None,\n        )\n        .await?;\n    Ok((listeners.clone(), serve_echo_with_sign_verify(listeners)))\n}\n\n#[test]\nfn sign_and_verify() -> Result<(), BoxError> {\n    run(async {\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_with_sign_verify_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n        let server_addr = get_server_addr(&listeners);\n\n        let client = {\n            let mut roots = rustls::RootCertStore::empty();\n            roots.add_parsable_certificates(\n                CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap),\n            );\n            let client = QuicClient::builder()\n                .with_router(router)\n                .with_root_certificates(roots)\n                .with_parameters(client_parameters())\n                .with_cert(CLIENT_CERT, CLIENT_KEY)\n                .with_name(\"client\")\n                .with_qlog(qlogger())\n                .enable_sslkeylog()\n                .build();\n\n            Arc::new(client)\n        };\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        send_and_verify_echo_with_sign_verify(&connection, TEST_DATA).await?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n"
  },
  {
    "path": "dquic/tests/common/mod.rs",
    "content": "// common is submod for both echo and auth tests\n#![allow(unused)]\n\nuse std::{\n    future::Future,\n    net::SocketAddr,\n    sync::{Arc, LazyLock, OnceLock},\n    time::Duration,\n};\n\nuse dquic::{\n    prelude::{handy::*, *},\n    qbase::{self, param::ClientParameters},\n    qinterface::{component::route::QuicRouter, io::IO},\n};\nuse qevent::telemetry::QLog;\nuse rustls::pki_types::{CertificateDer, pem::PemObject};\nuse tokio::time;\nuse tracing::level_filters::LevelFilter;\nuse tracing_appender::non_blocking::WorkerGuard;\nuse tracing_subscriber::{\n    Layer, prelude::__tracing_subscriber_SubscriberExt, util::SubscriberInitExt,\n};\n\npub fn qlogger() -> Arc<dyn QLog + Send + Sync> {\n    static QLOGGER: OnceLock<Arc<dyn QLog + Send + Sync>> = OnceLock::new();\n    QLOGGER.get_or_init(|| Arc::new(NoopLogger)).clone()\n}\n\npub type BoxError = Box<dyn std::error::Error + Send + Sync>;\n\npub fn run<F: Future>(future: F) -> F::Output {\n    static RT: LazyLock<tokio::runtime::Runtime> = LazyLock::new(|| {\n        tokio::runtime::Builder::new_multi_thread()\n            .enable_all()\n            .build()\n            .unwrap()\n    });\n\n    static TRACING: LazyLock<WorkerGuard> = LazyLock::new(|| {\n        let (non_blocking, guard) = tracing_appender::non_blocking(std::io::stdout());\n\n        tracing_subscriber::registry()\n            // .with(console_subscriber::spawn())\n            .with(\n                tracing_subscriber::fmt::layer()\n                    .with_writer(non_blocking)\n                    .with_file(true)\n                    .with_line_number(true)\n                    .with_filter(LevelFilter::DEBUG),\n            )\n            .with(tracing_subscriber::filter::filter_fn(|metadata| {\n                !metadata.target().contains(\"netlink_packet_route\")\n            }))\n            .init();\n        guard\n    });\n\n    RT.block_on(async move {\n        LazyLock::force(&TRACING);\n        match 
time::timeout(Duration::from_secs(60), future).await {\n            Ok(output) => output,\n            Err(_timedout) => panic!(\"test timed out\"),\n        }\n    })\n}\n\npub fn launch_test_client(\n    quic_router: Arc<QuicRouter>,\n    parameters: ClientParameters,\n) -> Arc<QuicClient> {\n    let mut roots = rustls::RootCertStore::empty();\n    roots.add_parsable_certificates(CertificateDer::pem_slice_iter(CA_CERT).map(Result::unwrap));\n    let client = QuicClient::builder()\n        .with_router(quic_router)\n        .with_root_certificates(roots)\n        .with_parameters(parameters)\n        .without_cert()\n        .with_qlog(qlogger())\n        .enable_sslkeylog()\n        .build();\n\n    Arc::new(client)\n}\n\npub fn get_server_addr(listeners: &QuicListeners) -> SocketAddr {\n    let localhost = listeners\n        .get_server(\"localhost\")\n        .expect(\"Server localhost must be registered\");\n    let localhost_bind_interface = localhost\n        .bind_interfaces()\n        .into_iter()\n        .next()\n        .map(|(_bind_uri, interface)| interface)\n        .expect(\"Server should bind at least one address\");\n    localhost_bind_interface\n        .borrow()\n        .bound_addr()\n        .expect(\"failed to get real addr\")\n}\n\npub const CA_CERT: &[u8] = include_bytes!(\"../../../tests/keychain/localhost/ca.cert\");\npub const SERVER_CERT: &[u8] = include_bytes!(\"../../../tests/keychain/localhost/server.cert\");\npub const SERVER_KEY: &[u8] = include_bytes!(\"../../../tests/keychain/localhost/server.key\");\npub const CLIENT_CERT: &[u8] = include_bytes!(\"../../../tests/keychain/localhost/client.cert\");\npub const CLIENT_KEY: &[u8] = include_bytes!(\"../../../tests/keychain/localhost/client.key\");\npub const TEST_DATA: &[u8] = include_bytes!(\"mod.rs\");\n"
  },
  {
    "path": "dquic/tests/echo.rs",
    "content": "use std::{sync::Arc, time::Duration};\n\nuse dquic::{\n    prelude::{handy::*, *},\n    qbase::param::{ClientParameters, ServerParameters},\n    qinterface::{bind_uri::BindUri, component::route::QuicRouter},\n    qresolve::Source,\n};\nuse tokio::task::JoinSet;\nuse tokio_util::task::AbortOnDropHandle;\nuse tracing::Instrument;\n\nmod common;\nuse common::*;\nmod echo_common;\nuse echo_common::*;\n\n#[test]\nfn single_stream() -> Result<(), BoxError> {\n    run(async {\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n        let client = launch_test_client(router, client_parameters());\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        send_and_verify_echo(&connection, TEST_DATA).await?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn single_big_stream() -> Result<(), BoxError> {\n    run(async {\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n        let client = launch_test_client(router, client_parameters());\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        // Use 16x repeat (~58KB) instead of 1024x (~3.7MB) for CI stability\n        send_and_verify_echo(&connection, &TEST_DATA.to_vec().repeat(16)).await?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn empty_stream() 
-> Result<(), BoxError> {\n    run(async {\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n        let client = launch_test_client(router, client_parameters());\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        send_and_verify_echo(&connection, b\"\").await?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn shutdown() -> Result<(), BoxError> {\n    run(async {\n        async fn serve_only_one_stream(listeners: Arc<QuicListeners>) {\n            while let Ok((connection, server, pathway, _link)) = listeners.accept().await {\n                assert_eq!(server, \"localhost\");\n                tracing::info!(source = ?pathway.remote(), \"accepted new connection\");\n                tokio::spawn(async move {\n                    let (_sid, (reader, writer)) = connection.accept_bi_stream().await?;\n                    echo_stream(reader, writer).await;\n                    _ = connection.close(\"Bye bye\", 0);\n                    Result::<(), BoxError>::Ok(())\n                });\n            }\n        }\n\n        let router = Arc::new(QuicRouter::default());\n        let listeners = QuicListeners::builder()\n            .with_router(router.clone())\n            .without_client_cert_verifier()\n            .with_parameters(server_parameters())\n            .with_qlog(qlogger())\n            .listen(128)?;\n        listeners\n            .add_server(\n                \"localhost\",\n                SERVER_CERT,\n                SERVER_KEY,\n                [BindUri::from(\"inet://127.0.0.1:0\").alloc_port()],\n                None,\n            )\n            .await?;\n        
let server_task = serve_only_one_stream(listeners.clone());\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n\n        let client = launch_test_client(router, client_parameters());\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        _ = connection.handshaked().await; // optional: waiting for the handshake is not strictly required\n\n        assert!(\n            send_and_verify_echo(&connection, b\"\").await.is_err()\n                || send_and_verify_echo(&connection, b\"\").await.is_err()\n        );\n\n        connection.terminated().await;\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn idle_timeout() -> Result<(), BoxError> {\n    run(async {\n        fn server_parameters() -> ServerParameters {\n            let mut params = handy::server_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(1))\n                .expect(\"unreachable\");\n\n            params\n        }\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n\n        let client = launch_test_client(router, client_parameters());\n        let connection = client\n            .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n            .await?;\n        connection.terminated().await;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn double_connections() -> Result<(), BoxError> {\n    run(async {\n        // Use extended timeouts for parallel connection tests on slower CI\n        fn client_parameters() -> ClientParameters {\n            let mut params = 
handy::client_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(60))\n                .expect(\"unreachable\");\n            params\n        }\n\n        fn server_parameters() -> ServerParameters {\n            let mut params = handy::server_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(60))\n                .expect(\"unreachable\");\n            params\n        }\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n        let client = launch_test_client(router, client_parameters());\n\n        let mut connections = JoinSet::new();\n\n        for conn_idx in 0..2 {\n            let connection = client\n                .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n                .await?;\n            connections.spawn(\n                async move { send_and_verify_echo(&connection, TEST_DATA).await }\n                    .instrument(tracing::info_span!(\"stream\", conn_idx)),\n            );\n        }\n\n        connections\n            .join_all()\n            .await\n            .into_iter()\n            .collect::<Result<(), BoxError>>()?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\nconst PARALLEL_ECHO_CONNS: usize = 3;\nconst PARALLEL_ECHO_STREAMS: usize = 2;\n\n#[test]\nfn parallel_stream() -> Result<(), BoxError> {\n    run(async {\n        fn client_parameters() -> ClientParameters {\n            let mut params = handy::client_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(60))\n                .expect(\"unreachable\");\n            params\n        }\n\n        fn 
server_parameters() -> ServerParameters {\n            let mut params = handy::server_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(60))\n                .expect(\"unreachable\");\n            params\n        }\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n        let client = launch_test_client(router, client_parameters());\n\n        let mut streams = JoinSet::new();\n\n        for conn_idx in 0..PARALLEL_ECHO_CONNS {\n            tracing::info!(conn_idx, \"Starting connection\");\n            let connection = Arc::new(\n                client\n                    .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n                    .await?,\n            );\n            tracing::info!(conn_idx, \"Connected\");\n            for stream_idx in 0..PARALLEL_ECHO_STREAMS {\n                let connection = connection.clone();\n                streams.spawn(\n                    async move { send_and_verify_echo(&connection, TEST_DATA).await }\n                        .instrument(tracing::info_span!(\"stream\", conn_idx, stream_idx)),\n                );\n            }\n        }\n\n        streams\n            .join_all()\n            .await\n            .into_iter()\n            .collect::<Result<(), BoxError>>()?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn parallel_big_stream() -> Result<(), BoxError> {\n    run(async {\n        fn client_parameters() -> ClientParameters {\n            let mut params = handy::client_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(60))\n                .expect(\"unreachable\");\n            
params\n        }\n\n        fn server_parameters() -> ServerParameters {\n            let mut params = handy::server_parameters();\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(60))\n                .expect(\"unreachable\");\n            params\n        }\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n\n        let client = launch_test_client(router, client_parameters());\n\n        let mut big_streams = JoinSet::new();\n        // Use 4x repeat (~14KB per connection) instead of 32x (~117KB) for CI stability\n        let test_data = Arc::new(TEST_DATA.to_vec().repeat(4));\n\n        for conn_idx in 0..PARALLEL_ECHO_CONNS {\n            let connection = client\n                .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n                .await?;\n            let test_data = test_data.clone();\n            big_streams.spawn(\n                async move { send_and_verify_echo(&connection, &test_data).await }\n                    .instrument(tracing::info_span!(\"stream\", conn_idx)),\n            );\n        }\n\n        big_streams\n            .join_all()\n            .await\n            .into_iter()\n            .collect::<Result<(), BoxError>>()?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n\n#[test]\nfn limited_streams() -> Result<(), BoxError> {\n    run(async {\n        pub fn client_parameters() -> ClientParameters {\n            let mut params = ClientParameters::default();\n\n            for (id, value) in [\n                (ParameterId::InitialMaxStreamsBidi, 2u32),\n                (ParameterId::InitialMaxStreamsUni, 0u32),\n                (ParameterId::InitialMaxData, 1u32 << 10),\n              
  (ParameterId::InitialMaxStreamDataBidiLocal, 1u32 << 10),\n                (ParameterId::InitialMaxStreamDataBidiRemote, 1u32 << 10),\n                (ParameterId::InitialMaxStreamDataUni, 1u32 << 10),\n            ] {\n                params.set(id, value).expect(\"unreachable\");\n            }\n\n            params\n        }\n\n        pub fn server_parameters() -> ServerParameters {\n            let mut params = ServerParameters::default();\n\n            for (id, value) in [\n                (ParameterId::InitialMaxStreamsBidi, 2u32),\n                (ParameterId::InitialMaxStreamsUni, 2u32),\n                (ParameterId::InitialMaxData, 1u32 << 20),\n                (ParameterId::InitialMaxStreamDataBidiLocal, 1u32 << 10),\n                (ParameterId::InitialMaxStreamDataBidiRemote, 1u32 << 10),\n                (ParameterId::InitialMaxStreamDataUni, 1u32 << 10),\n            ] {\n                params.set(id, value).expect(\"unreachable\");\n            }\n            params\n                .set(ParameterId::MaxIdleTimeout, Duration::from_secs(30))\n                .expect(\"unreachable\");\n\n            params\n        }\n\n        let router = Arc::new(QuicRouter::default());\n        let (listeners, server_task) =\n            launch_echo_server(router.clone(), server_parameters()).await?;\n        let _server_task = AbortOnDropHandle::new(tokio::spawn(server_task));\n\n        let server_addr = get_server_addr(&listeners);\n        let client = launch_test_client(router, client_parameters());\n\n        let mut streams = JoinSet::new();\n\n        for conn_idx in 0..PARALLEL_ECHO_CONNS / 2 {\n            let connection = Arc::new(\n                client\n                    .connected_to_with_source(\"localhost\", [(Source::System, server_addr.into())])\n                    .await?,\n            );\n            for stream_idx in 0..PARALLEL_ECHO_STREAMS / 2 {\n                let connection = connection.clone();\n                
streams.spawn(\n                    async move { send_and_verify_echo(&connection, TEST_DATA).await }\n                        .instrument(tracing::info_span!(\"stream\", conn_idx, stream_idx)),\n                );\n            }\n        }\n\n        streams\n            .join_all()\n            .await\n            .into_iter()\n            .collect::<Result<(), BoxError>>()?;\n\n        listeners.shutdown();\n        Ok(())\n    })\n}\n"
  },
  {
    "path": "dquic/tests/echo_common/mod.rs",
    "content": "// common is submod for echo, auth and traversal\n#![allow(unused)]\n\nuse std::sync::Arc;\n\nuse dquic::{prelude::*, qbase::param::ServerParameters, qinterface::component::route::QuicRouter};\nuse tokio::io::{self, AsyncReadExt, AsyncWriteExt};\n\nuse crate::common::{BoxError, SERVER_CERT, SERVER_KEY, qlogger};\n\npub async fn echo_stream(mut reader: StreamReader, mut writer: StreamWriter) {\n    io::copy(&mut reader, &mut writer).await.unwrap();\n    _ = writer.shutdown().await;\n    tracing::debug!(\"stream copy done\");\n}\n\npub async fn serve_echo(listeners: Arc<QuicListeners>) {\n    while let Ok((connection, server, pathway, _link)) = listeners.accept().await {\n        assert_eq!(server, \"localhost\");\n        tracing::info!(source = ?pathway.remote(), \"accepted new connection\");\n        tokio::spawn(async move {\n            while let Ok((_sid, (reader, writer))) = connection.accept_bi_stream().await {\n                tokio::spawn(echo_stream(reader, writer));\n            }\n        });\n    }\n}\n\npub async fn send_and_verify_echo(connection: &Connection, data: &[u8]) -> Result<(), BoxError> {\n    let (_sid, (mut reader, mut writer)) = connection.open_bi_stream().await?.unwrap();\n    tracing::debug!(\"stream opened\");\n\n    let mut back = Vec::new();\n    tokio::try_join!(\n        async {\n            writer.write_all(data).await?;\n            writer.shutdown().await?;\n            tracing::info!(\"write done\");\n            Result::<(), BoxError>::Ok(())\n        },\n        async {\n            reader.read_to_end(&mut back).await?;\n            assert_eq!(back, data);\n            tracing::info!(\"read done\");\n            Result::<(), BoxError>::Ok(())\n        }\n    )\n    .map(|_| ())\n}\n\npub async fn launch_echo_server(\n    quic_router: Arc<QuicRouter>,\n    parameters: ServerParameters,\n) -> Result<(Arc<QuicListeners>, impl Future<Output: Send>), BoxError> {\n    let listeners = QuicListeners::builder()\n       
 .with_router(quic_router)\n        .without_client_cert_verifier()\n        .with_parameters(parameters)\n        .with_qlog(qlogger())\n        .listen(128)\n        .unwrap();\n    listeners\n        .add_server(\n            \"localhost\",\n            SERVER_CERT,\n            SERVER_KEY,\n            [BindUri::from(\"inet://127.0.0.1:0\").alloc_port()],\n            None,\n        )\n        .await?;\n    Ok((listeners.clone(), serve_echo(listeners)))\n}\n"
  },
  {
    "path": "dquic/tests/traversal.rs",
    "content": "use std::{\n    collections::HashMap,\n    io,\n    net::SocketAddr,\n    sync::{Arc, LazyLock},\n    time::Duration,\n};\n\nuse dquic::{\n    prelude::{handy::*, *},\n    qinterface::{component::location::Locations, manager::InterfaceManager},\n    qresolve::Source,\n    qtraversal::nat::client::{NatType, StunClientsComponent},\n};\nuse futures::{\n    FutureExt,\n    future::{BoxFuture, Shared},\n};\nuse rustls::RootCertStore;\nuse tokio::task::JoinSet;\nuse tracing::{info, warn};\n\nmod common;\nuse common::*;\nmod echo_common;\nuse echo_common::*;\n\n#[derive(Debug, Clone, Copy)]\npub struct TestCase {\n    pub bind_addr: &'static str,\n    pub outer_addr: &'static str,\n    pub nat_type: NatType,\n}\n\npub const STUN_SERVERS: &str = \"10.10.0.64:20002\";\n\npub const CASES: [TestCase; 10] = [\n    TestCase {\n        bind_addr: \"192.168.0.98:6001\",\n        outer_addr: \"10.10.0.98:6001\",\n        nat_type: NatType::FullCone,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.96:6002\",\n        outer_addr: \"10.10.0.96:6002\",\n        nat_type: NatType::RestrictedCone,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.88:6003\",\n        outer_addr: \"10.10.0.88:6003\",\n        nat_type: NatType::RestrictedPort,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.86:6004\",\n        outer_addr: \"10.10.0.86:6004\",\n        nat_type: NatType::Dynamic,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.84:6005\",\n        outer_addr: \"10.10.0.84:6005\",\n        nat_type: NatType::Symmetric,\n    },\n    // server\n    TestCase {\n        bind_addr: \"172.16.0.48:6006\",\n        outer_addr: \"10.10.0.48:6006\",\n        nat_type: NatType::FullCone,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.46:6007\",\n        outer_addr: \"10.10.0.46:6007\",\n        nat_type: NatType::RestrictedCone,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.38:6008\",\n        outer_addr: \"10.10.0.38:6008\",\n        
nat_type: NatType::RestrictedPort,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.36:6009\",\n        outer_addr: \"10.10.0.36:6009\",\n        nat_type: NatType::Dynamic,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.34:6010\",\n        outer_addr: \"10.10.0.34:6010\",\n        nat_type: NatType::Symmetric,\n    },\n];\n\nstatic CLIENT_CASES: LazyLock<HashMap<NatType, TestCase>> = LazyLock::new(|| {\n    CASES[0..5]\n        .iter()\n        .map(|case| (case.nat_type, *case))\n        .collect()\n});\n\nstatic SERVER_CASES: LazyLock<HashMap<NatType, TestCase>> = LazyLock::new(|| {\n    CASES[5..10]\n        .iter()\n        .map(|case| (case.nat_type, *case))\n        .collect()\n});\n\nmacro_rules! test_punch_matrix {\n    (async fn $test_name:ident = test_punch_case($client:expr, $server:expr) $($tt:tt)*) => {\n\n        #[test]\n        #[ignore]\n        fn $test_name() {\n            run(async move {\n                let span = tracing::info_span!(\n                    stringify!($test_name),\n                    client = stringify!($client),\n                    server = stringify!($server)\n                );\n                let _enter = span.enter();\n                test_punch_case($client, $server).await\n            });\n        }\n\n        test_punch_matrix!($($tt)*);\n    };\n    () => {}\n}\n\n/*\n    // on the host:\n    sudo docker buildx build -f qtraversal/tools/dockerfile -t dquic-traversal-test:latest .\n    sudo docker run -it --rm --privileged -v .:/dquic dquic-traversal-test:latest\n\n    // in the container:\n    cd /dquic && ./qtraversal/tools/run_stun.sh\n    ip netns exec nsa cargo test --test traversal -- --include-ignored --nocapture\n*/\n\ntest_punch_matrix! 
{\n    async fn test_punch_full_cone_to_full_cone = test_punch_case(NatType::FullCone, NatType::FullCone)\n    async fn test_punch_full_cone_to_restricted_cone = test_punch_case(NatType::FullCone, NatType::RestrictedCone)\n    async fn test_punch_full_cone_to_port_restricted = test_punch_case(NatType::FullCone, NatType::RestrictedPort)\n    async fn test_punch_full_cone_to_dynamic = test_punch_case(NatType::FullCone, NatType::Dynamic)\n    async fn test_punch_full_cone_to_symmetric = test_punch_case(NatType::FullCone, NatType::Symmetric)\n    async fn test_punch_restricted_cone_to_full_cone = test_punch_case(NatType::RestrictedCone, NatType::FullCone)\n    async fn test_punch_restricted_cone_to_restricted_cone = test_punch_case(NatType::RestrictedCone, NatType::RestrictedCone)\n    async fn test_punch_restricted_cone_to_port_restricted = test_punch_case(NatType::RestrictedCone, NatType::RestrictedPort)\n    async fn test_punch_restricted_cone_to_dynamic = test_punch_case(NatType::RestrictedCone, NatType::Dynamic)\n    async fn test_punch_restricted_cone_to_symmetric = test_punch_case(NatType::RestrictedCone, NatType::Symmetric)\n    async fn test_punch_port_restricted_to_full_cone = test_punch_case(NatType::RestrictedPort, NatType::FullCone)\n    async fn test_punch_port_restricted_to_restricted_cone = test_punch_case(NatType::RestrictedPort, NatType::RestrictedCone)\n    async fn test_punch_port_restricted_to_port_restricted = test_punch_case(NatType::RestrictedPort, NatType::RestrictedPort)\n    async fn test_punch_port_restricted_to_dynamic = test_punch_case(NatType::RestrictedPort, NatType::Dynamic)\n    async fn test_punch_port_restricted_to_symmetric = test_punch_case(NatType::RestrictedPort, NatType::Symmetric)\n    async fn test_punch_dynamic_to_full_cone = test_punch_case(NatType::Dynamic, NatType::FullCone)\n    async fn test_punch_dynamic_to_restricted_cone = test_punch_case(NatType::Dynamic, NatType::RestrictedCone)\n    async fn 
test_punch_dynamic_to_port_restricted = test_punch_case(NatType::Dynamic, NatType::RestrictedPort)\n    async fn test_punch_dynamic_to_dynamic = test_punch_case(NatType::Dynamic, NatType::Dynamic)\n    async fn test_punch_dynamic_to_symmetric = test_punch_case(NatType::Dynamic, NatType::Symmetric)\n    async fn test_punch_symmetric_to_full_cone = test_punch_case(NatType::Symmetric, NatType::FullCone)\n    async fn test_punch_symmetric_to_restricted_cone = test_punch_case(NatType::Symmetric, NatType::RestrictedCone)\n    async fn test_punch_symmetric_to_port_restricted = test_punch_case(NatType::Symmetric, NatType::RestrictedPort)\n    async fn test_punch_symmetric_to_dynamic = test_punch_case(NatType::Symmetric, NatType::Dynamic)\n    async fn test_punch_symmetric_to_symmetric = test_punch_case(NatType::Symmetric, NatType::Symmetric)\n}\n\nasync fn launch_stun_test_server(server_case: TestCase) -> Arc<QuicListeners> {\n    let server_addr: SocketAddr = server_case.bind_addr.parse().unwrap();\n    let locations = Arc::new(Locations::new());\n    let listeners = QuicListeners::builder()\n        .with_parameters(server_parameters())\n        .without_client_cert_verifier()\n        .with_stun(STUN_SERVERS)\n        .with_router(Arc::default())\n        .with_locations(locations)\n        .with_qlog(qlogger())\n        .listen(1000)\n        .unwrap();\n\n    listeners\n        .add_server(\"localhost\", SERVER_CERT, SERVER_KEY, [server_addr], None)\n        .await\n        .unwrap();\n\n    info!(\"Server listening on {server_addr}\");\n\n    tokio::spawn(serve_echo(listeners.clone()));\n\n    listeners\n}\n\nstatic SERVERS: LazyLock<HashMap<NatType, Shared<BoxFuture<Arc<QuicListeners>>>>> =\n    LazyLock::new(|| {\n        SERVER_CASES\n            .values()\n            .map(|case| {\n                let server = launch_stun_test_server(*case).boxed().shared();\n                (case.nat_type, server)\n            })\n            .collect()\n    });\n\nasync fn 
launch_stun_test_client(client_case: TestCase) -> Arc<QuicClient> {\n    let client_addr: SocketAddr = client_case.bind_addr.parse().unwrap();\n\n    let mut roots = RootCertStore::empty();\n    roots.add_parsable_certificates(CA_CERT.to_certificate());\n\n    let locations = Arc::new(Locations::new());\n    let client = QuicClient::builder()\n        .with_root_certificates(roots)\n        .without_cert()\n        .enable_sslkeylog()\n        .with_parameters(client_parameters())\n        .with_stun(STUN_SERVERS)\n        .with_locations(locations)\n        .bind([client_addr])\n        .await\n        .with_qlog(qlogger())\n        .build();\n\n    info!(\"Client bound on {client_addr}\");\n\n    Arc::new(client)\n}\n\nstatic CLIENTS: LazyLock<HashMap<NatType, Shared<BoxFuture<Arc<QuicClient>>>>> =\n    LazyLock::new(|| {\n        CLIENT_CASES\n            .values()\n            .map(|case| {\n                let client = launch_stun_test_client(*case).boxed().shared();\n                (case.nat_type, client)\n            })\n            .collect()\n    });\n\nasync fn test_punch_case(client_nat: NatType, server_nat: NatType) {\n    let client_case = CLIENT_CASES[&client_nat];\n    let server_case = SERVER_CASES[&server_nat];\n\n    info!(\"Testing punch case: client {client_nat:?} <-> server {server_nat:?}\");\n\n    if client_nat == NatType::Dynamic || server_nat == NatType::Dynamic {\n        warn!(\"Skipping Dynamic NAT test case\");\n        // TODO: the Dynamic NAT simulation is broken\n        return;\n    }\n    if client_nat == NatType::Symmetric && server_nat == NatType::Symmetric {\n        warn!(\"Skipping Symmetric NAT to Symmetric NAT test case\");\n        // Symmetric-to-Symmetric NAT traversal cannot succeed\n        return;\n    }\n\n    let _server = SERVERS[&server_nat].clone().await;\n    let server_iface = InterfaceManager::global()\n        .borrow(&(server_case.bind_addr.parse::<SocketAddr>().unwrap().into()))\n        .unwrap();\n\n    let server_ep = 
get_stun_data(server_iface).await[0].0;\n    launch_client(client_case, server_ep).await;\n}\n\nasync fn get_stun_data(server_iface: dquic::qinterface::Interface) -> Vec<(EndpointAddr, NatType)> {\n    let mut outer_addresses = server_iface\n        .with_component(|clients: &StunClientsComponent| {\n            clients.with_clients(|clients| {\n                // workaround for clippy issue: https://github.com/rust-lang/rust-clippy/issues/16428\n                #[allow(clippy::redundant_iter_cloned)]\n                clients\n                    .values()\n                    .cloned()\n                    .map(|client| async move {\n                        let agent = client.agent_addr();\n                        let outer = client.outer_addr().await?;\n                        let ep = EndpointAddr::with_agent(agent, outer);\n                        let nat_type = client.nat_type().await?;\n                        io::Result::Ok((ep, nat_type))\n                    })\n                    .collect::<JoinSet<_>>()\n            })\n        })\n        .expect(\"interface rebound too quickly\")\n        .expect(\"traversal components missing\");\n    let mut results = vec![];\n\n    while let Some(join_result) = outer_addresses.join_next().await {\n        let result = join_result.expect(\"stun task panicked\");\n        let data = result.expect(\"failed to detect outer addr or NAT type\");\n        results.push(data);\n    }\n    results\n}\n\nasync fn launch_client(client_case: TestCase, server_ep: EndpointAddr) {\n    let client = CLIENTS[&client_case.nat_type].clone().await;\n\n    get_stun_data(\n        InterfaceManager::global()\n            .borrow(&client_case.bind_addr.parse::<SocketAddr>().unwrap().into())\n            .unwrap(),\n    )\n    .await;\n\n    // No binding is performed here, so this will not fail.\n    let connection = client\n        .connected_to_with_source(\"localhost\", [(Source::System, server_ep)])\n        .await\n        .unwrap();\n    let odcid = 
connection.origin_dcid().expect(\"connection failed\");\n    tracing::info!(%odcid, \"connected to server\");\n    let test_data = Arc::new(TEST_DATA.to_vec());\n\n    // Poll for a direct path once per second.\n    // If no direct path exists yet, run an echo test to verify the connection is alive.\n    // The overall timeout is enforced by the 60s limit in run().\n    loop {\n        // Check whether a direct path has been established\n        let paths = connection\n            .path_context()\n            .expect(\"connection failed\")\n            .paths::<Vec<_>>()\n            .into_iter()\n            .map(|(p, _)| p)\n            .collect::<Vec<_>>();\n\n        let has_direct = paths\n            .iter()\n            .any(|pathway| matches!(pathway.local(), EndpointAddr::Direct { .. }));\n\n        if has_direct {\n            tracing::info!(\"Direct path established: {:?}\", paths);\n            return;\n        }\n\n        // No direct path yet; run an echo test to make sure the connection still works\n        tracing::debug!(\"no direct path yet, verifying connection with echo test\");\n        send_and_verify_echo(&connection, &test_data)\n            .await\n            .expect(\"echo test failed\");\n\n        // Wait 1 second before checking again\n        tokio::time::sleep(Duration::from_secs(1)).await;\n    }\n}\n\npub type Error = Box<dyn std::error::Error + Send + Sync>;\n\n#[test]\nfn test_knock_ttl_is_1_in_tests() {\n    assert_eq!(dquic::qtraversal::punch::puncher::KNOCK_TTL, 1);\n}\n"
  },
  {
    "path": "h3-shim/Cargo.toml",
    "content": "[package]\nname = \"h3-shim\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"Shim library between dquic and h3\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\nautoexamples = false\n\n[dependencies]\nh3 = { workspace = true }\nh3-datagram = { workspace = true, optional = true }\nbytes = { workspace = true }\ndashmap = { workspace = true }\nfutures = { workspace = true }\ndquic = { workspace = true }\ntokio = { workspace = true }\n\n[features]\ndatagram = [\"dep:h3-datagram\", \"dquic/datagram\"]\ntelemetry = [\"dquic/telemetry\"]\n\n[dev-dependencies]\nbase64 = \"0.22\"\nclap = { workspace = true, features = [\"derive\"] }\ncrossterm = { version = \"0.29\", features = [\"events\", \"event-stream\"] }\nhttp = { workspace = true }\nindicatif = { workspace = true }\nlibc = \"0.2\"\nqevent = { workspace = true, features = [\"telemetry\"] }\nrustls = { workspace = true, features = [\"logging\", \"ring\"] }\nrustls-native-certs = { workspace = true }\nrpassword = \"7.3\"\nserde = { workspace = true }\nserde_json = { workspace = true }\ntokio = { workspace = true, features = [\"io-std\", \"fs\", \"rt-multi-thread\"] }\ntracing = { workspace = true }\ntracing-appender = { workspace = true }\n\n# console-subscriber = \"0.4\"\n\n[dev-dependencies.tracing-subscriber]\nworkspace = true\nfeatures = [\"env-filter\", \"time\"]\n\n[[example]]\nname = \"h3-server\"\n\n[[example]]\nname = \"h3-client\"\n"
  },
  {
    "path": "h3-shim/examples/README.md",
    "content": "# h3-shim tests\n\nThe keys used by these tests come from <https://github.com/hyperium/h3/tree/master/examples>; the source code of `h3-server.rs` and `h3-client.rs` is likewise adapted from those examples.\n\nYou can also sign your own keys and point the server/client at them via command-line arguments.\n\n> We also maintain a [fork](https://github.com/genmeta/reqwest/tree/dquic) of reqwest whose QUIC implementation has been replaced with dquic. For a reqwest-based client example, see [this gist](https://gist.github.com/ealinmen/ed79f3bf95fa91e9475484560fb2744e).\n\nBefore running, we recommend setting the environment variable `RUST_LOG=info` to get more log output.\n\n```shell\n# Not required, but recommended\nexport RUST_LOG=info\n```\n\n## Running\n\nAll required command-line arguments are preset; you can also check `--help` and supply your own.\n\n`cd` into the `dquic` directory and run the following commands:\n\n```shell\ncd path/to/dquic\n# Start the server. By default it loads a self-signed certificate for localhost, so requests must go through localhost.\n# The server listens on both [127.0.0.1:4433, [::1]:4433] by default; make sure your machine supports IPv6.\n# If it does not, bind the listen addresses manually with the -b flag.\ncargo run --example=h3-server --package=h3-shim -- --dir=./h3-shim\n# Start the client\ncargo run --example=h3-client --package=h3-shim -- https://localhost:4433/examples/h3-server.rs --keylog\n```\n\nBy default the client sends a GET request to `https://localhost:4433/Cargo.lock`; you can change the requested URL via command-line arguments.\n\nFor example, the following makes the client send a GET request to `https://localhost:4433/examples/server.rs`:\n\n```shell\ncargo run --example=h3-client --package=h3-shim -- https://localhost:4433/examples/server.rs\n```\n\nYou can also set the server's root directory or change the bound address:\n\n```shell\n# Set the serving root directory\ncargo run --example=h3-server --package=h3-shim -- --dir=/path/to/www\n# Change the bound address and port\ncargo run --example=h3-server --package=h3-shim -- -l=127.0.0.1:12345\n```\n\n## Troubleshooting\n\n### File not found\n\nIf you hit an error like\n\n```\nfailed to read CA certificate: Os { code: 2, kind: NotFound, message: \"No such file or directory\" }\nfailed to read certificate file: Os { code: 2, kind: NotFound, message: \"No such file or directory\" }\n```\n\nyou are not running from the `h3-shim` directory. Either change into the `h3-shim` directory and run again, or specify the paths of the certificate and key files via command-line arguments.\n\n### Cannot connect\n\nFirst, check that the IP and port you configured are correct.\n\nThe client and server use IPv6 by default. If localhost resolves to IPv4 on your machine, use the `-b` flag to make both the client and the server use IPv4 addresses:\n\n```shell\ncargo run --example=h3-server --package=h3-shim -- -b=127.0.0.1:0\ncargo run --example=h3-client --package=h3-shim -- -b=127.0.0.1\n```\n\n## Packet capture\n\nTo capture packets with Wireshark, set the `SSLKEYLOGFILE` environment variable and pass `--keylog` when starting the client to obtain a keylog file.\n\n```shell\nexport SSLKEYLOGFILE=<some path>\ncargo run --example=h3-client --package=h3-shim -- --keylog\n```\n\nThen open Wireshark and, under Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename, fill in the path of the keylog file above; Wireshark can then capture and decrypt the traffic.\n"
  },
  {
    "path": "h3-shim/examples/h3-client.rs",
    "content": "use std::{collections::HashMap, path::PathBuf, sync::Arc, time::Instant};\n\nuse clap::Parser;\nuse dquic::prelude::{\n    handy::{ToCertificate, client_parameters},\n    *,\n};\nuse http::{\n    Uri,\n    uri::{Authority, Parts, Scheme},\n};\nuse indicatif::{MultiProgress, ProgressBar, ProgressStyle};\nuse tokio::{\n    fs,\n    io::{AsyncWrite, AsyncWriteExt},\n    task::JoinSet,\n};\nuse tracing::Instrument;\nuse tracing_subscriber::prelude::*;\n\n#[derive(Parser, Clone)]\nstruct Options {\n    #[arg(long, help = \"Save the qlog to a dir\", value_name = \"PATH\")]\n    qlog: Option<PathBuf>,\n    #[arg(\n        long,\n        help = \"Certificate of CA who issues the server certificate\",\n        value_delimiter = ',',\n        default_value = \"tests/keychain/localhost/ca.cert\"\n    )]\n    roots: Vec<PathBuf>,\n    #[arg(\n        long,\n        default_value = \"false\",\n        help = \"Skip verification of server certificate\"\n    )]\n    skip_verify: bool,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        default_value = \"h3\",\n        help = \"ALPNs to use for the connection\"\n    )]\n    alpns: Vec<Vec<u8>>,\n    #[arg(\n        long,\n        short = 'p',\n        action = clap::ArgAction::Set,\n        help = \"enable progress bar\",\n        default_value = \"true\",\n        value_enum\n    )]\n    progress: bool,\n    #[arg(\n        long,\n        action = clap::ArgAction::Set,\n        help = \"enable ansi\",\n        default_value = \"true\",\n        value_enum\n    )]\n    ansi: bool,\n    #[arg(\n        long,\n        short = 'r',\n        help = \"number of requests per connection\",\n        default_value = \"1\"\n    )]\n    reqs: usize,\n    #[arg(\n        long,\n        short = 'c',\n        help = \"number of connections client initiates\",\n        default_value = \"1\"\n    )]\n    conns: usize,\n    #[arg(long, help = \"Save the response to a dir\", value_name = \"PATH\")]\n  
  save: Option<PathBuf>,\n    #[arg(\n        help = \"URI to request\",\n        value_delimiter = ',',\n        default_value = \"https://localhost:4433/Cargo.lock\"\n    )]\n    uris: Vec<Uri>,\n}\n\n#[tokio::main]\nasync fn main() {\n    let options = Options::parse();\n    let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::stdout());\n    tracing_subscriber::registry()\n        // .with(\n        //     console_subscriber::ConsoleLayer::builder()\n        //         .server_addr(\"127.0.0.1:6670\".parse::<SocketAddr>().unwrap())\n        //         .spawn(),\n        // )\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_writer(non_blocking)\n                .with_ansi(options.ansi)\n                .with_filter(\n                    tracing_subscriber::EnvFilter::builder()\n                        .with_default_directive(match options.progress {\n                            true => tracing::level_filters::LevelFilter::OFF.into(),\n                            false => tracing::level_filters::LevelFilter::INFO.into(),\n                        })\n                        .from_env_lossy(),\n                ),\n        )\n        .init();\n    if let Err(error) = run(options).await {\n        tracing::error!(?error);\n        std::process::exit(1);\n    };\n}\n\ntype Error = Box<dyn std::error::Error + Send + Sync>;\n\nasync fn run(options: Options) -> Result<(), Error> {\n    let qlogger: Arc<dyn qevent::telemetry::QLog + Send + Sync> = match options.qlog {\n        Some(dir) => Arc::new(handy::LegacySeqLogger::new(dir)),\n        None => Arc::new(handy::NoopLogger),\n    };\n\n    let client_builder = if options.skip_verify {\n        tracing::warn!(\"skip server verify\");\n        QuicClient::builder().without_verifier()\n    } else {\n        tracing::info!(\"load ca certs\");\n        let mut roots = rustls::RootCertStore::empty();\n        
roots.add_parsable_certificates(rustls_native_certs::load_native_certs().certs);\n        roots\n            .add_parsable_certificates(options.roots.iter().flat_map(|path| path.to_certificate()));\n        QuicClient::builder().with_root_certificates(roots)\n    };\n\n    let client = Arc::new(\n        client_builder\n            .with_qlog(qlogger)\n            .without_cert()\n            .with_parameters(client_parameters())\n            .with_alpns(options.alpns)\n            .enable_sslkeylog()\n            .build(),\n    );\n\n    let pbs = MultiProgress::new();\n    if !options.progress {\n        pbs.set_draw_target(indicatif::ProgressDrawTarget::hidden());\n    }\n    let conns_pb = pbs.add(ProgressBar::new(0).with_prefix(\"connections\").with_style(\n        ProgressStyle::with_template(\"{prefix} {wide_bar} {pos}/{len}\")?,\n    ));\n    let total_pb = pbs.add(ProgressBar::new(0).with_prefix(\"requests\").with_style(\n        ProgressStyle::with_template(\"{prefix} {wide_bar} {pos}/{len} {per_sec} {eta}\")?,\n    ));\n\n    let queries = options\n        .uris\n        .into_iter()\n        // group URIs by authority\n        .fold(HashMap::<_, Vec<_>>::new(), |mut uris, uri| {\n            let auth = uri.authority().expect(\"uri must have authority\");\n            uris.entry(auth.to_string())\n                .or_default()\n                .push(uri.path().to_owned());\n            uris\n        })\n        .into_iter()\n        // stress test: repeat the URIs to multiply the requests\n        .map(|(auth, uris)| {\n            let authority = auth.parse::<Authority>().unwrap();\n            let total_reqs = uris.len() * options.reqs;\n            let reqs = uris.into_iter().cycle().take(total_reqs);\n            (authority, reqs)\n        });\n\n    let start_time = Instant::now();\n    let mut connections = JoinSet::new();\n\n    for (authority, paths) in queries {\n        for _conn_idx in 0..options.conns {\n            conns_pb.inc_length(1);\n            
connections.spawn(download_files_with_progress(\n                client.clone(),\n                authority.clone(),\n                paths.clone(),\n                total_pb.clone(),\n                options.save.clone(),\n            ));\n        }\n    }\n\n    let mut success_queries = 0;\n    while let Some(res) = connections.join_next().await {\n        match res {\n            Ok(Ok(queries)) => {\n                tracing::info!(target: \"counting\", queries, \"connection finished\");\n                success_queries += queries;\n                conns_pb.inc(1);\n            }\n            Ok(Err(err)) => {\n                tracing::error!(target: \"counting\", error=?err, \"connection failed\");\n                conns_pb.dec_length(1);\n            }\n            Err(err) if err.is_panic() => std::panic::resume_unwind(err.into_panic()),\n            Err(err) => panic!(\"{err}\"),\n        }\n    }\n\n    conns_pb.finish();\n    total_pb.finish();\n\n    let total_time = start_time.elapsed().as_secs_f64();\n    let qps = success_queries as f64 / total_time;\n\n    tracing::info!(target: \"counting\", success_queries, total_time, qps, \"done!\");\n\n    Ok(())\n}\n\nasync fn download_files_with_progress(\n    client: Arc<QuicClient>,\n    authority: Authority,\n    paths: impl Iterator<Item = String>,\n    total_pb: ProgressBar,\n    save: Option<PathBuf>,\n) -> Result<usize, Error> {\n    let quic_connection = Arc::new(client.connect(authority.host()).await?);\n    let odcid = quic_connection.origin_dcid()?;\n    let span = tracing::info_span!(\"requests\", %odcid, host = authority.host());\n\n    let (mut connection, send_request) =\n        h3::client::new(h3_shim::QuicConnection::new(quic_connection.clone()))\n            .instrument(span.clone())\n            .await?;\n    tokio::spawn(async move { connection.wait_idle().await }.instrument(span.clone()));\n\n    let mut requests = JoinSet::new();\n    for path in paths {\n        
total_pb.inc_length(1);\n        let uri = {\n            let mut parts = Parts::default();\n            parts.scheme = Some(Scheme::HTTPS);\n            parts.authority = Some(authority.clone());\n            parts.path_and_query = Some(path.parse()?);\n            Uri::from_parts(parts)?\n        };\n\n        let save_to = save\n            .as_ref()\n            .map(|dir| dir.join(uri.path().strip_prefix('/').unwrap()));\n\n        let request = http::Request::builder().uri(uri).body(())?;\n        let mut send_request = send_request.clone();\n\n        requests.spawn(\n            async move {\n                let mut request_stream = send_request.send_request(request).await?;\n                request_stream.finish().await?;\n\n                let resp = request_stream.recv_response().await?;\n                if resp.status() != http::StatusCode::OK {\n                    return Err(format!(\"response status: {}\", resp.status()).into());\n                }\n\n                let mut save_to: Box<dyn AsyncWrite + Send + Unpin> = match save_to {\n                    Some(path) => Box::new(fs::File::create(path).await?),\n                    None => Box::new(tokio::io::sink()),\n                };\n\n                while let Some(mut data) = request_stream.recv_data().await? 
{\n                    save_to.write_all_buf(&mut data).await?;\n                }\n\n                Result::<(), Error>::Ok(())\n            }\n            .instrument(span.clone()),\n        );\n    }\n\n    let mut error = None;\n    let mut success_queries = 0;\n\n    tracing::info!(target: \"counting\", \"Waiting for {} requests to finish\", requests.len());\n    while let Some(res) = requests.join_next().await {\n        match res {\n            Ok(Ok(())) => {\n                tracing::warn!(target: \"counting\", \"Request success\");\n                success_queries += 1;\n                total_pb.inc(1);\n            }\n            Ok(Err(err)) => {\n                tracing::warn!(target: \"counting\", ?err, \"Request failed\");\n                total_pb.dec_length(1);\n                error = Some(err);\n            }\n            Err(err) if err.is_panic() => std::panic::resume_unwind(err.into_panic()),\n            Err(err) => panic!(\"{err}\"),\n        }\n    }\n\n    tracing::info!(target: \"counting\", success_queries, \"Requests completed\");\n    if success_queries != 0 {\n        Ok(success_queries)\n    } else {\n        Err(error.unwrap())\n    }\n}\n"
  },
  {
    "path": "h3-shim/examples/h3-server.rs",
    "content": "use std::{ops::Deref, path::PathBuf, sync::Arc};\n\nuse bytes::{Bytes, BytesMut};\nuse clap::Parser;\nuse dquic::{\n    prelude::*,\n    qinterface::{bind_uri::BindUri, io::IO},\n};\nuse h3::{quic::BidiStream, server::RequestStream};\nuse http::{Request, StatusCode};\nuse tokio::{fs::File, io::AsyncReadExt};\nuse tracing::level_filters::LevelFilter;\nuse tracing_subscriber::{EnvFilter, prelude::*};\n\n#[derive(Parser, Debug)]\n#[command(name = \"server\")]\nstruct Options {\n    #[arg(\n        name = \"dir\",\n        short,\n        long,\n        help = \"Root directory of the files to serve. \\\n                If omitted, server will respond OK.\",\n        default_value = \"./\"\n    )]\n    root: PathBuf,\n    #[arg(long, help = \"Save the qlog to a dir\", value_name = \"PATH\")]\n    qlog: Option<PathBuf>,\n    #[arg(\n        short,\n        long,\n        value_delimiter = ',',\n        default_values = [\"127.0.0.1:4433\", \"[::1]:4433\"],\n        help = \"BindUris to listen on for new connections\"\n    )]\n    listen: Vec<BindUri>,\n    #[arg(\n        long,\n        short,\n        value_delimiter = ',',\n        default_value = \"h3\",\n        help = \"ALPNs to use for the connection\"\n    )]\n    alpns: Vec<Vec<u8>>,\n    #[arg(\n        long,\n        short,\n        default_value = \"4096\",\n        help = \"Maximum number of connections in the backlog. 
\\\n                If the backlog is full, new connections will be refused.\"\n    )]\n    backlog: usize,\n    #[arg(\n        long,\n        action = clap::ArgAction::Set,\n        default_value = \"true\",\n        help = \"Enable ANSI color output in logs\"\n    )]\n    ansi: bool,\n    #[command(flatten)]\n    certs: Certs,\n}\n\n#[derive(Parser, Debug)]\nstruct Certs {\n    #[arg(long, short, default_value = \"localhost\", help = \"Server name.\")]\n    server_name: String,\n    #[arg(\n        long,\n        short,\n        default_value = \"tests/keychain/localhost/server.cert\",\n        help = \"Certificate for TLS. If present, `--key` is mandatory.\"\n    )]\n    cert: PathBuf,\n    #[arg(\n        long,\n        short,\n        default_value = \"tests/keychain/localhost/server.key\",\n        help = \"Private key for the certificate.\"\n    )]\n    key: PathBuf,\n}\n\nfn main() {\n    let options = Options::parse();\n    let (non_blocking, _guard) = tracing_appender::non_blocking(std::io::stdout());\n    tracing_subscriber::registry()\n        // .with(console_subscriber::spawn())\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_writer(non_blocking)\n                .with_ansi(options.ansi)\n                .with_filter(\n                    EnvFilter::builder()\n                        .with_default_directive(LevelFilter::INFO.into())\n                        .from_env_lossy(),\n                ),\n        )\n        .init();\n\n    // verify that tracing is working\n    tracing::info!(\"tracing initialized successfully\");\n\n    let rt = tokio::runtime::Builder::new_multi_thread()\n        .enable_all()\n        // tokio's default of 512 blocking threads exceeds the macOS ulimit\n        .max_blocking_threads(256)\n        .build()\n        .expect(\"failed to build tokio runtime\");\n\n    if let Err(error) = rt.block_on(run(options)) {\n        tracing::error!(?error);\n        std::process::exit(1);\n    }\n}\n\nasync fn run(options: Options) -> Result<(), 
Box<dyn std::error::Error + Send + Sync>> {\n    tracing::info!(\"Serving {}\", options.root.display());\n    let root = Arc::new(options.root);\n    if !root.is_dir() {\n        return Err(format!(\"{}: is not a readable directory\", root.display()).into());\n    }\n\n    let qlogger: Arc<dyn qevent::telemetry::QLog + Send + Sync> = match options.qlog {\n        Some(dir) => Arc::new(handy::LegacySeqLogger::new(dir)),\n        None => Arc::new(handy::NoopLogger),\n    };\n\n    let Certs {\n        server_name,\n        cert,\n        key,\n    } = options.certs;\n\n    let listeners = QuicListeners::builder()\n        .with_qlog(qlogger)\n        .without_client_cert_verifier()\n        .with_parameters(handy::server_parameters())\n        .with_alpns(options.alpns)\n        .listen(options.backlog)?;\n    listeners\n        .add_server(\n            server_name.as_str(),\n            cert.as_path(),\n            key.as_path(),\n            options.listen,\n            None,\n        )\n        .await?;\n    tracing::info!(\n        \"Listening on {}\",\n        listeners\n            .get_server(server_name.as_str())\n            .unwrap()\n            .bind_interfaces()\n            .iter()\n            .next()\n            .unwrap()\n            .1\n            .borrow()\n            .bound_addr()?\n    );\n\n    // handle incoming connections and requests\n    while let Ok((new_conn, _server, _pathway, _link)) = listeners.accept().await {\n        let h3_conn =\n            match h3::server::Connection::new(h3_shim::QuicConnection::new(Arc::new(new_conn)))\n                .await\n            {\n                Ok(h3_conn) => {\n                    tracing::info!(\"accept a new quic connection\");\n                    h3_conn\n                }\n                Err(error) => {\n                    tracing::error!(\"failed to establish h3 connection: {}\", error);\n                    continue;\n                }\n            };\n        let root = 
root.clone();\n        tokio::spawn(handle_connection(root, h3_conn));\n    }\n\n    Ok(())\n}\n\nasync fn handle_connection<T>(\n    serve_root: Arc<PathBuf>,\n    mut connection: h3::server::Connection<T, Bytes>,\n) where\n    T: h3::quic::Connection<Bytes> + 'static,\n    <T as h3::quic::OpenStreams<Bytes>>::BidiStream: h3::quic::BidiStream<Bytes> + Send + 'static,\n{\n    loop {\n        match connection.accept().await {\n            Ok(Some(request_resolver)) => {\n                let serve_root = serve_root.clone();\n                let handle_request = async move {\n                    let (request, stream) = request_resolver.resolve_request().await?;\n                    handle_request(request, stream, serve_root).await\n                };\n                tokio::spawn(async move {\n                    if let Err(e) = handle_request.await {\n                        tracing::error!(\"handling request failed: {}\", e);\n                    }\n                });\n            }\n            Ok(None) => break,\n            Err(..) 
=> break,\n        }\n    }\n}\n\n#[tracing::instrument(skip_all)]\nasync fn handle_request<T>(\n    request: Request<()>,\n    mut stream: RequestStream<T, Bytes>,\n    serve_root: Arc<PathBuf>,\n) -> Result<(), Box<dyn std::error::Error>>\nwhere\n    T: BidiStream<Bytes>,\n{\n    let (status, to_serve) = match serve_root.deref() {\n        _ if request.uri().path().contains(\"..\") => (StatusCode::NOT_FOUND, None),\n        root => {\n            let to_serve = root.join(request.uri().path().strip_prefix('/').unwrap_or(\"\"));\n            match File::open(&to_serve).await {\n                Ok(file) => (StatusCode::OK, Some(file)),\n                Err(e) => {\n                    tracing::error!(\"failed to open: \\\"{}\\\": {}\", to_serve.to_string_lossy(), e);\n                    (StatusCode::NOT_FOUND, None)\n                }\n            }\n        }\n    };\n\n    let resp = http::Response::builder().status(status).body(())?;\n    stream.send_response(resp).await?;\n\n    if let Some(mut file) = to_serve {\n        loop {\n            let mut buf = BytesMut::with_capacity(4096 * 10);\n            if file.read_buf(&mut buf).await? == 0 {\n                break;\n            }\n            stream.send_data(buf.freeze()).await?;\n        }\n    }\n\n    stream.finish().await?;\n    Ok(())\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_name() {}\n}\n"
  },
  {
    "path": "h3-shim/src/conn.rs",
"content": "use std::{\n    ops::Deref,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n};\n\nuse dquic::prelude::{Connection, StreamId, StreamReader, StreamWriter};\nuse futures::Stream;\nuse h3::quic::{ConnectionErrorIncoming, StreamErrorIncoming};\n\nuse crate::{\n    error::{self, convert_quic_error},\n    streams::{BidiStream, RecvStream, SendStream},\n};\n// Due to the nature of datagrams and of receive streams, QuicConnection must not be Clone.\npub struct QuicConnection {\n    connection: Arc<Connection>,\n    accept_bi: AcceptBiStreams,\n    accept_uni: AcceptUniStreams,\n    open_bi: OpenBiStreams,\n    open_uni: OpenUniStreams,\n}\n\nimpl Deref for QuicConnection {\n    type Target = Arc<Connection>;\n\n    fn deref(&self) -> &Self::Target {\n        &self.connection\n    }\n}\n\nimpl QuicConnection {\n    pub fn new(conn: Arc<Connection>) -> Self {\n        Self {\n            accept_bi: AcceptBiStreams::new(conn.clone()),\n            accept_uni: AcceptUniStreams::new(conn.clone()),\n            open_bi: OpenBiStreams::new(conn.clone()),\n            open_uni: OpenUniStreams::new(conn.clone()),\n            connection: conn,\n        }\n    }\n}\n\n/// First, QuicConnection must be able to actively open bidirectional and send streams, and to close the connection.\nimpl<B: bytes::Buf> h3::quic::OpenStreams<B> for QuicConnection {\n    type BidiStream = BidiStream<B>;\n\n    type SendStream = SendStream<B>;\n\n    #[inline]\n    fn poll_open_bidi(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Self::BidiStream, StreamErrorIncoming>> {\n        // The cost of the code below is that every call to open_bi_stream() creates a new closure implementing Future.\n        // It should actually be one and the same future; otherwise every poll re-runs every await point\n        // inside open_bi_stream() from scratch, which is incorrect.\n        // let mut fut = self.connection.open_bi_stream();\n        // let mut task = pin!(fut);\n        // let result = ready!(task.as_mut().poll_unpin(cx));\n        // let bi_stream = result\n        //     .and_then(|o| o.ok_or_else(sid_exceed_limit_error))\n        //     .map(|s| BidiStream::new(s))\n        //     .map_err(Into::into);\n        // 
Poll::Ready(bi_stream)\n\n        // The problem with the code below: it is not reentrant. Until the previous stream has been successfully opened and returned, nothing else may attempt to open a stream.\n        self.open_bi.poll_open(cx)\n\n        // The proper approach is to poll a fixed future returned by a single open_bi_stream() call associated with this poll_open_bidi.\n    }\n\n    #[inline]\n    fn poll_open_send(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Self::SendStream, StreamErrorIncoming>> {\n        self.open_uni.poll_open(cx)\n    }\n\n    #[inline]\n    fn close(&mut self, code: h3::error::Code, reason: &[u8]) {\n        let reason = unsafe { String::from_utf8_unchecked(reason.to_vec()) };\n        _ = self.connection.close(reason, code.into());\n    }\n}\n\n/// Second, QuicConnection must be able to accept incoming bidirectional and unidirectional streams.\n/// To implement `h3::quic::Connection`, `h3::quic::OpenStreams` must be implemented first.\nimpl<B: bytes::Buf> h3::quic::Connection<B> for QuicConnection {\n    type RecvStream = RecvStream;\n\n    type OpenStreams = OpenStreams;\n\n    #[inline]\n    fn poll_accept_recv(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Self::RecvStream, ConnectionErrorIncoming>> {\n        self.accept_uni.poll_accept(cx)\n    }\n\n    #[inline]\n    fn poll_accept_bidi(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Self::BidiStream, ConnectionErrorIncoming>> {\n        self.accept_bi.poll_accept(cx)\n    }\n\n    /// Why have this thing at all? It is superfluous.\n    /// If the value returned by opener() were only responsible for opening a single stream and not reusable,\n    /// so that opening another stream required calling opener() again, it would make more sense.\n    #[inline]\n    fn opener(&self) -> Self::OpenStreams {\n        OpenStreams::new(self.connection.clone())\n    }\n}\n\n/// Superfluous, truly superfluous.\npub struct OpenStreams {\n    connection: Arc<Connection>,\n    open_bi: OpenBiStreams,\n    open_uni: OpenUniStreams,\n}\n\nimpl OpenStreams {\n    fn new(conn: Arc<Connection>) -> Self {\n        Self {\n            open_bi: OpenBiStreams::new(conn.clone()),\n            open_uni: OpenUniStreams::new(conn.clone()),\n            connection: conn,\n        }\n    }\n}\n\nimpl Clone for OpenStreams {\n    fn clone(&self) -> Self {\n        Self {\n            open_bi: 
OpenBiStreams::new(self.connection.clone()),\n            open_uni: OpenUniStreams::new(self.connection.clone()),\n            connection: self.connection.clone(),\n        }\n    }\n}\n\n/// Same as the QuicConnection::poll_open_bidi() implementation; duplicated.\nimpl<B: bytes::Buf> h3::quic::OpenStreams<B> for OpenStreams {\n    type BidiStream = BidiStream<B>;\n\n    type SendStream = SendStream<B>;\n\n    #[inline]\n    fn poll_open_bidi(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Self::BidiStream, StreamErrorIncoming>> {\n        self.open_bi.poll_open(cx)\n    }\n\n    #[inline]\n    fn poll_open_send(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Self::SendStream, StreamErrorIncoming>> {\n        self.open_uni.poll_open(cx)\n    }\n\n    #[inline]\n    fn close(&mut self, code: h3::error::Code, reason: &[u8]) {\n        let reason = unsafe { String::from_utf8_unchecked(reason.to_vec()) };\n        _ = self.connection.close(reason, code.into());\n    }\n}\n\ntype BoxStream<T> = Pin<Box<dyn Stream<Item = T> + Send + Sync>>;\n\nfn sid_exceed_limit_error() -> ConnectionErrorIncoming {\n    ConnectionErrorIncoming::Undefined(Arc::from(Box::from(\n        \"the stream IDs in the `dir` direction exceed 2^60; this is extremely unlikely to happen.\",\n    )) as _)\n}\n\n#[allow(clippy::type_complexity)]\nstruct OpenBiStreams(\n    BoxStream<Result<(StreamId, (StreamReader, StreamWriter)), ConnectionErrorIncoming>>,\n);\n\nimpl OpenBiStreams {\n    fn new(conn: Arc<Connection>) -> Self {\n        let stream = futures::stream::unfold(conn, |conn| async {\n            let bidi = conn\n                .open_bi_stream()\n                .await\n                .map_err(convert_quic_error)\n                .and_then(|o| o.ok_or_else(sid_exceed_limit_error));\n            Some((bidi, conn))\n        });\n        Self(Box::pin(stream))\n    }\n\n    /// TODO: A `poll_open` implemented this way is not reentrant: if A and B try to open a stream at the same time,\n    /// 
only one can actually succeed; the later waker replaces the earlier waker registered in the stream, so the earlier waker can never be woken.\n    /// The same applies below.\n    fn poll_open<B>(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<BidiStream<B>, StreamErrorIncoming>> {\n        self.0\n            .as_mut()\n            .poll_next(cx)\n            .map(Option::unwrap)\n            .map_ok(|(sid, stream)| BidiStream::new(sid, stream))\n            .map_err(|e| StreamErrorIncoming::ConnectionErrorIncoming {\n                connection_error: e,\n            })\n    }\n}\n\nstruct OpenUniStreams(BoxStream<Result<(StreamId, StreamWriter), ConnectionErrorIncoming>>);\n\nimpl OpenUniStreams {\n    fn new(conn: Arc<Connection>) -> Self {\n        let stream = futures::stream::unfold(conn, |conn| async {\n            let send = conn\n                .open_uni_stream()\n                .await\n                .map_err(convert_quic_error)\n                .and_then(|o| o.ok_or_else(sid_exceed_limit_error));\n            Some((send, conn))\n        });\n        Self(Box::pin(stream))\n    }\n\n    fn poll_open<B>(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<SendStream<B>, StreamErrorIncoming>> {\n        self.0\n            .as_mut()\n            .poll_next(cx)\n            .map(Option::unwrap)\n            .map_ok(|(sid, writer)| SendStream::new(sid, writer))\n            .map_err(|e| StreamErrorIncoming::ConnectionErrorIncoming {\n                connection_error: e,\n            })\n    }\n}\n\n#[allow(clippy::type_complexity)]\nstruct AcceptBiStreams(\n    BoxStream<Result<(StreamId, (StreamReader, StreamWriter)), ConnectionErrorIncoming>>,\n);\n\nimpl AcceptBiStreams {\n    fn new(conn: Arc<Connection>) -> Self {\n        let stream = futures::stream::unfold(conn, |conn| async {\n            Some((\n                conn.accept_bi_stream()\n                    .await\n                    .map_err(error::convert_quic_error),\n                conn,\n            ))\n        });\n        
Self(Box::pin(stream))\n    }\n\n    fn poll_accept<B>(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<BidiStream<B>, ConnectionErrorIncoming>> {\n        self.0\n            .as_mut()\n            .poll_next(cx)\n            .map(Option::unwrap)\n            .map_ok(|(sid, stream)| BidiStream::new(sid, stream))\n    }\n}\n\nstruct AcceptUniStreams(BoxStream<Result<(StreamId, StreamReader), ConnectionErrorIncoming>>);\n\nimpl AcceptUniStreams {\n    fn new(conn: Arc<Connection>) -> Self {\n        let stream = futures::stream::unfold(conn, |conn| async {\n            let uni = conn\n                .accept_uni_stream()\n                .await\n                .map_err(error::convert_quic_error);\n            Some((uni, conn))\n        });\n        Self(Box::pin(stream))\n    }\n\n    fn poll_accept(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<RecvStream, ConnectionErrorIncoming>> {\n        self.0\n            .as_mut()\n            .poll_next(cx)\n            .map(Option::unwrap)\n            .map_ok(|(sid, reader)| RecvStream::new(sid, reader))\n    }\n}\n"
  },
  {
    "path": "h3-shim/src/error.rs",
    "content": "use std::{error::Error, sync::Arc};\n\nuse dquic::qbase;\nuse h3::quic::{ConnectionErrorIncoming, StreamErrorIncoming};\nuse qbase::frame::ResetStreamError;\n\npub fn convert_quic_error(e: qbase::error::Error) -> ConnectionErrorIncoming {\n    match e {\n        qbase::error::Error::Quic(quic_error) => {\n            ConnectionErrorIncoming::Undefined(Arc::new(quic_error))\n        }\n        qbase::error::Error::App(app_error) => ConnectionErrorIncoming::ApplicationClose {\n            error_code: app_error.error_code(),\n        },\n    }\n}\n\npub fn convert_stream_io_error(e: std::io::Error) -> StreamErrorIncoming {\n    if let Some(reset_stream_error) = e\n        .source()\n        .and_then(|e| e.downcast_ref::<ResetStreamError>())\n    {\n        return StreamErrorIncoming::StreamTerminated {\n            error_code: reset_stream_error.error_code(),\n        };\n    }\n    if let Some(quic_error) = e\n        .source()\n        .and_then(|e| e.downcast_ref::<qbase::error::Error>())\n    {\n        return StreamErrorIncoming::ConnectionErrorIncoming {\n            connection_error: convert_quic_error(quic_error.clone()),\n        };\n    }\n    StreamErrorIncoming::Unknown(e.into())\n}\n"
  },
  {
    "path": "h3-shim/src/ext.rs",
"content": "// See https://github.com/hyperium/h3/issues/307\n\n// use std::{\n//     io,\n//     ops::Deref,\n//     task::{Context, Poll},\n// };\n\n// use bytes::{Buf, Bytes};\n// use futures::future::BoxFuture;\n// use dquic::{DatagramReader, DatagramWriter};\n// use h3_datagram::{\n//     ConnectionErrorIncoming,\n//     datagram::EncodedDatagram,\n//     quic_traits::{DatagramConnectionExt, RecvDatagram, SendDatagram, SendDatagramErrorIncoming},\n// };\n\n// use crate::{conn::QuicConnection, error::convert_connection_io_error};\n\n// impl<B: bytes::Buf> DatagramConnectionExt<B> for QuicConnection {\n//     type SendDatagramHandler = DatagramSender;\n\n//     type RecvDatagramHandler = DatagramReceiver;\n\n//     fn send_datagram_handler(&self) -> Self::SendDatagramHandler {\n//         let conn = self.deref().clone();\n//         DatagramSender::Pending(Box::pin(async move { conn.datagram_writer().await }))\n//     }\n\n//     fn recv_datagram_handler(&self) -> Self::RecvDatagramHandler {\n//         let conn = self.deref().clone();\n//         DatagramReceiver::Pending(Box::pin(async move { conn.datagram_reader() }))\n//     }\n// }\n\n// pub enum DatagramSender {\n//     Pending(BoxFuture<'static, io::Result<DatagramWriter>>),\n//     Ready(Result<DatagramWriter, SendDatagramErrorIncoming>),\n// }\n\n// impl<B: bytes::Buf> SendDatagram<B> for DatagramSender {\n//     fn send_datagram<T: Into<EncodedDatagram<B>>>(\n//         &mut self,\n//         data: T,\n//     ) -> Result<(), SendDatagramErrorIncoming> {\n//         // let mut buf = bytes::BytesMut::new();\n//         // buf\n//         // data.encode(&mut buf);\n//         let mut datagram = <T as Into<EncodedDatagram<B>>>::into(data);\n//         self.0\n//             .send_bytes(datagram.copy_to_bytes(datagram.remaining()))\n//             .map_err(|e| match e {\n//                 e if e.kind() == io::ErrorKind::InvalidInput => SendDatagramErrorIncoming::TooLarge,\n//                 e => 
SendDatagramErrorIncoming::ConnectionError(convert_connection_io_error(e)),\n//             })\n//     }\n// }\n\n// pub enum DatagramReceiver {\n//     Pending(BoxFuture<'static, io::Result<DatagramReader>>),\n//     Ready(Result<DatagramReader, ConnectionErrorIncoming>),\n// }\n\n// impl RecvDatagram for DatagramReceiver {\n//     /// The buffer type\n//     type Buffer = Bytes;\n\n//     /// Poll the connection for incoming datagrams.\n//     fn poll_incoming_datagram(\n//         &mut self,\n//         cx: &mut Context<'_>,\n//     ) -> Poll<Result<Self::Buffer, ConnectionErrorIncoming>> {\n//         self.0.poll_recv(cx).map_err(convert_connection_io_error)\n//     }\n// }\n"
  },
  {
    "path": "h3-shim/src/lib.rs",
    "content": "pub mod conn;\nmod error;\npub mod pool;\npub use conn::{OpenStreams, QuicConnection};\n#[cfg(feature = \"datagram\")]\npub mod ext;\n#[cfg(feature = \"datagram\")]\n#[allow(unused_imports)]\npub use ext::*;\npub mod streams;\npub use dquic;\npub use streams::{BidiStream, RecvStream, SendStream};\n"
  },
  {
    "path": "h3-shim/src/pool.rs",
    "content": "//! TODO: unimplemented\n"
  },
  {
    "path": "h3-shim/src/streams.rs",
    "content": "use std::{\n    mem::MaybeUninit,\n    pin::Pin,\n    task::{Context, Poll, ready},\n};\n\nuse bytes::Buf;\nuse dquic::{\n    prelude::{CancelStream, StopSending, StreamReader, StreamWriter},\n    qbase,\n};\nuse h3::quic::StreamErrorIncoming;\nuse tokio::io::{AsyncRead, AsyncWrite, ReadBuf};\n\nuse crate::error::convert_stream_io_error;\n\npub struct SendStream<B> {\n    writer: StreamWriter,\n    data: Option<h3::quic::WriteBuf<B>>,\n    send_id: h3::quic::StreamId,\n}\n\nimpl<B> SendStream<B> {\n    pub fn new(sid: qbase::sid::StreamId, writer: StreamWriter) -> Self {\n        let sid = u64::from(sid);\n        Self {\n            writer,\n            data: None,\n            send_id: h3::quic::StreamId::try_from(sid).expect(\"unreachable\"),\n        }\n    }\n}\n\nimpl<B: bytes::Buf> h3::quic::SendStream<B> for SendStream<B> {\n    #[inline]\n    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamErrorIncoming>> {\n        let Some(buf) = self.data.as_mut() else {\n            return Poll::Ready(Ok(()));\n        };\n        loop {\n            match ready!(Pin::new(&mut self.writer).poll_write(cx, buf.chunk())) {\n                Ok(written) => {\n                    buf.advance(written);\n                    if buf.remaining() == 0 {\n                        self.data = None;\n                        return Poll::Ready(Ok(()));\n                    }\n                }\n                Err(e) => {\n                    self.data = None;\n                    return Poll::Ready(Err(convert_stream_io_error(e)));\n                }\n            }\n        }\n    }\n\n    #[inline]\n    fn send_data<T: Into<h3::quic::WriteBuf<B>>>(\n        &mut self,\n        data: T,\n    ) -> Result<(), StreamErrorIncoming> {\n        assert!(self.data.is_none());\n        self.data = Some(data.into());\n        Ok(())\n    }\n\n    #[inline]\n    fn poll_finish(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamErrorIncoming>> {\n    
    assert!(self.data.is_none());\n\n        Pin::new(&mut self.writer)\n            .poll_shutdown(cx)\n            .map(|r| r.map_err(convert_stream_io_error))\n    }\n\n    #[inline]\n    fn reset(&mut self, reset_code: u64) {\n        assert!(self.data.is_none());\n        self.writer.cancel(reset_code);\n    }\n\n    #[inline]\n    fn send_id(&self) -> h3::quic::StreamId {\n        self.send_id\n    }\n}\n\nimpl<B: bytes::Buf> h3::quic::SendStreamUnframed<B> for SendStream<B> {\n    #[inline]\n    fn poll_send<D: Buf>(\n        &mut self,\n        cx: &mut Context<'_>,\n        buf: &mut D,\n    ) -> Poll<Result<usize, StreamErrorIncoming>> {\n        assert!(self.data.is_none());\n\n        Pin::new(&mut self.writer)\n            .poll_write(cx, buf.chunk())\n            .map(|r| r.map_err(convert_stream_io_error))\n    }\n}\n\npub struct RecvStream {\n    reader: StreamReader,\n    recv_id: h3::quic::StreamId,\n}\n\nimpl RecvStream {\n    pub(crate) fn new(sid: qbase::sid::StreamId, reader: StreamReader) -> Self {\n        let sid = u64::from(sid);\n        Self {\n            reader,\n            recv_id: h3::quic::StreamId::try_from(sid).expect(\"unreachable\"),\n        }\n    }\n}\n\nimpl h3::quic::RecvStream for RecvStream {\n    type Buf = bytes::Bytes;\n\n    #[inline]\n    fn poll_data(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Option<Self::Buf>, StreamErrorIncoming>> {\n        let mut uninit_buf = [MaybeUninit::uninit(); 4096];\n        let mut read_buf = ReadBuf::uninit(&mut uninit_buf);\n        match ready!(Pin::new(&mut self.reader).poll_read(cx, &mut read_buf)) {\n            Ok(()) => {\n                if read_buf.filled().is_empty() {\n                    return Poll::Ready(Ok(None));\n                }\n                let bytes = bytes::Bytes::copy_from_slice(read_buf.filled());\n                Poll::Ready(Ok(Some(bytes)))\n            }\n            Err(e) => Poll::Ready(Err(convert_stream_io_error(e))),\n 
       }\n    }\n\n    #[inline]\n    fn stop_sending(&mut self, error_code: u64) {\n        self.reader.stop(error_code);\n    }\n\n    #[inline]\n    fn recv_id(&self) -> h3::quic::StreamId {\n        self.recv_id\n    }\n}\n\npub struct BidiStream<B> {\n    send: SendStream<B>,\n    recv: RecvStream,\n}\n\nimpl<B> BidiStream<B> {\n    pub(crate) fn new(\n        sid: qbase::sid::StreamId,\n        (reader, writer): (StreamReader, StreamWriter),\n    ) -> Self {\n        Self {\n            send: SendStream::new(sid, writer),\n            recv: RecvStream::new(sid, reader),\n        }\n    }\n}\n\nimpl<B> h3::quic::RecvStream for BidiStream<B> {\n    type Buf = bytes::Bytes;\n\n    #[inline]\n    fn poll_data(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<Option<Self::Buf>, StreamErrorIncoming>> {\n        self.recv.poll_data(cx)\n    }\n\n    #[inline]\n    fn stop_sending(&mut self, error_code: u64) {\n        self.recv.stop_sending(error_code);\n    }\n\n    #[inline]\n    fn recv_id(&self) -> h3::quic::StreamId {\n        self.recv.recv_id()\n    }\n}\n\nimpl<B: bytes::Buf> h3::quic::SendStream<B> for BidiStream<B> {\n    #[inline]\n    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamErrorIncoming>> {\n        self.send.poll_ready(cx)\n    }\n\n    #[inline]\n    fn send_data<T: Into<h3::quic::WriteBuf<B>>>(\n        &mut self,\n        data: T,\n    ) -> Result<(), StreamErrorIncoming> {\n        self.send.send_data(data)\n    }\n\n    #[inline]\n    fn poll_finish(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamErrorIncoming>> {\n        self.send.poll_finish(cx)\n    }\n\n    #[inline]\n    fn reset(&mut self, reset_code: u64) {\n        self.send.reset(reset_code);\n    }\n\n    #[inline]\n    fn send_id(&self) -> h3::quic::StreamId {\n        self.send.send_id()\n    }\n}\n\nimpl<B: bytes::Buf> h3::quic::SendStreamUnframed<B> for BidiStream<B> {\n    #[inline]\n    fn poll_send<D: Buf>(\n       
 &mut self,\n        cx: &mut Context<'_>,\n        buf: &mut D,\n    ) -> Poll<Result<usize, StreamErrorIncoming>> {\n        self.send.poll_send(cx, buf)\n    }\n}\n\nimpl<B: bytes::Buf> h3::quic::BidiStream<B> for BidiStream<B> {\n    type SendStream = SendStream<B>;\n\n    type RecvStream = RecvStream;\n\n    #[inline]\n    fn split(self) -> (Self::SendStream, Self::RecvStream) {\n        (self.send, self.recv)\n    }\n}\n"
  },
  {
    "path": "interop/Dockerfile",
"content": "FROM docker.io/martenseemann/quic-network-simulator-endpoint:latest\n\nRUN env\n\n# download and build your QUIC implementation\nCOPY . /dquic\n\n# setup rust\nRUN apt-get update && apt-get install -y curl gcc \\\n    && curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y \\\n    && . \"$HOME/.cargo/env\" \\\n    # build the QUIC implementation\n    && cd dquic \\\n    && cargo build --release --example http-server \\\n    && cargo build --release --example http-client \\\n    && cargo build --release --example h3-client \\\n    && cargo build --release --example h3-server \\\n    # copy the binaries\n    && mv target/release/examples/http-server / \\\n    && mv target/release/examples/http-client / \\\n    && mv target/release/examples/h3-client / \\\n    && mv target/release/examples/h3-server / \\\n    # cleanup\n    && cd / && rm -rf /dquic \\\n    && rm -rf $HOME/.cargo/registry \\\n    && rm -rf $HOME/.cargo/git \\\n    && rustup self uninstall -y \\\n    && apt-get remove -y curl gcc \\\n    && apt-get autoremove -y \\\n    && apt-get clean \\\n    && rm -rf /var/lib/apt/lists/*\n\n# copy run script and run it\nCOPY interop/run_endpoint.sh .\nRUN chmod +x run_endpoint.sh\nENTRYPOINT [ \"./run_endpoint.sh\" ]\n"
  },
  {
    "path": "interop/run_endpoint.sh",
"content": "#!/bin/bash\n\n# Set up the routing needed for the simulation\n/setup.sh\n\n# The following variables are available for use:\n# - ROLE contains the role of this execution context, client or server\n# - SERVER_PARAMS contains user-supplied command line parameters\n# - CLIENT_PARAMS contains user-supplied command line parameters\n\n\nrun_client() {\n    binary=\"/http-client\"\n\n    case \"$TESTCASE\" in\n        \"handshake\" | \"transfer\" | \"rebind-port\" | \"rebind-addr\" )\n            # do nothing\n            ;;\n        \"multiconnect\" )\n            CLIENT_PARAMS=\"$CLIENT_PARAMS\"\n            ;;\n        \"http3\" )\n            binary=\"/h3-client\"\n            ;;\n        *)\n            echo \"Unsupported test case: $TESTCASE\"\n            exit 127\n            ;;\n    esac\n\n    # Start the client\n    echo \"Starting client with parameters: $CLIENT_PARAMS\"\n    RUST_LOG=debug $binary --alpns hq-interop --qlog $QLOGDIR \\\n        --skip-verify --save /downloads $CLIENT_PARAMS $REQUESTS\n}\n\nrun_server() {\n    binary=\"/http-server\"\n\n    case \"$TESTCASE\" in\n        \"handshake\" | \"transfer\" | \"multiconnect\" | \"rebind-port\" | \"rebind-addr\" )\n            # do nothing\n            ;;\n        \"http3\" )\n            binary=\"/h3-server\"\n            ;;\n        *)\n            echo \"Unsupported test case: $TESTCASE\"\n            exit 127\n            ;;\n    esac\n    # Start the server\n    echo \"Starting server with parameters: $SERVER_PARAMS\"\n    RUST_LOG=debug $binary --alpns hq-interop --qlog $QLOGDIR \\\n        -c /certs/cert.pem -k /certs/server.key -d /www $SERVER_PARAMS\n}\n\nif [ \"$ROLE\" == \"client\" ]; then\n    # Wait for the simulator to start up.\n    /wait-for-it.sh sim:57832 -s -t 30\n    run_client\nelif [ \"$ROLE\" == \"server\" ]; then\n    run_server\nfi"
  },
  {
    "path": "qbase/Cargo.toml",
    "content": "[package]\nname = \"qbase\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"Core structure of the QUIC protocol, a part of dquic\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbitflags = { workspace = true }\nbytes = { workspace = true }\nderive_more = { workspace = true, features = [\n    \"as_ref\",\n    \"deref\",\n    \"deref_mut\",\n    \"display\",\n    \"from\",\n    \"into\",\n    \"try_into\",\n] }\nenum_dispatch = { workspace = true }\nfutures = { workspace = true }\ngetset = { workspace = true }\nhttp = { workspace = true }\nnetdev = { workspace = true }\nnom = { workspace = true }\nqmacro = { workspace = true }\ntracing = { workspace = true }\nrand = { workspace = true }\nrustls = { workspace = true }\nserde = { workspace = true, features = [\"derive\"] }\nsmallvec = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"rt\", \"sync\", \"time\"] }\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\"test-util\", \"macros\"] }\nrustls = { workspace = true, features = [\"ring\"] }\n"
  },
  {
    "path": "qbase/src/cid/connection_id.rs",
    "content": "use std::{\n    hash::{Hash, Hasher},\n    ops::Deref,\n};\n\nuse nom::{IResult, bytes::streaming::take, number::streaming::be_u8};\nuse rand::RngExt;\n\n/// The connection id length must not exceed 20 bytes. See [`ConnectionId`].\npub const MAX_CID_SIZE: usize = 20;\n\n/// Connection ID in [QUIC RFC 9000](https://tools.ietf.org/html/rfc9000).\n///\n/// In QUIC version 1, this value MUST NOT exceed 20 bytes.\n/// Endpoints that receive a version 1 long header with a value larger than\n/// 20 MUST drop the packet.\n/// See [Connection Id Length](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.2-3.11).\n///\n/// See [connection id](https://tools.ietf.org/html/rfc9000#name-connection-id)\n/// of [QUIC RFC 9000](https://www.rfc-editor.org/rfc/rfc9000.html)\n/// for more details.\n#[derive(Clone, Copy, Eq, Default)]\npub struct ConnectionId {\n    pub(crate) len: u8,\n    pub(crate) bytes: [u8; MAX_CID_SIZE],\n}\n\nimpl core::fmt::LowerHex for ConnectionId {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        for &b in self.as_ref() {\n            write!(f, \"{b:02x}\")?;\n        }\n        Ok(())\n    }\n}\n\nimpl core::fmt::Display for ConnectionId {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        core::fmt::LowerHex::fmt(self, f)\n    }\n}\n\nimpl core::fmt::Debug for ConnectionId {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        core::fmt::LowerHex::fmt(self, f)\n    }\n}\n\n/// Parse a connection ID from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n///\n/// ## Note:\n///\n/// The connection ID length is limited to 20 bytes, or it will return an error.\n/// See [`ConnectionId`].\npub fn be_connection_id(input: &[u8]) -> IResult<&[u8], ConnectionId> {\n    let (remain, len) = be_u8(input)?;\n    be_connection_id_with_len(remain, len as usize)\n}\n\n/// Parse a given `len` connection ID from the input buffer,\n/// 
[nom](https://docs.rs/nom/latest/nom/) parser style.\n///\n/// ## Note:\n///\n/// The connection ID length is limited to 20 bytes; otherwise an error is returned.\npub fn be_connection_id_with_len(input: &[u8], len: usize) -> IResult<&[u8], ConnectionId> {\n    if len > MAX_CID_SIZE {\n        return Err(nom::Err::Error(nom::error::make_error(\n            input,\n            nom::error::ErrorKind::TooLarge,\n        )));\n    }\n    let (remain, bytes) = take(len)(input)?;\n    Ok((remain, ConnectionId::from_slice(bytes)))\n}\n\n/// A BufMut extension trait that makes it easier to write a connection ID to a buffer.\npub trait WriteConnectionId: bytes::BufMut {\n    /// Write a connection ID to the buffer.\n    fn put_connection_id(&mut self, cid: &ConnectionId);\n}\n\nimpl<T: bytes::BufMut> WriteConnectionId for T {\n    fn put_connection_id(&mut self, cid: &ConnectionId) {\n        self.put_u8(cid.len);\n        self.put_slice(cid);\n    }\n}\n\nimpl ConnectionId {\n    /// Create a new connection ID from a given byte slice.\n    pub fn from_slice(bytes: &[u8]) -> Self {\n        debug_assert!(bytes.len() <= MAX_CID_SIZE);\n        let mut res = Self {\n            len: bytes.len() as u8,\n            bytes: [0; MAX_CID_SIZE],\n        };\n        res.bytes[..bytes.len()].copy_from_slice(bytes);\n        res\n    }\n\n    /// Randomly generates a connection ID of the given length.\n    /// The connection ID may not be unique, so it should be checked for uniqueness before use.\n    pub fn random_gen(len: usize) -> Self {\n        debug_assert!(len <= MAX_CID_SIZE);\n        let mut bytes = [0; MAX_CID_SIZE];\n        rand::rng().fill(&mut bytes[..len]);\n        Self {\n            len: len as u8,\n            bytes,\n        }\n    }\n\n    /// Generates a random connection ID like [`Self::random_gen`].\n    /// Additionally, allows specific bits of the connection ID to be set to the given mark.\n    pub fn random_gen_with_mark(len: usize, mark: u8, mask: u8) -> Self {\n        
debug_assert!(len > 0 && len <= MAX_CID_SIZE);\n        let mut bytes = [0; MAX_CID_SIZE];\n        rand::rng().fill(&mut bytes[..len]);\n        bytes[0] = (bytes[0] & mask) | mark;\n        Self {\n            len: len as u8,\n            bytes,\n        }\n    }\n\n    /// Get the encoding size of the connection ID.\n    ///\n    /// Includes 1-byte length encoding and connection ID bytes.\n    pub fn encoding_size(&self) -> usize {\n        1 + self.len as usize\n    }\n}\n\nimpl Deref for ConnectionId {\n    type Target = [u8];\n\n    fn deref(&self) -> &Self::Target {\n        &self.bytes[0..self.len as usize]\n    }\n}\n\nimpl PartialEq<ConnectionId> for ConnectionId {\n    fn eq(&self, other: &ConnectionId) -> bool {\n        self.len == other.len && self.bytes[..self.len as usize] == other.bytes[..self.len as usize]\n    }\n}\n\nimpl Hash for ConnectionId {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.len.hash(state);\n        self.bytes[..self.len as usize].hash(state);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_read_connection_id() {\n        let buf = vec![0x04, 0x01, 0x02, 0x03, 0x04];\n        let (remain, cid) = be_connection_id(&buf).unwrap();\n        assert!(remain.is_empty());\n        assert_eq!(*cid, [0x01, 0x02, 0x03, 0x04],);\n\n        let buf = vec![21, 0x01, 0x02, 0x03, 0x04];\n        assert_eq!(\n            be_connection_id(&buf),\n            Err(nom::Err::Error(nom::error::make_error(\n                &buf[1..],\n                nom::error::ErrorKind::TooLarge\n            )))\n        );\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_cid_from_large_slice() {\n        ConnectionId::from_slice(&[0; MAX_CID_SIZE + 1]);\n    }\n\n    #[test]\n    fn test_write_connection_id() {\n        use bytes::{Bytes, BytesMut};\n        let mut buf = BytesMut::new();\n        let cid = ConnectionId::from_slice(&[0x01, 0x02, 0x03, 0x04]);\n        buf.put_connection_id(&cid);\n      
  assert_eq!(\n            buf.freeze(),\n            Bytes::from_static(&[0x04, 0x01, 0x02, 0x03, 0x04])\n        );\n    }\n}\n"
  },
  {
    "path": "qbase/src/cid/local_cid.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse super::{ConnectionId, GenUniqueCid, RetireCid};\nuse crate::{\n    error::{Error, ErrorKind, QuicError},\n    frame::{\n        FrameType, GetFrameType, NewConnectionIdFrame, RetireConnectionIdFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    token::ResetToken,\n    util::IndexDeque,\n    varint::{VARINT_MAX, VarInt},\n};\n\n/// Local connection ID management.\n#[derive(Debug)]\nstruct LocalCids<ISSUED>\nwhere\n    ISSUED: GenUniqueCid + RetireCid + SendFrame<NewConnectionIdFrame>,\n{\n    // If the item in cid_deque is None, it means the connection ID has been retired.\n    cid_deque: IndexDeque<Option<(ConnectionId, ResetToken)>, VARINT_MAX>,\n    // Each issued connection ID will be written into this issued_cids.\n    // The frames in issued_cids should be able to enter the QUIC sending channel\n    // and be reliably sent to the peer finally.\n    issued_cids: ISSUED,\n    // This is an integer value specifying the maximum number of active connection\n    // IDs limited by peer.\n    // While the client does not know the server's parameters at the beginning,\n    // it can be set to None and will be reset.\n    // If this transport parameter is absent, a default of 2 is assumed.\n    active_cid_limit: Option<u64>,\n}\n\nimpl<ISSUED> LocalCids<ISSUED>\nwhere\n    ISSUED: GenUniqueCid + RetireCid + SendFrame<NewConnectionIdFrame>,\n{\n    /// Create a new local connection ID manager.\n    fn new(scid: ConnectionId, issued_cids: ISSUED) -> Self {\n        let mut cid_deque = IndexDeque::default();\n        cid_deque\n            .push_back(Some((scid, ResetToken::default())))\n            .unwrap();\n\n        let new_cid = issued_cids.gen_unique_cid();\n        let new_cid_frame =\n            NewConnectionIdFrame::new(new_cid, VarInt::from_u32(1), VarInt::from_u32(0));\n        issued_cids.send_frame([new_cid_frame]);\n        cid_deque\n            .push_back(Some((\n                
*new_cid_frame.connection_id(),\n                *new_cid_frame.reset_token(),\n            )))\n            .unwrap();\n        Self {\n            cid_deque,\n            issued_cids,\n            active_cid_limit: None,\n        }\n    }\n\n    fn initial_scid(&self) -> Option<ConnectionId> {\n        self.cid_deque.get(0)?.map(|(cid, _)| cid)\n    }\n\n    /// Set the maximum number of active connection IDs.\n    ///\n    /// The value of the active_connection_id_limit parameter MUST be at least 2.\n    /// An endpoint that receives a value less than 2 MUST close the connection\n    /// with an error of type TRANSPORT_PARAMETER_ERROR.\n    fn set_limit(&mut self, active_cid_limit: u64) -> Result<(), Error> {\n        debug_assert!(self.active_cid_limit.is_none());\n        if active_cid_limit < 2 {\n            return Err(QuicError::new(\n                ErrorKind::TransportParameter,\n                FrameType::Crypto.into(),\n                format!(\"active connection id limit {active_cid_limit} < 2\"),\n            )\n            .into());\n        }\n        for _ in self.cid_deque.largest()..active_cid_limit {\n            self.issue_new_cid();\n        }\n        self.active_cid_limit = Some(active_cid_limit);\n        Ok(())\n    }\n\n    /// Issue a new connection ID, for internal use only.\n    fn issue_new_cid(&mut self) {\n        let seq = VarInt::from_u64(self.cid_deque.largest()).unwrap();\n        let retire_prior_to = VarInt::from_u64(self.cid_deque.offset()).unwrap();\n        let new_cid = self.issued_cids.gen_unique_cid();\n        let new_cid_frame = NewConnectionIdFrame::new(new_cid, seq, retire_prior_to);\n        self.issued_cids.send_frame([new_cid_frame]);\n        self.cid_deque.push_back(Some((*new_cid_frame.connection_id(), *new_cid_frame.reset_token())))\n            .expect(\"it's very hard to issue a new connection ID whose sequence exceeds VARINT_MAX\");\n    }\n\n    /// Receive a [`RetireConnectionIdFrame`] from the 
peer,\n    /// retire the connection IDs of the sequence in [`RetireConnectionIdFrame`].\n    fn recv_retire_cid_frame(&mut self, frame: RetireConnectionIdFrame) -> Result<(), Error> {\n        let seq = frame.sequence();\n        if seq >= self.cid_deque.largest() {\n            return Err(QuicError::new(\n                ErrorKind::ConnectionIdLimit,\n                frame.frame_type().into(),\n                format!(\n                    \"Sequence({seq}) in RetireConnectionIdFrame exceeds the largest one({}) issued by us\",\n                    self.cid_deque.largest().saturating_sub(1)\n                ),\n            ).into());\n        }\n\n        if let Some(value) = self.cid_deque.get_mut(seq) {\n            if let Some((cid, _)) = value.take() {\n                let n = self.cid_deque.iter().take_while(|v| v.is_none()).count();\n                self.cid_deque.advance(n);\n\n                // generate a new connection ID while retiring an old one.\n                self.issue_new_cid();\n                self.issued_cids.retire_cid(cid);\n            }\n        }\n        Ok(())\n    }\n\n    fn clear(&mut self) {\n        for (cid, _reset_token) in self.cid_deque.drain_to(self.cid_deque.largest()).flatten() {\n            self.issued_cids.retire_cid(cid);\n        }\n    }\n}\n\nimpl<ISSUED> Drop for LocalCids<ISSUED>\nwhere\n    ISSUED: GenUniqueCid + RetireCid + SendFrame<NewConnectionIdFrame>,\n{\n    fn drop(&mut self) {\n        self.clear();\n    }\n}\n\n/// Shared local connection ID manager. Most of the time, you should use this struct.\n///\n/// Responsible for generating and issuing connection IDs to the peer.\n/// The number of active connection IDs is limited by the peer's active_cid_limit.\n///\n/// - `ISSUED`: a struct that can generate a unique connection ID and finally send the newly\n///   issued connection ID frame to the peer.\n///   It can be a channel, a queue, or a buffer. 
In any case, it must be able to send the\n///   [`NewConnectionIdFrame`] to the peer.\n///\n/// ## Note\n///\n/// The generated connection ID will be added to the packet reception routing table,\n/// which is shared with other QUIC connections.\n/// Therefore, the generated connection ID must not duplicate other local connection IDs,\n/// including connection IDs of other connections,\n/// and those issued to the peer that have not yet been retired,\n/// otherwise routing conflicts will occur.\n#[derive(Debug, Clone)]\npub struct ArcLocalCids<ISSUED>(Arc<Mutex<LocalCids<ISSUED>>>)\nwhere\n    ISSUED: GenUniqueCid + RetireCid + SendFrame<NewConnectionIdFrame>;\n\nimpl<ISSUED> ArcLocalCids<ISSUED>\nwhere\n    ISSUED: GenUniqueCid + RetireCid + SendFrame<NewConnectionIdFrame>,\n{\n    /// Create a new shared local connection ID manager.\n    ///\n    /// - `scid` is set initially; whether client or server,\n    ///   both get their early `scid` externally.\n    /// - `issued_cids` is responsible for generating CIDs that do not conflict\n    ///   in the packet reception routing table and will also be responsible for\n    ///   eventually sending the [`NewConnectionIdFrame`] to the peer.\n    pub fn new(scid: ConnectionId, issued_cids: ISSUED) -> Self {\n        let raw_local_cids = LocalCids::new(scid, issued_cids);\n        Self(Arc::new(Mutex::new(raw_local_cids)))\n    }\n\n    /// Get the initial source connection ID.\n    ///\n    /// 0-RTT packets in the first flight use the same Destination Connection ID\n    /// and Source Connection ID values as the client's first Initial packet.\n    /// See [Section 7.2.6](https://datatracker.ietf.org/doc/html/rfc9000#section-7.2-6)\n    /// of [RFC9000](https://datatracker.ietf.org/doc/html/rfc9000).\n    ///\n    /// Once a client has received a valid Initial packet from the server,\n    /// it MUST discard any subsequent packet it receives on that connection\n    /// with a different Source Connection ID,\n    /// 
see [Section 7.2.7](https://datatracker.ietf.org/doc/html/rfc9000#section-7.2-7)\n    /// of [RFC9000](https://datatracker.ietf.org/doc/html/rfc9000).\n    ///\n    /// Any further changes to the Destination Connection ID are only permitted\n    /// if the values are taken from NEW_CONNECTION_ID frames;\n    /// if subsequent Initial packets include a different Source Connection ID,\n    /// they MUST be discarded,\n    /// see [Section 7.2.8](https://datatracker.ietf.org/doc/html/rfc9000#section-7.2-8)\n    /// of [RFC9000](https://datatracker.ietf.org/doc/html/rfc9000) for more details.\n    ///\n    /// It means that the initial source connection ID is the only one that can be used\n    /// to send the Initial, 0-RTT, and Handshake packets.\n    /// Changing the scid is like issuing a new connection ID to the peer,\n    /// without specifying a sequence number or Stateless Reset Token.\n    /// Changing the scid during the Handshake phase is meaningless and harmful.\n    ///\n    /// For the server, even though the server provides the preferred address\n    /// as the first connection ID, and even though the server can use this\n    /// connection ID as the scid in the Handshake packet, it is not necessary.\n    /// The client cannot retire connection ID 0 before entering 1-RTT.\n    /// When the client actually retires connection ID 0,\n    /// it means that 1-RTT packets have already started to be transmitted,\n    /// and all subsequent transmissions should go through 1-RTT packets.\n    ///\n    /// Return None if the initial source connection ID has been retired,\n    /// which indicates that the connection has been established,\n    /// and only the short header packet should be used.\n    pub fn initial_scid(&self) -> Option<ConnectionId> {\n        self.0.lock().unwrap().initial_scid()\n    }\n\n    /// Unilaterally stop using all local connection IDs.\n    ///\n    /// No longer used means that packets sent by the peer to that 
connection ID are no\n    /// longer accepted. This method is called when the Termination event occurs and when `LocalCids`\n    /// is dropped, to clean up the state of the connection after the connection ends.\n    ///\n    /// In some rare cases, connection IDs are still issued after the Termination event occurs,\n    /// resulting in incomplete cleanup of the connection state.\n    /// After externally receiving the Termination event, the connection instance should be dropped\n    /// as early as possible to trigger another cleanup in the [`Drop`] implementation to\n    /// completely clean up the connection's state.\n    pub fn clear(&self) {\n        self.0.lock().unwrap().clear();\n    }\n\n    /// Set the maximum number of active connection IDs.\n    ///\n    /// After fully obtaining the peer's transport parameters, extract the peer's\n    /// active_cid_limit parameter and set it through this method.\n    pub fn set_limit(&self, active_cid_limit: u64) -> Result<(), Error> {\n        self.0.lock().unwrap().set_limit(active_cid_limit)\n    }\n}\n\nimpl<ISSUED> ReceiveFrame<RetireConnectionIdFrame> for ArcLocalCids<ISSUED>\nwhere\n    ISSUED: GenUniqueCid + RetireCid + SendFrame<NewConnectionIdFrame>,\n{\n    type Output = ();\n\n    /// Receive a [`RetireConnectionIdFrame`] from the peer,\n    /// retire the connection IDs of the sequence in [`RetireConnectionIdFrame`].\n    fn recv_frame(\n        &self,\n        frame: RetireConnectionIdFrame,\n    ) -> Result<Self::Output, crate::error::Error> {\n        self.0.lock().unwrap().recv_retire_cid_frame(frame)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::{collections::HashMap, sync::MutexGuard};\n\n    use super::*;\n\n    #[derive(Default)]\n    struct IssuedCids {\n        frames: Mutex<Vec<NewConnectionIdFrame>>,\n        active_cids: Mutex<HashMap<ConnectionId, ResetToken>>,\n    }\n\n    impl IssuedCids {\n        fn frames(&self) -> MutexGuard<'_, Vec<NewConnectionIdFrame>> {\n            
self.frames.lock().unwrap()\n        }\n\n        fn active_cids(&self) -> MutexGuard<'_, HashMap<ConnectionId, ResetToken>> {\n            self.active_cids.lock().unwrap()\n        }\n    }\n\n    impl GenUniqueCid for IssuedCids {\n        fn gen_unique_cid(&self) -> ConnectionId {\n            let mut local_cids = self.active_cids.lock().unwrap();\n            let unique_cid =\n                core::iter::from_fn(|| Some(ConnectionId::random_gen_with_mark(8, 0x80, 0x7F)))\n                    .find(|cid| !local_cids.contains_key(cid))\n                    .unwrap();\n\n            local_cids.insert(unique_cid, ResetToken::default());\n            unique_cid\n        }\n    }\n\n    impl RetireCid for IssuedCids {\n        fn retire_cid(&self, cid: ConnectionId) {\n            self.active_cids.lock().unwrap().remove(&cid);\n        }\n    }\n\n    impl SendFrame<NewConnectionIdFrame> for IssuedCids {\n        fn send_frame<I: IntoIterator<Item = NewConnectionIdFrame>>(&self, iter: I) {\n            self.frames.lock().unwrap().extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_issue_cid() {\n        let initial_scid = ConnectionId::random_gen(8);\n        let local_cids = ArcLocalCids::new(initial_scid, IssuedCids::default());\n        let mut local_cids = local_cids.0.lock().unwrap();\n\n        assert_eq!(local_cids.cid_deque.len(), 2);\n\n        local_cids.set_limit(3).unwrap();\n        assert_eq!(local_cids.cid_deque.len(), 3);\n    }\n\n    #[test]\n    fn test_recv_retire_cid_frame() {\n        let initial_scid = ConnectionId::random_gen(8);\n        let mut local_cids = LocalCids::new(initial_scid, IssuedCids::default());\n\n        assert_eq!(local_cids.cid_deque.len(), 2);\n        assert_eq!(local_cids.issued_cids.frames().len(), 1);\n\n        let issued_cid2 = *local_cids.issued_cids.frames()[0].connection_id();\n\n        let retire_frame = RetireConnectionIdFrame::new(VarInt::from_u32(1));\n        let cid2 = 
local_cids.recv_retire_cid_frame(retire_frame);\n        assert!(cid2.is_ok());\n        assert!(\n            !local_cids\n                .issued_cids\n                .active_cids()\n                .contains_key(&issued_cid2)\n        );\n        assert_eq!(local_cids.cid_deque.get(1), Some(&None));\n        // a new cid is issued while retiring an old one\n        assert_eq!(local_cids.cid_deque.len(), 3);\n        assert_eq!(local_cids.issued_cids.frames().len(), 2);\n\n        let retire_frame = RetireConnectionIdFrame::new(VarInt::from_u32(0));\n        let cid1 = local_cids.recv_retire_cid_frame(retire_frame);\n        assert!(cid1.is_ok());\n        assert!(\n            !local_cids\n                .issued_cids\n                .active_cids()\n                .contains_key(&initial_scid)\n        );\n        assert_eq!(local_cids.cid_deque.get(0), None); // has been slid out\n\n        assert_eq!(local_cids.cid_deque.len(), 2);\n        assert_eq!(local_cids.issued_cids.frames().len(), 3);\n\n        let retire_frame = RetireConnectionIdFrame::new(VarInt::from_u32(2));\n        let cid3 = local_cids.recv_retire_cid_frame(retire_frame);\n        assert!(cid3.is_ok());\n    }\n}\n"
  },
  {
    "path": "qbase/src/cid/remote_cid.rs",
    "content": "use std::{\n    collections::VecDeque,\n    ops::Deref,\n    sync::{Arc, Mutex},\n};\n\nuse super::ConnectionId;\nuse crate::{\n    error::{Error, ErrorKind, QuicError},\n    frame::{\n        GetFrameType, NewConnectionIdFrame, RetireConnectionIdFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::tx::{ArcSendWaker, Signals},\n    token::ResetToken,\n    util::IndexDeque,\n    varint::{VARINT_MAX, VarInt},\n};\n\n/// RemoteCids is used to manage the connection IDs issued by the peer,\n/// and to send [`RetireConnectionIdFrame`] to the peer.\n// TODO: support 0RTT?\n#[derive(Debug)]\nstruct RemoteCids<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    // The cid issued by the peer, the sequence number maybe not continuous\n    // since the disordered [`NewConnectionIdFrame`]\n    cid_deque: IndexDeque<Option<(u64, ConnectionId, ResetToken)>, VARINT_MAX>,\n    // The cell of the connection ID, which is ready in use\n    ready_cells: IndexDeque<ArcCidCell<RETIRED>, VARINT_MAX>,\n    // The cell of the connection ID, which needs to be assigned or reassigned\n    // They can be retired before being assigned or reassigned.\n    pending_cells: VecDeque<ArcCidCell<RETIRED>>,\n    // The maximum number of connection IDs which is used to check if the\n    // maximum number of connection IDs has been exceeded\n    // when receiving a [`NewConnectionIdFrame`]\n    active_cid_limit: u64,\n    // The position of the cid to be used, and the position of the cell to be assigned.\n    cursor: u64,\n    // The retired cids, each needs send a [`RetireConnectionIdFrame`] to peer\n    retired_cids: RETIRED,\n}\n\nimpl<RETIRED> RemoteCids<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    /// Create a new RemoteCids with the maximum number of active cids,\n    /// and the retired cids.\n    ///\n    /// As mentioned above, the retired cids can be a deque, a channel, or any buffer,\n    /// as long 
as it can send those [`RetireConnectionIdFrame`] to the peer finally.\n    /// See [`RemoteCids`]\n    fn new(active_cid_limit: u64, retired_cids: RETIRED) -> Self {\n        let cid_deque = IndexDeque::default();\n\n        Self {\n            active_cid_limit,\n            cid_deque,\n            ready_cells: Default::default(),\n            pending_cells: Default::default(),\n            cursor: 0,\n            retired_cids,\n        }\n    }\n\n    fn apply_initial_dcid(&mut self, initial_dcid: ConnectionId, dcid_cell: &ArcCidCell<RETIRED>) {\n        assert!(\n            self.cid_deque.is_empty() && self.cid_deque.offset() == 0 && self.cursor == 0,\n            \"NewConnectionIdFrame received before the first initial packet processed\"\n        );\n\n        self.cid_deque\n            .push_back(Some((0, initial_dcid, ResetToken::default())))\n            .expect(\"Initial connection ID should be inserted at the offset 0\");\n\n        let handshake_path = self\n            .pending_cells\n            .iter()\n            .enumerate()\n            .find_map(|(idx, cell)| Arc::ptr_eq(&cell.0, &dcid_cell.0).then_some(idx))\n            .expect(\"Initial path should be in pending_cells\");\n        // Move the initial path to the front of the pending cells\n        let handshake_path = self.pending_cells.remove(handshake_path).unwrap();\n        self.pending_cells.insert(0, handshake_path);\n\n        self.arrange_idle_cid();\n    }\n\n    /// Receive a [`NewConnectionIdFrame`] from the peer.\n    ///\n    /// Add the new connection id to the deque, and retire the old cids before\n    /// the retire_prior_to in the [`NewConnectionIdFrame`].\n    /// Try to assign idle cids to any hungry cid applications, if they exist.\n    ///\n    /// Return the reset token of this [`NewConnectionIdFrame`] if it is valid.\n    fn recv_new_cid_frame(\n        &mut self,\n        frame: NewConnectionIdFrame,\n    ) -> Result<Option<ResetToken>, Error> {\n        let seq = 
frame.sequence();\n        let retire_prior_to = frame.retire_prior_to();\n        let active_len = seq.saturating_sub(retire_prior_to);\n        if active_len > self.active_cid_limit {\n            return Err(QuicError::new(\n                ErrorKind::ConnectionIdLimit,\n                frame.frame_type().into(),\n                format!(\n                    \"{active_len} exceeds active_cid_limit {}\",\n                    self.active_cid_limit\n                ),\n            )\n            .into());\n        }\n\n        // Discard the frame if the sequence number is less than the current offset.\n        if seq < self.cid_deque.offset() {\n            return Ok(None);\n        }\n\n        let id = *frame.connection_id();\n        let token = *frame.reset_token();\n        self.cid_deque.insert(seq, Some((seq, id, token))).unwrap();\n        self.retire_prior_to(retire_prior_to);\n        self.arrange_idle_cid();\n\n        Ok(Some(token))\n    }\n\n    /// Assign idle cids to the cid applications at the front of the queue\n    #[doc(hidden)]\n    fn arrange_idle_cid(&mut self) {\n        loop {\n            let next_unalloced_cell = self.pending_cells.front();\n            if next_unalloced_cell.is_none() {\n                break;\n            }\n\n            let next_unalloced_cell = next_unalloced_cell.unwrap();\n            let mut guard = next_unalloced_cell.0.lock().unwrap();\n            if guard.is_retired {\n                drop(guard);\n                self.pending_cells.pop_front();\n                continue;\n            }\n\n            let next_unused_cid = self.cid_deque.get(self.cursor);\n            if let Some(Some((seq, cid, _))) = next_unused_cid {\n                guard.assign(*seq, *cid);\n                // The guard cannot be released until the unused CID has been assigned.\n                drop(guard);\n\n                let apply = self.pending_cells.pop_front().unwrap();\n                self.ready_cells\n                    .push_back(apply)\n 
                   .expect(\"Sequence of new connection ID should never exceed the limit\");\n                self.cursor += 1;\n            } else {\n                break;\n            }\n        }\n    }\n\n    /// Eliminate the old cids and inform the peer with a\n    /// [`RetireConnectionIdFrame`] for each retired connection ID.\n    #[doc(hidden)]\n    fn retire_prior_to(&mut self, tomb_seq: u64) {\n        if tomb_seq <= self.ready_cells.offset() {\n            return;\n        }\n\n        _ = self.cid_deque.drain_to(tomb_seq);\n        // It is possible that a connection ID that has never been used is retired directly,\n        // with no chance to assign it; this phenomenon is called \"jumping retire cid\"\n        self.cursor = self.cursor.max(tomb_seq);\n\n        // reassign the cids that have been assigned to a Path but are facing retirement\n        if self.ready_cells.is_empty() {\n            // it is not necessary to resize the deque, because all elements will be drained\n            // // self.cid_cells.resize(seq, ArcCidCell::default()).expect(\"\");\n            self.retired_cids\n                .send_frame((self.ready_cells.offset()..tomb_seq).map(|seq| {\n                    RetireConnectionIdFrame::new(\n                        VarInt::from_u64(seq)\n                            .expect(\"Sequence of connection id is very hard to exceed VARINT_MAX\"),\n                    )\n                }));\n            self.ready_cells.reset_offset(tomb_seq);\n        } else {\n            let actual_applied = self.ready_cells.largest();\n            let need_reassigned = actual_applied.min(tomb_seq);\n            // retire the cids before seq, including the applied and unapplied\n            for _ in self.ready_cells.offset()..need_reassigned {\n                let (_, cell) = self.ready_cells.pop_front().unwrap();\n                if cell.is_retired() {\n                    continue;\n                }\n                
self.pending_cells.push_back(cell);\n            }\n            if actual_applied < tomb_seq {\n                self.ready_cells.reset_offset(tomb_seq);\n                // even cids that have not been applied for are retired right now\n                self.retired_cids\n                    .send_frame((actual_applied..tomb_seq).map(|seq| {\n                        RetireConnectionIdFrame::new(\n                            VarInt::from_u64(seq).expect(\n                                \"Sequence of connection id is very hard to exceed VARINT_MAX\",\n                            ),\n                        )\n                    }));\n            }\n        }\n    }\n\n    /// Apply for a new connection ID, and return an [`ArcCidCell`], which may not yet be in the ready state.\n    fn apply_dcid(&mut self) -> ArcCidCell<RETIRED> {\n        let cell = ArcCidCell::new(self.retired_cids.clone());\n        self.pending_cells.push_back(cell.clone());\n        self.arrange_idle_cid();\n        cell\n    }\n}\n\n/// Shared remote connection ID manager. 
Most of the time, you should use this struct.\n///\n/// These connection IDs will be assigned to the Path.\n/// Every new path needs to apply for a new connection ID from the RemoteCids.\n/// Each path may retire the old connection ID proactively, and apply for a new one.\n///\n/// `RETIRED` stores the [`RetireConnectionIdFrame`]s, which need to be sent to the peer.\n/// It can be a deque, a channel, or any buffer,\n/// as long as it can send those [`RetireConnectionIdFrame`] to the peer finally.\n#[derive(Debug, Clone)]\npub struct ArcRemoteCids<RETIRED>(Arc<Mutex<RemoteCids<RETIRED>>>)\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone;\n\nimpl<RETIRED> ArcRemoteCids<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    /// Create a new RemoteCids with the maximum number of active cids,\n    /// and the retired cids.\n    ///\n    /// As mentioned above, the `retired_cids` can be a deque, a channel, or any buffer,\n    /// as long as it can send those [`RetireConnectionIdFrame`] to the peer finally.\n    pub fn new(active_cid_limit: u64, retired_cids: RETIRED) -> Self {\n        Self(Arc::new(Mutex::new(RemoteCids::new(\n            active_cid_limit,\n            retired_cids,\n        ))))\n    }\n\n    /// Apply the initial dcid to the handshake path.\n    ///\n    /// dquic implements a multi-path handshake feature: the client creates many paths and sends initial packets on them.\n    ///\n    /// The client and server must negotiate a handshake path and assign the initial dcid to this path\n    /// to prevent the unique connection ID from being taken by an invalid path, which would cause the connection to fail.\n    ///\n    /// The client and server choose the path on which they receive the first initial packet as the handshake path.\n    /// The server will only return Initial packets on the handshake path in order to negotiate it.\n    ///\n    /// This method should only be called when the connection receives the first initial 
packet; otherwise it panics.\n    /// The parameters are the Source Connection ID of the first initial packet received by the connection,\n    /// and the [`ArcCidCell`] of the path that carried this packet.\n    pub fn apply_initial_dcid(&self, initial_dcid: ConnectionId, dcid_cell: &ArcCidCell<RETIRED>) {\n        self.0\n            .lock()\n            .unwrap()\n            .apply_initial_dcid(initial_dcid, dcid_cell);\n    }\n\n    /// Apply for a new connection ID, which is used when the Path is created.\n    ///\n    /// Return an [`ArcCidCell`], which may not yet be in the ready state.\n    pub fn apply_dcid(&self) -> ArcCidCell<RETIRED> {\n        self.0.lock().unwrap().apply_dcid()\n    }\n\n    /// Return the latest connection ID issued by the peer.\n    ///\n    /// The cid is used to assemble the packet that contains a connection close frame. When the\n    /// connection is closed, the connection close frame will be sent to the peer.\n    pub fn latest_dcid(&self) -> Option<ConnectionId> {\n        self.0\n            .lock()\n            .unwrap()\n            .cid_deque\n            .iter()\n            .rev()\n            .flatten()\n            .next()\n            .map(|(_, cid, _)| *cid)\n    }\n}\n\nimpl<RETIRED> ReceiveFrame<NewConnectionIdFrame> for ArcRemoteCids<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    type Output = Option<ResetToken>;\n\n    fn recv_frame(&self, frame: NewConnectionIdFrame) -> Result<Self::Output, Error> {\n        self.0.lock().unwrap().recv_new_cid_frame(frame)\n    }\n}\n\n#[derive(Debug)]\nstruct CidCell<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame>,\n{\n    retired_cids: RETIRED,\n    allocated_cids: VecDeque<(u64, ConnectionId)>,\n    waker: Option<ArcSendWaker>,\n    is_retired: bool,\n    is_using: bool,\n}\n\nimpl<RETIRED> CidCell<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    fn assign(&mut self, seq: u64, cid: ConnectionId) {\n        
assert!(!self.is_retired);\n        self.allocated_cids.push_front((seq, cid));\n        if !self.is_using {\n            while self.allocated_cids.len() > 1 {\n                let (seq, _) = self.allocated_cids.pop_back().unwrap();\n                let sequence = VarInt::try_from(seq)\n                    .expect(\"Sequence of connection id is very hard to exceed VARINT_MAX\");\n                self.retired_cids\n                    .send_frame([RetireConnectionIdFrame::new(sequence)]);\n            }\n        }\n\n        if let Some(waker) = self.waker.take() {\n            waker.wake_by(Signals::CONNECTION_ID);\n        }\n    }\n\n    fn borrow_cid(&mut self, tx_waker: ArcSendWaker) -> Result<Option<ConnectionId>, Signals> {\n        if self.is_retired {\n            return Ok(None);\n        }\n\n        if self.allocated_cids.is_empty() {\n            self.waker = Some(tx_waker);\n            Err(Signals::CONNECTION_ID)\n        } else {\n            let cid = self.allocated_cids[0].1;\n            self.is_using = true;\n            Ok(Some(cid))\n        }\n    }\n\n    fn renew(&mut self) {\n        assert!(self.is_using);\n        self.is_using = false;\n        while self.allocated_cids.len() > 1 {\n            let (seq, _) = self.allocated_cids.pop_back().unwrap();\n            let sequence = VarInt::try_from(seq)\n                .expect(\"Sequence of connection id is very hard to exceed VARINT_MAX\");\n            self.retired_cids\n                .send_frame([RetireConnectionIdFrame::new(sequence)]);\n        }\n    }\n\n    fn retire(&mut self) {\n        if !self.is_retired {\n            self.is_retired = true;\n\n            while let Some((seq, _)) = self.allocated_cids.pop_front() {\n                let sequence = VarInt::try_from(seq)\n                    .expect(\"Sequence of connection id is very hard to exceed VARINT_MAX\");\n                self.retired_cids\n                    .send_frame([RetireConnectionIdFrame::new(sequence)]);\n     
       }\n\n            if let Some(waker) = self.waker.take() {\n                waker.wake_by(Signals::CONNECTION_ID);\n            }\n        }\n    }\n}\n\n/// Shared connection ID cell. Most of the time, you should use this struct.\n#[derive(Debug, Clone)]\npub struct ArcCidCell<RETIRED>(Arc<Mutex<CidCell<RETIRED>>>)\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone;\n\nimpl<RETIRED> ArcCidCell<RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    /// Create a new CidCell with the retired cids buffer.\n    ///\n    /// It can be created only by the [`ArcRemoteCids::apply_dcid`] method.\n    #[doc(hidden)]\n    fn new(retired_cids: RETIRED) -> Self {\n        Self(Arc::new(Mutex::new(CidCell {\n            retired_cids,\n            allocated_cids: VecDeque::with_capacity(2),\n            waker: None,\n            is_retired: false,\n            is_using: false,\n        })))\n    }\n\n    fn is_retired(&self) -> bool {\n        self.0.lock().unwrap().is_retired\n    }\n\n    /// Asynchronously get the connection ID; if it is not ready, return Pending.\n    ///\n    /// If the corresponding path that applied for this cid is inactive,\n    /// then this cid application is retired.\n    /// In this case, None will be returned.\n    pub fn borrow_cid(\n        &'_ self,\n        tx_waker: ArcSendWaker,\n    ) -> Result<Option<BorrowedCid<'_, RETIRED>>, Signals> {\n        self.0.lock().unwrap().borrow_cid(tx_waker).map(|cid| {\n            cid.map(|cid| BorrowedCid {\n                cid_cell: &self.0,\n                cid,\n            })\n        })\n    }\n\n    /// When the Path is invalid, the connection id needs to be retired, and this Cell\n    /// is marked as no longer in use, with a [`RetireConnectionIdFrame`] being sent to the peer.\n    pub fn retire(&self) {\n        self.0.lock().unwrap().retire();\n    }\n}\n\n/// A borrowed connection ID, 
which will be returned when it is dropped.\n///\n/// While the connection ID is borrowed, the retired cids will not be truly retired. Retirement is delayed until\n/// the [`BorrowedCid`] is dropped, at which point a [`RetireConnectionIdFrame`] is sent to the peer.\npub struct BorrowedCid<'a, RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    cid: ConnectionId,\n    cid_cell: &'a Mutex<CidCell<RETIRED>>,\n}\n\nimpl<RETIRED> Deref for BorrowedCid<'_, RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    type Target = ConnectionId;\n\n    fn deref(&self) -> &Self::Target {\n        &self.cid\n    }\n}\n\nimpl<RETIRED> Drop for BorrowedCid<'_, RETIRED>\nwhere\n    RETIRED: SendFrame<RetireConnectionIdFrame> + Clone,\n{\n    fn drop(&mut self) {\n        self.cid_cell.lock().unwrap().renew();\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use derive_more::Deref;\n\n    use super::*;\n\n    #[derive(Debug, Clone, Default, Deref)]\n    struct RetiredCids(Arc<Mutex<Vec<RetireConnectionIdFrame>>>);\n\n    impl SendFrame<RetireConnectionIdFrame> for RetiredCids {\n        fn send_frame<I: IntoIterator<Item = RetireConnectionIdFrame>>(&self, iter: I) {\n            self.0.lock().unwrap().extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_remote_cids() {\n        let retired_cids = RetiredCids::default();\n        let mut remote_cids = RemoteCids::new(8, retired_cids);\n\n        let initial_dcid = ConnectionId::random_gen(8);\n        let cid_apply0 = remote_cids.apply_dcid();\n        remote_cids.apply_initial_dcid(initial_dcid, &cid_apply0);\n\n        let waker = ArcSendWaker::new();\n        assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == initial_dcid\n        ));\n\n        // Will return an error, because the peer hasn't issued any connection ID yet\n        let cid_apply1 = remote_cids.apply_dcid();\n        assert!(matches!(\n            
cid_apply1.borrow_cid(waker.clone()),\n            Err(Signals::CONNECTION_ID)\n        ));\n\n        let new_dcid = ConnectionId::random_gen(8);\n        let frame = NewConnectionIdFrame::new(new_dcid, VarInt::from_u32(1), VarInt::from_u32(0));\n        assert!(remote_cids.recv_new_cid_frame(frame).is_ok());\n        assert_eq!(remote_cids.cid_deque.len(), 2);\n\n        assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == initial_dcid\n        ));\n        assert!(matches!(\n            cid_apply1.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == new_dcid\n        ));\n\n        // Additionally, make a new apply: since the peer-issued CIDs are\n        // insufficient, it will still return an error.\n        remote_cids.retire_prior_to(1);\n        let cid_apply2 = remote_cids.apply_dcid();\n        assert!(cid_apply2.borrow_cid(waker.clone()).is_err());\n        assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == initial_dcid\n        ));\n    }\n\n    #[test]\n    fn test_retire_in_remote_cids() {\n        let retired_cids = RetiredCids::default();\n        let remote_cids = ArcRemoteCids::new(8, retired_cids);\n\n        let initial_dcid = ConnectionId::random_gen(8);\n        let cid_apply0 = remote_cids.apply_dcid();\n        remote_cids.apply_initial_dcid(initial_dcid, &cid_apply0);\n\n        let mut guard = remote_cids.0.lock().unwrap();\n\n        let mut cids = vec![initial_dcid];\n        for seq in 1..8 {\n            let cid = ConnectionId::random_gen(8);\n            cids.push(cid);\n            let frame = NewConnectionIdFrame::new(cid, VarInt::from_u32(seq), VarInt::from_u32(0));\n            _ = guard.recv_new_cid_frame(frame);\n        }\n\n        let cid_apply1 = guard.apply_dcid();\n\n        let waker = ArcSendWaker::new();\n        assert_eq!(cid_apply0.0.lock().unwrap().allocated_cids[0].0, 0);\n        
assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == cids[0]\n        ));\n        assert_eq!(cid_apply1.0.lock().unwrap().allocated_cids[0].0, 1);\n        assert!(matches!(\n            cid_apply1.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == cids[1]\n        ));\n\n        guard.retire_prior_to(4);\n        assert_eq!(guard.cid_deque.offset(), 4);\n        assert_eq!(guard.ready_cells.offset(), 4);\n        // cid retirement is delayed\n        assert_eq!(guard.retired_cids.0.lock().unwrap().len(), 2);\n\n        assert_eq!(cid_apply0.0.lock().unwrap().allocated_cids[0].0, 0);\n        assert_eq!(cid_apply1.0.lock().unwrap().allocated_cids[0].0, 1);\n\n        assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == cids[0]\n        ));\n        assert!(matches!(\n            cid_apply1.borrow_cid(waker.clone()),\n            Ok(Some(cid)) if *cid == cids[1]\n        ));\n\n        guard.arrange_idle_cid();\n        assert_eq!(guard.retired_cids.0.lock().unwrap().len(), 4);\n\n        let retired_cids = [1, 0, 3, 2];\n        for seq in retired_cids {\n            assert_eq!(\n                // like a stack: last in, first out\n                guard.retired_cids.0.lock().unwrap().pop(),\n                Some(RetireConnectionIdFrame::new(VarInt::from_u32(seq)))\n            );\n        }\n\n        assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(entry)) if *entry == cids[4]\n        ));\n        assert!(matches!(\n            cid_apply1.borrow_cid(waker.clone()),\n            Ok(Some(entry)) if *entry == cids[5]\n        ));\n\n        cid_apply1.retire();\n        assert_eq!(guard.retired_cids.0.lock().unwrap().len(), 1);\n        assert_eq!(\n            guard.retired_cids.0.lock().unwrap().pop(),\n            Some(RetireConnectionIdFrame::new(VarInt::from_u32(5)))\n        );\n    }\n\n    #[test]\n    
fn test_retire_without_apply() {\n        let retired_cids = RetiredCids::default();\n        let remote_cids = ArcRemoteCids::new(8, retired_cids);\n\n        let initial_dcid = ConnectionId::random_gen(8);\n        let cid_apply0 = remote_cids.apply_dcid();\n        remote_cids.apply_initial_dcid(initial_dcid, &cid_apply0);\n\n        let mut guard = remote_cids.0.lock().unwrap();\n\n        let mut cids = vec![initial_dcid];\n        for seq in 1..8 {\n            let cid = ConnectionId::random_gen(8);\n            cids.push(cid);\n            let frame = NewConnectionIdFrame::new(cid, VarInt::from_u32(seq), VarInt::from_u32(0));\n            _ = guard.recv_new_cid_frame(frame);\n        }\n\n        guard.retire_prior_to(4);\n        assert_eq!(guard.cid_deque.offset(), 4);\n        assert_eq!(guard.ready_cells.offset(), 4);\n        assert_eq!(guard.retired_cids.0.lock().unwrap().len(), 3);\n\n        let cid_apply1 = guard.apply_dcid();\n        assert_eq!(cid_apply0.0.lock().unwrap().allocated_cids[0].0, 4);\n        assert_eq!(cid_apply1.0.lock().unwrap().allocated_cids[0].0, 5);\n        let waker = ArcSendWaker::new();\n        assert!(matches!(\n            cid_apply0.borrow_cid(waker.clone()),\n            Ok(Some(entry)) if *entry == cids[4]\n        ));\n        assert!(matches!(\n            cid_apply1.borrow_cid(waker.clone()),\n            Ok(Some(entry)) if *entry == cids[5]\n        ));\n    }\n}\n"
  },
  {
    "path": "qbase/src/cid.rs",
    "content": "mod connection_id;\npub use connection_id::*;\n\nmod local_cid;\npub use local_cid::*;\n\nmod remote_cid;\npub use remote_cid::*;\n\nuse crate::role::Role;\n\n/// When issuing a CID to the peer, be careful not to duplicate\n/// other local connection IDs, as this will cause routing conflicts.\npub trait GenUniqueCid {\n    /// Generate a unique connection ID.\n    #[must_use]\n    fn gen_unique_cid(&self) -> ConnectionId;\n}\n\npub trait RetireCid {\n    /// Retire a connection ID.\n    fn retire_cid(&self, cid: ConnectionId);\n}\n\n/// Connection ID registry.\n///\n/// - `local` represents the management of connection IDs issued by me to peer,\n/// - `remote` represents the reception of connection IDs issued by peer,\n///   which will be used by the path.\n#[derive(Debug, Clone)]\npub struct Registry<LOCAL, REMOTE> {\n    pub local: LOCAL,\n    pub remote: REMOTE,\n    role: Role,\n    origin_dcid: ConnectionId,\n}\n\nimpl<LOCAL, REMOTE> Registry<LOCAL, REMOTE> {\n    /// Create a new connection ID registry.\n    pub fn new(role: Role, origin_dcid: ConnectionId, local: LOCAL, remote: REMOTE) -> Self {\n        Self {\n            role,\n            origin_dcid,\n            local,\n            remote,\n        }\n    }\n\n    pub fn role(&self) -> Role {\n        self.role\n    }\n\n    pub fn origin_dcid(&self) -> ConnectionId {\n        self.origin_dcid\n    }\n}\n"
  },
  {
    "path": "qbase/src/error.rs",
    "content": "use std::{borrow::Cow, fmt::Display};\n\nuse derive_more::From;\nuse thiserror::Error;\n\nuse crate::{\n    frame::{ConnectionCloseFrame, FrameType},\n    varint::VarInt,\n};\n\n/// QUIC transport error codes and application error codes.\n///\n/// See [table-7](https://www.rfc-editor.org/rfc/rfc9000.html#table-7)\n/// and [error codes](https://www.rfc-editor.org/rfc/rfc9000.html#name-error-codes)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, PartialEq, Eq, Clone, Copy)]\npub enum ErrorKind {\n    /// An endpoint uses this with CONNECTION_CLOSE to signal that\n    /// the connection is being closed abruptly in the absence of any error.\n    None,\n    /// The endpoint encountered an internal error and cannot continue with the connection.\n    Internal,\n    /// The server refused to accept a new connection.\n    ConnectionRefused,\n    /// An endpoint received more data than it permitted in its advertised data limits.\n    FlowControl,\n    /// An endpoint received a frame for a stream identifier that\n    /// exceeded its advertised stream limit for the corresponding stream type.\n    StreamLimit,\n    /// An endpoint received a frame for a stream that was not in a state that permitted that frame.\n    StreamState,\n    /// - An endpoint received a STREAM frame containing data that\n    ///   exceeded the previously established final size,\n    /// - an endpoint received a STREAM frame or a RESET_STREAM frame containing a final size\n    ///   that was lower than the size of stream data that was already received, or\n    /// - an endpoint received a STREAM frame or a RESET_STREAM frame containing a different\n    ///   final size to the one already established.\n    FinalSize,\n    /// An endpoint received a frame that was badly formatted.\n    FrameEncoding,\n    /// An endpoint received transport parameters that were badly formatted, included:\n    /// - an invalid value, omitted a mandatory 
transport parameter\n    /// - a forbidden transport parameter\n    /// - otherwise in error.\n    TransportParameter,\n    /// The number of connection IDs provided by the peer exceeds\n    /// the advertised active_connection_id_limit.\n    ConnectionIdLimit,\n    /// An endpoint detected an error with protocol compliance\n    /// that was not covered by more specific error codes.\n    ProtocolViolation,\n    /// A server received a client Initial that contained an invalid Token field.\n    InvalidToken,\n    /// The application or application protocol caused the connection to be closed.\n    Application,\n    /// An endpoint has received more data in CRYPTO frames than it can buffer.\n    CryptoBufferExceeded,\n    /// An endpoint detected errors in performing key updates; see\n    /// [Section 6](https://www.rfc-editor.org/rfc/rfc9001#section-6)\n    /// of [QUIC-TLS](https://www.rfc-editor.org/rfc/rfc9000.html#QUIC-TLS).\n    KeyUpdate,\n    /// An endpoint has reached the confidentiality or integrity limit\n    /// for the AEAD algorithm used by the given connection.\n    AeadLimitReached,\n    /// An endpoint has determined that the network path is incapable of supporting QUIC.\n    /// An endpoint is unlikely to receive a CONNECTION_CLOSE frame carrying this code\n    /// except when the path does not support a large enough MTU.\n    NoViablePath,\n    /// The cryptographic handshake failed.\n    /// A range of 256 values is reserved for carrying error codes specific\n    /// to the cryptographic handshake that is used.\n    /// Codes for errors occurring when TLS is used for the cryptographic handshake are described\n    /// in [Section 4.8](https://www.rfc-editor.org/rfc/rfc9001#section-4.8)\n    /// of [QUIC-TLS](https://www.rfc-editor.org/rfc/rfc9000.html#QUIC-TLS).\n    Crypto(u8),\n}\n\nimpl Display for ErrorKind {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        let description = match self {\n            
ErrorKind::None => \"No error\",\n            ErrorKind::Internal => \"Implementation error\",\n            ErrorKind::ConnectionRefused => \"Server refuses a connection\",\n            ErrorKind::FlowControl => \"Flow control error\",\n            ErrorKind::StreamLimit => \"Too many streams opened\",\n            ErrorKind::StreamState => \"Frame received in invalid stream state\",\n            ErrorKind::FinalSize => \"Change to final size\",\n            ErrorKind::FrameEncoding => \"Frame encoding error\",\n            ErrorKind::TransportParameter => \"Error in transport parameters\",\n            ErrorKind::ConnectionIdLimit => \"Too many connection IDs received\",\n            ErrorKind::ProtocolViolation => \"Generic protocol violation\",\n            ErrorKind::InvalidToken => \"Invalid Token received\",\n            ErrorKind::Application => \"Application error\",\n            ErrorKind::CryptoBufferExceeded => \"CRYPTO data buffer overflowed\",\n            ErrorKind::KeyUpdate => \"Invalid packet protection update\",\n            ErrorKind::AeadLimitReached => \"Excessive use of packet protection keys\",\n            ErrorKind::NoViablePath => \"No viable network path exists\",\n            ErrorKind::Crypto(x) => return write!(f, \"TLS alert code: {x}\"),\n        };\n        write!(f, \"{description}\",)\n    }\n}\n\n/// Invalid error code while parsing.\n/// The parsed [`VarInt`] error code exceeds the normal range of error codes.\n///\n/// See [Initial QUIC Transport Error Codes Registry Entries](https://www.rfc-editor.org/rfc/rfc9000.html#table-7)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, PartialEq, Eq, Clone, Copy, Error)]\n#[error(\"Invalid error code {0}\")]\npub struct InvalidErrorKind(u64);\n\nimpl TryFrom<VarInt> for ErrorKind {\n    type Error = InvalidErrorKind;\n\n    fn try_from(value: VarInt) -> Result<Self, Self::Error> {\n        Ok(match value.into_u64() {\n            0x00 => 
ErrorKind::None,\n            0x01 => ErrorKind::Internal,\n            0x02 => ErrorKind::ConnectionRefused,\n            0x03 => ErrorKind::FlowControl,\n            0x04 => ErrorKind::StreamLimit,\n            0x05 => ErrorKind::StreamState,\n            0x06 => ErrorKind::FinalSize,\n            0x07 => ErrorKind::FrameEncoding,\n            0x08 => ErrorKind::TransportParameter,\n            0x09 => ErrorKind::ConnectionIdLimit,\n            0x0a => ErrorKind::ProtocolViolation,\n            0x0b => ErrorKind::InvalidToken,\n            0x0c => ErrorKind::Application,\n            0x0d => ErrorKind::CryptoBufferExceeded,\n            0x0e => ErrorKind::KeyUpdate,\n            0x0f => ErrorKind::AeadLimitReached,\n            0x10 => ErrorKind::NoViablePath,\n            0x0100..=0x01ff => ErrorKind::Crypto((value.into_u64() & 0xff) as u8),\n            other => return Err(InvalidErrorKind(other)),\n        })\n    }\n}\n\nimpl From<ErrorKind> for VarInt {\n    fn from(value: ErrorKind) -> Self {\n        match value {\n            ErrorKind::None => VarInt::from(0x00u8),\n            ErrorKind::Internal => VarInt::from(0x01u8),\n            ErrorKind::ConnectionRefused => VarInt::from(0x02u8),\n            ErrorKind::FlowControl => VarInt::from(0x03u8),\n            ErrorKind::StreamLimit => VarInt::from(0x04u8),\n            ErrorKind::StreamState => VarInt::from(0x05u8),\n            ErrorKind::FinalSize => VarInt::from(0x06u8),\n            ErrorKind::FrameEncoding => VarInt::from(0x07u8),\n            ErrorKind::TransportParameter => VarInt::from(0x08u8),\n            ErrorKind::ConnectionIdLimit => VarInt::from(0x09u8),\n            ErrorKind::ProtocolViolation => VarInt::from(0x0au8),\n            ErrorKind::InvalidToken => VarInt::from(0x0bu8),\n            ErrorKind::Application => VarInt::from(0x0cu8),\n            ErrorKind::CryptoBufferExceeded => VarInt::from(0x0du8),\n            ErrorKind::KeyUpdate => VarInt::from(0x0eu8),\n            
ErrorKind::AeadLimitReached => VarInt::from(0x0fu8),\n            ErrorKind::NoViablePath => VarInt::from(0x10u8),\n            ErrorKind::Crypto(x) => VarInt::from(0x0100u16 | x as u16),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Copy)]\npub enum ErrorFrameType {\n    V1(FrameType),\n    Ext(VarInt),\n}\n\n/// QUIC transport error.\n///\n/// Its definition conforms to the usage of [`ConnectionCloseFrame`].\n/// A value of 0 (equivalent to the mention of the PADDING frame) is used when the frame type is unknown.\n#[derive(Debug, Clone, PartialEq, Eq, Error)]\n#[error(\"{kind} in {frame_type:?}, reason: {reason}\")]\npub struct QuicError {\n    kind: ErrorKind,\n    frame_type: ErrorFrameType,\n    reason: Cow<'static, str>,\n}\n\nimpl QuicError {\n    /// Create a new error with the given kind, frame type, and reason.\n    /// The frame type is the one that triggered this error.\n    pub fn new<T: Into<Cow<'static, str>>>(\n        kind: ErrorKind,\n        frame_type: ErrorFrameType,\n        reason: T,\n    ) -> Self {\n        Self {\n            kind,\n            frame_type,\n            reason: reason.into(),\n        }\n    }\n\n    /// Create a new error with unknown frame type, and\n    /// the [`FrameType::Padding`] type will be used by default.\n    pub fn with_default_fty<T: Into<Cow<'static, str>>>(kind: ErrorKind, reason: T) -> Self {\n        Self {\n            kind,\n            frame_type: FrameType::Padding.into(),\n            reason: reason.into(),\n        }\n    }\n\n    /// Return the error kind.\n    pub fn kind(&self) -> ErrorKind {\n        self.kind\n    }\n\n    /// Return the frame type that triggered this error.\n    pub fn frame_type(&self) -> ErrorFrameType {\n        self.frame_type\n    }\n\n    /// Return the reason of this error.\n    pub fn reason(&self) -> &str {\n        &self.reason\n    }\n}\n\nimpl From<FrameType> for ErrorFrameType {\n    fn from(value: FrameType) -> Self {\n        Self::V1(value)\n    
}\n}\n\nimpl From<ErrorFrameType> for VarInt {\n    fn from(val: ErrorFrameType) -> Self {\n        match val {\n            ErrorFrameType::V1(frame) => frame.into(),\n            ErrorFrameType::Ext(value) => value,\n        }\n    }\n}\n\n/// App specific error.\n#[derive(Debug, Clone, PartialEq, Eq, Error)]\n#[error(\"App layer error occurred with error code {error_code}, reason: {reason}\")]\npub struct AppError {\n    error_code: VarInt,\n    reason: Cow<'static, str>,\n}\n\nimpl AppError {\n    /// Create a new app error with the given app error code and reason.\n    pub fn new(error_code: VarInt, reason: impl Into<Cow<'static, str>>) -> Self {\n        Self {\n            error_code,\n            reason: reason.into(),\n        }\n    }\n\n    /// Return the error code.\n    ///\n    /// The error code is an application error code.\n    pub fn error_code(&self) -> u64 {\n        self.error_code.into_u64()\n    }\n\n    /// Return the reason of this error.\n    pub fn reason(&self) -> &str {\n        &self.reason\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Error, From)]\npub enum Error {\n    #[error(transparent)]\n    Quic(QuicError),\n    #[error(transparent)]\n    App(AppError),\n}\n\nimpl Error {\n    pub fn kind(&self) -> ErrorKind {\n        match self {\n            Error::Quic(e) => e.kind(),\n            Error::App(_) => ErrorKind::Application,\n        }\n    }\n\n    pub fn frame_type(&self) -> ErrorFrameType {\n        match self {\n            Error::Quic(e) => e.frame_type(),\n            Error::App(_) => FrameType::Padding.into(),\n        }\n    }\n}\n\nimpl From<Error> for std::io::Error {\n    fn from(e: Error) -> Self {\n        Self::new(std::io::ErrorKind::BrokenPipe, e)\n    }\n}\n\nimpl From<Error> for ConnectionCloseFrame {\n    fn from(e: Error) -> Self {\n        match e {\n            Error::Quic(e) => Self::new_quic(e.kind, e.frame_type, e.reason),\n            Error::App(app_error) => Self::new_app(app_error.error_code, 
app_error.reason),\n        }\n    }\n}\n\nimpl From<AppError> for ConnectionCloseFrame {\n    fn from(e: AppError) -> Self {\n        Self::new_app(e.error_code, e.reason)\n    }\n}\n\nimpl From<ConnectionCloseFrame> for Error {\n    fn from(frame: ConnectionCloseFrame) -> Self {\n        match frame {\n            ConnectionCloseFrame::Quic(frame) => Self::Quic(QuicError {\n                kind: frame.error_kind(),\n                frame_type: frame.frame_type(),\n                reason: frame.reason().to_owned().into(),\n            }),\n            ConnectionCloseFrame::App(frame) => Self::App(AppError {\n                error_code: VarInt::from_u64(frame.error_code())\n                    .expect(\"error code never overflow\"),\n                reason: frame.reason().to_owned().into(),\n            }),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_error_kind_display() {\n        assert_eq!(ErrorKind::None.to_string(), \"No error\");\n        assert_eq!(ErrorKind::Internal.to_string(), \"Implementation error\");\n        assert_eq!(ErrorKind::Crypto(10).to_string(), \"TLS alert code: 10\");\n    }\n\n    #[test]\n    fn test_error_kind_conversion() {\n        // Test VarInt to ErrorKind\n        assert_eq!(\n            ErrorKind::try_from(VarInt::from(0x00u8)).unwrap(),\n            ErrorKind::None\n        );\n        assert_eq!(\n            ErrorKind::try_from(VarInt::from(0x10u8)).unwrap(),\n            ErrorKind::NoViablePath\n        );\n        assert_eq!(\n            ErrorKind::try_from(VarInt::from(0x0100u16)).unwrap(),\n            ErrorKind::Crypto(0)\n        );\n\n        // Test invalid error code\n        assert_eq!(\n            ErrorKind::try_from(VarInt::from(0x1000u16)).unwrap_err(),\n            InvalidErrorKind(0x1000)\n        );\n\n        // Test ErrorKind to VarInt\n        assert_eq!(VarInt::from(ErrorKind::None), VarInt::from(0x00u8));\n        
assert_eq!(VarInt::from(ErrorKind::NoViablePath), VarInt::from(0x10u8));\n        assert_eq!(VarInt::from(ErrorKind::Crypto(5)), VarInt::from(0x0105u16));\n    }\n\n    #[test]\n    fn test_error_creation() {\n        let err = QuicError::new(ErrorKind::Internal, FrameType::Ping.into(), \"test error\");\n        assert_eq!(err.kind(), ErrorKind::Internal);\n        assert_eq!(err.frame_type(), FrameType::Ping.into());\n\n        let err = QuicError::with_default_fty(ErrorKind::Application, \"default frame type\");\n        assert_eq!(err.frame_type(), FrameType::Padding.into());\n    }\n\n    #[test]\n    fn test_error_conversion() {\n        let err = Error::Quic(QuicError::new(\n            ErrorKind::Internal,\n            FrameType::Ping.into(),\n            \"test conversion\",\n        ));\n\n        // Test Error to ConnectionCloseFrame\n        let frame: ConnectionCloseFrame = err.clone().into();\n        match frame {\n            ConnectionCloseFrame::Quic(frame) => {\n                assert_eq!(frame.error_kind(), err.kind());\n                assert_eq!(frame.frame_type(), err.frame_type());\n            }\n            _ => panic!(\"unexpected frame type\"),\n        }\n\n        // Test Error to io::Error\n        let io_err: std::io::Error = err.into();\n        assert_eq!(io_err.kind(), std::io::ErrorKind::BrokenPipe);\n    }\n}\n"
  },
  {
    "path": "qbase/src/flow.rs",
    "content": "use std::{\n    ops::{Deref, DerefMut},\n    sync::{Arc, Mutex},\n};\n\nuse crate::{\n    error::{Error, ErrorFrameType, ErrorKind, QuicError},\n    frame::{\n        DataBlockedFrame, FrameType, MaxDataFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::tx::{ArcSendWakers, Signals},\n    varint::VarInt,\n};\n\n/// Connection-level global Stream Flow Control in the sending direction,\n/// regulated by the peer's `initial_max_data` transport parameter\n/// and updated by the [`MaxDataFrame`] sent by the peer.\n///\n/// Private controler in [`ArcSendControler`].\n#[derive(Debug)]\nstruct SendControler<TX> {\n    sent_data: u64,\n    max_data: u64,\n    flow_limited: bool,\n    broker: TX,\n    tx_wakers: ArcSendWakers,\n}\n\nimpl<TX> SendControler<TX> {\n    fn new(initial_max_data: u64, broker: TX, tx_wakers: ArcSendWakers) -> Self {\n        Self {\n            sent_data: 0,\n            max_data: initial_max_data,\n            flow_limited: false,\n            broker,\n            tx_wakers,\n        }\n    }\n\n    fn increase_limit(&mut self, max_data: u64) {\n        if max_data > self.max_data {\n            self.max_data = max_data;\n            self.flow_limited = false;\n            self.tx_wakers.wake_all_by(Signals::FLOW_CONTROL);\n        }\n    }\n\n    fn avaliable(&self) -> u64 {\n        self.max_data - self.sent_data\n    }\n\n    fn commit(&mut self, flow: u64)\n    where\n        TX: SendFrame<DataBlockedFrame>,\n    {\n        self.sent_data += flow;\n\n        if self.avaliable() == 0 && !self.flow_limited {\n            self.flow_limited = true;\n            self.broker.send_frame([DataBlockedFrame::new(\n                VarInt::from_u64(self.max_data)\n                    .expect(\"max_data of flow controller is very very hard to exceed 2^62 - 1\"),\n            )]);\n        }\n    }\n\n    fn return_back(&mut self, flow: u64) {\n        self.sent_data -= flow;\n        if self.avaliable() > 0 {\n            
self.tx_wakers.wake_all_by(Signals::FLOW_CONTROL);\n        }\n    }\n\n    fn revise_max_data(&mut self, zero_rtt_rejected: bool, max_data: u64) {\n        if zero_rtt_rejected {\n            self.max_data = 0;\n            self.flow_limited = false;\n        }\n        self.increase_limit(max_data);\n    }\n}\n\n/// Shared connection-level Stream Flow Control in the sending direction,\n/// regulated by the peer's `initial_max_data` transport parameter\n/// and updated by the [`MaxDataFrame`] received from the peer.\n///\n/// Only the new data sent in [`StreamFrame`](`crate::frame::StreamFrame`) counts toward this limit.\n/// Retransmitted stream data does not count toward this limit.\n///\n/// When flow control is 0,\n/// retransmitted stream data can still be sent,\n/// but new data cannot be sent.\n/// When the stream has no data to retransmit,\n/// meaning all old data has been successfully acknowledged,\n/// it is necessary to wait for the receiver's [`MaxDataFrame`]\n/// to increase the connection-level flow control limit.\n///\n/// To avoid having to pause sending tasks while waiting for the [`MaxDataFrame`],\n/// the receiver should promptly send the [`MaxDataFrame`]\n/// to increase the flow control limit,\n/// ensuring that the sender always has enough space to send smoothly.\n/// An extreme yet simple strategy is to set the flow control limit to infinity from the start,\n/// causing the connection-level flow control to never reach its limit,\n/// effectively rendering it useless.\n#[derive(Clone, Debug)]\npub struct ArcSendControler<TX>(Arc<Mutex<Result<SendControler<TX>, Error>>>);\n\nimpl<TX> ArcSendControler<TX> {\n    /// Creates a new [`ArcSendControler`] with `initial_max_data`.\n    ///\n    /// `initial_max_data` should be known to both endpoints after the handshake is\n    /// completed. 
If sending data in 0-RTT space, `initial_max_data` should be\n    /// the value from the previous connection.\n    ///\n    /// `initial_max_data` is allowed to be 0, which is reasonable when creating a\n    /// connection without knowing the peer's `initial_max_data` setting.\n    pub fn new(initial_max_data: u64, broker: TX, tx_wakers: ArcSendWakers) -> Self {\n        Self(Arc::new(Mutex::new(Ok(SendControler::new(\n            initial_max_data,\n            broker,\n            tx_wakers,\n        )))))\n    }\n\n    fn increase_limit(&self, max_data: u64) {\n        let mut guard = self.0.lock().unwrap();\n        if let Ok(inner) = guard.deref_mut() {\n            inner.increase_limit(max_data);\n        }\n    }\n\n    /// Get some flow control credit to send fresh flow data.\n    /// The returned credit may be smaller than the requested `quota`.\n    /// If some QUIC error occurred, it would return the error directly.\n    ///\n    /// # Note\n    ///\n    /// After obtaining the flow control,\n    /// the traffic credit is considered to be consumed immediately.\n    /// The unused flow control quota for this send will be returned to the sending controller.\n    /// This design avoids the sending task holding exclusive access to the sending controller.\n    pub fn credit(&self, quota: usize) -> Result<Credit<'_, TX>, Error>\n    where\n        TX: SendFrame<DataBlockedFrame>,\n    {\n        match self.0.lock().unwrap().as_mut() {\n            Ok(inner) => {\n                let available = inner.avaliable().min(quota as u64);\n                inner.commit(available);\n                Ok(Credit {\n                    available: available as usize,\n                    controller: self,\n                })\n            }\n            Err(e) => Err(e.clone()),\n        }\n    }\n\n    pub fn revise_max_data(&self, zero_rtt_rejected: bool, max_data: u64) {\n        if let Ok(inner) = self.0.lock().unwrap().deref_mut() {\n            
inner.revise_max_data(zero_rtt_rejected, max_data);\n        }\n    }\n\n    /// Connection-level Stream Flow Control can only be terminated\n    /// if the connection encounters an error.\n    pub fn on_error(&self, error: &Error) {\n        let mut guard = self.0.lock().unwrap();\n        if guard.deref().is_err() {\n            return;\n        }\n        *guard = Err(error.clone());\n    }\n}\n\n/// [`ArcSendControler`] needs to receive [`MaxDataFrame`]s from the peer\n/// to continually increase the flow control limit.\nimpl<TX> ReceiveFrame<MaxDataFrame> for ArcSendControler<TX> {\n    type Output = ();\n\n    fn recv_frame(&self, frame: MaxDataFrame) -> Result<Self::Output, Error> {\n        self.increase_limit(frame.max_data());\n        Ok(())\n    }\n}\n\n/// Exclusive access to the flow control limit.\n///\n/// As mentioned in the [`ArcSendControler::credit`] method,\n/// the flow controller in the period between obtaining flow control\n/// and finally updating (or maybe not) the flow control should be exclusive.\npub struct Credit<'a, TX> {\n    available: usize,\n    controller: &'a ArcSendControler<TX>,\n}\n\nimpl<TX> Credit<'_, TX> {\n    /// Return the available amount of new stream data that can be sent.\n    pub fn available(&self) -> usize {\n        self.available\n    }\n}\n\nimpl<TX> Credit<'_, TX>\nwhere\n    TX: SendFrame<DataBlockedFrame>,\n{\n    /// Updates the amount of new data sent.\n    pub fn post_sent(&mut self, amount: usize) {\n        self.available -= amount;\n    }\n}\n\nimpl<TX> Drop for Credit<'_, TX> {\n    fn drop(&mut self) {\n        if let Ok(inner) = self.controller.0.lock().unwrap().as_mut() {\n            inner.return_back(self.available as u64);\n        }\n    }\n}\n\n/// Receiver's flow controller for managing the flow limit of incoming stream data.\n#[derive(Debug, Default)]\nstruct RecvController<TX> {\n    rcvd_data: u64,\n    max_data: u64,\n    step: u64,\n    broker: TX,\n}\n\nimpl<TX> RecvController<TX> {\n    /// 
Creates a new [`RecvController`] with the specified `initial_max_data`.\n    fn new(initial_max_data: u64, broker: TX) -> Self {\n        Self {\n            rcvd_data: 0,\n            max_data: initial_max_data,\n            step: initial_max_data / 2,\n            broker,\n        }\n    }\n}\n\nimpl<TX> RecvController<TX>\nwhere\n    TX: SendFrame<MaxDataFrame>,\n{\n    /// Handles the event when new data is received.\n    ///\n    /// The data must be new; retransmitted data does not count. Whether the data is\n    /// new is determined by each stream after the data packet is delivered to it.\n    /// The amount of new data is passed as the `amount` parameter.\n    fn on_new_rcvd(&mut self, frame_type: FrameType, amount: usize) -> Result<usize, Error> {\n        self.rcvd_data += amount as u64;\n        if self.rcvd_data <= self.max_data {\n            if self.rcvd_data + self.step >= self.max_data {\n                self.max_data += self.step;\n                self.broker\n                    .send_frame([MaxDataFrame::new(VarInt::from_u64(self.max_data).expect(\n                        \"max_data of flow controller must not exceed 2^62 - 1\",\n                    ))]);\n            }\n            Ok(amount)\n        } else {\n            Err(QuicError::new(\n                ErrorKind::FlowControl,\n                ErrorFrameType::V1(frame_type),\n                format!(\"flow control overflow: {}\", self.rcvd_data - self.max_data),\n            )\n            .into())\n        }\n    }\n}\n\n/// Shared receiver's flow controller for managing the incoming stream data flow.\n///\n/// Flow control on the receiving end is primarily used to regulate\n/// the data flow sent by the sender.\n/// Since the receive buffer is limited,\n/// if the application layer cannot read the data in time,\n/// the receive buffer will not expand, and the sender must be suspended.\n///\n/// 
The sender must never send new stream data exceeding\n/// the flow control limit advertised by the receiver;\n/// otherwise it will be considered a [`FlowControl`](`crate::error::ErrorKind::FlowControl`) error.\n///\n/// Additionally, the flow control on the receiving end also needs to\n/// promptly send a [`MaxDataFrame`] to the sender after the application layer reads the data,\n/// to expand the receive window since more receive buffer space is freed up,\n/// and to inform the sender that more data can be sent.\n#[derive(Debug, Default, Clone)]\npub struct ArcRecvController<TX>(Arc<Mutex<RecvController<TX>>>);\n\nimpl<TX> ArcRecvController<TX> {\n    /// Creates a new [`ArcRecvController`] with the local `initial_max_data` transport parameter.\n    pub fn new(initial_max_data: u64, broker: TX) -> Self {\n        Self(Arc::new(Mutex::new(RecvController::new(\n            initial_max_data,\n            broker,\n        ))))\n    }\n}\n\nimpl<TX> ArcRecvController<TX>\nwhere\n    TX: SendFrame<MaxDataFrame>,\n{\n    /// Updates the total received data size and checks if the flow control limit is exceeded\n    /// when new stream data is received.\n    ///\n    /// As mentioned in [`ArcSendControler`], if the flow control limit is exceeded,\n    /// an [`Error`] will be returned.\n    pub fn on_new_rcvd(&self, frame_type: FrameType, amount: usize) -> Result<usize, Error> {\n        self.0.lock().unwrap().on_new_rcvd(frame_type, amount)\n    }\n}\n\n/// [`ArcRecvController`] needs to receive [`DataBlockedFrame`]s from the peer.\n///\n/// However, the receiver may not be able to immediately expand the receive window\n/// and must wait for the application layer to read the data to free up more space\n/// in the receive buffer.\nimpl<TX> ReceiveFrame<DataBlockedFrame> for ArcRecvController<TX> {\n    type Output = ();\n\n    fn recv_frame(&self, _frame: DataBlockedFrame) -> Result<Self::Output, Error> {\n        // Do nothing\n        Ok(())\n    }\n}\n\n/// 
Connection-level flow controller, including an [`ArcSendControler`] as the sending side\n/// and an [`ArcRecvController`] as the receiving side.\n#[derive(Debug, Clone)]\npub struct FlowController<TX> {\n    pub sender: ArcSendControler<TX>,\n    pub recver: ArcRecvController<TX>,\n}\n\nimpl<TX: Clone> FlowController<TX> {\n    /// Creates a new `FlowController` with the specified initial send and receive window sizes.\n    ///\n    /// Unfortunately, at the beginning, the peer's `initial_max_data` is unknown.\n    /// Therefore, the peer's `initial_max_data` can be set to 0 initially,\n    /// and then updated later after obtaining the peer's `initial_max_data` setting.\n    pub fn new(\n        peer_initial_max_data: u64,\n        local_initial_max_data: u64,\n        broker: TX,\n        tx_wakers: ArcSendWakers,\n    ) -> Self {\n        Self {\n            sender: ArcSendControler::new(peer_initial_max_data, broker.clone(), tx_wakers),\n            recver: ArcRecvController::new(local_initial_max_data, broker),\n        }\n    }\n\n    /// Updates the initial send window size,\n    /// which should be the peer's `initial_max_data` transport parameter.\n    /// So once the peer's [`Parameters`](`crate::param::Parameters`) are obtained,\n    /// this method should be called immediately.\n    pub fn reset_send_window(&self, snd_wnd: u64) {\n        self.sender.increase_limit(snd_wnd);\n    }\n\n    /// Get some flow control credit to send fresh flow data.\n    /// The returned credit may be smaller than the requested `quota`.\n    /// If some QUIC error occurred, the error is returned directly.\n    pub fn send_limit(&self, quota: usize) -> Result<Credit<'_, TX>, Error>\n    where\n        TX: SendFrame<DataBlockedFrame>,\n    {\n        self.sender.credit(quota)\n    }\n\n    /// Handles the error event of the QUIC connection.\n    ///\n    /// It makes\n    /// the connection-level stream flow controller in the sending direction become 
unavailable.\n    /// The receiving direction needs no action here, since [`ArcRecvController`] holds no error state.\n    pub fn on_conn_error(&self, error: &Error) {\n        self.sender.on_error(error);\n    }\n}\n\nimpl<TX> FlowController<TX>\nwhere\n    TX: SendFrame<MaxDataFrame>,\n{\n    /// Updates the total received data size and checks if the flow control limit is exceeded.\n    /// It will also send a [`MaxDataFrame`] to the sender\n    /// to expand the receive window if necessary.\n    pub fn on_new_rcvd(&self, frame_type: FrameType, amount: usize) -> Result<usize, Error> {\n        self.recver.on_new_rcvd(frame_type, amount)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use derive_more::{Deref, DerefMut};\n\n    use super::*;\n\n    #[derive(Clone, Debug, Default, Deref, DerefMut)]\n    struct SendControllerBroker(Arc<Mutex<Vec<DataBlockedFrame>>>);\n\n    impl SendFrame<DataBlockedFrame> for SendControllerBroker {\n        fn send_frame<I: IntoIterator<Item = DataBlockedFrame>>(&self, iter: I) {\n            self.0.lock().unwrap().extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_send_controler() {\n        let broker = SendControllerBroker::default();\n        let controler = ArcSendControler::new(0, broker.clone(), Default::default());\n        controler.increase_limit(100);\n        let mut credit = controler.credit(200).unwrap();\n        assert_eq!(credit.available(), 100);\n        credit.post_sent(50);\n        assert_eq!(credit.available(), 50);\n        credit.post_sent(50);\n        assert_eq!(credit.available(), 0);\n        drop(credit);\n\n        // broker should have a DataBlockedFrame\n        assert_eq!(broker.lock().unwrap().len(), 1);\n        assert_eq!(broker.lock().unwrap()[0].limit(), 100);\n\n        let credit = controler.credit(1).unwrap();\n        assert_eq!(credit.available(), 0);\n        drop(credit);\n\n        controler.increase_limit(200);\n\n        let mut credit = controler.credit(200).unwrap();\n        
assert_eq!(credit.available(), 100);\n        credit.post_sent(50);\n        assert_eq!(credit.available(), 50);\n        credit.post_sent(50);\n        assert_eq!(credit.available(), 0);\n        drop(credit);\n\n        // broker should have a DataBlockedFrame\n        assert_eq!(broker.lock().unwrap().len(), 2);\n        assert_eq!(broker.lock().unwrap()[1].limit(), 200);\n    }\n\n    #[derive(Clone, Debug, Default, Deref, DerefMut)]\n    struct RecvControllerBroker(Arc<Mutex<Vec<MaxDataFrame>>>);\n\n    impl SendFrame<MaxDataFrame> for RecvControllerBroker {\n        fn send_frame<I: IntoIterator<Item = MaxDataFrame>>(&self, iter: I) {\n            self.0.lock().unwrap().extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_recv_controller() {\n        use crate::frame::{Fin, Len, Offset};\n        let broker = RecvControllerBroker::default();\n        let controler = ArcRecvController::new(100, broker.clone());\n        let amount = controler\n            .on_new_rcvd(FrameType::Stream(Offset::Zero, Len::Omit, Fin::No), 20)\n            .unwrap();\n        assert_eq!(amount, 20);\n        assert_eq!(broker.lock().unwrap().len(), 0);\n\n        let amount = controler\n            .on_new_rcvd(FrameType::Stream(Offset::Zero, Len::Explicit, Fin::Yes), 30)\n            .unwrap();\n        assert_eq!(amount, 30);\n        // broker should have a MaxDataFrame\n        assert_eq!(broker.lock().unwrap().len(), 1);\n        assert_eq!(broker.lock().unwrap()[0].max_data(), 150);\n\n        // test overflow\n        let result = controler.on_new_rcvd(FrameType::ResetStream, 101);\n        assert!(result.is_err());\n        assert_eq!(result.unwrap_err().kind(), ErrorKind::FlowControl);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/ack.rs",
    "content": "use std::ops::RangeInclusive;\n\nuse nom::{Parser, combinator::map};\n\nuse crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// ECN flag for ACK frames\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum Ecn {\n    /// ECN counts are not present\n    None,\n    /// ECN counts are present\n    Exist,\n}\n\nimpl From<Ecn> for u8 {\n    fn from(ecn: Ecn) -> u8 {\n        match ecn {\n            Ecn::None => 0,\n            Ecn::Exist => 1,\n        }\n    }\n}\n\nimpl From<u8> for Ecn {\n    fn from(value: u8) -> Self {\n        match value & 0x01 {\n            0 => Ecn::None,\n            _ => Ecn::Exist,\n        }\n    }\n}\n\n/// ACK Frame\n///\n/// ```text\n/// ACK Frame {\n///   Type (i) = 0x02..0x03,\n///   Largest Acknowledged (i),\n///   ACK Delay (i),\n///   ACK Range Count (i),\n///   First ACK Range (i),\n///   ACK Range (..) ...,\n///   [ECN Counts (..)],\n/// }\n/// ```\n///\n/// Receivers send ACK frames (types 0x02 and 0x03) to inform the sender of packets they have\n/// received and processed. 
The ACK frame contains one or more ACK Ranges.\n///\n/// See [ack frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-ack-frames) of QUIC RFC 9000.\n///\n/// The ACK Range Count is not included in the struct because it is an intermediate variable.\n/// It can be obtained from the ranges when writing and is no longer needed after generating\n/// the ranges when parsing.\n#[derive(Debug, Clone, Eq, PartialEq)]\npub struct AckFrame {\n    largest: VarInt,\n    delay: VarInt,\n    first_range: VarInt,\n    ranges: Vec<(VarInt, VarInt)>,\n    ecn: Option<EcnCounts>,\n}\n\nimpl super::GetFrameType for AckFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::Ack(if self.ecn.is_some() {\n            Ecn::Exist\n        } else {\n            Ecn::None\n        })\n    }\n}\n\nimpl super::EncodeSize for AckFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8 + 8 + 8 + self.ranges.len() * 16 + if self.ecn.is_some() { 24 } else { 0 }\n    }\n\n    fn encoding_size(&self) -> usize {\n        let ack_range_count = VarInt::try_from(self.ranges.len()).unwrap();\n\n        1 + self.largest.encoding_size()\n            + self.delay.encoding_size()\n            + ack_range_count.encoding_size()\n            + self.first_range.encoding_size()\n            + self\n                .ranges\n                .iter()\n                .map(|(gap, ack)| gap.encoding_size() + ack.encoding_size())\n                .sum::<usize>()\n            + if let Some(e) = self.ecn.as_ref() {\n                e.encoding_size()\n            } else {\n                0\n            }\n    }\n}\n\nimpl AckFrame {\n    /// Create a new [`AckFrame`].\n    pub fn new(\n        largest: VarInt,\n        delay: VarInt,\n        first_range: VarInt,\n        ranges: Vec<(VarInt, VarInt)>,\n        ecn: Option<EcnCounts>,\n    ) -> Self {\n        Self {\n            largest,\n            delay,\n            first_range,\n            ranges,\n            ecn,\n     
   }\n    }\n\n    /// Return the largest acknowledged packet number.\n    pub fn largest(&self) -> u64 {\n        self.largest.into_u64()\n    }\n\n    /// Return the delay in microseconds.\n    pub fn delay(&self) -> u64 {\n        self.delay.into_u64()\n    }\n\n    /// Return the first range.\n    pub fn first_range(&self) -> u64 {\n        self.first_range.into_u64()\n    }\n\n    /// Return the ranges.\n    pub fn ranges(&self) -> &Vec<(VarInt, VarInt)> {\n        &self.ranges\n    }\n\n    /// Return the ECN (Explicit Congestion Notification) counts.\n    pub fn ecn(&self) -> Option<EcnCounts> {\n        self.ecn\n    }\n\n    /// Set the value of the ECN (Explicit Congestion Notification) counts.\n    pub fn set_ecn(&mut self, ecn: EcnCounts) {\n        self.ecn = Some(ecn);\n    }\n\n    /// Take the value of the ECN (Explicit Congestion Notification) counts.\n    pub fn take_ecn(&mut self) -> Option<EcnCounts> {\n        self.ecn.take()\n    }\n\n    /// Iterate over the ranges of packet numbers acknowledged by this ACK frame,\n    /// starting from the largest and going down.\n    pub fn iter(&self) -> impl Iterator<Item = RangeInclusive<u64>> + '_ {\n        let right = self.largest.into_u64();\n        let left = right - self.first_range.into_u64();\n        Some(left..=right).into_iter().chain(\n            self.ranges\n                .iter()\n                .map(|(gap, range)| (gap.into_u64(), range.into_u64()))\n                .scan(left, |largest, (gap, range)| {\n                    let right = *largest - gap - 2;\n                    let left = right - range;\n                    *largest = left;\n                    Some(left..=right)\n                }),\n        )\n    }\n}\n\n/// The counts of Explicit Congestion Notification (ECN) types.\n///\n/// See [ecn-counts](https://www.rfc-editor.org/rfc/rfc9000.html#name-ecn-counts) of QUIC RFC 9000.\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub struct EcnCounts {\n    
ect0: VarInt,\n    ect1: VarInt,\n    ce: VarInt,\n}\n\nimpl EcnCounts {\n    /// Create a new [`EcnCounts`].\n    pub fn new(ect0: VarInt, ect1: VarInt, ce: VarInt) -> Self {\n        Self { ect0, ect1, ce }\n    }\n\n    /// Get the value of the ECT0 counter.\n    pub fn ect0(&self) -> u64 {\n        self.ect0.into_u64()\n    }\n\n    /// Get the value of the ECT1 counter.\n    pub fn ect1(&self) -> u64 {\n        self.ect1.into_u64()\n    }\n\n    /// Get the value of the CE counter.\n    pub fn ce(&self) -> u64 {\n        self.ce.into_u64()\n    }\n\n    /// Calculates the encoding size of the [`EcnCounts`] struct.\n    fn encoding_size(&self) -> usize {\n        self.ect0.encoding_size() + self.ect1.encoding_size() + self.ce.encoding_size()\n    }\n}\n\n/// Parser for parsing an ACK frame with the given ECN flag,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn ack_frame_with_ecn(ecn: Ecn) -> impl Fn(&[u8]) -> nom::IResult<&[u8], AckFrame> {\n    move |input: &[u8]| {\n        let (mut remain, (largest, delay, count, first_range)) =\n            (be_varint, be_varint, be_varint, be_varint).parse(input)?;\n        let mut ranges = Vec::new();\n        let mut count = count.into_u64() as usize;\n        while count > 0 {\n            let (i, (gap, ack)) = (be_varint, be_varint).parse(remain)?;\n            ranges.push((gap, ack));\n            count -= 1;\n            remain = i;\n        }\n\n        let ecn = if ecn == Ecn::Exist {\n            let (i, ecn) = be_ecn_counts(remain)?;\n            remain = i;\n            Some(ecn)\n        } else {\n            None\n        };\n\n        Ok((\n            remain,\n            AckFrame {\n                largest,\n                delay,\n                first_range,\n                ranges,\n                ecn,\n            },\n        ))\n    }\n}\n\n/// Parse the ECN counts from the input bytes,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub(super) fn be_ecn_counts(input: 
&[u8]) -> nom::IResult<&[u8], EcnCounts> {\n    map((be_varint, be_varint, be_varint), |(ect0, ect1, ce)| {\n        EcnCounts { ect0, ect1, ce }\n    })\n    .parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<AckFrame> for T {\n    fn put_frame(&mut self, frame: &AckFrame) {\n        let frame_type = frame.frame_type();\n        self.put_frame_type(frame_type);\n        self.put_varint(&frame.largest);\n        self.put_varint(&frame.delay);\n\n        let ack_range_count = VarInt::try_from(frame.ranges.len()).unwrap();\n        self.put_varint(&ack_range_count);\n        self.put_varint(&frame.first_range);\n        for (gap, ack) in &frame.ranges {\n            self.put_varint(gap);\n            self.put_varint(ack);\n        }\n        if let Some(ecn) = &frame.ecn {\n            self.put_varint(&ecn.ect0);\n            self.put_varint(&ecn.ect1);\n            self.put_varint(&ecn.ce);\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use nom::{Parser, combinator::flat_map};\n\n    use super::*;\n    use crate::{\n        frame::{EncodeSize, FrameType, GetFrameType, io::WriteFrame},\n        varint::{VarInt, be_varint},\n    };\n\n    #[test]\n    fn test_ack_frame() {\n        // test frame type, encoding size, and max encoding size\n        let mut frame = AckFrame {\n            largest: VarInt::from_u32(0x1234),\n            delay: VarInt::from_u32(0x1234),\n            first_range: VarInt::from_u32(0x1234),\n            ranges: vec![(VarInt::from_u32(3), VarInt::from_u32(20))],\n            ecn: None,\n        };\n        assert_eq!(frame.frame_type(), FrameType::Ack(Ecn::None));\n        assert_eq!(frame.encoding_size(), 1 + 2 * 3 + 1 + 2);\n        assert_eq!(frame.max_encoding_size(), 1 + 4 * 8 + 2 * 8);\n\n        // test set_ecn and take_ecn\n        let ecn = EcnCounts {\n            ect0: VarInt::from_u32(0x1234),\n            ect1: VarInt::from_u32(0x1234),\n            ce: VarInt::from_u32(0x1234),\n        };\n        
frame.set_ecn(ecn);\n        assert!(frame.ecn.is_some());\n        assert_eq!(frame.take_ecn(), Some(ecn));\n    }\n\n    #[test]\n    fn test_read_ecn_count() {\n        let input = vec![0x52, 0x34, 0x52, 0x34, 0x52, 0x34];\n        let (input, ecn) = be_ecn_counts(&input).unwrap();\n        assert!(input.is_empty());\n        assert_eq!(\n            ecn,\n            EcnCounts {\n                ect0: VarInt::from_u32(0x1234),\n                ect1: VarInt::from_u32(0x1234),\n                ce: VarInt::from_u32(0x1234),\n            }\n        );\n    }\n\n    #[test]\n    fn test_read_ack_frame() {\n        let input = vec![0x02, 0x52, 0x34, 0x52, 0x34, 0x01, 0x52, 0x34, 3, 20];\n        let (input, ack_frame) = flat_map(be_varint, |frame_type| {\n            let ack_frame_type: VarInt = FrameType::Ack(Ecn::None).into();\n            assert_eq!(frame_type, ack_frame_type);\n            ack_frame_with_ecn(Ecn::None)\n        })\n        .parse(&input)\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(\n            ack_frame,\n            AckFrame {\n                largest: VarInt::from_u32(0x1234),\n                delay: VarInt::from_u32(0x1234),\n                first_range: VarInt::from_u32(0x1234),\n                ranges: vec![(VarInt::from_u32(3), VarInt::from_u32(20))],\n                ecn: None,\n            }\n        );\n    }\n\n    #[test]\n    fn test_write_ack_frame() {\n        let mut buf = Vec::new();\n        let frame = AckFrame {\n            largest: VarInt::from_u32(0x1234),\n            delay: VarInt::from_u32(0x1234),\n            first_range: VarInt::from_u32(0x1234),\n            ranges: vec![(VarInt::from_u32(3), VarInt::from_u32(20))],\n            ecn: Some(EcnCounts {\n                ect0: VarInt::from_u32(0x1234),\n                ect1: VarInt::from_u32(0x1234),\n                ce: VarInt::from_u32(0x1234),\n            }),\n        };\n\n        buf.put_frame(&frame);\n        assert_eq!(\n          
  buf,\n            vec![\n                0x03, 0x52, 0x34, 0x52, 0x34, 0x01, 0x52, 0x34, 3, 20, // frame\n                0x52, 0x34, 0x52, 0x34, 0x52, 0x34 // ecn\n            ]\n        );\n    }\n\n    #[test]\n    fn test_ack_frame_into_iter() {\n        let frame = AckFrame {\n            largest: VarInt::from_u32(1000),\n            delay: VarInt::from_u32(0x1234),\n            first_range: VarInt::from_u32(0),\n            ranges: vec![\n                (VarInt::from_u32(0), VarInt::from_u32(2)),\n                (VarInt::from_u32(4), VarInt::from_u32(30)),\n                (VarInt::from_u32(7), VarInt::from_u32(40)),\n            ],\n            ecn: None,\n        };\n\n        let mut iter = frame.iter();\n        assert_eq!(iter.next(), Some(1000..=1000));\n        assert_eq!(iter.next(), Some(996..=998));\n        assert_eq!(iter.next(), Some(960..=990));\n        assert_eq!(iter.next(), Some(911..=951));\n        assert_eq!(iter.next(), None);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/add_address.rs",
    "content": "use std::net::{IpAddr, SocketAddr};\n\nuse derive_more::Deref;\n\nuse super::{\n    EncodeSize, GetFrameType,\n    io::{WriteFrame, WriteFrameType},\n};\nuse crate::{\n    net::{AddrFamily, Family, NatType, WriteSocketAddr, be_socket_addr},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n// ADD_ADDRESS Frame {\n//     Type (i) = 0x3d7e90..0x3d7e91,\n//     Sequence Number (i),\n//     [ IPv4 (32) ],\n//     [ IPv6 (128) ],\n//     Port (16),\n//     Tire (i),\n//     NAT Type (i),\n// }\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Deref)]\npub struct AddAddressFrame {\n    #[deref]\n    address: SocketAddr,\n    seq_num: VarInt,\n    tire: VarInt,\n    nat_type: NatType,\n}\n\npub(crate) fn be_add_address_frame(\n    family: Family,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], AddAddressFrame> {\n    move |input| {\n        let (remain, seq_num) = be_varint(input)?;\n        let (remain, addr) = be_socket_addr(remain, family)?;\n        let (remain, tire) = be_varint(remain)?;\n        let (remain, nat_type) = be_varint(remain)?;\n        let nat_type = NatType::try_from(nat_type).map_err(|_| {\n            nom::Err::Error(nom::error::Error::new(\n                remain,\n                nom::error::ErrorKind::Verify,\n            ))\n        })?;\n        Ok((\n            remain,\n            AddAddressFrame {\n                seq_num,\n                address: addr,\n                tire,\n                nat_type,\n            },\n        ))\n    }\n}\n\nimpl GetFrameType for AddAddressFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::AddAddress(self.address.family())\n    }\n}\n\nimpl EncodeSize for AddAddressFrame {\n    fn max_encoding_size(&self) -> usize {\n        4 // frame type\n            + 8 // seq_num\n            + 2  // port\n            + 16 // ipv6 IP\n            + 8  // tire\n            + 8 // nat_type\n    }\n\n    fn encoding_size(&self) -> usize {\n        let addr_size = match 
self.address.ip() {\n            IpAddr::V4(_) => 2 + 4,\n            IpAddr::V6(_) => 2 + 16,\n        };\n        VarInt::from(self.frame_type()).encoding_size()\n            + self.seq_num.encoding_size()\n            + addr_size\n            + self.tire.encoding_size()\n            + VarInt::from(self.nat_type).encoding_size()\n    }\n}\n\nimpl AddAddressFrame {\n    pub fn new(seq_num: u32, address: SocketAddr, tire: u32, nat_type: NatType) -> Self {\n        Self {\n            seq_num: VarInt::from_u32(seq_num),\n            address,\n            tire: VarInt::from_u32(tire),\n            nat_type,\n        }\n    }\n\n    pub fn seq_num(&self) -> u32 {\n        self.seq_num.into_u64() as u32\n    }\n\n    pub fn tire(&self) -> u32 {\n        self.tire.into_u64() as u32\n    }\n\n    pub fn nat_type(&self) -> NatType {\n        self.nat_type\n    }\n}\n\nimpl<T: bytes::BufMut> WriteFrame<AddAddressFrame> for T {\n    fn put_frame(&mut self, frame: &AddAddressFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.seq_num);\n        self.put_socket_addr(&frame.address);\n        self.put_varint(&frame.tire);\n        self.put_varint(&VarInt::from(frame.nat_type));\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::net::Ipv4Addr;\n\n    use bytes::BytesMut;\n\n    use super::*;\n    use crate::frame::{GetFrameType, be_frame_type, io::WriteFrame};\n\n    #[test]\n    fn test_add_address_frame() {\n        let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080);\n        let frame = AddAddressFrame {\n            seq_num: VarInt::from_u32(1u32),\n            address: addr,\n            tire: VarInt::from_u32(1u32),\n            nat_type: NatType::FullCone,\n        };\n        let mut buf = BytesMut::new();\n        buf.put_frame(&frame);\n        let (remain, frame_type) = be_frame_type(&buf).unwrap();\n        assert_eq!(frame_type, frame.frame_type());\n        let frame2 = 
be_add_address_frame(Family::V4)(remain).unwrap().1;\n        assert_eq!(frame, frame2);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/connection_close.rs",
    "content": "use std::borrow::Cow;\n\nuse derive_more::From;\nuse nom::bytes::complete::take;\n\nuse super::FrameType;\nuse crate::{\n    error::{ErrorFrameType, ErrorKind},\n    frame::{GetFrameType, be_frame_type, io::WriteFrameType},\n    varint::{VarInt, be_varint},\n};\n\n/// Layer flag for CONNECTION_CLOSE frames\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum Layer {\n    /// QUIC transport layer (0x1c)\n    Quic,\n    /// Application layer (0x1d)\n    App,\n}\n\nimpl From<Layer> for u8 {\n    fn from(layer: Layer) -> u8 {\n        match layer {\n            Layer::Quic => 0,\n            Layer::App => 1,\n        }\n    }\n}\n\nimpl From<u8> for Layer {\n    fn from(value: u8) -> Self {\n        match value & 0x01 {\n            0 => Layer::Quic,\n            _ => Layer::App,\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct AppCloseFrame {\n    error_code: VarInt,\n    reason: Cow<'static, str>,\n}\n\nimpl AppCloseFrame {\n    /// Return the error code of the frame.\n    pub fn error_code(&self) -> u64 {\n        self.error_code.into_u64()\n    }\n\n    /// Return the reason of the frame.\n    pub fn reason(&self) -> &str {\n        &self.reason\n    }\n\n    /// Convert into a QUIC-layer [`QuicCloseFrame`] that conceals the application error.\n    ///\n    /// Endpoints MUST clear the value of the Reason Phrase field and SHOULD use\n    /// the APPLICATION_ERROR code when converting to a CONNECTION_CLOSE of type 0x1c;\n    /// otherwise, information about the application state might be revealed.\n    ///\n    /// See [section-10.2.3-3](https://datatracker.ietf.org/doc/html/rfc9000#section-10.2.3-3)\n    /// of [QUIC](https://datatracker.ietf.org/doc/html/rfc9000) for more details.\n    pub fn conceal(&self) -> QuicCloseFrame {\n        QuicCloseFrame {\n            error_kind: ErrorKind::Application,\n            frame_type: ErrorFrameType::V1(FrameType::Padding),\n            reason: Cow::Borrowed(\"\"),\n        }\n    }\n}\n\nimpl From<AppCloseFrame> for QuicCloseFrame {\n    fn from(_: AppCloseFrame) -> 
Self {\n        QuicCloseFrame {\n            error_kind: ErrorKind::Application,\n            frame_type: ErrorFrameType::V1(FrameType::Padding),\n            reason: Cow::Borrowed(\"\"),\n        }\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct QuicCloseFrame {\n    error_kind: ErrorKind,\n    frame_type: ErrorFrameType,\n    reason: Cow<'static, str>,\n}\n\nimpl QuicCloseFrame {\n    /// Return the error kind of the frame.\n    pub fn error_kind(&self) -> ErrorKind {\n        self.error_kind\n    }\n\n    /// Return the frame type of the frame.\n    pub fn frame_type(&self) -> ErrorFrameType {\n        self.frame_type\n    }\n\n    /// Return the reason of the frame.\n    pub fn reason(&self) -> &str {\n        &self.reason\n    }\n}\n\n/// CONNECTION_CLOSE Frame.\n///\n/// ```text\n/// CONNECTION_CLOSE Frame {\n///   Type (i) = 0x1c..0x1d,\n///   Error Code (i),\n///   [Frame Type (i)],\n///   Reason Phrase Length (i),\n///   Reason Phrase (..),\n/// }\n/// ```\n///\n/// See [connection close frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-connection-close-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, From, PartialEq, Eq)]\npub enum ConnectionCloseFrame {\n    App(AppCloseFrame),\n    Quic(QuicCloseFrame),\n}\n\nimpl super::GetFrameType for ConnectionCloseFrame {\n    fn frame_type(&self) -> FrameType {\n        match self {\n            ConnectionCloseFrame::App(_) => FrameType::ConnectionClose(Layer::App),\n            ConnectionCloseFrame::Quic(_) => FrameType::ConnectionClose(Layer::Quic),\n        }\n    }\n}\n\nimpl super::EncodeSize for ConnectionCloseFrame {\n    fn max_encoding_size(&self) -> usize {\n        // reason's length could not exceed 16KB, so it can be encoded in 2 bytes.\n        match self {\n            ConnectionCloseFrame::App(frame) => 1 + 8 + 2 + frame.reason.len(),\n            ConnectionCloseFrame::Quic(frame) => 1 + 8 + 8 + 2 + 
frame.reason.len(),\n        }\n    }\n\n    fn encoding_size(&self) -> usize {\n        match self {\n            ConnectionCloseFrame::App(frame) => {\n                1 + frame.error_code.encoding_size()\n                    // reason's length could not exceed 16KB.\n                    + VarInt::try_from(frame.reason.len()).unwrap().encoding_size()\n                    + frame.reason.len()\n            }\n            ConnectionCloseFrame::Quic(frame) => {\n                1 + VarInt::from(frame.error_kind).encoding_size()\n                    // the frame type is encoded as a variable-length integer,\n                    // which may occupy more than one byte.\n                    + VarInt::from(frame.frame_type).encoding_size()\n                    // reason's length could not exceed 16KB.\n                    + VarInt::try_from(frame.reason.len()).unwrap().encoding_size()\n                    + frame.reason.len()\n            }\n        }\n    }\n}\n\nimpl ConnectionCloseFrame {\n    /// Create a new `ConnectionCloseFrame` at QUIC layer.\n    pub fn new_quic(\n        error_kind: ErrorKind,\n        frame_type: ErrorFrameType,\n        reason: impl Into<Cow<'static, str>>,\n    ) -> Self {\n        Self::Quic(QuicCloseFrame {\n            error_kind,\n            frame_type,\n            reason: reason.into(),\n        })\n    }\n\n    /// Create a new `ConnectionCloseFrame` at application layer.\n    pub fn new_app(error_code: VarInt, reason: impl Into<Cow<'static, str>>) -> Self {\n        Self::App(AppCloseFrame {\n            error_code,\n            reason: reason.into(),\n        })\n    }\n}\n\nfn be_app_close_frame(input: &[u8]) -> nom::IResult<&[u8], AppCloseFrame> {\n    let (remain, error_code) = be_varint(input)?;\n    let (remain, reason_length) = be_varint(remain)?;\n    let (remain, reason) = take(reason_length)(remain)?;\n    let cow = String::from_utf8_lossy(reason).into_owned();\n    Ok((\n        remain,\n        AppCloseFrame {\n            error_code,\n            reason: Cow::Owned(cow),\n        },\n    ))\n}\n\nfn be_quic_close_frame(input: &[u8]) -> nom::IResult<&[u8], QuicCloseFrame> {\n    let (remain, error_code) = 
be_varint(input)?;\n    let error_kind = ErrorKind::try_from(error_code)\n        .map_err(|_e| nom::Err::Error(nom::error::make_error(input, nom::error::ErrorKind::Alt)))?;\n    let (remain, frame_type) = be_frame_type(remain)\n        .map_err(|_e| nom::Err::Error(nom::error::make_error(input, nom::error::ErrorKind::Alt)))?;\n    let (remain, reason_length) = be_varint(remain)?;\n    let (remain, reason) = take(reason_length)(remain)?;\n    let cow = String::from_utf8_lossy(reason).into_owned();\n    Ok((\n        remain,\n        QuicCloseFrame {\n            error_kind,\n            frame_type: frame_type.into(),\n            reason: Cow::Owned(cow),\n        },\n    ))\n}\n\n/// Return a parser for a CONNECTION_CLOSE frame with the given layer.\n///\n/// The `layer` parameter specifies which type of CONNECTION_CLOSE frame to parse:\n/// - `Layer::Quic`: Parse a QUIC transport layer CONNECTION_CLOSE frame (0x1c)\n/// - `Layer::App`: Parse an application layer CONNECTION_CLOSE frame (0x1d)\n///\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn connection_close_frame_at_layer(\n    layer: Layer,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], ConnectionCloseFrame> {\n    move |input: &[u8]| match layer {\n        Layer::App => {\n            be_app_close_frame(input).map(|(remain, app)| (remain, ConnectionCloseFrame::App(app)))\n        }\n        Layer::Quic => be_quic_close_frame(input)\n            .map(|(remain, quic)| (remain, ConnectionCloseFrame::Quic(quic))),\n    }\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<ConnectionCloseFrame> for T {\n    fn put_frame(&mut self, frame: &ConnectionCloseFrame) {\n        use crate::varint::WriteVarInt;\n        self.put_frame_type(frame.frame_type());\n        match frame {\n            ConnectionCloseFrame::App(frame) => {\n                self.put_varint(&frame.error_code);\n                let len = frame.reason.len().min(self.remaining_mut());\n                
self.put_varint(&VarInt::from_u32(len as u32));\n                self.put_slice(&frame.reason.as_bytes()[..len]);\n            }\n            ConnectionCloseFrame::Quic(frame) => {\n                self.put_varint(&frame.error_kind.into());\n                self.put_varint(&frame.frame_type.into());\n                let len = frame.reason.len().min(self.remaining_mut());\n                self.put_varint(&VarInt::from_u32(len as u32));\n                self.put_slice(&frame.reason.as_bytes()[..len]);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::{\n        error::ErrorKind,\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n            stream::{Fin, Len, Offset},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_connection_close_frame() {\n        let frame = ConnectionCloseFrame::new_app(VarInt::from_u32(0x1234), \"wrong\");\n        assert_eq!(frame.frame_type(), FrameType::ConnectionClose(Layer::App));\n        assert_eq!(frame.max_encoding_size(), 1 + 8 + 2 + 5);\n        assert_eq!(frame.encoding_size(), 1 + 2 + 1 + 5);\n    }\n\n    #[test]\n    fn test_read_connection_close_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use crate::varint::be_varint;\n        let mut buf = Vec::new();\n        buf.put_frame_type(FrameType::ConnectionClose(Layer::App));\n        buf.extend_from_slice(&[0x0c, 5, b'w', b'r', b'o', b'n', b'g']);\n        let app_close_frame_type = VarInt::from(FrameType::ConnectionClose(Layer::App));\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == app_close_frame_type {\n                connection_close_frame_at_layer(Layer::App)\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        
assert_eq!(\n            frame,\n            super::ConnectionCloseFrame::new_app(VarInt::from_u32(0x0c), \"wrong\",)\n        );\n    }\n\n    #[test]\n    fn test_write_connection_close_frame() {\n        use super::FrameType;\n        let mut buf = Vec::<u8>::new();\n        let frame = ConnectionCloseFrame::new_quic(\n            ErrorKind::FlowControl,\n            FrameType::Stream(Offset::NonZero, Len::Explicit, Fin::No).into(),\n            \"wrong\",\n        );\n        buf.put_frame(&frame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::ConnectionClose(Layer::Quic));\n        expected.extend_from_slice(&[0x03, 0xe, 5, b'w', b'r', b'o', b'n', b'g']);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/crypto.rs",
    "content": "use std::ops::Range;\n\nuse nom::Parser;\n\nuse crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    util::{ContinuousData, WriteData},\n    varint::{VARINT_MAX, VarInt, WriteVarInt, be_varint},\n};\n\n/// CRYPTO Frame\n///\n/// ```text\n/// CRYPTO Frame {\n///   Type (i) = 0x06,\n///   Offset (i),\n///   Length (i),\n///   Crypto Data (..),\n/// }\n/// ```\n///\n/// See [crypto frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-crypto-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct CryptoFrame {\n    offset: VarInt,\n    length: VarInt,\n}\n\nimpl super::GetFrameType for CryptoFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::Crypto\n    }\n}\n\nimpl super::EncodeSize for CryptoFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.offset.encoding_size() + self.length.encoding_size()\n    }\n}\n\nimpl CryptoFrame {\n    /// Create a new [`CryptoFrame`] with the given offset and length.\n    pub fn new(offset: VarInt, length: VarInt) -> Self {\n        Self { offset, length }\n    }\n\n    /// Return the offset of the frame.\n    pub fn offset(&self) -> u64 {\n        self.offset.into_u64()\n    }\n\n    /// Return the length of the frame.\n    pub fn len(&self) -> u64 {\n        self.length.into_u64()\n    }\n\n    /// Evaluate the maximum number of bytes of data that can be accommodated,\n    /// starting from a certain offset, within a given capacity. If it cannot\n    /// accommodate a CryptoFrame header or can only accommodate 0 bytes, return None.\n    ///\n    /// Note: Panic if the offset exceeds 2^62-1, or the the capacity is too large\n    /// (about 2^32. 
It is impossible to have so much crypto stream data)\n    pub fn estimate_max_capacity(capacity: usize, offset: u64) -> Option<usize> {\n        assert!(offset <= VARINT_MAX);\n        capacity\n            // Must accommodate at least one byte, 'len' takes up 1 byte,\n            // content takes up 1 byte. If these are not satisfied, return None.\n            .checked_sub(1 + VarInt::from_u64(offset).unwrap().encoding_size() + 2)\n            .map(|remaining| match remaining {\n                // Including the 1 byte already considered in checked_sub,\n                // 'length' still takes up 1 byte.\n                value @ 0..=62 => value + 1,\n                // The encoding of 'length' directly takes up 2 bytes, the final 2 bytes\n                // subtracted in 'checked_sub' are all occupied by the encoding of 'length'.\n                // Interestingly, if only 65 bytes are left after removing the encoding of\n                // Type and Offset, whether the encoding of 'length' takes up 1 byte or 2\n                // bytes, only 63 bytes of data can be carried.\n                value @ 0x3F..=0x3F_FF => value,\n                // For the following lengths, the encoding of 'length' needs to occupy 4 bytes.\n                // When the buffer capacity is 0x4000 or 0x4001, the encoding of 'length'\n                // changes to 4 bytes, but the capacity is not enough, so it needs to be rolled back.\n                0x40_00..=0x40_01 => 0x3FFF,\n                value @ 0x40_02..=0x40_00_00_01 => value - 2,\n                // Anything larger is unreachable: a packet exceeding 100 million bytes is impossible.\n                _ => unreachable!(\"crypto frame length could not be too large\"),\n            })\n    }\n\n    /// Return the range of bytes that this frame covers.\n    pub fn range(&self) -> Range<u64> {\n        let start = self.offset.into_u64();\n        let end = start + self.length.into_u64();\n        start..end\n    }\n}\n\n/// Parse a CRYPTO frame from 
the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_crypto_frame(input: &[u8]) -> nom::IResult<&[u8], CryptoFrame> {\n    let (remain, (offset, length)) = (be_varint, be_varint).parse(input)?;\n    // The final offset (offset + length) must not exceed the varint limit.\n    if offset.into_u64() + length.into_u64() > VARINT_MAX {\n        return Err(nom::Err::Error(nom::error::make_error(\n            input,\n            nom::error::ErrorKind::TooLarge,\n        )));\n    }\n    Ok((remain, CryptoFrame { offset, length }))\n}\n\nimpl<T, D> super::io::WriteDataFrame<CryptoFrame, D> for T\nwhere\n    T: bytes::BufMut + WriteData<D>,\n    D: ContinuousData,\n{\n    fn put_data_frame(&mut self, frame: &CryptoFrame, data: &D) {\n        assert_eq!(frame.length.into_u64(), data.len() as u64);\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.offset);\n        self.put_varint(&frame.length);\n        self.put_data(data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::CryptoFrame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteDataFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_crypto_frame() {\n        let frame = CryptoFrame::new(VarInt::from_u32(0), VarInt::from_u32(500));\n        assert_eq!(frame.frame_type(), super::super::FrameType::Crypto);\n        assert_eq!(frame.max_encoding_size(), 1 + 8 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 1 + 2);\n        assert_eq!(frame.offset(), 0);\n        assert_eq!(frame.len(), 500);\n        assert_eq!(frame.range(), 0..500);\n    }\n\n    #[test]\n    fn test_read_crypto_frame() {\n        use super::be_crypto_frame;\n        let buf = vec![0x52, 0x34, 0x80, 0x00, 0x56, 0x78];\n        let (remain, frame) = be_crypto_frame(&buf).unwrap();\n        assert_eq!(remain, &[]);\n        assert_eq!(\n            frame,\n            CryptoFrame::new(VarInt::from_u32(0x1234), VarInt::from_u32(0x5678))\n        );\n    
}\n\n    #[test]\n    fn test_write_crypto_frame() {\n        let mut buf = bytes::BytesMut::new();\n        let frame = CryptoFrame::new(VarInt::from_u32(0x1234), VarInt::from_u32(0x5));\n        buf.put_data_frame(&frame, b\"hello\");\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::Crypto);\n        expected.extend_from_slice(&[0x52, 0x34, 0x05]);\n        expected.extend_from_slice(b\"hello\");\n        assert_eq!(buf, bytes::Bytes::from(expected));\n    }\n\n    #[test]\n    fn test_encoding_capacity_estimate() {\n        assert_eq!(CryptoFrame::estimate_max_capacity(1, 0), None);\n        assert_eq!(CryptoFrame::estimate_max_capacity(4, 0), Some(1));\n        assert_eq!(CryptoFrame::estimate_max_capacity(4, 64), None);\n        assert_eq!(CryptoFrame::estimate_max_capacity(5, 65), Some(1));\n        assert_eq!(CryptoFrame::estimate_max_capacity(67, 65), Some(63));\n        assert_eq!(CryptoFrame::estimate_max_capacity(68, 65), Some(63));\n        assert_eq!(CryptoFrame::estimate_max_capacity(69, 65), Some(64));\n        assert_eq!(CryptoFrame::estimate_max_capacity(16387, 65), Some(16382));\n        assert_eq!(CryptoFrame::estimate_max_capacity(16388, 65), Some(16383));\n        assert_eq!(CryptoFrame::estimate_max_capacity(16389, 65), Some(16383));\n        assert_eq!(CryptoFrame::estimate_max_capacity(16390, 65), Some(16383));\n        assert_eq!(CryptoFrame::estimate_max_capacity(16391, 65), Some(16384));\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_encoding_with_offset_exceeded() {\n        CryptoFrame::estimate_max_capacity(60, 1 << 62);\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_encoding_with_length_too_large() {\n        CryptoFrame::estimate_max_capacity(1 << 31, 20);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/data_blocked.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// DATA_BLOCKED Frame\n///\n/// ```text\n/// DATA_BLOCKED Frame {\n///   Type (i) = 0x14,\n///   Maximum Data (i),\n/// }\n/// ```\n///\n/// See [data-blocked frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-data_blocked-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\npub struct DataBlockedFrame {\n    limit: VarInt,\n}\n\nimpl super::GetFrameType for DataBlockedFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::DataBlocked\n    }\n}\n\nimpl super::EncodeSize for DataBlockedFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.limit.encoding_size()\n    }\n}\n\nimpl DataBlockedFrame {\n    /// Create a new [`DataBlockedFrame`] with the given limit.\n    pub fn new(limit: VarInt) -> Self {\n        Self { limit }\n    }\n\n    /// Return the limit of the frame.\n    pub fn limit(&self) -> u64 {\n        self.limit.into_u64()\n    }\n}\n\n/// Parse a DATA_BLOCKED frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_data_blocked_frame(input: &[u8]) -> nom::IResult<&[u8], DataBlockedFrame> {\n    use nom::{Parser, combinator::map};\n    map(be_varint, DataBlockedFrame::new).parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<DataBlockedFrame> for T {\n    fn put_frame(&mut self, frame: &DataBlockedFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.limit);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::DataBlockedFrame;\n    use crate::{\n        frame::{EncodeSize, FrameType, GetFrameType, io::WriteFrame},\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_data_blocked_frame() {\n        let frame = 
DataBlockedFrame::new(VarInt::from_u32(0x1234));\n        assert_eq!(frame.frame_type(), FrameType::DataBlocked);\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n    }\n\n    #[test]\n    fn test_read_data_blocked_frame() {\n        use super::be_data_blocked_frame;\n        let buf = vec![0x52, 0x34];\n        let (_, frame) = be_data_blocked_frame(&buf).unwrap();\n        assert_eq!(frame, DataBlockedFrame::new(VarInt::from_u32(0x1234)));\n    }\n\n    #[test]\n    fn test_write_data_blocked_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&DataBlockedFrame::new(VarInt::from_u32(0x1234)));\n        let frame_type: VarInt = FrameType::DataBlocked.into();\n        assert_eq!(buf, vec![frame_type.into_u64() as u8, 0x52, 0x34]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/datagram.rs",
    "content": "use bytes::Buf;\nuse nom::IResult;\n\nuse super::{FrameType, GetFrameType, io::WriteFrameType};\nuse crate::{\n    util::{ContinuousData, WriteData},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// DATAGRAM Frame\n///\n/// ```text\n/// DATAGRAM Frame {\n///   Type (i) = 0x30..0x31,\n///   [Length (i)],\n///   Datagram Data (..),\n/// }\n/// ```\n///\n/// See [datagram frame types](https://www.rfc-editor.org/rfc/rfc9000.html#name-datagram-frame-types)\n/// of [An Unreliable Datagram Extension to QUIC](https://www.rfc-editor.org/rfc/rfc9221.html)\n/// for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct DatagramFrame {\n    encode_len: bool,\n    len: VarInt,\n}\n\nimpl DatagramFrame {\n    /// Create a new `DatagramFrame` with the given length.\n    pub fn new(encode_len: bool, len: VarInt) -> Self {\n        Self { encode_len, len }\n    }\n\n    #[inline]\n    pub fn encode_len(&self) -> bool {\n        self.encode_len\n    }\n\n    #[inline]\n    pub fn len(&self) -> VarInt {\n        self.len\n    }\n}\n\nimpl GetFrameType for DatagramFrame {\n    fn frame_type(&self) -> FrameType {\n        FrameType::Datagram(self.encode_len as _)\n    }\n}\n\nimpl super::EncodeSize for DatagramFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self\n            .encode_len\n            .then_some(self.len)\n            .map(VarInt::encoding_size)\n            .unwrap_or_default()\n    }\n}\n\n/// Return a parser for DATAGRAM frames with a flag,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn datagram_frame_with_flag(flag: u8) -> impl FnOnce(&[u8]) -> IResult<&[u8], DatagramFrame> {\n    move |input| {\n        let (remain, len) = if flag == 1 {\n            be_varint(input)?\n        } else {\n            let len = VarInt::try_from(input.remaining())\n                .expect(\"size of datagram frame payload never exceeds limit\");\n  
          (input, len)\n        };\n        let with_len = flag == 1;\n        Ok((\n            remain,\n            DatagramFrame {\n                encode_len: with_len,\n                len,\n            },\n        ))\n    }\n}\n\nimpl<T, D> super::io::WriteDataFrame<DatagramFrame, D> for T\nwhere\n    T: bytes::BufMut + WriteData<D>,\n    D: ContinuousData,\n{\n    fn put_data_frame(&mut self, frame: &DatagramFrame, data: &D) {\n        self.put_frame_type(frame.frame_type());\n        if frame.encode_len {\n            self.put_varint(&frame.len);\n        }\n        self.put_data(data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::frame::{EncodeSize, io::WriteDataFrame};\n\n    #[test]\n    fn test_datagram_frame() {\n        let frame = DatagramFrame {\n            encode_len: true,\n            len: VarInt::from_u32(3),\n        };\n        assert_eq!(frame.frame_type(), FrameType::Datagram(1));\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 1);\n    }\n\n    #[test]\n    fn test_datagram_frame_with_flag() {\n        let input = [0x05, 0x00, 0x00, 0x00, 0x00, 0x00];\n        let expected_output = DatagramFrame {\n            encode_len: true,\n            len: VarInt::from_u32(5),\n        };\n        let (remain, frame) = datagram_frame_with_flag(1)(&input).unwrap();\n        assert_eq!(remain, &[0x00, 0x00, 0x00, 0x00, 0x00]);\n        assert_eq!(frame, expected_output);\n    }\n\n    #[test]\n    fn test_datagram_frame_with_flag_no_length() {\n        let input = b\"114514\";\n        let expected_output = DatagramFrame {\n            encode_len: false,\n            len: VarInt::from_u32(6),\n        };\n        let (remain, frame) = datagram_frame_with_flag(0)(input).unwrap();\n        assert_eq!(remain, input);\n        assert_eq!(frame, expected_output);\n    }\n\n    #[test]\n    fn test_put_datagram_frame_with_length() {\n        let frame = DatagramFrame {\n       
     encode_len: true,\n            len: VarInt::from_u32(3),\n        };\n        let mut buf = Vec::new();\n        buf.put_data_frame(&frame, &[0x01, 0x02, 0x03]);\n        assert_eq!(&buf, &[0x31, 0x03, 0x01, 0x02, 0x03]);\n    }\n\n    #[test]\n    fn test_put_datagram_frame_no_length() {\n        let frame = DatagramFrame {\n            encode_len: false,\n            len: VarInt::from_u32(3),\n        };\n        let mut buf = Vec::new();\n        buf.put_data_frame(&frame, &[0x01, 0x02, 0x03]);\n        assert_eq!(&buf, &[0x30, 0x01, 0x02, 0x03]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/error.rs",
    "content": "use nom::error::ErrorKind as NomErrorKind;\nuse thiserror::Error;\n\nuse super::FrameType;\nuse crate::{\n    error::{ErrorKind as QuicErrorKind, QuicError},\n    packet::r#type::Type,\n    varint::VarInt,\n};\n\n/// Parse errors when decoding QUIC frames.\n#[derive(Debug, Clone, Eq, PartialEq, Error)]\npub enum Error {\n    #[error(\"A packet containing no frames\")]\n    NoFrames,\n    #[error(\"Incomplete frame type: {0}\")]\n    IncompleteType(String),\n    #[error(\"Invalid frame type from {0}\")]\n    InvalidType(VarInt),\n    #[error(\"Wrong frame type {0:?}\")]\n    WrongType(FrameType, Type),\n    #[error(\"Incomplete frame {0:?}: {1}\")]\n    IncompleteFrame(FrameType, String),\n    #[error(\"Error occurred when parsing frame {0:?}: {1}\")]\n    ParseError(FrameType, String),\n}\n\nimpl From<Error> for QuicError {\n    fn from(e: Error) -> Self {\n        match e {\n            // An endpoint MUST treat receipt of a packet containing no frames as a connection error of type PROTOCOL_VIOLATION.\n            Error::NoFrames => {\n                Self::with_default_fty(QuicErrorKind::ProtocolViolation, e.to_string())\n            }\n            Error::IncompleteType(_) => {\n                Self::with_default_fty(QuicErrorKind::FrameEncoding, e.to_string())\n            }\n            Error::InvalidType(_) => {\n                Self::with_default_fty(QuicErrorKind::FrameEncoding, e.to_string())\n            }\n            Error::WrongType(fty, _) => {\n                Self::new(QuicErrorKind::FrameEncoding, fty.into(), e.to_string())\n            }\n            Error::IncompleteFrame(fty, _) => {\n                Self::new(QuicErrorKind::FrameEncoding, fty.into(), e.to_string())\n            }\n            Error::ParseError(fty, _) => {\n                Self::new(QuicErrorKind::FrameEncoding, fty.into(), e.to_string())\n            }\n        }\n    }\n}\n\nimpl From<nom::Err<Error>> for Error {\n    fn from(error: nom::Err<Error>) -> Self {\n 
       match error {\n            nom::Err::Incomplete(_needed) => {\n                unreachable!(\"Because the parsing of QUIC packets and frames is not stream-based.\")\n            }\n            nom::Err::Error(err) | nom::Err::Failure(err) => err,\n        }\n    }\n}\n\nimpl nom::error::ParseError<&[u8]> for Error {\n    fn from_error_kind(_input: &[u8], _kind: NomErrorKind) -> Self {\n        debug_assert_eq!(_kind, NomErrorKind::ManyTill);\n        unreachable!(\"QUIC frame parser must always consume\")\n    }\n\n    fn append(_input: &[u8], _kind: NomErrorKind, source: Self) -> Self {\n        // A source error was encountered while parsing a frame; many_till expects to be\n        // told about it via the ManyTill error kind. The source error is more meaningful\n        // here, so return it directly.\n        debug_assert_eq!(_kind, NomErrorKind::ManyTill);\n        source\n    }\n}\n\n// TODO: convert DecodingError to QuicError\n\n#[cfg(test)]\nmod tests {\n    use nom::error::ParseError;\n\n    use super::*;\n    use crate::packet::r#type::{\n        Type,\n        long::{Type::V1, Ver1},\n    };\n\n    #[test]\n    fn test_error_conversion_to_transport_error() {\n        let cases = vec![\n            (Error::NoFrames, QuicErrorKind::ProtocolViolation),\n            (\n                Error::IncompleteType(\"test\".to_string()),\n                QuicErrorKind::FrameEncoding,\n            ),\n            (\n                Error::InvalidType(VarInt::from_u32(0x1f)),\n                QuicErrorKind::FrameEncoding,\n            ),\n            (\n                Error::WrongType(FrameType::Ping, Type::Long(V1(Ver1::INITIAL))),\n                QuicErrorKind::FrameEncoding,\n            ),\n            (\n                Error::IncompleteFrame(FrameType::Ping, \"incomplete\".to_string()),\n                QuicErrorKind::FrameEncoding,\n            ),\n            (\n                Error::ParseError(FrameType::Ping, \"parse error\".to_string()),\n                QuicErrorKind::FrameEncoding,\n            ),\n        ];\n\n        for (error, expected_kind) in cases {\n            let 
quic_error: QuicError = error.into();\n            assert_eq!(quic_error.kind(), expected_kind);\n        }\n    }\n\n    #[test]\n    fn test_nom_error_conversion() {\n        let error = Error::NoFrames;\n        let nom_error = nom::Err::Error(error.clone());\n        let converted: Error = nom_error.into();\n        assert_eq!(converted, error);\n\n        let nom_failure = nom::Err::Failure(error.clone());\n        let converted: Error = nom_failure.into();\n        assert_eq!(converted, error);\n    }\n\n    #[test]\n    fn test_parse_error_impl() {\n        let error = Error::ParseError(FrameType::Ping, \"test error\".to_string());\n        let appended = Error::append(&[], NomErrorKind::ManyTill, error.clone());\n        assert_eq!(appended, error);\n    }\n\n    #[test]\n    #[should_panic(expected = \"QUIC frame parser must always consume\")]\n    fn test_parse_error_unreachable() {\n        Error::from_error_kind(&[], NomErrorKind::ManyTill);\n    }\n\n    #[test]\n    fn test_error_display() {\n        let error = Error::NoFrames;\n        assert_eq!(error.to_string(), \"A packet containing no frames\");\n\n        let error = Error::IncompleteType(\"test\".to_string());\n        assert_eq!(error.to_string(), \"Incomplete frame type: test\");\n\n        let error = Error::InvalidType(VarInt::from_u32(0x1f));\n        assert_eq!(error.to_string(), \"Invalid frame type from 31\");\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/handshake_done.rs",
    "content": "use super::EncodeSize;\nuse crate::frame::{GetFrameType, io::WriteFrameType};\n/// HandshakeDone frame\n///\n/// ```text\n/// HANDSHAKE_DONE Frame {\n///   Type (i) = 0x1e,\n/// }\n/// ```\n///\n/// See [HANDSHAKE_DONE Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-handshake_done-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\npub struct HandshakeDoneFrame;\n\nimpl super::GetFrameType for HandshakeDoneFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::HandshakeDone\n    }\n}\n\nimpl EncodeSize for HandshakeDoneFrame {}\n\n/// Parse a HANDSHAKE_DONE frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n#[allow(unused)]\npub fn be_handshake_done_frame(input: &[u8]) -> nom::IResult<&[u8], HandshakeDoneFrame> {\n    Ok((input, HandshakeDoneFrame))\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<HandshakeDoneFrame> for T {\n    fn put_frame(&mut self, frame: &HandshakeDoneFrame) {\n        self.put_frame_type(frame.frame_type());\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType, HandshakeDoneFrame,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_handshake_done_frame() {\n        assert_eq!(HandshakeDoneFrame.frame_type(), FrameType::HandshakeDone);\n        assert_eq!(HandshakeDoneFrame.max_encoding_size(), 1);\n        assert_eq!(HandshakeDoneFrame.encoding_size(), 1);\n    }\n\n    #[test]\n    fn test_read_handshake_done_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use super::be_handshake_done_frame;\n        use crate::varint::be_varint;\n        let handshake_done_frame_type = VarInt::from(FrameType::HandshakeDone);\n        let buf = vec![handshake_done_frame_type.into_u64() as u8];\n        let (input, 
frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == handshake_done_frame_type {\n                be_handshake_done_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, super::HandshakeDoneFrame);\n    }\n\n    #[test]\n    fn test_write_handshake_done_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&HandshakeDoneFrame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::HandshakeDone);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/io.rs",
    "content": "use std::{\n    pin::Pin,\n    task::{Context, Poll},\n};\n\nuse bytes::Bytes;\n\nuse super::{\n    ack::ack_frame_with_ecn, add_address::be_add_address_frame,\n    connection_close::connection_close_frame_at_layer, crypto::be_crypto_frame,\n    data_blocked::be_data_blocked_frame, datagram::datagram_frame_with_flag,\n    max_data::be_max_data_frame, max_stream_data::be_max_stream_data_frame,\n    max_streams::max_streams_frame_with_dir, new_connection_id::be_new_connection_id_frame,\n    new_token::be_new_token_frame, path_challenge::be_path_challenge_frame,\n    path_response::be_path_response_frame, punch_done::be_punch_done_frame,\n    punch_hello::be_punch_hello_frame, punch_me_now::be_punch_me_now_frame,\n    remove_address::be_remove_address_frame, reset_stream::be_reset_stream_frame,\n    retire_connection_id::be_retire_connection_id_frame, stop_sending::be_stop_sending_frame,\n    stream::stream_frame_with_flag, stream_data_blocked::be_stream_data_blocked_frame,\n    streams_blocked::streams_blocked_frame_with_dir, *,\n};\nuse crate::{ArcReceiving, Receiving, ResetError, util::ContinuousData};\n\n/// Return a parser for a complete frame from the raw bytes with the given type,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n///\n/// Some frames like [`StreamFrame`] and [`CryptoFrame`] have a data body,\n/// which use `bytes::Bytes` to store.\nfn complete_frame(\n    frame_type: FrameType,\n    raw: Bytes,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], Frame> {\n    use nom::{Parser, combinator::map};\n    move |input: &[u8]| match frame_type {\n        FrameType::Padding => Ok((input, Frame::Padding(PaddingFrame))),\n        FrameType::Ping => Ok((input, Frame::Ping(PingFrame))),\n        FrameType::ConnectionClose(layer) => {\n            map(connection_close_frame_at_layer(layer), Frame::Close).parse(input)\n        }\n        FrameType::NewConnectionId => {\n            map(be_new_connection_id_frame, 
Frame::NewConnectionId).parse(input)\n        }\n        FrameType::RetireConnectionId => {\n            map(be_retire_connection_id_frame, Frame::RetireConnectionId).parse(input)\n        }\n        FrameType::DataBlocked => map(be_data_blocked_frame, Frame::DataBlocked).parse(input),\n        FrameType::MaxData => map(be_max_data_frame, Frame::MaxData).parse(input),\n        FrameType::PathChallenge => map(be_path_challenge_frame, Frame::PathChallenge).parse(input),\n        FrameType::PathResponse => map(be_path_response_frame, Frame::PathResponse).parse(input),\n        FrameType::HandshakeDone => Ok((input, Frame::HandshakeDone(HandshakeDoneFrame))),\n        FrameType::NewToken => map(be_new_token_frame, Frame::NewToken).parse(input),\n        FrameType::Ack(ecn) => map(ack_frame_with_ecn(ecn), Frame::Ack).parse(input),\n        FrameType::ResetStream => {\n            map(be_reset_stream_frame, |f| Frame::StreamCtl(f.into())).parse(input)\n        }\n        FrameType::StopSending => {\n            map(be_stop_sending_frame, |f| Frame::StreamCtl(f.into())).parse(input)\n        }\n        FrameType::MaxStreamData => {\n            map(be_max_stream_data_frame, |f| Frame::StreamCtl(f.into())).parse(input)\n        }\n        FrameType::MaxStreams(dir) => map(max_streams_frame_with_dir(dir), |f| {\n            Frame::StreamCtl(f.into())\n        })\n        .parse(input),\n        FrameType::StreamsBlocked(dir) => map(streams_blocked_frame_with_dir(dir), |f| {\n            Frame::StreamCtl(f.into())\n        })\n        .parse(input),\n        FrameType::StreamDataBlocked => {\n            map(be_stream_data_blocked_frame, |f| Frame::StreamCtl(f.into())).parse(input)\n        }\n        FrameType::Crypto => {\n            let (input, frame) = be_crypto_frame(input)?;\n            let start = raw.len() - input.len();\n            let len = frame.len() as usize;\n            if input.len() < len {\n                Err(nom::Err::Incomplete(nom::Needed::new(len - 
input.len())))\n            } else {\n                let data = raw.slice(start..start + len);\n                Ok((&input[len..], Frame::Crypto(frame, data)))\n            }\n        }\n        FrameType::Stream(offset, len, fin) => {\n            let (input, frame) = stream_frame_with_flag(offset, len, fin)(input)?;\n            let start = raw.len() - input.len();\n            let len = frame.len();\n            if input.len() < len {\n                Err(nom::Err::Incomplete(nom::Needed::new(len - input.len())))\n            } else {\n                let data = raw.slice(start..start + len);\n                Ok((&input[len..], Frame::Stream(frame, data)))\n            }\n        }\n        FrameType::Datagram(with_len) => {\n            let (input, frame) = datagram_frame_with_flag(with_len)(input)?;\n            let start = raw.len() - input.len();\n            match frame.encode_len() {\n                true if frame.len().into_u64() > input.len() as u64 => Err(nom::Err::Incomplete(\n                    nom::Needed::new((frame.len().into_u64() - input.len() as u64) as usize),\n                )),\n                true => {\n                    let data = raw.slice(start..start + frame.len().into_u64() as usize);\n                    Ok((\n                        &input[frame.len().into_u64() as usize..],\n                        Frame::Datagram(frame, data),\n                    ))\n                }\n                false => {\n                    let data = raw.slice(start..);\n                    Ok((&[], Frame::Datagram(frame, data)))\n                }\n            }\n        }\n        FrameType::AddAddress(family) => {\n            map(be_add_address_frame(family), Frame::AddAddress).parse(input)\n        }\n        FrameType::RemoveAddress => map(be_remove_address_frame, Frame::RemoveAddress).parse(input),\n        FrameType::PunchMeNow(family) => {\n            map(be_punch_me_now_frame(family), Frame::PunchMeNow).parse(input)\n        }\n        
FrameType::PunchHello => map(be_punch_hello_frame, Frame::PunchHello).parse(input),\n        FrameType::PunchDone => map(be_punch_done_frame, Frame::PunchDone).parse(input),\n    }\n}\n\n/// Parse a frame from the raw bytes, [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_frame(raw: &Bytes, packet_type: Type) -> Result<(usize, Frame, FrameType), Error> {\n    let input = raw.as_ref();\n    let (remain, frame_type) = be_frame_type(input)?;\n    if !frame_type.belongs_to(packet_type) {\n        return Err(Error::WrongType(frame_type, packet_type));\n    }\n\n    let (remain, frame) = complete_frame(frame_type, raw.clone())(remain).map_err(|e| match e {\n        ne @ nom::Err::Incomplete(_) => {\n            nom::Err::Error(Error::IncompleteFrame(frame_type, ne.to_string()))\n        }\n        nom::Err::Error(ne) => {\n            // may be TooLarge in MaxStreamsFrame/CryptoFrame/StreamFrame,\n            // or may be Verify in NewConnectionIdFrame,\n            // or may be Alt in ConnectionCloseFrame\n            nom::Err::Error(Error::ParseError(\n                frame_type,\n                ne.code.description().to_owned(),\n            ))\n        }\n        _ => unreachable!(\"frame parsers never return nom::Err::Failure\"),\n    })?;\n    Ok((input.len() - remain.len(), frame, frame_type))\n}\n\n/// A [`bytes::BufMut`] extension trait that makes it more convenient\n/// for a buffer to write all kinds of frames.\npub trait WriteFrame<F>: bytes::BufMut {\n    /// Write a frame to the buffer.\n    fn put_frame(&mut self, frame: &F);\n}\n\nimpl<B, D> WriteFrame<Frame<D>> for B\nwhere\n    D: ContinuousData,\n    B: BufMut + ?Sized,\n    for<'b> &'b mut B: crate::util::WriteData<D>,\n{\n    fn put_frame(&mut self, frame: &Frame<D>) {\n        #[inline(always)]\n        fn put<F, B: WriteFrame<F> + ?Sized>(buf: &mut B, frame: &F) {\n            buf.put_frame(frame);\n        }\n        let mut buf = self;\n        match frame {\n            
Frame::Padding(f) => put(&mut buf, f),\n            Frame::Ping(f) => put(&mut buf, f),\n            Frame::Ack(f) => put(&mut buf, f),\n            Frame::Close(f) => put(&mut buf, f),\n            Frame::NewToken(f) => put(&mut buf, f),\n            Frame::MaxData(f) => put(&mut buf, f),\n            Frame::DataBlocked(f) => put(&mut buf, f),\n            Frame::AddAddress(f) => put(&mut buf, f),\n            Frame::RemoveAddress(f) => put(&mut buf, f),\n            Frame::PunchMeNow(f) => put(&mut buf, f),\n            Frame::PunchHello(f) => put(&mut buf, f),\n            Frame::PunchDone(f) => put(&mut buf, f),\n            Frame::NewConnectionId(f) => put(&mut buf, f),\n            Frame::RetireConnectionId(f) => put(&mut buf, f),\n            Frame::HandshakeDone(f) => put(&mut buf, f),\n            Frame::PathChallenge(f) => put(&mut buf, f),\n            Frame::PathResponse(f) => put(&mut buf, f),\n            Frame::StreamCtl(f) => put(&mut buf, f),\n            Frame::Stream(f, d) => buf.put_data_frame(f, d),\n            Frame::Crypto(f, d) => buf.put_data_frame(f, d),\n            Frame::Datagram(f, d) => buf.put_data_frame(f, d),\n        }\n    }\n}\n\n/// A [`bytes::BufMut`] extension trait that makes it more convenient\n/// for a buffer to write a frame together with its data.\npub trait WriteDataFrame<F, D: ContinuousData>: bytes::BufMut {\n    /// Write a frame and its data to the buffer.\n    fn put_data_frame(&mut self, frame: &F, data: &D);\n}\n\n/// A [`bytes::BufMut`] extension trait to write [`FrameType`].\npub trait WriteFrameType: bytes::BufMut {\n    /// Write a frame type to the buffer.\n    fn put_frame_type(&mut self, frame_type: FrameType);\n}\n\nimpl<T: BufMut> WriteFrameType for T {\n    fn put_frame_type(&mut self, frame_type: FrameType) {\n        use crate::varint::WriteVarInt;\n        let fty: VarInt = frame_type.into();\n        self.put_varint(&fty);\n    }\n}\n\n/// Some modules that need to send specific frames can implement the `SendFrame` trait 
directly.\n///\n/// Alternatively, a temporary buffer that stores certain frames can also implement this trait,\n/// but additional processing is required to ensure that the frames in the buffer are eventually\n/// sent to the peer.\npub trait SendFrame<T> {\n    /// Send the frames to the peer.\n    fn send_frame<I: IntoIterator<Item = T>>(&self, iter: I);\n}\n\n/// Some modules that need to receive specific frames can implement the `ReceiveFrame` trait directly.\n///\n/// Alternatively, a temporary buffer that stores certain frames can also implement this trait,\n/// but additional processing is required to ensure that the frames in the buffer are eventually\n/// delivered to the corresponding modules.\npub trait ReceiveFrame<T> {\n    /// The output produced by receiving a frame.\n    type Output;\n\n    /// Receive a frame from the peer.\n    fn recv_frame(&self, frame: T) -> Result<Self::Output, crate::error::Error>;\n}\n\nimpl<F: Unpin> Future for Receiving<F> {\n    type Output = Result<Option<F>, ResetError>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        let state = self.get_mut();\n        match std::mem::take(state) {\n            // Nothing received yet: (re)register the current task's waker so the\n            // future is woken once a frame arrives or the stream is reset.\n            Self::Pending | Self::Waiting(_) => {\n                *state = Self::Waiting(cx.waker().clone());\n                Poll::Pending\n            }\n            Self::Rcvd(frame) => {\n                *state = Self::Read;\n                Poll::Ready(Ok(Some(frame)))\n            }\n            Self::Read => {\n                *state = Self::Read;\n                Poll::Ready(Ok(None))\n            }\n            Self::Reset => {\n                *state = Self::Reset;\n                Poll::Ready(Err(ResetError))\n            }\n        }\n    }\n}\n\nimpl<F> ReceiveFrame<F> for ArcReceiving<F> {\n    type Output = ();\n\n    fn recv_frame(&self, frame: F) -> Result<Self::Output, crate::error::Error> {\n        self.0.lock().unwrap().recv_frame(frame);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/max_data.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// MAX_DATA Frame\n///\n/// ```text\n/// MAX_DATA Frame {\n///   Type (i) = 0x10,\n///   Maximum Data (i),\n/// }\n/// ```\n///\n/// See [MAX_DATA Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-max_data-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\npub struct MaxDataFrame {\n    max_data: VarInt,\n}\n\nimpl super::GetFrameType for MaxDataFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::MaxData\n    }\n}\n\nimpl super::EncodeSize for MaxDataFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.max_data.encoding_size()\n    }\n}\n\nimpl MaxDataFrame {\n    /// Create a new [`MaxDataFrame`] with the given maximum data.\n    pub fn new(max_data: VarInt) -> Self {\n        Self { max_data }\n    }\n\n    /// Return the maximum data of the frame.\n    pub fn max_data(&self) -> u64 {\n        self.max_data.into_u64()\n    }\n}\n\n/// Parse a MAX_DATA frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_max_data_frame(input: &[u8]) -> nom::IResult<&[u8], MaxDataFrame> {\n    use nom::{Parser, combinator::map};\n    map(be_varint, MaxDataFrame::new).parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<MaxDataFrame> for T {\n    fn put_frame(&mut self, frame: &MaxDataFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.max_data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::MaxDataFrame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_max_data_frame() {\n        let frame 
= MaxDataFrame::new(VarInt::from_u32(0x1234));\n        assert_eq!(frame.frame_type(), FrameType::MaxData);\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n    }\n\n    #[test]\n    fn test_read_max_data_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use super::be_max_data_frame;\n        use crate::varint::be_varint;\n        let max_data_frame_type = VarInt::from(FrameType::MaxData);\n        let buf = vec![max_data_frame_type.into_u64() as u8, 0x52, 0x34];\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == max_data_frame_type {\n                be_max_data_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, MaxDataFrame::new(VarInt::from_u32(0x1234),));\n    }\n\n    #[test]\n    fn test_write_max_data_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&MaxDataFrame::new(VarInt::from_u32(0x1234)));\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::MaxData);\n        expected.extend_from_slice(&[0x52, 0x34]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/max_stream_data.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    sid::{StreamId, WriteStreamId, be_streamid},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// MAX_STREAM_DATA frame.\n///\n/// ```text\n/// MAX_STREAM_DATA Frame {\n///   Type (i) = 0x11,\n///   Stream ID (i),\n///   Maximum Stream Data (i),\n/// }\n/// ```\n///\n/// See [MAX_STREAM_DATA Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-max_stream_data-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct MaxStreamDataFrame {\n    stream_id: StreamId,\n    max_stream_data: VarInt,\n}\n\nimpl MaxStreamDataFrame {\n    /// Create a new [`MaxStreamDataFrame`].\n    pub fn new(stream_id: StreamId, max_stream_data: VarInt) -> Self {\n        Self {\n            stream_id,\n            max_stream_data,\n        }\n    }\n\n    /// Return the stream ID of the frame.\n    pub fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    /// Return the maximum stream data of the frame.\n    pub fn max_stream_data(&self) -> u64 {\n        self.max_stream_data.into_u64()\n    }\n}\n\nimpl super::GetFrameType for MaxStreamDataFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::MaxStreamData\n    }\n}\n\nimpl super::EncodeSize for MaxStreamDataFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.stream_id.encoding_size() + self.max_stream_data.encoding_size()\n    }\n}\n\n/// Parse a MAX_STREAM_DATA frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_max_stream_data_frame(input: &[u8]) -> nom::IResult<&[u8], MaxStreamDataFrame> {\n    use nom::{Parser, combinator::map, sequence::pair};\n    map(\n        pair(be_streamid, be_varint),\n        |(stream_id, max_stream_data)| MaxStreamDataFrame {\n            stream_id,\n       
     max_stream_data,\n        },\n    )\n    .parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<MaxStreamDataFrame> for T {\n    fn put_frame(&mut self, frame: &MaxStreamDataFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_streamid(&frame.stream_id);\n        self.put_varint(&frame.max_stream_data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::MaxStreamDataFrame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_max_stream_data_frame() {\n        let frame =\n            MaxStreamDataFrame::new(VarInt::from_u32(0x1234).into(), VarInt::from_u32(0x5678));\n        assert_eq!(frame.stream_id, VarInt::from_u32(0x1234).into());\n        assert_eq!(frame.max_stream_data, VarInt::from_u32(0x5678));\n        assert_eq!(frame.frame_type(), FrameType::MaxStreamData);\n        assert_eq!(frame.max_encoding_size(), 1 + 8 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2 + 4);\n    }\n\n    #[test]\n    fn test_read_max_stream_data_frame() {\n        use super::be_max_stream_data_frame;\n        let buf = vec![0x52, 0x34, 0x80, 0, 0x56, 0x78];\n        let (_, frame) = be_max_stream_data_frame(&buf).unwrap();\n        assert_eq!(frame.stream_id(), VarInt::from_u32(0x1234).into());\n        assert_eq!(frame.max_stream_data(), 0x5678);\n    }\n\n    #[test]\n    fn test_write_max_stream_data_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&MaxStreamDataFrame::new(\n            VarInt::from_u32(0x1234).into(),\n            VarInt::from_u32(0x5678),\n        ));\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::MaxStreamData);\n        expected.extend_from_slice(&[0x52, 0x34, 0x80, 0, 0x56, 0x78]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/max_streams.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    sid::{Dir, MAX_STREAMS_LIMIT},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// MAX_STREAMS frame.\n///\n/// ```text\n/// MAX_STREAMS Frame {\n///   Type (i) = 0x12..0x13,\n///   Maximum Streams (i),\n/// }\n/// ```\n///\n/// See [MAX_STREAMS Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-max_streams-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum MaxStreamsFrame {\n    Bi(VarInt),\n    Uni(VarInt),\n}\n\nimpl MaxStreamsFrame {\n    pub fn with(dir: Dir, max_streams: VarInt) -> Self {\n        match dir {\n            Dir::Bi => MaxStreamsFrame::Bi(max_streams),\n            Dir::Uni => MaxStreamsFrame::Uni(max_streams),\n        }\n    }\n}\n\nimpl super::GetFrameType for MaxStreamsFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::MaxStreams(match self {\n            MaxStreamsFrame::Bi(_) => Dir::Bi,\n            MaxStreamsFrame::Uni(_) => Dir::Uni,\n        })\n    }\n}\n\nimpl super::EncodeSize for MaxStreamsFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + match self {\n            MaxStreamsFrame::Bi(max_streams) => max_streams.encoding_size(),\n            MaxStreamsFrame::Uni(max_streams) => max_streams.encoding_size(),\n        }\n    }\n}\n\n/// Returns a parser for MAX_STREAMS frame with the given direction,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn max_streams_frame_with_dir(\n    dir: Dir,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], MaxStreamsFrame> {\n    move |input: &[u8]| {\n        let (remain, max_streams) = be_varint(input)?;\n        if max_streams > MAX_STREAMS_LIMIT {\n            Err(nom::Err::Error(nom::error::Error::new(\n                input,\n                nom::error::ErrorKind::TooLarge,\n            )))\n    
    } else {\n            Ok((\n                remain,\n                match dir {\n                    Dir::Bi => MaxStreamsFrame::Bi(max_streams),\n                    Dir::Uni => MaxStreamsFrame::Uni(max_streams),\n                },\n            ))\n        }\n    }\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<MaxStreamsFrame> for T {\n    fn put_frame(&mut self, frame: &MaxStreamsFrame) {\n        match frame {\n            MaxStreamsFrame::Bi(max_streams) | MaxStreamsFrame::Uni(max_streams) => {\n                self.put_frame_type(frame.frame_type());\n                self.put_varint(max_streams);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use nom::{Parser, combinator::flat_map};\n\n    use super::{MaxStreamsFrame, max_streams_frame_with_dir};\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        sid::Dir,\n        varint::{VarInt, be_varint},\n    };\n\n    #[test]\n    fn test_max_streams_frame() {\n        let frame = MaxStreamsFrame::Bi(VarInt::from_u32(0x1234));\n        assert_eq!(frame.frame_type(), FrameType::MaxStreams(Dir::Bi));\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n\n        let frame = MaxStreamsFrame::Uni(VarInt::from_u32(0x1236));\n        assert_eq!(frame.frame_type(), FrameType::MaxStreams(Dir::Uni));\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n    }\n\n    #[test]\n    fn test_read_max_streams_frame() {\n        let max_streams_bi_type = VarInt::from(FrameType::MaxStreams(Dir::Bi));\n        let max_streams_uni_type = VarInt::from(FrameType::MaxStreams(Dir::Uni));\n        let buf = 
vec![max_streams_bi_type.into_u64() as u8, 0x52, 0x34];\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == max_streams_bi_type {\n                max_streams_frame_with_dir(Dir::Bi)\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, MaxStreamsFrame::Bi(VarInt::from_u32(0x1234)));\n\n        let buf = vec![max_streams_uni_type.into_u64() as u8, 0x52, 0x36];\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == max_streams_uni_type {\n                max_streams_frame_with_dir(Dir::Uni)\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, MaxStreamsFrame::Uni(VarInt::from_u32(0x1236)));\n    }\n\n    #[test]\n    fn test_read_too_large_max_streams_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame_type(FrameType::MaxStreams(Dir::Bi));\n        buf.extend_from_slice(&[0xd0, 0x34, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80]);\n        let result = flat_map(be_varint, |frame_type| {\n            if frame_type == VarInt::from(FrameType::MaxStreams(Dir::Bi)) {\n                max_streams_frame_with_dir(Dir::Bi)\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref());\n        assert_eq!(\n            result,\n            Err(nom::Err::Error(nom::error::Error::new(\n                &buf[1..],\n                nom::error::ErrorKind::TooLarge,\n            )))\n        );\n    }\n\n    #[test]\n    fn test_write_max_streams_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&MaxStreamsFrame::Bi(VarInt::from_u32(0x1234)));\n        let mut expected = 
Vec::new();\n        expected.put_frame_type(FrameType::MaxStreams(Dir::Bi));\n        expected.extend_from_slice(&[0x52, 0x34]);\n        assert_eq!(buf, expected);\n        buf.clear();\n        buf.put_frame(&MaxStreamsFrame::Uni(VarInt::from_u32(0x1236)));\n        expected.clear();\n        expected.put_frame_type(FrameType::MaxStreams(Dir::Uni));\n        expected.extend_from_slice(&[0x52, 0x36]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/new_connection_id.rs",
    "content": "use crate::{\n    cid::{ConnectionId, WriteConnectionId, be_connection_id},\n    frame::{GetFrameType, io::WriteFrameType},\n    token::{RESET_TOKEN_SIZE, ResetToken, be_reset_token},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// NEW_CONNECTION_ID frame.\n///\n/// ```text\n/// NEW_CONNECTION_ID Frame {\n///   Type (i) = 0x18,\n///   Sequence Number (i),\n///   Retire Prior To (i),\n///   Length (8),\n///   Connection ID (8..160),\n///   Stateless Reset Token (128),\n/// }\n/// ```\n///\n/// See [NEW_CONNECTION_ID Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-new_connection_id-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct NewConnectionIdFrame {\n    sequence: VarInt,\n    retire_prior_to: VarInt,\n    id: ConnectionId,\n    reset_token: ResetToken,\n}\n\nimpl NewConnectionIdFrame {\n    /// Create a new [`NewConnectionIdFrame`].\n    pub fn new(cid: ConnectionId, sequence: VarInt, retire_prior_to: VarInt) -> Self {\n        let reset_token = ResetToken::random_gen();\n        Self {\n            sequence,\n            retire_prior_to,\n            id: cid,\n            reset_token,\n        }\n    }\n\n    /// Return the sequence number of the frame.\n    pub fn sequence(&self) -> u64 {\n        self.sequence.into_u64()\n    }\n\n    /// Return the retire prior to of the frame.\n    pub fn retire_prior_to(&self) -> u64 {\n        self.retire_prior_to.into_u64()\n    }\n\n    /// Return the connection ID of the frame.\n    pub fn connection_id(&self) -> &ConnectionId {\n        &self.id\n    }\n\n    /// Return the reset token of the frame.\n    pub fn reset_token(&self) -> &ResetToken {\n        &self.reset_token\n    }\n}\n\nimpl super::GetFrameType for NewConnectionIdFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::NewConnectionId\n    }\n}\n\nimpl super::EncodeSize for NewConnectionIdFrame {\n   
 fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8 + 21 + RESET_TOKEN_SIZE\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.sequence.encoding_size()\n            + self.retire_prior_to.encoding_size()\n            + 1\n            + self.id.len as usize\n            + RESET_TOKEN_SIZE\n    }\n}\n\n/// Parse a NEW_CONNECTION_ID frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_new_connection_id_frame(input: &[u8]) -> nom::IResult<&[u8], NewConnectionIdFrame> {\n    let (remain, sequence) = be_varint(input)?;\n    let (remain, retire_prior_to) = be_varint(remain)?;\n    // The value in the Retire Prior To field MUST be less than or equal to the value in the\n    // Sequence Number field. Receiving a value in the Retire Prior To field that is greater\n    // than that in the Sequence Number field MUST be treated as a connection error of type\n    // FRAME_ENCODING_ERROR.\n    if retire_prior_to > sequence {\n        // TODO: information is lost here\n        return Err(nom::Err::Error(nom::error::make_error(\n            input,\n            nom::error::ErrorKind::Verify,\n        )));\n    }\n    let (remain, cid) = be_connection_id(remain)?;\n    if cid.is_empty() {\n        // TODO: information is lost here\n        return Err(nom::Err::Error(nom::error::make_error(\n            input,\n            nom::error::ErrorKind::Verify,\n        )));\n    }\n\n    let (remain, reset_token) = be_reset_token(remain)?;\n    Ok((\n        remain,\n        NewConnectionIdFrame {\n            sequence,\n            retire_prior_to,\n            id: cid,\n            reset_token,\n        },\n    ))\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<NewConnectionIdFrame> for T {\n    fn put_frame(&mut self, frame: &NewConnectionIdFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.sequence);\n        self.put_varint(&frame.retire_prior_to);\n        self.put_connection_id(&frame.id);\n        
self.put_slice(frame.reset_token.as_slice());\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use bytes::{BufMut, BytesMut};\n\n    use super::*;\n    use crate::frame::{\n        EncodeSize, FrameType, GetFrameType,\n        io::{WriteFrame, WriteFrameType},\n    };\n\n    #[test]\n    fn test_new_connection_id_frame() {\n        let new_cid_frame = NewConnectionIdFrame::new(\n            ConnectionId::from_slice(&[1, 2, 3, 4][..]),\n            VarInt::from_u32(1),\n            VarInt::from_u32(0),\n        );\n        assert_eq!(new_cid_frame.sequence(), 1);\n        assert_eq!(new_cid_frame.retire_prior_to(), 0);\n        assert_eq!(\n            new_cid_frame.id,\n            ConnectionId::from_slice(&[1, 2, 3, 4][..])\n        );\n\n        assert_eq!(new_cid_frame.frame_type(), FrameType::NewConnectionId);\n        assert_eq!(\n            new_cid_frame.max_encoding_size(),\n            1 + 8 + 8 + 21 + RESET_TOKEN_SIZE\n        );\n        assert_eq!(new_cid_frame.encoding_size(), 1 + 1 + 1 + 1 + 4 + 16);\n    }\n\n    #[test]\n    fn test_frame_parsing() {\n        let mut buf = BytesMut::new();\n        let original_cid = ConnectionId::from_slice(&[1, 2, 3, 4][..]);\n        let original_frame =\n            NewConnectionIdFrame::new(original_cid, VarInt::from_u32(1), VarInt::from_u32(0));\n\n        // Write frame to buffer\n        buf.put_frame(&original_frame);\n\n        // Skip frame type byte\n        let (_, parsed_frame) = be_new_connection_id_frame(&buf[1..]).unwrap();\n\n        assert_eq!(parsed_frame.sequence(), original_frame.sequence());\n        assert_eq!(\n            parsed_frame.retire_prior_to(),\n            original_frame.retire_prior_to()\n        );\n        assert_eq!(parsed_frame.connection_id(), original_frame.connection_id());\n        assert_eq!(parsed_frame.reset_token(), original_frame.reset_token());\n    }\n\n    #[test]\n    fn test_invalid_retire_prior_to() {\n        let mut buf = BytesMut::new();\n        
buf.put_frame_type(FrameType::NewConnectionId);\n        buf.put_varint(&VarInt::from_u32(1)); // sequence\n        buf.put_varint(&VarInt::from_u32(2)); // retire_prior_to > sequence\n\n        assert!(be_new_connection_id_frame(&buf[1..]).is_err());\n    }\n\n    #[test]\n    fn test_zero_length_connection_id() {\n        let mut buf = BytesMut::new();\n        buf.put_frame_type(FrameType::NewConnectionId);\n        buf.put_varint(&VarInt::from_u32(1));\n        buf.put_varint(&VarInt::from_u32(0));\n        buf.put_u8(0); // zero length CID\n\n        assert!(be_new_connection_id_frame(&buf[1..]).is_err());\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/new_token.rs",
    "content": "use derive_more::Deref;\n\nuse crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n/// NEW_TOKEN frame.\n///\n/// ```text\n/// NEW_TOKEN Frame {\n///   Type (i) = 0x07,\n///   Token Length (i),\n///   Token (..),\n/// }\n/// ```\n///\n/// See [NEW_TOKEN Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-new_token-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Deref, Debug, Clone, PartialEq, Eq)]\npub struct NewTokenFrame {\n    #[deref]\n    token: Vec<u8>,\n}\n\nimpl super::GetFrameType for NewTokenFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::NewToken\n    }\n}\n\nimpl super::EncodeSize for NewTokenFrame {\n    fn max_encoding_size(&self) -> usize {\n        // token's length could not exceed 20\n        1 + 1 + self.token.len()\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + 1 + self.token.len()\n    }\n}\n\nimpl NewTokenFrame {\n    /// Create a new [`NewTokenFrame`] with the given token.\n    pub fn new(token: Vec<u8>) -> Self {\n        Self { token }\n    }\n\n    /// Create a new [`NewTokenFrame`] from the given token slice.\n    pub fn from_slice(token: &[u8]) -> Self {\n        Self {\n            token: token.to_vec(),\n        }\n    }\n\n    /// Return the token of the frame.\n    pub fn token(&self) -> &[u8] {\n        &self.token\n    }\n}\n\n/// Parse a NEW_TOKEN frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_new_token_frame(input: &[u8]) -> nom::IResult<&[u8], NewTokenFrame> {\n    use nom::{\n        Parser,\n        bytes::streaming::take,\n        combinator::{flat_map, map},\n    };\n    flat_map(be_varint, |length| {\n        map(take(length.into_u64() as usize), NewTokenFrame::from_slice)\n    })\n    .parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<NewTokenFrame> for T {\n    fn put_frame(&mut self, 
frame: &NewTokenFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&VarInt::from_u32(frame.token.len() as u32));\n        self.put_slice(&frame.token);\n    }\n}\n#[cfg(test)]\nmod tests {\n    use crate::frame::{\n        EncodeSize, FrameType, GetFrameType,\n        io::{WriteFrame, WriteFrameType},\n    };\n\n    #[test]\n    fn test_new_token_frame() {\n        let frame = super::NewTokenFrame::new(vec![0x01, 0x02]);\n        assert_eq!(frame.frame_type(), FrameType::NewToken);\n        assert_eq!(frame.max_encoding_size(), 1 + 1 + 2);\n        assert_eq!(frame.encoding_size(), 1 + 1 + 2);\n    }\n\n    #[test]\n    fn test_read_new_token_frame() {\n        use super::be_new_token_frame;\n        let buf = vec![0x02, 0x01, 0x02];\n        let (input, frame) = be_new_token_frame(&buf).unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame.token, vec![0x01, 0x02]);\n    }\n\n    #[test]\n    fn test_write_new_token_frame() {\n        let mut buf = Vec::<u8>::new();\n        let frame = super::NewTokenFrame::from_slice(&[0x01, 0x02]);\n        buf.put_frame(&frame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::NewToken);\n        expected.extend_from_slice(&[0x02, 0x01, 0x02]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/padding.rs",
    "content": "use crate::frame::{GetFrameType, io::WriteFrameType};\n/// PADDING Frame.\n///\n/// ```text\n/// PADDING Frame {\n///   Type (i) = 0x00,\n/// }\n/// ```\n///\n/// See [PADDING Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-padding-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\npub struct PaddingFrame;\n\nimpl super::GetFrameType for PaddingFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::Padding\n    }\n}\n\nimpl super::EncodeSize for PaddingFrame {}\n\n/// Parse a PADDING frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n#[allow(dead_code)]\npub fn be_padding_frame(input: &[u8]) -> nom::IResult<&[u8], PaddingFrame> {\n    Ok((input, PaddingFrame))\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<PaddingFrame> for T {\n    fn put_frame(&mut self, frame: &PaddingFrame) {\n        self.put_frame_type(frame.frame_type());\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{PaddingFrame, be_padding_frame};\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_padding_frame() {\n        assert_eq!(PaddingFrame.frame_type(), FrameType::Padding);\n        assert_eq!(PaddingFrame.max_encoding_size(), 1);\n        assert_eq!(PaddingFrame.encoding_size(), 1);\n    }\n\n    #[test]\n    fn test_read_padding_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use crate::varint::be_varint;\n        let padding_frame_type = VarInt::from(FrameType::Padding);\n        let buf = vec![padding_frame_type.into_u64() as u8];\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == padding_frame_type {\n                be_padding_frame\n            } else {\n                
unreachable!(\"wrong frame type: {}\", frame_type)\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, PaddingFrame);\n    }\n\n    #[test]\n    fn test_write_padding_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&PaddingFrame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::Padding);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/path_challenge.rs",
    "content": "use derive_more::Deref;\nuse rand::RngExt;\n\nuse crate::frame::{GetFrameType, io::WriteFrameType};\n/// PATH_CHALLENGE frame.\n///\n/// ```text\n/// PATH_CHALLENGE Frame {\n///   Type (i) = 0x1a,\n///   Data (64),\n/// }\n/// ```\n///\n/// See [PATH_CHALLENGE Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-path_challenge-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Deref)]\npub struct PathChallengeFrame {\n    #[deref]\n    data: [u8; 8],\n}\n\nimpl PathChallengeFrame {\n    pub fn from_slice(data: &[u8]) -> Self {\n        let mut frame = Self { data: [0; 8] };\n        frame.data.copy_from_slice(data);\n        frame\n    }\n\n    pub fn random() -> Self {\n        let mut rng = rand::rng();\n        let mut data = [0; 8];\n        rng.fill(&mut data);\n        Self { data }\n    }\n}\n\nimpl super::GetFrameType for PathChallengeFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::PathChallenge\n    }\n}\n\nimpl super::EncodeSize for PathChallengeFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + self.data.len()\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.data.len()\n    }\n}\n\n/// Parse a PATH_CHALLENGE frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_path_challenge_frame(input: &[u8]) -> nom::IResult<&[u8], PathChallengeFrame> {\n    use nom::{Parser, bytes::streaming::take, combinator::map};\n    map(take(8usize), PathChallengeFrame::from_slice).parse(input)\n}\n\n// BufMut write extension for PATH_CHALLENGE_FRAME\nimpl<T: bytes::BufMut> super::io::WriteFrame<PathChallengeFrame> for T {\n    fn put_frame(&mut self, frame: &PathChallengeFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_slice(&frame.data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use nom::{Parser, combinator::flat_map};\n\n    use 
super::be_path_challenge_frame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::{VarInt, be_varint},\n    };\n    #[test]\n    fn test_path_challenge_frame() {\n        let frame = super::PathChallengeFrame::from_slice(&[\n            0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,\n        ]);\n        assert_eq!(frame.frame_type(), FrameType::PathChallenge);\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 8);\n    }\n\n    #[test]\n    fn test_read_path_challenge_frame() {\n        let path_challenge_frame_type = VarInt::from(FrameType::PathChallenge);\n        let mut buf = Vec::new();\n        buf.put_frame_type(FrameType::PathChallenge);\n        buf.extend_from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == path_challenge_frame_type {\n                be_path_challenge_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(\n            frame,\n            super::PathChallengeFrame {\n                data: [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]\n            }\n        );\n    }\n\n    #[test]\n    fn test_write_path_challenge_frame() {\n        let mut buf = Vec::new();\n        let frame = super::PathChallengeFrame::from_slice(&[\n            0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,\n        ]);\n        buf.put_frame(&frame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::PathChallenge);\n        expected.extend_from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/path_response.rs",
    "content": "use std::ops::Deref;\n\nuse derive_more::Deref;\n\nuse crate::frame::{GetFrameType, io::WriteFrameType};\n/// PATH_RESPONSE Frame.\n///\n/// ```text\n/// PATH_RESPONSE Frame {\n///   Type (i) = 0x1b,\n///   Data (64),\n/// }\n/// ```\n///\n/// See [PATH_RESPONSE Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-path_response-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, Deref, PartialEq, Eq)]\npub struct PathResponseFrame {\n    #[deref]\n    data: [u8; 8],\n}\n\nimpl PathResponseFrame {\n    fn from_slice(data: &[u8]) -> Self {\n        let mut frame = Self { data: [0; 8] };\n        frame.data.copy_from_slice(data);\n        frame\n    }\n}\n\n/// The only public way to create a PathResponseFrame is from a PathChallengeFrame\nimpl From<super::PathChallengeFrame> for PathResponseFrame {\n    fn from(challenge: super::PathChallengeFrame) -> Self {\n        Self::from_slice(challenge.deref())\n    }\n}\n\nimpl super::GetFrameType for PathResponseFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::PathResponse\n    }\n}\n\nimpl super::EncodeSize for PathResponseFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + self.data.len()\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.data.len()\n    }\n}\n\n/// Parse a PATH_RESPONSE frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_path_response_frame(input: &[u8]) -> nom::IResult<&[u8], PathResponseFrame> {\n    use nom::{Parser, bytes::complete::take, combinator::map};\n    map(take(8usize), PathResponseFrame::from_slice).parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<PathResponseFrame> for T {\n    fn put_frame(&mut self, frame: &PathResponseFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_slice(&frame.data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n   
 use crate::frame::{\n        EncodeSize, FrameType, GetFrameType,\n        io::{WriteFrame, WriteFrameType},\n    };\n\n    #[test]\n    fn test_path_response_frame() {\n        let frame =\n            PathResponseFrame::from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);\n        assert_eq!(frame.frame_type(), FrameType::PathResponse);\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 8);\n    }\n\n    #[test]\n    fn test_read_path_response_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use crate::{\n            frame::FrameType,\n            varint::{VarInt, be_varint},\n        };\n        let path_response_frame_type = VarInt::from(FrameType::PathResponse);\n        let mut buf = Vec::new();\n        buf.put_frame_type(FrameType::PathResponse);\n        buf.extend_from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == path_response_frame_type {\n                be_path_response_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(\n            frame,\n            PathResponseFrame::from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08])\n        );\n    }\n\n    #[test]\n    fn test_write_path_response_frame() {\n        let mut buf = Vec::<u8>::new();\n        let frame =\n            PathResponseFrame::from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);\n        buf.put_frame(&frame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::PathResponse);\n        expected.extend_from_slice(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/ping.rs",
    "content": "use crate::frame::{GetFrameType, io::WriteFrameType};\n/// PING Frame.\n///\n/// ```text\n/// PING Frame {\n///   Type (i) = 0x01,\n/// }\n/// ```\n///\n/// See [PING Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-ping-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\npub struct PingFrame;\n\nimpl super::GetFrameType for PingFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::Ping\n    }\n}\n\nimpl super::EncodeSize for PingFrame {}\n\n/// Parse a PING frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n#[allow(unused)]\npub fn be_ping_frame(input: &[u8]) -> nom::IResult<&[u8], PingFrame> {\n    Ok((input, PingFrame))\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<PingFrame> for T {\n    fn put_frame(&mut self, frame: &PingFrame) {\n        self.put_frame_type(frame.frame_type());\n    }\n}\n#[cfg(test)]\nmod tests {\n    use super::PingFrame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_ping_frame() {\n        assert_eq!(PingFrame.frame_type(), FrameType::Ping);\n        assert_eq!(PingFrame.max_encoding_size(), 1);\n        assert_eq!(PingFrame.encoding_size(), 1);\n    }\n\n    #[test]\n    fn test_read_ping_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use super::be_ping_frame;\n        use crate::varint::be_varint;\n        let ping_frame_type = VarInt::from(FrameType::Ping);\n        let buf = vec![ping_frame_type.into_u64() as u8];\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == ping_frame_type {\n                be_ping_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        
.parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, PingFrame);\n    }\n\n    #[test]\n    fn test_write_ping_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&PingFrame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::Ping);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/punch_done.rs",
    "content": "use super::{\n    EncodeSize, GetFrameType,\n    io::{WriteFrame, WriteFrameType},\n};\nuse crate::{\n    frame::PunchHelloFrame,\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// PUNCH_Done Frame {\n///     Type (i) = 0x3d7e96,\n///     Local Sequence Number (i),\n///     Remote Sequence Number (i),\n///     Probe Identifier (i),\n/// }\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct PunchDoneFrame {\n    local_seq: VarInt,\n    remote_seq: VarInt,\n    probe_id: VarInt,\n}\n\nimpl PunchDoneFrame {\n    pub fn new(local_seq: u32, remote_seq: u32, probe_id: u32) -> Self {\n        Self {\n            local_seq: VarInt::from_u32(local_seq),\n            remote_seq: VarInt::from_u32(remote_seq),\n            probe_id: VarInt::from_u32(probe_id),\n        }\n    }\n\n    pub fn local_seq(&self) -> u32 {\n        self.local_seq.into_u64() as u32\n    }\n\n    pub fn remote_seq(&self) -> u32 {\n        self.remote_seq.into_u64() as u32\n    }\n\n    pub fn probe_id(&self) -> u32 {\n        self.probe_id.into_u64() as u32\n    }\n\n    /// Construct a PunchDone responding to a received PunchHello,\n    /// automatically swapping local/remote seq to reflect our perspective.\n    pub fn respond_to(hello: &PunchHelloFrame) -> Self {\n        Self::new(hello.remote_seq(), hello.local_seq(), hello.probe_id())\n    }\n}\n\nimpl GetFrameType for PunchDoneFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::PunchDone\n    }\n}\n\nimpl EncodeSize for PunchDoneFrame {\n    fn max_encoding_size(&self) -> usize {\n        4 + 8 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        VarInt::from(self.frame_type()).encoding_size()\n            + self.local_seq.encoding_size()\n            + self.remote_seq.encoding_size()\n            + self.probe_id.encoding_size()\n    }\n}\n\nimpl<T: bytes::BufMut> WriteFrame<PunchDoneFrame> for T {\n    fn put_frame(&mut self, frame: &PunchDoneFrame) {\n        
self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.local_seq);\n        self.put_varint(&frame.remote_seq);\n        self.put_varint(&frame.probe_id);\n    }\n}\n\npub(crate) fn be_punch_done_frame(input: &[u8]) -> nom::IResult<&[u8], PunchDoneFrame> {\n    let (input, local_seq) = be_varint(input)?;\n    let (input, remote_seq) = be_varint(input)?;\n    let (input, probe_id) = be_varint(input)?;\n    Ok((\n        input,\n        PunchDoneFrame {\n            local_seq,\n            remote_seq,\n            probe_id,\n        },\n    ))\n}\n"
  },
  {
    "path": "qbase/src/frame/punch_hello.rs",
    "content": "use super::{\n    EncodeSize, GetFrameType,\n    io::{WriteFrame, WriteFrameType},\n};\nuse crate::varint::{VarInt, WriteVarInt, be_varint};\n\n/// PUNCH_Hello Frame {\n///     Type (i) = 0x3d7e95,\n///     Local Sequence Number (i),\n///     Remote Sequence Number (i),\n///     Probe Identifier (i),\n/// }\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct PunchHelloFrame {\n    local_seq: VarInt,\n    remote_seq: VarInt,\n    probe_id: VarInt,\n}\n\nimpl PunchHelloFrame {\n    pub fn new(local_seq: u32, remote_seq: u32, probe_id: u32) -> Self {\n        Self {\n            local_seq: VarInt::from_u32(local_seq),\n            remote_seq: VarInt::from_u32(remote_seq),\n            probe_id: VarInt::from_u32(probe_id),\n        }\n    }\n\n    pub fn local_seq(&self) -> u32 {\n        self.local_seq.into_u64() as u32\n    }\n\n    pub fn remote_seq(&self) -> u32 {\n        self.remote_seq.into_u64() as u32\n    }\n\n    pub fn probe_id(&self) -> u32 {\n        self.probe_id.into_u64() as u32\n    }\n}\n\nimpl GetFrameType for PunchHelloFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::PunchHello\n    }\n}\n\nimpl EncodeSize for PunchHelloFrame {\n    fn max_encoding_size(&self) -> usize {\n        4 + 8 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        VarInt::from(self.frame_type()).encoding_size()\n            + self.local_seq.encoding_size()\n            + self.remote_seq.encoding_size()\n            + self.probe_id.encoding_size()\n    }\n}\n\nimpl<T: bytes::BufMut> WriteFrame<PunchHelloFrame> for T {\n    fn put_frame(&mut self, frame: &PunchHelloFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.local_seq);\n        self.put_varint(&frame.remote_seq);\n        self.put_varint(&frame.probe_id);\n    }\n}\n\npub(crate) fn be_punch_hello_frame(input: &[u8]) -> nom::IResult<&[u8], PunchHelloFrame> {\n    let (input, local_seq) = be_varint(input)?;\n    
let (input, remote_seq) = be_varint(input)?;\n    let (input, probe_id) = be_varint(input)?;\n    Ok((\n        input,\n        PunchHelloFrame {\n            local_seq,\n            remote_seq,\n            probe_id,\n        },\n    ))\n}\n"
  },
  {
    "path": "qbase/src/frame/punch_me_now.rs",
    "content": "use std::net::SocketAddr;\n\nuse derive_more::Deref;\n\nuse super::{\n    EncodeSize, GetFrameType,\n    io::{WriteFrame, WriteFrameType},\n};\nuse crate::{\n    net::{AddrFamily, Family, NatType, be_socket_addr},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// PUNCH_ME_NOW Frame\n///\n///```text\n/// PUNCH_ME_NOW Frame {\n///     Type (i) = 0x3d7e92,0x3d7e93\n///     Local Sequence Number (i),\n///     Remote Sequence Number (i),\n///     [ IPv4 (32) ],\n///     [ IPv6 (128) ],\n///     Port (16),\n///     Tire (i),\n///     Nat type (i),\n/// }\n/// ```\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Deref)]\npub struct PunchMeNowFrame {\n    local_seq: VarInt,\n    remote_seq: VarInt,\n    #[deref]\n    address: SocketAddr,\n    tire: VarInt,\n    nat_type: NatType,\n}\n\npub(crate) fn be_punch_me_now_frame(\n    family: Family,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], PunchMeNowFrame> {\n    move |input| {\n        let (remain, local_seq) = be_varint(input)?;\n        let (remain, remote_seq) = be_varint(remain)?;\n        let (remain, address) = be_socket_addr(remain, family)?;\n        let (remain, tire) = be_varint(remain)?;\n        let (remain, nat_type) = be_varint(remain)?;\n        let nat_type = NatType::try_from(nat_type).map_err(|_| {\n            nom::Err::Error(nom::error::Error::new(\n                remain,\n                nom::error::ErrorKind::Verify,\n            ))\n        })?;\n        Ok((\n            remain,\n            PunchMeNowFrame {\n                local_seq,\n                remote_seq,\n                address,\n                tire,\n                nat_type,\n            },\n        ))\n    }\n}\n\nimpl GetFrameType for PunchMeNowFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::PunchMeNow(self.address.family())\n    }\n}\n\nimpl EncodeSize for PunchMeNowFrame {\n    fn max_encoding_size(&self) -> usize {\n        4 + 8 + 8 + self.address.max_encoding_size() + 8 + 8\n 
   }\n\n    fn encoding_size(&self) -> usize {\n        VarInt::from(self.frame_type()).encoding_size()\n            + self.local_seq.encoding_size()\n            + self.remote_seq.encoding_size()\n            + self.address.encoding_size()\n            + self.tire.encoding_size()\n            + VarInt::from(self.nat_type).encoding_size()\n    }\n}\n\nimpl PunchMeNowFrame {\n    pub fn new(\n        local_seq: u32,\n        remote_seq: u32,\n        address: SocketAddr,\n        tire: u32,\n        nat_type: NatType,\n    ) -> Self {\n        Self {\n            local_seq: VarInt::from_u32(local_seq),\n            remote_seq: VarInt::from_u32(remote_seq),\n            address,\n            tire: VarInt::from_u32(tire),\n            nat_type,\n        }\n    }\n\n    pub fn local_seq(&self) -> u32 {\n        self.local_seq.into_u64() as u32\n    }\n\n    pub fn remote_seq(&self) -> u32 {\n        self.remote_seq.into_u64() as u32\n    }\n\n    pub fn nat_type(&self) -> NatType {\n        self.nat_type\n    }\n\n    pub fn set_addr(&mut self, addr: SocketAddr) {\n        self.address = addr;\n    }\n\n    pub fn address(&self) -> SocketAddr {\n        self.address\n    }\n\n    pub fn tire(&self) -> u32 {\n        self.tire.into_u64() as u32\n    }\n}\n\nimpl<T: bytes::BufMut> WriteFrame<PunchMeNowFrame> for T {\n    fn put_frame(&mut self, frame: &PunchMeNowFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.local_seq);\n        self.put_varint(&frame.remote_seq);\n        self.put_u16(frame.address.port());\n        match frame.address.ip() {\n            std::net::IpAddr::V4(ipv4) => self.put_slice(&ipv4.octets()),\n            std::net::IpAddr::V6(ipv6) => self.put_slice(&ipv6.octets()),\n        }\n        self.put_varint(&frame.tire);\n        self.put_varint(&VarInt::from(frame.nat_type));\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use bytes::BytesMut;\n\n    use super::*;\n    use crate::frame::{GetFrameType, 
be_frame_type, io::WriteFrame};\n\n    #[test]\n    fn test_punch_me_now_frame() {\n        let frame = PunchMeNowFrame {\n            local_seq: VarInt::from_u32(1),\n            remote_seq: VarInt::from_u32(2),\n            address: \"127.0.0.1:12345\".parse().unwrap(),\n            tire: VarInt::from_u32(0x01u32),\n            nat_type: NatType::FullCone,\n        };\n        let mut buf = BytesMut::with_capacity(frame.max_encoding_size());\n        buf.put_frame(&frame);\n        let (remain, frame_type) = be_frame_type(&buf).unwrap();\n        assert_eq!(frame_type, frame.frame_type());\n        let frame2 = be_punch_me_now_frame(Family::V4)(remain).unwrap().1;\n        assert_eq!(frame, frame2);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/remove_address.rs",
    "content": "use derive_more::Deref;\n\nuse super::{\n    EncodeSize, GetFrameType,\n    io::{WriteFrame, WriteFrameType},\n};\nuse crate::varint::{VarInt, WriteVarInt, be_varint};\n\n/// REMOVE_ADDRESS Frame {\n///     Type (i) = 0x3d7e94,\n///     Sequence Number (i),\n/// }\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Deref)]\npub struct RemoveAddressFrame {\n    #[deref]\n    pub seq_num: VarInt,\n}\n\npub(crate) fn be_remove_address_frame(input: &[u8]) -> nom::IResult<&[u8], RemoveAddressFrame> {\n    let (input, sequence_number) = be_varint(input)?;\n    Ok((\n        input,\n        RemoveAddressFrame {\n            seq_num: sequence_number,\n        },\n    ))\n}\n\nimpl GetFrameType for RemoveAddressFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::RemoveAddress\n    }\n}\n\nimpl EncodeSize for RemoveAddressFrame {\n    fn max_encoding_size(&self) -> usize {\n        4 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        VarInt::from(self.frame_type()).encoding_size() + self.seq_num.encoding_size()\n    }\n}\n\nimpl<T: bytes::BufMut> WriteFrame<RemoveAddressFrame> for T {\n    fn put_frame(&mut self, frame: &RemoveAddressFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.seq_num);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use bytes::BytesMut;\n\n    use super::*;\n    use crate::frame::{GetFrameType, be_frame_type, io::WriteFrame};\n\n    #[test]\n    fn test_remove_address_frame() {\n        let frame = RemoveAddressFrame {\n            seq_num: VarInt::from_u32(0x1234),\n        };\n\n        assert_eq!(frame.max_encoding_size(), 12);\n        assert_eq!(frame.encoding_size(), 6);\n\n        let mut buf = BytesMut::new();\n        buf.put_frame(&frame);\n\n        let (remain, frame_type) = be_frame_type(&buf).unwrap();\n        assert_eq!(frame_type, frame.frame_type());\n        let frame2 = be_remove_address_frame(remain).unwrap().1;\n        assert_eq!(frame, 
frame2);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/reset_stream.rs",
    "content": "use thiserror::Error;\n\nuse crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    sid::{StreamId, WriteStreamId, be_streamid},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// RESET_STREAM frame.\n///\n/// ```text\n/// RESET_STREAM Frame {\n///   Type (i) = 0x04,\n///   Stream ID (i),\n///   Application Protocol Error Code (i),\n///   Final Size (i),\n/// }\n/// ```\n///\n/// See [RESET_STREAM Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct ResetStreamFrame {\n    stream_id: StreamId,\n    app_error_code: VarInt,\n    final_size: VarInt,\n}\n\nimpl super::GetFrameType for ResetStreamFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::ResetStream\n    }\n}\n\nimpl super::EncodeSize for ResetStreamFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.stream_id.encoding_size()\n            + self.app_error_code.encoding_size()\n            + self.final_size.encoding_size()\n    }\n}\n\nimpl ResetStreamFrame {\n    /// Create a new [`ResetStreamFrame`].\n    pub fn new(stream_id: StreamId, app_error_code: VarInt, final_size: VarInt) -> Self {\n        Self {\n            stream_id,\n            app_error_code,\n            final_size,\n        }\n    }\n\n    /// Return the stream ID of the frame.\n    pub fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    /// Return the application protocol error code of the frame.\n    pub fn app_error_code(&self) -> u64 {\n        self.app_error_code.into_u64()\n    }\n\n    /// Return the final size of the frame.\n    pub fn final_size(&self) -> u64 {\n        self.final_size.into_u64()\n    }\n}\n\n/// Parse a RESET_STREAM frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) 
parser style.\npub fn be_reset_stream_frame(input: &[u8]) -> nom::IResult<&[u8], ResetStreamFrame> {\n    use nom::{Parser, combinator::map};\n    map(\n        (be_streamid, be_varint, be_varint),\n        |(stream_id, app_error_code, final_size)| ResetStreamFrame {\n            stream_id,\n            app_error_code,\n            final_size,\n        },\n    )\n    .parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<ResetStreamFrame> for T {\n    fn put_frame(&mut self, frame: &ResetStreamFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_streamid(&frame.stream_id);\n        self.put_varint(&frame.app_error_code);\n        self.put_varint(&frame.final_size);\n    }\n}\n\n#[derive(Clone, Copy, Debug, Error, PartialEq, Eq)]\n#[error(\"The stream was reset with app error code: {app_error_code}, final size: {final_size}\")]\npub struct ResetStreamError {\n    app_error_code: VarInt,\n    final_size: VarInt,\n}\n\nimpl ResetStreamError {\n    pub fn new(app_error_code: VarInt, final_size: VarInt) -> Self {\n        Self {\n            app_error_code,\n            final_size,\n        }\n    }\n\n    pub fn error_code(&self) -> u64 {\n        self.app_error_code.into_u64()\n    }\n\n    pub fn combine(self, sid: StreamId) -> ResetStreamFrame {\n        ResetStreamFrame {\n            stream_id: sid,\n            app_error_code: self.app_error_code,\n            final_size: self.final_size,\n        }\n    }\n}\n\nimpl From<&ResetStreamFrame> for ResetStreamError {\n    fn from(frame: &ResetStreamFrame) -> Self {\n        Self {\n            app_error_code: frame.app_error_code,\n            final_size: frame.final_size,\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use nom::{Parser, combinator::flat_map};\n\n    use super::{ResetStreamError, ResetStreamFrame};\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        
varint::{VarInt, be_varint},\n    };\n\n    #[test]\n    fn test_reset_stream_frame() {\n        let frame = ResetStreamFrame::new(\n            VarInt::from_u32(0x1234).into(),\n            VarInt::from_u32(0x5678),\n            VarInt::from_u32(0x9abc),\n        );\n        assert_eq!(frame.frame_type(), FrameType::ResetStream);\n        assert_eq!(frame.max_encoding_size(), 1 + 8 + 8 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2 + 4 + 4);\n        assert_eq!(frame.stream_id(), VarInt::from_u32(0x1234).into());\n        assert_eq!(frame.app_error_code(), 0x5678);\n        assert_eq!(frame.final_size(), 0x9abc);\n\n        let reset_stream_error: ResetStreamError = (&frame).into();\n        assert_eq!(\n            reset_stream_error,\n            ResetStreamError::new(VarInt::from_u32(0x5678), VarInt::from_u32(0x9abc))\n        );\n    }\n\n    #[test]\n    fn test_read_reset_stream_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame_type(FrameType::ResetStream);\n        buf.extend_from_slice(&[0x52, 0x34, 0x80, 0, 0x56, 0x78, 0x80, 0, 0x9a, 0xbc]);\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == VarInt::from(FrameType::ResetStream) {\n                super::be_reset_stream_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(\n            frame,\n            ResetStreamFrame::new(\n                VarInt::from_u32(0x1234).into(),\n                VarInt::from_u32(0x5678),\n                VarInt::from_u32(0x9abc),\n            )\n        );\n    }\n\n    #[test]\n    fn test_write_reset_stream_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&ResetStreamFrame::new(\n            VarInt::from_u32(0x1234).into(),\n            // 0x5678 = 0b01010110 01111000 => 0b10000000 0x00 0x56 0x78\n            
VarInt::from_u32(0x5678),\n            // 0x9abc = 0b10011010 10111100 => 0b10000000 0x00 0x9a 0xbc\n            VarInt::from_u32(0x9abc),\n        ));\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::ResetStream);\n        expected.extend_from_slice(&[0x52, 0x34, 0x80, 0, 0x56, 0x78, 0x80, 0, 0x9a, 0xbc]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/retire_connection_id.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// RETIRE_CONNECTION_ID frame.\n///\n/// ```text\n/// RETIRE_CONNECTION_ID Frame {\n///   Type (i) = 0x19,\n///   Sequence Number (i),\n/// }\n/// ```\n///\n/// See [RETIRE_CONNECTION_ID Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-retire_connection_id-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct RetireConnectionIdFrame {\n    sequence: VarInt,\n}\n\nimpl super::GetFrameType for RetireConnectionIdFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::RetireConnectionId\n    }\n}\n\nimpl super::EncodeSize for RetireConnectionIdFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.sequence.encoding_size()\n    }\n}\n\nimpl RetireConnectionIdFrame {\n    /// Create a new [`RetireConnectionIdFrame`].\n    pub fn new(sequence: VarInt) -> Self {\n        Self { sequence }\n    }\n\n    /// Return the sequence number of the frame.\n    pub fn sequence(&self) -> u64 {\n        self.sequence.into_u64()\n    }\n}\n\n/// Parse a RETIRE_CONNECTION_ID frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_retire_connection_id_frame(input: &[u8]) -> nom::IResult<&[u8], RetireConnectionIdFrame> {\n    use nom::{Parser, combinator::map};\n    map(be_varint, RetireConnectionIdFrame::new).parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<RetireConnectionIdFrame> for T {\n    fn put_frame(&mut self, frame: &RetireConnectionIdFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_varint(&frame.sequence);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{RetireConnectionIdFrame, be_retire_connection_id_frame};\n    use crate::{\n        frame::{\n            
EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_retire_connection_id_frame() {\n        let frame = RetireConnectionIdFrame::new(VarInt::from_u32(0x1234));\n        assert_eq!(frame.frame_type(), FrameType::RetireConnectionId);\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n        assert_eq!(frame.sequence(), 0x1234);\n    }\n\n    #[test]\n    fn test_read_retire_connection_id_frame() {\n        let buf = vec![0x52, 0x34];\n        let (remain, frame) = be_retire_connection_id_frame(&buf).unwrap();\n        assert!(remain.is_empty());\n        assert_eq!(\n            frame,\n            RetireConnectionIdFrame::new(VarInt::from_u32(0x1234))\n        );\n    }\n\n    #[test]\n    fn test_write_retire_connection_id_frame() {\n        let mut buf = Vec::new();\n        let frame = RetireConnectionIdFrame::new(VarInt::from_u32(0x1234));\n        buf.put_frame(&frame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::RetireConnectionId);\n        expected.extend_from_slice(&[0x52, 0x34]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/stop_sending.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    sid::{StreamId, WriteStreamId, be_streamid},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// STOP_SENDING frame.\n///\n/// ```text\n/// STOP_SENDING Frame {\n///   Type (i) = 0x05,\n///   Stream ID (i),\n///   Application Protocol Error Code (i),\n/// }\n/// ```\n///\n/// See [STOP_SENDING Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-stop_sending-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct StopSendingFrame {\n    stream_id: StreamId,\n    app_err_code: VarInt,\n}\n\nimpl StopSendingFrame {\n    /// Create a new [`StopSendingFrame`].\n    pub fn new(stream_id: StreamId, app_err_code: VarInt) -> Self {\n        Self {\n            stream_id,\n            app_err_code,\n        }\n    }\n\n    /// Return the stream ID of the frame.\n    pub fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    /// Return the application protocol error code of the frame.\n    pub fn app_err_code(&self) -> u64 {\n        self.app_err_code.into_u64()\n    }\n\n    /// Compose a RESET_STREAM frame from the STOP_SENDING frame with the given final size.\n    pub fn reset_stream(&self, final_size: VarInt) -> super::ResetStreamFrame {\n        super::ResetStreamFrame::new(self.stream_id, self.app_err_code, final_size)\n    }\n}\n\nimpl super::GetFrameType for StopSendingFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::StopSending\n    }\n}\n\nimpl super::EncodeSize for StopSendingFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.stream_id.encoding_size() + self.app_err_code.encoding_size()\n    }\n}\n\n/// Parse a STOP_SENDING frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_stop_sending_frame(input: &[u8]) 
-> nom::IResult<&[u8], StopSendingFrame> {\n    use nom::{Parser, combinator::map};\n    map((be_streamid, be_varint), |(stream_id, app_err_code)| {\n        StopSendingFrame {\n            stream_id,\n            app_err_code,\n        }\n    })\n    .parse(input)\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<StopSendingFrame> for T {\n    fn put_frame(&mut self, frame: &StopSendingFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_streamid(&frame.stream_id);\n        self.put_varint(&frame.app_err_code);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{StopSendingFrame, be_stop_sending_frame};\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::{VarInt, be_varint},\n    };\n\n    #[test]\n    fn test_stop_sending_frame() {\n        let frame =\n            StopSendingFrame::new(VarInt::from_u32(0x1234).into(), VarInt::from_u32(0x5678));\n        assert_eq!(frame.stream_id(), VarInt::from_u32(0x1234).into());\n        assert_eq!(frame.app_err_code(), 0x5678);\n        assert_eq!(frame.frame_type(), FrameType::StopSending);\n        assert_eq!(frame.max_encoding_size(), 1 + 8 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2 + 4);\n    }\n\n    #[test]\n    fn test_parse_stop_sending_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        let frame =\n            StopSendingFrame::new(VarInt::from_u32(0x1234).into(), VarInt::from_u32(0x5678));\n        let mut buf = Vec::new();\n        buf.put_frame(&frame);\n        let stop_sending_frame_type = VarInt::from(FrameType::StopSending);\n        let (input, parsed) = flat_map(be_varint, |frame_type| {\n            if frame_type == stop_sending_frame_type {\n                be_stop_sending_frame\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n 
       assert!(input.is_empty());\n        assert_eq!(parsed, frame);\n    }\n\n    #[test]\n    fn test_write_stop_sending_frame() {\n        let mut buf = Vec::new();\n        let frame = StopSendingFrame {\n            stream_id: VarInt::from_u32(0x1234).into(),\n            app_err_code: VarInt::from_u32(0x5678),\n        };\n        buf.put_frame(&frame);\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::StopSending);\n        expected.extend_from_slice(&[0x52, 0x34, 0x80, 0, 0x56, 0x78]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/stream.rs",
    "content": "use std::ops::Range;\n\nuse super::GetFrameType;\nuse crate::{\n    frame::EncodeSize,\n    sid::{StreamId, WriteStreamId, be_streamid},\n    util::{ContinuousData, WriteData},\n    varint::{VARINT_MAX, VarInt, WriteVarInt, be_varint},\n};\n\n/// Offset flag for STREAM frames\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum Offset {\n    /// Offset field is zero (not present in frame)\n    Zero,\n    /// Offset field is non-zero (present in frame)\n    NonZero,\n}\n\nimpl From<Offset> for u8 {\n    fn from(offset: Offset) -> u8 {\n        match offset {\n            Offset::Zero => 0,\n            Offset::NonZero => 0x04,\n        }\n    }\n}\n\nimpl From<u64> for Offset {\n    fn from(value: u64) -> Self {\n        match value & 0x04 {\n            0 => Offset::Zero,\n            _ => Offset::NonZero,\n        }\n    }\n}\n\n/// Length flag for STREAM frames\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum Len {\n    /// Length field is present\n    Explicit,\n    /// Length field is omitted (extends to end of packet)\n    Omit,\n}\n\nimpl From<Len> for u8 {\n    fn from(length: Len) -> u8 {\n        match length {\n            Len::Explicit => 0x02,\n            Len::Omit => 0,\n        }\n    }\n}\n\nimpl From<u64> for Len {\n    fn from(value: u64) -> Self {\n        match value & 0x02 {\n            0 => Len::Omit,\n            _ => Len::Explicit,\n        }\n    }\n}\n\n/// Fin flag for STREAM frames\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum Fin {\n    /// Stream is finished\n    Yes,\n    /// Stream is not finished\n    No,\n}\n\nimpl From<Fin> for u8 {\n    fn from(fin: Fin) -> u8 {\n        match fin {\n            Fin::Yes => 0x01,\n            Fin::No => 0,\n        }\n    }\n}\n\nimpl From<u64> for Fin {\n    fn from(value: u64) -> Self {\n        match value & 0x01 {\n            0 => Fin::No,\n            _ => Fin::Yes,\n        }\n    }\n}\n\n/// STREAM frame.\n///\n/// ```text\n/// STREAM Frame {\n///   
Type (i) = 0x08..0x0f,\n///   Stream ID (i),\n///   [Offset (i)],\n///   [Length (i)],\n///   Stream Data (..),\n/// }\n/// ```\n///\n
/// The lower 3 bits of the frame type are used to indicate the presence of the following fields:\n/// - OFF bit: 0x04\n/// - LEN bit: 0x02\n/// - FIN bit: 0x01\n///\n
/// See [STREAM Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-stream-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n
#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct StreamFrame {\n    id: StreamId,\n    offset: VarInt,\n    length: usize,\n    len_bit: Len,\n    fin_bit: Fin,\n}\n\n
pub const STREAM_FRAME_MAX_ENCODING_SIZE: usize = 1 + 8 + 8 + 8;\n\n
impl GetFrameType for StreamFrame {\n    fn frame_type(&self) -> super::FrameType {\n        let offset = if self.offset == 0 {\n            Offset::Zero\n        } else {\n            Offset::NonZero\n        };\n        super::FrameType::Stream(offset, self.len_bit, self.fin_bit)\n    }\n}\n\n
impl super::EncodeSize for StreamFrame {\n    fn max_encoding_size(&self) -> usize {\n        STREAM_FRAME_MAX_ENCODING_SIZE\n    }\n\n
    fn encoding_size(&self) -> usize {\n        1 + self.id.encoding_size()\n            + if self.offset.into_u64() != 0 {\n                self.offset.encoding_size()\n            } else {\n                0\n            }\n            + if self.len_bit == Len::Explicit {\n                VarInt::from_u64(self.length as u64)\n                    .expect(\"length of stream frame must be less than 2^62\")\n                    .encoding_size()\n            } else {\n                0\n            }\n    }\n}\n\n
/// Efficient strategies for encoding stream frames\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct EncodingStrategy {\n    len_bit: Len,\n    pre_padding: usize,\n}\n\n
impl EncodingStrategy {\n    /// Whether the stream frame should carry its data's length.\n    pub fn len_bit(&self) -> Len {\n        self.len_bit\n    }\n\n
    /// How many padding frames should be put before the stream frame.\n    pub fn pre_padding(&self) -> usize {\n        self.pre_padding\n    }\n}\n\n
impl StreamFrame {\n    /// Create a new stream frame with the given stream id, offset, and length.\n    pub fn new(id: StreamId, offset: u64, length: usize) -> Self {\n        assert!(offset <= VARINT_MAX);\n        Self {\n            id,\n            offset: VarInt::from_u64(offset)\n                .expect(\"offset of stream frame must be less than 2^62\"),\n            length,\n            len_bit: Len::Omit,\n            fin_bit: Fin::No,\n        }\n    }\n\n
    /// Return the stream id of this stream frame.\n    pub fn stream_id(&self) -> StreamId {\n        self.id\n    }\n\n
    /// Return whether this stream frame is the end of the stream.\n    pub fn is_fin(&self) -> bool {\n        self.fin_bit == Fin::Yes\n    }\n\n
    /// Return the offset of this stream frame.\n    pub fn offset(&self) -> u64 {\n        self.offset.into_u64()\n    }\n\n
    /// Return the length of this stream frame.\n    pub fn len(&self) -> usize {\n        self.length\n    }\n\n
    /// Return whether this stream frame is empty.\n    pub fn is_empty(&self) -> bool {\n        self.length == 0\n    }\n\n
    /// Return the range covered by this stream frame.\n    pub fn range(&self) -> Range<u64> {\n        self.offset.into_u64()..self.offset.into_u64() + self.length as u64\n    }\n\n
    /// Set the end of stream flag of this stream frame.\n    pub fn set_eos_flag(&mut self, is_eos: bool) {\n        if is_eos {\n            self.fin_bit = Fin::Yes;\n        } else {\n            self.fin_bit = Fin::No;\n        }\n    }\n\n
    /// Set the length bit of this stream frame.\n    pub fn set_len_bit(&mut self, len_bit: Len) {\n        self.len_bit = len_bit;\n    }\n\n
    /// Returns the most efficient stream frame encoding strategy.\n    ///\n    /// By default, a stream frame is considered the last frame within a data packet,\n    /// allowing it to carry data up to the maximum payload capacity. However, if the\n    /// data does not fill the entire frame and there is sufficient space remaining\n    /// in the packet, other data frames can be carried after it. In this case, the\n    /// frame carries an explicit length. However, when a stream frame with a length\n    /// is put into the data packet, the remaining space may be too small to put another\n    /// stream frame. Filling the remaining space is sometimes more beneficial for taking\n    /// advantage of GSO features.\n
    pub fn encoding_strategy(&self, capacity: usize) -> EncodingStrategy {\n        // this method is used to determine the encoding strategy of the stream frame\n        debug_assert_eq!(self.len_bit, Len::Omit);\n\n        let encoding_size_without_length = self.encoding_size() + self.length;\n        assert!(encoding_size_without_length <= capacity);\n\n
        let len_encoding_size = VarInt::try_from(self.length)\n            .expect(\"length of stream frame must be less than 2^62\")\n            .encoding_size();\n\n
        let remaining = capacity - encoding_size_without_length;\n        if remaining >= len_encoding_size {\n            let remaining = remaining - len_encoding_size;\n            // TODO: This doesn't make sense: STREAM_FRAME_MAX_ENCODING_SIZE is 25 bytes,\n            // but the minimum stream frame can be as small as 3 bytes\n            // (stream id less than 64, offset 0, and no length field)\n            if remaining < STREAM_FRAME_MAX_ENCODING_SIZE {\n                EncodingStrategy {\n                    len_bit: Len::Explicit,\n                    pre_padding: remaining,\n                }\n            } else {\n                EncodingStrategy {\n                    len_bit: Len::Explicit,\n                    pre_padding: 0,\n                }\n            }\n        } else {\n            EncodingStrategy {\n                len_bit: Len::Omit,\n                pre_padding: remaining,\n            }\n        }\n    
}\n\n    /// Estimate the maximum capacity that one stream frame with the given capacity,\n    /// stream id, and offset can carry.\n    pub fn estimate_max_capacity(capacity: usize, sid: StreamId, offset: u64) -> Option<usize> {\n        assert!(offset <= VARINT_MAX);\n        let mut least = 1 + sid.encoding_size();\n        if offset != 0 {\n            least += VarInt::from_u64(offset).unwrap().encoding_size();\n        }\n        if capacity <= least {\n            None\n        } else {\n            Some(capacity - least)\n        }\n    }\n}\n\n/// Return a parser for a stream frame with the given flag,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn stream_frame_with_flag(\n    offset: Offset,\n    len: Len,\n    fin: Fin,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], StreamFrame> {\n    move |input| {\n        let (remain, id) = be_streamid(input)?;\n        let (remain, offset) = if offset == Offset::NonZero {\n            be_varint(remain)?\n        } else {\n            (remain, VarInt::default())\n        };\n        let (remain, length) = if len == Len::Explicit {\n            let (remain, length) = be_varint(remain)?;\n            (remain, length.into_u64() as usize)\n        } else {\n            (remain, remain.len())\n        };\n        if offset.into_u64() + length as u64 > VARINT_MAX {\n            return Err(nom::Err::Error(nom::error::make_error(\n                input,\n                nom::error::ErrorKind::TooLarge,\n            )));\n        }\n        Ok((\n            remain,\n            StreamFrame {\n                id,\n                offset,\n                length,\n                len_bit: len,\n                fin_bit: fin,\n            },\n        ))\n    }\n}\n\nimpl<T, D> super::io::WriteDataFrame<StreamFrame, D> for T\nwhere\n    T: bytes::BufMut + WriteData<D>,\n    D: ContinuousData,\n{\n    fn put_data_frame(&mut self, frame: &StreamFrame, data: &D) {\n        use crate::frame::io::WriteFrameType;\n       
 self.put_frame_type(frame.frame_type());\n        self.put_streamid(&frame.id);\n        if frame.offset.into_u64() != 0 {\n            self.put_varint(&frame.offset);\n        }\n        if frame.len_bit == Len::Explicit {\n            // Generally, a data frame will not exceed 4GB.\n            self.put_varint(&VarInt::from_u32(frame.length as u32));\n        }\n        self.put_data(data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use bytes::Bytes;\n    use nom::{Parser, combinator::flat_map};\n\n    use super::*;\n    use crate::{\n        frame::{EncodeSize, FrameType, GetFrameType, io::WriteDataFrame},\n        varint::{VarInt, be_varint},\n    };\n\n    #[test]\n    fn test_stream_frame() {\n        let stream_frame = StreamFrame {\n            id: VarInt::from_u32(0x1234).into(),\n            offset: VarInt::from_u32(0x1234),\n            length: 11,\n            len_bit: Len::Explicit,\n            fin_bit: Fin::No,\n        };\n        assert_eq!(\n            stream_frame.frame_type(),\n            FrameType::Stream(Offset::NonZero, Len::Explicit, Fin::No)\n        );\n        assert_eq!(stream_frame.max_encoding_size(), 1 + 8 + 8 + 8);\n        assert_eq!(stream_frame.encoding_size(), 1 + 2 + 2 + 1);\n    }\n\n    #[test]\n    fn test_read_stream_frame() {\n        let raw = Bytes::from_static(&[\n            0x0e, 0x52, 0x34, 0x52, 0x34, 0x0b, b'h', b'e', b'l', b'l', b'o', b' ', b'w', b'o',\n            b'r', b'l', b'd', 0,\n        ]);\n        let input = raw.as_ref();\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            let stream_frame_type: VarInt =\n                FrameType::Stream(Offset::NonZero, Len::Explicit, Fin::No).into();\n            assert_eq!(frame_type, stream_frame_type);\n            stream_frame_with_flag(Offset::NonZero, Len::Explicit, Fin::No)\n        })\n        .parse(input)\n        .unwrap();\n\n        assert_eq!(\n            input,\n            &[\n                b'h', b'e', b'l', b'l', b'o', 
b' ', b'w', b'o', b'r', b'l', b'd', 0,\n            ][..]\n        );\n        assert_eq!(\n            frame,\n            StreamFrame {\n                id: VarInt::from_u32(0x1234).into(),\n                offset: VarInt::from_u32(0x1234),\n                length: 11,\n                len_bit: Len::Explicit,\n                fin_bit: Fin::No,\n            }\n        );\n    }\n\n    #[test]\n    fn test_read_last_stream_frame() {\n        let raw = Bytes::from_static(&[\n            0x0c, 0x52, 0x34, 0x52, 0x34, b'h', b'e', b'l', b'l', b'o', b' ', b'w', b'o', b'r',\n            b'l', b'd',\n        ]);\n        let input = raw.as_ref();\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            let stream_frame_type: VarInt =\n                FrameType::Stream(Offset::NonZero, Len::Omit, Fin::No).into();\n            assert_eq!(frame_type, stream_frame_type);\n            stream_frame_with_flag(Offset::NonZero, Len::Omit, Fin::No)\n        })\n        .parse(input)\n        .unwrap();\n\n        assert_eq!(\n            input,\n            &[\n                b'h', b'e', b'l', b'l', b'o', b' ', b'w', b'o', b'r', b'l', b'd',\n            ][..]\n        );\n        assert_eq!(\n            frame,\n            StreamFrame {\n                id: VarInt::from_u32(0x1234).into(),\n                offset: VarInt::from_u32(0x1234),\n                length: 11,\n                len_bit: Len::Omit,\n                fin_bit: Fin::No,\n            }\n        );\n    }\n\n    #[test]\n    fn test_write_initial_stream_frame() {\n        let mut buf = Vec::new();\n        let frame = StreamFrame {\n            id: VarInt::from_u32(0x1234).into(),\n            offset: VarInt::from_u32(0),\n            length: 11,\n            len_bit: Len::Explicit,\n            fin_bit: Fin::Yes,\n        };\n        buf.put_data_frame(&frame, b\"hello world\");\n        assert_eq!(\n            buf,\n            vec![\n                0xb, 0x52, 0x34, 0x0b, b'h', b'e', 
b'l', b'l', b'o', b' ', b'w', b'o', b'r', b'l',\n                b'd'\n            ]\n        );\n    }\n\n    #[test]\n    fn test_write_last_stream_frame() {\n        let mut buf = Vec::new();\n        let frame = StreamFrame {\n            id: VarInt::from_u32(0x1234).into(),\n            offset: VarInt::from_u32(0),\n            length: 11,\n            len_bit: Len::Omit,\n            fin_bit: Fin::Yes,\n        };\n        buf.put_data_frame(&frame, b\"hello world\");\n        assert_eq!(\n            buf,\n            vec![\n                0x9, 0x52, 0x34, b'h', b'e', b'l', b'l', b'o', b' ', b'w', b'o', b'r', b'l', b'd'\n            ]\n        );\n    }\n\n    #[test]\n    fn test_write_eos_frame() {\n        let mut buf = Vec::new();\n        let frame = StreamFrame {\n            id: VarInt::from_u32(0x1234).into(),\n            offset: VarInt::from_u32(0x1234),\n            length: 11,\n            len_bit: Len::Explicit,\n            fin_bit: Fin::Yes,\n        };\n        buf.put_data_frame(&frame, b\"hello world\");\n        assert_eq!(\n            buf,\n            vec![\n                0x0f, 0x52, 0x34, 0x52, 0x34, 0x0b, b'h', b'e', b'l', b'l', b'o', b' ', b'w', b'o',\n                b'r', b'l', b'd'\n            ]\n        );\n    }\n\n    #[test]\n    fn test_write_unfinished_stream_frame() {\n        let mut buf = Vec::new();\n        let frame = StreamFrame {\n            id: VarInt::from_u32(0x1234).into(),\n            offset: VarInt::from_u32(0x1234),\n            length: 11,\n            len_bit: Len::Explicit,\n            fin_bit: Fin::No,\n        };\n        buf.put_data_frame(&frame, b\"hello world\");\n        assert_eq!(\n            buf,\n            vec![\n                0x0e, 0x52, 0x34, 0x52, 0x34, 0x0b, b'h', b'e', b'l', b'l', b'o', b' ', b'w', b'o',\n                b'r', b'l', b'd'\n            ]\n        );\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/stream_data_blocked.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    sid::{StreamId, WriteStreamId, be_streamid},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// STREAM_DATA_BLOCKED frame.\n///\n/// ```text\n/// STREAM_DATA_BLOCKED Frame {\n///   Type (i) = 0x15,\n///   Stream ID (i),\n///   Maximum Stream Data (i),\n/// }\n/// ```\n///\n/// See [STREAM_DATA_BLOCKED Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-stream_data_blocked-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct StreamDataBlockedFrame {\n    stream_id: StreamId,\n    maximum_stream_data: VarInt,\n}\n\nimpl super::GetFrameType for StreamDataBlockedFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::StreamDataBlocked\n    }\n}\n\nimpl super::EncodeSize for StreamDataBlockedFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.stream_id.encoding_size() + self.maximum_stream_data.encoding_size()\n    }\n}\n\nimpl StreamDataBlockedFrame {\n    /// Create a new [`StreamDataBlockedFrame`].\n    pub fn new(stream_id: StreamId, maximum_stream_data: VarInt) -> Self {\n        Self {\n            stream_id,\n            maximum_stream_data,\n        }\n    }\n\n    /// Return the stream ID of the frame.\n    pub fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    /// Return the maximum stream data of the frame.\n    pub fn maximum_stream_data(&self) -> u64 {\n        self.maximum_stream_data.into_u64()\n    }\n}\n\n/// Parse a STREAM_DATA_BLOCKED frame from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_stream_data_blocked_frame(input: &[u8]) -> nom::IResult<&[u8], StreamDataBlockedFrame> {\n    let (input, stream_id) = be_streamid(input)?;\n    let (input, maximum_stream_data) = be_varint(input)?;\n    Ok((\n  
      input,\n        StreamDataBlockedFrame {\n            stream_id,\n            maximum_stream_data,\n        },\n    ))\n}\n\nimpl<T: bytes::BufMut> super::io::WriteFrame<StreamDataBlockedFrame> for T {\n    fn put_frame(&mut self, frame: &StreamDataBlockedFrame) {\n        self.put_frame_type(frame.frame_type());\n        self.put_streamid(&frame.stream_id);\n        self.put_varint(&frame.maximum_stream_data);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::StreamDataBlockedFrame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        varint::VarInt,\n    };\n\n    #[test]\n    fn test_stream_data_blocked_frame() {\n        let frame =\n            StreamDataBlockedFrame::new(VarInt::from_u32(0x1234).into(), VarInt::from_u32(0x5678));\n        assert_eq!(frame.frame_type(), FrameType::StreamDataBlocked);\n        assert_eq!(frame.max_encoding_size(), 1 + 8 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2 + 4);\n        assert_eq!(frame.stream_id(), VarInt::from_u32(0x1234).into());\n        assert_eq!(frame.maximum_stream_data(), 0x5678);\n    }\n\n    #[test]\n    fn test_read_stream_data_blocked() {\n        use super::be_stream_data_blocked_frame;\n        let buf = [0x52, 0x34, 0x80, 0, 0x56, 0x78];\n        let (_, frame) = be_stream_data_blocked_frame(&buf).unwrap();\n        assert_eq!(\n            frame,\n            StreamDataBlockedFrame::new(VarInt::from_u32(0x1234).into(), VarInt::from_u32(0x5678))\n        );\n    }\n\n    #[test]\n    fn test_write_stream_data_blocked_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&StreamDataBlockedFrame::new(\n            VarInt::from_u32(0x1234).into(),\n            VarInt::from_u32(0x5678),\n        ));\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::StreamDataBlocked);\n        expected.extend_from_slice(&[0x52, 0x34, 0x80, 0, 0x56, 
0x78]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame/streams_blocked.rs",
    "content": "use crate::{\n    frame::{GetFrameType, io::WriteFrameType},\n    sid::Dir,\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// STREAMS_BLOCKED frame.\n///\n/// ```text\n/// STREAMS_BLOCKED Frame {\n///   Type (i) = 0x16..0x17,\n///   Maximum Streams (i),\n/// }\n/// ```\n///\n/// See [STREAMS_BLOCKED Frames](https://www.rfc-editor.org/rfc/rfc9000.html#name-streams_blocked-frames)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum StreamsBlockedFrame {\n    Bi(VarInt),\n    Uni(VarInt),\n}\n\nimpl StreamsBlockedFrame {\n    pub fn with(dir: Dir, max_streams: VarInt) -> Self {\n        match dir {\n            Dir::Bi => StreamsBlockedFrame::Bi(max_streams),\n            Dir::Uni => StreamsBlockedFrame::Uni(max_streams),\n        }\n    }\n}\n\nimpl super::GetFrameType for StreamsBlockedFrame {\n    fn frame_type(&self) -> super::FrameType {\n        super::FrameType::StreamsBlocked(match self {\n            StreamsBlockedFrame::Bi(_) => Dir::Bi,\n            StreamsBlockedFrame::Uni(_) => Dir::Uni,\n        })\n    }\n}\n\nimpl super::EncodeSize for StreamsBlockedFrame {\n    fn max_encoding_size(&self) -> usize {\n        1 + 8\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + match self {\n            StreamsBlockedFrame::Bi(stream_id) => stream_id.encoding_size(),\n            StreamsBlockedFrame::Uni(stream_id) => stream_id.encoding_size(),\n        }\n    }\n}\n\n/// Return a parser for STREAMS_BLOCKED frame with the given direction,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn streams_blocked_frame_with_dir(\n    dir: Dir,\n) -> impl Fn(&[u8]) -> nom::IResult<&[u8], StreamsBlockedFrame> {\n    move |input: &[u8]| {\n        let (input, max_streams) = be_varint(input)?;\n        Ok((\n            input,\n            match dir {\n                Dir::Bi => StreamsBlockedFrame::Bi(max_streams),\n                Dir::Uni => 
StreamsBlockedFrame::Uni(max_streams),\n            },\n        ))\n    }\n}\n\n
impl<T: bytes::BufMut> super::io::WriteFrame<StreamsBlockedFrame> for T {\n    fn put_frame(&mut self, frame: &StreamsBlockedFrame) {\n        match frame {\n            StreamsBlockedFrame::Bi(max_streams) => {\n                self.put_frame_type(frame.frame_type());\n                self.put_varint(max_streams);\n            }\n            StreamsBlockedFrame::Uni(max_streams) => {\n                self.put_frame_type(frame.frame_type());\n                self.put_varint(max_streams);\n            }\n        }\n    }\n}\n\n
#[cfg(test)]\nmod tests {\n    use super::StreamsBlockedFrame;\n    use crate::{\n        frame::{\n            EncodeSize, FrameType, GetFrameType,\n            io::{WriteFrame, WriteFrameType},\n        },\n        sid::Dir,\n        varint::VarInt,\n    };\n\n
    #[test]\n    fn test_streams_blocked_frame() {\n        let frame = StreamsBlockedFrame::Bi(VarInt::from_u32(0x1234));\n        assert_eq!(frame.frame_type(), FrameType::StreamsBlocked(Dir::Bi));\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n\n        let frame = StreamsBlockedFrame::Uni(VarInt::from_u32(0x1234));\n        assert_eq!(frame.frame_type(), FrameType::StreamsBlocked(Dir::Uni));\n        assert_eq!(frame.max_encoding_size(), 1 + 8);\n        assert_eq!(frame.encoding_size(), 1 + 2);\n    }\n\n
    #[test]\n    fn test_read_streams_blocked_frame() {\n        use nom::{Parser, combinator::flat_map};\n\n        use super::streams_blocked_frame_with_dir;\n        use crate::varint::be_varint;\n\n
        let streams_blocked_bi_type = VarInt::from(FrameType::StreamsBlocked(Dir::Bi));\n        let streams_blocked_uni_type = VarInt::from(FrameType::StreamsBlocked(Dir::Uni));\n        let buf = vec![streams_blocked_bi_type.into_u64() as u8, 0x52, 0x34];\n
        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == streams_blocked_bi_type {\n                streams_blocked_frame_with_dir(Dir::Bi)\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, StreamsBlockedFrame::Bi(VarInt::from_u32(0x1234)));\n\n
        let buf = vec![streams_blocked_uni_type.into_u64() as u8, 0x52, 0x34];\n        let (input, frame) = flat_map(be_varint, |frame_type| {\n            if frame_type == streams_blocked_uni_type {\n                streams_blocked_frame_with_dir(Dir::Uni)\n            } else {\n                panic!(\"wrong frame type: {frame_type}\")\n            }\n        })\n        .parse(buf.as_ref())\n        .unwrap();\n        assert!(input.is_empty());\n        assert_eq!(frame, StreamsBlockedFrame::Uni(VarInt::from_u32(0x1234)));\n    }\n\n
    #[test]\n    fn test_write_streams_blocked_frame() {\n        let mut buf = Vec::new();\n        buf.put_frame(&StreamsBlockedFrame::Bi(VarInt::from_u32(0x1234)));\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::StreamsBlocked(Dir::Bi));\n        expected.extend_from_slice(&[0x52, 0x34]);\n        assert_eq!(buf, expected);\n        let mut buf = Vec::new();\n        buf.put_frame(&StreamsBlockedFrame::Uni(VarInt::from_u32(0x1234)));\n        let mut expected = Vec::new();\n        expected.put_frame_type(FrameType::StreamsBlocked(Dir::Uni));\n        expected.extend_from_slice(&[0x52, 0x34]);\n        assert_eq!(buf, expected);\n    }\n}\n"
  },
  {
    "path": "qbase/src/frame.rs",
    "content": "use std::fmt::Debug;\n\nuse bytes::{Buf, BufMut, Bytes};\nuse derive_more::{Deref, DerefMut, From, TryInto};\nuse enum_dispatch::enum_dispatch;\nuse io::WriteFrame;\n\nuse super::varint::VarInt;\nuse crate::{net::Family, packet::r#type::Type, sid::Dir};\n\nmod ack;\nmod connection_close;\nmod crypto;\nmod data_blocked;\nmod datagram;\nmod handshake_done;\nmod max_data;\nmod max_stream_data;\nmod max_streams;\nmod new_connection_id;\nmod new_token;\nmod padding;\nmod path_challenge;\nmod path_response;\nmod ping;\nmod reset_stream;\nmod retire_connection_id;\nmod stop_sending;\nmod stream;\nmod stream_data_blocked;\nmod streams_blocked;\n\nmod add_address;\nmod punch_done;\nmod punch_hello;\nmod punch_me_now;\nmod remove_address;\n\n/// Error module for parsing frames\npub mod error;\n/// IO module for frame encoding and decoding\npub mod io;\n\npub use ack::{AckFrame, Ecn, EcnCounts};\npub use add_address::AddAddressFrame;\npub use connection_close::{AppCloseFrame, ConnectionCloseFrame, Layer, QuicCloseFrame};\npub use crypto::CryptoFrame;\npub use data_blocked::DataBlockedFrame;\npub use datagram::DatagramFrame;\n#[doc(hidden)]\npub use error::Error;\npub use handshake_done::HandshakeDoneFrame;\npub use max_data::MaxDataFrame;\npub use max_stream_data::MaxStreamDataFrame;\npub use max_streams::MaxStreamsFrame;\npub use new_connection_id::NewConnectionIdFrame;\npub use new_token::NewTokenFrame;\npub use padding::PaddingFrame;\npub use path_challenge::PathChallengeFrame;\npub use path_response::PathResponseFrame;\npub use ping::PingFrame;\npub use punch_done::PunchDoneFrame;\npub use punch_hello::PunchHelloFrame;\npub use punch_me_now::PunchMeNowFrame;\npub use remove_address::RemoveAddressFrame;\npub use reset_stream::{ResetStreamError, ResetStreamFrame};\npub use retire_connection_id::RetireConnectionIdFrame;\npub use stop_sending::StopSendingFrame;\npub use stream::{EncodingStrategy, Fin, Len, Offset, STREAM_FRAME_MAX_ENCODING_SIZE, 
StreamFrame};\npub use stream_data_blocked::StreamDataBlockedFrame;\npub use streams_blocked::StreamsBlockedFrame;\n\n/// Define the basic behaviors for all kinds of frames\n#[enum_dispatch]\npub trait GetFrameType {\n    /// Return the type of frame\n    fn frame_type(&self) -> FrameType;\n}\n\n#[enum_dispatch]\npub trait EncodeSize {\n    /// Return the max number of bytes needed to encode this value\n    ///\n    /// Calculate the maximum size by summing up the maximum length of each field.\n    /// If a field type has a maximum length, use it; otherwise, use the actual length\n    /// of the data in that field.\n    ///\n    /// When packaging data, pre-estimating this value helps avoid spending\n    /// extra resources on calculating the actual encoded size.\n    fn max_encoding_size(&self) -> usize {\n        1\n    }\n\n    /// Return the exact number of bytes needed to encode this value\n    fn encoding_size(&self) -> usize {\n        1\n    }\n}\n\n/// The `Spec` summarizes any special rules governing the processing\n/// or generation of the frame type, as indicated by the following markings.\n///\n/// See [table-3](https://www.rfc-editor.org/rfc/rfc9000.html#table-3)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub enum Spec {\n    /// Packets containing only frames with this marking are not ack-eliciting.\n    ///\n    /// See [Section 13.2](https://www.rfc-editor.org/rfc/rfc9000.html#generating-acks)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n    NonAckEliciting = 1,\n    /// Packets containing only frames with this marking do not count toward bytes\n    /// in flight for congestion control purposes.\n    /// See [section-12.4-14.4](https://www.rfc-editor.org/rfc/rfc9000.html#section-12.4-14.4)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html).\n    ///\n    /// Similar to TCP, packets containing only ACK frames do not count toward bytes\n    /// in flight 
and are not congestion controlled.\n    /// See [Section 7.4](https://www.rfc-editor.org/rfc/rfc9002#section-7.4)\n    /// of [QUIC-RECOVERY](https://www.rfc-editor.org/rfc/rfc9002).\n    CongestionControlFree = 2,\n    /// Packets containing only frames with this marking can be used to probe\n    /// new network paths during connection migration.\n    ///\n    /// See [Section 9.1](https://www.rfc-editor.org/rfc/rfc9000.html#probing)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html).\n    ProbeNewPath = 4,\n    /// The contents of frames with this marking are flow controlled.\n    ///\n    /// See [Section 4](https://www.rfc-editor.org/rfc/rfc9000.html#flow-control)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n    FlowControlled = 8,\n}\n\n/// Test whether a spec bitset contains the given [`Spec`] marking.\npub trait ContainSpec {\n    fn contain(&self, spec: Spec) -> bool;\n}\n\nimpl ContainSpec for u8 {\n    #[inline]\n    fn contain(&self, spec: Spec) -> bool {\n        *self & spec as u8 != 0\n    }\n}\n\n/// The sum type of all the core QUIC frame types.\n///\n/// See [table-3](https://www.rfc-editor.org/rfc/rfc9000.html#table-3)\n/// and [frame types and formats](https://www.rfc-editor.org/rfc/rfc9000.html#name-frame-types-and-formats)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum FrameType {\n    /// PADDING frame, see [`PaddingFrame`].\n    Padding,\n    /// PING frame, see [`PingFrame`].\n    Ping,\n    /// ACK frame, see [`AckFrame`].\n    Ack(Ecn),\n    /// RESET_STREAM frame, see [`ResetStreamFrame`].\n    ResetStream,\n    /// STOP_SENDING frame, see [`StopSendingFrame`].\n    StopSending,\n    /// CRYPTO frame, see [`CryptoFrame`].\n    Crypto,\n    /// NEW_TOKEN frame, see [`NewTokenFrame`].\n    NewToken,\n    /// STREAM frame, see [`StreamFrame`].\n    Stream(Offset, Len, Fin),\n    /// MAX_DATA frame, see [`MaxDataFrame`].\n    MaxData,\n    /// MAX_STREAM_DATA frame, see 
[`MaxStreamDataFrame`].\n    MaxStreamData,\n    /// MAX_STREAMS frame, see [`MaxStreamsFrame`].\n    MaxStreams(Dir),\n    /// DATA_BLOCKED frame, see [`DataBlockedFrame`].\n    DataBlocked,\n    /// STREAM_DATA_BLOCKED frame, see [`StreamDataBlockedFrame`].\n    StreamDataBlocked,\n    /// STREAMS_BLOCKED frame, see [`StreamsBlockedFrame`].\n    StreamsBlocked(Dir),\n    /// NEW_CONNECTION_ID frame, see [`NewConnectionIdFrame`].\n    NewConnectionId,\n    /// RETIRE_CONNECTION_ID frame, see [`RetireConnectionIdFrame`].\n    RetireConnectionId,\n    /// PATH_CHALLENGE frame, see [`PathChallengeFrame`].\n    PathChallenge,\n    /// PATH_RESPONSE frame, see [`PathResponseFrame`].\n    PathResponse,\n    /// CONNECTION_CLOSE frame, see [`ConnectionCloseFrame`].\n    ConnectionClose(Layer),\n    /// HANDSHAKE_DONE frame, see [`HandshakeDoneFrame`].\n    HandshakeDone,\n    /// DATAGRAM frame, see [`DatagramFrame`].\n    Datagram(u8),\n    /// ADD_ADDRESS frame, see [`AddAddressFrame`].\n    AddAddress(Family),\n    /// REMOVE_ADDRESS frame, see [`RemoveAddressFrame`].\n    RemoveAddress,\n    /// PUNCH_ME_NOW frame, see [`PunchMeNowFrame`].\n    PunchMeNow(Family),\n    /// PUNCH_HELLO frame, see [`PunchHelloFrame`].\n    PunchHello,\n    /// PUNCH_DONE frame, see [`PunchDoneFrame`].\n    PunchDone,\n}\n\n#[enum_dispatch]\npub trait FrameFeature {\n    /// Return whether a frame type belongs to the given packet_type\n    fn belongs_to(&self, packet_type: Type) -> bool;\n    /// Return the specs of the frame type\n    fn specs(&self) -> u8;\n}\n\nimpl<T: GetFrameType> FrameFeature for T {\n    fn belongs_to(&self, packet_type: Type) -> bool {\n        self.frame_type().belongs_to(packet_type)\n    }\n\n    fn specs(&self) -> u8 {\n        self.frame_type().specs()\n    }\n}\n\nimpl FrameFeature for FrameType {\n    fn belongs_to(&self, packet_type: Type) -> bool {\n        use crate::packet::r#type::{\n            long::{Type::V1, Ver1},\n            short::OneRtt,\n   
     };\n        // IH01\n        let i = matches!(packet_type, Type::Long(V1(Ver1::INITIAL)));\n        let h = matches!(packet_type, Type::Long(V1(Ver1::HANDSHAKE)));\n        let o = matches!(packet_type, Type::Long(V1(Ver1::ZERO_RTT)));\n        let l = matches!(packet_type, Type::Short(OneRtt(_)));\n\n        match self {\n            FrameType::Padding => i | h | o | l,\n            FrameType::Ping => i | h | o | l,\n            FrameType::Ack(_) => i | h | l,\n            FrameType::ResetStream => o | l,\n            FrameType::StopSending => o | l,\n            FrameType::Crypto => i | h | l,\n            FrameType::NewToken => l,\n            FrameType::Stream(..) => o | l,\n            FrameType::MaxData => o | l,\n            FrameType::MaxStreamData => o | l,\n            FrameType::MaxStreams(_) => o | l,\n            FrameType::DataBlocked => o | l,\n            FrameType::StreamDataBlocked => o | l,\n            FrameType::StreamsBlocked(_) => o | l,\n            FrameType::NewConnectionId => o | l,\n            FrameType::RetireConnectionId => o | l,\n            FrameType::PathChallenge => o | l,\n            FrameType::PathResponse => l,\n            // The application-specific variant of CONNECTION_CLOSE (type 0x1d) can only be\n            // sent using 0-RTT or 1-RTT packets;\n            // See [Section 12.5](https://www.rfc-editor.org/rfc/rfc9000.html#section-12.5).\n            //\n            // When an application wishes to abandon a connection during the handshake,\n            // an endpoint can send a CONNECTION_CLOSE frame (type 0x1c) with an error code\n            // of APPLICATION_ERROR in an Initial or Handshake packet.\n            FrameType::ConnectionClose(layer) => match layer {\n                Layer::App => o | l,\n                Layer::Quic => i | h | o | l,\n            },\n            FrameType::HandshakeDone => l,\n            FrameType::Datagram(_) => o | l,\n            FrameType::AddAddress(_) => o | l,\n            
FrameType::RemoveAddress => o | l,\n            FrameType::PunchMeNow(_) => o | l,\n            FrameType::PunchHello => o | l,\n            FrameType::PunchDone => o | l,\n        }\n    }\n\n    fn specs(&self) -> u8 {\n        let (n, c, p, f) = (\n            Spec::NonAckEliciting as u8,\n            Spec::CongestionControlFree as u8,\n            Spec::ProbeNewPath as u8,\n            Spec::FlowControlled as u8,\n        );\n        match self {\n            FrameType::Padding => n | p,\n            FrameType::Ack(_) => n | c,\n            FrameType::Stream(..) => f,\n            FrameType::NewConnectionId => p,\n            FrameType::PathChallenge => p,\n            FrameType::PathResponse => p,\n            // Unlike [table 3](https://www.rfc-editor.org/rfc/rfc9000.html#table-3),\n            // the CONNECTION_CLOSE frame is additionally marked [`Spec::CongestionControlFree`].\n            FrameType::ConnectionClose(_) => n | c,\n            FrameType::PunchHello => n,\n            FrameType::PunchDone => n,\n            _ => 0,\n        }\n    }\n}\n\nimpl TryFrom<VarInt> for FrameType {\n    type Error = Error;\n\n    fn try_from(frame_type: VarInt) -> Result<Self, Self::Error> {\n        Ok(match frame_type.into_u64() {\n            0x00 => FrameType::Padding,\n            0x01 => FrameType::Ping,\n            // The last bit is the ECN flag.\n            0x02 => FrameType::Ack(Ecn::None),\n            0x03 => FrameType::Ack(Ecn::Exist),\n            0x04 => FrameType::ResetStream,\n            0x05 => FrameType::StopSending,\n            0x06 => FrameType::Crypto,\n            0x07 => FrameType::NewToken,\n            // The last three bits are the offset, length, and fin flag bits respectively.\n            ty @ 0x08..=0x0f => FrameType::Stream(Offset::from(ty), Len::from(ty), Fin::from(ty)),\n            0x10 => FrameType::MaxData,\n            0x11 => FrameType::MaxStreamData,\n            // The last bit is the direction flag bit, 0 indicates bidirectional, 1 indicates 
unidirectional.\n            0x12 => FrameType::MaxStreams(Dir::Bi),\n            0x13 => FrameType::MaxStreams(Dir::Uni),\n            0x14 => FrameType::DataBlocked,\n            0x15 => FrameType::StreamDataBlocked,\n            // The last bit is the direction flag bit, 0 indicates bidirectional, 1 indicates unidirectional.\n            0x16 => FrameType::StreamsBlocked(Dir::Bi),\n            0x17 => FrameType::StreamsBlocked(Dir::Uni),\n            0x18 => FrameType::NewConnectionId,\n            0x19 => FrameType::RetireConnectionId,\n            0x1a => FrameType::PathChallenge,\n            0x1b => FrameType::PathResponse,\n            0x1c => FrameType::ConnectionClose(Layer::Quic),\n            0x1d => FrameType::ConnectionClose(Layer::App),\n            0x1e => FrameType::HandshakeDone,\n            // The last bit is the length flag bit: 0 means the Length field is absent and the\n            // Datagram Data field extends to the end of the packet; 1 means it is present.\n            ty @ (0x30 | 0x31) => FrameType::Datagram(ty as u8 & 1),\n            0x3d7e90 => FrameType::AddAddress(Family::V4),\n            0x3d7e91 => FrameType::AddAddress(Family::V6),\n            0x3d7e92 => FrameType::PunchMeNow(Family::V4),\n            0x3d7e93 => FrameType::PunchMeNow(Family::V6),\n            0x3d7e94 => FrameType::RemoveAddress,\n            0x3d7e95 => FrameType::PunchHello,\n            0x3d7e96 => FrameType::PunchDone,\n            // May be an extension frame\n            _ => return Err(Self::Error::InvalidType(frame_type)),\n        })\n    }\n}\n\nimpl From<FrameType> for VarInt {\n    fn from(frame_type: FrameType) -> Self {\n        match frame_type {\n            FrameType::Padding => VarInt::from_u32(0x00),\n            FrameType::Ping => VarInt::from_u32(0x01),\n            FrameType::Ack(Ecn::None) => VarInt::from_u32(0x02),\n            FrameType::Ack(Ecn::Exist) => VarInt::from_u32(0x03),\n            FrameType::ResetStream => 
VarInt::from_u32(0x04),\n            FrameType::StopSending => VarInt::from_u32(0x05),\n            FrameType::Crypto => VarInt::from_u32(0x06),\n            FrameType::NewToken => VarInt::from_u32(0x07),\n            FrameType::Stream(offset, len, fin) => {\n                let offset: u8 = offset.into();\n                let len: u8 = len.into();\n                let fin: u8 = fin.into();\n                VarInt::from(0x08u8 | offset | len | fin)\n            }\n            FrameType::MaxData => VarInt::from_u32(0x10),\n            FrameType::MaxStreamData => VarInt::from_u32(0x11),\n            FrameType::MaxStreams(Dir::Bi) => VarInt::from_u32(0x12),\n            FrameType::MaxStreams(Dir::Uni) => VarInt::from_u32(0x13),\n            FrameType::DataBlocked => VarInt::from_u32(0x14),\n            FrameType::StreamDataBlocked => VarInt::from_u32(0x15),\n            FrameType::StreamsBlocked(Dir::Bi) => VarInt::from_u32(0x16),\n            FrameType::StreamsBlocked(Dir::Uni) => VarInt::from_u32(0x17),\n            FrameType::NewConnectionId => VarInt::from_u32(0x18),\n            FrameType::RetireConnectionId => VarInt::from_u32(0x19),\n            FrameType::PathChallenge => VarInt::from_u32(0x1a),\n            FrameType::PathResponse => VarInt::from_u32(0x1b),\n            FrameType::ConnectionClose(Layer::Quic) => VarInt::from_u32(0x1c),\n            FrameType::ConnectionClose(Layer::App) => VarInt::from_u32(0x1d),\n            FrameType::HandshakeDone => VarInt::from_u32(0x1e),\n            FrameType::Datagram(with_len) => VarInt::from(0x30 | with_len),\n            FrameType::AddAddress(family) => VarInt::from_u32(0x3d7e90 | family as u32),\n            FrameType::PunchMeNow(family) => VarInt::from_u32(0x3d7e92 | family as u32),\n            FrameType::RemoveAddress => VarInt::from_u32(0x3d7e94),\n            FrameType::PunchHello => VarInt::from_u32(0x3d7e95),\n            FrameType::PunchDone => VarInt::from_u32(0x3d7e96),\n        }\n    }\n}\n\n/// Parse 
the frame type from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_frame_type(input: &[u8]) -> nom::IResult<&[u8], FrameType, Error> {\n    let (remain, frame_type) = crate::varint::be_varint(input).map_err(|_| {\n        nom::Err::Error(Error::IncompleteType(format!(\n            \"Incomplete frame type from input: {input:?}\"\n        )))\n    })?;\n    let frame_type = FrameType::try_from(frame_type).map_err(nom::Err::Error)?;\n    Ok((remain, frame_type))\n}\n\n/// Sum type of all the stream related frames except [`StreamFrame`].\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\n#[enum_dispatch(EncodeSize, GetFrameType)]\npub enum StreamCtlFrame {\n    /// RESET_STREAM frame, see [`ResetStreamFrame`].\n    ResetStream(ResetStreamFrame),\n    /// STOP_SENDING frame, see [`StopSendingFrame`].\n    StopSending(StopSendingFrame),\n    /// MAX_STREAM_DATA frame, see [`MaxStreamDataFrame`].\n    MaxStreamData(MaxStreamDataFrame),\n    /// MAX_STREAMS frame, see [`MaxStreamsFrame`].\n    MaxStreams(MaxStreamsFrame),\n    /// STREAM_DATA_BLOCKED frame, see [`StreamDataBlockedFrame`].\n    StreamDataBlocked(StreamDataBlockedFrame),\n    /// STREAMS_BLOCKED frame, see [`StreamsBlockedFrame`].\n    StreamsBlocked(StreamsBlockedFrame),\n}\n\n/// Sum type of all the reliable frames.\n#[derive(Debug, Clone, Eq, PartialEq)]\n#[enum_dispatch(EncodeSize, GetFrameType)]\npub enum ReliableFrame {\n    /// NEW_TOKEN frame, see [`NewTokenFrame`].\n    NewToken(NewTokenFrame),\n    /// MAX_DATA frame, see [`MaxDataFrame`].\n    MaxData(MaxDataFrame),\n    /// DATA_BLOCKED frame, see [`DataBlockedFrame`].\n    DataBlocked(DataBlockedFrame),\n    /// NEW_CONNECTION_ID frame, see [`NewConnectionIdFrame`].\n    NewConnectionId(NewConnectionIdFrame),\n    /// RETIRE_CONNECTION_ID frame, see [`RetireConnectionIdFrame`].\n    RetireConnectionId(RetireConnectionIdFrame),\n    /// HANDSHAKE_DONE frame, see [`HandshakeDoneFrame`].\n    
HandshakeDone(HandshakeDoneFrame),\n    /// ADD_ADDRESS frame, see [`AddAddressFrame`].\n    AddAddress(AddAddressFrame),\n    /// REMOVE_ADDRESS frame, see [`RemoveAddressFrame`].\n    RemoveAddress(RemoveAddressFrame),\n    /// PUNCH_ME_NOW frame, see [`PunchMeNowFrame`].\n    PunchMeNow(PunchMeNowFrame),\n    /// PUNCH_DONE frame, see [`PunchDoneFrame`].\n    PunchDone(PunchDoneFrame),\n    /// STREAM control frame, see [`StreamCtlFrame`].\n    StreamCtl(StreamCtlFrame),\n}\n\n/// Sum type of all the frames.\n///\n/// The body of a data frame is stored in the second field.\n#[derive(Debug, Clone, From, TryInto, Eq, PartialEq)]\npub enum Frame<D = Bytes> {\n    /// PADDING frame, see [`PaddingFrame`].\n    Padding(PaddingFrame),\n    /// PING frame, see [`PingFrame`].\n    Ping(PingFrame),\n    /// ACK frame, see [`AckFrame`].\n    Ack(AckFrame),\n    /// CONNECTION_CLOSE frame, see [`ConnectionCloseFrame`].\n    Close(ConnectionCloseFrame),\n    /// NEW_TOKEN frame, see [`NewTokenFrame`].\n    NewToken(NewTokenFrame),\n    /// MAX_DATA frame, see [`MaxDataFrame`].\n    MaxData(MaxDataFrame),\n    /// DATA_BLOCKED frame, see [`DataBlockedFrame`].\n    DataBlocked(DataBlockedFrame),\n    /// NEW_CONNECTION_ID frame, see [`NewConnectionIdFrame`].\n    NewConnectionId(NewConnectionIdFrame),\n    /// RETIRE_CONNECTION_ID frame, see [`RetireConnectionIdFrame`].\n    RetireConnectionId(RetireConnectionIdFrame),\n    /// HANDSHAKE_DONE frame, see [`HandshakeDoneFrame`].\n    HandshakeDone(HandshakeDoneFrame),\n    /// PATH_CHALLENGE frame, see [`PathChallengeFrame`].\n    PathChallenge(PathChallengeFrame),\n    /// PATH_RESPONSE frame, see [`PathResponseFrame`].\n    PathResponse(PathResponseFrame),\n    /// Stream control frame, see [`StreamCtlFrame`].\n    StreamCtl(StreamCtlFrame),\n    /// STREAM frame and its data, see [`StreamFrame`].\n    Stream(StreamFrame, D),\n    /// CRYPTO frame and its data, see [`CryptoFrame`].\n    Crypto(CryptoFrame, D),\n    /// DATAGRAM 
frame and its data, see [`DatagramFrame`].\n    Datagram(DatagramFrame, D),\n    /// ADD_ADDRESS frame, see [`AddAddressFrame`].\n    AddAddress(AddAddressFrame),\n    /// REMOVE_ADDRESS frame, see [`RemoveAddressFrame`].\n    RemoveAddress(RemoveAddressFrame),\n    /// PUNCH_ME_NOW frame, see [`PunchMeNowFrame`].\n    PunchMeNow(PunchMeNowFrame),\n    /// PUNCH_HELLO frame, see [`PunchHelloFrame`].\n    PunchHello(PunchHelloFrame),\n    /// PUNCH_DONE frame, see [`PunchDoneFrame`].\n    PunchDone(PunchDoneFrame),\n}\n\nimpl<D> From<ReliableFrame> for Frame<D> {\n    #[inline]\n    fn from(frame: ReliableFrame) -> Self {\n        match frame {\n            ReliableFrame::NewToken(new_token_frame) => Frame::NewToken(new_token_frame),\n            ReliableFrame::MaxData(max_data_frame) => Frame::MaxData(max_data_frame),\n            ReliableFrame::DataBlocked(data_blocked_frame) => {\n                Frame::DataBlocked(data_blocked_frame)\n            }\n            ReliableFrame::NewConnectionId(new_connection_id_frame) => {\n                Frame::NewConnectionId(new_connection_id_frame)\n            }\n            ReliableFrame::RetireConnectionId(retire_connection_id_frame) => {\n                Frame::RetireConnectionId(retire_connection_id_frame)\n            }\n            ReliableFrame::HandshakeDone(handshake_done_frame) => {\n                Frame::HandshakeDone(handshake_done_frame)\n            }\n            ReliableFrame::AddAddress(add_address_frame) => Frame::AddAddress(add_address_frame),\n            ReliableFrame::RemoveAddress(remove_address_frame) => {\n                Frame::RemoveAddress(remove_address_frame)\n            }\n            ReliableFrame::PunchMeNow(punch_me_now_frame) => Frame::PunchMeNow(punch_me_now_frame),\n            ReliableFrame::PunchDone(punch_done_frame) => Frame::PunchDone(punch_done_frame),\n            ReliableFrame::StreamCtl(stream_frame) => Frame::StreamCtl(stream_frame),\n        }\n    }\n}\n\nimpl<'f, D> 
TryFrom<&'f Frame<D>> for CryptoFrame {\n    type Error = &'f Frame<D>;\n\n    #[inline]\n    fn try_from(frame: &'f Frame<D>) -> Result<Self, Self::Error> {\n        match frame {\n            Frame::Crypto(frame, _data) => Ok(*frame),\n            frame => Err(frame),\n        }\n    }\n}\n\nimpl<'f, D> TryFrom<&'f Frame<D>> for ReliableFrame {\n    type Error = &'f Frame<D>;\n\n    #[inline]\n    fn try_from(frame: &'f Frame<D>) -> Result<Self, Self::Error> {\n        match frame {\n            Frame::NewToken(new_token_frame) => {\n                Ok(ReliableFrame::NewToken(new_token_frame.clone()))\n            }\n            Frame::MaxData(max_data_frame) => Ok(ReliableFrame::MaxData(*max_data_frame)),\n            Frame::DataBlocked(data_blocked_frame) => {\n                Ok(ReliableFrame::DataBlocked(*data_blocked_frame))\n            }\n            Frame::NewConnectionId(new_connection_id_frame) => {\n                Ok(ReliableFrame::NewConnectionId(*new_connection_id_frame))\n            }\n            Frame::RetireConnectionId(retire_connection_id_frame) => Ok(\n                ReliableFrame::RetireConnectionId(*retire_connection_id_frame),\n            ),\n            Frame::HandshakeDone(handshake_done_frame) => {\n                Ok(ReliableFrame::HandshakeDone(*handshake_done_frame))\n            }\n            Frame::AddAddress(add_address_frame) => {\n                Ok(ReliableFrame::AddAddress(*add_address_frame))\n            }\n            Frame::RemoveAddress(remove_address_frame) => {\n                Ok(ReliableFrame::RemoveAddress(*remove_address_frame))\n            }\n            Frame::PunchMeNow(punch_me_now_frame) => {\n                Ok(ReliableFrame::PunchMeNow(*punch_me_now_frame))\n            }\n            Frame::PunchDone(punch_done_frame) => Ok(ReliableFrame::PunchDone(*punch_done_frame)),\n            Frame::StreamCtl(stream_frame) => Ok(ReliableFrame::StreamCtl(*stream_frame)),\n            frame => Err(frame),\n        
}\n    }\n}\n\nimpl<D> GetFrameType for Frame<D> {\n    #[doc = \" Return the type of frame\"]\n    #[inline]\n    fn frame_type(&self) -> FrameType {\n        match self {\n            Frame::Padding(f) => f.frame_type(),\n            Frame::Ping(f) => f.frame_type(),\n            Frame::Ack(f) => f.frame_type(),\n            Frame::Close(f) => f.frame_type(),\n            Frame::NewToken(f) => f.frame_type(),\n            Frame::MaxData(f) => f.frame_type(),\n            Frame::DataBlocked(f) => f.frame_type(),\n            Frame::NewConnectionId(f) => f.frame_type(),\n            Frame::RetireConnectionId(f) => f.frame_type(),\n            Frame::HandshakeDone(f) => f.frame_type(),\n            Frame::PathChallenge(f) => f.frame_type(),\n            Frame::PathResponse(f) => f.frame_type(),\n            Frame::StreamCtl(f) => f.frame_type(),\n            Frame::Stream(f, _) => f.frame_type(),\n            Frame::Crypto(f, _) => f.frame_type(),\n            Frame::Datagram(f, _) => f.frame_type(),\n            Frame::AddAddress(f) => f.frame_type(),\n            Frame::RemoveAddress(f) => f.frame_type(),\n            Frame::PunchMeNow(f) => f.frame_type(),\n            Frame::PunchHello(f) => f.frame_type(),\n            Frame::PunchDone(f) => f.frame_type(),\n        }\n    }\n}\n\nimpl<D> EncodeSize for Frame<D> {\n    #[doc = \" Return the max number of bytes needed to encode this value\"]\n    #[doc = \"\"]\n    #[doc = \" Calculate the maximum size by summing up the maximum length of each field.\"]\n    #[doc = \" If a field type has a maximum length, use it; otherwise, use the actual length\"]\n    #[doc = \" of the data in that field.\"]\n    #[doc = \"\"]\n    #[doc = \" When packaging data, pre-estimating this value helps avoid spending\"]\n    #[doc = \" extra resources on calculating the actual encoded size.\"]\n    #[inline]\n    fn max_encoding_size(&self) -> usize {\n        match self {\n            Frame::Padding(f) => 
f.max_encoding_size(),\n            Frame::Ping(f) => f.max_encoding_size(),\n            Frame::Ack(f) => f.max_encoding_size(),\n            Frame::Close(f) => f.max_encoding_size(),\n            Frame::NewToken(f) => f.max_encoding_size(),\n            Frame::MaxData(f) => f.max_encoding_size(),\n            Frame::DataBlocked(f) => f.max_encoding_size(),\n            Frame::NewConnectionId(f) => f.max_encoding_size(),\n            Frame::RetireConnectionId(f) => f.max_encoding_size(),\n            Frame::HandshakeDone(f) => f.max_encoding_size(),\n            Frame::PathChallenge(f) => f.max_encoding_size(),\n            Frame::PathResponse(f) => f.max_encoding_size(),\n            Frame::StreamCtl(f) => f.max_encoding_size(),\n            Frame::Stream(f, _) => f.max_encoding_size(),\n            Frame::Crypto(f, _) => f.max_encoding_size(),\n            Frame::Datagram(f, _) => f.max_encoding_size(),\n            Frame::AddAddress(f) => f.max_encoding_size(),\n            Frame::RemoveAddress(f) => f.max_encoding_size(),\n            Frame::PunchMeNow(f) => f.max_encoding_size(),\n            Frame::PunchHello(f) => f.max_encoding_size(),\n            Frame::PunchDone(f) => f.max_encoding_size(),\n        }\n    }\n\n    #[doc = \" Return the exact number of bytes needed to encode this value\"]\n    #[inline]\n    fn encoding_size(&self) -> usize {\n        match self {\n            Frame::Padding(f) => f.encoding_size(),\n            Frame::Ping(f) => f.encoding_size(),\n            Frame::Ack(f) => f.encoding_size(),\n            Frame::Close(f) => f.encoding_size(),\n            Frame::NewToken(f) => f.encoding_size(),\n            Frame::MaxData(f) => f.encoding_size(),\n            Frame::DataBlocked(f) => f.encoding_size(),\n            Frame::NewConnectionId(f) => f.encoding_size(),\n            Frame::RetireConnectionId(f) => f.encoding_size(),\n            Frame::HandshakeDone(f) => f.encoding_size(),\n            Frame::PathChallenge(f) => 
f.encoding_size(),\n            Frame::PathResponse(f) => f.encoding_size(),\n            Frame::StreamCtl(f) => f.encoding_size(),\n            Frame::Stream(f, _) => f.encoding_size(),\n            Frame::Crypto(f, _) => f.encoding_size(),\n            Frame::Datagram(f, _) => f.encoding_size(),\n            Frame::AddAddress(f) => f.encoding_size(),\n            Frame::RemoveAddress(f) => f.encoding_size(),\n            Frame::PunchMeNow(f) => f.encoding_size(),\n            Frame::PunchHello(f) => f.encoding_size(),\n            Frame::PunchDone(f) => f.encoding_size(),\n        }\n    }\n}\n\n/// Reads frames from a buffer until the packet buffer is empty.\n#[derive(Deref, DerefMut)]\npub struct FrameReader {\n    #[deref]\n    #[deref_mut]\n    payload: Bytes,\n    packet_type: Type,\n}\n\nimpl FrameReader {\n    /// Creates a [`FrameReader`] for a packet of type `packet_type`\n    pub fn new(payload: Bytes, packet_type: Type) -> Self {\n        Self {\n            payload,\n            packet_type,\n        }\n    }\n}\n\nimpl Iterator for FrameReader {\n    type Item = Result<(Frame, FrameType), Error>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        if self.payload.is_empty() {\n            return None;\n        }\n\n        match io::be_frame(&self.payload, self.packet_type) {\n            Ok((consumed, frame, frame_type)) => {\n                self.payload.advance(consumed);\n                Some(Ok((frame, frame_type)))\n            }\n            Err(e) => Some(Err(e)),\n        }\n    }\n}\n\nimpl<T: BufMut> WriteFrame<StreamCtlFrame> for T {\n    fn put_frame(&mut self, frame: &StreamCtlFrame) {\n        match frame {\n            StreamCtlFrame::ResetStream(frame) => self.put_frame(frame),\n            StreamCtlFrame::StopSending(frame) => self.put_frame(frame),\n            StreamCtlFrame::MaxStreamData(frame) => self.put_frame(frame),\n            StreamCtlFrame::MaxStreams(frame) => self.put_frame(frame),\n            
StreamCtlFrame::StreamDataBlocked(frame) => self.put_frame(frame),\n            StreamCtlFrame::StreamsBlocked(frame) => self.put_frame(frame),\n        }\n    }\n}\n\nimpl<T: BufMut> WriteFrame<ReliableFrame> for T {\n    fn put_frame(&mut self, frame: &ReliableFrame) {\n        match frame {\n            ReliableFrame::NewToken(frame) => self.put_frame(frame),\n            ReliableFrame::MaxData(frame) => self.put_frame(frame),\n            ReliableFrame::DataBlocked(frame) => self.put_frame(frame),\n            ReliableFrame::NewConnectionId(frame) => self.put_frame(frame),\n            ReliableFrame::RetireConnectionId(frame) => self.put_frame(frame),\n            ReliableFrame::HandshakeDone(frame) => self.put_frame(frame),\n            ReliableFrame::AddAddress(frame) => self.put_frame(frame),\n            ReliableFrame::RemoveAddress(frame) => self.put_frame(frame),\n            ReliableFrame::PunchMeNow(frame) => self.put_frame(frame),\n            ReliableFrame::PunchDone(frame) => self.put_frame(frame),\n            ReliableFrame::StreamCtl(frame) => self.put_frame(frame),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::net::SocketAddr;\n\n    use nom::Parser;\n\n    use super::*;\n    use crate::{\n        net::Family,\n        packet::{\n            PacketContent,\n            r#type::{\n                Type,\n                long::{Type::V1, Ver1},\n                short::OneRtt,\n            },\n        },\n        varint::{WriteVarInt, be_varint},\n    };\n\n    #[test]\n    fn test_frame_type_conversion() {\n        let frame_types = vec![\n            FrameType::Padding,\n            FrameType::Ping,\n            FrameType::Ack(Ecn::None),\n            FrameType::Stream(Offset::Zero, Len::Omit, Fin::No),\n            FrameType::MaxData,\n            FrameType::ConnectionClose(Layer::Quic),\n            FrameType::HandshakeDone,\n            FrameType::Datagram(0),\n        ];\n\n        for frame_type in frame_types {\n            
let byte: VarInt = frame_type.into();\n            assert_eq!(FrameType::try_from(byte).unwrap(), frame_type);\n        }\n    }\n\n    #[test]\n    fn test_frame_type_specs() {\n        assert!(FrameType::Padding.specs().contain(Spec::NonAckEliciting));\n        assert!(\n            FrameType::Ack(Ecn::None)\n                .specs()\n                .contain(Spec::CongestionControlFree)\n        );\n        assert!(\n            FrameType::Stream(Offset::Zero, Len::Omit, Fin::No)\n                .specs()\n                .contain(Spec::FlowControlled)\n        );\n        assert!(FrameType::PathChallenge.specs().contain(Spec::ProbeNewPath));\n    }\n\n    #[test]\n    fn test_frame_type_belongs_to() {\n        let initial = Type::Long(V1(Ver1::INITIAL));\n        assert!(FrameType::Padding.belongs_to(initial));\n        assert!(FrameType::Ping.belongs_to(initial));\n        assert!(FrameType::Ack(Ecn::None).belongs_to(initial));\n        assert!(!FrameType::Stream(Offset::Zero, Len::Omit, Fin::No).belongs_to(initial));\n    }\n\n    #[test]\n    fn test_frame_reader() {\n        let mut buf = bytes::BytesMut::new();\n        buf.put_u8(0x00); // PADDING\n        buf.put_u8(0x01); // PING\n\n        let packet_type = Type::Long(V1(Ver1::INITIAL));\n        let mut reader = FrameReader::new(buf.freeze(), packet_type);\n\n        // Read PADDING frame\n        let (frame, frame_type) = reader.next().unwrap().unwrap();\n        assert!(matches!(frame, Frame::Padding(_)));\n        assert!(frame_type.specs().contain(Spec::NonAckEliciting));\n\n        // Read PING frame\n        let (frame, frame_type) = reader.next().unwrap().unwrap();\n        assert!(matches!(frame, Frame::Ping(_)));\n        assert!(!frame_type.specs().contain(Spec::NonAckEliciting));\n\n        // No more frames\n        assert!(reader.next().is_none());\n    }\n\n    #[test]\n    fn test_invalid_frame_type() {\n        assert!(FrameType::try_from(VarInt::from_u32(0xFF)).is_err());\n    }\n\n   
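 #[test]\n    fn test_extension_frame_type_conversion() {\n        // A small addition, not part of the original suite: round-trip the\n        // multipath/NAT-traversal extension frame types through `VarInt`,\n        // mirroring `test_frame_type_conversion` for the core QUIC types.\n        for frame_type in [\n            FrameType::AddAddress(Family::V4),\n            FrameType::AddAddress(Family::V6),\n            FrameType::PunchMeNow(Family::V4),\n            FrameType::PunchMeNow(Family::V6),\n            FrameType::RemoveAddress,\n            FrameType::PunchHello,\n            FrameType::PunchDone,\n        ] {\n            let varint: VarInt = frame_type.into();\n            assert_eq!(FrameType::try_from(varint).unwrap(), frame_type);\n        }\n    }\n\n   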
 #[test]\n    fn test_frame_reader_parses_add_address_frame() {\n        use super::io::WriteFrame;\n\n        let add_address = AddAddressFrame::new(\n            1,\n            \"127.0.0.1:4433\".parse::<SocketAddr>().unwrap(),\n            2,\n            crate::net::NatType::RestrictedPort,\n        );\n        let expected = add_address;\n        let mut buf = bytes::BytesMut::new();\n        buf.put_frame(&ReliableFrame::AddAddress(add_address));\n\n        let mut reader = FrameReader::new(buf.freeze(), Type::Short(OneRtt(0.into())));\n        let (frame, frame_type) = reader.next().unwrap().unwrap();\n\n        assert_eq!(frame_type, FrameType::AddAddress(Family::V4));\n        assert_eq!(frame, Frame::AddAddress(expected));\n        assert!(reader.next().is_none());\n    }\n\n    #[test]\n    fn test_frame_reader_rejects_add_address_frame_in_non_data_packets() {\n        use super::io::WriteFrame;\n\n        let mut buf = bytes::BytesMut::new();\n        buf.put_frame(&ReliableFrame::AddAddress(AddAddressFrame::new(\n            7,\n            \"127.0.0.1:8443\".parse::<SocketAddr>().unwrap(),\n            4,\n            crate::net::NatType::Dynamic,\n        )));\n\n        for packet_type in [\n            Type::Long(V1(Ver1::INITIAL)),\n            Type::Long(V1(Ver1::HANDSHAKE)),\n        ] {\n            let mut reader = FrameReader::new(buf.clone().freeze(), packet_type);\n            assert_eq!(\n                reader.next().unwrap().unwrap_err(),\n                Error::WrongType(FrameType::AddAddress(Family::V4), packet_type)\n            );\n        }\n    }\n\n    #[test]\n    fn test_manual_unknown_custom_frame_fallback() {\n        use crate::varint::WriteVarInt;\n\n        #[derive(Debug, Clone, Eq, PartialEq)]\n        struct UnknownCustomFrame {\n            pub seq_num: VarInt,\n            pub tire: VarInt,\n            pub nat_type: VarInt,\n        }\n\n        fn be_unknown_custom_frame(input: &[u8]) -> nom::IResult<&[u8], 
UnknownCustomFrame> {\n            use nom::{combinator::verify, sequence::preceded};\n            preceded(\n                verify(be_varint, |typ| typ == &VarInt::from_u32(0xff)),\n                (be_varint, be_varint, be_varint),\n            )\n            .map(|(seq_num, tire, nat_type)| UnknownCustomFrame {\n                seq_num,\n                tire,\n                nat_type,\n            })\n            .parse(input)\n        }\n\n        fn parse_unknown_custom_frame(input: &[u8]) -> Result<(usize, UnknownCustomFrame), Error> {\n            let origin = input.len();\n            let (remain, frame) = be_unknown_custom_frame(input).map_err(|_| {\n                Error::IncompleteType(format!(\"Incomplete frame type from input: {input:?}\"))\n            })?;\n            let consumed = origin - remain.len();\n            Ok((consumed, frame))\n        }\n\n        impl<T: bytes::BufMut> super::io::WriteFrame<UnknownCustomFrame> for T {\n            fn put_frame(&mut self, frame: &UnknownCustomFrame) {\n                self.put_varint(&0xff_u32.into());\n                self.put_varint(&frame.seq_num);\n                self.put_varint(&frame.tire);\n                self.put_varint(&frame.nat_type);\n            }\n        }\n\n        let mut buf = bytes::BytesMut::new();\n        let unknown_custom_frame = UnknownCustomFrame {\n            seq_num: VarInt::from_u32(0x01),\n            tire: VarInt::from_u32(0x02),\n            nat_type: VarInt::from_u32(0x03),\n        };\n        buf.put_frame(&unknown_custom_frame);\n        buf.put_frame(&PaddingFrame);\n        buf.put_frame(&PaddingFrame);\n        buf.put_frame(&unknown_custom_frame);\n        buf.put_varint(&0xfe_u32.into());\n        let mut padding_count = 0;\n        let mut unknown_custom_count = 0;\n        let mut reader = FrameReader::new(buf.freeze(), Type::Short(OneRtt(0.into())));\n        loop {\n            match reader.next() {\n                Some(Ok((frame, typ))) => {\n        
            assert!(matches!(frame, Frame::Padding(_)));\n                    assert_eq!(typ, FrameType::Padding);\n                    padding_count += 1;\n                }\n                Some(Err(_e)) => {\n                    // Parse the unknown custom frame manually.\n                    if let Ok((consum, frame)) = parse_unknown_custom_frame(&reader) {\n                        reader.advance(consum);\n                        assert_eq!(frame, unknown_custom_frame);\n                        unknown_custom_count += 1;\n                    } else {\n                        reader.clear();\n                    }\n                }\n                None => break,\n            };\n        }\n        assert_eq!(padding_count, 2);\n        assert_eq!(unknown_custom_count, 2);\n    }\n\n    #[test]\n    fn test_frame_reader_stops_at_unknown_custom_frame() {\n        let mut buf = bytes::BytesMut::new();\n        buf.put_frame(&PaddingFrame);\n        buf.put_frame(&PaddingFrame);\n        // error frame type\n        buf.put_varint(&0xfe_u32.into());\n        buf.put_frame(&PaddingFrame);\n\n        let mut padding_count = 0;\n        let _ = FrameReader::new(buf.freeze(), Type::Short(OneRtt(0.into()))).try_fold(\n            PacketContent::default(),\n            |packet_contains, frame| {\n                let (frame, frame_type) = frame?;\n\n                assert!(matches!(frame, Frame::Padding(_)));\n                assert_eq!(frame_type, FrameType::Padding);\n                padding_count += 1;\n                Result::<_, Error>::Ok(packet_contains)\n            },\n        );\n\n        assert_eq!(padding_count, 2);\n    }\n}\n"
  },
  {
    "path": "qbase/src/handshake.rs",
    "content": "use std::sync::{\n    Arc,\n    atomic::{AtomicBool, Ordering},\n};\n\nuse crate::{\n    error::{Error, ErrorKind, QuicError},\n    frame::{\n        HandshakeDoneFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    role::Role,\n};\n\n/// The completion flag for the client handshake.\n///\n/// The client considers the handshake complete only after\n/// receiving the [`HandshakeDoneFrame`] from the server.\n/// In the QUIC protocol, there are no tasks that specifically\n/// require waiting for the client handshake to complete.\n/// Instead, it simply queries the handshake status.\n#[derive(Debug, Default, Clone)]\npub struct ClientHandshake {\n    done: Arc<AtomicBool>,\n}\n\nimpl ClientHandshake {\n    /// Check if the client handshake is complete.\n    pub fn is_handshake_done(&self) -> bool {\n        self.done.load(Ordering::Acquire)\n    }\n\n    /// Receive the HANDSHAKE_DONE frame.\n    ///\n    /// Once the client receives the HANDSHAKE_DONE frame,\n    /// it marks the completion of the client handshake.\n    ///\n    /// Return whether it is the first time to receive the HANDSHAKE_DONE frame.\n    pub fn recv_handshake_done_frame(&self, _frame: HandshakeDoneFrame) -> bool {\n        !self.done.swap(true, Ordering::AcqRel)\n    }\n}\n\n/// Server's handshake status.\n///\n/// - `T` is responsible for reliably sending [`HandshakeDoneFrame`] to the client.\n///   It can be a channel, a queue, or a buffer. 
In any case, it must be able to send the\n///   [`HandshakeDoneFrame`] to the client.\n///\n/// The server considers the handshake complete only after receiving\n/// the [finished message](https://www.rfc-editor.org/rfc/rfc8446.html#section-4.4.4)\n/// from the client during the TLS handshake process.\n/// If the [finished message](https://www.rfc-editor.org/rfc/rfc8446.html#section-4.4.4)\n/// from the TLS handshake is not received,\n/// the server can also consider the handshake complete upon receiving and\n/// successfully decrypting the client's 1-RTT packet.\n/// Once the server's handshake is complete, the server will send a [`HandshakeDoneFrame`] immediately.\n#[derive(Debug, Clone)]\npub struct ServerHandshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    is_done: Arc<AtomicBool>,\n    output: T,\n}\n\nimpl<T> ServerHandshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    /// Create a new server handshake signal.\n    ///\n    /// The `output` is responsible for sending the [`HandshakeDoneFrame`] to the client,\n    /// see [`ServerHandshake`].\n    pub fn new(output: T) -> Self {\n        ServerHandshake {\n            is_done: Arc::new(AtomicBool::new(false)),\n            output,\n        }\n    }\n\n    /// Check if the server handshake is complete.\n    pub fn is_handshake_done(&self) -> bool {\n        self.is_done.load(Ordering::Acquire)\n    }\n\n    /// Actively set the server's handshake status to complete.\n    ///\n    /// Call this method when the TLS handshake\n    /// [finished message](https://www.rfc-editor.org/rfc/rfc8446.html#section-4.4.4) is received.\n    /// If the TLS handshake completion message is not received,\n    /// receiving and successfully decrypting the client's 1-RTT packet\n    /// is also considered handshake completion.\n    /// Servers MUST NOT send a [`HandshakeDoneFrame`] before completing the handshake,\n    /// and once the server handshake is complete,\n    /// servers should send 
the [`HandshakeDoneFrame`] immediately.\n    /// See [`ServerHandshake`].\n    ///\n    /// This method returns [`true`] only the first time the handshake status is set to complete.\n    pub fn done(&self) -> bool {\n        if self\n            .is_done\n            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)\n            .is_ok()\n        {\n            self.output.send_frame([HandshakeDoneFrame]);\n            true\n        } else {\n            false\n        }\n    }\n}\n\n/// A merged handshake state that can be used by both the client and the server.\n///\n/// For convenience, a unified [`Handshake`] should be used,\n/// which will internally choose the corresponding behavior based on the role.\n#[derive(Debug, Clone)]\npub enum Handshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    /// The client's handshake state if the endpoint is a client.\n    Client(ClientHandshake),\n    /// The server's handshake state if the endpoint is a server.\n    Server(ServerHandshake<T>),\n}\n\nimpl<T> Handshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    /// Create a new handshake state, based on the role.\n    pub fn new(role: Role, output: T) -> Self {\n        match role {\n            Role::Client => Handshake::Client(ClientHandshake::default()),\n            Role::Server => Handshake::Server(ServerHandshake::new(output)),\n        }\n    }\n\n    /// Create a new client handshake state.\n    pub fn new_client() -> Self {\n        Handshake::Client(ClientHandshake::default())\n    }\n\n    /// Create a new server handshake state.\n    /// The `output` is responsible for sending the [`HandshakeDoneFrame`] to the client,\n    /// see [`ServerHandshake::new`].\n    pub fn new_server(output: T) -> Self {\n        Handshake::Server(ServerHandshake::new(output))\n    }\n\n    /// Check if the handshake is complete.\n    pub fn is_handshake_done(&self) -> bool {\n        match self {\n            Handshake::Client(h) 
=> h.is_handshake_done(),\n            Handshake::Server(h) => h.is_handshake_done(),\n        }\n    }\n\n    /// Set the handshake status to complete (for the server).\n    ///\n    /// For a client, this method does nothing and always returns [`false`].\n    ///\n    /// This method returns [`true`] only the first time the handshake status is set to complete.\n    pub fn done(&self) -> bool {\n        match self {\n            Handshake::Client(..) => false, /* for client, do nothing */\n            Handshake::Server(h) => h.done(),\n        }\n    }\n\n    /// Return the role of this handshake signal.\n    pub fn role(&self) -> Role {\n        match self {\n            Handshake::Client(_) => Role::Client,\n            Handshake::Server(_) => Role::Server,\n        }\n    }\n}\n\nimpl<T> ReceiveFrame<HandshakeDoneFrame> for Handshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    type Output = bool;\n\n    /// Receive the [`HandshakeDoneFrame`].\n    ///\n    /// A [`HandshakeDoneFrame`] can only be received by the client.\n    /// A server MUST treat receipt of a [`HandshakeDoneFrame`]\n    /// as a connection error of type PROTOCOL_VIOLATION.\n    /// See [section 19.20](https://www.rfc-editor.org/rfc/rfc9000.html#section-19.20)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html).\n    ///\n    /// Returns whether it is the first time the HANDSHAKE_DONE frame has been received (for the client).\n    fn recv_frame(&self, frame: HandshakeDoneFrame) -> Result<bool, Error> {\n        match self {\n            Handshake::Client(h) => Ok(h.recv_handshake_done_frame(frame)),\n            _ => Err(QuicError::with_default_fty(\n                ErrorKind::ProtocolViolation,\n                \"Server received a HANDSHAKE_DONE frame\",\n            )\n            .into()),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use derive_more::Deref;\n\n    use super::*;\n    use crate::{\n        error::ErrorKind,\n        frame::io::{ReceiveFrame, SendFrame},\n     
   util::ArcAsyncDeque,\n    };\n\n    #[derive(Debug, Default, Clone, Deref)]\n    struct HandshakeDoneFrameTx(ArcAsyncDeque<HandshakeDoneFrame>);\n\n    impl SendFrame<HandshakeDoneFrame> for HandshakeDoneFrameTx {\n        fn send_frame<I: IntoIterator<Item = HandshakeDoneFrame>>(&self, iter: I) {\n            (&self.0).extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_client_handshake() {\n        let handshake = Handshake::<HandshakeDoneFrameTx>::new_client();\n        assert!(!handshake.is_handshake_done());\n\n        let ret = handshake.recv_frame(HandshakeDoneFrame);\n        assert!(ret.is_ok());\n        assert!(handshake.is_handshake_done());\n    }\n\n    #[test]\n    fn test_client_handshake_done() {\n        let handshake = Handshake::<HandshakeDoneFrameTx>::new_client();\n        assert!(!handshake.is_handshake_done());\n\n        assert!(handshake.recv_frame(HandshakeDoneFrame).unwrap());\n        assert!(handshake.is_handshake_done());\n\n        // recv_frame will only return `true` once when handshake first done\n        assert!(!handshake.recv_frame(HandshakeDoneFrame).unwrap());\n        assert!(handshake.is_handshake_done());\n    }\n\n    #[test]\n    fn test_server_handshake() {\n        let handshake = Handshake::new_server(HandshakeDoneFrameTx::default());\n        assert!(!handshake.is_handshake_done());\n\n        assert!(handshake.done());\n        assert!(handshake.is_handshake_done());\n\n        // same as last test\n        assert!(!handshake.done());\n        assert!(handshake.is_handshake_done());\n    }\n\n    #[test]\n    fn test_server_recv_handshake_done_frame() {\n        let handshake = Handshake::new_server(HandshakeDoneFrameTx::default());\n        assert!(!handshake.is_handshake_done());\n\n        let ret = handshake.recv_frame(HandshakeDoneFrame);\n        assert_eq!(\n            ret,\n            Err(QuicError::with_default_fty(\n                ErrorKind::ProtocolViolation,\n                \"Server 
received a HANDSHAKE_DONE frame\",\n            )\n            .into())\n        );\n    }\n\n    #[test]\n    fn test_server_send_handshake_done_frame() {\n        let handshake = ServerHandshake::new(HandshakeDoneFrameTx::default());\n        handshake.done();\n        assert!(handshake.is_handshake_done());\n        assert_eq!(handshake.output.len(), 1);\n    }\n}\n"
  },
  {
    "path": "qbase/src/lib.rs",
"content": "#![allow(clippy::all)]\n//! # The QUIC base library\n//!\n//! The `qbase` library defines the necessary basic structures in the QUIC protocol,\n//! including connection IDs, stream IDs, frames, packets, keys, parameters, error codes, etc.\n//!\n//! Additionally, based on these basic structures,\n//! it defines components for various mechanisms in QUIC,\n//! including flow control, handshake, tokens, stream ID management, connection ID management, etc.\n//!\n//! Finally, the `qbase` module also defines some utility functions\n//! for handling common data structures in the QUIC protocol.\n//!\nuse std::{\n    ops::{Index, IndexMut},\n    pin::Pin,\n    sync::{Arc, Mutex},\n    task::{Context, Poll, Waker},\n};\n\nuse futures::FutureExt;\nuse thiserror::Error;\n\n/// Operations about QUIC connection IDs.\npub mod cid;\n/// [QUIC errors](https://www.rfc-editor.org/rfc/rfc9000.html#name-error-codes).\npub mod error;\n/// QUIC connection-level flow control.\npub mod flow;\n/// QUIC frames and their codec.\npub mod frame;\n/// Handshake signal for QUIC connections.\npub mod handshake;\n/// QUIC connection metrics for tracking data volumes.\npub mod metric;\n/// Endpoint address and Pathway.\npub mod net;\n/// QUIC packets and their codec.\npub mod packet;\n/// [QUIC transport parameters and their codec](https://www.rfc-editor.org/rfc/rfc9000.html#name-transport-parameter-encodin).\npub mod param;\n/// QUIC client and server roles.\npub mod role;\n/// Stream id types and controllers for different roles and different directions.\npub mod sid;\n/// Max idle timer and defer idle timer.\npub mod time;\n/// Issuing, storing and verifying token operations.\npub mod token;\n/// Utilities for common data structures.\npub mod util;\n/// [Variable-length integers](https://www.rfc-editor.org/rfc/rfc9000.html#name-variable-length-integer-enc).\npub mod varint;\n\n/// The epoch of sending, usually seen as the index of 
spaces.\n#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]\npub enum Epoch {\n    Initial = 0,\n    Handshake = 1,\n    Data = 2,\n}\n\npub trait GetEpoch {\n    fn epoch(&self) -> Epoch;\n}\n\nimpl Epoch {\n    pub const EPOCHS: [Epoch; 3] = [Epoch::Initial, Epoch::Handshake, Epoch::Data];\n    /// An iterator over the epoch of each space.\n    ///\n    /// Equivalent to `Epoch::EPOCHS.iter()`\n    pub fn iter() -> std::slice::Iter<'static, Epoch> {\n        Self::EPOCHS.iter()\n    }\n\n    /// The number of epochs.\n    pub const fn count() -> usize {\n        Self::EPOCHS.len()\n    }\n}\n\nimpl<T> Index<Epoch> for [T]\nwhere\n    T: Sized,\n{\n    type Output = T;\n\n    fn index(&self, index: Epoch) -> &Self::Output {\n        self.index(index as usize)\n    }\n}\n\nimpl<T> IndexMut<Epoch> for [T]\nwhere\n    T: Sized,\n{\n    fn index_mut(&mut self, index: Epoch) -> &mut Self::Output {\n        self.index_mut(index as usize)\n    }\n}\n\n#[derive(Debug, Default)]\npub enum Receiving<F> {\n    #[default]\n    Pending,\n    Waiting(Waker),\n    Rcvd(F),\n    Read,\n    Reset,\n}\n\nimpl<F> Receiving<F> {\n    fn recv_frame(&mut self, frame: F) {\n        match std::mem::take(self) {\n            Self::Pending => {\n                *self = Self::Rcvd(frame);\n            }\n            Self::Waiting(waker) => {\n                waker.wake();\n                *self = Self::Rcvd(frame);\n            }\n            _ => (),\n        }\n    }\n\n    fn reset(&mut self) {\n        if let Self::Waiting(waker) = std::mem::replace(self, Self::Reset) {\n            waker.wake();\n        }\n    }\n}\n\n#[derive(Debug, Error)]\n#[error(\"Reset\")]\npub struct ResetError;\n\n#[derive(Debug, Default, Clone)]\npub struct ArcReceiving<F>(Arc<Mutex<Receiving<F>>>);\n\nimpl<F> ArcReceiving<F> {\n    pub fn reset(&self) {\n        self.0.lock().unwrap().reset();\n    }\n}\n\nimpl<F: Unpin> Future for ArcReceiving<F> {\n    type Output = Result<Option<F>, 
ResetError>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.0.lock().unwrap().poll_unpin(cx)\n    }\n}\n\n#[cfg(test)]\nmod tests {}\n"
  },
  {
    "path": "qbase/src/metric.rs",
"content": "use std::sync::{\n    Arc,\n    atomic::{AtomicU64, Ordering},\n};\n\n/// Metrics for tracking data volumes in a QUIC connection.\n///\n/// This struct provides atomic counters to track:\n/// - Data written by application but not yet sent\n/// - Data sent but not yet acknowledged\n/// - Data sent and acknowledged\n#[derive(Debug, Default)]\npub struct ConnectionMetrics {\n    /// Data written by application layer but not yet sent by transport layer\n    pending_bytes: AtomicU64,\n    /// Data sent by transport layer but not yet acknowledged by peer\n    inflight_bytes: AtomicU64,\n    /// Data sent and acknowledged by peer\n    acked_bytes: AtomicU64,\n}\n\nimpl ConnectionMetrics {\n    /// Increments the `pending_bytes` counter when the application writes data.\n    ///\n    /// Called when application layer writes data to a stream.\n    pub fn new_pending(&self, bytes: u64) {\n        self.pending_bytes.fetch_add(bytes, Ordering::Relaxed);\n    }\n\n    /// Updates counters when transport layer sends new data.\n    ///\n    /// Increments `inflight_bytes` and decrements `pending_bytes`.\n    /// Called when transport layer sends new stream data.\n    pub fn on_data_sent(&self, bytes: u64) {\n        self.inflight_bytes.fetch_add(bytes, Ordering::Relaxed);\n        self.pending_bytes.fetch_sub(bytes, Ordering::Relaxed);\n    }\n\n    /// Updates counters when data is acknowledged by peer.\n    ///\n    /// Increments `acked_bytes` and decrements `inflight_bytes`.\n    /// Called when receiving acknowledgment for stream data.\n    pub fn on_data_acked(&self, bytes: u64) {\n        self.acked_bytes.fetch_add(bytes, Ordering::Relaxed);\n        self.inflight_bytes.fetch_sub(bytes, Ordering::Relaxed);\n    }\n\n    /// Gets the current amount of data pending to be sent.\n    pub fn pending_bytes(&self) -> u64 {\n        self.pending_bytes.load(Ordering::Relaxed)\n    }\n\n    /// Gets the current amount of data sent but not acknowledged.\n    
pub fn inflight_bytes(&self) -> u64 {\n        self.inflight_bytes.load(Ordering::Relaxed)\n    }\n\n    /// Gets the total amount of data sent and acknowledged.\n    pub fn acked_bytes(&self) -> u64 {\n        self.acked_bytes.load(Ordering::Relaxed)\n    }\n}\n\n/// Arc-wrapped ConnectionMetrics for shared ownership across the connection.\npub type ArcConnectionMetrics = Arc<ConnectionMetrics>;\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_connection_metrics_new() {\n        let metrics = ConnectionMetrics::default();\n        assert_eq!(metrics.pending_bytes(), 0);\n        assert_eq!(metrics.inflight_bytes(), 0);\n        assert_eq!(metrics.acked_bytes(), 0);\n    }\n\n    #[test]\n    fn test_add_pending_send() {\n        let metrics = ConnectionMetrics::default();\n        metrics.new_pending(100);\n        assert_eq!(metrics.pending_bytes(), 100);\n        metrics.new_pending(50);\n        assert_eq!(metrics.pending_bytes(), 150);\n    }\n\n    #[test]\n    fn test_on_data_sent() {\n        let metrics = ConnectionMetrics::default();\n        metrics.new_pending(200);\n        metrics.on_data_sent(150);\n        assert_eq!(metrics.pending_bytes(), 50);\n        assert_eq!(metrics.inflight_bytes(), 150);\n    }\n\n    #[test]\n    fn test_on_data_acked() {\n        let metrics = ConnectionMetrics::default();\n        metrics.new_pending(200);\n        metrics.on_data_sent(150);\n        metrics.on_data_acked(100);\n        assert_eq!(metrics.pending_bytes(), 50);\n        assert_eq!(metrics.inflight_bytes(), 50);\n        assert_eq!(metrics.acked_bytes(), 100);\n    }\n\n    #[test]\n    fn test_full_data_flow() {\n        let metrics = ConnectionMetrics::default();\n\n        // Application writes 1000 bytes\n        metrics.new_pending(1000);\n        assert_eq!(metrics.pending_bytes(), 1000);\n        assert_eq!(metrics.inflight_bytes(), 0);\n        assert_eq!(metrics.acked_bytes(), 0);\n\n        // Transport layer sends 600 
bytes\n        metrics.on_data_sent(600);\n        assert_eq!(metrics.pending_bytes(), 400);\n        assert_eq!(metrics.inflight_bytes(), 600);\n        assert_eq!(metrics.acked_bytes(), 0);\n\n        // Peer acknowledges 300 bytes\n        metrics.on_data_acked(300);\n        assert_eq!(metrics.pending_bytes(), 400);\n        assert_eq!(metrics.inflight_bytes(), 300);\n        assert_eq!(metrics.acked_bytes(), 300);\n\n        // Transport layer sends remaining 400 bytes\n        metrics.on_data_sent(400);\n        assert_eq!(metrics.pending_bytes(), 0);\n        assert_eq!(metrics.inflight_bytes(), 700);\n        assert_eq!(metrics.acked_bytes(), 300);\n\n        // Peer acknowledges all remaining data\n        metrics.on_data_acked(700);\n        assert_eq!(metrics.pending_bytes(), 0);\n        assert_eq!(metrics.inflight_bytes(), 0);\n        assert_eq!(metrics.acked_bytes(), 1000);\n    }\n\n    #[test]\n    fn test_arc_connection_metrics() {\n        let metrics = Arc::new(ConnectionMetrics::default());\n        let metrics_clone = Arc::clone(&metrics);\n\n        metrics.new_pending(100);\n        assert_eq!(metrics_clone.pending_bytes(), 100);\n\n        metrics_clone.on_data_sent(100);\n        assert_eq!(metrics.inflight_bytes(), 100);\n        assert_eq!(metrics.pending_bytes(), 0);\n    }\n}\n"
  },
  {
    "path": "qbase/src/net/addr.rs",
    "content": "use std::{\n    fmt::Display,\n    net::{AddrParseError, SocketAddr},\n    ops::Deref,\n    str::FromStr,\n};\n\nuse bytes::BufMut;\nuse serde::{Deserialize, Serialize};\n\nuse crate::net::{Family, be_socket_addr};\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub enum EndpointAddr {\n    Direct {\n        addr: SocketAddr,\n    },\n    Agent {\n        agent: SocketAddr,\n        outer: SocketAddr,\n    },\n}\n\nimpl EndpointAddr {\n    pub fn direct(addr: SocketAddr) -> Self {\n        EndpointAddr::Direct { addr }\n    }\n\n    pub fn with_agent(agent: SocketAddr, outer: SocketAddr) -> Self {\n        EndpointAddr::Agent { agent, outer }\n    }\n\n    /// Returns the outer addr of this EndpointAddr\n    ///\n    /// Note: Before successful hole punching with this Endpoint, packets should be sent to the addr\n    /// returned by deref() to establish communication. Once hole punching is successful or about to\n    /// begin, use the addr returned by this function.\n    pub fn addr(&self) -> SocketAddr {\n        match self {\n            EndpointAddr::Direct { addr } => *addr,\n            EndpointAddr::Agent { outer, .. 
} => *outer,\n        }\n    }\n\n    pub fn encoding_size(&self) -> usize {\n        match self {\n            EndpointAddr::Direct {\n                addr: SocketAddr::V4(_),\n            } => 2 + 4,\n            EndpointAddr::Direct {\n                addr: SocketAddr::V6(_),\n            } => 2 + 16,\n            EndpointAddr::Agent {\n                agent: SocketAddr::V4(_),\n                outer: SocketAddr::V4(_),\n            } => 2 + 4 + 2 + 4,\n            EndpointAddr::Agent {\n                agent: SocketAddr::V6(_),\n                outer: SocketAddr::V6(_),\n            } => 2 + 16 + 2 + 16,\n            _ => unimplemented!(\"mixed IPv4 and IPv6 endpoint addresses are not supported\"),\n        }\n    }\n}\n\npub trait WriteEndpointAddr {\n    fn put_endpoint_addr(&mut self, endpoint: EndpointAddr);\n}\n\nimpl<T: BufMut> WriteEndpointAddr for T {\n    fn put_endpoint_addr(&mut self, endpoint: EndpointAddr) {\n        use crate::net::WriteSocketAddr;\n        match endpoint {\n            EndpointAddr::Direct { addr } => self.put_socket_addr(&addr),\n            EndpointAddr::Agent { agent, outer } => {\n                self.put_socket_addr(&agent);\n                self.put_socket_addr(&outer);\n            }\n        }\n    }\n}\n\npub fn be_endpoint_addr(\n    input: &[u8],\n    relay: u8,\n    family: Family,\n) -> nom::IResult<&[u8], EndpointAddr> {\n    if relay != 0 {\n        let (remain, agent) = be_socket_addr(input, family)?;\n        let (remain, outer) = be_socket_addr(remain, family)?;\n        Ok((remain, EndpointAddr::with_agent(agent, outer)))\n    } else {\n        let (remain, addr) = be_socket_addr(input, family)?;\n        Ok((remain, EndpointAddr::direct(addr)))\n    }\n}\n\nimpl Display for EndpointAddr {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            EndpointAddr::Direct { addr } => write!(f, \"{addr}\"),\n            
EndpointAddr::Agent { agent, outer } => write!(f, \"{agent}-{outer}\"),\n        }\n    }\n}\n\nimpl Deref for EndpointAddr {\n    type Target = SocketAddr;\n\n    fn deref(&self) -> &Self::Target {\n        match self {\n            EndpointAddr::Direct { addr } => addr,\n            EndpointAddr::Agent { agent, .. } => agent,\n        }\n    }\n}\n\nimpl FromStr for EndpointAddr {\n    type Err = AddrParseError;\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        if let Some((first, second)) = s.split_once(\"-\") {\n            // Agent format: \"1.12.124.56:1234-202.106.68.43:6080\"\n            let agent = first.trim().parse()?;\n            let outer = second.trim().parse()?;\n            Ok(EndpointAddr::with_agent(agent, outer))\n        } else {\n            // Direct format: \"1.12.124.56:1234\"\n            let addr = s.trim().parse()?;\n            Ok(EndpointAddr::direct(addr))\n        }\n    }\n}\n\nimpl From<SocketAddr> for EndpointAddr {\n    fn from(addr: SocketAddr) -> Self {\n        EndpointAddr::direct(addr)\n    }\n}\n\nimpl From<(SocketAddr, SocketAddr)> for EndpointAddr {\n    fn from((agent, outer): (SocketAddr, SocketAddr)) -> Self {\n        EndpointAddr::with_agent(agent, outer)\n    }\n}\n"
  },
  {
    "path": "qbase/src/net/nat.rs",
"content": "use std::io;\n\nuse crate::varint::VarInt;\n\nbitflags::bitflags! {\n    #[derive(Debug, Clone, Copy, PartialEq, Eq)]\n    pub struct NetFeature: u8 {\n        const Blocked = 0x01;\n        const Public = 0x02;\n        const Restricted = 0x04;\n        const PortRestricted = 0x08;\n        const Symmetric = 0x10;\n        const Dynamic = 0x20;\n    }\n}\n\n#[derive(Debug, Clone, Copy, Eq, PartialEq, Hash)]\npub enum NatType {\n    Blocked = 0x00,\n    FullCone = 0x01,\n    RestrictedCone = 0x02,\n    RestrictedPort = 0x03,\n    Symmetric = 0x04,\n    Dynamic = 0x05,\n}\n\nimpl From<NatType> for VarInt {\n    fn from(nat_type: NatType) -> Self {\n        VarInt::from(nat_type as u8)\n    }\n}\n\nimpl TryFrom<u8> for NatType {\n    type Error = io::Error;\n\n    fn try_from(value: u8) -> Result<Self, Self::Error> {\n        match value {\n            0x00 => Ok(NatType::Blocked),\n            0x01 => Ok(NatType::FullCone),\n            0x02 => Ok(NatType::RestrictedCone),\n            0x03 => Ok(NatType::RestrictedPort),\n            0x04 => Ok(NatType::Symmetric),\n            0x05 => Ok(NatType::Dynamic),\n            _ => Err(io::Error::new(\n                io::ErrorKind::InvalidInput,\n                \"Invalid value for NatType\",\n            )),\n        }\n    }\n}\n\nimpl TryFrom<VarInt> for NatType {\n    type Error = io::Error;\n\n    fn try_from(value: VarInt) -> Result<Self, Self::Error> {\n        // Reject values that do not fit in a u8 instead of silently truncating.\n        let value = u8::try_from(value.into_u64()).map_err(|_| {\n            io::Error::new(io::ErrorKind::InvalidInput, \"Invalid value for NatType\")\n        })?;\n        Self::try_from(value)\n    }\n}\n\nimpl From<NetFeature> for NatType {\n    fn from(value: NetFeature) -> Self {\n        if value.contains(NetFeature::Blocked) {\n            NatType::Blocked\n        } else if value.contains(NetFeature::Symmetric) {\n            NatType::Symmetric\n        } else if value.contains(NetFeature::Dynamic) {\n            NatType::Dynamic\n        } else if value.contains(NetFeature::PortRestricted) {\n            NatType::RestrictedPort\n        } else if value.contains(NetFeature::Restricted) 
{\n            NatType::RestrictedCone\n        } else {\n            NatType::FullCone\n        }\n    }\n}\n"
  },
  {
    "path": "qbase/src/net/route.rs",
    "content": "use std::{fmt::Display, net::SocketAddr};\n\nuse bytes::BufMut;\nuse derive_more::{Deref, DerefMut};\nuse nom::number::streaming::be_u8;\nuse serde::{Deserialize, Serialize};\n\nuse crate::{\n    frame::EncodeSize,\n    net::{Family, addr::EndpointAddr, be_socket_addr},\n};\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash, PartialOrd, Ord)]\npub struct Pathway<E = EndpointAddr> {\n    local: E,\n    remote: E,\n}\n\nimpl<E> Pathway<E> {\n    #[inline]\n    pub fn new(local: E, remote: E) -> Self {\n        Self { local, remote }\n    }\n\n    #[inline]\n    pub fn local(&self) -> E\n    where\n        E: Clone,\n    {\n        self.local.clone()\n    }\n\n    #[inline]\n    pub fn remote(&self) -> E\n    where\n        E: Clone,\n    {\n        self.remote.clone()\n    }\n\n    #[inline]\n    pub fn map<E1>(self, mut f: impl FnMut(E) -> E1) -> Pathway<E1> {\n        Pathway {\n            local: f(self.local),\n            remote: f(self.remote),\n        }\n    }\n\n    #[inline]\n    pub fn flip(self) -> Self {\n        Self {\n            local: self.remote,\n            remote: self.local,\n        }\n    }\n}\n\nimpl<E: Display> Display for Pathway<E> {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}---{}\", self.local, self.remote)\n    }\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash)]\npub struct Link {\n    pub src: SocketAddr,\n    pub dst: SocketAddr,\n}\n\nimpl Display for Link {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}<->{}\", self.src, self.dst)\n    }\n}\n\npub fn be_link(input: &[u8]) -> nom::IResult<&[u8], Link> {\n    let (remain, family) = be_u8(input)?;\n    let family = match family {\n        0 => Family::V4,\n        1 => Family::V6,\n        _ => {\n            return Err(nom::Err::Error(nom::error::Error::new(\n                input,\n                
nom::error::ErrorKind::Alt,\n            )));\n        }\n    };\n    let (remain, src) = be_socket_addr(remain, family)?;\n    let (remain, dst) = be_socket_addr(remain, family)?;\n    Ok((remain, Link { src, dst }))\n}\n\npub trait WriteLink {\n    fn put_link(&mut self, link: &Link);\n}\n\nimpl<T: BufMut> WriteLink for T {\n    fn put_link(&mut self, link: &Link) {\n        use crate::net::WriteSocketAddr;\n        self.put_u8(link.src.is_ipv6() as u8);\n        self.put_socket_addr(&link.src);\n        self.put_socket_addr(&link.dst);\n    }\n}\n\nimpl EncodeSize for Link {\n    fn max_encoding_size(&self) -> usize {\n        1 + self.src.max_encoding_size() + self.dst.max_encoding_size()\n    }\n\n    fn encoding_size(&self) -> usize {\n        1 + self.src.encoding_size() + self.dst.encoding_size()\n    }\n}\n\nimpl Link {\n    #[inline]\n    pub fn new(src: SocketAddr, dst: SocketAddr) -> Self {\n        Self { src, dst }\n    }\n\n    #[inline]\n    pub fn flip(self) -> Self {\n        Self {\n            src: self.dst,\n            dst: self.src,\n        }\n    }\n}\n\nimpl<E: From<SocketAddr>> From<Link> for Pathway<E> {\n    fn from(link: Link) -> Self {\n        Pathway::new(E::from(link.src), E::from(link.dst))\n    }\n}\n\n#[derive(Clone, Copy, Debug, Deref, DerefMut)]\npub struct Line {\n    #[deref]\n    #[deref_mut]\n    pub link: Link,\n    pub ttl: u8,\n    // Explicit congestion notification (ECN)\n    pub ecn: Option<u8>,\n    // packet segment size\n    pub seg_size: u16,\n}\n\nimpl Line {\n    pub const DEFAULT_TTL: u8 = 64;\n\n    pub fn new(link: Link, ttl: u8, ecn: Option<u8>, seg_size: u16) -> Self {\n        Self {\n            link,\n            ttl,\n            ecn,\n            seg_size,\n        }\n    }\n}\n\nimpl Default for Line {\n    fn default() -> Self {\n        Self {\n            link: Link::new(\n                SocketAddr::from(([0, 0, 0, 0], 0)),\n                SocketAddr::from(([0, 0, 0, 0], 0)),\n            ),\n   
         ttl: Self::DEFAULT_TTL,\n            ecn: None,\n            seg_size: 0,\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy, Deref, DerefMut)]\npub struct Route {\n    pub pathway: Pathway,\n    #[deref]\n    #[deref_mut]\n    pub line: Line,\n}\n\nimpl Route {\n    pub fn new(pathway: Pathway, line: Line) -> Self {\n        Self { pathway, line }\n    }\n\n    /// Create an empty route, used as a placeholder header when receiving packets.\n    pub fn empty() -> Self {\n        let src = SocketAddr::from(([0, 0, 0, 0], 0));\n        let dst = SocketAddr::from(([0, 0, 0, 0], 0));\n        let link = Link::new(src, dst);\n        Self::new(link.into(), Line::default())\n    }\n\n    pub fn pathway(&self) -> Pathway {\n        self.pathway\n    }\n\n    pub fn link(&self) -> Link {\n        self.line.link\n    }\n\n    pub fn ttl(&self) -> u8 {\n        self.line.ttl\n    }\n\n    pub fn ecn(&self) -> Option<u8> {\n        self.line.ecn\n    }\n\n    pub fn seg_size(&self) -> u16 {\n        self.line.seg_size\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_endpoint_addr_from_str() {\n        // Test direct format\n        let addr = \"127.0.0.1:8080\".parse::<EndpointAddr>().unwrap();\n        assert!(matches!(addr, EndpointAddr::Direct { .. }));\n\n        // Test agent format\n        let addr = \"127.0.0.1:8080-192.168.1.1:9000\"\n            .parse::<EndpointAddr>()\n            .unwrap();\n        assert!(matches!(addr, EndpointAddr::Agent { .. }));\n\n        // Test with whitespace\n        let addr = \"  127.0.0.1:8080  -  192.168.1.1:9000  \"\n            .parse::<EndpointAddr>()\n            .unwrap();\n        assert!(matches!(addr, EndpointAddr::Agent { .. }));\n\n        // Test invalid format\n        assert!(\"invalid\".parse::<EndpointAddr>().is_err());\n    }\n}\n"
  },
  {
    "path": "qbase/src/net/tx.rs",
    "content": "use std::{\n    collections::BTreeMap,\n    future::poll_fn,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n};\n\nuse super::route::Pathway;\n\ntype SignalsBits = u16;\n\nbitflags::bitflags! {\n    #[derive(Debug, Clone, Copy, PartialEq, Eq)]\n    pub struct Signals: SignalsBits {\n        const CONGESTION    = 1 << 0; // cc\n        const FLOW_CONTROL  = 1 << 1; // flow\n        const TRANSPORT     = 1 << 2; // ack/retran/reliable....\n        const WRITTEN       = 1 << 3; // fresh stream\n        const CONNECTION_ID = 1 << 4; // cid\n        const CREDIT        = 1 << 5; // aa\n        const KEYS          = 1 << 6; // key(no waker in SendWaker)\n        const PING          = 1 << 7; // packet which contains ping frames only\n        const TLS_FIN       = 1 << 8; // TLS handshake is required to send and receive 1rtt data\n        const PATH_VALIDATE = 1 << 9; // path validated\n    }\n}\n\n#[derive(Default, Debug)]\npub struct SendWaker {\n    waker: Option<Waker>,\n    // A bit of 1 in Signals means that condition is already satisfied; 0 means the condition still needs to be met\n    state: SignalsBits,\n}\n\nimpl SendWaker {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    const WAITING: SignalsBits = 0;\n\n    #[inline]\n    pub fn poll_wait_for(&mut self, cx: &mut Context, signals: Signals) -> Poll<()> {\n        if self.state & signals.bits() == 0 {\n            self.state = !signals.bits();\n            match self.waker.as_ref() {\n                Some(old_waker) if old_waker.will_wake(cx.waker()) => {}\n                _ => self.waker = Some(cx.waker().clone()),\n            }\n            Poll::Pending\n        } else {\n            self.state = Self::WAITING;\n            Poll::Ready(())\n        }\n    }\n\n    #[inline]\n    fn wake_by(&mut self, signals: Signals) {\n        if self.state | signals.bits() != self.state {\n            if let Some(waker) = self.waker.as_ref() {\n                waker.wake_by_ref();\n            }\n        }\n        self.state |= 
signals.bits();\n    }\n}\n\nunsafe impl Send for SendWaker {}\nunsafe impl Sync for SendWaker {}\n\n#[derive(Debug, Default, Clone)]\npub struct ArcSendWaker(Arc<Mutex<SendWaker>>);\n\nimpl ArcSendWaker {\n    #[inline]\n    pub fn new() -> Self {\n        Self(Arc::new(Mutex::new(SendWaker::new())))\n    }\n\n    #[inline]\n    pub async fn wait_for(&self, signals: Signals) {\n        poll_fn(|cx| self.0.lock().unwrap().poll_wait_for(cx, signals)).await\n    }\n\n    #[inline]\n    pub fn wake_by(&self, signals: Signals) {\n        self.0.lock().unwrap().wake_by(signals);\n    }\n}\n\n/// connection level send wakers\n#[derive(Debug, Default)]\npub struct SendWakers {\n    last_woken: Option<Pathway>,\n    paths: BTreeMap<Pathway, ArcSendWaker>,\n}\n\nimpl SendWakers {\n    #[inline]\n    pub fn new() -> Self {\n        Default::default()\n    }\n\n    #[inline]\n    pub fn insert(&mut self, pathway: Pathway, waker: &ArcSendWaker) {\n        self.paths.entry(pathway).or_insert_with(|| waker.clone());\n    }\n\n    #[inline]\n    pub fn remove(&mut self, pathway: &Pathway) {\n        self.paths.remove(pathway);\n    }\n\n    #[inline]\n    pub fn wake_all_by(&mut self, signals: Signals) {\n        fn wake_all_by<'a>(\n            paths: impl IntoIterator<Item = (&'a Pathway, &'a ArcSendWaker)>,\n            signals: Signals,\n        ) -> Option<Pathway> {\n            let mut paths = paths.into_iter().peekable();\n            let first_path = paths.peek().map(|(pathway, _)| pathway).copied().copied();\n\n            paths.for_each(|(_, waker)| {\n                waker.wake_by(signals);\n            });\n\n            first_path\n        }\n\n        use std::ops::Bound::*;\n\n        self.last_woken = match self.last_woken {\n            Some(last_woken) => wake_all_by(\n                self.paths\n                    .range((Excluded(last_woken), Unbounded))\n                    .chain(self.paths.range((Unbounded, Included(last_woken)))),\n                
signals,\n            ),\n            None => wake_all_by(self.paths.range(..), signals),\n        }\n    }\n}\n\n#[derive(Default, Debug, Clone)]\npub struct ArcSendWakers(Arc<Mutex<SendWakers>>);\n\nimpl ArcSendWakers {\n    #[inline]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    fn lock_guard(&self) -> MutexGuard<'_, SendWakers> {\n        self.0.lock().unwrap()\n    }\n\n    #[inline]\n    pub fn insert(&self, pathway: Pathway, waker: &ArcSendWaker) {\n        self.lock_guard().insert(pathway, waker);\n    }\n\n    #[inline]\n    pub fn remove(&self, pathway: &Pathway) {\n        self.lock_guard().remove(pathway);\n    }\n\n    #[inline]\n    pub fn wake_all_by(&self, signals: Signals) {\n        self.lock_guard().wake_all_by(signals);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::atomic::{AtomicUsize, Ordering::*};\n\n    impl ArcSendWaker {\n        fn state(&self) -> SignalsBits {\n            self.0.lock().unwrap().state\n        }\n    }\n\n    use super::*;\n\n    #[tokio::test]\n    async fn single_condition() {\n        let waker = ArcSendWaker::new();\n        let woken_times = Arc::new(AtomicUsize::new(0));\n\n        tokio::spawn({\n            let waker = waker.clone();\n            let wake_times = woken_times.clone();\n            async move {\n                loop {\n                    waker.wait_for(Signals::CONGESTION).await;\n                    wake_times.fetch_add(1, Release);\n                }\n            }\n        });\n\n        waker.wake_by(Signals::FLOW_CONTROL);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 0); // not woken\n\n        waker.wake_by(Signals::TRANSPORT);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 0); // not woken\n\n        waker.wake_by(Signals::CONGESTION);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); // woken\n    }\n\n    #[tokio::test]\n    async 
fn all_condition() {\n        let waker = ArcSendWaker::new();\n        let woken_times = Arc::new(AtomicUsize::new(0));\n\n        tokio::spawn({\n            let waker = waker.clone();\n            let wake_times = woken_times.clone();\n            async move {\n                loop {\n                    waker.wait_for(Signals::all()).await;\n                    wake_times.fetch_add(1, Release);\n                }\n            }\n        });\n\n        let wait_for_all_cond_state = !Signals::all().bits();\n\n        waker.wake_by(Signals::FLOW_CONTROL);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); // woken\n        assert_eq!(waker.state(), wait_for_all_cond_state);\n\n        waker.wake_by(Signals::TRANSPORT);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 2); // woken\n        assert_eq!(waker.state(), wait_for_all_cond_state);\n\n        waker.wake_by(Signals::CONGESTION);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 3); // woken\n        assert_eq!(waker.state(), wait_for_all_cond_state);\n    }\n\n    #[tokio::test]\n    async fn wake_before_register() {\n        let waker = ArcSendWaker::new();\n        let woken_times = Arc::new(AtomicUsize::new(0));\n\n        waker.wake_by(Signals::CONGESTION); // pre set woken state\n\n        tokio::spawn({\n            let waker = waker.clone();\n            let wake_times = woken_times.clone();\n            async move {\n                loop {\n                    waker.wait_for(Signals::CONGESTION).await;\n                    wake_times.fetch_add(1, Release);\n                }\n            }\n        });\n\n        let wait_for_quota_state = !Signals::CONGESTION.bits();\n\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); // woken\n        assert_eq!(waker.state(), wait_for_quota_state);\n    }\n\n    #[tokio::test]\n    async fn state_change() 
{\n        let waker = ArcSendWaker::new();\n        let woken_times = Arc::new(AtomicUsize::new(0));\n\n        tokio::spawn({\n            let waker = waker.clone();\n            let wake_times = woken_times.clone();\n\n            let wait_for = move |r#for| {\n                let wake_times = wake_times.clone();\n                let waker = waker.clone();\n                async move {\n                    waker.wait_for(r#for).await;\n                    wake_times.fetch_add(1, Release);\n                }\n            };\n\n            async move {\n                wait_for(Signals::all()).await;\n                wait_for(Signals::CONGESTION | Signals::TRANSPORT).await;\n                wait_for(Signals::TRANSPORT).await;\n            }\n        });\n\n        let wait_for_all_cond_state = !Signals::all().bits();\n\n        let wait_for_quota_state = !(Signals::CONGESTION | Signals::TRANSPORT).bits();\n\n        let wait_for_data_state = !Signals::TRANSPORT.bits();\n\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 0); // not woken\n        assert_eq!(waker.state(), wait_for_all_cond_state);\n\n        waker.wake_by(Signals::TRANSPORT); // all condition will be met\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); // woken\n        assert_eq!(waker.state(), wait_for_quota_state);\n\n        waker.wake_by(Signals::CONGESTION); // quota\\data will be met\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 2); // woken\n        assert_eq!(waker.state(), wait_for_data_state);\n\n        waker.wake_by(Signals::CONGESTION); // only data will be met\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 2); // not woken\n\n        waker.wake_by(Signals::FLOW_CONTROL); // only data will be met\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 2); // not woken\n\n        
waker.wake_by(Signals::TRANSPORT); // only data will be met\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 3); // woken\n        assert_eq!(waker.state(), SendWaker::WAITING); // state reset \n    }\n\n    #[tokio::test]\n    async fn mult_wake_signals() {\n        let waker = ArcSendWaker::new();\n        let woken_times = Arc::new(AtomicUsize::new(0));\n\n        tokio::spawn({\n            let waker = waker.clone();\n            let wake_times = woken_times.clone();\n            async move {\n                loop {\n                    wake_times.fetch_add(1, Release);\n                    waker.wait_for(Signals::TRANSPORT).await;\n                }\n            }\n        });\n\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); //  wake\n        assert_eq!(waker.state(), !Signals::TRANSPORT.bits());\n\n        waker.wake_by(Signals::TRANSPORT);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 2); // enter + wake\n        assert_eq!(waker.state(), !Signals::TRANSPORT.bits());\n\n        waker.wake_by(Signals::CONGESTION | Signals::TRANSPORT);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 3); // enter + wake * 2\n        assert_eq!(waker.state(), !Signals::TRANSPORT.bits());\n    }\n\n    #[tokio::test]\n    async fn not_wake() {\n        let waker = ArcSendWaker::new();\n        let woken_times = Arc::new(AtomicUsize::new(0));\n\n        tokio::spawn({\n            let waker = waker.clone();\n            let wake_times = woken_times.clone();\n            async move {\n                loop {\n                    wake_times.fetch_add(1, Release);\n                    waker.wait_for(Signals::CONGESTION).await;\n                }\n            }\n        });\n\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); // not woken\n\n        
waker.wake_by(Signals::FLOW_CONTROL);\n        tokio::task::yield_now().await;\n        assert_eq!(woken_times.load(Acquire), 1); // not woken\n    }\n}\n"
  },
  {
    "path": "qbase/src/net.rs",
    "content": "use std::{\n    fmt::Display,\n    net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},\n    str::FromStr,\n};\n\nuse bytes::BufMut;\nuse nom::{\n    IResult, Parser,\n    combinator::{flat_map, map},\n    number::complete::{be_u16, be_u32, be_u128},\n};\nuse serde::{Deserialize, Serialize};\nuse thiserror::Error;\n\nuse crate::frame::EncodeSize;\n\npub mod addr;\npub mod nat;\npub mod route;\npub mod tx;\n\npub use nat::{NatType, NetFeature};\n\n/// IP protocol family\n///\n/// Represents IPv4 or IPv6 protocol family.\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash)]\npub enum Family {\n    /// IPv4 protocol family\n    V4 = 0,\n    /// IPv6 protocol family\n    V6 = 1,\n}\n\nimpl Display for Family {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Family::V4 => write!(f, \"v4\"),\n            Family::V6 => write!(f, \"v6\"),\n        }\n    }\n}\n\n/// Invalid IP protocol family error\n///\n/// Returned when attempting to parse an unsupported IP protocol family string.\n///\n/// Supported values: `v4`, `V4`, `v6`, `V6`\n#[derive(Debug, Clone, Error, PartialEq, Eq)]\n#[error(\"Invalid ip family\")]\npub struct UnknownFamily;\n\nimpl FromStr for Family {\n    type Err = UnknownFamily;\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        match s.to_lowercase().as_str() {\n            \"v4\" => Ok(Family::V4),\n            \"v6\" => Ok(Family::V6),\n            _ => Err(UnknownFamily),\n        }\n    }\n}\n\npub trait AddrFamily {\n    /// Get the IP protocol family\n    ///\n    /// Returns `Family::V4` for IPv4 addresses and `Family::V6` for IPv6 addresses.\n    fn family(&self) -> Family;\n}\n\nimpl AddrFamily for std::net::Ipv4Addr {\n    fn family(&self) -> Family {\n        Family::V4\n    }\n}\n\nimpl AddrFamily for std::net::Ipv6Addr {\n    fn family(&self) -> Family {\n        Family::V6\n    }\n}\n\nimpl AddrFamily for std::net::IpAddr {\n    
fn family(&self) -> Family {\n        match self {\n            std::net::IpAddr::V4(_) => Family::V4,\n            std::net::IpAddr::V6(_) => Family::V6,\n        }\n    }\n}\n\nimpl AddrFamily for std::net::SocketAddr {\n    fn family(&self) -> Family {\n        self.ip().family()\n    }\n}\n\npub trait WriteSocketAddr {\n    fn put_socket_addr(&mut self, addr: &SocketAddr);\n}\n\nimpl<T: BufMut> WriteSocketAddr for T {\n    fn put_socket_addr(&mut self, addr: &SocketAddr) {\n        self.put_u16(addr.port());\n        match addr.ip() {\n            IpAddr::V4(ipv4) => self.put_u32(ipv4.into()),\n            IpAddr::V6(ipv6) => self.put_u128(ipv6.into()),\n        }\n    }\n}\n\npub fn be_socket_addr(input: &[u8], family: Family) -> IResult<&[u8], SocketAddr> {\n    flat_map(be_u16, |port| {\n        map(be_ip_addr(family), move |ip| SocketAddr::new(ip, port))\n    })\n    .parse(input)\n}\n\npub fn be_ip_addr(family: Family) -> impl Fn(&[u8]) -> IResult<&[u8], IpAddr> {\n    move |input| match family {\n        Family::V6 => map(be_u128, |ip| IpAddr::V6(Ipv6Addr::from(ip))).parse(input),\n        Family::V4 => map(be_u32, |ip| IpAddr::V4(Ipv4Addr::from(ip))).parse(input),\n    }\n}\n\nimpl EncodeSize for SocketAddr {\n    fn max_encoding_size(&self) -> usize {\n        2 + 16 // IPv6 address\n    }\n\n    fn encoding_size(&self) -> usize {\n        match self.ip() {\n            IpAddr::V4(_) => 2 + 4,\n            IpAddr::V6(_) => 2 + 16,\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_ip_family_display_and_parse() {\n        assert_eq!(Family::V4.to_string(), \"v4\");\n        assert_eq!(Family::V6.to_string(), \"v6\");\n\n        assert_eq!(\"v4\".parse::<Family>().unwrap(), Family::V4);\n        assert_eq!(\"V4\".parse::<Family>().unwrap(), Family::V4);\n        assert_eq!(\"v6\".parse::<Family>().unwrap(), Family::V6);\n        assert_eq!(\"V6\".parse::<Family>().unwrap(), Family::V6);\n\n        
assert!(matches!(\"v7\".parse::<Family>(), Err(UnknownFamily)));\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/decrypt.rs",
    "content": "use rustls::quic::{HeaderProtectionKey, PacketKey};\n\nuse super::{\n    GetPacketNumberLength, KeyPhaseBit, LongSpecificBits, PacketNumber, ShortSpecificBits,\n    error::Error, take_pn_len,\n};\n\n/// Removes the header protection of the long packet.\n/// Returns the undecoded packet number in the header.\n///\n/// When receiving a long packet, the header protection must be removed before\n/// the packet number can be decoded. If removing header protection fails, it\n/// indicates that the packet is problematic and can be ignored.\n/// In this case, no error but None will be returned.\n/// Otherwise, the QUIC connection would be put in a situation that is highly\n/// susceptible to denial-of-service attacks.\n///\n/// Note that after removing the long header protection, the 2-bit reserved bits of the\n/// long header, i.e., the 5th and 6th bits of the first byte, must\n/// be 0, otherwise it will return a connection error of type PROTOCOL_VIOLATION.\n///\n/// See [Section 17.2](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.2-8.2) of\n/// QUIC RFC 9000.\n///\n/// After obtaining the undecoded packet number, it is necessary to rely on the largest\n/// received packet number to further decode the actual packet number.\npub fn remove_protection_of_long_packet(\n    key: &dyn HeaderProtectionKey,\n    pkt_buf: &mut [u8],\n    payload_offset: usize,\n) -> Result<Option<PacketNumber>, Error> {\n    let (pre_data, payload) = pkt_buf.split_at_mut(payload_offset);\n    let first_byte = &mut pre_data[0];\n    let (max_pn_buf, sample) = payload.split_at_mut(4);\n    // Removing header protection failed; the packet can simply be ignored\n    if key\n        .decrypt_in_place(&sample[..key.sample_len()], first_byte, max_pn_buf)\n        .is_err()\n    {\n        return Ok(None);\n    }\n\n    let specific_bits = LongSpecificBits::from(*first_byte);\n    let pn_len = specific_bits.pn_len()?;\n    let (_, undecoded_pn) = take_pn_len(pn_len)(max_pn_buf).unwrap();\n\n    
Ok(Some(undecoded_pn))\n}\n\n/// Removes the header protection of the short packet.\n/// Returns the undecoded packet number and the key phase bit in the header.\n///\n/// When receiving a short packet, the header protection must be removed before\n/// the packet number can be decoded. If removing header protection fails, it\n/// indicates that the packet is problematic and can be ignored.\n/// In this case, no error but None will be returned.\n/// Otherwise, the QUIC connection would be put in a situation that is highly\n/// susceptible to denial-of-service attacks.\n///\n/// Note that after removing the short header protection, the 2-bit reserved bits of the\n/// short header, i.e., the 4th and 5th bits of the first byte, must\n/// be 0, otherwise it will return a connection error of type PROTOCOL_VIOLATION.\n///\n/// See [Section 17.3.1](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.3.1-4.8) of\n/// QUIC RFC 9000.\n///\n/// After obtaining the undecoded packet number, it is necessary to rely on the largest\n/// received packet number to further decode the actual packet number.\npub fn remove_protection_of_short_packet(\n    key: &dyn HeaderProtectionKey,\n    pkt_buf: &mut [u8],\n    payload_offset: usize,\n) -> Result<Option<(PacketNumber, KeyPhaseBit)>, Error> {\n    let (pre_data, payload) = pkt_buf.split_at_mut(payload_offset);\n    let first_byte = &mut pre_data[0];\n    let (max_pn_buf, sample) = payload.split_at_mut(4);\n    // Removing header protection failed; the packet can simply be ignored\n    if key\n        .decrypt_in_place(&sample[..key.sample_len()], first_byte, max_pn_buf)\n        .is_err()\n    {\n        return Ok(None);\n    }\n\n    let clear_bits = ShortSpecificBits::from(*first_byte);\n    let pn_len = clear_bits.pn_len()?;\n    let (_, undecoded_pn) = take_pn_len(pn_len)(max_pn_buf).unwrap();\n\n    Ok(Some((undecoded_pn, clear_bits.key_phase())))\n}\n\n/// Decrypt the body of a packet, applicable to both long and short packets.\n///\n/// It will 
decrypt the body data of the packet in place and return the length of the valid\n/// plaintext body data in the packet.\n/// The valid plaintext body length is smaller than the raw ciphertext body length,\n/// because the ciphertext usually ends with checksum codes that are not part of the\n/// plaintext body.\n///\n/// Decrypting a packet relies on the packet number decoded from the packet header, and then\n/// uses the corresponding level of packet decryption key to decrypt the packet body.\n/// The packet body refers to the content located after the packet number.\n/// Decrypting a packet will verify the integrity of the packet.\n/// If decryption fails even though removing the header protection succeeded, it indicates\n/// an error in the peer's packaging and encryption logic, and the QUIC connection\n/// should be terminated.\npub fn decrypt_packet(\n    key: &dyn PacketKey,\n    pn: u64,\n    pkt_buf: &mut [u8],\n    body_offset: usize,\n) -> Result<usize, Error> {\n    let (aad, body) = pkt_buf.split_at_mut(body_offset);\n    let plain = key\n        .decrypt_in_place(pn, aad, body)\n        .map_err(|_| Error::DecryptPacketFailure)?;\n    Ok(plain.len())\n}\n"
  },
  {
    "path": "qbase/src/packet/encrypt.rs",
    "content": "use std::ops::Deref;\n\nuse rustls::quic::{HeaderProtectionKey, PacketKey};\n\nuse super::{KeyPhaseBit, LongSpecificBits, ShortSpecificBits};\n\n/// Encrypt the packet body, applicable to both long and short packets.\n///\n/// It relies on the packet encryption key of the corresponding level and the packet\n/// number to encrypt the packet body.\n/// The packet body refers to the packet data located after the packet number,\n/// specifically including the integrity checksum codes at the end, which usually consist of\n/// 16 bytes depending on the encryption algorithm.\n///\n/// # Note\n///\n/// Before encrypting the packet body, the entire packet content must be fully and\n/// correctly populated, including the packet header and body, especially the last\n/// few bits of the first byte.\npub fn encrypt_packet(key: &dyn PacketKey, pn: u64, pkt_buf: &mut [u8], body_offset: usize) {\n    let (aad, body_tag) = pkt_buf.split_at_mut(body_offset);\n    let (body, tag_buf) = body_tag.split_at_mut(body_tag.len() - key.tag_len());\n    let tag = key.encrypt_in_place(pn, aad, body).unwrap();\n    tag_buf.copy_from_slice(tag.as_ref());\n}\n\n/// Add header protection, applicable to both long and short packets.\n/// Mainly protects the Reserved Bits and Packet Number Length in the packet header,\n/// as well as the Packet Number.\n///\n/// Use the header protection key of the corresponding level to protect the header.\n/// For long headers, the last 4 bits of the first byte are protected;\n/// and for short headers, the last 5 bits of the first byte are protected.\n///\n/// This function uses the first bit of the first byte of the packet to determine\n/// whether it is a long packet or a short packet, and then performs the corresponding\n/// header protection.\n///\n/// # Note\n///\n/// Before encrypting the packet body, the entire packet content must be fully and\n/// correctly filled, including the packet header and body, especially the last\n/// few bits of 
the first byte, and the packet body encryption must be completed.\npub fn protect_header(\n    key: &dyn HeaderProtectionKey,\n    pkt_buf: &mut [u8],\n    payload_offset: usize,\n    pn_len: usize,\n) {\n    let (predata, payload) = pkt_buf.split_at_mut(payload_offset);\n    let first_byte = &mut predata[0];\n\n    let (max_pn_buf, sample) = payload.split_at_mut(4);\n    let sample_len = key.sample_len();\n    key.encrypt_in_place(&sample[..sample_len], first_byte, &mut max_pn_buf[..pn_len])\n        .unwrap();\n}\n\n/// Encode the last 4 specific bits of the first byte of the long packet, i.e.,\n/// two reserved bits of 0 and two bits of packet number encoding length.\npub fn encode_long_first_byte(first_byte: &mut u8, pn_len: usize) {\n    let specific_bits = LongSpecificBits::with_pn_len(pn_len);\n    *first_byte |= specific_bits.deref();\n}\n\n/// Encode the last 5 specific bits of the first byte of the short packet, i.e.,\n/// two reserved bits of 0, one bit of key phase, and two bits of packet number encoding length.\npub fn encode_short_first_byte(first_byte: &mut u8, pn_len: usize, key_phase: KeyPhaseBit) {\n    let mut specific_bits = ShortSpecificBits::with_pn_len(pn_len);\n    specific_bits.set_key_phase(key_phase);\n    *first_byte |= specific_bits.deref();\n}\n\n#[cfg(test)]\nmod tests {}\n"
  },
  {
    "path": "qbase/src/packet/error.rs",
    "content": "use nom::error::ErrorKind as NomErrorKind;\nuse thiserror::Error;\n\nuse super::r#type::Type;\n\n/// Parse error of QUIC packet.\n#[derive(Debug, PartialEq, Eq, Error)]\npub enum Error {\n    #[error(\"Unsupported version {0}\")]\n    UnsupportedVersion(u32),\n    #[error(\"Invalid fixed bit in long header\")]\n    InvalidFixedBit,\n    #[error(\"Incomplete packet type: {0}\")]\n    IncompleteType(String),\n    #[error(\"Incomplete packet header {0:?}: {1}\")]\n    IncompleteHeader(Type, String),\n    #[error(\"Incomplete packet body {0:?}: {1}\")]\n    IncompletePacket(Type, String),\n    #[error(\"Sampling of {0:?} packet content is less than 20 bytes, only {1} bytes available\")]\n    UnderSampling(Type, usize),\n    #[error(\"Failed to remove protection\")]\n    RemoveProtectionFailure,\n    #[error(\"Invalid reserved bits: {0:05b} & {1:05b} must be 0\")]\n    InvalidReservedBits(u8, u8),\n    #[error(\"Failed to decrypt packet\")]\n    DecryptPacketFailure,\n}\n\nimpl nom::error::ParseError<&[u8]> for Error {\n    fn from_error_kind(_input: &[u8], _kind: NomErrorKind) -> Self {\n        debug_assert_eq!(_kind, NomErrorKind::ManyTill);\n        unreachable!(\"QUIC frame parser must always consume\")\n    }\n\n    fn append(_input: &[u8], _kind: NomErrorKind, source: Self) -> Self {\n        // A source error occurred while parsing frames; many_till reports it via the\n        // ManyTill error kind. The source error is more meaningful here, so return it directly.\n        debug_assert_eq!(_kind, NomErrorKind::ManyTill);\n        source\n    }\n}\n\nimpl From<Error> for crate::error::QuicError {\n    fn from(e: Error) -> Self {\n        match e {\n            Error::InvalidReservedBits(_, _) => crate::error::QuicError::with_default_fty(\n                crate::error::ErrorKind::ProtocolViolation,\n                e.to_string(),\n            ),\n            _ => unreachable!(),\n        }\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/header/long.rs",
    "content": "use derive_more::{Deref, DerefMut};\n\nuse super::*;\nuse crate::{cid::ConnectionId, varint::VarInt};\n\n/// The long header structure, whose specific contents are determined by the\n/// concrete packet type, including VN/Retry/Initial/0Rtt/Handshake packet.\n///\n/// Long headers are used for packets that are sent prior to the establishment\n/// of 1-RTT keys. Once 1-RTT keys are available, a sender switches to sending\n/// packets using the short header.\n///\n/// ```text\n/// +---------------+-------------+------+--------------+------+--------------+----------+\n/// |1|1|X X 0 0 0 0| Version(32) | DCIL | DCID(0..160) | SCIL | SCID(0..160) | Specific |\n/// +---+---+---+---+-------------+------+--------------+------+--------------+----------+\n///     |<->|<->|<->|\n///       |   |   |\n///       |   |   +---> packet number length\n///       |   +---> reserved bits, must be zero\n///       +---> represent specific long packet type\n/// ```\n///\n/// See [Long Header Packet Format](https://www.rfc-editor.org/rfc/rfc9000.html#name-long-header-packets)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Default, Clone, Deref, DerefMut)]\npub struct LongHeader<T> {\n    dcid: ConnectionId,\n    scid: ConnectionId,\n    #[deref]\n    #[deref_mut]\n    specific: T,\n}\n\nimpl<T> super::GetDcid for LongHeader<T> {\n    fn dcid(&self) -> &ConnectionId {\n        &self.dcid\n    }\n}\n\nimpl<T> super::GetScid for LongHeader<T> {\n    fn scid(&self) -> &ConnectionId {\n        &self.scid\n    }\n}\n\n// The following is the header definition, which may exist in all future versions\n// of QUIC, so it is placed in this file without distinguishing versions.\n\n/// The specific contents of the version negotiation packet, which includes all the\n/// version numbers supported by the server.\n///\n/// When the server receives an initial packet or 0-RTT packet with an unsupported\n/// version number, it will respond with a 
version negotiation packet that contains\n/// all the version numbers supported by the server, each version being 32 bits.\n#[derive(Debug, Default, Clone)]\npub struct VersionNegotiation {\n    versions: Vec<u32>,\n}\n\nimpl VersionNegotiation {\n    /// Create a new VersionNegotiation packet from the version numbers.\n    pub fn new(versions: Vec<u32>) -> Self {\n        VersionNegotiation { versions }\n    }\n\n    /// Get the version numbers supported by the server.\n    pub fn versions(&self) -> &Vec<u32> {\n        &self.versions\n    }\n}\n\n/// The specific contents of the retry packet, which includes a retry token and a\n/// 16-byte integrity checksum.\n///\n/// After accepting the client's new connection, the server may return a retry packet\n/// due to load balancing strategies or simply for address verification,\n/// requiring the client to reconnect to the new address with the token.\n#[derive(Debug, Default, Clone)]\npub struct Retry {\n    token: Vec<u8>,\n    integrity: [u8; 16],\n}\n\nimpl Retry {\n    /// Create a new Retry packet from the token and integrity value.\n    ///\n    /// The token is required to be carried by the Initial packet when the client\n    /// reconnects in the future and will be used by the server for address verification.\n    pub fn new(token: &[u8], integrity: &[u8]) -> Self {\n        let mut retry = Retry {\n            token: Vec::from(token),\n            integrity: [0; 16],\n        };\n        retry.integrity.copy_from_slice(integrity);\n        retry\n    }\n\n    /// Get the retry token.\n    pub fn token(&self) -> &Vec<u8> {\n        &self.token\n    }\n\n    /// Get the integrity value.\n    pub fn integrity(&self) -> &[u8; 16] {\n        &self.integrity\n    }\n}\n\n/// The specific contents of the initial packet, which just includes a token.\n///\n/// The token comes from the Retry packet returned by the server, or it is issued to\n/// the client by the server through the NewToken frame in past QUIC 
connections.\n/// After the server receives this token, it will be used for address verification.\n/// If the client connects to the server for the first time, the token is empty.\n#[derive(Debug, Default, Clone)]\npub struct Initial {\n    token: Vec<u8>,\n}\n\nimpl Initial {\n    /// Create a new Initial packet from the token.\n    pub fn with_token(token: Vec<u8>) -> Self {\n        Initial { token }\n    }\n\n    /// Create a new Initial packet from the token slice.\n    pub fn from_slice(token: &[u8]) -> Self {\n        Initial {\n            token: Vec::from(token),\n        }\n    }\n\n    /// Get the token.\n    pub fn token(&self) -> &Vec<u8> {\n        &self.token\n    }\n}\n\n/// The specific contents of the 0-RTT packet, which is empty.\n#[derive(Debug, Default, Clone)]\npub struct ZeroRtt;\n\n/// The specific contents of the handshake packet, which is empty.\n#[derive(Debug, Default, Clone)]\npub struct Handshake;\n\nimpl EncodeHeader for Initial {\n    fn size(&self) -> usize {\n        VarInt::try_from(self.token.len())\n            .expect(\"token length can not be more than 2^62\")\n            .encoding_size()\n            + self.token.len()\n    }\n}\n\nimpl EncodeHeader for ZeroRtt {}\nimpl EncodeHeader for Handshake {}\n\n/// Version negotiation packet, which is a long header packet.\n///\n/// See [version negotiation packet](https://www.rfc-editor.org/rfc/rfc9000.html#name-version-negotiation-packet)\n/// in [RFC9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub type VersionNegotiationHeader = LongHeader<VersionNegotiation>;\n\n/// Retry packet, which is a long header packet.\n///\n/// See [retry packet](https://www.rfc-editor.org/rfc/rfc9000.html#name-retry-packet)\n/// in [RFC9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub type RetryHeader = LongHeader<Retry>;\n\n/// Initial packet header, which is a long header packet.\n///\n/// See [initial 
packet](https://www.rfc-editor.org/rfc/rfc9000.html#name-initial-packet)\n/// in [RFC9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub type InitialHeader = LongHeader<Initial>;\n\n/// Handshake packet header, which is a long header packet.\n///\n/// See [handshake packet](https://www.rfc-editor.org/rfc/rfc9000.html#name-handshake-packet)\n/// in [RFC9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub type HandshakeHeader = LongHeader<Handshake>;\n\n/// 0-RTT packet header, which is a long header packet.\n///\n/// See [0-RTT packet](https://www.rfc-editor.org/rfc/rfc9000.html#name-0-rtt-packet)\n/// in [RFC9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub type ZeroRttHeader = LongHeader<ZeroRtt>;\n\nimpl<S: EncodeHeader> EncodeHeader for LongHeader<S> {\n    fn size(&self) -> usize {\n        1 + 4 + // the first byte plus the 4-byte version field\n        1 + self.dcid.len()       // the dcid is at most 20 bytes, so its length prefix takes only 1 byte, plus the cid itself\n            + 1 + self.scid.len() // same for the scid\n            + self.specific.size()\n    }\n\n    fn length_encoding(&self) -> usize {\n        2 // every long header carries a length field; a fixed 2 bytes can encode packet lengths of 1~16KB\n    }\n}\n\nmacro_rules! 
bind_type {\n    ($($type:ty => $value:expr),*) => {\n        $(\n            impl GetType for $type {\n                fn get_type(&self) -> Type {\n                    $value\n                }\n            }\n        )*\n    };\n}\n\nbind_type!(\n    VersionNegotiationHeader => Type::Long(LongType::VersionNegotiation),\n    RetryHeader => Type::Long(LongType::V1(Version::<1, _>(v1::Type::Retry))),\n    InitialHeader => Type::Long(LongType::V1(Version::<1, _>(v1::Type::Initial))),\n    ZeroRttHeader => Type::Long(LongType::V1(Version::<1, _>(v1::Type::ZeroRtt))),\n    HandshakeHeader => Type::Long(LongType::V1(Version::<1, _>(v1::Type::Handshake)))\n);\n\n/// The sum type of long packets that carry data,\n/// including Initial, ZeroRtt, and Handshake packets.\n#[derive(Debug, Clone)]\n#[enum_dispatch(Encode, GetType, GetDcid, GetScid)]\npub enum DataHeader {\n    Initial(InitialHeader),\n    ZeroRtt(ZeroRttHeader),\n    Handshake(HandshakeHeader),\n}\n\n/// The io module provides functions for parsing and writing long headers.\npub mod io {\n    use std::ops::Deref;\n\n    use bytes::BufMut;\n    use nom::{\n        Err, Parser,\n        bytes::streaming::take,\n        combinator::{eof, map},\n        multi::{length_data, many_till},\n        number::streaming::be_u32,\n    };\n\n    use super::*;\n    use crate::{\n        cid::WriteConnectionId,\n        packet::{\n            header::io::WriteHeader,\n            r#type::{\n                io::WritePacketType,\n                long::{Type as LongType, v1::Type as LongV1Type},\n            },\n        },\n        varint::{WriteVarInt, be_varint},\n    };\n\n    /// Parse the version negotiation packet,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_version_negotiation(input: &[u8]) -> nom::IResult<&[u8], VersionNegotiation> {\n        let (remain, (versions, _)) = many_till(be_u32, eof).parse(input)?;\n        Ok((remain, VersionNegotiation::new(versions)))\n    }\n\n    /// Parse 
the retry packet,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_retry(input: &[u8]) -> nom::IResult<&[u8], Retry> {\n        if input.len() < 16 {\n            return Err(Err::Incomplete(nom::Needed::new(16)));\n        }\n        let token_length = input.len() - 16;\n        let (integrity, token) = take(token_length)(input)?;\n        Ok((&[][..], Retry::new(token, integrity)))\n    }\n\n    /// Parse the initial packet,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_initial(input: &[u8]) -> nom::IResult<&[u8], Initial> {\n        map(length_data(be_varint), Initial::from_slice).parse(input)\n    }\n\n    /// Parse the 0-RTT packet,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_zero_rtt(input: &[u8]) -> nom::IResult<&[u8], ZeroRtt> {\n        Ok((input, ZeroRtt))\n    }\n\n    /// Parse the handshake packet,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_handshake(input: &[u8]) -> nom::IResult<&[u8], Handshake> {\n        Ok((input, Handshake))\n    }\n\n    /// The builder for the long header, which is used to create a long header.\n    ///\n    /// ## Example\n    /// ```\n    /// use qbase::{cid::ConnectionId, packet::header::long::io::LongHeaderBuilder};\n    ///\n    /// let scid = ConnectionId::from_slice(b\"scid\");\n    /// let dcid = ConnectionId::from_slice(b\"dcid\");\n    ///\n    /// let handshake_header = LongHeaderBuilder::with_cid(dcid, scid).handshake();\n    /// ```\n    pub struct LongHeaderBuilder {\n        pub(crate) dcid: ConnectionId,\n        pub(crate) scid: ConnectionId,\n    }\n\n    impl LongHeaderBuilder {\n        /// Create a new long header builder with the given destination\n        /// and source connection IDs.\n        pub fn with_cid(dcid: ConnectionId, scid: ConnectionId) -> Self {\n            Self { dcid, scid }\n        }\n\n        /// Build into a version negotiation header.\n        pub fn vn(self, 
versions: Vec<u32>) -> LongHeader<VersionNegotiation> {\n            self.wrap(VersionNegotiation::new(versions))\n        }\n\n        /// Build into a retry header.\n        pub fn retry(self, token: Vec<u8>, integrity: [u8; 16]) -> LongHeader<Retry> {\n            self.wrap(Retry { token, integrity })\n        }\n\n        /// Build into an initial header.\n        pub fn initial(self, token: Vec<u8>) -> LongHeader<Initial> {\n            self.wrap(Initial::with_token(token))\n        }\n\n        /// Build into a 0-RTT header.\n        pub fn zero_rtt(self) -> LongHeader<ZeroRtt> {\n            self.wrap(ZeroRtt)\n        }\n\n        /// Build into a handshake header.\n        pub fn handshake(self) -> LongHeader<Handshake> {\n            self.wrap(Handshake)\n        }\n\n        /// Wrap the specific header into the long generic header.\n        /// Return the specific long header.\n        pub fn wrap<T>(self, specific: T) -> LongHeader<T> {\n            LongHeader {\n                dcid: self.dcid,\n                scid: self.scid,\n                specific,\n            }\n        }\n\n        /// Parse a long header from the input buffer,\n        /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n        ///\n        /// The input buffer would be the remaining data of the buffer.\n        pub fn parse(self, ty: LongType, input: &[u8]) -> nom::IResult<&[u8], Header> {\n            match ty {\n                LongType::VersionNegotiation => {\n                    let (remain, versions) = be_version_negotiation(input)?;\n                    Ok((remain, Header::VN(self.wrap(versions))))\n                }\n                LongType::V1(ty) => match ty.deref() {\n                    LongV1Type::Retry => {\n                        let (remain, retry) = be_retry(input)?;\n                        Ok((remain, Header::Retry(self.wrap(retry))))\n                    }\n                    LongV1Type::Initial => {\n                        let (remain, 
initial) = be_initial(input)?;\n                        Ok((remain, Header::Initial(self.wrap(initial))))\n                    }\n                    LongV1Type::ZeroRtt => {\n                        let (remain, zero_rtt) = be_zero_rtt(input)?;\n                        Ok((remain, Header::ZeroRtt(self.wrap(zero_rtt))))\n                    }\n                    LongV1Type::Handshake => {\n                        let (remain, handshake) = be_handshake(input)?;\n                        Ok((remain, Header::Handshake(self.wrap(handshake))))\n                    }\n                },\n            }\n        }\n    }\n\n    /// A [`bytes::BufMut`] extension trait, makes buffer more friendly to write long headers.\n    pub trait WriteSpecific<S>: BufMut {\n        /// Write the specific header content.\n        fn put_specific(&mut self, _specific: &S) {}\n    }\n\n    impl<T: BufMut> WriteSpecific<VersionNegotiation> for T {\n        fn put_specific(&mut self, specific: &VersionNegotiation) {\n            for version in &specific.versions {\n                self.put_u32(*version);\n            }\n        }\n    }\n\n    impl<T: BufMut> WriteSpecific<Retry> for T {\n        fn put_specific(&mut self, specific: &Retry) {\n            self.put_slice(&specific.token);\n            self.put_slice(&specific.integrity);\n        }\n    }\n\n    impl<T: BufMut> WriteSpecific<Initial> for T {\n        fn put_specific(&mut self, specific: &Initial) {\n            self.put_varint(\n                &VarInt::try_from(specific.token.len())\n                    .expect(\"token length can not be more than 2^62\"),\n            );\n            self.put_slice(&specific.token);\n        }\n    }\n\n    /// 0-Rtt headers are empty, so there is nothing to write.\n    impl<T: BufMut> WriteSpecific<ZeroRtt> for T {}\n    /// Handshake headers are empty, so there is nothing to write.\n    impl<T: BufMut> WriteSpecific<Handshake> for T {}\n\n    impl<T, S> WriteHeader<LongHeader<S>> for T\n    
where\n        T: BufMut + WriteSpecific<S>,\n        LongHeader<S>: GetType,\n    {\n        fn put_header(&mut self, header: &LongHeader<S>) {\n            let ty = header.get_type();\n            self.put_packet_type(&ty);\n            self.put_connection_id(&header.dcid);\n            self.put_connection_id(&header.scid);\n            self.put_specific(&header.specific);\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::packet::header::WriteSpecific;\n\n    #[test]\n    fn test_be_version_negotiation() {\n        use super::io::be_version_negotiation;\n\n        let buf = vec![0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02];\n        let (remain, versions) = be_version_negotiation(buf.as_ref()).unwrap();\n        assert_eq!(versions.versions, vec![0x01, 0x02]);\n        assert_eq!(remain.len(), 0);\n    }\n\n    #[test]\n    fn test_be_retry() {\n        use super::io::be_retry;\n\n        let buf = vec![\n            0x00, 0x00, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a,\n            0x0b, 0x0c, 0x0d, 0x0e, 0x0f,\n        ];\n        let (remain, retry) = be_retry(buf.as_ref()).unwrap();\n        assert_eq!(\n            retry.integrity,\n            [\n                0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d,\n                0x0e, 0x0f\n            ]\n        );\n        assert_eq!(retry.token, vec![0x00, 0x00, 0x00]);\n        assert_eq!(remain.len(), 0);\n        let buf = vec![\n            0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e,\n            0x0f,\n        ];\n        match be_retry(&buf) {\n            Err(e) => assert_eq!(e, nom::Err::Incomplete(nom::Needed::new(16))),\n            _ => panic!(\"unexpected result\"),\n        }\n    }\n\n    #[test]\n    fn test_be_initial() {\n        use crate::packet::header::long::io::be_initial;\n        // Note: The length of the last bit is filled in when sending, here set as 0x01\n  
      // Consistent behavior with zero_rtt and handshake\n        let buf = vec![0x03, 0x00, 0x00, 0x00];\n        let (remain, initial) = be_initial(buf.as_ref()).unwrap();\n        assert_eq!(initial.token, vec![0x00, 0x00, 0x00]);\n        assert_eq!(remain.len(), 0);\n    }\n\n    #[test]\n    fn test_write_version_negotiation_long_header() {\n        use super::{LongHeaderBuilder, VersionNegotiation};\n        use crate::cid::ConnectionId;\n\n        let mut buf = Vec::<u8>::new();\n        let vn_long_header =\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default())\n                .wrap(VersionNegotiation::new(vec![0x01, 0x02]));\n        buf.put_specific(&vn_long_header.specific);\n        assert_eq!(buf, vec![0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02]);\n    }\n\n    #[test]\n    fn test_write_retry_long_header() {\n        use super::{LongHeaderBuilder, Retry};\n        use crate::cid::ConnectionId;\n\n        let mut buf = Vec::<u8>::new();\n        let retry_long_header =\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default()).wrap(\n                Retry::new(\n                    &[0x00, 0x00, 0x00],\n                    &[\n                        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,\n                        0x0c, 0x0d, 0x0e, 0x0f,\n                    ],\n                ),\n            );\n        buf.put_specific(&retry_long_header.specific);\n        assert_eq!(\n            buf,\n            vec![\n                0x00, 0x00, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a,\n                0x0b, 0x0c, 0x0d, 0x0e, 0x0f,\n            ]\n        );\n    }\n\n    #[test]\n    fn test_write_initial_long_header() {\n        use super::LongHeaderBuilder;\n        use crate::cid::ConnectionId;\n\n        let mut buf = Vec::<u8>::new();\n        let initial_long_header =\n            
LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default())\n                .initial(vec![0x00, 0x00, 0x00]);\n        buf.put_specific(&initial_long_header.specific);\n        assert_eq!(buf, vec![0x03, 0x00, 0x00, 0x00,]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/header/short.rs",
    "content": "use super::*;\nuse crate::{cid::ConnectionId, packet::SpinBit};\n\n/// A packet with a short header does not include a length,\n/// so it can only be the last packet in a UDP datagram.\n///\n/// ```text\n///      +---spin bit\n///      |     +---key phase bits\n///      |     |\n/// +----+-----+----+------+--------------+----......---+\n/// |1|1|S 0 0 K 0 0| DCIL | DCID(0..160) | Payload ... |\n/// +-----+---+-+---+------+--------------+----......---+\n///       |<->| |<->|\n///         |     |\n///         |     +---> packet number length\n///         +---> reserved bits, must be 0\n/// ```\n///\n/// See [1-RTT Packet](https://www.rfc-editor.org/rfc/rfc9000.html#name-1-rtt-packet)\n/// in [RFC9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]\npub struct OneRttHeader {\n    // For simplicity, the spin bit is also part of the 1RTT header.\n    spin: SpinBit,\n    dcid: ConnectionId,\n}\n\nimpl OneRttHeader {\n    /// Create a new 1RTT header.\n    pub fn new(spin: SpinBit, dcid: ConnectionId) -> Self {\n        Self { spin, dcid }\n    }\n\n    /// Get the spin bit.\n    pub fn spin(&self) -> SpinBit {\n        self.spin\n    }\n}\n\nimpl EncodeHeader for OneRttHeader {\n    fn size(&self) -> usize {\n        1 + self.dcid.len()\n    }\n}\n\nimpl GetType for OneRttHeader {\n    fn get_type(&self) -> Type {\n        Type::Short(OneRtt(self.spin))\n    }\n}\n\nimpl super::GetDcid for OneRttHeader {\n    fn dcid(&self) -> &ConnectionId {\n        &self.dcid\n    }\n}\n\n/// The io module provides functions for parsing and writing 1RTT headers.\npub mod io {\n    use bytes::BufMut;\n\n    use super::{GetType, OneRttHeader};\n    use crate::packet::{header::io::WriteHeader, signal::SpinBit, r#type::io::WritePacketType};\n\n    /// Parse a 1RTT header from the input buffer,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_one_rtt_header(\n        spin: 
SpinBit,\n        dcid_len: usize,\n        input: &[u8],\n    ) -> nom::IResult<&[u8], OneRttHeader> {\n        use nom::bytes::streaming::take;\n        let (remain, dcid) = take(dcid_len)(input)?;\n        let dcid = crate::cid::ConnectionId::from_slice(dcid);\n        Ok((remain, OneRttHeader { spin, dcid }))\n    }\n\n    impl<T: BufMut> WriteHeader<OneRttHeader> for T {\n        fn put_header(&mut self, header: &OneRttHeader) {\n            let ty = header.get_type();\n            self.put_packet_type(&ty);\n            self.put_slice(&header.dcid);\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::packet::header::io::WriteHeader;\n\n    #[test]\n    fn test_read_one_rtt_header() {\n        use super::io::be_one_rtt_header;\n        use crate::packet::{SpinBit, header::ConnectionId};\n\n        let (remain, header) = be_one_rtt_header(SpinBit::One, 0, &[][..]).unwrap();\n\n        assert_eq!(remain.len(), 0);\n        assert_eq!(header.spin, SpinBit::One);\n        assert_eq!(header.dcid, ConnectionId::default());\n    }\n\n    #[test]\n    fn test_write_one_rtt_header() {\n        use super::OneRttHeader;\n        use crate::{cid::ConnectionId, packet::SpinBit};\n\n        let mut buf = vec![];\n        let header = OneRttHeader {\n            spin: SpinBit::One,\n            dcid: ConnectionId::default(),\n        };\n\n        buf.put_header(&header);\n        // Note: 0x60 == SHORT_HEADER_BIT | FIXED_BIT | Toggle<SPIN_BIT>.value()\n        assert_eq!(buf, [0x60]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/header.rs",
    "content": "use enum_dispatch::enum_dispatch;\n\nuse crate::cid::ConnectionId;\n\n/// All structure definitions related to long headers.\npub mod long;\n/// All structure definitions related to short headers.\npub mod short;\n\n#[doc(hidden)]\npub use long::{\n    DataHeader, HandshakeHeader, InitialHeader, LongHeader, RetryHeader, VersionNegotiationHeader,\n    ZeroRttHeader,\n    io::{LongHeaderBuilder, WriteSpecific},\n};\n#[doc(hidden)]\npub use short::OneRttHeader;\n\nuse super::r#type::{\n    Type,\n    long::{Type as LongType, Version, v1},\n    short::OneRtt,\n};\n\n/// Each packet has its type. For more detailed definition on packet types, see [`Type`].\n#[enum_dispatch]\npub trait GetType {\n    /// Get the packet type.\n    fn get_type(&self) -> Type;\n}\n\n/// When encoding a packet for sending, we need to know the size of the packet encoding,\n/// so this trait needs to be implemented.\n///\n/// However, the length field of the packet payload is variable-length encoded and\n/// requires special encoding, which is not considered here.\n#[enum_dispatch]\npub trait EncodeHeader {\n    /// Returns the length of the encoded packet header.\n    fn size(&self) -> usize {\n        0\n    }\n\n    fn length_encoding(&self) -> usize {\n        0\n    }\n}\n\n/// Get the Destination Connection ID (DCID) of the packet, each packet has a DCID.\n#[enum_dispatch]\npub trait GetDcid {\n    /// Get the Destination Connection ID (DCID) of the packet.\n    fn dcid(&self) -> &ConnectionId;\n}\n\n/// Get the Source Connection ID (SCID) of the packet, only long packets have SCID.\n#[enum_dispatch]\npub trait GetScid {\n    /// Get the Source Connection ID (SCID) of the packet.\n    fn scid(&self) -> &ConnectionId;\n}\n\n/// The sum type of all packet headers.\n#[derive(Debug, Clone)]\n#[enum_dispatch(GetDcid)]\npub enum Header {\n    VN(long::VersionNegotiationHeader),\n    Retry(long::RetryHeader),\n    Initial(long::InitialHeader),\n    ZeroRtt(long::ZeroRttHeader),\n 
   Handshake(long::HandshakeHeader),\n    OneRtt(short::OneRttHeader),\n}\n\n/// The io module for packet headers, including\n/// how to parse the header from a UDP packet and\n/// how to write the header into a UDP packet.\npub mod io {\n    use super::{\n        Header, LongHeader, OneRttHeader,\n        long::{Handshake, Initial, Retry, VersionNegotiation, ZeroRtt, io::LongHeaderBuilder},\n    };\n    use crate::{\n        cid::be_connection_id,\n        packet::{\n            header::short::io::be_one_rtt_header,\n            r#type::{Type, short::OneRtt},\n        },\n    };\n\n    /// Parse a packet header from the input buffer,\n    /// returns a [`Header`] on success,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_header(\n        packet_type: Type,\n        dcid_len: usize,\n        input: &[u8],\n    ) -> nom::IResult<&[u8], Header> {\n        match packet_type {\n            Type::Long(long_ty) => {\n                let (remain, dcid) = be_connection_id(input)?;\n                let (remain, scid) = be_connection_id(remain)?;\n                let builder = LongHeaderBuilder { dcid, scid };\n                builder.parse(long_ty, remain)\n            }\n            Type::Short(OneRtt(spin)) => {\n                let (remain, one_rtt) = be_one_rtt_header(spin, dcid_len, input)?;\n                Ok((remain, Header::OneRtt(one_rtt)))\n            }\n        }\n    }\n\n    /// A [`bytes::BufMut`] extension trait for writing packet headers.\n    ///\n    /// When sending packets, it is necessary to organize the data and write\n    /// various types of QUIC packets into a UDP datagram. 
This trait will\n    /// be used to write the packet header.\n    pub trait WriteHeader<H>: bytes::BufMut {\n        /// Write a packet header to the buffer.\n        fn put_header(&mut self, header: &H);\n    }\n\n    impl<T> WriteHeader<Header> for T\n    where\n        T: bytes::BufMut\n            + WriteHeader<LongHeader<VersionNegotiation>>\n            + WriteHeader<LongHeader<Retry>>\n            + WriteHeader<LongHeader<Initial>>\n            + WriteHeader<LongHeader<ZeroRtt>>\n            + WriteHeader<LongHeader<Handshake>>\n            + WriteHeader<OneRttHeader>,\n    {\n        fn put_header(&mut self, header: &Header) {\n            match header {\n                Header::VN(vn) => self.put_header(vn),\n                Header::Retry(retry) => self.put_header(retry),\n                Header::Initial(initial) => self.put_header(initial),\n                Header::ZeroRtt(zero_rtt) => self.put_header(zero_rtt),\n                Header::Handshake(handshake) => self.put_header(handshake),\n                Header::OneRtt(one_rtt) => self.put_header(one_rtt),\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::ops::Deref;\n\n    use super::{\n        Header, LongHeaderBuilder,\n        io::be_header,\n        long::{Handshake, Initial, Retry, VersionNegotiation, ZeroRtt},\n    };\n    use crate::{\n        cid::ConnectionId,\n        packet::{\n            GetDcid, OneRttHeader, SpinBit,\n            header::{GetScid, io::WriteHeader},\n            r#type::{\n                Type,\n                long::{self, Ver1},\n                short::OneRtt,\n            },\n        },\n    };\n\n    #[test]\n    fn test_read_header() {\n        // VersionNegotiation Header\n        let buf = vec![0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02];\n        let (remain, vn_long_header) =\n            be_header(Type::Long(long::Type::VersionNegotiation), 0, &buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        match 
vn_long_header {\n            Header::VN(vn) => {\n                assert_eq!(vn.dcid(), &ConnectionId::default());\n                assert_eq!(vn.scid(), &ConnectionId::default());\n                assert_eq!(vn.versions(), &vec![0x01, 0x02]);\n            }\n            _ => panic!(\"unexpected header type\"),\n        }\n\n        // Retry Header\n        let buf = vec![\n            0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,\n            0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,\n        ];\n        let (remain, retry_long_header) =\n            be_header(Type::Long(long::Type::V1(Ver1::RETRY)), 0, &buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        match retry_long_header {\n            Header::Retry(retry) => {\n                assert_eq!(retry.dcid(), &ConnectionId::default());\n                assert_eq!(retry.scid(), &ConnectionId::default());\n                assert_eq!(retry.token().deref(), &[0x00, 0x00, 0x00]);\n                assert_eq!(\n                    retry.integrity(),\n                    &[\n                        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,\n                        0x0c, 0x0d, 0x0e, 0x0f\n                    ]\n                );\n            }\n            _ => panic!(\"unexpected header type\"),\n        }\n\n        // Retry Header with invalid length\n        let buf = vec![\n            0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e,\n            0x0f,\n        ];\n        match be_header(Type::Long(long::Type::V1(Ver1::RETRY)), 0, &buf) {\n            Err(e) => assert_eq!(e, nom::Err::Incomplete(nom::Needed::new(16))),\n            _ => panic!(\"unexpected result\"),\n        }\n\n        // Initial Header\n        let buf = vec![0x00, 0x00, 0x03, 0x01, 0x02, 0x03];\n        let (remain, initial_long_header) =\n            be_header(Type::Long(long::Type::V1(Ver1::INITIAL)), 0, &buf).unwrap();\n        
assert_eq!(remain.len(), 0);\n        match initial_long_header {\n            Header::Initial(initial) => {\n                assert_eq!(initial.dcid(), &ConnectionId::default());\n                assert_eq!(initial.scid(), &ConnectionId::default());\n                assert_eq!(initial.token().deref(), [0x01, 0x02, 0x03,]);\n            }\n            _ => panic!(\"unexpected header type\"),\n        }\n\n        // ZeroRTT Header\n        let buf = vec![0x00, 0x00];\n        let (remain, zero_rtt_long_header) =\n            be_header(Type::Long(long::Type::V1(Ver1::ZERO_RTT)), 0, &buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        match zero_rtt_long_header {\n            Header::ZeroRtt(zero_rtt) => {\n                assert_eq!(zero_rtt.dcid(), &ConnectionId::default());\n                assert_eq!(zero_rtt.scid(), &ConnectionId::default());\n            }\n            _ => panic!(\"unexpected header type\"),\n        }\n\n        // Handshake Header\n        let buf = vec![0x00, 0x00];\n        let (remain, handshake_long_header) =\n            be_header(Type::Long(long::Type::V1(Ver1::HANDSHAKE)), 0, &buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        match handshake_long_header {\n            Header::Handshake(handshake) => {\n                assert_eq!(handshake.dcid(), &ConnectionId::default());\n                assert_eq!(handshake.scid(), &ConnectionId::default());\n            }\n            _ => panic!(\"unexpected header type\"),\n        }\n\n        // OneRtt Header\n        let buf = vec![];\n        let (remain, one_rtt_header) =\n            be_header(Type::Short(OneRtt(SpinBit::One)), 0, &buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        match one_rtt_header {\n            Header::OneRtt(one_rtt) => {\n                assert_eq!(\n                    one_rtt,\n                    OneRttHeader::new(SpinBit::One, ConnectionId::default())\n                );\n            }\n            _ => panic!(\"unexpected 
header type\"),\n        }\n    }\n\n    #[test]\n    fn test_write_header() {\n        // VersionNegotiation Header\n        let mut buf = vec![];\n        let vn_long_header = Header::VN(\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default())\n                .wrap(VersionNegotiation::new(vec![0x01, 0x02])),\n        );\n        buf.put_header(&vn_long_header);\n        assert_eq!(\n            buf,\n            [\n                0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,\n                0x02\n            ]\n        );\n\n        // Retry Header\n        let mut buf = vec![];\n        let retry_long_header = Header::Retry(\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default()).wrap(\n                Retry::new(\n                    &[0x00, 0x00, 0x00],\n                    &[\n                        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,\n                        0x0c, 0x0d, 0x0e, 0x0f,\n                    ],\n                ),\n            ),\n        );\n        buf.put_header(&retry_long_header);\n        assert_eq!(\n            buf,\n            [\n                0xf0, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x02, 0x03,\n                0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f\n            ]\n        );\n\n        // Initial Header\n        let mut buf = vec![];\n        let initial_header = Header::Initial(\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default())\n                .wrap(Initial::with_token(vec![0x01, 0x02, 0x03])),\n        );\n        buf.put_header(&initial_header);\n        assert_eq!(\n            buf,\n            [\n                0xc0, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x03, 0x01, 0x02, 0x03\n            ]\n        );\n\n        // ZeroRtt Header\n        let mut buf = vec![];\n        let 
zero_rtt_header = Header::ZeroRtt(\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default())\n                .wrap(ZeroRtt),\n        );\n        buf.put_header(&zero_rtt_header);\n        assert_eq!(buf, [0xd0, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00]);\n\n        // Handshake Header\n        let mut buf = vec![];\n        let handshake_header = Header::Handshake(\n            LongHeaderBuilder::with_cid(ConnectionId::default(), ConnectionId::default())\n                .wrap(Handshake),\n        );\n        buf.put_header(&handshake_header);\n        assert_eq!(buf, [0xe0, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00]);\n\n        // OneRtt Header with SpinBit::On\n        let mut buf = vec![];\n        let one_rtt_header =\n            Header::OneRtt(OneRttHeader::new(SpinBit::One, ConnectionId::default()));\n        buf.put_header(&one_rtt_header);\n        assert_eq!(buf, [0x60]);\n\n        // OneRtt Header with SpinBit::Off\n        let mut buf = vec![];\n        let one_rtt_header =\n            Header::OneRtt(OneRttHeader::new(SpinBit::Zero, ConnectionId::default()));\n        buf.put_header(&one_rtt_header);\n        assert_eq!(buf, [0x40]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/io.rs",
    "content": "use std::{any::Any, mem};\n\nuse bytes::BytesMut;\nuse nom::{Parser, multi::length_data};\n\nuse super::{\n    error::Error,\n    header::io::be_header,\n    r#type::{Type, io::be_packet_type},\n    *,\n};\nuse crate::{\n    Epoch,\n    frame::{io::WriteFrame, *},\n    net::tx::Signals,\n    util::{ContinuousData, NonData, WriteData},\n    varint::be_varint,\n};\n\n/// Parse the payload of a packet.\n///\n/// - For long packets, the payload is a [`nom::multi::length_data`].\n/// - For 1-RTT packet, the payload is the remaining content of the datagram.\nfn be_payload(\n    pkty: Type,\n    datagram: &mut BytesMut,\n    remain_len: usize,\n) -> Result<(BytesMut, usize), Error> {\n    let offset = datagram.len() - remain_len;\n    let input = &datagram[offset..];\n    let (remain, payload) = length_data(be_varint).parse(input).map_err(|e| match e {\n        ne @ nom::Err::Incomplete(_) => Error::IncompleteHeader(pkty, ne.to_string()),\n        _ => unreachable!(\"parsing packet header never generates error or failure\"),\n    })?;\n    let payload_len = payload.len();\n    if payload_len < 20 {\n        // The payload needs at least 20 bytes to have enough samples to remove the packet header protection.\n        return Err(Error::UnderSampling(pkty, payload.len()));\n    }\n    let packet_length = datagram.len() - remain.len();\n    let bytes = datagram.split_to(packet_length);\n    Ok((bytes, packet_length - payload_len))\n}\n\n/// Parse the QUIC packet from the datagram, given the length of the DCID.\n/// Returns the parsed packet or an error, and the datagram removed the packet's content.\npub fn be_packet(datagram: &mut BytesMut, dcid_len: usize) -> Result<Packet, Error> {\n    let input = datagram.as_ref();\n    let (remain, pkty) = be_packet_type(input).map_err(|e| match e {\n        ne @ nom::Err::Incomplete(_) => Error::IncompleteType(ne.to_string()),\n        nom::Err::Error(e) => e,\n        _ => unreachable!(\"parsing packet type never 
generates failure\"),\n    })?;\n    let (remain, header) = be_header(pkty, dcid_len, remain).map_err(|e| match e {\n        ne @ nom::Err::Incomplete(_) => Error::IncompleteHeader(pkty, ne.to_string()),\n        _ => unreachable!(\"parsing packet header never generates error or failure\"),\n    })?;\n    match header {\n        Header::VN(header) => {\n            datagram.clear();\n            Ok(Packet::VN(header))\n        }\n        Header::Retry(header) => {\n            datagram.clear();\n            Ok(Packet::Retry(header))\n        }\n        Header::Initial(header) => {\n            let (bytes, offset) = be_payload(pkty, datagram, remain.len())?;\n            Ok(Packet::Data(DataPacket {\n                header: DataHeader::Long(long::DataHeader::Initial(header)),\n                bytes,\n                offset,\n            }))\n        }\n        Header::ZeroRtt(header) => {\n            let (bytes, offset) = be_payload(pkty, datagram, remain.len())?;\n            Ok(Packet::Data(DataPacket {\n                header: DataHeader::Long(long::DataHeader::ZeroRtt(header)),\n                bytes,\n                offset,\n            }))\n        }\n        Header::Handshake(header) => {\n            let (bytes, offset) = be_payload(pkty, datagram, remain.len())?;\n            Ok(Packet::Data(DataPacket {\n                header: DataHeader::Long(long::DataHeader::Handshake(header)),\n                bytes,\n                offset,\n            }))\n        }\n        Header::OneRtt(header) => {\n            if remain.len() < 20 {\n                // The payload needs at least 20 bytes to have enough samples to remove the packet header protection.\n                return Err(Error::UnderSampling(pkty, remain.len()));\n            }\n            let remain_len = remain.len();\n            let bytes = mem::replace(datagram, BytesMut::new());\n            let offset = bytes.len() - remain_len;\n            datagram.clear();\n            
Ok(Packet::Data(DataPacket {\n                header: DataHeader::Short(header),\n                bytes,\n                offset,\n            }))\n        }\n    }\n}\n\npub trait ProductHeader<H> {\n    fn new_header(&self) -> Result<H, Signals>;\n}\n\npub trait PacketSpace<H> {\n    type PacketAssembler<'b>: AssemblePacket\n    where\n        Self: 'b;\n\n    fn new_packet<'b>(\n        &'b self,\n        header: H,\n        buffer: &'b mut [u8],\n    ) -> Result<Self::PacketAssembler<'b>, Signals>;\n}\n\n// Target -> Target\npub trait Package<Target: ?Sized> {\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals>;\n}\n\nimpl<Target: BufMut + ?Sized, P: Package<Target> + ?Sized> Package<Target> for &mut P {\n    #[inline]\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        P::dump(self, target)\n    }\n}\n\nimpl<Target: BufMut + ?Sized, P: Package<Target> + ?Sized> Package<Target> for Box<P> {\n    #[inline]\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        P::dump(self, target)\n    }\n}\n\nimpl<Target: BufMut + ?Sized, P: Package<Target>> Package<Target> for Option<P> {\n    #[inline]\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        self.take()\n            .map_or_else(|| Err(Signals::empty()), |mut package| package.dump(target))\n    }\n}\n\nimpl<Target: BufMut + ?Sized, P: Package<Target>> Package<Target> for [P] {\n    #[inline]\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        let origin = target.remaining_mut();\n        let mut signals = Signals::empty();\n        let mut packet_content = PacketContent::default();\n        for package in self {\n            match package.dump(target) {\n                Ok(content) => packet_content += content,\n                Err(s) => signals |= s,\n            }\n        }\n\n        (origin != target.remaining_mut())\n            
.then_some(packet_content)\n            .ok_or(signals)\n    }\n}\n\nimpl<Target: BufMut + ?Sized, P: Package<Target>, const N: usize> Package<Target> for [P; N] {\n    #[inline]\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        let origin = target.remaining_mut();\n        let mut signals = Signals::empty();\n        let mut packet_content = PacketContent::default();\n        for package in self {\n            match package.dump(target) {\n                Ok(content) => packet_content += content,\n                Err(s) => signals |= s,\n            }\n        }\n\n        (origin != target.remaining_mut())\n            .then_some(packet_content)\n            .ok_or(signals)\n    }\n}\n\npub struct PadTo20;\n\nimpl<'b, P> Package<P> for PadTo20\nwhere\n    P: AsRef<PacketWriter<'b>> + BufMut + ?Sized,\n{\n    #[inline]\n    fn dump(&mut self, target: &mut P) -> Result<PacketContent, Signals> {\n        let packet = target.as_ref();\n        match packet.payload_len() + packet.tag_len() {\n            _ if packet.is_empty() => Err(Signals::empty()),\n            len if len < 20 => {\n                target.put_bytes(0, 20 - len);\n                Ok(PacketContent::NonAckEliciting)\n            }\n            _ => Ok(PacketContent::NonAckEliciting),\n        }\n    }\n}\n\npub struct PadToFull;\n\nimpl<'b, P> Package<P> for PadToFull\nwhere\n    P: AsRef<PacketWriter<'b>> + BufMut + ?Sized,\n{\n    #[inline]\n    fn dump(&mut self, target: &mut P) -> Result<PacketContent, Signals> {\n        let packet = target.as_ref();\n        match packet.payload_len() + packet.tag_len() {\n            _ if packet.is_empty() => Err(Signals::empty()),\n            len if len < packet.buffer().len() => {\n                target.put_bytes(0, packet.remaining_mut());\n                Ok(PacketContent::NonAckEliciting)\n            }\n            _ => Ok(PacketContent::NonAckEliciting),\n        }\n    }\n}\n\npub struct PadProbe;\n\nimpl<'b, 
P> Package<P> for PadProbe\nwhere\n    P: AsRef<PacketWriter<'b>> + BufMut + ?Sized,\n{\n    #[inline]\n    fn dump(&mut self, target: &mut P) -> Result<PacketContent, Signals> {\n        if target.as_ref().is_probe_new_path() {\n            return PadToFull.dump(target);\n        }\n        Err(Signals::empty())\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\npub struct Repeat<P>(pub P);\n\nimpl<Target: ?Sized + BufMut, P: Package<Target>> Package<Target> for Repeat<P> {\n    #[inline]\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        let origin = target.remaining_mut();\n        let mut packet_content = PacketContent::default();\n        let signals = loop {\n            match self.0.dump(target) {\n                Ok(content) => packet_content += content,\n                Err(signals) => break signals,\n            }\n        };\n\n        (origin != target.remaining_mut())\n            .then_some(packet_content)\n            .ok_or(signals)\n    }\n}\n\npub struct Packages<T>(pub T);\n\nmacro_rules! 
impl_package_for_tuple {\n    () => {};\n    ($head:ident $($tail:ident)*) => {\n        impl_package_for_tuple!(@imp $head $($tail)*);\n        impl_package_for_tuple!(           $($tail)*);\n\n    };\n    (@imp $($t:ident)*) => {\n        impl<Target: BufMut + ?Sized, $($t: Package<Target>),*> Package<Target> for Packages<($($t,)*)> {\n            #[inline]\n            fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n                let origin = target.remaining_mut();\n                let mut signals = Signals::empty();\n                let mut packet_content = PacketContent::default();\n\n                #[allow(non_snake_case)]\n                let ($($t,)*) = &mut self.0;\n\n                $( #[allow(non_snake_case)]\n                match $t.dump(target) {\n                    Ok(content) => packet_content += content,\n                    Err(s) => signals |= s,\n                } )*\n\n                (origin != target.remaining_mut())\n                    .then_some(packet_content)\n                    .ok_or(signals)\n            }\n        }\n    }\n}\n\nimpl_package_for_tuple! {\n    Z Y X W V U T S R Q P O N M L K J I H G F E D C B A\n}\n\nmacro_rules! 
frame_packages {\n    () => {};\n    (@imp_frame $($frame:tt)*) => {\n        impl<Target> Package<Target> for $($frame)*\n        where\n            Target: BufMut + RecordFrame<Frame<NonData>, NonData> + ?Sized,\n        {\n            #[inline]\n            fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n                if !(target.remaining_mut() >= self.max_encoding_size()\n                    || target.remaining_mut() >= self.encoding_size())\n                {\n                    return Err(Signals::CONGESTION);\n                }\n                let frame = self.clone().into();\n                target.record_frame(&frame);\n                target.put_frame(&frame);\n                Ok(PacketContent::from(self.frame_type()))\n            }\n        }\n    };\n    (impl<Target: WriteFrame<Self>> Package<Target> for $frame:ident {} $($tail:tt)*) => {\n        frame_packages!{ @imp_frame $frame }\n        frame_packages!{ @imp_frame &$frame }\n        frame_packages!{ $($tail)* }\n    };\n    (@imp_data_frame $($frame_with_data:tt)*) => {\n        impl<Target,D> Package<Target> for $($frame_with_data)*\n        where\n            Target: BufMut + RecordFrame<Frame<D>, D> + ?Sized,\n            D: ContinuousData + Clone,\n            for<'b> &'b mut Target: WriteData<D>,\n        {\n            #[inline]\n            fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n                let (frame, data) = self;\n                if !(target.remaining_mut() >= frame.max_encoding_size()\n                    || target.remaining_mut() >= frame.encoding_size())\n                {\n                    return Err(Signals::CONGESTION);\n                }\n                let frame = (frame.clone(), data.clone()).into();\n                target.record_frame(&frame);\n                target.put_frame(&frame);\n                Ok(PacketContent::from(frame.frame_type()))\n            }\n        }\n    };\n    
(impl<Target: WriteDataFrame<Self, D>, D: ContinuousData> Package<Target> for ($frame:ident, D) {} $($tail:tt)*) => {\n        frame_packages!{ @imp_data_frame ($frame, D) }\n        frame_packages!{ @imp_data_frame &($frame, D) }\n        frame_packages!{ $($tail)* }\n    };\n}\n\nframe_packages! {\n    impl<Target: WriteFrame<Self>> Package<Target> for PaddingFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for PingFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for AckFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for ConnectionCloseFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for NewTokenFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for MaxDataFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for DataBlockedFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for HandshakeDoneFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for PathChallengeFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for PathResponseFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for StreamCtlFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for ReliableFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for PunchHelloFrame {}\n    impl<Target: WriteFrame<Self>> Package<Target> for PunchDoneFrame {}\n    impl<Target: WriteDataFrame<Self, D>, D: ContinuousData> Package<Target> for (StreamFrame, D) {}\n    impl<Target: WriteDataFrame<Self, D>, D: ContinuousData> Package<Target> for (CryptoFrame, D) {}\n    impl<Target: WriteDataFrame<Self, D>, D: ContinuousData> Package<Target> for (DatagramFrame, D) {}\n}\n\npub enum Keys {\n    LongHeaderPacket {\n        keys: DirectionalKeys,\n    },\n    ShortHeaderPacket {\n        keys: DirectionalKeys,\n        key_phase: KeyPhaseBit,\n    },\n}\n\nimpl Debug for Keys {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Self::LongHeaderPacket { .. 
} => f\n                .debug_struct(\"LongHeaderPacket\")\n                .field(\"keys\", &\"...\")\n                .finish(),\n            Self::ShortHeaderPacket { key_phase, .. } => f\n                .debug_struct(\"ShortHeaderPacket\")\n                .field(\"keys\", &\"...\")\n                .field(\"key_phase\", key_phase)\n                .finish(),\n        }\n    }\n}\n\nimpl Keys {\n    fn hpk(&self) -> &dyn rustls::quic::HeaderProtectionKey {\n        match self {\n            Self::LongHeaderPacket { keys } | Self::ShortHeaderPacket { keys, .. } => {\n                keys.header.as_ref()\n            }\n        }\n    }\n\n    fn pk(&self) -> &dyn rustls::quic::PacketKey {\n        match self {\n            Self::LongHeaderPacket { keys } | Self::ShortHeaderPacket { keys, .. } => {\n                keys.packet.as_ref()\n            }\n        }\n    }\n\n    fn key_phase(&self) -> Option<KeyPhaseBit> {\n        match self {\n            Self::LongHeaderPacket { .. } => None,\n            Self::ShortHeaderPacket { key_phase, .. 
} => Some(*key_phase),\n        }\n    }\n}\n\n#[derive(Debug)]\nstruct PacketLayout {\n    hdr_len: usize,\n    len_encoding: usize,\n    pn_len: usize,\n\n    cursor: usize,\n    end: usize,\n}\n\nimpl PacketLayout {\n    pub fn payload_len(&self) -> usize {\n        self.cursor - self.hdr_len - self.len_encoding\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.payload_len() == self.pn_len\n    }\n}\n\n#[derive(Debug, CopyGetters)]\npub struct PacketInfo {\n    #[getset(get_copy = \"pub\")]\n    packet_type: Type,\n    #[getset(get_copy = \"pub\")]\n    packet_number: u64,\n    // Packets containing only frames with [`Spec::N`] are not ack-eliciting;\n    // otherwise, they are ack-eliciting.\n    #[getset(get_copy = \"pub\")]\n    ack_eliciting: bool,\n    // A Boolean that indicates whether the packet counts toward bytes in flight.\n    // See [Section 2](https://www.rfc-editor.org/rfc/rfc9002#section-2)\n    // and [Appendix A.1](https://www.rfc-editor.org/rfc/rfc9002#section-a.1)\n    // of [QUIC Recovery](https://www.rfc-editor.org/rfc/rfc9002).\n    //\n    // Packets containing only frames with [`Spec::C`] do not\n    // count toward bytes in flight for congestion control purposes.\n    #[getset(get_copy = \"pub\")]\n    in_flight: bool,\n    // Packets containing only frames with [`Spec::P`] can be used to\n    // probe new network paths during connection migration.\n    #[getset(get_copy = \"pub\")]\n    probe_new_path: bool,\n    #[getset(get_copy = \"pub\")]\n    largest_ack: Option<u64>,\n}\n\nimpl PacketInfo {\n    pub fn new(ty: Type, pn: u64) -> Self {\n        Self {\n            packet_type: ty,\n            packet_number: pn,\n            ack_eliciting: false,\n            in_flight: false,\n            probe_new_path: false,\n            largest_ack: None,\n        }\n    }\n\n    pub fn epoch(&self) -> Option<Epoch> {\n        match self.packet_type() {\n            Type::Long(long) => match long {\n                
r#type::long::Type::VersionNegotiation => None,\n                r#type::long::Type::V1(version) => match version.0 {\n                    r#type::long::v1::Type::Initial => Some(Epoch::Initial),\n                    r#type::long::v1::Type::ZeroRtt => Some(Epoch::Data),\n                    r#type::long::v1::Type::Handshake => Some(Epoch::Handshake),\n                    r#type::long::v1::Type::Retry => None,\n                },\n            },\n            Type::Short(..) => Some(Epoch::Data),\n        }\n    }\n\n    pub fn add_frame<F: FrameFeature + 'static>(&mut self, frame: &F) {\n        debug_assert!(\n            frame.belongs_to(self.packet_type()),\n            \"Frame {:?} does not belong to packet type {:?}\",\n            std::any::type_name_of_val(frame),\n            self.packet_type()\n        );\n        self.ack_eliciting |= !frame.specs().contain(Spec::NonAckEliciting);\n        self.in_flight |= !frame.specs().contain(Spec::CongestionControlFree);\n        self.probe_new_path |= frame.specs().contain(Spec::ProbeNewPath);\n        if let Some(ack_frame) = (frame as &dyn Any).downcast_ref::<AckFrame>() {\n            self.largest_ack = Some(match self.largest_ack {\n                Some(largest_ack) => largest_ack.max(ack_frame.largest()),\n                None => ack_frame.largest(),\n            });\n        }\n    }\n}\n\npub trait RecordFrame<F, D: ContinuousData> {\n    fn record_frame(&mut self, frame: &F);\n}\n\nimpl<D: ContinuousData> RecordFrame<Frame<D>, D> for PacketInfo {\n    fn record_frame(&mut self, frame: &Frame<D>) {\n        debug_assert!(\n            frame.belongs_to(self.packet_type(),),\n            \"Frame {:?} does not belong to packet type {:?}\",\n            frame.frame_type(),\n            self.packet_type()\n        );\n        self.ack_eliciting |= !frame.specs().contain(Spec::NonAckEliciting);\n        self.in_flight |= !frame.specs().contain(Spec::CongestionControlFree);\n        self.probe_new_path |= 
frame.specs().contain(Spec::ProbeNewPath);\n        if let Frame::Ack(ack_frame) = frame {\n            self.largest_ack = Some(match self.largest_ack {\n                Some(largest_ack) => largest_ack.max(ack_frame.largest()),\n                None => ack_frame.largest(),\n            });\n        }\n    }\n}\n\nimpl<F, D: ContinuousData> RecordFrame<F, D> for PacketWriter<'_>\nwhere\n    PacketInfo: RecordFrame<F, D>,\n{\n    #[inline]\n    fn record_frame(&mut self, frame: &F) {\n        self.pkt_info.record_frame(frame);\n    }\n}\n\npub struct PacketWriter<'b> {\n    keys: Keys,\n    layout: PacketLayout,\n    pkt_info: PacketInfo,\n    buffer: &'b mut [u8],\n}\n\nimpl<'b> PacketWriter<'b> {\n    pub fn new_long<S>(\n        header: &LongHeader<S>,\n        buffer: &'b mut [u8],\n        (actual_pn, encoded_pn): (u64, PacketNumber),\n        keys: DirectionalKeys,\n    ) -> Result<Self, Signals>\n    where\n        S: EncodeHeader,\n        LongHeader<S>: GetType,\n        for<'a> &'a mut [u8]: WriteHeader<LongHeader<S>>,\n    {\n        let hdr_len = header.size();\n        let len_encoding = header.length_encoding();\n        if buffer.len() < hdr_len + len_encoding + 20 {\n            return Err(Signals::CONGESTION);\n        }\n\n        let (mut hdr_buf, mut payload_buf) = buffer.split_at_mut(hdr_len + len_encoding);\n        hdr_buf.put_header(header);\n        payload_buf.put_packet_number(encoded_pn);\n\n        let cursor = hdr_len + len_encoding + encoded_pn.size();\n        Ok(Self {\n            layout: PacketLayout {\n                hdr_len,\n                len_encoding,\n                pn_len: encoded_pn.size(),\n                cursor,\n                end: buffer.len() - keys.packet.tag_len(),\n            },\n            keys: Keys::LongHeaderPacket { keys },\n            pkt_info: PacketInfo::new(header.get_type(), actual_pn),\n            buffer,\n        })\n    }\n\n    pub fn new_short(\n        header: &OneRttHeader,\n        buffer: 
&'b mut [u8],\n        (actual_pn, encoded_pn): (u64, PacketNumber),\n        keys: DirectionalKeys,\n        key_phase: KeyPhaseBit,\n    ) -> Result<Self, Signals> {\n        let hdr_len = header.size();\n        if buffer.len() < hdr_len + 20 {\n            return Err(Signals::CONGESTION);\n        }\n\n        let (mut hdr_buf, mut payload_buf) = buffer.split_at_mut(hdr_len);\n        hdr_buf.put_header(header);\n        payload_buf.put_packet_number(encoded_pn);\n        Ok(Self {\n            layout: PacketLayout {\n                hdr_len,\n                len_encoding: 0,\n                pn_len: encoded_pn.size(),\n                cursor: hdr_len + encoded_pn.size(),\n                end: buffer.len() - keys.packet.tag_len(),\n            },\n            keys: Keys::ShortHeaderPacket { keys, key_phase },\n            pkt_info: PacketInfo::new(header.get_type(), actual_pn),\n            buffer,\n        })\n    }\n\n    #[inline]\n    pub fn buffer(&self) -> &[u8] {\n        self.buffer\n    }\n\n    #[inline]\n    pub fn is_short_header(&self) -> bool {\n        self.keys.key_phase().is_some()\n    }\n\n    #[inline]\n    pub fn packet_type(&self) -> Type {\n        self.pkt_info.packet_type()\n    }\n\n    #[inline]\n    pub fn packet_number(&self) -> u64 {\n        self.pkt_info.packet_number\n    }\n\n    #[inline]\n    pub fn is_ack_eliciting(&self) -> bool {\n        self.pkt_info.ack_eliciting\n    }\n\n    #[inline]\n    pub fn in_flight(&self) -> bool {\n        self.pkt_info.in_flight\n    }\n\n    #[inline]\n    pub fn is_probe_new_path(&self) -> bool {\n        self.pkt_info.probe_new_path\n    }\n\n    #[inline]\n    pub fn payload_len(&self) -> usize {\n        self.layout.payload_len()\n    }\n\n    #[inline]\n    pub fn tag_len(&self) -> usize {\n        self.keys.pk().tag_len()\n    }\n\n    #[inline]\n    pub fn is_empty(&self) -> bool {\n        self.layout.is_empty()\n    }\n\n    #[inline]\n    pub fn packet_len(&self) -> usize {\n      
  self.layout.cursor + self.keys.pk().tag_len()\n    }\n}\n\nunsafe impl BufMut for PacketWriter<'_> {\n    #[inline]\n    fn remaining_mut(&self) -> usize {\n        self.layout.end - self.layout.cursor\n    }\n\n    #[inline]\n    unsafe fn advance_mut(&mut self, cnt: usize) {\n        if self.remaining_mut() < cnt {\n            panic!(\n                \"advance out of bounds: the len is {} but advancing by {}\",\n                self.remaining_mut(),\n                cnt\n            );\n        }\n\n        self.layout.cursor += cnt;\n    }\n\n    #[inline]\n    fn chunk_mut(&mut self) -> &mut UninitSlice {\n        let range = self.layout.cursor..self.layout.end;\n        UninitSlice::new(&mut self.buffer[range])\n    }\n}\n\npub trait AssemblePacket: BufMut {\n    #[inline]\n    fn assemble_packet(\n        &mut self,\n        package: &mut dyn Package<Self>,\n    ) -> Result<PacketContent, Signals> {\n        package.dump(self)\n    }\n\n    fn encrypt_and_protect_packet(self) -> (usize, PacketInfo);\n}\n\nimpl AssemblePacket for PacketWriter<'_> {\n    fn encrypt_and_protect_packet(self) -> (usize, PacketInfo) {\n        use crate::{\n            packet::encrypt::*,\n            varint::{EncodeBytes, VarInt, WriteVarInt},\n        };\n\n        let Self {\n            keys,\n            layout,\n            pkt_info,\n            buffer,\n        } = self;\n\n        let payload_len = layout.payload_len();\n        let tag_len = keys.pk().tag_len();\n\n        let actual_pn = pkt_info.packet_number;\n        let pn_len = layout.pn_len;\n        let pkt_size = layout.cursor + tag_len;\n\n        assert!(\n            payload_len + tag_len >= 20,\n            \"The payload and tag need at least 20 bytes to have enough samples for the packet header protection.\"\n        );\n\n        if let Some(key_phase) = keys.key_phase() {\n            encode_short_first_byte(&mut buffer[0], pn_len, key_phase);\n\n            let pk = keys.pk();\n            let 
payload_offset = layout.hdr_len;\n            let body_offset = payload_offset + pn_len;\n            encrypt_packet(pk, actual_pn, &mut buffer[..pkt_size], body_offset);\n\n            let hpk = keys.hpk();\n            protect_header(hpk, &mut buffer[..pkt_size], payload_offset, pn_len);\n        } else {\n            let packet_len = payload_len + tag_len;\n            let len_buffer_range = layout.hdr_len..layout.hdr_len + layout.len_encoding;\n            let mut len_buf = &mut buffer[len_buffer_range];\n            len_buf.encode_varint(&VarInt::try_from(packet_len).unwrap(), EncodeBytes::Two);\n\n            encode_long_first_byte(&mut buffer[0], pn_len);\n\n            let pk = keys.pk();\n            let payload_offset = layout.hdr_len + layout.len_encoding;\n            let body_offset = payload_offset + pn_len;\n            encrypt_packet(pk, actual_pn, &mut buffer[..pkt_size], body_offset);\n\n            let hpk = keys.hpk();\n            protect_header(hpk, &mut buffer[..pkt_size], payload_offset, pn_len);\n        }\n        (pkt_size, pkt_info)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::{frame::CryptoFrame, varint::VarInt};\n\n    struct TransparentKeys;\n\n    impl rustls::quic::PacketKey for TransparentKeys {\n        fn decrypt_in_place<'a>(\n            &self,\n            _packet_number: u64,\n            _header: &[u8],\n            payload: &'a mut [u8],\n        ) -> Result<&'a [u8], rustls::Error> {\n            Ok(&payload[..payload.len() - self.tag_len()])\n        }\n\n        fn encrypt_in_place(\n            &self,\n            _packet_number: u64,\n            _header: &[u8],\n            _payload: &mut [u8],\n        ) -> Result<rustls::quic::Tag, rustls::Error> {\n            Ok(rustls::quic::Tag::from(\"transparent_keys\".as_bytes()))\n        }\n\n        fn confidentiality_limit(&self) -> u64 {\n            0\n        }\n\n        fn integrity_limit(&self) -> u64 {\n       
     0\n        }\n\n        fn tag_len(&self) -> usize {\n            16\n        }\n    }\n\n    impl rustls::quic::HeaderProtectionKey for TransparentKeys {\n        fn decrypt_in_place(\n            &self,\n            _sample: &[u8],\n            _first_byte: &mut u8,\n            _payload: &mut [u8],\n        ) -> Result<(), rustls::Error> {\n            Ok(())\n        }\n\n        fn encrypt_in_place(\n            &self,\n            _sample: &[u8],\n            _first_byte: &mut u8,\n            _payload: &mut [u8],\n        ) -> Result<(), rustls::Error> {\n            Ok(())\n        }\n\n        fn sample_len(&self) -> usize {\n            20\n        }\n    }\n\n    #[test]\n    fn test_initial_packet_writer() {\n        let mut buffer = vec![0u8; 128];\n        let header = LongHeaderBuilder::with_cid(\n            ConnectionId::from_slice(\"testdcid\".as_bytes()),\n            ConnectionId::from_slice(\"testscid\".as_bytes()),\n        )\n        .initial(b\"test_token\".to_vec());\n\n        let pn = (0, PacketNumber::encode(0, 0));\n\n        let keys = DirectionalKeys {\n            packet: Arc::new(TransparentKeys),\n            header: Arc::new(TransparentKeys),\n        };\n\n        let mut writer = PacketWriter::new_long(&header, &mut buffer, pn, keys).unwrap();\n        let frame = CryptoFrame::new(VarInt::from_u32(0), VarInt::from_u32(12));\n        writer\n            .assemble_packet(&mut (frame, \"client_hello\".as_bytes()))\n            .unwrap();\n        assert!(writer.is_ack_eliciting());\n        assert!(writer.in_flight());\n\n        let (sent_bytes, final_packet_layout) = writer.encrypt_and_protect_packet();\n        assert!(final_packet_layout.ack_eliciting());\n        assert!(final_packet_layout.in_flight());\n        assert_eq!(sent_bytes, 69);\n        assert_eq!(\n            &buffer[..sent_bytes],\n            [\n                // initial packet:\n                // header form (1) = 1, long header\n                // 
fixed bit (1) = 1,\n                // long packet type (2) = 0, initial packet\n                // reserved bits (2) = 0,\n                // packet number length (2) = 1, 2 bytes\n                193, // first byte\n                0, 0, 0, 1, // quic version\n                // destination connection id, \"testdcid\"\n                8, // dcid length\n                b't', b'e', b's', b't', b'd', b'c', b'i', b'd', // dcid bytes\n                // source connection id, \"testscid\"\n                8, // scid length\n                b't', b'e', b's', b't', b's', b'c', b'i', b'd', // scid bytes\n                10,   // token length, \"test_token\" is 10 bytes\n                b't', b'e', b's', b't', b'_', b't', b'o', b'k', b'e', b'n', // token bytes\n                64, 33, // payload length, 2 bytes encoded varint\n                0, 0, // encoded packet number\n                // crypto frame header\n                6,  // crypto frame type\n                0,  // crypto frame offset\n                12, // crypto frame length\n                // crypto frame data, \"client_hello\"\n                b'c', b'l', b'i', b'e', b'n', b't', b'_', b'h', b'e', b'l', b'l', b'o',\n                // tag, \"transparent_keys\"\n                b't', b'r', b'a', b'n', b's', b'p', b'a', b'r', b'e', b'n', b't', b'_', b'k', b'e',\n                b'y', b's',\n            ]\n            .as_slice()\n        );\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/keys.rs",
    "content": "use std::{\n    future::Future,\n    ops::DerefMut,\n    pin::Pin,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n};\n\nuse futures::FutureExt;\nuse rustls::quic::{\n    DirectionalKeys as RustlsDirectionalKeys, HeaderProtectionKey, Keys as RustlsKeys, PacketKey,\n    Secrets,\n};\n\n/// Keys used to communicate in a single direction\n#[derive(Clone)]\npub struct DirectionalKeys {\n    /// Encrypts or decrypts a packet's headers\n    pub header: Arc<dyn HeaderProtectionKey>,\n    /// Encrypts or decrypts the payload of a packet\n    pub packet: Arc<dyn PacketKey>,\n}\n\nimpl From<RustlsDirectionalKeys> for DirectionalKeys {\n    fn from(keys: RustlsDirectionalKeys) -> Self {\n        Self {\n            header: keys.header.into(),\n            packet: keys.packet.into(),\n        }\n    }\n}\n\n/// Complete set of keys used to communicate with the peer\n#[derive(Clone)]\npub struct Keys {\n    /// Encrypts outgoing packets\n    pub local: DirectionalKeys,\n    /// Decrypts incoming packets\n    pub remote: DirectionalKeys,\n}\n\nimpl From<RustlsKeys> for Keys {\n    fn from(keys: RustlsKeys) -> Self {\n        Self {\n            local: keys.local.into(),\n            remote: keys.remote.into(),\n        }\n    }\n}\n\nuse super::KeyPhaseBit;\nuse crate::role::Role;\n\n#[derive(Clone)]\nenum KeysState<K> {\n    Pending(Option<Waker>),\n    Ready(K),\n    Invalid,\n}\n\nimpl<K> KeysState<K> {\n    fn set(&mut self, keys: K) {\n        match self {\n            KeysState::Pending(waker) => {\n                if let Some(waker) = waker.take() {\n                    waker.wake();\n                }\n                *self = KeysState::Ready(keys);\n            }\n            KeysState::Ready(_) => unreachable!(\"KeysState::set called twice\"),\n            KeysState::Invalid => unreachable!(\"KeysState::set called after invalidation\"),\n        }\n    }\n\n    fn get(&mut self) -> Option<&K> {\n        match self {\n            
KeysState::Ready(keys) => Some(keys),\n            KeysState::Pending(..) | KeysState::Invalid => None,\n        }\n    }\n\n    fn invalid(&mut self) -> Option<K> {\n        match std::mem::replace(self, KeysState::Invalid) {\n            KeysState::Pending(waker) => {\n                if let Some(waker) = waker {\n                    waker.wake();\n                }\n                None\n            }\n            KeysState::Ready(keys) => Some(keys),\n            KeysState::Invalid => None,\n        }\n    }\n}\n\nimpl<K: Unpin + Clone> Future for KeysState<K> {\n    type Output = Option<K>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        match self.get_mut() {\n            KeysState::Pending(waker) => {\n                if waker\n                    .as_ref()\n                    .is_some_and(|waker| !waker.will_wake(cx.waker()))\n                {\n                    unreachable!(\n                        \"Try to get remote keys from multiple tasks! 
This is a bug, please report it.\"\n                    )\n                }\n                *waker = Some(cx.waker().clone());\n                Poll::Pending\n            }\n            KeysState::Ready(keys) => Poll::Ready(Some(keys.clone())),\n            KeysState::Invalid => Poll::Ready(None),\n        }\n    }\n}\n\n/// Keys for long header packets: the encryption and decryption keys for those packets,\n/// as well as the keys for adding and removing long packet header protection.\n///\n/// - When sending, obtain the local keys for packet encryption and adding header protection.\n///   If the keys are not ready, skip sending the packet of this level immediately.\n/// - When receiving a packet and decrypting it, obtain the remote keys for removing header\n///   protection and packet decryption.\n///   If the keys are not ready, wait asynchronously until the keys are ready before continuing.\n///\n/// ## Note\n///\n/// The keys for 1-RTT packets are a separate structure, see [`ArcOneRttKeys`].\n#[derive(Clone)]\npub struct ArcKeys(Arc<Mutex<KeysState<Keys>>>);\n\nimpl ArcKeys {\n    fn lock_guard(&self) -> MutexGuard<'_, KeysState<Keys>> {\n        self.0.lock().unwrap()\n    }\n\n    /// Create a Pending state [`ArcKeys`].\n    ///\n    /// For a new QUIC connection, initially only the Initial key is known, and the 0-RTT\n    /// and Handshake keys are unknown.\n    /// Therefore, the 0-RTT and Handshake keys can be created in a Pending state, waiting\n    /// for updates during the TLS handshake process.\n    pub fn new_pending() -> Self {\n        Self(Arc::new(KeysState::Pending(None).into()))\n    }\n\n    /// Create an [`ArcKeys`] with a specified [`Keys`].\n    ///\n    /// Since the initial keys are known from the start, this method can be used to create\n    /// their [`ArcKeys`].\n    pub fn with_keys(keys: Keys) -> Self {\n        Self(Arc::new(KeysState::Ready(keys).into()))\n    }\n\n    /// Asynchronously obtain the remote keys for removing header protection and packet 
decryption.\n    ///\n    /// Returns a [`GetRemoteKeys`], which implements the `Future` trait.\n    ///\n    /// ## Example\n    ///\n    /// The following is only a demonstration.\n    /// In fact, removing header protection and decrypting packets are far more complex!\n    ///\n    /// ```\n    /// use qbase::packet::keys::ArcKeys;\n    ///\n    /// async fn decrypt_demo(keys: ArcKeys, cipher_text: &mut [u8]) {\n    ///     let Some(keys) = keys.get_remote_keys().await else {\n    ///         return;\n    ///     };\n    ///\n    ///     let hpk = keys.remote.header.as_ref();\n    ///     let pk = keys.remote.packet.as_ref();\n    ///\n    ///     // use hpk to remove header protection...\n    ///     // use pk to decrypt packet body...\n    /// }\n    /// ```\n    pub fn get_remote_keys(&self) -> GetRemoteKeys<'_, Keys> {\n        GetRemoteKeys(&self.0)\n    }\n\n    /// Get the local keys for packet encryption and adding header protection.\n    /// If the keys are not ready, just return None immediately.\n    ///\n    /// ## Example\n    ///\n    /// The following is only a demonstration.\n    /// In fact, encrypting packets and adding header protection are far more complex!\n    ///\n    /// ```\n    /// use qbase::packet::keys::ArcKeys;\n    ///\n    /// fn encrypt_demo(keys: ArcKeys, plain_text: &mut [u8]) {\n    ///     let Some(keys) = keys.get_local_keys() else {\n    ///         return;\n    ///     };\n    ///\n    ///     let hpk = keys.local.header.as_ref();\n    ///     let pk = keys.local.packet.as_ref();\n    ///\n    ///     // use pk to encrypt packet body...\n    ///     // use hpk to add header protection...\n    /// }\n    /// ```\n    pub fn get_local_keys(&self) -> Option<Keys> {\n        self.lock_guard().get().cloned()\n    }\n\n    /// Set the keys to the [`ArcKeys`].\n    ///\n    /// As the TLS handshake progresses, higher-level keys will be obtained.\n    /// These keys are set to the related [`ArcKeys`] through this method, and\n    /// its 
internal waker will be awakened to notify the packet decryption task\n    /// to continue, if the internal waker was registered.\n    pub fn set_keys(&self, keys: Keys) {\n        self.lock_guard().set(keys);\n    }\n\n    /// Retire the keys, which means that the keys are no longer available.\n    ///\n    /// This is used when the connection enters the closing state or draining state.\n    /// Especially in the closing state, the returned keys are used to generate the final packet\n    /// containing the ConnectionClose frame, and to decrypt the data packets received from the\n    /// peer for a while.\n    pub fn invalid(&self) -> Option<Keys> {\n        self.lock_guard().invalid()\n    }\n}\n\n/// To obtain the remote keys from [`ArcKeys`] or [`ArcZeroRttKeys`] for removing long header protection\n/// and packet decryption.\npub struct GetRemoteKeys<'k, K>(&'k Mutex<KeysState<K>>);\n\nimpl<K: Unpin + Clone> Future for GetRemoteKeys<'_, K> {\n    type Output = Option<K>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        Pin::new(self.0.lock().unwrap()).poll_unpin(cx)\n    }\n}\n\n#[derive(Clone)]\npub struct ArcZeroRttKeys {\n    role: Role,\n    keys: Arc<Mutex<KeysState<DirectionalKeys>>>,\n}\n\nimpl ArcZeroRttKeys {\n    pub fn new_pending(role: Role) -> Self {\n        Self {\n            role,\n            keys: Arc::new(Mutex::new(KeysState::Pending(None))),\n        }\n    }\n\n    fn lock_guard(&self) -> MutexGuard<'_, KeysState<DirectionalKeys>> {\n        self.keys.lock().unwrap()\n    }\n\n    pub fn set_keys(&self, keys: DirectionalKeys) {\n        self.lock_guard().set(keys);\n    }\n\n    pub fn get_encrypt_keys(&self) -> Option<DirectionalKeys> {\n        match self.role {\n            Role::Client => self.lock_guard().get().cloned(),\n            Role::Server => None,\n        }\n    }\n\n    pub fn get_decrypt_keys(&self) -> Option<GetRemoteKeys<'_, DirectionalKeys>> {\n        match self.role {\n            
Role::Client => None,\n            Role::Server => Some(GetRemoteKeys(&self.keys)),\n        }\n    }\n\n    pub fn invalid(&self) -> Option<DirectionalKeys> {\n        self.lock_guard().invalid()\n    }\n}\n\n/// The packet encryption and decryption keys for 1-RTT packets,\n/// which may still change after negotiation between the two endpoints.\n///\n/// See [key update](https://www.rfc-editor.org/rfc/rfc9001#name-key-update)\n/// of [RFC 9001](https://www.rfc-editor.org/rfc/rfc9001) for more details.\npub struct OneRttPacketKeys {\n    cur_phase: KeyPhaseBit,\n    secrets: Secrets,\n    // TODO: store three\n    remote: [Option<Arc<dyn PacketKey>>; 2],\n    local: Arc<dyn PacketKey>,\n}\n\nimpl OneRttPacketKeys {\n    /// Create a new [`OneRttPacketKeys`].\n    ///\n    /// The TLS handshake session must exchange enough information to generate the 1-RTT keys.\n    fn new(remote: Box<dyn PacketKey>, local: Box<dyn PacketKey>, secrets: Secrets) -> Self {\n        Self {\n            cur_phase: KeyPhaseBit::default(),\n            secrets,\n            remote: [Some(Arc::from(remote)), None],\n            local: Arc::from(local),\n        }\n    }\n\n    /// Proactively update the 1-RTT packet key locally,\n    /// or update it after being informed by the peer.\n    ///\n    /// The key phase bit will be toggled and sent to the peer,\n    /// informing the peer to update the key to the next 1-RTT packet key too.\n    pub fn update(&mut self) {\n        self.cur_phase.toggle();\n        let key_set = self.secrets.next_packet_keys();\n        self.remote[self.cur_phase.as_index()] = Some(Arc::from(key_set.remote));\n        self.local = Arc::from(key_set.local);\n    }\n\n    /// The old key must be phased out within a certain period of time.\n    ///\n    /// If the old one doesn't go, the new one won't come.\n    /// If it is not phased out, it will be mistaken for a new key and\n    /// fail to decrypt packets in the future.\n    pub fn phase_out(&mut self) {\n        
self.remote[(!self.cur_phase).as_index()].take();\n    }\n\n    /// Get the remote key to decrypt the incoming 1-RTT packet.\n    /// If the key phase is not the current key phase, update the key, see [`Self::update`].\n    ///\n    /// Returns an `Arc<dyn PacketKey>` to decrypt the incoming 1-RTT packet.\n    pub fn get_remote(&mut self, key_phase: KeyPhaseBit, _pn: u64) -> Arc<dyn PacketKey> {\n        if key_phase != self.cur_phase && self.remote[key_phase.as_index()].is_none() {\n            self.update();\n        }\n        self.remote[key_phase.as_index()].clone().unwrap()\n    }\n\n    /// Get the current local key to encrypt the outgoing packet.\n    ///\n    /// Returns an `Arc<dyn PacketKey>` to encrypt the outgoing 1-RTT packet.\n    pub fn get_local(&self) -> (KeyPhaseBit, Arc<dyn PacketKey>) {\n        (self.cur_phase, self.local.clone())\n    }\n}\n\n/// The packet encryption and decryption keys for 1-RTT packets, which may still\n/// change based on the KeyPhase bit in received packets, or be updated\n/// proactively by the local endpoint.\n///\n/// For performance reasons, the second element of the tuple redundantly stores\n/// the tag length of the local packet key's underlying AEAD algorithm.\n#[derive(Clone)]\npub struct ArcOneRttPacketKeys(Arc<(Mutex<OneRttPacketKeys>, usize)>);\n\nimpl ArcOneRttPacketKeys {\n    /// Obtain exclusive access to the 1-RTT packet keys.\n    /// During the exclusive period of encrypting or decrypting packets,\n    /// the keys must not be updated elsewhere.\n    pub fn lock_guard(&self) -> MutexGuard<'_, OneRttPacketKeys> {\n        self.0.0.lock().unwrap()\n    }\n\n    /// Get the length of the tag of the packet key's underlying AEAD algorithm.\n    ///\n    /// For example, when collecting data to send, the buffer needs to reserve\n    /// the tag length space to fill in the integrity checksum.\n    /// After collecting the data, encryption will be performed, and exclusive\n    /// access will be obtained during 
encryption.\n    /// There is no need to acquire the lock at the beginning to get the tag\n    /// length, because nothing might be sent later, and the task might be canceled.\n    /// This would save the initial locking overhead.\n    /// Keeping a redundant copy of the tag length that can be read without locking\n    /// improves performance.\n    pub fn tag_len(&self) -> usize {\n        self.0.1\n    }\n}\n\n/// The header protection keys for 1-RTT packets.\n#[derive(Clone)]\npub struct HeaderProtectionKeys {\n    pub local: Arc<dyn HeaderProtectionKey>,\n    pub remote: Arc<dyn HeaderProtectionKey>,\n}\n\nenum OneRttKeysState {\n    Pending(Option<Waker>),\n    Ready {\n        hpk: HeaderProtectionKeys,\n        pk: ArcOneRttPacketKeys,\n    },\n    Invalid,\n}\n\n/// 1-RTT packet keys, for packet encryption and decryption for 1-RTT packets,\n/// as well as keys for adding and removing 1-RTT packet header protection.\n///\n/// Unlike [`ArcKeys`], the HeaderProtectionKey for 1-RTT keys does not change,\n/// but the PacketKey may still be updated with changes in the KeyPhase bit.\n/// Therefore, the HeaderProtectionKey and PacketKey need to be managed separately.\n#[derive(Clone)]\npub struct ArcOneRttKeys(Arc<Mutex<OneRttKeysState>>);\n\nimpl ArcOneRttKeys {\n    fn lock_guard(&self) -> MutexGuard<'_, OneRttKeysState> {\n        self.0.lock().unwrap()\n    }\n\n    /// Create a Pending state [`ArcOneRttKeys`], waiting for the keys to become ready\n    /// during the TLS handshake.\n    pub fn new_pending() -> Self {\n        Self(Arc::new(OneRttKeysState::Pending(None).into()))\n    }\n\n    /// Set the keys to the [`ArcOneRttKeys`].\n    ///\n    /// As the TLS handshake progresses, 1-RTT keys will eventually be obtained.\n    /// Then its internal waker will be awakened to notify the packet\n    /// decryption task to continue, if the internal waker was registered.\n    pub fn set_keys(&self, keys: RustlsKeys, secrets: 
Secrets) {\n        let mut state = self.lock_guard();\n        match &mut *state {\n            OneRttKeysState::Pending(waker) => {\n                let hpk = HeaderProtectionKeys {\n                    local: Arc::from(keys.local.header),\n                    remote: Arc::from(keys.remote.header),\n                };\n                let tag_len = keys.local.packet.tag_len();\n                let pk = ArcOneRttPacketKeys(Arc::new((\n                    Mutex::new(OneRttPacketKeys::new(\n                        keys.remote.packet,\n                        keys.local.packet,\n                        secrets,\n                    )),\n                    tag_len,\n                )));\n                if let Some(w) = waker.take() {\n                    w.wake();\n                }\n                *state = OneRttKeysState::Ready { hpk, pk };\n            }\n            OneRttKeysState::Ready { .. } => panic!(\"set_keys called twice\"),\n            OneRttKeysState::Invalid => panic!(\"set_keys called after invalidation\"),\n        }\n    }\n\n    pub fn invalid(&self) -> Option<(HeaderProtectionKeys, ArcOneRttPacketKeys)> {\n        let mut state = self.lock_guard();\n        match std::mem::replace(state.deref_mut(), OneRttKeysState::Invalid) {\n            OneRttKeysState::Pending(rx_waker) => {\n                if let Some(waker) = rx_waker {\n                    waker.wake();\n                }\n                None\n            }\n            OneRttKeysState::Ready { hpk, pk } => Some((hpk, pk)),\n            OneRttKeysState::Invalid => unreachable!(),\n        }\n    }\n\n    /// Get the local keys for packet encryption and adding header protection.\n    /// If the keys are not ready, just return None immediately.\n    ///\n    /// Return a tuple of HeaderProtectionKey and OneRttPacketKeys.  
\n    /// The OneRttPacketKeys need to be locked during the entire packet encryption process.\n    pub fn get_local_keys(&self) -> Option<(Arc<dyn HeaderProtectionKey>, ArcOneRttPacketKeys)> {\n        let mut keys = self.lock_guard();\n        match &mut *keys {\n            OneRttKeysState::Ready { hpk, pk, .. } => Some((hpk.local.clone(), pk.clone())),\n            _ => None,\n        }\n    }\n\n    pub fn remote_keys(&self) -> Option<(Arc<dyn HeaderProtectionKey>, ArcOneRttPacketKeys)> {\n        match &mut *self.lock_guard() {\n            OneRttKeysState::Ready { hpk, pk, .. } => Some((hpk.remote.clone(), pk.clone())),\n            _ => None,\n        }\n    }\n\n    /// Asynchronously obtain the remote keys for removing header protection and packet decryption.\n    ///\n    /// Returns a [`GetRemoteOneRttKeys`], which implements the `Future` trait.\n    pub fn get_remote_keys(&self) -> GetRemoteOneRttKeys<'_> {\n        GetRemoteOneRttKeys(self)\n    }\n}\n\n/// To obtain the remote key from [`ArcOneRttKeys`] for removing 1-RTT header\n/// protection and packet decryption.\npub struct GetRemoteOneRttKeys<'k>(&'k ArcOneRttKeys);\n\nimpl Future for GetRemoteOneRttKeys<'_> {\n    type Output = Option<(Arc<dyn HeaderProtectionKey>, ArcOneRttPacketKeys)>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        let mut keys = self.0.lock_guard();\n        match &mut *keys {\n            OneRttKeysState::Pending(waker) => {\n                if waker\n                    .as_ref()\n                    .is_some_and(|waker| !waker.will_wake(cx.waker()))\n                {\n                    unreachable!(\n                        \"Try to get remote keys from multiple tasks! This is a bug, please report it.\"\n                    )\n                }\n                *waker = Some(cx.waker().clone());\n                Poll::Pending\n            }\n            OneRttKeysState::Ready { hpk, pk, .. 
} => {\n                Poll::Ready(Some((hpk.remote.clone(), pk.clone())))\n            }\n            OneRttKeysState::Invalid => Poll::Ready(None),\n        }\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/number.rs",
    "content": "use std::cmp::max;\n\nuse bytes::BufMut;\nuse thiserror::Error;\n\n/// An encoded or undecoded packet number\n///\n/// The actual packet number is an integer in the range 0 to 2^62 - 1 and encoded in 1 to 4 bytes.\n///\n/// See [packet numbers](https://www.rfc-editor.org/rfc/rfc9000.html#name-packet-numbers) and\n/// [packet number encoding and decoding](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.1)\n/// of [RFC 9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Copy, Clone, Eq, PartialEq)]\npub enum PacketNumber {\n    U8(u8),\n    U16(u16),\n    U24(u32),\n    U32(u32),\n}\n\n#[derive(Debug, Error, PartialEq, Eq)]\npub enum InvalidPacketNumber {\n    #[error(\"Packet number too old\")]\n    TooOld,\n    #[error(\"Packet number too large\")]\n    TooLarge,\n    #[error(\"Packet with this number has been received\")]\n    Duplicate,\n}\n\n/// Implemented for buffers; used to write the packet number into the buffer.\npub trait WritePacketNumber {\n    /// Write the encoded packet number to the buffer.\n    fn put_packet_number(&mut self, pn: PacketNumber);\n}\n\nimpl<T: BufMut> WritePacketNumber for T {\n    fn put_packet_number(&mut self, pn: PacketNumber) {\n        use self::PacketNumber::*;\n        match pn {\n            U8(x) => self.put_u8(x),\n            U16(x) => self.put_u16(x),\n            U24(x) => {\n                self.put_u8((x >> 16) as u8);\n                self.put_u16(x as u16);\n            }\n            U32(x) => self.put_u32(x),\n        }\n    }\n}\n\n/// Parse the packet number from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n///\n/// ## Example\n///\n/// ```\n/// use qbase::packet::number::{PacketNumber, take_pn_len};\n///\n/// let buf = [0x01, 0x00];\n/// assert_eq!(\n///     (&[][..], PacketNumber::U16(1 << 8)),\n///     take_pn_len(2)(&buf).unwrap()\n/// );\n/// ```\npub fn take_pn_len(pn_len: u8) -> impl FnMut(&[u8]) 
-> nom::IResult<&[u8], PacketNumber> {\n    use nom::{\n        Parser,\n        combinator::map,\n        number::complete::{be_u8, be_u16, be_u24, be_u32},\n    };\n    move |input: &[u8]| match pn_len {\n        1 => map(be_u8, PacketNumber::U8).parse(input),\n        2 => map(be_u16, PacketNumber::U16).parse(input),\n        3 => map(be_u24, PacketNumber::U24).parse(input),\n        4 => map(be_u32, PacketNumber::U32).parse(input),\n        _ => unreachable!(),\n    }\n}\n\nimpl PacketNumber {\n    /// Encode the packet number, based on the maximum confirmed packet number.\n    ///\n    /// The size of the packet number encoding is at least one bit more than the\n    /// base-2 logarithm of the number of contiguous unacknowledged packet numbers\n    ///\n    /// See [Section 17.1-5](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.1-5) and\n    /// [Appendix A.2](https://www.rfc-editor.org/rfc/rfc9000.html#section-a.2)\n    /// for more details.\n    pub fn encode(pn: u64, largest_acked: u64) -> Self {\n        // Minimum 16-bit PN encoding ensures delayed packets on slower paths remain decodable\n        let range = max((pn - largest_acked) * 2, (1 << 16) - 1);\n        if range < 1 << 8 {\n            Self::U8(pn as u8)\n        } else if range < 1 << 16 {\n            Self::U16(pn as u16)\n        } else if range < 1 << 24 {\n            Self::U24(pn as u32)\n        } else if range < 1 << 32 {\n            Self::U32(pn as u32)\n        } else {\n            panic!(\"packet number too large to encode\")\n        }\n    }\n\n    /// Return the size of the packet number encoding.\n    pub fn size(self) -> usize {\n        use self::PacketNumber::*;\n        match self {\n            U8(_) => 1,\n            U16(_) => 2,\n            U24(_) => 3,\n            U32(_) => 4,\n        }\n    }\n\n    /// Decode the packet number after header protection has been removed.\n    ///\n    /// The packet number is decoded based on the largest received packet 
number.\n    /// The next expected packet is the largest received packet number plus one.\n    ///\n    /// See [Section 17.1-7](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.1-7) and\n    /// [Section A.3](https://www.rfc-editor.org/rfc/rfc9000.html#section-a.3)\n    /// for more details.\n    pub fn decode(self, expected: u64) -> u64 {\n        use self::PacketNumber::*;\n\n        let (truncated, nbits) = match self {\n            U8(x) => (u64::from(x), 8),\n            U16(x) => (u64::from(x), 16),\n            U24(x) => (u64::from(x), 24),\n            U32(x) => (u64::from(x), 32),\n        };\n        let win = 1 << nbits;\n        let hwin = win / 2;\n        let mask = win - 1;\n        // The incoming packet number should be greater than expected - hwin and less than or equal\n        // to expected + hwin\n        //\n        // This means we can't just strip the trailing bits from expected and add the truncated\n        // because that might yield a value outside the window.\n        //\n        // The following code calculates a candidate value and makes sure it's within the packet\n        // number window.\n        let candidate = (expected & !mask) | truncated;\n        if expected.checked_sub(hwin).is_some_and(|x| candidate <= x) {\n            candidate + win\n        } else if candidate > expected + hwin && candidate > win {\n            candidate - win\n        } else {\n            candidate\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{PacketNumber, WritePacketNumber};\n\n    #[test]\n    fn test_read_packet_number() {\n        let buf = [0x00];\n        assert_eq!(\n            (&[][..], super::PacketNumber::U8(0)),\n            super::take_pn_len(1)(&buf).unwrap()\n        );\n\n        let buf = [0x01, 0x00];\n        assert_eq!(\n            (&[][..], super::PacketNumber::U16(1 << 8)),\n            super::take_pn_len(2)(&buf).unwrap()\n        );\n\n        let buf = [0x01, 0x00, 0x00];\n        assert_eq!(\n  
          (&[][..], super::PacketNumber::U24(1 << 16)),\n            super::take_pn_len(3)(&buf).unwrap()\n        );\n\n        let buf = [0x01, 0x00, 0x00, 0x00];\n        assert_eq!(\n            (&[][..], super::PacketNumber::U32(1 << 24)),\n            super::take_pn_len(4)(&buf).unwrap()\n        );\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_read_packet_number_too_large() {\n        let buf = [0x01, 0x00, 0x00, 0x00, 0x00];\n        super::take_pn_len(5)(&buf).unwrap();\n    }\n\n    #[test]\n    fn test_write_packet_number() {\n        let mut buf = vec![];\n        buf.put_packet_number(PacketNumber::encode(0, 0));\n        // Minimum 16-bit PN encoding ensures delayed packets on slower paths remain decodable\n        assert_eq!(buf, [0x00, 0x00]);\n\n        buf.clear();\n        buf.put_packet_number(PacketNumber::encode(1 << 8, 0));\n        assert_eq!(buf, [0x01, 0x00]);\n\n        buf.clear();\n        buf.put_packet_number(PacketNumber::encode(1 << 16, 0));\n        assert_eq!(buf, [0x01, 0x00, 0x00]);\n\n        buf.clear();\n        buf.put_packet_number(PacketNumber::encode(1 << 24, 0));\n        assert_eq!(buf, [0x01, 0x00, 0x00, 0x00]);\n    }\n\n    #[test]\n    fn test_encode_packet_number() {\n        let pn = super::PacketNumber::encode((1 << 31) - 1, 0);\n        assert_eq!(pn.decode(0), (1 << 31) - 1);\n\n        let pn = super::PacketNumber::encode(0, 0);\n        assert_eq!(pn.decode(0), 0);\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_encode_packet_number_overflow() {\n        PacketNumber::encode(1 << 31, 0);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/signal.rs",
    "content": "/// The spin bit in 1-RTT packets\nconst SPIN_BIT: u8 = 0x20;\n/// The key phase bit in 1-RTT packets\nconst KEY_PHASE_BIT: u8 = 0x04;\n\n/// The toggle type, which can be used to represent the spin bit and key phase bit.\n#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]\npub enum Toggle<const B: u8> {\n    /// Represents the bit is 0\n    #[default]\n    Zero,\n    /// Represents the bit is 1\n    One,\n}\n\n/// The spin bit in the 1-RTT packet.\npub type SpinBit = Toggle<SPIN_BIT>;\n\n/// The key phase bit in the 1-RTT packet.\npub type KeyPhaseBit = Toggle<KEY_PHASE_BIT>;\n\nimpl<const B: u8> Toggle<B> {\n    /// Toggle the bit, from 0 to 1, or from 1 to 0.\n    pub fn toggle(&mut self) {\n        *self = match self {\n            Toggle::Zero => Toggle::One,\n            Toggle::One => Toggle::Zero,\n        }\n    }\n\n    /// Get the value of the bit.\n    pub fn value(&self) -> u8 {\n        match self {\n            Toggle::Zero => 0,\n            Toggle::One => B,\n        }\n    }\n\n    /// Imply the bit to the byte.\n    pub fn imply(&self, byte: &mut u8) {\n        match self {\n            Toggle::Zero => *byte &= !B,\n            Toggle::One => *byte |= B,\n        }\n    }\n\n    /// Treat Toggle as an index and get the index value it represents, i.e., 0 or 1\n    pub(crate) fn as_index(&self) -> usize {\n        match self {\n            Toggle::Zero => 0,\n            Toggle::One => 1,\n        }\n    }\n}\n\nimpl<const B: u8> std::ops::Not for Toggle<B> {\n    type Output = Self;\n\n    fn not(self) -> Self::Output {\n        match self {\n            Toggle::Zero => Toggle::One,\n            Toggle::One => Toggle::Zero,\n        }\n    }\n}\n\nimpl<const B: u8> From<u8> for Toggle<B> {\n    fn from(value: u8) -> Self {\n        if value & B == 0 {\n            Toggle::Zero\n        } else {\n            Toggle::One\n        }\n    }\n}\n\nimpl<const B: u8> From<Toggle<B>> for u8 {\n    fn from(value: Toggle<B>) -> Self {\n   
     value.value()\n    }\n}\n\nimpl<const B: u8> From<bool> for Toggle<B> {\n    fn from(value: bool) -> Self {\n        if value { Toggle::One } else { Toggle::Zero }\n    }\n}\n\nimpl<const B: u8> From<Toggle<B>> for bool {\n    fn from(value: Toggle<B>) -> Self {\n        match value {\n            Toggle::Zero => false,\n            Toggle::One => true,\n        }\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/type/long/v1.rs",
    "content": "use crate::packet::{error::Error, r#type::FIXED_BIT};\n\n/// Long packet types. The 3rd and 4th bits of the first byte of the long header\n/// represent the specific packet type.\n///\n/// See [long header packet types](https://www.rfc-editor.org/rfc/rfc9000.html#name-long-header-packet-types)\n/// of [RFC 9000](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum Type {\n    /// Initial packet type, represented by 0b00\n    Initial,\n    /// 0-RTT packet type, represented by 0b01\n    ZeroRtt,\n    /// Handshake packet type, represented by 0b10\n    Handshake,\n    /// Retry packet type, represented by 0b11\n    Retry,\n}\n\nconst LONG_PACKET_TYPE_MASK: u8 = 0x30;\nconst INITIAL_PACKET_TYPE: u8 = 0x00;\nconst ZERO_RTT_PACKET_TYPE: u8 = 0x10;\nconst HANDSHAKE_PACKET_TYPE: u8 = 0x20;\nconst RETRY_PACKET_TYPE: u8 = 0x30;\n\nimpl From<Type> for u8 {\n    fn from(value: Type) -> u8 {\n        match value {\n            Type::Retry => RETRY_PACKET_TYPE,\n            Type::Initial => INITIAL_PACKET_TYPE,\n            Type::ZeroRtt => ZERO_RTT_PACKET_TYPE,\n            Type::Handshake => HANDSHAKE_PACKET_TYPE,\n        }\n    }\n}\n\nimpl TryFrom<u8> for Type {\n    type Error = Error;\n\n    fn try_from(value: u8) -> Result<Self, Self::Error> {\n        if value & FIXED_BIT == 0 {\n            return Err(Error::InvalidFixedBit);\n        }\n        match value & LONG_PACKET_TYPE_MASK {\n            INITIAL_PACKET_TYPE => Ok(Type::Initial),\n            ZERO_RTT_PACKET_TYPE => Ok(Type::ZeroRtt),\n            HANDSHAKE_PACKET_TYPE => Ok(Type::Handshake),\n            RETRY_PACKET_TYPE => Ok(Type::Retry),\n            _ => unreachable!(),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n\n    #[test]\n    fn test_try_from() {\n        use super::Type;\n        use crate::packet::error::Error;\n\n        assert_eq!(Type::try_from(0xc0), Ok(Type::Initial));\n        
assert_eq!(Type::try_from(0xd0), Ok(Type::ZeroRtt));\n        assert_eq!(Type::try_from(0xe0), Ok(Type::Handshake));\n        assert_eq!(Type::try_from(0xf0), Ok(Type::Retry));\n        assert_eq!(Type::try_from(0x00), Err(Error::InvalidFixedBit));\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/type/long.rs",
    "content": "use derive_more::Deref;\n\n/// Supports QUIC version 1; if other versions are supported in the future, add them here.\npub mod v1;\n\n/// The long packet header contains version information, so the 32-bit\n/// version number info is also one part of the versioned packet type.\n///\n/// `N` represents a 32-bit version number, and\n/// `Ty` represents the specific type of the version.\n#[derive(Debug, Clone, Copy, Deref, PartialEq, Eq)]\npub struct Version<const N: u32, Ty>(#[deref] pub(crate) Ty);\n\n/// Long packet types all have a Version, so the version number can be obtained\n/// from the long packet type.\npub trait GetVersion {\n    /// Get the version number from long packet type.\n    fn get_version(&self) -> u32;\n}\n\nimpl<const N: u32, Ty> GetVersion for Version<N, Ty> {\n    fn get_version(&self) -> u32 {\n        N\n    }\n}\n\n/// Mainly define the long packet types of QUIC version 1.\nimpl Version<1, v1::Type> {\n    /// Retry packet type of QUIC version 1.\n    pub const RETRY: Self = Self(v1::Type::Retry);\n    /// Initial packet type of QUIC version 1.\n    pub const INITIAL: Self = Self(v1::Type::Initial);\n    /// 0-RTT packet type of QUIC version 1.\n    pub const ZERO_RTT: Self = Self(v1::Type::ZeroRtt);\n    /// Handshake packet type of QUIC version 1.\n    pub const HANDSHAKE: Self = Self(v1::Type::Handshake);\n}\n\n/// Represents the packet types in QUIC version 1, including Retry/Initial/0-RTT/Handshake.\npub type Ver1 = Version<1, v1::Type>;\n\n/// The sum types of the long packets.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum Type {\n    VersionNegotiation,\n    V1(Version<1, v1::Type>),\n    // in the future, add other versions here\n    // V2(v2::HeaderType),\n}\n\n/// The io module provides the functions to parse and write the long packet type.\npub mod io {\n    use bytes::BufMut;\n    use nom::number::streaming::be_u32;\n\n    use super::{super::FIXED_BIT, *};\n    use 
crate::packet::error::Error;\n\n    const LONG_HEADER_BIT: u8 = 0x80;\n\n    /// Parse the long packet type from the input buffer,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn parse_long_type(ty: u8) -> impl FnMut(&[u8]) -> nom::IResult<&[u8], Type, Error> {\n        move |input| {\n            let (remain, version) = be_u32(input)?;\n            match version {\n                0 => Ok((remain, Type::VersionNegotiation)),\n                1 => Ok((\n                    remain,\n                    Type::V1(Version::<1, v1::Type>(\n                        ty.try_into().map_err(nom::Err::Error)?,\n                    )),\n                )),\n                v => Err(nom::Err::Error(Error::UnsupportedVersion(v))),\n            }\n        }\n    }\n\n    /// A [`bytes::BufMut`] extension trait, makes buffer more friendly to write long packet type.\n    pub trait WriteLongType: BufMut {\n        /// Write the long packet type to the buffer.\n        fn put_long_type(&mut self, value: &Type);\n    }\n\n    impl<B: BufMut> WriteLongType for B {\n        fn put_long_type(&mut self, value: &Type) {\n            match value {\n                Type::VersionNegotiation => {\n                    self.put_u8(LONG_HEADER_BIT);\n                    self.put_u32(0);\n                }\n                Type::V1(Version::<1, _>(ty)) => {\n                    let ty: u8 = (*ty).into();\n                    self.put_u8(LONG_HEADER_BIT | FIXED_BIT | ty);\n                    self.put_u32(1);\n                }\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::packet::r#type::long::Ver1;\n\n    #[test]\n    fn test_read_long_type() {\n        use super::{Type, io::parse_long_type};\n\n        let buf = vec![0x00, 0x00, 0x00, 0x01];\n        let (remain, ty) = parse_long_type(0xc0)(&buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        assert_eq!(ty, Type::V1(Ver1::INITIAL));\n\n        let buf = vec![0x00, 0x00, 0x00, 
0x00];\n        let (remain, ty) = parse_long_type(0x80)(&buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        assert_eq!(ty, Type::VersionNegotiation);\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_read_long_type_with_wrong_version() {\n        use super::{Type, io::parse_long_type};\n\n        let buf = vec![0x00, 0x00, 0x00, 0x03];\n        let (remain, ty) = parse_long_type(0xc0)(&buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        assert_eq!(ty, Type::V1(Ver1::INITIAL));\n    }\n\n    #[test]\n    fn test_write_long_type() {\n        use super::Type;\n        use crate::packet::r#type::long::io::WriteLongType;\n\n        let mut buf = vec![];\n        let ty = Type::V1(Ver1::INITIAL);\n        buf.put_long_type(&ty);\n        assert_eq!(buf, vec![0xc0, 0x00, 0x00, 0x00, 0x01]);\n    }\n\n    #[test]\n    fn test_write_version_negotiation_long_type() {\n        use super::Type;\n        use crate::packet::r#type::long::io::WriteLongType;\n\n        let mut buf = vec![];\n        let ty = Type::VersionNegotiation;\n        buf.put_long_type(&ty);\n        assert_eq!(buf, vec![0x80, 0x00, 0x00, 0x00, 0x00]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/type/short.rs",
    "content": "use bytes::BufMut;\nuse derive_more::Deref;\n\nuse crate::packet::SpinBit;\n\nconst SHORT_HEADER_BIT: u8 = 0x00;\n\n/// The type of the 1-RTT packet.\n/// For simplicity, the spin bit is also one part of the 1-RTT packet type.\n#[derive(Debug, Clone, Copy, Deref, PartialEq, Eq)]\npub struct OneRtt(#[deref] pub SpinBit);\n\nimpl From<u8> for OneRtt {\n    fn from(value: u8) -> Self {\n        OneRtt(SpinBit::from(value))\n    }\n}\n\nimpl From<OneRtt> for u8 {\n    fn from(one_rtt: OneRtt) -> Self {\n        SHORT_HEADER_BIT | super::FIXED_BIT | one_rtt.0.value()\n    }\n}\n\n/// A [`bytes::BufMut`] extension trait, makes buffer more friendly to write the short packet type.\npub trait WriteShortType: BufMut {\n    /// Write the short packet type to the buffer.\n    fn put_short_type(&mut self, ty: &OneRtt);\n}\n\nimpl<B: BufMut> WriteShortType for B {\n    fn put_short_type(&mut self, ty: &OneRtt) {\n        self.put_u8((*ty).into());\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_write_short_type() {\n        use super::OneRtt;\n\n        let mut buf = vec![];\n        let ty = OneRtt::from(0x00);\n        buf.put_short_type(&ty);\n        // Note: 0x40 == SHORT_HEADER_BIT | super::FIXED_BIT | 0x00\n        assert_eq!(buf, vec![0x40]);\n\n        let mut buf = vec![];\n        let ty = OneRtt::from(0x20);\n        buf.put_short_type(&ty);\n        // Note: 0x60 == SHORT_HEADER_BIT | super::FIXED_BIT | 0x20\n        assert_eq!(buf, vec![0x60]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet/type.rs",
    "content": "use derive_more::Deref;\n\nuse super::{KeyPhaseBit, PacketNumber, error::Error};\n\n/// Definitions of packet types related to long headers\npub mod long;\n/// Definitions of packet types related to short headers\npub mod short;\n\n/// Header form bit\nconst HEADER_FORM_MASK: u8 = 0x80;\n/// The next bit (0x40) of byte 0 is set to 1, unless the packet is a Version Negotiation packet.\nconst FIXED_BIT: u8 = 0x40;\n\n/// Reserved bits mask for long headers, for the 5th and 6th bits of the first byte of the long header\npub const LONG_RESERVED_MASK: u8 = 0x0C;\n/// Reserved bits mask for short headers, for the 4th and 5th bits of the first byte of the short header\npub const SHORT_RESERVED_MASK: u8 = 0x18;\n\n/// The lower specific bits of the first byte of the long or short header.\n/// 'R' represents the reserved bits.\n///\n/// - For long packet headers, it is the lower 4 bits of the first byte, and R is 0x0C.\n/// - For the short packet header, it is the lower 5 bits of the first byte, and R is 0x18.\n#[derive(Debug, Clone, Copy, Deref)]\npub struct SpecificBits<const R: u8>(pub(super) u8);\n\n/// The lower 4 bits of the first byte of the long header.\n///\n/// Include 2 reserved bits that must be 0, and 2 bits for the packet number length.\n/// All of them are protected.\npub type LongSpecificBits = SpecificBits<LONG_RESERVED_MASK>;\n/// The lower 5 bits of the first byte of the short header, i.e., the last 5 bits.\n///\n/// Include 2 reserved bits that must be 0, 1 bit for the key phase,\n/// and 2 bits for the packet number length.\n/// All of them are protected.\npub type ShortSpecificBits = SpecificBits<SHORT_RESERVED_MASK>;\n\nimpl<const R: u8> SpecificBits<R> {\n    /// Create a [`SpecificBits`] with the [`PacketNumber`].\n    pub fn from_pn(pn: &PacketNumber) -> Self {\n        Self(pn.size() as u8 - 1)\n    }\n\n    /// Create a [`SpecificBits`] with the packet number length.\n    pub fn with_pn_len(pn_size: usize) -> Self {\n        
debug_assert!(pn_size <= 4 && pn_size > 0);\n        Self(pn_size as u8 - 1)\n    }\n}\n\nimpl ShortSpecificBits {\n    /// Set the Key Phase bit in the specific bits of the 1-RTT header.\n    pub fn set_key_phase(&mut self, key_phase_bit: KeyPhaseBit) {\n        key_phase_bit.imply(&mut self.0);\n    }\n\n    /// Get the Key Phase bit from the specific bits of the 1-RTT header.\n    pub fn key_phase(&self) -> KeyPhaseBit {\n        KeyPhaseBit::from(self.0)\n    }\n}\n\nimpl<const R: u8> From<u8> for SpecificBits<R> {\n    fn from(byte: u8) -> Self {\n        Self(byte)\n    }\n}\n\n/// Get the packet number length from the protected first byte of the long or short header.\n/// The reserved bits must be 0; otherwise, a connection error of type PROTOCOL_VIOLATION\n/// is returned.\n///\n/// See [Section 17.2](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.2-8.2) and\n/// [Section 17.3.1](https://www.rfc-editor.org/rfc/rfc9000.html#section-17.3.1-4.8) of QUIC.\npub trait GetPacketNumberLength {\n    /// The last two bits of the first byte contain the length of the Packet Number\n    const PN_LEN_MASK: u8 = 0x03;\n\n    /// Get the encoding length of the Packet Number\n    fn pn_len(&self) -> Result<u8, Error>;\n}\n\nimpl<const R: u8> GetPacketNumberLength for SpecificBits<R> {\n    fn pn_len(&self) -> Result<u8, Error> {\n        let reserved_bit = self.0 & R;\n        if reserved_bit == 0 {\n            Ok((self.0 & Self::PN_LEN_MASK) + 1)\n        } else {\n            Err(Error::InvalidReservedBits(reserved_bit, R))\n        }\n    }\n}\n\n/// The Type of the packet\n///\n/// The Type is only extracted from the first 3 or 4 bits of the first byte; these contents\n/// are not protected.\n/// For simplicity and future-oriented considerations, the Version of the long packet header\n/// is also considered part of the Type, such as the Initial packet of the V1 version.\n/// That is, the Initial packet only makes sense under the V1 version, and it is uncertain\n/// whether 
future versions of QUIC will still have Initial packets.\n/// The SpinBit of the short packet header should be part of the short packet header, but for\n/// simplicity, the SpinBit is also part of the 1RTT header type.\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum Type {\n    Long(long::Type),\n    Short(short::OneRtt),\n}\n\nimpl Type {\n    #[inline]\n    pub fn encoding_size(&self) -> usize {\n        match self {\n            Type::Short(_) => 1,\n            Type::Long(_) => 5,\n        }\n    }\n}\n\n/// The io module provides the functions to parse and write the packet type.\npub mod io {\n    use bytes::BufMut;\n\n    use super::{long::io::WriteLongType, short::WriteShortType, *};\n\n    /// Parse the packet type from the input buffer,\n    /// [nom](https://docs.rs/nom/latest/nom/) parser style.\n    pub fn be_packet_type(input: &[u8]) -> nom::IResult<&[u8], Type, Error> {\n        let (remain, ty) = nom::number::streaming::be_u8(input)?;\n        if ty & HEADER_FORM_MASK == 0 {\n            Ok((remain, Type::Short(short::OneRtt::from(ty))))\n        } else {\n            let (remain, ty) = long::io::parse_long_type(ty)(remain)?;\n            Ok((remain, Type::Long(ty)))\n        }\n    }\n\n    /// A [`bytes::BufMut`] extension trait, makes buffer more friendly to write packet type.\n    pub trait WritePacketType: BufMut {\n        /// Write the packet type to the buffer.\n        fn put_packet_type(&mut self, ty: &Type);\n    }\n\n    impl<B: BufMut> WritePacketType for B {\n        fn put_packet_type(&mut self, ty: &Type) {\n            match ty {\n                Type::Short(one_rtt) => self.put_short_type(one_rtt),\n                Type::Long(long_type) => self.put_long_type(long_type),\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_long_clear_bits() {\n        let specific_bits = SpecificBits::<0x0C>(0x0C);\n        assert_eq!(\n            specific_bits.pn_len(),\n            
Err(Error::InvalidReservedBits(0x0C, 0x0C))\n        );\n        let specific_bits = SpecificBits::<0x0C>(0x04);\n        assert_eq!(\n            specific_bits.pn_len(),\n            Err(Error::InvalidReservedBits(0x04, 0x0C))\n        );\n        let specific_bits = SpecificBits::<0x0C>(0x08);\n        assert_eq!(\n            specific_bits.pn_len(),\n            Err(Error::InvalidReservedBits(0x08, 0x0C))\n        );\n\n        let specific_bits = LongSpecificBits::with_pn_len(4);\n        assert_eq!(specific_bits.pn_len().unwrap(), 4);\n        let specific_bits = LongSpecificBits::with_pn_len(3);\n        assert_eq!(specific_bits.pn_len().unwrap(), 3);\n        let specific_bits = LongSpecificBits::with_pn_len(2);\n        assert_eq!(specific_bits.pn_len().unwrap(), 2);\n        let specific_bits = LongSpecificBits::with_pn_len(1);\n        assert_eq!(specific_bits.pn_len().unwrap(), 1);\n    }\n\n    #[test]\n    fn test_short_specific_bits() {\n        let specific_bits = SpecificBits::<0x18>(0x18);\n        assert_eq!(\n            specific_bits.pn_len(),\n            Err(Error::InvalidReservedBits(0x18, 0x18))\n        );\n        let specific_bits = SpecificBits::<0x18>(0x11);\n        assert_eq!(\n            specific_bits.pn_len(),\n            Err(Error::InvalidReservedBits(0x10, 0x18))\n        );\n        let specific_bits = SpecificBits::<0x18>(0x0A);\n        assert_eq!(\n            specific_bits.pn_len(),\n            Err(Error::InvalidReservedBits(0x08, 0x18))\n        );\n\n        let specific_bits = ShortSpecificBits::with_pn_len(4);\n        assert_eq!(specific_bits.pn_len().unwrap(), 4);\n        let specific_bits = ShortSpecificBits::with_pn_len(3);\n        assert_eq!(specific_bits.pn_len().unwrap(), 3);\n        let specific_bits = ShortSpecificBits::with_pn_len(2);\n        assert_eq!(specific_bits.pn_len().unwrap(), 2);\n        let specific_bits = ShortSpecificBits::with_pn_len(1);\n        assert_eq!(specific_bits.pn_len().unwrap(), 
1);\n    }\n\n    #[test]\n    fn test_set_key_phase_bit() {\n        let mut specific_bits = ShortSpecificBits::with_pn_len(4);\n        assert_eq!(specific_bits.0, 0x03);\n        specific_bits.set_key_phase(KeyPhaseBit::One);\n        assert_eq!(specific_bits.0, 0x07);\n        assert_eq!(specific_bits.key_phase(), KeyPhaseBit::One);\n        specific_bits.set_key_phase(KeyPhaseBit::Zero);\n        assert_eq!(specific_bits.0, 0x03);\n        assert_eq!(specific_bits.key_phase(), KeyPhaseBit::Zero);\n    }\n}\n"
  },
  {
    "path": "qbase/src/packet.rs",
    "content": "use std::{fmt::Debug, ops};\n\nuse bytes::{BufMut, BytesMut, buf::UninitSlice};\nuse derive_more::{Deref, DerefMut};\nuse enum_dispatch::enum_dispatch;\nuse getset::CopyGetters;\nuse header::{LongHeader, io::WriteHeader};\n\nuse crate::{\n    cid::ConnectionId,\n    frame::{ContainSpec, FrameFeature, FrameType, Spec},\n    packet::keys::DirectionalKeys,\n};\n\n/// QUIC packet parse error definitions.\npub mod error;\n\n/// Define signal util, such as key phase bit and spin bit.\npub mod signal;\n#[doc(hidden)]\npub use signal::{KeyPhaseBit, SpinBit};\n\n/// Definitions of QUIC packet types.\npub mod r#type;\n#[doc(hidden)]\npub use r#type::{\n    GetPacketNumberLength, LONG_RESERVED_MASK, LongSpecificBits, SHORT_RESERVED_MASK,\n    ShortSpecificBits, Type,\n};\n\n/// Definitions of QUIC packet headers.\npub mod header;\n#[doc(hidden)]\npub use header::{\n    EncodeHeader, GetDcid, GetScid, GetType, HandshakeHeader, Header, InitialHeader,\n    LongHeaderBuilder, OneRttHeader, RetryHeader, VersionNegotiationHeader, ZeroRttHeader, long,\n};\n\n/// The io module provides the functions to parse the QUIC packet.\n///\n/// The writing of the QUIC packet is not provided here, they are written in place.\npub mod io;\npub use io::{\n    AssemblePacket, Package, PacketInfo, PacketSpace, PacketWriter, ProductHeader, RecordFrame,\n};\n\n/// Encoding and decoding of packet number\npub mod number;\n#[doc(hidden)]\npub use number::{InvalidPacketNumber, PacketNumber, WritePacketNumber, take_pn_len};\n\n/// Include operations such as decrypting QUIC packets, removing header protection,\n/// and parsing the first byte of the packet to get the right packet numbers\npub mod decrypt;\n\n/// Include operations such as encrypting QUIC packets, adding header protection,\n/// and encoding the first byte of the packet with pn_len and key_phase optionally.\npub mod encrypt;\n\n/// Encapsulate the crypto keys's logic for long headers and 1-RTT headers.\npub mod keys;\n\n/// The 
sum type of all QUIC packet headers.\n#[derive(Debug, Clone)]\n#[enum_dispatch(GetDcid, GetType)]\npub enum DataHeader {\n    Long(long::DataHeader),\n    Short(OneRttHeader),\n}\n\n/// The sum type of all QUIC data packets.\n///\n/// The long header has the len field, while the short header does not.\n/// Remember, the len field is not an attribute of the header, but an attribute of the packet.\n///\n/// ```text\n///                                 +---> payload length in long packet\n///                                 |     |<----------- payload --------->|\n/// +-----------+---+--------+------+-----+-----------+---......--+-------+\n/// |X|1|X X 0 0|0 0| ...hdr | len(0..16) | pn(8..32) | body...   |  tag  |\n/// +---+-------+-+-+--------+------------+-----+-----+---......--+-------+\n///               |                             |\n///               +---> encoded pn length       +---> encoded packet number\n/// ```\n#[derive(Debug, Clone, Deref, DerefMut)]\npub struct DataPacket {\n    #[deref]\n    #[deref_mut]\n    pub header: DataHeader,\n    pub bytes: BytesMut,\n    // payload_offset\n    pub offset: usize,\n}\n\nimpl GetType for DataPacket {\n    fn get_type(&self) -> Type {\n        self.header.get_type()\n    }\n}\n\n#[derive(Default, Debug, Clone, Copy, PartialEq)]\npub enum PacketContent {\n    #[default]\n    NonAckEliciting,\n    JustPing,\n    EffectivePayload,\n}\n\nimpl PacketContent {\n    pub fn is_ack_eliciting(self) -> bool {\n        self != Self::NonAckEliciting\n    }\n}\n\nimpl From<FrameType> for PacketContent {\n    fn from(frame_type: FrameType) -> Self {\n        match frame_type {\n            FrameType::Ping => Self::JustPing,\n            fty if !fty.specs().contain(Spec::NonAckEliciting) => Self::EffectivePayload,\n            _ => Self::NonAckEliciting,\n        }\n    }\n}\n\nimpl ops::AddAssign<FrameType> for PacketContent {\n    fn add_assign(&mut self, rhs: FrameType) {\n        match rhs {\n            
FrameType::Ping if *self == PacketContent::NonAckEliciting => *self = Self::JustPing,\n            fty if !fty.specs().contain(Spec::NonAckEliciting) => *self = Self::EffectivePayload,\n            _ => (),\n        }\n    }\n}\n\nimpl ops::AddAssign for PacketContent {\n    fn add_assign(&mut self, rhs: Self) {\n        match rhs {\n            PacketContent::EffectivePayload => *self = PacketContent::EffectivePayload,\n            PacketContent::JustPing if *self == PacketContent::NonAckEliciting => {\n                *self = PacketContent::JustPing\n            }\n            _ => {}\n        }\n    }\n}\n\n/// The sum type of all QUIC packets.\n#[derive(Debug, Clone)]\npub enum Packet {\n    VN(VersionNegotiationHeader),\n    Retry(RetryHeader),\n    // Data(header, bytes, payload_offset)\n    Data(DataPacket),\n}\n\n/// QUIC packet reader, reading packets from the incoming datagrams.\n///\n/// The parsing here does not involve removing header protection or decrypting the packet.\n/// It only parses information such as packet type and connection ID,\n/// and prepares for further delivery to the connection by finding the connection ID.\n///\n/// The received packet is kept as a BytesMut so that it can be decrypted in place later,\n/// making as few copies as possible until it is read by the application layer.\n#[derive(Debug)]\npub struct PacketReader {\n    raw_bytes: BytesMut,\n    dcid_len: usize,\n    // TODO: add a level field; the packet types must arrive in order, otherwise parsing fails\n}\n\nimpl PacketReader {\n    pub fn new(raw_bytes: BytesMut, dcid_len: usize) -> Self {\n        Self {\n            raw_bytes,\n            dcid_len,\n        }\n    }\n}\n\nimpl Iterator for PacketReader {\n    type Item = Result<Packet, error::Error>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        if self.raw_bytes.is_empty() {\n            return None;\n        }\n\n        match io::be_packet(&mut self.raw_bytes, self.dcid_len) {\n            Ok(packet) => Some(Ok(packet)),\n            Err(error) => {\n                
tracing::debug!(target: \"quic\", ?error, \"dropped unparsed packet\");\n                self.raw_bytes.clear(); // no longer parsing\n                Some(Err(error))\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "qbase/src/param/core.rs",
    "content": "use std::{collections::HashMap, marker::PhantomData, time::Duration};\n\nuse bytes::Bytes;\nuse derive_more::{From, TryInto, TryIntoError};\n\nuse super::{error::Error, preferred_address::PreferredAddress};\nuse crate::{\n    cid::ConnectionId,\n    role::*,\n    token::ResetToken,\n    varint::{VARINT_MAX, VarInt},\n};\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum ParameterValueType {\n    VarInt,\n    Boolean,\n    Bytes,\n    Duration,\n    ResetToken,\n    ConnectionId,\n    PreferredAddress,\n}\n\n#[derive(Debug, Clone, PartialEq, From)]\npub enum ParameterValue {\n    Bytes(Bytes),\n    True,\n    VarInt(VarInt),\n    Duration(Duration),\n    ConnectionId(ConnectionId),\n    ResetToken(ResetToken),\n    PreferredAddress(PreferredAddress),\n}\n\nimpl ParameterValue {\n    pub fn value_type(&self) -> ParameterValueType {\n        match self {\n            ParameterValue::VarInt(_) => ParameterValueType::VarInt,\n            ParameterValue::True => ParameterValueType::Boolean,\n            ParameterValue::Bytes(_) => ParameterValueType::Bytes,\n            ParameterValue::Duration(_) => ParameterValueType::Duration,\n            ParameterValue::ConnectionId(_) => ParameterValueType::ConnectionId,\n            ParameterValue::ResetToken(_) => ParameterValueType::ResetToken,\n            ParameterValue::PreferredAddress(_) => ParameterValueType::PreferredAddress,\n        }\n    }\n}\n\nimpl From<u32> for ParameterValue {\n    fn from(value: u32) -> Self {\n        ParameterValue::VarInt(VarInt::from_u32(value))\n    }\n}\n\nimpl From<String> for ParameterValue {\n    fn from(value: String) -> Self {\n        ParameterValue::Bytes(Bytes::from(Vec::from(value)))\n    }\n}\n\nimpl TryFrom<ParameterValue> for Duration {\n    type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, TryIntoError<ParameterValue>> {\n        match value {\n            ParameterValue::Duration(v) => 
Ok(v),\n            _ => Err(TryIntoError::new(value, \"Duration\", \"Duration\")),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for ConnectionId {\n    type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, TryIntoError<ParameterValue>> {\n        match value {\n            ParameterValue::ConnectionId(v) => Ok(v),\n            _ => Err(TryIntoError::new(value, \"ConnectionId\", \"ConnectionId\")),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for VarInt {\n    type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, TryIntoError<ParameterValue>> {\n        match value {\n            ParameterValue::VarInt(v) => Ok(v),\n            _ => Err(TryIntoError::new(value, \"VarInt\", \"VarInt\")),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for u64 {\n    type Error = <VarInt as TryFrom<ParameterValue>>::Error;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, Self::Error> {\n        VarInt::try_from(value).map(|value| value.into_u64())\n    }\n}\n\nimpl TryFrom<ParameterValue> for PreferredAddress {\n    type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, TryIntoError<ParameterValue>> {\n        match value {\n            ParameterValue::PreferredAddress(v) => Ok(v),\n            _ => Err(TryIntoError::new(\n                value,\n                \"PreferredAddress\",\n                \"PreferredAddress\",\n            )),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for Bytes {\n    type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, TryIntoError<ParameterValue>> {\n        match value {\n            ParameterValue::Bytes(v) => Ok(v),\n            _ => Err(TryIntoError::new(value, \"Bytes\", \"Bytes\")),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for bool {\n   
 type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, Self::Error> {\n        match value {\n            ParameterValue::True => Ok(true),\n            _ => Err(TryIntoError::new(value, \"Enabled\", \"bool\")),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for ResetToken {\n    type Error = TryIntoError<ParameterValue>;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, TryIntoError<ParameterValue>> {\n        match value {\n            ParameterValue::ResetToken(v) => Ok(v),\n            _ => Err(TryIntoError::new(value, \"ResetToken\", \"ResetToken\")),\n        }\n    }\n}\n\nimpl TryFrom<ParameterValue> for String {\n    type Error = <Bytes as TryFrom<ParameterValue>>::Error;\n\n    #[inline]\n    fn try_from(value: ParameterValue) -> Result<Self, Self::Error> {\n        Bytes::try_from(value).map(|bytes| String::from_utf8_lossy(&bytes).into_owned())\n    }\n}\n\n#[repr(u64)]\n// qmacro::TransportParameter\n#[derive(qmacro::ParameterId, Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub enum ParameterId {\n    #[param(value_type = ConnectionId)]\n    OriginalDestinationConnectionId = 0x0000,\n    #[param(value_type = Duration, default = Duration::ZERO)]\n    MaxIdleTimeout = 0x0001,\n    #[param(value_type = ResetToken)]\n    StatelessResetToken = 0x0002,\n    #[param(value_type = VarInt, default = 65527u32, bound = 1200..=65527)]\n    MaxUdpPayloadSize = 0x0003,\n    #[param(value_type = VarInt, default = 0u32)]\n    InitialMaxData = 0x0004,\n    #[param(value_type = VarInt, default = 0u32)]\n    InitialMaxStreamDataBidiLocal = 0x0005,\n    #[param(value_type = VarInt, default = 0u32)]\n    InitialMaxStreamDataBidiRemote = 0x0006,\n    #[param(value_type = VarInt, default = 0u32)]\n    InitialMaxStreamDataUni = 0x0007,\n    #[param(value_type = VarInt, default = 0u32)]\n    InitialMaxStreamsBidi = 0x0008,\n    #[param(value_type = VarInt, default = 0u32)]\n    
InitialMaxStreamsUni = 0x0009,\n    #[param(value_type = VarInt, default = 3u32, bound = 0..=20)]\n    AckDelayExponent = 0x000a,\n    #[param(value_type = Duration, default = Duration::from_millis(25))]\n    MaxAckDelay = 0x000b,\n    #[param(value_type = Boolean)]\n    DisableActiveMigration = 0x000c,\n    #[param(value_type = PreferredAddress)]\n    PreferredAddress = 0x000d,\n    #[param(value_type = VarInt, default = 2u32, bound = 2..=VARINT_MAX)]\n    ActiveConnectionIdLimit = 0x000e,\n    #[param(value_type = ConnectionId)]\n    InitialSourceConnectionId = 0x000f,\n    #[param(value_type = ConnectionId)]\n    RetrySourceConnectionId = 0x0010,\n    #[param(value_type = VarInt, default = 0u32)]\n    MaxDatagramFrameSize = 0x0020,\n    #[param(value_type = Boolean)]\n    GreaseQuicBit = 0x2ab2,\n    /// Genmeta extension parameter.\n    #[param(value_type = Bytes, default = 0u32)]\n    ClientName = 0xffee,\n}\n\nimpl ParameterId {\n    pub fn belong_to(self, role: Role) -> Result<(), Error> {\n        match self {\n            ParameterId::OriginalDestinationConnectionId\n            | ParameterId::StatelessResetToken\n            | ParameterId::PreferredAddress\n            | ParameterId::RetrySourceConnectionId\n                if role != Role::Server =>\n            {\n                Err(Error::InvalidParameterId(self, role))\n            }\n            ParameterId::ClientName if role != Role::Client => {\n                Err(Error::InvalidParameterId(self, role))\n            }\n            _ => Ok(()),\n        }\n    }\n}\n\nimpl std::fmt::LowerHex for ParameterId {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{:x}\", VarInt::from(*self).into_u64())\n    }\n}\n\n#[derive(Default, Debug, Clone, PartialEq)]\npub struct Parameters<Role> {\n    pub(super) map: HashMap<ParameterId, ParameterValue>,\n    _role: PhantomData<Role>,\n}\n\nimpl<Role> Parameters<Role> {\n    pub fn get<V>(&self, id: ParameterId) -> 
Option<V>\n    where\n        V: TryFrom<ParameterValue>,\n    {\n        (self.map.get(&id).cloned().or_else(|| id.default_value()))\n            .and_then(|value| value.try_into().ok())\n    }\n\n    pub fn contains(&self, id: ParameterId) -> bool {\n        self.map.contains_key(&id)\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.map.is_empty()\n    }\n}\n\nimpl<R: IntoRole + Default> Parameters<R> {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    pub fn set(&mut self, id: ParameterId, value: impl Into<ParameterValue>) -> Result<(), Error> {\n        let role: Role = R::into_role();\n        id.belong_to(role)?;\n        let value = value.into();\n        id.validate(&value)?;\n        self.map.insert(id, value);\n        Ok(())\n    }\n}\n\npub type ClientParameters = Parameters<Client>;\npub type ServerParameters = Parameters<Server>;\n\nimpl ServerParameters {\n    #[inline]\n    pub fn is_0rtt_accepted(&self, server_params: &ServerParameters) -> bool {\n        [\n            ParameterId::InitialMaxData,\n            ParameterId::InitialMaxStreamDataBidiLocal,\n            ParameterId::InitialMaxStreamDataBidiRemote,\n            ParameterId::InitialMaxStreamDataUni,\n            ParameterId::InitialMaxStreamsBidi,\n            ParameterId::InitialMaxStreamsUni,\n            ParameterId::ActiveConnectionIdLimit,\n            ParameterId::MaxDatagramFrameSize,\n        ]\n        .into_iter()\n        .all(\n            |id| match (self.get::<VarInt>(id), server_params.get::<VarInt>(id)) {\n                (Some(old_value), Some(new_value)) => old_value <= new_value,\n                _ => unreachable!(\"Expected VarInt values for 0-RTT acceptance check\"),\n            },\n        )\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, From, TryInto)]\npub enum PeerParameters {\n    Client(ClientParameters),\n    Server(ServerParameters),\n}\n"
  },
  {
    "path": "qbase/src/param/error.rs",
    "content": "use std::ops::RangeInclusive;\n\nuse nom::error::ErrorKind as NomErrorKind;\nuse thiserror::Error;\n\nuse crate::{\n    error::{ErrorKind as QuicErrorKind, QuicError},\n    frame::FrameType,\n    param::{ParameterId, ParameterValueType},\n    role::Role,\n    varint::VarInt,\n};\n\n/// Error for QUIC parameters.\n#[derive(Debug, PartialEq, Eq, Error)]\npub enum Error {\n    #[error(\"Incomplete parameter id: {0}\")]\n    IncompleteParameterId(String),\n    #[error(\"Parameter {0} is not defined\")]\n    UnknownParameterId(VarInt),\n    #[error(\"Lack {1:?} for {0}\")]\n    LackParameterId(Role, ParameterId),\n    #[error(\"{0:?} is not belong to {1}\")]\n    InvalidParameterId(ParameterId, Role),\n    #[error(\"Incomplete value for {0:?}: {1}\")]\n    IncompleteValue(ParameterId, String),\n    #[error(\"{0:?} is not supported for {1:?}\")]\n    InvalidValueType(ParameterId, ParameterValueType),\n    #[error(\"{0:?}'s value {1} is out of bounds {2:?}\")]\n    OutOfBounds(ParameterId, u64, RangeInclusive<u64>),\n}\n\nimpl From<Error> for QuicError {\n    fn from(e: Error) -> Self {\n        Self::new(\n            QuicErrorKind::TransportParameter,\n            FrameType::Crypto.into(),\n            e.to_string(),\n        )\n    }\n}\n\nimpl nom::error::ParseError<&[u8]> for Error {\n    fn from_error_kind(_input: &[u8], _kind: NomErrorKind) -> Self {\n        unreachable!(\"QUIC parameter parser must always consume\")\n    }\n\n    fn append(_input: &[u8], _kind: NomErrorKind, source: Self) -> Self {\n        source\n    }\n}\n"
  },
  {
    "path": "qbase/src/param/handy.rs",
    "content": "use std::time::Duration;\n\nuse crate::param::ParameterId;\n\npub fn client_parameters() -> super::ClientParameters {\n    let mut params = super::ClientParameters::default();\n\n    for (id, value) in [\n        (ParameterId::InitialMaxStreamsBidi, 100u32),\n        (ParameterId::InitialMaxStreamsUni, 100u32),\n        (ParameterId::InitialMaxData, 1u32 << 20),\n        (ParameterId::InitialMaxStreamDataBidiLocal, 1u32 << 20),\n        (ParameterId::InitialMaxStreamDataBidiRemote, 1u32 << 20),\n        (ParameterId::InitialMaxStreamDataUni, 1u32 << 20),\n        (ParameterId::ActiveConnectionIdLimit, 10u32),\n    ] {\n        params.set(id, value).expect(\"unreachable\");\n    }\n\n    params\n        .set(ParameterId::MaxIdleTimeout, Duration::from_secs(20))\n        .expect(\"unreachable\");\n\n    params\n}\n\npub fn server_parameters() -> super::ServerParameters {\n    let mut params = super::ServerParameters::default();\n\n    for (id, value) in [\n        (ParameterId::InitialMaxStreamsBidi, 100u32),\n        (ParameterId::InitialMaxStreamsUni, 100u32),\n        (ParameterId::InitialMaxData, 1u32 << 20),\n        (ParameterId::InitialMaxStreamDataBidiLocal, 1u32 << 20),\n        (ParameterId::InitialMaxStreamDataBidiRemote, 1u32 << 20),\n        (ParameterId::InitialMaxStreamDataUni, 1u32 << 20),\n        (ParameterId::ActiveConnectionIdLimit, 10u32),\n    ] {\n        params.set(id, value).expect(\"unreachable\");\n    }\n    params\n        .set(ParameterId::MaxIdleTimeout, Duration::from_secs(30))\n        .expect(\"unreachable\");\n\n    params\n}\n"
  },
  {
    "path": "qbase/src/param/io.rs",
    "content": "use std::{fmt::Debug, time::Duration};\n\nuse bytes::Bytes;\nuse nom::{Parser, multi::length_data};\n\nuse crate::{\n    cid::{ConnectionId, WriteConnectionId},\n    error::QuicError,\n    param::{\n        core::{ParameterId, ParameterValue, ParameterValueType, Parameters, ServerParameters},\n        error::Error,\n        preferred_address::{PreferredAddress, WirtePreferredAddress, be_preferred_address},\n    },\n    role::{IntoRole, RequiredParameters, Role},\n    token::{ResetToken, WriteResetToken, be_reset_token},\n    varint::{VarInt, WriteVarInt, be_varint},\n};\n\n/// A [`bytes::BufMut`] extension trait, makes buffer more friendly\n/// to write the parameter id.\npub trait WriteParameterId: bytes::BufMut {\n    /// Write the parameter id to the buffer.\n    fn put_parameter_id(&mut self, param_id: ParameterId);\n}\n\nimpl<T: bytes::BufMut> WriteParameterId for T {\n    fn put_parameter_id(&mut self, param_id: ParameterId) {\n        self.put_varint(&VarInt::from(param_id));\n    }\n}\n\npub fn be_raw_parameter(input: &[u8]) -> nom::IResult<&[u8], (VarInt, &[u8])> {\n    let (remain, param_id) = crate::varint::be_varint(input)?;\n    let (remain, data) = length_data(be_varint).parse(remain)?;\n    Ok((remain, (param_id, data)))\n}\n\npub fn be_parameter_value(input: &[u8], id: ParameterId) -> nom::IResult<&[u8], ParameterValue> {\n    use nom::combinator::map;\n\n    match id.value_type() {\n        ParameterValueType::VarInt => map(be_varint, ParameterValue::VarInt).parse(input),\n        ParameterValueType::Boolean => Ok((input, ParameterValue::True)),\n        ParameterValueType::Bytes => {\n            Ok((&[], ParameterValue::Bytes(Bytes::copy_from_slice(input))))\n        }\n        ParameterValueType::Duration => {\n            map(be_varint, |v| Duration::from_millis(v.into_u64()).into()).parse(input)\n        }\n        ParameterValueType::ResetToken => {\n            map(be_reset_token, ParameterValue::ResetToken).parse(input)\n    
    }\n        ParameterValueType::ConnectionId => Ok((\n            &[],\n            ParameterValue::ConnectionId(ConnectionId::from_slice(input)),\n        )),\n        ParameterValueType::PreferredAddress => {\n            map(be_preferred_address, ParameterValue::PreferredAddress).parse(input)\n        }\n    }\n}\n\n// A trait for writing parameters to the buffer.\npub trait WriteParameter {\n    fn put_bytes_parameter(&mut self, id: ParameterId, bytes: &Bytes);\n\n    fn put_cid_parameter(&mut self, id: ParameterId, cid: &ConnectionId);\n\n    fn put_duration_parameter(&mut self, id: ParameterId, dur: &Duration) {\n        let value = VarInt::from_u128(dur.as_millis()).expect(\"Duration too large\");\n        self.put_varint_parameter(id, &value);\n    }\n\n    fn put_bool_parameter(&mut self, id: ParameterId);\n\n    fn put_preferred_address_parameter(&mut self, id: ParameterId, addr: &PreferredAddress);\n\n    fn put_reset_token_parameter(&mut self, id: ParameterId, token: &ResetToken);\n\n    fn put_varint_parameter(&mut self, id: ParameterId, value: &VarInt);\n\n    fn put_parameter(&mut self, id: ParameterId, value: &ParameterValue) {\n        match value {\n            ParameterValue::Bytes(bytes) => self.put_bytes_parameter(id, bytes),\n            ParameterValue::ConnectionId(cid) => self.put_cid_parameter(id, cid),\n            ParameterValue::Duration(dur) => self.put_duration_parameter(id, dur),\n            ParameterValue::True => self.put_bool_parameter(id),\n            ParameterValue::PreferredAddress(addr) => {\n                self.put_preferred_address_parameter(id, addr)\n            }\n            ParameterValue::ResetToken(token) => self.put_reset_token_parameter(id, token),\n            ParameterValue::VarInt(varint) => self.put_varint_parameter(id, varint),\n        }\n    }\n}\n\n/// A [`bytes::BufMut`] extension trait, makes buffer more friendly\n/// to write parameters.\nimpl<T: bytes::BufMut> WriteParameter for T {\n    fn 
put_bytes_parameter(&mut self, id: ParameterId, bytes: &Bytes) {\n        self.put_parameter_id(id);\n        self.put_varint(&VarInt::try_from(bytes.len()).expect(\"param too large\"));\n        self.put_slice(bytes);\n    }\n\n    fn put_cid_parameter(&mut self, id: ParameterId, cid: &ConnectionId) {\n        self.put_parameter_id(id);\n        self.put_connection_id(cid);\n    }\n\n    fn put_bool_parameter(&mut self, id: ParameterId) {\n        self.put_parameter_id(id);\n        self.put_varint(&VarInt::from_u32(0));\n    }\n\n    fn put_preferred_address_parameter(&mut self, id: ParameterId, addr: &PreferredAddress) {\n        self.put_parameter_id(id);\n        self.put_varint(&VarInt::try_from(addr.encoding_size()).expect(\"param too large\"));\n        self.put_preferred_address(addr);\n    }\n\n    fn put_reset_token_parameter(&mut self, id: ParameterId, token: &ResetToken) {\n        self.put_parameter_id(id);\n        self.put_varint(&VarInt::try_from(token.encoding_size()).expect(\"param too large\"));\n        self.put_reset_token(token);\n    }\n\n    fn put_varint_parameter(&mut self, id: ParameterId, value: &VarInt) {\n        self.put_parameter_id(id);\n        self.put_varint(&VarInt::try_from(value.encoding_size()).expect(\"param too large\"));\n        self.put_varint(value);\n    }\n}\n\npub trait WriteParameters<Role> {\n    fn put_parameters(&mut self, params: &Parameters<Role>);\n}\n\nimpl<Role, T: bytes::BufMut> WriteParameters<Role> for T {\n    fn put_parameters(&mut self, params: &Parameters<Role>) {\n        for (id, value) in &params.map {\n            self.put_parameter(*id, value);\n        }\n    }\n}\n\nfn handle_nom_error<F: Debug, E: Debug>(input: &[u8], nom_error: nom::Err<F, E>) -> Error {\n    assert!(\n        matches!(nom_error, nom::Err::Incomplete(..)),\n        \"Only incomplete errors should occur, but {nom_error:?} happened for input: {input:?}\"\n    );\n    Error::IncompleteParameterId(format!(\"incomplete parameter 
data for input: {input:?}\"))\n}\n\nimpl<R: IntoRole + RequiredParameters + Default> Parameters<R> {\n    pub fn parse_from_bytes(mut buf: &[u8]) -> Result<Self, QuicError> {\n        let mut parameters = Self::default();\n        while !buf.is_empty() {\n            let (param_id, param_value);\n            (buf, (param_id, param_value)) =\n                be_raw_parameter(buf).map_err(|nom_error| handle_nom_error(buf, nom_error))?;\n\n            let param_id = match ParameterId::try_from(param_id) {\n                Ok(param_id) => param_id,\n                Err(unknown @ Error::UnknownParameterId(..)) => {\n                    tracing::warn!(target: \"quic\", \"{unknown}, ignore\");\n                    continue; // Ignore unknown parameters\n                }\n                Err(e) => return Err(e.into()),\n            };\n\n            ParameterId::belong_to(param_id, R::into_role())?;\n            let (remain, param_value) = be_parameter_value(param_value, param_id)\n                .map_err(|nom_error| handle_nom_error(param_value, nom_error))?;\n            assert!(remain.is_empty(), \"Parameter value should consume all data\");\n\n            parameters.set(param_id, param_value)?;\n        }\n        for id in R::required_parameters() {\n            if !parameters.contains(id) {\n                return Err(Error::LackParameterId(R::into_role(), id).into());\n            }\n        }\n        Ok(parameters)\n    }\n}\n\nimpl ServerParameters {\n    pub fn try_from_remembered_bytes(mut buf: &[u8]) -> Result<Self, QuicError> {\n        let mut parameters = Self::new();\n        while !buf.is_empty() {\n            let (param_id, param_value);\n            (buf, (param_id, param_value)) =\n                be_raw_parameter(buf).map_err(|nom_error| handle_nom_error(buf, nom_error))?;\n\n            let param_id = match ParameterId::try_from(param_id) {\n                Ok(param_id) => param_id,\n                Err(unknown @ Error::UnknownParameterId(..)) => 
{\n                    tracing::warn!(target: \"quic\", \"{unknown}, ignore\");\n                    continue; // Ignore unknown parameters\n                }\n                Err(e) => return Err(e.into()),\n            };\n\n            ParameterId::belong_to(param_id, Role::Server)?;\n            let (remain, param_value) = be_parameter_value(param_value, param_id)\n                .map_err(|nom_error| handle_nom_error(param_value, nom_error))?;\n            assert!(remain.is_empty(), \"Parameter value should consume all data\");\n\n            parameters.set(param_id, param_value)?;\n        }\n        Ok(parameters)\n    }\n}\n"
  },
  {
    "path": "qbase/src/param/preferred_address.rs",
    "content": "use std::net::{SocketAddrV4, SocketAddrV6};\n\nuse getset::{CopyGetters, MutGetters, Setters};\nuse nom::Parser;\n\nuse crate::{\n    cid::{ConnectionId, WriteConnectionId, be_connection_id},\n    token::{ResetToken, WriteResetToken, be_reset_token},\n};\n\n/// The server's preferred address, which is used to effect\n/// a change in server address at the end of the handshake.\n///\n/// See [section-18.2-4.31](https://datatracker.ietf.org/doc/html/rfc9000#section-18.2-4.32)\n/// and [figure-22](https://datatracker.ietf.org/doc/html/rfc9000#figure-22)\n/// for more details.\n#[derive(CopyGetters, Setters, MutGetters, Debug, PartialEq, Clone, Copy)]\npub struct PreferredAddress {\n    #[getset(get_copy = \"pub\", set = \"pub\")]\n    address_v4: SocketAddrV4,\n    #[getset(get_copy = \"pub\", set = \"pub\")]\n    address_v6: SocketAddrV6,\n    #[getset(get_copy = \"pub\", set = \"pub\")]\n    connection_id: ConnectionId,\n    #[getset(get_copy = \"pub\", set = \"pub\")]\n    stateless_reset_token: ResetToken,\n}\n\nimpl PreferredAddress {\n    /// Create a new preferred address.\n    pub fn new(\n        address_v4: SocketAddrV4,\n        address_v6: SocketAddrV6,\n        connection_id: ConnectionId,\n        stateless_reset_token: ResetToken,\n    ) -> Self {\n        Self {\n            address_v4,\n            address_v6,\n            connection_id,\n            stateless_reset_token,\n        }\n    }\n\n    /// Returns the encoding size of the preferred address.\n    pub fn encoding_size(&self) -> usize {\n        6 + 18 + self.connection_id.encoding_size() + self.stateless_reset_token.encoding_size()\n    }\n}\n\n/// Parse the preferred address from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\npub fn be_preferred_address(input: &[u8]) -> nom::IResult<&[u8], PreferredAddress> {\n    use nom::{bytes::streaming::take, combinator::map};\n\n    let (input, address_v4) = map(take(6usize), |buf: &[u8]| {\n        let mut 
addr = [0u8; 4];\n        addr.copy_from_slice(&buf[..4]);\n        let port = u16::from_be_bytes([buf[4], buf[5]]);\n        SocketAddrV4::new(addr.into(), port)\n    })\n    .parse(input)?;\n\n    let (input, address_v6) = map(take(18usize), |buf: &[u8]| {\n        let mut addr = [0u8; 16];\n        addr.copy_from_slice(&buf[..16]);\n        let port = u16::from_be_bytes([buf[16], buf[17]]);\n        SocketAddrV6::new(addr.into(), port, 0, 0)\n    })\n    .parse(input)?;\n\n    let (input, connection_id) = be_connection_id(input)?;\n    let (input, stateless_reset_token) = be_reset_token(input)?;\n\n    Ok((\n        input,\n        PreferredAddress {\n            address_v4,\n            address_v6,\n            connection_id,\n            stateless_reset_token,\n        },\n    ))\n}\n\n/// A [`bytes::BufMut`] extension trait, makes buffer more friendly\n/// to write the preferred address.\npub trait WirtePreferredAddress: bytes::BufMut {\n    /// Write the preferred address to the buffer.\n    fn put_preferred_address(&mut self, addr: &PreferredAddress);\n}\n\nimpl<T: bytes::BufMut> WirtePreferredAddress for T {\n    fn put_preferred_address(&mut self, addr: &PreferredAddress) {\n        self.put_slice(&addr.address_v4.ip().octets());\n        self.put_u16(addr.address_v4.port());\n\n        self.put_slice(&addr.address_v6.ip().octets());\n        self.put_u16(addr.address_v6.port());\n\n        self.put_connection_id(&addr.connection_id);\n        self.put_reset_token(&addr.stateless_reset_token);\n    }\n}\n"
  },
  {
    "path": "qbase/src/param.rs",
    "content": "use std::{\n    fmt::Debug,\n    ops::{Deref, DerefMut},\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n    time::Duration,\n};\n\nuse crate::{\n    cid::ConnectionId,\n    error::{Error, ErrorKind, QuicError},\n    frame::FrameType,\n    role::Role,\n};\n\npub mod core;\npub mod error;\npub mod handy;\npub mod io;\npub mod preferred_address;\n\npub use self::{\n    core::{\n        ClientParameters, ParameterId, ParameterValue, ParameterValueType, PeerParameters,\n        ServerParameters,\n    },\n    io::*,\n};\n\n/// Requires that the connection IDs in the transport parameters of\n/// the received Initial packet match those used during the\n/// connection establishment process.\n///\n/// For the Initial packet received by the server from the client,\n/// the initial_source_connection_id in the client's Transport\n/// parameters must match the source connection id in that Initial packet.\n/// For the Initial packet received by the client from the server,\n/// not only must the server's Transport parameter\n/// initial_source_connection_id match the source connection id\n/// in that Initial packet,\n/// but the original_destination_connection_id must also match the\n/// destination connection id in the first packet sent by the client.\n/// Additionally, if the server has responded with a Retry packet,\n/// then the server's Transport parameter retry_source_connection_id\n/// must match the source connection id in that Retry packet.\n///\n/// See [Authenticating Connection IDs](https://datatracker.ietf.org/doc/html/rfc9000#name-authenticating-connection-i)\n/// of [RFC9000](https://datatracker.ietf.org/doc/html/rfc9000)\n/// for more details.\n///\n/// Whether client or server, after receiving the Initial packet from\n/// the peer, these requirements must be set;\n/// then after parsing the peer's Transport parameters, verify that\n/// all these requirements are met.\n/// If not met, it is considered a 
TransportParameters error.\n#[derive(Debug, Clone, Copy)]\nenum Requirements {\n    Client {\n        initial_scid: Option<ConnectionId>,\n        retry_scid: Option<ConnectionId>,\n        origin_dcid: ConnectionId,\n    },\n    Server {\n        initial_scid: Option<ConnectionId>,\n    },\n}\n\n/// Transport parameters for QUIC.\n/// The transport parameters are used to negotiate the initial\n/// settings of a QUIC connection.\n///\n/// They are exchanged in the Initial packets of the handshake,\n/// including client and server transport parameters.\n/// Client transport parameters and server transport parameters\n/// exist independently and are not merged.\n/// They each constrain the behavior of the remote peer.\n///\n/// For different roles, local transport parameters and remote\n/// transport parameters differ.\n/// For example, as a client, the local transport parameters\n/// are client parameters, while remote transport parameters\n/// are server parameters. The same applies to the server.\n///\n/// Note that client transport parameters and server transport\n/// parameters are different, as some transport parameters can\n/// only appear in server transport parameters.\n/// Therefore, for a QUIC connection, the transport parameter\n/// sets for both ends are defined as follows.\n#[derive(Debug)]\npub struct Parameters {\n    state: u8,\n    client: Arc<ClientParameters>,\n    server: Arc<ServerParameters>,\n    remembered: Option<Arc<ServerParameters>>,\n    requirements: Requirements,\n    wakers: Vec<Waker>,\n}\n\nimpl Drop for Parameters {\n    fn drop(&mut self) {\n        self.wake_all();\n    }\n}\n\nimpl Parameters {\n    const CLIENT_READY: u8 = 1;\n    const SERVER_READY: u8 = 2;\n\n    /// Creates a new client transport parameters, with the client\n    /// parameters and remembered server parameters if exist.\n    ///\n    /// It will wait for the server transport parameters to be\n    /// received and parsed.\n    pub fn new_client(\n        
client: ClientParameters,\n        remembered: Option<ServerParameters>,\n        origin_dcid: ConnectionId,\n    ) -> Self {\n        Self {\n            state: Self::CLIENT_READY,\n            client: Arc::new(client),\n            server: Arc::default(),\n            remembered: remembered.map(Arc::new),\n            requirements: Requirements::Client {\n                origin_dcid,\n                initial_scid: None,\n                retry_scid: None,\n            },\n            wakers: Vec::with_capacity(2),\n        }\n    }\n\n    /// Creates a new server transport parameters, with the server\n    /// parameters.\n    ///\n    /// It will wait for the client transport parameters to be\n    /// received and parsed.\n    pub fn new_server(server: ServerParameters) -> Self {\n        Self {\n            state: Self::SERVER_READY,\n            client: Arc::default(),\n            server: Arc::new(server),\n            remembered: None,\n            requirements: Requirements::Server { initial_scid: None },\n            wakers: Vec::with_capacity(2),\n        }\n    }\n\n    pub fn role(&self) -> Role {\n        match self.requirements {\n            Requirements::Client { .. } => Role::Client,\n            Requirements::Server { .. 
} => Role::Server,\n        }\n    }\n\n    pub fn client(&self) -> Option<&Arc<ClientParameters>> {\n        if self.state & Self::CLIENT_READY != 0 {\n            Some(&self.client)\n        } else {\n            None\n        }\n    }\n\n    pub fn server(&self) -> Option<&Arc<ServerParameters>> {\n        if self.state & Self::SERVER_READY != 0 {\n            Some(&self.server)\n        } else {\n            None\n        }\n    }\n\n    /// Returns the remembered server transport parameters if they exist,\n    /// meaning the client previously connected to the server and stored\n    /// its transport parameters.\n    ///\n    /// It is meaningful only for the client, to send early data\n    /// with 0-RTT packets before receiving the server transport params.\n    pub fn remembered(&self) -> Option<&Arc<ServerParameters>> {\n        self.remembered.as_ref()\n    }\n\n    pub fn get_local<V: TryFrom<ParameterValue>>(&self, id: ParameterId) -> Option<V> {\n        match self.role() {\n            Role::Client => self.client()?.get(id),\n            Role::Server => self.server()?.get(id),\n        }\n    }\n\n    pub fn get_remote<V: TryFrom<ParameterValue>>(&self, id: ParameterId) -> Option<V> {\n        match self.role() {\n            Role::Client => self.server()?.get(id),\n            Role::Server => self.client()?.get(id),\n        }\n    }\n\n    // fn set_retry_scid(&mut self, cid: ConnectionId) {\n    //     assert_eq!(self.role(), Role::Server);\n    //     self.server.set_retry_source_connection_id(cid);\n    // }\n\n    pub fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        if self.state == Self::CLIENT_READY | Self::SERVER_READY {\n            Poll::Ready(())\n        } else {\n            self.wakers.push(cx.waker().clone());\n            Poll::Pending\n        }\n    }\n\n    pub fn is_remote_params_received(&self) -> bool {\n        match self.role() {\n            Role::Client => !self.server.is_empty(),\n            Role::Server => 
!self.client.is_empty(),\n        }\n    }\n\n    /// Returns true if the remote transport parameters have been received and authenticated.\n    ///\n    /// It is usually used to avoid processing remote transport parameters\n    /// more than once.\n    pub fn is_remote_params_ready(&self) -> bool {\n        self.state == Self::CLIENT_READY | Self::SERVER_READY\n    }\n\n    /// Called when the remote transport parameters are received.\n    /// It will parse and check the remote transport parameters,\n    /// and wake all the wakers waiting for the remote transport parameters\n    /// if the remote transport parameters are valid.\n    pub fn recv_remote_params(\n        &mut self,\n        params: impl Into<PeerParameters>,\n    ) -> Result<(), QuicError> {\n        match params.into() {\n            PeerParameters::Client(p) => {\n                assert_eq!(self.role(), Role::Server);\n                assert!(self.client.is_empty());\n                self.client = Arc::new(p);\n            }\n            PeerParameters::Server(p) => {\n                assert_eq!(self.role(), Role::Client);\n                assert!(self.server.is_empty());\n                self.server = Arc::new(p);\n            }\n        }\n\n        // Because TLS and packet parsing are in parallel,\n        // the scid of the peer end may not be set when the transport parameters of the peer are obtained.\n        // Therefore, if the scid of the other end is not set, authentication will not be performed first,\n        // and authentication will be performed when it is set.\n        if self.authenticate_cids()? {\n            self.state = Self::CLIENT_READY | Self::SERVER_READY;\n            self.remembered.take();\n            self.wake_all();\n            return Ok(());\n        }\n\n        Ok(())\n    }\n\n    fn wake_all(&mut self) {\n        for waker in self.wakers.drain(..) 
{\n            waker.wake();\n        }\n    }\n\n    /// Whether client or server, after receiving the Initial\n    /// packet from the peer, the initial_source_connection_id in\n    /// the remote transport parameters must equal the source connection\n    /// id in the received Initial packet.\n    ///\n    /// If the peer's transport parameters have not been verified yet,\n    /// they will be verified here. If verification fails, this method will\n    /// return Err.\n    pub fn initial_scid_from_peer_need_equal(\n        &mut self,\n        cid: ConnectionId,\n    ) -> Result<(), QuicError> {\n        let initial_scid = match &mut self.requirements {\n            Requirements::Client { initial_scid, .. } => initial_scid,\n            Requirements::Server { initial_scid } => initial_scid,\n        };\n        assert!(initial_scid.replace(cid).is_none());\n\n        // Because the TLS handshake and packet parsing are in parallel,\n        // the scid of the peer end may not be set when the transport parameters of the peer are obtained.\n        // Therefore, if the scid of the other end is not set, authentication will not be performed first,\n        // and authentication will be performed when it is set.\n        if self.is_remote_params_received() && self.authenticate_cids()? {\n            self.state = Self::CLIENT_READY | Self::SERVER_READY;\n            self.remembered.take();\n            self.wake_all();\n            return Ok(());\n        }\n\n        Ok(())\n    }\n\n    /// After receiving the Retry packet from the server, the\n    /// retry_source_connection_id in the server transport parameters\n    /// must equal the source connection id in the Retry packet.\n    pub fn retry_scid_from_server_need_equal(&mut self, cid: ConnectionId) {\n        match &mut self.requirements {\n            Requirements::Client { retry_scid, .. } => *retry_scid = Some(cid),\n            Requirements::Server { .. 
} => panic!(\"server should never call this\"),\n        }\n    }\n\n    pub fn initial_scid_from_peer(&self) -> Option<ConnectionId> {\n        match self.requirements {\n            Requirements::Client { initial_scid, .. } => initial_scid,\n            Requirements::Server { initial_scid, .. } => initial_scid,\n        }\n    }\n\n    fn authenticate_cids(&self) -> Result<bool, QuicError> {\n        fn param_error(reason: &'static str) -> QuicError {\n            QuicError::new(\n                ErrorKind::TransportParameter,\n                FrameType::Crypto.into(),\n                reason,\n            )\n        }\n\n        // Because TLS and packet parsing are in parallel,\n        // the scid of the peer end may not be set when the transport parameters of the peer are obtained.\n        // Therefore, if the scid of the other end is not set, authentication will not be performed first,\n        // and authentication will be performed when it is set.\n        match self.requirements {\n            Requirements::Client {\n                initial_scid,\n                retry_scid: _,\n                origin_dcid,\n            } => {\n                let Some(initial_scid) = initial_scid else {\n                    return Ok(false);\n                };\n                if self\n                    .server\n                    .get::<ConnectionId>(ParameterId::InitialSourceConnectionId)\n                    .expect(\"this value must be set\")\n                    != initial_scid\n                {\n                    return Err(param_error(\n                        \"Initial Source Connection ID from server mismatch\",\n                    ));\n                }\n                // Not correct: retry_scid should be verified the same way as initial_scid.\n                // if self.server.retry_source_connection_id() != retry_scid {\n                //     return Err(param_error(\"Retry Source Connection ID mismatch\"));\n                // }\n                if self\n                    .server\n         
           .get::<ConnectionId>(ParameterId::OriginalDestinationConnectionId)\n                    .expect(\"this value must be set\")\n                    != origin_dcid\n                {\n                    return Err(param_error(\"Original Destination Connection ID mismatch\"));\n                }\n                Ok(true)\n            }\n            Requirements::Server { initial_scid } => {\n                let Some(initial_scid) = initial_scid else {\n                    return Ok(false);\n                };\n                if self\n                    .client\n                    .get::<ConnectionId>(ParameterId::InitialSourceConnectionId)\n                    .expect(\"this value must be set\")\n                    != initial_scid\n                {\n                    return Err(param_error(\n                        \"Initial Source Connection ID from client mismatch\",\n                    ));\n                }\n                Ok(true)\n            }\n        }\n    }\n\n    /// Returns None if the remote parameters are not ready.\n    pub fn negotiated_max_idle_timeout(&self) -> Option<Duration> {\n        let local_max_idle_timeout = self.get_local(ParameterId::MaxIdleTimeout)?;\n        let remote_max_idle_timeout = self.get_remote(ParameterId::MaxIdleTimeout)?;\n\n        Some(match (local_max_idle_timeout, remote_max_idle_timeout) {\n            // rfc: https://datatracker.ietf.org/doc/html/rfc9000#name-idle-timeout\n            // Each endpoint advertises a max_idle_timeout, but the effective value\n            // at an endpoint is computed as the minimum of the two advertised\n            // values (or the sole advertised value, if only one endpoint advertises\n            // a non-zero value). 
By announcing a max_idle_timeout, an endpoint\n            // commits to initiating an immediate close (Section 10.2) if\n            // it abandons the connection prior to the effective value.\n            (Duration::ZERO, Duration::ZERO) => Duration::MAX,\n            (Duration::ZERO, d) | (d, Duration::ZERO) => d,\n            // rfc: https://datatracker.ietf.org/doc/html/rfc9000#name-idle-timeout\n            // If a max_idle_timeout is specified by either endpoint in its\n            // transport parameters (Section 18.2), the connection is silently\n            // closed and its state is discarded when it remains idle for longer\n            // than the minimum of the max_idle_timeout value advertised by both\n            // endpoints.\n            (d1, d2) => d1.min(d2),\n        })\n    }\n}\n\n/// Shared transport parameter sets for both endpoints.\n///\n/// The local transport parameters are set initially, while\n/// the remote transport parameters must wait until they are\n/// received through network transmission and can be parsed.\n/// After parsing, the peer parameters must be immediately\n/// verified to ensure they meet the requirements and validity\n/// checks.\n///\n/// Note that a connection error may occur before receiving\n/// the remote transport parameters, such as network unreachable.\n/// In such cases, the entire connection parameters will be\n/// converted into an error state.\n#[derive(Debug, Clone)]\npub struct ArcParameters(Arc<Mutex<Result<Parameters, Error>>>);\n\n// ArcParameters::lock_guard(&self) -> Result<ArcParametersGuard, Error>;\n// pub struct ArcParametersGuard: impl Deref<Target = Parameters>\n\npub struct ParametersGuard<'a>(MutexGuard<'a, Result<Parameters, Error>>);\n\nimpl Deref for ParametersGuard<'_> {\n    type Target = Parameters;\n\n    fn deref(&self) -> &Self::Target {\n        self.0.as_ref().expect(\"parameters must be valid\")\n    }\n}\n\nimpl DerefMut for ParametersGuard<'_> {\n    fn deref_mut(&mut self) -> 
&mut Self::Target {\n        self.0.as_mut().expect(\"parameters must be valid\")\n    }\n}\n\nimpl From<Parameters> for ArcParameters {\n    fn from(params: Parameters) -> Self {\n        Self(Arc::new(Mutex::new(Ok(params))))\n    }\n}\n\nimpl ArcParameters {\n    #[inline]\n    pub fn lock_guard(&self) -> Result<ParametersGuard<'_>, Error> {\n        let guard = self.0.lock().unwrap();\n        match guard.as_ref() {\n            Ok(_) => Ok(ParametersGuard(guard)),\n            Err(e) => Err(e.clone()),\n        }\n    }\n\n    #[inline]\n    pub async fn remote_ready(&self) -> Result<ParametersGuard<'_>, Error> {\n        std::future::poll_fn(|cx| {\n            let mut parameters = self.lock_guard()?;\n            parameters.poll_ready(cx).map(|()| Ok(parameters))\n        })\n        .await\n    }\n\n    // /// Sets the retry source connection ID in the server\n    // /// transport parameters.\n    // ///\n    // /// It is meaningful only for the client, because only\n    // /// server can send the Retry packet.\n    // pub fn set_retry_scid(&self, cid: ConnectionId) {\n    //     let mut guard = self.0.lock().unwrap();\n    //     if let Ok(params) = guard.deref_mut() {\n    //         params.set_retry_scid(cid);\n    //     }\n    // }\n\n    /// When some connection error occurred, convert this parameters\n    /// into error state.\n    pub fn on_conn_error(&self, error: &Error) {\n        let mut guard = self.0.lock().unwrap();\n        if guard.deref_mut().is_ok() {\n            *guard = Err(error.clone());\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::varint::VarInt;\n\n    fn create_test_client_params() -> ClientParameters {\n        let mut params = ClientParameters::default();\n        params\n            .set(\n                ParameterId::InitialSourceConnectionId,\n                ConnectionId::from_slice(b\"client_test\"),\n            )\n            .unwrap();\n        params\n   
 }\n\n    fn create_test_server_params() -> ServerParameters {\n        let mut params = ServerParameters::default();\n        params\n            .set(\n                ParameterId::InitialSourceConnectionId,\n                ConnectionId::from_slice(b\"server_test\"),\n            )\n            .unwrap();\n        params\n            .set(\n                ParameterId::OriginalDestinationConnectionId,\n                ConnectionId::from_slice(b\"original\"),\n            )\n            .unwrap();\n        params\n    }\n\n    #[test]\n    fn test_parameters_new() {\n        let client_params = create_test_client_params();\n        let params =\n            Parameters::new_client(client_params, None, ConnectionId::from_slice(b\"odcid\"));\n        assert_eq!(params.role(), Role::Client);\n        assert_eq!(params.state, Parameters::CLIENT_READY);\n\n        let server_params = create_test_server_params();\n        let params = Parameters::new_server(server_params);\n        assert_eq!(params.role(), Role::Server);\n        assert_eq!(params.state, Parameters::SERVER_READY);\n    }\n\n    #[test]\n    fn test_authenticate_cids() {\n        let client_params = create_test_client_params();\n\n        let odcid = ConnectionId::from_slice(b\"odcid\");\n\n        let mut params = Parameters::new_client(client_params, None, odcid);\n\n        let server_cid = ConnectionId::from_slice(b\"server_test\");\n        params\n            .initial_scid_from_peer_need_equal(server_cid)\n            .unwrap();\n\n        params.server = Arc::new({\n            let mut server_params = ServerParameters::default();\n            server_params\n                .set(ParameterId::InitialSourceConnectionId, server_cid)\n                .unwrap();\n            server_params\n                .set(ParameterId::OriginalDestinationConnectionId, odcid)\n                .unwrap();\n            server_params\n        });\n\n        assert!(params.authenticate_cids().is_ok());\n    }\n\n    
#[test]\n    fn test_parameters_as_client() {\n        let client_params = create_test_client_params();\n        let arc_params = ArcParameters::from(Parameters::new_client(\n            client_params,\n            None,\n            ConnectionId::from_slice(b\"odcid\"),\n        ));\n\n        // Test accessing parameters through lock_guard\n        let guard = arc_params.lock_guard().unwrap();\n\n        // Test local params\n        assert!(matches!(\n            guard.get_local::<VarInt>(ParameterId::MaxUdpPayloadSize),\n            Some(value) if value.into_u64() >= 1200\n        ));\n\n        // Test remembered params\n        assert!(guard.remembered().is_none());\n    }\n\n    #[test]\n    fn test_validate_remote_params() {\n        // Test invalid max_udp_payload_size\n        assert_eq!(\n            ClientParameters::parse_from_bytes(&[\n                1, 1, 0, // max_idle_timeout\n                3, 2, 0x43, 0xE8, // max_udp_payload_size: 1000\n                4, 1, 0, // initial_max_data\n                5, 1, 0, // initial_max_stream_data_bidi_local\n                6, 1, 0, // initial_max_stream_data_bidi_remote\n                7, 1, 0, // initial_max_stream_data_uni\n                8, 1, 0, // initial_max_streams_bidi\n                9, 1, 0, // initial_max_streams_uni\n                10, 1, 3, // ack_delay_exponent\n                11, 1, 25, // max_ack_delay\n                14, 1, 2, // active_connection_id_limit\n                15, 0, // initial_source_connection_id\n                32, 4, 128, 0, 255, 255, // max_datagram_frame_size\n            ]),\n            Err(QuicError::new(\n                ErrorKind::TransportParameter,\n                FrameType::Crypto.into(),\n                \"MaxUdpPayloadSize's value 1000 is out of bounds 1200..=65527\",\n            ))\n        );\n    }\n\n    #[test]\n    fn test_write_parameters() {\n        let client_params = create_test_client_params();\n        let params = 
ArcParameters::from(Parameters::new_client(\n            client_params,\n            None,\n            ConnectionId::from_slice(b\"odcid\"),\n        ));\n\n        // Test that we can access the parameters\n        let guard = params.lock_guard().unwrap();\n        assert_eq!(guard.role(), Role::Client);\n    }\n\n    #[tokio::test]\n    async fn test_arc_parameters_error_handling() {\n        let arc_params = ArcParameters::from(Parameters::new_client(\n            create_test_client_params(),\n            None,\n            ConnectionId::from_slice(b\"odcid\"),\n        ));\n\n        // Simulate connection error\n        let error = QuicError::new(\n            ErrorKind::TransportParameter,\n            FrameType::Crypto.into(),\n            \"test error\",\n        )\n        .into();\n        arc_params.on_conn_error(&error);\n\n        assert!(arc_params.lock_guard().is_err());\n    }\n}\n"
  },
  {
    "path": "qbase/src/role.rs",
    "content": "use std::{fmt, ops};\n\nuse crate::param::ParameterId;\n\n/// Roles in the QUIC protocol, including client and server.\n///\n/// The least significant bit (0x01) of the [`StreamId`](crate::sid) identifies the initiator role of the stream.\n/// Client-initiated streams have even-numbered stream IDs (with the bit set to 0),\n/// and server-initiated streams have odd-numbered stream IDs (with the bit set to 1).\n/// See [section-2.1-3](https://www.rfc-editor.org/rfc/rfc9000.html#section-2.1-3)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html).\n///\n/// # Note\n///\n/// As a protocol capable of multiplexing streams, QUIC is different from traditional\n/// HTTP protocols for clients and servers.\n/// In the QUIC protocol, it is not only the client that can actively open a new stream;\n/// the server can also actively open a new stream to push some data to the client.\n/// In fact, in a new stream, the server can initiate an HTTP3 request to the client,\n/// and the client, upon receiving the request, responds back to the server.\n/// In this case, the client surprisingly plays the role of the traditional \"server\",\n/// which is quite fascinating.\n///\n/// # Example\n///\n/// ```\n/// use qbase::role::Role;\n///\n/// let local = Role::Client;\n/// let peer = !local;\n/// let is_client = matches!(local, Role::Client); // true\n/// let is_server = matches!(peer, Role::Server); // true\n/// ```\n#[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]\npub enum Role {\n    /// The initiator of a connection\n    Client = 0,\n    /// The acceptor of a connection\n    Server = 1,\n}\n\nimpl fmt::Display for Role {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.pad(match *self {\n            Self::Client => \"client\",\n            Self::Server => \"server\",\n        })\n    }\n}\n\nimpl ops::Not for Role {\n    type Output = Self;\n    fn not(self) -> Self {\n        match self {\n            Self::Client => 
Self::Server,\n            Self::Server => Self::Client,\n        }\n    }\n}\n\npub trait IntoRole {\n    /// Convert the type into a [`Role`].\n    fn into_role() -> Role;\n}\n\npub trait RequiredParameters {\n    fn required_parameters() -> impl IntoIterator<Item = ParameterId>;\n}\n\n#[derive(Default, Debug, Clone, Copy, PartialEq, Eq)]\npub struct Client;\n\nimpl From<Client> for Role {\n    fn from(_: Client) -> Self {\n        Role::Client\n    }\n}\n\nimpl IntoRole for Client {\n    fn into_role() -> Role {\n        Role::Client\n    }\n}\n\nimpl RequiredParameters for Client {\n    fn required_parameters() -> impl IntoIterator<Item = ParameterId> {\n        [ParameterId::InitialSourceConnectionId].into_iter()\n    }\n}\n\n#[derive(Default, Debug, Clone, Copy, PartialEq, Eq)]\npub struct Server;\n\nimpl From<Server> for Role {\n    fn from(_: Server) -> Self {\n        Role::Server\n    }\n}\n\nimpl IntoRole for Server {\n    fn into_role() -> Role {\n        Role::Server\n    }\n}\n\nimpl RequiredParameters for Server {\n    fn required_parameters() -> impl IntoIterator<Item = ParameterId> {\n        [\n            ParameterId::InitialSourceConnectionId,\n            ParameterId::OriginalDestinationConnectionId,\n        ]\n        .into_iter()\n    }\n}\n"
  },
  {
    "path": "qbase/src/sid/handy.rs",
    "content": "use super::{ControlStreamsConcurrency, Dir};\n\n/// Consistent concurrency strategy increase limits as streams are closed,\n/// to keep the number of streams available to peers roughly consistent.\n#[derive(Debug)]\npub struct ConsistentConcurrency {\n    max_streams: [u64; 2],\n}\n\nimpl ConsistentConcurrency {\n    pub fn new(initial_max_bi: u64, initial_max_uni: u64) -> Self {\n        Self {\n            max_streams: [initial_max_bi, initial_max_uni],\n        }\n    }\n}\n\nimpl ControlStreamsConcurrency for ConsistentConcurrency {\n    fn on_accept_streams(&mut self, _dir: Dir, _sid: u64) -> Option<u64> {\n        None\n    }\n\n    fn on_end_of_stream(&mut self, dir: Dir, _sid: u64) -> Option<u64> {\n        let idx = dir as usize;\n        let new_limit = self.max_streams[idx] + 1;\n\n        self.max_streams[idx] = new_limit;\n        Some(new_limit)\n    }\n\n    fn on_streams_blocked(&mut self, _dir: Dir, _max_streams: u64) -> Option<u64> {\n        None\n    }\n}\n\n/// Demand concurrency strategy increase limits as long as receiving a\n/// [`StreamsBlockedFrame`](crate::frame::StreamsBlockedFrame).\n#[derive(Debug)]\npub struct DemandConcurrency;\n\nimpl ControlStreamsConcurrency for DemandConcurrency {\n    fn on_accept_streams(&mut self, _dir: Dir, _sid: u64) -> Option<u64> {\n        None\n    }\n\n    fn on_end_of_stream(&mut self, _dir: Dir, _sid: u64) -> Option<u64> {\n        None\n    }\n\n    fn on_streams_blocked(&mut self, _dir: Dir, max_streams: u64) -> Option<u64> {\n        Some(max_streams + 1)\n    }\n}\n"
  },
  {
    "path": "qbase/src/sid/local_sid.rs",
    "content": "use std::{\n    collections::VecDeque,\n    sync::{Arc, Mutex},\n    task::{Context, Poll, Waker},\n};\n\nuse super::{Dir, Role, StreamId};\nuse crate::{\n    frame::{\n        MaxStreamsFrame, StreamsBlockedFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::tx::{ArcSendWakers, Signals},\n    sid::MAX_STREAMS_LIMIT,\n    varint::VarInt,\n};\n\n/// Local stream IDs management.\n#[derive(Debug)]\nstruct LocalStreamIds<BLOCKED> {\n    /// Our role\n    role: Role,\n    max: [u64; 2],\n    unallocated: [u64; 2],\n    /// Used for waiting for the MaxStream frame notification from peer when we have exhausted the creation of stream IDs\n    wakers: [VecDeque<Waker>; 2],\n    /// The StreamsBlocked frames that will be sent to peer\n    blocked: BLOCKED,\n    tx_wakers: ArcSendWakers,\n}\n\nimpl<BLOCKED> LocalStreamIds<BLOCKED>\nwhere\n    BLOCKED: SendFrame<StreamsBlockedFrame> + Clone + Send + 'static,\n{\n    /// Create a new [`LocalStreamIds`] with the given role,\n    /// and maximum number of streams that can be created in each [`Dir`].\n    fn new(\n        role: Role,\n        init_max_bi_streams: u64,\n        init_max_uni_streams: u64,\n        blocked: BLOCKED,\n        tx_wakers: ArcSendWakers,\n    ) -> Self {\n        debug_assert!(\n            role == Role::Client || (init_max_bi_streams == 0 && init_max_uni_streams == 0),\n            \"Server cannot remember the parameters\"\n        );\n        Self {\n            role,\n            max: [init_max_bi_streams, init_max_uni_streams],\n            unallocated: [0, 0],\n            wakers: [VecDeque::with_capacity(2), VecDeque::with_capacity(2)],\n            blocked,\n            tx_wakers,\n        }\n    }\n\n    /// Returns local role.\n    fn role(&self) -> Role {\n        self.role\n    }\n\n    /// Returns the number of opened streams in the `dir` direction.\n    fn opened_streams(&self, dir: Dir) -> u64 {\n        self.unallocated[dir as usize]\n    }\n\n    /// Receive 
the [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`) from peer,\n    /// update the maximum stream ID that can be opened locally in the given direction.\n    fn recv_max_streams_frame(&mut self, frame: MaxStreamsFrame) {\n        let (dir, val) = match frame {\n            MaxStreamsFrame::Bi(max) => (Dir::Bi, max.into_u64()),\n            MaxStreamsFrame::Uni(max) => (Dir::Uni, max.into_u64()),\n        };\n        self.increase_limit(dir, val);\n    }\n\n    fn increase_limit(&mut self, dir: Dir, val: u64) {\n        assert!(val <= MAX_STREAMS_LIMIT);\n        let max_streams = &mut self.max[dir as usize];\n        // RFC9000: MAX_STREAMS frames that do not increase the stream limit MUST be ignored.\n        if *max_streams < val {\n            // The rejected 0rtt stream can be sent again, as if new data was written.\n            if *max_streams < self.unallocated[dir as usize] {\n                self.tx_wakers.wake_all_by(Signals::WRITTEN);\n            }\n            for waker in self.wakers[dir as usize].drain(..) 
{\n                waker.wake();\n            }\n            *max_streams = val;\n        }\n    }\n\n    fn poll_alloc_sid(&mut self, cx: &mut Context<'_>, dir: Dir) -> Poll<Option<StreamId>> {\n        let idx = dir as usize;\n        let max = self.max[idx];\n        let unallocated = self.unallocated[idx];\n        if unallocated > MAX_STREAMS_LIMIT {\n            Poll::Ready(None)\n        } else if unallocated < max {\n            self.unallocated[idx] += 1;\n            Poll::Ready(Some(StreamId::new(self.role, dir, unallocated)))\n        } else {\n            // waiting for MAX_STREAMS frame from peer\n            self.wakers[idx].push_back(cx.waker().clone());\n            // if Poll::Pending is returned, connection can send a STREAMS_BLOCKED frame to peer\n            self.blocked.send_frame([StreamsBlockedFrame::with(\n                dir,\n                VarInt::from_u64(max).expect(\"max_streams limit must be less than VARINT_MAX\"),\n            )]);\n            Poll::Pending\n        }\n    }\n\n    pub fn revise_max_streams(\n        &mut self,\n        zero_rtt_rejected: bool,\n        max_stream_bidi: u64,\n        max_stream_uni: u64,\n    ) {\n        if zero_rtt_rejected {\n            self.max = [0, 0];\n        }\n        self.increase_limit(Dir::Bi, max_stream_bidi);\n        self.increase_limit(Dir::Uni, max_stream_uni);\n    }\n}\n\n/// Management of stream IDs that are allowed to be used locally.\n///\n/// The maximum stream ID that can be created is limited by the\n/// [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`) from the peer.\n///\n/// When the stream IDs in the `dir` direction are exhausted,\n/// a [`StreamsBlockedFrame`](`crate::frame::StreamsBlockedFrame`) will be sent to the peer.\n/// The generic parameter `BLOCKED` is the container of the [`StreamsBlockedFrame`]\n/// that will be sent to the peer; it can be a channel, a queue, or a buffer,\n/// as long as it can send the [`StreamsBlockedFrame`] to the peer.\n#[derive(Debug, 
Clone)]\npub struct ArcLocalStreamIds<BLOCKED>(Arc<Mutex<LocalStreamIds<BLOCKED>>>);\n\nimpl<BLOCKED> ArcLocalStreamIds<BLOCKED>\nwhere\n    BLOCKED: SendFrame<StreamsBlockedFrame> + Clone + Send + 'static,\n{\n    /// Create a new [`ArcLocalStreamIds`] with the given role,\n    /// and maximum number of streams that can be created in each direction;\n    /// `blocked` contains the [`StreamsBlockedFrame`] that will be sent to the peer.\n    pub fn new(\n        role: Role,\n        max_bidi: u64,\n        max_uni: u64,\n        blocked: BLOCKED,\n        tx_wakers: ArcSendWakers,\n    ) -> Self {\n        Self(Arc::new(Mutex::new(LocalStreamIds::new(\n            role, max_bidi, max_uni, blocked, tx_wakers,\n        ))))\n    }\n\n    /// Returns the local role.\n    pub fn role(&self) -> Role {\n        self.0.lock().unwrap().role()\n    }\n\n    /// Returns the number of opened streams in the `dir` direction.\n    ///\n    /// The return value will not be greater than max_streams;\n    /// that is, if 0rtt is rejected, the return value may be less than the number of streams\n    /// opened in the 0rtt phase. It is the number of streams that can actually be sent in the 1rtt space.\n    pub fn opened_streams(&self, dir: Dir) -> u64 {\n        self.0.lock().unwrap().opened_streams(dir)\n    }\n\n    /// Receive the [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`) from the peer,\n    /// and then update the maximum stream ID that is allowed to be used locally.\n    ///\n    /// The maximum usable stream ID is limited by the peer.\n    /// Therefore, it mainly depends on the peer,\n    /// and is subject to the [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`)\n    /// received from the peer.\n    pub fn recv_max_streams_frame(&self, frame: MaxStreamsFrame) {\n        self.0.lock().unwrap().recv_max_streams_frame(frame);\n    }\n\n    /// Asynchronously 
allocate the next new [`StreamId`] in the `dir` direction.\n    ///\n    /// When the application layer wants to proactively open a new stream,\n    /// it needs to first apply to allocate the next unused [`StreamId`].\n    /// Note that streams on a QUIC connection usually have a maximum concurrency limit,\n    /// so when requesting a [`StreamId`], it may not be possible to obtain one due to\n    /// reaching the maximum concurrency limit.\n    /// However, this is temporary. When currently active streams end,\n    /// the peer will expand the maximum stream ID limit through a\n    /// [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`),\n    /// allowing new [`StreamId`]s to be allocated.\n    ///\n    /// Return `Pending` when the stream IDs in the `dir` direction are exhausted,\n    /// until receiving the [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`) from the peer.\n    ///\n    /// Return `None` if the stream IDs in the `dir` direction finally exceed 2^60,\n    /// though this is extremely unlikely to happen.\n    pub fn poll_alloc_sid(&self, cx: &mut Context<'_>, dir: Dir) -> Poll<Option<StreamId>> {\n        self.0.lock().unwrap().poll_alloc_sid(cx, dir)\n    }\n\n    pub fn revise_max_streams(\n        &self,\n        zero_rtt_rejected: bool,\n        max_stream_bidi: u64,\n        max_stream_uni: u64,\n    ) {\n        self.0.lock().unwrap().revise_max_streams(\n            zero_rtt_rejected,\n            max_stream_bidi,\n            max_stream_uni,\n        );\n    }\n}\n\nimpl<BLOCKED> ReceiveFrame<MaxStreamsFrame> for ArcLocalStreamIds<BLOCKED>\nwhere\n    BLOCKED: SendFrame<StreamsBlockedFrame> + Clone + Send + 'static,\n{\n    type Output = ();\n\n    fn recv_frame(&self, frame: MaxStreamsFrame) -> Result<Self::Output, crate::error::Error> {\n        self.recv_max_streams_frame(frame);\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use 
derive_more::Deref;\n\n    use super::*;\n    use crate::util::ArcAsyncDeque;\n\n    #[derive(Clone, Deref, Default)]\n    struct StreamsBlockedFrameTx(ArcAsyncDeque<StreamsBlockedFrame>);\n\n    impl SendFrame<StreamsBlockedFrame> for StreamsBlockedFrameTx {\n        fn send_frame<I: IntoIterator<Item = StreamsBlockedFrame>>(&self, iter: I) {\n            (&self.0).extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_stream_id_new() {\n        let sid = StreamId::new(Role::Client, Dir::Bi, 0);\n        assert_eq!(sid, StreamId(0));\n        assert_eq!(sid.role(), Role::Client);\n        assert_eq!(sid.dir(), Dir::Bi);\n    }\n\n    #[test]\n    fn test_recv_max_stream_frames() {\n        let local = ArcLocalStreamIds::new(\n            Role::Client,\n            0,\n            0,\n            StreamsBlockedFrameTx::default(),\n            ArcSendWakers::default(),\n        );\n        local.recv_max_streams_frame(MaxStreamsFrame::Bi(VarInt::from_u32(0)));\n        let waker = futures::task::noop_waker();\n        let mut cx = Context::from_waker(&waker);\n        assert_eq!(local.poll_alloc_sid(&mut cx, Dir::Bi), Poll::Pending,);\n        assert!(!local.0.lock().unwrap().wakers[0].is_empty());\n\n        local.recv_max_streams_frame(MaxStreamsFrame::Bi(VarInt::from_u32(1)));\n        let _ = local.0.lock().unwrap().wakers[0].pop_front();\n        assert_eq!(\n            local.poll_alloc_sid(&mut cx, Dir::Bi),\n            Poll::Ready(Some(StreamId(0)))\n        );\n        assert_eq!(local.poll_alloc_sid(&mut cx, Dir::Bi), Poll::Pending);\n        assert!(!local.0.lock().unwrap().wakers[0].is_empty());\n\n        local.recv_max_streams_frame(MaxStreamsFrame::Uni(VarInt::from_u32(2)));\n        assert_eq!(\n            local.poll_alloc_sid(&mut cx, Dir::Uni),\n            Poll::Ready(Some(StreamId(2)))\n        );\n        assert_eq!(\n            local.poll_alloc_sid(&mut cx, Dir::Uni),\n            Poll::Ready(Some(StreamId(6)))\n        );\n        
assert_eq!(local.poll_alloc_sid(&mut cx, Dir::Uni), Poll::Pending);\n        assert!(!local.0.lock().unwrap().wakers[1].is_empty());\n    }\n}\n"
  },
  {
    "path": "qbase/src/sid/remote_sid.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse thiserror::Error;\n\nuse super::{ControlStreamsConcurrency, Dir, Role, StreamId};\nuse crate::{\n    frame::{\n        MaxStreamsFrame, StreamsBlockedFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    varint::VarInt,\n};\n\n/// Exceed the maximum stream ID limit error,\n/// similar with [`ErrorKind::StreamLimit`](`crate::error::ErrorKind::StreamLimit`).\n///\n/// This error occurs when the stream ID in the received stream-related frames\n/// exceeds the maximum stream ID limit.\n#[derive(Debug, PartialEq, Error)]\n#[error(\"{0} exceed limit: {1}\")]\npub struct ExceedLimitError(StreamId, u64);\n\n/// Accept the stream ID received from peer,\n/// returned by [`ArcRemoteStreamIds::try_accept_sid`].\n#[derive(Debug, PartialEq)]\npub enum AcceptSid {\n    /// Indicates that the stream ID is already exist.\n    Old,\n    /// Indicates that the stream ID is new and need to create.\n    /// The `NeedCreate` inside indicates the range of stream IDs that need to be created together.\n    New(NeedCreate),\n}\n\n/// The range of stream IDs that need to be created,\n/// see [`ArcRemoteStreamIds::try_accept_sid`] and [`AcceptSid::New`].\n#[derive(Debug, PartialEq)]\npub struct NeedCreate {\n    start: StreamId,\n    end: StreamId,\n}\n\nimpl Iterator for NeedCreate {\n    type Item = StreamId;\n    fn next(&mut self) -> Option<Self::Item> {\n        if self.start > self.end {\n            None\n        } else {\n            // Safety: Since being generated from \"StreamIds\", they could not overflow.\n            let id = self.start;\n            self.start = unsafe { self.start.next_unchecked() };\n            Some(id)\n        }\n    }\n}\n\n/// Remote stream IDs management.\n#[derive(Debug)]\nstruct RemoteStreamIds<MAX> {\n    role: Role,                               // The role of the peer\n    max: [u64; 2],                            // The maximum stream ID that limit peer to create\n    unallocated: 
[StreamId; 2],               // The stream ID that peer has not used\n    ctrl: Box<dyn ControlStreamsConcurrency>, // The strategy to control the concurrency of streams\n    max_tx: MAX,                              // The channel to send the MAX_STREAMS frame to peer\n}\n\nimpl<MAX> RemoteStreamIds<MAX>\nwhere\n    MAX: SendFrame<MaxStreamsFrame> + Clone + Send + 'static,\n{\n    /// Create a new [`RemoteStreamIds`] with the given role,\n    /// and maximum number of streams that can be created by peer in each [`Dir`].\n    fn new(\n        role: Role,\n        max_bi: u64,\n        max_uni: u64,\n        max_tx: MAX,\n        ctrl: Box<dyn ControlStreamsConcurrency>,\n    ) -> Self {\n        Self {\n            role,\n            max: [max_bi, max_uni],\n            unallocated: [\n                StreamId::new(role, Dir::Bi, 0),\n                StreamId::new(role, Dir::Uni, 0),\n            ],\n            ctrl,\n            max_tx,\n        }\n    }\n\n    /// Returns the role of the peer.\n    fn role(&self) -> Role {\n        self.role\n    }\n\n    fn try_accept_sid(&mut self, sid: StreamId) -> Result<AcceptSid, ExceedLimitError> {\n        debug_assert_eq!(sid.role(), self.role);\n        let idx = sid.dir() as usize;\n        if sid.id() > self.max[idx] {\n            return Err(ExceedLimitError(sid, self.max[idx]));\n        }\n        let cur = &mut self.unallocated[idx];\n        if sid < *cur {\n            Ok(AcceptSid::Old)\n        } else {\n            let start = *cur;\n            *cur = unsafe { sid.next_unchecked() };\n            if let Some(max_streams) = self.ctrl.on_accept_streams(sid.dir(), sid.id()) {\n                self.max[idx] = max_streams;\n                self.max_tx.send_frame([MaxStreamsFrame::with(\n                    sid.dir(),\n                    VarInt::from_u64(max_streams)\n                        .expect(\"max_streams must be less than VARINT_MAX\"),\n                )]);\n            }\n            
Ok(AcceptSid::New(NeedCreate { start, end: sid }))\n        }\n    }\n\n    fn on_end_of_stream(&mut self, sid: StreamId) {\n        if sid.role() != self.role {\n            return;\n        }\n\n        if let Some(max_streams) = self.ctrl.on_end_of_stream(sid.dir(), sid.id()) {\n            self.max[sid.dir() as usize] = max_streams;\n            self.max_tx.send_frame([MaxStreamsFrame::with(\n                sid.dir(),\n                VarInt::from_u64(max_streams).expect(\"max_streams must be less than VARINT_MAX\"),\n            )]);\n        }\n    }\n\n    fn recv_streams_blocked_frame(&mut self, frame: StreamsBlockedFrame) {\n        let (dir, max_streams) = match frame {\n            StreamsBlockedFrame::Bi(max) => (Dir::Bi, max.into_u64()),\n            StreamsBlockedFrame::Uni(max) => (Dir::Uni, max.into_u64()),\n        };\n        if let Some(max_streams) = self.ctrl.on_streams_blocked(dir, max_streams) {\n            self.max[dir as usize] = max_streams;\n            self.max_tx.send_frame([MaxStreamsFrame::with(\n                dir,\n                VarInt::from_u64(max_streams).expect(\"max_streams must be less than VARINT_MAX\"),\n            )]);\n        }\n    }\n}\n\n/// Shared remote stream IDs, mainly controls and monitors the stream IDs\n/// in the received stream-related frames from the peer.\n///\n/// Checks whether the stream IDs exceed the limit, creates streams if necessary,\n/// and sends a [`MaxStreamsFrame`](`crate::frame::MaxStreamsFrame`)\n/// to the peer to update the maximum stream ID limit in time.\n///\n/// # Note\n///\n/// After receiving the peer's stream-related frames,\n/// due to possible out-of-order reception issues,\n/// the stream IDs in these frames may have gaps,\n/// i.e., they may not be continuous with the previous stream ID of the same type.\n/// So before a stream is created,\n/// all streams of the same type with lower-numbered stream IDs MUST be created.\n/// This ensures that the creation order for streams is 
consistent on both endpoints.\n#[derive(Debug, Clone)]\npub struct ArcRemoteStreamIds<MAX>(Arc<Mutex<RemoteStreamIds<MAX>>>);\n\nimpl<MAX> ArcRemoteStreamIds<MAX>\nwhere\n    MAX: SendFrame<MaxStreamsFrame> + Clone + Send + 'static,\n{\n    /// Create a new [`ArcRemoteStreamIds`] with the given role,\n    /// and maximum number of streams that can be created by the peer in each direction.\n    ///\n    /// The maximum numbers of streams that can be created by the peer in each direction\n    /// are `initial_max_streams_bidi` and `initial_max_streams_uni`\n    /// in the local [`Parameters`](`crate::param::Parameters`).\n    /// See [section-18.2-4.21](https://www.rfc-editor.org/rfc/rfc9000.html#section-18.2-4.21)\n    /// and [section-18.2-4.23](https://www.rfc-editor.org/rfc/rfc9000.html#section-18.2-4.23)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n    pub fn new(\n        role: Role,\n        max_bi: u64,\n        max_uni: u64,\n        max_tx: MAX,\n        ctrl: Box<dyn ControlStreamsConcurrency>,\n    ) -> Self {\n        Self(Arc::new(Mutex::new(RemoteStreamIds::new(\n            role, max_bi, max_uni, max_tx, ctrl,\n        ))))\n    }\n\n    /// Returns the role of the peer.\n    pub fn role(&self) -> Role {\n        self.0.lock().unwrap().role()\n    }\n\n    /// Try to accept the stream ID received from the peer.\n    ///\n    /// This function needs to be called only for stream IDs that must be created by the peer.\n    ///\n    /// This stream ID may belong to an already existing stream or a new stream that does not yet exist.\n    /// If it is the latter, a new stream needs to be created.\n    /// Before a stream is created, all streams of the same type\n    /// with lower-numbered stream IDs MUST be created.\n    /// See [section-3.2-6](https://www.rfc-editor.org/rfc/rfc9000.html#section-3.2-6)\n    /// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n    ///\n    /// # Return\n    ///\n    /// - Return 
[`ExceedLimitError`] if the stream ID exceeds the maximum stream ID limit.\n    /// - Return [`AcceptSid::Old`] if the stream ID already exists.\n    /// - Return [`AcceptSid::New`] if the stream ID is new and needs to be created.\n    ///   The `NeedCreate` inside indicates the range of stream IDs that need to be created.\n    pub fn try_accept_sid(&self, sid: StreamId) -> Result<AcceptSid, ExceedLimitError> {\n        self.0.lock().unwrap().try_accept_sid(sid)\n    }\n\n    #[inline]\n    pub fn on_end_of_stream(&self, sid: StreamId) {\n        self.0.lock().unwrap().on_end_of_stream(sid);\n    }\n\n    #[inline]\n    pub fn recv_streams_blocked_frame(&self, frame: StreamsBlockedFrame) {\n        self.0.lock().unwrap().recv_streams_blocked_frame(frame);\n    }\n}\n\nimpl<MAX> ReceiveFrame<StreamsBlockedFrame> for ArcRemoteStreamIds<MAX>\nwhere\n    MAX: SendFrame<MaxStreamsFrame> + Clone + Send + 'static,\n{\n    type Output = ();\n\n    fn recv_frame(&self, frame: StreamsBlockedFrame) -> Result<Self::Output, crate::error::Error> {\n        self.recv_streams_blocked_frame(frame);\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use derive_more::Deref;\n\n    use super::*;\n    use crate::{sid::handy::ConsistentConcurrency, util::ArcAsyncDeque};\n\n    #[derive(Clone, Deref, Default)]\n    struct MaxStreamsFrameTx(ArcAsyncDeque<MaxStreamsFrame>);\n\n    impl SendFrame<MaxStreamsFrame> for MaxStreamsFrameTx {\n        fn send_frame<I: IntoIterator<Item = MaxStreamsFrame>>(&self, iter: I) {\n            (&self.0).extend(iter);\n        }\n    }\n\n    #[test]\n    fn test_try_accept_sid() {\n        let remote = ArcRemoteStreamIds::new(\n            Role::Server,\n            10,\n            5,\n            MaxStreamsFrameTx::default(),\n            Box::new(ConsistentConcurrency::new(10, 5)),\n        );\n        let result = remote.try_accept_sid(StreamId(21));\n        assert_eq!(\n            result,\n            Ok(AcceptSid::New(NeedCreate {\n         
       start: StreamId(1),\n                end: StreamId(21)\n            }))\n        );\n        assert_eq!(remote.0.lock().unwrap().unallocated[0], StreamId(25));\n\n        let result = remote.try_accept_sid(StreamId(25));\n        assert_eq!(\n            result,\n            Ok(AcceptSid::New(NeedCreate {\n                start: StreamId(25),\n                end: StreamId(25)\n            }))\n        );\n        assert_eq!(remote.0.lock().unwrap().unallocated[0], StreamId(29));\n\n        let result = remote.try_accept_sid(StreamId(41));\n        assert_eq!(\n            result,\n            Ok(AcceptSid::New(NeedCreate {\n                start: StreamId(29),\n                end: StreamId(41)\n            }))\n        );\n        assert_eq!(remote.0.lock().unwrap().unallocated[0], StreamId(45));\n        if let Ok(AcceptSid::New(mut range)) = result {\n            assert_eq!(range.next(), Some(StreamId(29)));\n            assert_eq!(range.next(), Some(StreamId(33)));\n            assert_eq!(range.next(), Some(StreamId(37)));\n            assert_eq!(range.next(), Some(StreamId(41)));\n            assert_eq!(range.next(), None);\n        }\n\n        let result = remote.try_accept_sid(StreamId(65));\n        assert_eq!(result, Err(ExceedLimitError(StreamId(65), 10)));\n    }\n}\n"
  },
  {
    "path": "qbase/src/sid.rs",
    "content": "use std::fmt;\n\nuse super::{\n    frame::MaxStreamsFrame,\n    varint::{VarInt, WriteVarInt, be_varint},\n};\nuse crate::{\n    frame::{StreamsBlockedFrame, io::SendFrame},\n    net::tx::ArcSendWakers,\n    role::Role,\n};\n\n/// Sum type for stream directions.\n///\n/// Streams can be unidirectional or bidirectional.\n/// Unidirectional streams carry data in one direction: from the initiator of the stream to its peer.\n/// Bidirectional streams allow for data to be sent in both directions.\n/// See [section-2.1-1](https://www.rfc-editor.org/rfc/rfc9000.html#section-2.1-1)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html).\n///\n/// The second least significant bit (0x02) of the [`StreamId`] distinguishes between\n/// bidirectional streams (with the bit set to 0) and unidirectional streams (with the bit set to 1).\n/// See [section-2.1-4](https://www.rfc-editor.org/rfc/rfc9000.html#section-2.1-4)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html).\n#[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]\npub enum Dir {\n    /// Data flows in both directions\n    Bi = 0,\n    /// Data flows only from the stream's initiator\n    Uni = 1,\n}\n\nimpl fmt::Display for Dir {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        f.pad(match *self {\n            Self::Bi => \"bidirectional\",\n            Self::Uni => \"unidirectional\",\n        })\n    }\n}\n\n/// Streams are identified within a connection by a numeric value,\n/// referred to as the stream ID.\n///\n/// A stream ID is a 62-bit integer (0 to 262-1) that is unique for all streams on a connection.\n/// Stream IDs are encoded as [`VarInt`].\n/// A QUIC endpoint MUST NOT reuse a stream ID within a connection.\n///\n/// There are four types of streams in QUIC, divided according to the role and direction of the stream.\n/// See [Stream ID Types](https://www.rfc-editor.org/rfc/rfc9000.html#name-stream-id-types)\n/// of 
[QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]\npub struct StreamId(u64);\n\n/// Maximum ID for each type of stream.\n///\n/// [`StreamId`] is encoded with [`VarInt`].\n/// After removing the lowest 2 bits for direction and role,\n/// the remaining 60 bits are used to represent the actual ID for each type of stream,\n/// so its maximum range cannot exceed 2^60.\npub const MAX_STREAMS_LIMIT: u64 = (1 << 60) - 1;\n\nimpl StreamId {\n    /// Create a new stream ID with the given role, direction, and ID.\n    ///\n    /// It is prohibited to directly create a StreamId from external sources.\n    /// StreamId can only be allocated incrementally by proactively creating new streams locally,\n    /// or by accepting new streams opened by the peer.\n    pub fn new(role: Role, dir: Dir, id: u64) -> Self {\n        assert!(id <= MAX_STREAMS_LIMIT);\n        Self((((id << 1) | (dir as u64)) << 1) | (role as u64))\n    }\n\n    /// Returns the role of this stream ID.\n    pub fn role(&self) -> Role {\n        if self.0 & 0x1 == 0 {\n            Role::Client\n        } else {\n            Role::Server\n        }\n    }\n\n    /// Returns the direction of this stream ID.\n    pub fn dir(&self) -> Dir {\n        if self.0 & 2 == 0 { Dir::Bi } else { Dir::Uni }\n    }\n\n    /// Get the actual ID of this stream, removing the lowest 2 bits for direction and role.\n    pub fn id(&self) -> u64 {\n        self.0 >> 2\n    }\n\n    unsafe fn next_unchecked(&self) -> Self {\n        Self(self.0 + 4)\n    }\n\n    /// Return the encoding size of this stream ID.\n    pub fn encoding_size(&self) -> usize {\n        VarInt::from(*self).encoding_size()\n    }\n}\n\nimpl fmt::Display for StreamId {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        write!(\n            f,\n            \"{} side {} stream {}\",\n            self.role(),\n            self.dir(),\n            self.id()\n     
   )\n    }\n}\n\nimpl From<VarInt> for StreamId {\n    fn from(v: VarInt) -> Self {\n        Self(v.into_u64())\n    }\n}\n\nimpl From<StreamId> for VarInt {\n    fn from(s: StreamId) -> Self {\n        VarInt::from_u64(s.0).expect(\"stream id must be less than VARINT_MAX\")\n    }\n}\n\nimpl From<StreamId> for u64 {\n    fn from(s: StreamId) -> Self {\n        s.0\n    }\n}\n\n/// Parse a stream ID from the input bytes,\n/// [nom](https://docs.rs/nom/6.2.1/nom/) parser style.\npub fn be_streamid(input: &[u8]) -> nom::IResult<&[u8], StreamId> {\n    use nom::{Parser, combinator::map};\n    map(be_varint, StreamId::from).parse(input)\n}\n\n/// A BufMut extension trait for writing a stream ID.\npub trait WriteStreamId: bytes::BufMut {\n    /// Write a stream ID to the buffer.\n    fn put_streamid(&mut self, stream_id: &StreamId);\n}\n\nimpl<T: bytes::BufMut> WriteStreamId for T {\n    fn put_streamid(&mut self, stream_id: &StreamId) {\n        self.put_varint(&(*stream_id).into());\n    }\n}\n\n/// Controls the concurrency of unidirectional and bidirectional streams created by the peer,\n/// primarily through [`StreamsBlockedFrame`] and [`MaxStreamsFrame`].\n///\n/// [RFC 9000](https://www.rfc-editor.org/rfc/rfc9000.html)\n/// leaves implementations to decide when and how many streams should be\n/// advertised to a peer via MAX_STREAMS. 
Implementations might choose to\n/// increase limits as streams are closed, to keep the number of streams\n/// available to peers roughly consistent.\n///\n/// Implementations might also choose to increase limits as long as the\n/// peer needs to create new streams.\n///\n/// See [controlling concurrency](https://www.rfc-editor.org/rfc/rfc9000.html#name-controlling-concurrency)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\npub trait ControlStreamsConcurrency: fmt::Debug + Send + Sync {\n    /// Called back upon accepting a new `dir` direction stream with stream ID `sid` from the peer;\n    /// all previously nonexistent `dir` direction streams that the peer could have opened will also be created.\n    ///\n    /// Returns the new maximum stream limit if it should be increased,\n    /// which will be communicated to the peer via a MAX_STREAMS frame in the future.\n    /// If None is returned, it means there is no need to\n    /// increase the MAX_STREAMS for the time being.\n    #[must_use]\n    fn on_accept_streams(&mut self, dir: Dir, sid: u64) -> Option<u64>;\n\n    /// Called back when a `dir` directional stream is ended,\n    /// whether it is closed normally or reset abnormally.\n    ///\n    /// The `sid` is the stream ID of the ended `dir` direction stream.\n    ///\n    /// Returns the new maximum stream limit if it should be increased,\n    /// which will be communicated to the peer via a MAX_STREAMS frame in the future.\n    /// If None is returned, it means there is no need to\n    /// increase the MAX_STREAMS for the time being.\n    fn on_end_of_stream(&mut self, dir: Dir, sid: u64) -> Option<u64>;\n\n    /// Called back upon receiving the StreamsBlocked frame,\n    /// which indicates that the peer is blocked from creating more `dir` direction streams.\n    ///\n    /// It may optionally return an increased value for the `max_streams`\n    /// for the `dir` directional streams.\n    /// If None is returned, it means there is no need to increase\n   
 /// the MAX_STREAMS for the time being.\n    fn on_streams_blocked(&mut self, dir: Dir, max_streams: u64) -> Option<u64>;\n}\n\nimpl<C: ?Sized + ControlStreamsConcurrency> ControlStreamsConcurrency for Box<C> {\n    fn on_accept_streams(&mut self, dir: Dir, sid: u64) -> Option<u64> {\n        self.as_mut().on_accept_streams(dir, sid)\n    }\n\n    fn on_end_of_stream(&mut self, dir: Dir, sid: u64) -> Option<u64> {\n        self.as_mut().on_end_of_stream(dir, sid)\n    }\n\n    fn on_streams_blocked(&mut self, dir: Dir, max_streams: u64) -> Option<u64> {\n        self.as_mut().on_streams_blocked(dir, max_streams)\n    }\n}\n\npub trait ProductStreamsConcurrencyController: Send + Sync {\n    fn init(\n        &self,\n        init_max_bidi_streams: u64,\n        init_max_uni_streams: u64,\n    ) -> Box<dyn ControlStreamsConcurrency>;\n}\n\nimpl<F, C> ProductStreamsConcurrencyController for F\nwhere\n    F: Fn(u64, u64) -> C + Send + Sync,\n    C: ControlStreamsConcurrency + 'static,\n{\n    #[inline]\n    fn init(\n        &self,\n        init_max_bidi_streams: u64,\n        init_max_uni_streams: u64,\n    ) -> Box<dyn ControlStreamsConcurrency> {\n        Box::new((self)(init_max_bidi_streams, init_max_uni_streams))\n    }\n}\n\npub mod handy;\n\npub mod local_sid;\npub use local_sid::ArcLocalStreamIds;\n\npub mod remote_sid;\npub use remote_sid::ArcRemoteStreamIds;\n\n/// Stream IDs management, including an [`ArcLocalStreamIds`] as local,\n/// and an [`ArcRemoteStreamIds`] as remote.\n#[derive(Debug, Clone)]\npub struct StreamIds<BLOCKED, MAX> {\n    pub local: ArcLocalStreamIds<BLOCKED>,\n    pub remote: ArcRemoteStreamIds<MAX>,\n}\n\nimpl<T> StreamIds<T, T>\nwhere\n    T: SendFrame<MaxStreamsFrame> + SendFrame<StreamsBlockedFrame> + Clone + Send + 'static,\n{\n    /// Create a new [`StreamIds`] with the given role, and maximum number of streams of each direction.\n    ///\n    /// The troublesome part is that the maximum number of streams that can be created 
locally\n    /// is restricted by the peer's `initial_max_streams_uni` and `initial_max_streams_bidi` transport\n    /// parameters, which are unknown at the beginning.\n    /// Therefore, the peer's `initial_max_streams_xx` can be set to 0 initially,\n    /// and then updated later after obtaining the peer's `initial_max_streams_xx` setting.\n    #[allow(clippy::too_many_arguments)]\n    pub fn new(\n        role: Role,\n        local_max_bi: u64,\n        local_max_uni: u64,\n        remote_max_bi: u64,\n        remote_max_uni: u64,\n        sid_frames_tx: T,\n        ctrl: Box<dyn ControlStreamsConcurrency>,\n        tx_wakers: ArcSendWakers,\n    ) -> Self {\n        // Peer's limits default to 0 until its transport parameters are known\n        let local = ArcLocalStreamIds::new(\n            role,\n            remote_max_bi,\n            remote_max_uni,\n            sid_frames_tx.clone(),\n            tx_wakers,\n        );\n        let remote =\n            ArcRemoteStreamIds::new(!role, local_max_bi, local_max_uni, sid_frames_tx, ctrl);\n        Self { local, remote }\n    }\n}\n"
  },
  {
    "path": "qbase/src/time.rs",
"content": "use std::sync::{Arc, Mutex, RwLock};\n\nuse thiserror::Error;\nuse tokio::time::{Duration, Instant};\n\nuse crate::{frame::PingFrame, packet::PacketContent};\n\n#[derive(Debug, Error)]\n#[error(\"Path has been idle for too long\")]\npub struct TimeOut;\n\n#[derive(Debug)]\npub struct IdleConfig {\n    max_idle_timeout: Duration,\n    defer_idle_timeout: Duration,\n    heartbeat_interval: Duration,\n}\n\nimpl IdleConfig {\n    fn suitable_heartbeat_interval(max_idle_timeout: Duration) -> Duration {\n        if max_idle_timeout == Duration::ZERO {\n            Duration::from_secs(30)\n        } else {\n            (max_idle_timeout / 2)\n                .max(Duration::from_secs(1))\n                .min(Duration::from_secs(30))\n        }\n    }\n\n    // Creates a new `IdleConfig` with the specified maximum idle timeout and defer idle timeout.\n    pub fn new(max_idle_timeout: Duration, defer_idle_timeout: Duration) -> Self {\n        let heartbeat_interval = Self::suitable_heartbeat_interval(max_idle_timeout);\n        Self {\n            max_idle_timeout,\n            defer_idle_timeout,\n            heartbeat_interval,\n        }\n    }\n\n    // Each endpoint advertises a max_idle_timeout, but the effective value at an endpoint\n    // is computed as the minimum of the two advertised values (or the sole advertised value,\n    // if only one endpoint advertises a non-zero value).\n    //\n    // Idle timeout is disabled when both endpoints omit this transport parameter or specify a value of 0.\n    pub fn negotiate_max_idle_timeout(&mut self, max_idle_timeout: Duration) {\n        match (self.max_idle_timeout, max_idle_timeout) {\n            (_, Duration::ZERO) => (),\n            (Duration::ZERO, remote) => self.max_idle_timeout = remote,\n            (local, remote) => self.max_idle_timeout = local.min(remote),\n        }\n        self.heartbeat_interval = Self::suitable_heartbeat_interval(self.max_idle_timeout);\n    }\n\n    // Sets the 
interval for sending heartbeat packets.\n    pub fn set_heartbeat_interval(&mut self, interval: Duration) {\n        self.heartbeat_interval = interval;\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct ArcIdleConfig(Arc<RwLock<IdleConfig>>);\n\nimpl ArcIdleConfig {\n    // Creates a new `ArcIdleConfig` with the specified maximum idle timeout and defer idle timeout.\n    pub fn new(max_idle_timeout: Duration, defer_idle_timeout: Duration) -> Self {\n        ArcIdleConfig(Arc::new(RwLock::new(IdleConfig::new(\n            max_idle_timeout,\n            defer_idle_timeout,\n        ))))\n    }\n\n    // Each endpoint advertises a max_idle_timeout, but the effective value at an endpoint\n    // is computed as the minimum of the two advertised values (or the sole advertised value,\n    // if only one endpoint advertises a non-zero value).\n    //\n    // Idle timeout is disabled when both endpoints omit this transport parameter or specify a value of 0.\n    pub fn negotiate_max_idle_timeout(&self, max_idle_timeout: Duration) {\n        self.0\n            .write()\n            .unwrap()\n            .negotiate_max_idle_timeout(max_idle_timeout);\n    }\n\n    // Sets the interval for sending heartbeat packets.\n    pub fn set_heartbeat_interval(&self, interval: Duration) {\n        self.0.write().unwrap().set_heartbeat_interval(interval);\n    }\n\n    pub fn timer(&self) -> ArcIdleTimer {\n        ArcIdleTimer(Arc::new(Mutex::new(IdleTimer {\n            idle_config: self.clone(),\n            heartbeat_times: 0,\n            last_effective_comm: None,\n            idle_begin_at: None,\n        })))\n    }\n\n    fn defer_idle_timeout(&self) -> Duration {\n        self.0.read().unwrap().defer_idle_timeout\n    }\n\n    fn heartbeat_interval(&self) -> Duration {\n        self.0.read().unwrap().heartbeat_interval\n    }\n\n    fn timeout_after(&self, idle_at: Instant) -> bool {\n        let max_idle_timeout = self.0.read().unwrap().max_idle_timeout;\n        
max_idle_timeout != Duration::ZERO && idle_at.elapsed() > max_idle_timeout\n    }\n}\n\n// A timer for each path to determine when to send heartbeat packets\n// and when to delete the path due to idle timeout.\n#[derive(Debug)]\npub struct IdleTimer {\n    idle_config: ArcIdleConfig,\n    heartbeat_times: u32,\n    last_effective_comm: Option<Instant>,\n    idle_begin_at: Option<Instant>,\n}\n\nimpl IdleTimer {\n    // Updates the timer when a packet is sent.\n    pub fn on_sent(&mut self, packet_content: PacketContent) {\n        if packet_content == PacketContent::EffectivePayload {\n            self.last_effective_comm = Some(Instant::now());\n            self.heartbeat_times = 0;\n            self.idle_begin_at = None;\n        }\n    }\n\n    // Updates the timer when a packet is received.\n    pub fn on_rcvd(&mut self, packet_content: PacketContent) {\n        if packet_content == PacketContent::EffectivePayload {\n            self.last_effective_comm = Some(Instant::now());\n            self.heartbeat_times = 0;\n            self.idle_begin_at = None;\n        }\n        if self.idle_begin_at.is_some() {\n            self.idle_begin_at = Some(Instant::now());\n        }\n    }\n\n    // Checks health of the path and\n    // determines whether a heartbeat packet needs to be sent.\n    pub fn health(&mut self) -> Result<Option<PingFrame>, TimeOut> {\n        if let Some(t) = self.last_effective_comm {\n            let elapsed = t.elapsed();\n            if elapsed > self.idle_config.defer_idle_timeout() {\n                if self.idle_begin_at.is_none() {\n                    self.idle_begin_at = Some(Instant::now());\n                    return Ok(Some(PingFrame)); // heartbeat for the last time\n                }\n            } else if elapsed > self.idle_config.heartbeat_interval() * (self.heartbeat_times + 1) {\n                self.heartbeat_times += 1;\n                return Ok(Some(PingFrame));\n            }\n        }\n        if self\n            
.idle_begin_at\n            .is_some_and(|t| self.idle_config.timeout_after(t))\n        {\n            return Err(TimeOut);\n        }\n        Ok(None)\n    }\n}\n\n// A shared timer for each path to determine when to send heartbeat packets\n// and when to delete the path due to idle timeout.\n#[derive(Debug, Clone)]\npub struct ArcIdleTimer(Arc<Mutex<IdleTimer>>);\n\nimpl ArcIdleTimer {\n    // Updates the timer when a packet is sent.\n    pub fn on_sent(&self, packet_content: PacketContent) {\n        self.0.lock().unwrap().on_sent(packet_content);\n    }\n\n    // Updates the timer when a packet is received.\n    pub fn on_rcvd(&self, packet_content: PacketContent) {\n        self.0.lock().unwrap().on_rcvd(packet_content);\n    }\n\n    // Checks health of the path and\n    // determines whether a heartbeat packet needs to be sent.\n    pub fn health(&self) -> Result<Option<PingFrame>, TimeOut> {\n        self.0.lock().unwrap().health()\n    }\n}\n"
  },
  {
    "path": "qbase/src/token.rs",
    "content": "use std::{ops::Deref, sync::Arc};\n\nuse bytes::BufMut;\nuse derive_more::Deref;\nuse nom::{IResult, bytes::complete::take};\nuse rand::RngExt;\n\nuse crate::{\n    error::{ErrorKind, QuicError},\n    frame::{GetFrameType, NewTokenFrame, io::ReceiveFrame},\n};\n\npub const RESET_TOKEN_SIZE: usize = 16;\n\n#[derive(Deref, Debug, Copy, Clone, Default, PartialEq, Eq, Hash)]\npub struct ResetToken([u8; RESET_TOKEN_SIZE]);\n\nimpl ResetToken {\n    pub fn new(bytes: &[u8]) -> Self {\n        Self(bytes.try_into().unwrap())\n    }\n\n    pub fn random_gen() -> Self {\n        let mut bytes = [0; RESET_TOKEN_SIZE];\n        rand::rng().fill(&mut bytes);\n        Self(bytes)\n    }\n\n    pub fn encoding_size(&self) -> usize {\n        RESET_TOKEN_SIZE\n    }\n}\n\npub fn be_reset_token(input: &[u8]) -> IResult<&[u8], ResetToken> {\n    let (input, bytes) = take(RESET_TOKEN_SIZE)(input)?;\n    Ok((input, ResetToken::new(bytes)))\n}\n\npub trait WriteResetToken {\n    fn put_reset_token(&mut self, token: &ResetToken);\n}\n\nimpl<T: BufMut> WriteResetToken for T {\n    fn put_reset_token(&mut self, token: &ResetToken) {\n        self.put_slice(token.as_slice());\n    }\n}\n\npub trait TokenSink: Send + Sync {\n    fn sink(&self, server_name: &str, token: Vec<u8>);\n\n    fn fetch_token(&self, server_name: &str) -> Vec<u8>;\n}\n\npub trait TokenProvider: Send + Sync {\n    fn gen_new_token(&self, server_name: &str) -> Vec<u8>;\n\n    fn gen_retry_token(&self, server_name: &str) -> Vec<u8>;\n\n    // A token sent in a NEW_TOKEN frame or a Retry packet MUST be constructed in\n    // a way that allows the server to identify how it was provided to a client\n    fn verify_token(&self, server_name: &str, token: &[u8]) -> bool;\n}\n\npub enum TokenRegistry {\n    Client((String, Arc<dyn TokenSink>)),\n    Server(Arc<dyn TokenProvider>),\n}\n\n#[derive(Clone)]\npub struct ArcTokenRegistry(Arc<TokenRegistry>);\n\nimpl ArcTokenRegistry {\n    pub fn 
with_sink(server_name: String, sink: Arc<dyn TokenSink>) -> Self {\n        Self(Arc::new(TokenRegistry::Client((server_name, sink))))\n    }\n\n    pub fn with_provider(provider: Arc<dyn TokenProvider>) -> Self {\n        Self(Arc::new(TokenRegistry::Server(provider)))\n    }\n}\n\nimpl Deref for ArcTokenRegistry {\n    type Target = TokenRegistry;\n\n    fn deref(&self) -> &Self::Target {\n        self.0.deref()\n    }\n}\n\nimpl ReceiveFrame<NewTokenFrame> for ArcTokenRegistry {\n    type Output = ();\n\n    fn recv_frame(&self, frame: NewTokenFrame) -> Result<Self::Output, crate::error::Error> {\n        match self.deref() {\n            TokenRegistry::Client((server_name, client)) => {\n                client.sink(server_name, frame.token().to_vec());\n                Ok(())\n            }\n            TokenRegistry::Server(_) => Err(QuicError::new(\n                ErrorKind::ProtocolViolation,\n                frame.frame_type().into(),\n                \"Server received NewTokenFrame\",\n            )\n            .into()),\n        }\n    }\n}\n\npub mod handy {\n    pub struct NoopTokenRegistry;\n\n    impl super::TokenSink for NoopTokenRegistry {\n        fn sink(&self, _: &str, _: Vec<u8>) {}\n\n        fn fetch_token(&self, _: &str) -> Vec<u8> {\n            Vec::with_capacity(0)\n        }\n    }\n\n    impl super::TokenProvider for NoopTokenRegistry {\n        fn gen_new_token(&self, _: &str) -> Vec<u8> {\n            Vec::new()\n        }\n\n        fn gen_retry_token(&self, _: &str) -> Vec<u8> {\n            Vec::new()\n        }\n\n        fn verify_token(&self, _: &str, _: &[u8]) -> bool {\n            false\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    #[test]\n    fn test_create_token() {\n        super::ResetToken::new(&[0; 16]);\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_create_token_with_less_size() {\n        super::ResetToken::new(&[0; 15]);\n    }\n\n    #[test]\n    #[should_panic]\n    fn 
test_create_token_with_more_size() {\n        super::ResetToken::new(&[0; 17]);\n    }\n\n    #[test]\n    fn test_read_reset_token() {\n        use nom::error::{Error, ErrorKind};\n\n        let buf = vec![0; 16];\n        let (remain, token) = super::be_reset_token(&buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        assert_eq!(token, super::ResetToken::new(&[0; 16]));\n        let buf = vec![0; 15];\n        assert_eq!(\n            super::be_reset_token(&buf),\n            Err(nom::Err::Error(Error::new(&buf[..], ErrorKind::Eof)))\n        );\n    }\n\n    #[test]\n    fn test_write_reset_token() {\n        use super::WriteResetToken;\n\n        let mut buf = vec![];\n        let token = super::ResetToken::new(&[0; 16]);\n        buf.put_reset_token(&token);\n        assert_eq!(buf, &[0; 16]);\n    }\n}\n"
  },
  {
    "path": "qbase/src/util/async_deque.rs",
    "content": "use std::{\n    collections::VecDeque,\n    future::Future,\n    pin::Pin,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n};\n\n/// AsyncDeque is a deque that can be used in async context.\n///\n/// It is a wrapper around VecDeque, with the ability to be popped in async context.\n/// That is, when calling pop on an empty queue,\n/// it will suspend the current task until a new element is pushed in.\n/// In a sense, it is a combination of the sender and receiver ends of an mpsc channel,\n/// and the sender can insert in both directions.\n#[derive(Debug)]\nstruct AsyncDeque<T> {\n    queue: Option<VecDeque<T>>,\n    waker: Option<Waker>,\n}\n\nimpl<T> AsyncDeque<T> {\n    /// Insert an element at the back of the queue,\n    /// and wake up the `pop` task registered by [AsyncDeque::poll_pop] if necessary.\n    fn push_back(&mut self, value: T) {\n        if let Some(queue) = &mut self.queue {\n            queue.push_back(value);\n            if let Some(waker) = self.waker.take() {\n                waker.wake();\n            }\n        }\n    }\n\n    /// Insert an element at the front of the deque,\n    /// and wake up the `pop` task registered by [AsyncDeque::poll_pop] if necessary.\n    fn push_front(&mut self, value: T) {\n        if let Some(queue) = &mut self.queue {\n            queue.push_front(value);\n            if let Some(waker) = self.waker.take() {\n                waker.wake();\n            }\n        }\n    }\n\n    /// Poll the next element in the queue.\n    ///\n    /// If the deque is empty, the current `pop` will be suspended until a new element is pushed in.\n    ///\n    /// If the deque is closed, the `pop` task will get the final `None` element,\n    /// indicating that the queue has been closed,\n    /// and the `pop` task should stop.\n    fn poll_pop(&mut self, cx: &mut Context<'_>) -> Poll<Option<T>> {\n        match &mut self.queue {\n            Some(queue) => {\n                if let 
Some(frame) = queue.pop_front() {\n                    Poll::Ready(Some(frame))\n                } else if let Some(ref waker) = self.waker {\n                    if !waker.will_wake(cx.waker()) {\n                        panic!(\n                            \"Multiple tasks are attempting to wait on the same AsyncDeque. This is a bug, please report it.\"\n                        );\n                    }\n                    // same task; refresh the stored waker\n                    self.waker = Some(cx.waker().clone());\n                    Poll::Pending\n                } else {\n                    // no waker, register the current waker\n                    self.waker = Some(cx.waker().clone());\n                    Poll::Pending\n                }\n            }\n            None => Poll::Ready(None),\n        }\n    }\n\n    /// Return the number of elements in the queue.\n    fn len(&self) -> usize {\n        self.queue.as_ref().map(|v| v.len()).unwrap_or(0)\n    }\n\n    /// Return whether the queue is empty.\n    fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n\n    /// Close the deque, and wake up the `pop` task registered by [AsyncDeque::poll_pop] if necessary.\n    ///\n    /// This will cause the `pop` task to get the final `None` element,\n    /// indicating that the queue has been closed,\n    /// and the `pop` task should stop.\n    pub fn close(&mut self) {\n        self.queue = None;\n        if let Some(waker) = self.waker.take() {\n            waker.wake();\n        }\n    }\n}\n\nimpl<T> Extend<T> for AsyncDeque<T> {\n    fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {\n        if let Some(queue) = &mut self.queue {\n            queue.extend(iter);\n            if let Some(waker) = self.waker.take() {\n                waker.wake();\n            }\n        }\n    }\n}\n\n/// A shared deque that can be used in async context.\n///\n/// It is a wrapper around VecDeque, with the ability to be 
popped in async context.\n/// That is, when calling pop on an empty queue,\n/// it will suspend the current task until a new element is pushed in.\n/// In a sense, it is a combination of the sender and receiver ends of an mpsc channel,\n/// and the sender can insert in both directions.\n#[derive(Debug)]\npub struct ArcAsyncDeque<T>(Arc<Mutex<AsyncDeque<T>>>);\n\nimpl<T> ArcAsyncDeque<T> {\n    /// Create a new [`ArcAsyncDeque`] with 8 as the default capacity.\n    pub fn new() -> Self {\n        Self(Arc::new(Mutex::new(AsyncDeque {\n            queue: Some(VecDeque::with_capacity(8)),\n            waker: None,\n        })))\n    }\n\n    /// Create a new [`ArcAsyncDeque`] with a given capacity.\n    pub fn with_capacity(capacity: usize) -> Self {\n        Self(Arc::new(Mutex::new(AsyncDeque {\n            queue: Some(VecDeque::with_capacity(capacity)),\n            waker: None,\n        })))\n    }\n\n    fn lock_guard(&self) -> MutexGuard<'_, AsyncDeque<T>> {\n        self.0.lock().unwrap()\n    }\n\n    /// Insert an element at the front of the queue,\n    /// and wake up the `pop` task  if registered by [ArcAsyncDeque::pop].\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::ArcAsyncDeque;\n    ///\n    /// let mut deque = ArcAsyncDeque::new();\n    /// deque.push_front(1);\n    /// deque.push_front(2);\n    /// assert_eq!(deque.len(), 2);\n    /// ```\n    pub fn push_front(&self, value: T) {\n        self.lock_guard().push_front(value);\n    }\n\n    /// Insert an element at the back of the queue,\n    /// and wake up the `pop` task  if registered by [ArcAsyncDeque::pop].\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::ArcAsyncDeque;\n    ///\n    /// let mut deque = ArcAsyncDeque::new();\n    /// deque.push_back(1);\n    /// deque.push_back(2);\n    /// assert_eq!(deque.len(), 2);\n    /// ```\n    pub fn push_back(&self, value: T) {\n        self.lock_guard().push_back(value);\n    }\n\n    /// 
Asynchronously pop the next element in the queue.\n    ///\n    /// If the deque is empty, the current `pop` will be suspended until a new element is pushed in.\n    ///\n    /// If the deque is closed, the `pop` task will get the final `None` element,\n    /// indicating that the queue has been closed,\n    /// and the `pop` task should stop.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::ArcAsyncDeque;\n    ///\n    /// #[tokio::test]\n    /// async fn test() {\n    ///     let deque = ArcAsyncDeque::new();\n    ///\n    ///     tokio::spawn({\n    ///         let deque = deque.clone();\n    ///         async move {\n    ///             assert_eq!(deque.pop().await, Some(1));\n    ///         }\n    ///     });\n    ///\n    ///     deque.push_back(1);\n    /// }\n    /// ```\n    pub fn pop(&self) -> Self {\n        self.clone()\n    }\n\n    /// Poll to pop the next element in the queue.\n    ///\n    /// If the deque is empty, the current `pop` will be suspended until a new element is pushed in.\n    ///\n    /// If the deque is closed, the `pop` task will get the final `None` element,\n    /// indicating that the queue has been closed,\n    /// and the `pop` task should stop.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::ArcAsyncDeque;\n    /// use futures::task::{Poll, noop_waker};\n    ///\n    /// let waker = noop_waker();\n    /// let mut cx = std::task::Context::from_waker(&waker);\n    /// let deque = ArcAsyncDeque::new();\n    /// assert_eq!(deque.poll_pop(&mut cx), Poll::Pending);\n    ///\n    /// deque.push_back(1);\n    /// assert_eq!(deque.poll_pop(&mut cx), Poll::Ready(Some(1)));\n    /// assert_eq!(deque.poll_pop(&mut cx), Poll::Pending);\n    /// ```\n    pub fn poll_pop(&self, cx: &mut Context<'_>) -> Poll<Option<T>> {\n        self.lock_guard().poll_pop(cx)\n    }\n\n    /// Return the number of elements in the queue.\n    pub fn len(&self) -> usize {\n        
self.lock_guard().len()\n    }\n\n    /// Return whether the queue is empty.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::ArcAsyncDeque;\n    ///\n    /// let deque = ArcAsyncDeque::new();\n    /// assert!(deque.is_empty());\n    ///\n    /// deque.push_back(1);\n    /// assert!(!deque.is_empty());\n    /// ```\n    pub fn is_empty(&self) -> bool {\n        self.lock_guard().is_empty()\n    }\n\n    /// Close the deque, and wake up the `pop` task if registered by [ArcAsyncDeque::poll_pop].\n    ///\n    /// This will cause the `pop` task to get the final `None` element,\n    /// indicating that the queue has been closed,\n    /// and the `pop` task should stop.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::ArcAsyncDeque;\n    ///\n    /// #[tokio::test]\n    /// async fn test() {\n    ///     let deque = ArcAsyncDeque::new();\n    ///\n    ///     tokio::spawn({\n    ///         let deque = deque.clone();\n    ///         async move {\n    ///             assert_eq!(deque.pop().await, Some(1));\n    ///             assert_eq!(deque.pop().await, None);\n    ///         }\n    ///     });\n    ///\n    ///     deque.push_back(1);\n    ///     deque.close();\n    /// }\n    /// ```\n    pub fn close(&self) {\n        self.lock_guard().close();\n    }\n}\n\nimpl<T> Default for ArcAsyncDeque<T> {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl<T> Clone for ArcAsyncDeque<T> {\n    fn clone(&self) -> Self {\n        Self(self.0.clone())\n    }\n}\n\nimpl<T> Future for ArcAsyncDeque<T> {\n    type Output = Option<T>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.poll_pop(cx)\n    }\n}\n\nimpl<T: Unpin> futures::Stream for ArcAsyncDeque<T> {\n    type Item = T;\n\n    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        self.poll_pop(cx)\n    }\n}\n\nimpl<T> Extend<T> for &ArcAsyncDeque<T> {\n    
fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {\n        self.0.lock().unwrap().extend(iter);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use futures::FutureExt;\n\n    use super::*;\n\n    #[tokio::test]\n    async fn push_pop() {\n        let deque = ArcAsyncDeque::new();\n        assert!(deque.is_empty());\n\n        deque.push_back(1);\n        deque.push_back(2);\n        assert_eq!(deque.len(), 2);\n        assert_eq!(deque.pop().await, Some(1));\n        assert_eq!(deque.pop().await, Some(2));\n\n        let deque = ArcAsyncDeque::with_capacity(2);\n        deque.push_back(1);\n        deque.push_front(2);\n        assert_eq!(deque.len(), 2);\n        assert_eq!(deque.pop().await, Some(2));\n        assert_eq!(deque.pop().await, Some(1));\n    }\n\n    #[tokio::test]\n    async fn close() {\n        let deque = ArcAsyncDeque::new();\n        assert!(deque.is_empty());\n\n        deque.push_back(1);\n        deque.push_back(2);\n        assert_eq!(deque.len(), 2);\n\n        deque.close();\n        assert!(deque.is_empty());\n        assert_eq!(deque.pop().await, None);\n    }\n\n    #[tokio::test]\n    async fn wake() {\n        let deque = ArcAsyncDeque::new();\n        tokio::select! {\n            item = deque.pop() => {\n                assert_eq!(item, Some(1));\n            }\n            _ = async {\n                deque.push_back(1);\n                std::future::pending::<()>().await;\n            } => unreachable!()\n        }\n\n        let deque = ArcAsyncDeque::new();\n        tokio::select! 
{\n            item = deque.pop() => {\n                assert_eq!(item, Some(1));\n            }\n            _ = async {\n                deque.push_back(1);\n                std::future::pending::<()>().await;\n            } => unreachable!()\n        }\n    }\n\n    #[tokio::test]\n    async fn cancel() {\n        let deque = ArcAsyncDeque::new();\n\n        // register Waker\n        let poll = core::future::poll_fn(|cx| Poll::Ready(deque.pop().poll_unpin(cx))).await;\n        assert_eq!(poll, Poll::Pending);\n\n        // pop directly\n        (&deque).extend([654]);\n        let poll = core::future::poll_fn(|cx| Poll::Ready(deque.pop().poll_unpin(cx))).await;\n        assert_eq!(poll, Poll::Ready(Some(654)));\n\n        // register new Waker\n        let poll = core::future::poll_fn(|cx| Poll::Ready(deque.pop().poll_unpin(cx))).await;\n        assert_eq!(poll, Poll::Pending);\n\n        // replace the cancelled Waker: same task, so it's ok\n        let poll = core::future::poll_fn(|cx| Poll::Ready(deque.pop().poll_unpin(cx))).await;\n        assert_eq!(poll, Poll::Pending);\n    }\n\n    #[tokio::test]\n    async fn racing() {\n        let deque: ArcAsyncDeque<()> = ArcAsyncDeque::new();\n\n        let consumer = tokio::spawn(deque.pop());\n        tokio::task::yield_now().await;\n\n        let abuse = tokio::spawn(deque.pop());\n        tokio::task::yield_now().await;\n\n        // will not be woken up\n        _ = consumer;\n        // should panic\n        assert!(abuse.await.is_err());\n    }\n}\n"
  },
  {
    "path": "qbase/src/util/bound_queue.rs",
    "content": "use std::{\n    self,\n    future::poll_fn,\n    sync::{Arc, Mutex},\n};\n\nuse futures::{SinkExt, StreamExt, channel::mpsc};\n\n#[derive(Debug)]\nstruct BoundQueueInner<T> {\n    tx: mpsc::Sender<T>,\n    rx: Mutex<mpsc::Receiver<T>>,\n}\n\n#[derive(Debug)]\npub struct BoundQueue<T>(Arc<BoundQueueInner<T>>);\n\nimpl<T> Clone for BoundQueue<T> {\n    fn clone(&self) -> Self {\n        Self(self.0.clone())\n    }\n}\n\nimpl<T> BoundQueue<T> {\n    #[inline]\n    pub fn new(size: usize) -> Self {\n        let (tx, rx) = mpsc::channel(size);\n        Self(Arc::new(BoundQueueInner { tx, rx: rx.into() }))\n    }\n\n    #[inline]\n    pub fn try_send(&self, item: T) -> Result<(), mpsc::TrySendError<T>> {\n        self.0.tx.clone().try_send(item)\n    }\n\n    #[inline]\n    pub async fn send(&self, item: T) -> Result<(), mpsc::SendError> {\n        self.0.tx.clone().send(item).await\n    }\n\n    #[inline]\n    pub async fn recv(&self) -> Option<T> {\n        poll_fn(|cx| self.0.rx.lock().unwrap().poll_next_unpin(cx)).await\n    }\n\n    #[inline]\n    pub fn close(&self) {\n        self.0.tx.clone().close_channel();\n    }\n\n    #[inline]\n    pub fn is_closed(&self) -> bool {\n        self.0.tx.is_closed()\n    }\n\n    #[inline]\n    pub fn same_queue(&self, other: &Self) -> bool {\n        Arc::ptr_eq(&self.0, &other.0)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_send_receive() {\n        let queue = Arc::new(BoundQueue::new(2));\n\n        tokio::spawn({\n            let queue = queue.clone();\n            async move {\n                assert!(queue.send(1).await.is_ok());\n                assert!(queue.send(2).await.is_ok());\n            }\n        });\n\n        assert_eq!(queue.recv().await, Some(1));\n        assert_eq!(queue.recv().await, Some(2));\n    }\n}\n"
  },
  {
    "path": "qbase/src/util/data.rs",
    "content": "use bytes::{BufMut, Bytes, BytesMut};\n\npub trait ContinuousData {\n    fn len(&self) -> usize;\n\n    fn is_empty(&self) -> bool;\n\n    fn to_bytes(&self) -> Bytes;\n}\n\npub type DataPair<'a> = (&'a [u8], &'a [u8]);\n\nimpl ContinuousData for DataPair<'_> {\n    #[inline]\n    fn len(&self) -> usize {\n        self.0.len() + self.1.len()\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        self.0.is_empty() && self.1.is_empty()\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        Bytes::from([self.0, self.1].concat())\n    }\n}\n\nimpl ContinuousData for [u8] {\n    #[inline]\n    fn len(&self) -> usize {\n        <[u8]>::len(self)\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        <[u8]>::is_empty(self)\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        Bytes::copy_from_slice(self)\n    }\n}\n\nimpl<const N: usize> ContinuousData for [u8; N] {\n    #[inline]\n    fn len(&self) -> usize {\n        N\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        N == 0\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        Bytes::copy_from_slice(self)\n    }\n}\n\nimpl ContinuousData for Vec<u8> {\n    #[inline]\n    fn len(&self) -> usize {\n        self.len()\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        self.is_empty()\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        Bytes::copy_from_slice(self)\n    }\n}\n\nimpl ContinuousData for Bytes {\n    #[inline]\n    fn len(&self) -> usize {\n        self.len()\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        self.is_empty()\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        self.clone()\n    }\n}\n\npub type NonData = ();\n\nimpl ContinuousData for NonData {\n    #[inline]\n    fn len(&self) -> usize {\n        0\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        true\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n       
 Bytes::new()\n    }\n}\n\nimpl<D: ContinuousData + ?Sized> ContinuousData for &D {\n    #[inline]\n    fn len(&self) -> usize {\n        D::len(*self)\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        D::is_empty(*self)\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        D::to_bytes(*self)\n    }\n}\n\nimpl<D: ContinuousData> ContinuousData for [D] {\n    #[inline]\n    fn len(&self) -> usize {\n        self.iter().map(|d| d.len()).sum()\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        self.iter().all(|d| d.is_empty())\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        // Reserve the total byte length, not the element count.\n        let capacity = <[D] as ContinuousData>::len(self);\n        self.iter()\n            .fold(BytesMut::with_capacity(capacity), |mut acc, d| {\n                acc.extend(d.to_bytes());\n                acc\n            })\n            .freeze()\n    }\n}\n\nimpl<D: ContinuousData, const N: usize> ContinuousData for [D; N] {\n    // Fully qualified syntax is required here: a bare `<[D]>::len(self)` would\n    // resolve to the inherent slice method and return the element count N\n    // instead of the total byte length.\n    #[inline]\n    fn len(&self) -> usize {\n        <[D] as ContinuousData>::len(self)\n    }\n\n    #[inline]\n    fn is_empty(&self) -> bool {\n        <[D] as ContinuousData>::is_empty(self)\n    }\n\n    #[inline]\n    fn to_bytes(&self) -> Bytes {\n        <[D] as ContinuousData>::to_bytes(self)\n    }\n}\n\n/// A [`BufMut`] extension trait for writing continuous data into a buffer.\npub trait WriteData<D: ContinuousData + ?Sized>: BufMut {\n    /// Writes the data into this buffer.\n    fn put_data(&mut self, data: &D);\n}\n\nimpl<T: BufMut> WriteData<DataPair<'_>> for T {\n    #[inline]\n    fn put_data(&mut self, data: &DataPair<'_>) {\n        self.put_slice(data.0);\n        self.put_slice(data.1);\n    }\n}\n\nimpl<T: BufMut> WriteData<[u8]> for T {\n    #[inline]\n    fn put_data(&mut self, data: &[u8]) {\n        self.put_slice(data)\n    }\n}\n\nimpl<const N: usize, T: BufMut> WriteData<[u8; N]> for T {\n    #[inline]\n    fn put_data(&mut self, data: &[u8; N]) {\n        self.put_slice(data)\n    }\n}\n\nimpl<T: BufMut> WriteData<Bytes> for T {\n    #[inline]\n    fn put_data(&mut self, data: &Bytes) {\n        self.put_slice(data);\n    }\n}\n\nimpl<T: BufMut> WriteData<NonData> for T {\n    #[inline]\n    fn put_data(&mut 
self, &(): &()) {}\n}\n\nimpl<T, D: ContinuousData + ?Sized> WriteData<&D> for T\nwhere\n    T: BufMut + WriteData<D>,\n{\n    #[inline]\n    fn put_data(&mut self, data: &&D) {\n        <T as WriteData<D>>::put_data(self, data);\n    }\n}\n\nimpl<T, D: ContinuousData> WriteData<[D]> for T\nwhere\n    T: BufMut + WriteData<D>,\n{\n    #[inline]\n    fn put_data(&mut self, data: &[D]) {\n        for data in data {\n            self.put_data(data);\n        }\n    }\n}\n\nimpl<T, D: ContinuousData, const N: usize> WriteData<[D; N]> for T\nwhere\n    T: BufMut + WriteData<D>,\n{\n    #[inline]\n    fn put_data(&mut self, data: &[D; N]) {\n        <T as WriteData<[D]>>::put_data(self, data);\n    }\n}\n"
  },
  {
    "path": "qbase/src/util/index_deque.rs",
    "content": "use std::{\n    collections::VecDeque,\n    ops::{Index, IndexMut},\n};\n\nuse thiserror::Error;\n\n/// The index error type for [`IndexDeque`].\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Error)]\npub enum IndexError {\n    #[error(\"The index {0} exceed the limit {1}\")]\n    ExceedLimit(u64, u64),\n    #[error(\"The index {0} is less than the offset {1}\")]\n    TooSmall(u64, u64),\n}\n\n/// A first-in-first-out queue indexed by the enqueue sequence number.\n///\n/// For [`VecDeque`], the index of elements starts from 0 even after they are dequeued.\n/// However, for [`IndexDeque`], the index is the enqueue sequence number.\n/// Even if some elements have been dequeued,\n/// the enqueue index of other elements in IndexDeque remains unchanged.\n///\n/// - `T` is the type of elements in the queue.\n/// - `LIMIT` is the maximum limit of the enqueue index.\n///\n/// [`IndexDeque`] is useful in many places in QUIC implementation,\n/// such as recording packet sending history.\n#[derive(Debug)]\npub struct IndexDeque<T, const LIMIT: u64> {\n    deque: VecDeque<T>,\n    offset: u64,\n}\n\nimpl<T, const LIMIT: u64> Default for IndexDeque<T, LIMIT> {\n    fn default() -> Self {\n        Self {\n            deque: VecDeque::default(),\n            offset: 0,\n        }\n    }\n}\n\nimpl<T, const LIMIT: u64> IndexDeque<T, LIMIT> {\n    /// Create a new empty IndexDeque with the specified capacity.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let deque: IndexDeque<u64, 19> = IndexDeque::with_capacity(10);\n    /// ```\n    pub fn with_capacity(capacity: usize) -> Self {\n        Self {\n            deque: VecDeque::with_capacity(capacity),\n            offset: 0,\n        }\n    }\n\n    /// Returns true if the queue is empty.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    
/// assert!(deque.is_empty());\n    /// deque.push_back(1).unwrap();\n    /// assert!(!deque.is_empty());\n    /// ```\n    pub fn is_empty(&self) -> bool {\n        self.deque.is_empty()\n    }\n\n    /// Returns the number of elements in the queue.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// assert_eq!(deque.len(), 0);\n    /// deque.push_back(1).unwrap();\n    /// assert_eq!(deque.len(), 1);\n    /// ```\n    pub fn len(&self) -> usize {\n        self.deque.len()\n    }\n\n    /// Returns the enqueue sequence number of the first element in the queue.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// assert_eq!(deque.offset(), 0);\n    /// deque.push_back(1).unwrap();\n    /// assert_eq!(deque.offset(), 0);\n    /// deque.pop_front();\n    /// assert_eq!(deque.offset(), 1);\n    /// ```\n    pub fn offset(&self) -> u64 {\n        self.offset\n    }\n\n    /// Returns the next enqueue sequence number of the queue.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// assert_eq!(deque.largest(), 0);\n    /// deque.push_back(1).unwrap();\n    /// assert_eq!(deque.largest(), 1);\n    /// ```\n    pub fn largest(&self) -> u64 {\n        self.offset + self.deque.len() as u64\n    }\n\n    /// Returns true if the queue contains the specified enqueue index.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// assert!(!deque.contain(0));\n    /// deque.push_back(1).unwrap();\n    /// assert!(deque.contain(0));\n    /// assert!(!deque.contain(1));\n    /// ```\n    pub fn 
contain(&self, idx: u64) -> bool {\n        idx >= self.offset && idx < self.largest()\n    }\n\n    /// Provides a reference to an element at the specified enqueue index.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// assert_eq!(deque.get(1), Some(&2));\n    /// assert_eq!(deque.get(3), None);\n    /// ```\n    pub fn get(&self, idx: u64) -> Option<&T> {\n        if self.contain(idx) {\n            Some(&self.deque[(idx - self.offset) as usize])\n        } else {\n            None\n        }\n    }\n\n    /// Provides a mutable reference to an element at the specified enqueue index.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// assert_eq!(deque[1], 2);\n    /// if let Some(v) = deque.get_mut(1) {\n    ///    *v = 4;\n    /// }\n    /// assert_eq!(deque[1], 4);\n    /// ```\n    pub fn get_mut(&mut self, idx: u64) -> Option<&mut T> {\n        if self.contain(idx) {\n            Some(&mut self.deque[(idx - self.offset) as usize])\n        } else {\n            None\n        }\n    }\n\n    /// Append an element to the end of the queue and return the enqueue index of the element.\n    /// If it exceeds the maximum limit of the enqueue index, return [`IndexError`].\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::{IndexDeque, IndexError};\n    ///\n    /// let mut deque: IndexDeque<u64, 2> = IndexDeque::default();\n    /// assert_eq!(deque.push_back(1), Ok(0));\n    /// assert_eq!(deque.push_back(2), Ok(1));\n    /// assert_eq!(deque.push_back(3), Ok(2));\n    
/// assert_eq!(deque.push_back(4), Err(IndexError::ExceedLimit(3, 2)));\n    /// ```\n    pub fn push_back(&mut self, value: T) -> Result<u64, IndexError> {\n        let (next_idx, overflowed) = self.offset.overflowing_add(self.deque.len() as u64);\n        if overflowed || next_idx > LIMIT {\n            Err(IndexError::ExceedLimit(next_idx, LIMIT))\n        } else {\n            self.deque.push_back(value);\n            Ok(next_idx)\n        }\n    }\n\n    /// Returns `None` if the queue is empty; otherwise, returns\n    /// the first element in the queue along with its enqueue index.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// assert_eq!(deque.pop_front(), None);\n    ///\n    /// deque.push_back(1).unwrap();\n    /// assert_eq!(deque.pop_front(), Some((0, 1)));\n    /// assert!(deque.is_empty());\n    /// ```\n    pub fn pop_front(&mut self) -> Option<(u64, T)> {\n        self.deque.pop_front().map(|v| {\n            let offset = self.offset;\n            self.offset += 1;\n            (offset, v)\n        })\n    }\n\n    /// Provides a reference to the first element along with its enqueue index,\n    /// or returns `None` if the queue is empty.\n    pub fn front(&self) -> Option<(u64, &T)> {\n        self.deque.front().map(|v| (self.offset, v))\n    }\n\n    /// Provides a reference to the last element along with its enqueue index,\n    /// or returns `None` if the queue is empty.\n    pub fn back(&self) -> Option<(u64, &T)> {\n        self.deque.back().map(|v| (self.largest() - 1, v))\n    }\n\n    /// Returns a front-to-back iterator.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// let b: &[_] = &[&1, &2, &3];\n    /// let c: Vec<&u64> = deque.iter().collect();\n    /// assert_eq!(b, c.as_slice());\n    /// ```\n    pub fn iter(&self) -> impl DoubleEndedIterator<Item = &T> {\n        self.deque.iter()\n    }\n\n    /// 
Returns a front-to-back iterator that returns mutable references.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// for num in deque.iter_mut() {\n    ///    *num += 1;\n    /// }\n    /// let b: &[_] = &[&mut 2, &mut 3, &mut 4];\n    /// assert_eq!(deque.iter_mut().collect::<Vec<&mut u64>>().as_slice(), b);\n    /// ```\n    pub fn iter_mut(&mut self) -> impl DoubleEndedIterator<Item = &mut T> {\n        self.deque.iter_mut()\n    }\n\n    /// Returns a front-to-back iterator that returns the enqueue index along with the references.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// for (idx, num) in deque.enumerate() {\n    ///    assert_eq!(idx + 1, *num);\n    /// }\n    /// ```\n    pub fn enumerate(&self) -> impl DoubleEndedIterator<Item = (u64, &T)> {\n        self.deque\n            .iter()\n            .enumerate()\n            .map(|(idx, item)| (self.offset + idx as u64, item))\n    }\n\n    /// Returns a front-to-back iterator that returns\n    /// the enqueue index along with the mutable references.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// for (idx, num) in deque.enumerate_mut() {\n    ///     *num = *num + idx;\n    /// }\n    /// let b: &[_] = &[(0, &mut 1), (1, &mut 3), (2, &mut 5)];\n    /// 
assert_eq!(deque.enumerate_mut().collect::<Vec<(u64, &mut u64)>>().as_slice(), b);\n    /// ```\n    pub fn enumerate_mut(&mut self) -> impl DoubleEndedIterator<Item = (u64, &mut T)> {\n        self.deque\n            .iter_mut()\n            .enumerate()\n            .map(|(idx, item)| (self.offset + idx as u64, item))\n    }\n\n    /// Shortens the queue, dropping the first `n` elements.\n    ///\n    /// If `n` is greater than or equal to the queue's length, this method clears the queue.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// deque.advance(2);\n    /// assert_eq!(deque.len(), 1);\n    /// assert_eq!(deque.offset(), 2);\n    /// assert_eq!(deque[2], 3);\n    /// ```\n    pub fn advance(&mut self, n: usize) {\n        // Clamp so that advancing past the end clears the queue instead of panicking in `drain`.\n        let n = n.min(self.deque.len());\n        self.offset += n as u64;\n        let _ = self.deque.drain(..n);\n    }\n\n    /// Removes the elements whose enqueue index is less than `end` from the front of the queue.\n    /// Returns a front-to-back iterator over the removed elements.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.push_back(1).unwrap();\n    /// deque.push_back(2).unwrap();\n    /// deque.push_back(3).unwrap();\n    /// let b: &[_] = &[1, 2];\n    /// assert_eq!(deque.drain_to(2).collect::<Vec<u64>>().as_slice(), b);\n    /// assert_eq!(deque.offset(), 2);\n    /// ```\n    pub fn drain_to(&mut self, end: u64) -> impl DoubleEndedIterator<Item = T> + '_ {\n        #[cfg(not(test))]\n        debug_assert!(end >= self.offset && end <= self.offset + self.deque.len() as u64);\n        // avoid end < self.offset\n        let end = std::cmp::max(end, self.offset);\n        let offset = self.offset;\n        // avoid 
end > self.offset + self.deque.len()\n        self.offset = std::cmp::min(end, offset + self.deque.len() as u64);\n        let end = (self.offset - offset) as usize;\n        self.deque.drain(..end)\n    }\n\n    /// Forcibly resets the first enqueue index of the queue to `new_offset`,\n    /// which shifts the enqueue sequence numbers of all subsequent elements.\n    ///\n    /// Be careful with this method; you must know what you are doing.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::IndexDeque;\n    ///\n    /// let mut deque: IndexDeque<u64, 19> = IndexDeque::default();\n    /// deque.reset_offset(5);\n    /// assert_eq!(deque.largest(), 5);\n    /// deque.push_back(1).unwrap();\n    /// assert_eq!(deque[5], 1);\n    /// ```\n    pub fn reset_offset(&mut self, new_offset: u64) {\n        // assert!(self.is_empty() && new_offset >= self.offset);\n        self.offset = new_offset;\n    }\n}\n\nimpl<T, const LIMIT: u64> Extend<T> for IndexDeque<T, LIMIT> {\n    fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {\n        self.deque.extend(iter)\n    }\n}\n\nimpl<T: Default + Clone, const LIMIT: u64> IndexDeque<T, LIMIT> {\n    /// Inserts an element at the specified enqueue index `idx`,\n    /// returning the original element at that index if it exists.\n    ///\n    /// If `idx` is greater than the current largest index,\n    /// the gap between the current largest index and `idx`\n    /// is filled with default values.\n    ///\n    /// Returns [`IndexError`] if the enqueue index is less than the offset or exceeds the maximum limit.\n    ///\n    /// # Examples\n    ///\n    /// ```\n    /// use qbase::util::{IndexDeque, IndexError};\n    ///\n    /// let mut deque: IndexDeque<u64, 3> = IndexDeque::default();\n    /// let old_value = deque.insert(1, 2).unwrap();\n    /// assert_eq!(old_value, None);\n    /// assert_eq!(deque[0], u64::default());\n    /// assert_eq!(deque[1], 2);\n    ///\n    /// let 
result = deque.insert(4, 5);\n    /// assert_eq!(result, Err(IndexError::ExceedLimit(4, 3)));\n    /// ```\n    pub fn insert(&mut self, idx: u64, value: T) -> Result<Option<T>, IndexError> {\n        if idx > LIMIT {\n            Err(IndexError::ExceedLimit(idx, LIMIT))\n        } else if idx < self.offset {\n            Err(IndexError::TooSmall(idx, self.offset))\n        } else {\n            let pos = (idx - self.offset) as usize;\n            if pos < self.deque.len() {\n                return Ok(Some(std::mem::replace(&mut self.deque[pos], value)));\n            }\n\n            if pos > self.deque.len() {\n                self.deque.resize(pos, T::default());\n            }\n            self.deque.push_back(value);\n            Ok(None)\n        }\n    }\n\n    /// Modifies the deque in-place so that `largest()` is equal to `new_end`, either by\n    /// removing excess elements from the back or by appending clones of `value` to the back.\n    pub fn resize(&mut self, new_end: u64, value: T) -> Result<(), IndexError> {\n        if new_end < self.offset {\n            Err(IndexError::TooSmall(new_end, self.offset))\n        } else if new_end > LIMIT {\n            Err(IndexError::ExceedLimit(new_end, LIMIT))\n        } else {\n            let len = new_end.saturating_sub(self.offset);\n            self.deque.resize(len as usize, value.clone());\n            Ok(())\n        }\n    }\n}\n\nimpl<T, const LIMIT: u64> Index<u64> for IndexDeque<T, LIMIT> {\n    type Output = T;\n\n    fn index(&self, index: u64) -> &Self::Output {\n        &self.deque[(index - self.offset) as usize]\n    }\n}\n\nimpl<T, const LIMIT: u64> IndexMut<u64> for IndexDeque<T, LIMIT> {\n    fn index_mut(&mut self, index: u64) -> &mut Self::Output {\n        &mut self.deque[(index - self.offset) as usize]\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_index_queue() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        for i in 0..10 {\n      
      assert_eq!(deque.push_back(i + 1), Ok(i));\n        }\n        assert_eq!(deque.offset, 0);\n\n        for i in 0..10 {\n            assert_eq!(deque.pop_front(), Some((i, i + 1)));\n            assert_eq!(deque.offset, i + 1);\n        }\n        assert_eq!(deque.pop_front(), None);\n        assert_eq!(deque.offset, 10);\n\n        for i in 10..20 {\n            assert_eq!(deque.push_back(i + 1), Ok(i));\n        }\n        assert_eq!(deque.push_back(21), Err(IndexError::ExceedLimit(20, 19)));\n        assert_eq!(deque.offset, 10);\n\n        assert!(!deque.contain(0));\n        assert!(!deque.contain(9));\n        assert!(deque.contain(10));\n        assert!(deque.contain(19));\n        assert!(!deque.contain(21));\n\n        assert_eq!(deque[10], 11);\n        assert_eq!(deque[19], 20);\n\n        assert_eq!(deque.drain_to(10).count(), 0);\n        let mut i = 10;\n        for item in deque.drain_to(15) {\n            i += 1;\n            assert_eq!(item, i);\n        }\n        assert_eq!(i, 15);\n        assert!(deque.contain(15));\n        assert_eq!(deque.offset, 15);\n\n        assert_eq!(deque.drain_to(30).count(), 5);\n        assert_eq!(deque.offset, 20);\n        assert!(deque.is_empty());\n    }\n\n    #[test]\n    fn test_insert() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        deque.insert(10, 11).unwrap();\n        assert_eq!(deque.offset, 0);\n        assert_eq!(deque.len(), 11);\n\n        for i in 0..10 {\n            assert_eq!(deque[i], u64::default());\n        }\n        assert_eq!(deque[10], 11);\n    }\n\n    #[test]\n    fn test_skip() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        for i in 0..10 {\n            assert_eq!(deque.push_back(i), Ok(i));\n        }\n        assert_eq!(deque.offset, 0);\n\n        deque.advance(5);\n        assert_eq!(deque.offset, 5);\n\n        deque.enumerate().for_each(|(idx, item)| {\n            assert_eq!(idx, *item);\n        });\n    }\n\n    #[test]\n  
  fn test_reset_offset() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        deque.reset_offset(5);\n        assert_eq!(deque.offset, 5);\n        for i in 0..10 {\n            assert_eq!(deque.push_back(i), Ok(i + 5));\n        }\n        for i in 0..10 {\n            assert_eq!(deque.pop_front(), Some((i + 5, i)));\n        }\n    }\n\n    #[test]\n    fn test_reset_offset_with_content() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        for i in 0..5 {\n            assert_eq!(deque.push_back(i), Ok(i));\n        }\n        deque.reset_offset(10);\n        deque.enumerate().for_each(|(idx, item)| {\n            assert_eq!(idx, *item + 10);\n        });\n    }\n\n    #[test]\n    fn test_reset_offset_backward() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        for i in 0..10 {\n            assert_eq!(deque.push_back(i), Ok(i));\n        }\n        for i in 0..5 {\n            assert_eq!(deque.pop_front(), Some((i, i)));\n        }\n        assert_eq!(deque.offset, 5);\n        deque.reset_offset(3);\n        deque.enumerate().for_each(|(idx, item)| {\n            assert_eq!(idx + 2, *item);\n        });\n    }\n\n    #[test]\n    fn test_resize() {\n        let mut deque = IndexDeque::<u64, 19>::default();\n        for i in 0..10 {\n            assert_eq!(deque.push_back(i), Ok(i));\n        }\n        assert_eq!(deque.offset, 0);\n\n        deque.resize(15, 10).unwrap();\n        assert_eq!(deque.offset, 0);\n        assert_eq!(deque.len(), 15);\n        for i in 10..15 {\n            assert_eq!(deque[i], 10);\n        }\n\n        deque.resize(5, 10).unwrap();\n        assert_eq!(deque.offset, 0);\n        assert_eq!(deque.len(), 5);\n        for i in 0..5 {\n            assert_eq!(deque[i], i);\n        }\n\n        assert_eq!(deque.resize(20, 10), Err(IndexError::ExceedLimit(20, 19)));\n\n        for i in 0..5 {\n            assert_eq!(deque.pop_front(), Some((i, i)));\n        }\n        
assert_eq!(deque.resize(0, 10), Err(IndexError::TooSmall(0, 5)));\n    }\n}\n"
  },
  {
    "path": "qbase/src/util/unique_id.rs",
    "content": "use std::{\n    hash::Hash,\n    sync::atomic::{AtomicUsize, Ordering},\n};\n\nuse derive_more::Into;\n\n/// Opque, hashable, unique ID type.\n#[derive(Debug, Clone, Copy, Into, PartialEq, Eq, Hash)]\npub struct UniqueId(usize);\n\n/// Thread safe, lock free unique ID generator.\n#[derive(Debug)]\npub struct UniqueIdGenerator(AtomicUsize);\n\nimpl Default for UniqueIdGenerator {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl UniqueIdGenerator {\n    /// Create a new `UniqueIdGenerator`\n    ///\n    /// # Example\n    ///\n    /// ```\n    /// use qbase::util::UniqueIdGenerator;\n    ///\n    /// let generator = UniqueIdGenerator::new();\n    /// let id1 = generator.generate();\n    /// let id2 = generator.generate();\n    /// assert_ne!(id1, id2);\n    /// ```\n    pub const fn new() -> Self {\n        UniqueIdGenerator(AtomicUsize::new(1))\n    }\n\n    /// Generated a new `UniqueId` starting from a specific value\n    ///\n    /// # Example\n    ///\n    /// ```\n    /// use qbase::util::UniqueIdGenerator;\n    ///\n    /// let generator = UniqueIdGenerator::new();\n    /// let id1 = generator.generate();\n    /// let id2 = generator.generate();\n    /// assert_ne!(id1, id2);\n    /// ```\n    pub fn generate(&self) -> UniqueId {\n        let id = self.0.fetch_add(1, Ordering::Relaxed);\n        assert_ne!(id, 0, \"UniqueId overflow\");\n        UniqueId(id)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::{collections::HashSet, sync::Arc, thread};\n\n    use super::*;\n\n    #[test]\n    fn test_unique_id_basic() {\n        let generator = UniqueIdGenerator::new();\n        let id1 = generator.generate();\n        let id2 = generator.generate();\n\n        assert_ne!(id1, id2);\n        assert_eq!(id1.0, 1);\n        assert_eq!(id2.0, 2);\n    }\n\n    #[test]\n    fn test_unique_id_hash() {\n        let generator = UniqueIdGenerator::new();\n        let id1 = generator.generate();\n        let id2 = 
generator.generate();\n\n        let mut set = HashSet::new();\n        set.insert(id1);\n        set.insert(id2);\n\n        assert_eq!(set.len(), 2);\n    }\n\n    #[test]\n    fn test_unique_id_clone_copy() {\n        let generator = UniqueIdGenerator::new();\n        let id1 = generator.generate();\n        let id2 = id1; // Copy\n\n        assert_eq!(id1, id2);\n    }\n\n    #[test]\n    fn test_thread_safety() {\n        let generator = Arc::new(UniqueIdGenerator::new());\n        let mut handles = vec![];\n\n        // Spawn multiple threads that generate IDs concurrently\n        for _ in 0..10 {\n            let generator = Arc::clone(&generator);\n            let handle = thread::spawn(move || {\n                let mut ids = Vec::new();\n                for _ in 0..100 {\n                    ids.push(generator.generate());\n                }\n                ids\n            });\n            handles.push(handle);\n        }\n\n        // Collect all generated IDs\n        let mut all_ids = HashSet::new();\n        for handle in handles {\n            let ids = handle.join().unwrap();\n            for id in ids {\n                assert!(all_ids.insert(id), \"Duplicate ID found: {id:?}\");\n            }\n        }\n\n        // There should be 1000 unique IDs\n        assert_eq!(all_ids.len(), 1000);\n    }\n\n    #[test]\n    fn test_default_generator() {\n        let gen1 = UniqueIdGenerator::new();\n        let gen2 = UniqueIdGenerator::default();\n\n        assert_eq!(gen1.generate(), gen2.generate())\n    }\n}\n"
  },
  {
    "path": "qbase/src/util/wakers.rs",
    "content": "use std::{\n    mem,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Wake, Waker},\n    usize,\n};\n\nuse smallvec::SmallVec;\n\n#[derive(Debug, Clone)]\npub struct WakerVec<const N: usize = 4> {\n    wakers: SmallVec<[Waker; N]>,\n}\n\nimpl<const N: usize> Default for WakerVec<N> {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl<const N: usize> WakerVec<N> {\n    pub const fn new() -> Self {\n        Self {\n            wakers: SmallVec::new_const(),\n        }\n    }\n\n    pub fn register(&mut self, waker: &Waker) {\n        if !self.wakers.iter().any(|w| w.will_wake(waker)) {\n            self.wakers.push(waker.clone());\n        }\n    }\n\n    pub fn wake_all(&mut self) {\n        for waker in self.wakers.drain(..) {\n            waker.wake();\n        }\n    }\n}\n\nimpl<const N: usize> Drop for WakerVec<N> {\n    fn drop(&mut self) {\n        self.wake_all();\n    }\n}\n\n#[derive(Debug)]\npub struct Wakers<const N: usize = 4> {\n    wakers: Mutex<WakerVec<N>>,\n}\n\nimpl<const N: usize> Wake for Wakers<N> {\n    fn wake(self: Arc<Self>) {\n        self.wake_all();\n    }\n\n    fn wake_by_ref(self: &Arc<Self>) {\n        self.wake_all();\n    }\n}\n\nimpl<const N: usize> Default for Wakers<N> {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl<const N: usize> Wakers<N> {\n    pub const fn new() -> Self {\n        Self {\n            wakers: Mutex::new(WakerVec::new()),\n        }\n    }\n\n    fn lock(&self) -> MutexGuard<'_, WakerVec<N>> {\n        self.wakers.lock().expect(\"Wakers mutex poisoned\")\n    }\n\n    pub fn register(&self, waker: &Waker) {\n        self.lock().register(waker)\n    }\n\n    pub fn wake_all(&self) {\n        { mem::replace(&mut *self.lock(), WakerVec::new()) }.wake_all()\n    }\n\n    pub fn to_waker(self: &Arc<Self>) -> Waker {\n        Waker::from(self.clone())\n    }\n\n    pub fn combine_with<T>(\n        self: &Arc<Self>,\n        cx: &mut Context<'_>,\n  
      poll: impl FnOnce(&mut Context<'_>) -> Poll<T>,\n    ) -> Poll<T> {\n        self.register(cx.waker());\n        poll(&mut Context::from_waker(&self.to_waker()))\n    }\n}\n"
  },
  {
    "path": "qbase/src/util.rs",
    "content": "mod async_deque;\npub use async_deque::ArcAsyncDeque;\n\nmod bound_queue;\npub use bound_queue::BoundQueue;\n\nmod data;\npub use data::{ContinuousData, DataPair, NonData, WriteData};\n\nmod index_deque;\npub use index_deque::{IndexDeque, IndexError};\n\nmod unique_id;\npub use unique_id::{UniqueId, UniqueIdGenerator};\n\nmod wakers;\npub use wakers::{WakerVec, Wakers};\n"
  },
  {
    "path": "qbase/src/varint.rs",
    "content": "use std::{cmp::Ordering, convert::TryFrom, fmt};\n\n/// An integer less than 2^62\n///\n/// Values of this type are suitable for encoding as QUIC variable-length integer.\n/// It would be neat if we could express to Rust that the top two bits are available for use as enum\n/// discriminants\n///\n/// See [variable-length integers](https://www.rfc-editor.org/rfc/rfc9000.html#name-variable-length-integer-enc)\n/// of [QUIC](https://www.rfc-editor.org/rfc/rfc9000.html) for more details.\n#[derive(Default, Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]\npub struct VarInt(u64);\n\n/// The maximum value that can be represented by a QUIC variable-length integer.\npub const VARINT_MAX: u64 = 0x3fff_ffff_ffff_ffff;\n\n/// The number of bytes that a QUIC variable-length integer can be encoded in.\n///\n/// [`VarInt`] doesn't need to be encoded on the minimum number of bytes necessary,\n/// with the sole exception of the Frame Type field.\npub enum EncodeBytes {\n    One = 1,\n    Two = 2,\n    Four = 4,\n    Eight = 8,\n}\n\nimpl VarInt {\n    /// The largest representable value\n    pub const MAX: Self = Self(VARINT_MAX);\n    /// The largest encoded value length\n    pub const MAX_SIZE: usize = 8;\n\n    /// Construct a `VarInt` from a [`u32`].\n    pub const fn from_u32(x: u32) -> Self {\n        Self(x as u64)\n    }\n\n    /// Construct a `VarInt` from a [`u64`].\n    /// Succeeds if `x` < 2^62.\n    pub const fn from_u64(x: u64) -> Result<Self, err::Overflow> {\n        if x < (1 << 62) {\n            Ok(Self(x))\n        } else {\n            Err(err::Overflow(x as _))\n        }\n    }\n\n    /// Create a VarInt from a [`u64`] without ensuring it's in range\n    ///\n    /// # Safety\n    ///\n    /// `x` must be less than 2^62.\n    pub unsafe fn from_u64_unchecked(x: u64) -> Self {\n        Self(x)\n    }\n\n    /// Construct a `VarInt` from a [`u128`].\n    /// Succeeds if `x` < 2^62.\n    pub fn from_u128(x: u128) -> Result<Self, 
err::Overflow> {\n        if x < (1 << 62) {\n            Ok(Self(x as _))\n        } else {\n            Err(err::Overflow(x))\n        }\n    }\n\n    /// Extract the integer value\n    pub fn into_u64(self) -> u64 {\n        self.0\n    }\n\n    /// Compute the number of bytes needed to encode this value\n    pub fn encoding_size(self) -> usize {\n        let x = self.0;\n        if x < (1 << 6) {\n            1\n        } else if x < (1 << 14) {\n            2\n        } else if x < (1 << 30) {\n            4\n        } else if x < (1 << 62) {\n            8\n        } else {\n            unreachable!(\"malformed VarInt\");\n        }\n    }\n}\n\nimpl From<VarInt> for u64 {\n    fn from(x: VarInt) -> Self {\n        x.0\n    }\n}\n\nimpl From<u8> for VarInt {\n    fn from(x: u8) -> Self {\n        Self(x.into())\n    }\n}\n\nimpl From<u16> for VarInt {\n    fn from(x: u16) -> Self {\n        Self(x.into())\n    }\n}\n\nimpl From<u32> for VarInt {\n    fn from(x: u32) -> Self {\n        Self(x.into())\n    }\n}\n\nimpl TryFrom<u128> for VarInt {\n    type Error = err::Overflow;\n\n    fn try_from(x: u128) -> Result<Self, Self::Error> {\n        Self::from_u128(x)\n    }\n}\n\nimpl TryFrom<u64> for VarInt {\n    type Error = err::Overflow;\n\n    /// Succeeds if `x` < 2^62\n    fn try_from(x: u64) -> Result<Self, Self::Error> {\n        Self::from_u64(x)\n    }\n}\n\nimpl TryFrom<usize> for VarInt {\n    type Error = err::Overflow;\n\n    /// Succeeds if `x` < 2^62\n    fn try_from(x: usize) -> Result<Self, Self::Error> {\n        Self::try_from(x as u64)\n    }\n}\n\nimpl nom::ToUsize for VarInt {\n    fn to_usize(&self) -> usize {\n        self.0 as usize\n    }\n}\n\nimpl PartialEq<u64> for VarInt {\n    fn eq(&self, other: &u64) -> bool {\n        self.0.eq(other)\n    }\n}\n\nimpl PartialOrd<u64> for VarInt {\n    fn partial_cmp(&self, other: &u64) -> Option<Ordering> {\n        self.0.partial_cmp(other)\n    }\n}\n\nimpl fmt::Display for VarInt {\n    fn 
fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        self.0.fmt(f)\n    }\n}\n\n/// Error module for VarInt\npub mod err {\n    use std::fmt::Debug;\n\n    use thiserror::Error;\n\n    /// Overflow error indicating that a value exceeds 2^62\n    #[derive(Debug, Copy, Clone, Eq, PartialEq, Error)]\n    #[error(\"Value({0}) too large for varint encoding\")]\n    pub struct Overflow(pub(super) u128);\n}\n\nuse bytes::BufMut;\nuse nom::{IResult, Parser, bits::streaming::take, combinator::flat_map, error::Error};\n\n/// Parse a variable-length integer from the input buffer,\n/// [nom](https://docs.rs/nom/latest/nom/) parser style.\n///\n/// ## Example\n/// ```\n/// use qbase::varint::be_varint;\n///\n/// let input = &[0b01000000, 0x01][..];\n/// let result = be_varint(input);\n/// assert_eq!(result, Ok((&[][..], 1u32.into())));\n/// ```\npub fn be_varint(input: &[u8]) -> IResult<&[u8], VarInt> {\n    flat_map(take(2usize), |prefix: u8| {\n        take::<&[u8], u64, usize, Error<(&[u8], usize)>>((8 << prefix) - 2)\n    })\n    .parse((input, 0))\n    .map_err(|err| match err {\n        nom::Err::Incomplete(needed) => {\n            nom::Err::Incomplete(needed.map(|n| n.get().div_ceil(8) - input.len()))\n        }\n        _ => unreachable!(),\n    })\n    .map(|((buf, _), value)| (buf, VarInt(value)))\n}\n\n/// A [`bytes::BufMut`] extension trait that makes it easier to write a `VarInt` into a buffer.\npub trait WriteVarInt: BufMut {\n    /// Write a variable-length integer.\n    ///\n    /// `put_varint` will write the smallest number of bytes needed to represent the value.\n    /// `encode_varint` will write the specified number of bytes, and panic if the specified number of bytes\n    /// is less than the smallest number of bytes needed to represent the value.\n    ///\n    /// # Example\n    /// ```rust\n    /// use bytes::BufMut;\n    /// use qbase::varint::{EncodeBytes, VarInt, WriteVarInt};\n    ///\n    /// let val = VarInt::from_u32(1);\n    /// let mut encode_buf = [0u8; 8];\n    ///\n    /// let mut buf = &mut encode_buf[..];\n    /// buf.put_varint(&val);\n    /// assert_eq!(buf.len(), 7);\n    /// assert_eq!(encode_buf[0..1], [0x01]);\n    ///\n    /// let mut buf = &mut encode_buf[..];\n    /// buf.encode_varint(&val, EncodeBytes::Two);\n    /// assert_eq!(buf.len(), 6);\n    /// assert_eq!(encode_buf[0..2], [0x40, 0x01]);\n    /// ```\n    fn put_varint(&mut self, value: &VarInt);\n\n    /// Write a variable-length integer with a specified number of bytes.\n    fn encode_varint(&mut self, value: &VarInt, nbytes: EncodeBytes);\n}\n\n// Every BufMut can now call put_varint to write a VarInt.\nimpl<T: BufMut> WriteVarInt for T {\n    fn put_varint(&mut self, value: &VarInt) {\n        let x = value.0;\n        if x < 1u64 << 6 {\n            self.put_u8(x as u8);\n        } else if x < 1u64 << 14 {\n            self.put_u16((0b01 << 14) | x as u16);\n        } else if x < 1u64 << 30 {\n            self.put_u32((0b10 << 30) | x as u32);\n        } else if x < 1u64 << 62 {\n            self.put_u64((0b11 << 62) | x);\n        } else {\n            unreachable!(\"malformed VarInt\")\n        }\n    }\n\n    fn encode_varint(&mut self, value: &VarInt, nbytes: EncodeBytes) {\n        match nbytes {\n            EncodeBytes::One => {\n                assert!(value.0 < 1u64 << 6);\n                self.put_u8(value.0 as u8);\n            }\n            EncodeBytes::Two => {\n                assert!(value.0 < 1u64 << 14);\n                self.put_u16((0b01 << 14) | value.0 as u16);\n            }\n            EncodeBytes::Four => {\n                assert!(value.0 < 1u64 << 30);\n                self.put_u32((0b10 << 30) | value.0 as u32);\n            }\n            EncodeBytes::Eight => {\n                assert!(value.0 < 1u64 << 62);\n                self.put_u64((0b11 << 62) | value.0);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{EncodeBytes, VarInt, WriteVarInt};\n\n    #[test]\n    fn test_equal() {\n        let val = VarInt(0);\n        assert_eq!(val, 0);\n        assert!(val == 0);\n        assert!(val != 1)\n    }\n\n    #[test]\n    fn test_be_varint() {\n        {\n            let buf = &[0b00000001u8, 0x01][..];\n            let r = super::be_varint(buf);\n            assert_eq!(r, Ok((&[0x01][..], VarInt(1))));\n        }\n        {\n            let buf = &[0b01000000u8, 0x06u8][..];\n            let r = super::be_varint(buf);\n            assert_eq!(r, Ok((&[][..], VarInt(6))));\n        }\n        {\n            let buf = &[0b10000000u8, 1, 1, 1][..];\n            let r = super::be_varint(buf);\n            assert_eq!(r, Ok((&[][..], VarInt(0x010101))));\n        }\n        {\n            let buf = &[0b11000000u8, 1, 1, 1, 1, 1, 1, 1][..];\n            let r = super::be_varint(buf);\n            assert_eq!(r, Ok((&[][..], VarInt(0x01010101010101))));\n        }\n        {\n            let buf = &[0b11000000u8, 0x06u8][..];\n            let r = super::be_varint(buf);\n            assert_eq!(r, Err(nom::Err::Incomplete(nom::Needed::new(6))));\n        }\n    }\n\n    fn assert_put_varint_eq(val: u64, expected: &[u8]) {\n        let val = VarInt::from_u64(val).unwrap();\n        let mut buf = vec![];\n        buf.put_varint(&val);\n        assert_eq!(buf, expected);\n    }\n\n    #[test]\n    fn test_put_varint() {\n        assert_put_varint_eq(0x0000_0000_0000_0000, &[0]);\n        assert_put_varint_eq(0x0000_0000_0000_003F, &[0x3F]);\n        assert_put_varint_eq(0x0000_0000_0000_0040, &[0x40, 0x40]);\n        assert_put_varint_eq(0x0000_0000_0000_3FFF, &[0x7F, 0xFF]);\n        assert_put_varint_eq(0x0000_0000_0000_4000, &[0x80, 0x00, 0x40, 0x00]);\n        assert_put_varint_eq(0x0000_0000_3FFF_FFFF, &[0xBF, 0xFF, 0xFF, 0xFF]);\n        assert_put_varint_eq(\n            0x0000_0000_4000_0000,\n            &[0xC0, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00],\n        );\n        assert_put_varint_eq(\n            0x3FFF_FFFF_FFFF_FFFF,\n            &[0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF],\n        );\n    }\n\n    #[test]\n    fn test_encode_varint() {\n        let val = VarInt::from_u32(1);\n        let mut encode_buf = [0u8; 8];\n\n        let mut buf = &mut encode_buf[..];\n        buf.put_varint(&val);\n        assert_eq!(buf.len(), 7);\n        assert_eq!(encode_buf[0..1], [0x01]);\n\n        let mut buf = &mut encode_buf[..];\n        buf.encode_varint(&val, EncodeBytes::Two);\n        assert_eq!(buf.len(), 6);\n        assert_eq!(encode_buf[0..2], [0x40, 0x01]);\n    }\n}\n"
  },
  {
    "path": "qcongestion/Cargo.toml",
    "content": "[package]\nname = \"qcongestion\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"Congestion control in QUIC, a part of dquic\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nthiserror = { workspace = true }\ntracing = { workspace = true }\nqbase = { workspace = true }\nqevent = { workspace = true }\nrand = { workspace = true }\ntokio = { workspace = true, features = [\"rt\", \"sync\", \"time\", \"macros\"] }\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\"test-util\"] }\n"
  },
  {
    "path": "qcongestion/src/algorithm/bbr/delivery_rate.rs",
    "content": "// https://tools.ietf.org/html/draft-cheng-iccrg-delivery-rate-estimation-01\n\nuse std::time::{Duration, Instant};\n\nuse crate::packets::{AckedPackets, SentPacket};\n\n#[derive(Debug)]\npub struct Rate {\n    delivered: usize,\n    delivered_time: Instant,\n    first_sent_time: Instant,\n    // Packet number of the last sent packet with app limited.\n    end_of_app_limited: u64,\n    // Packet number of the last sent packet.\n    last_sent_packet: u64,\n    // Packet number of the largest acked packet.\n    largest_acked: u64,\n    // Sample of rate estimation.\n    rate_sample: RateSample,\n}\n\nimpl Default for Rate {\n    fn default() -> Self {\n        let now = tokio::time::Instant::now();\n\n        Rate {\n            delivered: 0,\n            delivered_time: now,\n            first_sent_time: now,\n            end_of_app_limited: 0,\n            last_sent_packet: 0,\n            largest_acked: 0,\n            rate_sample: RateSample::default(),\n        }\n    }\n}\n\nimpl Rate {\n    // 3.2. 
Transmitting or retransmitting a data packet\n    pub fn on_packet_sent(\n        &mut self,\n        pkt: &mut SentPacket,\n        bytes_in_flight: usize,\n        bytes_lost: u64,\n    ) {\n        // No packets in flight.\n        if bytes_in_flight == 0 {\n            self.first_sent_time = pkt.time_sent;\n            self.delivered_time = pkt.time_sent;\n        }\n\n        pkt.first_sent_time = self.first_sent_time;\n        pkt.delivered_time = self.delivered_time;\n        pkt.delivered = self.delivered;\n        pkt.is_app_limited = self.app_limited();\n        pkt.tx_in_flight = bytes_in_flight;\n        pkt.lost = bytes_lost;\n\n        self.last_sent_packet = pkt.packet_number;\n    }\n\n    // Update the delivery rate sample when a packet is acked.\n    pub fn update_rate_sample(&mut self, pkt: &AckedPackets, now: Instant) {\n        self.delivered += pkt.size;\n        self.delivered_time = now;\n\n        if self.rate_sample.prior_time.is_none() || pkt.delivered > self.rate_sample.prior_delivered\n        {\n            self.rate_sample.prior_delivered = pkt.delivered;\n            self.rate_sample.prior_time = Some(pkt.delivered_time);\n            self.rate_sample.is_app_limited = pkt.is_app_limited;\n            self.rate_sample.send_elapsed =\n                pkt.time_sent.saturating_duration_since(pkt.first_sent_time);\n            self.rate_sample.rtt = pkt.rtt;\n            self.rate_sample.ack_elapsed = self\n                .delivered_time\n                .saturating_duration_since(pkt.delivered_time);\n\n            self.first_sent_time = pkt.time_sent;\n        }\n\n        self.largest_acked = self.largest_acked.max(pkt.pn);\n    }\n\n    pub fn generate_rate_sample(&mut self) {\n        // End app-limited phase if bubble is ACKed and gone.\n        if self.app_limited() && self.largest_acked > self.end_of_app_limited {\n            self.update_app_limited(false);\n        }\n\n        if self.rate_sample.prior_time.is_some() {\n       
     let interval = self\n                .rate_sample\n                .send_elapsed\n                .max(self.rate_sample.ack_elapsed);\n\n            self.rate_sample.delivered = self\n                .delivered\n                .saturating_sub(self.rate_sample.prior_delivered);\n            self.rate_sample.interval = interval;\n\n            if !interval.is_zero() {\n                // Fill in rate_sample with a rate sample.\n                self.rate_sample.delivery_rate =\n                    (self.rate_sample.delivered as f64 / interval.as_secs_f64()) as u64;\n            }\n        }\n    }\n\n    pub fn update_app_limited(&mut self, v: bool) {\n        self.end_of_app_limited = if v { self.last_sent_packet.max(1) } else { 0 }\n    }\n\n    pub fn app_limited(&mut self) -> bool {\n        self.end_of_app_limited != 0\n    }\n\n    pub fn delivered(&self) -> usize {\n        self.delivered\n    }\n\n    pub fn sample_delivery_rate(&self) -> u64 {\n        self.rate_sample.delivery_rate\n    }\n\n    pub fn sample_rtt(&self) -> Duration {\n        self.rate_sample.rtt\n    }\n\n    pub fn sample_is_app_limited(&self) -> bool {\n        self.rate_sample.is_app_limited\n    }\n}\n\n#[derive(Default, Debug)]\nstruct RateSample {\n    delivery_rate: u64,\n    is_app_limited: bool,\n    interval: Duration,\n    delivered: usize,\n    prior_delivered: usize,\n    prior_time: Option<Instant>,\n    send_elapsed: Duration,\n    ack_elapsed: Duration,\n    rtt: Duration,\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_rate() {\n        let mut rate = Rate::default();\n\n        let now = Instant::now();\n\n        let mut sents: Vec<SentPacket> = (0..5)\n            .map(|i| SentPacket {\n                packet_number: i,\n                sent_bytes: 100,\n                time_sent: now,\n                ..Default::default()\n            })\n            .collect();\n\n        for sent in &mut sents {\n            let pkt_num = 
sent.packet_number;\n            rate.on_packet_sent(sent, (pkt_num * 100) as usize, 0);\n        }\n\n        let delay = Duration::from_millis(100);\n        let recv_ack_time = now + delay;\n\n        for _ in 0..3 {\n            let sent = sents.pop().unwrap();\n            let mut acked: AckedPackets = sent.into();\n            acked.rtt = delay;\n            rate.update_rate_sample(&acked, recv_ack_time);\n            rate.generate_rate_sample();\n        }\n        // 300 / 0.1\n        assert_eq!(rate.sample_delivery_rate(), 3000);\n        assert_eq!(rate.sample_rtt(), delay);\n        assert!(!rate.sample_is_app_limited());\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/algorithm/bbr/min_max.rs",
    "content": "use std::fmt::Debug;\n\n#[derive(Copy, Clone, Debug)]\npub(super) struct MinMax {\n    /// round count, not a timestamp\n    window: u64,\n    samples: [MinMaxSample; 3],\n}\n\nimpl MinMax {\n    fn fill(&mut self, sample: MinMaxSample) {\n        self.samples.fill(sample);\n    }\n\n    pub(super) fn update_max(&mut self, current_round: u64, measurement: u64) -> u64 {\n        let sample = MinMaxSample {\n            time: current_round,\n            value: measurement,\n        };\n\n        if self.samples[0].value == 0  /* uninitialised */\n            || /* found new max? */ sample.value >= self.samples[0].value\n            || /* nothing left in window? */ sample.time - self.samples[2].time > self.window\n        {\n            self.fill(sample); /* forget earlier samples */\n            return self.samples[0].value;\n        }\n\n        if sample.value >= self.samples[1].value {\n            self.samples[2] = sample;\n            self.samples[1] = sample;\n        } else if sample.value >= self.samples[2].value {\n            self.samples[2] = sample;\n        }\n\n        self.subwin_update(sample);\n        self.samples[0].value\n    }\n\n    /* As time advances, update the 1st, 2nd, and 3rd choices. 
*/\n    fn subwin_update(&mut self, sample: MinMaxSample) {\n        let dt = sample.time - self.samples[0].time;\n        if dt > self.window {\n            /*\n             * Passed entire window without a new sample so make 2nd\n             * choice the new sample & 3rd choice the new 2nd choice.\n             * we may have to iterate this since our 2nd choice\n             * may also be outside the window (we checked on entry\n             * that the third choice was in the window).\n             */\n            self.samples[0] = self.samples[1];\n            self.samples[1] = self.samples[2];\n            self.samples[2] = sample;\n            if sample.time - self.samples[0].time > self.window {\n                self.samples[0] = self.samples[1];\n                self.samples[1] = self.samples[2];\n                self.samples[2] = sample;\n            }\n        } else if self.samples[1].time == self.samples[0].time && dt > self.window / 4 {\n            /*\n             * We've passed a quarter of the window without a new sample\n             * so take a 2nd choice from the 2nd quarter of the window.\n             */\n            self.samples[2] = sample;\n            self.samples[1] = sample;\n        } else if self.samples[2].time == self.samples[1].time && dt > self.window / 2 {\n            /*\n             * We've passed half the window without finding a new sample\n             * so take a 3rd choice from the last half of the window\n             */\n            self.samples[2] = sample;\n        }\n    }\n}\n\nimpl Default for MinMax {\n    fn default() -> Self {\n        Self {\n            window: 10,\n            samples: [Default::default(); 3],\n        }\n    }\n}\n\n#[derive(Debug, Copy, Clone, Default)]\nstruct MinMaxSample {\n    /// round number, not a timestamp\n    time: u64,\n    value: u64,\n}\n"
  },
  {
    "path": "qcongestion/src/algorithm/bbr/model.rs",
    "content": "use std::time::Instant;\n\n// 4.1.  Maintaining the Network Path Model\n// This model includes two estimated parameters: self.BtlBw, and self.RTprop.\nuse super::{Bbr, RTPROP_FILTER_LEN};\nuse crate::packets::AckedPackets;\n\nimpl Bbr {\n    // 4.1.1.3.  Tracking Time for the self.BtlBw Max Filter\n    // Upon connection initialization:\n    pub(super) fn init_round_counting(&mut self) {\n        self.next_round_delivered = 0;\n        self.round_count = 0;\n        self.is_round_start = false;\n    }\n\n    // Upon receiving an ACK for a given data packet:\n    fn update_round(&mut self, packet: &AckedPackets) {\n        if packet.delivered >= self.next_round_delivered {\n            self.next_round_delivered = self.delivery_rate.delivered();\n            self.round_count += 1;\n            self.is_round_start = true;\n            self.packet_conservation = false;\n        } else {\n            self.is_round_start = false;\n        }\n    }\n\n    // 4.1.1.5.  Updating the BBR.BtlBw Max Filter\n    pub(super) fn update_btlbw(&mut self, packet: &AckedPackets) {\n        self.update_round(packet);\n\n        if self.delivery_rate.sample_delivery_rate() >= self.btlbw\n            || !self.delivery_rate.sample_is_app_limited()\n        {\n            self.btlbw = self\n                .btlbwfilter\n                .update_max(self.round_count, self.delivery_rate.sample_delivery_rate());\n        }\n    }\n\n    // 4.1.2.2.  BBR.RTprop Min Filter\n    pub(super) fn update_rtprop(&mut self) {\n        let sample_rtt = self.delivery_rate.sample_rtt();\n        let now = tokio::time::Instant::now();\n        self.is_rtprop_expired =\n            now.saturating_duration_since(self.rtprop_stamp) > RTPROP_FILTER_LEN;\n\n        if !sample_rtt.is_zero() && (sample_rtt <= self.rtprop || self.is_rtprop_expired) {\n            self.rtprop = sample_rtt;\n            self.rtprop_stamp = now;\n        }\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/algorithm/bbr/parameters.rs",
    "content": "// 4.2.  BBR Control Parameters\n// BBR uses three distinct but interrelated control parameters: pacing rate,\n// send quantum, and congestion window (cwnd).\n\nuse std::time::Duration;\n\nuse super::{\n    Bbr, BbrStateMachine, INITIAL_CWND, MIN_PIPE_CWND_PKTS, MINIMUM_WINDOW_PACKETS, MSS,\n    SEND_QUANTUM_THRESHOLD_PACING_RATE,\n};\nuse crate::rtt::INITIAL_RTT;\n\nimpl Bbr {\n    // 4.2.1.  Pacing Rate\n    pub(super) fn init_pacing_rate(&mut self) {\n        let srtt = INITIAL_RTT;\n        let nominal_bandwidth = INITIAL_CWND as f64 / srtt.as_secs_f64();\n        self.pacing_rate = (self.pacing_gain * nominal_bandwidth) as u64;\n    }\n\n    pub(super) fn set_pacing_rate(&mut self) {\n        self.set_pacing_rate_with_gain(self.pacing_gain);\n    }\n\n    pub(super) fn set_pacing_rate_with_gain(&mut self, pacing_gain: f64) {\n        let rate = (pacing_gain * self.btlbw as f64) as u64;\n        if self.is_filled_pipe || rate > self.pacing_rate {\n            self.pacing_rate = rate;\n        }\n    }\n\n    // 4.2.2.  Send Quantum\n    pub(super) fn set_send_quantum(&mut self) {\n        let floor = if self.pacing_rate < SEND_QUANTUM_THRESHOLD_PACING_RATE {\n            MSS\n        } else {\n            2 * MSS\n        };\n\n        // BBR.send_quantum  = min(BBR.pacing_rate * 1ms, 64KBytes)\n        self.send_quantum = (self.pacing_rate / 1000).clamp(floor as u64, 64 * 1024);\n    }\n\n    // 4.2.3.  Congestion Window\n    // 4.2.3.2.  
Target cwnd\n    pub fn inflight(&self, gain: f64) -> u64 {\n        if self.rtprop == Duration::MAX {\n            return INITIAL_CWND;\n        }\n\n        let quanta = 3 * self.send_quantum;\n        let estimated_bdp = self.btlbw as f64 * self.rtprop.as_secs_f64();\n        (gain * estimated_bdp) as u64 + quanta\n    }\n\n    fn update_target_cwnd(&mut self) {\n        self.target_cwnd = self.inflight(self.cwnd_gain);\n    }\n\n    // 4.2.3.4 Modulating cwnd in Loss Recovery\n    pub(super) fn save_cwnd(&mut self) {\n        self.prior_cwnd = if !self.in_recovery && self.state != BbrStateMachine::ProbeRTT {\n            self.cwnd\n        } else {\n            self.cwnd.max(self.prior_cwnd)\n        }\n    }\n\n    pub fn restore_cwnd(&mut self) {\n        self.cwnd = self.cwnd.max(self.prior_cwnd)\n    }\n\n    fn modulate_cwnd_for_recovery(&mut self, bytes_in_flight: u64) {\n        if self.newly_lost_bytes > 0 {\n            self.cwnd = self\n                .cwnd\n                .saturating_sub(self.newly_lost_bytes)\n                .max((MSS * MINIMUM_WINDOW_PACKETS) as u64);\n        }\n\n        if self.packet_conservation {\n            self.cwnd = self.cwnd.max(bytes_in_flight + self.newly_acked_bytes);\n        }\n    }\n\n    // 4.2.3.5 Modulating cwnd in ProbeRTT\n    fn modulate_cwnd_for_probe_rtt(&mut self) {\n        if self.state == BbrStateMachine::ProbeRTT {\n            self.cwnd = self.cwnd.min(self.min_pipe_cwnd());\n        }\n    }\n\n    // 4.2.3.6.  
Core cwnd Adjustment Mechanism\n    pub(super) fn set_cwnd(&mut self) {\n        let bytes_in_flight = self.bytes_in_flight;\n\n        self.update_target_cwnd();\n        self.modulate_cwnd_for_recovery(bytes_in_flight);\n\n        if !self.packet_conservation {\n            if self.is_filled_pipe {\n                self.cwnd = self.target_cwnd.min(self.cwnd + self.newly_acked_bytes);\n            } else if self.cwnd < self.target_cwnd\n                || self.delivery_rate.delivered() < INITIAL_CWND as usize\n            {\n                self.cwnd += self.newly_acked_bytes;\n            }\n            self.cwnd = self.cwnd.max(self.min_pipe_cwnd());\n        }\n\n        self.modulate_cwnd_for_probe_rtt();\n    }\n\n    /// The minimal cwnd value BBR tries to target, in bytes\n    pub(super) fn min_pipe_cwnd(&self) -> u64 {\n        (MIN_PIPE_CWND_PKTS * MSS) as u64\n    }\n}\n\n#[cfg(test)]\nmod tests {\n\n    use super::*;\n\n    #[test]\n    fn test_init_pacing_rate() {\n        let mut bbr = Bbr::new();\n        bbr.init();\n        assert_eq!(\n            bbr.pacing_rate,\n            (bbr.pacing_gain * INITIAL_CWND as f64 / INITIAL_RTT.as_secs_f64()) as u64\n        );\n    }\n\n    #[test]\n    fn test_bbr_set_pacing_rate() {\n        let mut bbr = Bbr::new();\n        bbr.btlbw = 1000;\n        bbr.is_filled_pipe = true;\n        bbr.set_pacing_rate();\n        assert_eq!(bbr.pacing_rate, (bbr.btlbw as f64 * bbr.pacing_gain) as u64);\n    }\n\n    #[test]\n    fn test_bbr_set_send_quantum() {\n        let mut bbr = Bbr::new();\n        bbr.pacing_rate = SEND_QUANTUM_THRESHOLD_PACING_RATE + 1;\n\n        bbr.set_send_quantum();\n        assert_eq!(bbr.send_quantum, (2 * MSS) as u64);\n\n        bbr.pacing_rate = SEND_QUANTUM_THRESHOLD_PACING_RATE - 1;\n        bbr.set_send_quantum();\n        assert_eq!(bbr.send_quantum, MSS as u64);\n\n        bbr.pacing_rate = 120_000_000;\n        bbr.set_send_quantum();\n        assert_eq!(bbr.send_quantum, 64 * 
1024);\n\n        bbr.pacing_rate = 10_000_000;\n        bbr.set_send_quantum();\n        assert_eq!(bbr.send_quantum, 10000);\n    }\n\n    #[test]\n    fn test_bbr_inflight() {\n        let mut bbr = Bbr::new();\n        bbr.btlbw = 10_000_000;\n        bbr.rtprop = Duration::from_millis(100);\n        let bdp = bbr.inflight(1.0);\n        assert_eq!(bdp, 1_000_000);\n\n        bbr.send_quantum = 64 * 1024;\n\n        let bdp = bbr.inflight(1.0);\n        assert_eq!(bdp, 1_000_000 + bbr.send_quantum * 3);\n    }\n\n    #[test]\n    fn test_bbr_modulate_cwnd_for_recovery() {\n        let mut bbr = Bbr::new();\n\n        bbr.cwnd = 10000;\n        bbr.packet_conservation = false;\n        bbr.newly_lost_bytes = 1000;\n\n        // on packet loss, cwnd is reduced by the newly lost bytes\n        bbr.modulate_cwnd_for_recovery(9000);\n        assert_eq!(bbr.cwnd, 9000);\n\n        bbr.packet_conservation = true;\n        bbr.newly_lost_bytes = 0;\n        bbr.cwnd = 10000;\n        // during packet conservation, cwnd is held at no less than bytes_in_flight + newly_acked_bytes\n        bbr.modulate_cwnd_for_recovery(9000);\n        bbr.newly_acked_bytes = 1000;\n        assert_eq!(bbr.cwnd, 10000);\n    }\n\n    #[test]\n    fn test_modulate_cwnd_for_probe_rtt() {\n        let mut bbr = Bbr::new();\n        bbr.cwnd = 10000;\n        // min(4 * MSS, cwnd)\n        bbr.state = BbrStateMachine::ProbeRTT;\n        bbr.modulate_cwnd_for_probe_rtt();\n\n        assert_eq!(bbr.cwnd, (4 * MSS) as u64);\n    }\n\n    #[test]\n    fn test_bbr_set_cwnd() {\n        let mut bbr = Bbr::new();\n\n        bbr.bytes_in_flight = 1000;\n        bbr.packet_conservation = false;\n        bbr.btlbw = 10_000_000; // 10 MB/s\n        bbr.rtprop = Duration::from_millis(100);\n        bbr.newly_acked_bytes = 4000;\n\n        // pacing_rate = btlbw * pacing_gain\n        // target_cwnd = (btlbw * rtt) * cwnd_gain + quantum\n\n        // initial cwnd < target_cwnd\n        // adjust cwnd when an ACK is received\n        bbr.set_cwnd();\n        assert_eq!(bbr.cwnd, 100000);\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/algorithm/bbr/state.rs",
    "content": "use std::time::Instant;\n\nuse super::{Bbr, BbrStateMachine, HIGH_GAIN, PROBE_RTT_DURATION};\nuse crate::rtt::INITIAL_RTT;\n\n// BBRGainCycleLen: the number of phases in the BBR ProbeBW gain cycle: 8.\nconst GAIN_CYCLE_LEN: usize = 8;\n\n// Pacing Gain Cycles. Each phase normally lasts for roughly BBR.RTprop.\nconst PACING_GAIN_CYCLE: [f64; GAIN_CYCLE_LEN] = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0];\n\nimpl Bbr {\n    pub(super) fn init(&mut self) {\n        self.rtprop = INITIAL_RTT;\n        self.rtprop_stamp = tokio::time::Instant::now();\n        self.probe_rtt_done_stamp = None;\n        self.probe_rtt_round_done = false;\n        self.packet_conservation = false;\n        self.prior_cwnd = 0;\n        self.is_idle_restart = false;\n\n        self.init_round_counting();\n        self.init_full_pipe();\n        self.init_pacing_rate();\n        self.enter_startup();\n    }\n\n    // 4.3.2.1. Startup Dynamics\n    pub(crate) fn enter_startup(&mut self) {\n        self.state = BbrStateMachine::Startup;\n        self.pacing_gain = HIGH_GAIN;\n        self.cwnd_gain = HIGH_GAIN;\n    }\n\n    // 4.3.2.2.  Estimating When Startup has Filled the Pipe\n    fn init_full_pipe(&mut self) {\n        self.is_filled_pipe = false;\n        self.full_bw = 0;\n        self.full_bw_count = 0;\n    }\n\n    // 退出 startup 进入 drain 的条件是连续三回合没有带宽增长\n    pub(super) fn check_full_pipe(&mut self) {\n        if self.is_filled_pipe || !self.is_round_start || self.delivery_rate.app_limited() {\n            // no need to check for a full pipe now\n            return;\n        }\n\n        // BBR.BtlBw still growing?\n        if self.btlbw as f64 >= self.full_bw as f64 * 1.25 {\n            // record new baseline level\n            self.full_bw = self.btlbw;\n            self.full_bw_count = 0;\n        }\n\n        self.full_bw_count += 1;\n        if self.full_bw_count >= 3 {\n            self.is_filled_pipe = true;\n        }\n    }\n\n    // 4.3.3.  
Drain\n    fn enter_drain(&mut self) {\n        self.state = BbrStateMachine::Drain;\n        self.pacing_gain = 1.0 / HIGH_GAIN; // pace slowly\n        self.cwnd_gain = HIGH_GAIN; // maintain cwnd\n    }\n\n    pub(super) fn check_drain(&mut self) {\n        if self.state == BbrStateMachine::Startup && self.is_filled_pipe {\n            self.enter_drain()\n        }\n        if self.state == BbrStateMachine::Drain && self.bytes_in_flight <= self.inflight(1.0) {\n            self.enter_probe_bw();\n        }\n    }\n\n    // 4.3.4.  ProbeBW\n    pub fn enter_probe_bw(&mut self) {\n        self.state = BbrStateMachine::ProbeBW;\n        self.pacing_gain = 1.0;\n        self.cwnd_gain = 2.0;\n\n        // 随机从一个阶段开始\n        self.cycle_index = GAIN_CYCLE_LEN - 1 - rand::rng().random_range(0..GAIN_CYCLE_LEN - 1);\n        self.advance_cycle_phase()\n    }\n\n    // On each ACK BBR runs BBRCheckCyclePhase(), to see if it's time to\n    // advance to the next gain cycle phase:\n    pub(super) fn check_cycle_phase(&mut self) {\n        if self.state == BbrStateMachine::ProbeBW && self.is_next_cycle_phase() {\n            self.advance_cycle_phase();\n        }\n    }\n\n    fn advance_cycle_phase(&mut self) {\n        self.cycle_stamp = tokio::time::Instant::now();\n        self.cycle_index = (self.cycle_index + 1) % GAIN_CYCLE_LEN;\n        self.pacing_gain = PACING_GAIN_CYCLE[self.cycle_index];\n    }\n\n    // 是否要进入下一阶段\n    fn is_next_cycle_phase(&mut self) -> bool {\n        let now = tokio::time::Instant::now();\n        let is_full_length = now.saturating_duration_since(self.cycle_stamp) > self.rtprop;\n\n        // pacing_gain == 1.0 持续 rtprop\n        if (self.pacing_gain - 1.0).abs() < f64::EPSILON {\n            return is_full_length;\n        }\n\n        // pacing_gain > 1 至少持续 rtprop 且 出现丢包或 inflight 达到 5/4 * estimated_BDP\n        if self.pacing_gain > 1.0 {\n            return is_full_length\n                && (self.newly_lost_bytes > 0\n                  
  || self.prior_bytes_in_flight >= self.inflight(self.pacing_gain));\n        }\n\n        // pacing_gain < 1 至少持续 rtprop 且  inflight 达到 estimated_BDP\n        is_full_length || self.prior_bytes_in_flight <= self.inflight(1.0)\n    }\n\n    // 4.3.4.4.  Restarting From Idle\n    pub(super) fn handle_restart_from_idle(&mut self) {\n        if self.bytes_in_flight == 0 && self.delivery_rate.app_limited() {\n            self.is_idle_restart = true;\n\n            if self.state == BbrStateMachine::ProbeBW {\n                self.set_pacing_rate_with_gain(1.0);\n            }\n        }\n    }\n\n    // 4.3.5.  ProbeRTT\n    pub(super) fn check_probe_rtt(&mut self) {\n        if self.state != BbrStateMachine::ProbeRTT\n            && self.is_rtprop_expired\n            && !self.is_idle_restart\n        {\n            self.enter_probe_rtt();\n            self.save_cwnd();\n            self.probe_rtt_done_stamp = None;\n        }\n\n        if self.state == BbrStateMachine::ProbeRTT {\n            self.handle_probe_rtt();\n        }\n\n        self.is_idle_restart = false;\n    }\n\n    fn enter_probe_rtt(&mut self) {\n        self.state = BbrStateMachine::ProbeRTT;\n\n        self.pacing_gain = 1.0;\n        self.cwnd_gain = 1.0;\n    }\n\n    fn handle_probe_rtt(&mut self) {\n        // C.app_limited = (BW.delivered + packets_in_flight) ? 
: 1\n        self.delivery_rate.update_app_limited(true);\n\n        let now = tokio::time::Instant::now();\n        if let Some(probe_rtt_done_stamp) = self.probe_rtt_done_stamp {\n            if self.is_round_start {\n                self.probe_rtt_round_done = true;\n            }\n\n            if self.probe_rtt_round_done && now >= probe_rtt_done_stamp {\n                self.rtprop_stamp = now;\n\n                self.restore_cwnd();\n                self.exit_probe_rtt(now);\n            }\n        } else if self.bytes_in_flight <= self.min_pipe_cwnd() {\n            self.probe_rtt_done_stamp = Some(now + PROBE_RTT_DURATION);\n            self.probe_rtt_round_done = false;\n            self.next_round_delivered = self.delivery_rate.delivered();\n        }\n    }\n\n    fn exit_probe_rtt(&mut self, _: Instant) {\n        if self.is_filled_pipe {\n            self.enter_probe_bw();\n        } else {\n            self.enter_startup();\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n\n    use std::time::{Duration, Instant};\n\n    use crate::algorithm::bbr::{\n        BbrStateMachine, HIGH_GAIN, INITIAL_CWND, MSS, tests::simulate_round_trip,\n    };\n\n    #[test]\n    fn test_bbr_init() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        assert_eq!(bbr.state, BbrStateMachine::Startup);\n        assert_eq!(bbr.pacing_gain, HIGH_GAIN);\n        assert_eq!(bbr.cwnd_gain, HIGH_GAIN);\n        assert_eq!(bbr.cwnd, INITIAL_CWND);\n    }\n\n    #[test]\n    fn test_bbr_enter_startup() {\n        let mut bbr = super::Bbr::new();\n        bbr.enter_startup();\n        assert_eq!(bbr.state, BbrStateMachine::Startup);\n        assert_eq!(bbr.pacing_gain, HIGH_GAIN);\n        assert_eq!(bbr.cwnd_gain, HIGH_GAIN);\n    }\n\n    #[test]\n    fn test_bbr_check_full_pipe() {\n        let mut bbr = super::Bbr::new();\n\n        let mut now = tokio::time::Instant::now();\n        let rtt = Duration::from_millis(100);\n        simulate_round_trip(&mut 
bbr, now, rtt, 0, 10, MSS);\n        now += Duration::from_secs(1);\n        simulate_round_trip(&mut bbr, now, rtt, 0, 10, MSS);\n\n        assert_eq!(bbr.btlbw, (10 * 10 * MSS) as u64);\n        bbr.check_full_pipe();\n        assert!(!bbr.is_filled_pipe);\n\n        now += Duration::from_secs(1);\n        simulate_round_trip(&mut bbr, now, rtt, 0, 10, MSS);\n        assert_eq!(bbr.btlbw, (10 * 10 * MSS) as u64);\n\n        bbr.check_full_pipe();\n        assert!(!bbr.is_filled_pipe);\n\n        now += Duration::from_secs(1);\n        simulate_round_trip(&mut bbr, now, rtt, 0, 10, MSS);\n\n        bbr.check_full_pipe();\n        assert!(bbr.is_filled_pipe);\n    }\n\n    #[test]\n    fn test_bbr_check_drain() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        bbr.is_filled_pipe = true;\n        bbr.bytes_in_flight = 100;\n        bbr.check_drain();\n        assert_eq!(bbr.state, BbrStateMachine::Drain);\n\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        bbr.is_filled_pipe = true;\n        bbr.check_drain();\n        assert_eq!(bbr.state, BbrStateMachine::ProbeBW);\n    }\n\n    #[test]\n    fn test_bbr_enter_probe_bw() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        bbr.enter_probe_bw();\n        assert_eq!(bbr.state, BbrStateMachine::ProbeBW);\n        assert_eq!(bbr.cwnd_gain, 2.0);\n    }\n\n    #[test]\n    fn test_bbr_advance_cycle_phase() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        bbr.cycle_index = 0;\n        bbr.advance_cycle_phase();\n        assert_eq!(bbr.pacing_gain, 0.75);\n\n        bbr.cycle_index = 7;\n        bbr.advance_cycle_phase();\n        assert_eq!(bbr.pacing_gain, 1.25)\n    }\n\n    #[test]\n    fn test_bbr_is_next_cycle_phase() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        bbr.enter_probe_bw();\n        let now = Instant::now();\n\n        bbr.pacing_gain = 1.0;\n        bbr.cycle_stamp = now - 
Duration::from_secs(1);\n        assert!(bbr.is_next_cycle_phase());\n\n        bbr.pacing_gain = 0.75;\n        bbr.cycle_stamp = now - Duration::from_secs(1);\n        bbr.prior_bytes_in_flight = 100;\n        assert!(bbr.is_next_cycle_phase());\n\n        bbr.pacing_gain = 1.25;\n        bbr.cycle_stamp = now - Duration::from_secs(1);\n        assert!(bbr.is_next_cycle_phase());\n    }\n\n    #[test]\n    fn test_restart_from_idle() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n\n        bbr.bytes_in_flight = 0;\n        bbr.handle_restart_from_idle();\n\n        assert!(!bbr.is_idle_restart);\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/algorithm/bbr.rs",
    "content": "use std::time::{Duration, Instant};\n\nuse delivery_rate::Rate;\nuse min_max::MinMax;\nuse qevent::quic::recovery::RecoveryMetricsUpdated;\n\nuse super::Control;\nuse crate::packets::AckedPackets;\n\nmod delivery_rate;\nmod min_max;\npub(crate) mod model;\npub(crate) mod parameters;\npub(crate) mod state;\n\nconst MSS: usize = 1200;\n// RTpropFilterLen: A constant specifying the length of the RTProp min\n// filter window, RTpropFilterLen is `10` secs.\nconst RTPROP_FILTER_LEN: Duration = Duration::from_secs(10);\n\n// BBRHighGain: A constant specifying the minimum gain value that will\n// allow the sending rate to double each round (`2/ln(2)` ~= `2.89`), used\n// in Startup mode for both BBR.pacing_gain and BBR.cwnd_gain.\nconst HIGH_GAIN: f64 = 2.89;\n\n// ProbeRTTDuration: A constant specifying the minimum duration for\n// which ProbeRTT state holds inflight to BBRMinPipeCwnd or fewer\n// packets: 200 ms.\nconst PROBE_RTT_DURATION: Duration = Duration::from_millis(200);\n\n// Pacing rate threshold for selecting the send quantum. 
Default `1.2Mbps`.\nconst SEND_QUANTUM_THRESHOLD_PACING_RATE: u64 = 1_200_000 / 8;\n\n// Initial congestion window in bytes.\npub(crate) const INITIAL_CWND: u64 = 80 * MSS as u64;\n\n// The minimal cwnd value BBR tries to target using: 4 packets, or 4 * SMSS\nconst MIN_PIPE_CWND_PKTS: usize = 4;\n\nconst MINIMUM_WINDOW_PACKETS: usize = 2;\n\n// BBR State\n//\n// https://datatracker.ietf.org/doc/html/draft-cardwell-iccrg-bbr-congestion-control-00#section-3.4\n#[derive(Debug, PartialEq, Eq)]\nenum BbrStateMachine {\n    Startup,\n    Drain,\n    ProbeBW,\n    ProbeRTT,\n}\n\npub(crate) struct Bbr {\n    // StateMachine\n    state: BbrStateMachine,\n    // BBR.pacing_rate: The current pacing rate for a BBR flow, which\n    // controls inter-packet spacing.\n    pacing_rate: u64,\n    // BBR.send_quantum: The maximum size of a data aggregate scheduled and\n    // transmitted together.\n    send_quantum: u64,\n    // Cwnd: The transport sender's congestion window, which limits the\n    // amount of data in flight.\n    cwnd: u64,\n    // BBR.BtlBw: BBR's estimated bottleneck bandwidth available to the transport\n    // flow, estimated from the maximum delivery rate sample in a sliding window.\n    btlbw: u64,\n    // BBR.BtlBwFilter: The max filter used to estimate BBR.BtlBw.\n    btlbwfilter: MinMax,\n    // Delivery rate.\n    delivery_rate: Rate,\n    // BBR.RTprop: BBR's estimated two-way round-trip propagation delay of path,\n    // estimated from the windowed minimum recent round-trip delay sample.\n    rtprop: Duration,\n    // BBR.rtprop_stamp: The wall clock time at which the current BBR.RTProp\n    // sample was obtained.\n    rtprop_stamp: Instant,\n    // BBR.rtprop_expired: A boolean recording whether the BBR.RTprop has\n    // expired and is due for a refresh with an application idle period or a\n    // transition into ProbeRTT state.\n    is_rtprop_expired: bool,\n    // BBR.pacing_gain: The dynamic gain factor used to scale BBR.BtlBw to\n    // produce 
BBR.pacing_rate.\n    pacing_gain: f64,\n    // BBR.cwnd_gain: The dynamic gain factor used to scale the estimated\n    // BDP to produce a congestion window (cwnd).\n    cwnd_gain: f64,\n    // BBR.round_count: Count of packet-timed round trips.\n    round_count: u64,\n    // BBR.round_start: A boolean that BBR sets to true once per packet-\n    // timed round trip, on ACKs that advance BBR.round_count.\n    is_round_start: bool,\n    // BBR.next_round_delivered: packet.delivered value denoting the end of\n    // a packet-timed round trip.\n    next_round_delivered: usize,\n    // Estimator of full pipe.\n    // BBR.filled_pipe: A boolean that records whether BBR estimates that it\n    // has ever fully utilized its available bandwidth (\"filled the pipe\").\n    is_filled_pipe: bool,\n    // Baseline level delivery rate for full pipe estimator.\n    full_bw: u64,\n    // The number of rounds for the full pipe estimator without much growth.\n    full_bw_count: u64,\n    // Timestamp when ProbeRTT state ends.\n    probe_rtt_done_stamp: Option<Instant>,\n    // Whether a round trip in ProbeRTT state has ended.\n    probe_rtt_round_done: bool,\n    // Whether in packet conservation mode.\n    packet_conservation: bool,\n    // Cwnd before loss recovery.\n    prior_cwnd: u64,\n    // Whether restarting from idle.\n    is_idle_restart: bool,\n    // Last time when cycle_index is updated.\n    cycle_stamp: Instant,\n    // Current index of pacing_gain_cycle[].\n    cycle_index: usize,\n    // The upper bound on the volume of data BBR allows in flight.\n    target_cwnd: u64,\n    // Whether in the recovery mode.\n    in_recovery: bool,\n    // Time when the last recovery event started.\n    recovery_epoch_start: Option<Instant>,\n    // Ack time.\n    ack_time: Instant,\n    // Newly marked lost data size in bytes.\n    newly_lost_bytes: u64,\n    // Total lost data size in bytes.\n    bytes_lost_in_total: u64,\n    // Newly acked data size in bytes.\n    newly_acked_bytes: u64,\n    
// The last P.delivered in bytes.\n    packet_delivered: u64,\n    // The last P.sent_time, used to determine whether to exit recovery.\n    last_ack_packet_sent_time: Instant,\n    // The amount of data that was in flight before processing this ACK.\n    prior_bytes_in_flight: u64,\n    // The sum of the size in bytes of all sent packets that contain at least\n    // one ack-eliciting or PADDING frame and have not been acknowledged or\n    // declared lost. The size does not include IP or UDP overhead.\n    pub bytes_in_flight: u64,\n}\n\nimpl From<&Bbr> for RecoveryMetricsUpdated {\n    fn from(value: &Bbr) -> Self {\n        qevent::build!(RecoveryMetricsUpdated {\n            congestion_window: value.cwnd,\n            bytes_in_flight: value.bytes_in_flight,\n            pacing_rate: value.pacing_rate,\n            custom_fields: Map {\n                // Extra BBR metrics exported as custom fields.\n                delivery_rate: value.delivery_rate.sample_delivery_rate(),\n                packet_delivered: value.packet_delivered,\n                newly_acked_bytes: value.newly_acked_bytes,\n                newly_lost_bytes: value.newly_lost_bytes,\n                bytes_lost_in_total: value.bytes_lost_in_total,\n            }\n        })\n    }\n}\n\nimpl Bbr {\n    pub fn new() -> Self {\n        let now = Instant::now();\n        let mut bbr = Bbr {\n            state: BbrStateMachine::Startup,\n            pacing_rate: 0,\n            send_quantum: 0,\n            cwnd: INITIAL_CWND,\n            btlbw: 0,\n            btlbwfilter: MinMax::default(),\n            delivery_rate: Rate::default(),\n            rtprop: Duration::MAX,\n            rtprop_stamp: now,\n            is_rtprop_expired: false,\n            pacing_gain: HIGH_GAIN,\n            cwnd_gain: HIGH_GAIN,\n            round_count: 0,\n            is_round_start: false,\n            next_round_delivered: 0,\n            is_filled_pipe: false,\n            full_bw: 0,\n            full_bw_count: 0,\n            probe_rtt_done_stamp: None,\n  
          probe_rtt_round_done: false,\n            packet_conservation: false,\n            prior_cwnd: 0,\n            is_idle_restart: false,\n            cycle_stamp: now,\n            cycle_index: 0,\n            target_cwnd: 0,\n            in_recovery: false,\n            recovery_epoch_start: None,\n            ack_time: now,\n            newly_lost_bytes: 0,\n            newly_acked_bytes: 0,\n            last_ack_packet_sent_time: now,\n            prior_bytes_in_flight: 0,\n            packet_delivered: 0,\n            bytes_in_flight: 0,\n            bytes_lost_in_total: 0,\n        };\n        bbr.on_connection_init();\n        bbr\n    }\n}\n\n// The signatures below must match the `Control` trait in algorithm.rs.\nimpl Control for Bbr {\n    fn on_packet_sent_cc(&mut self, _packet: &crate::packets::SentPacket) {\n        todo!()\n    }\n\n    fn on_packet_acked(&mut self, _acked_packet: &crate::packets::SentPacket) {\n        todo!()\n    }\n\n    fn on_packets_lost(\n        &mut self,\n        _lost_packets: &mut dyn Iterator<Item = &crate::packets::SentPacket>,\n        _persistent_lost: bool,\n    ) {\n        todo!()\n    }\n\n    fn process_ecn(\n        &mut self,\n        _ack: &qbase::frame::AckFrame,\n        _sent_time: &tokio::time::Instant,\n        _epoch: qbase::Epoch,\n    ) {\n        todo!()\n    }\n\n    fn congestion_window(&self) -> usize {\n        todo!()\n    }\n\n    fn pacing_rate(&self) -> Option<usize> {\n        todo!()\n    }\n\n    fn remove_from_bytes_in_flight(\n        &mut self,\n        _packets: &mut dyn Iterator<Item = &crate::packets::SentPacket>,\n    ) {\n        todo!()\n    }\n}\n\nimpl Bbr {\n    // 3.5.1.  Initialization\n    fn on_connection_init(&mut self) {\n        self.init();\n    }\n\n    // 3.5.2.  Per-ACK Steps\n    fn update_model_and_state(&mut self, ack: &mut AckedPackets) {\n        self.update_btlbw(ack);\n        self.check_cycle_phase();\n        self.check_full_pipe();\n        self.check_drain();\n        self.update_rtprop();\n        self.check_probe_rtt();\n    }\n\n    fn update_control_parameters(&mut self) {\n        self.set_pacing_rate();\n        self.set_send_quantum();\n        self.set_cwnd();\n    }\n\n    // Not part of the `Control` trait; kept as an inherent method.\n    fn on_congestion_event(&mut self, _sent_time: &Instant) {\n        todo!()\n    }\n\n    // 3.5.3.  
Per-Transmit Steps\n    fn on_transmit(&mut self) {\n        self.handle_restart_from_idle();\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::{\n        collections::VecDeque,\n        time::{Duration, Instant},\n    };\n\n    use crate::{\n        algorithm::bbr::{BbrStateMachine, HIGH_GAIN, INITIAL_CWND, MSS},\n        packets::{AckedPackets, SentPacket},\n        rtt::INITIAL_RTT,\n    };\n\n    #[test]\n    fn test_bbr_init() {\n        let mut bbr = super::Bbr::new();\n        bbr.init();\n        assert_eq!(bbr.state, BbrStateMachine::Startup);\n        assert_eq!(bbr.pacing_gain, HIGH_GAIN);\n        assert_eq!(bbr.cwnd_gain, HIGH_GAIN);\n        assert_eq!(bbr.cycle_index, 0);\n        assert_eq!(bbr.cwnd, INITIAL_CWND);\n        assert_eq!(bbr.bytes_in_flight, 0);\n        assert_eq!(\n            bbr.pacing_rate,\n            (bbr.pacing_gain * INITIAL_CWND as f64 / INITIAL_RTT.as_secs_f64()) as u64\n        );\n    }\n\n    #[test]\n    fn test_bbr_sent() {\n        let mut bbr = super::Bbr::new();\n        for _ in 0..10 {\n            let mut sent = SentPacket {\n                sent_bytes: MSS,\n                ..Default::default()\n            };\n            bbr.on_sent(&mut sent, MSS);\n        }\n        assert_eq!(bbr.bytes_in_flight, 10 * MSS as u64);\n    }\n\n    #[test]\n    fn test_bbr_ack() {\n        let mut bbr = super::Bbr::new();\n        let mut now = Instant::now();\n        let rtt = Duration::from_millis(100);\n\n        simulate_round_trip(&mut bbr, now, rtt, 0, 10, MSS);\n        assert_eq!(bbr.bytes_in_flight, 0);\n        assert_eq!(bbr.delivery_rate.delivered(), 10 * MSS);\n        assert_eq!(\n            bbr.delivery_rate.sample_delivery_rate(),\n            (10 * 10 * MSS) as u64\n        );\n\n        now += Duration::from_secs(1);\n        // next round\n        // generate btlbw\n        simulate_round_trip(&mut bbr, now, rtt, 10, 40, MSS);\n        assert_eq!(bbr.delivery_rate.delivered(), 40 * MSS);\n        
assert_eq!(\n            bbr.delivery_rate.sample_delivery_rate(),\n            (30 * 10 * MSS) as u64\n        );\n        assert_eq!(bbr.btlbw, (10 * 10 * MSS) as u64);\n        assert_eq!(\n            bbr.pacing_rate,\n            (bbr.pacing_gain * INITIAL_CWND as f64 / INITIAL_RTT.as_secs_f64()) as u64\n        );\n\n        now += Duration::from_secs(1);\n        // update btlbw\n        simulate_round_trip(&mut bbr, now, rtt, 40, 60, MSS);\n        assert_eq!(\n            bbr.delivery_rate.sample_delivery_rate(),\n            (20 * 10 * MSS) as u64\n        );\n        assert_eq!(bbr.btlbw, (3 * 10 * 10 * MSS) as u64);\n        assert_eq!(bbr.pacing_rate, (bbr.btlbw as f64 * bbr.pacing_gain) as u64);\n    }\n\n    pub(super) fn simulate_round_trip(\n        bbr: &mut super::Bbr,\n        start_time: Instant,\n        rtt: Duration,\n        start: usize,\n        end: usize,\n        packet_size: usize,\n    ) {\n        let mut acks = VecDeque::with_capacity(end - start);\n        for i in start..end {\n            let mut sent: SentPacket = SentPacket {\n                packet_number: i as u64,\n                sent_bytes: packet_size,\n                time_sent: start_time,\n                ..Default::default()\n            };\n            bbr.on_sent(&mut sent, 0);\n\n            let mut ack: AckedPackets = sent.into();\n            ack.rtt = rtt;\n            acks.push_back(ack);\n        }\n\n        // let ack_time = start_time + rtt;\n        bbr.on_ack(acks);\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/algorithm/new_reno.rs",
    "content": "use std::sync::{\n    Arc,\n    atomic::{AtomicU16, Ordering},\n};\n\nuse qbase::{Epoch, frame::AckFrame};\nuse qevent::quic::recovery::RecoveryMetricsUpdated;\nuse tokio::time::Instant;\n\nuse crate::{\n    algorithm::Control,\n    packets::{SentPacket, State},\n};\n\nconst INFINITE_SSTHRESH: usize = usize::MAX;\n\npub(crate) struct NewReno {\n    max_datagram_size: Arc<AtomicU16>,\n    ecn_ce_counters: [u64; Epoch::count()],\n    bytes_in_flight: usize,\n    congestion_window: usize,\n    congestion_recovery_start_time: Option<Instant>,\n    ssthresh: usize,\n}\n\nimpl From<&NewReno> for RecoveryMetricsUpdated {\n    fn from(reno: &NewReno) -> Self {\n        qevent::build!(RecoveryMetricsUpdated {\n            congestion_window: reno.congestion_window as u64,\n            ssthresh: reno.ssthresh as u64,\n        })\n    }\n}\n\nimpl NewReno {\n    /// B.3. Initialization\n    pub(crate) fn new(max_datagram_size: Arc<AtomicU16>) -> Self {\n        // The upper bound for the initial window will be\n        // min (10*MSS, max (2*MSS, 14600))\n        // See https://datatracker.ietf.org/doc/html/rfc6928#autoid-3\n        // Compute in usize to avoid u16 overflow for large MTUs.\n        let mtu = max_datagram_size.load(Ordering::Relaxed) as usize;\n        let initial_window = (mtu * 10).min((mtu * 2).max(14600));\n        NewReno {\n            max_datagram_size,\n            ecn_ce_counters: [0; Epoch::count()],\n            congestion_window: initial_window,\n            bytes_in_flight: 0,\n            congestion_recovery_start_time: None,\n            ssthresh: INFINITE_SSTHRESH,\n        }\n    }\n\n    /// B.4. On Packet Sent\n    /// OnPacketSentCC(sent_bytes):\n    ///   bytes_in_flight += sent_bytes\n    fn on_packet_sent_cc(&mut self, sent_bytes: usize) {\n        self.bytes_in_flight += sent_bytes;\n    }\n\n    /// B.5. 
On Packet Acknowledgment\n    /// InCongestionRecovery(sent_time):\n    ///   return sent_time <= congestion_recovery_start_time\n    fn in_congestion_recovery(&self, sent_time: &Instant) -> bool {\n        self.congestion_recovery_start_time\n            .map(|recovery_start_time| *sent_time <= recovery_start_time)\n            .unwrap_or(false)\n    }\n\n    /// OnPacketAcked(acked_packet):\n    ///   if (!acked_packet.in_flight):\n    ///     return;\n    ///   // Remove from bytes_in_flight.\n    ///   bytes_in_flight -= acked_packet.sent_bytes\n    ///   // Do not increase congestion_window if application\n    ///   // limited or flow control limited.\n    ///   if (IsAppOrFlowControlLimited())\n    ///     return\n    ///   // Do not increase congestion window in recovery period.\n    ///   if (InCongestionRecovery(acked_packet.time_sent)):\n    ///     return\n    ///   if (congestion_window < ssthresh):\n    ///     // Slow start.\n    ///     congestion_window += acked_packet.sent_bytes\n    ///   else:\n    ///     // Congestion avoidance.\n    ///     congestion_window +=\n    ///       max_datagram_size * acked_packet.sent_bytes\n    ///       / congestion_window\n    fn on_packet_acked(&mut self, acked_packet: &SentPacket) {\n        if !acked_packet.count_for_cc {\n            return;\n        }\n        // If the packet is not in the Inflight state, it was already declared\n        // lost and retransmitted, so its bytes no longer count in flight.\n        if acked_packet.state == State::Inflight {\n            self.bytes_in_flight = self.bytes_in_flight.saturating_sub(acked_packet.sent_bytes);\n        }\n        // If a packet in the Retransmitted state is acked again, restore the\n        // congestion window.\n        if self.in_congestion_recovery(&acked_packet.time_sent) {\n            qevent::event!({ RecoveryMetricsUpdated::from(&*self) });\n            return;\n        }\n        if self.congestion_window < self.ssthresh {\n            self.congestion_window += acked_packet.sent_bytes;\n        } else {\n            self.congestion_window +=\n                self.max_datagram_size() * acked_packet.sent_bytes / 
self.congestion_window;\n        }\n        qevent::event!({ RecoveryMetricsUpdated::from(&*self) });\n    }\n\n    /// B.6. On New Congestion Event\n    /// OnCongestionEvent(sent_time):\n    ///   // No reaction if already in a recovery period.\n    ///   if (InCongestionRecovery(sent_time)):\n    ///     return\n    ///   // Enter recovery period.\n    ///   congestion_recovery_start_time = now()\n    ///   ssthresh = congestion_window * kLossReductionFactor\n    ///   congestion_window = max(ssthresh, kMinimumWindow)\n    ///   // A packet can be sent to speed up loss recovery.\n    ///   MaybeSendOnePacket()\n    fn on_congestion_event(&mut self, sent_time: &Instant) {\n        if self.in_congestion_recovery(sent_time) {\n            return;\n        }\n\n        let now = Instant::now();\n        self.congestion_recovery_start_time = Some(now);\n        // WARN: deviates from the pseudocode (cwnd * kLossReductionFactor);\n        // saturating_sub avoids underflow when cwnd is at the minimum.\n        self.ssthresh = self.congestion_window.saturating_sub(self.max_datagram_size());\n        // The RECOMMENDED value is 2 * max_datagram_size.\n        // See https://datatracker.ietf.org/doc/html/rfc9002#name-initial-and-minimum-congest\n        self.congestion_window = self.ssthresh.max(2 * self.max_datagram_size());\n        // A packet can be sent to speed up loss recovery.\n        // self.maybe_send_packet(1);\n        qevent::event!({ RecoveryMetricsUpdated::from(&*self) });\n    }\n\n    /// B.7. 
Process ECN Information\n    /// ProcessECN(ack, pn_space):\n    ///   // If the ECN-CE counter reported by the peer has increased,\n    ///   // this could be a new congestion event.\n    ///   if (ack.ce_counter > ecn_ce_counters[pn_space]):\n    ///     ecn_ce_counters[pn_space] = ack.ce_counter\n    ///     sent_time = sent_packets[ack.largest_acked].time_sent\n    ///     OnCongestionEvent(sent_time)\n    fn process_ecn(&mut self, ack: &AckFrame, sent_time: &Instant, epoch: Epoch) {\n        if let Some(ecn) = ack.ecn()\n            && ecn.ce() > self.ecn_ce_counters[epoch]\n        {\n            self.ecn_ce_counters[epoch] = ecn.ce();\n            self.on_congestion_event(sent_time);\n        }\n    }\n\n    /// B.8. On Packets Lost\n    /// OnPacketsLost(lost_packets):\n    ///   sent_time_of_last_loss = 0\n    ///   // Remove lost packets from bytes_in_flight.\n    ///   for lost_packet in lost_packets:\n    ///     if lost_packet.in_flight:\n    ///       bytes_in_flight -= lost_packet.sent_bytes\n    ///       sent_time_of_last_loss =\n    ///         max(sent_time_of_last_loss, lost_packet.time_sent)\n    ///   // Congestion event if in-flight packets were lost\n    ///   if (sent_time_of_last_loss != 0):\n    ///     OnCongestionEvent(sent_time_of_last_loss)\n    ///   // Reset the congestion window if the loss of these\n    ///   // packets indicates persistent congestion.\n    ///   // Only consider packets sent after getting an RTT sample.\n    ///   if (first_rtt_sample == 0):\n    ///     return\n    ///   pc_lost = []\n    ///   for lost in lost_packets:\n    ///     if lost.time_sent > first_rtt_sample:\n    ///       pc_lost.insert(lost)\n    ///   if (InPersistentCongestion(pc_lost)):\n    ///     congestion_window = kMinimumWindow\n    ///     congestion_recovery_start_time = 0\n    fn on_packets_lost(\n        &mut self,\n        lost_packets: &mut dyn Iterator<Item = &SentPacket>,\n        persistent_lost: bool,\n    ) {\n        let mut 
sent_time_last_loss: Option<Instant> = None;\n        for lost_packet in lost_packets {\n            if lost_packet.count_for_cc {\n                self.bytes_in_flight = self.bytes_in_flight.saturating_sub(lost_packet.sent_bytes);\n                sent_time_last_loss = sent_time_last_loss\n                    .map(|t| t.max(lost_packet.time_sent))\n                    .or(Some(lost_packet.time_sent));\n            }\n        }\n        if let Some(time) = sent_time_last_loss {\n            self.on_congestion_event(&time);\n        }\n\n        if persistent_lost {\n            // WARN: deviates from RFC 9002, which resets cwnd to\n            // kMinimumWindow on persistent congestion; ssthresh is halved here.\n            self.ssthresh = self.congestion_window >> 1;\n            self.congestion_window = self.ssthresh.max(2 * self.max_datagram_size());\n            self.congestion_recovery_start_time = None;\n        }\n    }\n\n    /// RemoveFromBytesInFlight(discarded_packets):\n    ///  // Remove any unacknowledged packets from flight.\n    ///  foreach packet in discarded_packets:\n    ///    if packet.in_flight\n    ///      bytes_in_flight -= size\n    fn remove_from_bytes_in_flight(\n        &mut self,\n        discard_packets: &mut dyn Iterator<Item = &SentPacket>,\n    ) {\n        for packet in discard_packets {\n            if packet.count_for_cc && packet.state != State::Retransmitted {\n                self.bytes_in_flight = self.bytes_in_flight.saturating_sub(packet.sent_bytes);\n            }\n        }\n    }\n\n    fn max_datagram_size(&self) -> usize {\n        self.max_datagram_size.load(Ordering::Relaxed) as usize\n    }\n}\n\nimpl Control for NewReno {\n    fn on_packet_sent_cc(&mut self, packet: &SentPacket) {\n        self.on_packet_sent_cc(packet.sent_bytes);\n    }\n\n    fn on_packet_acked(&mut self, acked_packet: &SentPacket) {\n        self.on_packet_acked(acked_packet);\n    }\n\n    fn on_packets_lost(\n        &mut self,\n        lost_packets: &mut dyn Iterator<Item = &SentPacket>,\n        persistent_lost: bool,\n    ) {\n        self.on_packets_lost(lost_packets, 
persistent_lost);\n    }\n\n    fn congestion_window(&self) -> usize {\n        self.congestion_window\n    }\n\n    fn pacing_rate(&self) -> Option<usize> {\n        None\n    }\n\n    fn remove_from_bytes_in_flight(&mut self, packets: &mut dyn Iterator<Item = &SentPacket>) {\n        self.remove_from_bytes_in_flight(packets);\n    }\n\n    fn process_ecn(&mut self, ack: &AckFrame, sent_time: &Instant, epoch: Epoch) {\n        self.process_ecn(ack, sent_time, epoch);\n    }\n}\n\n/*\n#[cfg(test)]\nmod tests {\n\n    use super::*;\n    use crate::packets::SentPacket;\n\n    #[test]\n    fn test_reno_init() {\n        let reno = NewReno::new();\n        assert_eq!(reno.cwnd, INIT_CWND);\n        assert_eq!(reno.ssthresh, super::INFINITE_SSTHRESH);\n        assert_eq!(reno.recovery_start_time, None);\n    }\n\n    #[test]\n    fn test_reno_slow_start() {\n        let mut reno = NewReno::new();\n        let acks = generate_acks(0, 10);\n\n        // first round trip\n        reno.on_ack(acks);\n        assert_eq!(reno.cwnd, 20 * MSS as u64);\n\n        // second round trip\n        let acks = generate_acks(10, 30);\n        reno.on_ack(acks);\n        assert_eq!(reno.cwnd, 40 * MSS as u64);\n    }\n\n    #[test]\n    fn test_reno_congestion_avoidance() {\n        let mut reno = NewReno::new();\n        reno.ssthresh = 30 * MSS as u64;\n        let acks = generate_acks(0, 20);\n        let pre_cwnd = reno.cwnd();\n        // slow start\n        reno.on_ack(acks);\n        assert_eq!(reno.cwnd, pre_cwnd + 20 * MSS as u64);\n\n        let pre_cwnd = reno.cwnd();\n        let acks = generate_acks(20, 60);\n        // congestion avoidance\n        // increase by one MSS when bytes_acked is greater than cwnd\n        reno.on_ack(acks);\n        assert_eq!(reno.cwnd, pre_cwnd + MSS as u64);\n    }\n\n    #[test]\n    fn test_reno_congestion_event() {\n        let mut reno = NewReno::new();\n        let now = Instant::now();\n        reno.ssthresh = 20 * MSS as u64;\n        let 
acks = generate_acks(0, 10);\n\n        reno.on_ack(acks);\n\n        assert_eq!(reno.cwnd, 20 * MSS as u64);\n        assert_eq!(reno.recovery_start_time, None);\n\n        let time_lost = now + std::time::Duration::from_millis(100);\n        let lost = SentPacket {\n            packet_number: 11,\n            sent_bytes: MSS,\n            time_sent: now,\n            ..Default::default()\n        };\n\n        reno.on_congestion_event(&lost);\n\n        assert_eq!(reno.cwnd, 10 * MSS as u64);\n        assert_eq!(reno.ssthresh, 10 * MSS as u64);\n        assert_eq!(reno.recovery_start_time, Some(time_lost));\n    }\n\n    fn generate_acks(start: usize, end: usize) -> VecDeque<AckedPackets> {\n        let mut acks = VecDeque::with_capacity(end - start);\n        for i in start..end {\n            let sent = SentPacket {\n                packet_number: i as u64,\n                sent_bytes: MSS,\n                time_sent: Instant::now(),\n                ..Default::default()\n            };\n            let ack: AckedPackets = sent.into();\n            acks.push_back(ack);\n        }\n        acks\n    }\n}\n*/\n"
  },
  {
    "path": "qcongestion/src/algorithm.rs",
    "content": "use qbase::{Epoch, frame::AckFrame};\nuse tokio::time::Instant;\n\nuse crate::packets::SentPacket;\n\n// pub(crate) mod bbr;\npub(crate) mod new_reno;\n\n/// The [`Algorithm`] enum represents different congestion control algorithms that can be used.\npub enum Algorithm {\n    Bbr,\n    NewReno,\n}\n\npub trait Control: Send {\n    fn on_packet_sent_cc(&mut self, packet: &SentPacket);\n\n    fn on_packet_acked(&mut self, acked_packet: &SentPacket);\n\n    fn on_packets_lost(\n        &mut self,\n        lost_packets: &mut dyn Iterator<Item = &SentPacket>,\n        persistent_lost: bool,\n    );\n\n    fn process_ecn(&mut self, ack: &AckFrame, sent_time: &Instant, epoch: Epoch);\n\n    fn congestion_window(&self) -> usize;\n\n    fn pacing_rate(&self) -> Option<usize>;\n\n    fn remove_from_bytes_in_flight(&mut self, packets: &mut dyn Iterator<Item = &SentPacket>);\n}\n"
  },
  {
    "path": "qcongestion/src/congestion.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse qbase::{\n    Epoch,\n    frame::AckFrame,\n    net::tx::{ArcSendWaker, Signals},\n};\nuse qevent::quic::recovery::PacketLostTrigger;\nuse tokio::time::{Duration, Instant};\n\nuse crate::{\n    Algorithm, Feedback, MSS, TooManyPtos,\n    algorithm::{Control, new_reno::NewReno},\n    pacing::{self, Pacer},\n    packets::{PacketSpace, SentPacket},\n    rtt::{ArcRtt, INITIAL_RTT},\n    status::PathStatus,\n};\n\nconst INIT_CWND: usize = MSS * 10;\nconst PACKET_THRESHOLD: usize = 3;\n\n/// Implements RFC 9002 Appendix A, Loss Recovery.\n/// See [Appendix A](https://datatracker.ietf.org/doc/html/rfc9002#name-loss-recovery-pseudocode)\npub struct CongestionController {\n    algorithm: Box<dyn Control>,\n    // The Round-Trip Time (RTT) estimator.\n    rtt: ArcRtt,\n    loss_detection_timer: Option<Instant>,\n    // The number of times a PTO has been sent without receiving an acknowledgment.\n    // Used for PTO backoff.\n    pto_count: u32,\n    max_ack_delay: Duration,\n    packet_spaces: [PacketSpace; Epoch::count()],\n    // The pacer is used to control the burst rate.\n    pacer: pacing::Pacer,\n    // Whether a burst is pending until the pacer allows more data to be sent.\n    pending_burst: bool,\n    // Per-epoch packet trackers.\n    trackers: [Arc<dyn Feedback>; Epoch::count()],\n    need_send_ack_eliciting_packets: [usize; Epoch::count()],\n    path_status: PathStatus,\n    // The waker to notify when the controller is ready to send.\n    tx_waker: ArcSendWaker,\n}\n\nimpl CongestionController {\n    /// A.4. 
Initialization\n    fn init(\n        algorithm: Algorithm,\n        max_ack_delay: Duration,\n        trackers: [Arc<dyn Feedback>; 3],\n        path_status: PathStatus,\n        tx_waker: ArcSendWaker,\n    ) -> Self {\n        let algorithm: Box<dyn Control> = match algorithm {\n            Algorithm::Bbr => todo!(\"implement BBR\"),\n            Algorithm::NewReno => Box::new(NewReno::new(path_status.pmtu())),\n        };\n\n        let now = Instant::now();\n        CongestionController {\n            algorithm,\n            rtt: ArcRtt::new(),\n            loss_detection_timer: None,\n            pto_count: 0,\n            max_ack_delay,\n            packet_spaces: [\n                PacketSpace::with_epoch(Epoch::Initial, Duration::ZERO),\n                PacketSpace::with_epoch(Epoch::Handshake, Duration::ZERO),\n                PacketSpace::with_epoch(Epoch::Data, max_ack_delay),\n            ],\n            pacer: Pacer::new(INITIAL_RTT, INIT_CWND, path_status.mtu(), now, None),\n            pending_burst: false,\n            trackers,\n            need_send_ack_eliciting_packets: [0; Epoch::count()],\n            path_status,\n            tx_waker,\n        }\n    }\n\n    /// A.5. 
On Sending a Packet\n    /// OnPacketSent(packet_number, pn_space, ack_eliciting,\n    ///              in_flight, sent_bytes):\n    ///   sent_packets[pn_space][packet_number].packet_number =\n    ///                                            packet_number\n    ///   sent_packets[pn_space][packet_number].time_sent = now()\n    ///   sent_packets[pn_space][packet_number].ack_eliciting =\n    ///                                            ack_eliciting\n    ///   sent_packets[pn_space][packet_number].in_flight = in_flight\n    ///   sent_packets[pn_space][packet_number].sent_bytes = sent_bytes\n    ///   if (in_flight):\n    ///     if (ack_eliciting):\n    ///       time_of_last_ack_eliciting_packet[pn_space] = now()\n    ///     OnPacketSentCC(sent_bytes)\n    ///     SetLossDetectionTimer()\n    pub fn on_packet_sent(\n        &mut self,\n        packet_number: u64,\n        epoch: Epoch,\n        ack_eliciting: bool,\n        in_flight: bool,\n        sent_bytes: usize,\n    ) {\n        let now = Instant::now();\n        let sent = SentPacket::new(packet_number, now, ack_eliciting, in_flight, sent_bytes);\n        if in_flight {\n            if ack_eliciting {\n                self.packet_spaces[epoch].time_of_last_ack_eliciting_packet = Some(now);\n                self.need_send_ack_eliciting_packets[epoch] =\n                    self.need_send_ack_eliciting_packets[epoch].saturating_sub(1);\n            }\n            self.algorithm.on_packet_sent_cc(&sent);\n            self.packet_spaces[epoch]\n                .loss_time\n                .get_or_insert_with(|| now + self.rtt.loss_delay());\n            self.set_loss_detection_timer();\n        }\n        self.packet_spaces[epoch].sent_packets.push_back(sent);\n        self.pacer.on_sent(sent_bytes);\n    }\n\n    /// A.6. 
On Receiving a Datagram\n    /// OnDatagramReceived(datagram):\n    ///   // If this datagram unblocks the server, arm the\n    ///   // PTO timer to avoid deadlock.\n    ///   if (server was at anti-amplification limit):\n    ///     SetLossDetectionTimer()\n    ///     if loss_detection_timer.timeout < now():\n    ///       // Execute PTO if it would have expired\n    ///       // while the amplification limit applied.\n    ///       OnLossDetectionTimeout()\n    pub fn on_datagram_rcvd(&mut self) {\n        // If this datagram unblocks the server, arm the PTO timer to avoid deadlock.\n        if self.path_status.is_at_anti_amplification_limit() {\n            let now = Instant::now();\n            self.set_loss_detection_timer();\n            if self.loss_detection_timer.is_some_and(|t| t < now) {\n                // Execute PTO if it would have expired while the amplification limit applied.\n                self.on_loss_detection_timeout();\n            }\n        }\n    }\n\n    /// A.7. 
On Receiving an Acknowledgment\n    /// OnAckReceived(ack, pn_space):\n    ///   if (largest_acked_packet[pn_space] == infinite):\n    ///     largest_acked_packet[pn_space] = ack.largest_acked\n    ///   else:\n    ///     largest_acked_packet[pn_space] =\n    ///         max(largest_acked_packet[pn_space], ack.largest_acked)\n    ///\n    ///   // DetectAndRemoveAckedPackets finds packets that are newly\n    ///   // acknowledged and removes them from sent_packets.\n    ///   newly_acked_packets =\n    ///       DetectAndRemoveAckedPackets(ack, pn_space)\n    ///   // Nothing to do if there are no newly acked packets.\n    ///   if (newly_acked_packets.empty()):\n    ///     return\n    ///\n    ///   // Update the RTT if the largest acknowledged is newly acked\n    ///   // and at least one ack-eliciting was newly acked.\n    ///   if (newly_acked_packets.largest().packet_number ==\n    ///           ack.largest_acked &&\n    ///       IncludesAckEliciting(newly_acked_packets)):\n    ///     latest_rtt =\n    ///       now() - newly_acked_packets.largest().time_sent\n    ///     UpdateRtt(ack.ack_delay)\n    ///\n    ///   // Process ECN information if present.\n    ///   if (ACK frame contains ECN information):\n    ///       ProcessECN(ack, pn_space)\n    ///\n    ///   lost_packets = DetectAndRemoveLostPackets(pn_space)\n    ///   if (!lost_packets.empty()):\n    ///     OnPacketsLost(lost_packets)\n    ///   OnPacketsAcked(newly_acked_packets)\n    ///\n    ///   // Reset pto_count unless the client is unsure if\n    ///   // the server has validated the client's address.\n    ///   if (PeerCompletedAddressValidation()):\n    ///     pto_count = 0\n    ///   SetLossDetectionTimer()\n    pub fn on_ack_rcvd(&mut self, epoch: Epoch, ack_frame: &AckFrame, now: Instant) {\n        self.packet_spaces[epoch].update_largest_acked_packet(ack_frame.largest());\n\n        match self.packet_spaces[epoch].on_ack_rcvd(ack_frame, &mut self.algorithm) {\n            None => 
return,\n            Some(newly_acked_packets) => {\n                let (largest_pn, largest_time_sent) = newly_acked_packets.largest;\n                if largest_pn == ack_frame.largest() && newly_acked_packets.include_ack_eliciting {\n                    self.rtt.update(\n                        now - largest_time_sent,\n                        Duration::from_micros(ack_frame.delay()),\n                        self.path_status.is_handshake_confirmed(),\n                    );\n                }\n                // Process ECN information if present.\n                if ack_frame.ecn().is_some() {\n                    self.process_ecn(ack_frame, &largest_time_sent, epoch)\n                }\n            }\n        }\n\n        let mut loss_pns = self.packet_spaces[epoch]\n            .detect_lost_packets(self.rtt.loss_delay(), PACKET_THRESHOLD, &mut self.algorithm)\n            .peekable();\n\n        if loss_pns.peek().is_some() {\n            self.rtt.try_backoff_rtt();\n            self.trackers[epoch].may_loss(PacketLostTrigger::TimeThreshold, &mut loss_pns);\n        }\n\n        if self.peer_completed_address_validation() {\n            self.pto_count = 0;\n        }\n        self.set_loss_detection_timer();\n    }\n\n    /// A.8. 
Setting the Loss Detection Timer\n    /// SetLossDetectionTimer():\n    ///   earliest_loss_time, _ = GetLossTimeAndSpace()\n    ///   if (earliest_loss_time != 0):\n    ///     // Time threshold loss detection.\n    ///     loss_detection_timer.update(earliest_loss_time)\n    ///     return\n    ///\n    ///   if (server is at anti-amplification limit):\n    ///     // The server's timer is not set if nothing can be sent.\n    ///     loss_detection_timer.cancel()\n    ///     return\n    ///\n    ///   if (no ack-eliciting packets in flight &&\n    ///       PeerCompletedAddressValidation()):\n    ///     // There is nothing to detect lost, so no timer is set.\n    ///     // However, the client needs to arm the timer if the\n    ///     // server might be blocked by the anti-amplification limit.\n    ///     loss_detection_timer.cancel()\n    ///     return\n    ///\n    ///   timeout, _ = GetPtoTimeAndSpace()\n    ///   loss_detection_timer.update(timeout)\n    fn set_loss_detection_timer(&mut self) {\n        if let Some((earliest_loss_time, _)) = self.get_loss_time_and_epoch() {\n            self.loss_detection_timer = Some(earliest_loss_time);\n            return;\n        }\n\n        if self.path_status.is_at_anti_amplification_limit() {\n            self.loss_detection_timer = None;\n            return;\n        }\n\n        if self.no_ack_eliciting_in_flight() && self.peer_completed_address_validation() {\n            self.loss_detection_timer = None;\n            return;\n        }\n\n        self.loss_detection_timer = self.get_pto_time_and_epoch().map(|(timeout, _)| timeout);\n    }\n\n    // A.9. 
On Timeout\n    /// OnLossDetectionTimeout():\n    ///   earliest_loss_time, pn_space = GetLossTimeAndSpace()\n    ///   if (earliest_loss_time != 0):\n    ///     // Time threshold loss Detection\n    ///     lost_packets = DetectAndRemoveLostPackets(pn_space)\n    ///     assert(!lost_packets.empty())\n    ///     OnPacketsLost(lost_packets)\n    ///     SetLossDetectionTimer()\n    ///     return\n    ///\n    ///   if (no ack-eliciting packets in flight):\n    ///     assert(!PeerCompletedAddressValidation())\n    ///     // Client sends an anti-deadlock packet: Initial is padded\n    ///     // to earn more anti-amplification credit,\n    ///     // a Handshake packet proves address ownership.\n    ///     if (has Handshake keys):\n    ///       SendOneAckElicitingHandshakePacket()\n    ///     else:\n    ///       SendOneAckElicitingPaddedInitialPacket()\n    ///   else:\n    ///     // PTO. Send new data if available, else retransmit old data.\n    ///     // If neither is available, send a single PING frame.\n    ///     _, pn_space = GetPtoTimeAndSpace()\n    ///     SendOneOrTwoAckElicitingPackets(pn_space)\n    ///\n    ///   pto_count++\n    ///   SetLossDetectionTimer()\n    fn on_loss_detection_timeout(&mut self) -> u32 {\n        if let Some((_, epoch)) = self.get_loss_time_and_epoch() {\n            let mut loss_pns = self.packet_spaces[epoch]\n                .detect_lost_packets(self.rtt.loss_delay(), PACKET_THRESHOLD, &mut self.algorithm)\n                .peekable();\n\n            if loss_pns.peek().is_some() {\n                self.rtt.try_backoff_rtt();\n                self.trackers[epoch].may_loss(PacketLostTrigger::TimeThreshold, &mut loss_pns);\n            }\n            self.set_loss_detection_timer();\n            return self.pto_count;\n        }\n\n        if self.no_ack_eliciting_in_flight() {\n            // assert!(!self.peer_completed_address_validation());\n            if self.path_status.has_handshake_key() {\n                
// Send an anti-deadlock packet: Initial is padded\n                // to earn more anti-amplification credit,\n                // a Handshake packet proves address ownership.\n                self.send_ack_eliciting_packet(Epoch::Handshake, 1);\n            } else {\n                self.send_ack_eliciting_packet(Epoch::Initial, 1);\n            }\n        } else {\n            // PTO. Send new data if available, else retransmit old data.\n            // If neither is available, send a single PING frame.\n            if let Some((_, epoch)) = self.get_pto_time_and_epoch() {\n                self.send_ack_eliciting_packet(epoch, 1);\n            }\n        }\n\n        self.pto_count += 1;\n        self.set_loss_detection_timer();\n        self.pto_count\n    }\n\n    /// GetLossTimeAndSpace():\n    ///   time = loss_time[Initial]\n    ///   space = Initial\n    ///   for pn_space in [ Handshake, ApplicationData ]:\n    ///     if (time == 0 || loss_time[pn_space] < time):\n    ///       time = loss_time[pn_space];\n    ///       space = pn_space\n    ///   return time, space\n    fn get_loss_time_and_epoch(&self) -> Option<(Instant, Epoch)> {\n        self.packet_spaces\n            .iter()\n            .zip(Epoch::iter())\n            .filter(|(space, _)| space.loss_time.is_some())\n            .map(|(space, epoch)| (space.loss_time.unwrap(), *epoch))\n            .min_by_key(|(loss_time, _)| *loss_time)\n    }\n\n    // GetPtoTimeAndSpace():\n    //   duration = (smoothed_rtt + max(4 * rttvar, kGranularity))\n    //       * (2 ^ pto_count)\n    //   // Anti-deadlock PTO starts from the current time\n    //   if (no ack-eliciting packets in flight):\n    //     assert(!PeerCompletedAddressValidation())\n    //     if (has handshake keys):\n    //       return (now() + duration), Handshake\n    //     else:\n    //       return (now() + duration), Initial\n    //   pto_timeout = infinite\n    //   pto_space = Initial\n    //   for space in [ Initial, Handshake, 
ApplicationData ]:\n    //     if (no ack-eliciting packets in flight in space):\n    //         continue;\n    //     if (space == ApplicationData):\n    //       // Skip Application Data until handshake confirmed.\n    //       if (handshake is not confirmed):\n    //         return pto_timeout, pto_space\n    //       // Include max_ack_delay and backoff for Application Data.\n    //       duration += max_ack_delay * (2 ^ pto_count)\n    //\n    //     t = time_of_last_ack_eliciting_packet[space] + duration\n    //     if (t < pto_timeout):\n    //       pto_timeout = t\n    //       pto_space = space\n    //   return pto_timeout, pto_space\n    fn get_pto_time_and_epoch(&self) -> Option<(Instant, Epoch)> {\n        let mut duration = self.rtt.base_pto(self.pto_count);\n        let now = Instant::now();\n        if self.no_ack_eliciting_in_flight() {\n            // assert!(!self.peer_completed_address_validation());\n            if self.path_status.has_handshake_key() {\n                return Some((now + duration, Epoch::Handshake));\n            } else {\n                return Some((now + duration, Epoch::Initial));\n            }\n        }\n\n        let mut pto_time = None;\n        for &epoch in Epoch::iter() {\n            if self.packet_spaces[epoch].no_ack_eliciting_in_flight() {\n                continue;\n            }\n            if epoch == Epoch::Data {\n                // An endpoint MUST NOT set its PTO timer for the Application Data\n                // packet number epoch until the handshake is confirmed\n                if !self.path_status.is_handshake_confirmed() {\n                    return pto_time;\n                }\n                duration += self.max_ack_delay * (1 << self.pto_count);\n            }\n            let t = self.packet_spaces[epoch]\n                .time_of_last_ack_eliciting_packet\n                .unwrap()\n                + duration;\n            if pto_time.is_none() || pto_time.is_some_and(|(pto_time, _)| t < 
pto_time) {\n                pto_time = Some((t, epoch));\n            }\n        }\n        pto_time\n    }\n\n    fn no_ack_eliciting_in_flight(&self) -> bool {\n        Epoch::iter().all(|epoch| self.packet_spaces[*epoch].no_ack_eliciting_in_flight())\n    }\n\n    /// PeerCompletedAddressValidation():\n    ///   // Assume clients validate the server's address implicitly.\n    ///   if (endpoint is server):\n    ///     return true\n    ///   // Servers complete address validation when a\n    ///   // protected packet is received.\n    ///   return has received Handshake ACK ||\n    ///        handshake confirmed\n    fn peer_completed_address_validation(&self) -> bool {\n        self.path_status.is_server()\n            || self.path_status.has_received_handshake_ack()\n            || self.path_status.is_handshake_confirmed()\n    }\n\n    fn process_ecn(&mut self, ack: &AckFrame, sent_time: &Instant, epoch: Epoch) {\n        self.algorithm.process_ecn(ack, sent_time, epoch);\n    }\n\n    fn send_ack_eliciting_packet(&mut self, epoch: Epoch, count: usize) {\n        self.need_send_ack_eliciting_packets[epoch] += count;\n        self.tx_waker.wake_by(Signals::PING);\n    }\n\n    #[inline]\n    fn need_ack(&self) -> bool {\n        Epoch::iter().any(|&epoch| self.packet_spaces[epoch].rcvd_packets.need_ack().is_some())\n    }\n\n    #[inline]\n    fn send_quota(&mut self) -> usize {\n        let now = Instant::now();\n        self.pacer.schedule(\n            self.rtt.smoothed_rtt(),\n            self.algorithm.congestion_window(),\n            self.path_status.mtu(),\n            now,\n            self.algorithm.pacing_rate(),\n        )\n    }\n\n    //OnPacketNumberSpaceDiscarded(pn_space):\n    //   assert(pn_space != ApplicationData)\n    //   RemoveFromBytesInFlight(sent_packets[pn_space])\n    //   sent_packets[pn_space].clear()\n    //   // Reset the loss detection and PTO timer\n    //   time_of_last_ack_eliciting_packet[pn_space] = 0\n    //   
loss_time[pn_space] = 0\n    //   pto_count = 0\n    //   SetLossDetectionTimer()\n    fn discard_epoch(&mut self, epoch: Epoch) {\n        assert!(epoch != Epoch::Data);\n        self.packet_spaces[epoch].discard(&mut self.algorithm);\n        self.loss_detection_timer = None;\n        self.pto_count = 0;\n        self.set_loss_detection_timer();\n    }\n\n    fn get_pto(&self, epoch: Epoch) -> Duration {\n        let mut pto_time = self.rtt.base_pto(self.pto_count);\n        if epoch == Epoch::Data {\n            pto_time += self.max_ack_delay * (1 << self.pto_count);\n        }\n        pto_time\n    }\n}\n\n#[derive(Clone)]\npub struct ArcCC(Arc<Mutex<CongestionController>>);\n\nimpl ArcCC {\n    pub fn new(\n        algorithm: Algorithm,\n        max_ack_delay: Duration,\n        trackers: [Arc<dyn Feedback>; 3],\n        path_status: PathStatus,\n        tx_waker: ArcSendWaker,\n    ) -> Self {\n        ArcCC(Arc::new(Mutex::new(CongestionController::init(\n            algorithm,\n            max_ack_delay,\n            trackers,\n            path_status,\n            tx_waker,\n        ))))\n    }\n}\n\nimpl super::Transport for ArcCC {\n    fn do_tick(&self) -> Result<(), TooManyPtos> {\n        let now = Instant::now();\n        let mut guard = self.0.lock().unwrap();\n        if guard.loss_detection_timer.is_some_and(|t| t <= now) {\n            let pto_count = guard.on_loss_detection_timeout();\n            if pto_count > 6 {\n                return Err(TooManyPtos(pto_count));\n            }\n        }\n\n        if guard.pending_burst && guard.send_quota() >= guard.path_status.mtu() {\n            guard.pending_burst = false;\n            guard.tx_waker.wake_by(Signals::CONGESTION);\n        }\n        if guard.need_ack() {\n            guard.tx_waker.wake_by(Signals::TRANSPORT);\n        }\n\n        Ok(())\n    }\n\n    fn send_quota(&self) -> Result<usize, Signals> {\n        let mut guard = self.0.lock().unwrap();\n        let send_quota = 
guard.send_quota();\n        if send_quota >= guard.path_status.mtu() {\n            Ok(send_quota)\n        } else {\n            guard.pending_burst = true;\n            Err(Signals::CONGESTION)\n        }\n    }\n\n    fn retransmit_and_expire_time(&self, epoch: Epoch) -> (Duration, Duration) {\n        let guard = self.0.lock().unwrap();\n        (\n            // Try to let the path initiate retransmission first\n            guard.rtt.loss_delay() + guard.rtt.rttvar(),\n            guard.get_pto(epoch),\n        )\n    }\n\n    fn need_ack(&self, epoch: Epoch) -> Option<(u64, Instant)> {\n        let guard = self.0.lock().unwrap();\n        guard.packet_spaces[epoch].rcvd_packets.need_ack()\n    }\n\n    fn on_pkt_sent(\n        &self,\n        epoch: Epoch,\n        pn: u64,\n        is_ack_eliciting: bool,\n        sent_bytes: usize,\n        in_flight: bool,\n        ack: Option<u64>,\n    ) {\n        let mut guard = self.0.lock().unwrap();\n        guard.on_packet_sent(pn, epoch, is_ack_eliciting, in_flight, sent_bytes);\n\n        if let Some(largest_acked) = ack {\n            guard.packet_spaces[epoch]\n                .rcvd_packets\n                .on_ack_sent(pn, largest_acked);\n        }\n        // See [Section 17.2.2.1](https://www.rfc-editor.org/rfc/rfc9000#name-abandoning-initial-packets)\n        if epoch == Epoch::Handshake && !guard.path_status.is_server() {\n            guard.discard_epoch(Epoch::Initial);\n        }\n    }\n\n    fn on_ack_rcvd(&self, epoch: Epoch, ack_frame: &AckFrame) {\n        let mut guard = self.0.lock().unwrap();\n        let now = Instant::now();\n        guard.on_ack_rcvd(epoch, ack_frame, now);\n\n        // See [Section 17.2.2.1](https://www.rfc-editor.org/rfc/rfc9000#name-abandoning-initial-packets)\n        if epoch == Epoch::Handshake && guard.path_status.is_server() {\n            guard.discard_epoch(Epoch::Initial);\n        }\n    }\n\n    fn on_pkt_rcvd(&self, epoch: Epoch, pn: u64, is_ack_eliciting: bool) {\n        if !is_ack_eliciting {\n  
          return;\n        }\n        let mut guard = self.0.lock().unwrap();\n        guard.packet_spaces[epoch].rcvd_packets.on_pkt_rcvd(pn);\n        guard.on_datagram_rcvd();\n    }\n\n    fn get_pto(&self, epoch: Epoch) -> Duration {\n        let guard = self.0.lock().unwrap();\n        guard.get_pto(epoch)\n    }\n\n    fn discard_epoch(&self, epoch: Epoch) {\n        let mut guard = self.0.lock().unwrap();\n        guard.discard_epoch(epoch);\n    }\n\n    fn need_send_ack_eliciting(&self, epoch: Epoch) -> usize {\n        let guard = self.0.lock().unwrap();\n        guard.need_send_ack_eliciting_packets[epoch]\n    }\n\n    fn grant_anti_amplification(&self) {\n        let guard = self.0.lock().unwrap();\n        guard.path_status.release_anti_amplification_limit();\n    }\n}\n\n#[cfg(test)]\nmod tests {}\n"
  },
  {
    "path": "qcongestion/src/lib.rs",
"content": "use qbase::{Epoch, frame::AckFrame, net::tx::Signals};\nuse qevent::quic::recovery::PacketLostTrigger;\nuse thiserror::Error;\nuse tokio::time::{Duration, Instant};\n\nmod algorithm;\npub use algorithm::Algorithm;\nmod congestion;\npub use congestion::ArcCC;\nmod pacing;\nmod packets;\nmod rtt;\nmod status;\npub use status::{HandshakeStatus, PathStatus};\n\n/// Default datagram size in bytes.\npub const MSS: usize = 1200;\n\n#[derive(Debug, Clone, Copy, Error)]\n#[error(\"Too many PTOs: {0}\")]\npub struct TooManyPtos(u32);\n\n/// The [`Transport`] trait defines the interface for congestion control algorithms.\npub trait Transport {\n    /// Performs a periodic tick to drive the congestion control algorithm.\n    fn do_tick(&self) -> Result<(), TooManyPtos>;\n\n    /// Returns how many bytes can be sent at the moment.\n    /// If the congestion controller is not ready, returns a signal that should be waited for.\n    fn send_quota(&self) -> Result<usize, Signals>;\n\n    /// Gets the retransmission and expiration time for the given epoch.\n    fn retransmit_and_expire_time(&self, epoch: Epoch) -> (Duration, Duration);\n\n    /// Records the sending of a packet, which may affect congestion control state.\n    /// # Parameters\n    /// - `epoch`: The packet number space the packet was sent in.\n    /// - `pn`: The packet number of the sent packet.\n    /// - `is_ack_eliciting`: A boolean indicating whether the packet is ack-eliciting.\n    /// - `sent_bytes`: The number of bytes sent in this packet.\n    /// - `in_flight`: A boolean indicating whether the packet is considered in-flight.\n    /// - `ack`: An optional `u64` representing the largest acknowledged packet number if an AckFrame was included.\n    fn on_pkt_sent(\n        &self,\n        epoch: Epoch,\n        pn: u64,\n        is_ack_eliciting: bool,\n        sent_bytes: usize,\n        in_flight: bool,\n        ack: Option<u64>,\n    );\n\n    /// Records the receipt of a packet, which may influence future packet transmissions.\n    /// # Parameters\n    
/// - `pn`: The packet number of the received packet.\n    /// - `is_ack_eliciting`: A boolean indicating whether the received packet is ack-eliciting.\n    fn on_pkt_rcvd(&self, space: Epoch, pn: u64, is_ack_eliciting: bool);\n\n    /// Checks if an AckFrame should be sent in the next packet for the given epoch.\n    /// # Returns\n    /// An [`Option`] containing the largest packet ID and the time it was received if an AckFrame is needed.\n    fn need_ack(&self, space: Epoch) -> Option<(u64, Instant)>;\n\n    /// Returns how many ack-eliciting packets should be sent for the given epoch.\n    fn need_send_ack_eliciting(&self, space: Epoch) -> usize;\n\n    /// Updates the congestion control state upon receiving an AckFrame.\n    fn on_ack_rcvd(&self, space: Epoch, ack_frame: &AckFrame);\n\n    /// Retrieves the current path's PTO duration.\n    /// # Returns\n    /// The current PTO duration for the given epoch.\n    fn get_pto(&self, epoch: Epoch) -> Duration;\n\n    /// Discards the congestion control state for the specified epoch.\n    fn discard_epoch(&self, epoch: Epoch);\n\n    /// Releases the anti-amplification limit for this path.\n    fn grant_anti_amplification(&self);\n}\n\n/// The [`Feedback`] trait defines the interface for packet loss tracking.\npub trait Feedback: Send + Sync {\n    /// Indicates that the packets with the given packet numbers may have been lost.\n    /// # Parameters\n    /// - `trigger`: The reason these packets are considered lost.\n    /// - `pns`: An iterator over the packet numbers of the potentially lost packets.\n    fn may_loss(&self, trigger: PacketLostTrigger, pns: &mut dyn Iterator<Item = u64>);\n}\n"
  },
  {
    "path": "qcongestion/src/pacing.rs",
    "content": "use tokio::time::{Duration, Instant};\n\n//  The burst  interval in milliseconds\nconst BURST_INTERVAL: Duration = Duration::from_millis(10);\nconst MIN_BURST_SIZE: usize = 10;\nconst MAX_BURST_SIZE: usize = 1280;\n// Using a value for N that is small, but at least 1 (for example, 1.25)\n// ensures that variations in RTT do not result in underutilization of the congestion window.\nconst N: f64 = 1.25;\n\npub(super) struct Pacer {\n    capacity: usize,\n    cwnd: usize,\n    tokens: usize,\n    last_burst_time: Instant,\n    rate: Option<usize>,\n}\n\nimpl Pacer {\n    pub(super) fn new(\n        smoothed_rtt: Duration,\n        cwnd: usize,\n        mtu: usize,\n        now: Instant,\n        rate: Option<usize>,\n    ) -> Self {\n        let capacity = Pacer::calculate_capacity(smoothed_rtt, cwnd, mtu, rate);\n\n        Pacer {\n            capacity,\n            cwnd,\n            tokens: capacity,\n            last_burst_time: now,\n            rate,\n        }\n    }\n\n    pub(super) fn on_sent(&mut self, packet_size: usize) {\n        self.tokens = self.tokens.saturating_sub(packet_size);\n    }\n\n    // Schedule and return the packet size to send, max size is mtu\n    pub(super) fn schedule(\n        &mut self,\n        srtt: Duration,\n        cwnd: usize,\n        mtu: usize,\n        now: Instant,\n        rate: Option<usize>,\n    ) -> usize {\n        // Update capacity if cwnd or rate has changed\n        if self.cwnd != cwnd || rate != self.rate {\n            self.capacity = Pacer::calculate_capacity(srtt, cwnd, mtu, rate);\n            self.tokens = self.tokens.min(self.capacity);\n        }\n\n        self.cwnd = cwnd;\n        self.rate = rate;\n\n        let rate = match rate {\n            Some(r) => r,\n            // RFC 9002 7.7. 
Pacing\n            // rate = N * congestion_window / smoothed_rtt\n            None => (N * cwnd as f64 / srtt.as_secs_f64()) as usize,\n        };\n\n        // Update the last_burst_time and tokens\n        let elapsed = now.duration_since(self.last_burst_time);\n        // TODO: the elapsed interval should have an upper bound\n        // elapsed.max(srtt.as_secs_f64() * 2);\n        let new_token = elapsed.as_secs_f64() * rate as f64;\n        self.tokens = self\n            .tokens\n            .saturating_add(new_token as usize)\n            .min(self.capacity);\n        self.last_burst_time = now;\n\n        self.tokens\n    }\n\n    fn calculate_capacity(\n        smoothed_rtt: Duration,\n        cwnd: usize,\n        mtu: usize,\n        rate: Option<usize>,\n    ) -> usize {\n        let rtt = smoothed_rtt.as_nanos().max(1);\n\n        let capacity = match rate {\n            // Use the provided rate to calculate the capacity\n            Some(r) => (r as f64 * BURST_INTERVAL.as_secs_f64()) as usize,\n            // Use cwnd and smoothed_rtt to calculate the capacity\n            None => ((cwnd as u128 * BURST_INTERVAL.as_nanos()) / rtt) as usize,\n        };\n        capacity.clamp(MIN_BURST_SIZE * mtu, MAX_BURST_SIZE * mtu)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_pacer_initialization() {\n        let now = Instant::now();\n        let pacer = Pacer::new(\n            Duration::from_millis(100),\n            10,\n            1500, // MTU\n            now,\n            Some(1_000_000),\n        );\n        // min capacity is 15KB\n        assert_eq!(pacer.capacity, 15_000);\n        assert_eq!(pacer.tokens, pacer.capacity);\n        assert_eq!(pacer.last_burst_time, now);\n\n        // if rate is None, capacity = cwnd * burst_interval / rtt\n        let pacer = Pacer::new(Duration::from_millis(100), 2_000_000, 1500, now, None);\n        assert_eq!(pacer.capacity, 200_000);\n\n        let pacer = Pacer::new(\n            Duration::from_millis(100),\n  
          2_000_000,\n            1500,\n            now,\n            Some(18_000_000), // 18_000 kB/s\n        );\n        // 180KB\n        assert_eq!(pacer.capacity, 180_000);\n    }\n\n    #[test]\n    fn test_on_sent() {\n        let mut pacer = Pacer::new(\n            Duration::from_millis(100),\n            10,\n            1500,\n            Instant::now(),\n            Some(1_000_000),\n        );\n        // token 15_000\n        assert_eq!(pacer.tokens, 15_000);\n        pacer.on_sent(1500); // send one MTU-sized packet\n        assert_eq!(pacer.tokens, 15_000 - 1500);\n\n        pacer.on_sent(20_000);\n        assert_eq!(pacer.tokens, 0);\n    }\n\n    #[test]\n    fn test_schedule_no_rate() {\n        let srtt = Duration::from_millis(100);\n        let mut cwnd = 2_000_000; // 2MB\n        let mtu: usize = 1500;\n        let mut update_time = Instant::now();\n        let mut pacer = Pacer::new(srtt, cwnd, mtu, update_time, None);\n        // token  = 200_000\n        pacer.on_sent(20_000);\n        assert_eq!(pacer.tokens, 180_000);\n\n        // rate  = 1.25 * cwnd / srtt\n        // after 20 ms\n        update_time += BURST_INTERVAL * 2;\n        let packet_size = pacer.schedule(srtt, cwnd, mtu, update_time, None);\n\n        assert_eq!(pacer.tokens, 200_000);\n        assert_eq!(packet_size, 200_000);\n        pacer.on_sent(1500 * 13);\n\n        assert_eq!(pacer.tokens, 180_500);\n\n        // add token\n        update_time += BURST_INTERVAL;\n        let packet_size = pacer.schedule(srtt, cwnd, mtu, update_time, None);\n\n        assert_eq!(pacer.capacity, 200_000);\n        assert_eq!(pacer.tokens, 200_000);\n        assert_eq!(packet_size, 200_000);\n\n        // change cwnd, change capacity\n        cwnd = 1_500_000; // 1.5 MB\n        let packet_size = pacer.schedule(srtt, cwnd, mtu, update_time, None);\n        assert_eq!(pacer.capacity, 150_000);\n        assert_eq!(pacer.tokens, 150_000);\n        assert_eq!(packet_size, 150_000);\n    }\n\n    
#[test]\n    fn test_schedule_with_rate() {\n        let srtt = Duration::from_millis(100);\n        let cwnd = 2_000_000; // 2MB\n        let mtu: usize = 1500;\n        let mut update_time = Instant::now();\n        // 16MB/s\n        let mut rate = Some(16_000_000);\n\n        let mut pacer = Pacer::new(srtt, cwnd, mtu, update_time, rate);\n        assert_eq!(pacer.capacity, 160_000);\n\n        let size = pacer.schedule(srtt, cwnd, mtu, update_time, rate);\n        assert_eq!(size, 160_000);\n        pacer.on_sent(150_000);\n        let size = pacer.schedule(srtt, cwnd, mtu, update_time, rate);\n        assert_eq!(size, 10_000);\n\n        // update rate to update capacity\n        // 1 MB\n        rate = Some(1_000_000);\n        let size = pacer.schedule(srtt, cwnd, mtu, update_time, rate);\n        assert_eq!(size, 10_000);\n        assert_eq!(pacer.capacity, 15_000);\n        update_time += BURST_INTERVAL;\n        let size = pacer.schedule(srtt, cwnd, mtu, update_time, rate);\n        assert_eq!(pacer.tokens, 15_000);\n        assert_eq!(size, 15_000);\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/packets.rs",
    "content": "use std::{cmp::Ordering, collections::VecDeque, time::Duration};\n\nuse qbase::{Epoch, frame::AckFrame};\nuse tokio::time::Instant;\n\nuse crate::algorithm::Control;\n\n#[derive(Default, PartialEq, Eq, Clone, Debug)]\npub(crate) enum State {\n    #[default]\n    Inflight,\n    Acked,\n    Retransmitted,\n}\n\n#[derive(Eq, Clone, Debug)]\npub struct SentPacket {\n    pub(crate) packet_number: u64,\n    pub(crate) time_sent: Instant,\n    pub(crate) ack_eliciting: bool,\n    pub(crate) sent_bytes: usize,\n    pub(crate) state: State,\n    pub(crate) count_for_cc: bool,\n}\n\nimpl SentPacket {\n    pub(crate) fn new(\n        packet_number: u64,\n        time_sent: Instant,\n        ack_eliciting: bool,\n        count_for_cc: bool,\n        sent_bytes: usize,\n    ) -> Self {\n        SentPacket {\n            packet_number,\n            time_sent,\n            ack_eliciting,\n            count_for_cc,\n            sent_bytes,\n            state: State::Inflight,\n        }\n    }\n}\n\nimpl PartialOrd for SentPacket {\n    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {\n        Some(self.cmp(other))\n    }\n}\n\nimpl PartialEq for SentPacket {\n    fn eq(&self, other: &Self) -> bool {\n        self.packet_number == other.packet_number\n    }\n}\n\nimpl Ord for SentPacket {\n    fn cmp(&self, other: &Self) -> Ordering {\n        self.packet_number.cmp(&other.packet_number)\n    }\n}\n\n/// The [`RcvdRecords`] struct is used to maintain records of received packets for each epoch.\n/// It tracks acknowledged packets and determines when an ACK frame should be sent.\n/// It also retires packets that have been acknowledged by an ACK frame that has already sent and which has been confirmed by the peer.\n#[derive(Debug)]\npub(crate) struct RcvdRecords {\n    epoch: Epoch,\n    ack_immedietly: bool,\n    latest_rcvd_time: Option<Instant>,\n    largest_rcvd_packet: Option<(u64, Instant)>,\n    max_ack_delay: Duration,\n}\n\nimpl RcvdRecords {\n    
pub(crate) fn new(epoch: Epoch, max_ack_delay: Duration) -> Self {\n        Self {\n            epoch,\n            ack_immedietly: false,\n            latest_rcvd_time: None,\n            largest_rcvd_packet: None,\n            max_ack_delay,\n        }\n    }\n\n    pub(crate) fn on_pkt_rcvd(&mut self, pn: u64) {\n        // An endpoint MUST acknowledge all ack-eliciting Initial and Handshake packets immediately\n        if self.epoch == Epoch::Initial || self.epoch == Epoch::Handshake {\n            self.ack_immedietly = true;\n        }\n        // See [Section 13.2.1](https://www.rfc-editor.org/rfc/rfc9000.html#name-sending-ack-frames)\n        // An endpoint SHOULD generate and send an ACK frame without delay when it receives an ack-eliciting packet either:\n        // 1. When the received packet has a packet number less than another ack-eliciting packet that has been received\n        // 2. when the packet has a packet number larger than the highest-numbered ack-eliciting packet that has been\n        // received and there are missing packets between that packet and this packet.\n        let now = Instant::now();\n        if self.latest_rcvd_time.is_none() {\n            self.latest_rcvd_time = Some(now);\n        }\n        self.ack_immedietly |= self\n            .largest_rcvd_packet\n            .is_some_and(|(largest_pn, _)| pn < largest_pn);\n\n        self.largest_rcvd_packet =\n            self.largest_rcvd_packet\n                .map_or(Some((pn, now)), |(largest_pn, time)| {\n                    if pn > largest_pn {\n                        Some((pn, now))\n                    } else {\n                        Some((largest_pn, time))\n                    }\n                });\n    }\n\n    /// Checks whether an ACK frame needs to be sent.\n    /// Returns [`Some`] if it's time to send an ACK based on the maximum delay.\n    pub(crate) fn need_ack(&self) -> Option<(u64, Instant)> {\n        let now = Instant::now();\n        if self.ack_immedietly 
{\n            return self.largest_rcvd_packet;\n        }\n\n        if self\n            .latest_rcvd_time\n            .is_some_and(|t| t + self.max_ack_delay < now)\n        {\n            return self.largest_rcvd_packet;\n        }\n        None\n    }\n\n    /// Called when an ACK is sent.\n    /// Clears the receive-time and largest-packet records and resets the immediate-ACK flag.\n    pub(crate) fn on_ack_sent(&mut self, _pn: u64, _largest_acked: u64) {\n        self.largest_rcvd_packet = None;\n        self.latest_rcvd_time = None;\n        self.ack_immedietly = false;\n    }\n}\n\n// bbr_packet: VecDeque<BbrPackets>\npub(crate) struct PacketSpace {\n    pub(crate) largest_acked_packet: Option<u64>,\n    pub(crate) time_of_last_ack_eliciting_packet: Option<Instant>,\n    pub(crate) loss_time: Option<Instant>,\n    pub(crate) sent_packets: VecDeque<SentPacket>,\n    pub(crate) rcvd_packets: RcvdRecords,\n    pub(crate) max_ack_delay: Duration,\n}\n\npub(crate) struct NewlyAckedPackets {\n    pub(crate) include_ack_eliciting: bool,\n    pub(crate) largest: (u64, Instant),\n}\n\nimpl PacketSpace {\n    pub(crate) fn with_epoch(epoch: Epoch, max_ack_delay: Duration) -> Self {\n        Self {\n            largest_acked_packet: None,\n            time_of_last_ack_eliciting_packet: None,\n            loss_time: None,\n            sent_packets: VecDeque::with_capacity(4),\n            rcvd_packets: RcvdRecords::new(epoch, max_ack_delay),\n            max_ack_delay,\n        }\n    }\n\n    pub(crate) fn update_largest_acked_packet(&mut self, pn: u64) {\n        self.largest_acked_packet = self.largest_acked_packet.map(|n| n.max(pn)).or(Some(pn));\n    }\n\n    pub(crate) fn on_ack_rcvd(\n        &mut self,\n        ack_frame: &AckFrame,\n        algorithm: &mut Box<dyn Control>,\n    ) -> Option<NewlyAckedPackets> {\n        if self.sent_packets.is_empty() {\n            return None;\n        }\n        let mut include_ack_eliciting = false;\n        let mut largest_acked = 
None;\n        let mut index = self\n            .sent_packets\n            .binary_search_by(|p| p.packet_number.cmp(&ack_frame.largest()))\n            .unwrap_or_else(|i| i.saturating_sub(1));\n\n        for range in ack_frame.iter() {\n            for pn in range.rev() {\n                while index > 0 && self.sent_packets[index].packet_number > pn {\n                    index = index.saturating_sub(1);\n                }\n                if self.sent_packets[index].packet_number == pn\n                    && self.sent_packets[index].state != State::Acked\n                {\n                    algorithm.on_packet_acked(&self.sent_packets[index]);\n                    self.sent_packets[index].state = State::Acked;\n                    include_ack_eliciting |= self.sent_packets[index].ack_eliciting;\n                    largest_acked = largest_acked\n                        .map(|(n, t)| {\n                            if n < pn {\n                                (pn, self.sent_packets[index].time_sent)\n                            } else {\n                                (n, t)\n                            }\n                        })\n                        .or(Some((pn, self.sent_packets[index].time_sent)));\n                }\n            }\n        }\n\n        while self\n            .sent_packets\n            .front()\n            .is_some_and(|sent| sent.state == State::Acked || sent.state == State::Retransmitted)\n        {\n            self.sent_packets.pop_front();\n        }\n\n        Some(NewlyAckedPackets {\n            include_ack_eliciting,\n            largest: largest_acked?,\n        })\n    }\n\n    pub(crate) fn no_ack_eliciting_in_flight(&self) -> bool {\n        self.sent_packets\n            .iter()\n            .all(|sent| !sent.ack_eliciting || sent.state != State::Inflight)\n    }\n\n    pub(crate) fn detect_lost_packets(\n        &mut self,\n        loss_delay: Duration,\n        packet_threshold: usize,\n        algorithm: &mut 
Box<dyn Control>,\n    ) -> impl Iterator<Item = u64> + use<> {\n        // assert!(self.largest_acked_packet.is_some());\n        self.loss_time = None;\n\n        let now = Instant::now();\n        let lost_sent_time = now - loss_delay - self.max_ack_delay;\n        let largest_acked = self.largest_acked_packet.unwrap_or(0);\n        let largest_index = self\n            .sent_packets\n            .binary_search_by(|p| p.packet_number.cmp(&largest_acked))\n            .unwrap_or_else(|i| i.saturating_sub(1));\n\n        let loss: Vec<_> = self\n            .sent_packets\n            .iter_mut()\n            .enumerate()\n            .filter(|(_, pkt)| pkt.state == State::Inflight)\n            .map(move |(idx, unacked)| {\n                if unacked.time_sent < lost_sent_time || largest_index >= idx + packet_threshold {\n                    unacked.state = State::Retransmitted;\n                    Ok((idx, &*unacked))\n                } else {\n                    Err(unacked.time_sent + loss_delay)\n                }\n            })\n            .filter_map(|result| match result {\n                Ok(t) => Some(t),\n                Err(time) => {\n                    self.loss_time = self.loss_time.map_or(Some(time), |t| Some(t.min(time)));\n                    None\n                }\n            })\n            .collect();\n\n        const PERSISTENT_LOSS_THRESHOLD: usize = 3;\n        let persistent_lost = loss\n            .iter()\n            .map(|(idx, _)| idx)\n            .try_fold((None, 0), |(prev, count), &idx| {\n                let lost_count = prev.map_or(0, |p| (idx - p == 1) as usize * (count + 1));\n                if lost_count + 1 >= PERSISTENT_LOSS_THRESHOLD {\n                    Err(())\n                } else {\n                    Ok((Some(idx), lost_count))\n                }\n            })\n            .is_err();\n\n        let (packet_numbers, loss_packet): (Vec<_>, Vec<_>) = loss\n            .into_iter()\n            .map(|(_, 
pkt)| (pkt.packet_number, pkt))\n            .unzip();\n\n        if !loss_packet.is_empty() {\n            algorithm.on_packets_lost(&mut loss_packet.into_iter(), persistent_lost);\n        }\n        packet_numbers.into_iter()\n    }\n\n    pub(crate) fn discard(&mut self, algorithm: &mut Box<dyn Control>) {\n        let mut remove_from_inflight = self\n            .sent_packets\n            .iter()\n            .filter(|sent| sent.state == State::Inflight);\n        algorithm.remove_from_bytes_in_flight(&mut remove_from_inflight);\n        self.sent_packets.clear();\n        self.time_of_last_ack_eliciting_packet = None;\n        self.loss_time = None;\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::{\n        sync::{Arc, atomic::AtomicU16},\n        vec,\n    };\n\n    use super::*;\n    use crate::algorithm::new_reno::NewReno;\n\n    #[test]\n    fn test_packet_space() {\n        let mut packet_space = PacketSpace::with_epoch(Epoch::Initial, Duration::from_millis(100));\n        // let now = Instant::now();\n\n        for i in 0..10 {\n            packet_space.sent_packets.push_back(SentPacket::new(\n                i,\n                Instant::now(),\n                true,\n                true,\n                1200,\n            ));\n        }\n\n        // ack 9 ~ 4, 1 ~ 0 loss 2,3\n        let ack_frame = AckFrame::new(\n            9_u32.into(),\n            100_u32.into(),\n            5_u32.into(),\n            vec![(1_u32.into(), 1_u32.into())],\n            None,\n        );\n\n        let mut reno: Box<dyn Control> = Box::new(NewReno::new(Arc::new(AtomicU16::new(1200))));\n        packet_space.on_ack_rcvd(&ack_frame, &mut reno);\n        // init 12000, ack 8 packet 12000 + 8 * MSS = 21600\n        assert_eq!(reno.congestion_window(), 21600);\n        packet_space.largest_acked_packet = Some(ack_frame.largest());\n        let loss = packet_space.detect_lost_packets(Duration::from_millis(100), 3, &mut reno);\n        
assert_eq!(loss.collect::<Vec<_>>(), vec![2, 3]);\n        // loss 2, 3 cwnd = 21600 - MSS\n        assert_eq!(reno.congestion_window(), 20400);\n\n        for i in 10..15 {\n            packet_space.sent_packets.push_back(SentPacket::new(\n                i,\n                Instant::now(),\n                true,\n                true,\n                1200,\n            ));\n        }\n        for i in 20..25 {\n            packet_space.sent_packets.push_back(SentPacket::new(\n                i,\n                Instant::now(),\n                false,\n                true,\n                1200,\n            ));\n        }\n\n        // ack 24 ~ 20 13\n        // loss 10, 11,12,14\n        let ack_frame = AckFrame::new(\n            24_u32.into(),\n            100_u32.into(),\n            5_u32.into(),\n            vec![(4_u32.into(), 0_u32.into())],\n            None,\n        );\n\n        packet_space.on_ack_rcvd(&ack_frame, &mut reno);\n        packet_space.largest_acked_packet = Some(ack_frame.largest());\n        assert_eq!(reno.congestion_window(), 20817);\n        packet_space.largest_acked_packet = Some(ack_frame.largest());\n        let loss = packet_space.detect_lost_packets(Duration::from_millis(100), 3, &mut reno);\n        assert_eq!(loss.collect::<Vec<_>>(), vec![10, 11, 12, 14]);\n        assert_eq!(reno.congestion_window(), (20817 - 1200) / 2);\n    }\n\n    #[tokio::test(flavor = \"current_thread\")]\n    async fn test_rcvd_records() {\n        let mut rcvd_records = RcvdRecords::new(Epoch::Data, Duration::from_millis(100));\n        for i in 0..10 {\n            rcvd_records.on_pkt_rcvd(i);\n        }\n\n        tokio::time::pause();\n        tokio::time::advance(Duration::from_millis(100)).await;\n        assert_eq!(rcvd_records.need_ack().unwrap().0, 9);\n        rcvd_records.on_ack_sent(9, 9);\n        assert_eq!(rcvd_records.need_ack(), None);\n\n        tokio::time::resume();\n        rcvd_records.on_pkt_rcvd(10);\n        
assert_eq!(rcvd_records.need_ack(), None);\n        rcvd_records.on_pkt_rcvd(15);\n        assert_eq!(rcvd_records.need_ack(), None);\n        rcvd_records.on_pkt_rcvd(11);\n        assert_eq!(rcvd_records.need_ack().unwrap().0, 15);\n    }\n}\n"
  },
  {
    "path": "qcongestion/src/rtt.rs",
    "content": "use std::sync::{Arc, Mutex};\n\nuse qevent::quic::recovery::RecoveryMetricsUpdated;\nuse tokio::time::{Duration, Instant};\n\npub const INITIAL_RTT: Duration = Duration::from_millis(33);\npub const MAX_INITIAL_RTT: Duration = Duration::from_millis(333);\nconst GRANULARITY: Duration = Duration::from_millis(1);\nconst TIME_THRESHOLD: f32 = 1.125;\n\n#[derive(Debug, Clone)]\npub struct Rtt {\n    max_ack_delay: Duration,\n    first_rtt_sample: Option<Instant>,\n    latest_rtt: Duration,\n    smoothed_rtt: Duration,\n    rttvar: Duration,\n    min_rtt: Duration,\n}\n\nimpl From<&Rtt> for RecoveryMetricsUpdated {\n    fn from(rtt: &Rtt) -> Self {\n        qevent::build!(RecoveryMetricsUpdated {\n            smoothed_rtt: rtt.smoothed_rtt.as_secs_f32() * 1000.0,\n            min_rtt: rtt.min_rtt.as_secs_f32() * 1000.0,\n            latest_rtt: rtt.latest_rtt.as_secs_f32() * 1000.0,\n            rtt_variance: rtt.rttvar.as_secs_f32() * 1000.0,\n        })\n    }\n}\n\nimpl Default for Rtt {\n    fn default() -> Self {\n        Self {\n            max_ack_delay: Duration::from_millis(0),\n            first_rtt_sample: None,\n            latest_rtt: Duration::from_millis(0),\n            smoothed_rtt: INITIAL_RTT,\n            rttvar: INITIAL_RTT / 2,\n            min_rtt: Duration::from_millis(0),\n        }\n    }\n}\n\nimpl Rtt {\n    fn update(\n        &mut self,\n        latest_rtt: Duration,\n        mut ack_delay: Duration,\n        is_handshake_confirmed: bool,\n    ) {\n        self.latest_rtt = latest_rtt;\n        if self.first_rtt_sample.is_none() {\n            self.min_rtt = latest_rtt;\n            self.smoothed_rtt = latest_rtt;\n            self.rttvar = latest_rtt / 2;\n            self.first_rtt_sample = Some(tokio::time::Instant::now());\n        } else {\n            // min_rtt ignores acknowledgment delay.\n            self.min_rtt = std::cmp::min(self.min_rtt, latest_rtt);\n\n            // Limit ack_delay by max_ack_delay after 
handshake confirmation.\n            if is_handshake_confirmed {\n                ack_delay = std::cmp::min(ack_delay, self.max_ack_delay);\n            }\n\n            // Adjust for acknowledgment delay if plausible.\n            let mut adjusted_rtt = latest_rtt;\n            if latest_rtt >= self.min_rtt + ack_delay {\n                adjusted_rtt = latest_rtt - ack_delay;\n            }\n\n            let abs_diff = self.smoothed_rtt.abs_diff(adjusted_rtt);\n            self.rttvar = self.rttvar.mul_f32(0.75) + abs_diff.mul_f32(0.25);\n            self.smoothed_rtt = self.smoothed_rtt.mul_f32(0.875) + adjusted_rtt.mul_f32(0.125);\n        }\n\n        let event = RecoveryMetricsUpdated::from(&*self);\n        qevent::event!(event);\n    }\n\n    fn loss_delay(&self) -> Duration {\n        std::cmp::max(\n            std::cmp::max(self.latest_rtt, self.smoothed_rtt).mul_f32(TIME_THRESHOLD),\n            GRANULARITY,\n        )\n    }\n\n    /// duration = (smoothed_rtt + max(4 * rttvar, kGranularity))\n    ///     * (2 ^ pto_count)\n    fn base_pto(&self, pto_count: u32) -> Duration {\n        // Exponential backoff applies to the whole PTO period (RFC 9002, Section 6.2).\n        (self.smoothed_rtt + std::cmp::max(4 * self.rttvar, GRANULARITY)) * (1 << pto_count)\n    }\n\n    fn try_backoff_rtt(&mut self) {\n        if self.first_rtt_sample.is_some() {\n            return;\n        }\n        self.smoothed_rtt = self\n            .smoothed_rtt\n            .mul_f32(TIME_THRESHOLD)\n            .min(MAX_INITIAL_RTT);\n        self.rttvar = self.smoothed_rtt / 2;\n        tracing::trace!(target: \"quic\", \"Back off initial RTT {}ms\", self.smoothed_rtt.as_millis());\n    }\n}\n\n#[derive(Debug, Clone, Default)]\npub struct ArcRtt(Arc<Mutex<Rtt>>);\n\n/// Only ArcRtt needs to be exposed externally; Rtt is an internal implementation detail.\nimpl ArcRtt {\n    pub fn new() -> Self {\n        Self(Arc::new(Mutex::new(Rtt::default())))\n    }\n\n    pub fn update(&self, latest_rtt: Duration, ack_delay: Duration, is_handshake_confirmed: bool) {\n        self.0\n            .lock()\n            .unwrap()\n            
.update(latest_rtt, ack_delay, is_handshake_confirmed);\n    }\n\n    pub fn loss_delay(&self) -> Duration {\n        self.0.lock().unwrap().loss_delay()\n    }\n\n    pub fn smoothed_rtt(&self) -> Duration {\n        self.0.lock().unwrap().smoothed_rtt\n    }\n\n    pub fn rttvar(&self) -> Duration {\n        self.0.lock().unwrap().rttvar\n    }\n\n    pub fn base_pto(&self, pto_count: u32) -> Duration {\n        self.0.lock().unwrap().base_pto(pto_count)\n    }\n\n    /// Backs off initial RTT on loss before first RTT sample\n    pub fn try_backoff_rtt(&self) {\n        self.0.lock().unwrap().try_backoff_rtt();\n    }\n}\n\n#[cfg(test)]\nmod tests {}\n"
  },
  {
    "path": "qcongestion/src/status.rs",
    "content": "use std::sync::{\n    Arc,\n    atomic::{AtomicBool, AtomicU16, Ordering},\n};\n\n#[derive(Debug)]\npub struct HandshakeStatus {\n    is_server: AtomicBool,\n    has_handshake_key: AtomicBool,\n    has_received_handshake_ack: AtomicBool,\n    is_handshake_confirmed: AtomicBool,\n}\n\nimpl HandshakeStatus {\n    pub fn new(is_server: bool) -> Self {\n        Self {\n            is_server: AtomicBool::new(is_server),\n            has_handshake_key: AtomicBool::new(false),\n            has_received_handshake_ack: AtomicBool::new(false),\n            is_handshake_confirmed: AtomicBool::new(false),\n        }\n    }\n}\n\nimpl HandshakeStatus {\n    pub fn got_handshake_key(&self) {\n        self.has_handshake_key.store(true, Ordering::Relaxed);\n    }\n\n    pub fn received_handshake_ack(&self) {\n        self.has_received_handshake_ack\n            .store(true, Ordering::Relaxed);\n    }\n\n    pub fn handshake_confirmed(&self) {\n        self.is_handshake_confirmed.store(true, Ordering::Relaxed);\n    }\n}\n\n#[derive(Clone)]\npub struct PathStatus {\n    handshake: Arc<HandshakeStatus>,\n    is_at_anti_amplification_limit: Arc<AtomicBool>,\n    pmtu: Arc<AtomicU16>,\n}\n\nimpl PathStatus {\n    pub fn new(handshake: Arc<HandshakeStatus>, pmut: Arc<AtomicU16>) -> Self {\n        Self {\n            handshake,\n            is_at_anti_amplification_limit: Arc::new(AtomicBool::new(true)),\n            pmtu: pmut,\n        }\n    }\n\n    pub(crate) fn is_server(&self) -> bool {\n        self.handshake.is_server.load(Ordering::Relaxed)\n    }\n\n    pub(crate) fn has_handshake_key(&self) -> bool {\n        self.handshake.has_handshake_key.load(Ordering::Relaxed)\n    }\n\n    pub(crate) fn has_received_handshake_ack(&self) -> bool {\n        self.handshake\n            .has_received_handshake_ack\n            .load(Ordering::Relaxed)\n    }\n\n    pub(crate) fn is_handshake_confirmed(&self) -> bool {\n        self.handshake\n            
.is_handshake_confirmed\n            .load(Ordering::Relaxed)\n    }\n\n    pub(crate) fn is_at_anti_amplification_limit(&self) -> bool {\n        self.is_at_anti_amplification_limit.load(Ordering::Relaxed)\n    }\n\n    pub fn release_anti_amplification_limit(&self) {\n        self.is_at_anti_amplification_limit\n            .store(false, Ordering::Release);\n    }\n\n    pub fn enter_anti_amplification_limit(&self) {\n        self.is_at_anti_amplification_limit\n            .store(true, Ordering::Release);\n    }\n\n    pub(super) fn pmtu(&self) -> Arc<AtomicU16> {\n        self.pmtu.clone()\n    }\n\n    pub(crate) fn mtu(&self) -> usize {\n        self.pmtu.load(Ordering::Relaxed) as usize\n    }\n}\n"
  },
  {
    "path": "qconnection/Cargo.toml",
    "content": "[package]\nname = \"qconnection\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"Encapsulation of QUIC connections, a part of dquic\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbytes = { workspace = true }\ndashmap = { workspace = true }\nderive_more = { workspace = true, features = [\n    \"as_ref\",\n    \"deref\",\n    \"display\",\n    \"from\",\n] }\nenum_dispatch = { workspace = true }\nfutures = { workspace = true }\nqbase = { workspace = true }\nqcongestion = { workspace = true }\nqresolve = { workspace = true }\nqevent = { workspace = true }\nqrecovery = { workspace = true }\nring = { workspace = true }\nqtraversal = { workspace = true }\nrand = { workspace = true }\nrustls = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"rt\", \"sync\", \"time\", \"macros\"] }\ntokio-util = { workspace = true, features = [\"rt\"] }\ntracing = { workspace = true }\nqinterface = { workspace = true }\nx509-parser = { workspace = true }\n\n[target.'cfg(any(unix, windows))'.dependencies]\nqinterface = { workspace = true, features = [\"qudp\"] }\n\n# features: datagram\nqdatagram = { workspace = true, optional = true }\n\n[features]\ndefault = [\"datagram\"]\ndatagram = [\"dep:qdatagram\"]\ntelemetry = [\"qevent/telemetry\"]\n"
  },
  {
    "path": "qconnection/src/builder.rs",
    "content": "use std::{\n    net::SocketAddr,\n    sync::{Arc, atomic::AtomicBool},\n    time::Duration,\n};\n\nuse qbase::{\n    cid::{ConnectionId, GenUniqueCid},\n    error::Error,\n    net::tx::{ArcSendWakers, Signals},\n    packet::keys::ArcZeroRttKeys,\n    param::{ArcParameters, ClientParameters, ParameterId, Parameters, ServerParameters},\n    role::{IntoRole, Role},\n    sid::{\n        ControlStreamsConcurrency, ProductStreamsConcurrencyController, handy::DemandConcurrency,\n    },\n    time::ArcIdleConfig,\n    token::{ArcTokenRegistry, TokenProvider, TokenSink},\n};\nuse qcongestion::HandshakeStatus;\nuse qdatagram::DatagramFlow;\nuse qevent::{\n    GroupID,\n    quic::{\n        Owner,\n        transport::{ParametersRestored, ParametersSet},\n    },\n    telemetry::{Instrument, QLog, handy::NoopLogger},\n};\nuse qinterface::{\n    component::{\n        location::Locations,\n        route::{QuicRouter, RcvdPacketQueue},\n    },\n    io::{ProductIO, handy::DEFAULT_IO_FACTORY},\n    manager::InterfaceManager,\n};\nuse qrecovery::crypto::CryptoStream;\nuse qtraversal::punch::puncher::ArcPuncher;\nuse rustls::{\n    ClientConfig as TlsClientConfig, ServerConfig as TlsServerConfig, crypto::CryptoProvider,\n};\nuse tokio::sync::mpsc;\nuse tracing::Instrument as _;\n\nuse crate::{\n    ArcLocalCids, ArcReliableFrameDeque, ArcRemoteCids, CidRegistry, Components, Connection,\n    ConnectionState, DataJournal, DataStreams, FlowController, Handshake, QuicRouterRegistry,\n    RawHandshake, SpecificComponents,\n    events::{ArcEventBroker, EmitEvent, Event},\n    path::ArcPathContexts,\n    space::{\n        Spaces, data::DataSpace, handshake::HandshakeSpace, initial::InitialSpace,\n        spawn_deliver_and_parse,\n    },\n    state::ArcConnState,\n    tls::{\n        AcceptAllClientAuther, ArcSendLock, ArcTlsHandshake, AuthClient, ClientTlsSession,\n        ServerTlsSession, TlsHandshakeInfo, TlsSession,\n    },\n    traversal::PunchTransaction,\n};\n\nimpl 
Connection {\n    pub fn new_client(server_name: String, token_sink: Arc<dyn TokenSink>) -> ClientFoundation {\n        ClientFoundation {\n            server_name: server_name.clone(),\n            token_registry: ArcTokenRegistry::with_sink(server_name, token_sink),\n            client_params: ClientParameters::default(),\n        }\n    }\n\n    pub fn new_server(token_provider: Arc<dyn TokenProvider>) -> ServerFoundation {\n        ServerFoundation {\n            token_registry: ArcTokenRegistry::with_provider(token_provider),\n            server_params: ServerParameters::default(),\n            client_auther: Box::new(AcceptAllClientAuther),\n        }\n    }\n}\n\npub struct ClientFoundation {\n    server_name: String,\n    token_registry: ArcTokenRegistry,\n    client_params: ClientParameters,\n}\n\nimpl ClientFoundation {\n    pub fn with_parameters(mut self, params: ClientParameters) -> Self {\n        self.client_params = params;\n        self\n    }\n}\n\npub struct ServerFoundation {\n    token_registry: ArcTokenRegistry,\n    server_params: ServerParameters,\n    client_auther: Box<dyn AuthClient>,\n}\n\nimpl ServerFoundation {\n    pub fn with_parameters(mut self, params: ServerParameters) -> Self {\n        self.server_params = params;\n        self\n    }\n\n    pub fn with_client_auther(mut self, authers: Box<dyn AuthClient>) -> Self {\n        self.client_auther = authers;\n        self\n    }\n}\n\npub struct ConnectionFoundation<Foundation, TlsConfig> {\n    foundation: Foundation,\n    tls_config: TlsConfig,\n\n    ifaces: Arc<InterfaceManager>,\n    iface_factory: Arc<dyn ProductIO>,\n    quic_router: Arc<QuicRouter>,\n    locations: Arc<Locations>,\n    stun_servers: Arc<[SocketAddr]>,\n    streams_ctrl: Box<dyn ControlStreamsConcurrency>,\n    defer_idle_timeout: Duration,\n}\n\npub type ClientConnectionFoundation = ConnectionFoundation<ClientFoundation, TlsClientConfig>;\npub type ServerConnectionFoundation = 
ConnectionFoundation<ServerFoundation, TlsServerConfig>;\n\nimpl ClientFoundation {\n    pub fn with_tls_config(\n        self,\n        tls_config: TlsClientConfig,\n    ) -> ConnectionFoundation<Self, TlsClientConfig> {\n        ConnectionFoundation {\n            foundation: self,\n            tls_config,\n            ifaces: InterfaceManager::global().clone(),\n            iface_factory: Arc::new(DEFAULT_IO_FACTORY),\n            quic_router: QuicRouter::global().clone(),\n            locations: Arc::new(Locations::new()),\n            stun_servers: Arc::new([]),\n            streams_ctrl: Box::new(DemandConcurrency), // ZST cause no alloc\n            defer_idle_timeout: Duration::ZERO,\n        }\n    }\n}\n\nimpl ConnectionFoundation<ClientFoundation, TlsClientConfig> {\n    pub fn with_streams_concurrency_strategy<F>(self, strategy_factory: &F) -> Self\n    where\n        F: ProductStreamsConcurrencyController + ?Sized,\n    {\n        let client_params = &self.foundation.client_params;\n        let init_max_bidi_streams = client_params\n            .get(ParameterId::InitialMaxStreamsBidi)\n            .expect(\"unreachable: default value will be got if the value unset\");\n        let init_max_uni_streams = client_params\n            .get(ParameterId::InitialMaxStreamsUni)\n            .expect(\"unreachable: default value will be got if the value unset\");\n        ConnectionFoundation {\n            streams_ctrl: strategy_factory.init(init_max_bidi_streams, init_max_uni_streams),\n            ..self\n        }\n    }\n\n    pub fn with_zero_rtt(mut self, enabled: bool) -> Self {\n        self.tls_config.enable_early_data = enabled;\n        self\n    }\n}\n\nimpl ServerFoundation {\n    pub fn with_tls_config(\n        self,\n        tls_config: TlsServerConfig,\n    ) -> ConnectionFoundation<Self, TlsServerConfig> {\n        ConnectionFoundation {\n            foundation: self,\n            tls_config,\n            ifaces: 
InterfaceManager::global().clone(),\n            iface_factory: Arc::new(DEFAULT_IO_FACTORY),\n            quic_router: QuicRouter::global().clone(),\n            locations: Arc::new(Locations::new()),\n            stun_servers: Arc::new([]),\n            streams_ctrl: Box::new(DemandConcurrency), // ZST cause no alloc\n            defer_idle_timeout: Duration::ZERO,\n        }\n    }\n}\n\nimpl ConnectionFoundation<ServerFoundation, TlsServerConfig> {\n    pub fn with_streams_concurrency_strategy<F>(self, strategy_factory: &F) -> Self\n    where\n        F: ProductStreamsConcurrencyController + ?Sized,\n    {\n        let server_params = &self.foundation.server_params;\n        let init_max_bidi_streams = server_params\n            .get(ParameterId::InitialMaxStreamsBidi)\n            .expect(\"unreachable: default value will be got if the value unset\");\n        let init_max_uni_streams = server_params\n            .get(ParameterId::InitialMaxStreamsUni)\n            .expect(\"unreachable: default value will be got if the value unset\");\n        ConnectionFoundation {\n            streams_ctrl: strategy_factory.init(init_max_bidi_streams, init_max_uni_streams),\n            ..self\n        }\n    }\n\n    pub fn with_zero_rtt(mut self, enabled: bool) -> Self {\n        match enabled {\n            true => self.tls_config.max_early_data_size = 0xffffffff,\n            false => self.tls_config.max_early_data_size = 0,\n        }\n        self\n    }\n}\n\nimpl<Foundation, TlsConfig> ConnectionFoundation<Foundation, TlsConfig> {\n    pub fn with_iface_factory(mut self, factory: Arc<dyn ProductIO>) -> Self {\n        self.iface_factory = factory;\n        self\n    }\n\n    pub fn with_iface_manager(mut self, ifaces: Arc<InterfaceManager>) -> Self {\n        self.ifaces = ifaces;\n        self\n    }\n\n    pub fn with_quic_router(mut self, quic_router: Arc<QuicRouter>) -> Self {\n        self.quic_router = quic_router;\n        self\n    }\n\n    pub fn 
with_locations(mut self, locations: Arc<Locations>) -> Self {\n        self.locations = locations;\n        self\n    }\n\n    pub fn with_stun_servers(mut self, stun_servers: Arc<[SocketAddr]>) -> Self {\n        self.stun_servers = stun_servers;\n        self\n    }\n\n    pub fn with_defer_idle_timeout(mut self, timeout: Duration) -> Self {\n        self.defer_idle_timeout = timeout;\n        self\n    }\n}\n\nfn initial_keys_with(\n    crypto_provider: &Arc<CryptoProvider>,\n    origin_dcid: &ConnectionId,\n    side: rustls::Side,\n    version: rustls::quic::Version,\n) -> rustls::quic::Keys {\n    crypto_provider\n        .cipher_suites\n        .iter()\n        .find_map(|cs| match (cs.suite(), cs.tls13()) {\n            (rustls::CipherSuite::TLS13_AES_128_GCM_SHA256, Some(suite)) => {\n                Some(suite.quic_suite())\n            }\n            _ => None,\n        })\n        .flatten()\n        .expect(\"crypto provider does not provide supported cipher suite\")\n        .keys(origin_dcid, side, version)\n}\n\nimpl ConnectionFoundation<ClientFoundation, TlsClientConfig> {\n    pub fn with_cids(self, origin_dcid: ConnectionId) -> PendingConnection {\n        let initial_keys = initial_keys_with(\n            self.tls_config.crypto_provider(),\n            &origin_dcid,\n            rustls::Side::Client,\n            crate::tls::QUIC_VERSION,\n        );\n\n        let rcvd_pkt_q = Arc::new(RcvdPacketQueue::new());\n\n        let tx_wakers = ArcSendWakers::default();\n        let reliable_frames = ArcReliableFrameDeque::with_capacity_and_wakers(8, tx_wakers.clone());\n        let quic_router_registry = self\n            .quic_router\n            .registry_on_issuing_scid(rcvd_pkt_q.clone(), reliable_frames.clone());\n        let initial_scid = quic_router_registry.gen_unique_cid();\n\n        let mut client_params = self.foundation.client_params;\n        _ = client_params.set(ParameterId::InitialSourceConnectionId, initial_scid);\n\n        let host 
= self\n            .foundation\n            .server_name\n            .split_once(':')\n            .map(|(h, _)| h)\n            .unwrap_or(&self.foundation.server_name)\n            .to_string();\n        let tls_session = ClientTlsSession::init(host, Arc::new(self.tls_config), &client_params)\n            .expect(\"Failed to initialize TLS handshake\");\n\n        let zero_rtt_keys = ArcZeroRttKeys::new_pending(Role::Client);\n\n        // if zero-rtt is enabled && remembered parameters were loaded && zero-rtt keys are available\n        let parameters = match tls_session.load_zero_rtt() {\n            Some((remembered_parameters, available_zero_rtt_keys)) => {\n                qevent::event!(ParametersRestored {\n                    client_parameters: &remembered_parameters,\n                });\n                zero_rtt_keys.set_keys(available_zero_rtt_keys);\n                Parameters::new_client(client_params, Some(remembered_parameters), origin_dcid)\n            }\n            None => Parameters::new_client(client_params, None, origin_dcid),\n        };\n\n        PendingConnection {\n            interfaces: self.ifaces,\n            iface_factory: self.iface_factory,\n            quic_router: self.quic_router,\n            locations: self.locations,\n            stun_servers: self.stun_servers,\n            rcvd_pkt_q,\n            defer_idle_timeout: self.defer_idle_timeout,\n            role: Role::Client,\n            origin_dcid,\n            initial_scid,\n            tx_wakers,\n            send_lock: ArcSendLock::unrestricted(),\n            reliable_frames,\n            quicrouter_registry: quic_router_registry,\n            parameters,\n            token_registry: self.foundation.token_registry,\n            tls_session: TlsSession::Client(tls_session),\n            initial_keys,\n            zero_rtt_keys,\n            streams_ctrl: self.streams_ctrl,\n            specific: SpecificComponents::Client {},\n            qlogger: Arc::new(NoopLogger),\n   
     }\n    }\n}\n\nimpl ConnectionFoundation<ServerFoundation, TlsServerConfig> {\n    pub fn with_cids(self, origin_dcid: ConnectionId) -> PendingConnection {\n        let initial_keys = initial_keys_with(\n            self.tls_config.crypto_provider(),\n            &origin_dcid,\n            rustls::Side::Server,\n            crate::tls::QUIC_VERSION,\n        );\n\n        let rcvd_pkt_q = Arc::new(RcvdPacketQueue::new());\n\n        let tx_wakers = ArcSendWakers::default();\n        let reliable_frames = ArcReliableFrameDeque::with_capacity_and_wakers(8, tx_wakers.clone());\n        let quic_router_registry = self\n            .quic_router\n            .registry_on_issuing_scid(rcvd_pkt_q.clone(), reliable_frames.clone());\n        let initial_scid = quic_router_registry.gen_unique_cid();\n        let odcid_router_entry = self\n            .quic_router\n            .insert(origin_dcid.into(), rcvd_pkt_q.clone());\n\n        let mut server_params = self.foundation.server_params;\n        _ = server_params.set(ParameterId::InitialSourceConnectionId, initial_scid);\n        _ = server_params.set(ParameterId::OriginalDestinationConnectionId, origin_dcid);\n\n        let tls_session = ServerTlsSession::init(\n            Arc::new(self.tls_config),\n            &server_params,\n            self.foundation.client_auther,\n        )\n        .expect(\"Failed to initialize TLS handshake\"); // TODO: error handling for TLS session creation\n\n        PendingConnection {\n            interfaces: self.ifaces,\n            iface_factory: self.iface_factory,\n            quic_router: self.quic_router,\n            locations: self.locations,\n            stun_servers: self.stun_servers,\n            rcvd_pkt_q,\n            defer_idle_timeout: self.defer_idle_timeout,\n            role: Role::Server,\n            origin_dcid,\n            initial_scid,\n            tx_wakers,\n            send_lock: tls_session.send_lock().clone(),\n            reliable_frames,\n            quicrouter_registry: 
quic_router_registry,\n            parameters: Parameters::new_server(server_params),\n            token_registry: self.foundation.token_registry,\n            tls_session: TlsSession::Server(tls_session),\n            initial_keys,\n            zero_rtt_keys: ArcZeroRttKeys::new_pending(Role::Server),\n            streams_ctrl: self.streams_ctrl,\n            specific: SpecificComponents::Server {\n                odcid_router_entry: Arc::new(odcid_router_entry),\n                using_odcid: Arc::new(AtomicBool::new(true)),\n            },\n            qlogger: Arc::new(NoopLogger),\n        }\n    }\n}\n\npub struct PendingConnection {\n    interfaces: Arc<InterfaceManager>,\n    iface_factory: Arc<dyn ProductIO>,\n    quic_router: Arc<QuicRouter>,\n    locations: Arc<Locations>,\n    stun_servers: Arc<[SocketAddr]>,\n    rcvd_pkt_q: Arc<RcvdPacketQueue>,\n    defer_idle_timeout: Duration,\n    role: Role,\n    origin_dcid: ConnectionId,\n    initial_scid: ConnectionId,\n    send_lock: ArcSendLock,\n    tx_wakers: ArcSendWakers,\n    reliable_frames: ArcReliableFrameDeque,\n    quicrouter_registry: QuicRouterRegistry,\n    parameters: Parameters,\n    token_registry: ArcTokenRegistry,\n    tls_session: TlsSession,\n    initial_keys: rustls::quic::Keys,\n    zero_rtt_keys: ArcZeroRttKeys,\n    streams_ctrl: Box<dyn ControlStreamsConcurrency>,\n    specific: SpecificComponents,\n    qlogger: Arc<dyn QLog>,\n}\n\nfn init_stream_and_datagram<LR: IntoRole, RR: IntoRole>(\n    local_params: &qbase::param::core::Parameters<LR>,\n    remote_params: &qbase::param::core::Parameters<RR>,\n    reliable_frames: ArcReliableFrameDeque,\n    streams_ctrl: Box<dyn ControlStreamsConcurrency>,\n    tx_wakers: ArcSendWakers,\n    metrics: qbase::metric::ArcConnectionMetrics,\n) -> (DataStreams, FlowController, DatagramFlow) {\n    assert_ne!(LR::into_role(), RR::into_role());\n    let flow_ctrl = FlowController::new(\n        remote_params\n            
.get(ParameterId::InitialMaxData)\n            .expect(\"unreachable: default value is returned if the value is unset\"),\n        local_params\n            .get(ParameterId::InitialMaxData)\n            .expect(\"unreachable: default value is returned if the value is unset\"),\n        reliable_frames.clone(),\n        tx_wakers.clone(),\n    );\n    let data_streams = DataStreams::new(\n        LR::into_role(),\n        local_params,\n        remote_params,\n        streams_ctrl,\n        reliable_frames.clone(),\n        tx_wakers.clone(),\n        Some(metrics),\n    );\n    let datagram_flow = DatagramFlow::new(\n        local_params\n            .get(ParameterId::MaxDatagramFrameSize)\n            .expect(\"unreachable: default value is returned if the value is unset\"),\n        tx_wakers.clone(),\n    );\n    (data_streams, flow_ctrl, datagram_flow)\n}\n\nimpl PendingConnection {\n    pub fn with_qlog(mut self, qlogger: Arc<dyn QLog>) -> Self {\n        self.qlogger = qlogger;\n        self\n    }\n\n    pub fn run(self) -> Connection {\n        let (event_broker, events) = mpsc::unbounded_channel();\n\n        let group_id = GroupID::from(self.origin_dcid);\n        let qlog_span = self.qlogger.new_trace(self.role.into(), group_id.clone());\n        let tracing_span =\n            tracing::debug_span!(parent: None, \"connection\", role = %self.role, odcid = %group_id);\n        let _span = (qlog_span.enter(), tracing_span.clone().entered());\n\n        tracing::trace!(parameters=?self.parameters, \"starting new connection\");\n\n        let conn_state = ArcConnState::new();\n        let event_broker = ArcEventBroker::new(conn_state.clone(), event_broker);\n\n        let quic_handshake = Handshake::new(\n            RawHandshake::new(self.role, self.reliable_frames.clone()),\n            Arc::new(HandshakeStatus::new(self.role == Role::Server)),\n            event_broker.clone(),\n        );\n\n        let local_cids = ArcLocalCids::new(self.initial_scid, 
self.quicrouter_registry);\n        let remote_cids = ArcRemoteCids::new(\n            self.parameters\n                .get_local(ParameterId::ActiveConnectionIdLimit)\n                .expect(\"unreachable: default value is returned if the value is unset\"),\n            self.reliable_frames.clone(),\n        );\n        let cid_registry = CidRegistry::new(self.role, self.origin_dcid, local_cids, remote_cids);\n\n        let spaces = Spaces::new(\n            InitialSpace::new(self.initial_keys.into()),\n            HandshakeSpace::new(),\n            DataSpace::new(self.zero_rtt_keys),\n        );\n\n        let crypto_streams = [\n            CryptoStream::new(self.tx_wakers.clone()),\n            CryptoStream::new(self.tx_wakers.clone()),\n            CryptoStream::new(self.tx_wakers.clone()),\n        ];\n\n        let metrics = Arc::new(qbase::metric::ConnectionMetrics::default());\n\n        let (data_streams, flow_ctrl, datagram_flow) = match self.role {\n            Role::Client => init_stream_and_datagram(\n                self.parameters.client().unwrap(),\n                self.parameters\n                    .remembered()\n                    .map(|p| p.as_ref())\n                    .unwrap_or(&ServerParameters::default()),\n                self.reliable_frames.clone(),\n                self.streams_ctrl,\n                self.tx_wakers.clone(),\n                metrics.clone(),\n            ),\n            Role::Server => init_stream_and_datagram(\n                self.parameters.server().unwrap(),\n                &ClientParameters::default(),\n                self.reliable_frames.clone(),\n                self.streams_ctrl,\n                self.tx_wakers.clone(),\n                metrics.clone(),\n            ),\n        };\n        let puncher = ArcPuncher::new(\n            self.reliable_frames.clone(),\n            PunchTransaction::new(cid_registry.clone()),\n            spaces.data().clone(),\n            self.interfaces.clone(),\n            
self.iface_factory,\n            self.quic_router.clone(),\n            self.stun_servers.clone(),\n        );\n\n        let max_idle_timeout = self\n            .parameters\n            .get_local(ParameterId::MaxIdleTimeout)\n            .expect(\"Duration::ZERO if not specified\");\n        let components = Components {\n            interfaces: self.interfaces,\n            locations: self.locations,\n            rcvd_pkt_q: self.rcvd_pkt_q,\n            conn_state,\n            idle_config: ArcIdleConfig::new(max_idle_timeout, self.defer_idle_timeout),\n            paths: ArcPathContexts::new(self.tx_wakers.clone(), event_broker.clone()),\n            send_lock: self.send_lock,\n            tls_handshake: ArcTlsHandshake::new(self.tls_session),\n            quic_handshake,\n            parameters: ArcParameters::from(self.parameters),\n            token_registry: self.token_registry,\n            cid_registry,\n            spaces,\n            crypto_streams,\n            reliable_frames: self.reliable_frames,\n            data_streams,\n            flow_ctrl,\n            datagram_flow,\n            event_broker,\n            metrics,\n            specific: self.specific,\n            puncher,\n        };\n\n        spawn_tls_handshake(&components, self.tx_wakers.clone());\n        spawn_deliver_and_parse(&components);\n\n        let connection_state = Arc::new(ConnectionState {\n            state: Ok(components).into(),\n            qlog_span,\n            tracing_span,\n        });\n\n        spawn_drive_connection(events, connection_state.clone());\n\n        Connection(connection_state)\n    }\n}\n\nfn spawn_tls_handshake(components: &Components, tx_wakers: ArcSendWakers) {\n    let task = components.tls_handshake.clone().start(\n        components.parameters.clone(),\n        components.quic_handshake.clone(),\n        components.crypto_streams.clone(),\n        (\n            components.spaces.handshake().keys(),\n            
components.spaces.data().zero_rtt_keys(),\n            components.spaces.data().one_rtt_keys(),\n        ),\n        tls_fin_handler(\n            components.parameters.clone(),\n            components.data_streams.clone(),\n            components.flow_ctrl.clone(),\n            components.spaces.data().journal().clone(),\n            components.cid_registry.local.clone(),\n            components.idle_config.clone(),\n            tx_wakers,\n        ),\n    );\n\n    let event_broker = components.event_broker.clone();\n    let task = async move {\n        if let Err(Error::Quic(e)) = task.await {\n            event_broker.emit(Event::Failed(e));\n        }\n    };\n\n    // Terminates when the QUIC connection closes and the event broker shuts down.\n    tokio::spawn(task.instrument_in_current().in_current_span());\n}\n\nfn tls_fin_handler(\n    parameters: ArcParameters,\n    data_streams: DataStreams,\n    flow_ctrl: FlowController,\n    data_journal: DataJournal,\n    local_cids: ArcLocalCids,\n    idle_config: ArcIdleConfig,\n    tx_wakers: ArcSendWakers,\n) -> impl FnOnce(&TlsHandshakeInfo) -> Result<(), Error> + Send {\n    fn apply_parameters<Role: IntoRole>(\n        data_streams: &DataStreams,\n        flow_ctrl: &FlowController,\n        // datagram_flow\n        data_journal: &DataJournal,\n        local_cids: &ArcLocalCids,\n        idle_config: &ArcIdleConfig,\n        zero_rtt_rejected: bool,\n        remote_parameters: Arc<qbase::param::core::Parameters<Role>>,\n    ) -> Result<(), Error> {\n        // accept InitialMaxStreamsBidi, InitialMaxStreamUni,\n        // InitialMaxStreamDataBidiLocal, InitialMaxStreamDataBidiRemote, InitialMaxStreamDataUni,\n        data_streams.revise_params(zero_rtt_rejected, remote_parameters.as_ref());\n        // accept InitialMaxData:\n        flow_ctrl.sender.revise_max_data(\n            zero_rtt_rejected,\n            remote_parameters\n                .get(ParameterId::InitialMaxData)\n                
.expect(\"unreachable: default value is returned if the value is unset\"),\n        );\n        // accept ActiveConnectionIdLimit\n        local_cids.set_limit(\n            remote_parameters\n                .get(ParameterId::ActiveConnectionIdLimit)\n                .expect(\"unreachable: default value is returned if the value is unset\"),\n        )?;\n        data_journal.of_rcvd_packets().revise_max_ack_delay(\n            remote_parameters\n                .get(ParameterId::MaxAckDelay)\n                .expect(\"unreachable: default value is returned if the value is unset\"),\n        );\n        idle_config.negotiate_max_idle_timeout(\n            remote_parameters\n                .get(ParameterId::MaxIdleTimeout)\n                .expect(\"Duration::ZERO if not specified\"),\n        );\n\n        Ok(())\n    }\n\n    move |info| {\n        let zero_rtt_rejected = info\n            .zero_rtt_accepted()\n            .map(|accepted| !accepted)\n            .unwrap_or(false);\n\n        let parameters = parameters.lock_guard()?;\n\n        if parameters.role() == Role::Client {\n            if zero_rtt_rejected {\n                tracing::trace!(target: \"quic\", \"0-RTT was rejected by the server.\");\n            } else {\n                tracing::trace!(target: \"quic\", \"0-RTT is not enabled, or was accepted by the server.\");\n            }\n        }\n\n        match parameters.role() {\n            Role::Client => {\n                let remote_parameters = parameters\n                    .server()\n                    .expect(\"client and server parameters are ready\")\n                    .clone();\n                drop(parameters);\n                qevent::event!(ParametersSet {\n                    owner: Owner::Remote,\n                    server_parameters: &remote_parameters,\n                });\n                apply_parameters(\n                    &data_streams,\n    
                &flow_ctrl,\n                    &data_journal,\n                    &local_cids,\n                    &idle_config,\n                    zero_rtt_rejected,\n                    remote_parameters,\n                )?;\n            }\n            Role::Server => {\n                let remote_parameters = parameters\n                    .client()\n                    .expect(\"client and server parameters are ready\")\n                    .clone();\n                drop(parameters);\n                qevent::event!(ParametersSet {\n                    owner: Owner::Remote,\n                    client_parameters: &remote_parameters,\n                });\n                apply_parameters(\n                    &data_streams,\n                    &flow_ctrl,\n                    &data_journal,\n                    &local_cids,\n                    &idle_config,\n                    zero_rtt_rejected,\n                    remote_parameters,\n                )?;\n            }\n        }\n        tx_wakers.wake_all_by(Signals::TLS_FIN);\n\n        Result::<_, Error>::Ok(())\n    }\n}\n\nfn spawn_drive_connection(mut events: mpsc::UnboundedReceiver<Event>, state: Arc<ConnectionState>) {\n    tokio::spawn(\n        async move {\n            while let Some(event) = events.recv().await {\n                match event {\n                    Event::Handshaked => {}\n                    Event::Failed(quic_error) => _ = state.enter_closing(quic_error),\n                    Event::ApplicationClose(_app_error) => {}\n                    Event::Closed(ccf) => _ = state.enter_draining(ccf),\n                    Event::StatelessReset => {}\n                    Event::Terminated => {}\n                }\n            }\n        }\n        .instrument_in_current()\n        .in_current_span(),\n    );\n}\n"
  },
  {
    "path": "qconnection/src/events.rs",
    "content": "use std::sync::Arc;\n\nuse qbase::{\n    self,\n    error::{AppError, QuicError},\n    frame::ConnectionCloseFrame,\n};\nuse qevent::quic::connectivity::BaseConnectionStates;\nuse tokio::sync::mpsc;\n\nuse crate::state::ArcConnState;\n\n/// The events that can be emitted by a QUIC connection\n#[derive(Debug, Clone, PartialEq, Eq)]\npub enum Event {\n    // The connection handshake has completed\n    Handshaked,\n    // An error occurred on the connection; it will enter the closing state\n    Failed(QuicError),\n    // The connection was closed by the application; just a notification\n    ApplicationClose(AppError),\n    // Received a connection close frame; will enter the draining state\n    Closed(ConnectionCloseFrame),\n    // Received a stateless reset; will enter the draining state\n    StatelessReset,\n    // The connection is terminated completely\n    Terminated,\n}\n\npub trait EmitEvent: Send + Sync {\n    fn emit(&self, event: Event);\n}\n\n#[derive(Clone)]\npub struct ArcEventBroker {\n    conn_state: ArcConnState,\n    raw_broker: Arc<dyn EmitEvent>,\n}\n\nimpl ArcEventBroker {\n    pub fn new<E: EmitEvent + 'static>(conn_state: ArcConnState, event_broker: E) -> Self {\n        Self {\n            conn_state,\n            raw_broker: Arc::new(event_broker),\n        }\n    }\n}\n\nimpl EmitEvent for ArcEventBroker {\n    fn emit(&self, event: Event) {\n        match &event {\n            Event::Handshaked => {\n                if self.conn_state.enter_handshaked().is_none() {\n                    return;\n                }\n            }\n            Event::Failed(error) => {\n                if self.conn_state.enter_closing(error).is_none() {\n                    return;\n                }\n            }\n            Event::ApplicationClose(error) => {\n                if self.conn_state.enter_closing(error).is_none() {\n                    return;\n                }\n            }\n            Event::Closed(ccf) => {\n                if
 self.conn_state.enter_draining(ccf).is_none() {\n                    return;\n                }\n            }\n            Event::Terminated => {\n                let terminated_state = BaseConnectionStates::Closed;\n                self.conn_state.update(terminated_state.into());\n            }\n            Event::StatelessReset => todo!(\"unsupported\"),\n        };\n        tracing::debug!(target: \"quic\", new_state = ?event, \"connection state changed\");\n        self.raw_broker.emit(event);\n    }\n}\n\nimpl EmitEvent for mpsc::UnboundedSender<Event> {\n    fn emit(&self, event: Event) {\n        _ = self.send(event);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use tokio::sync::mpsc;\n\n    use super::*;\n\n    #[test]\n    fn test_emit_event() {\n        let (tx, mut rx) = mpsc::unbounded_channel();\n        tx.emit(Event::Handshaked);\n        assert_eq!(rx.try_recv().unwrap(), Event::Handshaked);\n    }\n}\n"
  },
  {
    "path": "qconnection/src/handshake.rs",
    "content": "use std::{ops::Deref, sync::Arc};\n\nuse qbase::{\n    error::Error,\n    frame::{\n        HandshakeDoneFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    role::Role,\n};\nuse qcongestion::HandshakeStatus;\n\nuse crate::{\n    events::{ArcEventBroker, EmitEvent, Event},\n    path::ArcPathContexts,\n};\n\npub type RawHandshake<T> = qbase::handshake::Handshake<T>;\n\n/// A wrapper of [`qbase::handshake::Handshake`] that will emit [`Event::Handshaked`] when the handshake is done.\n///\n/// Read the documentation of [`qbase::handshake::Handshake`] for more information.\n#[derive(Clone)]\npub struct Handshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    inner: RawHandshake<T>,\n    inform_cc: Arc<HandshakeStatus>,\n    broker: ArcEventBroker,\n}\n\nimpl<T> Handshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    pub fn new(\n        raw: RawHandshake<T>,\n        inform_cc: Arc<HandshakeStatus>,\n        broker: ArcEventBroker,\n    ) -> Self {\n        Self {\n            inner: raw,\n            inform_cc,\n            broker,\n        }\n    }\n\n    pub fn discard_spaces_on_server_handshake_done(&self, paths: &ArcPathContexts) -> bool {\n        let is_server_done = self.inner.done();\n        if is_server_done {\n            self.inform_cc.handshake_confirmed();\n            paths.discard_initial_and_handshake_space();\n            self.broker.emit(Event::Handshaked);\n        }\n        is_server_done\n    }\n\n    pub fn role(&self) -> Role {\n        self.inner.role()\n    }\n\n    pub fn status(&self) -> Arc<HandshakeStatus> {\n        self.inform_cc.clone()\n    }\n\n    pub fn discard_spaces_on_client_handshake_done(\n        &self,\n        paths: ArcPathContexts,\n    ) -> HandshakeDoneReceiver<T> {\n        HandshakeDoneReceiver {\n            handshake: self.clone(),\n            paths,\n        }\n    }\n}\n\npub struct HandshakeDoneReceiver<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + 
Clone,\n{\n    handshake: Handshake<T>,\n    paths: ArcPathContexts,\n}\n\nimpl<T> ReceiveFrame<HandshakeDoneFrame> for HandshakeDoneReceiver<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    type Output = ();\n\n    fn recv_frame(&self, frame: HandshakeDoneFrame) -> Result<(), Error> {\n        if self.handshake.inner.recv_frame(frame)? {\n            self.handshake.inform_cc.handshake_confirmed();\n            self.paths.discard_initial_and_handshake_space();\n            self.handshake.broker.emit(Event::Handshaked);\n        }\n        Ok(())\n    }\n}\n\nimpl<T> Deref for Handshake<T>\nwhere\n    T: SendFrame<HandshakeDoneFrame> + Clone,\n{\n    type Target = HandshakeStatus;\n\n    fn deref(&self) -> &Self::Target {\n        &self.inform_cc\n    }\n}\n"
  },
  {
    "path": "qconnection/src/lib.rs",
    "content": "pub mod builder;\npub mod events;\npub mod handshake;\npub mod path;\npub mod space;\npub mod state;\npub mod termination;\npub mod tls;\nmod traversal;\npub mod tx;\npub mod prelude {\n    pub use qbase::{\n        cid::ConnectionId,\n        error::{AppError, Error, ErrorKind, QuicError},\n        frame::ConnectionCloseFrame,\n        net::{addr::*, route::*},\n        param::ParameterId,\n        role::{Client, IntoRole, Role, Server},\n        sid::{ControlStreamsConcurrency, ProductStreamsConcurrencyController, StreamId},\n        varint::VarInt,\n    };\n    #[cfg(feature = \"datagram\")]\n    pub use qdatagram::{DatagramReader, DatagramWriter};\n    pub use qinterface::{\n        bind_uri::BindUri,\n        io::{IO, IoExt},\n    };\n    pub use qrecovery::{recv::StopSending, send::CancelStream, streams::error::StreamError};\n\n    pub mod handy {\n        pub use qbase::{param::handy::*, sid::handy::*, token::handy::*};\n        pub use qevent::telemetry::handy::*;\n        pub use qinterface::io::handy::*;\n    }\n\n    pub use crate::{\n        Connection, StreamReader, StreamWriter,\n        tls::{\n            AuthClient, ClientAgentVerifyResult, ClientNameVerifyResult, LocalAgent, RemoteAgent,\n            SignError, VerifyError,\n        },\n    };\n}\n\n// Re-export dependencies\nuse std::{\n    borrow::Cow,\n    fmt::Debug,\n    future::Future,\n    io,\n    net::SocketAddr,\n    sync::{Arc, RwLock, atomic::AtomicBool},\n};\n\npub use ::{qbase, qdatagram, qevent, qinterface, qrecovery, qtraversal};\nuse derive_more::From;\nuse enum_dispatch::enum_dispatch;\nuse events::{ArcEventBroker, EmitEvent, Event};\nuse futures::{FutureExt, TryFutureExt};\nuse path::ArcPathContexts;\nuse qbase::{\n    cid,\n    error::{AppError, Error, ErrorKind, QuicError},\n    flow,\n    frame::{ConnectionCloseFrame, CryptoFrame, Frame, ReliableFrame, StreamFrame},\n    net::{\n        addr::EndpointAddr,\n        route::{Link, Pathway},\n    },\n    
param::{ArcParameters, ParameterId},\n    role::Role,\n    sid::StreamId,\n    time::ArcIdleConfig,\n    token::ArcTokenRegistry,\n};\nuse qdatagram::DatagramFlow;\n#[cfg(feature = \"datagram\")]\nuse qdatagram::{DatagramReader, DatagramWriter};\nuse qevent::{\n    quic::{Owner, connectivity::ConnectionClosed},\n    telemetry::Instrument,\n};\nuse qinterface::{\n    bind_uri::BindUri,\n    component::{\n        location::Locations,\n        route::{self, QuicRouterEntry, RcvdPacketQueue},\n    },\n    manager::InterfaceManager,\n};\nuse qrecovery::{\n    crypto::CryptoStream,\n    journal, recv, reliable, send,\n    streams::{self, Ext},\n};\nuse space::Spaces;\nuse state::ArcConnState;\nuse termination::Termination;\nuse tls::ArcSendLock;\nuse tracing::Instrument as _;\n\nuse crate::{\n    path::{CreatePathFailure, PathDeactivated},\n    space::data::DataSpace,\n    termination::Terminator,\n    tls::{ArcTlsHandshake, LocalAgent, RemoteAgent},\n    traversal::PunchTransaction,\n};\n\n/// The kind of frame which is guaranteed to be received by the peer.\n///\n/// The bundle of [`StreamFrame`], [`CryptoFrame`], and [`ReliableFrame`].\n#[derive(Debug, Clone, From, Eq, PartialEq)]\n#[enum_dispatch(EncodeSize, FrameFeture)]\npub enum GuaranteedFrame {\n    Stream(StreamFrame),\n    Crypto(CryptoFrame),\n    Reliable(ReliableFrame),\n}\n\nimpl<'f, D> TryFrom<&'f Frame<D>> for GuaranteedFrame {\n    type Error = &'f Frame<D>;\n\n    fn try_from(frame: &'f Frame<D>) -> Result<Self, Self::Error> {\n        Ok(match ReliableFrame::try_from(frame) {\n            Ok(reliable) => Self::Reliable(reliable),\n            Err(Frame::Crypto(crypto, _data)) => Self::Crypto(*crypto),\n            Err(Frame::Stream(stream, _data)) => Self::Stream(*stream),\n            Err(frame) => return Err(frame),\n        })\n    }\n}\n\n/// For initial space, only reliable transmission of crypto frames is required.\npub type InitialJournal = journal::Journal<CryptoFrame>;\n/// For handshake space, only
 reliable transmission of crypto frames is required.\npub type HandshakeJournal = journal::Journal<CryptoFrame>;\n/// For data space, reliable transmission of [`GuaranteedFrame`] (crypto frames, stream frames and reliable frames) is required.\npub type DataJournal = journal::Journal<GuaranteedFrame>;\n\npub type ArcReliableFrameDeque = reliable::ArcReliableFrameDeque<ReliableFrame>;\npub type QuicRouterRegistry = route::QuicRouterRegistry<ArcReliableFrameDeque>;\npub type ArcLocalCids = cid::ArcLocalCids<QuicRouterRegistry>;\npub type ArcRemoteCids = cid::ArcRemoteCids<ArcReliableFrameDeque>;\npub type CidRegistry = cid::Registry<ArcLocalCids, ArcRemoteCids>;\npub type ArcDcidCell = cid::ArcCidCell<ArcReliableFrameDeque>;\n\npub type FlowController = flow::FlowController<ArcReliableFrameDeque>;\npub type Credit<'a> = flow::Credit<'a, ArcReliableFrameDeque>;\n\npub type Handshake = handshake::Handshake<ArcReliableFrameDeque>;\npub type RawHandshake = handshake::RawHandshake<ArcReliableFrameDeque>;\n\npub type DataStreams = streams::DataStreams<ArcReliableFrameDeque>;\npub type StreamReader = recv::Reader<Ext<ArcReliableFrameDeque>>;\npub type StreamWriter = send::Writer<Ext<ArcReliableFrameDeque>>;\npub type ArcPuncher =\n    qtraversal::punch::puncher::ArcPuncher<ArcReliableFrameDeque, PunchTransaction, DataSpace>;\n\n#[derive(Clone)]\npub struct Components {\n    // TODO: delete this\n    interfaces: Arc<InterfaceManager>,\n    locations: Arc<Locations>,\n    rcvd_pkt_q: Arc<RcvdPacketQueue>,\n    conn_state: ArcConnState,\n    idle_config: ArcIdleConfig,\n    paths: ArcPathContexts,\n    send_lock: ArcSendLock,\n    tls_handshake: ArcTlsHandshake,\n    quic_handshake: Handshake,\n    parameters: ArcParameters,\n    token_registry: ArcTokenRegistry,\n    cid_registry: CidRegistry,\n    spaces: Spaces,\n    crypto_streams: [CryptoStream; 3],\n    reliable_frames: ArcReliableFrameDeque,\n    data_streams: DataStreams,\n    flow_ctrl: FlowController,\n    
datagram_flow: DatagramFlow,\n    event_broker: ArcEventBroker,\n    metrics: qbase::metric::ArcConnectionMetrics,\n    specific: SpecificComponents,\n    puncher: ArcPuncher,\n}\n\n#[derive(Clone)]\npub enum SpecificComponents {\n    Client {},\n    Server {\n        using_odcid: Arc<AtomicBool>,\n        odcid_router_entry: Arc<QuicRouterEntry>,\n    },\n}\n\n/// expand Impl_Future![Type] to `impl Future<Output = Type> + Send + use<>`\nmacro_rules! Impl_Future {\n    [$ty:ty] => {\n        impl Future<Output = $ty> + Send + use<>\n    };\n}\n\nimpl Components {\n    pub fn role(&self) -> Role {\n        match self.specific {\n            SpecificComponents::Client { .. } => Role::Client,\n            SpecificComponents::Server { .. } => Role::Server,\n        }\n    }\n\n    /// Gets the connection metrics for tracking data volumes.\n    pub fn metrics(&self) -> &qbase::metric::ArcConnectionMetrics {\n        &self.metrics\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub fn open_bi_stream(\n        &self,\n    ) -> Impl_Future![Result<Option<(StreamId, (StreamReader, StreamWriter))>, Error>] {\n        let zero_rtt_avaliable = self.spaces.data().is_zero_rtt_avaliable();\n        let tls_handshake = self.tls_handshake.clone();\n        let data_streams = self.data_streams.clone();\n        let parameters = self.parameters.clone();\n        async move {\n            if !zero_rtt_avaliable {\n                tls_handshake.info().await?;\n            }\n            data_streams.open_bi(&parameters).await\n        }\n        .instrument_in_current()\n        .in_current_span()\n    }\n\n    pub fn open_uni_stream(&self) -> Impl_Future![Result<Option<(StreamId, StreamWriter)>, Error>] {\n        let zero_rtt_avaliable = self.spaces.data().is_zero_rtt_avaliable();\n        let tls_handshake = self.tls_handshake.clone();\n        let data_streams = self.data_streams.clone();\n        let parameters = self.parameters.clone();\n        async move {\n            if 
!zero_rtt_avaliable {\n                tls_handshake.info().await?;\n            }\n            data_streams.open_uni(&parameters).await\n        }\n        .instrument_in_current()\n        .in_current_span()\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub fn accept_bi_stream(\n        &self,\n    ) -> Impl_Future![Result<(StreamId, (StreamReader, StreamWriter)), Error>] {\n        let data_streams = self.data_streams.clone();\n        let parameters = self.parameters.clone();\n        async move { data_streams.accept_bi(&parameters).await }\n            .instrument_in_current()\n            .in_current_span()\n    }\n\n    pub fn accept_uni_stream(&self) -> Impl_Future![Result<(StreamId, StreamReader), Error>] {\n        let data_streams = self.data_streams.clone();\n        async move { data_streams.accept_uni().await }\n            .instrument_in_current()\n            .in_current_span()\n    }\n\n    #[cfg(feature = \"datagram\")]\n    #[deprecated]\n    pub fn datagram_reader(&self) -> io::Result<DatagramReader> {\n        self.datagram_flow.reader()\n    }\n\n    #[cfg(feature = \"datagram\")]\n    #[deprecated]\n    pub fn datagram_writer(&self) -> Impl_Future![io::Result<DatagramWriter>] {\n        let params = self.parameters.clone();\n        let datagram_flow = self.datagram_flow.clone();\n        async move {\n            let max_datagram_frame_size = params\n                .remote_ready()\n                .await?\n                .get_remote(ParameterId::MaxDatagramFrameSize)\n                .expect(\"unreachable: default value is returned if the value is unset\");\n            datagram_flow.writer(max_datagram_frame_size)\n        }\n        .instrument_in_current()\n        .in_current_span()\n    }\n\n    pub fn add_path(\n        &self,\n        bind_uri: BindUri,\n        link: Link,\n        pathway: Pathway,\n    ) -> Result<(), CreatePathFailure> {\n        self.get_or_try_create_path(bind_uri, link, pathway, false)\n            
.map(|_| ())\n    }\n\n    pub fn del_path(&self, pathway: &Pathway) {\n        self.paths.remove(pathway, &PathDeactivated::App);\n    }\n\n    pub fn local_agent(&self) -> Impl_Future![Result<Option<LocalAgent>, Error>] {\n        let tls_handshake = self.tls_handshake.clone();\n        async move {\n            match tls_handshake.info().await?.as_ref() {\n                tls::TlsHandshakeInfo::Client { local_agent, .. } => Ok(local_agent.clone()),\n                tls::TlsHandshakeInfo::Server { local_agent, .. } => Ok(Some(local_agent.clone())),\n            }\n        }\n        .instrument_in_current()\n        .in_current_span()\n    }\n\n    pub fn remote_agent(&self) -> Impl_Future![Result<Option<RemoteAgent>, Error>] {\n        let tls_handshake = self.tls_handshake.clone();\n        async move {\n            match tls_handshake.info().await?.as_ref() {\n                tls::TlsHandshakeInfo::Client { remote_agent, .. } => {\n                    Ok(Some(remote_agent.clone()))\n                }\n                tls::TlsHandshakeInfo::Server { remote_agent, .. 
} => Ok(remote_agent.clone()),\n            }\n        }\n        .instrument_in_current()\n        .in_current_span()\n    }\n}\n\nimpl Components {\n    pub fn enter_closing(self, error: Error) -> Termination {\n        qevent::event!(ConnectionClosed {\n            owner: Owner::Local,\n            error: &error, // TODO: trigger\n        });\n\n        self.data_streams.on_conn_error(&error);\n        self.datagram_flow.on_conn_error(&error);\n        self.tls_handshake.on_conn_error(&error);\n        self.parameters.on_conn_error(&error);\n\n        tokio::spawn(\n            {\n                let pto_duration = self.paths.max_pto_duration().unwrap_or_default();\n                let event_broker = self.event_broker.clone();\n                async move {\n                    tokio::time::sleep(pto_duration).await;\n                    event_broker.emit(Event::Terminated);\n                }\n            }\n            .instrument_in_current()\n            .in_current_span(),\n        );\n\n        match self.send_lock.is_permitted() {\n            // If permitted, we can send ccf packets.\n            true => {\n                let terminator = Arc::new(Terminator::new(error.clone().into(), &self));\n                tokio::spawn(\n                    async move { self.spaces.send_ccf_packets(terminator.as_ref()).await }\n                        .instrument_in_current()\n                        .in_current_span(),\n                );\n            }\n            // No need to send packets, just clear the paths.\n            false => {\n                // TODO: check the remote of close spaces\n                self.paths.clear();\n            }\n        }\n\n        Termination::closing(error, self.cid_registry.local, self.rcvd_pkt_q)\n    }\n\n    pub fn enter_draining(self, ccf: ConnectionCloseFrame) -> Termination {\n        qevent::event!(ConnectionClosed {\n            owner: Owner::Local,\n            ccf: &ccf // TODO: trigger\n        });\n\n        let 
error = ccf.clone().into();\n        self.data_streams.on_conn_error(&error);\n        self.datagram_flow.on_conn_error(&error);\n        self.tls_handshake.on_conn_error(&error);\n        self.parameters.on_conn_error(&error);\n\n        tokio::spawn(\n            {\n                let pto_duration = self.paths.max_pto_duration().unwrap_or_default();\n                let event_broker = self.event_broker.clone();\n                async move {\n                    tokio::time::sleep(pto_duration).await;\n                    event_broker.emit(Event::Terminated);\n                }\n            }\n            .instrument_in_current()\n            .in_current_span(),\n        );\n\n        match self.send_lock.is_permitted() {\n            // If permitted, we can send ccf packets.\n            true => {\n                let terminator = Arc::new(Terminator::new(ccf, &self));\n                tokio::spawn(\n                    async move { self.spaces.send_ccf_packets(terminator.as_ref()).await }\n                        .instrument_in_current()\n                        .in_current_span(),\n                );\n            }\n            // No need to send packets, just clear the paths.\n            false => {\n                self.paths.clear();\n            }\n        }\n\n        // No need to receive packets, just close all queues.\n        self.rcvd_pkt_q.close_all();\n        Termination::draining(error, self.cid_registry.local)\n    }\n}\n\nstruct ConnectionState {\n    state: RwLock<Result<Components, Termination>>,\n    qlog_span: qevent::telemetry::Span,\n    tracing_span: tracing::Span,\n}\n\nimpl ConnectionState {\n    // called by event\n    pub fn enter_closing(&self, error: QuicError) -> Result<(), Error> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n        let mut conn = self.state.write().unwrap();\n        let core_conn = conn.as_ref().map_err(|t| t.error())?;\n\n        *conn = 
Err(core_conn.clone().enter_closing(error.into()));\n        Ok(())\n    }\n\n    pub fn application_close(\n        &self,\n        reason: impl Into<Cow<'static, str>>,\n        code: u64,\n    ) -> Result<(), Error> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n        let mut conn = self.state.write().unwrap();\n        let core_conn = conn.as_ref().map_err(|t| t.error())?;\n\n        let error_code = code.try_into().expect(\"application error code overflow\");\n        let error = AppError::new(error_code, reason);\n        let event = Event::ApplicationClose(error.clone());\n        core_conn.event_broker.emit(event);\n        *conn = Err(core_conn.clone().enter_closing(error.into()));\n\n        Ok(())\n    }\n\n    pub fn enter_draining(&self, ccf: ConnectionCloseFrame) -> bool {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n        let mut conn = self.state.write().unwrap();\n        match conn.as_mut() {\n            Ok(core_conn) => {\n                *conn = Err(core_conn.clone().enter_draining(ccf));\n                true\n            }\n            Err(termination) => termination.enter_draining(),\n        }\n    }\n\n    fn try_map_components<T>(&self, op: impl FnOnce(&Components) -> T) -> Result<T, Error> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n        self.state\n            .read()\n            .unwrap()\n            .as_ref()\n            .map(op)\n            .map_err(|termination| termination.error())\n    }\n\n    fn try_map_components_future<F, M>(\n        &self,\n        op: M,\n    ) -> impl Future<Output = Result<F::Output, Error>> + Send + use<F, M>\n    where\n        F: Future + Send,\n        M: FnOnce(&Components) -> F,\n    {\n        match self.try_map_components(op) {\n            Ok(future) => future.map(Ok).left_future(),\n            Err(error) => std::future::ready(error).map(Err).right_future(),\n        }\n    }\n\n    fn 
validate(&self) -> Result<(), Error> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n        let mut conn = self.state.write().unwrap();\n        let core_conn = conn.as_ref().map_err(|e| e.error())?;\n        let validate = 'validate: {\n            if core_conn.paths.is_empty() {\n                let error =\n                    QuicError::with_default_fty(ErrorKind::NoViablePath, \"No viable path exists\");\n                break 'validate Err(error);\n            }\n            Ok(())\n        };\n        if let Err(error) = validate {\n            core_conn.event_broker.emit(Event::Failed(error.clone()));\n            let termination = core_conn.clone().enter_closing(error.into());\n            let error = termination.error();\n            *conn = Err(termination);\n            return Err(error);\n        }\n        Ok(())\n    }\n}\n\nimpl Drop for ConnectionState {\n    fn drop(&mut self) {\n        let _span = self.tracing_span.enter();\n        if self.validate().is_ok() && self.application_close(\"\", 0).is_ok() {\n            #[cfg(debug_assertions)]\n            tracing::warn!(target: \"quic\", \"connection was still active when dropped; closing it automatically.\");\n            #[cfg(not(debug_assertions))]\n            tracing::debug!(target: \"quic\", \"connection was still active when dropped; closing it automatically.\");\n        }\n    }\n}\n\n#[derive(Clone)]\npub struct Connection(Arc<ConnectionState>);\n\nimpl Connection {\n    pub fn role(&self) -> Result<Role, Error> {\n        self.0.try_map_components(|core_conn| core_conn.role())\n    }\n\n    /// Close the connection with an application close frame.\n    ///\n    /// Returns an error if the connection is already closed.\n    pub fn close(&self, reason: impl Into<Cow<'static, str>>, code: u64) -> Result<(), Error> {\n        self.0.application_close(reason, code)\n    }\n\n    /// Gets the connection metrics for tracking data volumes.\n    ///\n    /// Returns the metrics 
that track:\n    /// - pending_send_bytes: Data written by application but not yet sent\n    /// - sent_unacked_bytes: Data sent but not yet acknowledged\n    /// - sent_acked_bytes: Data sent and acknowledged\n    pub fn metrics(&self) -> Result<qbase::metric::ArcConnectionMetrics, Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.metrics().clone())\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub fn open_bi_stream(\n        &self,\n    ) -> Impl_Future![Result<Option<(StreamId, (StreamReader, StreamWriter))>, Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.open_bi_stream())\n            .map(|result| result?)\n    }\n\n    pub fn open_uni_stream(&self) -> Impl_Future![Result<Option<(StreamId, StreamWriter)>, Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.open_uni_stream())\n            .map(|result| result?)\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub fn accept_bi_stream(\n        &self,\n    ) -> Impl_Future![Result<(StreamId, (StreamReader, StreamWriter)), Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.accept_bi_stream())\n            .map(|result| result?)\n    }\n\n    pub fn accept_uni_stream(&self) -> Impl_Future![Result<(StreamId, StreamReader), Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.accept_uni_stream())\n            .map(|result| result?)\n    }\n\n    #[cfg(feature = \"datagram\")]\n    #[deprecated]\n    #[allow(deprecated)]\n    pub fn datagram_reader(&self) -> Result<io::Result<DatagramReader>, Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.datagram_reader())\n    }\n\n    #[cfg(feature = \"datagram\")]\n    #[deprecated]\n    #[allow(deprecated)]\n    pub async fn datagram_writer(&self) -> Result<io::Result<DatagramWriter>, Error> {\n        Ok(self\n            .0\n            
.try_map_components(|core_conn| core_conn.datagram_writer())?\n            .await)\n    }\n\n    pub fn add_path(\n        &self,\n        bind_uri: BindUri,\n        link: Link,\n        pathway: Pathway,\n    ) -> Result<(), CreatePathFailure> {\n        self.0\n            .try_map_components(|core_conn| core_conn.add_path(bind_uri, link, pathway))\n            .unwrap_or_else(|cc| Err(CreatePathFailure::ConnectionClosed(cc)))\n    }\n\n    pub fn del_path(&self, pathway: &Pathway) -> Result<(), Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.del_path(pathway))\n    }\n\n    pub fn origin_dcid(&self) -> Result<cid::ConnectionId, Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.cid_registry.origin_dcid())\n    }\n\n    pub fn handshaked(&self) -> Impl_Future![Result<(), Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.conn_state.handshaked())\n            .map(|result| result?)\n    }\n\n    pub fn terminated(&self) -> Impl_Future![Error] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.conn_state.terminated())\n            .map(|(Ok(error) | Err(error))| error)\n    }\n\n    pub fn local_agent(&self) -> Impl_Future![Result<Option<LocalAgent>, Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.local_agent())\n            .map(|result| result?)\n    }\n\n    pub fn remote_agent(&self) -> Impl_Future![Result<Option<RemoteAgent>, Error>] {\n        self.0\n            .try_map_components_future(|core_conn| core_conn.remote_agent())\n            .map(|result| result?)\n    }\n\n    pub fn server_name(&self) -> Impl_Future![Result<String, Error>] {\n        self.0\n            .try_map_components_future(|core_conn| match core_conn.role() {\n                Role::Client => core_conn\n                    .remote_agent()\n                    .map_ok(|agent| agent.unwrap().name().to_owned())\n             
       .left_future(),\n                Role::Server => core_conn\n                    .local_agent()\n                    .map_ok(|agent| agent.unwrap().name().to_owned())\n                    .right_future(),\n            })\n            .map(|result| result?)\n    }\n\n    pub fn add_local_endpoint(&self, bind: BindUri, addr: EndpointAddr) -> Result<(), Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.add_local_endpoint(bind, addr))\n    }\n\n    pub fn add_peer_endpoint(\n        &self,\n        addr: EndpointAddr,\n        source: qresolve::Source,\n    ) -> Result<(), Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.add_peer_endpoint(addr, source))\n    }\n\n    pub fn remove_address(&self, addr: SocketAddr) -> Result<(), Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.remove_address(addr))\n    }\n\n    pub fn subscribe_local_address(&self) -> Result<(), Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.subscribe_local_address())\n    }\n\n    pub fn path_context(&self) -> Result<ArcPathContexts, Error> {\n        self.0\n            .try_map_components(|core_conn| core_conn.paths.clone())\n    }\n\n    /// Check if the connection is still valid.\n    ///\n    /// Return error if no viable path exists, or the connection is closed.\n    pub fn validate(&self) -> Result<(), Error> {\n        self.0.validate()\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path/aa.rs",
    "content": "use std::sync::atomic::{AtomicU8, AtomicUsize, Ordering};\n\nuse qbase::net::tx::{ArcSendWaker, Signals};\n\npub const DEFAULT_ANTI_FACTOR: usize = 3;\n/// Therefore, after receiving packets from an address that is not yet validated,\n/// an endpoint MUST limit the amount of data it sends to the unvalidated address\n/// to N(three) times the amount of data received from that address.\n#[derive(Debug)]\npub struct AntiAmplifier<const N: usize = DEFAULT_ANTI_FACTOR> {\n    // Each time data is received, credit is increased;\n    // each time data is sent, credit is consumed.\n    credit: AtomicUsize,\n    // If the credit is exhausted, it needs to wait until\n    // new data is received before it can continue to send.\n    tx_waker: ArcSendWaker,\n    state: AtomicU8,\n}\n\nimpl<const N: usize> AntiAmplifier<N> {\n    const NORMAL: u8 = 0;\n    const GRANTED: u8 = 1;\n    const ABORTED: u8 = 2;\n\n    pub fn new(tx_waker: ArcSendWaker) -> Self {\n        Self {\n            credit: AtomicUsize::new(0),\n            tx_waker,\n            state: AtomicU8::new(0),\n        }\n    }\n\n    /// Store N * amount of credit\n    pub fn on_rcvd(&self, amount: usize) {\n        if self.state.load(Ordering::Acquire) != Self::NORMAL {\n            return;\n        }\n        self.credit.fetch_add(amount * N, Ordering::AcqRel);\n        self.tx_waker.wake_by(Signals::CREDIT);\n    }\n\n    /// This function must only be called by one at a time, and the amount of data sent\n    /// must be feed back to the anti-amplifier before poll_apply can be called again.\n    pub fn balance(&self) -> Result<Option<usize>, Signals> {\n        match self.state.load(Ordering::Acquire) {\n            Self::GRANTED => Ok(Some(usize::MAX)),\n            Self::ABORTED => Ok(None),\n            Self::NORMAL => {\n                let credit = self.credit.load(Ordering::Acquire);\n                if credit == 0 {\n                    // 再次检查，以防grant、abort在self.waker赋值前被调用，导致任务死掉\n      
              let state = self.state.load(Ordering::Acquire);\n                    if state == Self::NORMAL {\n                        Err(Signals::CREDIT)\n                    } else {\n                        self.tx_waker.wake_by(Signals::CREDIT);\n                        if state == Self::GRANTED {\n                            Ok(Some(usize::MAX))\n                        } else {\n                            Ok(None)\n                        }\n                    }\n                } else {\n                    Ok(Some(credit))\n                }\n            }\n            _ => unreachable!(),\n        }\n    }\n\n    pub fn on_sent(&self, amount: usize) {\n        if self.state.load(Ordering::Acquire) == Self::NORMAL {\n            self.credit.fetch_sub(amount, Ordering::AcqRel);\n        }\n    }\n\n    pub fn grant(&self) {\n        if self\n            .state\n            .compare_exchange(\n                Self::NORMAL,\n                Self::GRANTED,\n                Ordering::AcqRel,\n                Ordering::Acquire,\n            )\n            .is_ok()\n        {\n            self.tx_waker.wake_by(Signals::CREDIT);\n        }\n    }\n\n    pub fn abort(&self) {\n        if self\n            .state\n            .compare_exchange(\n                Self::NORMAL,\n                Self::ABORTED,\n                Ordering::AcqRel,\n                Ordering::Acquire,\n            )\n            .is_ok()\n        {\n            self.tx_waker.wake_by(Signals::CREDIT);\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    #[test]\n    fn test_deposit_and_poll_apply() {\n        let waker = ArcSendWaker::new();\n        let anti_amplifier = AntiAmplifier::<3>::new(waker);\n        // Initially, no credit\n        assert_eq!(anti_amplifier.balance(), Err(Signals::CREDIT));\n\n        // Deposit 1 unit of data, should add 3 units of credit\n        anti_amplifier.on_rcvd(1);\n        assert_eq!(anti_amplifier.credit.load(Ordering::Acquire), 
3);\n\n        // Balance should report all 3 units of credit without consuming them\n        assert_eq!(anti_amplifier.balance(), Ok(Some(3)));\n        assert_eq!(anti_amplifier.credit.load(Ordering::Acquire), 3);\n\n        anti_amplifier.on_sent(3);\n\n        // No credit left, should signal that more credit is needed\n        assert_eq!(anti_amplifier.balance(), Err(Signals::CREDIT));\n    }\n\n    #[test]\n    fn test_multiple_deposits() {\n        let waker = ArcSendWaker::new();\n        let anti_amplifier = AntiAmplifier::<3>::new(waker);\n\n        // Deposit 1 unit of data, should add 3 units of credit\n        anti_amplifier.on_rcvd(1);\n        assert_eq!(anti_amplifier.credit.load(Ordering::Acquire), 3);\n\n        // Deposit another 1 unit of data, should add another 3 units of credit\n        anti_amplifier.on_rcvd(1);\n        assert_eq!(anti_amplifier.credit.load(Ordering::Acquire), 6);\n\n        // Balance should report all 6 units of credit without consuming them\n        assert_eq!(anti_amplifier.balance(), Ok(Some(6)));\n        assert_eq!(anti_amplifier.credit.load(Ordering::Acquire), 6);\n\n        // After sending 5 units, credit is reduced by 5\n        anti_amplifier.on_sent(5);\n        assert_eq!(anti_amplifier.credit.load(Ordering::Acquire), 1);\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path/burst.rs",
    "content": "use std::{\n    io,\n    ops::Deref,\n    sync::{Arc, atomic::Ordering::Acquire},\n};\n\nuse bytes::BufMut;\nuse derive_more::From;\nuse qbase::{\n    Epoch, GetEpoch,\n    cid::{BorrowedCid, ConnectionId},\n    frame::PingFrame,\n    net::tx::{ArcSendWaker, Signals},\n    packet::{\n        AssemblePacket, Package, PacketContent, PacketInfo, ProductHeader,\n        header::{\n            long::{HandshakeHeader, InitialHeader, ZeroRttHeader, io::LongHeaderBuilder},\n            short::OneRttHeader,\n        },\n        io::{Packages, PadProbe, PadTo20, PadToFull, Repeat},\n        signal::SpinBit,\n    },\n    role::Role,\n    token::TokenRegistry,\n};\nuse qcongestion::{ArcCC, Transport};\nuse qinterface::io::IO;\nuse qrecovery::journal::{AckPackege, ArcRcvdJournal, Journal};\nuse qtraversal::packet::{ForwardHeader, WriteForwardHeader};\n\nuse crate::{\n    ArcDcidCell, ArcReliableFrameDeque, CidRegistry, Components,\n    path::{AntiAmplifier, Constraints},\n    space::{Spaces, data::DataSpace, handshake::HandshakeSpace, initial::InitialSpace},\n    tls::ArcTlsHandshake,\n    tx::PacketWriter,\n};\n\n// /// Trait alias\n// pub trait PackageIntoSpacePacketWriter<H, S: PacketSpace<H>>:\n//     for<'b, 's> Package<PacketWriter<'b, 's, S::JournalFrame>>\n// {\n// }\n\n// impl<H, S: PacketSpace<H>, P> PackageIntoSpacePacketWriter<H, S> for P where\n//     P: for<'b, 's> Package<PacketWriter<'b, 's, S::JournalFrame>>\n// {\n// }\n\n// pn space?\npub trait PacketSpace<H> {\n    type JournalFrame;\n\n    fn new_packet<'b, 's>(\n        &'s self,\n        header: H,\n        cc: &ArcCC,\n        buffer: &'b mut [u8],\n    ) -> Result<PacketWriter<'b, 's, Self::JournalFrame>, Signals>;\n}\n\npub struct Burst {\n    path: Arc<super::Path>,\n    initial_token: Vec<u8>,\n    cid_registry: CidRegistry,\n    spin: bool,\n\n    spaces: Spaces,\n\n    tls_handshake: ArcTlsHandshake,\n}\n\nimpl super::Path {\n    pub fn new_burst(self: &Arc<Self>, components: 
&Components) -> Burst {\n        Burst {\n            path: self.clone(),\n            initial_token: match components.token_registry.deref() {\n                TokenRegistry::Client((server_name, token_sink)) => {\n                    token_sink.fetch_token(server_name)\n                }\n                TokenRegistry::Server(..) => vec![],\n            },\n            cid_registry: components.cid_registry.clone(),\n            spin: false, // TODO\n            spaces: components.spaces.clone(),\n            tls_handshake: components.tls_handshake.clone(),\n        }\n    }\n}\n\n// Used as the error layer of a nested (two-layer) Result\n#[derive(From)]\npub enum BurstError {\n    Signals(Signals),\n    PathDeactived,\n}\n\npub struct PacketsAssembler<'a> {\n    cc: &'a ArcCC,\n    constraints: Constraints,\n    cid_registry: &'a CidRegistry,\n    borrowed_dcid: Result<BorrowedCid<'a, ArcReliableFrameDeque>, Signals>,\n    initial_token: &'a [u8],\n    spin: SpinBit,\n}\n\nimpl<'a> PacketsAssembler<'a> {\n    fn new(\n        cid_registry: &'a CidRegistry,\n        dcid_cell: &'a ArcDcidCell,\n        anti_amplifier: &AntiAmplifier,\n        cc: &'a ArcCC,\n        tx_waker: ArcSendWaker,\n        initial_token: &'a [u8],\n        spin: impl Into<SpinBit>,\n    ) -> Result<PacketsAssembler<'a>, BurstError> {\n        let send_quota = cc.send_quota()?;\n        let Some(credit_limit) = anti_amplifier.balance()? 
else {\n            return Err(BurstError::PathDeactived);\n        };\n\n        let Some(borrowed_dcid) = dcid_cell.borrow_cid(tx_waker).transpose() else {\n            return Err(BurstError::PathDeactived);\n        };\n\n        let constraints = Constraints::new(credit_limit, send_quota);\n        Ok(Self {\n            cid_registry,\n            borrowed_dcid,\n            cc,\n            constraints,\n            initial_token,\n            spin: spin.into(),\n        })\n    }\n\n    fn initial_scid(&self) -> Result<ConnectionId, Signals> {\n        self.cid_registry\n            .local\n            .initial_scid()\n            .ok_or(Signals::empty())\n    }\n\n    fn applied_dcid(&self) -> Result<ConnectionId, Signals> {\n        self.borrowed_dcid.as_deref().copied().map_err(|e| *e)\n    }\n\n    /// Returns the connection ID that is used to send the initial and zero rtt packets.\n    ///\n    /// dquic implements a multi-path handshake feature: the client creates many paths and sends initial packets.\n    ///\n    /// The client will only use origin_dcid to send initial and zero rtt packets.\n    ///\n    /// The client and server must negotiate a handshake path and assign the initial dcid to this path,\n    /// to prevent the unique connection ID from being claimed by an invalid path, which would cause the connection to fail.\n    ///\n    /// The client and server choose the path where they receive the first initial packet as the handshake path.\n    /// The server will only return initial packets on the handshake path, thereby negotiating the handshake path.\n    ///\n    /// Therefore, the server can only send initial packets with the connection ID assigned to the path.\n    /// This manifests during the handshake as sending initial packets only on the first path.\n    fn initial_dcid(&self) -> Result<ConnectionId, Signals> {\n        match self.cid_registry.role() {\n            Role::Client => Ok(self.cid_registry.origin_dcid()),\n            
Role::Server => self.applied_dcid(),\n        }\n    }\n\n    pub fn commit(&mut self, sent_bytes: usize, pkt_info: PacketInfo) {\n        self.constraints.commit(sent_bytes, pkt_info.in_flight());\n        self.cc.on_pkt_sent(\n            pkt_info.epoch().expect(\"todo\"),\n            pkt_info.packet_number(),\n            pkt_info.ack_eliciting(),\n            sent_bytes,\n            pkt_info.in_flight(),\n            pkt_info.largest_ack(),\n        );\n    }\n}\n\nimpl ProductHeader<InitialHeader> for PacketsAssembler<'_> {\n    fn new_header(&self) -> Result<InitialHeader, Signals> {\n        Ok(\n            LongHeaderBuilder::with_cid(self.initial_dcid()?, self.initial_scid()?)\n                .initial(self.initial_token.to_vec()),\n        )\n    }\n}\n\nimpl ProductHeader<ZeroRttHeader> for PacketsAssembler<'_> {\n    fn new_header(&self) -> Result<ZeroRttHeader, Signals> {\n        Ok(LongHeaderBuilder::with_cid(self.initial_dcid()?, self.initial_scid()?).zero_rtt())\n    }\n}\n\nimpl ProductHeader<HandshakeHeader> for PacketsAssembler<'_> {\n    fn new_header(&self) -> Result<HandshakeHeader, Signals> {\n        Ok(LongHeaderBuilder::with_cid(self.applied_dcid()?, self.initial_scid()?).handshake())\n    }\n}\n\nimpl ProductHeader<OneRttHeader> for PacketsAssembler<'_> {\n    fn new_header(&self) -> Result<OneRttHeader, Signals> {\n        Ok(OneRttHeader::new(self.spin, self.applied_dcid()?))\n    }\n}\n\nimpl<'a> PacketsAssembler<'a> {\n    pub fn assemble<'s, 'b, H, Space, P>(\n        &mut self,\n        space: &'s Space,\n        data_sources: P,\n        buffer: &'b mut [u8],\n        packet_content: &mut PacketContent,\n    ) -> Result<usize, Signals>\n    where\n        Self: ProductHeader<H>,\n        Space: PacketSpace<H> + GetEpoch,\n        Space::JournalFrame: 's,\n        P: Package<PacketWriter<'b, 's, Space::JournalFrame>>,\n    {\n        let buffer = self.constraints.constrain(buffer);\n        let mut packet = 
space.new_packet(self.new_header()?, self.cc, buffer)?;\n        *packet_content += packet.assemble_packet(&mut Packages((data_sources, PadTo20)))?;\n        let (sent_bytes, props) = packet.encrypt_and_protect_packet();\n        self.commit(sent_bytes, props);\n        Result::<_, Signals>::Ok(sent_bytes)\n    }\n}\n\npub type PackageIntoSpace<H, S> =\n    dyn for<'b, 's> Package<PacketWriter<'b, 's, <S as PacketSpace<H>>::JournalFrame>> + Send;\n\npub struct DataSources {\n    initial: Box<PackageIntoSpace<InitialHeader, InitialSpace>>,\n    zero_rtt: Box<PackageIntoSpace<ZeroRttHeader, DataSpace>>,\n    handshake: Box<PackageIntoSpace<HandshakeHeader, HandshakeSpace>>,\n    one_rtt: Box<PackageIntoSpace<OneRttHeader, DataSpace>>,\n}\n\nimpl Components {\n    pub(super) fn packages(&self) -> DataSources {\n        let initial_packages = self.crypto_streams[Epoch::Initial]\n            .outgoing()\n            .package(Epoch::Initial);\n        let zero_rtt_packages = Packages((\n            // repeat to send multi reliable frames in one packet\n            Repeat(self.reliable_frames.clone()),\n            // repeat to send multi stream frames in one packet\n            Repeat(\n                self.data_streams\n                    .package(self.flow_ctrl.sender.clone(), true),\n            ),\n            // TODO: datagram\n        ));\n        let handshake_packages = self.crypto_streams[Epoch::Handshake]\n            .outgoing()\n            .package(Epoch::Handshake);\n        let one_rtt_packages = Packages((\n            self.crypto_streams[Epoch::Data]\n                .outgoing()\n                .package(Epoch::Data),\n            // repeat to send multi reliable frames in one packet\n            Repeat(self.reliable_frames.clone()),\n            // repeat to send multi stream frames in one packet\n            Repeat(\n                self.data_streams\n                    .package(self.flow_ctrl.sender.clone(), false),\n            ),\n            // 
TODO: datagram\n        ));\n        DataSources {\n            initial: Box::new(initial_packages),\n            zero_rtt: Box::new(zero_rtt_packages),\n            handshake: Box::new(handshake_packages),\n            one_rtt: Box::new(one_rtt_packages),\n        }\n    }\n}\n\nfn ack_package<'s, S, F>(space: &'s S, cc: &ArcCC) -> AckPackege<'s>\nwhere\n    S: GetEpoch + AsRef<Journal<F>>,\n    F: 's,\n{\n    // (1) When may_loss is called, cc is already locked, and may_loss tries to lock sent_journal.\n    // (2) PacketMemory holds a guard on sent_journal, while need_ack tries to lock cc.\n    // Locking cc while a PacketMemory exists can conflict with (1):\n    //   (1) holds cc and wants sent_journal; (2) holds sent_journal and wants cc.\n    // Under multiple threads this can deadlock, so call need_ack up front to avoid the lock-order inversion.\n    ArcRcvdJournal::ack_package(space.as_ref().as_ref(), cc.need_ack(space.epoch()))\n}\n\nimpl Burst {\n    fn assembler<'a>(&'a self) -> Result<PacketsAssembler<'a>, BurstError> {\n        PacketsAssembler::new(\n            &self.cid_registry,\n            &self.path.dcid_cell,\n            &self.path.anti_amplifier,\n            &self.path.cc,\n            self.path.tx_waker.clone(),\n            &self.initial_token,\n            self.spin,\n        )\n    }\n\n    fn load_spaces(\n        &self,\n        DataSources {\n            initial: initial_data_sources,\n            zero_rtt: zero_rtt_data_sources,\n            handshake: handshake_data_sources,\n            one_rtt: one_rtt_data_sources,\n        }: &mut DataSources,\n        mut buffer: &mut [u8],\n    ) -> Result<(usize, PacketContent), BurstError> {\n        let Self {\n            path,\n            spaces,\n            tls_handshake,\n            ..\n        } = self;\n\n        let initial_space = spaces.initial().as_ref();\n        let handshake_space = spaces.handshake().as_ref();\n        let data_space = spaces.data().as_ref();\n\n        let origin = buffer.remaining_mut();\n        let mut packet_content = PacketContent::default();\n\n        let mut assembler = self.assembler()?;\n        let mut signals = 
Signals::empty();\n\n        let Ok(tls_fin) = tls_handshake.is_finished() else {\n            return Err(BurstError::PathDeactived);\n        };\n\n        match assembler.assemble(\n            initial_space,\n            &mut Packages((ack_package(initial_space, &path.cc), initial_data_sources)),\n            buffer,\n            &mut packet_content,\n        ) {\n            Ok(bytes_sent) => buffer = buffer[bytes_sent..].as_mut(),\n            Err(s) => signals |= s,\n        };\n\n        let loaded_initial = buffer.remaining_mut() != origin;\n\n        if !tls_fin {\n            match assembler.assemble::<ZeroRttHeader, _, _>(\n                data_space,\n                zero_rtt_data_sources,\n                buffer,\n                &mut packet_content,\n            ) {\n                Ok(bytes_sent) => buffer = buffer[bytes_sent..].as_mut(),\n                Err(s) => signals |= s,\n            }\n        }\n\n        match assembler.assemble(\n            handshake_space,\n            &mut Packages((\n                ack_package(handshake_space, &path.cc),\n                handshake_data_sources,\n            )),\n            buffer,\n            &mut packet_content,\n        ) {\n            Ok(bytes_sent) => buffer = buffer[bytes_sent..].as_mut(),\n            Err(s) => signals |= s,\n        }\n\n        if tls_fin {\n            let result = if path.validated.load(Acquire) {\n                assembler.assemble::<OneRttHeader, _, _>(\n                    data_space,\n                    &mut Packages((\n                        ack_package(data_space, &path.cc),\n                        &path.challenge_sndbuf,\n                        &path.response_sndbuf,\n                        one_rtt_data_sources,\n                        loaded_initial.then_some(PadToFull),\n                        PadProbe,\n                    )),\n                    buffer,\n                    &mut packet_content,\n                )\n            } else {\n                
assembler.assemble::<OneRttHeader, _, _>(\n                    data_space,\n                    &mut Packages((\n                        ack_package(data_space, &path.cc),\n                        &path.challenge_sndbuf,\n                        &path.response_sndbuf,\n                        loaded_initial.then_some(PadToFull),\n                        PadProbe,\n                    )),\n                    buffer,\n                    &mut packet_content,\n                )\n            };\n\n            match result {\n                Ok(bytes_sent) => buffer = buffer[bytes_sent..].as_mut(),\n                Err(s) => signals |= s,\n            }\n        }\n\n        if loaded_initial {\n            assert!(buffer.remaining_mut() != origin);\n            buffer.put_bytes(0, buffer.remaining_mut());\n            return Ok((origin, packet_content));\n        }\n\n        let sent_bytes = origin - buffer.remaining_mut();\n        (sent_bytes > 0)\n            .then_some((sent_bytes, packet_content))\n            .ok_or(BurstError::Signals(signals))\n    }\n}\n\nstruct PingSource {\n    need_send_ack_eliciting: usize,\n}\n\nimpl<Target: ?Sized> Package<Target> for PingSource\nwhere\n    PingFrame: Package<Target>,\n{\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        if self.need_send_ack_eliciting > 0 {\n            return PingFrame.dump(target);\n        }\n        // TODO: refactor signal names\n        Err(Signals::PING)\n    }\n}\n\nfn ping_package(cc: &ArcCC, epoch: Epoch) -> PingSource {\n    // avoid deadlock, same as ack_package\n    PingSource {\n        need_send_ack_eliciting: cc.need_send_ack_eliciting(epoch),\n    }\n}\n\nimpl Burst {\n    fn load_ping(&self, buffer: &mut [u8]) -> Result<(usize, PacketContent), BurstError> {\n        let Self { spaces, path, .. 
} = self;\n\n        let mut assembler = self.assembler()?;\n        let mut signals = Signals::empty();\n        let mut packet_content = PacketContent::default();\n\n        for &epoch in Epoch::iter().rev() {\n            let result = match epoch {\n                Epoch::Data => {\n                    let ack_package = ack_package(spaces.data().as_ref(), &path.cc);\n                    let ping_package = ping_package(&path.cc, epoch);\n                    assembler.assemble::<OneRttHeader, _, _>(\n                        spaces.data().as_ref(),\n                        &mut Packages((ack_package, ping_package, PadToFull)),\n                        buffer,\n                        &mut packet_content,\n                    )\n                }\n                Epoch::Handshake => {\n                    let ack_package = ack_package(spaces.handshake().as_ref(), &path.cc);\n                    let ping_package = ping_package(&path.cc, epoch);\n                    assembler.assemble(\n                        spaces.handshake().as_ref(),\n                        &mut Packages((ack_package, ping_package, PadToFull)),\n                        buffer,\n                        &mut packet_content,\n                    )\n                }\n                Epoch::Initial => {\n                    let ack_package = ack_package(spaces.initial().as_ref(), &path.cc);\n                    let ping_package = ping_package(&path.cc, epoch);\n                    assembler.assemble(\n                        spaces.initial().as_ref(),\n                        &mut Packages((ack_package, ping_package, PadToFull)),\n                        buffer,\n                        &mut packet_content,\n                    )\n                }\n            };\n\n            match result {\n                Ok(sent_bytes) => return Ok((sent_bytes, packet_content)),\n                Err(s) => signals |= s,\n            }\n        }\n\n        Err(BurstError::Signals(signals))\n    }\n\n    fn 
load_heartbeat(&self, buffer: &mut [u8]) -> Result<(usize, PacketContent), BurstError> {\n        let Self { spaces, path, .. } = self;\n        let mut assembler = self.assembler()?;\n        let mut packet_content = PacketContent::default();\n        match assembler.assemble::<OneRttHeader, _, _>(\n            spaces.data().as_ref(),\n            &path.heartbeat_sndbuf,\n            buffer,\n            &mut packet_content,\n        ) {\n            Ok(sent_bytes) => Ok((sent_bytes, packet_content)),\n            Err(s) => Err(BurstError::Signals(s)),\n        }\n    }\n\n    pub async fn burst<'b>(\n        &self,\n        data_sources: &mut DataSources,\n        buffers: &'b mut Vec<Vec<u8>>,\n    ) -> Result<Vec<io::IoSlice<'b>>, BurstError> {\n        let Ok(max_segments) = self.path.interface.max_segments() else {\n            return Err(BurstError::PathDeactived);\n        };\n        let Ok(max_segment_size) = self.path.interface.max_segment_size() else {\n            return Err(BurstError::PathDeactived);\n        };\n\n        if buffers.len() < max_segments {\n            buffers.resize_with(max_segments, || vec![0; max_segment_size]);\n        }\n\n        use core::ops::ControlFlow::*;\n\n        let reversed_size = ForwardHeader::encoding_size(&self.path.pathway);\n\n        let (Break(result) | Continue(result)) = buffers\n            .iter_mut()\n            .map(move |buffer| {\n                if buffer.len() < max_segment_size {\n                    buffer.resize(max_segment_size, 0);\n                }\n                &mut buffer[..max_segment_size]\n            })\n            .map(move |segment| {\n                let buffer_size = segment.len().min(self.path.mtu() as _);\n                let buffer = &mut segment[..buffer_size][reversed_size..];\n\n                self.load_spaces(data_sources, buffer)\n                    .inspect(|(_, packet_content)| {\n                        self.path.idle_timer.on_sent(*packet_content);\n              
      })\n                    .or_else(|error| match error {\n                        BurstError::Signals(signals) => {\n                            self.load_ping(buffer).map_err(|e| match e {\n                                BurstError::Signals(s) => BurstError::Signals(signals | s),\n                                e @ BurstError::PathDeactived => e,\n                            })\n                        }\n                        e @ BurstError::PathDeactived => Err(e),\n                    })\n                    .or_else(|error| match error {\n                        BurstError::Signals(signals) => {\n                            self.load_heartbeat(buffer).map_err(|e| match e {\n                                BurstError::Signals(s) => BurstError::Signals(signals | s),\n                                e @ BurstError::PathDeactived => e,\n                            })\n                        }\n                        e @ BurstError::PathDeactived => Err(e),\n                    })\n                    .map(|(packet_size, _)| {\n                        if reversed_size > 0 {\n                            let (mut header, payload) = segment.split_at_mut(reversed_size);\n                            let forward_hdr = ForwardHeader::new(\n                                0,\n                                // FIXME: unwrap\n                                &self.path.pathway,\n                                payload,\n                            );\n                            tracing::trace!(?forward_hdr, link=%self.path.link(),\"put forward header\");\n                            header.put_forward_header(&forward_hdr);\n                        }\n                        io::IoSlice::new(&segment[..reversed_size + packet_size])\n                    })\n            })\n            .try_fold(\n                Ok(Vec::with_capacity(max_segments)),\n                |segments, load_result| match (segments, load_result) {\n                    (Ok(segments), 
Err(signals)) if segments.is_empty() => Break(Err(signals)),\n                    (Ok(segments), Err(_signals)) => Break(Ok(segments)),\n                    (Ok(mut segments), Ok(segment))\n                        if segment.len() < segments.last().copied().unwrap_or_default() =>\n                    {\n                        segments.push(segment.len());\n                        Break(Ok(segments))\n                    }\n                    (Ok(mut segments), Ok(segment)) => {\n                        segments.push(segment.len());\n                        Continue(Ok(segments))\n                    }\n                    (Err(_), _) => unreachable!(\"segments should not be Err in this context\"),\n                },\n            );\n\n        Ok(result?\n            .iter()\n            .zip(buffers)\n            .map(|(&len, buffer)| io::IoSlice::new(&buffer[..len]))\n            .collect())\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path/drive.rs",
    "content": "use qcongestion::Transport;\nuse tokio::time::Duration;\n\nuse crate::{path::PathDeactivated, tls::ArcTlsHandshake};\n\nimpl super::Path {\n    pub async fn drive(&self, _tls_handshake: ArcTlsHandshake) -> Result<(), PathDeactivated> {\n        loop {\n            tokio::time::sleep(Duration::from_millis(10)).await;\n            if let Some(frame) = self.idle_timer.health()? {\n                self.heartbeat_sndbuf.write(frame);\n            }\n            self.cc.do_tick()?;\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path/error.rs",
    "content": "use derive_more::From;\nuse qbase::{error::Error as QuicError, time::TimeOut};\nuse qcongestion::TooManyPtos;\nuse qinterface::bind_uri::BindUri;\nuse thiserror::Error;\n\nuse crate::path::validate::ValidateFailure;\n\n#[derive(Debug, From, Error)]\npub enum CreatePathFailure {\n    #[error(\"Network interface not found for bind URI: {0}\")]\n    NoInterface(BindUri),\n    #[error(\"Connection is closed\")]\n    ConnectionClosed(QuicError),\n}\n\n#[derive(Debug, From, Error)]\npub enum PathDeactivated {\n    #[error(\"Path validation failed\")]\n    Invalid(#[source] ValidateFailure),\n    #[error(transparent)]\n    Idle(TimeOut),\n    #[error(\"Lost path state\")]\n    Lost(#[source] TooManyPtos),\n    #[error(\"Failed to send packets on path\")]\n    Io(#[source] std::io::Error),\n    #[error(\"Manually removed by application\")]\n    App,\n}\n"
  },
  {
    "path": "qconnection/src/path/paths.rs",
    "content": "use std::{\n    future::Future,\n    sync::{Arc, Mutex, Weak},\n    time::Duration,\n};\n\nuse dashmap::DashMap;\nuse derive_more::Deref;\nuse qbase::{\n    Epoch,\n    cid::ConnectionId,\n    error::{ErrorKind, QuicError},\n    net::{addr::EndpointAddr, route::Pathway, tx::ArcSendWakers},\n};\nuse qcongestion::Transport;\nuse qevent::telemetry::Instrument;\nuse tokio_util::task::AbortOnDropHandle;\nuse tracing::Instrument as _;\n\nuse super::Path;\nuse crate::{\n    ArcRemoteCids,\n    events::{ArcEventBroker, EmitEvent, Event},\n    path::{CreatePathFailure, PathDeactivated},\n};\n\n#[derive(Deref)]\npub struct PathContext {\n    #[deref]\n    path: Arc<Path>,\n    _task: AbortOnDropHandle<()>,\n}\n\n#[derive(Clone)]\npub struct ArcPathContexts {\n    paths: Arc<DashMap<Pathway, PathContext>>,\n    tx_wakers: ArcSendWakers,\n    broker: ArcEventBroker,\n    initial_path: Arc<Mutex<Option<Weak<Path>>>>,\n}\n\nimpl ArcPathContexts {\n    pub fn new(tx_wakers: ArcSendWakers, broker: ArcEventBroker) -> Self {\n        Self {\n            paths: Default::default(),\n            tx_wakers,\n            broker,\n            initial_path: Arc::default(),\n        }\n    }\n\n    pub fn assign_handshake_path(\n        &self,\n        path: &Arc<Path>,\n        remote_cids: &ArcRemoteCids,\n        initial_dcid: ConnectionId,\n    ) -> bool {\n        let mut handshake_path = self.initial_path.lock().unwrap();\n        if handshake_path.is_some() {\n            return false;\n        }\n        remote_cids.apply_initial_dcid(initial_dcid, &path.dcid_cell);\n        *handshake_path = Some(Arc::downgrade(path));\n        true\n    }\n\n    pub fn handshake_path(&self) -> Option<Arc<Path>> {\n        self.initial_path\n            .lock()\n            .unwrap()\n            .clone()\n            .expect(\"unreachable: Handshake packet received before first initial packet processed\")\n            .upgrade()\n    }\n\n    pub fn get_or_try_create_with<T>(\n     
   &self,\n        pathway: Pathway,\n        try_create: impl FnOnce() -> Result<(Arc<Path>, T), CreatePathFailure>,\n    ) -> Result<Arc<Path>, CreatePathFailure>\n    where\n        T: Future<Output = Result<(), PathDeactivated>> + Send + 'static,\n    {\n        match self.paths.entry(pathway) {\n            dashmap::Entry::Occupied(occupied_entry) => Ok(occupied_entry.get().path.clone()),\n            dashmap::Entry::Vacant(vacant_entry) => {\n                let (path, task) = try_create()?;\n                self.tx_wakers.insert(pathway, &path.tx_waker);\n                let paths = self.clone();\n                let task = AbortOnDropHandle::new(tokio::spawn(\n                    async move {\n                        let reason = task.await.unwrap_err();\n                        paths.remove(&pathway, &reason);\n                    }\n                    .instrument_in_current()\n                    .in_current_span(),\n                ));\n                Ok(vacant_entry\n                    .insert(PathContext { path, _task: task })\n                    .clone())\n            }\n        }\n    }\n\n    pub fn get(&self, pathway: &Pathway) -> Option<Arc<Path>> {\n        self.paths.get(pathway).map(|p| p.path.clone())\n    }\n\n    pub fn remove(&self, pathway: &Pathway, reason: &PathDeactivated) {\n        if self.paths.remove(pathway).is_some() {\n            self.tx_wakers.remove(pathway);\n            tracing::debug!(target: \"quic\", %pathway, %reason, \"path deactivated\");\n            if self.is_empty() {\n                let error = QuicError::with_default_fty(\n                    ErrorKind::NoViablePath,\n                    format!(\"No viable path exist, last path removed because: {reason}\"),\n                );\n                self.broker.emit(Event::Failed(error));\n            }\n        }\n    }\n\n    pub fn is_empty(&self) -> bool {\n        self.paths.is_empty()\n    }\n\n    pub fn max_pto_duration(&self) -> Option<Duration> {\n      
  self.paths.iter().map(|p| p.cc().get_pto(Epoch::Data)).max()\n    }\n\n    pub fn paths<C: FromIterator<(Pathway, Arc<Path>)>>(&self) -> C {\n        self.paths\n            .iter()\n            .map(|p| (*p.key(), p.path.clone()))\n            .collect()\n    }\n\n    pub fn discard_initial_and_handshake_space(&self) {\n        self.paths.iter().for_each(|p| {\n            p.cc().discard_epoch(Epoch::Initial);\n            p.cc().discard_epoch(Epoch::Handshake);\n        });\n    }\n\n    pub fn clear(&self) {\n        self.paths.clear();\n    }\n\n    pub fn on_path_validated(&self, pathway: Pathway) {\n        if matches!(pathway.remote(), EndpointAddr::Direct { .. }) {\n            self.paths.iter().for_each(|p| {\n                if matches!(p.pathway.remote(), EndpointAddr::Direct { .. }) {\n                    p.path.deactivate();\n                }\n            });\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path/util.rs",
    "content": "use std::{\n    pin::Pin,\n    sync::Mutex,\n    task::{Context, Poll},\n};\n\nuse bytes::BufMut;\nuse futures::StreamExt;\nuse qbase::{\n    net::tx::{ArcSendWaker, Signals},\n    packet::{Package, PacketContent},\n    util::ArcAsyncDeque,\n};\n\n/// A buffer that contains a single frame to be sent.\n///\n/// This struct impl [`Default`], and the `new` method is not provided.\npub struct SendBuffer<T> {\n    item: Mutex<Option<T>>,\n    tx_waker: ArcSendWaker,\n}\n\nimpl<T> SendBuffer<T> {\n    pub fn new(tx_waker: ArcSendWaker) -> Self {\n        Self {\n            item: Default::default(),\n            tx_waker,\n        }\n    }\n\n    /// Write a frame to the buffer.\n    ///\n    /// [`SendBuffer`] can only buffer one frame at a time. If you write a new frame to the buffer before the previous\n    /// frame is sent, the previous frame will be overwritten.\n    pub fn write(&self, frame: T) {\n        self.tx_waker.wake_by(Signals::TRANSPORT);\n        *self.item.lock().unwrap() = Some(frame);\n    }\n}\n\nimpl<F> SendBuffer<F> {\n    /// Try load the frame to be sent into the `packet`.\n    pub fn try_load_frames_into<P: ?Sized>(&self, packet: &mut P) -> Result<(), Signals>\n    where\n        for<'a> &'a F: Package<P>,\n    {\n        let mut guard = self.item.lock().unwrap();\n        match guard.as_ref() {\n            Some(mut frame) => {\n                frame.dump(packet)?;\n                guard.take().unwrap();\n                Ok(())\n            }\n            None => Err(Signals::TRANSPORT),\n        }\n    }\n}\n\nimpl<F, P: ?Sized> Package<P> for &SendBuffer<F>\nwhere\n    for<'a> &'a F: Package<P>,\n{\n    #[inline]\n    fn dump(&mut self, into: &mut P) -> Result<PacketContent, Signals> {\n        self.try_load_frames_into(into)?;\n        Ok(PacketContent::EffectivePayload)\n    }\n}\n\n/// A buffer to cache received frames.\n///\n///\n/// [`Stream`] is implemented for this struct, you can use it as a stream to receive 
frames.\n///\n/// You can also use the [`RecvBuffer::receive`] method to wait for a frame to be received.\n///\n/// # Example\n/// ```rust\n/// use qconnection::path::RecvBuffer;\n/// use futures::StreamExt;\n/// # async fn demo() {\n/// let rcv_buf = RecvBuffer::default();\n///\n/// tokio::spawn({\n///     let rcv_buf = rcv_buf.clone();\n///     async move {\n///         let value = rcv_buf.receive().await;\n///         assert_eq!(value, Some(42u32));\n///     }\n/// });\n///\n/// rcv_buf.write(42u32);\n/// # }\n/// ```\n///\n/// [`Stream`]: futures::Stream\n/// [`Future`]: core::future::Future\n#[derive(Clone, Debug, Default)]\npub struct RecvBuffer<T>(ArcAsyncDeque<T>);\n\nimpl<T> RecvBuffer<T> {\n    /// Create a new empty [`RecvBuffer`].\n    pub fn new() -> Self {\n        Self(ArcAsyncDeque::with_capacity(2))\n    }\n\n    /// Write a frame to the buffer.\n    pub fn write(&self, value: T) {\n        self.0.push_back(value);\n    }\n\n    /// Waiting for a frame to be received.\n    pub async fn receive(&self) -> Option<T> {\n        let mut this = self;\n        this.next().await\n    }\n\n    /// Dismiss the buffer\n    ///\n    /// Append received frames will be Ignored, existing frames will be dropped, the future will return `None`.\n    pub fn dismiss(&self) {\n        self.0.close();\n    }\n}\n\nimpl<T> futures::Stream for RecvBuffer<T> {\n    type Item = T;\n\n    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        self.0.poll_pop(cx)\n    }\n}\n\nimpl<T> futures::Stream for &RecvBuffer<T> {\n    type Item = T;\n\n    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        self.0.poll_pop(cx)\n    }\n}\n\n/// The constraints for sending data, appllied to the data buffer.\n#[derive(Debug, Clone, Copy)]\npub struct Constraints {\n    /// Credit limit which is from anti-amplification attack limit, appllied to all data, including the packet header.\n    ///\n    /// When 
the verification is passed, the limit will be removed, and the value is `usize::MAX`.\n    // 信用额度，源于抗放大攻击；当验证通过后，将不再设限，表现为usize::MAX\n    // 作用于所有数据，包括包头\n    credit_limit: usize,\n    /// Send quota, which is from the congestion control algorithm. As time goes by, the amount of data that should be\n    /// sent.\n    ///\n    /// It is applied to ack-eliciting data packets, unless the packet only sends Padding/Ack/Ccf frames.\n    // 发送配额，源于拥塞控制算法，随着时间的流逝，得到的本次Burst应当发送的数据量\n    // 作用于ack-eliciting数据包，除非该包只发送Padding/Ack/Ccf帧\n    send_quota: usize,\n}\n\nimpl Constraints {\n    /// Create a new [`Constraints`] with the given credit limit and send quota.\n    pub fn new(credit_limit: usize, send_quota: usize) -> Self {\n        Self {\n            credit_limit,\n            send_quota,\n        }\n    }\n\n    /// Return whether the constraints are available(More frames can be send).\n    ///\n    /// The conditions for ending is the credit limit is used up. Even if the send quota is not used up, packets that\n    /// only contain Padding/Ack/Ccf can still be sent.\n    ///\n    // 结束条件\n    // - 抗放大攻击额度用完\n    // - 抗放大攻击额度没用完，但发送配额用完\n    //  + 此时，仍可以仅发送Ack帧\n    pub fn is_available(&self) -> bool {\n        self.credit_limit > 0\n    }\n\n    /// Constrain the buffer, make it smaller than the limit and quota.\n    pub fn constrain<'b>(&self, buf: &'b mut [u8]) -> &'b mut [u8] {\n        let min_len = buf\n            .remaining_mut()\n            .min(self.credit_limit)\n            .min(self.send_quota);\n        &mut buf[..min_len]\n    }\n\n    pub fn available(&self) -> usize {\n        self.credit_limit.min(self.send_quota)\n    }\n\n    /// Commit consumption of credit limit and send quota.\n    ///\n    /// The `len` is how much data was written to the constrained buffer, `is_just_ack` instruct whether the send quota\n    /// should be consume.\n    ///\n    /// See [section-12.4-14.4.1](https//rfc-editor.org/rfc/rfc9000.html#section-12.4-14.4.1)\n    /// 
and [table 3](https//rfc-editor.org/rfc/rfc9000.html#table-3)\n    /// of [RFC9000](https//rfc-editor.org/rfc/rfc9000.html) for more details.\n    pub fn commit(&mut self, len: usize, in_flight: bool) {\n        self.credit_limit = self.credit_limit.saturating_sub(len);\n        if in_flight {\n            self.send_quota = self.send_quota.saturating_sub(len);\n        }\n    }\n}\n\n/// The struct that can be constrained by the [`Constraints`], usually a buffer.\npub trait ApplyConstraints {\n    /// Apply the [`Constraints`] on the struct.\n    fn apply(self, constraints: &Constraints) -> Self;\n}\n\nimpl ApplyConstraints for &mut [u8] {\n    fn apply(self, constraints: &Constraints) -> Self {\n        constraints.constrain(self)\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path/validate.rs",
    "content": "use std::{sync::atomic::Ordering, time::Duration};\n\nuse qbase::{frame::PathChallengeFrame, net::tx::Signals};\nuse qcongestion::Transport;\nuse thiserror::Error;\nuse tokio::time::Instant;\n\n#[derive(Debug, Error, Clone, Copy)]\npub enum ValidateFailure {\n    #[error(\n        \"Path validation abort due to path inactivity by other reasons(usually connection closed)\"\n    )]\n    PathInactive,\n    #[error(\"Path validation failed after {0} ms\", elapsed.as_millis())]\n    Timeout { elapsed: Duration },\n}\n\nimpl super::Path {\n    pub fn validated(&self) {\n        self.validated.store(true, Ordering::Release);\n        self.tx_waker.wake_by(Signals::PATH_VALIDATE);\n    }\n\n    pub async fn validate(&self) -> Result<(), ValidateFailure> {\n        let challenge = PathChallengeFrame::random();\n        let start = Instant::now();\n        for _ in 0..30 {\n            let timeout_duration = self.cc().get_pto(qbase::Epoch::Data);\n            self.challenge_sndbuf.write(challenge);\n            match tokio::time::timeout(timeout_duration, self.response_rcvbuf.receive()).await {\n                Ok(Some(response)) if *response == *challenge => {\n                    self.validated();\n                    self.anti_amplifier.grant();\n                    tracing::debug!(target: \"quic\", pathway=%self.pathway, \"path validated successfully\");\n                    return Ok(());\n                }\n                // 外部发生变化，导致路径验证任务作废\n                Ok(None) => return Err(ValidateFailure::PathInactive),\n                // 超时或者收到不对的response，按\"停-等协议\"，继续再发一次Challenge，最多3次\n                _ => continue,\n            }\n        }\n        Err(ValidateFailure::Timeout {\n            elapsed: start.elapsed(),\n        })\n    }\n}\n"
  },
  {
    "path": "qconnection/src/path.rs",
    "content": "use std::{\n    io,\n    sync::{\n        Arc,\n        atomic::{AtomicBool, AtomicU16, Ordering},\n    },\n};\n\nuse qbase::{\n    Epoch,\n    error::Error,\n    frame::{PathChallengeFrame, PathResponseFrame, PingFrame, io::ReceiveFrame},\n    net::{\n        route::{Line, Link, Pathway, Route},\n        tx::ArcSendWaker,\n    },\n    packet::PacketContent,\n    param::ParameterId,\n    time::ArcIdleTimer,\n};\nuse qcongestion::{Algorithm, ArcCC, Feedback, HandshakeStatus, MSS, PathStatus, Transport};\nuse qevent::{quic::connectivity::PathAssigned, telemetry::Instrument};\nuse qinterface::{\n    Interface,\n    bind_uri::BindUri,\n    io::{IO, IoExt},\n};\nuse tokio::time::Duration;\n\nmod aa;\nmod burst;\nmod drive;\npub mod error;\npub mod paths;\npub mod util;\nmod validate;\npub use aa::*;\npub use burst::PacketSpace;\npub use error::*;\npub use paths::*;\nuse tokio_util::task::AbortOnDropHandle;\nuse tracing::Instrument as _;\npub use util::*;\n\nuse crate::{ArcDcidCell, Components, path::burst::BurstError};\n// pub mod burst;\n\npub struct Path {\n    interface: Interface,\n    validated: AtomicBool,\n    active: AtomicBool,\n    link: Link,\n    pathway: Pathway,\n    cc: ArcCC,\n    dcid_cell: ArcDcidCell,\n    anti_amplifier: AntiAmplifier,\n    idle_timer: ArcIdleTimer,\n    heartbeat_sndbuf: SendBuffer<PingFrame>,\n    challenge_sndbuf: SendBuffer<PathChallengeFrame>,\n    response_sndbuf: SendBuffer<PathResponseFrame>,\n    response_rcvbuf: RecvBuffer<PathResponseFrame>,\n    tx_waker: ArcSendWaker,\n    pmtu: Arc<AtomicU16>,\n    status: PathStatus,\n}\n\nimpl Components {\n    pub fn get_or_try_create_path(\n        &self,\n        bind_uri: BindUri,\n        link: Link,\n        pathway: Pathway,\n        is_probed: bool,\n    ) -> Result<Arc<Path>, CreatePathFailure> {\n        let try_create = || {\n            let interface = self\n                .interfaces\n                .borrow(&bind_uri)\n                
.ok_or(CreatePathFailure::NoInterface(bind_uri))?;\n            let dcid_cell = self.cid_registry.remote.apply_dcid();\n            let max_ack_delay = self\n                .parameters\n                .lock_guard()?\n                .get_local(ParameterId::MaxAckDelay)\n                .expect(\"unreachable: the default value is returned if unset\");\n\n            let is_initial_path = self.conn_state.try_entry_attempted(self, link)?;\n            qevent::event!(PathAssigned {\n                path_id: pathway.to_string(),\n                path_local: link.src,\n                path_remote: link.dst,\n            });\n\n            let path = Arc::new(Path::new(\n                interface,\n                link,\n                pathway,\n                dcid_cell,\n                max_ack_delay,\n                self.idle_config.timer(),\n                [\n                    Arc::new(\n                        self.spaces\n                            .initial()\n                            .tracker(self.crypto_streams[Epoch::Initial].clone()),\n                    ),\n                    Arc::new(\n                        self.spaces\n                            .handshake()\n                            .tracker(self.crypto_streams[Epoch::Handshake].clone()),\n                    ),\n                    Arc::new(self.spaces.data().tracker(\n                        self.crypto_streams[Epoch::Data].clone(),\n                        self.data_streams.clone(),\n                        self.reliable_frames.clone(),\n                    )),\n                ],\n                self.quic_handshake.status(),\n            ));\n\n            let validate = {\n                let path = path.clone();\n                let paths = self.paths.clone();\n                let tls_handshake = self.tls_handshake.clone();\n                let conn_state = self.conn_state.clone();\n                async move {\n                    if !is_probed {\n                        
path.grant_anti_amplification();\n                    }\n                    if tls_handshake.info().await.is_err() {\n                        return Ok(());\n                    }\n\n                    match paths.handshake_path() {\n                        Some(handshake_path) if Arc::ptr_eq(&handshake_path, &path) => {\n                            path.validated();\n                            Ok(())\n                        }\n                        _ => {\n                            if conn_state.handshaked().await.is_err() {\n                                return Ok(());\n                            }\n                            path.validate().await\n                        }\n                    }\n                }\n            };\n\n            let drive = {\n                let path = path.clone();\n                let tls_handshake = self.tls_handshake.clone();\n                async move { path.drive(tls_handshake).await }\n            };\n\n            let burst = {\n                let path = path.clone();\n                let mut packages = self.packages();\n                let burst = path.new_burst(self);\n                async move {\n                    let mut buffers = vec![];\n                    loop {\n                        match burst.burst(&mut packages, &mut buffers).await {\n                            Ok(segments) => path.send_packets(&segments).await?,\n                            Err(BurstError::Signals(s)) => path.tx_waker.wait_for(s).await,\n                            Err(BurstError::PathDeactived) => return io::Result::Ok(()),\n                        }\n                    }\n                }\n            };\n\n            let task = async move {\n                Err(tokio::select! 
{\n                    Ok(Err(e)) = AbortOnDropHandle::new(tokio::spawn(validate.instrument_in_current().in_current_span())) => PathDeactivated::from(e),\n                    Ok(Err(e)) = AbortOnDropHandle::new(tokio::spawn(drive.instrument_in_current().in_current_span())) => e,\n                    Ok(Err(e)) = AbortOnDropHandle::new(tokio::spawn(burst.instrument_in_current().in_current_span())) => PathDeactivated::from(e),\n                })\n            };\n\n            let task =\n                Instrument::instrument(task, qevent::span!(@current, path=pathway.to_string()))\n                    .in_current_span();\n\n            tracing::trace!(target: \"quic\", %pathway, %link, is_probed, is_initial_path, \"add new path\");\n\n            Ok((path, task))\n        };\n        self.paths.get_or_try_create_with(pathway, try_create)\n    }\n}\n\nimpl Path {\n    #[allow(clippy::too_many_arguments)]\n    pub fn new(\n        interface: Interface,\n        link: Link,\n        pathway: Pathway,\n        dcid_cell: ArcDcidCell,\n        max_ack_delay: Duration,\n        idle_timer: ArcIdleTimer,\n        feedbacks: [Arc<dyn Feedback>; 3],\n        handshake_status: Arc<HandshakeStatus>,\n    ) -> Self {\n        let pmtu = Arc::new(AtomicU16::new(MSS as u16));\n        let path_status = PathStatus::new(handshake_status, pmtu.clone());\n        let tx_waker = ArcSendWaker::new();\n\n        let cc = ArcCC::new(\n            Algorithm::NewReno,\n            max_ack_delay,\n            feedbacks,\n            path_status.clone(),\n            tx_waker.clone(),\n        );\n        Self {\n            interface,\n            link,\n            pathway,\n            cc,\n            dcid_cell,\n            validated: AtomicBool::new(false),\n            active: AtomicBool::new(true),\n            anti_amplifier: AntiAmplifier::new(tx_waker.clone()),\n            idle_timer,\n            heartbeat_sndbuf: SendBuffer::new(tx_waker.clone()),\n            
challenge_sndbuf: SendBuffer::new(tx_waker.clone()),\n            response_sndbuf: SendBuffer::new(tx_waker.clone()),\n            response_rcvbuf: Default::default(),\n            tx_waker,\n            pmtu,\n            status: path_status,\n        }\n    }\n\n    pub fn cc(&self) -> &ArcCC {\n        &self.cc\n    }\n\n    pub fn on_packet_rcvd(\n        &self,\n        epoch: Epoch,\n        pn: u64,\n        size: usize,\n        packet_content: PacketContent,\n    ) {\n        self.anti_amplifier.on_rcvd(size);\n        if size > 0 {\n            self.status.release_anti_amplification_limit();\n        }\n        self.idle_timer.on_rcvd(packet_content);\n        self.cc()\n            .on_pkt_rcvd(epoch, pn, packet_content.is_ack_eliciting());\n    }\n\n    pub fn grant_anti_amplification(&self) {\n        self.anti_amplifier.grant();\n        self.cc().grant_anti_amplification();\n    }\n\n    pub fn mtu(&self) -> u16 {\n        self.pmtu.load(Ordering::Acquire)\n    }\n\n    pub async fn send_packets(&self, bufs: &[io::IoSlice<'_>]) -> io::Result<()> {\n        self.anti_amplifier\n            .on_sent(bufs.iter().map(|s| s.len()).sum());\n        if self.anti_amplifier.balance().is_err() {\n            self.status.enter_anti_amplification_limit();\n        }\n        let line = Line::new(self.link, 64, None, self.mtu());\n        let route = Route::new(self.pathway, line);\n        self.interface.sendmmsg(bufs, route).await\n    }\n\n    pub fn deactivate(&self) {\n        self.active.store(false, Ordering::Release);\n    }\n\n    pub fn active(&self) {\n        self.active.store(true, Ordering::Release);\n    }\n\n    pub fn link(&self) -> &Link {\n        &self.link\n    }\n\n    pub fn pathway(&self) -> &Pathway {\n        &self.pathway\n    }\n\n    pub fn bind_uri(&self) -> BindUri {\n        self.interface.bind_uri()\n    }\n}\n\nimpl Drop for Path {\n    fn drop(&mut self) {\n        self.response_rcvbuf.dismiss();\n    }\n}\n\nimpl 
ReceiveFrame<PathChallengeFrame> for Path {\n    type Output = ();\n\n    fn recv_frame(&self, frame: PathChallengeFrame) -> Result<Self::Output, Error> {\n        self.response_sndbuf.write(frame.into());\n        Ok(())\n    }\n}\n\nimpl ReceiveFrame<PathResponseFrame> for Path {\n    type Output = ();\n\n    fn recv_frame(&self, frame: PathResponseFrame) -> Result<Self::Output, Error> {\n        self.response_rcvbuf.write(frame);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "qconnection/src/space/data.rs",
    "content": "use std::sync::Arc;\n\nuse qbase::{\n    Epoch, GetEpoch,\n    error::{Error, QuicError},\n    frame::{\n        ConnectionCloseFrame, Frame, ReliableFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::{\n        route::{Link, Pathway},\n        tx::Signals,\n    },\n    packet::{\n        self,\n        header::{GetType, OneRttHeader, long::ZeroRttHeader},\n        io::PacketSpace,\n        keys::{ArcOneRttKeys, ArcZeroRttKeys, DirectionalKeys},\n        r#type::Type,\n    },\n    util::BoundQueue,\n};\nuse qcongestion::{ArcCC, Feedback, Transport};\nuse qevent::{\n    quic::{\n        PacketHeader, PacketType, QuicFramesCollector,\n        recovery::{PacketLost, PacketLostTrigger},\n    },\n    telemetry::Instrument,\n};\nuse qinterface::{\n    bind_uri::BindUri,\n    component::route::{CipherPacket, PlainPacket},\n};\nuse qrecovery::crypto::CryptoStream;\nuse tokio::sync::mpsc;\n\nuse crate::{\n    ArcReliableFrameDeque, Components, DataJournal, DataStreams, GuaranteedFrame,\n    events::{ArcEventBroker, EmitEvent, Event},\n    path::{self, Path, error::CreatePathFailure},\n    space::{\n        AckDataSpace, FlowControlledDataStreams, assemble_closing_packet, filter_odcid_packet,\n        pipe, read_plain_packet,\n    },\n    state,\n    termination::Terminator,\n    tx::{PacketWriter, TrivialPacketWriter},\n};\n\npub type CipherZeroRttPacket = CipherPacket<ZeroRttHeader>;\npub type PlainZeroRttPacket = PlainPacket<ZeroRttHeader>;\npub type ReceivedZeroRttFrom = (CipherZeroRttPacket, (BindUri, Pathway, Link));\n\npub type CipherOneRttPacket = CipherPacket<OneRttHeader>;\npub type PlainOneRttPacket = PlainPacket<OneRttHeader>;\npub type ReceivedOneRttFrom = (CipherOneRttPacket, (BindUri, Pathway, Link));\n\npub struct DataSpace {\n    zero_rtt_keys: ArcZeroRttKeys,\n    one_rtt_keys: ArcOneRttKeys,\n    journal: DataJournal,\n}\n\nimpl AsRef<DataJournal> for DataSpace {\n    fn as_ref(&self) -> &DataJournal {\n        &self.journal\n 
   }\n}\n\nimpl DataSpace {\n    pub fn new(zero_rtt_keys: ArcZeroRttKeys) -> Self {\n        Self {\n            zero_rtt_keys,\n            one_rtt_keys: ArcOneRttKeys::new_pending(),\n            journal: DataJournal::with_capacity(16, None),\n        }\n    }\n\n    pub async fn decrypt_0rtt_packet(\n        &self,\n        packet: CipherZeroRttPacket,\n    ) -> Option<Result<PlainZeroRttPacket, QuicError>> {\n        // TODO: the client should never receive a 0-RTT packet...\n        match self.zero_rtt_keys.get_decrypt_keys()?.await {\n            Some(keys) => {\n                packet.decrypt_long_packet(keys.header.as_ref(), keys.packet.as_ref(), |pn| {\n                    self.journal.of_rcvd_packets().decode_pn(pn)\n                })\n            }\n            None => {\n                packet.drop_on_key_unavailable();\n                None\n            }\n        }\n    }\n\n    pub async fn decrypt_1rtt_packet(\n        &self,\n        packet: CipherOneRttPacket,\n    ) -> Option<Result<PlainOneRttPacket, QuicError>> {\n        match self.one_rtt_keys.get_remote_keys().await {\n            Some((hpk, pk)) => packet.decrypt_short_packet(hpk.as_ref(), &pk, |pn| {\n                self.journal.of_rcvd_packets().decode_pn(pn)\n            }),\n            None => {\n                packet.drop_on_key_unavailable();\n                None\n            }\n        }\n    }\n\n    pub fn is_one_rtt_keys_ready(&self) -> bool {\n        self.one_rtt_keys.get_local_keys().is_some()\n    }\n\n    pub fn is_zero_rtt_avaliable(&self) -> bool {\n        self.zero_rtt_keys.get_encrypt_keys().is_some()\n    }\n\n    pub fn one_rtt_keys(&self) -> ArcOneRttKeys {\n        self.one_rtt_keys.clone()\n    }\n\n    pub fn zero_rtt_keys(&self) -> ArcZeroRttKeys {\n        self.zero_rtt_keys.clone()\n    }\n\n    pub(crate) fn journal(&self) -> &DataJournal {\n        &self.journal\n    }\n\n    pub fn tracker(\n        &self,\n        crypto_stream: CryptoStream,\n        
streams: DataStreams,\n        reliable_frames: ArcReliableFrameDeque,\n    ) -> DataTracker {\n        DataTracker {\n            journal: self.journal.clone(),\n            crypto_stream,\n            streams,\n            reliable_frames,\n        }\n    }\n}\n\nimpl GetEpoch for DataSpace {\n    fn epoch(&self) -> Epoch {\n        Epoch::Data\n    }\n}\n\nimpl path::PacketSpace<ZeroRttHeader> for DataSpace {\n    type JournalFrame = GuaranteedFrame;\n\n    fn new_packet<'b, 's>(\n        &'s self,\n        header: ZeroRttHeader,\n        cc: &ArcCC,\n        buffer: &'b mut [u8],\n    ) -> Result<PacketWriter<'b, 's, GuaranteedFrame>, Signals> {\n        if self.one_rtt_keys.get_local_keys().is_some() {\n            return Err(Signals::TLS_FIN); // should 1rtt\n        }\n\n        let Some(keys) = self.zero_rtt_keys.get_encrypt_keys() else {\n            return Err(Signals::empty()); // no 0rtt keys, just skip 0rtt\n        };\n\n        let (retran_timeout, expire_timeout) = cc.retransmit_and_expire_time(Epoch::Data);\n        PacketWriter::new_long(\n            header,\n            buffer,\n            keys,\n            self.journal.as_ref(),\n            retran_timeout,\n            expire_timeout,\n        )\n    }\n}\n\nimpl PacketSpace<ZeroRttHeader> for DataSpace {\n    type PacketAssembler<'a> = TrivialPacketWriter<'a, 'a, GuaranteedFrame>;\n\n    #[inline]\n    fn new_packet<'a>(\n        &'a self,\n        header: ZeroRttHeader,\n        buffer: &'a mut [u8],\n    ) -> Result<Self::PacketAssembler<'a>, Signals> {\n        if self.one_rtt_keys.get_local_keys().is_some() {\n            return Err(Signals::TLS_FIN); // should 1rtt\n        }\n\n        let Some(keys) = self.zero_rtt_keys.get_encrypt_keys() else {\n            return Err(Signals::empty()); // no 0rtt keys, just skip 0rtt\n        };\n\n        TrivialPacketWriter::new_long(header, buffer, keys, self.journal.as_ref())\n    }\n}\n\nimpl path::PacketSpace<OneRttHeader> for DataSpace {\n   
 type JournalFrame = GuaranteedFrame;\n\n    fn new_packet<'b, 's>(\n        &'s self,\n        header: OneRttHeader,\n        cc: &ArcCC,\n        buffer: &'b mut [u8],\n    ) -> Result<PacketWriter<'b, 's, GuaranteedFrame>, Signals> {\n        let (hpk, pk) = self.one_rtt_keys.get_local_keys().ok_or(Signals::KEYS)?;\n        let (key_phase, pk) = pk.lock_guard().get_local();\n        let (retran_timeout, expire_timeout) = cc.retransmit_and_expire_time(Epoch::Data);\n        PacketWriter::new_short(\n            header,\n            buffer,\n            DirectionalKeys {\n                header: hpk,\n                packet: pk,\n            },\n            key_phase,\n            self.journal.as_ref(),\n            retran_timeout,\n            expire_timeout,\n        )\n    }\n}\n\nimpl PacketSpace<OneRttHeader> for DataSpace {\n    type PacketAssembler<'a> = TrivialPacketWriter<'a, 'a, GuaranteedFrame>;\n\n    #[inline]\n    fn new_packet<'a>(\n        &'a self,\n        header: OneRttHeader,\n        buffer: &'a mut [u8],\n    ) -> Result<Self::PacketAssembler<'a>, Signals> {\n        let (hpk, pk) = self.one_rtt_keys.get_local_keys().ok_or(Signals::KEYS)?;\n        let (key_phase, pk) = pk.lock_guard().get_local();\n        TrivialPacketWriter::new_short(\n            header,\n            buffer,\n            DirectionalKeys {\n                header: hpk,\n                packet: pk,\n            },\n            key_phase,\n            self.journal.as_ref(),\n        )\n    }\n}\n\nfn frame_dispathcer(\n    space: &DataSpace,\n    components: &Components,\n    event_broker: &ArcEventBroker,\n) -> impl for<'p> Fn(Frame, Type, &'p Path) + use<> {\n    let (ack_frames_entry, rcvd_ack_frames) = mpsc::unbounded_channel();\n    // Connection-level frames\n    let (max_data_frames_entry, rcvd_max_data_frames) = mpsc::unbounded_channel();\n    let (data_blocked_frames_entry, rcvd_data_blocked_frames) = mpsc::unbounded_channel();\n    let (new_cid_frames_entry, rcvd_new_cid_frames) = 
mpsc::unbounded_channel();\n    let (retire_cid_frames_entry, rcvd_retire_cid_frames) = mpsc::unbounded_channel();\n    let (handshake_done_frames_entry, rcvd_handshake_done_frames) = mpsc::unbounded_channel();\n    let (new_token_frames_entry, rcvd_new_token_frames) = mpsc::unbounded_channel();\n    // Data-level frames\n    let (crypto_frames_entry, rcvd_crypto_frames) = mpsc::unbounded_channel();\n    let (stream_ctrl_frames_entry, rcvd_stream_ctrl_frames) = mpsc::unbounded_channel();\n    let (stream_frames_entry, rcvd_stream_frames) = mpsc::unbounded_channel();\n    #[cfg(feature = \"datagram\")]\n    let (datagram_frames_entry, rcvd_datagram_frames) = mpsc::unbounded_channel();\n    let (punch_frames_entry, rcvd_punch_frames) = mpsc::unbounded_channel();\n    let (punch_hello_frames_entry, rcvd_punch_hello_frames) = mpsc::unbounded_channel();\n\n    let flow_controlled_data_streams = FlowControlledDataStreams::new(\n        components.data_streams.clone(),\n        components.flow_ctrl.clone(),\n    );\n\n    // Assemble the pipelines of frame processing\n    pipe(\n        rcvd_retire_cid_frames,\n        components.cid_registry.local.clone(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_new_cid_frames,\n        components.cid_registry.remote.clone(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_max_data_frames,\n        components.flow_ctrl.sender.clone(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_data_blocked_frames,\n        components.flow_ctrl.recver.clone(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_handshake_done_frames,\n        components\n            .quic_handshake\n            .discard_spaces_on_client_handshake_done(components.paths.clone()),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_crypto_frames,\n        components.crypto_streams[space.epoch()].incoming(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_stream_ctrl_frames,\n        
flow_controlled_data_streams.clone(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_stream_frames,\n        flow_controlled_data_streams,\n        event_broker.clone(),\n    );\n    #[cfg(feature = \"datagram\")]\n    pipe(\n        rcvd_datagram_frames,\n        components.datagram_flow.clone(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_ack_frames,\n        AckDataSpace::new(\n            &space.journal,\n            components.data_streams.clone(),\n            &components.crypto_streams[space.epoch()],\n        ),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_new_token_frames,\n        components.token_registry.clone(),\n        event_broker.clone(),\n    );\n    pipe(rcvd_punch_frames, components.clone(), event_broker.clone());\n    pipe(\n        rcvd_punch_hello_frames,\n        components.clone(),\n        event_broker.clone(),\n    );\n    let event_broker = event_broker.clone();\n    let rcvd_joural = space.journal.of_rcvd_packets();\n    move |frame: Frame, pty: packet::Type, path: &Path| match frame {\n        Frame::Ack(f) => {\n            path.cc().on_ack_rcvd(Epoch::Data, &f);\n            rcvd_joural.on_rcvd_ack(&f);\n            _ = ack_frames_entry.send(f)\n        }\n        Frame::NewToken(f) => _ = new_token_frames_entry.send(f),\n        Frame::MaxData(f) => _ = max_data_frames_entry.send(f),\n        Frame::NewConnectionId(f) => _ = new_cid_frames_entry.send(f),\n        Frame::RetireConnectionId(f) => _ = retire_cid_frames_entry.send(f),\n        Frame::HandshakeDone(f) => {\n            // See [Section 4.1.2](https://datatracker.ietf.org/doc/html/rfc9001#handshake-confirmed)\n            _ = handshake_done_frames_entry.send(f)\n        }\n        Frame::DataBlocked(f) => _ = data_blocked_frames_entry.send(f),\n        Frame::PathChallenge(f) => _ = path.recv_frame(f),\n        Frame::PathResponse(f) => _ = path.recv_frame(f),\n        Frame::StreamCtl(f) => _ = 
stream_ctrl_frames_entry.send(f),\n        Frame::Stream(f, data) => _ = stream_frames_entry.send((f, data)),\n        Frame::Crypto(f, bytes) => _ = crypto_frames_entry.send((f, bytes)),\n        #[cfg(feature = \"datagram\")]\n        Frame::Datagram(f, data) => _ = datagram_frames_entry.send((f, data)),\n        Frame::Close(f) if matches!(pty, Type::Short(_)) => event_broker.emit(Event::Closed(f)),\n        Frame::AddAddress(frame) => {\n            _ = punch_frames_entry.send((\n                path.bind_uri().clone(),\n                *path.pathway(),\n                *path.link(),\n                ReliableFrame::AddAddress(frame),\n            ))\n        }\n        Frame::RemoveAddress(frame) => {\n            _ = punch_frames_entry.send((\n                path.bind_uri().clone(),\n                *path.pathway(),\n                *path.link(),\n                ReliableFrame::RemoveAddress(frame),\n            ))\n        }\n        Frame::PunchMeNow(frame) => {\n            _ = punch_frames_entry.send((\n                path.bind_uri().clone(),\n                *path.pathway(),\n                *path.link(),\n                ReliableFrame::PunchMeNow(frame),\n            ))\n        }\n        Frame::PunchHello(frame) => {\n            _ = punch_hello_frames_entry.send((\n                path.bind_uri().clone(),\n                *path.pathway(),\n                *path.link(),\n                frame,\n            ))\n        }\n        Frame::PunchDone(frame) => {\n            _ = punch_frames_entry.send((\n                path.bind_uri().clone(),\n                *path.pathway(),\n                *path.link(),\n                ReliableFrame::PunchDone(frame),\n            ))\n        }\n        _ => {}\n    }\n}\n\nasync fn parse_normal_zero_rtt_packet(\n    (packet, (bind_uri, pathway, link)): ReceivedZeroRttFrom,\n    space: &DataSpace,\n    components: &Components,\n    dispatch_frame: impl Fn(Frame, Type, &Path),\n) -> Result<(), Error> {\n    let 
Some(packet) = space.decrypt_0rtt_packet(packet).await.transpose()? else {\n        return Ok(());\n    };\n\n    let path = match components.get_or_try_create_path(bind_uri, link, pathway, true) {\n        Ok(path) => path,\n        Err(CreatePathFailure::ConnectionClosed(..)) => {\n            packet.drop_on_conenction_closed();\n            return Ok(());\n        }\n        Err(CreatePathFailure::NoInterface(..)) => {\n            packet.drop_on_interface_not_found();\n            return Ok(());\n        }\n    };\n\n    let Some(packet) = filter_odcid_packet(packet, &components.specific) else {\n        return Ok(());\n    };\n\n    let packet_content = read_plain_packet(&packet, |frame| {\n        dispatch_frame(frame, packet.get_type(), &path);\n    })?;\n\n    space.journal.of_rcvd_packets().on_rcvd_pn(\n        packet.pn(),\n        packet_content.is_ack_eliciting(),\n        path.cc().get_pto(Epoch::Data),\n    );\n    path.on_packet_rcvd(Epoch::Data, packet.pn(), packet.size(), packet_content);\n\n    Result::<(), Error>::Ok(())\n}\n\nasync fn parse_normal_one_rtt_packet(\n    (packet, (bind_uri, pathway, link)): ReceivedOneRttFrom,\n    space: &DataSpace,\n    components: &Components,\n    dispatch_frame: impl Fn(Frame, Type, &Path),\n) -> Result<(), Error> {\n    let Some(packet) = space.decrypt_1rtt_packet(packet).await.transpose()? 
else {\n        return Ok(());\n    };\n\n    let path = match components.get_or_try_create_path(bind_uri, link, pathway, true) {\n        Ok(path) => path,\n        Err(CreatePathFailure::ConnectionClosed(..)) => {\n            packet.drop_on_conenction_closed();\n            return Ok(());\n        }\n        Err(CreatePathFailure::NoInterface(..)) => {\n            packet.drop_on_interface_not_found();\n            return Ok(());\n        }\n    };\n\n    let Some(packet) = filter_odcid_packet(packet, &components.specific) else {\n        return Ok(());\n    };\n\n    components\n        .quic_handshake\n        .discard_spaces_on_server_handshake_done(&components.paths);\n\n    let packet_content = read_plain_packet(&packet, |frame| {\n        dispatch_frame(frame, packet.get_type(), &path);\n    })?;\n    space.journal.of_rcvd_packets().on_rcvd_pn(\n        packet.pn(),\n        packet_content.is_ack_eliciting(),\n        path.cc().get_pto(Epoch::Data),\n    );\n    path.on_packet_rcvd(Epoch::Data, packet.pn(), packet.size(), packet_content);\n\n    Result::<(), Error>::Ok(())\n}\n\nfn parse_closing_one_rtt_packet(\n    space: &DataSpace,\n    packet: CipherOneRttPacket,\n) -> Option<ConnectionCloseFrame> {\n    let (hpk, pk) = space.one_rtt_keys.remote_keys()?;\n    let packet = packet\n        .decrypt_short_packet(hpk.as_ref(), &pk, |pn| {\n            space.journal.of_rcvd_packets().decode_pn(pn)\n        })\n        .and_then(Result::ok)?;\n\n    let mut ccf = None;\n    _ = read_plain_packet(&packet, |frame| {\n        ccf = ccf.take().or(match frame {\n            Frame::Close(ccf) => Some(ccf),\n            _ => None,\n        });\n    });\n    ccf\n}\n\npub async fn deliver_and_parse_packets(\n    zeor_rtt_packets: BoundQueue<ReceivedZeroRttFrom>,\n    one_rtt_packets: BoundQueue<ReceivedOneRttFrom>,\n    space: Arc<DataSpace>,\n    components: Components,\n    event_broker: ArcEventBroker,\n) {\n    let conn_state = &components.conn_state;\n    let 
dispatch_frame = frame_dispathcer(&space, &components, &event_broker);\n    let normal_deliver_and_parse_zero_rtt_loop = async {\n        while let Some(form) = zeor_rtt_packets.recv().await {\n            let span = qevent::span!(@current, path=form.1.2.to_string());\n            let parse = parse_normal_zero_rtt_packet(form, &space, &components, &dispatch_frame);\n            if let Err(Error::Quic(error)) = Instrument::instrument(parse, span).await {\n                event_broker.emit(Event::Failed(error));\n            };\n        }\n    };\n    let normal_deliver_and_parse_one_rtt_loop = async {\n        while let Some(form) = one_rtt_packets.recv().await {\n            let span = qevent::span!(@current, path=form.1.2.to_string());\n            let parse = parse_normal_one_rtt_packet(form, &space, &components, &dispatch_frame);\n            if let Err(Error::Quic(error)) = Instrument::instrument(parse, span).await {\n                event_broker.emit(Event::Failed(error));\n            };\n        }\n    };\n\n    let normal_deliver_and_parse_loops = async {\n        if components.tls_handshake.info().await.is_err() {\n            return;\n        }\n        tokio::join!(\n            normal_deliver_and_parse_zero_rtt_loop,\n            normal_deliver_and_parse_one_rtt_loop,\n        );\n    };\n\n    let ccf = tokio::select! {\n        // deliver and parse packets. 
complete when packet queue closed\n        _ = normal_deliver_and_parse_loops => return,\n        // connection terminated(enter closing/draining state)\n        error = conn_state.terminated() => match conn_state.current() {\n            // entered closing_state, keep receiving packets, and send ccf\n            state if state == Some(state::CLOSING) => ConnectionCloseFrame::from(error),\n            // entered other state, do nothing\n            _ => return\n        }\n    };\n\n    let terminator = Terminator::new(ccf, &components);\n    // Release the primary connection state\n    drop(components);\n    zeor_rtt_packets.close();\n\n    while let Some((packet, (_bind_uri, pathway, _link))) = one_rtt_packets.recv().await {\n        if let Some(ccf) = parse_closing_one_rtt_packet(&space, packet) {\n            event_broker.emit(Event::Closed(ccf));\n        }\n\n        if terminator.should_send() {\n            terminator\n                .try_send_on(pathway, |buffer, ccf| {\n                    assemble_closing_packet::<OneRttHeader, _>(\n                        space.as_ref(),\n                        &terminator,\n                        buffer,\n                        ccf,\n                    )\n                })\n                .await\n        }\n    }\n}\n\npub struct DataTracker {\n    journal: DataJournal,\n    crypto_stream: CryptoStream,\n    streams: DataStreams,\n    reliable_frames: ArcReliableFrameDeque,\n}\n\nimpl Feedback for DataTracker {\n    fn may_loss(&self, trigger: PacketLostTrigger, pns: &mut dyn Iterator<Item = u64>) {\n        let sent_jornal = self.journal.of_sent_packets();\n        let crypto_outgoing = self.crypto_stream.outgoing();\n        let mut sent_packets = sent_jornal.rotate();\n        for pn in pns {\n            let mut may_lost_frames = QuicFramesCollector::<PacketLost>::new();\n            for frame in sent_packets.may_loss_packet(pn) {\n                match frame {\n                    
GuaranteedFrame::Crypto(frame) => {\n                        may_lost_frames.extend([&frame]);\n                        crypto_outgoing.may_loss_data(&frame);\n                    }\n                    GuaranteedFrame::Stream(frame) => {\n                        may_lost_frames.extend([&frame]);\n                        self.streams.may_loss_data(&frame);\n                    }\n                    GuaranteedFrame::Reliable(frame) => {\n                        may_lost_frames.extend([&frame]);\n                        self.reliable_frames.send_frame([frame]);\n                    }\n                };\n            }\n            qevent::event!(PacketLost {\n                header: PacketHeader {\n                    // TODO: if 0-RTT is supported, this is not necessarily a 1-RTT packet\n                    packet_type: PacketType::OneRTT,\n                    packet_number: pn\n                },\n                frames: may_lost_frames,\n                trigger\n            });\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/space/handshake.rs",
    "content": "use std::sync::Arc;\n\nuse qbase::{\n    Epoch, GetEpoch,\n    error::{Error, QuicError},\n    frame::{ConnectionCloseFrame, CryptoFrame, Frame},\n    net::tx::Signals,\n    packet::{header::long::HandshakeHeader, io::PacketSpace, keys::ArcKeys},\n    util::BoundQueue,\n};\nuse qcongestion::{Feedback, Transport};\nuse qevent::{\n    quic::{\n        PacketHeader, PacketType, QuicFramesCollector,\n        recovery::{PacketLost, PacketLostTrigger},\n    },\n    telemetry::Instrument,\n};\nuse qinterface::component::route::{CipherPacket, PlainPacket, Way};\nuse qrecovery::crypto::CryptoStream;\nuse tokio::sync::mpsc;\n\nuse crate::{\n    Components, HandshakeJournal,\n    events::{ArcEventBroker, EmitEvent, Event},\n    path::{self, Path, error::CreatePathFailure},\n    space::{\n        AckHandshakeSpace, assemble_closing_packet, filter_odcid_packet, pipe, read_plain_packet,\n    },\n    state,\n    termination::Terminator,\n    tx::{PacketWriter, TrivialPacketWriter},\n};\n\npub type CipherHanshakePacket = CipherPacket<HandshakeHeader>;\npub type PlainHandshakePacket = PlainPacket<HandshakeHeader>;\npub type ReceivedFrom = (CipherHanshakePacket, Way);\n\npub struct HandshakeSpace {\n    keys: ArcKeys,\n    journal: HandshakeJournal,\n}\n\nimpl AsRef<HandshakeJournal> for HandshakeSpace {\n    fn as_ref(&self) -> &HandshakeJournal {\n        &self.journal\n    }\n}\n\nimpl HandshakeSpace {\n    pub fn new() -> Self {\n        Self {\n            keys: ArcKeys::new_pending(),\n            journal: HandshakeJournal::with_capacity(16, None),\n        }\n    }\n\n    pub fn keys(&self) -> ArcKeys {\n        self.keys.clone()\n    }\n\n    pub async fn decrypt_packet(\n        &self,\n        packet: CipherHanshakePacket,\n    ) -> Option<Result<PlainHandshakePacket, QuicError>> {\n        match self.keys.get_remote_keys().await {\n            Some(keys) => packet.decrypt_long_packet(\n                keys.remote.header.as_ref(),\n                
keys.remote.packet.as_ref(),\n                |pn| self.journal.of_rcvd_packets().decode_pn(pn),\n            ),\n            None => {\n                packet.drop_on_key_unavailable();\n                None\n            }\n        }\n    }\n\n    pub fn tracker(&self, crypto_stream: CryptoStream) -> HandshakeTracker {\n        HandshakeTracker {\n            journal: self.journal.clone(),\n            crypto_stream,\n        }\n    }\n}\n\nimpl Default for HandshakeSpace {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl GetEpoch for HandshakeSpace {\n    fn epoch(&self) -> Epoch {\n        Epoch::Handshake\n    }\n}\n\nimpl path::PacketSpace<HandshakeHeader> for HandshakeSpace {\n    type JournalFrame = CryptoFrame;\n\n    fn new_packet<'b, 's>(\n        &'s self,\n        header: HandshakeHeader,\n        cc: &qcongestion::ArcCC,\n        buffer: &'b mut [u8],\n    ) -> Result<PacketWriter<'b, 's, CryptoFrame>, Signals> {\n        let keys = self.keys.get_local_keys().ok_or(Signals::KEYS)?;\n        let (retran_timeout, expire_timeout) = cc.retransmit_and_expire_time(Epoch::Handshake);\n        PacketWriter::new_long(\n            header,\n            buffer,\n            keys.local.clone(),\n            self.journal.as_ref(),\n            retran_timeout,\n            expire_timeout,\n        )\n    }\n}\n\nimpl PacketSpace<HandshakeHeader> for HandshakeSpace {\n    type PacketAssembler<'a> = TrivialPacketWriter<'a, 'a, CryptoFrame>;\n\n    #[inline]\n    fn new_packet<'a>(\n        &'a self,\n        header: HandshakeHeader,\n        buffer: &'a mut [u8],\n    ) -> Result<Self::PacketAssembler<'a>, Signals> {\n        let keys = self.keys.get_local_keys().ok_or(Signals::KEYS)?;\n        TrivialPacketWriter::new_long(header, buffer, keys.local, self.journal.as_ref())\n    }\n}\n\nfn frame_dispathcer(\n    space: &HandshakeSpace,\n    components: &Components,\n    event_broker: &ArcEventBroker,\n) -> impl for<'p> Fn(Frame, &'p Path) + use<> {\n 
   let (crypto_frames_entry, rcvd_crypto_frames) = mpsc::unbounded_channel();\n    let (ack_frames_entry, rcvd_ack_frames) = mpsc::unbounded_channel();\n\n    pipe(\n        rcvd_crypto_frames,\n        components.crypto_streams[space.epoch()].incoming(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_ack_frames,\n        AckHandshakeSpace::new(&space.journal, &components.crypto_streams[space.epoch()]),\n        event_broker.clone(),\n    );\n\n    let inform_cc = components.quic_handshake.status();\n    let event_broker = event_broker.clone();\n    let rcvd_joural = space.journal.of_rcvd_packets();\n    move |frame: Frame, path: &Path| match frame {\n        Frame::Ack(f) => {\n            path.cc().on_ack_rcvd(Epoch::Handshake, &f);\n            rcvd_joural.on_rcvd_ack(&f);\n            _ = ack_frames_entry.send(f);\n            inform_cc.received_handshake_ack();\n        }\n        Frame::Close(f) => event_broker.emit(Event::Closed(f)),\n        Frame::Crypto(f, bytes) => _ = crypto_frames_entry.send((f, bytes)),\n        Frame::Padding(_) | Frame::Ping(_) => {}\n        _ => unreachable!(\"unexpected frame: {:?} in handshake packet\", frame),\n    }\n}\n\nasync fn parse_normal_packet(\n    (packet, (bind_uri, pathway, link)): ReceivedFrom,\n    space: &HandshakeSpace,\n    components: &Components,\n    dispatch_frame: impl Fn(Frame, &Path),\n) -> Result<(), Error> {\n    let Some(packet) = space.decrypt_packet(packet).await.transpose()? 
else {\n        return Ok(());\n    };\n\n    let path = match components.get_or_try_create_path(bind_uri, link, pathway, true) {\n        Ok(path) => path,\n        Err(CreatePathFailure::ConnectionClosed(..)) => {\n            packet.drop_on_conenction_closed();\n            return Ok(());\n        }\n        Err(CreatePathFailure::NoInterface(..)) => {\n            packet.drop_on_interface_not_found();\n            return Ok(());\n        }\n    };\n\n    let Some(packet) = filter_odcid_packet(packet, &components.specific) else {\n        return Ok(());\n    };\n\n    // See [RFC 9000 section 8.1](https://www.rfc-editor.org/rfc/rfc9000.html#name-address-validation-during-c)\n    // Once an endpoint has successfully processed a Handshake packet from the peer, it can consider the peer\n    // address to have been validated.\n    // It may have already been verified using tokens in the Handshake space\n    path.grant_anti_amplification();\n\n    let packet_content = read_plain_packet(&packet, |frame| dispatch_frame(frame, &path))?;\n\n    space.journal.of_rcvd_packets().on_rcvd_pn(\n        packet.pn(),\n        packet_content.is_ack_eliciting(),\n        path.cc().get_pto(Epoch::Handshake),\n    );\n    path.on_packet_rcvd(Epoch::Handshake, packet.pn(), packet.size(), packet_content);\n\n    Result::<(), Error>::Ok(())\n}\n\nfn parse_closing_packet(\n    space: &HandshakeSpace,\n    packet: CipherHanshakePacket,\n) -> Option<ConnectionCloseFrame> {\n    // TODO: improve Keys\n    let remote_keys = space.keys.get_local_keys()?.remote;\n    let packet = packet\n        .decrypt_long_packet(\n            remote_keys.header.as_ref(),\n            remote_keys.packet.as_ref(),\n            |pn| space.journal.of_rcvd_packets().decode_pn(pn),\n        )\n        .and_then(Result::ok)?;\n\n    let mut ccf = None;\n    _ = read_plain_packet(&packet, |frame| {\n        ccf = ccf.take().or(match frame {\n            Frame::Close(ccf) => Some(ccf),\n            _ => None,\n    
    });\n    });\n    ccf\n}\n\npub async fn deliver_and_parse_packets(\n    packets: BoundQueue<ReceivedFrom>,\n    space: Arc<HandshakeSpace>,\n    components: Components,\n    event_broker: ArcEventBroker,\n) {\n    let conn_state = &components.conn_state;\n    let dispatch_frame = frame_dispathcer(&space, &components, &event_broker);\n    let normal_deliver_and_parse_loop = async {\n        while let Some(form) = packets.recv().await {\n            let span = qevent::span!(@current, path=form.1.2.to_string());\n            let parse = parse_normal_packet(form, &space, &components, &dispatch_frame);\n            if let Err(Error::Quic(error)) = Instrument::instrument(parse, span).await {\n                event_broker.emit(Event::Failed(error));\n            };\n        }\n    };\n\n    let ccf = tokio::select! {\n        // deliver and parse packets. complete when packet queue closed\n        _ = normal_deliver_and_parse_loop => return,\n        // connection terminated(enter closing/draining state)\n        error = conn_state.terminated() => match conn_state.current() {\n            // entered closing_state, keep receiving packets, and send ccf\n            state if state == Some(state::CLOSING) => ConnectionCloseFrame::from(error),\n            // entered other state, do nothing\n            _ => return\n        }\n    };\n\n    let terminator = Terminator::new(ccf, &components);\n    // Release the primary connection state\n    drop(components);\n\n    while let Some((packet, (_bind_uri, pathway, _link))) = packets.recv().await {\n        if let Some(ccf) = parse_closing_packet(&space, packet) {\n            event_broker.emit(Event::Closed(ccf));\n        }\n\n        if terminator.should_send() {\n            terminator\n                .try_send_on(pathway, |buffer, ccf| {\n                    assemble_closing_packet(space.as_ref(), &terminator, buffer, ccf)\n                })\n                .await\n        }\n    }\n}\n\npub struct HandshakeTracker {\n  
  journal: HandshakeJournal,\n    crypto_stream: CryptoStream,\n}\n\nimpl Feedback for HandshakeTracker {\n    fn may_loss(&self, trigger: PacketLostTrigger, pns: &mut dyn Iterator<Item = u64>) {\n        let sent_jornal = self.journal.of_sent_packets();\n        let outgoing = self.crypto_stream.outgoing();\n        let mut sent_packets = sent_jornal.rotate();\n        for pn in pns {\n            let mut may_lost_frames = QuicFramesCollector::<PacketLost>::new();\n            for frame in sent_packets.may_loss_packet(pn) {\n                may_lost_frames.extend([&frame]);\n                outgoing.may_loss_data(&frame);\n            }\n            qevent::event!(PacketLost {\n                header: PacketHeader {\n                    packet_type: PacketType::Handshake,\n                    packet_number: pn\n                },\n                frames: may_lost_frames,\n                trigger\n            });\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/space/initial.rs",
    "content": "use std::{ops::Deref, sync::Arc};\n\nuse qbase::{\n    Epoch, GetEpoch,\n    error::{Error, QuicError},\n    frame::{ConnectionCloseFrame, CryptoFrame, Frame},\n    net::tx::Signals,\n    packet::{\n        header::{GetScid, long::InitialHeader},\n        io::PacketSpace,\n        keys::{ArcKeys, Keys},\n    },\n    token::TokenRegistry,\n    util::BoundQueue,\n};\nuse qcongestion::{Feedback, Transport};\nuse qevent::{\n    quic::{\n        PacketHeader, PacketType, QuicFramesCollector,\n        recovery::{PacketLost, PacketLostTrigger},\n    },\n    telemetry::Instrument,\n};\nuse qinterface::component::route::{CipherPacket, PlainPacket, Way};\nuse qrecovery::crypto::CryptoStream;\nuse tokio::sync::mpsc;\n\nuse crate::{\n    Components, InitialJournal,\n    events::{ArcEventBroker, EmitEvent, Event},\n    path::{self, Path, error::CreatePathFailure},\n    space::{\n        AckInitialSpace, assemble_closing_packet, filter_odcid_packet, pipe, read_plain_packet,\n    },\n    state,\n    termination::Terminator,\n    tx::{PacketWriter, TrivialPacketWriter},\n};\n\npub type CipherInitialPacket = CipherPacket<InitialHeader>;\npub type PlainInitialPacket = PlainPacket<InitialHeader>;\npub type ReceivedFrom = (CipherInitialPacket, Way);\n\npub struct InitialSpace {\n    keys: ArcKeys,\n    journal: InitialJournal,\n}\n\nimpl AsRef<InitialJournal> for InitialSpace {\n    fn as_ref(&self) -> &InitialJournal {\n        &self.journal\n    }\n}\n\nimpl InitialSpace {\n    // Initial keys应该是预先知道的，或者传入dcid，可以构造出来\n    pub fn new(keys: Keys) -> Self {\n        let journal = InitialJournal::with_capacity(16, None);\n        Self {\n            keys: ArcKeys::with_keys(keys),\n            journal,\n        }\n    }\n\n    pub fn keys(&self) -> ArcKeys {\n        self.keys.clone()\n    }\n\n    pub async fn decrypt_packet(\n        &self,\n        packet: CipherInitialPacket,\n    ) -> Option<Result<PlainInitialPacket, QuicError>> {\n        match 
self.keys.get_remote_keys().await {\n            Some(keys) => packet.decrypt_long_packet(\n                keys.remote.header.as_ref(),\n                keys.remote.packet.as_ref(),\n                |pn| self.journal.of_rcvd_packets().decode_pn(pn),\n            ),\n            None => {\n                packet.drop_on_key_unavailable();\n                None\n            }\n        }\n    }\n\n    pub fn tracker(&self, crypto_stream: CryptoStream) -> InitialTracker {\n        InitialTracker {\n            journal: self.journal.clone(),\n            crypto_stream,\n        }\n    }\n}\n\nimpl GetEpoch for InitialSpace {\n    fn epoch(&self) -> Epoch {\n        Epoch::Initial\n    }\n}\n\nimpl path::PacketSpace<InitialHeader> for InitialSpace {\n    type JournalFrame = CryptoFrame;\n\n    fn new_packet<'b, 's>(\n        &'s self,\n        header: InitialHeader,\n        cc: &qcongestion::ArcCC,\n        buffer: &'b mut [u8],\n    ) -> Result<PacketWriter<'b, 's, CryptoFrame>, Signals> {\n        let keys = self.keys.get_local_keys().ok_or(Signals::KEYS)?;\n        let (retran_timeout, expire_timeout) = cc.retransmit_and_expire_time(Epoch::Handshake);\n        PacketWriter::new_long(\n            header,\n            buffer,\n            keys.local,\n            self.journal.as_ref(),\n            retran_timeout,\n            expire_timeout,\n        )\n    }\n}\n\nimpl PacketSpace<InitialHeader> for InitialSpace {\n    type PacketAssembler<'a> = TrivialPacketWriter<'a, 'a, CryptoFrame>;\n\n    #[inline]\n    fn new_packet<'a>(\n        &'a self,\n        header: InitialHeader,\n        buffer: &'a mut [u8],\n    ) -> Result<Self::PacketAssembler<'a>, Signals> {\n        let keys = self.keys.get_local_keys().ok_or(Signals::KEYS)?;\n        TrivialPacketWriter::new_long(header, buffer, keys.local, self.journal.as_ref())\n    }\n}\n\nfn frame_dispathcer(\n    space: &InitialSpace,\n    components: &Components,\n    event_broker: &ArcEventBroker,\n) -> impl for<'p> 
Fn(Frame, &'p Path) + use<> {\n    let (crypto_frames_entry, rcvd_crypto_frames) = mpsc::unbounded_channel();\n    let (ack_frames_entry, rcvd_ack_frames) = mpsc::unbounded_channel();\n\n    pipe(\n        rcvd_crypto_frames,\n        components.crypto_streams[space.epoch()].incoming(),\n        event_broker.clone(),\n    );\n    pipe(\n        rcvd_ack_frames,\n        AckInitialSpace::new(&space.journal, &components.crypto_streams[space.epoch()]),\n        event_broker.clone(),\n    );\n\n    let event_broker = event_broker.clone();\n    let rcvd_journal = space.journal.of_rcvd_packets();\n    move |frame: Frame, path: &Path| match frame {\n        Frame::Ack(f) => {\n            path.cc().on_ack_rcvd(Epoch::Initial, &f);\n            rcvd_journal.on_rcvd_ack(&f);\n            _ = ack_frames_entry.send(f);\n        }\n        Frame::Close(f) => event_broker.emit(Event::Closed(f)),\n        Frame::Crypto(f, bytes) => _ = crypto_frames_entry.send((f, bytes)),\n        Frame::Padding(_) | Frame::Ping(_) => {}\n        _ => unreachable!(\"unexpected frame: {:?} in initial packet\", frame),\n    }\n}\n\nasync fn parse_normal_packet(\n    (packet, (bind_uri, pathway, link)): ReceivedFrom,\n    space: &InitialSpace,\n    components: &Components,\n    dispatch_frame: impl Fn(Frame, &Path),\n) -> Result<(), Error> {\n    let parameters = &components.parameters;\n    let paths = &components.paths;\n    let remote_cids = &components.cid_registry.remote;\n\n    let validate_token = {\n        let token_registry = &components.token_registry;\n        let tls_handshake = &components.tls_handshake;\n        |initial_token: &[u8], path: &Path| {\n            if let TokenRegistry::Server(provider) = token_registry.deref()\n                && let Ok(Some(server_name)) = tls_handshake.server_name()\n                && provider.verify_token(server_name.as_ref(), initial_token)\n            {\n                path.grant_anti_amplification();\n            }\n        }\n    };\n\n    // 
rfc9000 7.2:\n    // if subsequent Initial packets include a different Source Connection ID, they MUST be discarded. This avoids\n    // unpredictable outcomes that might otherwise result from stateless processing of multiple Initial packets\n    // with different Source Connection IDs.\n    if matches!(parameters.lock_guard()?.initial_scid_from_peer(), Some(scid) if scid != *packet.scid())\n    {\n        packet.drop_on_scid_unmatch();\n        return Ok(());\n    }\n\n    let Some(packet) = space.decrypt_packet(packet).await.transpose()? else {\n        return Ok(());\n    };\n\n    let path = match components.get_or_try_create_path(bind_uri, link, pathway, true) {\n        Ok(path) => path,\n        Err(CreatePathFailure::ConnectionClosed(..)) => {\n            packet.drop_on_conenction_closed();\n            return Ok(());\n        }\n        Err(CreatePathFailure::NoInterface(..)) => {\n            packet.drop_on_interface_not_found();\n            return Ok(());\n        }\n    };\n\n    let Some(packet) = filter_odcid_packet(packet, &components.specific) else {\n        return Ok(());\n    };\n\n    let packet_content = read_plain_packet(&packet, |frame| dispatch_frame(frame, &path))?;\n\n    space.journal.of_rcvd_packets().on_rcvd_pn(\n        packet.pn(),\n        packet_content.is_ack_eliciting(),\n        path.cc().get_pto(Epoch::Initial),\n    );\n    path.on_packet_rcvd(Epoch::Initial, packet.pn(), packet.size(), packet_content);\n\n    // Negotiate handshake path\n    if paths.assign_handshake_path(&path, remote_cids, *packet.scid()) {\n        parameters\n            .lock_guard()?\n            .initial_scid_from_peer_need_equal(*packet.scid())?;\n    }\n\n    // See [RFC 9000 section 8.1](https://www.rfc-editor.org/rfc/rfc9000.html#name-address-validation-during-c)\n    // A server might wish to validate the client address before starting the cryptographic handshake.\n    // QUIC uses a token in the Initial packet to provide address validation prior 
to completing the handshake.\n    // This token is delivered to the client during connection establishment with a Retry packet (see Section 8.1.2)\n    // or in a previous connection using the NEW_TOKEN frame (see Section 8.1.3).\n    if !packet.token().is_empty() {\n        validate_token(packet.token(), &path);\n    }\n    Result::<(), Error>::Ok(())\n}\n\nfn parse_closing_packet(\n    space: &InitialSpace,\n    packet: CipherInitialPacket,\n) -> Option<ConnectionCloseFrame> {\n    // TODO: improve Keys\n    let remote_keys = space.keys.get_local_keys()?.remote;\n    let packet = packet\n        .decrypt_long_packet(\n            remote_keys.header.as_ref(),\n            remote_keys.packet.as_ref(),\n            |pn| space.journal.of_rcvd_packets().decode_pn(pn),\n        )\n        .and_then(Result::ok)?;\n\n    let mut ccf = None;\n    _ = read_plain_packet(&packet, |frame| {\n        ccf = ccf.take().or(match frame {\n            Frame::Close(ccf) => Some(ccf),\n            _ => None,\n        });\n    });\n    ccf\n}\n\npub async fn deliver_and_parse_packets(\n    packets: BoundQueue<ReceivedFrom>,\n    space: Arc<InitialSpace>,\n    components: Components,\n    event_broker: ArcEventBroker,\n) {\n    let conn_state = &components.conn_state;\n    let dispatch_frame = frame_dispathcer(&space, &components, &event_broker);\n    let normal_deliver_and_parse_loop = async {\n        while let Some(form) = packets.recv().await {\n            let span = qevent::span!(@current, path=form.1.2.to_string());\n            let parse = parse_normal_packet(form, &space, &components, &dispatch_frame);\n            if let Err(Error::Quic(error)) = Instrument::instrument(parse, span).await {\n                event_broker.emit(Event::Failed(error));\n            };\n        }\n    };\n\n    let ccf = tokio::select! {\n        // deliver and parse packets. 
complete when packet queue closed\n        _ = normal_deliver_and_parse_loop => return,\n        // connection terminated (entered closing/draining state)\n        error = conn_state.terminated() => match conn_state.current() {\n            // entered closing state, keep receiving packets, and send ccf\n            state if state == Some(state::CLOSING) => ConnectionCloseFrame::from(error),\n            // entered other state, do nothing\n            _ => return\n        }\n    };\n\n    let terminator = Terminator::new(ccf, &components);\n    // Release the primary connection state\n    drop(components);\n\n    while let Some((packet, (_bind_uri, pathway, _link))) = packets.recv().await {\n        if let Some(ccf) = parse_closing_packet(&space, packet) {\n            event_broker.emit(Event::Closed(ccf));\n        }\n\n        // TODO: try to resolve the split-counter problem? Move received-packet statistics to the connection and path level? Delegate packet sending to the path?\n        if terminator.should_send() {\n            terminator\n                .try_send_on(pathway, |buffer, ccf| {\n                    assemble_closing_packet(space.as_ref(), &terminator, buffer, ccf)\n                })\n                .await\n        }\n    }\n}\n\npub struct InitialTracker {\n    journal: InitialJournal,\n    crypto_stream: CryptoStream,\n}\n\nimpl Feedback for InitialTracker {\n    fn may_loss(&self, trigger: PacketLostTrigger, pns: &mut dyn Iterator<Item = u64>) {\n        let sent_journal = self.journal.of_sent_packets();\n        let outgoing = self.crypto_stream.outgoing();\n        let mut sent_packets = sent_journal.rotate();\n        for pn in pns {\n            let mut may_lost_frames = QuicFramesCollector::<PacketLost>::new();\n            for frame in sent_packets.may_loss_packet(pn) {\n                may_lost_frames.extend([&frame]);\n                outgoing.may_loss_data(&frame);\n            }\n            qevent::event!(PacketLost {\n                header: PacketHeader {\n                    packet_type: PacketType::Initial,\n                    packet_number: 
pn\n                },\n                frames: may_lost_frames,\n                trigger\n            });\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/space.rs",
    "content": "pub mod data;\npub mod handshake;\npub mod initial;\n\nuse std::{borrow::Cow, fmt::Debug, sync::Arc};\n\nuse bytes::Bytes;\nuse qbase::{\n    error::{Error, QuicError},\n    frame::{\n        AckFrame, ConnectionCloseFrame, CryptoFrame, FrameFeature, FrameReader, GetFrameType,\n        ReliableFrame, StreamCtlFrame, StreamFrame, io::ReceiveFrame,\n    },\n    packet::{\n        AssemblePacket, Package, PacketContent, PacketSpace, PacketWriter, ProductHeader,\n        header::{GetDcid, GetType, short::OneRttHeader},\n        io::{Packages, PadTo20},\n    },\n};\nuse qevent::{\n    quic::{\n        PacketHeaderBuilder, QuicFramesCollector,\n        transport::{PacketReceived, PacketsAcked},\n    },\n    telemetry::Instrument,\n};\nuse qinterface::component::route::PlainPacket;\nuse qrecovery::{\n    crypto::{CryptoStream, CryptoStreamOutgoing},\n    journal::{ArcSentJournal, Journal},\n};\nuse tokio::sync::mpsc::UnboundedReceiver;\nuse tracing::Instrument as _;\n\nuse crate::{\n    Components, DataStreams, FlowController, GuaranteedFrame, SpecificComponents,\n    events::{ArcEventBroker, EmitEvent, Event},\n    termination::Terminator,\n};\n\n#[derive(Clone)]\npub struct Spaces {\n    initial: Arc<initial::InitialSpace>,\n    handshake: Arc<handshake::HandshakeSpace>,\n    data: Arc<data::DataSpace>,\n}\n\nimpl Spaces {\n    pub fn new(\n        initial: initial::InitialSpace,\n        handshake: handshake::HandshakeSpace,\n        data: data::DataSpace,\n    ) -> Self {\n        Self {\n            initial: Arc::new(initial),\n            handshake: Arc::new(handshake),\n            data: Arc::new(data),\n        }\n    }\n\n    pub fn initial(&self) -> &Arc<initial::InitialSpace> {\n        &self.initial\n    }\n\n    pub fn handshake(&self) -> &Arc<handshake::HandshakeSpace> {\n        &self.handshake\n    }\n\n    pub fn data(&self) -> &Arc<data::DataSpace> {\n        &self.data\n    }\n}\n\nfn assemble_closing_packet<'s, 'b: 's, H, S>(\n    
space: &'s S,\n    product_header: &impl ProductHeader<H>,\n    buffer: &'b mut [u8],\n    ccf: &ConnectionCloseFrame,\n) -> Option<usize>\nwhere\n    S: PacketSpace<H>,\n    S::PacketAssembler<'s>: AsRef<PacketWriter<'b>>,\n    for<'f> &'f ConnectionCloseFrame: Package<S::PacketAssembler<'s>>,\n{\n    let header = product_header.new_header().ok()?;\n    let mut packet = S::new_packet(space, header, buffer).ok()?;\n\n    let ccf = match ccf.belongs_to(packet.as_ref().packet_type()) {\n        true => Cow::Borrowed(ccf),\n        false => Cow::Owned(ConnectionCloseFrame::from(match ccf {\n            ConnectionCloseFrame::App(app_close_frame) => app_close_frame.conceal(),\n            ConnectionCloseFrame::Quic(..) => unreachable!(),\n        })),\n    };\n\n    packet\n        .assemble_packet(&mut Packages((ccf.as_ref(), PadTo20)))\n        .ok()?;\n    Some(packet.encrypt_and_protect_packet().0)\n}\n\nimpl Spaces {\n    pub async fn send_ccf_packets(&self, t: &Terminator) {\n        t.try_send(|mut buf, ccf| {\n            let original_size = buf.len();\n            let initial_size = assemble_closing_packet(self.initial().as_ref(), t, buf, ccf);\n            buf = &mut buf[initial_size.unwrap_or(0)..];\n            let handshake_size = assemble_closing_packet(self.handshake().as_ref(), t, buf, ccf);\n            buf = &mut buf[handshake_size.unwrap_or(0)..];\n            let one_rtt_size =\n                assemble_closing_packet::<OneRttHeader, _>(self.data().as_ref(), t, buf, ccf);\n            buf = &mut buf[one_rtt_size.unwrap_or(0)..];\n\n            if initial_size.is_some() {\n                buf.fill(0);\n                Some(original_size)\n            } else {\n                (original_size != buf.len()).then_some(original_size - buf.len())\n            }\n        })\n        .await;\n    }\n}\n\nfn pipe<F: Send + Debug + 'static>(\n    mut source: UnboundedReceiver<F>,\n    destination: impl ReceiveFrame<F> + Send + 'static,\n    broker: 
ArcEventBroker,\n) {\n    tokio::spawn(\n        async move {\n            while let Some(f) = source.recv().await {\n                if let Err(Error::Quic(e)) = destination.recv_frame(f) {\n                    broker.emit(Event::Failed(e));\n                    break;\n                }\n            }\n        }\n        .instrument_in_current()\n        .in_current_span(),\n    );\n}\n\n/// When receiving a [`StreamFrame`] or [`StreamCtlFrame`],\n/// flow control must be updated accordingly\n#[derive(Clone)]\nstruct FlowControlledDataStreams {\n    streams: DataStreams,\n    flow_ctrl: FlowController,\n}\n\nimpl FlowControlledDataStreams {\n    fn new(streams: DataStreams, flow_ctrl: FlowController) -> Self {\n        Self { streams, flow_ctrl }\n    }\n}\n\nimpl ReceiveFrame<(StreamFrame, Bytes)> for FlowControlledDataStreams {\n    type Output = ();\n\n    fn recv_frame(&self, data_frame: (StreamFrame, Bytes)) -> Result<Self::Output, Error> {\n        let frame_type = data_frame.0.frame_type();\n        let new_data_size = self.streams.recv_data(data_frame)?;\n        self.flow_ctrl.on_new_rcvd(frame_type, new_data_size)?;\n        Ok(())\n    }\n}\n\nimpl ReceiveFrame<StreamCtlFrame> for FlowControlledDataStreams {\n    type Output = ();\n\n    fn recv_frame(&self, frame: StreamCtlFrame) -> Result<Self::Output, Error> {\n        let new_data_size = self.streams.recv_stream_control(frame)?;\n        self.flow_ctrl\n            .on_new_rcvd(frame.frame_type(), new_data_size)?;\n        Ok(())\n    }\n}\n\nstruct AckInitialSpace {\n    sent_journal: ArcSentJournal<CryptoFrame>,\n    crypto_stream_outgoing: CryptoStreamOutgoing,\n}\n\nimpl AckInitialSpace {\n    fn new(journal: &Journal<CryptoFrame>, crypto_stream: &CryptoStream) -> Self {\n        Self {\n            sent_journal: journal.of_sent_packets(),\n            crypto_stream_outgoing: crypto_stream.outgoing(),\n        }\n    }\n}\n\nimpl ReceiveFrame<AckFrame> for AckInitialSpace {\n    type Output = 
();\n\n    fn recv_frame(&self, ack_frame: AckFrame) -> Result<Self::Output, Error> {\n        let mut rotate_guard = self.sent_journal.rotate();\n        rotate_guard.update_largest(&ack_frame)?;\n\n        let acked = ack_frame.iter().flat_map(|r| r.rev()).collect::<Vec<_>>();\n        qevent::event!(PacketsAcked {\n            packet_number_space: qbase::Epoch::Initial,\n            packet_nubers: acked.clone(),\n        });\n        for pn in acked {\n            for frame in rotate_guard.on_packet_acked(pn) {\n                self.crypto_stream_outgoing.on_data_acked(&frame);\n            }\n        }\n\n        Ok(())\n    }\n}\n\nstruct AckHandshakeSpace {\n    sent_journal: ArcSentJournal<CryptoFrame>,\n    crypto_stream_outgoing: CryptoStreamOutgoing,\n}\n\nimpl AckHandshakeSpace {\n    fn new(journal: &Journal<CryptoFrame>, crypto_stream: &CryptoStream) -> Self {\n        Self {\n            sent_journal: journal.of_sent_packets(),\n            crypto_stream_outgoing: crypto_stream.outgoing(),\n        }\n    }\n}\n\nimpl ReceiveFrame<AckFrame> for AckHandshakeSpace {\n    type Output = ();\n\n    fn recv_frame(&self, ack_frame: AckFrame) -> Result<Self::Output, Error> {\n        let mut rotate_guard = self.sent_journal.rotate();\n        rotate_guard.update_largest(&ack_frame)?;\n\n        let acked = ack_frame.iter().flat_map(|r| r.rev()).collect::<Vec<_>>();\n        qevent::event!(PacketsAcked {\n            packet_number_space: qbase::Epoch::Handshake,\n            packet_nubers: acked.clone(),\n        });\n        for pn in acked {\n            for frame in rotate_guard.on_packet_acked(pn) {\n                self.crypto_stream_outgoing.on_data_acked(&frame);\n            }\n        }\n\n        Ok(())\n    }\n}\n\nstruct AckDataSpace {\n    send_journal: ArcSentJournal<GuaranteedFrame>,\n    data_streams: DataStreams,\n    crypto_stream_outgoing: CryptoStreamOutgoing,\n}\n\nimpl AckDataSpace {\n    fn new(\n        journal: 
&Journal<GuaranteedFrame>,\n        data_streams: DataStreams,\n        crypto_stream: &CryptoStream,\n    ) -> Self {\n        Self {\n            send_journal: journal.of_sent_packets(),\n            data_streams,\n            crypto_stream_outgoing: crypto_stream.outgoing(),\n        }\n    }\n}\n\nimpl ReceiveFrame<AckFrame> for AckDataSpace {\n    type Output = ();\n\n    fn recv_frame(&self, ack_frame: AckFrame) -> Result<Self::Output, Error> {\n        let mut rotate_guard = self.send_journal.rotate();\n        rotate_guard.update_largest(&ack_frame)?;\n\n        let acked = ack_frame.iter().flat_map(|r| r.rev()).collect::<Vec<_>>();\n        qevent::event!(PacketsAcked {\n            packet_number_space: qbase::Epoch::Data,\n            packet_nubers: acked.clone(),\n        });\n        for pn in acked {\n            for frame in rotate_guard.on_packet_acked(pn) {\n                match frame {\n                    GuaranteedFrame::Stream(stream_frame) => {\n                        self.data_streams.on_data_acked(stream_frame)\n                    }\n                    GuaranteedFrame::Crypto(crypto_frame) => {\n                        self.crypto_stream_outgoing.on_data_acked(&crypto_frame)\n                    }\n                    GuaranteedFrame::Reliable(ReliableFrame::StreamCtl(\n                        StreamCtlFrame::ResetStream(reset_frame),\n                    )) => self.data_streams.on_reset_acked(reset_frame),\n                    _ => { /* nothing to do */ }\n                }\n            }\n        }\n        Ok(())\n    }\n}\n\npub fn spawn_deliver_and_parse(components: &Components) {\n    let received_packets_queue = &components.rcvd_pkt_q;\n    let initial = initial::deliver_and_parse_packets(\n        received_packets_queue.initial().clone(),\n        components.spaces.initial.clone(),\n        components.clone(),\n        components.event_broker.clone(),\n    );\n    let handshake = handshake::deliver_and_parse_packets(\n        
received_packets_queue.handshake().clone(),\n        components.spaces.handshake.clone(),\n        components.clone(),\n        components.event_broker.clone(),\n    );\n    let data = data::deliver_and_parse_packets(\n        received_packets_queue.zero_rtt().clone(),\n        received_packets_queue.one_rtt().clone(),\n        components.spaces.data.clone(),\n        components.clone(),\n        components.event_broker.clone(),\n    );\n\n    tokio::spawn(\n        async move { tokio::join!(biased; data, handshake, initial) }\n            .instrument_in_current()\n            .in_current_span(),\n    );\n}\n\n/// For a server connection, the original dcid does not own a sequence number. Once we receive a packet whose dcid != odcid,\n/// we should stop using the odcid and drop subsequent packets that still carry it.\n///\n/// We do not remove the route to the odcid; otherwise the server may establish multiple connections for packets with the same odcid.\n///\n/// https://www.rfc-editor.org/rfc/rfc9000.html#name-negotiating-connection-ids\nfn filter_odcid_packet<H: GetDcid>(\n    packet: PlainPacket<H>,\n    specific: &SpecificComponents,\n) -> Option<PlainPacket<H>> {\n    use std::sync::atomic::Ordering::SeqCst;\n    if let SpecificComponents::Server {\n        odcid_router_entry,\n        using_odcid,\n    } = &specific\n    {\n        let dcid = (*packet.dcid()).into();\n        if odcid_router_entry.signpost() == dcid && !using_odcid.load(SeqCst) {\n            drop(packet); // just drop the packet; it's as if we never received it.\n            return None;\n        }\n\n        if odcid_router_entry.signpost() != dcid {\n            using_odcid.store(false, SeqCst);\n        }\n    }\n    Some(packet)\n}\n\nfn read_plain_packet<H>(\n    packet: &PlainPacket<H>,\n    mut dispatch_frame: impl FnMut(qbase::frame::Frame),\n) -> Result<PacketContent, Error>\nwhere\n    H: GetType,\n    PacketHeaderBuilder: for<'a> From<&'a H>,\n{\n    let mut frames_collector = 
QuicFramesCollector::<PacketReceived>::new();\n    let mut packet_content = PacketContent::default();\n    let frame_reader = FrameReader::new(packet.body(), packet.get_type());\n    for frame_result in frame_reader {\n        let (frame, r#type) = frame_result.map_err(QuicError::from)?;\n        frames_collector.extend([&frame]);\n        packet_content += r#type;\n        dispatch_frame(frame);\n    }\n\n    packet.log_received(frames_collector);\n    Ok(packet_content)\n}\n"
  },
  {
    "path": "qconnection/src/state.rs",
    "content": "use std::{\n    future::Future,\n    sync::{\n        Arc,\n        atomic::{AtomicU8, Ordering},\n    },\n};\n\nuse qbase::{error::Error, frame::ConnectionCloseFrame, net::route::Link, role::Role};\nuse qevent::{\n    quic::{\n        Owner,\n        connectivity::{\n            BaseConnectionStates, ConnectionStarted, ConnectionState as QlogConnectionState,\n            ConnectionStateUpdated, GranularConnectionStates,\n        },\n        transport::ParametersSet,\n    },\n    telemetry::Instrument,\n};\nuse tokio::sync::SetOnce;\nuse tracing::Instrument as _;\n\nuse crate::Components;\n\n#[derive(Clone)]\npub struct ArcConnState {\n    state: Arc<AtomicU8>,\n    handshaked: Arc<SetOnce<()>>,\n    terminated: Arc<SetOnce<Error>>,\n}\n\nimpl Default for ArcConnState {\n    fn default() -> Self {\n        Self {\n            state: Default::default(),\n            handshaked: Arc::new(SetOnce::new()),\n            terminated: Arc::new(SetOnce::new()),\n        }\n    }\n}\n\nimpl ArcConnState {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Attempt to set the connection state from None to `BaseConnectionStates::Attempted`.\n    ///\n    /// Returns true if the state was successfully set to `BaseConnectionStates::Attempted`.\n    ///\n    /// Called when creating paths. 
If it returns true, it means that the path is the first path to connect.\n    pub fn try_entry_attempted(&self, components: &Components, link: Link) -> Result<bool, Error> {\n        let attempted = encode(BaseConnectionStates::Attempted.into());\n        let success = self\n            .state\n            .compare_exchange(0, attempted, Ordering::AcqRel, Ordering::Acquire)\n            .is_ok();\n\n        if success {\n            // same as Self::update\n            qevent::event!(ConnectionStateUpdated {\n                new: BaseConnectionStates::Attempted,\n            });\n            qevent::event!(ConnectionStarted {\n                socket: { (link.src, link.dst) } // the cid is not available at this layer, unknown here\n            });\n\n            match components.role() {\n                Role::Client => {\n                    let lock_guard = components.parameters.lock_guard();\n                    if let Some(local_parameters) =\n                        lock_guard.as_ref().ok().and_then(|p| p.client())\n                    {\n                        qevent::event!(ParametersSet {\n                            owner: Owner::Local,\n                            client_parameters: local_parameters.as_ref(),\n                        })\n                    }\n                }\n                Role::Server => {\n                    let lock_guard = components.parameters.lock_guard();\n                    if let Some(local_parameters) =\n                        lock_guard.as_ref().ok().and_then(|p| p.server())\n                    {\n                        qevent::event!(ParametersSet {\n                            owner: Owner::Local,\n                            server_parameters: local_parameters.as_ref(),\n                        })\n                    }\n                }\n            };\n        }\n        Ok(success)\n    }\n\n    /// Try to update the connection state, return the old state if successful.\n    pub fn update(&self, state: QlogConnectionState) -> 
Option<QlogConnectionState> {\n        let new_state_code = encode(state);\n        let mut old_state_code = self.state.load(Ordering::Acquire);\n        loop {\n            if new_state_code <= old_state_code {\n                return None;\n            }\n            match self.state.compare_exchange(\n                old_state_code,\n                new_state_code,\n                Ordering::AcqRel,\n                Ordering::Acquire,\n            ) {\n                Ok(_old_state_code) => {\n                    // when the server receives an Initial packet but fails to decrypt it, the connection state\n                    // enters Closing directly without entering Attempted.\n                    let old_state =\n                        decode(old_state_code).unwrap_or(BaseConnectionStates::Attempted.into());\n                    qevent::event!(ConnectionStateUpdated {\n                        new: state,\n                        old: old_state\n                    });\n                    return Some(old_state);\n                }\n                Err(current_state_code) => old_state_code = current_state_code,\n            }\n        }\n    }\n\n    pub fn enter_handshaked(&self) -> Option<QlogConnectionState> {\n        if let Some(old_state) = self.update(GranularConnectionStates::HandshakeConfirmed.into()) {\n            self.handshaked.set(()).expect(\"Handshaked already set\");\n            return Some(old_state);\n        }\n        None\n    }\n\n    pub fn enter_closing(&self, error: &(impl Into<Error> + Clone)) -> Option<QlogConnectionState> {\n        if let Some(old_state) = self.update(GranularConnectionStates::Closing.into()) {\n            self.terminated\n                .set(error.clone().into())\n                .expect(\"Terminated error already set\");\n            return Some(old_state);\n        }\n        None\n    }\n\n    pub fn enter_draining(&self, ccf: &ConnectionCloseFrame) -> Option<QlogConnectionState> {\n        if let Some(old_state) 
= self.update(GranularConnectionStates::Draining.into()) {\n            if old_state != QlogConnectionState::Granular(GranularConnectionStates::Closing) {\n                self.terminated\n                    .set(ccf.clone().into())\n                    .expect(\"Terminated error already set\");\n            }\n            return Some(old_state);\n        }\n        None\n    }\n\n    pub fn handshaked(&self) -> impl Future<Output = Result<(), Error>> + Send + use<> {\n        let handshaked = self.handshaked.clone();\n        let terminated = self.terminated.clone();\n        async move {\n            tokio::select! {\n                _ = handshaked.wait() => Ok(()),\n                error = terminated.wait() => Err(error.clone()),\n            }\n        }\n        .instrument_in_current()\n        .in_current_span()\n    }\n\n    pub fn terminated(&self) -> impl Future<Output = Error> + Send + use<> {\n        let terminated = self.terminated.clone();\n        async move { terminated.wait().await.clone() }\n            .instrument_in_current()\n            .in_current_span()\n    }\n\n    pub fn current(&self) -> Option<QlogConnectionState> {\n        decode(self.state.load(Ordering::Acquire))\n    }\n}\n\nmacro_rules! mapping {\n    ($( $a:ident ::$ b:ident ( $c:ident :: $d:ident ) => $number:literal, )*) => {\n        pub fn decode(code: u8) -> Option<QlogConnectionState> {\n            match code {\n                $( $number => Some($a::$b($c::$d)), )*\n                _ => None,\n            }\n        }\n\n        pub fn encode(state: QlogConnectionState) -> u8 {\n            match state {\n                $( $a::$b($c::$d) => $number, )*\n                _ => unreachable!(\"base closed and granular closed are repeated, use the base one\"),\n            }\n        }\n    };\n}\n\nmapping! 
{\n    QlogConnectionState::Base(BaseConnectionStates::Attempted) => 1,\n    QlogConnectionState::Base(BaseConnectionStates::HandshakeStarted) => 2, // miss\n    QlogConnectionState::Granular(GranularConnectionStates::PeerValidated) => 3, // miss\n    QlogConnectionState::Granular(GranularConnectionStates::EarlyWrite) => 4, // miss\n    QlogConnectionState::Base(BaseConnectionStates::HandshakeComplete) => 5, // miss\n    QlogConnectionState::Granular(GranularConnectionStates::HandshakeConfirmed) => 6,\n    QlogConnectionState::Granular(GranularConnectionStates::Closing) => 7,\n    QlogConnectionState::Granular(GranularConnectionStates::Draining) => 8,\n    // QlogConnectionState::Granular(GranularConnectionStates::Closed) => 9,\n    QlogConnectionState::Base(BaseConnectionStates::Closed) => 9,\n}\n\npub const HANDSHAKE_CONFIRMED: QlogConnectionState =\n    QlogConnectionState::Granular(GranularConnectionStates::HandshakeConfirmed);\n\npub const CLOSING: QlogConnectionState =\n    QlogConnectionState::Granular(GranularConnectionStates::Closing);\n\npub const DRAINING: QlogConnectionState =\n    QlogConnectionState::Granular(GranularConnectionStates::Draining);\n\npub const CLOSED: QlogConnectionState =\n    QlogConnectionState::Granular(GranularConnectionStates::Closed);\n"
  },
  {
    "path": "qconnection/src/termination.rs",
    "content": "use std::{\n    io, mem,\n    sync::{\n        Arc, Mutex,\n        atomic::{AtomicUsize, Ordering},\n    },\n    time::Duration,\n};\n\nuse qbase::{\n    cid::ConnectionId,\n    error::Error,\n    frame::ConnectionCloseFrame,\n    net::{route::Pathway, tx::Signals},\n    packet::{\n        header::{\n            long::{HandshakeHeader, InitialHeader, io::LongHeaderBuilder},\n            short::OneRttHeader,\n        },\n        io::ProductHeader,\n    },\n};\nuse qinterface::component::route::RcvdPacketQueue;\nuse tokio::time::Instant;\n\nuse crate::{ArcLocalCids, Components, path::ArcPathContexts};\n\n/// Keep a few states to support sending packets with ccf.\n///\n/// when it is dropped all paths will be destroyed\npub struct Terminator {\n    last_recv_time: Mutex<Instant>,\n    rcvd_packets: AtomicUsize,\n    scid: Option<ConnectionId>,\n    dcid: Option<ConnectionId>,\n    ccf: ConnectionCloseFrame,\n    paths: ArcPathContexts,\n}\n\nimpl Drop for Terminator {\n    fn drop(&mut self) {\n        self.paths.clear();\n    }\n}\n\nimpl ProductHeader<InitialHeader> for Terminator {\n    fn new_header(&self) -> Result<InitialHeader, Signals> {\n        let (Some(dcid), Some(scid)) = (self.dcid, self.scid) else {\n            return Err(Signals::empty());\n        };\n        // TODO: initial token\n        Ok(LongHeaderBuilder::with_cid(dcid, scid).initial(vec![]))\n    }\n}\n\nimpl ProductHeader<HandshakeHeader> for Terminator {\n    fn new_header(&self) -> Result<HandshakeHeader, Signals> {\n        let (Some(dcid), Some(scid)) = (self.dcid, self.scid) else {\n            return Err(Signals::empty());\n        };\n        Ok(LongHeaderBuilder::with_cid(dcid, scid).handshake())\n    }\n}\n\nimpl ProductHeader<OneRttHeader> for Terminator {\n    fn new_header(&self) -> Result<OneRttHeader, Signals> {\n        let Some(dcid) = self.dcid else {\n            return Err(Signals::empty());\n        };\n        // TODO: spin bit\n        
Ok(OneRttHeader::new(false.into(), dcid))\n    }\n}\n\nimpl Terminator {\n    pub fn new(ccf: ConnectionCloseFrame, components: &Components) -> Self {\n        Self {\n            last_recv_time: Mutex::new(Instant::now()),\n            rcvd_packets: AtomicUsize::new(0),\n            scid: components.cid_registry.local.initial_scid(),\n            dcid: components.cid_registry.remote.latest_dcid(),\n            ccf,\n            paths: components.paths.clone(),\n        }\n    }\n\n    pub fn should_send(&self) -> bool {\n        let mut last_recv_time_guard = self.last_recv_time.lock().unwrap();\n        self.rcvd_packets.fetch_add(1, Ordering::AcqRel);\n\n        if self.rcvd_packets.load(Ordering::Acquire) >= 3\n            || last_recv_time_guard.elapsed() > Duration::from_secs(1)\n        {\n            *last_recv_time_guard = tokio::time::Instant::now();\n            self.rcvd_packets.store(0, Ordering::Release);\n            true\n        } else {\n            false\n        }\n    }\n\n    pub async fn try_send<W>(&self, mut write: W)\n    where\n        W: FnMut(&mut [u8], &ConnectionCloseFrame) -> Option<usize>,\n    {\n        for (_pathway, path) in self.paths.paths::<Vec<_>>() {\n            let mut datagram = vec![0; path.mtu() as _];\n            if let Some(written) = write(&mut datagram, &self.ccf)\n                && written > 0\n            {\n                _ = path\n                    .send_packets(&[io::IoSlice::new(&datagram[..written])])\n                    .await;\n            }\n        }\n    }\n\n    pub async fn try_send_on<W>(&self, pathway: Pathway, write: W)\n    where\n        W: FnOnce(&mut [u8], &ConnectionCloseFrame) -> Option<usize>,\n    {\n        let Some(path) = self.paths.get(&pathway) else {\n            return;\n        };\n\n        let mut datagram = vec![0; path.mtu() as _];\n        match write(&mut datagram, &self.ccf) {\n            Some(written) if written > 0 => {\n                _ = path\n                    
.send_packets(&[io::IoSlice::new(&datagram[..written])])\n                    .await;\n            }\n            _ => {}\n        };\n    }\n}\n\n#[derive(Clone)]\nenum State {\n    Closing(Arc<RcvdPacketQueue>),\n    Draining,\n}\n\n#[derive(Clone)]\npub struct Termination {\n    // for generating io::Error\n    error: Error,\n    // hold this to keep the routing alive\n    _local_cids: ArcLocalCids,\n    state: State,\n}\n\nimpl Termination {\n    pub fn closing(error: Error, local_cids: ArcLocalCids, state: Arc<RcvdPacketQueue>) -> Self {\n        Self {\n            error,\n            _local_cids: local_cids,\n            state: State::Closing(state),\n        }\n    }\n\n    pub fn draining(error: Error, local_cids: ArcLocalCids) -> Self {\n        Self {\n            error,\n            _local_cids: local_cids,\n            state: State::Draining,\n        }\n    }\n\n    pub fn error(&self) -> Error {\n        self.error.clone()\n    }\n\n    // Close the packet queues; don't send or receive any more packets.\n    pub fn enter_draining(&mut self) -> bool {\n        match mem::replace(&mut self.state, State::Draining) {\n            State::Closing(rcvd_pkt_q) => {\n                rcvd_pkt_q.close_all();\n                true\n            }\n            _ => false,\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/tls/agent.rs",
    "content": "use std::sync::Arc;\n\nuse derive_more::AsRef;\nuse rustls::{\n    SignatureScheme,\n    pki_types::{CertificateDer, SubjectPublicKeyInfoDer},\n    sign::{CertifiedKey, SigningKey},\n};\nuse thiserror::Error;\nuse x509_parser::prelude::FromDer;\n\n#[derive(Debug, Clone, AsRef)]\npub struct LocalAgent {\n    name: Arc<str>,\n    certified_key: Arc<CertifiedKey>,\n}\n\n#[derive(Debug, Error)]\npub enum SignError {\n    #[error(\"Unsupported signature scheme {scheme:?}\")]\n    UnsupportedScheme { scheme: SignatureScheme },\n    #[error(transparent)]\n    Crypto {\n        #[from]\n        source: rustls::Error,\n    },\n}\n\nimpl LocalAgent {\n    pub fn new(name: Arc<str>, certified_key: Arc<CertifiedKey>) -> Self {\n        Self {\n            name,\n            certified_key,\n        }\n    }\n\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    pub fn cert_chain(&self) -> &[CertificateDer<'static>] {\n        &self.certified_key.cert\n    }\n\n    pub fn public_key(&self) -> SubjectPublicKeyInfoDer<'_> {\n        public_key(self.cert_chain())\n    }\n\n    pub fn sign_algorithm(&self) -> rustls::SignatureAlgorithm {\n        self.certified_key.key.algorithm()\n    }\n\n    pub fn sign(&self, scheme: SignatureScheme, data: &[u8]) -> Result<Vec<u8>, SignError> {\n        sign(self.certified_key.key.as_ref(), scheme, data)\n    }\n\n    pub fn verify(\n        &self,\n        scheme: SignatureScheme,\n        data: &[u8],\n        signature: &[u8],\n    ) -> Result<bool, VerifyError> {\n        verify(self.public_key(), scheme, data, signature)\n    }\n}\n\n#[derive(Debug, Clone, AsRef)]\npub struct RemoteAgent {\n    name: Arc<str>,\n    cert: Arc<[CertificateDer<'static>]>,\n}\n\n#[derive(Debug, Error)]\npub enum VerifyError {\n    #[error(\"Unsupported signature scheme {scheme:?}\")]\n    UnsupportedScheme { scheme: SignatureScheme },\n}\n\nimpl RemoteAgent {\n    pub fn new(name: Arc<str>, cert: Arc<[CertificateDer<'static>]>) -> 
Self {\n        Self { name, cert }\n    }\n\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n\n    pub fn cert_chain(&self) -> &[CertificateDer<'static>] {\n        &self.cert\n    }\n\n    pub fn public_key(&self) -> SubjectPublicKeyInfoDer<'_> {\n        public_key(self.cert_chain())\n    }\n\n    pub fn verify(\n        &self,\n        scheme: SignatureScheme,\n        data: &[u8],\n        signature: &[u8],\n    ) -> Result<bool, VerifyError> {\n        verify(self.public_key(), scheme, data, signature)\n    }\n}\n\nfn public_key<'d>(cert_chain: &'d [CertificateDer<'d>]) -> SubjectPublicKeyInfoDer<'d> {\n    use x509_parser::prelude::*;\n\n    match x509_parser::certificate::X509Certificate::from_der(&cert_chain[0]) {\n        Ok((_remain, certificate)) => {\n            let spki = certificate.public_key().raw;\n            spki.to_owned().into()\n        }\n        Err(_error) if cert_chain.len() == 1 => cert_chain[0].as_ref().into(),\n        Err(_error) => unreachable!(\"rustls returned an invalid peer_certificates.\"),\n    }\n}\n\nfn sign(\n    key: &(impl SigningKey + ?Sized),\n    scheme: SignatureScheme,\n    data: &[u8],\n) -> Result<Vec<u8>, SignError> {\n    // FIXME: same as load spki then sign with ring?\n    let signer = key\n        .choose_scheme(&[scheme])\n        .ok_or(SignError::UnsupportedScheme { scheme })?;\n    Ok(signer.sign(data)?)\n}\n\nfn verify(\n    spki: SubjectPublicKeyInfoDer,\n    scheme: SignatureScheme,\n    data: &[u8],\n    signature: &[u8],\n) -> Result<bool, VerifyError> {\n    let algorithm: &'static dyn ring::signature::VerificationAlgorithm = match scheme {\n        SignatureScheme::ECDSA_NISTP384_SHA384 => &ring::signature::ECDSA_P384_SHA384_ASN1,\n        SignatureScheme::ECDSA_NISTP256_SHA256 => &ring::signature::ECDSA_P256_SHA256_ASN1,\n        SignatureScheme::ED25519 => &ring::signature::ED25519,\n        SignatureScheme::RSA_PKCS1_SHA256 => &ring::signature::RSA_PKCS1_2048_8192_SHA256,\n        
SignatureScheme::RSA_PKCS1_SHA384 => &ring::signature::RSA_PKCS1_2048_8192_SHA384,\n        SignatureScheme::RSA_PKCS1_SHA512 => &ring::signature::RSA_PKCS1_2048_8192_SHA512,\n        SignatureScheme::RSA_PSS_SHA256 => &ring::signature::RSA_PSS_2048_8192_SHA256,\n        SignatureScheme::RSA_PSS_SHA384 => &ring::signature::RSA_PSS_2048_8192_SHA384,\n        SignatureScheme::RSA_PSS_SHA512 => &ring::signature::RSA_PSS_2048_8192_SHA512,\n        _ => return Err(VerifyError::UnsupportedScheme { scheme }),\n    };\n\n    let public_key = match x509_parser::x509::SubjectPublicKeyInfo::from_der(&spki) {\n        Ok((_remain, spki)) => spki.subject_public_key,\n        Err(_error) => unreachable!(\"rustls returned an invalid peer_certificates.\"),\n    };\n\n    Ok(\n        ring::signature::UnparsedPublicKey::new(algorithm, public_key)\n            .verify(data, signature)\n            .is_ok(),\n    )\n}\n"
  },
  {
    "path": "qconnection/src/tls/client_auth.rs",
    "content": "use std::{\n    ops::{BitAnd, Deref},\n    sync::Arc,\n};\n\nuse tokio::sync::SetOnce;\n\nuse crate::prelude::{LocalAgent, RemoteAgent};\n\n#[derive(Default, Clone, Debug, PartialEq, Eq)]\npub enum ClientNameVerifyResult {\n    #[default]\n    Accept,\n    /// Refuse the connection with a reason that will be sent to the client.\n    Refuse(String),\n    /// Refuse the connection silently without sending any reason to the client.\n    ///\n    /// Left a reason for logging purpose only.\n    SilentRefuse(String),\n}\n\nimpl BitAnd for ClientNameVerifyResult {\n    type Output = Self;\n\n    fn bitand(self, rhs: Self) -> Self::Output {\n        use ClientNameVerifyResult::*;\n        match (self, rhs) {\n            (Accept, Accept) => Accept,\n            (SilentRefuse(reason), ..) | (.., SilentRefuse(reason)) => SilentRefuse(reason),\n            (Refuse(reason), ..) | (.., Refuse(reason)) => Refuse(reason),\n        }\n    }\n}\n\n#[derive(Default, Clone, Debug, PartialEq, Eq)]\npub enum ClientAgentVerifyResult {\n    #[default]\n    Accept,\n    Refuse(String),\n}\n\nimpl BitAnd for ClientAgentVerifyResult {\n    type Output = Self;\n\n    fn bitand(self, rhs: Self) -> Self::Output {\n        use ClientAgentVerifyResult::*;\n        match (self, rhs) {\n            (Accept, Accept) => Accept,\n            (Refuse(reason), ..) 
| (.., Refuse(reason)) => Refuse(reason),\n        }\n    }\n}\n\npub trait AuthClient: Send + Sync {\n    fn verify_client_name(\n        &self,\n        server_agent: &LocalAgent,\n        client_name: Option<&str>,\n    ) -> ClientNameVerifyResult;\n\n    fn verify_client_agent(\n        &self,\n        server_agent: &LocalAgent,\n        client_agent: &RemoteAgent,\n    ) -> ClientAgentVerifyResult;\n}\n\n#[derive(Default, Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub struct AcceptAllClientAuther;\n\nimpl AuthClient for AcceptAllClientAuther {\n    fn verify_client_name(&self, _: &LocalAgent, _: Option<&str>) -> ClientNameVerifyResult {\n        ClientNameVerifyResult::Accept\n    }\n\n    fn verify_client_agent(&self, _: &LocalAgent, _: &RemoteAgent) -> ClientAgentVerifyResult {\n        ClientAgentVerifyResult::Accept\n    }\n}\n\n#[derive(Default, Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]\npub struct ClientNameAuther;\n\nimpl AuthClient for ClientNameAuther {\n    fn verify_client_name(&self, _: &LocalAgent, _: Option<&str>) -> ClientNameVerifyResult {\n        ClientNameVerifyResult::Accept\n    }\n\n    fn verify_client_agent(\n        &self,\n        _: &LocalAgent,\n        client_agent: &RemoteAgent,\n    ) -> ClientAgentVerifyResult {\n        use x509_parser::prelude::*;\n        macro_rules! 
refuse {\n            ($($tt:tt)*) => {\n                return ClientAgentVerifyResult::Refuse(format!($($tt)*))\n            };\n        }\n\n        let cert = match x509_parser::parse_x509_certificate(&client_agent.cert_chain()[0]) {\n            Ok((_remain, cert)) => cert,\n            Err(error) => refuse!(\"Invalid certificate: {error}\"),\n        };\n        let san = match cert.subject_alternative_name() {\n            Ok(Some(san)) => san,\n            Ok(None) => refuse!(\"Missing SAN in certificate\"),\n            Err(error) => refuse!(\"Invalid SAN in certificate: {error}\"),\n        };\n\n        if san.value.general_names.iter().any(|name| match name {\n            GeneralName::DNSName(name) => *name == client_agent.name(),\n            _ => false,\n        }) {\n            return ClientAgentVerifyResult::Accept;\n        }\n\n        refuse!(\"Client name not verified by client certificate\")\n    }\n}\n\nimpl<A: AuthClient + ?Sized> AuthClient for &A {\n    fn verify_client_name(\n        &self,\n        server_agent: &LocalAgent,\n        client_name: Option<&str>,\n    ) -> ClientNameVerifyResult {\n        A::verify_client_name(self, server_agent, client_name)\n    }\n\n    fn verify_client_agent(\n        &self,\n        server_agent: &LocalAgent,\n        client_agent: &RemoteAgent,\n    ) -> ClientAgentVerifyResult {\n        A::verify_client_agent(self, server_agent, client_agent)\n    }\n}\n\nimpl<A: AuthClient + ?Sized> AuthClient for Box<A> {\n    fn verify_client_name(\n        &self,\n        server_agent: &LocalAgent,\n        client_name: Option<&str>,\n    ) -> ClientNameVerifyResult {\n        self.deref().verify_client_name(server_agent, client_name)\n    }\n\n    fn verify_client_agent(\n        &self,\n        server_agent: &LocalAgent,\n        client_agent: &RemoteAgent,\n    ) -> ClientAgentVerifyResult {\n        self.deref().verify_client_agent(server_agent, client_agent)\n    }\n}\n\nimpl<A: AuthClient + ?Sized> 
AuthClient for Arc<A> {\n    fn verify_client_name(\n        &self,\n        server_agent: &LocalAgent,\n        client_name: Option<&str>,\n    ) -> ClientNameVerifyResult {\n        self.deref().verify_client_name(server_agent, client_name)\n    }\n\n    fn verify_client_agent(\n        &self,\n        server_agent: &LocalAgent,\n        client_agent: &RemoteAgent,\n    ) -> ClientAgentVerifyResult {\n        self.deref().verify_client_agent(server_agent, client_agent)\n    }\n}\n\nmacro_rules! impl_auth_client_for_tuple {\n    ($head:ident $($tail:ident)*) => {\n        impl_auth_client_for_tuple!(@impl $head $($tail)*);\n        impl_auth_client_for_tuple!($($tail)*);\n    };\n    (@impl $($t:ident)*) => {\n        impl<$($t,)*> AuthClient for ($($t,)*)\n        where\n            $($t: AuthClient,)*\n        {\n            fn verify_client_name(\n                &self,\n                server_agent: &LocalAgent,\n                client_name: Option<&str>\n            ) -> ClientNameVerifyResult {\n                #[allow(non_snake_case)]\n                let ($($t,)*) = self;\n                $($t.verify_client_name(server_agent, client_name) &)* Default::default()\n            }\n\n            fn verify_client_agent(\n                &self,\n                server_agent: &LocalAgent,\n                client_agent: &RemoteAgent\n            ) -> ClientAgentVerifyResult {\n                #[allow(non_snake_case)]\n                let ($($t,)*) = self;\n                $($t.verify_client_agent(server_agent, client_agent) &)* Default::default()\n            }\n        }\n    };\n    () => {}\n}\n\nimpl_auth_client_for_tuple! {\n    Z Y X W V U T S R Q P O N M L K J I H G F E D C B A\n}\n\n/// A gate that controls server transmission permissions during parameter verification.\n///\n/// `SendLock` is used by the server to restrict data transmission until transport\n/// parameter validation and server name verification are completed. 
It provides operations to:\n/// - `request_permit()`: Request permission to send (public method)\n/// - `grant_permit()`: Grant permission to send (internal method, pub(super) visibility)\n///\n/// This mechanism ensures that the server sends no data until it has properly validated\n/// the client's transport parameters and verified the requested server name (SNI),\n/// enhancing security by preventing premature data transmission before proper validation.\n#[derive(Default, Debug, Clone)]\npub struct ArcSendLock(Arc<SetOnce<()>>);\n\nimpl ArcSendLock {\n    /// Create a new `SendLock` in the restricted state.\n    ///\n    /// Transmission will be blocked until client parameters and server\n    /// verification are completed, or when silent rejection is not enabled.\n    ///\n    /// Usually used by the server, which additionally needs to verify the client name and certificates.\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    /// Create a new `SendLock` in the unrestricted state.\n    ///\n    /// Transmission is immediately permitted, used when silent rejection\n    /// is disabled or verification has already been completed.\n    ///\n    /// Usually used by the client, which does not need to additionally verify the server name and certificates.\n    pub fn unrestricted() -> Self {\n        Self(Arc::new(SetOnce::new_with(Some(()))))\n    }\n\n    /// Request permission to send data.\n    ///\n    /// This method will block until client parameters and server verification\n    /// are completed, or a connection error occurs.\n    ///\n    /// This method will not block when silent rejection is not enabled.\n    pub async fn request_permit(&self) {\n        _ = self.0.wait().await\n    }\n\n    /// Check if transmission is currently permitted.\n    pub fn is_permitted(&self) -> bool {\n        self.0.get().is_some()\n    }\n\n    /// Grant permission for transmission.\n    ///\n    /// Called after client parameters and server verification are completed\n    /// successfully. 
Unblocks all pending transmission requests.\n    pub fn grant_permit(&self) {\n        _ = self.0.set(());\n    }\n}\n"
  },
  {
    "path": "qconnection/src/tls.rs",
    "content": "mod agent;\nmod client_auth;\n\nuse std::{\n    future::Future,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n};\n\npub use agent::{LocalAgent, RemoteAgent, SignError, VerifyError};\npub use client_auth::{\n    AcceptAllClientAuther, ArcSendLock, AuthClient, ClientAgentVerifyResult, ClientNameVerifyResult,\n};\nuse futures::{future::poll_fn, never::Never};\nuse qbase::{\n    Epoch,\n    error::{Error, ErrorKind, QuicError},\n    packet::keys::{ArcKeys, ArcOneRttKeys, ArcZeroRttKeys, DirectionalKeys},\n    param::{ArcParameters, ClientParameters, ParameterId, ServerParameters, WriteParameters},\n};\nuse qrecovery::crypto::CryptoStream;\nuse rustls::{\n    ClientConfig, HandshakeKind, ServerConfig, SignatureScheme,\n    client::ResolvesClientCert,\n    quic::{ClientConnection, KeyChange, ServerConnection},\n    server::{ClientHello, ResolvesServerCert},\n    sign::CertifiedKey,\n};\nuse tokio::io::{AsyncReadExt, AsyncWriteExt};\n\nuse crate::{Handshake, tls::client_auth::ClientNameAuther};\n\npub enum TlsSession {\n    Client(ClientTlsSession),\n    Server(ServerTlsSession),\n}\n\npub const QUIC_VERSION: rustls::quic::Version = rustls::quic::Version::V1;\n\nimpl TlsSession {\n    fn poll_read_hs(&mut self, cx: &mut Context, buf: &mut Vec<u8>) -> Poll<Option<KeyChange>> {\n        match match self {\n            TlsSession::Client(session) => session.tls_conn.write_hs(buf),\n            TlsSession::Server(session) => session.tls_conn.write_hs(buf),\n        } {\n            None if buf.is_empty() => {\n                match self {\n                    TlsSession::Client(session) => session.read_waker = Some(cx.waker().clone()),\n                    TlsSession::Server(session) => session.read_waker = Some(cx.waker().clone()),\n                }\n                Poll::Pending\n            }\n            key_change => Poll::Ready(key_change),\n        }\n    }\n\n    fn write_hs(&mut self, buf: &[u8]) -> Result<(), 
rustls::Error> {\n        match self {\n            TlsSession::Client(ClientTlsSession { tls_conn, .. }) => tls_conn.read_hs(buf)?,\n            TlsSession::Server(ServerTlsSession { tls_conn, .. }) => tls_conn.read_hs(buf)?,\n        }\n        if let Some(waker) = match self {\n            TlsSession::Client(ClientTlsSession { read_waker, .. }) => read_waker.take(),\n            TlsSession::Server(ServerTlsSession { read_waker, .. }) => read_waker.take(),\n        } {\n            waker.wake();\n        }\n        Ok(())\n    }\n\n    fn alert(&self) -> Option<rustls::AlertDescription> {\n        match self {\n            TlsSession::Client(session) => session.tls_conn.alert(),\n            TlsSession::Server(session) => session.tls_conn.alert(),\n        }\n    }\n\n    fn is_handshaking(&self) -> bool {\n        match self {\n            TlsSession::Client(session) => session.tls_conn.is_handshaking(),\n            TlsSession::Server(session) => session.tls_conn.is_handshaking(),\n        }\n    }\n\n    fn handshake_kind(&self) -> Option<HandshakeKind> {\n        match self {\n            TlsSession::Client(session) => session.tls_conn.handshake_kind(),\n            TlsSession::Server(session) => session.tls_conn.handshake_kind(),\n        }\n    }\n\n    fn is_finished(&self) -> bool {\n        !self.is_handshaking() && self.handshake_kind().is_some()\n    }\n\n    fn r#yield(&self) -> TlsHandshakeInfo {\n        const INCOMPLETE: &str = \"\";\n        match self {\n            TlsSession::Client(tls_session) => TlsHandshakeInfo::Client {\n                zero_rtt_accepted: tls_session.zero_rtt_accepted.expect(INCOMPLETE),\n                local_agent: tls_session.local_agent().clone(),\n                remote_agent: tls_session.remote_agent.clone().expect(INCOMPLETE),\n            },\n            TlsSession::Server(tls_session) => TlsHandshakeInfo::Server {\n                local_agent: tls_session.local_agent().clone().expect(INCOMPLETE),\n                
remote_agent: tls_session.remote_agent.clone(),\n            },\n        }\n    }\n}\n\npub struct ClientTlsSession {\n    server_name: String,\n    tls_conn: ClientConnection,\n    read_waker: Option<Waker>,\n\n    // shared with ClientCertResolver\n    local_agent: Arc<Mutex<Option<LocalAgent>>>,\n    zero_rtt_accepted: Option<bool>,\n    remote_agent: Option<RemoteAgent>,\n}\n\n#[derive(Debug, Clone)]\nstruct ClientCertResolver {\n    client_name: Arc<str>,\n    inner: Arc<dyn ResolvesClientCert>,\n    client_agent: Arc<Mutex<Option<LocalAgent>>>,\n}\n\nimpl ResolvesClientCert for ClientCertResolver {\n    fn resolve(\n        &self,\n        root_hint_subjects: &[&[u8]],\n        sigschemes: &[SignatureScheme],\n    ) -> Option<Arc<CertifiedKey>> {\n        self.inner\n            .resolve(root_hint_subjects, sigschemes)\n            .inspect(|resolved_cert| {\n                let client_agent = LocalAgent::new(self.client_name.clone(), resolved_cert.clone());\n                let old = self.client_agent.lock().unwrap().replace(client_agent);\n                assert!(\n                    old.is_none(),\n                    \"unreachable: qconnection::tls::ClientCertResolver resolve only once\"\n                )\n            })\n    }\n\n    fn only_raw_public_keys(&self) -> bool {\n        self.inner.only_raw_public_keys()\n    }\n\n    fn has_certs(&self) -> bool {\n        self.inner.has_certs()\n    }\n}\n\nimpl ClientTlsSession {\n    pub fn init(\n        server_name: String,\n        mut tls_config: Arc<ClientConfig>,\n        client_params: &ClientParameters,\n    ) -> Result<Self, rustls::Error> {\n        let mut params_buf = Vec::with_capacity(1024);\n        params_buf.put_parameters(client_params);\n\n        let local_agent = Arc::new(Mutex::new(None));\n        // Inject ClientCertResolver so the resolved CertifiedKey can be passed upward\n        if let Some(client_name) = client_params.get::<String>(ParameterId::ClientName) {\n            let tls_config = Arc::make_mut(&mut 
tls_config);\n            tls_config.client_auth_cert_resolver = Arc::new(ClientCertResolver {\n                client_name: client_name.into(),\n                inner: tls_config.client_auth_cert_resolver.clone(),\n                client_agent: local_agent.clone(),\n            });\n        };\n\n        let name = rustls::pki_types::ServerName::try_from(server_name.clone())\n            .map_err(|e| rustls::Error::Other(rustls::OtherError(Arc::new(e))))?;\n        let tls_conn = ClientConnection::new(tls_config, QUIC_VERSION, name, params_buf)?;\n\n        let tls_session = Self {\n            local_agent,\n            server_name,\n            tls_conn,\n            read_waker: None,\n            zero_rtt_accepted: None,\n            remote_agent: None,\n        };\n        Ok(tls_session)\n    }\n\n    fn local_agent(&self) -> MutexGuard<'_, Option<LocalAgent>> {\n        self.local_agent.lock().expect(\"Poison\")\n    }\n\n    #[must_use]\n    pub fn load_zero_rtt(&self) -> Option<(ServerParameters, DirectionalKeys)> {\n        match (\n            self.tls_conn.quic_transport_parameters(),\n            self.tls_conn.zero_rtt_keys(),\n        ) {\n            (Some(raw_params), Some(keys)) => {\n                let params = ServerParameters::parse_from_bytes(raw_params).ok()?;\n                Some((params, keys.into()))\n            }\n            _ => None,\n        }\n    }\n\n    fn try_process_sh(&mut self) {\n        self.remote_agent = (self.tls_conn.peer_certificates())\n            .map(|cert| RemoteAgent::new(self.server_name.as_str().into(), Arc::from(cert)))\n    }\n\n    fn try_process_ee(&mut self, parameters: &ArcParameters) -> Result<(), Error> {\n        let Some(handshake_kind) = self.tls_conn.handshake_kind() else {\n            return Ok(());\n        };\n        let raw_params = self\n            .tls_conn\n            .quic_transport_parameters()\n            .expect(\"Parameters must be known at this point\");\n        let mut parameters 
= parameters.lock_guard()?;\n        let remembered = parameters.remembered().cloned();\n        let params = ServerParameters::parse_from_bytes(raw_params)?;\n        self.zero_rtt_accepted = Some(\n            matches!(remembered, Some(remembered) if remembered.is_0rtt_accepted(&params))\n                && matches!(handshake_kind, rustls::HandshakeKind::Resumed),\n        );\n        parameters.recv_remote_params(params)?;\n        Ok(())\n    }\n}\n\nimpl Drop for ClientTlsSession {\n    fn drop(&mut self) {\n        if let Some(read_waker) = self.read_waker.take() {\n            read_waker.wake();\n        }\n    }\n}\n\npub struct ServerTlsSession {\n    client_auther: Box<dyn AuthClient>,\n    tls_conn: ServerConnection,\n    read_waker: Option<Waker>,\n\n    // shared with ServerCertResolver\n    local_agent: Arc<Mutex<Option<LocalAgent>>>,\n    client_name: Option<Arc<str>>,\n    send_lock: ArcSendLock,\n    remote_agent: Option<RemoteAgent>,\n}\n\n#[derive(Debug, Clone)]\nstruct ServerCertResolver {\n    inner: Arc<dyn ResolvesServerCert>,\n    server_agent: Arc<Mutex<Option<LocalAgent>>>,\n}\n\nimpl ResolvesServerCert for ServerCertResolver {\n    fn resolve(&self, client_hello: ClientHello<'_>) -> Option<Arc<CertifiedKey>> {\n        let server_name = client_hello.server_name()?.into();\n        self.inner.resolve(client_hello).inspect(|resolved_cert| {\n            let server_agent = LocalAgent::new(server_name, resolved_cert.clone());\n            let old = self.server_agent.lock().unwrap().replace(server_agent);\n            assert!(\n                old.is_none(),\n                \"unreachable: qconnection::tls::ServerCertResolver resolve only once\"\n            )\n        })\n    }\n\n    fn only_raw_public_keys(&self) -> bool {\n        self.inner.only_raw_public_keys()\n    }\n}\n\nimpl ServerTlsSession {\n    pub fn init(\n        mut tls_config: Arc<ServerConfig>,\n        server_params: &ServerParameters,\n        client_auther: Box<dyn 
AuthClient>,\n    ) -> Result<Self, rustls::Error> {\n        let mut params_buf = Vec::with_capacity(1024);\n        params_buf.put_parameters(server_params);\n\n        let local_agent = Arc::new(Mutex::new(None));\n        // Inject ServerCertResolver so the resolved CertifiedKey can be passed upward\n        {\n            let tls_config = Arc::make_mut(&mut tls_config);\n            tls_config.cert_resolver = Arc::new(ServerCertResolver {\n                inner: tls_config.cert_resolver.clone(),\n                server_agent: local_agent.clone(),\n            });\n        };\n        let tls_conn = ServerConnection::new(tls_config, QUIC_VERSION, params_buf)?;\n\n        let tls_session = Self {\n            client_auther,\n            tls_conn,\n            read_waker: None,\n            local_agent,\n            client_name: None,\n            send_lock: ArcSendLock::new(),\n            remote_agent: None,\n        };\n        Ok(tls_session)\n    }\n\n    pub fn send_lock(&self) -> &ArcSendLock {\n        &self.send_lock\n    }\n\n    fn local_agent(&self) -> MutexGuard<'_, Option<LocalAgent>> {\n        self.local_agent.lock().expect(\"Poison\")\n    }\n\n    pub fn server_name(&self) -> Option<String> {\n        Some(self.local_agent().as_ref()?.name().to_owned())\n    }\n\n    fn try_process_ch(\n        &mut self,\n        parameters: &ArcParameters,\n        zero_rtt_keys: &ArcZeroRttKeys,\n    ) -> Result<(), Error> {\n        let client_params = ClientParameters::parse_from_bytes(\n            self.tls_conn\n                .quic_transport_parameters()\n                .expect(\"Client parameters must be present in ClientHello\"),\n        )?;\n\n        let client_name = client_params.get::<String>(ParameterId::ClientName);\n\n        let server_agent = self.local_agent().clone().ok_or_else(|| {\n            QuicError::with_default_fty(ErrorKind::ConnectionRefused, \"Missing SNI in client hello\")\n        })?;\n\n        match self\n            .client_auther\n            
.verify_client_name(&server_agent, client_name.as_deref())\n        {\n            ClientNameVerifyResult::Accept => {\n                self.send_lock.grant_permit();\n                tracing::debug!(?client_name);\n                self.client_name = client_name.map(Arc::from);\n                parameters.lock_guard()?.recv_remote_params(client_params)?;\n\n                match self.tls_conn.zero_rtt_keys() {\n                    Some(keys) => zero_rtt_keys.set_keys(keys.into()),\n                    None => _ = zero_rtt_keys.invalid(),\n                }\n\n                Ok(())\n            }\n            ClientNameVerifyResult::Refuse(reason) => {\n                self.send_lock.grant_permit();\n                tracing::debug!(\n                    target: \"quic\",\n                    server_name = %server_agent.name(),\n                    client_name = ?self.client_name.as_deref(),\n                    ?reason,\n                    \"Client name verification failed, refusing connection.\"\n                );\n                Err(Error::Quic(QuicError::with_default_fty(\n                    ErrorKind::ConnectionRefused,\n                    reason,\n                )))\n            }\n            ClientNameVerifyResult::SilentRefuse(reason) => {\n                tracing::debug!(\n                    target: \"quic\",\n                    server_name = %server_agent.name(),\n                    client_name = ?self.client_name.as_deref(),\n                    ?reason,\n                    \"Client name verification failed, refusing connection silently.\"\n                );\n                Err(Error::Quic(QuicError::with_default_fty(\n                    ErrorKind::ConnectionRefused,\n                    \"\",\n                )))\n            }\n        }\n    }\n\n    fn try_process_cert(&mut self) -> Result<(), Error> {\n        let Some(client_name) = self.client_name.as_ref() else {\n            return Ok(());\n        };\n        let Some(client_cert) 
= self.tls_conn.peer_certificates().map(Arc::from) else {\n            return Ok(());\n        };\n\n        let client_agent = RemoteAgent::new(client_name.clone(), client_cert);\n\n        let server_agent = self\n            .local_agent()\n            .clone()\n            .expect(\"Server name must be known at this point\");\n\n        match (ClientNameAuther, &self.client_auther)\n            .verify_client_agent(&server_agent, &client_agent)\n        {\n            ClientAgentVerifyResult::Accept => {\n                self.remote_agent = Some(client_agent);\n                Ok(())\n            }\n            ClientAgentVerifyResult::Refuse(reason) => {\n                tracing::debug!(\n                    target: \"quic\",\n                    server_name = %server_agent.name(),\n                    ?self.client_name,\n                    ?reason,\n                    \"Client certificate verification failed, refusing connection.\"\n                );\n                Err(Error::Quic(QuicError::with_default_fty(\n                    ErrorKind::ConnectionRefused,\n                    reason,\n                )))\n            }\n        }\n    }\n}\n\nimpl Drop for ServerTlsSession {\n    fn drop(&mut self) {\n        if let Some(read_waker) = self.read_waker.take() {\n            read_waker.wake();\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub enum TlsHandshakeInfo {\n    Client {\n        local_agent: Option<LocalAgent>,\n        remote_agent: RemoteAgent,\n        zero_rtt_accepted: bool,\n    },\n    Server {\n        local_agent: LocalAgent,\n        remote_agent: Option<RemoteAgent>,\n    },\n}\n\nimpl TlsHandshakeInfo {\n    pub fn zero_rtt_accepted(&self) -> Option<bool> {\n        match self {\n            TlsHandshakeInfo::Client {\n                zero_rtt_accepted, ..\n            } => Some(*zero_rtt_accepted),\n            TlsHandshakeInfo::Server { .. 
} => None,\n        }\n    }\n}\n\nenum InfoState {\n    Demand(Vec<Waker>),\n    Ready(Arc<TlsHandshakeInfo>),\n}\n\nimpl InfoState {\n    fn set(&mut self, info: Arc<TlsHandshakeInfo>) {\n        // wakers woken in drop\n        *self = Self::Ready(info);\n    }\n\n    fn poll_get(&mut self, cx: &mut Context) -> Poll<Arc<TlsHandshakeInfo>> {\n        match self {\n            InfoState::Demand(wakers) => {\n                wakers.push(cx.waker().clone());\n                Poll::Pending\n            }\n            InfoState::Ready(tls_handshake_info) => Poll::Ready(tls_handshake_info.clone()),\n        }\n    }\n\n    fn get(&self) -> Option<&Arc<TlsHandshakeInfo>> {\n        match self {\n            InfoState::Demand(..) => None,\n            InfoState::Ready(tls_handshake_info) => Some(tls_handshake_info),\n        }\n    }\n}\n\nimpl Default for InfoState {\n    fn default() -> Self {\n        Self::Demand(vec![])\n    }\n}\n\nimpl Drop for InfoState {\n    fn drop(&mut self) {\n        if let Self::Demand(wakers) = self {\n            for waker in wakers.drain(..) 
{\n                waker.wake();\n            }\n        }\n    }\n}\n\npub struct TlsHandshake {\n    session: TlsSession,\n    info: InfoState,\n}\n\n#[derive(Clone)]\npub struct ArcTlsHandshake(Arc<Mutex<Result<TlsHandshake, Error>>>);\n\nimpl ArcTlsHandshake {\n    pub fn new(session: TlsSession) -> ArcTlsHandshake {\n        Self(Arc::new(Mutex::new(Ok(TlsHandshake {\n            session,\n            info: Default::default(),\n        }))))\n    }\n\n    fn state(&self) -> MutexGuard<'_, Result<TlsHandshake, Error>> {\n        self.0.lock().unwrap()\n    }\n\n    async fn read_hs(&self, buf: &mut Vec<u8>) -> Result<Option<KeyChange>, Error> {\n        poll_fn(|cx| {\n            let mut tls_handshake = self.state();\n            match tls_handshake.as_mut() {\n                Ok(state) => state.session.poll_read_hs(cx, buf).map(Ok),\n                Err(e) => Poll::Ready(Err(e.clone())),\n            }\n        })\n        .await\n    }\n\n    fn write_hs(&self, buf: &[u8]) -> Result<(), Error> {\n        let mut tls_handshake = self.state();\n        let tls_handshake = tls_handshake.as_mut().map_err(|e| e.clone())?;\n        match tls_handshake.session.write_hs(buf) {\n            Ok(_) => Ok(()),\n            Err(error) => {\n                let error_kind = match tls_handshake.session.alert() {\n                    Some(alert) => ErrorKind::Crypto(alert.into()),\n                    None => ErrorKind::ProtocolViolation,\n                };\n                Err(Error::Quic(QuicError::with_default_fty(\n                    error_kind,\n                    format!(\"TLS error: {error}\"),\n                )))\n            }\n        }\n    }\n\n    pub fn info(\n        &self,\n    ) -> impl Future<Output = Result<Arc<TlsHandshakeInfo>, Error>> + Unpin + use<'_> {\n        poll_fn(|cx| {\n            let mut tls_handshake = self.state();\n            match tls_handshake.as_mut() {\n                Ok(state) => state.info.poll_get(cx).map(Ok),\n               
 Err(e) => Poll::Ready(Err(e.clone())),\n            }\n        })\n    }\n\n    pub fn is_finished(&self) -> Result<bool, Error> {\n        let tls_handshake = self.state();\n        match tls_handshake.as_ref() {\n            Ok(state) => Ok(state.session.is_finished()),\n            Err(e) => Err(e.clone()),\n        }\n    }\n\n    pub fn server_name(&self) -> Result<Option<String>, Error> {\n        let tls_handshake = self.state();\n        let tls_handshake = tls_handshake.as_ref().map_err(|error| error.clone())?;\n        Ok(match &tls_handshake.session {\n            TlsSession::Client(session) => Some(session.server_name.clone()),\n            TlsSession::Server(session) => session.server_name(),\n        })\n    }\n\n    pub fn on_conn_error(&self, error: &Error) {\n        *self.state() = Err(error.clone())\n    }\n\n    fn try_process_tls_message(\n        &self,\n        parameters: &ArcParameters,\n        zero_rtt_keys: &ArcZeroRttKeys,\n    ) -> Result<Option<Arc<TlsHandshakeInfo>>, Error> {\n        let mut state = self.state();\n        let tls_handshake = state.as_mut().map_err(|e| e.clone())?;\n\n        match &mut tls_handshake.session {\n            TlsSession::Client(session) => {\n                if session.remote_agent.is_none() {\n                    session.try_process_sh();\n                }\n                if !parameters.lock_guard()?.is_remote_params_received() {\n                    session.try_process_ee(parameters)?;\n                }\n            }\n            TlsSession::Server(session) => {\n                if !parameters.lock_guard()?.is_remote_params_received() {\n                    session.try_process_ch(parameters, zero_rtt_keys)?;\n                }\n                if session.remote_agent.is_none() {\n                    session.try_process_cert()?;\n                }\n            }\n        }\n\n        if tls_handshake.session.is_finished() && tls_handshake.info.get().is_none() {\n            let info = 
Arc::new(tls_handshake.session.r#yield());\n            tracing::debug!(target: \"quic\", \"TLS handshake finished\");\n            tls_handshake.info.set(info.clone());\n            return Ok(Some(info));\n        }\n\n        Ok(None)\n    }\n\n    pub fn start(\n        self,\n        parameters: ArcParameters,\n        quic_handshake: Handshake,\n        crypto_streams: [CryptoStream; 3],\n        (handshake_keys, zero_rtt_keys, one_rtt_keys): (ArcKeys, ArcZeroRttKeys, ArcOneRttKeys),\n        on_handshake_complete: impl FnOnce(&TlsHandshakeInfo) -> Result<(), Error> + Send + 'static,\n    ) -> impl futures::Future<Output = Result<(), Error>> + Send + 'static {\n        let mut on_handshake_complete = Some(on_handshake_complete);\n\n        let crypto_read_task = |epoch: Epoch| {\n            let tls_handshake = self.clone();\n            let mut stream_reader = crypto_streams[epoch].reader();\n            async move {\n                let mut buf = [0; 2048];\n                while let Ok(read) = stream_reader.read(&mut buf).await {\n                    tls_handshake.write_hs(&buf[..read])?;\n                }\n                Result::<_, Error>::Ok(())\n            }\n        };\n\n        let [initial_read_task, handshake_read_task, data_read_task] =\n            Epoch::EPOCHS.map(|epoch: Epoch| crypto_read_task(epoch));\n\n        let mut crypto_writers =\n            Epoch::EPOCHS.map(|epoch: Epoch| crypto_streams[epoch].writer().clone());\n\n        let crypto_write_task = async move {\n            let mut buf = Vec::with_capacity(2048);\n            let mut cur_epoch = Epoch::Initial;\n            loop {\n                let key_change = self.read_hs(&mut buf).await?;\n                if !buf.is_empty() {\n                    // error: crypto buffer offset overflow\n                    (crypto_writers[cur_epoch].write_all(&buf).await).map_err(|e| {\n                        QuicError::with_default_fty(ErrorKind::Internal, format!(\"{e:?}\"))\n
                    })?;\n                    buf.clear();\n                }\n                match key_change {\n                    Some(KeyChange::Handshake { keys }) => {\n                        handshake_keys.set_keys(keys.into());\n                        quic_handshake.got_handshake_key();\n                        cur_epoch = Epoch::Handshake;\n                    }\n                    Some(KeyChange::OneRtt { keys, next }) => {\n                        one_rtt_keys.set_keys(keys, next);\n                        cur_epoch = Epoch::Data;\n                    }\n                    None => {}\n                };\n                if let Some(info) = self.try_process_tls_message(&parameters, &zero_rtt_keys)? {\n                    (on_handshake_complete.take().expect(\"TLS handshake completed twice\"))(&info)?;\n                }\n            }\n        };\n\n        // Workaround for rustc error[E0282]: type annotations needed\n        let crypto_write_task = async move {\n            let result: Result<Never, Error> = crypto_write_task.await;\n            result\n        };\n\n        async move {\n            tokio::try_join!(\n                initial_read_task,\n                handshake_read_task,\n                data_read_task,\n                crypto_write_task,\n            )?;\n            Ok(())\n        }\n    }\n}\n"
  },
  {
    "path": "qconnection/src/traversal.rs",
    "content": "use std::{io, net::SocketAddr};\n\nuse futures::{StreamExt, stream::FuturesUnordered};\nuse qbase::{\n    frame::{PunchHelloFrame, ReliableFrame, io::ReceiveFrame},\n    net::{\n        addr::EndpointAddr,\n        route::{Link, Pathway},\n        tx::Signals,\n    },\n    packet::{ProductHeader, header::short::OneRttHeader},\n};\nuse qevent::telemetry::Instrument;\nuse qinterface::{bind_uri::BindUri, component::location::AddressEvent};\nuse qtraversal::nat::client::{ClientLocationData, StunClientsComponent};\nuse tracing::Instrument as _;\n\nuse super::Components;\nuse crate::CidRegistry;\n\nimpl ReceiveFrame<(BindUri, Pathway, Link, ReliableFrame)> for Components {\n    type Output = ();\n    fn recv_frame(\n        &self,\n        frame: (BindUri, Pathway, Link, ReliableFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        self.puncher.recv_frame(frame)\n    }\n}\n\nimpl ReceiveFrame<(BindUri, Pathway, Link, PunchHelloFrame)> for Components {\n    type Output = ();\n\n    fn recv_frame(\n        &self,\n        frame: (BindUri, Pathway, Link, PunchHelloFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        self.puncher.recv_frame(frame)\n    }\n}\n\nimpl Components {\n    pub fn subscribe_local_address(&self) {\n        let mut observer = self.locations.subscribe();\n        let conn = self.clone();\n\n        let future = async move {\n            let handle_address_event = |(bind_uri, event): (BindUri, AddressEvent)| {\n                let event = match event.downcast::<io::Result<SocketAddr>>() {\n                    Ok(AddressEvent::Upsert(data)) => {\n                        // on error: delect from address book\n                        // THINK: Err和remove的异同？\n                        let Ok(bound_addr) = data.as_ref() else {\n                            return;\n                        };\n                        let endpoint_addr = EndpointAddr::direct(*bound_addr);\n                        
conn.add_local_endpoint(bind_uri, endpoint_addr);\n                        return;\n                    }\n                    Ok(AddressEvent::Remove(_type_id)) => return,\n                    Ok(AddressEvent::Closed) => return,\n                    Err(event) => event,\n                };\n                let _event = match event.downcast::<ClientLocationData>() {\n                    Ok(AddressEvent::Upsert(data)) => {\n                        let Ok(endpoint_addr) = data.as_ref() else {\n                            return;\n                        };\n                        conn.add_local_endpoint(bind_uri.clone(), *endpoint_addr);\n                        if matches!(*endpoint_addr, EndpointAddr::Agent { .. }) {\n                            _ = conn.add_local_punch_address(bind_uri.clone(), *endpoint_addr);\n                        }\n                        return;\n                    }\n                    Ok(AddressEvent::Remove(_type_id)) => return,\n                    Ok(AddressEvent::Closed) => return,\n                    Err(_event) => return,\n                };\n            };\n\n            loop {\n                tokio::select! 
{\n                    _ = conn.conn_state.terminated() => break,\n                    address_event = observer.recv() => {\n                        match address_event {\n                            Some(event) => handle_address_event(event),\n                            None => break,\n                        }\n                    }\n                }\n            }\n        };\n        // Terminates when the connection is closed or the observer channel drops.\n        tokio::spawn(future.instrument_in_current().in_current_span());\n    }\n\n    // Add a local direct endpoint; a new path can be created directly\n    pub fn add_local_endpoint(&self, bind: BindUri, addr: EndpointAddr) {\n        tracing::trace!(target: \"quic\", bind_uri = %bind, %addr, \"add local endpoint\");\n        match self.puncher.add_local_endpoint(bind, addr) {\n            Ok(ways) => {\n                let ways: Vec<(BindUri, Link, qtraversal::PathWay)> = ways;\n                ways.into_iter().for_each(|way| {\n                    let _ = self.add_path(way.0, way.1, way.2);\n                });\n            }\n            Err(error) => {\n                tracing::debug!(target: \"quic\", ?error, \"Add local endpoint failed\");\n            }\n        }\n    }\n\n    // Add a peer direct endpoint; a new path can be created directly\n    pub fn add_peer_endpoint(&self, addr: EndpointAddr, source: qresolve::Source) {\n        tracing::trace!(target: \"quic\", %addr, ?source, \"add peer endpoint\");\n        match self.puncher.add_peer_endpoint(addr, source) {\n            Ok(ways) => {\n                ways.into_iter().for_each(|way| {\n                    let _ = self.add_path(way.0, way.1, way.2);\n                });\n            }\n            Err(error) => {\n                tracing::warn!(target: \"quic\", ?error, \"Add peer endpoint failed\");\n            }\n        }\n    }\n\n    // Add a local direct address used for hole punching; it cannot create a new path directly\n    pub fn add_local_punch_address(\n        &self,\n        bind_uri: BindUri,\n        endpoint_addr: EndpointAddr,\n    ) -> io::Result<()> {\n
        let iface = self\n            .interfaces\n            .borrow(&bind_uri)\n            .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, \"interface not found\"))?;\n\n        let local_addr = endpoint_addr.addr();\n        let conn = self.clone();\n\n        let tasks = iface.with_component(|clients: &StunClientsComponent| {\n            clients.with_clients(|map| {\n                // Workaround for clippy issue: https://github.com/rust-lang/rust-clippy/issues/16428\n                #[allow(clippy::redundant_iter_cloned)]\n                map.values()\n                    .cloned()\n                    .map(|client| async move { client.nat_type().await })\n                    .collect::<FuturesUnordered<_>>()\n            })\n        })?;\n\n        let Some(mut tasks) = tasks else {\n            return Ok(());\n        };\n\n        tokio::spawn(\n            async move {\n                while let Some(result) = tasks.next().await {\n                    if let Ok(nat_type) = result {\n                        _ = conn.puncher.add_local_address(\n                            bind_uri.clone(),\n                            local_addr,\n                            nat_type,\n                            0,\n                        );\n                    }\n                }\n            }\n            .instrument_in_current()\n            .in_current_span(),\n        );\n        Ok(())\n    }\n\n    pub fn remove_address(&self, addr: SocketAddr) {\n        let _ = self.puncher.remove_local_address(addr);\n    }\n}\n\n#[derive(Clone)]\npub struct PunchTransaction {\n    cid_registry: CidRegistry,\n}\n\nimpl PunchTransaction {\n    pub(crate) fn new(cid_registry: CidRegistry) -> Self {\n        Self { cid_registry }\n    }\n}\n\nimpl ProductHeader<OneRttHeader> for PunchTransaction {\n    fn new_header(&self) -> Result<OneRttHeader, Signals> {\n        Ok(OneRttHeader::new(\n            false.into(),\n            self.cid_registry\n                .remote\n
                .latest_dcid()\n                .ok_or(Signals::CONNECTION_ID)?,\n        ))\n    }\n}\n"
  },
  {
    "path": "qconnection/src/tx.rs",
    "content": "use bytes::BufMut;\nuse derive_more::Deref;\nuse qbase::{\n    frame::{ContainSpec, FrameFeature, Spec},\n    net::tx::Signals,\n    packet::{\n        AssemblePacket, PacketInfo, PacketWriter as BasePacketWriter, RecordFrame,\n        header::{EncodeHeader, GetType, io::WriteHeader, long::LongHeader, short::OneRttHeader},\n        keys::DirectionalKeys,\n        signal::KeyPhaseBit,\n    },\n    util::ContinuousData,\n};\nuse qevent::packet::PacketWriter as QEventPacketWriter;\nuse qrecovery::journal::{ArcSentJournal, NewPacketGuard};\nuse tokio::time::Duration;\n\n#[derive(Deref)]\npub struct PacketWriter<'b, 's, F> {\n    #[deref]\n    writer: QEventPacketWriter<'b>,\n    // 不同空间的send guard类型不一样\n    clerk: NewPacketGuard<'s, F>,\n    retran_timeout: Duration,\n    expire_timeout: Duration,\n}\n\nimpl<'b, F> AsRef<BasePacketWriter<'b>> for PacketWriter<'b, '_, F> {\n    #[inline]\n    fn as_ref(&self) -> &BasePacketWriter<'b> {\n        &self.writer\n    }\n}\n\nimpl<'b, F> AsRef<QEventPacketWriter<'b>> for PacketWriter<'b, '_, F> {\n    #[inline]\n    fn as_ref(&self) -> &QEventPacketWriter<'b> {\n        &self.writer\n    }\n}\n\nimpl<'b, 's, F> PacketWriter<'b, 's, F> {\n    pub fn new_long<S>(\n        header: LongHeader<S>,\n        buffer: &'b mut [u8],\n        keys: DirectionalKeys,\n        journal: &'s ArcSentJournal<F>,\n        retran_timeout: Duration,\n        expire_timeout: Duration,\n    ) -> Result<Self, Signals>\n    where\n        S: EncodeHeader + 'static,\n        LongHeader<S>: GetType,\n        for<'a> &'a mut [u8]: WriteHeader<LongHeader<S>>,\n    {\n        let clerk = journal.new_packet();\n        let pn = clerk.pn();\n        Ok(Self {\n            clerk,\n            writer: QEventPacketWriter::new_long(&header, buffer, pn, keys)?,\n            expire_timeout,\n            retran_timeout,\n        })\n    }\n\n    pub fn new_short(\n        header: OneRttHeader,\n        buffer: &'b mut [u8],\n        keys: 
DirectionalKeys,\n        key_phase: KeyPhaseBit,\n        journal: &'s ArcSentJournal<F>,\n        retran_timeout: Duration,\n        expire_timeout: Duration,\n    ) -> Result<Self, Signals> {\n        let clerk = journal.new_packet();\n        let pn = clerk.pn();\n        Ok(Self {\n            clerk,\n            writer: QEventPacketWriter::new_short(&header, buffer, pn, keys, key_phase)?,\n            expire_timeout,\n            retran_timeout,\n        })\n    }\n}\n\nunsafe impl<'b, 's, F> BufMut for PacketWriter<'b, 's, F> {\n    #[inline]\n    fn remaining_mut(&self) -> usize {\n        self.writer.remaining_mut()\n    }\n\n    #[inline]\n    unsafe fn advance_mut(&mut self, cnt: usize) {\n        unsafe { self.writer.advance_mut(cnt) };\n    }\n\n    #[inline]\n    fn chunk_mut(&mut self) -> &mut bytes::buf::UninitSlice {\n        self.writer.chunk_mut()\n    }\n\n    // Stream/datagram frames may be padded manually, and that padding must also be recorded, so the default implementation cannot be used here\n    #[inline]\n    fn put_bytes(&mut self, val: u8, cnt: usize) {\n        self.writer.put_bytes(val, cnt);\n    }\n}\n\nimpl<F> AssemblePacket for PacketWriter<'_, '_, F> {\n    #[inline]\n    fn encrypt_and_protect_packet(self) -> (usize, PacketInfo) {\n        self.clerk\n            .build_with_time(self.retran_timeout, self.expire_timeout);\n        self.writer.encrypt_and_protect_packet()\n    }\n}\n\nimpl<'b, GF, F, D: ContinuousData> RecordFrame<F, D> for PacketWriter<'b, '_, GF>\nwhere\n    QEventPacketWriter<'b>: RecordFrame<F, D>,\n    for<'f> &'f F: TryInto<GF>,\n{\n    #[inline]\n    fn record_frame(&mut self, frame: &F) {\n        if let Ok(frame) = frame.try_into() {\n            self.clerk.record_frame(frame);\n        } else {\n            self.clerk.record_trivial();\n        }\n\n        self.writer.record_frame(frame);\n    }\n}\n\n#[derive(Deref)]\npub struct TrivialPacketWriter<'b, 's, F> {\n    #[deref]\n    writer: QEventPacketWriter<'b>,\n    // The send guard type differs between packet number spaces\n    clerk: NewPacketGuard<'s, 
F>,\n}\n\nimpl<'b, F> AsRef<BasePacketWriter<'b>> for TrivialPacketWriter<'b, '_, F> {\n    #[inline]\n    fn as_ref(&self) -> &BasePacketWriter<'b> {\n        &self.writer\n    }\n}\n\nimpl<'b, F> AsRef<QEventPacketWriter<'b>> for TrivialPacketWriter<'b, '_, F> {\n    #[inline]\n    fn as_ref(&self) -> &QEventPacketWriter<'b> {\n        &self.writer\n    }\n}\n\nimpl<'b, 's, F> TrivialPacketWriter<'b, 's, F> {\n    #[inline]\n    pub fn new_long<S>(\n        header: LongHeader<S>,\n        buffer: &'b mut [u8],\n        keys: DirectionalKeys,\n        journal: &'s ArcSentJournal<F>,\n    ) -> Result<Self, Signals>\n    where\n        S: EncodeHeader + 'static,\n        LongHeader<S>: GetType,\n        for<'a> &'a mut [u8]: WriteHeader<LongHeader<S>>,\n    {\n        let clerk = journal.new_packet();\n        let pn = clerk.pn();\n        Ok(Self {\n            clerk,\n            writer: QEventPacketWriter::new_long(&header, buffer, pn, keys)?,\n        })\n    }\n\n    #[inline]\n    pub fn new_short(\n        header: OneRttHeader,\n        buffer: &'b mut [u8],\n        keys: DirectionalKeys,\n        key_phase: KeyPhaseBit,\n        journal: &'s ArcSentJournal<F>,\n    ) -> Result<Self, Signals> {\n        let clerk = journal.new_packet();\n        let pn = clerk.pn();\n        Ok(Self {\n            clerk,\n            writer: QEventPacketWriter::new_short(&header, buffer, pn, keys, key_phase)?,\n        })\n    }\n}\n\nunsafe impl<'b, 's, F> BufMut for TrivialPacketWriter<'b, 's, F> {\n    #[inline]\n    fn remaining_mut(&self) -> usize {\n        self.writer.remaining_mut()\n    }\n\n    #[inline]\n    unsafe fn advance_mut(&mut self, cnt: usize) {\n        unsafe { self.writer.advance_mut(cnt) };\n    }\n\n    #[inline]\n    fn chunk_mut(&mut self) -> &mut bytes::buf::UninitSlice {\n        self.writer.chunk_mut()\n    }\n\n    // Stream/datagram frames may be padded manually, and that padding must also be recorded, so the default implementation cannot be used here\n    #[inline]\n    fn put_bytes(&mut self, val: u8, cnt: usize) {\n        
self.writer.put_bytes(val, cnt);\n    }\n}\n\nimpl<F> AssemblePacket for TrivialPacketWriter<'_, '_, F> {\n    #[inline]\n    fn encrypt_and_protect_packet(self) -> (usize, PacketInfo) {\n        self.clerk.build_trivial();\n        self.writer.encrypt_and_protect_packet()\n    }\n}\n\nimpl<'b, GF, F, D: ContinuousData> RecordFrame<F, D> for TrivialPacketWriter<'b, '_, GF>\nwhere\n    F: FrameFeature,\n    QEventPacketWriter<'b>: RecordFrame<F, D>,\n{\n    #[inline]\n    fn record_frame(&mut self, frame: &F) {\n        // however, this will be checked again in NewPacketGuard::build_trivial\n        debug_assert!(\n            frame.specs().contain(Spec::NonAckEliciting),\n            \"Frame is not non-ack eliciting {}\",\n            std::any::type_name::<F>()\n        );\n        self.clerk.record_trivial();\n        self.writer.record_frame(frame);\n    }\n}\n"
  },
  {
    "path": "qdatagram/Cargo.toml",
    "content": "[package]\nname = \"qdatagram\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"Datagram transmission of dquic\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n[dependencies]\nbytes = { workspace = true }\nfutures = { workspace = true }\nqbase = { workspace = true }\ntokio = { workspace = true }\ntracing = { workspace = true }\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\"test-util\", \"macros\"] }\n"
  },
  {
    "path": "qdatagram/src/lib.rs",
    "content": "mod reader;\nuse bytes::Bytes;\npub use reader::*;\nmod writer;\nuse std::io;\n\nuse qbase::{\n    error::Error,\n    frame::{DatagramFrame, io::ReceiveFrame},\n    net::tx::{ArcSendWakers, Signals},\n    packet::Package,\n};\npub use writer::*;\n\n/// Combination of [`DatagramIncoming`] and [`DatagramOutgoing`]\n#[derive(Debug, Clone)]\npub struct DatagramFlow {\n    /// The incoming datagram frame, see type's doc for more details.\n    incoming: DatagramIncoming,\n    /// The outgoing datagram frame, see type's doc for more details.\n    outgoing: DatagramOutgoing,\n}\n\nimpl DatagramFlow {\n    /// Creates a new instance of [`DatagramFlow`].\n    ///\n    /// This method takes local protocol parameter [`max_datagram_frame_size`],\n    /// the local's transport parameter [`max_datagram_frame_size`] limits the size of the datagram frames that peer\n    /// can send.\n    ///\n    /// [`max_datagram_frame_size`]: https://www.rfc-editor.org/rfc/rfc9221.html#name-transport-parameter\n    #[inline]\n    pub fn new(local_max_datagram_frame_size: u64, tx_wakers: ArcSendWakers) -> Self {\n        Self {\n            incoming: DatagramIncoming::new(local_max_datagram_frame_size as _),\n            outgoing: DatagramOutgoing::new(tx_wakers),\n        }\n    }\n\n    pub fn try_load_data_into<P>(&self, packet: &mut P) -> Result<(), Signals>\n    where\n        P: bytes::BufMut + ?Sized,\n        (DatagramFrame, Bytes): Package<P>,\n    {\n        self.outgoing.try_load_data_into(packet)\n    }\n\n    /// Create a new **unique** instance of [`DatagramReader`].\n    ///\n    /// Return an error if the connection is closing or already closed,\n    /// or datagram is disenabled by local.\n    ///\n    /// See [`DatagramIncoming::new_reader`] for more details.\n    #[inline]\n    pub fn reader(&self) -> io::Result<DatagramReader> {\n        self.incoming.new_reader()\n    }\n\n    /// Create a new instance of [`DatagramWriter`].\n    ///\n    /// Return an error 
if the connection is closing or already closed,\n    /// or the Unreliable Datagram Extension is disabled by the peer (`max_datagram_frame_size` is `0`).\n    ///\n    /// See [`DatagramOutgoing::new_writer`] for more details.\n    #[inline]\n    pub fn writer(&self, max_datagram_frame_size: u64) -> io::Result<DatagramWriter> {\n        self.outgoing.new_writer(max_datagram_frame_size)\n    }\n\n    /// See [`DatagramOutgoing::on_conn_error`] and [`DatagramIncoming::on_conn_error`] for more details.\n    #[inline]\n    pub fn on_conn_error(&self, error: &Error) {\n        self.incoming.on_conn_error(error);\n        self.outgoing.on_conn_error(error);\n    }\n}\n\n/// See [`DatagramIncoming::recv_datagram`] for more details.\nimpl ReceiveFrame<(DatagramFrame, Bytes)> for DatagramFlow {\n    type Output = ();\n\n    #[inline]\n    fn recv_frame(&self, (frame, body): (DatagramFrame, Bytes)) -> Result<Self::Output, Error> {\n        self.incoming.recv_datagram(frame, body)\n    }\n}\n"
  },
  {
    "path": "qdatagram/src/reader.rs",
    "content": "use std::{\n    collections::VecDeque,\n    future::Future,\n    io,\n    pin::Pin,\n    sync::{Arc, Mutex},\n    task::{Context, Poll, Waker, ready},\n};\n\nuse bytes::{BufMut, Bytes};\nuse qbase::{\n    error::{Error, ErrorKind, QuicError},\n    frame::{DatagramFrame, EncodeSize, GetFrameType},\n};\n\n#[derive(Debug)]\nstruct RawDatagarmReader {\n    local_max_size: usize,\n    rcvd_datagrams: VecDeque<Bytes>,\n    read_waker: Option<Waker>,\n}\n\nimpl RawDatagarmReader {\n    fn new(local_max_size: usize) -> Self {\n        Self {\n            local_max_size,\n            rcvd_datagrams: VecDeque::new(),\n            read_waker: None,\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct DatagramIncoming(Arc<Mutex<Result<RawDatagarmReader, Error>>>);\n\nimpl DatagramIncoming {\n    /// Create a new [`DatagramIncoming`] to receive datagram frames.\n    pub fn new(local_max_size: usize) -> Self {\n        Self(Arc::new(Mutex::new(Ok(RawDatagarmReader::new(\n            local_max_size,\n        )))))\n    }\n\n    /// Try to create a new [`DatagramReader`] for the application to read the received datagram frames.\n    ///\n    /// Returns an error when the Unreliable Datagram Extension was disenabled by local parameters,\n    /// see <https://www.rfc-editor.org/rfc/rfc9221.html#name-transport-parameter> for more delails.\n    pub fn new_reader(&self) -> io::Result<DatagramReader> {\n        let mut guard = self.0.lock().unwrap();\n        let reader = guard.as_mut().map_err(|e| e.clone())?;\n        if reader.local_max_size == 0 {\n            tracing::error!(\"   Cause by: DatagramIncoming::new_reader local_max_size is 0\");\n            return Err(io::Error::new(\n                io::ErrorKind::Unsupported,\n                \"Unreliable Datagram Extension was disenabled by local parameters\",\n            ));\n        }\n\n        Ok(DatagramReader(self.0.clone()))\n    }\n\n    /// Receives a datagram frame for the application to read.\n    
///\n    /// If the size of the received datagram exceeds the maximum size set by the local protocol parameters `max_datagram_frame_size`,\n    /// a connection error occurs.\n    ///\n    /// If the connection is closing or closed, the new datagram will be ignored.\n    ///\n    /// If the application is waiting for the data to be read, the task will be woken up when the datagram is received.\n    pub fn recv_datagram(&self, frame: DatagramFrame, data: bytes::Bytes) -> Result<(), Error> {\n        let mut guard = self.0.lock().unwrap();\n        let reader = guard.as_mut().map_err(|e| e.clone())?;\n        if (frame.encoding_size() + data.len()) > reader.local_max_size {\n            tracing::error!(\"   Cause by: DatagramIncoming::recv_datagram\");\n            return Err(QuicError::new(\n                ErrorKind::ProtocolViolation,\n                frame.frame_type().into(),\n                format!(\n                    \"datagram size {} exceeds the maximum size {}\",\n                    frame.encoding_size() + data.len(),\n                    reader.local_max_size\n                ),\n            )\n            .into());\n        }\n\n        reader.rcvd_datagrams.push_back(data);\n        if let Some(waker) = reader.read_waker.take() {\n            waker.wake();\n        }\n\n        Ok(())\n    }\n\n    /// When a connection error occurs, the error will be set to the reader.\n    ///\n    /// Any subsequent calls to [`DatagramIncoming::new_reader`], [`DatagramReader::poll_recv`], [`DatagramReader::read`]\n    /// and [`DatagramReader::read_buf`] will return an error.\n    ///\n    /// If there is a task waiting for the data to be read, the task will be woken up and return an error immediately.\n    ///\n    /// All the received datagrams will be discarded, and subsequent calls to [`DatagramIncoming::recv_datagram`] will be ignored.\n    pub fn on_conn_error(&self, error: &Error) {\n        let guard = &mut self.0.lock().unwrap();\n        if let 
Ok(reader) = guard.as_mut() {\n            if let Some(waker) = reader.read_waker.take() {\n                waker.wake();\n            }\n            **guard = Err(error.clone());\n        }\n    }\n}\n\n// The reader for the application to read the received [datagram frames].\n///\n/// [datagram frames]: https://www.rfc-editor.org/rfc/rfc9221.html\n#[derive(Debug, Clone)]\npub struct DatagramReader(Arc<Mutex<Result<RawDatagarmReader, Error>>>);\n\nimpl DatagramReader {\n    // Poll to receive a [datagram frame] from peer.\n    ///\n    /// This is the internal implementation of the [`DatagramReader::recv`] method.\n    ///\n    /// If the datagram is not ready, and the connection is active,\n    /// the method will return [`Poll::Pending`] and set the waker for waking up the task when the datagram is received.\n    ///\n    /// Note that only the waker set by the last call may be awakened\n    ///\n    /// While there has a datagram frame received but unread,\n    /// this method will return [`Poll::Ready`] with the received datagram frame as [`Ok`].\n    ///\n    /// If the connection is closing or already closed,\n    /// this method will return [`Poll::Ready`] with an error as [`Err`].\n    ///\n    /// [datagram frame]: https://www.rfc-editor.org/rfc/rfc9221.html\n    pub fn poll_recv(&self, cx: &mut Context<'_>) -> Poll<io::Result<Bytes>> {\n        let mut reader = self.0.lock().unwrap();\n        match reader.as_mut() {\n            Ok(reader) => match reader.rcvd_datagrams.pop_front() {\n                Some(bytes) => Poll::Ready(Ok(bytes)),\n                None => {\n                    reader.read_waker = Some(cx.waker().clone());\n                    Poll::Pending\n                }\n            },\n            Err(e) => Poll::Ready(Err(io::Error::from(e.clone()))),\n        }\n    }\n\n    /// Receive a [datagram frame] from peer.\n    ///\n    /// This method is asynchronous and returns a future that resolves to the received datagram.\n    ///\n    
/// ``` rust, ignore\n    /// pub async fn recv(&mut self) -> io::Result<Bytes>\n    /// ```\n    ///\n    /// The future will yield the received datagram as [`Ok`].\n    ///\n    /// If the connection is closing or already closed, the future will yield an error as [`Err`].\n    ///\n    /// The future is *Cancel Safe*.\n    ///\n    /// [datagram frame]: https://www.rfc-editor.org/rfc/rfc9221.html\n    pub fn recv(&mut self) -> RecvDatagram<'_> {\n        RecvDatagram { reader: self }\n    }\n\n    /// Reads the received [datagram frame] into a mutable slice.\n    ///\n    /// This method is asynchronous and returns a future that resolves to the number of bytes read.\n    ///\n    /// ``` rust, ignore\n    /// pub async fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>\n    /// ```\n    ///\n    /// The future will yield the number of bytes read from the received datagram as [`Ok`].\n    ///\n    /// If the buffer is not large enough to hold the received data, the received data will be truncated.\n    ///\n    /// If the connection is closing or already closed, the future will yield an error as [`Err`].\n    ///\n    /// [datagram frame]: https://www.rfc-editor.org/rfc/rfc9221.html\n    pub fn read<'b>(&'b mut self, buf: &'b mut [u8]) -> ReadIntoSlice<'b> {\n        ReadIntoSlice { reader: self, buf }\n    }\n\n    /// Reads the received [datagram frame] into a mutable reference to [`bytes::BufMut`].\n    ///\n    /// This method is asynchronous and returns a future that resolves to the number of bytes read.\n    ///\n    /// ``` rust, ignore\n    /// pub async fn read_buf<B: BufMut>(&mut self, buf: &mut B) -> io::Result<usize>\n    /// ```\n    ///\n    /// The future will yield the number of bytes read from the received datagram as [`Ok`].\n    ///\n    /// If the buffer is not large enough to hold the received data, the behavior is defined by the [`bytes::BufMut::put`] implementation.\n    ///\n    /// If the connection is closing or already closed, the future will yield an 
error as [`Err`].\n    ///\n    /// [datagram frame]: https://www.rfc-editor.org/rfc/rfc9221.html\n    pub fn read_buf<'b, B: BufMut>(&'b mut self, buf: &'b mut B) -> ReadIntoBuf<'b, B> {\n        ReadIntoBuf { reader: self, buf }\n    }\n}\n\n/// The [`Future`] created by [`DatagramReader::recv`], see [`DatagramReader::recv`] for more.\npub struct RecvDatagram<'a> {\n    reader: &'a mut DatagramReader,\n}\n\nimpl Future for RecvDatagram<'_> {\n    type Output = io::Result<Bytes>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.reader.poll_recv(cx)\n    }\n}\n\n/// The [`Future`] created by [`DatagramReader::read`], see [`DatagramReader::read`] for more.\npub struct ReadIntoSlice<'a> {\n    reader: &'a mut DatagramReader,\n    buf: &'a mut [u8],\n}\n\nimpl Future for ReadIntoSlice<'_> {\n    type Output = io::Result<usize>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        let s = self.get_mut();\n        let bytes = ready!(s.reader.poll_recv(cx)?);\n\n        let len = bytes.len().min(s.buf.len());\n        s.buf[..len].copy_from_slice(&bytes[..len]);\n        Poll::Ready(Ok(len))\n    }\n}\n\n/// The [`Future`] created by [`DatagramReader::read_buf`], see [`DatagramReader::read_buf`] for more.\npub struct ReadIntoBuf<'a, B> {\n    reader: &'a mut DatagramReader,\n    buf: &'a mut B,\n}\n\nimpl<B> Future for ReadIntoBuf<'_, B>\nwhere\n    B: BufMut,\n{\n    type Output = io::Result<usize>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        let s = self.get_mut();\n        let bytes = ready!(s.reader.poll_recv(cx)?);\n\n        let len = bytes.len();\n        s.buf.put(bytes);\n        Poll::Ready(Ok(len))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use qbase::{frame::FrameType, varint::VarInt};\n\n    use super::*;\n\n    #[tokio::test]\n    async fn test_datagram_reader_recv_buf() {\n        let incoming = 
DatagramIncoming::new(1024);\n\n        let recv = tokio::spawn({\n            let mut reader = incoming.new_reader().unwrap();\n            async move {\n                let n = reader.read(&mut [0u8; 1024]).await.unwrap();\n                assert_eq!(n, 11);\n            }\n        });\n\n        incoming\n            .recv_datagram(\n                DatagramFrame::new(false, VarInt::from_u32(11)),\n                Bytes::from_static(b\"hello world\"),\n            )\n            .unwrap();\n\n        recv.await.unwrap();\n    }\n\n    #[tokio::test]\n    async fn test_datagram_reader_on_conn_error() {\n        let incoming = DatagramIncoming::new(1024);\n        let error = QuicError::new(\n            ErrorKind::ProtocolViolation,\n            FrameType::Datagram(0).into(),\n            \"protocol violation\",\n        )\n        .into();\n        incoming.on_conn_error(&error);\n\n        let new_reader = incoming.new_reader();\n        assert!(new_reader.is_err());\n        assert_eq!(new_reader.unwrap_err().kind(), io::ErrorKind::BrokenPipe);\n    }\n}\n"
  },
  {
    "path": "qdatagram/src/writer.rs",
    "content": "use std::{\n    collections::VecDeque,\n    io,\n    ops::DerefMut,\n    sync::{Arc, Mutex},\n};\n\nuse bytes::{BufMut, Bytes};\nuse qbase::{\n    error::Error,\n    frame::{DatagramFrame, EncodeSize},\n    net::tx::{ArcSendWakers, Signals},\n    packet::Package,\n    varint::VarInt,\n};\n\n#[derive(Debug)]\nstruct RawDatagramWriter {\n    /// The queue that stores the datagrams to send.\n    datagrams: VecDeque<Bytes>,\n    tx_wakers: ArcSendWakers,\n}\n\nimpl RawDatagramWriter {\n    fn new(tx_wakers: ArcSendWakers) -> Self {\n        Self {\n            datagrams: VecDeque::new(),\n            tx_wakers,\n        }\n    }\n}\n\n/// The struct for the protocol layer to manage the outgoing side of the datagram flow.\n#[derive(Debug, Clone)]\npub struct DatagramOutgoing(Arc<Mutex<Result<RawDatagramWriter, Error>>>);\n\nimpl DatagramOutgoing {\n    pub fn new(tx_wakers: ArcSendWakers) -> DatagramOutgoing {\n        DatagramOutgoing(Arc::new(Mutex::new(Ok(RawDatagramWriter::new(tx_wakers)))))\n    }\n\n    /// Try to create a new instance of [`DatagramWriter`].\n    ///\n    /// This method takes the remote transport parameter `max_datagram_frame_size`.\n    ///\n    /// Returns an error if the connection is closing or already closed,\n    /// or if the datagram extension is disabled by the peer (`max_datagram_frame_size` is `0`).\n    pub fn new_writer(&self, max_datagram_frame_size: u64) -> io::Result<DatagramWriter> {\n        let mut guard = self.0.lock().unwrap();\n        let _writer = guard.as_mut().map_err(|e| e.clone())?;\n        if max_datagram_frame_size == 0 {\n            tracing::error!(\"   Caused by: DatagramOutgoing::new_writer\");\n            return Err(io::Error::new(\n                io::ErrorKind::Unsupported,\n                \"Unreliable Datagram Extension was disabled by the peer's parameters\",\n            ));\n        }\n        Ok(DatagramWriter {\n            writer: self.0.clone(),\n            max_datagram_frame_size: max_datagram_frame_size 
as _,\n        })\n    }\n\n    // Same logic as `try_load_data_into`; used only for testing.\n    #[cfg(test)]\n    fn try_read_datagram(&self, mut buf: &mut [u8]) -> Option<(DatagramFrame, usize)> {\n        use qbase::frame::io::WriteDataFrame;\n\n        let mut guard = self.0.lock().unwrap();\n        let Ok(writer) = guard.as_mut() else {\n            return None;\n        };\n        let datagram = writer.datagrams.front()?;\n        let available = buf.remaining_mut();\n\n        let max_encoding_size = available.saturating_sub(datagram.len());\n        if max_encoding_size == 0 {\n            return None;\n        }\n\n        let data = writer.datagrams.pop_front().expect(\"unreachable\");\n        let data_len = VarInt::try_from(data.len()).unwrap();\n        let frame_without_len = DatagramFrame::new(false, data_len);\n        let frame_with_len = DatagramFrame::new(true, data_len);\n        let frame = match max_encoding_size {\n            // Encode length\n            n if n >= frame_with_len.encoding_size() => {\n                buf.put_data_frame(&frame_with_len, &data);\n                frame_with_len\n            }\n            // Do not encode length, may need padding\n            n => {\n                buf.put_bytes(0, n - frame_without_len.encoding_size());\n                buf.put_data_frame(&frame_without_len, &data);\n                frame_without_len\n            }\n        };\n        Some((frame, available - buf.remaining_mut()))\n    }\n\n    /// Attempts to load a datagram frame into the packet.\n    ///\n    /// # Encoding\n    ///\n    /// [`DatagramFrame`] has two types:\n    /// - frame type `0x30`: The datagram frame without the data's length.\n    ///\n    /// The size of this form of frame is `1 byte` + `the size of the data`.\n    ///\n    /// - frame type `0x31`: The datagram frame with the data's length.\n    ///\n    /// The size of this form of frame is `1 byte` + `the size of the data's length` + `the size of the 
data`.\n    ///\n    /// The datagram won't be split into multiple frames. If the remaining space of the packet is not enough to encode the datagram frame,\n    /// the datagram will not be loaded.\n    ///\n    /// This method tries to encode the [`DatagramFrame`] with the data's length first (frame type `0x31`).\n    ///\n    /// If the remaining space of the packet is not enough to encode the length,\n    /// it will encode the [`DatagramFrame`] without the data's length (frame type `0x30`).\n    /// Because no frame can be put after the datagram frame without a length,\n    /// padding frames will be put before the datagram frame.\n    /// In this case, the packet will be filled.\n    pub fn try_load_data_into<P>(&self, packet: &mut P) -> Result<(), Signals>\n    where\n        P: BufMut + ?Sized,\n        (DatagramFrame, Bytes): Package<P>,\n    {\n        let mut guard = self.0.lock().unwrap();\n        let Ok(writer) = guard.as_mut() else {\n            return Err(Signals::empty()); // connection closed\n        };\n        let Some(datagram) = writer.datagrams.front() else {\n            return Err(Signals::TRANSPORT);\n        };\n\n        let available = packet.remaining_mut();\n\n        let max_encoding_size = available.saturating_sub(datagram.len());\n        if max_encoding_size == 0 {\n            return Err(Signals::CONGESTION);\n        }\n\n        let data = writer.datagrams.pop_front().expect(\"unreachable\");\n        let data_len = VarInt::try_from(data.len()).unwrap();\n        let frame_without_len = DatagramFrame::new(false, data_len);\n        let frame_with_len = DatagramFrame::new(true, data_len);\n        match max_encoding_size {\n            // Encode length\n            n if n >= frame_with_len.encoding_size() => {\n                (frame_with_len, data).dump(packet).unwrap();\n            }\n            // Do not encode length, may need padding\n            n => {\n                packet.put_bytes(0, n - frame_without_len.encoding_size());\n 
               (frame_without_len, data).dump(packet).unwrap();\n            }\n        }\n        Ok(())\n    }\n\n    /// When a connection error occurs, set the internal state to an error state.\n    ///\n    /// Any subsequent calls to [`DatagramWriter::send`] or [`DatagramWriter::send_bytes`] will return an error.\n    /// All datagrams in the internal queue will be dropped and not sent to the peer.\n    pub fn on_conn_error(&self, error: &Error) {\n        let writer = &mut self.0.lock().unwrap();\n        if writer.is_ok() {\n            **writer = Err(error.clone());\n        }\n    }\n}\n\n/// The writer for the application to send [datagram frames] to the peer.\n///\n/// You can clone the writer or wrap it in an [`Arc`] to send datagram frames from many tasks.\n///\n/// [datagram frames]: https://www.rfc-editor.org/rfc/rfc9221.html\n#[derive(Debug, Clone)]\npub struct DatagramWriter {\n    writer: Arc<Mutex<Result<RawDatagramWriter, Error>>>,\n    /// The maximum size of the datagram frame that can be sent to the peer.\n    ///\n    /// The value is set by the remote peer, and the protocol layer will use this value to limit the size of the datagram frame.\n    ///\n    /// If the size of the datagram frame exceeds this value, the protocol layer will return an error.\n    ///\n    /// See [RFC](https://www.rfc-editor.org/rfc/rfc9221.html#name-transport-parameter) for more details.\n    max_datagram_frame_size: usize,\n}\n\nimpl DatagramWriter {\n    /// Send unreliable data to the peer.\n    ///\n    /// The `data` will not be sent immediately, and the `data` sent is not guaranteed to be delivered.\n    ///\n    /// If the peer does not want to receive datagram frames, the method will return an error.\n    ///\n    /// The size of the datagram frame is limited by the `max_datagram_frame_size` transport parameter set by the peer.\n    /// See [RFC](https://www.rfc-editor.org/rfc/rfc9221.html#name-transport-parameter) for more details about 
transport\n    /// parameters.\n    ///\n    /// If the size of the `data` exceeds the limit, the method will return an error.\n    ///\n    /// You can call [`DatagramWriter::max_datagram_frame_size`] to know the maximum size of the datagram frame you can\n    /// send; read its documentation for more details.\n    ///\n    /// If the connection is closing or already closed, the method will also return an error.\n    pub fn send_bytes(&self, data: Bytes) -> io::Result<()> {\n        match self.writer.lock().unwrap().deref_mut() {\n            Ok(writer) => {\n                // Only consider the smallest encoding method: 1 byte\n                if (1 + data.len()) > self.max_datagram_frame_size {\n                    tracing::error!(\"   Caused by: DatagramWriter::send_bytes\");\n                    return Err(io::Error::new(\n                        io::ErrorKind::InvalidInput,\n                        format!(\n                            \"data size {} exceeds the limit {}\",\n                            data.len(),\n                            self.max_datagram_frame_size\n                        ),\n                    ));\n                }\n                writer.tx_wakers.wake_all_by(Signals::TRANSPORT);\n                writer.datagrams.push_back(data);\n                Ok(())\n            }\n            Err(e) => Err(io::Error::from(e.clone())),\n        }\n    }\n\n    /// Send unreliable data to the peer.\n    ///\n    /// The `data` will not be sent immediately, and the `data` sent is not guaranteed to be delivered.\n    ///\n    /// The size of the datagram frame is limited by the `max_datagram_frame_size` transport parameter set by the peer.\n    /// See [RFC](https://www.rfc-editor.org/rfc/rfc9221.html#name-transport-parameter) for more details about transport\n    /// parameters.\n    ///\n    /// If the size of the `data` exceeds the limit, the method will return an error.\n    ///\n    /// You can call 
[`DatagramWriter::max_datagram_frame_size`] to know the maximum size of the datagram frame you can\n    /// send; read its documentation for more details.\n    ///\n    /// If the connection is closing or already closed, the method will also return an error.\n    pub fn send(&self, data: &[u8]) -> io::Result<()> {\n        self.send_bytes(data.to_vec().into())\n    }\n\n    /// Returns the maximum size of the datagram frame that can be sent to the peer.\n    ///\n    /// If the connection is closing or already closed, the method will return an error.\n    ///\n    /// The value is a transport parameter set by the peer,\n    /// and you cannot send a datagram frame whose size exceeds this value.\n    ///\n    /// Because of the frame encoding, the size of the data you can send is less than this value, usually by at least 1 byte.\n    /// Although it is possible to send data of size `max_datagram_frame_size` - `1`, this rarely happens.\n    ///\n    /// We recommend sending unreliable data whose size is at most `max_datagram_frame_size` - `1` - `the size\n    /// of the data's length in varint form`. [varint] is defined in the QUIC RFC.\n    ///\n    /// A size of `0` means the peer does not want to receive datagram frames, but it does not mean the peer will not send datagram\n    /// frames to you.\n    ///\n    /// [varint]: https://www.rfc-editor.org/rfc/rfc9000.html#integer-encoding\n    pub fn max_datagram_frame_size(&self) -> io::Result<usize> {\n        match self.writer.lock().unwrap().deref_mut() {\n            Ok(..) 
=> Ok(self.max_datagram_frame_size),\n            Err(e) => Err(io::Error::from(e.clone())),\n        }\n    }\n}\n#[cfg(test)]\nmod tests {\n\n    use qbase::{\n        error::{ErrorKind, QuicError},\n        frame::{\n            FrameType, PaddingFrame,\n            io::{WriteDataFrame, WriteFrame},\n        },\n    };\n\n    use super::*;\n\n    #[test]\n    fn test_datagram_writer_with_length() {\n        let outgoing = DatagramOutgoing::new(Default::default());\n        let writer = outgoing.new_writer(1024).unwrap();\n\n        let data = Bytes::from_static(b\"hello world\");\n        writer.send_bytes(data.clone()).unwrap();\n\n        let mut buffer = [0; 1024];\n        let expected_frame = DatagramFrame::new(true, VarInt::try_from(data.len()).unwrap());\n        assert_eq!(\n            outgoing.try_read_datagram(&mut buffer),\n            Some((expected_frame, 1 + 1 + data.len()))\n        );\n\n        let mut expected_buffer = [0; 1024];\n        {\n            let mut expected_buffer = &mut expected_buffer[..];\n            expected_buffer.put_data_frame(&expected_frame, &data);\n        }\n        assert_eq!(buffer, expected_buffer);\n    }\n\n    #[test]\n    fn test_datagram_writer_without_length() {\n        let outgoing = DatagramOutgoing::new(Default::default());\n        let writer = outgoing.new_writer(1024).unwrap();\n\n        let data = Bytes::from_static(b\"hello world\");\n        writer.send_bytes(data.clone()).unwrap();\n\n        let mut buffer = [0; 1024];\n        assert_eq!(\n            outgoing.try_read_datagram(&mut buffer[0..12]),\n            Some((DatagramFrame::new(false, VarInt::from_u32(11)), 12))\n        );\n\n        let mut expected_buffer = [0; 1024];\n        {\n            let mut expected_buffer = &mut expected_buffer[..];\n            expected_buffer.put_data_frame(&DatagramFrame::new(false, VarInt::from_u32(12)), &data);\n        }\n        assert_eq!(buffer, expected_buffer);\n    }\n\n    #[test]\n    fn 
test_datagram_writer_unwritten() {\n        let outgoing = DatagramOutgoing::new(Default::default());\n        let writer = outgoing.new_writer(1024).unwrap();\n\n        let data = Bytes::from_static(b\"hello world\");\n        writer.send_bytes(data.clone()).unwrap();\n\n        let mut buffer = [0; 1024];\n        assert!(outgoing.try_read_datagram(&mut buffer[0..1]).is_none());\n\n        let expected_buffer = [0; 1024];\n        assert_eq!(buffer, expected_buffer);\n    }\n\n    #[test]\n    fn test_datagram_writer_padding_first() {\n        let outgoing = DatagramOutgoing::new(Default::default());\n        let writer = outgoing.new_writer(1024).unwrap();\n\n        // Will be encoded to 2 bytes\n        let data = Bytes::from_static(&[b'a'; 2usize.pow(8 - 2)]);\n        let data_len = VarInt::from_u32(data.len() as u32);\n        writer.send_bytes(data.clone()).unwrap();\n\n        let mut buffer = [0; 1024];\n        assert_eq!(\n            outgoing.try_read_datagram(&mut buffer[..data.len() + 2]),\n            Some((DatagramFrame::new(false, data_len), data.len() + 2))\n        );\n\n        let mut expected_buffer = [0; 1024];\n        {\n            let mut expected_buffer = &mut expected_buffer[..];\n            expected_buffer.put_frame(&PaddingFrame);\n            expected_buffer.put_data_frame(&DatagramFrame::new(false, data_len), &data);\n        }\n\n        assert_eq!(buffer, expected_buffer);\n    }\n\n    #[test]\n    fn test_datagram_writer_exceeds_limit() {\n        let outgoing = DatagramOutgoing::new(Default::default());\n        assert!(outgoing.new_writer(0).is_err());\n    }\n\n    #[test]\n    fn test_datagram_writer_on_conn_error() {\n        let outgoing = DatagramOutgoing::new(Default::default());\n        let writer = outgoing.new_writer(1024).unwrap();\n\n        outgoing.on_conn_error(\n            &QuicError::new(\n                ErrorKind::ProtocolViolation,\n                FrameType::Datagram(0).into(),\n                
\"test\",\n            )\n            .into(),\n        );\n        let writer_guard = writer.writer.lock().unwrap();\n        assert!(writer_guard.as_ref().is_err());\n    }\n}\n"
  },
  {
    "path": "qevent/Cargo.toml",
    "content": "[package]\nname = \"qevent\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"qlog implementation\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbytes = { workspace = true }\nenum_dispatch = { workspace = true }\nderive_builder = { workspace = true }\nderive_more = { workspace = true, features = [\"from\", \"into\", \"display\"] }\nserde = { workspace = true, features = [\"derive\"] }\npin-project-lite = { workspace = true }\nqbase = { workspace = true }\nserde_json = { workspace = true }\nserde_with = { workspace = true, features = [\"hex\"] }\ntokio = { workspace = true, features = [\n    \"fs\",\n    \"rt\",\n    \"sync\",\n    \"io-std\",\n    \"io-util\",\n] }\ntracing = { workspace = true }\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\"macros\", \"io-std\"] }\n\n[features]\ntelemetry = []\nraw_data = []\n"
  },
  {
    "path": "qevent/src/legacy/exporter.rs",
    "content": "use std::io;\n\nuse tokio::{\n    io::{AsyncWrite, AsyncWriteExt},\n    sync::mpsc,\n};\n\nuse super::QlogFileSeq;\nuse crate::{Event, telemetry::ExportEvent};\n\npub struct IoExpoter(mpsc::UnboundedSender<Event>);\n\nimpl IoExpoter {\n    pub fn new<O>(qlog_file_seq: QlogFileSeq, mut output: O) -> Self\n    where\n        O: AsyncWrite + Unpin + Send + 'static,\n    {\n        let (tx, mut rx) = mpsc::unbounded_channel();\n        tokio::spawn(async move {\n            let task = async {\n                const RS: u8 = 0x1E;\n\n                output.write_u8(RS).await?;\n                let qlog_file_seq = serde_json::to_string(&qlog_file_seq).unwrap();\n                output.write_all(qlog_file_seq.as_bytes()).await?;\n                output.write_u8(b'\\n').await?;\n\n                while let Some(event) = rx.recv().await {\n                    let event = match super::Event::try_from(event) {\n                        Ok(event) => serde_json::to_string(&event).unwrap(),\n                        Err(_unsupported) => continue,\n                    };\n                    output.write_u8(RS).await?;\n                    output.write_all(event.as_bytes()).await?;\n                    output.write_u8(b'\\n').await?;\n                }\n\n                io::Result::Ok(())\n            };\n            if let Err(error) = task.await {\n                tracing::error!(\n                    target: \"qlog\",\n                    ?error,\n                    ?qlog_file_seq,\n                    \"Failed to write qlog; subsequent qlogs in this exporter will be ignored.\"\n                );\n            }\n        });\n        Self(tx)\n    }\n}\n\nimpl ExportEvent for IoExpoter {\n    fn emit(&self, event: Event) {\n        _ = self.0.send(event);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use super::*;\n    use crate::{\n        legacy::TraceSeq,\n        quic::connectivity::ServerListening,\n        telemetry::{Instrument, 
Span},\n    };\n\n    #[tokio::test]\n    async fn io_exporter() {\n        let exporter = IoExpoter::new(\n            crate::build!(QlogFileSeq {\n                title: \"io exporter example\",\n                trace: TraceSeq {}\n            }),\n            tokio::io::stdout(),\n        );\n\n        let meaningless_field = 112233u64;\n        crate::span!(Arc::new(exporter), meaningless_field).in_scope(|| {\n            crate::event!(ServerListening {\n                ip_v4: \"127.0.0.1\".to_owned(),\n                port_v4: 443u16\n            });\n\n            tokio::spawn(\n                async move {\n                    assert_eq!(Span::current().load::<String>(\"path_id\"), \"new path\");\n                    assert_eq!(Span::current().load::<u64>(\"meaningless_field\"), 112233u64);\n                    // do something\n                }\n                .instrument(crate::span!(@current, path_id = String::from(\"new path\"))),\n            );\n        });\n\n        tokio::task::yield_now().await;\n    }\n}\n"
  },
  {
    "path": "qevent/src/legacy/quic.rs",
    "content": "use std::collections::HashMap;\n\nuse derive_builder::Builder;\nuse derive_more::{From, Into};\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\n\nuse crate::{HexString, RawInfo};\n\n#[serde_with::skip_serializing_none]\n#[derive(Default, Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct ConnectivityServerListening {\n    ip_v4: Option<IPAddress>,\n    ip_v6: Option<IPAddress>,\n    port_v4: Option<u16>,\n    port_v6: Option<u16>,\n\n    /// the server will always answer client initials with a retry\n    /// (no 1-RTT connection setups by choice)\n    retry_required: Option<bool>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectivityConnectionStarted {\n    #[builder(default)]\n    ip_version: Option<IPVersion>,\n    src_ip: IPAddress,\n    dst_ip: IPAddress,\n\n    /// transport layer protocol\n    #[builder(default = \"ConnectivityConnectionStarted::default_protocol()\")]\n    #[serde(default = \"ConnectivityConnectionStarted::default_protocol\")]\n    protocol: String,\n    #[builder(default)]\n    src_port: Option<u16>,\n    #[builder(default)]\n    dst_port: Option<u16>,\n\n    #[builder(default)]\n    src_cid: Option<ConnectionID>,\n    #[builder(default)]\n    dst_cid: Option<ConnectionID>,\n}\n\nimpl ConnectivityConnectionStarted {\n    pub fn default_protocol() -> String {\n        \"QUIC\".to_string()\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectivityConnectionClosed {\n    /// which side closed the connection\n    #[builder(default)]\n    owner: 
Option<Owner>,\n\n    #[builder(default)]\n    connection_code: Option<ConnectionCode>,\n    #[builder(default)]\n    application_code: Option<ApplicationCode>,\n    #[builder(default)]\n    internal_code: Option<u32>,\n\n    #[builder(default)]\n    reason: Option<String>,\n    #[builder(default)]\n    trigger: Option<ConnectivityConnectionClosedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, From, Serialize, Deserialize, PartialEq)]\n#[serde(untagged)]\npub enum ConnectionCode {\n    TransportError(TransportError),\n    CryptoError(CryptoError),\n    Value(u32),\n}\n\n#[derive(Debug, Clone, From, Serialize, Deserialize, PartialEq)]\n#[serde(untagged)]\npub enum ApplicationCode {\n    ApplicationError(ApplicationError),\n    Value(u32),\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum ConnectivityConnectionClosedTrigger {\n    Clean,\n    HandshakeTimeout,\n    IdleTimeout,\n    /// this is called the \"immediate close\" in the QUIC RFC\n    Error,\n    StatelessReset,\n    VersionMismatch,\n    /// for example HTTP/3's GOAWAY frame\n    Application,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectivityConnectionIdUpdated {\n    owner: Owner,\n\n    old: Option<ConnectionID>,\n    new: Option<ConnectionID>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectivitySpinBitUpdated {\n    state: bool,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectivityConnectionStateUpdated {\n    
#[builder(default)]\n    old: Option<ConnectionState>,\n    new: ConnectionState,\n}\n\n// SimpleConnectionState is a subset of this, so skip SimpleConnectionState\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum ConnectionState {\n    /// initial sent/received\n    Attempted,\n    /// peer address validated by: client sent Handshake packet OR\n    /// client used CONNID chosen by the server.\n    /// transport-draft-32, section-8.1\n    PeerValidated,\n    HandshakeStarted,\n    /// 1 RTT can be sent, but handshake isn't done yet\n    EarlyWrite,\n    /// TLS handshake complete: Finished received and sent\n    /// tls-draft-32, section-4.1.1\n    HandshakeComplete,\n    /// HANDSHAKE_DONE sent/received (connection is now \"active\", 1RTT\n    /// can be sent). tls-draft-32, section-4.1.2\n    HandshakeConfirmed,\n    Closing,\n    /// connection_close sent/received\n    Draining,\n    /// draining period done, connection state discarded\n    Closed,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct SecurityKeyUpdated {\n    key_type: KeyType,\n\n    old: Option<HexString>,\n    new: HexString,\n\n    /// needed for 1RTT key updates\n    generation: Option<u32>,\n\n    trigger: Option<SecurityKeyUpdatedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum SecurityKeyUpdatedTrigger {\n    /// (e.g., initial, handshake and 0-RTT keys\n    /// are generated by TLS)\n    Tls,\n    RemoteUpdate,\n    LocalUpdate,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct SecurityKeyRetired {\n    key_type: 
KeyType,\n    key: Option<HexString>,\n\n    /// needed for 1RTT key updates\n    generation: Option<u32>,\n\n    trigger: Option<SecurityKeyRetiredTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum SecurityKeyRetiredTrigger {\n    /// (e.g., initial, handshake and 0-RTT keys\n    /// are generated by TLS)\n    Tls,\n    RemoteUpdate,\n    LocalUpdate,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportVersionInformation {\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    server_versions: Vec<QuicVersion>,\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    client_versions: Vec<QuicVersion>,\n    chosen_version: Option<QuicVersion>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportALPNInformation {\n    server_alpns: Option<Vec<String>>,\n    client_alpns: Option<Vec<String>>,\n    chosen_alpn: Option<String>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportParametersSet {\n    owner: Option<Owner>,\n\n    /// true if valid session ticket was received\n    resumption_allowed: Option<bool>,\n\n    /// true if early data extension was enabled on the TLS layer\n    early_data_enabled: Option<bool>,\n\n    /// e.g., \"AES_128_GCM_SHA256\"\n    tls_cipher: Option<String>,\n\n    /// depends on the TLS cipher, but it's easier to be explicit.\n    /// in bytes\n    #[serde(default = 
\"TransportParametersSet::default_aead_key_length\")]\n    #[builder(default = \"TransportParametersSet::default_aead_key_length()\")]\n    aead_tag_length: u8,\n\n    /// transport parameters from the TLS layer:\n    original_destination_connection_id: Option<ConnectionID>,\n    initial_source_connection_id: Option<ConnectionID>,\n    retry_source_connection_id: Option<ConnectionID>,\n    stateless_reset_token: Option<Token>,\n    disable_active_migration: Option<bool>,\n\n    max_idle_timeout: Option<u64>,\n    max_udp_payload_size: Option<u32>,\n    ack_delay_exponent: Option<u16>,\n    max_ack_delay: Option<u16>,\n    active_connection_id_limit: Option<u32>,\n\n    initial_max_data: Option<u64>,\n    initial_max_stream_data_bidi_local: Option<u64>,\n    initial_max_stream_data_bidi_remote: Option<u64>,\n    initial_max_stream_data_uni: Option<u64>,\n    initial_max_streams_bidi: Option<u64>,\n    initial_max_streams_uni: Option<u64>,\n\n    preferred_address: Option<PreferredAddress>,\n}\n\nimpl TransportParametersSet {\n    pub fn default_aead_key_length() -> u8 {\n        16\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct PreferredAddress {\n    ip_v4: IPAddress,\n    ip_v6: IPAddress,\n\n    port_v4: u16,\n    port_v6: u16,\n\n    connection_id: ConnectionID,\n    stateless_reset_token: Token,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportParametersRestored {\n    disable_active_migration: Option<bool>,\n\n    max_idle_timeout: Option<u64>,\n    max_udp_payload_size: Option<u32>,\n    active_connection_id_limit: Option<u32>,\n\n    initial_max_data: Option<u64>,\n    initial_max_stream_data_bidi_local: 
Option<u64>,\n    initial_max_stream_data_bidi_remote: Option<u64>,\n    initial_max_stream_data_uni: Option<u64>,\n    initial_max_streams_bidi: Option<u64>,\n    initial_max_streams_uni: Option<u64>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportPacketSent {\n    header: PacketHeader,\n\n    /// see appendix for the QuicFrame definitions\n    frames: Option<Vec<QuicFrame>>,\n\n    #[serde(default)]\n    #[builder(default)]\n    is_coalesced: bool,\n\n    /// only if header.packet_type === \"retry\"\n    #[builder(default)]\n    retry_token: Option<Token>,\n\n    /// only if header.packet_type === \"stateless_reset\"\n    /// is always 128 bits in length.\n    #[builder(default)]\n    stateless_reset_token: Option<HexString>,\n\n    /// only if header.packet_type === \"version_negotiation\"\n    #[builder(default)]\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    supported_versions: Vec<QuicVersion>,\n\n    #[builder(default)]\n    raw: Option<RawInfo>,\n    #[builder(default)]\n    datagram_id: Option<u32>,\n\n    #[builder(default)]\n    trigger: Option<TransportPacketSentTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TransportPacketSentTrigger {\n    RetransmitReordered,\n    RetransmitTimeout,\n    PtoProbe,\n    RetransmitCrypto,\n    CcBandwidthProbe,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportPacketReceived {\n    header: PacketHeader,\n\n    /// see appendix for the definitions\n    #[builder(default)]\n    frames: Option<Vec<QuicFrame>>,\n\n    #[serde(default)]\n    #[builder(default)]\n    is_coalesced: bool,\n\n   
 /// only if header.packet_type === \"retry\"\n    #[builder(default)]\n    retry_token: Option<Token>,\n\n    /// only if header.packet_type === \"stateless_reset\"\n    /// is always 128 bits in length.\n    #[builder(default)]\n    stateless_reset_token: Option<HexString>,\n\n    /// only if header.packet_type === \"version_negotiation\"\n    #[builder(default)]\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    supported_versions: Vec<QuicVersion>,\n\n    #[builder(default)]\n    raw: Option<RawInfo>,\n    #[builder(default)]\n    datagram_id: Option<u32>,\n\n    #[builder(default)]\n    trigger: Option<TransportPacketReceivedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TransportPacketReceivedTrigger {\n    KeysAvailable,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportPacketDropped {\n    /// primarily packet_type should be filled here,\n    /// as other fields might not be parseable\n    header: Option<PacketHeader>,\n\n    raw: Option<RawInfo>,\n    datagram_id: Option<u32>,\n\n    trigger: Option<TransportPacketDroppedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TransportPacketDroppedTrigger {\n    KeyUnavailable,\n    UnknownConnectionId,\n    HeaderParseError,\n    PayloadDecryptError,\n    ProtocolViolation,\n    DosPrevention,\n    UnsupportedVersion,\n    UnexpectedPacket,\n    UnexpectedSourceConnectionId,\n    UnexpectedVersion,\n    Duplicate,\n    InvalidInitial,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    
build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportPacketBuffered {\n    /// primarily packet_type and possible packet_number should be\n    /// filled here as other elements might not be available yet\n    header: Option<PacketHeader>,\n\n    raw: Option<RawInfo>,\n    datagram_id: Option<u32>,\n\n    trigger: Option<TransportPacketBufferedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TransportPacketBufferedTrigger {\n    /// indicates the parser cannot keep up, temporarily buffers\n    /// packet for later processing\n    Backpressure,\n    /// if packet cannot be decrypted because the proper keys were\n    /// not yet available\n    KeysUnavailable,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportPacketsAcked {\n    packet_number_space: Option<PacketNumberSpace>,\n\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    packet_numbers: Vec<u64>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportDatagramsSent {\n    /// to support passing multiple at once\n    count: Option<u16>,\n\n    /// RawInfo:length field indicates total length of the datagrams\n    /// including UDP header length\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    raw: Vec<RawInfo>,\n\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    datagram_ids: Vec<u32>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, 
strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportDatagramsReceived {\n    /// to support passing multiple at once\n    count: Option<u16>,\n\n    /// RawInfo:length field indicates total length of the datagrams\n    /// including UDP header length\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    raw: Vec<RawInfo>,\n\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    datagram_ids: Vec<u32>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportDatagramDropped {\n    raw: Option<RawInfo>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportStreamStateUpdated {\n    stream_id: u64,\n\n    /// mainly useful when opening the stream\n    #[builder(default)]\n    stream_type: Option<StreamType>,\n\n    #[builder(default)]\n    old: Option<StreamState>,\n    new: StreamState,\n\n    #[builder(default)]\n    stream_side: Option<StreamSide>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum StreamState {\n    Idle,\n    Open,\n    // bidirectional stream states, RFC 9000 Section 3.4.\n    HalfClosedLocal,\n    HalfClosedRemote,\n    Closed,\n    // sending-side stream states, RFC 9000 Section 3.1.\n    Ready,\n    Send,\n    DataSent,\n    ResetSent,\n    ResetReceived,\n    // receive-side stream states, RFC 9000 Section 3.2.\n    Receive,\n    SizeKnown,\n    DataRead,\n    ResetRead,\n    // both-side states\n    DataReceived,\n    // qlog-defined: memory actually freed\n    Destroyed,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, 
PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum StreamType {\n    Unidirectional,\n    Bidirectional,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum StreamSide {\n    Sending,\n    Receiving,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TransportFramesProcessed {\n    /// see appendix for the QuicFrame definitions\n    frames: Vec<QuicFrame>,\n\n    #[builder(default)]\n    packet_number: Option<u64>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TransportDataMoved {\n    stream_id: Option<u64>,\n    offset: Option<u64>,\n\n    /// byte length of the moved data\n    length: Option<u64>,\n\n    from: Option<StreamDataLocation>,\n    to: Option<StreamDataLocation>,\n\n    /// raw bytes that were transferred\n    data: Option<HexString>,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq)]\npub enum StreamDataLocation {\n    User,\n    Application,\n    Transport,\n    Network,\n    Other(String),\n}\n\nimpl Serialize for StreamDataLocation {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::Serializer,\n    {\n        match self {\n            StreamDataLocation::User => serializer.serialize_str(\"user\"),\n            StreamDataLocation::Application => serializer.serialize_str(\"application\"),\n            StreamDataLocation::Transport => serializer.serialize_str(\"transport\"),\n            StreamDataLocation::Network => serializer.serialize_str(\"network\"),\n            StreamDataLocation::Other(s) => serializer.serialize_str(s),\n        }\n    }\n}\n\nimpl<'de> 
Deserialize<'de> for StreamDataLocation {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: serde::Deserializer<'de>,\n    {\n        match String::deserialize(deserializer)? {\n            s if s == \"user\" => Ok(StreamDataLocation::User),\n            s if s == \"application\" => Ok(StreamDataLocation::Application),\n            s if s == \"transport\" => Ok(StreamDataLocation::Transport),\n            s if s == \"network\" => Ok(StreamDataLocation::Network),\n            s => Ok(StreamDataLocation::Other(s)),\n        }\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct RecoveryParametersSet {\n    /// Loss detection, see recovery draft-23, Appendix A.2\n    /// in amount of packets\n    #[builder(default)]\n    reordering_threshold: Option<u16>,\n\n    /// as RTT multiplier\n    #[builder(default)]\n    time_threshold: Option<f32>,\n\n    /// in ms\n    timer_granularity: u16,\n\n    /// in ms\n    #[builder(default)]\n    initial_rtt: Option<f32>,\n\n    /// congestion control, Appendix B.1.\n    /// in bytes. 
Note: this could be updated after pmtud\n    #[builder(default)]\n    max_datagram_size: Option<u32>,\n\n    /// in bytes\n    #[builder(default)]\n    initial_congestion_window: Option<u64>,\n\n    /// Note: this could change when max_datagram_size changes\n    /// in bytes\n    #[builder(default)]\n    minimum_congestion_window: Option<u32>,\n    #[builder(default)]\n    loss_reduction_factor: Option<f32>,\n\n    /// as PTO multiplier\n    #[builder(default)]\n    persistent_congestion_threshold: Option<u16>,\n\n    /// Additionally, this event can contain any number of unspecified fields\n    /// to support different recovery approaches.\n    #[builder(default)]\n    #[serde(flatten, skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, serde_json::Value>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct RecoveryMetricsUpdated {\n    /// Loss detection, see recovery draft-23, Appendix A.3\n    /// all following rtt fields are expressed in ms\n    #[builder(default)]\n    min_rtt: Option<f32>,\n    #[builder(default)]\n    smoothed_rtt: Option<f32>,\n    #[builder(default)]\n    latest_rtt: Option<f32>,\n    #[builder(default)]\n    rtt_variance: Option<f32>,\n\n    #[builder(default)]\n    pto_count: Option<u16>,\n\n    /// Congestion control, Appendix B.2.\n    /// in bytes\n    #[builder(default)]\n    congestion_window: Option<u64>,\n    #[builder(default)]\n    bytes_in_flight: Option<u64>,\n\n    /// in bytes\n    #[builder(default)]\n    ssthresh: Option<u64>,\n\n    /// qlog defined\n    /// sum of all packet number spaces\n    #[builder(default)]\n    packets_in_flight: Option<u64>,\n\n    /// in bits per second\n    #[builder(default)]\n    pacing_rate: Option<u64>,\n\n    /// Additionally, this event can contain any number of unspecified fields\n    /// to support 
different recovery approaches.\n    #[builder(default)]\n    #[serde(flatten)]\n    #[serde(skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, serde_json::Value>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct RecoveryCongestionStateUpdated {\n    #[builder(default)]\n    old: Option<String>,\n    new: String,\n\n    #[builder(default)]\n    trigger: Option<RecoveryCongestionStateUpdatedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum RecoveryCongestionStateUpdatedTrigger {\n    PersistentCongestion,\n    Ecn,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct RecoveryLossTimerUpdated {\n    /// called \"mode\" in draft-23 A.9.\n    #[builder(default)]\n    timer_type: Option<LossTimerType>,\n    #[builder(default)]\n    packet_number_space: Option<PacketNumberSpace>,\n\n    event_type: LossTimerEventType,\n\n    /// if event_type === \"set\": delta time is in ms from\n    /// this event's timestamp until when the timer will trigger\n    #[builder(default)]\n    delta: Option<f32>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum LossTimerType {\n    Ack,\n    Pto,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum LossTimerEventType {\n    Set,\n    Expired,\n    Cancelled,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = 
\"fallible_build\")\n)]\npub struct RecoveryPacketLost {\n    /// should include at least the packet_type and packet_number\n    header: Option<PacketHeader>,\n\n    /// not all implementations will keep track of full\n    /// packets, so these are optional\n    /// see appendix for the QuicFrame definitions\n    frames: Option<Vec<QuicFrame>>,\n\n    trigger: Option<RecoveryPacketLostTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum RecoveryPacketLostTrigger {\n    ReorderingThreshold,\n    TimeThreshold,\n    /// draft-23 section 5.3.1, MAY\n    PtoExpired,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct RecoveryMarkedForRetransmit {\n    /// see appendix for the QuicFrame definitions\n    frames: Vec<QuicFrame>,\n}\n\n// A.1: skip\n\n// A.2\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct QuicVersion(HexString);\n\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct ConnectionID(HexString);\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum Owner {\n    Local,\n    Remote,\n}\n\n// A.5\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct IPAddress(String);\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum IPVersion {\n    V4,\n    V6,\n}\n\n// A.6\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketType {\n    Initial,\n    Retry,\n    Handshake,\n    #[serde(rename = \"0RTT\")]\n    ZeroRTT,\n    #[serde(rename = \"1RTT\")]\n    
OneRTT,\n    StatelessReset,\n    VersionNegotiation,\n    Unknown,\n}\n\n// A.7\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketNumberSpace {\n    Initial,\n    Handshake,\n    ApplicationData,\n}\n\n// A.8\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct PacketHeader {\n    packet_type: PacketType,\n    // In the draft https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-quic-events-02#name-packetheader, this field must be present.\n    // In practice, however, it is absent for Retry and Version Negotiation packets, and for packets dropped before the packet number is decoded.\n    // The updated draft makes this field optional, so we simply mark it as optional here as well.\n    #[builder(default)]\n    packet_number: Option<u64>,\n\n    /// the bit flags of the packet header (spin bit, key update bit,\n    /// etc.) 
up to and including the packet number length bits\n    /// if present\n    #[builder(default)]\n    flags: Option<u8>,\n\n    /// only if packet_type === \"initial\"\n    #[builder(default)]\n    token: Option<Token>,\n\n    /// only if packet_type === \"initial\" || \"handshake\" || \"0RTT\"\n    /// Signifies length of the packet_number plus the payload\n    #[builder(default)]\n    length: Option<u16>,\n\n    /// only if present in the header\n    /// if correctly using transport:connection_id_updated events,\n    /// dcid can be skipped for 1RTT packets\n    #[builder(default)]\n    version: Option<QuicVersion>,\n    #[builder(default)]\n    scil: Option<u8>,\n    #[builder(default)]\n    dcil: Option<u8>,\n    #[builder(default)]\n    scid: Option<ConnectionID>,\n    #[builder(default)]\n    dcid: Option<ConnectionID>,\n}\n\n// A.9\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct Token {\n    r#type: Option<TokenType>,\n\n    /// byte length of the token\n    length: Option<u32>,\n\n    /// raw byte value of the token\n    data: Option<HexString>,\n\n    /// decoded fields included in the token\n    /// (typically: peer's IP address, creation time)\n    #[builder(default)]\n    #[serde(default, skip_serializing_if = \"HashMap::is_empty\")]\n    details: HashMap<String, Value>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TokenType {\n    Retry,\n    Resumption,\n    StatelessReset,\n}\n\n// A.10\n#[allow(clippy::enum_variant_names)]\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum KeyType {\n    ServerInitialSecret,\n    ClientInitialSecret,\n    ServerHandshakeSecret,\n    ClientHandshakeSecret,\n    #[serde(rename = 
\"server_0rtt_secret\")]\n    Server0RTTSecret,\n    #[serde(rename = \"client_0rtt_secret\")]\n    Client0RTTSecret,\n    #[serde(rename = \"server_1rtt_secret\")]\n    Server1RTTSecret,\n    #[serde(rename = \"client_1rtt_secret\")]\n    Client1RTTSecret,\n}\n\n#[derive(Debug, Clone, Serialize, From, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ConnectionCloseTriggerFrameType {\n    Id(u64),\n    Text(String),\n}\n\n#[derive(Debug, Clone, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ConnectionCloseErrorCode {\n    TransportError(TransportError),\n    ApplicationError(ApplicationError),\n    Value(u64),\n}\n\n// A.11#\n#[serde_with::skip_serializing_none]\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[serde(tag = \"frame_type\")]\n#[serde(rename_all = \"snake_case\")]\npub enum QuicFrame {\n    Padding {\n        length: Option<u32>,\n        payload_length: u32,\n    },\n\n    Ping {\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n\n    Ack {\n        ack_delay: Option<f32>,\n        acked_ranges: Vec<[u64; 2]>,\n\n        ect1: Option<u64>,\n        ect0: Option<u64>,\n        ce: Option<u64>,\n\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n\n    ResetStream {\n        stream_id: u64,\n        error_code: ApplicationCode,\n        final_size: u64,\n\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n\n    StopSending {\n        stream_id: u64,\n        error_code: ApplicationCode,\n\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n\n    Crypto {\n        offset: u64,\n        length: u64,\n\n        payload_length: Option<u32>,\n    },\n\n    NewToken {\n        token: Token,\n    },\n\n    Stream {\n        stream_id: u64,\n        offset: u64,\n        length: u64,\n        #[serde(default)]\n        fin: bool,\n\n        raw: Option<RawInfo>,\n    },\n\n    MaxData {\n        maximum: 
u64,\n    },\n\n    MaxStreamData {\n        stream_id: u64,\n        maximum: u64,\n    },\n\n    MaxStreams {\n        stream_type: StreamType,\n        maximum: u64,\n    },\n\n    DataBlocked {\n        limit: u64,\n    },\n\n    StreamDataBlocked {\n        stream_id: u64,\n        limit: u64,\n    },\n\n    StreamsBlocked {\n        stream_type: StreamType,\n        limit: u64,\n    },\n\n    NewConnectionId {\n        sequence_number: u32,\n        retire_prior_to: u32,\n        connection_id_length: Option<u8>,\n        connection_id: ConnectionID,\n        stateless_reset_token: Option<Token>,\n    },\n\n    RetireConnectionId {\n        sequence_number: u32,\n    },\n\n    PathChallenge {\n        data: Option<HexString>,\n    },\n\n    PathResponse {\n        data: Option<HexString>,\n    },\n\n    ConnectionClose {\n        error_space: Option<ConnectionCloseErrorSpace>,\n        error_code: Option<ConnectionCloseErrorCode>,\n        raw_error_code: Option<u32>,\n        reason: Option<String>,\n\n        trigger_frame_type: Option<ConnectionCloseTriggerFrameType>,\n    },\n\n    HandshakeDone {},\n\n    Unknown {\n        raw_frame_type: u64,\n        raw_length: Option<u32>,\n        raw: Option<HexString>,\n    },\n    // not in v1\n    Datagram {\n        length: Option<u64>,\n        raw: Option<RawInfo>,\n    },\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]\npub enum ConnectionCloseErrorSpace {\n    Transport,\n    Application,\n}\n\n// A.11.22\n#[derive(Debug, Clone, Copy, From, Serialize, Deserialize, PartialEq, Eq)]\npub enum TransportError {\n    NoError,\n    InternalError,\n    ConnectionRefused,\n    FlowControlError,\n    StreamLimitError,\n    StreamStateError,\n    FinalSizeError,\n    FrameEncodingError,\n    TransportParameterError,\n    ConnectionIdLimitError,\n    ProtocolViolation,\n    InvalidToken,\n    ApplicationError,\n    CryptoBufferExceeded,\n    // not in v1\n    KeyUpdateError,\n    
AeadLimitReached,\n    NoViablePath,\n}\n\n// A.11.23\n#[derive(Debug, Clone, From, Serialize, Deserialize, PartialEq, Eq)]\npub struct ApplicationError(String);\n\n// A.11.24\n#[derive(Debug, Clone, Copy, From, PartialEq)]\npub struct CryptoError(u8);\n\nimpl Serialize for CryptoError {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::Serializer,\n    {\n        serializer.serialize_str(&format!(\"crypto_error_0x1{:02x}\", self.0))\n    }\n}\n\nimpl<'de> Deserialize<'de> for CryptoError {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: serde::Deserializer<'de>,\n    {\n        let string = String::deserialize(deserializer)?;\n        string.strip_prefix(\"crypto_error_0x1\").map_or_else(\n            || Err(serde::de::Error::custom(\"invalid crypto error\")),\n            |s| {\n                u8::from_str_radix(s, 16)\n                    .map(CryptoError)\n                    .map_err(serde::de::Error::custom)\n            },\n        )\n    }\n}\n\ncrate::gen_builder_method! 
{\n    ConnectivityServerListeningBuilder        => ConnectivityServerListening;\n    ConnectivityConnectionStartedBuilder      => ConnectivityConnectionStarted;\n    ConnectivityConnectionClosedBuilder       => ConnectivityConnectionClosed;\n    ConnectivityConnectionIdUpdatedBuilder    => ConnectivityConnectionIdUpdated;\n    ConnectivitySpinBitUpdatedBuilder         => ConnectivitySpinBitUpdated;\n    ConnectivityConnectionStateUpdatedBuilder => ConnectivityConnectionStateUpdated;\n    SecurityKeyUpdatedBuilder                 => SecurityKeyUpdated;\n    SecurityKeyRetiredBuilder                 => SecurityKeyRetired;\n    TransportVersionInformationBuilder        => TransportVersionInformation;\n    TransportALPNInformationBuilder           => TransportALPNInformation;\n    TransportParametersSetBuilder             => TransportParametersSet;\n    PreferredAddressBuilder                   => PreferredAddress;\n    TransportParametersRestoredBuilder        => TransportParametersRestored;\n    TransportPacketSentBuilder                => TransportPacketSent;\n    TransportPacketReceivedBuilder            => TransportPacketReceived;\n    TransportPacketDroppedBuilder             => TransportPacketDropped;\n    TransportPacketBufferedBuilder            => TransportPacketBuffered;\n    TransportPacketsAckedBuilder              => TransportPacketsAcked;\n    TransportDatagramsSentBuilder             => TransportDatagramsSent;\n    TransportDatagramsReceivedBuilder         => TransportDatagramsReceived;\n    TransportDatagramDroppedBuilder           => TransportDatagramDropped;\n    TransportStreamStateUpdatedBuilder        => TransportStreamStateUpdated;\n    TransportFramesProcessedBuilder           => TransportFramesProcessed;\n    TransportDataMovedBuilder                 => TransportDataMoved;\n    RecoveryParametersSetBuilder              => RecoveryParametersSet;\n    RecoveryMetricsUpdatedBuilder             => RecoveryMetricsUpdated;\n    
RecoveryCongestionStateUpdatedBuilder     => RecoveryCongestionStateUpdated;\n    RecoveryLossTimerUpdatedBuilder           => RecoveryLossTimerUpdated;\n    RecoveryPacketLostBuilder                 => RecoveryPacketLost;\n    RecoveryMarkedForRetransmitBuilder        => RecoveryMarkedForRetransmit;\n    PacketHeaderBuilder                       => PacketHeader;\n    TokenBuilder                              => Token;\n}\n"
  },
  {
    "path": "qevent/src/legacy.rs",
    "content": "pub mod exporter;\npub mod quic;\n\nuse std::collections::HashMap;\n\nuse derive_builder::Builder;\nuse derive_more::{From, Into};\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\n\nuse crate::{GroupID, VantagePoint};\n\npub const QLOG_VERSION: &str = \"0.3\";\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct QlogFile {\n    qlog_version: String,\n    #[builder(default = \"QlogFile::default_format()\")]\n    #[serde(default = \"QlogFile::default_format\")]\n    qlog_format: String,\n    title: Option<String>,\n    description: Option<String>,\n    #[builder(default)]\n    #[serde(default, skip_serializing_if = \"HashMap::is_empty\")]\n    summary: HashMap<String, Value>,\n    #[builder(default)]\n    #[serde(default, skip_serializing_if = \"Vec::is_empty\")]\n    traces: Vec<Traces>,\n}\n\nimpl QlogFile {\n    pub fn default_format() -> String {\n        \"JSON\".to_string()\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct QlogFileSeq {\n    #[builder(default = \"QlogFileSeq::default_qlog_version()\")]\n    #[serde(default = \"QlogFileSeq::default_qlog_version\")]\n    qlog_version: String,\n    #[builder(default = \"QlogFileSeq::default_format()\")]\n    #[serde(default = \"QlogFileSeq::default_format\")]\n    qlog_format: String,\n    #[builder(default)]\n    title: Option<String>,\n    #[builder(default)]\n    description: Option<String>,\n    #[builder(default)]\n    #[serde(default, skip_serializing_if = \"HashMap::is_empty\")]\n    summary: HashMap<String, Value>,\n    trace: TraceSeq,\n}\n\nimpl QlogFileSeq {\n    pub fn default_qlog_version() -> String {\n        QLOG_VERSION.to_string()\n    }\n\n    
pub fn default_format() -> String {\n        \"JSON-SEQ\".to_string()\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, From, Into, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Summary {\n    #[builder(default)]\n    #[serde(default, skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, Value>,\n}\n\n#[allow(clippy::large_enum_variant)]\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[serde(untagged)]\npub enum Traces {\n    TraceError(TraceError),\n    Trace(Trace),\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TraceError {\n    error_description: String,\n    /// the original URI at which we attempted to find the file\n    uri: Option<String>,\n    vantage_point: Option<VantagePoint>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Trace {\n    title: Option<String>,\n    description: Option<String>,\n    configuration: Option<Configuration>,\n    common_fields: Option<CommonFields>,\n    vantage_point: Option<VantagePoint>,\n    events: Vec<Event>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Default, Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TraceSeq {\n    title: Option<String>,\n    description: Option<String>,\n    configuration: Option<Configuration>,\n    common_fields: Option<CommonFields>,\n    vantage_point: Option<VantagePoint>,\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, 
strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Configuration {\n    /// time_offset is in milliseconds\n    time_offset: f64,\n    original_uris: Vec<String>,\n    #[builder(default)]\n    #[serde(flatten, default, skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, Value>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Event {\n    time: f64,\n    #[serde(flatten)]\n    data: EventData,\n\n    #[builder(default)]\n    time_format: Option<TimeFormat>,\n\n    #[builder(default)]\n    protocol_type: Option<ProtocolType>,\n    #[builder(default)]\n    group_id: Option<GroupID>,\n\n    /// events can contain any amount of custom fields\n    #[builder(default)]\n    #[serde(flatten, default, skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, Value>,\n}\n\n#[derive(Debug, Clone, Copy, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TimeFormat {\n    Relative,\n    Delta,\n    Absolute,\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[serde(tag = \"name\", content = \"data\")]\n#[serde(rename_all = \"snake_case\")]\npub enum EventData {\n    // Connectivity\n    #[serde(rename = \"connectivity:server_listening\")]\n    ServerListening(quic::ConnectivityServerListening),\n\n    #[serde(rename = \"connectivity:connection_started\")]\n    ConnectionStarted(quic::ConnectivityConnectionStarted),\n\n    #[serde(rename = \"connectivity:connection_closed\")]\n    ConnectionClosed(quic::ConnectivityConnectionClosed),\n\n    #[serde(rename = \"connectivity:connection_id_updated\")]\n    ConnectionIdUpdated(quic::ConnectivityConnectionIdUpdated),\n\n    #[serde(rename = \"connectivity:spin_bit_updated\")]\n    SpinBitUpdated(quic::ConnectivitySpinBitUpdated),\n\n    
#[serde(rename = \"connectivity:connection_state_updated\")]\n    ConnectionStateUpdated(quic::ConnectivityConnectionStateUpdated),\n\n    // Security\n    #[serde(rename = \"security:key_updated\")]\n    KeyUpdated(quic::SecurityKeyUpdated),\n\n    #[serde(rename = \"security:key_retired\")]\n    KeyDiscarded(quic::SecurityKeyRetired),\n\n    // Transport\n    #[serde(rename = \"transport:version_information\")]\n    VersionInformation(quic::TransportVersionInformation),\n\n    #[serde(rename = \"transport:alpn_information\")]\n    AlpnInformation(quic::TransportALPNInformation),\n\n    #[serde(rename = \"transport:parameters_set\")]\n    TransportParametersSet(quic::TransportParametersSet),\n\n    #[serde(rename = \"transport:parameters_restored\")]\n    TransportParametersRestored(quic::TransportParametersRestored),\n\n    #[serde(rename = \"transport:datagrams_received\")]\n    DatagramsReceived(quic::TransportDatagramsReceived),\n\n    #[serde(rename = \"transport:datagrams_sent\")]\n    DatagramsSent(quic::TransportDatagramsSent),\n\n    #[serde(rename = \"transport:datagram_dropped\")]\n    DatagramDropped(quic::TransportDatagramDropped),\n\n    #[serde(rename = \"transport:packet_received\")]\n    PacketReceived(quic::TransportPacketReceived),\n\n    #[serde(rename = \"transport:packet_sent\")]\n    PacketSent(quic::TransportPacketSent),\n\n    #[serde(rename = \"transport:packet_dropped\")]\n    PacketDropped(quic::TransportPacketDropped),\n\n    #[serde(rename = \"transport:packet_buffered\")]\n    PacketBuffered(quic::TransportPacketBuffered),\n\n    #[serde(rename = \"transport:packets_acked\")]\n    PacketsAcked(quic::TransportPacketsAcked),\n\n    #[serde(rename = \"transport:stream_state_updated\")]\n    StreamStateUpdated(quic::TransportStreamStateUpdated),\n\n    #[serde(rename = \"transport:frames_processed\")]\n    FramesProcessed(quic::TransportFramesProcessed),\n\n    #[serde(rename = \"transport:data_moved\")]\n    
DataMoved(quic::TransportDataMoved),\n\n    // Recovery\n    #[serde(rename = \"recovery:parameters_set\")]\n    RecoveryParametersSet(quic::RecoveryParametersSet),\n\n    #[serde(rename = \"recovery:metrics_updated\")]\n    MetricsUpdated(quic::RecoveryMetricsUpdated),\n\n    #[serde(rename = \"recovery:congestion_state_updated\")]\n    CongestionStateUpdated(quic::RecoveryCongestionStateUpdated),\n\n    #[serde(rename = \"recovery:loss_timer_updated\")]\n    LossTimerUpdated(quic::RecoveryLossTimerUpdated),\n\n    #[serde(rename = \"recovery:packet_lost\")]\n    PacketLost(quic::RecoveryPacketLost),\n\n    #[serde(rename = \"recovery:marked_for_retransmit\")]\n    MarkedForRetransmit(quic::RecoveryMarkedForRetransmit),\n\n    #[serde(rename = \"generic:error\")]\n    GenericError(GenericError),\n\n    #[serde(rename = \"generic:warning\")]\n    GenericWarning(GenericWarning),\n\n    #[serde(rename = \"generic:info\")]\n    GenericInfo(GenericInfo),\n\n    #[serde(rename = \"generic:debug\")]\n    GenericDebug(GenericDebug),\n\n    #[serde(rename = \"generic:verbose\")]\n    GenericVerbose(GenericVerbose),\n\n    #[serde(rename = \"simulation:scenario\")]\n    SimulationScenario(SimulationScenario),\n\n    #[serde(rename = \"simulation:marker\")]\n    SimulationMarker(SimulationMarker),\n}\n\n#[derive(Default, Debug, Clone, From, Into, Serialize, Deserialize, PartialEq)]\n#[serde(transparent)]\npub struct ProtocolType(Vec<String>);\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct CommonFields {\n    time_format: Option<TimeFormat>,\n    reference_time: Option<f64>,\n\n    protocol_type: Option<ProtocolType>,\n    group_id: Option<GroupID>,\n\n    custom_fields: HashMap<String, Value>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, 
PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct GenericError {\n    code: Option<u64>,\n    message: Option<String>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct GenericWarning {\n    code: Option<u64>,\n    message: Option<String>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct GenericInfo {\n    message: String,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct GenericDebug {\n    message: String,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct GenericVerbose {\n    message: String,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct SimulationScenario {\n    name: Option<String>,\n    #[builder(default)]\n    #[serde(default, skip_serializing_if = \"HashMap::is_empty\")]\n    details: HashMap<String, Value>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct SimulationMarker {\n    r#type: Option<String>,\n    message: Option<String>,\n}\n\ncrate::gen_builder_method! 
{\n    QlogFileBuilder => QlogFile;\n    QlogFileSeqBuilder => QlogFileSeq;\n    SummaryBuilder => Summary;\n    TraceErrorBuilder => TraceError;\n    TraceBuilder => Trace;\n    TraceSeqBuilder => TraceSeq;\n    ConfigurationBuilder => Configuration;\n    EventBuilder => Event;\n    GenericErrorBuilder => GenericError;\n    GenericWarningBuilder => GenericWarning;\n    GenericInfoBuilder => GenericInfo;\n    GenericDebugBuilder => GenericDebug;\n    GenericVerboseBuilder => GenericVerbose;\n    SimulationScenarioBuilder => SimulationScenario;\n    SimulationMarkerBuilder => SimulationMarker;\n}\n"
  },
  {
    "path": "qevent/src/lib.rs",
    "content": "pub mod legacy;\npub mod loglevel;\npub mod quic;\npub mod telemetry;\n\n#[doc(hidden)]\npub mod macro_support;\nmod macros;\npub mod packet;\n\nuse std::{collections::HashMap, fmt::Display, net::SocketAddr};\n\nuse bytes::Bytes;\nuse derive_builder::Builder;\nuse derive_more::{Display, From, Into};\nuse qbase::{cid::ConnectionId, role::Role, util::ContinuousData};\nuse quic::ConnectionID;\nuse serde::{Deserialize, Serialize};\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct LogFile {\n    file_schema: String,\n    serialization_format: String,\n    #[builder(default)]\n    title: Option<String>,\n    #[builder(default)]\n    description: Option<String>,\n    #[builder(default)]\n    event_schemas: Vec<String>,\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into), build_fn(private, name = \"fallible_build\"))]\npub struct QlogFile {\n    #[serde(flatten)]\n    log_file: LogFile,\n    traces: Vec<Traces>,\n}\n\n/// A qlog file using the QlogFileSeq schema can be serialized to a\n/// streamable JSON format called JSON Text Sequences (JSON-SEQ)\n/// ([RFC7464]). 
The top-level element in this schema defines only a\n/// small set of \"header\" fields and an array of component traces.\n///\n/// [RFC7464]: https://www.rfc-editor.org/rfc/rfc7464\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into), build_fn(private, name = \"fallible_build\"))]\npub struct QlogFileSeq {\n    #[serde(flatten)]\n    log_file: LogFile,\n    trace_seq: TraceSeq,\n}\n\nimpl QlogFileSeq {\n    pub const SCHEMA: &'static str = \"urn:ietf:params:qlog:file:sequential\";\n}\n\n#[allow(clippy::large_enum_variant)]\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[serde(untagged)]\npub enum Traces {\n    Trace(Trace),\n    TraceError(TraceError),\n}\n\n///  The exact conceptual definition of a Trace can be fluid.  For\n/// example, a trace could contain all events for a single connection,\n/// for a single endpoint, for a single measurement interval, for a\n/// single protocol, etc.  In the normal use case however, a trace is a\n/// log of a single data flow collected at a single location or vantage\n/// point.  For example, for QUIC, a single trace only contains events\n/// for a single logical QUIC connection for either the client or the\n/// server.\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Trace {\n    /// The optional \"title\" fields provide additional free-text information about the trace.\n    #[builder(default)]\n    title: Option<String>,\n    /// The optional \"description\" fields provide additional free-text information about the trace.\n    #[builder(default)]\n    description: Option<String>,\n    #[builder(default)]\n    common_fields: Option<CommonFields>,\n    #[builder(default)]\n    vantage_point: Option<VantagePoint>,\n    events: Vec<Event>,\n}\n\n/// TraceSeq is used with QlogFileSeq. 
It is conceptually similar to a\n/// Trace, with the exception that qlog events are not contained within\n/// it, but rather appended after it in a QlogFileSeq.\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct TraceSeq {\n    /// The optional \"title\" fields provide additional free-text information about the trace.\n    title: Option<String>,\n    /// The optional \"description\" fields provide additional free-text information about the trace.\n    description: Option<String>,\n    common_fields: Option<CommonFields>,\n    vantage_point: Option<VantagePoint>,\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct CommonFields {\n    path: PathID,\n    time_format: TimeFormat,\n    reference_time: ReferenceTime,\n    protocol_types: ProtocolTypeList,\n    group_id: GroupID,\n    #[builder(default)]\n    #[serde(flatten)]\n    #[serde(skip_serializing_if = \"HashMap::is_empty\")]\n    //  * text => any\n    custom_fields: HashMap<String, serde_json::Value>,\n}\n\n/// A VantagePoint describes the vantage point from which a trace originates\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct VantagePoint {\n    #[builder(default)]\n    name: Option<String>,\n    r#type: VantagePointType,\n    #[builder(default)]\n    flow: Option<VantagePointType>,\n}\n\n#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum VantagePointType {\n    /// endpoint which initiates the connection\n    Client,\n    /// endpoint which accepts the connection\n  
  Server,\n    /// observer in between client and server\n    Network,\n    #[default]\n    Unknown,\n}\n\nimpl From<Role> for VantagePointType {\n    fn from(role: Role) -> Self {\n        match role {\n            Role::Client => VantagePointType::Client,\n            Role::Server => VantagePointType::Server,\n        }\n    }\n}\n\nimpl Display for VantagePointType {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            VantagePointType::Client => write!(f, \"client\"),\n            VantagePointType::Server => write!(f, \"server\"),\n            VantagePointType::Network => write!(f, \"network\"),\n            VantagePointType::Unknown => write!(f, \"unknown\"),\n        }\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct TraceError {\n    error_description: String,\n    #[builder(default)]\n    uri: Option<String>,\n    #[builder(default)]\n    vantage_point: Option<VantagePoint>,\n}\n\n/// Events are logged at a time instant and convey specific details of the logging use case.\n///\n/// Events can contain any amount of custom fields.\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Event {\n    time: f64,\n    #[serde(flatten)]\n    data: EventData,\n    /// A qlog event can be associated with a single \"network path\" (usually, but not always, identified by a 4-tuple\n    /// of IP addresses and ports). In many cases, the path will be the same for all events in a given trace, and does\n    /// not need to be logged explicitly with each event. 
In this case, the \"path\" field can be omitted (in which case\n    /// the default value of \"\" is assumed) or reflected in \"common_fields\" instead\n    #[builder(default)]\n    path: Option<PathID>,\n    #[builder(default)]\n    time_format: Option<TimeFormat>,\n    #[builder(default)]\n    protocol_types: Option<ProtocolTypeList>,\n    #[builder(default)]\n    group_id: Option<GroupID>,\n    #[builder(default)]\n    system_info: Option<SystemInformation>,\n    /// events can contain any amount of custom fields\n    #[builder(default)]\n    #[serde(flatten)]\n    #[serde(skip_serializing_if = \"HashMap::is_empty\")]\n    // * text => any\n    custom_fields: HashMap<String, serde_json::Value>,\n}\n\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct PathID(String);\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\n#[serde(try_from = \"UncheckedReferenceTime\")]\npub struct ReferenceTime {\n    /// The required \"clock_type\" field represents the type of clock used for time measurements. The value \"system\"\n    /// represents a clock that uses system time, commonly measured against a chosen or well-known epoch. However,\n    /// depending on the system, System time can potentially jump forward or back. In contrast, a clock using monotonic\n    /// time is generally guaranteed to never go backwards. The value \"monotonic\" represents such a clock.\n    clock_type: TimeClockType,\n    /// The required \"epoch\" field is the start of the ReferenceTime. When using the \"system\" clock type, the epoch field\n    /// **SHOULD** have a date/time value using the format defined in [RFC3339]. 
However, the value \"unknown\" **MAY** be\n    /// used.\n    ///\n    /// [RFC3339]: https://www.rfc-editor.org/rfc/rfc3339\n    #[serde(default)]\n    epoch: TimeEpoch,\n    /// The optional \"wall_clock_time\" field can be used to provide an approximate date/time value that logging commenced\n    /// at if the epoch value is \"unknown\". It uses the format defined in [RFC3339]. Note that conversion of timestamps\n    /// to calendar time based on wall clock times cannot be safely relied on.\n    ///\n    /// [RFC3339]: https://www.rfc-editor.org/rfc/rfc3339\n    #[builder(default)]\n    wall_clock_time: Option<RFC3339DateTime>,\n}\n\n/// Intermediate data type used during deserialization\n#[derive(Deserialize)]\nstruct UncheckedReferenceTime {\n    clock_type: TimeClockType,\n    #[serde(default)]\n    epoch: TimeEpoch,\n    wall_clock_time: Option<RFC3339DateTime>,\n}\n\nimpl TryFrom<UncheckedReferenceTime> for ReferenceTime {\n    type Error = &'static str;\n    fn try_from(value: UncheckedReferenceTime) -> Result<Self, Self::Error> {\n        if value.clock_type == TimeClockType::Monotonic && value.epoch != TimeEpoch::Unknown {\n            return Err(\n                r#\"When using the \"monotonic\" clock type, the epoch field MUST have the value \"unknown\".\"#,\n            );\n        }\n\n        Ok(ReferenceTime {\n            clock_type: value.clock_type,\n            epoch: value.epoch,\n            wall_clock_time: value.wall_clock_time,\n        })\n    }\n}\n\n#[derive(Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TimeClockType {\n    /// The value \"system\" represents a clock that uses system time, commonly measured against a chosen or well-known\n    /// epoch\n    #[default]\n    System,\n    /// A clock using monotonic time is generally guaranteed to never go backwards. 
The value \"monotonic\" represents\n    /// such a clock.\n    ///\n    /// When using the \"monotonic\" clock type, the epoch field MUST have the value \"unknown\".\n    Monotonic,\n    #[serde(untagged)]\n    Custom(String),\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TimeEpoch {\n    Unknown,\n    #[serde(untagged)]\n    RFC3339DateTime(RFC3339DateTime),\n}\n\nimpl Default for TimeEpoch {\n    fn default() -> Self {\n        Self::RFC3339DateTime(Default::default())\n    }\n}\n\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct RFC3339DateTime(String);\n\nimpl Default for RFC3339DateTime {\n    fn default() -> Self {\n        Self(\"1970-01-01T00:00:00.000Z\".to_owned())\n    }\n}\n\n#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TimeFormat {\n    /// A duration relative to the ReferenceTime \"epoch\" field. This approach uses the largest amount of characters.\n    /// It is good for stateless loggers. This is the default value of the \"time_format\" field.\n    #[default]\n    RelativeToEpoch,\n    /// A delta-encoded value, based on the previously logged value. The first event in a trace is always relative to\n    /// the ReferenceTime. This approach uses the least amount of characters. 
It is suitable for stateful loggers.\n    RelativeToPreviousEvent,\n}\n\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct ProtocolTypeList(Vec<ProtocolType>);\n\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct ProtocolType(String);\n\nimpl ProtocolType {\n    pub fn quic() -> ProtocolType {\n        ProtocolType(\"QUIC\".to_owned())\n    }\n\n    pub fn http3() -> ProtocolType {\n        ProtocolType(\"HTTP/3\".to_owned())\n    }\n}\n\n#[derive(Debug, Display, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct GroupID(String);\n\nimpl From<ConnectionId> for GroupID {\n    fn from(value: ConnectionId) -> Self {\n        Self(format!(\"{value:x}\"))\n    }\n}\n\nimpl From<ConnectionID> for GroupID {\n    fn from(value: ConnectionID) -> Self {\n        Self(format!(\"{value:x}\"))\n    }\n}\n\nimpl From<(SocketAddr, SocketAddr)> for GroupID {\n    fn from(_value: (SocketAddr, SocketAddr)) -> Self {\n        todo!()\n    }\n}\n\n/// The \"system_info\" field can be used to record system-specific details related to an event. This is useful, for instance,\n/// where an application splits work across CPUs, processes, or threads and events for a single trace occur on potentially\n/// different combinations thereof. 
Each field is optional to support deployment diversity.\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct SystemInformation {\n    processor_id: Option<u32>,\n    process_id: Option<u32>,\n    thread_id: Option<u32>,\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum EventImportance {\n    Core = 1,\n    Base = 2,\n    Extra = 3,\n}\n\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[serde(tag = \"name\", content = \"data\")]\n#[enum_dispatch::enum_dispatch(BeEventData)]\npub enum EventData {\n    #[serde(rename = \"quic:server_listening\")]\n    ServerListening(quic::connectivity::ServerListening),\n    #[serde(rename = \"quic:connection_started\")]\n    ConnectionStarted(quic::connectivity::ConnectionStarted),\n    #[serde(rename = \"quic:connection_closed\")]\n    ConnectionClosed(quic::connectivity::ConnectionClosed),\n    #[serde(rename = \"quic:connection_id_updated\")]\n    ConnectionIdUpdated(quic::connectivity::ConnectionIdUpdated),\n    #[serde(rename = \"quic:spin_bit_updated\")]\n    SpinBitUpdated(quic::connectivity::SpinBitUpdated),\n    #[serde(rename = \"quic:connection_state_updated\")]\n    ConnectionStateUpdated(quic::connectivity::ConnectionStateUpdated),\n    #[serde(rename = \"quic:path_assigned\")]\n    PathAssigned(quic::connectivity::PathAssigned),\n    #[serde(rename = \"quic:mtu_updated\")]\n    MtuUpdated(quic::connectivity::MtuUpdated),\n    #[serde(rename = \"quic:version_information\")]\n    VersionInformation(quic::transport::VersionInformation),\n    #[serde(rename = \"quic:alpn_information\")]\n    ALPNInformation(quic::transport::ALPNInformation),\n    #[serde(rename = \"quic:parameters_set\")]\n    ParametersSet(quic::transport::ParametersSet),\n    #[serde(rename = \"quic:parameters_restored\")]\n    
ParametersRestored(quic::transport::ParametersRestored),\n    #[serde(rename = \"quic:packet_sent\")]\n    PacketSent(quic::transport::PacketSent),\n    #[serde(rename = \"quic:packet_received\")]\n    PacketReceived(quic::transport::PacketReceived),\n    #[serde(rename = \"quic:packet_dropped\")]\n    PacketDropped(quic::transport::PacketDropped),\n    #[serde(rename = \"quic:packet_buffered\")]\n    PacketBuffered(quic::transport::PacketBuffered),\n    #[serde(rename = \"quic:packets_acked\")]\n    PacketsAcked(quic::transport::PacketsAcked),\n    #[serde(rename = \"quic:udp_datagrams_sent\")]\n    UdpDatagramSent(quic::transport::UdpDatagramsSent),\n    #[serde(rename = \"quic:udp_datagrams_received\")]\n    UdpDatagramReceived(quic::transport::UdpDatagramsReceived),\n    #[serde(rename = \"quic:udp_datagram_dropped\")]\n    UdpDatagramDropped(quic::transport::UdpDatagramDropped),\n    #[serde(rename = \"quic:stream_state_updated\")]\n    StreamStateUpdated(quic::transport::StreamStateUpdated),\n    #[serde(rename = \"quic:frames_processed\")]\n    FramesProcessed(quic::transport::FramesProcessed),\n    #[serde(rename = \"quic:stream_data_moved\")]\n    StreamDataMoved(quic::transport::StreamDataMoved),\n    #[serde(rename = \"quic:datagram_data_moved\")]\n    DatagramDataMoved(quic::transport::DatagramDataMoved),\n    #[serde(rename = \"quic:migration_state_updated\")]\n    MigrationStateUpdated(quic::transport::MigrationStateUpdated),\n    #[serde(rename = \"quic:key_updated\")]\n    KeyUpdated(quic::security::KeyUpdated),\n    #[serde(rename = \"quic:key_discarded\")]\n    KeyDiscarded(quic::security::KeyDiscarded),\n    #[serde(rename = \"quic:recovery_parameters_set\")]\n    RecoveryParametersSet(quic::recovery::RecoveryParametersSet),\n    #[serde(rename = \"quic:recovery_metrics_updated\")]\n    RecoveryMetricsUpdated(quic::recovery::RecoveryMetricsUpdated),\n    #[serde(rename = \"quic:congestion_state_updated\")]\n    
CongestionStateUpdated(quic::recovery::CongestionStateUpdated),\n    #[serde(rename = \"quic:loss_timer_updated\")]\n    LossTimerUpdated(quic::recovery::LossTimerUpdated),\n    #[serde(rename = \"quic:packet_lost\")]\n    PacketLost(quic::recovery::PacketLost),\n    #[serde(rename = \"quic:marked_for_retransmit\")]\n    MarkedForRetransmit(quic::recovery::MarkedForRetransmit),\n    #[serde(rename = \"quic:ecn_state_updated\")]\n    ECNStateUpdated(quic::recovery::ECNStateUpdated),\n    #[serde(rename = \"loglevel:error\")]\n    Error(loglevel::Error),\n    #[serde(rename = \"loglevel:warning\")]\n    Warning(loglevel::Warning),\n    #[serde(rename = \"loglevel:info\")]\n    Info(loglevel::Info),\n    #[serde(rename = \"loglevel:debug\")]\n    Debug(loglevel::Debug),\n    #[serde(rename = \"loglevel:verbose\")]\n    Verbose(loglevel::Verbose),\n}\n\npub trait BeSpecificEventData {\n    fn scheme() -> &'static str;\n\n    fn importance() -> EventImportance;\n}\n\n#[enum_dispatch::enum_dispatch]\npub trait BeEventData {\n    fn scheme(&self) -> &'static str;\n\n    fn importance(&self) -> EventImportance;\n}\n\nimpl<S: BeSpecificEventData> BeEventData for S {\n    #[inline]\n    fn scheme(&self) -> &'static str {\n        S::scheme()\n    }\n\n    #[inline]\n    fn importance(&self) -> EventImportance {\n        S::importance()\n    }\n}\n\nmacro_rules! imp_be_events {\n    ( $($importance:ident $event:ty => $prefix:ident $scheme:literal ;)* ) => {\n        $( imp_be_events!{@impl_one $importance $event => $prefix $scheme ; } )*\n    };\n    (@impl_one $importance:ident $event:ty => urn $scheme:literal ; ) => {\n        impl BeSpecificEventData for $event {\n            fn scheme() -> &'static str {\n                concat![\"urn:ietf:params:qlog:events:\",$scheme]\n            }\n\n            fn importance() -> EventImportance {\n                EventImportance::$importance\n            }\n        }\n    };\n}\n\nimp_be_events! 
{\n    Extra quic::connectivity::ServerListening        => urn \"quic:server_listening\";\n    Base  quic::connectivity::ConnectionStarted      => urn \"quic:connection_started\";\n    Base  quic::connectivity::ConnectionClosed       => urn \"quic:connection_closed\";\n    Base  quic::connectivity::ConnectionIdUpdated    => urn \"quic:connection_id_updated\";\n    Base  quic::connectivity::SpinBitUpdated         => urn \"quic:spin_bit_updated\";\n    Base  quic::connectivity::ConnectionStateUpdated => urn \"quic:connection_state_updated\";\n    Base  quic::connectivity::PathAssigned           => urn \"quic:path_assigned\";\n    Extra quic::connectivity::MtuUpdated             => urn \"quic:mtu_updated\";\n    Core  quic::transport::VersionInformation        => urn \"quic:version_information\";\n    Core  quic::transport::ALPNInformation           => urn \"quic:alpn_information\";\n    Core  quic::transport::ParametersSet             => urn \"quic:parameters_set\";\n    Base  quic::transport::ParametersRestored        => urn \"quic:parameters_restored\";\n    Core  quic::transport::PacketSent                => urn \"quic:packet_sent\";\n    Core  quic::transport::PacketReceived            => urn \"quic:packet_received\";\n    Base  quic::transport::PacketDropped             => urn \"quic:packet_dropped\";\n    Base  quic::transport::PacketBuffered            => urn \"quic:packet_buffered\";\n    Extra quic::transport::PacketsAcked              => urn \"quic:packets_acked\";\n    Extra quic::transport::UdpDatagramsSent           => urn \"quic:udp_datagrams_sent\";\n    Extra quic::transport::UdpDatagramsReceived       => urn \"quic:udp_datagrams_received\";\n    Extra quic::transport::UdpDatagramDropped        => urn \"quic:udp_datagram_dropped\";\n    Base  quic::transport::StreamStateUpdated        => urn \"quic:stream_state_updated\";\n    Extra quic::transport::FramesProcessed           => urn \"quic:frames_processed\";\n    Base  quic::transport::StreamDataMoved 
          => urn \"quic:stream_data_moved\";\n    Base  quic::transport::DatagramDataMoved         => urn \"quic:datagram_data_moved\";\n    Extra quic::transport::MigrationStateUpdated     => urn \"quic:migration_state_updated\";\n    Base  quic::security::KeyUpdated                 => urn \"quic:key_updated\";\n    Base  quic::security::KeyDiscarded               => urn \"quic:key_discarded\";\n    Base  quic::recovery::RecoveryParametersSet      => urn \"quic:recovery_parameters_set\";\n    Core  quic::recovery::RecoveryMetricsUpdated     => urn \"quic:recovery_metrics_updated\";\n    Base  quic::recovery::CongestionStateUpdated     => urn \"quic:congestion_state_updated\";\n    Extra quic::recovery::LossTimerUpdated           => urn \"quic:loss_timer_updated\";\n    Core  quic::recovery::PacketLost                 => urn \"quic:packet_lost\";\n    Extra quic::recovery::MarkedForRetransmit        => urn \"quic:marked_for_retransmit\";\n    Extra quic::recovery::ECNStateUpdated            => urn \"quic:ecn_state_updated\";\n    Core  loglevel::Error                            => urn \"loglevel:error\";\n    Base  loglevel::Warning                          => urn \"loglevel:warning\";\n    Extra loglevel::Info                             => urn \"loglevel:info\";\n    Extra loglevel::Debug                            => urn \"loglevel:debug\";\n    Extra loglevel::Verbose                          => urn \"loglevel:verbose\";\n}\n\n/// serialize/deserialize as hex string, but store as bytes in memory\n#[serde_with::serde_as]\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct HexString(#[serde_as(as = \"serde_with::hex::Hex\")] Bytes);\n\nimpl Display for HexString {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{:x}\", self.0)\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, 
Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct RawInfo {\n    /// the full byte length of the entity (e.g., packet or frame),\n    /// including possible headers and trailers\n    length: Option<u64>,\n    /// the byte length of the entity's payload,\n    /// excluding possible headers or trailers\n    payload_length: Option<u64>,\n    /// the (potentially truncated) contents of the full entity,\n    /// including headers and possibly trailers\n    #[builder(setter(custom))]\n    data: Option<HexString>,\n}\n\nimpl RawInfoBuilder {\n    /// the (potentially truncated) contents of the full entity,\n    /// including headers and possibly trailers\n    pub fn data<D: ContinuousData>(&mut self, data: D) -> &mut Self {\n        self.data = telemetry::filter::raw_data().then(|| Some(data.to_bytes().into()));\n        self\n    }\n}\n\nimpl<D: ContinuousData> From<D> for RawInfo {\n    fn from(data: D) -> Self {\n        build!(RawInfo {\n            length: data.len() as u64,\n            data: data\n        })\n    }\n}\n\n/// ``` rust, ignore\n/// crate::gen_builder_method! {\n///     FooBuilder       => Foo;\n///     BarBuilder       => Bar;\n/// }\n/// ```\n#[doc(hidden)]\n#[macro_export] // used in this crate only\nmacro_rules! gen_builder_method {\n    ( $($builder:ty => $event:ty;)* ) => {\n        $( $crate::gen_builder_method!{@impl_one $event => $builder ;} )*\n    };\n    (@impl_one $event:ty => $builder:ty ; ) => {\n        impl $event {\n            pub fn builder() -> $builder {\n                Default::default()\n            }\n        }\n\n        impl $builder {\n            pub fn build(&mut self) -> $event {\n                self.fallible_build().expect(\"Failed to build event\")\n            }\n        }\n    };\n}\n\ngen_builder_method! 
{\n    LogFileBuilder       => LogFile;\n    QlogFileBuilder      => QlogFile;\n    QlogFileSeqBuilder   => QlogFileSeq;\n    TraceBuilder         => Trace;\n    TraceSeqBuilder      => TraceSeq;\n    TraceErrorBuilder    => TraceError;\n    CommonFieldsBuilder  => CommonFields;\n    VantagePointBuilder  => VantagePoint;\n    EventBuilder         => Event;\n    ReferenceTimeBuilder => ReferenceTime;\n    RawInfoBuilder       => RawInfo;\n}\n\nmod rollback {\n\n    use super::*;\n    use crate::{build, legacy};\n\n    impl TryFrom<EventData> for legacy::EventData {\n        type Error = ();\n        #[rustfmt::skip]\n        fn try_from(value: EventData) -> Result<Self, Self::Error> {\n            match value {\n                EventData::ServerListening(data) => Ok(legacy::EventData::ServerListening(data.into())),\n                EventData::ConnectionStarted(data) => Ok(legacy::EventData::ConnectionStarted(data.into())),\n                EventData::ConnectionClosed(data) => Ok(legacy::EventData::ConnectionClosed(data.into())),\n                EventData::ConnectionIdUpdated(data) => Ok(legacy::EventData::ConnectionIdUpdated(data.into())),\n                EventData::SpinBitUpdated(data) => Ok(legacy::EventData::SpinBitUpdated(data.into())),\n                EventData::ConnectionStateUpdated(data) => Ok(legacy::EventData::ConnectionStateUpdated(data.into())),\n                EventData::PathAssigned(_data) => Err(()),\n                EventData::MtuUpdated(_data) => Err(()),\n                EventData::VersionInformation(data) => Ok(legacy::EventData::VersionInformation(data.into())),\n                EventData::ALPNInformation(data) => Ok(legacy::EventData::AlpnInformation(data.into())),\n                EventData::ParametersSet(data) => Ok(legacy::EventData::TransportParametersSet(data.into())),\n                EventData::ParametersRestored(data) => Ok(legacy::EventData::TransportParametersRestored(data.into())),\n                EventData::PacketSent(data) => 
Ok(legacy::EventData::PacketSent(data.into())),\n                EventData::PacketReceived(data) => Ok(legacy::EventData::PacketReceived(data.into())),\n                EventData::PacketDropped(data) => Ok(legacy::EventData::PacketDropped(data.into())),\n                EventData::PacketBuffered(data) => Ok(legacy::EventData::PacketBuffered(data.into())),\n                EventData::PacketsAcked(data) => Ok(legacy::EventData::PacketsAcked(data.into())),\n                EventData::UdpDatagramsSent(data) => Ok(legacy::EventData::DatagramsSent(data.into())),\n                EventData::UdpDatagramsReceived(data) => Ok(legacy::EventData::DatagramsReceived(data.into())),\n                EventData::UdpDatagramDropped(data) => Ok(legacy::EventData::DatagramDropped(data.into())),\n                EventData::StreamStateUpdated(data) => Ok(legacy::EventData::StreamStateUpdated(data.into())),\n                EventData::FramesProcessed(data) => Ok(legacy::EventData::FramesProcessed(data.into())),\n                EventData::StreamDataMoved(data) => Ok(legacy::EventData::DataMoved(data.into())),\n                EventData::DatagramDataMoved(_data) => Err(()),\n                EventData::MigrationStateUpdated(_data) => Err(()),\n                EventData::KeyUpdated(data) => Ok(legacy::EventData::KeyUpdated(data.into())),\n                EventData::KeyDiscarded(data) => Ok(legacy::EventData::KeyDiscarded(data.into())),\n                EventData::RecoveryParametersSet(data) => Ok(legacy::EventData::RecoveryParametersSet(data.into())),\n                EventData::RecoveryMetricsUpdated(data) => Ok(legacy::EventData::MetricsUpdated(data.into())),\n                EventData::CongestionStateUpdated(data) => Ok(legacy::EventData::CongestionStateUpdated(data.into())),\n                EventData::LossTimerUpdated(data) => Ok(legacy::EventData::LossTimerUpdated(data.into())),\n                EventData::PacketLost(data) => Ok(legacy::EventData::PacketLost(data.into())),\n              
  EventData::MarkedForRetransmit(data) => Ok(legacy::EventData::MarkedForRetransmit(data.into())),\n                EventData::ECNStateUpdated(_data) => Err(()),\n                EventData::Error(data) => Ok(legacy::EventData::GenericError(data.into())),\n                EventData::Warning(data) => Ok(legacy::EventData::GenericWarning(data.into())),\n                EventData::Info(data) => Ok(legacy::EventData::GenericInfo(data.into())),\n                EventData::Debug(data) => Ok(legacy::EventData::GenericDebug(data.into())),\n                EventData::Verbose(data) => Ok(legacy::EventData::GenericVerbose(data.into())),\n            }\n        }\n    }\n\n    impl From<TimeFormat> for legacy::TimeFormat {\n        fn from(value: TimeFormat) -> Self {\n            match value {\n                // note: the correct mapping depends on reference_time\n                // TODO: check reference_time here\n                TimeFormat::RelativeToEpoch => legacy::TimeFormat::Absolute,\n                TimeFormat::RelativeToPreviousEvent => legacy::TimeFormat::Delta,\n            }\n        }\n    }\n\n    impl From<ProtocolTypeList> for legacy::ProtocolType {\n        fn from(value: ProtocolTypeList) -> Self {\n            value\n                .0\n                .into_iter()\n                .map(|x| x.into())\n                .collect::<Vec<_>>()\n                .into()\n        }\n    }\n\n    impl TryFrom<Event> for legacy::Event {\n        type Error = ();\n        fn try_from(mut event: Event) -> Result<Self, Self::Error> {\n            if let Some(system_info) = event.system_info {\n                let value = serde_json::to_value(system_info).unwrap();\n                event.custom_fields.insert(\"system_info\".to_owned(), value);\n            }\n            if let Some(path) = event.path {\n                let value = serde_json::to_value(path).unwrap();\n                event.custom_fields.insert(\"path\".to_owned(), value);\n            }\n            Ok(build!(legacy::Event 
{\n                time: event.time,\n                data: { legacy::EventData::try_from(event.data)? },\n                ?time_format: event.time_format,\n                ?protocol_type: event.protocol_types,\n                ?group_id: event.group_id,\n                custom_fields: event.custom_fields\n            }))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use qbase::cid::ConnectionId;\n\n    use super::*;\n    use crate::{loglevel::Warning, quic::connectivity::ConnectionStarted, telemetry::ExportEvent};\n\n    #[test]\n    fn custom_fields() {\n        let odcid = ConnectionID::from(ConnectionId::from_slice(&[\n            0x61, 0xb6, 0x91, 0x78, 0x80, 0xf7, 0x95, 0xee,\n        ]));\n        let common_fields = build!(CommonFields {\n            path: \"\".to_owned(),\n            time_format: TimeFormat::default(),\n            reference_time: ReferenceTime::default(),\n            protocol_types: ProtocolTypeList::from(vec![ProtocolType::quic()]),\n            group_id: GroupID::from(odcid),\n        });\n        let expect = r#\"{\n  \"path\": \"\",\n  \"time_format\": \"relative_to_epoch\",\n  \"reference_time\": {\n    \"clock_type\": \"system\",\n    \"epoch\": \"1970-01-01T00:00:00.000Z\"\n  },\n  \"protocol_types\": [\n    \"QUIC\"\n  ],\n  \"group_id\": \"61b6917880f795ee\"\n}\"#;\n        assert_eq!(\n            serde_json::to_string_pretty(&common_fields).unwrap(),\n            expect\n        );\n        let with_custom_fields = r#\"{\n  \"path\": \"\",\n  \"time_format\": \"relative_to_epoch\",\n  \"reference_time\": {\n    \"clock_type\": \"system\",\n    \"epoch\": \"1970-01-01T00:00:00.000Z\"\n  },\n  \"protocol_types\": [\n    \"QUIC\"\n  ],\n  \"group_id\": \"61b6917880f795ee\",\n  \"pathway\": \"from A to relay\",\n  \"customB\": \"some other extensions\"\n}\"#;\n        let des = serde_json::from_str::<CommonFields>(with_custom_fields).unwrap();\n        let filed_string = 
serde_json::to_string_pretty(&des).unwrap();\n        let des2 = serde_json::from_str::<CommonFields>(&filed_string).unwrap();\n        assert_eq!(des, des2);\n    }\n\n    #[test]\n    fn event_data() {\n        let data = EventData::from(build!(Warning {\n            message: \"deepseek（已深度思考（用时0秒））：服务器繁忙，请稍后再试。\",\n            code: 255u64,\n        }));\n        let event = build!(Event {\n            time: 1.0,\n            data: data.clone(),\n        });\n        let expect = r#\"{\n  \"time\": 1.0,\n  \"name\": \"loglevel:warning\",\n  \"data\": {\n    \"code\": 255,\n    \"message\": \"deepseek（已深度思考（用时0秒））：服务器繁忙，请稍后再试。\"\n  }\n}\"#;\n        assert_eq!(serde_json::to_string_pretty(&event).unwrap(), expect);\n        assert_eq!(data.importance(), EventImportance::Base);\n    }\n\n    #[test]\n    fn rollback() {\n        fn group_id() -> GroupID {\n            GroupID::from(ConnectionID::from(ConnectionId::from_slice(&[\n                0xfe, 0xdc, 0xba, 0x09, 0x87, 0x65, 0x43, 0x32,\n            ])))\n        }\n\n        fn protocol_types() -> Vec<String> {\n            vec![\"QUIC\".to_owned(), \"UNKNOWN\".to_owned()]\n        }\n\n        struct TestBroker;\n\n        impl ExportEvent for TestBroker {\n            fn emit(&self, event: Event) {\n                let legacy = legacy::Event::try_from(event).unwrap();\n                let event = serde_json::to_value(legacy).unwrap();\n\n                let data = serde_json::json!({\n                    \"ip_version\": \"v4\",\n                    \"src_ip\": \"127.0.0.1\",\n                    \"dst_ip\": \"192.168.31.1\",\n                    \"protocol\": \"QUIC\",\n                    \"src_port\": 23456,\n                    \"dst_port\": 21\n                });\n                // in the legacy schema, this field is named protocol_type (not protocol_types)\n                let protocol_type = serde_json::json!([\"QUIC\", \"UNKNOWN\"]);\n\n                assert_eq!(event[\"data\"], data);\n                assert_eq!(event[\"protocol_types\"], 
serde_json::Value::Null);\n                assert_eq!(event[\"protocol_type\"], protocol_type);\n                assert_eq!(event[\"to_router\"], true);\n            }\n        }\n\n        span!(\n            Arc::new(TestBroker),\n            group_id = group_id(),\n            protocol_types = protocol_types()\n        )\n        .in_scope(|| {\n            let src = \"127.0.0.1:23456\".parse().unwrap();\n            let dst = \"192.168.31.1:21\".parse().unwrap();\n            event!(ConnectionStarted { socket: (src, dst) }, to_router = true)\n        })\n    }\n}\n"
  },
  {
    "path": "qevent/src/loglevel.rs",
    "content": "use derive_builder::Builder;\nuse serde::{Deserialize, Serialize};\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct Error {\n    code: Option<u64>,\n    message: Option<String>,\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct Warning {\n    code: Option<u64>,\n    message: Option<String>,\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Info {\n    message: String,\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Debug {\n    message: String,\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct Verbose {\n    message: String,\n}\n\ncrate::gen_builder_method! 
{\n    ErrorBuilder   => Error;\n    WarningBuilder => Warning;\n    InfoBuilder    => Info;\n    DebugBuilder   => Debug;\n    VerboseBuilder => Verbose;\n}\n\nmod rollback {\n    use super::*;\n    use crate::{build, legacy};\n\n    impl From<Error> for legacy::GenericError {\n        fn from(value: Error) -> Self {\n            build!(legacy::GenericError {\n                ?code: value.code,\n                ?message: value.message\n            })\n        }\n    }\n\n    impl From<Warning> for legacy::GenericWarning {\n        fn from(value: Warning) -> Self {\n            build!(legacy::GenericWarning {\n                ?code: value.code,\n                ?message: value.message\n            })\n        }\n    }\n\n    impl From<Info> for legacy::GenericInfo {\n        fn from(value: Info) -> Self {\n            build!(legacy::GenericInfo {\n                message: value.message\n            })\n        }\n    }\n\n    impl From<Debug> for legacy::GenericDebug {\n        fn from(value: Debug) -> Self {\n            build!(legacy::GenericDebug {\n                message: value.message\n            })\n        }\n    }\n\n    impl From<Verbose> for legacy::GenericVerbose {\n        fn from(value: Verbose) -> Self {\n            build!(legacy::GenericVerbose {\n                message: value.message\n            })\n        }\n    }\n}\n"
  },
  {
    "path": "qevent/src/macro_support.rs",
    "content": "pub use serde_json::Value;\n"
  },
  {
    "path": "qevent/src/macros.rs",
    "content": "/// A macro to create a qlog event struct from a set of fields.\n#[macro_export]\nmacro_rules! build {\n    ($struct:ty { $($tt:tt)* }) => {{\n        let mut __builder = <$struct>::builder();\n        $crate::build!(@field __builder, $($tt)*);\n        __builder.build()\n    }};\n    (@field $builder:expr, $field:ident $(, $($remain:tt)* )? ) => {\n        $builder.$field($field);\n        $crate::build!(@field $builder $(, $($remain)* )? );\n    };\n    (@field $builder:expr, $field:ident: Map        { $($tt:tt)* } $(, $($remain:tt)* )? ) => {\n        $builder.$field($crate::map!{ $($tt)* });\n        $crate::build!(@field $builder $(, $($remain)* )? );\n    };\n    (@field $builder:expr, $field:ident: $struct:ty { $($tt:tt)* } $(, $($remain:tt)* )? ) => {\n        $builder.$field($crate::build!($struct { $($tt)* }));\n        $crate::build!(@field $builder $(, $($remain)* )? );\n    };\n    (@field $builder:expr, $field:ident: $value:expr $(, $($remain:tt)* )? ) => {\n        $builder.$field($value);\n        $crate::build!(@field $builder $(, $($remain)* )? );\n    };\n    (@field $builder:expr, ? $field:ident $(, $($remain:tt)* )? ) => {\n        if let Some(__value) = $field {\n            $builder.$field(__value);\n        }\n        $crate::build!(@field $builder $(, $($remain)* )? );\n    };\n    (@field $builder:expr, ? $field:ident: $value:expr $(, $($remain:tt)* )? ) => {\n        if let Some(__value) = $value {\n            $builder.$field(__value);\n        }\n        $crate::build!(@field $builder $(, $($remain)* )? );\n    };\n    (@field $builder:expr $(,)?) => {};\n}\n\n/// A macro to create a `HashMap<String, Value>` from a set of fields.\n/// ``` rust, ignore\n/// qevent::map! {\n///     field1: value,\n///     field2,\n///     field3: Map {\n///        subfield1: value,\n///     },\n///     event: loglevel::Error {\n///          message: \"An error occurred\",\n///     }\n/// }\n/// ```\n#[macro_export]\nmacro_rules! 
map {\n    {$($tt:tt)*}=>{ {\n        let mut map = ::std::collections::HashMap::<String, $crate::macro_support::Value>::new();\n        $crate::map_internal!(map, $($tt)*);\n        map\n    }};\n}\n\n#[doc(hidden)]\n#[macro_export]\nmacro_rules! map_internal {\n    ($map:expr, $field:ident $(, $($remain:tt)* )?) => {\n        $map.insert(stringify!($field).to_owned(), $field.into());\n        $crate::map_internal!($map $(, $($remain)* )?)\n    };\n    ($map:expr, $field:ident: Map         {$($tt:tt)*} $(, $($remain:tt)* )?) => {\n        $map.insert(stringify!($field).to_owned(), $crate::map!{ $($tt)* });\n        $crate::map_internal!($map $(, $($remain)* )?)\n    };\n    ($map:expr, $field:ident: $struct:ty  {$($tt:tt)*} $(, $($remain:tt)* )?) => {\n        $map.insert(stringify!($field).to_owned(), $crate::build!($struct { $($tt)* }).into());\n        $crate::map_internal!($map $(, $($remain)* )?)\n    };\n    ($map:expr, $field:ident: $value:expr $(, $($remain:tt)* )?) => {\n        $map.insert(stringify!($field).to_owned(), $value.into());\n        $crate::map_internal!($map $(, $($remain)* )?)\n    };\n    ($map:expr $(,)?) => {};\n}\n"
  },
  {
    "path": "qevent/src/packet.rs",
    "content": "use bytes::{BufMut, buf::UninitSlice};\nuse derive_more::Deref;\nuse qbase::{\n    net::tx::Signals,\n    packet::{\n        RecordFrame,\n        header::{\n            EncodeHeader, GetDcid, GetScid, GetType, io::WriteHeader, long::LongHeader,\n            short::OneRttHeader,\n        },\n        io::{AssemblePacket, PacketInfo, PacketWriter as BasePacketWriter},\n        keys::DirectionalKeys,\n        number::PacketNumber,\n        signal::KeyPhaseBit,\n    },\n    util::ContinuousData,\n};\n\nuse crate::{\n    RawInfo,\n    quic::{\n        PacketHeader as QEventPacketHeader, PacketHeaderBuilder as QEventPacketHeaderBuilder,\n        QuicFrame as QEventFrame, QuicFramesCollector, transport::PacketSent,\n    },\n};\n\nstruct PacketLogger {\n    header: QEventPacketHeaderBuilder,\n    frames: QuicFramesCollector<PacketSent>,\n}\n\nimpl PacketLogger {\n    pub fn record_frame(&mut self, frame: impl Into<QEventFrame>) {\n        self.frames.extend([frame]);\n    }\n\n    pub fn log_sent(mut self, packet: &BasePacketWriter) {\n        // TODO: this logic must change if we ever assemble Version Negotiation or Retry packets here\n        if !packet.is_short_header() {\n            self.header.length((packet.payload_len()) as u16);\n        }\n\n        crate::event!(PacketSent {\n            header: self.header.build(),\n            frames: self.frames,\n            raw: RawInfo {\n                length: packet.packet_len() as u64,\n                payload_length: packet.payload_len() as u64,\n                data: packet.buffer(),\n            },\n            // TODO: trigger\n        })\n    }\n}\n\n#[derive(Deref)]\npub struct PacketWriter<'b> {\n    #[deref]\n    writer: BasePacketWriter<'b>,\n    logger: PacketLogger,\n}\n\nimpl<'b> AsRef<BasePacketWriter<'b>> for PacketWriter<'b> {\n    #[inline]\n    fn as_ref(&self) -> &BasePacketWriter<'b> {\n        &self.writer\n    }\n}\n\nimpl<'b> PacketWriter<'b> {\n    pub fn new_long<S>(\n        header: &LongHeader<S>,\n        buffer: &'b mut [u8],\n       
 pn: (u64, PacketNumber),\n        keys: DirectionalKeys,\n    ) -> Result<Self, Signals>\n    where\n        S: EncodeHeader,\n        LongHeader<S>: GetType,\n        for<'a> &'a mut [u8]: WriteHeader<LongHeader<S>>,\n    {\n        Ok(Self {\n            writer: BasePacketWriter::new_long(header, buffer, pn, keys)?,\n            logger: PacketLogger {\n                header: {\n                    let mut builder = QEventPacketHeader::builder();\n                    builder\n                        .packet_type(header.get_type())\n                        .packet_number(pn.0)\n                        .scil(header.scid().len() as u8)\n                        .scid(*header.scid())\n                        .dcil(header.dcid().len() as u8)\n                        .dcid(*header.dcid());\n                    builder\n                },\n                frames: QuicFramesCollector::new(),\n            },\n        })\n    }\n\n    pub fn new_short(\n        header: &OneRttHeader,\n        buffer: &'b mut [u8],\n        pn: (u64, PacketNumber),\n        keys: DirectionalKeys,\n        key_phase: KeyPhaseBit,\n    ) -> Result<Self, Signals> {\n        Ok(Self {\n            writer: BasePacketWriter::new_short(header, buffer, pn, keys, key_phase)?,\n            logger: PacketLogger {\n                header: {\n                    let mut builder = QEventPacketHeader::builder();\n                    builder\n                        .packet_type(header.get_type())\n                        .packet_number(pn.0)\n                        .dcil(header.dcid().len() as u8)\n                        .dcid(*header.dcid());\n                    builder\n                },\n                frames: QuicFramesCollector::new(),\n            },\n        })\n    }\n}\n\nunsafe impl<'b> BufMut for PacketWriter<'b> {\n    #[inline]\n    fn remaining_mut(&self) -> usize {\n        self.writer.remaining_mut()\n    }\n\n    #[inline]\n    unsafe fn advance_mut(&mut self, cnt: usize) {\n        
unsafe { self.writer.advance_mut(cnt) }\n    }\n\n    #[inline]\n    fn chunk_mut(&mut self) -> &mut UninitSlice {\n        self.writer.chunk_mut()\n    }\n\n    #[inline]\n    fn put_bytes(&mut self, val: u8, cnt: usize) {\n        if cnt > 0 {\n            self.logger.record_frame(QEventFrame::Padding {\n                length: Some(cnt as _),\n                payload_length: cnt as _,\n            });\n            self.writer.put_bytes(val, cnt);\n        }\n    }\n}\n\nimpl<'b, F, D: ContinuousData> RecordFrame<F, D> for PacketWriter<'b>\nwhere\n    for<'f> &'f F: Into<QEventFrame>,\n    BasePacketWriter<'b>: RecordFrame<F, D>,\n{\n    #[inline]\n    fn record_frame(&mut self, frame: &F) {\n        self.logger.record_frame(frame);\n        self.writer.record_frame(frame);\n    }\n}\n\nimpl<'b> AssemblePacket for PacketWriter<'b> {\n    #[inline]\n    fn encrypt_and_protect_packet(self) -> (usize, PacketInfo) {\n        self.logger.log_sent(&self.writer);\n        self.writer.encrypt_and_protect_packet()\n    }\n}\n"
  },
  {
    "path": "qevent/src/quic/connectivity.rs",
    "content": "use std::net::SocketAddr;\n\nuse derive_builder::Builder;\nuse derive_more::From;\nuse qbase::{\n    error::{AppError, Error, ErrorKind, QuicError},\n    frame::{AppCloseFrame, ConnectionCloseFrame, QuicCloseFrame},\n};\n\nuse super::{\n    ApplicationCode, ConnectionID, CryptoError, IPAddress, IpVersion, Owner, PathEndpointInfo,\n    TransportError,\n};\nuse crate::{Deserialize, PathID, Serialize};\n\n/// Emitted when the server starts accepting connections. It has Extra\n/// importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ServerListening {\n    #[builder(default)]\n    ip_v4: Option<IPAddress>,\n    #[builder(default)]\n    ip_v6: Option<IPAddress>,\n    #[builder(default)]\n    port_v4: Option<u16>,\n    #[builder(default)]\n    port_v6: Option<u16>,\n\n    /// the server will always answer client initials with a retry\n    /// (no 1-RTT connection setups by choice)\n    #[builder(default)]\n    retry_required: Option<bool>,\n}\n\nimpl ServerListeningBuilder {\n    pub fn address(&mut self, socket_addr: SocketAddr) -> &mut Self {\n        match socket_addr {\n            SocketAddr::V4(addr) => self.ip_v4(addr.ip().to_string()).port_v4(addr.port()),\n            SocketAddr::V6(addr) => self.ip_v6(addr.ip().to_string()).port_v6(addr.port()),\n        }\n    }\n}\n\n/// The connection_started event is used for both attempting (client-\n/// perspective) and accepting (server-perspective) new connections. Note\n/// that while there is overlap with the connection_state_updated event,\n/// this is a separate event in order to capture additional data that\n/// can be useful to log. 
It has Base importance level; see Section 9.2\n/// of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectionStarted {\n    ip_version: IpVersion,\n    src_ip: IPAddress,\n    dst_ip: IPAddress,\n\n    // transport layer protocol\n    #[builder(default = \"ConnectionStarted::default_protocol()\")]\n    #[serde(default = \"ConnectionStarted::default_protocol\")]\n    protocol: String,\n    #[builder(default)]\n    src_port: Option<u16>,\n    #[builder(default)]\n    dst_port: Option<u16>,\n    #[builder(default)]\n    src_cid: Option<ConnectionID>,\n    #[builder(default)]\n    dst_cid: Option<ConnectionID>,\n}\n\nimpl ConnectionStartedBuilder {\n    /// helper method to set the source and destination socket addresses\n    pub fn socket(&mut self, (src, dst): (SocketAddr, SocketAddr)) -> &mut Self {\n        debug_assert_eq!(src.is_ipv4(), dst.is_ipv4());\n        self.ip_version(if src.is_ipv4() {\n            IpVersion::V4\n        } else {\n            IpVersion::V6\n        })\n        .src_ip(src.ip().to_string())\n        .dst_ip(dst.ip().to_string())\n        .src_port(src.port())\n        .dst_port(dst.port())\n    }\n}\n\nimpl ConnectionStarted {\n    pub fn default_protocol() -> String {\n        String::from(\"QUIC\")\n    }\n}\n\n/// The connection_closed event is used for logging when a connection was\n/// closed, typically when an error or timeout occurred.  It has Base\n/// importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// Note that this event has overlap with the connection_state_updated\n/// event, as well as the CONNECTION_CLOSE frame.  
However, in practice,\n/// when analyzing large deployments, it can be useful to have a single\n/// event representing a connection_closed event, which also includes an\n/// additional reason field to provide more information.  Furthermore, it\n/// is useful to log closures due to timeouts, which are difficult to\n/// reflect using the other options.\n///\n/// The connection_closed event is intended to be logged either when the\n/// local endpoint silently discards the connection due to an idle\n/// timeout, when a CONNECTION_CLOSE frame is sent (the connection enters\n/// the 'closing' state on the sender side), when a CONNECTION_CLOSE\n/// frame is received (the connection enters the 'draining' state on the\n/// receiver side) or when a Stateless Reset packet is received (the\n/// connection is discarded at the receiver side).  Connectivity-related\n/// updates after this point (e.g., exiting a 'closing' or 'draining'\n/// state), should be logged using the connection_state_updated event\n/// instead.\n///\n/// In QUIC there are two main connection-closing error categories:\n/// connection and application errors.  They have well-defined error\n/// codes and semantics.  Next to these however, there can be internal\n/// errors that occur that may or may not get mapped to the official\n/// error codes in implementation-specific ways.  
As such, multiple error\n/// codes can be set on the same event to reflect this.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct ConnectionClosed {\n    /// which side closed the connection\n    owner: Option<Owner>,\n    connection_code: Option<ConnectionCode>,\n    application_code: Option<ApplicationCode>,\n    internal_code: Option<u32>,\n    reason: Option<String>,\n    trigger: Option<ConnectionCloseTrigger>,\n}\n\nimpl ConnectionClosedBuilder {\n    pub fn ccf(&mut self, ccf: &ConnectionCloseFrame) -> &mut Self {\n        match &ccf {\n            ConnectionCloseFrame::Quic(frame) => self.quic_close_frame(frame),\n            ConnectionCloseFrame::App(frame) => self.app_close_frame(frame),\n        }\n    }\n\n    fn quic_close_frame(&mut self, frame: &QuicCloseFrame) -> &mut ConnectionClosedBuilder {\n        self.connection_code(frame.error_kind())\n            .reason(frame.reason().to_owned())\n    }\n\n    fn app_close_frame(&mut self, frame: &AppCloseFrame) -> &mut ConnectionClosedBuilder {\n        self.application_code(frame.error_code() as u32)\n            .reason(frame.reason().to_owned())\n    }\n\n    pub fn quic_error(&mut self, error: &QuicError) -> &mut Self {\n        self.connection_code(error.kind())\n            .reason(error.reason().to_owned())\n    }\n\n    pub fn app_error(&mut self, error: &AppError) -> &mut Self {\n        self.application_code(error.error_code() as u32)\n            .reason(error.reason().to_owned())\n    }\n\n    pub fn error(&mut self, error: &Error) {\n        match error {\n            Error::Quic(quic_error) => self.quic_error(quic_error),\n            Error::App(app_error) => self.app_error(app_error),\n        };\n    
}\n}\n\n#[derive(Debug, Clone, Copy, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ConnectionCode {\n    TransportError(TransportError),\n    CryptoError(CryptoError),\n    Value(u32),\n}\n\nimpl From<ConnectionCode> for super::ConnectionCloseErrorCode {\n    fn from(value: ConnectionCode) -> Self {\n        match value {\n            ConnectionCode::TransportError(err) => err.into(),\n            ConnectionCode::CryptoError(err) => err.into(),\n            ConnectionCode::Value(code) => (code as u64).into(),\n        }\n    }\n}\n\nimpl From<ErrorKind> for ConnectionCode {\n    fn from(kind: ErrorKind) -> Self {\n        match kind {\n            ErrorKind::None => TransportError::NoError.into(),\n            ErrorKind::Internal => TransportError::InternalError.into(),\n            ErrorKind::ConnectionRefused => TransportError::ConnectionRefused.into(),\n            ErrorKind::FlowControl => TransportError::FlowControlError.into(),\n            ErrorKind::StreamLimit => TransportError::StreamLimitError.into(),\n            ErrorKind::StreamState => TransportError::StreamStateError.into(),\n            ErrorKind::FinalSize => TransportError::FinalSizeError.into(),\n            ErrorKind::FrameEncoding => TransportError::FrameEncodingError.into(),\n            ErrorKind::TransportParameter => TransportError::TransportParameterError.into(),\n            ErrorKind::ConnectionIdLimit => TransportError::ConnectionIdLimitError.into(),\n            ErrorKind::ProtocolViolation => TransportError::ProtocolViolation.into(),\n            ErrorKind::InvalidToken => TransportError::InvalidToken.into(),\n            ErrorKind::Application => TransportError::ApplicationError.into(),\n            ErrorKind::CryptoBufferExceeded => TransportError::CryptoBufferExceeded.into(),\n            ErrorKind::KeyUpdate => TransportError::KeyUpdateError.into(),\n            ErrorKind::AeadLimitReached => TransportError::AeadLimitReached.into(),\n            
ErrorKind::NoViablePath => TransportError::NoViablePath.into(),\n            ErrorKind::Crypto(code) => CryptoError(code).into(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum ConnectionCloseTrigger {\n    IdleTimeout,\n    Application,\n    Error,\n    VersionMismatch,\n    /// when received from peer\n    StatelessReset,\n    /// when it is unclear what triggered the CONNECTION_CLOSE\n    Unspecified,\n}\n\n/// The connection_id_updated event is emitted when either party updates\n/// their current Connection ID.  As this typically happens only\n/// sparingly over the course of a connection, using this event is more\n/// efficient than logging the observed CID with each and every\n/// packet_sent or packet_received event.  It has Base importance level;\n/// see Section 9.2 of [QLOG-MAIN].\n///\n/// The connection_id_updated event is viewed from the perspective of the\n/// endpoint applying the new ID.  As such, when the endpoint receives a\n/// new connection ID from the peer, the owner field will be \"remote\".\n/// When the endpoint updates its own connection ID, the owner field will\n/// be \"local\".\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectionIdUpdated {\n    owner: Owner,\n    #[builder(default)]\n    old: Option<ConnectionID>,\n    #[builder(default)]\n    new: Option<ConnectionID>,\n}\n\n/// The spin_bit_updated event conveys information about the QUIC latency\n/// spin bit; see Section 17.4 of [QUIC-TRANSPORT].  The event is emitted\n/// when the spin bit changes value; it SHOULD NOT be emitted if the spin\n/// bit is set without changing its value.  
It has Base importance level;\n/// see Section 9.2 of [QLOG-MAIN].\n///\n/// [QUIC-TRANSPORT]: https://www.rfc-editor.org/rfc/rfc9000\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into), build_fn(private, name = \"fallible_build\"))]\npub struct SpinBitUpdated {\n    state: bool,\n}\n\n/// The connection_state_updated event is used to track progress through\n/// QUIC's complex handshake and connection close procedures.  It has\n/// Base importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QUIC-TRANSPORT] does not contain an exhaustive flow diagram with\n/// possible connection states nor their transitions (though some are\n/// explicitly mentioned, like the 'closing' and 'draining' states).  As\n/// such, this document *non-exhaustively* defines those states that are\n/// most likely to be useful for debugging QUIC connections.\n///\n/// QUIC implementations SHOULD mainly log the simplified\n/// BaseConnectionStates, adding the more fine-grained\n/// GranularConnectionStates when more in-depth debugging is required.\n/// Tools SHOULD be able to deal with both types equally.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n/// [QUIC-TRANSPORT]: https://www.rfc-editor.org/rfc/rfc9000\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ConnectionStateUpdated {\n    #[builder(default)]\n    old: Option<ConnectionState>,\n    new: ConnectionState,\n}\n\n#[derive(Debug, Clone, Copy, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ConnectionState {\n    Base(BaseConnectionStates),\n    Granular(GranularConnectionStates),\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, 
Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum BaseConnectionStates {\n    /// Initial packet sent/received\n    Attempted,\n    /// Handshake packet sent/received\n    HandshakeStarted,\n    /// Both sent a TLS Finished message\n    /// and verified the peer's TLS Finished message\n    /// 1-RTT packets can be sent\n    /// RFC 9001 Section 4.1.1\n    HandshakeComplete,\n    /// CONNECTION_CLOSE sent/received,\n    /// stateless reset received or idle timeout\n    Closed,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum GranularConnectionStates {\n    /// RFC 9000 Section 8.1\n    /// client sent Handshake packet OR\n    /// client used connection ID chosen by the server OR\n    /// client used valid address validation token\n    PeerValidated,\n    /// 1-RTT data can be sent by the server,\n    /// but handshake is not done yet\n    /// (server has sent TLS Finished; sometimes called 0.5 RTT data)\n    EarlyWrite,\n\n    /// HANDSHAKE_DONE sent/received.\n    /// RFC 9001 Section 4.1.2\n    HandshakeConfirmed,\n    /// CONNECTION_CLOSE sent\n    Closing,\n    /// CONNECTION_CLOSE received\n    Draining,\n    /// draining or closing period done, connection state discarded\n    Closed,\n}\n\n/// This event is used to associate a single PathID's value with other\n/// parameters that describe a unique network path.\n///\n/// As described in [QLOG-MAIN], each qlog event can be linked to a\n/// single network path by means of the top-level \"path\" field, whose\n/// value is a PathID.  However, since it can be cumbersome to encode\n/// additional path metadata (such as IP addresses or Connection IDs)\n/// directly into the PathID, this event allows such an association to\n/// happen separately.  
As such, PathIDs can be short and unique, and can\n/// even be updated to be associated with new metadata as the\n/// connection's state evolves.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct PathAssigned {\n    path_id: PathID,\n    /// the information for traffic going towards the remote receiver\n    #[builder(default)]\n    path_remote: Option<PathEndpointInfo>,\n    /// the information for traffic coming in at the local endpoint\n    #[builder(default)]\n    path_local: Option<PathEndpointInfo>,\n}\n\n/// The mtu_updated event indicates that the estimated Path MTU was\n/// updated.  This happens as part of the Path MTU discovery process.  It\n/// has Extra importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct MtuUpdated {\n    #[builder(default)]\n    old: Option<u32>,\n    new: u32,\n\n    /// at some point, MTU discovery stops, as a \"good enough\"\n    /// packet size has been found\n    #[builder(default)]\n    #[serde(default)]\n    done: bool,\n}\n\ncrate::gen_builder_method! 
{\n    ServerListeningBuilder        => ServerListening;\n    ConnectionStartedBuilder      => ConnectionStarted;\n    ConnectionClosedBuilder       => ConnectionClosed;\n    ConnectionIdUpdatedBuilder    => ConnectionIdUpdated;\n    SpinBitUpdatedBuilder         => SpinBitUpdated;\n    ConnectionStateUpdatedBuilder => ConnectionStateUpdated;\n    PathAssignedBuilder           => PathAssigned;\n    MtuUpdatedBuilder             => MtuUpdated;\n}\n\nmod rollback {\n    use super::*;\n    use crate::{build, legacy::quic as legacy};\n\n    impl From<ServerListening> for legacy::ConnectivityServerListening {\n        #[inline]\n        fn from(value: ServerListening) -> Self {\n            build!(legacy::ConnectivityServerListening {\n                ?ip_v4: value.ip_v4,\n                ?ip_v6: value.ip_v6,\n                ?port_v4: value.port_v4,\n                ?port_v6: value.port_v6,\n                ?retry_required: value.retry_required,\n            })\n        }\n    }\n\n    impl From<ConnectionStarted> for legacy::ConnectivityConnectionStarted {\n        #[inline]\n        fn from(value: ConnectionStarted) -> Self {\n            build!(legacy::ConnectivityConnectionStarted {\n                ip_version: value.ip_version,\n                src_ip: value.src_ip,\n                dst_ip: value.dst_ip,\n                protocol: value.protocol,\n                ?src_port: value.src_port,\n                ?dst_port: value.dst_port,\n                ?src_cid: value.src_cid,\n                ?dst_cid: value.dst_cid,\n            })\n        }\n    }\n\n    impl From<CryptoError> for legacy::CryptoError {\n        #[inline]\n        fn from(value: CryptoError) -> Self {\n            legacy::CryptoError::from(value.0)\n        }\n    }\n\n    impl From<ConnectionCode> for legacy::ConnectionCode {\n        #[inline]\n        fn from(value: ConnectionCode) -> Self {\n            match value {\n                ConnectionCode::TransportError(err) => 
legacy::TransportError::from(err).into(),\n                ConnectionCode::CryptoError(err) => legacy::CryptoError::from(err).into(),\n                ConnectionCode::Value(code) => code.into(),\n            }\n        }\n    }\n\n    // the two trigger types only partially overlap, so the conversion is fallible\n    impl TryFrom<ConnectionCloseTrigger> for legacy::ConnectivityConnectionClosedTrigger {\n        type Error = ();\n        #[inline]\n        fn try_from(value: ConnectionCloseTrigger) -> Result<Self, ()> {\n            match value {\n                ConnectionCloseTrigger::IdleTimeout => {\n                    Ok(legacy::ConnectivityConnectionClosedTrigger::IdleTimeout)\n                }\n                ConnectionCloseTrigger::Application => {\n                    Ok(legacy::ConnectivityConnectionClosedTrigger::Application)\n                }\n                ConnectionCloseTrigger::Error => {\n                    Ok(legacy::ConnectivityConnectionClosedTrigger::Error)\n                }\n                ConnectionCloseTrigger::VersionMismatch => {\n                    Ok(legacy::ConnectivityConnectionClosedTrigger::VersionMismatch)\n                }\n                ConnectionCloseTrigger::StatelessReset => {\n                    Ok(legacy::ConnectivityConnectionClosedTrigger::StatelessReset)\n                }\n                ConnectionCloseTrigger::Unspecified => Err(()),\n            }\n        }\n    }\n\n    impl From<ConnectionClosed> for legacy::ConnectivityConnectionClosed {\n        #[inline]\n        fn from(value: ConnectionClosed) -> Self {\n            build!(legacy::ConnectivityConnectionClosed {\n                ?owner: value.owner,\n                ?connection_code: value.connection_code,\n                ?application_code: value.application_code,\n                ?internal_code: value.internal_code,\n                ?reason: value.reason,\n                ?trigger: value.trigger.and_then(|v| legacy::ConnectivityConnectionClosedTrigger::try_from(v).ok()),\n            })\n        }\n    }\n\n    
impl From<ConnectionIdUpdated> for legacy::ConnectivityConnectionIdUpdated {\n        #[inline]\n        fn from(value: ConnectionIdUpdated) -> Self {\n            build!(legacy::ConnectivityConnectionIdUpdated {\n                owner: value.owner,\n                ?old: value.old,\n                ?new: value.new,\n            })\n        }\n    }\n\n    impl From<SpinBitUpdated> for legacy::ConnectivitySpinBitUpdated {\n        #[inline]\n        fn from(value: SpinBitUpdated) -> Self {\n            build!(legacy::ConnectivitySpinBitUpdated { state: value.state })\n        }\n    }\n\n    impl From<ConnectionState> for legacy::ConnectionState {\n        #[inline]\n        fn from(value: ConnectionState) -> Self {\n            match value {\n                ConnectionState::Base(BaseConnectionStates::Attempted) => {\n                    legacy::ConnectionState::Attempted\n                }\n                ConnectionState::Base(BaseConnectionStates::HandshakeStarted) => {\n                    legacy::ConnectionState::HandshakeStarted\n                }\n                ConnectionState::Base(BaseConnectionStates::HandshakeComplete) => {\n                    legacy::ConnectionState::HandshakeComplete\n                }\n                ConnectionState::Base(BaseConnectionStates::Closed) => {\n                    legacy::ConnectionState::Closed\n                }\n                ConnectionState::Granular(GranularConnectionStates::PeerValidated) => {\n                    legacy::ConnectionState::PeerValidated\n                }\n                ConnectionState::Granular(GranularConnectionStates::EarlyWrite) => {\n                    legacy::ConnectionState::EarlyWrite\n                }\n                ConnectionState::Granular(GranularConnectionStates::HandshakeConfirmed) => {\n                    legacy::ConnectionState::HandshakeConfirmed\n                }\n                ConnectionState::Granular(GranularConnectionStates::Closing) => {\n                    
legacy::ConnectionState::Closing\n                }\n                ConnectionState::Granular(GranularConnectionStates::Draining) => {\n                    legacy::ConnectionState::Draining\n                }\n                ConnectionState::Granular(GranularConnectionStates::Closed) => {\n                    legacy::ConnectionState::Closed\n                }\n            }\n        }\n    }\n\n    impl From<ConnectionStateUpdated> for legacy::ConnectivityConnectionStateUpdated {\n        #[inline]\n        fn from(value: ConnectionStateUpdated) -> Self {\n            build!(legacy::ConnectivityConnectionStateUpdated {\n                ?old: value.old,\n                new: value.new,\n            })\n        }\n    }\n\n    // event does not exist in the legacy version\n    // impl From<PathAssigned> for\n\n    // event does not exist in the legacy version\n    // impl From<MtuUpdated> for\n}\n"
  },
  {
    "path": "qevent/src/quic/recovery.rs",
"content": "use std::collections::HashMap;\n\nuse derive_builder::Builder;\nuse serde::{Deserialize, Serialize};\n\nuse super::{PacketHeader, PacketNumberSpace, QuicFrame};\n\n/// The recovery_parameters_set event groups initial parameters from both\n/// loss detection and congestion control into a single event.  It has\n/// Base importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// All these settings are typically set once and never change.\n/// Implementations that do, for some reason, change these parameters\n/// during execution, MAY emit the recovery_parameters_set event more\n/// than once.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct RecoveryParametersSet {\n    /// Loss detection, see RFC 9002 Appendix A.2\n    /// in amount of packets\n    #[builder(default)]\n    reordering_threshold: Option<u16>,\n\n    /// as RTT multiplier\n    #[builder(default)]\n    time_threshold: Option<f32>,\n\n    /// in ms\n    timer_granularity: u16,\n\n    /// in ms\n    #[builder(default)]\n    initial_rtt: Option<f32>,\n\n    /// congestion control, see RFC 9002 Appendix B.2\n
    /// in bytes. Note that this could be updated after pmtud\n    #[builder(default)]\n    max_datagram_size: Option<u32>,\n\n    /// in bytes\n    #[builder(default)]\n    initial_congestion_window: Option<u64>,\n\n    /// Note that this could change when max_datagram_size changes\n    /// in bytes\n    #[builder(default)]\n    minimum_congestion_window: Option<u64>,\n\n    #[builder(default)]\n    loss_reduction_factor: Option<f32>,\n\n    /// as PTO multiplier\n    #[builder(default)]\n    persistent_congestion_threshold: Option<u16>,\n\n    /// Additionally, this event can contain any number of unspecified fields\n    /// to support different recovery approaches.\n    #[builder(default)]\n    #[serde(flatten)]\n    #[serde(skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, serde_json::Value>,\n}\n\n/// The recovery_metrics_updated event is emitted when one or more of the\n/// observable recovery metrics changes value.  It has Core importance\n/// level; see Section 9.2 of [QLOG-MAIN].\n///\n/// This event SHOULD group all possible metric updates that happen at or\n/// around the same time in a single event (e.g., if min_rtt and\n/// smoothed_rtt change at the same time, they should be bundled in a\n/// single recovery_metrics_updated entry, rather than split out into\n/// two).  
Consequently, a recovery_metrics_updated event is only\n/// guaranteed to contain at least one of the listed metrics.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct RecoveryMetricsUpdated {\n    /// Loss detection, see RFC 9002 Appendix A.3\n    /// all following rtt fields are expressed in ms\n    smoothed_rtt: Option<f32>,\n    min_rtt: Option<f32>,\n    latest_rtt: Option<f32>,\n    rtt_variance: Option<f32>,\n    pto_count: Option<u16>,\n\n    /// Congestion control, see RFC 9002 Appendix B.2.\n    /// in bytes\n    congestion_window: Option<u64>,\n    bytes_in_flight: Option<u64>,\n\n    /// in bytes\n    ssthresh: Option<u64>,\n\n    /// qlog defined\n    /// sum of all packet number spaces\n    packets_in_flight: Option<u64>,\n    /// in bits per second\n    pacing_rate: Option<u64>,\n\n    /// Additionally, the recovery_metrics_updated event can contain any\n    /// number of unspecified fields to support different recovery\n    /// approaches.\n    #[serde(flatten)]\n    #[serde(skip_serializing_if = \"HashMap::is_empty\")]\n    custom_fields: HashMap<String, serde_json::Value>,\n}\n\n/// The congestion_state_updated event indicates when the congestion\n/// controller enters a significant new state and changes its behaviour.\n/// It has Base importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// The values of the event's fields are intentionally unspecified here\n/// in order to support different Congestion Control algorithms, as these\n/// typically have different states and even different implementations of\n/// these states across stacks.  
For example, for the algorithm defined\n/// in the QUIC Recovery RFC (\"enhanced\" New Reno), the following states\n/// are used: Slow Start, Congestion Avoidance, Application Limited and\n/// Recovery.  Similarly, states can be triggered by a variety of events,\n/// including detection of Persistent Congestion or receipt of ECN\n/// markings.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct CongestionStateUpdated {\n    #[builder(default)]\n    old: Option<String>,\n    new: String,\n    #[builder(default)]\n    trigger: Option<String>,\n}\n\n/// The loss_timer_updated event is emitted when a recovery loss timer\n/// changes state.  It has Extra importance level; see Section 9.2 of\n/// [QLOG-MAIN].\n///\n/// The three main event types are:\n///\n/// *  set: the timer is set with a delta timeout for when it will\n/// trigger next\n///\n/// *  expired: when the timer effectively expires after the delta\n/// timeout\n///\n/// *  cancelled: when a timer is cancelled (e.g., all outstanding\n/// packets are acknowledged, start idle period)\n///  \n/// In order to indicate an active timer's timeout update, a new set\n/// event is used.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct LossTimerUpdated {\n    /// called \"mode\" in RFC 9002 A.9.\n    #[builder(default)]\n    timer_type: Option<TimerType>,\n    #[builder(default)]\n    packet_number_space: Option<PacketNumberSpace>,\n    event_type: EventType,\n\n    /// if event_type === \"set\": delta time is in ms from\n    
/// this event's timestamp until when the timer will trigger\n    #[builder(default)]\n    delta: Option<f32>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TimerType {\n    Ack,\n    Pto,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum EventType {\n    Set,\n    Expired,\n    Cancelled,\n}\n\n/// The packet_lost event is emitted when a packet is deemed lost by loss\n/// detection.  It has Core importance level; see Section 9.2 of\n/// [QLOG-MAIN].\n///\n/// It is RECOMMENDED to populate the optional trigger field in order to\n/// help disambiguate among the various possible causes of a loss\n/// declaration.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct PacketLost {\n    /// should include at least the packet_type and packet_number\n    header: Option<PacketHeader>,\n\n    /// not all implementations will keep track of full\n    /// packets, so these are optional\n    frames: Option<Vec<QuicFrame>>,\n    is_mtu_probe_packet: bool,\n    trigger: Option<PacketLostTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketLostTrigger {\n    ReorderingThreshold,\n    TimeThreshold,\n    /// RFC 9002 Section 6.2.4 paragraph 6, MAY\n    PtoExpired,\n}\n\n/// The marked_for_retransmit event indicates which data was marked for\n/// retransmission upon detection of packet loss (see packet_lost).  
It\n/// has Extra importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// Similar to the reasoning for the frames_processed event, in order to\n/// keep the amount of different events low, this signal is grouped into\n/// a single event based on existing QUIC frame definitions for all\n/// types of retransmittable data.\n///\n/// Implementations retransmitting full packets or frames directly can\n/// just log the constituent frames of the lost packet here (or do away\n/// with this event and use the contents of the packet_lost event\n/// instead).  Conversely, implementations that have more complex logic\n/// (e.g., marking ranges in a stream's data buffer as in-flight), or\n/// that do not track sent frames in full (e.g., only stream offset +\n/// length), can translate their internal behaviour into the appropriate\n/// frame instance here even if that frame was never or will never be put\n/// on the wire.\n///\n/// Much of this data can be inferred if implementations log packet_sent\n/// events (e.g., looking at overlapping stream data offsets and length,\n/// one can determine when data was retransmitted).\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into), build_fn(private, name = \"fallible_build\"))]\npub struct MarkedForRetransmit {\n    frames: Vec<QuicFrame>,\n}\n\n/// The ecn_state_updated event indicates a progression in the ECN state\n/// machine as described in section A.4 of [QUIC-TRANSPORT].  
It has\n/// Extra importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct ECNStateUpdated {\n    #[builder(default)]\n    old: Option<ECNState>,\n    new: ECNState,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum ECNState {\n    /// ECN testing in progress\n    Testing,\n    /// ECN state unknown, waiting for acknowledgements\n    /// for testing packets\n    Unknown,\n    /// ECN testing failed\n    Failed,\n    /// testing was successful, the endpoint now\n    /// sends packets with ECT(0) marking\n    Capable,\n}\n\ncrate::gen_builder_method! {\n    RecoveryParametersSetBuilder  => RecoveryParametersSet;\n    RecoveryMetricsUpdatedBuilder => RecoveryMetricsUpdated;\n    CongestionStateUpdatedBuilder => CongestionStateUpdated;\n    LossTimerUpdatedBuilder       => LossTimerUpdated;\n    PacketLostBuilder             => PacketLost;\n    MarkedForRetransmitBuilder    => MarkedForRetransmit;\n    ECNStateUpdatedBuilder        => ECNStateUpdated;\n}\n\nmod rollback {\n\n    use super::*;\n    use crate::{build, legacy::quic as legacy};\n\n    impl From<RecoveryParametersSet> for legacy::RecoveryParametersSet {\n        fn from(value: RecoveryParametersSet) -> Self {\n            build!(legacy::RecoveryParametersSet {\n                ?reordering_threshold: value.reordering_threshold,\n                ?time_threshold: value.time_threshold,\n                timer_granularity: value.timer_granularity,\n                ?initial_rtt: value.initial_rtt,\n                ?max_datagram_size: value.max_datagram_size,\n                ?initial_congestion_window: value.initial_congestion_window,\n                
?minimum_congestion_window: value.minimum_congestion_window.map(|v| v as u32),\n                ?loss_reduction_factor: value.loss_reduction_factor,\n                ?persistent_congestion_threshold: value.persistent_congestion_threshold,\n                custom_fields: value.custom_fields,\n            })\n        }\n    }\n\n    impl From<RecoveryMetricsUpdated> for legacy::RecoveryMetricsUpdated {\n        fn from(value: RecoveryMetricsUpdated) -> Self {\n            build!(legacy::RecoveryMetricsUpdated {\n                ?smoothed_rtt: value.smoothed_rtt,\n                ?min_rtt: value.min_rtt,\n                ?latest_rtt: value.latest_rtt,\n                ?rtt_variance: value.rtt_variance,\n                ?pto_count: value.pto_count,\n                ?congestion_window: value.congestion_window,\n                ?bytes_in_flight: value.bytes_in_flight,\n                ?ssthresh: value.ssthresh,\n                ?packets_in_flight: value.packets_in_flight,\n                ?pacing_rate: value.pacing_rate,\n                custom_fields: value.custom_fields,\n            })\n        }\n    }\n\n    impl From<CongestionStateUpdated> for legacy::RecoveryCongestionStateUpdated {\n        fn from(value: CongestionStateUpdated) -> Self {\n            build!(legacy::RecoveryCongestionStateUpdated {\n                ?old: value.old,\n                new: value.new,\n                ?trigger: match value.trigger {\n                    Some(s) if s == \"persistent_congestion\" => Some(legacy::RecoveryCongestionStateUpdatedTrigger::PersistentCongestion),\n                    Some(s) if s == \"ecn\" => Some(legacy::RecoveryCongestionStateUpdatedTrigger::Ecn),\n                    _ => None,\n                },\n            })\n        }\n    }\n\n    impl From<TimerType> for legacy::LossTimerType {\n        #[inline]\n        fn from(value: TimerType) -> Self {\n            match value {\n                TimerType::Ack => legacy::LossTimerType::Ack,\n                
TimerType::Pto => legacy::LossTimerType::Pto,\n            }\n        }\n    }\n\n    impl From<EventType> for legacy::LossTimerEventType {\n        #[inline]\n        fn from(value: EventType) -> Self {\n            match value {\n                EventType::Set => legacy::LossTimerEventType::Set,\n                EventType::Expired => legacy::LossTimerEventType::Expired,\n                EventType::Cancelled => legacy::LossTimerEventType::Cancelled,\n            }\n        }\n    }\n\n    impl From<LossTimerUpdated> for legacy::RecoveryLossTimerUpdated {\n        fn from(value: LossTimerUpdated) -> Self {\n            build!(legacy::RecoveryLossTimerUpdated {\n                ?timer_type: value.timer_type,\n                ?packet_number_space: value.packet_number_space,\n                event_type: value.event_type,\n                ?delta: value.delta,\n            })\n        }\n    }\n\n    impl From<PacketLostTrigger> for legacy::RecoveryPacketLostTrigger {\n        #[inline]\n        fn from(value: PacketLostTrigger) -> Self {\n            match value {\n                PacketLostTrigger::ReorderingThreshold => {\n                    legacy::RecoveryPacketLostTrigger::ReorderingThreshold\n                }\n                PacketLostTrigger::TimeThreshold => {\n                    legacy::RecoveryPacketLostTrigger::TimeThreshold\n                }\n                PacketLostTrigger::PtoExpired => legacy::RecoveryPacketLostTrigger::PtoExpired,\n            }\n        }\n    }\n\n    impl From<PacketLost> for legacy::RecoveryPacketLost {\n        fn from(value: PacketLost) -> Self {\n            build!(legacy::RecoveryPacketLost {\n                ?header: value.header,\n                ?frames: value.frames.map(|v| v.into_iter().map(Into::into).collect::<Vec<_>>()),\n                ?trigger: value.trigger,\n            })\n        }\n    }\n\n    impl From<MarkedForRetransmit> for legacy::RecoveryMarkedForRetransmit {\n        fn from(value: 
MarkedForRetransmit) -> Self {\n            build!(legacy::RecoveryMarkedForRetransmit {\n                frames: value.frames.into_iter().map(Into::into).collect::<Vec<_>>(),\n            })\n        }\n    }\n}\n"
  },
  {
    "path": "qevent/src/quic/security.rs",
"content": "use derive_builder::Builder;\nuse serde::{Deserialize, Serialize};\n\nuse super::KeyType;\nuse crate::HexString;\n\n/// The key_updated event has Base importance level; see Section 9.2 of\n/// [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct KeyUpdated {\n    key_type: KeyType,\n    #[builder(default)]\n    old: Option<HexString>,\n    #[builder(default)]\n    new: Option<HexString>,\n\n    /// needed for 1RTT key updates\n    #[builder(default)]\n    key_phase: Option<u64>,\n    #[builder(default)]\n    trigger: Option<KeyUpdatedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum KeyUpdatedTrigger {\n    /// (e.g., initial, handshake and 0-RTT keys\n    /// are generated by TLS)\n    Tls,\n    RemoteUpdate,\n    LocalUpdate,\n}\n\n/// The key_discarded event has Base importance level; see Section 9.2 of\n/// [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct KeyDiscarded {\n    key_type: KeyType,\n    #[builder(default)]\n    key: Option<HexString>,\n\n    /// needed for 1RTT key updates\n    #[builder(default)]\n    key_phase: Option<u64>,\n    #[builder(default)]\n    trigger: Option<KeyDiscardedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum KeyDiscardedTrigger {\n    /// (e.g., initial, handshake and 0-RTT keys\n    /// are generated by TLS)\n    Tls,\n    RemoteUpdate,\n    
LocalUpdate,\n}\n\ncrate::gen_builder_method! {\n    KeyUpdatedBuilder   => KeyUpdated;\n    KeyDiscardedBuilder => KeyDiscarded;\n}\n\nmod rollback {\n    use super::*;\n    use crate::{build, legacy::quic as legacy};\n\n    impl From<KeyType> for legacy::KeyType {\n        fn from(value: KeyType) -> Self {\n            match value {\n                KeyType::ServerInitialSecret => legacy::KeyType::ServerInitialSecret,\n                KeyType::ClientInitialSecret => legacy::KeyType::ClientInitialSecret,\n                KeyType::ServerHandshakeSecret => legacy::KeyType::ServerHandshakeSecret,\n                KeyType::ClientHandshakeSecret => legacy::KeyType::ClientHandshakeSecret,\n                KeyType::Server0RttSecret => legacy::KeyType::Server0RTTSecret,\n                KeyType::Client0RttSecret => legacy::KeyType::Client0RTTSecret,\n                KeyType::Server1RttSecret => legacy::KeyType::Server1RTTSecret,\n                KeyType::Client1RttSecret => legacy::KeyType::Client1RTTSecret,\n            }\n        }\n    }\n\n    impl From<KeyUpdatedTrigger> for legacy::SecurityKeyUpdatedTrigger {\n        #[inline]\n        fn from(value: KeyUpdatedTrigger) -> Self {\n            match value {\n                KeyUpdatedTrigger::Tls => legacy::SecurityKeyUpdatedTrigger::Tls,\n                KeyUpdatedTrigger::RemoteUpdate => legacy::SecurityKeyUpdatedTrigger::RemoteUpdate,\n                KeyUpdatedTrigger::LocalUpdate => legacy::SecurityKeyUpdatedTrigger::LocalUpdate,\n            }\n        }\n    }\n\n    impl From<KeyUpdated> for legacy::SecurityKeyUpdated {\n        #[inline]\n        fn from(value: KeyUpdated) -> Self {\n            build!(legacy::SecurityKeyUpdated {\n                key_type: value.key_type,\n                ?old: value.old,\n                // for legacy new is not optional\n                ?new: value.new,\n                // is this key_phase?\n                ?generation: value.key_phase.map(|p| p as u32),\n                
?trigger: value.trigger,\n            })\n        }\n    }\n\n    impl From<KeyDiscardedTrigger> for legacy::SecurityKeyRetiredTrigger {\n        #[inline]\n        fn from(value: KeyDiscardedTrigger) -> Self {\n            match value {\n                KeyDiscardedTrigger::Tls => legacy::SecurityKeyRetiredTrigger::Tls,\n                KeyDiscardedTrigger::RemoteUpdate => {\n                    legacy::SecurityKeyRetiredTrigger::RemoteUpdate\n                }\n                KeyDiscardedTrigger::LocalUpdate => legacy::SecurityKeyRetiredTrigger::LocalUpdate,\n            }\n        }\n    }\n\n    impl From<KeyDiscarded> for legacy::SecurityKeyRetired {\n        #[inline]\n        fn from(value: KeyDiscarded) -> Self {\n            build!(legacy::SecurityKeyRetired {\n                key_type: value.key_type,\n                ?key: value.key,\n                // is this key_phase?\n                ?generation: value.key_phase .map(|p| p as u32),\n                ?trigger: value.trigger,\n            })\n        }\n    }\n}\n"
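The `key_phase` field logged in `KeyUpdated` is a monotonic update-generation counter, while QUIC short-header packets carry only a single Key Phase bit that flips on every update (RFC 9001, Section 6). A std-only sketch of how the two relate; `KeyPhase` and its methods are illustrative names, not this crate's API:

```rust
// Illustrative only: the qlog key_phase counter vs. the one-bit
// Key Phase value carried on the wire (RFC 9001, Section 6).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct KeyPhase(u64);

impl KeyPhase {
    // The wire-level Key Phase bit flips on every update,
    // so it is simply the generation counter modulo 2.
    fn wire_bit(self) -> bool {
        self.0 % 2 == 1
    }

    // A local or remote key update bumps the generation by one.
    fn updated(self) -> Self {
        KeyPhase(self.0 + 1)
    }
}
```

Starting from generation 0 (bit clear), each update toggles the wire bit while the logged counter keeps growing, which is what lets a qlog trace distinguish the first update from the third.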
  },
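The trigger enums in `security.rs` rely on `#[serde(rename_all = "snake_case")]` to produce the lowercase strings that appear in qlog JSON output. A dependency-free sketch of the same mapping, with the mirror enum written by hand since serde itself is not pulled in here:

```rust
// Hand-written mirror of what #[serde(rename_all = "snake_case")]
// derives for KeyUpdatedTrigger: CamelCase variants become
// snake_case strings in the serialized qlog event.
#[derive(Debug, Clone, Copy)]
enum KeyUpdatedTrigger {
    Tls,
    RemoteUpdate,
    LocalUpdate,
}

impl KeyUpdatedTrigger {
    // The string each variant serializes to in qlog JSON.
    fn as_qlog_str(self) -> &'static str {
        match self {
            Self::Tls => "tls",
            Self::RemoteUpdate => "remote_update",
            Self::LocalUpdate => "local_update",
        }
    }
}
```

In the crate itself this mapping is derived, so the wire strings stay in lockstep with the variant names without any manual table to maintain.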
  {
    "path": "qevent/src/quic/transport.rs",
    "content": "use std::{collections::HashMap, time::Duration};\n\nuse derive_builder::Builder;\nuse derive_more::From;\nuse qbase::param::{ClientParameters, ParameterId, ServerParameters};\nuse serde::{Deserialize, Serialize};\n\nuse super::{\n    ConnectionID, ECN, IPAddress, Owner, PacketHeader, PacketNumberSpace, PathEndpointInfo,\n    QuicFrame, QuicVersion, StatelessResetToken, StreamType,\n};\nuse crate::{HexString, PathID, RawInfo};\n\n/// The version_information event supports QUIC version negotiation; see\n/// Section 6 of [QUIC-TRANSPORT].  It has Core importance level; see\n/// Section 9.2 of [QLOG-MAIN].\n///\n/// QUIC endpoints each have their own list of QUIC versions they\n/// support.  The client uses the most likely version in their first\n/// initial.  If the server does not support that version, it replies\n/// with a Version Negotiation packet, which contains its supported\n/// versions.  From this, the client selects a version.  The\n/// version_information event aggregates all this information in a single\n/// event type.  It also allows logging of supported versions at an\n/// endpoint without actual version negotiation needing to happen.\n///\n/// [QUIC-TRANSPORT]: https://www.rfc-editor.org/rfc/rfc9000\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct VersionInformation {\n    // Vec for `? 
field: [+ ty]`, Option<Vec> for `* field: [* ty]`\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    server_versions: Vec<QuicVersion>,\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    client_versions: Vec<QuicVersion>,\n    chosen_version: Option<QuicVersion>,\n}\n\n/// The alpn_information event supports Application-Layer Protocol\n/// Negotiation (ALPN) over the QUIC transport; see [RFC7301] and\n/// Section 7.4 of [QUIC-TRANSPORT].  It has Core importance level; see\n/// Section 9.2 of [QLOG-MAIN].\n///\n/// QUIC endpoints are configured with a list of supported ALPN\n/// identifiers.  Clients send the list in a TLS ClientHello, and servers\n/// match against their list.  On success, a single ALPN identifier is\n/// chosen and sent back in a TLS ServerHello.  If no match is found, the\n/// connection is closed.\n///\n/// ALPN identifiers are byte sequences that may be representable as\n/// UTF-8.  The `ALPNIdentifier` type supports either format.\n/// Implementations SHOULD log at least one format, but MAY log both or\n/// none.\n///\n/// [RFC7301]: https://www.rfc-editor.org/rfc/rfc7301\n/// [QUIC-TRANSPORT]: https://www.rfc-editor.org/rfc/rfc9000\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct ALPNInformation {\n    server_alpns: Option<Vec<ALPNIdentifier>>,\n    client_alpns: Option<Vec<ALPNIdentifier>>,\n    chosen_alpn: Option<ALPNIdentifier>,\n}\n\n/// ALPN identifiers are byte sequences that may be representable as\n/// UTF-8.  
The `ALPNIdentifier` type supports either format.\n/// Implementations SHOULD log at least one format, but MAY log both or\n/// none.\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct ALPNIdentifier {\n    byte_value: Option<HexString>,\n    string_value: Option<String>,\n}\n\n/// The parameters_set event groups settings from several different\n/// sources (transport parameters, TLS ciphers, etc.) into a single\n/// event.  This is done to minimize the amount of events and to decouple\n/// conceptual setting impacts from their underlying mechanism for easier\n/// high-level reasoning.  The event has Core importance level; see\n/// Section 9.2 of [QLOG-MAIN].\n///\n/// Most of these settings are typically set once and never change.\n/// However, they are usually set at different times during the\n/// connection, so there will regularly be several instances of this\n/// event with different fields set.\n///\n/// Note that some settings have two variations (one set locally, one\n/// requested by the remote peer).  This is reflected in the owner field.\n/// As such, this field MUST be correct for all settings included in a\n/// single event instance.  If you need to log settings from two sides,\n/// you MUST emit two separate event instances.\n///\n/// Implementations are not required to recognize, process or support\n/// every setting/parameter received in all situations.  For example,\n/// QUIC implementations MUST discard transport parameters that they do\n/// not understand; see Section 7.4.2 of [QUIC-TRANSPORT].  
The\n/// unknown_parameters field can be used to log the raw values of any\n/// unknown parameters (e.g., GREASE, private extensions, peer-side\n/// experimentation).\n///\n/// In the case of connection resumption and 0-RTT, some of the server's\n/// parameters are stored up-front at the client and used for the initial\n/// connection startup.  They are later updated with the server's reply.\n/// In these cases, utilize the separate parameters_restored event to\n/// indicate the initial values, and this event to indicate the updated\n/// values, as normal.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n/// [QUIC-TRANSPORT]: https://www.rfc-editor.org/rfc/rfc9000\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct ParametersSet {\n    owner: Option<Owner>,\n\n    /// true if valid session ticket was received\n    resumption_allowed: Option<bool>,\n\n    /// true if early data extension was enabled on the TLS layer\n    early_data_enabled: Option<bool>,\n\n    /// e.g., \"AES_128_GCM_SHA256\"\n    tls_cipher: Option<String>,\n\n    // RFC9000\n    original_destination_connection_id: Option<ConnectionID>,\n    initial_source_connection_id: Option<ConnectionID>,\n    retry_source_connection_id: Option<ConnectionID>,\n    stateless_reset_token: Option<StatelessResetToken>,\n    disable_active_migration: Option<bool>,\n    max_idle_timeout: Option<u64>,\n    max_udp_payload_size: Option<u32>,\n    ack_delay_exponent: Option<u16>,\n    max_ack_delay: Option<u16>,\n    active_connection_id_limit: Option<u32>,\n    initial_max_data: Option<u64>,\n    initial_max_stream_data_bidi_local: Option<u64>,\n    initial_max_stream_data_bidi_remote: Option<u64>,\n    initial_max_stream_data_uni: Option<u64>,\n    initial_max_streams_bidi: 
Option<u64>,\n    initial_max_streams_uni: Option<u64>,\n    preferred_address: Option<PreferredAddress>,\n    unknown_parameters: Option<Vec<UnknownParameter>>,\n\n    // RFC9221\n    max_datagram_frame_size: Option<u64>,\n\n    // RFC9287\n    /// can only be restored at the client.\n    /// servers MUST NOT restore this parameter!\n    grease_quic_bit: Option<bool>,\n}\n\nmacro_rules! extract_parameter {\n    ( $(\n        $id:ident as $as:ident $(.map($($tt:tt)*))? from $set:ident to $this:ident.$field:ident\n    ),* $(,)? ) => {\n        $( extract_parameter!(@one $id as $as $(.map($($tt)*))? from $set to $this.$field); )*\n    };\n    (@one $id:ident as $as:ident .map($($tt:tt)*) from $set:ident to $this:ident.$field:ident) => {\n        $this.$field = $this.$field.take().or_else(|| {\n            Some($set.get::<$as>(ParameterId::$id).map($($tt)*))\n        });\n    };\n    (@one $id:ident as $as:ident from $set:ident to $this:ident.$field:ident) => {\n        $this.$field = $this.$field.take().or_else(|| {\n            Some($set.get::<$as>(ParameterId::$id).map(Into::into))\n        });\n    };\n}\n\nimpl ParametersSetBuilder {\n    /// helper method to set all client parameters at once\n    pub fn client_parameters(&mut self, params: &ClientParameters) -> &mut Self {\n        use qbase::cid::ConnectionId;\n        extract_parameter! 
{\n            InitialSourceConnectionId as ConnectionId from params to self.initial_source_connection_id,\n            DisableActiveMigration as bool from params to self.disable_active_migration,\n            MaxIdleTimeout as Duration.map(|d| d.as_millis() as _) from params to self.max_idle_timeout,\n            MaxUdpPayloadSize as u64.map(|u| u as u32) from params to self.max_udp_payload_size,\n            AckDelayExponent as u64.map(|u| u as u16) from params to self.ack_delay_exponent,\n            MaxAckDelay as Duration.map(|d| d.as_millis() as _) from params to self.max_ack_delay,\n            ActiveConnectionIdLimit as u64.map(|u| u as u32) from params to self.active_connection_id_limit,\n            InitialMaxData as u64 from params to self.initial_max_data,\n            InitialMaxStreamDataBidiLocal as u64 from params to self.initial_max_stream_data_bidi_local,\n            InitialMaxStreamDataBidiRemote as u64 from params to self.initial_max_stream_data_bidi_remote,\n            InitialMaxStreamDataUni as u64 from params to self.initial_max_stream_data_uni,\n            InitialMaxStreamsBidi as u64 from params to self.initial_max_streams_bidi,\n            InitialMaxStreamsUni as u64 from params to self.initial_max_streams_uni,\n            MaxDatagramFrameSize as u64 from params to self.max_datagram_frame_size,\n            GreaseQuicBit as bool from params to self.grease_quic_bit,\n        }\n        self\n    }\n\n    /// helper method to set all server parameters at once\n    pub fn server_parameters(&mut self, params: &ServerParameters) -> &mut Self {\n        use qbase::{\n            cid::ConnectionId, param::preferred_address::PreferredAddress, token::ResetToken,\n        };\n        extract_parameter! 
{\n            OriginalDestinationConnectionId as ConnectionId from params to self.original_destination_connection_id,\n            InitialSourceConnectionId as ConnectionId from params to self.initial_source_connection_id,\n            RetrySourceConnectionId as ConnectionId from params to self.retry_source_connection_id,\n            StatelessResetToken as ResetToken from params to self.stateless_reset_token,\n            DisableActiveMigration as bool from params to self.disable_active_migration,\n            MaxIdleTimeout as Duration.map(|d| d.as_millis() as _) from params to self.max_idle_timeout,\n            MaxUdpPayloadSize as u64.map(|u| u as u32) from params to self.max_udp_payload_size,\n            AckDelayExponent as u64.map(|u| u as u16) from params to self.ack_delay_exponent,\n            MaxAckDelay as Duration.map(|d| d.as_millis() as _) from params to self.max_ack_delay,\n            ActiveConnectionIdLimit as u64.map(|u| u as u32) from params to self.active_connection_id_limit,\n            InitialMaxData as u64 from params to self.initial_max_data,\n            InitialMaxStreamDataBidiLocal as u64 from params to self.initial_max_stream_data_bidi_local,\n            InitialMaxStreamDataBidiRemote as u64 from params to self.initial_max_stream_data_bidi_remote,\n            InitialMaxStreamDataUni as u64 from params to self.initial_max_stream_data_uni,\n            InitialMaxStreamsBidi as u64 from params to self.initial_max_streams_bidi,\n            InitialMaxStreamsUni as u64 from params to self.initial_max_streams_uni,\n            PreferredAddress as PreferredAddress from params to self.preferred_address,\n            MaxDatagramFrameSize as u64 from params to self.max_datagram_frame_size,\n            GreaseQuicBit as bool from params to self.grease_quic_bit,\n        }\n        self\n    }\n}\n\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into), build_fn(private, name = \"fallible_build\"))]\npub 
struct PreferredAddress {\n    ip_v4: IPAddress,\n    ip_v6: IPAddress,\n    port_v4: u16,\n    port_v6: u16,\n    connection_id: ConnectionID,\n    stateless_reset_token: StatelessResetToken,\n}\n\nimpl From<qbase::param::preferred_address::PreferredAddress> for PreferredAddress {\n    fn from(pa: qbase::param::preferred_address::PreferredAddress) -> Self {\n        crate::build!(Self {\n            ip_v4: pa.address_v4().ip().to_string(),\n            ip_v6: pa.address_v6().ip().to_string(),\n            port_v4: pa.address_v4().port(),\n            port_v6: pa.address_v6().port(),\n            connection_id: pa.connection_id(),\n            stateless_reset_token: pa.stateless_reset_token(),\n        })\n    }\n}\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct UnknownParameter {\n    id: u64,\n    #[builder(default)]\n    value: Option<HexString>,\n}\n\n/// When using QUIC 0-RTT, clients are expected to remember and restore\n/// the server's transport parameters from the previous connection.  The\n/// parameters_restored event is used to indicate which parameters were\n/// restored and to which values when utilizing 0-RTT.  It has Base\n/// importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// Note that not all transport parameters should be restored (many are\n/// even prohibited from being re-utilized).  
The ones listed here are\n/// the ones expected to be useful for correct 0-RTT usage.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct ParametersRestored {\n    // RFC 9000\n    disable_active_migration: Option<bool>,\n    max_idle_timeout: Option<u64>,\n    max_udp_payload_size: Option<u32>,\n    active_connection_id_limit: Option<u32>,\n    initial_max_data: Option<u64>,\n    initial_max_stream_data_bidi_local: Option<u64>,\n    initial_max_stream_data_bidi_remote: Option<u64>,\n    initial_max_stream_data_uni: Option<u64>,\n    initial_max_streams_bidi: Option<u64>,\n    initial_max_streams_uni: Option<u64>,\n\n    // RFC9221\n    max_datagram_frame_size: Option<u64>,\n\n    // RFC9287\n    /// can only be restored at the client.\n    /// servers MUST NOT restore this parameter!\n    grease_quic_bit: Option<bool>,\n}\n\nimpl ParametersRestoredBuilder {\n    /// helper method to restore all remembered server parameters at once\n    /// (used at the client)\n    pub fn client_parameters(&mut self, params: &ServerParameters) -> &mut Self {\n        extract_parameter! 
{\n            DisableActiveMigration as bool from params to self.disable_active_migration,\n            MaxIdleTimeout as Duration.map(|d| d.as_millis() as _) from params to self.max_idle_timeout,\n            MaxUdpPayloadSize as u64.map(|u| u as u32) from params to self.max_udp_payload_size,\n            ActiveConnectionIdLimit as u64.map(|u| u as u32) from params to self.active_connection_id_limit,\n            InitialMaxData as u64 from params to self.initial_max_data,\n            InitialMaxStreamDataBidiLocal as u64 from params to self.initial_max_stream_data_bidi_local,\n            InitialMaxStreamDataBidiRemote as u64 from params to self.initial_max_stream_data_bidi_remote,\n            InitialMaxStreamDataUni as u64 from params to self.initial_max_stream_data_uni,\n            InitialMaxStreamsBidi as u64 from params to self.initial_max_streams_bidi,\n            InitialMaxStreamsUni as u64 from params to self.initial_max_streams_uni,\n            MaxDatagramFrameSize as u64 from params to self.max_datagram_frame_size,\n            GreaseQuicBit as bool from params to self.grease_quic_bit,\n        }\n        self\n    }\n}\n\n/// The packet_sent event indicates a QUIC-level packet was sent.  
It has\n/// Core importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct PacketSent {\n    header: PacketHeader,\n    #[builder(default)]\n    frames: Option<Vec<QuicFrame>>,\n\n    /// only if header.packet_type === \"stateless_reset\"\n    /// is always 128 bits in length.\n    #[builder(default)]\n    stateless_reset_token: Option<StatelessResetToken>,\n\n    /// only if header.packet_type === \"version_negotiation\"\n    #[builder(default)]\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    supported_versions: Vec<QuicVersion>,\n    #[builder(default)]\n    raw: Option<RawInfo>,\n    #[builder(default)]\n    datagram_id: Option<u32>,\n    #[builder(default)]\n    #[serde(default)]\n    is_mtu_probe_packet: bool,\n\n    #[builder(default)]\n    trigger: Option<PacketSentTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketSentTrigger {\n    RetransmitReordered,\n    RetransmitTimeout,\n    PtoProbe,\n    RetransmitCrypto,\n    CcBandwidthProbe,\n}\n\n/// The packet_received event indicates a QUIC-level packet was received.\n/// It has Core importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct PacketReceived {\n    header: PacketHeader,\n    #[builder(default)]\n    frames: Option<Vec<QuicFrame>>,\n\n    /// only if header.packet_type === \"stateless_reset\"\n    /// is always 128 bits 
in length.\n    #[builder(default)]\n    stateless_reset_token: Option<StatelessResetToken>,\n\n    /// only if header.packet_type === \"version_negotiation\"\n    #[builder(default)]\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    supported_versions: Vec<QuicVersion>,\n    #[builder(default)]\n    raw: Option<RawInfo>,\n    #[builder(default)]\n    datagram_id: Option<u32>,\n\n    #[builder(default)]\n    trigger: Option<PacketReceivedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketReceivedTrigger {\n    /// if packet was buffered because it couldn't be\n    /// decrypted before\n    KeysAvailable,\n}\n\n/// The packet_dropped event indicates a QUIC-level packet was dropped.\n/// It has Base importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// The trigger field indicates a general reason category for dropping\n/// the packet, while the details field can contain additional\n/// implementation-specific information.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\n#[serde(default)]\npub struct PacketDropped {\n    /// Primarily packet_type should be filled here,\n    /// as other fields might not be decryptable or parseable\n    header: Option<PacketHeader>,\n    raw: Option<RawInfo>,\n    datagram_id: Option<u32>,\n    details: HashMap<String, serde_json::Value>,\n    trigger: Option<PacketDroppedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketDroppedTrigger {\n    /// not initialized, out of memory\n    InternalError,\n    /// limits reached, DDoS protection, unwilling to track more paths, 
duplicate packet\n    Rejected,\n    /// unknown or unsupported version.\n    Unsupported,\n    /// packet parsing or validation error\n    Invalid,\n    /// duplicate packet\n    Duplicate,\n    /// packet does not relate to a known connection or Connection ID\n    ConnectionUnknown,\n    /// decryption failed\n    DecryptionFailure,\n    /// decryption key was unavailable\n    KeyUnavailable,\n    /// situations not clearly covered in the other categories\n    General,\n}\n\nimpl From<qbase::packet::InvalidPacketNumber> for PacketDroppedTrigger {\n    fn from(value: qbase::packet::InvalidPacketNumber) -> Self {\n        match value {\n            qbase::packet::InvalidPacketNumber::TooOld\n            | qbase::packet::InvalidPacketNumber::TooLarge => PacketDroppedTrigger::General,\n            qbase::packet::InvalidPacketNumber::Duplicate => PacketDroppedTrigger::Duplicate,\n        }\n    }\n}\n\nimpl From<qbase::packet::error::Error> for PacketDroppedTrigger {\n    fn from(error: qbase::packet::error::Error) -> Self {\n        match error {\n            qbase::packet::error::Error::UnsupportedVersion(_) => Self::Unsupported,\n            qbase::packet::error::Error::InvalidFixedBit\n            | qbase::packet::error::Error::InvalidReservedBits(_, _)\n            | qbase::packet::error::Error::IncompleteType(_)\n            | qbase::packet::error::Error::IncompleteHeader(_, _)\n            | qbase::packet::error::Error::IncompletePacket(_, _)\n            | qbase::packet::error::Error::UnderSampling(..) => Self::Invalid,\n            qbase::packet::error::Error::RemoveProtectionFailure\n            | qbase::packet::error::Error::DecryptPacketFailure => Self::DecryptionFailure,\n        }\n    }\n}\n\n/// The packet_buffered event is emitted when a packet is buffered\n/// because it cannot be processed yet.  
Typically, this is because the\n/// packet cannot be parsed yet, and thus only the full packet contents\n/// can be logged when it was parsed in a packet_received event.  The\n/// event has Base importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct PacketBuffered {\n    /// Primarily packet_type should be filled here,\n    /// as other fields might not be decryptable or parseable\n    header: Option<PacketHeader>,\n    raw: Option<RawInfo>,\n    datagram_id: Option<u32>,\n    trigger: Option<PacketBufferedTrigger>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketBufferedTrigger {\n    /// indicates the parser cannot keep up, temporarily buffers\n    /// packet for later processing\n    Backpressure,\n    /// if packet cannot be decrypted because the proper keys were\n    /// not yet available\n    KeysUnavailable,\n}\n\n/// The packets_acked event is emitted when a (group of) sent packet(s)\n/// is acknowledged by the remote peer _for the first time_. It has Extra\n/// importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// This information could also be deduced from the contents of received\n/// ACK frames.  
However, ACK frames require additional processing logic\n/// to determine when a given packet is acknowledged for the first time,\n/// as QUIC uses ACK ranges which can include repeated ACKs.\n/// Additionally, this event can be used by implementations that do not\n/// log frame contents.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct PacketsAcked {\n    packet_number_space: Option<PacketNumberSpace>,\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    packet_numbers: Vec<u64>,\n}\n\n/// The datagrams_sent event indicates when one or more UDP-level\n/// datagrams are passed to the underlying network socket.  This is\n/// useful for determining how QUIC packet buffers are drained to the OS.\n/// The event has Extra importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct UdpDatagramsSent {\n    /// to support passing multiple at once\n    count: Option<u16>,\n\n    /// The RawInfo fields do not include the UDP headers,\n    /// only the UDP payload\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    raw: Vec<RawInfo>,\n\n    /// ECN bits in the IP header\n    /// if not set, defaults to the value used on the last\n    /// QUICDatagramsSent event\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    ecn: Vec<ECN>,\n\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    datagram_ids: Vec<u32>,\n}\n\n/// When one or more UDP-level 
datagrams are received from the socket.\n/// This is useful for determining how datagrams are passed to the user\n/// space stack from the OS.  The event has Extra importance level; see\n/// Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct UdpDatagramsReceived {\n    /// to support passing multiple at once\n    count: Option<u16>,\n\n    /// The RawInfo fields do not include the UDP headers,\n    /// only the UDP payload\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    raw: Vec<RawInfo>,\n\n    /// ECN bits in the IP header\n    /// if not set, defaults to the value used on the last\n    /// UDPDatagramsReceived event\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    ecn: Vec<ECN>,\n\n    #[serde(skip_serializing_if = \"Vec::is_empty\")]\n    datagram_ids: Vec<u32>,\n}\n\n/// When a UDP-level datagram is dropped.  This is typically done if it\n/// does not contain a valid QUIC packet.  If it does, but the QUIC\n/// packet is dropped for other reasons, the packet_dropped event\n/// (Section 5.7) should be used instead.  
The event has Extra importance\n/// level; see Section 9.2 of [QLOG-MAIN].\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct UdpDatagramDropped {\n    /// The RawInfo fields do not include the UDP headers,\n    /// only the UDP payload\n    raw: Option<RawInfo>,\n}\n\n/// The stream_state_updated event is emitted whenever the internal state\n/// of a QUIC stream is updated; see Section 3 of [QUIC-TRANSPORT].  Most\n/// of this can be inferred from several types of frames going over the\n/// wire, but it's much easier to have explicit signals for these state\n/// changes.  The event has Base importance level; see Section 9.2 of\n/// [QLOG-MAIN].\n///\n/// [QUIC-TRANSPORT]: https://www.rfc-editor.org/rfc/rfc9000\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct StreamStateUpdated {\n    stream_id: u64,\n\n    /// mainly useful when opening the stream\n    #[builder(default)]\n    stream_type: Option<StreamType>,\n    #[builder(default)]\n    old: Option<StreamState>,\n    new: StreamState,\n\n    #[builder(default)]\n    stream_side: Option<StreamSide>,\n}\n\n#[derive(Debug, Clone, Copy, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum StreamState {\n    Base(BaseStreamStates),\n    Granular(GranularStreamStates),\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum BaseStreamStates {\n    Idle,\n    Open,\n    Closed,\n}\n\n#[derive(Debug, Clone, Copy, 
Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum GranularStreamStates {\n    // bidirectional stream states, RFC 9000 Section 3.4.\n    HalfClosedLocal,\n    HalfClosedRemote,\n    // sending-side stream states, RFC 9000 Section 3.1.\n    Ready,\n    Send,\n    DataSent,\n    ResetSent,\n    ResetReceived,\n    // receive-side stream states, RFC 9000 Section 3.2.\n    Receive,\n    SizeKnown,\n    DataRead,\n    ResetRead,\n    // both-side states\n    DataReceived,\n    // qlog-defined: memory actually freed\n    Destroyed,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum StreamSide {\n    Sending,\n    Receiving,\n}\n\n/// The frames_processed event is intended to prevent a large\n/// proliferation of specific purpose events (e.g., packets_acknowledged,\n/// flow_control_updated, stream_data_received).  It has Extra importance\n/// level; see Section 9.2 of [QLOG-MAIN].\n///\n/// Implementations have the opportunity to (selectively) log this type\n/// of signal without having to log packet-level details (e.g., in\n/// packet_received).  Since for almost all cases, the effects of\n/// applying a frame to the internal state of an implementation can be\n/// inferred from that frame's contents, these events are aggregated into\n/// this single frames_processed event.\n///\n/// The frames_processed event can be used to signal internal state change\n/// not resulting directly from the actual \"parsing\" of a frame (e.g.,\n/// the frame could have been parsed, data put into a buffer, then later\n/// processed, then logged with this event).\n///\n/// The packet_received event can convey all constituent frames.  It is\n/// not expected that the frames_processed event will also be used for a\n/// redundant purpose.  
Rather, implementations can use this event to\n/// avoid having to log full packets or to convey extra information about\n/// when frames are processed (for example, if frame processing is\n/// deferred for any reason).\n///\n/// Note that for some events, this approach will lose some information\n/// (e.g., for which encryption level are packets being acknowledged?).\n/// If this information is important, the packet_received event can be\n/// used instead.\n///\n/// In some implementations, it can be difficult to log frames directly,\n/// even when using packet_sent and packet_received events.  For these\n/// cases, the frames_processed event also contains the packet_numbers\n/// field, which can be used to more explicitly link this event to the\n/// packet_sent/received events.  The field is an array, which supports\n/// using a single frames_processed event for multiple frames received\n/// over multiple packets.  To map between frames and packets, the\n/// position and order of entries in the frames and packet_numbers is\n/// used.  If the optional packet_numbers field is used, each frame MUST\n/// have a corresponding packet number at the same index.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct FramesProcessed {\n    frames: Vec<QuicFrame>,\n    #[builder(default)]\n    packet_numbers: Option<Vec<u64>>,\n}\n\n/// The stream_data_moved event is used to indicate when QUIC stream data\n/// moves between the different layers.  This helps make clear the flow\n/// of data, how long data remains in various buffers, and the overheads\n/// introduced by individual layers.  The event has Base importance\n/// level; see Section 9.2 of [QLOG-MAIN].\n///\n/// This event relates to stream data only.  
There are no packet or frame\n/// headers and length values in the length or raw fields MUST reflect\n/// that.\n///\n/// For example, it can be useful to understand when data moves from an\n/// application protocol (e.g., HTTP) to QUIC stream buffers and vice\n/// versa.\n///\n/// The stream_data_moved event can provide insight into whether received\n/// data on a QUIC stream is moved to the application protocol\n/// immediately (for example per received packet) or in larger batches\n/// (for example, all QUIC packets are processed first and afterwards the\n/// application layer reads from the streams with newly available data).\n/// This can help identify bottlenecks, flow control issues, or\n/// scheduling problems.\n///\n/// The additional_info field supports optional logging of information\n/// related to the stream state.  For example, an application layer that\n/// moves data into transport and simultaneously ends the stream, can log\n/// fin_set.  As another example, a transport layer that has received an\n/// instruction to reset a stream can indicate this to the application\n/// layer using reset_stream.  In both cases, the length-carrying fields\n/// (length or raw) can be omitted or contain zero values.\n///\n/// This event is only for data in QUIC streams.  
For data in QUIC\n/// Datagram Frames, see the datagram_data_moved event defined in\n/// Section 5.16.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct StreamDataMoved {\n    stream_id: Option<u64>,\n    offset: Option<u64>,\n\n    /// byte length of the moved data\n    length: Option<u64>,\n\n    from: Option<StreamDataLocation>,\n    to: Option<StreamDataLocation>,\n\n    additional_info: Option<DataMovedAdditionalInfo>,\n\n    raw: Option<RawInfo>,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum StreamDataLocation {\n    Application,\n    Transport,\n    Network,\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum DataMovedAdditionalInfo {\n    FinSet,\n    StreamReset,\n}\n\n/// The datagram_data_moved event is used to indicate when QUIC Datagram\n/// Frame data (see [RFC9221]) moves between the different layers.  This\n/// helps make clear the flow of data, how long data remains in various\n/// buffers, and the overheads introduced by individual layers.  The\n/// event has Base importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// This event relates to datagram data only.  
There are no packet or\n/// frame headers and length values in the length or raw fields MUST\n/// reflect that.\n///\n/// For example, passing from the application protocol (e.g.,\n/// WebTransport) to QUIC Datagram Frame buffers and vice versa.\n///\n/// The datagram_data_moved event can provide insight into whether\n/// received data in a QUIC Datagram Frame is moved to the application\n/// protocol immediately (for example per received packet) or in larger\n/// batches (for example, all QUIC packets are processed first and\n/// afterwards the application layer reads all Datagrams at once).  This\n/// can help identify bottlenecks, flow control issues, or scheduling\n/// problems.\n///\n/// This event is only for data in QUIC Datagram Frames.  For data in\n/// QUIC streams, see the stream_data_moved event defined in\n/// Section 5.15.\n///\n/// [RFC9221]: https://www.rfc-editor.org/rfc/rfc9221.html\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct DatagramDataMoved {\n    /// byte length of the moved data\n    length: Option<u64>,\n    from: Option<StreamDataLocation>,\n    to: Option<StreamDataLocation>,\n    raw: Option<RawInfo>,\n}\n\n/// Use to provide additional information when attempting (client-side)\n/// connection migration.  While most details of the QUIC connection\n/// migration process can be inferred by observing the PATH_CHALLENGE and\n/// PATH_RESPONSE frames, in combination with the QUICPathAssigned event,\n/// it can be useful to explicitly log the progression of the migration\n/// and potentially made decisions in a single location/event.  
The event\n/// has Extra importance level; see Section 9.2 of [QLOG-MAIN].\n///\n/// Generally speaking, connection migration goes through two phases: a\n/// probing phase (which is not always needed/present), and a migration\n/// phase (which can be abandoned upon error).\n///\n/// Implementations that log per-path information in a\n/// QUICMigrationStateUpdated, SHOULD also emit QUICPathAssigned events,\n/// to serve as a ground-truth source of information.\n///\n/// [QLOG-MAIN]: https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-09\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct MigrationStateUpdated {\n    #[builder(default)]\n    old: Option<MigrationState>,\n    new: MigrationState,\n\n    #[builder(default)]\n    path_id: Option<PathID>,\n\n    /// the information for traffic going towards the remote receiver\n    #[builder(default)]\n    path_remote: Option<PathEndpointInfo>,\n\n    /// the information for traffic coming in at the local endpoint\n    #[builder(default)]\n    path_local: Option<PathEndpointInfo>,\n}\n\n/// Note that MigrationState does not describe a full state machine\n/// These entries are not necessarily chronological,\n/// nor will they always all appear during\n/// a connection migration attempt.\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum MigrationState {\n    /// probing packets are sent, migration not initiated yet\n    ProbingStarted,\n    /// did not get reply to probing packets,\n    /// discarding path as an option\n    ProbingAbandoned,\n    /// received reply to probing packets, path is migration candidate\n    ProbingSuccessful,\n    /// non-probing packets are sent, attempting migration\n    MigrationStarted,\n    /// something went wrong during the migration, abandoning 
attempt\n    MigrationAbandoned,\n    /// new path is now fully used, old path is discarded\n    MigrationComplete,\n}\n\ncrate::gen_builder_method! {\n    VersionInformationBuilder    => VersionInformation;\n    ALPNInformationBuilder       => ALPNInformation;\n    ALPNIdentifierBuilder        => ALPNIdentifier;\n    ParametersSetBuilder         => ParametersSet;\n    PreferredAddressBuilder      => PreferredAddress;\n    UnknownParameterBuilder      => UnknownParameter;\n    ParametersRestoredBuilder    => ParametersRestored;\n    PacketSentBuilder            => PacketSent;\n    PacketReceivedBuilder        => PacketReceived;\n    PacketDroppedBuilder         => PacketDropped;\n    PacketBufferedBuilder        => PacketBuffered;\n    PacketsAckedBuilder          => PacketsAcked;\n    UdpDatagramsSentBuilder      => UdpDatagramsSent;\n    UdpDatagramsReceivedBuilder  => UdpDatagramsReceived;\n    UdpDatagramDroppedBuilder    => UdpDatagramDropped;\n    StreamStateUpdatedBuilder    => StreamStateUpdated;\n    FramesProcessedBuilder       => FramesProcessed;\n    StreamDataMovedBuilder       => StreamDataMoved;\n    DatagramDataMovedBuilder     => DatagramDataMoved;\n    MigrationStateUpdatedBuilder => MigrationStateUpdated;\n}\n\nmod rollback {\n    use bytes::Bytes;\n\n    use super::*;\n    use crate::{build, legacy::quic as legacy};\n\n    impl From<QuicVersion> for legacy::QuicVersion {\n        #[inline]\n        fn from(value: QuicVersion) -> Self {\n            HexString::from(Bytes::from(value.0.to_be_bytes().to_vec())).into()\n        }\n    }\n\n    impl From<VersionInformation> for legacy::TransportVersionInformation {\n        fn from(vi: VersionInformation) -> Self {\n            build!(legacy::TransportVersionInformation {\n                server_versions: vi\n                    .server_versions\n                    .into_iter()\n                    .map(Into::into)\n                    .collect::<Vec<_>>(),\n                client_versions: vi\n     
               .client_versions\n                    .into_iter()\n                    .map(Into::into)\n                    .collect::<Vec<_>>(),\n                ?chosen_version: vi.chosen_version,\n            })\n        }\n    }\n\n    impl From<ALPNIdentifier> for String {\n        fn from(value: ALPNIdentifier) -> Self {\n            value.string_value.as_ref().map_or(\n                value\n                    .byte_value\n                    .as_ref()\n                    .map(|b| b.to_string())\n                    .unwrap_or_default(),\n                |s| s.to_string(),\n            )\n        }\n    }\n\n    impl From<ALPNInformation> for legacy::TransportALPNInformation {\n        fn from(ai: ALPNInformation) -> Self {\n            build!(legacy::TransportALPNInformation {\n                ?client_alpns: ai.client_alpns.map( |v| {\n                    v.into_iter()\n                        .map(Into::into)\n                        .collect::<Vec<_>>()\n                }),\n                ?server_alpns: ai.server_alpns.map( |v| {\n                    v.into_iter()\n                        .map(Into::into)\n                        .collect::<Vec<_>>()\n                }),\n                //\n                ?chosen_alpn: ai.chosen_alpn.map(String::from),\n            })\n        }\n    }\n\n    impl From<PreferredAddress> for legacy::PreferredAddress {\n        fn from(pa: PreferredAddress) -> Self {\n            build!(legacy::PreferredAddress {\n                ip_v4: pa.ip_v4,\n                ip_v6: pa.ip_v6,\n                port_v4: pa.port_v4,\n                port_v6: pa.port_v6,\n                connection_id: pa.connection_id,\n                stateless_reset_token: pa.stateless_reset_token,\n            })\n        }\n    }\n\n    impl From<ParametersSet> for legacy::TransportParametersSet {\n        fn from(ps: ParametersSet) -> Self {\n            build!(legacy::TransportParametersSet {\n                ?owner: ps.owner,\n                
?resumption_allowed: ps.resumption_allowed,\n                ?early_data_enabled: ps.early_data_enabled,\n                ?tls_cipher: ps.tls_cipher,\n                ?original_destination_connection_id: ps.original_destination_connection_id,\n                ?initial_source_connection_id: ps.initial_source_connection_id,\n                ?retry_source_connection_id: ps.retry_source_connection_id,\n                ?stateless_reset_token: ps.stateless_reset_token,\n                ?disable_active_migration: ps.disable_active_migration,\n                ?max_idle_timeout: ps.max_idle_timeout,\n                ?max_udp_payload_size: ps.max_udp_payload_size,\n                ?ack_delay_exponent: ps.ack_delay_exponent,\n                ?max_ack_delay: ps.max_ack_delay,\n                ?active_connection_id_limit: ps.active_connection_id_limit,\n                ?initial_max_data: ps.initial_max_data,\n                ?initial_max_stream_data_bidi_local: ps.initial_max_stream_data_bidi_local,\n                ?initial_max_stream_data_bidi_remote: ps.initial_max_stream_data_bidi_remote,\n                ?initial_max_stream_data_uni: ps.initial_max_stream_data_uni,\n                ?initial_max_streams_bidi: ps.initial_max_streams_bidi,\n                ?initial_max_streams_uni: ps.initial_max_streams_uni,\n                ?preferred_address: ps.preferred_address,\n                // legacy doesn't support these\n                // ?unknown_parameters: ,\n                // ?max_datagram_frame_size: ps.max_datagram_frame_size,\n                // ?grease_quic_bit: ps.grease_quic_bit,\n            })\n        }\n    }\n\n    impl From<ParametersRestored> for legacy::TransportParametersRestored {\n        fn from(value: ParametersRestored) -> Self {\n            build!(legacy::TransportParametersRestored {\n                ?disable_active_migration: value.disable_active_migration,\n                ?max_idle_timeout: value.max_idle_timeout,\n                
?max_udp_payload_size: value.max_udp_payload_size,\n                ?active_connection_id_limit: value.active_connection_id_limit,\n                ?initial_max_data: value.initial_max_data,\n                ?initial_max_stream_data_bidi_local: value.initial_max_stream_data_bidi_local,\n                ?initial_max_stream_data_bidi_remote: value.initial_max_stream_data_bidi_remote,\n                ?initial_max_stream_data_uni: value.initial_max_stream_data_uni,\n                ?initial_max_streams_bidi: value.initial_max_streams_bidi,\n                ?initial_max_streams_uni: value.initial_max_streams_uni,\n                // legacy doesn't support these\n                // ?max_datagram_frame_size: value.max_datagram_frame_size,\n                // ?grease_quic_bit: value.grease_quic_bit,\n            })\n        }\n    }\n\n    impl From<PacketSentTrigger> for legacy::TransportPacketSentTrigger {\n        fn from(value: PacketSentTrigger) -> Self {\n            match value {\n                PacketSentTrigger::RetransmitReordered => Self::RetransmitReordered,\n                PacketSentTrigger::RetransmitTimeout => Self::RetransmitTimeout,\n                PacketSentTrigger::PtoProbe => Self::PtoProbe,\n                PacketSentTrigger::RetransmitCrypto => Self::RetransmitCrypto,\n                PacketSentTrigger::CcBandwidthProbe => Self::CcBandwidthProbe,\n            }\n        }\n    }\n\n    impl From<PacketSent> for legacy::TransportPacketSent {\n        fn from(value: PacketSent) -> Self {\n            build!(legacy::TransportPacketSent {\n                header: value.header,\n                ?frames: value.frames.map(|v| {\n                    v.into_iter()\n                        .map(Into::into)\n                        .collect::<Vec<_>>()\n                }),\n                ?stateless_reset_token: value.stateless_reset_token.map(|tk| Bytes::from(tk.0.to_vec())),\n                supported_versions: value.supported_versions.into_iter()\n        
                .map(Into::into)\n                        .collect::<Vec<_>>(),\n                ?raw: value.raw,\n                ?datagram_id: value.datagram_id,\n                ?trigger: value.trigger,\n            })\n        }\n    }\n\n    impl From<PacketReceivedTrigger> for legacy::TransportPacketReceivedTrigger {\n        #[inline]\n        fn from(value: PacketReceivedTrigger) -> Self {\n            match value {\n                PacketReceivedTrigger::KeysAvailable => Self::KeysAvailable,\n            }\n        }\n    }\n\n    impl From<PacketReceived> for legacy::TransportPacketReceived {\n        fn from(value: PacketReceived) -> Self {\n            build!(legacy::TransportPacketReceived {\n                header: value.header,\n                ?frames: value.frames.map(|v| {\n                    v.into_iter()\n                        .map(Into::into)\n                        .collect::<Vec<_>>()\n                }),\n                ?stateless_reset_token: value.stateless_reset_token.map(|tk| Bytes::from(tk.0.to_vec())),\n                supported_versions: value.supported_versions.into_iter()\n                        .map(Into::into)\n                        .collect::<Vec<_>>(),\n                ?raw: value.raw,\n                ?datagram_id: value.datagram_id,\n                ?trigger: value.trigger,\n            })\n        }\n    }\n\n    impl TryFrom<PacketDroppedTrigger> for legacy::TransportpacketDroppedTrigger {\n        type Error = ();\n        #[inline]\n        fn try_from(value: PacketDroppedTrigger) -> Result<Self, ()> {\n            match value {\n                // the new design is less expressive than the old one\n                PacketDroppedTrigger::InternalError\n                | PacketDroppedTrigger::Invalid\n                | PacketDroppedTrigger::Genera\n                // not an exact mapping: failing to remove header protection also raises this trigger\n                // PacketDroppedTrigger::DecryptionFailure => Ok(Self::PayloadDecryptError),\n                | PacketDroppedTrigger::DecryptionFailure\n                
| PacketDroppedTrigger::Rejected => Err(()),\n                PacketDroppedTrigger::Unsupported => Ok(Self::UnsupportedVersion),\n                PacketDroppedTrigger::Duplicate => Ok(Self::Duplicate),\n                PacketDroppedTrigger::ConnectionUnknown => Ok(Self::UnknownConnectionId),\n                PacketDroppedTrigger::KeyUnavailable => Ok(Self::KeyUnavailable),\n            }\n        }\n    }\n\n    impl From<PacketDropped> for legacy::TransportPacketDropped {\n        fn from(value: PacketDropped) -> Self {\n            build!(legacy::TransportPacketDropped {\n                ?header: value.header,\n                ?raw: value.raw,\n                ?datagram_id: value.datagram_id,\n                ?trigger: value.trigger.and_then(|trigger| legacy::TransportpacketDroppedTrigger::try_from(trigger).ok()),\n            })\n        }\n    }\n\n    impl From<PacketBufferedTrigger> for legacy::TransportPacketBufferedTrigger {\n        #[inline]\n        fn from(value: PacketBufferedTrigger) -> Self {\n            match value {\n                PacketBufferedTrigger::Backpressure => Self::Backpressure,\n                PacketBufferedTrigger::KeysUnavailable => Self::KeysUnavailable,\n            }\n        }\n    }\n\n    impl From<PacketBuffered> for legacy::TransportPacketBuffered {\n        fn from(value: PacketBuffered) -> Self {\n            build!(legacy::TransportPacketBuffered {\n                ?header: value.header,\n                ?raw: value.raw,\n                ?datagram_id: value.datagram_id,\n                ?trigger: value.trigger,\n            })\n        }\n    }\n\n    impl From<PacketsAcked> for legacy::TransportPacketsAcked {\n        fn from(value: PacketsAcked) -> Self {\n            build!(legacy::TransportPacketsAcked {\n                ?packet_number_space: value.packet_number_space,\n                packet_numbers: value.packet_numbers,\n            })\n        }\n    }\n\n    impl From<UdpDatagramsSent> for 
legacy::TransportDatagramsSent {\n        fn from(value: UdpDatagramsSent) -> Self {\n            build!(legacy::TransportDatagramsSent {\n                ?count: value.count,\n                raw: value.raw.into_iter().collect::<Vec<_>>(),\n                datagram_ids: value.datagram_ids,\n            })\n        }\n    }\n\n    impl From<UdpDatagramsReceived> for legacy::TransportDatagramsReceived {\n        fn from(value: UdpDatagramsReceived) -> Self {\n            build!(legacy::TransportDatagramsReceived {\n                ?count: value.count,\n                raw: value.raw.into_iter().collect::<Vec<_>>(),\n                datagram_ids: value.datagram_ids,\n            })\n        }\n    }\n\n    impl From<UdpDatagramDropped> for legacy::TransportDatagramDropped {\n        fn from(value: UdpDatagramDropped) -> Self {\n            build!(legacy::TransportDatagramDropped {\n                ?raw: value.raw,\n            })\n        }\n    }\n\n    impl From<StreamState> for legacy::StreamState {\n        #[inline]\n        fn from(value: StreamState) -> Self {\n            match value {\n                StreamState::Base(BaseStreamStates::Idle) => Self::Idle,\n                StreamState::Base(BaseStreamStates::Open) => Self::Open,\n                StreamState::Base(BaseStreamStates::Closed) => Self::Closed,\n                StreamState::Granular(GranularStreamStates::HalfClosedLocal) => {\n                    Self::HalfClosedLocal\n                }\n                StreamState::Granular(GranularStreamStates::HalfClosedRemote) => {\n                    Self::HalfClosedRemote\n                }\n                StreamState::Granular(GranularStreamStates::Ready) => Self::Ready,\n                StreamState::Granular(GranularStreamStates::Send) => Self::Send,\n                StreamState::Granular(GranularStreamStates::DataSent) => Self::DataSent,\n                StreamState::Granular(GranularStreamStates::ResetSent) => Self::ResetSent,\n                
StreamState::Granular(GranularStreamStates::ResetReceived) => Self::ResetReceived,\n                StreamState::Granular(GranularStreamStates::Receive) => Self::Receive,\n                StreamState::Granular(GranularStreamStates::SizeKnown) => Self::SizeKnown,\n                StreamState::Granular(GranularStreamStates::DataRead) => Self::DataRead,\n                StreamState::Granular(GranularStreamStates::ResetRead) => Self::ResetRead,\n                StreamState::Granular(GranularStreamStates::DataReceived) => Self::DataReceived,\n                StreamState::Granular(GranularStreamStates::Destroyed) => Self::Destroyed,\n            }\n        }\n    }\n\n    impl From<StreamSide> for legacy::StreamSide {\n        #[inline]\n        fn from(value: StreamSide) -> Self {\n            match value {\n                StreamSide::Sending => Self::Sending,\n                StreamSide::Receiving => Self::Receiving,\n            }\n        }\n    }\n\n    impl From<StreamStateUpdated> for legacy::TransportStreamStateUpdated {\n        fn from(value: StreamStateUpdated) -> Self {\n            build!(legacy::TransportStreamStateUpdated {\n                stream_id: value.stream_id,\n                ?stream_type: value.stream_type,\n                ?old: value.old,\n                new: value.new,\n                ?stream_side: value.stream_side,\n            })\n        }\n    }\n\n    impl From<FramesProcessed> for legacy::TransportFramesProcessed {\n        fn from(value: FramesProcessed) -> Self {\n            // legacy carries a single packet_number, so the conversion is only\n            // possible when packet_numbers is absent or has exactly one entry\n            assert!(\n                value.packet_numbers.as_ref().is_none()\n                    || value.packet_numbers.as_ref().is_some_and(|v| v.len() == 1),\n                \"cannot convert multiple packet_numbers into the single legacy packet_number\"\n            );\n            build!(legacy::TransportFramesProcessed {\n                frames: value.frames.into_iter().map(Into::into).collect::<Vec<_>>(),\n                ?packet_number: value.packet_numbers.map(|v| v[0]),\n            })\n        }\n   
 }\n\n    impl From<StreamDataLocation> for legacy::StreamDataLocation {\n        #[inline]\n        fn from(value: StreamDataLocation) -> Self {\n            match value {\n                StreamDataLocation::Application => Self::Application,\n                StreamDataLocation::Transport => Self::Transport,\n                StreamDataLocation::Network => Self::Network,\n            }\n        }\n    }\n\n    impl From<StreamDataMoved> for legacy::TransportDataMoved {\n        fn from(value: StreamDataMoved) -> Self {\n            build!(legacy::TransportDataMoved {\n                ?stream_id: value.stream_id,\n                ?offset: value.offset,\n                ?length: value.length,\n                ?from: value.from,\n                ?to: value.to,\n                ?data: value.raw.and_then(|raw| raw.data),\n            })\n        }\n    }\n}\n"
  },
  {
    "path": "qevent/src/quic.rs",
    "content": "use std::{\n    collections::HashMap, fmt::Display, marker::PhantomData, net::SocketAddr, time::Duration,\n};\n\nuse bytes::Bytes;\nuse derive_builder::Builder;\nuse derive_more::{From, Into, LowerHex};\nuse qbase::{\n    frame::{\n        AckFrame, ConnectionCloseFrame, CryptoFrame, DatagramFrame, EncodeSize, Frame,\n        GetFrameType, MaxStreamsFrame, NewTokenFrame, PathChallengeFrame, PathResponseFrame,\n        PingFrame, ReliableFrame, StreamCtlFrame, StreamFrame, StreamsBlockedFrame,\n    },\n    packet::header::{\n        GetDcid, GetScid,\n        long::{HandshakeHeader, InitialHeader, ZeroRttHeader},\n        short::OneRttHeader,\n    },\n    util::ContinuousData,\n    varint::VarInt,\n};\nuse serde::{Deserialize, Serialize};\n\npub mod connectivity;\npub mod recovery;\npub mod security;\npub mod transport;\n\nuse crate::{BeSpecificEventData, HexString, RawInfo};\n\n// 8.1\n#[derive(Debug, Clone, From, Into, PartialEq, Eq)]\npub struct QuicVersion(u32);\n\nimpl Serialize for QuicVersion {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::Serializer,\n    {\n        #[serde_with::serde_as]\n        #[derive(Serialize)]\n        struct Helper(#[serde_as(as = \"serde_with::hex::Hex\")] [u8; 4]);\n        Helper(self.0.to_be_bytes()).serialize(serializer)\n    }\n}\n\nimpl<'de> Deserialize<'de> for QuicVersion {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: serde::Deserializer<'de>,\n    {\n        #[serde_with::serde_as]\n        #[derive(Deserialize)]\n        struct Helper(#[serde_as(as = \"serde_with::hex::Hex\")] [u8; 4]);\n        Helper::deserialize(deserializer).map(|b| Self(u32::from_be_bytes(b.0)))\n    }\n}\n\n// 8.2\n// TODO: the serialization/deserialization of these structures could later move\n// into qbase, so the definitions are not written twice\n#[derive(Default, Debug, LowerHex, From, Into, Clone, Copy, PartialEq, Eq)]\npub struct ConnectionID(qbase::cid::ConnectionId);\n\nimpl Serialize for ConnectionID {\n    fn 
serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::Serializer,\n    {\n        #[serde_with::serde_as]\n        #[derive(Serialize)]\n        struct Helper<'b>(#[serde_as(as = \"serde_with::hex::Hex\")] &'b [u8]);\n\n        Helper(self.0.as_ref()).serialize(serializer)\n    }\n}\n\nimpl<'de> Deserialize<'de> for ConnectionID {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: serde::Deserializer<'de>,\n    {\n        #[serde_with::serde_as]\n        #[derive(Deserialize)]\n        struct Helper(#[serde_as(as = \"serde_with::hex::Hex\")] Vec<u8>);\n\n        let bytes = Helper::deserialize(deserializer)?.0;\n        if bytes.len() > qbase::cid::MAX_CID_SIZE {\n            return Err(serde::de::Error::custom(\"ConnectionID too long\"));\n        }\n        Ok(Self(qbase::cid::ConnectionId::from_slice(&bytes)))\n    }\n}\n\n// 8.3\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum Owner {\n    Local,\n    Remote,\n}\n\n// 8.4\n/// an IPAddress can either be a \"human readable\" form\n/// (e.g., \"127.0.0.1\" for v4 or\n/// \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" for v6) or\n/// use a raw byte-form (as the string forms can be ambiguous).\n/// Additionally, a hash-based or redacted representation\n/// can be used if needed for privacy or security reasons.\n#[derive(Debug, Clone, From, Into, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(transparent)]\npub struct IPAddress(String);\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde_with::skip_serializing_none]\n#[serde(rename_all = \"snake_case\")]\npub enum IpVersion {\n    V4,\n    V6,\n}\n\n// 8.5\n/// PathEndpointInfo indicates a single half/direction of a path.  A full\n/// path is comprised of two halves.  
Firstly: the server sends to the\n/// remote client IP + port using a specific destination Connection ID.\n/// Secondly: the client sends to the remote server IP + port using a\n/// different destination Connection ID.\n///\n/// As such, structures logging path information SHOULD include two\n/// different PathEndpointInfo instances, one for each half of the path.\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\npub struct PathEndpointInfo {\n    ip_v4: Option<IPAddress>,\n    ip_v6: Option<IPAddress>,\n    port_v4: Option<u16>,\n    port_v6: Option<u16>,\n\n    /// Even though usually only a single ConnectionID\n    /// is associated with a given path at a time,\n    /// there are situations where there can be an overlap\n    /// or a need to keep track of previous ConnectionIDs\n    connection_ids: Vec<ConnectionID>,\n}\n\nimpl From<SocketAddr> for PathEndpointInfo {\n    fn from(value: SocketAddr) -> Self {\n        match value {\n            SocketAddr::V4(addr) => crate::build!(PathEndpointInfo {\n                ip_v4: addr.ip().to_string(),\n                port_v4: addr.port(),\n            }),\n            SocketAddr::V6(addr) => crate::build!(PathEndpointInfo {\n                ip_v6: addr.ip().to_string(),\n                port_v6: addr.port(),\n            }),\n        }\n    }\n}\n\n// 8.6\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketType {\n    Initial,\n    Handshake,\n    #[serde(rename = \"0RTT\")]\n    ZeroRTT,\n    #[serde(rename = \"1RTT\")]\n    OneRTT,\n    Retry,\n    VersionNegotiation,\n    StatelessReset,\n    Unknown,\n}\n\nimpl From<qbase::packet::Type> for PacketType {\n    fn from(r#type: qbase::packet::Type) -> Self {\n        match r#type {\n            
qbase::packet::r#type::Type::Long(long) => match long {\n                qbase::packet::r#type::long::Type::VersionNegotiation => {\n                    PacketType::VersionNegotiation\n                }\n                qbase::packet::r#type::long::Type::V1(\n                    qbase::packet::r#type::long::Version::INITIAL,\n                ) => PacketType::Initial,\n                qbase::packet::r#type::long::Type::V1(\n                    qbase::packet::r#type::long::Version::HANDSHAKE,\n                ) => PacketType::Handshake,\n                qbase::packet::r#type::long::Type::V1(\n                    qbase::packet::r#type::long::Version::ZERO_RTT,\n                ) => PacketType::ZeroRTT,\n                qbase::packet::r#type::long::Type::V1(\n                    qbase::packet::r#type::long::Version::RETRY,\n                ) => PacketType::Retry,\n            },\n            qbase::packet::r#type::Type::Short(_one_rtt) => PacketType::OneRTT,\n        }\n    }\n}\n\n// 8.7\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum PacketNumberSpace {\n    Initial,\n    Handshake,\n    ApplicationData,\n}\n\nimpl From<qbase::Epoch> for PacketNumberSpace {\n    fn from(value: qbase::Epoch) -> Self {\n        match value {\n            qbase::Epoch::Initial => Self::Initial,\n            qbase::Epoch::Handshake => Self::Handshake,\n            qbase::Epoch::Data => Self::ApplicationData,\n        }\n    }\n}\n\n// 8.8\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(setter(into, strip_option), build_fn(private, name = \"fallible_build\"))]\npub struct PacketHeader {\n    #[builder(default)]\n    #[serde(default)]\n    quic_bit: bool,\n    packet_type: PacketType,\n\n    /// only if packet_type === \"initial\" || \"handshake\" || \"0RTT\" || \"1RTT\"\n    #[builder(default)]\n    packet_number: Option<u64>,\n\n    ///  the bit 
flags of the packet headers (spin bit, key update bit,\n    /// etc.) up to and including the packet number length bits,\n    /// if present\n    #[builder(default)]\n    flags: Option<u8>,\n\n    /// only if packet_type === \"initial\" || \"retry\"\n    #[builder(default)]\n    token: Option<Token>,\n\n    /// only if packet_type === \"initial\" || \"handshake\" || \"0RTT\"\n    /// Signifies length of the packet_number plus the payload\n    #[builder(default)]\n    length: Option<u16>,\n\n    /// only if present in the header\n    /// if correctly using transport:connection_id_updated events,\n    /// dcid can be skipped for 1RTT packets\n    #[builder(default)]\n    version: Option<QuicVersion>,\n    #[builder(default)]\n    scil: Option<u8>,\n    #[builder(default)]\n    dcil: Option<u8>,\n    #[builder(default)]\n    scid: Option<ConnectionID>,\n    #[builder(default)]\n    dcid: Option<ConnectionID>,\n}\n\nimpl PacketHeaderBuilder {\n    /// Helper method used to set the fields of the initial header.\n    ///\n    /// Since the header defined by qbase is not complete enough, there are still many fields that need to be set manually.\n    pub fn initial(&mut self, header: &InitialHeader) -> &mut Self {\n        crate::build!(@field self,\n            packet_type: PacketType::Initial,\n            ?token: Token::try_from(header).ok(),\n            scil: header.scid().len() as u8,\n            scid: { *header.scid() },\n            dcil: header.dcid().len() as u8,\n            dcid: { *header.dcid() }\n        );\n        self\n    }\n\n    /// Helper method used to set the fields of the handshake header.\n    ///\n    /// Since the header defined by qbase is not complete enough, there are still many fields that need to be set manually.\n    pub fn handshake(&mut self, header: &HandshakeHeader) -> &mut Self {\n        self.packet_type(PacketType::Handshake)\n            .scil(header.scid().len() as u8)\n            .scid(*header.scid())\n            
.dcil(header.dcid().len() as u8)\n            .dcid(*header.dcid())\n    }\n\n    /// Helper method used to set the fields of the 0rtt header.\n    ///\n    /// Since the header defined by qbase is not complete enough, there are still many fields that need to be set manually.\n    pub fn zero_rtt(&mut self, header: &ZeroRttHeader) -> &mut Self {\n        self.packet_type(PacketType::ZeroRTT)\n            .scil(header.scid().len() as u8)\n            .scid(*header.scid())\n            .dcil(header.dcid().len() as u8)\n            .dcid(*header.dcid())\n    }\n\n    /// Helper method used to set the fields of the 1rtt header.\n    ///\n    /// Since the header defined by qbase is not complete enough, there are still many fields that need to be set manually.\n    pub fn one_rtt(&mut self, header: &OneRttHeader) -> &mut Self {\n        self.packet_type(PacketType::OneRTT)\n            .dcil(header.dcid().len() as u8)\n            .dcid(*header.dcid())\n    }\n}\n\nimpl From<&InitialHeader> for PacketHeaderBuilder {\n    fn from(header: &InitialHeader) -> Self {\n        let mut builder = PacketHeader::builder();\n        builder.initial(header);\n        builder\n    }\n}\n\nimpl From<&HandshakeHeader> for PacketHeaderBuilder {\n    fn from(header: &HandshakeHeader) -> Self {\n        let mut builder = PacketHeader::builder();\n        builder.handshake(header);\n        builder\n    }\n}\n\nimpl From<&ZeroRttHeader> for PacketHeaderBuilder {\n    fn from(header: &ZeroRttHeader) -> Self {\n        let mut builder = PacketHeader::builder();\n        builder.zero_rtt(header);\n        builder\n    }\n}\n\nimpl From<&OneRttHeader> for PacketHeaderBuilder {\n    fn from(header: &OneRttHeader) -> Self {\n        let mut builder = PacketHeader::builder();\n        builder.one_rtt(header);\n        builder\n    }\n}\n\n// 8.9\n#[serde_with::skip_serializing_none]\n#[derive(Builder, Default, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[builder(\n    default,\n    
setter(into, strip_option),\n    build_fn(private, name = \"fallible_build\")\n)]\n#[serde(default)]\npub struct Token {\n    pub r#type: Option<TokenType>,\n\n    /// decoded fields included in the token\n    /// (typically: peer's IP address, creation time)\n    #[serde(skip_serializing_if = \"HashMap::is_empty\")]\n    details: HashMap<String, serde_json::Value>,\n\n    raw: Option<RawInfo>,\n}\n\nimpl<H: 'static> TryFrom<&qbase::packet::header::LongHeader<H>> for Token {\n    type Error = ();\n    fn try_from(header: &qbase::packet::header::LongHeader<H>) -> Result<Self, Self::Error> {\n        use qbase::packet::header::RetryHeader;\n        let header: &dyn core::any::Any = header;\n        if let Some(initial) = header.downcast_ref::<InitialHeader>() {\n            if initial.token().is_empty() {\n                return Err(());\n            }\n            return Ok(crate::build!(Token {\n                // r#type: TokenType::?\n                raw: initial.token(),\n            }));\n        }\n        if let Some(retry) = header.downcast_ref::<RetryHeader>() {\n            return Ok(crate::build!(Token {\n                r#type: TokenType::Retry,\n                raw: retry.token(),\n            }));\n        }\n        Err(())\n    }\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TokenType {\n    Retry,\n    Resumption,\n}\n\n// 8.10\n#[serde_with::serde_as]\n#[derive(Debug, Clone, Copy, From, Into, Serialize, Deserialize, PartialEq, Eq)]\npub struct StatelessResetToken(#[serde_as(as = \"serde_with::hex::Hex\")] [u8; 16]);\n\nimpl From<qbase::token::ResetToken> for StatelessResetToken {\n    fn from(value: qbase::token::ResetToken) -> Self {\n        Self(*value)\n    }\n}\n\n// 8.11\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum KeyType {\n    ServerInitialSecret,\n    ClientInitialSecret,\n    
ServerHandshakeSecret,\n    ClientHandshakeSecret,\n    #[serde(rename = \"server_0rtt_secret\")]\n    Server0RttSecret,\n    #[serde(rename = \"client_0rtt_secret\")]\n    Client0RttSecret,\n    #[serde(rename = \"server_1rtt_secret\")]\n    Server1RttSecret,\n    #[serde(rename = \"client_1rtt_secret\")]\n    Client1RttSecret,\n}\n\n// 8.12\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\npub enum ECN {\n    #[serde(rename = \"Not-ECT\")]\n    NotEct,\n    #[serde(rename = \"ECT(1)\")]\n    Ect1,\n    #[serde(rename = \"ECT(0)\")]\n    Ect0,\n    CE,\n}\n\n// 8.13\n#[serde_with::skip_serializing_none]\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]\n#[serde(tag = \"frame_type\")]\n#[serde(rename_all = \"snake_case\")]\npub enum QuicFrame {\n    Padding {\n        /// total frame length, including frame header\n        length: Option<u32>,\n        payload_length: u32,\n    },\n    Ping {\n        /// total frame length, including frame header\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n    Ack {\n        /// in ms\n        ack_delay: Option<f32>,\n\n        /// e.g., looks like \\[\\[1,2],\\[4,5], \\[7], \\[10,22]] serialized\n        ///\n        /// ### AckRange:\n        /// either a single number (e.g., \\[1]) or two numbers (e.g., \\[1,2]).\n        ///\n        /// For two numbers:\n        ///\n        /// the first number is \"from\": lowest packet number in interval\n        ///\n        /// the second number is \"to\": up to and including the highest\n        /// packet number in the interval\n        acked_ranges: Vec<[u64; 2]>,\n\n        /// ECN (explicit congestion notification) related fields\n        /// (not always present)\n        ect1: Option<u64>,\n        ect0: Option<u64>,\n        ce: Option<u64>,\n\n        /// total frame length, including frame header\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n    ResetStream {\n        stream_id: u64,\n 
       error_code: ApplicationCode,\n\n        /// in bytes\n        final_size: u64,\n\n        /// total frame length, including frame header\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n    StopSending {\n        stream_id: u64,\n        error_code: ApplicationCode,\n\n        /// total frame length, including frame header\n        length: Option<u32>,\n        payload_length: Option<u32>,\n    },\n    Crypto {\n        offset: u64,\n        length: u64,\n        payload_length: Option<u32>,\n        raw: Option<RawInfo>,\n    },\n    NewToken {\n        token: Token,\n    },\n    Stream {\n        stream_id: u64,\n\n        /// These two MUST always be set\n        /// If not present in the Frame type, log their default values\n        offset: u64,\n        length: u64,\n\n        /// this MAY be set any time,\n        /// but MUST only be set if the value is true\n        /// if absent, the value MUST be assumed to be false\n        #[serde(default)]\n        fin: bool,\n        raw: Option<RawInfo>,\n    },\n    MaxData {\n        maximum: u64,\n    },\n    MaxStreamData {\n        stream_id: u64,\n        maximum: u64,\n    },\n    MaxStreams {\n        stream_type: StreamType,\n        maximum: u64,\n    },\n    DataBlocked {\n        limit: u64,\n    },\n    StreamDataBlocked {\n        stream_id: u64,\n        limit: u64,\n    },\n    StreamsBlocked {\n        stream_type: StreamType,\n        limit: u64,\n    },\n    NewConnectionId {\n        sequence_number: u32,\n        retire_prior_to: u32,\n\n        /// mainly used if e.g., for privacy reasons the full\n        /// connection_id cannot be logged\n        connection_id_length: Option<u8>,\n        connection_id: ConnectionID,\n        stateless_reset_token: Option<StatelessResetToken>,\n    },\n    RetireConnectionId {\n        sequence_number: u32,\n    },\n    PathChallenge {\n        /// always 64-bit\n        data: Option<HexString>,\n    },\n    PathResponse {\n 
       /// always 64-bit\n        data: Option<HexString>,\n    },\n    /// An endpoint that receives unknown error codes can record them in the\n    /// error_code field using the numerical value without variable-length\n    /// integer encoding.\n    ///\n    /// When the connection is closed due to a connection-level error, the\n    /// trigger_frame_type field can be used to log the frame that triggered\n    /// the error.  For known frame types, the appropriate string value is\n    /// used.  For unknown frame types, the numerical value without variable-\n    /// length integer encoding is used.\n    ///\n    /// The CONNECTION_CLOSE reason phrase is a byte sequence.  It is likely\n    /// that this sequence is presentable as UTF-8, in which case it can be\n    /// logged in the reason field.  The reason_bytes field supports logging\n    /// the raw bytes, which can be useful when the value is not UTF-8 or\n    /// when an endpoint does not want to decode it.  Implementations SHOULD\n    /// log at least one format, but MAY log both or none.\n    ConnectionClose {\n        error_space: Option<ConnectionCloseErrorSpace>,\n        error_code: Option<ConnectionCloseErrorCode>,\n\n        reason: Option<String>,\n        reason_bytes: Option<HexString>,\n\n        /// when error_space === \"transport\"\n        trigger_frame_type: Option<ConnectionCloseTriggerFrameType>,\n    },\n    HandshakeDone {},\n    /// The frame_type_bytes field is the numerical value without variable-\n    /// length integer encoding.\n    #[serde(rename = \"unknown\")]\n    Unknow {\n        frame_type_bytes: u64,\n        raw: Option<RawInfo>,\n    },\n    Datagram {\n        length: Option<u64>,\n        raw: Option<RawInfo>,\n    },\n}\n\nimpl From<&PingFrame> for QuicFrame {\n    fn from(frame: &PingFrame) -> Self {\n        QuicFrame::Ping {\n            length: Some(frame.encoding_size() as u32),\n            payload_length: Some(0),\n        }\n    }\n}\n\nimpl<D: ContinuousData + ?Sized> From<(&CryptoFrame, &D)> for\n
QuicFrame {\n    fn from((frame, data): (&CryptoFrame, &D)) -> Self {\n        let payload_length = frame.len();\n        let length = frame.encoding_size() as u64 + payload_length;\n        QuicFrame::Crypto {\n            offset: frame.offset(),\n            length,\n            payload_length: Some(payload_length as _),\n            raw: Some(crate::build!(RawInfo {\n                length,\n                payload_length,\n                data,\n            })),\n        }\n    }\n}\n\nimpl From<&CryptoFrame> for QuicFrame {\n    fn from(frame: &CryptoFrame) -> Self {\n        let payload_length = frame.len();\n        let length = frame.encoding_size() as u64 + payload_length;\n        QuicFrame::Crypto {\n            offset: frame.offset(),\n            length,\n            payload_length: Some(payload_length as _),\n            raw: Some(crate::build!(RawInfo {\n                length,\n                payload_length,\n            })),\n        }\n    }\n}\n\nimpl<D: ContinuousData + ?Sized> From<(&StreamFrame, &D)> for QuicFrame {\n    fn from((frame, data): (&StreamFrame, &D)) -> Self {\n        let payload_length = frame.len();\n        let length = frame.encoding_size() + payload_length;\n        QuicFrame::Stream {\n            stream_id: frame.stream_id().into(),\n            offset: frame.offset(),\n            length: payload_length as u64,\n            fin: frame.is_fin(),\n            raw: Some(crate::build!(RawInfo {\n                length: length as u64,\n                payload_length: payload_length as u64,\n                data: data,\n            })),\n        }\n    }\n}\n\nimpl From<&StreamFrame> for QuicFrame {\n    fn from(frame: &StreamFrame) -> Self {\n        let payload_length = frame.len();\n        let length = frame.encoding_size() + payload_length;\n        QuicFrame::Stream {\n            stream_id: frame.stream_id().into(),\n            offset: frame.offset(),\n            length: payload_length as u64,\n            fin: 
frame.is_fin(),\n            raw: Some(crate::build!(RawInfo {\n                length: length as u64,\n                payload_length: payload_length as u64,\n            })),\n        }\n    }\n}\n\nimpl<D: ContinuousData + ?Sized> From<(&DatagramFrame, &D)> for QuicFrame {\n    fn from((frame, data): (&DatagramFrame, &D)) -> Self {\n        let payload_length = frame.len().into_u64();\n        let length = frame.encoding_size() as u64 + payload_length;\n        QuicFrame::Datagram {\n            length: Some(payload_length as _),\n            raw: Some(crate::build!(RawInfo {\n                length,\n                payload_length,\n                data: data,\n            })),\n        }\n    }\n}\n\nimpl From<&DatagramFrame> for QuicFrame {\n    fn from(frame: &DatagramFrame) -> Self {\n        let payload_length = frame.len().into_u64();\n        let length = frame.encoding_size() as u64 + payload_length;\n        QuicFrame::Datagram {\n            length: Some(payload_length as _),\n            raw: Some(crate::build!(RawInfo {\n                length,\n                payload_length,\n            })),\n        }\n    }\n}\n\nimpl From<&PathChallengeFrame> for QuicFrame {\n    fn from(frame: &PathChallengeFrame) -> Self {\n        QuicFrame::PathChallenge {\n            data: Some(Bytes::from_owner(frame.to_vec()).into()),\n        }\n    }\n}\n\nimpl From<&PathResponseFrame> for QuicFrame {\n    fn from(frame: &PathResponseFrame) -> Self {\n        QuicFrame::PathResponse {\n            data: Some(Bytes::from_owner(frame.to_vec()).into()),\n        }\n    }\n}\n\nimpl From<&AckFrame> for QuicFrame {\n    fn from(frame: &AckFrame) -> Self {\n        Self::Ack {\n            ack_delay: Some(Duration::from_micros(frame.delay()).as_secs_f32() * 1000.0),\n            acked_ranges: frame\n                .ranges()\n                .iter()\n                .fold(\n                    (\n                        frame.largest() - frame.first_range(),\n              
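          // Worked example (illustrative, per RFC 9000 Section 19.3.1): with\n                        // largest = 22 and first_range = 12, the seed range below is [10, 22];\n                        // a following (gap = 1, ack = 2) entry yields\n                        // largest = 10 - 1 - 2 = 7 and smallest = 7 - 2 = 5, i.e. [5, 7].\n             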
          vec![[frame.largest() - frame.first_range(), frame.largest()]],\n                    ),\n                    |(previous_smallest, mut acked_ranges), (gap, ack)| {\n                        // see https://www.rfc-editor.org/rfc/rfc9000.html#name-ack-ranges\n                        let largest = previous_smallest - gap.into_u64() - 2;\n                        let smallest = largest - ack.into_u64();\n                        acked_ranges.push([smallest, largest]);\n                        (smallest, acked_ranges)\n                    },\n                )\n                .1,\n            ect1: frame.ecn().map(|ecn| ecn.ect1()),\n            ect0: frame.ecn().map(|ecn| ecn.ect0()),\n            ce: frame.ecn().map(|ecn| ecn.ce()),\n            length: Some(frame.encoding_size() as u32),\n            payload_length: None,\n        }\n    }\n}\n\nimpl From<&ReliableFrame> for QuicFrame {\n    fn from(frame: &ReliableFrame) -> Self {\n        match frame {\n            ReliableFrame::NewToken(new_token_frame) => new_token_frame.into(),\n            ReliableFrame::MaxData(max_data_frame) => QuicFrame::MaxData {\n                maximum: max_data_frame.max_data(),\n            },\n            ReliableFrame::DataBlocked(data_blocked_frame) => QuicFrame::DataBlocked {\n                limit: data_blocked_frame.limit(),\n            },\n            ReliableFrame::NewConnectionId(new_connection_id_frame) => QuicFrame::NewConnectionId {\n                sequence_number: new_connection_id_frame.sequence() as u32,\n                retire_prior_to: new_connection_id_frame.retire_prior_to() as u32,\n                connection_id_length: Some(new_connection_id_frame.connection_id().len() as u8),\n                connection_id: (*new_connection_id_frame.connection_id()).into(),\n                stateless_reset_token: Some((**new_connection_id_frame.reset_token()).into()),\n            },\n            ReliableFrame::RetireConnectionId(retire_connection_id_frame) => {\n        
        QuicFrame::RetireConnectionId {\n                    sequence_number: retire_connection_id_frame.sequence() as u32,\n                }\n            }\n            ReliableFrame::HandshakeDone(_handshake_done_frame) => QuicFrame::HandshakeDone {},\n            ReliableFrame::AddAddress(frame) => QuicFrame::Unknow {\n                frame_type_bytes: VarInt::from(frame.frame_type()).into_u64() as _,\n                raw: None,\n            },\n            ReliableFrame::RemoveAddress(frame) => QuicFrame::Unknow {\n                frame_type_bytes: VarInt::from(frame.frame_type()).into_u64() as _,\n                raw: None,\n            },\n            ReliableFrame::PunchMeNow(frame) => QuicFrame::Unknow {\n                frame_type_bytes: VarInt::from(frame.frame_type()).into_u64() as _,\n                raw: None,\n            },\n            ReliableFrame::PunchDone(frame) => QuicFrame::Unknow {\n                frame_type_bytes: VarInt::from(frame.frame_type()).into_u64() as _,\n                raw: None,\n            },\n            ReliableFrame::StreamCtl(stream_ctl_frame) => QuicFrame::from(stream_ctl_frame),\n        }\n    }\n}\n\nimpl From<&NewTokenFrame> for QuicFrame {\n    fn from(value: &NewTokenFrame) -> Self {\n        QuicFrame::NewToken {\n            token: crate::build!(Token {\n                // a token carried in NEW_TOKEN is intended for use in future\n                // connections, i.e. a resumption token, not a Retry token\n                r#type: TokenType::Resumption,\n                raw: RawInfo {\n                    length: value.encoding_size() as u64,\n                    payload_length: value.token().len() as u64,\n                    data: value.token(),\n                },\n            }),\n        }\n    }\n}\n\nimpl From<&StreamCtlFrame> for QuicFrame {\n    fn from(frame: &StreamCtlFrame) -> Self {\n        match frame {\n            StreamCtlFrame::ResetStream(reset_stream_frame) => QuicFrame::ResetStream {\n                stream_id: reset_stream_frame.stream_id().id(),\n                error_code: (reset_stream_frame.app_error_code() as u32).into(),\n                
final_size: reset_stream_frame.final_size(),\n                length: None,\n                payload_length: None,\n            },\n            StreamCtlFrame::StopSending(stop_sending_frame) => QuicFrame::StopSending {\n                stream_id: stop_sending_frame.stream_id().id(),\n                error_code: (stop_sending_frame.app_err_code() as u32).into(),\n                length: None,\n                payload_length: None,\n            },\n            StreamCtlFrame::MaxStreamData(max_stream_data_frame) => QuicFrame::MaxStreamData {\n                stream_id: max_stream_data_frame.stream_id().id(),\n                maximum: max_stream_data_frame.max_stream_data(),\n            },\n            StreamCtlFrame::MaxStreams(max_streams_frame) => match max_streams_frame {\n                MaxStreamsFrame::Bi(maximum) => QuicFrame::MaxStreams {\n                    stream_type: StreamType::Bidirectional,\n                    maximum: maximum.into_u64(),\n                },\n                MaxStreamsFrame::Uni(maximum) => QuicFrame::MaxStreams {\n                    stream_type: StreamType::Unidirectional,\n                    maximum: maximum.into_u64(),\n                },\n            },\n            StreamCtlFrame::StreamDataBlocked(stream_data_blocked_frame) => {\n                QuicFrame::StreamDataBlocked {\n                    stream_id: stream_data_blocked_frame.stream_id().id(),\n                    limit: stream_data_blocked_frame.maximum_stream_data(),\n                }\n            }\n            StreamCtlFrame::StreamsBlocked(streams_blocked_frame) => match streams_blocked_frame {\n                StreamsBlockedFrame::Bi(limit) => QuicFrame::StreamsBlocked {\n                    stream_type: StreamType::Bidirectional,\n                    limit: limit.into_u64(),\n                },\n                StreamsBlockedFrame::Uni(limit) => QuicFrame::StreamsBlocked {\n                    stream_type: StreamType::Unidirectional,\n                    
limit: limit.into_u64(),\n                },\n            },\n        }\n    }\n}\n\nimpl From<&ConnectionCloseFrame> for QuicFrame {\n    fn from(frame: &ConnectionCloseFrame) -> Self {\n        Self::ConnectionClose {\n            error_space: Some(match &frame {\n                ConnectionCloseFrame::App(..) => ConnectionCloseErrorSpace::Application,\n                ConnectionCloseFrame::Quic(..) => ConnectionCloseErrorSpace::Transport,\n            }),\n            error_code: match &frame {\n                ConnectionCloseFrame::App(frame) => {\n                    Some(ApplicationCode::from(frame.error_code() as u32).into())\n                }\n                ConnectionCloseFrame::Quic(frame) => {\n                    Some(connectivity::ConnectionCode::from(frame.error_kind()).into())\n                }\n            },\n            reason: match &frame {\n                ConnectionCloseFrame::App(frame) => Some(frame.reason().to_owned()),\n                ConnectionCloseFrame::Quic(frame) => Some(frame.reason().to_owned()),\n            },\n            // TODO: the reason phrase should not be required to be UTF-8;\n            // non-UTF-8 reasons should be logged via reason_bytes instead\n            reason_bytes: None,\n            trigger_frame_type: match &frame {\n                ConnectionCloseFrame::Quic(frame) => {\n                    Some((VarInt::from(frame.frame_type()).into_u64()).into())\n                }\n                ConnectionCloseFrame::App(..) => None,\n            },\n        }\n    }\n}\n\nimpl<D: ContinuousData> From<&Frame<D>> for QuicFrame {\n    fn from(frame: &Frame<D>) -> Self {\n        match frame {\n            Frame::Padding(..) => QuicFrame::Padding {\n                length: Some(1),\n                payload_length: 1,\n            },\n            Frame::Ping(..) 
=> QuicFrame::Ping {\n                length: Some(1),\n                payload_length: Some(1),\n            },\n            Frame::Ack(frame) => frame.into(),\n            Frame::Close(frame) => frame.into(),\n            Frame::NewToken(frame) => frame.into(),\n            Frame::MaxData(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::DataBlocked(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::NewConnectionId(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::RetireConnectionId(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::HandshakeDone(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::AddAddress(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::RemoveAddress(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::PunchMeNow(frame) => (&ReliableFrame::from(*frame)).into(),\n            Frame::PathChallenge(frame) => frame.into(),\n            Frame::PathResponse(frame) => frame.into(),\n            Frame::StreamCtl(frame) => frame.into(),\n            // when the frame carries no payload, omit the raw data so that an\n            // empty buffer is not logged\n            Frame::Stream(frame, bytes) if bytes.is_empty() => frame.into(),\n            Frame::Crypto(frame, bytes) if bytes.is_empty() => frame.into(),\n            Frame::Datagram(frame, bytes) if bytes.is_empty() => frame.into(),\n            Frame::Stream(frame, bytes) => (frame, bytes).into(),\n            Frame::Crypto(frame, bytes) => (frame, bytes).into(),\n            Frame::Datagram(frame, bytes) => (frame, bytes).into(),\n            Frame::PunchHello(frame) => QuicFrame::Unknow {\n                frame_type_bytes: VarInt::from(frame.frame_type()).into_u64() as _,\n                raw: None,\n            },\n            Frame::PunchDone(frame) => QuicFrame::Unknow {\n                frame_type_bytes: VarInt::from(frame.frame_type()).into_u64() as _,\n                raw: None,\n            },\n        }\n    }\n}\n\n/// A collection for automatically and\n/// 
efficiently converting raw QUIC frames into qlog QUIC frames.\n#[derive(Debug)]\npub struct QuicFramesCollector<E> {\n    event: PhantomData<E>,\n    frames: Vec<QuicFrame>,\n}\n\nimpl<E> QuicFramesCollector<E> {\n    pub fn new() -> Self {\n        Self {\n            event: PhantomData,\n            frames: Vec::new(),\n        }\n    }\n}\n\nimpl<E> Default for QuicFramesCollector<E> {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl<E, F> Extend<F> for QuicFramesCollector<E>\nwhere\n    E: BeSpecificEventData,\n    F: Into<QuicFrame>,\n{\n    fn extend<T: IntoIterator<Item = F>>(&mut self, iter: T) {\n        if !crate::telemetry::Span::current().filter_event(E::scheme()) {\n            return;\n        }\n        for frame in iter.into_iter().map(Into::into) {\n            // Coalesce only runs of the same frame type: a PADDING (or PING)\n            // frame extends a trailing PADDING (or PING) qlog frame instead of\n            // being pushed; any other frame is appended as-is.\n            if let Some(last) = self.frames.last_mut() {\n                match (last, &frame) {\n                    (\n                        QuicFrame::Padding {\n                            length,\n                            payload_length,\n                        },\n                        QuicFrame::Padding { .. },\n                    ) => {\n                        *length = length.map(|length| length + 1);\n                        *payload_length += 1;\n                        continue;\n                    }\n                    (\n                        QuicFrame::Ping {\n                            length,\n                            payload_length,\n                        },\n                        QuicFrame::Ping { .. },\n                    ) => {\n                        *length = length.map(|length| length + 1);\n                        *payload_length = payload_length.map(|length| length + 1);\n                        continue;\n                    }\n                    _ => {}\n                }\n            }\n            self.frames.push(frame);\n        }\n    }\n}\n\nimpl<E> From<QuicFramesCollector<E>> for Vec<QuicFrame> {\n    fn from(value: QuicFramesCollector<E>) -> Self {\n        value.frames\n    
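// Usage sketch (illustrative; `E` is the qlog event type the frames are\n    // logged under, and `received_frames` is a hypothetical name):\n    //     let mut collector = QuicFramesCollector::<E>::new();\n    //     collector.extend(received_frames.iter());\n    //     let frames: Vec<QuicFrame> = collector.into();\n    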
}\n}\n\n#[derive(Debug, Clone, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ApplicationCode {\n    ApplicationError(ApplicationError),\n    Value(u32),\n}\n\nimpl From<ApplicationCode> for ConnectionCloseErrorCode {\n    fn from(value: ApplicationCode) -> Self {\n        match value {\n            ApplicationCode::ApplicationError(error) => {\n                ConnectionCloseErrorCode::ApplicationError(error)\n            }\n            ApplicationCode::Value(value) => ConnectionCloseErrorCode::Value(value as _),\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum StreamType {\n    Unidirectional,\n    Bidirectional,\n}\n\nimpl From<qbase::sid::Dir> for StreamType {\n    fn from(dir: qbase::sid::Dir) -> Self {\n        match dir {\n            qbase::sid::Dir::Bi => Self::Bidirectional,\n            qbase::sid::Dir::Uni => Self::Unidirectional,\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum ConnectionCloseErrorSpace {\n    Transport,\n    Application,\n}\n\n#[derive(Debug, Clone, From, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ConnectionCloseErrorCode {\n    TransportError(TransportError),\n    CryptoError(CryptoError),\n    ApplicationError(ApplicationError),\n    Value(u64),\n}\n\n#[derive(Debug, Clone, Serialize, From, Deserialize, PartialEq, Eq)]\n#[serde(untagged)]\npub enum ConnectionCloseTriggerFrameType {\n    Id(u64),\n    Text(String),\n}\n\n// 8.13.23\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub enum TransportError {\n    NoError,\n    InternalError,\n    ConnectionRefused,\n    FlowControlError,\n    StreamLimitError,\n    StreamStateError,\n    FinalSizeError,\n    FrameEncodingError,\n    TransportParameterError,\n    ConnectionIdLimitError,\n   
 ProtocolViolation,\n    InvalidToken,\n    ApplicationError,\n    CryptoBufferExceeded,\n    KeyUpdateError,\n    AeadLimitReached,\n    NoViablePath,\n}\n\n// 8.13.24\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\n#[serde(rename_all = \"snake_case\")]\npub struct ApplicationError(String);\n\n// 8.13.25\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub struct CryptoError(u8);\n\nimpl Display for CryptoError {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"crypto_error_0x1{:02x}\", self.0)\n    }\n}\n\nimpl Serialize for CryptoError {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::Serializer,\n    {\n        serializer.serialize_str(&self.to_string())\n    }\n}\n\nimpl<'de> Deserialize<'de> for CryptoError {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: serde::Deserializer<'de>,\n    {\n        let string = String::deserialize(deserializer)?;\n        string.strip_prefix(\"crypto_error_0x1\").map_or_else(\n            || Err(serde::de::Error::custom(\"invalid crypto error\")),\n            |s| {\n                u8::from_str_radix(s, 16)\n                    .map(CryptoError)\n                    .map_err(serde::de::Error::custom)\n            },\n        )\n    }\n}\n\ncrate::gen_builder_method! 
{\n    PathEndpointInfoBuilder => PathEndpointInfo;\n    PacketHeaderBuilder     => PacketHeader;\n    TokenBuilder            => Token;\n}\n\nmod rollback {\n    use super::*;\n    use crate::{build, legacy::quic as legacy};\n\n    impl From<IPAddress> for legacy::IPAddress {\n        #[inline]\n        fn from(value: IPAddress) -> Self {\n            legacy::IPAddress::from(value.0)\n        }\n    }\n\n    impl From<IpVersion> for legacy::IPVersion {\n        #[inline]\n        fn from(value: IpVersion) -> Self {\n            match value {\n                IpVersion::V4 => legacy::IPVersion::V4,\n                IpVersion::V6 => legacy::IPVersion::V6,\n            }\n        }\n    }\n\n    impl From<ConnectionID> for legacy::ConnectionID {\n        #[inline]\n        fn from(value: ConnectionID) -> Self {\n            legacy::ConnectionID::from(HexString::from(Bytes::from(value.0.to_vec())))\n        }\n    }\n\n    impl From<Owner> for legacy::Owner {\n        #[inline]\n        fn from(value: Owner) -> Self {\n            match value {\n                Owner::Local => legacy::Owner::Local,\n                Owner::Remote => legacy::Owner::Remote,\n            }\n        }\n    }\n\n    impl From<PacketType> for legacy::PacketType {\n        #[inline]\n        fn from(value: PacketType) -> Self {\n            match value {\n                PacketType::Initial => legacy::PacketType::Initial,\n                PacketType::Handshake => legacy::PacketType::Handshake,\n                PacketType::ZeroRTT => legacy::PacketType::ZeroRTT,\n                PacketType::OneRTT => legacy::PacketType::OneRTT,\n                PacketType::Retry => legacy::PacketType::Retry,\n                PacketType::VersionNegotiation => legacy::PacketType::VersionNegotiation,\n                PacketType::StatelessReset => legacy::PacketType::StatelessReset,\n                PacketType::Unknown => legacy::PacketType::Unknown,\n            }\n        }\n    }\n\n    impl 
From<PacketNumberSpace> for legacy::PacketNumberSpace {\n        #[inline]\n        fn from(value: PacketNumberSpace) -> Self {\n            match value {\n                PacketNumberSpace::Initial => legacy::PacketNumberSpace::Initial,\n                PacketNumberSpace::Handshake => legacy::PacketNumberSpace::Handshake,\n                PacketNumberSpace::ApplicationData => legacy::PacketNumberSpace::ApplicationData,\n            }\n        }\n    }\n\n    impl From<TokenType> for legacy::TokenType {\n        #[inline]\n        fn from(value: TokenType) -> Self {\n            match value {\n                TokenType::Retry => legacy::TokenType::Retry,\n                TokenType::Resumption => legacy::TokenType::Resumption,\n            }\n        }\n    }\n\n    impl From<Token> for legacy::Token {\n        #[inline]\n        fn from(value: Token) -> Self {\n            build!(legacy::Token {\n                ?r#type: value.r#type,\n                details: value.details,\n                ?length: value.raw.as_ref().and_then(|raw| raw.length.map(|length| length as u32)),\n                ?data: value.raw.and_then(|raw| raw.data)\n            })\n        }\n    }\n\n    impl From<StatelessResetToken> for legacy::Token {\n        #[inline]\n        fn from(value: StatelessResetToken) -> Self {\n            build!(legacy::Token {\n                r#type: TokenType::Resumption,\n                details: HashMap::new(),\n                length: 16u32,\n                data: { Bytes::from_owner(value.0.to_vec()) }\n            })\n        }\n    }\n\n    impl From<PacketHeader> for legacy::PacketHeader {\n        fn from(value: PacketHeader) -> Self {\n            build!(legacy::PacketHeader {\n                packet_type: value.packet_type,\n                ?packet_number: value.packet_number,\n                ?flags: value.flags,\n                ?token: value.token,\n                ?length: value.length,\n                ?version: value.version,\n                
?scil: value.scil,\n                ?dcil: value.dcil,\n                ?scid: value.scid,\n                ?dcid: value.dcid\n            })\n        }\n    }\n\n    impl From<TransportError> for legacy::TransportError {\n        #[inline]\n        fn from(value: TransportError) -> Self {\n            match value {\n                TransportError::NoError => legacy::TransportError::NoError,\n                TransportError::InternalError => legacy::TransportError::InternalError,\n                TransportError::ConnectionRefused => legacy::TransportError::ConnectionRefused,\n                TransportError::FlowControlError => legacy::TransportError::FlowControlError,\n                TransportError::StreamLimitError => legacy::TransportError::StreamLimitError,\n                TransportError::StreamStateError => legacy::TransportError::StreamStateError,\n                TransportError::FinalSizeError => legacy::TransportError::FinalSizeError,\n                TransportError::FrameEncodingError => legacy::TransportError::FrameEncodingError,\n                TransportError::TransportParameterError => {\n                    legacy::TransportError::TransportParameterError\n                }\n                TransportError::ConnectionIdLimitError => {\n                    legacy::TransportError::ConnectionIdLimitError\n                }\n                TransportError::ProtocolViolation => legacy::TransportError::ProtocolViolation,\n                TransportError::InvalidToken => legacy::TransportError::InvalidToken,\n                TransportError::ApplicationError => legacy::TransportError::ApplicationError,\n                TransportError::CryptoBufferExceeded => {\n                    legacy::TransportError::CryptoBufferExceeded\n                }\n                TransportError::KeyUpdateError => legacy::TransportError::KeyUpdateError,\n                TransportError::AeadLimitReached => legacy::TransportError::AeadLimitReached,\n                
TransportError::NoViablePath => legacy::TransportError::NoViablePath,\n            }\n        }\n    }\n\n    impl From<StreamType> for legacy::StreamType {\n        #[inline]\n        fn from(value: StreamType) -> Self {\n            match value {\n                StreamType::Unidirectional => legacy::StreamType::Unidirectional,\n                StreamType::Bidirectional => legacy::StreamType::Bidirectional,\n            }\n        }\n    }\n\n    impl From<ConnectionCloseErrorSpace> for legacy::ConnectionCloseErrorSpace {\n        #[inline]\n        fn from(value: ConnectionCloseErrorSpace) -> Self {\n            match value {\n                ConnectionCloseErrorSpace::Transport => {\n                    legacy::ConnectionCloseErrorSpace::Transport\n                }\n                ConnectionCloseErrorSpace::Application => {\n                    legacy::ConnectionCloseErrorSpace::Application\n                }\n            }\n        }\n    }\n\n    impl TryFrom<ConnectionCloseErrorCode> for legacy::ConnectionCloseErrorCode {\n        type Error = ();\n        #[inline]\n        fn try_from(value: ConnectionCloseErrorCode) -> Result<Self, ()> {\n            match value {\n                ConnectionCloseErrorCode::TransportError(error) => Ok(\n                    legacy::ConnectionCloseErrorCode::TransportError(error.into()),\n                ),\n                ConnectionCloseErrorCode::CryptoError(_error) => Err(()),\n                ConnectionCloseErrorCode::ApplicationError(error) => Ok(\n                    legacy::ConnectionCloseErrorCode::ApplicationError(error.into()),\n                ),\n                ConnectionCloseErrorCode::Value(value) => {\n                    Ok(legacy::ConnectionCloseErrorCode::Value(value))\n                }\n            }\n        }\n    }\n\n    impl From<ConnectionCloseTriggerFrameType> for legacy::ConnectionCloseTriggerFrameType {\n        #[inline]\n        fn from(value: ConnectionCloseTriggerFrameType) -> Self {\n    
        match value {\n                ConnectionCloseTriggerFrameType::Id(id) => {\n                    legacy::ConnectionCloseTriggerFrameType::Id(id)\n                }\n                ConnectionCloseTriggerFrameType::Text(text) => {\n                    legacy::ConnectionCloseTriggerFrameType::Text(text)\n                }\n            }\n        }\n    }\n\n    impl From<QuicFrame> for legacy::QuicFrame {\n        fn from(value: QuicFrame) -> Self {\n            match value {\n                QuicFrame::Padding {\n                    length,\n                    payload_length,\n                } => legacy::QuicFrame::Padding {\n                    length,\n                    payload_length,\n                },\n                QuicFrame::Ping {\n                    length,\n                    payload_length,\n                } => legacy::QuicFrame::Ping {\n                    length,\n                    payload_length,\n                },\n                QuicFrame::Ack {\n                    ack_delay,\n                    acked_ranges,\n                    ect1,\n                    ect0,\n                    ce,\n                    length,\n                    payload_length,\n                } => legacy::QuicFrame::Ack {\n                    ack_delay,\n                    acked_ranges,\n                    ect1,\n                    ect0,\n                    ce,\n                    length,\n                    payload_length,\n                },\n                QuicFrame::ResetStream {\n                    stream_id,\n                    error_code,\n                    final_size,\n                    length,\n                    payload_length,\n                } => legacy::QuicFrame::ResetStream {\n                    stream_id,\n                    error_code: error_code.into(),\n                    final_size,\n                    length,\n                    payload_length,\n                },\n                QuicFrame::StopSending {\n     
               stream_id,\n                    error_code,\n                    length,\n                    payload_length,\n                } => legacy::QuicFrame::StopSending {\n                    stream_id,\n                    error_code: error_code.into(),\n                    length,\n                    payload_length,\n                },\n                QuicFrame::Crypto {\n                    offset,\n                    length,\n                    payload_length,\n                    raw: _,\n                } => legacy::QuicFrame::Crypto {\n                    offset,\n                    length,\n                    payload_length,\n                },\n                QuicFrame::NewToken { token } => legacy::QuicFrame::NewToken {\n                    token: token.into(),\n                },\n                QuicFrame::Stream {\n                    stream_id,\n                    offset,\n                    length,\n                    fin,\n                    raw,\n                } => legacy::QuicFrame::Stream {\n                    stream_id,\n                    offset,\n                    length,\n                    fin,\n                    raw,\n                },\n                QuicFrame::MaxData { maximum } => legacy::QuicFrame::MaxData { maximum },\n                QuicFrame::MaxStreamData { stream_id, maximum } => {\n                    legacy::QuicFrame::MaxStreamData { stream_id, maximum }\n                }\n                QuicFrame::MaxStreams {\n                    stream_type,\n                    maximum,\n                } => legacy::QuicFrame::MaxStreams {\n                    stream_type: stream_type.into(),\n                    maximum,\n                },\n                QuicFrame::DataBlocked { limit } => legacy::QuicFrame::DataBlocked { limit },\n                QuicFrame::StreamDataBlocked { stream_id, limit } => {\n                    legacy::QuicFrame::StreamDataBlocked { stream_id, limit }\n                }\n     
           QuicFrame::StreamsBlocked { stream_type, limit } => {\n                    legacy::QuicFrame::StreamsBlocked {\n                        stream_type: stream_type.into(),\n                        limit,\n                    }\n                }\n                QuicFrame::NewConnectionId {\n                    sequence_number,\n                    retire_prior_to,\n                    connection_id_length,\n                    connection_id,\n                    stateless_reset_token,\n                } => legacy::QuicFrame::NewConnectionId {\n                    sequence_number,\n                    retire_prior_to,\n                    connection_id_length,\n                    connection_id: connection_id.into(),\n                    stateless_reset_token: stateless_reset_token.map(Into::into),\n                },\n                QuicFrame::RetireConnectionId { sequence_number } => {\n                    legacy::QuicFrame::RetireConnectionId { sequence_number }\n                }\n                QuicFrame::PathChallenge { data } => legacy::QuicFrame::PathChallenge { data },\n                QuicFrame::PathResponse { data } => legacy::QuicFrame::PathResponse { data },\n                QuicFrame::ConnectionClose {\n                    error_space,\n                    error_code,\n                    reason,\n                    reason_bytes: _,\n                    trigger_frame_type,\n                } => legacy::QuicFrame::ConnectionClose {\n                    error_space: error_space.map(Into::into),\n                    raw_error_code: match &error_code {\n                        Some(ConnectionCloseErrorCode::CryptoError(CryptoError(value))) => {\n                            Some(*value as u32)\n                        }\n                        _ => None,\n                    },\n                    error_code: error_code.and_then(|error_code| error_code.try_into().ok()),\n                    reason,\n                    trigger_frame_type: 
trigger_frame_type.map(Into::into),\n                },\n                QuicFrame::HandshakeDone {} => legacy::QuicFrame::HandshakeDone {},\n                QuicFrame::Unknow {\n                    frame_type_bytes,\n                    raw,\n                } => legacy::QuicFrame::Unknown {\n                    raw_frame_type: frame_type_bytes,\n                    raw_length: raw\n                        .as_ref()\n                        .and_then(|raw| raw.length.map(|length| length as u32)),\n                    raw: raw.and_then(|raw| raw.data),\n                },\n                QuicFrame::Datagram { length, raw } => legacy::QuicFrame::Datagram { length, raw },\n            }\n        }\n    }\n\n    impl From<ApplicationError> for legacy::ApplicationError {\n        #[inline]\n        fn from(value: ApplicationError) -> Self {\n            value.0.into()\n        }\n    }\n\n    impl From<ApplicationCode> for legacy::ApplicationCode {\n        #[inline]\n        fn from(value: ApplicationCode) -> Self {\n            match value {\n                ApplicationCode::ApplicationError(error) => {\n                    legacy::ApplicationCode::ApplicationError(error.into())\n                }\n                ApplicationCode::Value(value) => legacy::ApplicationCode::Value(value),\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn ack() {\n        // 123 56 9\n        let frame = AckFrame::new(\n            9u32.into(),\n            1000u32.into(),\n            0u32.into(),\n            vec![(1u32.into(), 1u32.into()), (0u32.into(), 2u32.into())],\n            None,\n        );\n\n        let encoding_size = frame.encoding_size();\n\n        let quic_frame: QuicFrame = (&frame).into();\n        assert_eq!(\n            quic_frame,\n            QuicFrame::Ack {\n                ack_delay: Some(1.0),\n                acked_ranges: vec![[9, 9], [5, 6], [1, 3]],\n                ect1: None,\n                
ect0: None,\n                ce: None,\n                length: Some(encoding_size as u32),\n                payload_length: None,\n            }\n        );\n    }\n}\n"
  },
  {
    "path": "qevent/src/telemetry/filter.rs",
    "content": "#[inline]\n#[cfg(feature = \"telemetry\")]\npub fn event(scheme: &'static str) -> bool {\n    super::current_span::CURRENT_SPAN.with(|span| span.borrow().filter_event(scheme))\n}\n\n#[inline]\n#[cfg(not(feature = \"telemetry\"))]\npub fn event(_scheme: &'static str) -> bool {\n    false\n}\n\n#[inline]\n#[cfg(all(feature = \"telemetry\", feature = \"raw_data\"))]\npub fn raw_data() -> bool {\n    super::current_span::CURRENT_SPAN.with(|span| span.borrow().filter_raw_data())\n}\n\n#[inline]\n#[cfg(not(all(feature = \"telemetry\", feature = \"raw_data\")))]\npub fn raw_data() -> bool {\n    false\n}\n"
  },
  {
    "path": "qevent/src/telemetry/handy.rs",
    "content": "use std::{\n    future::Future,\n    path::{Path, PathBuf},\n    sync::Arc,\n};\n\nuse tokio::{\n    io::{self, AsyncWrite, AsyncWriteExt},\n    sync::mpsc,\n};\n\nuse super::{ExportEvent, QLog, Span};\nuse crate::{Event, GroupID, VantagePoint, VantagePointType, span};\n\npub struct NoopExporter;\n\nimpl ExportEvent for NoopExporter {\n    fn emit(&self, event: Event) {\n        _ = event;\n    }\n\n    fn filter_event(&self, _: &'static str) -> bool {\n        false\n    }\n\n    fn filter_raw_data(&self) -> bool {\n        false\n    }\n}\n\nimpl ExportEvent for mpsc::UnboundedSender<Event> {\n    fn emit(&self, event: Event) {\n        _ = self.send(event);\n    }\n}\n\npub struct NoopLogger;\n\nimpl QLog for NoopLogger {\n    #[inline]\n    fn new_trace(&self, _: VantagePointType, _: GroupID) -> Span {\n        span!(Arc::new(NoopExporter))\n    }\n}\n\nimpl<L: QLog + ?Sized> QLog for Arc<L> {\n    #[inline]\n    fn new_trace(&self, vantage_point: VantagePointType, group_id: GroupID) -> Span {\n        self.as_ref().new_trace(vantage_point, group_id)\n    }\n}\n\npub trait TelemetryStorage {\n    fn join(\n        &self,\n        file_name: &str,\n    ) -> impl Future<Output = impl AsyncWrite + Send + Unpin + 'static> + Send + 'static;\n}\n\nimpl TelemetryStorage for PathBuf {\n    fn join(\n        &self,\n        file_name: &str,\n    ) -> impl Future<Output = impl AsyncWrite + Send + Unpin + 'static> + Send + 'static {\n        let file_path = Path::join(self, file_name);\n        async move {\n            tokio::fs::OpenOptions::new()\n                .create(true)\n                .truncate(true)\n                .write(true)\n                .open(&file_path)\n                .await\n                .unwrap_or_else(|e| {\n                    panic!(\n                        \"failed to create sqlog file {}: {e:?}, qlogs to this connection will be ignored.\",\n                        file_path.display()\n                    )\n              
  })\n        }\n    }\n}\n\nimpl TelemetryStorage for tokio::io::Stdout {\n    #[allow(clippy::manual_async_fn)]\n    fn join(\n        &self,\n        _: &str,\n    ) -> impl Future<Output = impl AsyncWrite + Send + Unpin + 'static> + Send + 'static {\n        async move { tokio::io::stdout() }\n    }\n}\n\nimpl TelemetryStorage for tokio::io::Stderr {\n    #[allow(clippy::manual_async_fn)]\n    fn join(\n        &self,\n        _: &str,\n    ) -> impl Future<Output = impl AsyncWrite + Send + Unpin + 'static> + Send + 'static {\n        async move { tokio::io::stderr() }\n    }\n}\n\npub struct LegacySeqLogger<S> {\n    storage: S,\n}\n\nimpl<S: Clone> Clone for LegacySeqLogger<S> {\n    fn clone(&self) -> Self {\n        Self {\n            storage: self.storage.clone(),\n        }\n    }\n}\n\nimpl<S> LegacySeqLogger<S> {\n    pub fn new(storage: S) -> Self {\n        Self { storage }\n    }\n}\n\nimpl<S: TelemetryStorage> QLog for LegacySeqLogger<S> {\n    fn new_trace(&self, vantage_point: VantagePointType, group_id: GroupID) -> Span {\n        use crate::legacy;\n\n        let file_name = format!(\"{group_id}_{vantage_point}.sqlog\");\n        let file = self.storage.join(&file_name);\n\n        let qlog_file_seq = crate::build!(legacy::QlogFileSeq {\n            title: file_name,\n            trace: legacy::TraceSeq {\n                vantage_point: VantagePoint {\n                    r#type: vantage_point\n                },\n            }\n        });\n\n        let (tx, mut rx) = mpsc::unbounded_channel::<Event>();\n        tokio::spawn(async move {\n            let mut log_file = io::BufWriter::new(file.await);\n\n            const RS: u8 = 0x1E;\n\n            log_file.write_u8(RS).await?;\n            let qlog_file_seq = serde_json::to_string(&qlog_file_seq).unwrap();\n            log_file.write_all(qlog_file_seq.as_bytes()).await?;\n            log_file.write_u8(b'\\n').await?;\n\n            while let Some(event) = rx.recv().await {\n                
let Ok(event) = legacy::Event::try_from(event) else {\n                    continue;\n                };\n                let event = serde_json::to_string(&event).unwrap();\n                // log_file.write_vectored();\n                log_file.write_u8(RS).await?;\n                log_file.write_all(event.as_bytes()).await?;\n                log_file.write_u8(b'\\n').await?;\n            }\n\n            log_file.shutdown().await\n        });\n\n        crate::span!(Arc::new(tx), group_id = group_id)\n    }\n}\n\npub struct TracingLogger;\n\nimpl QLog for TracingLogger {\n    fn new_trace(&self, vantage_point: VantagePointType, group_id: GroupID) -> Span {\n        use crate::legacy;\n\n        let span =\n            tracing::info_span!(parent: None,\"qlog\", role = %vantage_point, odcid = %group_id);\n\n        let qlog_file_seq = crate::build!(legacy::QlogFileSeq {\n            title: format!(\"{group_id}_{vantage_point}.sqlog\"),\n            trace: legacy::TraceSeq {\n                vantage_point: VantagePoint {\n                    r#type: vantage_point\n                },\n            }\n        });\n\n        let (tx, mut rx) = mpsc::unbounded_channel::<Event>();\n        tokio::spawn(tracing::Instrument::instrument(\n            async move {\n                tracing::debug!(target: \"qlog\", \"{}\", serde_json::to_string(&qlog_file_seq).unwrap());\n\n                while let Some(event) = rx.recv().await {\n                    let Ok(event) = legacy::Event::try_from(event) else {\n                        continue;\n                    };\n                    tracing::debug!(target: \"qlog\", \"{}\", serde_json::to_string(&event).unwrap());\n                }\n            },\n            span,\n        ));\n\n        crate::span!(Arc::new(tx), group_id = group_id)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::{\n        quic::connectivity::ServerListening,\n        telemetry::{Instrument, QLog, Span, handy::LegacySeqLogger},\n    };\n\n    
#[tokio::test]\n    #[cfg(feature = \"telemetry\")]\n    async fn legacy_seq_exporter() {\n        let exporter = LegacySeqLogger::new(tokio::io::stdout());\n\n        let root_span = exporter.new_trace(\n            crate::VantagePointType::Server,\n            crate::GroupID::from(\"test_group\".to_string()),\n        );\n\n        root_span.in_scope(|| {\n            let any_field = 112233u64;\n            crate::span!(@current, any_field).in_scope(|| {\n                crate::event!(ServerListening {\n                    ip_v4: \"127.0.0.1\".to_owned(),\n                    port_v4: 443u16\n                });\n\n                tokio::spawn(\n                    async move {\n                        assert_eq!(Span::current().load::<u64>(\"any_field\"), 112233u64);\n                        // do something\n                    }\n                    .instrument(crate::span!(@current, path_id = String::from(\"new path\"))),\n                );\n            });\n        });\n\n        tokio::task::yield_now().await;\n    }\n}\n"
  },
  {
    "path": "qevent/src/telemetry/macro_support.rs",
    "content": "use serde::Serialize;\n\nuse super::*;\nuse crate::{BeSpecificEventData, EventBuilder};\n\n#[inline]\npub fn new_span(exporter: Arc<dyn ExportEvent>, fields: HashMap<&'static str, Value>) -> Span {\n    Span {\n        exporter,\n        fields: Arc::new(fields),\n    }\n}\n\npub fn modify_event_builder_costom_fields(\n    builder: &mut EventBuilder,\n    f: impl FnOnce(&mut HashMap<String, Value>),\n) {\n    if builder.custom_fields.is_none() {\n        builder.custom_fields = Some(HashMap::new());\n    }\n    let custom_fields = builder.custom_fields.as_mut().unwrap();\n    f(custom_fields);\n}\n\npub fn current_span_exporter() -> Arc<dyn ExportEvent> {\n    current_span::CURRENT_SPAN.with(|span| span.borrow().exporter.clone())\n}\n\npub fn current_span_fields() -> HashMap<&'static str, Value> {\n    current_span::CURRENT_SPAN.with(|span| span.borrow().fields.as_ref().clone())\n}\n\npub fn try_load_current_span<T: DeserializeOwned>(name: &'static str) -> Option<T> {\n    current_span::CURRENT_SPAN.with(|span| {\n        let span = span.borrow();\n        Some(from_value::<T>(span.fields.get(name)?.clone()))\n    })\n}\n\npub fn build_and_emit_event<D: BeSpecificEventData>(\n    build_data: impl FnOnce() -> D,\n    build_event: impl FnOnce(D) -> Event,\n) {\n    if !filter::event(D::scheme()) {\n        return;\n    }\n    let event = build_event(build_data());\n    current_span::CURRENT_SPAN.with(|span| span.borrow().emit(event));\n}\n\npub fn to_value<T: Serialize>(value: T) -> Value {\n    serde_json::to_value(value).unwrap()\n}\n\npub fn from_value<T: DeserializeOwned>(value: Value) -> T {\n    serde_json::from_value(value).unwrap()\n}\n"
  },
  {
    "path": "qevent/src/telemetry/macros.rs",
    "content": "#[macro_export]\n#[cfg(feature = \"telemetry\")]\nmacro_rules! span {\n    () => {{\n        $crate::telemetry::Span::current()\n    }};\n    (@current     $(, $($tt:tt)* )?) => {{\n        let __current_exporter = $crate::telemetry::macro_support::current_span_exporter();\n        $crate::span!(__current_exporter $(, $($tt)* )?)\n    }};\n    ($broker:expr $(, $($tt:tt)* )?) => {{\n        #[allow(unused_mut)]\n        let mut __current_fields = $crate::telemetry::macro_support::current_span_fields();\n        $crate::span!(@field __current_fields $(, $($tt)* )?);\n        $crate::telemetry::macro_support::new_span($broker, __current_fields)\n    }};\n    (@field $fields:expr, $name:ident               $(, $($tt:tt)* )?) => {\n        $crate::span!( @field $fields, $name = $name $(, $($tt)* )? );\n    };\n    (@field $fields:expr, $name:ident = $value:expr $(, $($tt:tt)* )?) => {\n        let __value = $crate::telemetry::macro_support::to_value($value);\n        $fields.insert(stringify!($name), __value);\n        $crate::span!( @field $fields $(, $($tt)* )? );\n    };\n    (@field $fields:expr $(,)? ) => {};\n}\n\n#[macro_export]\n#[cfg(not(feature = \"telemetry\"))]\nmacro_rules! span {\n    () => {{\n        $crate::telemetry::Span::current()\n    }};\n    (@current     $(, $($tt:tt)* )?) => {{\n        let __current_exporter = $crate::telemetry::macro_support::current_span_exporter();\n        $crate::span!(__current_exporter $(, $($tt)* )?)\n    }};\n    ($broker:expr $(, $($tt:tt)* )?) => {{\n        #[allow(unused_mut)]\n        let mut __current_fields = $crate::telemetry::macro_support::current_span_fields();\n        $crate::span!(@field __current_fields $(, $($tt)* )?);\n        $crate::telemetry::macro_support::new_span($broker, __current_fields)\n    }};\n    (@field $fields:expr, $name:ident               $(, $($tt:tt)* )?) => {\n        $crate::span!( @field $fields, $name = $name $(, $($tt)* )? 
);\n    };\n    (@field $fields:expr, $name:ident = $value:expr $(, $($tt:tt)* )?) => {\n        _ = $value;\n        $crate::span!( @field $fields $(, $($tt)* )? );\n    };\n    (@field $fields:expr $(,)? ) => {};\n}\n\n#[macro_export]\nmacro_rules! event {\n    ($event_type:ty { $($event_field:tt)* } $(, $($tt:tt)* )?) => {{\n        $crate::event!($crate::build!($event_type { $($event_field)* }) $(, $($tt)* )?);\n    }};\n    ($event_data:expr                       $(, $($tt:tt)* )?) => {{\n        let __build_data = || $event_data;\n        let __build_event = |__event_data| {\n            let mut __event_builder = $crate::Event::builder();\n            // as_millis_f64 is nightly only\n            let __time = std::time::SystemTime::now()\n                .duration_since(std::time::UNIX_EPOCH)\n                .unwrap()\n                .as_secs_f64()\n                * 1000.0;\n            __event_builder.time(__time);\n            __event_builder.data(__event_data);\n            $crate::event!(@load_known __event_builder, path: $crate::PathID);\n            $crate::event!(@load_known __event_builder, protocol_types: $crate::ProtocolTypeList);\n            $crate::event!(@load_known __event_builder, group_id: $crate::GroupID);\n            $crate::event!(@field __event_builder $(, $($tt)* )?);\n\n            __event_builder.build()\n        };\n        $crate::telemetry::macro_support::build_and_emit_event(__build_data, __build_event);\n    }};\n    (@load_known $event_builder:expr, $name:ident: $type:ty) => {\n        if let Some(__value) = $crate::telemetry::macro_support::try_load_current_span::<$type>(stringify!($name)) {\n            $event_builder.$name(__value);\n        }\n    };\n    (@field $event_builder:expr, $name:ident               $(, $($tt:tt)* )?) => {\n        $crate::event!( @field $event_builder, $name = $name $(, $($tt)* )? );\n    };\n    (@field $event_builder:expr, $name:ident = Map           { $($build:tt)* } $(, $($tt:tt)* )?) 
=> {\n        let __value = $crate::telemetry::macro_support::to_value($crate::map!{ $($build)* });\n        $crate::telemetry::macro_support::modify_event_builder_costom_fields(&mut $event_builder, |__custom_fields| {\n            __custom_fields.insert(stringify!($name).to_owned(), __value);\n        });\n        $crate::event!( @field $event_builder $(, $($tt)* )? );\n    };\n    (@field $event_builder:expr, $name:ident = $struct:ident { $($build:tt)* } $(, $($tt:tt)* )?) => {\n        let __value = $crate::telemetry::macro_support::to_value($crate::build!($struct { $($build)* }));\n        $crate::telemetry::macro_support::modify_event_builder_costom_fields(&mut $event_builder, |__custom_fields| {\n            __custom_fields.insert(stringify!($name).to_owned(), __value);\n        });\n        $crate::event!( @field $event_builder $(, $($tt)* )? );\n    };\n    (@field $event_builder:expr, $name:ident = $value:expr $(, $($tt:tt)* )?) => {\n        let __value = $crate::telemetry::macro_support::to_value($value);\n        $crate::telemetry::macro_support::modify_event_builder_costom_fields(&mut $event_builder, |__custom_fields| {\n            __custom_fields.insert(stringify!($name).to_owned(), __value);\n        });\n        $crate::event!( @field $event_builder $(, $($tt)* )? );\n    };\n    (@field $event_builder:expr $(,)? ) => {};\n"
  },
  {
    "path": "qevent/src/telemetry.rs",
    "content": "pub(crate) mod filter;\npub mod handy;\n\n#[doc(hidden)]\npub mod macro_support;\nmod macros;\n\nuse std::{\n    collections::HashMap,\n    fmt::Debug,\n    future::Future,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n};\n\nuse handy::NoopExporter;\nuse serde::de::DeserializeOwned;\nuse serde_json::Value;\n\nuse crate::{Event, GroupID, VantagePointType};\n\npub trait QLog {\n    fn new_trace(&self, vantage_point: VantagePointType, group_id: GroupID) -> Span;\n}\n\npub trait ExportEvent: Send + Sync {\n    fn emit(&self, event: Event);\n\n    fn filter_event(&self, scheme: &'static str) -> bool {\n        _ = scheme;\n        true\n    }\n\n    fn filter_raw_data(&self) -> bool {\n        false\n    }\n}\n\n#[derive(Clone)]\npub struct Span {\n    exporter: Arc<dyn ExportEvent>,\n    fields: Arc<HashMap<&'static str, Value>>,\n}\n\nimpl Debug for Span {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Span\")\n            .field(\"exporter\", &\"..\")\n            .field(\"fields\", &self.fields)\n            .finish()\n    }\n}\n\nimpl Span {\n    #[inline]\n    pub fn emit(&self, event: Event) {\n        self.exporter.emit(event);\n    }\n\n    #[inline]\n    pub fn filter_event(&self, scheme: &'static str) -> bool {\n        self.exporter.filter_event(scheme)\n    }\n\n    #[inline]\n    pub fn filter_raw_data(&self) -> bool {\n        self.exporter.filter_raw_data()\n    }\n\n    #[inline]\n    pub fn load<T: DeserializeOwned>(&self, name: &'static str) -> T {\n        let Some(value) = self.fields.get(name) else {\n            panic!(\n                \"Failed to load field `{name}` from span fields: {:?}\",\n                self.fields\n            );\n        };\n        match serde_json::from_value(value.clone()) {\n            Ok(value) => value,\n            Err(e) => panic!(\n                \"Failed to load field `{name}` from span fields: {:?}, error: {:?}\",\n             
   self.fields, e\n            ),\n        }\n    }\n\n    #[inline]\n    pub fn try_load<T: DeserializeOwned>(&self, name: &'static str) -> Option<T> {\n        serde_json::from_value(self.fields.get(name)?.clone()).ok()\n    }\n}\n\nimpl PartialEq for Span {\n    fn eq(&self, other: &Self) -> bool {\n        Arc::ptr_eq(&self.fields, &other.fields) && Arc::ptr_eq(&self.exporter, &other.exporter)\n    }\n}\n\nimpl Default for Span {\n    fn default() -> Self {\n        Self {\n            exporter: Arc::new(NoopExporter),\n            fields: Arc::new(HashMap::new()),\n        }\n    }\n}\n\npub struct Entered {\n    previous: Option<Span>,\n}\n\nmod current_span {\n    use std::cell::RefCell;\n\n    use super::{Entered, Span};\n\n    thread_local! {\n        pub static CURRENT_SPAN: RefCell<Span> = RefCell::new(Span::default());\n    }\n\n    impl Drop for Entered {\n        fn drop(&mut self) {\n            if let Some(previous) = &self.previous {\n                CURRENT_SPAN.with(|span| {\n                    span.replace(previous.clone());\n                });\n            }\n        }\n    }\n\n    impl Span {\n        pub fn enter(&self) -> Entered {\n            let previous = CURRENT_SPAN.with(|current| {\n                if &*current.borrow() == self {\n                    None\n                } else {\n                    Some(current.replace(self.clone()))\n                }\n            });\n            Entered { previous }\n        }\n\n        pub fn in_scope<T>(&self, f: impl FnOnce() -> T) -> T {\n            let _guard = self.enter();\n            f()\n        }\n\n        pub fn current() -> Span {\n            CURRENT_SPAN.with(|span| span.borrow().clone())\n        }\n    }\n}\n\npin_project_lite::pin_project! 
{\n    pub struct Instrumented<F: ?Sized> {\n        span: Span,\n        #[pin]\n        inner: F,\n    }\n}\n\npub trait Instrument {\n    fn instrument(self, span: Span) -> Instrumented<Self>;\n\n    fn instrument_in_current(self) -> Instrumented<Self>;\n}\n\nimpl<F: Future> Instrument for F {\n    fn instrument(self, span: Span) -> Instrumented<Self> {\n        Instrumented { span, inner: self }\n    }\n\n    fn instrument_in_current(self) -> Instrumented<Self> {\n        self.instrument(crate::span!())\n    }\n}\n\nimpl<F: Future> Future for Instrumented<F> {\n    type Output = F::Output;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        let this = self.project();\n        this.span.in_scope(|| this.inner.poll(cx))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use qbase::cid::ConnectionId;\n\n    use super::*;\n    use crate::{\n        GroupID, event,\n        quic::{ConnectionID, connectivity::ServerListening},\n        span,\n    };\n\n    #[test]\n    fn span_fields() {\n        let exporter = Arc::new(NoopExporter);\n        let _span = span!(exporter.clone());\n        let a = 0i32;\n        let c = 123456789usize;\n        span!(exporter.clone(), a, a, b = 12.3f32, c, d = \"Hello world!\").in_scope(|| {\n            assert_eq!(Span::current().load::<i32>(\"a\"), 0);\n            assert_eq!(Span::current().load::<f32>(\"b\"), 12.3);\n            assert_eq!(Span::current().load::<usize>(\"c\"), 123456789);\n            assert_eq!(Span::current().load::<String>(\"d\"), \"Hello world!\");\n            let e = vec![1, 2, 3];\n            span!(exporter.clone(), a = 1, b = 2, c = 3, e).in_scope(|| {\n                assert_eq!(Span::current().load::<i32>(\"a\"), 1);\n                assert_eq!(Span::current().load::<i32>(\"b\"), 2);\n                assert_eq!(Span::current().load::<i32>(\"c\"), 3);\n                assert_eq!(Span::current().load::<String>(\"d\"), \"Hello world!\");\n         
       assert_eq!(Span::current().load::<Vec<i32>>(\"e\"), vec![1, 2, 3]);\n            });\n        })\n    }\n\n    #[test]\n    fn event() {\n        struct TestBroker;\n\n        impl ExportEvent for TestBroker {\n            fn emit(&self, event: Event) {\n                let str = serde_json::to_string_pretty(&event).unwrap();\n                let event = serde_json::to_value(event).unwrap();\n                println!(\"{str}\");\n                assert_eq!(event[\"name\"], \"quic:server_listening\");\n                let event_data_json = serde_json::json!({\n                    \"ip_v4\": \"127.0.0.1\",\n                    \"port_v4\": 8080,\n                });\n                assert_eq!(event[\"data\"], event_data_json);\n                assert_eq!(event[\"group_id\"], String::from(group_id()));\n                assert_eq!(event[\"use_strict_mode\"], true);\n            }\n        }\n\n        fn group_id() -> GroupID {\n            GroupID::from(ConnectionID::from(ConnectionId::from_slice(&[\n                0x12, 0x34, 0x56, 0x78, 0x90, 0xab, 0xcd, 0xef,\n            ])))\n        }\n\n        span!(Arc::new(TestBroker), group_id = group_id()).in_scope(|| {\n            event!(\n                crate::build!(ServerListening {\n                    ip_v4: \"127.0.0.1\".to_owned(),\n                    port_v4: 8080u16,\n                }),\n                use_strict_mode = true\n            );\n        });\n    }\n}\n"
  },
  {
    "path": "qinterface/Cargo.toml",
    "content": "[package]\nname = \"qinterface\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"dquic's network interface and IO abstractions\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\n\n[dependencies]\nbytes = { workspace = true }\ndashmap = { workspace = true }\nderive_more = { workspace = true, features = [\"deref\"] }\nfutures = { workspace = true }\nhttp = { workspace = true }\nnetdev = { workspace = true }\nnetwatcher = { workspace = true }\nparking_lot = { workspace = true }\npin-project-lite = { workspace = true }\nqbase = { workspace = true }\nqevent = { workspace = true }\nrustls = { workspace = true }\nserde = { workspace = true, features = [\"derive\"] }\ntokio = { workspace = true, features = [\"net\", \"rt\", \"sync\", \"time\", \"macros\"] }\ntokio-util = { workspace = true, features = [\"rt\"] }\nthiserror = { workspace = true }\ntracing = { workspace = true }\n\n[target.'cfg(any(unix, windows))'.dependencies]\nqudp = { workspace = true, optional = true }\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\n    \"test-util\",\n    \"macros\",\n    \"rt-multi-thread\",\n] }\n\n[features]\nqudp = [\"dep:qudp\"]\n"
  },
  {
    "path": "qinterface/examples/interface-monitor.rs",
    "content": "use qinterface::device::Devices;\n\n#[tokio::main(flavor = \"current_thread\")]\nasync fn main() {\n    let global = Devices::global();\n    let mut monitor = global.monitor();\n    for (name, iface) in monitor.interfaces() {\n        println!(\"Interface: {name} => {iface:#?}\");\n    }\n    while let Some((_devices, event)) = monitor.update().await {\n        println!(\"Event: {event:#?}\");\n    }\n}\n"
  },
  {
    "path": "qinterface/src/bind_uri.rs",
    "content": "use std::{\n    borrow::Cow,\n    fmt::Display,\n    io,\n    net::{AddrParseError, IpAddr, SocketAddr},\n    str::FromStr,\n};\n\nuse derive_more::{Display, Into};\nuse qbase::{net::Family, util::UniqueIdGenerator};\nuse thiserror::Error;\n\n#[derive(Debug, Display, Clone, Into, PartialEq, Eq, Hash)]\npub struct BindUri(http::Uri);\n\n#[derive(Debug, Error)]\npub enum ParseError {\n    #[error(\"Invalid uri {0}\")]\n    InvalidUri(<http::Uri as FromStr>::Err),\n    #[error(\"Missing scheme\")]\n    NoScheme,\n    #[error(\"Unsupported bind uri scheme: {0}\")]\n    Unsupported(String),\n    #[error(\"Path must be empty\")]\n    Malformed,\n    #[error(\"Missing ip family for iface scheme BindUri\")]\n    NoFamily,\n    #[error(\"Missing port for iface scheme BindUri\")]\n    NoPort,\n    #[error(\"Invalid IP address family for iface scheme\")]\n    UnknownFamily,\n    #[error(\"Invalid IP address for inet scheme BindUri: {0}\")]\n    InvalidIpAddr(AddrParseError),\n}\n\nfn parse_iface_bind_uri(uri: &http::Uri) -> Result<(Family, &str, u16), ParseError> {\n    let authority = uri.authority().expect(\"BindUri is absolute URI\");\n    let (ip_family, interface) = authority\n        .host()\n        .split_once('.')\n        .ok_or(ParseError::NoFamily)?;\n    let port = authority.port_u16().ok_or(ParseError::NoPort)?;\n    let ip_family: Family = ip_family.parse().or(Err(ParseError::UnknownFamily))?;\n    Ok((ip_family, interface, port))\n}\n\nfn parse_inet_bind_uri(uri: &http::Uri) -> Result<SocketAddr, ParseError> {\n    let authority = uri.authority().expect(\"BindUri is absolute URI\");\n    let port = authority.port_u16().ok_or(ParseError::NoPort)?;\n    let host = match authority.host().as_bytes() {\n        [b'[', .., b']'] => authority.host().trim_matches(|c| matches!(c, '[' | ']')),\n        _ => authority.host(),\n    };\n    match IpAddr::from_str(host) {\n        Ok(ip) => Ok(SocketAddr::new(ip, port)),\n        Err(e) => 
Err(ParseError::InvalidIpAddr(e)),\n    }\n}\n\nimpl FromStr for BindUri {\n    type Err = ParseError;\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        if let Ok(socket_addr) = s.parse::<SocketAddr>() {\n            return Ok(socket_addr.into());\n        }\n        s.parse::<http::Uri>()\n            .map_err(ParseError::InvalidUri)?\n            .try_into()\n    }\n}\n\nimpl TryFrom<http::Uri> for BindUri {\n    type Error = ParseError;\n\n    fn try_from(uri: http::Uri) -> Result<Self, Self::Error> {\n        let scheme = uri\n            .scheme()\n            .ok_or(ParseError::NoScheme)?\n            .as_str()\n            .parse()\n            .map_err(ParseError::Unsupported)?;\n        debug_assert!(uri.authority().is_some(), \"BindUri should be absolute URI\");\n\n        if uri.path() != \"/\" {\n            return Err(ParseError::Malformed);\n        }\n\n        match scheme {\n            Scheme::Iface => {\n                parse_iface_bind_uri(&uri)?;\n            }\n            Scheme::Inet => {\n                parse_inet_bind_uri(&uri)?;\n            }\n        }\n\n        Ok(Self(uri))\n    }\n}\n\nimpl From<String> for BindUri {\n    #[inline]\n    fn from(value: String) -> Self {\n        match BindUri::from_str(&value) {\n            Ok(bind_uri) => bind_uri,\n            Err(e) => panic!(\"bind uri should be valid: {e}\"),\n        }\n    }\n}\n\nimpl From<&str> for BindUri {\n    #[inline]\n    fn from(value: &str) -> Self {\n        match BindUri::from_str(value) {\n            Ok(bind_uri) => bind_uri,\n            Err(e) => panic!(\"bind uri should be valid: {e}\"),\n        }\n    }\n}\n\nimpl From<SocketAddr> for BindUri {\n    #[inline]\n    fn from(value: SocketAddr) -> Self {\n        match BindUri::from_str(&format!(\"inet://{value}\")) {\n            Ok(bind_uri) => bind_uri,\n            Err(e) => panic!(\"{e}\"),\n        }\n    }\n}\n\nimpl<T: Copy + Into<BindUri>> From<&T> for BindUri {\n    #[inline]\n    fn 
from(value: &T) -> Self {\n        (*value).into()\n    }\n}\n\nimpl From<&BindUri> for BindUri {\n    #[inline]\n    fn from(value: &BindUri) -> Self {\n        value.clone()\n    }\n}\n\nimpl BindUri {\n    pub const TEMPORARY_PROP: &str = \"temporary\";\n    pub const STUN_PROP: &str = \"stun\";\n    pub const STUN_SERVER_PROP: &str = \"stun_server\";\n    pub const RELAY_PROP: &str = \"relay\";\n\n    pub fn scheme(&self) -> Scheme {\n        self.0\n            .scheme()\n            .expect(\"Invalid BindUri: Missing scheme\")\n            .as_str()\n            .parse()\n            .expect(\"Invalid BindUri: Invalid scheme\")\n    }\n\n    #[inline]\n    pub fn as_uri(&self) -> &http::Uri {\n        &self.0\n    }\n\n    pub fn family(&self) -> Family {\n        match self.scheme() {\n            Scheme::Iface => {\n                self.as_iface_bind_uri()\n                    .expect(\"Already checked BindUriScheme is iface\")\n                    .0\n            }\n            Scheme::Inet => {\n                match self\n                    .as_inet_bind_uri()\n                    .expect(\"Already checked BindUriScheme is inet\")\n                {\n                    SocketAddr::V4(_) => Family::V4,\n                    SocketAddr::V6(_) => Family::V6,\n                }\n            }\n        }\n    }\n\n    pub fn as_iface_bind_uri(&self) -> Option<(Family, &str, u16)> {\n        if self.scheme() != Scheme::Iface {\n            return None;\n        }\n        Some(parse_iface_bind_uri(&self.0).expect(\"BindUri should be valid\"))\n    }\n\n    pub fn as_inet_bind_uri(&self) -> Option<SocketAddr> {\n        if self.scheme() != Scheme::Inet {\n            return None;\n        }\n        Some(parse_inet_bind_uri(&self.0).expect(\"BindUri should be valid\"))\n    }\n\n    pub fn add_prop(&mut self, key: &str, value: &str) {\n        let mut uri_parts = self.0.clone().into_parts();\n        uri_parts.path_and_query = uri_parts.path_and_query.map(|pq| 
{\n            let query = match pq.query() {\n                Some(exist_query) => format!(\"{exist_query}&{key}={value}\"),\n                None => format!(\"{key}={value}\"),\n            };\n            format!(\"{}?{}\", pq.path(), query)\n                .parse()\n                .expect(\"Path and query should be valid\")\n        });\n        self.0 = http::Uri::from_parts(uri_parts).expect(\"BindUri should be valid\");\n    }\n\n    pub const ALLOC_PORT_ID: &'static str = \"alloc_port_id\";\n\n    pub fn alloc_port(&self) -> Self {\n        match self.scheme() {\n            Scheme::Iface => {\n                let (.., port) = self\n                    .as_iface_bind_uri()\n                    .expect(\"Already checked BindUriScheme is iface\");\n                assert_eq!(port, 0, \"Only port 0 is allocatable\");\n            }\n            Scheme::Inet => {\n                let addr = self\n                    .as_inet_bind_uri()\n                    .expect(\"Already checked BindUriScheme is inet\");\n                assert_eq!(addr.port(), 0, \"Only port 0 is allocatable\");\n            }\n        }\n\n        let mut new_uri = self.clone();\n\n        static ID_GENERATOR: UniqueIdGenerator = UniqueIdGenerator::new();\n        let alloc_port_id = usize::from(ID_GENERATOR.generate()).to_string();\n        new_uri.add_prop(Self::ALLOC_PORT_ID, &alloc_port_id);\n\n        new_uri\n    }\n\n    #[inline]\n    pub fn prop(&self, key: &str) -> Option<Cow<'_, str>> {\n        // http://127.0.0.1/fx     ?key=value\n        self.0\n            .query()?\n            .split('&')\n            .find_map(|pair| match pair.split_once('=') {\n                Some((k, v)) if k == key => Some(Cow::Borrowed(v)),\n                None if pair == key => Some(Cow::Borrowed(\"\")),\n                _ => None,\n            })\n    }\n\n    pub fn is_temporary(&self) -> bool {\n        match self.prop(Self::TEMPORARY_PROP) {\n            Some(bool) if bool == \"true\" => 
true,\n            None | Some(..) => false,\n        }\n    }\n\n    pub fn enable_stun(&mut self) {\n        self.add_prop(Self::STUN_PROP, \"true\");\n    }\n\n    pub fn is_stun_enabled(&self) -> bool {\n        match self.prop(Self::STUN_PROP) {\n            Some(bool) if bool == \"true\" => true,\n            None | Some(..) => false,\n        }\n    }\n\n    pub fn with_stun_server(mut self, stun_server: &str) -> Self {\n        self.add_prop(Self::STUN_SERVER_PROP, stun_server);\n        self\n    }\n\n    pub fn stun_server(&self) -> Option<Cow<'_, str>> {\n        self.prop(Self::STUN_SERVER_PROP)\n    }\n\n    // TODO: change to bool flag\n    pub fn with_relay(mut self, relay: &str) -> Self {\n        self.add_prop(Self::RELAY_PROP, relay);\n        self\n    }\n\n    pub fn relay(&self) -> Option<Cow<'_, str>> {\n        self.prop(Self::RELAY_PROP)\n    }\n\n    /// Returns a canonical key for reconciliation purposes.\n    ///\n    /// Strips ephemeral query parameters (like `alloc_port_id`) so that two\n    /// `BindUri`s pointing at the same interface/port compare as equal even\n    /// when produced by separate `alloc_port()` calls.\n    pub fn identity_key(&self) -> String {\n        let uri = &self.0;\n        let mut parts = uri.clone().into_parts();\n        parts.path_and_query = parts.path_and_query.map(|pq| {\n            pq.path()\n                .parse()\n                .expect(\"path portion should always be valid\")\n        });\n        http::Uri::from_parts(parts)\n            .expect(\"BindUri without query should be valid\")\n            .to_string()\n    }\n\n    pub fn resolve(&self) -> Result<SocketAddr, io::Error> {\n        match self.scheme() {\n            Scheme::Iface => {\n                let (ip_family, interface, port) = self\n                    .as_iface_bind_uri()\n                    .expect(\"Already checked BindUriScheme is iface\");\n\n                let devices = crate::device::Devices::global();\n               
 devices.get(interface).ok_or(io::Error::new(\n                    io::ErrorKind::NotFound,\n                    \"device not found\".to_string(),\n                ))?;\n                let ip_addr = devices.resolve(interface, ip_family).ok_or(io::Error::new(\n                    io::ErrorKind::NotFound,\n                    \"ip not matched\".to_string(),\n                ))?;\n\n                Ok(SocketAddr::new(ip_addr, port))\n            }\n            Scheme::Inet => Ok(self\n                .as_inet_bind_uri()\n                .expect(\"Already checked BindUriScheme is inet\")),\n        }\n    }\n}\n\nimpl TryFrom<&BindUri> for SocketAddr {\n    type Error = io::Error;\n\n    fn try_from(bind_uri: &BindUri) -> Result<Self, Self::Error> {\n        bind_uri.resolve()\n    }\n}\n\nimpl TryFrom<BindUri> for SocketAddr {\n    type Error = io::Error;\n\n    fn try_from(bind_uri: BindUri) -> Result<Self, Self::Error> {\n        SocketAddr::try_from(&bind_uri)\n    }\n}\n\n#[non_exhaustive]\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub enum Scheme {\n    Iface,\n    Inet,\n}\n\nimpl Scheme {\n    pub const fn to_str(&self) -> &'static str {\n        match self {\n            Scheme::Iface => \"iface\",\n            Scheme::Inet => \"inet\",\n        }\n    }\n}\n\nimpl From<Scheme> for http::uri::Scheme {\n    fn from(value: Scheme) -> Self {\n        value\n            .to_str()\n            .parse()\n            .expect(\"BindUriScheme should be valid URI scheme\")\n    }\n}\n\nimpl FromStr for Scheme {\n    type Err = String;\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        match s {\n            \"iface\" => Ok(Scheme::Iface),\n            \"inet\" => Ok(Scheme::Inet),\n            other => Err(other.to_string()),\n        }\n    }\n}\n\nimpl Display for Scheme {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        self.to_str().fmt(f)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    
#[test]\n    fn invalid_uri() {\n        assert!(matches!(\n            BindUri::from_str(\"iface://\"),\n            Err(ParseError::InvalidUri(_))\n        ));\n    }\n\n    #[test]\n    fn missing_scheme() {\n        assert!(matches!(\n            BindUri::from_str(\"invalid_uri\"),\n            Err(ParseError::NoScheme)\n        ));\n    }\n\n    #[test]\n    fn invalid_scheme() {\n        assert!(matches!(\n            BindUri::from_str(\"invalid://example.com\"),\n            Err(ParseError::Unsupported(_))\n        ));\n    }\n\n    #[test]\n    fn has_path() {\n        assert!(matches!(\n            BindUri::from_str(\"iface://v4.wlan0/1234\"),\n            Err(ParseError::Malformed)\n        ));\n    }\n\n    #[test]\n    fn missing_ip_family() {\n        assert!(matches!(\n            BindUri::from_str(\"iface://wlan0:8080\"),\n            Err(ParseError::NoFamily)\n        ));\n    }\n\n    #[test]\n    fn missing_port() {\n        assert!(matches!(\n            BindUri::from_str(\"iface://v4.wlan0\"),\n            Err(ParseError::NoPort)\n        ));\n    }\n\n    #[test]\n    fn invalid_ip_family() {\n        assert!(matches!(\n            BindUri::from_str(\"iface://invalid.wlan0:8080\"),\n            Err(ParseError::UnknownFamily)\n        ));\n    }\n\n    #[test]\n    fn invalid_ip_addr() {\n        assert!(matches!(\n            BindUri::from_str(\"inet://example.com:8080\"),\n            Err(ParseError::InvalidIpAddr(..))\n        ));\n    }\n\n    #[test]\n    fn iface_bind_uri() {\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080?temporary=true\").unwrap();\n        assert_eq!(bind_uri.scheme(), Scheme::Iface);\n        let (family, interface, port) = bind_uri.as_iface_bind_uri().unwrap();\n        assert_eq!(family, Family::V4);\n        assert_eq!(interface, \"wlan0\");\n        assert_eq!(port, 8080);\n        assert_eq!(\n            bind_uri.prop(BindUri::TEMPORARY_PROP).as_deref(),\n            Some(\"true\")\n        );\n 
   }\n\n    #[test]\n    fn inet_bind_uri() {\n        let bind_uri = BindUri::from_str(\"inet://127.0.0.1:7777\").unwrap();\n        assert_eq!(bind_uri.scheme(), Scheme::Inet);\n        let addr = bind_uri.as_inet_bind_uri().unwrap();\n        assert_eq!(\n            addr,\n            SocketAddr::new(IpAddr::V4(\"127.0.0.1\".parse().unwrap()), 7777)\n        );\n        assert!(bind_uri.as_uri().query().is_none());\n    }\n\n    // tokio runtime required for device listing\n    #[tokio::test]\n    async fn interface_not_found() {\n        let bind_uri = BindUri::from_str(\n            \"iface://v4.ygiubiougbuyasiudbahsdbadfbkjadbhvkjabvckagdoiuehfjoiajhrpfhrbovhaelvkamdjkfs:8080\",\n        )\n        .unwrap();\n        assert!(SocketAddr::try_from(bind_uri).is_err_and(|e| e.kind() == io::ErrorKind::NotFound))\n    }\n\n    #[test]\n    fn to_socket_addr() {\n        let bind_uri = BindUri::from_str(\"inet://127.0.0.1:8080\").unwrap();\n        assert_eq!(\n            SocketAddr::try_from(bind_uri).unwrap(),\n            \"127.0.0.1:8080\".parse().unwrap()\n        );\n    }\n\n    #[test]\n    fn alloc_port() {\n        let bind_uri = BindUri::from_str(\"inet://0.0.0.0:0\").unwrap();\n        assert_ne!(bind_uri.clone().alloc_port(), bind_uri.clone().alloc_port());\n    }\n\n    #[test]\n    #[should_panic]\n    fn alloc_port_for_non_zero_port1() {\n        let bind_uri = BindUri::from_str(\"inet://127.0.0.1:8080\").unwrap();\n        bind_uri.alloc_port();\n    }\n\n    #[test]\n    #[should_panic]\n    fn alloc_port_for_non_zero_port2() {\n        let bind_uri = BindUri::from_str(\"iface://v4.lo:12345\").unwrap();\n        bind_uri.alloc_port();\n    }\n\n    #[test]\n    fn temporary() {\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080?temporary=true\").unwrap();\n        assert!(bind_uri.is_temporary());\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080?temporary=false\").unwrap();\n        
assert!(!bind_uri.is_temporary());\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080\").unwrap();\n        assert!(!bind_uri.is_temporary());\n        let bind_uri =\n            BindUri::from_str(\"iface://v4.C5563ED1-2BC9-42C5-8177-59F2F0AF37C8:8080\").unwrap();\n        assert!(!bind_uri.is_temporary());\n\n        let mut bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080\").unwrap();\n        bind_uri.add_prop(BindUri::TEMPORARY_PROP, \"true\");\n        assert_eq!(\n            bind_uri.to_string(),\n            \"iface://v4.wlan0:8080/?temporary=true\"\n        );\n        assert!(bind_uri.is_temporary());\n    }\n\n    #[test]\n    fn stun_enabled() {\n        let mut bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080\").unwrap();\n        assert!(!bind_uri.is_stun_enabled());\n\n        bind_uri.enable_stun();\n        assert!(bind_uri.is_stun_enabled());\n\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080?stun=true\").unwrap();\n        assert!(bind_uri.is_stun_enabled());\n\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080?stun=false\").unwrap();\n        assert!(!bind_uri.is_stun_enabled());\n    }\n\n    #[test]\n    fn stun_server() {\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080\").unwrap();\n        assert!(bind_uri.stun_server().is_none());\n\n        let bind_uri = bind_uri.with_stun_server(\"stun.example.com:3478\");\n        assert_eq!(\n            bind_uri.stun_server().as_deref(),\n            Some(\"stun.example.com:3478\")\n        );\n\n        let bind_uri =\n            BindUri::from_str(\"iface://v4.wlan0:8080?stun_server=stun.genmeta.net\").unwrap();\n        assert_eq!(bind_uri.stun_server().as_deref(), Some(\"stun.genmeta.net\"));\n    }\n\n    #[test]\n    fn relay() {\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080\").unwrap();\n        assert!(bind_uri.relay().is_none());\n\n        let bind_uri = 
bind_uri.with_relay(\"turn.example.com:3478\");\n        assert_eq!(bind_uri.relay().as_deref(), Some(\"turn.example.com:3478\"));\n\n        let bind_uri = BindUri::from_str(\"iface://v4.wlan0:8080?relay=turn.genmeta.net\").unwrap();\n        assert_eq!(bind_uri.relay().as_deref(), Some(\"turn.genmeta.net\"));\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component/alive.rs",
    "content": "use std::{\n    fmt::Debug,\n    io,\n    net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},\n    pin::Pin,\n    sync::{Mutex, MutexGuard},\n    task::{Context, Poll, ready},\n};\n\nuse qbase::net::route::{Line, Link, Route};\nuse thiserror::Error;\nuse tokio::net::UdpSocket;\nuse tokio_util::task::AbortOnDropHandle;\n\nuse crate::{\n    Interface, RebindedError,\n    component::Component,\n    device::Devices,\n    io::{IO, IoExt},\n};\n\n#[derive(Debug, Error)]\npub enum InterfaceFailure {\n    #[error(\"Invalid QuicIO implementation\")]\n    InvalidImplementation,\n    #[error(\"Interface is broken: {0}\")]\n    InterfaceBroken(io::Error),\n    #[error(\"Real address does not match bind URI\")]\n    AddressMismatch,\n    #[error(\"Failed to bind test socket: {0}\")]\n    TestSocketBindFailed(io::Error),\n    #[error(\"Failed to send test packet: {0}\")]\n    SendTestFailed(io::Error),\n}\n\nimpl From<io::Error> for InterfaceFailure {\n    fn from(error: io::Error) -> Self {\n        Self::TestSocketBindFailed(error)\n    }\n}\n\nimpl InterfaceFailure {\n    pub fn is_recoverable(&self) -> bool {\n        matches!(\n            self,\n            Self::InterfaceBroken(..) 
| Self::AddressMismatch | Self::SendTestFailed(..)\n        )\n    }\n}\n\npub async fn is_alive(iface: &(impl IO + ?Sized)) -> Result<(), InterfaceFailure> {\n    let bound_addr = iface\n        .bound_addr()\n        .map_err(InterfaceFailure::InterfaceBroken)?;\n\n    let socket_addr = SocketAddr::try_from(&iface.bind_uri())?;\n\n    // Check if addresses match\n    if !(bound_addr.ip() == socket_addr.ip()\n        && (socket_addr.port() == 0 || bound_addr.port() == socket_addr.port()))\n    {\n        return Err(InterfaceFailure::AddressMismatch);\n    }\n\n    // Test connectivity with a local socket\n    let localhost = match bound_addr.ip() {\n        IpAddr::V4(ip) if ip.is_unspecified() => Ipv4Addr::LOCALHOST.into(),\n        IpAddr::V4(ip) => ip.into(),\n        IpAddr::V6(ip) if ip.is_unspecified() => Ipv6Addr::LOCALHOST.into(),\n        IpAddr::V6(ip) => ip.into(),\n    };\n    let socket = UdpSocket::bind(SocketAddr::new(localhost, 0))\n        .await\n        .map_err(InterfaceFailure::TestSocketBindFailed)?;\n    let dst_addr = socket\n        .local_addr()\n        .map_err(InterfaceFailure::TestSocketBindFailed)?;\n\n    // Send test packet\n    let link = Link::new(bound_addr, dst_addr);\n    let packets = [io::IoSlice::new(&[0; 1])];\n    let line = Line::new(link, 64, None, packets[0].len() as u16);\n    let header = Route::new(link.into(), line);\n\n    iface\n        .sendmmsg(&packets, header)\n        .await\n        .map_err(InterfaceFailure::SendTestFailed)?;\n\n    Ok(())\n}\n\n#[derive(Debug)]\npub struct RebindOnNetworkChangedComponent {\n    devices: &'static Devices,\n    task: Mutex<Option<AbortOnDropHandle<()>>>,\n}\n\nimpl RebindOnNetworkChangedComponent {\n    pub fn new(iface: &Interface, devices: &'static Devices) -> Self {\n        let component = Self {\n            devices,\n            task: Mutex::new(None),\n        };\n        component.init(iface);\n        component\n    }\n\n    fn lock_task(&self) -> MutexGuard<'_, 
Option<AbortOnDropHandle<()>>> {\n        self.task\n            .lock()\n            .expect(\"RebindOnNetworkChanged task mutex poisoned\")\n    }\n\n    fn init(&self, iface: &Interface) {\n        let mut task = self.lock_task();\n        if !task.as_ref().is_none_or(|t| t.is_finished()) {\n            return;\n        }\n\n        let bind_uri = iface.bind_uri();\n        if bind_uri.is_temporary() {\n            return;\n        }\n        let Some((_, device, ..)) = bind_uri.as_iface_bind_uri() else {\n            return;\n        };\n\n        let device = device.to_owned();\n        let weak_iface = iface.bind_interface().downgrade();\n        let mut event_receiver = self.devices.event_receiver();\n        *task = Some(AbortOnDropHandle::new(tokio::spawn(async move {\n            let try_rebind = async move || {\n                if let Ok(iface) = weak_iface.upgrade()\n                    && let Err(error) = is_alive(&iface.borrow()).await\n                    && error.is_recoverable()\n                    && !RebindedError::is_source_of(&error)\n                {\n                    iface.rebind().await;\n                }\n            };\n\n            try_rebind().await;\n            while let Some(event) = event_receiver.recv().await {\n                if event.device() != device {\n                    continue;\n                }\n                try_rebind().await;\n            }\n        })));\n    }\n}\n\nimpl Component for RebindOnNetworkChangedComponent {\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()> {\n        let mut task_guard = self.lock_task();\n        if let Some(task) = task_guard.as_mut() {\n            task.abort();\n            _ = ready!(Pin::new(task).poll(cx));\n            *task_guard = None;\n        }\n        Poll::Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        self.init(iface);\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component/location.rs",
    "content": "use std::{\n    any::{Any, TypeId},\n    collections::{HashMap, hash_map},\n    fmt::Debug,\n    ops::Deref,\n    sync::{Arc, LazyLock, Mutex, MutexGuard},\n    task::{Context, Poll},\n};\n\nuse qbase::util::{UniqueId, UniqueIdGenerator};\nuse tokio::sync::mpsc;\nuse tokio_util::task::AbortOnDropHandle;\n\nuse crate::{\n    BindUri, Interface, WeakInterface,\n    component::Component,\n    io::{IO, RefIO},\n};\n\n#[derive(Debug)]\npub enum AddressEvent<D: ?Sized = dyn Any + Send + Sync> {\n    Upsert(Arc<D>),\n    Remove(TypeId),\n    Closed,\n}\n\nimpl<D: ?Sized> Clone for AddressEvent<D> {\n    fn clone(&self) -> Self {\n        match self {\n            Self::Upsert(arg0) => Self::Upsert(arg0.clone()),\n            Self::Remove(arg0) => Self::Remove(*arg0),\n            Self::Closed => Self::Closed,\n        }\n    }\n}\n\n// TODO： 固定类型\nimpl AddressEvent {\n    pub fn downcast<D: Any + Send + Sync>(self) -> Result<AddressEvent<D>, Self> {\n        match self {\n            AddressEvent::Upsert(data) => match data.downcast::<D>() {\n                Ok(data) => Ok(AddressEvent::Upsert(data)),\n                Err(data) => Err(AddressEvent::Upsert(data)),\n            },\n            AddressEvent::Remove(type_id) => match TypeId::of::<D>() == type_id {\n                true => Ok(AddressEvent::Remove(type_id)),\n                false => Err(AddressEvent::Remove(type_id)),\n            },\n            AddressEvent::Closed => Ok(AddressEvent::Closed),\n        }\n    }\n}\n\ntype EventSender = mpsc::UnboundedSender<(BindUri, AddressEvent)>;\ntype EventReceiver = mpsc::UnboundedReceiver<(BindUri, AddressEvent)>;\n\nstruct EventPublisher {\n    subscriber_id_generator: UniqueIdGenerator,\n    datas: HashMap<BindUri, HashMap<TypeId, Arc<dyn Any + Send + Sync>>>,\n    subscribers: HashMap<UniqueId, EventSender>,\n}\n\nimpl EventPublisher {\n    pub fn new() -> Self {\n        Self {\n            subscriber_id_generator: UniqueIdGenerator::new(),\n        
    datas: HashMap::new(),\n            subscribers: HashMap::new(),\n        }\n    }\n\n    pub fn publish_event(&mut self, bind_uri: BindUri, event: AddressEvent) {\n        // 1. update state\n        match event.clone() {\n            AddressEvent::Upsert(data) => {\n                let type_id = data.as_ref().type_id();\n                self.datas\n                    .entry(bind_uri.clone())\n                    .or_default()\n                    .insert(type_id, data);\n            }\n            AddressEvent::Remove(type_id) => {\n                let entry = self.datas.entry(bind_uri.clone());\n                if let hash_map::Entry::Occupied(mut entry) = entry {\n                    entry.get_mut().remove(&type_id);\n                    if entry.get().is_empty() {\n                        entry.remove_entry();\n                    }\n                }\n            }\n            AddressEvent::Closed => _ = self.datas.remove(&bind_uri),\n        }\n        // 2. forward event to subscribers\n        self.subscribers\n            .retain(|_, subscriber| subscriber.send((bind_uri.clone(), event.clone())).is_ok());\n    }\n\n    pub fn register_subscriber(&mut self, subscriber: EventSender) {\n        let subscriber_id = self.subscriber_id_generator.generate();\n        for (bind_uri, datas) in &self.datas {\n            for (.., data) in datas {\n                let event = AddressEvent::Upsert(data.clone());\n                if subscriber.send((bind_uri.clone(), event)).is_err() {\n                    // EventReceiver disconnected, so we skip registering this subscriber.\n                    return;\n                }\n            }\n        }\n        self.subscribers.insert(subscriber_id, subscriber);\n    }\n}\n\n#[derive(Debug)]\npub struct Locations {\n    new_event_tx: EventSender,\n    new_subscriber_tx: mpsc::UnboundedSender<EventSender>,\n    _publisher_task: AbortOnDropHandle<()>,\n}\n\nimpl Default for Locations {\n    fn default() -> Self {\n    
    Self::new()\n    }\n}\n\nimpl Locations {\n    pub fn new() -> Self {\n        let (new_event_tx, mut new_event_rx) = mpsc::unbounded_channel::<(BindUri, AddressEvent)>();\n        let (new_subscriber_tx, mut new_subscriber_rx) = mpsc::unbounded_channel();\n\n        let _publisher_task = AbortOnDropHandle::new(tokio::spawn(async move {\n            let mut publisher = EventPublisher::new();\n\n            loop {\n                tokio::select! {\n                    Some((bind_uri, event)) = new_event_rx.recv() => {\n                        publisher.publish_event(bind_uri, event);\n                    }\n                    Some(new_subscriber) = new_subscriber_rx.recv() => {\n                        publisher.register_subscriber(new_subscriber);\n                    }\n                    else => break\n                }\n            }\n        }));\n\n        Self {\n            new_event_tx,\n            new_subscriber_tx,\n            _publisher_task,\n        }\n    }\n\n    pub fn global() -> &'static Arc<Self> {\n        static GLOBAL: LazyLock<Arc<Locations>> = LazyLock::new(|| Arc::new(Locations::new()));\n        &GLOBAL\n    }\n\n    pub fn publish(&self, bind_uri: BindUri, event: AddressEvent) {\n        _ = self.new_event_tx.send((bind_uri, event));\n    }\n\n    pub fn upsert<D: Any + Send + Sync + Debug>(&self, bind_uri: BindUri, data: Arc<D>) {\n        self.publish(bind_uri, AddressEvent::Upsert(data));\n    }\n\n    pub fn remove<D: Any + Send + Sync>(&self, bind_uri: BindUri) {\n        self.publish(bind_uri, AddressEvent::Remove(TypeId::of::<D>()));\n    }\n\n    pub fn close(&self, bind_uri: BindUri) {\n        self.publish(bind_uri, AddressEvent::Closed);\n    }\n\n    pub fn subscribe(&self) -> Observer {\n        let (tx, rx) = mpsc::unbounded_channel();\n        // Register the new subscriber.\n        _ = self.new_subscriber_tx.send(tx);\n        Observer { receiver: rx }\n    }\n}\n\npub struct Observer {\n    receiver: 
EventReceiver,\n}\n\nimpl Observer {\n    pub async fn recv(&mut self) -> Option<(BindUri, AddressEvent)> {\n        self.receiver.recv().await\n    }\n\n    pub fn try_recv(&mut self) -> Result<(BindUri, AddressEvent), mpsc::error::TryRecvError> {\n        self.receiver.try_recv()\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct IfaceLocations<I> {\n    locations: Arc<Locations>,\n    ref_iface: Arc<Mutex<I>>,\n}\n\nimpl<I: RefIO + 'static> IfaceLocations<I> {\n    pub fn new(ref_iface: I, locations: Arc<Locations>) -> Self {\n        locations.upsert(\n            ref_iface.iface().bind_uri(),\n            Arc::new(ref_iface.iface().bound_addr()),\n        );\n\n        Self {\n            locations,\n            ref_iface: Arc::new(Mutex::new(ref_iface)),\n        }\n    }\n\n    fn lock_ref_iface(&self) -> MutexGuard<'_, I> {\n        self.ref_iface.lock().expect(\"Mutex poisoned\")\n    }\n\n    /// Scope operation to the newest interface.\n    pub fn r#for<R>(&self, ref_iface: &R, f: impl FnOnce(&Locations, BindUri))\n    where\n        R: RefIO + 'static,\n    {\n        let current_iface = self.lock_ref_iface();\n        let current_iface = current_iface.deref();\n        if !(ref_iface as &dyn Any)\n            .downcast_ref::<I>()\n            .is_some_and(|ref_iface| ref_iface.same_io(current_iface))\n        {\n            return;\n        }\n        f(&self.locations, current_iface.iface().bind_uri());\n    }\n}\n\npub type LocationsComponent = IfaceLocations<WeakInterface>;\n\nimpl Component for LocationsComponent {\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()> {\n        _ = cx;\n        Poll::Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        let mut ref_iface = self.lock_ref_iface();\n        if iface.downgrade().same_io(ref_iface.deref()) {\n            return;\n        }\n        *ref_iface = iface.downgrade();\n        let bind_uri = iface.bind_uri();\n\n        self.locations.close(bind_uri.clone());\n       
 self.locations\n            .upsert(bind_uri.clone(), Arc::new(iface.bound_addr()));\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component/route/handler.rs",
    "content": "use std::sync::{Mutex, MutexGuard};\n\nuse qbase::packet::Packet;\n\nuse super::Way;\n\npub type PacketSink<P = Packet> = Box<dyn Fn(P, Way) + Send>;\n\npub struct PacketHandler<P = Packet>(Mutex<Option<PacketSink<P>>>);\n\nimpl<P> std::fmt::Debug for PacketHandler<P> {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"PacketHandler\").finish()\n    }\n}\n\nimpl<P> Default for PacketHandler<P> {\n    fn default() -> Self {\n        Self::drain()\n    }\n}\n\nimpl<P> PacketHandler<P> {\n    pub fn new<S>(sink: PacketSink<P>) -> Self {\n        Self(Mutex::new(Some(sink)))\n    }\n\n    pub(crate) fn lock(&self) -> MutexGuard<'_, Option<PacketSink<P>>> {\n        self.0.lock().expect(\"PacketHandler mutex poisoned\")\n    }\n\n    pub fn drain() -> PacketHandler<P> {\n        PacketHandler(Mutex::new(None))\n    }\n\n    pub fn update(&self, handler: PacketSink<P>) {\n        *self.lock() = Some(handler);\n    }\n\n    pub fn is_drain(&self) -> bool {\n        self.lock().is_none()\n    }\n\n    pub fn take(&self) -> Option<PacketSink<P>> {\n        self.lock().take()\n    }\n\n    pub fn deliver(&self, packet: P, way: Way) {\n        if let Some(sink) = self.lock().as_mut() {\n            sink(packet, way);\n        }\n    }\n\n    pub fn deliver_packets(&self, packets: impl IntoIterator<Item = (P, Way)>) {\n        if let Some(sink) = self.lock().as_mut() {\n            for (packet, way) in packets {\n                sink(packet, way);\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component/route/packet.rs",
    "content": "use bytes::{Bytes, BytesMut};\nuse derive_more::Deref;\nuse qbase::{\n    error::QuicError,\n    packet::{\n        decrypt::{\n            decrypt_packet, remove_protection_of_long_packet, remove_protection_of_short_packet,\n        },\n        header::long::InitialHeader,\n        keys::ArcOneRttPacketKeys,\n        number::{InvalidPacketNumber, PacketNumber},\n    },\n};\nuse qevent::quic::{\n    PacketHeader, PacketHeaderBuilder, QuicFrame,\n    transport::{PacketDropped, PacketDroppedTrigger, PacketReceived},\n};\nuse rustls::quic::{HeaderProtectionKey, PacketKey};\n\n#[derive(Debug, Deref)]\npub struct CipherPacket<H> {\n    #[deref]\n    header: H,\n    payload: BytesMut,\n    payload_offset: usize,\n}\n\nimpl<H> CipherPacket<H>\nwhere\n    PacketHeaderBuilder: for<'a> From<&'a H>,\n{\n    pub fn new(header: H, payload: BytesMut, payload_offset: usize) -> Self {\n        Self {\n            header,\n            payload,\n            payload_offset,\n        }\n    }\n\n    pub fn header(&self) -> &H {\n        &self.header\n    }\n\n    fn qlog_header(&self) -> PacketHeader {\n        PacketHeaderBuilder::from(&self.header).build()\n    }\n\n    pub fn drop_on_key_unavailable(self) {\n        qevent::event!(PacketDropped {\n            header: self.qlog_header(),\n            raw: self.payload.freeze(),\n            trigger: PacketDroppedTrigger::KeyUnavailable\n        })\n    }\n\n    fn drop_on_remove_header_protection_failure(self) {\n        qevent::event!(\n            PacketDropped {\n                header: self.qlog_header(),\n                raw: self.payload.freeze(),\n                trigger: PacketDroppedTrigger::DecryptionFailure\n            },\n            details = Map {\n                reason: \"remove header protection failure\"\n            }\n        );\n    }\n\n    fn drop_on_decryption_failure(self, error: qbase::packet::error::Error, pn: u64) {\n        qevent::event!(\n            PacketDropped {\n                
header: {\n                    PacketHeaderBuilder::from(&self.header)\n                        .packet_number(pn)\n                        .build()\n                },\n                raw: self.payload.freeze(),\n                trigger: PacketDroppedTrigger::DecryptionFailure\n            },\n            details = Map {\n                reason: \"decryption failure\",\n                error: error.to_string(),\n            },\n        )\n    }\n\n    fn drop_on_reverse_bit_error(self, error: &qbase::packet::error::Error) {\n        qevent::event!(\n            PacketDropped {\n                header: self.qlog_header(),\n                raw: self.payload.freeze(),\n                trigger: PacketDroppedTrigger::Invalid,\n            },\n            details = Map {\n                reason: \"reverse bit error\",\n                error: error.to_string()\n            },\n        )\n    }\n\n    fn drop_on_invalid_pn(self, invalid_pn: InvalidPacketNumber) {\n        qevent::event!(\n            PacketDropped {\n                header: self.qlog_header(),\n                raw: self.payload.freeze(),\n                trigger: PacketDroppedTrigger::Invalid,\n            },\n            details = Map {\n                reason: \"invalid packet number\",\n                invalid_pn: invalid_pn.to_string()\n            },\n        )\n    }\n\n    pub fn payload_len(&self) -> usize {\n        self.payload.len()\n    }\n\n    pub fn decrypt_long_packet(\n        mut self,\n        hpk: &dyn HeaderProtectionKey,\n        pk: &dyn PacketKey,\n        pn_decoder: impl FnOnce(PacketNumber) -> Result<u64, InvalidPacketNumber>,\n    ) -> Option<Result<PlainPacket<H>, QuicError>> {\n        let pkt_buf = self.payload.as_mut();\n        let undecoded_pn = match remove_protection_of_long_packet(hpk, pkt_buf, self.payload_offset)\n        {\n            Ok(Some(undecoded_pn)) => undecoded_pn,\n            Ok(None) => {\n                
self.drop_on_remove_header_protection_failure();\n                return None;\n            }\n            Err(invalid_reverse_bits) => {\n                self.drop_on_reverse_bit_error(&invalid_reverse_bits);\n                return Some(Err(invalid_reverse_bits.into()));\n            }\n        };\n        let decoded_pn = match pn_decoder(undecoded_pn) {\n            Ok(pn) => pn,\n            Err(invalid_packet_number) => {\n                self.drop_on_invalid_pn(invalid_packet_number);\n                return None;\n            }\n        };\n        let body_offset = self.payload_offset + undecoded_pn.size();\n        let body_length = match decrypt_packet(pk, decoded_pn, pkt_buf, body_offset) {\n            Ok(body_length) => body_length,\n            Err(error) => {\n                self.drop_on_decryption_failure(error, decoded_pn);\n                return None;\n            }\n        };\n\n        Some(Ok(PlainPacket {\n            header: self.header,\n            plain: self.payload.freeze(),\n            payload_offset: self.payload_offset,\n            undecoded_pn,\n            decoded_pn,\n            body_len: body_length,\n        }))\n    }\n\n    pub fn decrypt_short_packet(\n        mut self,\n        hpk: &dyn HeaderProtectionKey,\n        pk: &ArcOneRttPacketKeys,\n        pn_decoder: impl FnOnce(PacketNumber) -> Result<u64, InvalidPacketNumber>,\n    ) -> Option<Result<PlainPacket<H>, QuicError>> {\n        let pkt_buf = self.payload.as_mut();\n        let (undecoded_pn, key_phase) =\n            match remove_protection_of_short_packet(hpk, pkt_buf, self.payload_offset) {\n                Ok(Some((undecoded, key_phase))) => (undecoded, key_phase),\n                Ok(None) => {\n                    self.drop_on_remove_header_protection_failure();\n                    return None;\n                }\n                Err(invalid_reverse_bits) => {\n                    self.drop_on_reverse_bit_error(&invalid_reverse_bits);\n                   
 return Some(Err(invalid_reverse_bits.into()));\n                }\n            };\n        let decoded_pn = match pn_decoder(undecoded_pn) {\n            Ok(pn) => pn,\n            Err(invalid_pn) => {\n                self.drop_on_invalid_pn(invalid_pn);\n                return None;\n            }\n        };\n        let pk = pk.lock_guard().get_remote(key_phase, decoded_pn);\n        let body_offset = self.payload_offset + undecoded_pn.size();\n        let body_length = match decrypt_packet(pk.as_ref(), decoded_pn, pkt_buf, body_offset) {\n            Ok(body_length) => body_length,\n            Err(error) => {\n                self.drop_on_decryption_failure(error, decoded_pn);\n                return None;\n            }\n        };\n\n        Some(Ok(PlainPacket {\n            header: self.header,\n            plain: self.payload.freeze(),\n            payload_offset: self.payload_offset,\n            undecoded_pn,\n            decoded_pn,\n            body_len: body_length,\n        }))\n    }\n}\n\nimpl CipherPacket<InitialHeader> {\n    pub fn drop_on_scid_unmatch(self) {\n        qevent::event!(\n            PacketDropped {\n                header: self.qlog_header(),\n                raw: self.payload.freeze(),\n                trigger: PacketDroppedTrigger::Rejected\n            },\n            details = Map {\n                reason: \"different scid with first initial packet\"\n            },\n        )\n    }\n}\n\n#[derive(Deref)]\npub struct PlainPacket<H> {\n    #[deref]\n    header: H,\n    decoded_pn: u64,\n    undecoded_pn: PacketNumber,\n    plain: Bytes,\n    payload_offset: usize,\n    body_len: usize,\n}\n\nimpl<H> PlainPacket<H> {\n    pub fn size(&self) -> usize {\n        self.plain.len()\n    }\n\n    pub fn pn(&self) -> u64 {\n        self.decoded_pn\n    }\n\n    pub fn payload_len(&self) -> usize {\n        self.undecoded_pn.size() + self.body_len\n    }\n\n    pub fn body(&self) -> Bytes {\n        let packet_offset = 
self.payload_offset + self.undecoded_pn.size();\n        self.plain\n            .slice(packet_offset..packet_offset + self.body_len)\n    }\n\n    pub fn raw_info(&self) -> qevent::RawInfo {\n        qevent::build!(qevent::RawInfo {\n            length: self.plain.len() as u64,\n            payload_length: self.payload_len() as u64,\n            data: &self.plain,\n        })\n    }\n}\n\nimpl<H> PlainPacket<H>\nwhere\n    PacketHeaderBuilder: for<'a> From<&'a H>,\n{\n    pub fn qlog_header(&self) -> PacketHeader {\n        let mut builder = PacketHeaderBuilder::from(&self.header);\n        qevent::build! {@field builder,\n            packet_number: self.decoded_pn,\n            length: self.payload_len() as u16\n        };\n        builder.build()\n    }\n\n    pub fn drop_on_interface_not_found(self) {\n        qevent::event!(\n            PacketDropped {\n                header: self.qlog_header(),\n                raw: self.raw_info(),\n                trigger: PacketDroppedTrigger::General\n            },\n            details = Map {\n                reason: \"interface not found\"\n            }\n        )\n    }\n\n    pub fn drop_on_connection_closed(self) {\n        qevent::event!(\n            PacketDropped {\n                header: self.qlog_header(),\n                raw: self.raw_info(),\n                trigger: PacketDroppedTrigger::General\n            },\n            details = Map {\n                reason: \"connection closed\"\n            }\n        )\n    }\n\n    pub fn log_received(&self, frames: impl Into<Vec<QuicFrame>>) {\n        qevent::event!(PacketReceived {\n            header: self.qlog_header(),\n            frames,\n            raw: self.raw_info(),\n        })\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component/route/queue.rs",
    "content": "use qbase::{\n    packet::{\n        DataHeader, Packet,\n        header::{long, short},\n    },\n    util::BoundQueue,\n};\n\nuse crate::component::route::{CipherPacket, Way};\n\ntype PacketQueue<P> = BoundQueue<(CipherPacket<P>, Way)>;\n\n// 需要一个四元组，pathway + src + dst\n#[derive(Debug)]\npub struct RcvdPacketQueue {\n    initial: PacketQueue<long::InitialHeader>,\n    handshake: PacketQueue<long::HandshakeHeader>,\n    zero_rtt: PacketQueue<long::ZeroRttHeader>,\n    one_rtt: PacketQueue<short::OneRttHeader>,\n    // pub retry:\n}\n\nimpl Default for RcvdPacketQueue {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl RcvdPacketQueue {\n    pub fn new() -> Self {\n        Self {\n            initial: BoundQueue::new(8),\n            handshake: BoundQueue::new(8),\n            zero_rtt: BoundQueue::new(8),\n            one_rtt: BoundQueue::new(128),\n        }\n    }\n\n    pub fn initial(&self) -> &PacketQueue<long::InitialHeader> {\n        &self.initial\n    }\n\n    pub fn handshake(&self) -> &PacketQueue<long::HandshakeHeader> {\n        &self.handshake\n    }\n\n    pub fn zero_rtt(&self) -> &PacketQueue<long::ZeroRttHeader> {\n        &self.zero_rtt\n    }\n\n    pub fn one_rtt(&self) -> &PacketQueue<short::OneRttHeader> {\n        &self.one_rtt\n    }\n\n    pub fn close_all(&self) {\n        self.initial.close();\n        self.handshake.close();\n        self.zero_rtt.close();\n        self.one_rtt.close();\n    }\n\n    pub async fn deliver(&self, packet: Packet, way: Way) {\n        match packet {\n            Packet::Data(packet) => match packet.header {\n                DataHeader::Long(long::DataHeader::Initial(header)) => {\n                    let packet = CipherPacket::new(header, packet.bytes, packet.offset);\n                    _ = self.initial.send((packet, way)).await;\n                }\n                DataHeader::Long(long::DataHeader::Handshake(header)) => {\n                    let packet = 
CipherPacket::new(header, packet.bytes, packet.offset);\n                    _ = self.handshake.send((packet, way)).await;\n                }\n                DataHeader::Long(long::DataHeader::ZeroRtt(header)) => {\n                    let packet = CipherPacket::new(header, packet.bytes, packet.offset);\n                    _ = self.zero_rtt.send((packet, way)).await;\n                }\n                DataHeader::Short(header) => {\n                    let packet = CipherPacket::new(header, packet.bytes, packet.offset);\n                    _ = self.one_rtt.send((packet, way)).await;\n                }\n            },\n            Packet::VN(_vn) => {}\n            Packet::Retry(_retry) => {}\n        }\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component/route.rs",
    "content": "use std::{\n    net::SocketAddr,\n    sync::{Arc, OnceLock, Weak},\n    task::{Context, Poll},\n};\n\nuse dashmap::DashMap;\nuse qbase::{\n    cid::{ConnectionId, GenUniqueCid, RetireCid},\n    error::Error,\n    frame::{\n        NewConnectionIdFrame, RetireConnectionIdFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::route::{Link, Pathway},\n    packet::GetDcid,\n};\n\nuse crate::{BindUri, Interface, component::Component};\nmod handler;\nmod packet;\nmod queue;\npub type Way = (BindUri, Pathway, Link);\n\npub use handler::PacketHandler;\npub use packet::{CipherPacket, PlainPacket};\npub use qbase::packet::Packet;\npub use queue::RcvdPacketQueue;\n\n#[derive(Debug)]\npub struct QuicRouter {\n    table: DashMap<Signpost, Arc<RcvdPacketQueue>>,\n    on_unrouted: handler::PacketHandler<Packet>,\n}\n\nimpl QuicRouter {\n    pub fn global() -> &'static Arc<Self> {\n        static GLOBAL_ROUTER: OnceLock<Arc<QuicRouter>> = OnceLock::new();\n        GLOBAL_ROUTER.get_or_init(|| {\n            Arc::new(QuicRouter {\n                table: DashMap::new(),\n                on_unrouted: handler::PacketHandler::drain(),\n            })\n        })\n    }\n\n    pub fn new() -> Self {\n        QuicRouter {\n            table: DashMap::new(),\n            on_unrouted: handler::PacketHandler::drain(),\n        }\n    }\n\n    // for origin_dcid\n    pub fn insert(\n        self: &Arc<Self>,\n        signpost: Signpost,\n        queue: Arc<RcvdPacketQueue>,\n    ) -> QuicRouterEntry {\n        self.table.insert(signpost, queue.clone());\n        QuicRouterEntry {\n            signpost,\n            queue: Arc::downgrade(&queue),\n            router: self.clone(),\n        }\n    }\n\n    pub fn remove(&self, signpost: &Signpost) {\n        self.table.remove(signpost);\n    }\n\n    fn find_entry(&self, packet: &Packet, link: &Link) -> Option<Arc<RcvdPacketQueue>> {\n        let dcid = match packet {\n            Packet::VN(vn) => vn.dcid(),\n        
    Packet::Retry(retry) => retry.dcid(),\n            Packet::Data(data_packet) => data_packet.dcid(),\n        };\n\n        if !dcid.is_empty() {\n            let signpost = Signpost::from(*dcid);\n            self.table.get(&signpost).map(|queue| queue.clone())\n        } else {\n            let signpost = Signpost::from(link.dst);\n            self.table.get(&signpost).map(|queue| queue.clone())\n        }\n    }\n\n    pub async fn try_deliver(&self, packet: Packet, way: Way) -> Result<(), (Packet, Way)> {\n        match self.find_entry(&packet, &way.2) {\n            Some(rcvd_pkt_q) => {\n                rcvd_pkt_q.deliver(packet, way).await;\n                Ok(())\n            }\n            None => Err((packet, way)),\n        }\n    }\n\n    pub async fn deliver(&self, packet: Packet, way: Way) {\n        let rcvd_pkt_q = match self.find_entry(&packet, &way.2) {\n            Some(rcvd_pkt_q) => rcvd_pkt_q,\n            None => {\n                // For packets that cannot be routed, this likely indicates a new connection.\n                // In some cases, multiple threads (e.g., A and B) may be waiting for the lock,\n                // and both would cause the server to create separate new connections.\n                let mut on_unrouted = self.on_unrouted.lock();\n                let Some(on_unrouted) = on_unrouted.as_mut() else {\n                    // Drain mode, just drop the packet\n                    return;\n                };\n                // Therefore, we retry routing here to allow thread B to route its packet\n                // to the connection created by thread A, instead of creating another new connection.\n                match self.find_entry(&packet, &way.2) {\n                    Some(rcvd_pkt_q) => rcvd_pkt_q,\n                    None => {\n                        (on_unrouted)(packet, way);\n                        return;\n                    }\n                }\n            }\n        };\n        
rcvd_pkt_q.deliver(packet, way).await;\n    }\n\n    pub fn on_connectless_packets<S>(&self, sink: S) -> bool\n    where\n        S: Fn(Packet, Way) + Send + 'static,\n    {\n        let mut on_unrouted = self.on_unrouted.lock();\n        if on_unrouted.is_some() {\n            return false;\n        }\n        *on_unrouted = Some(Box::new(sink));\n        true\n    }\n\n    pub fn is_connectless_draining(&self) -> bool {\n        self.on_unrouted.is_drain()\n    }\n\n    pub fn drain_connectless(&self) {\n        self.on_unrouted.take();\n    }\n\n    pub fn registry_on_issuing_scid<T>(\n        self: &Arc<Self>,\n        rcvd_pkts_q: Arc<RcvdPacketQueue>,\n        issued_cids: T,\n    ) -> QuicRouterRegistry<T> {\n        QuicRouterRegistry {\n            router: self.clone(),\n            rcvd_pkts_q,\n            issued_cids,\n        }\n    }\n}\n\nimpl Default for QuicRouter {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\n#[derive(Debug, PartialEq, Clone, Copy, Eq, Hash)]\npub struct Signpost {\n    cid: ConnectionId,\n    peer: Option<SocketAddr>,\n}\n\nimpl From<ConnectionId> for Signpost {\n    fn from(value: ConnectionId) -> Self {\n        Self {\n            cid: value,\n            peer: None,\n        }\n    }\n}\n\nimpl From<SocketAddr> for Signpost {\n    fn from(value: SocketAddr) -> Self {\n        Self {\n            cid: ConnectionId::default(),\n            peer: Some(value),\n        }\n    }\n}\n\n#[must_use = \"When RouterEntry dropped, this will remove the entry from the router table\"]\npub struct QuicRouterEntry {\n    signpost: Signpost,\n    queue: Weak<RcvdPacketQueue>,\n    router: Arc<QuicRouter>,\n}\n\nimpl QuicRouterEntry {\n    pub fn signpost(&self) -> Signpost {\n        self.signpost\n    }\n\n    pub fn remove(&self) {\n        self.router\n            .table\n            .remove_if(&self.signpost, |_, exist_queue| {\n                Weak::ptr_eq(&Arc::downgrade(exist_queue), &self.queue)\n            });\n    
}\n}\n\nimpl Drop for QuicRouterEntry {\n    fn drop(&mut self) {\n        self.remove();\n    }\n}\n\n#[derive(Clone)]\npub struct QuicRouterRegistry<TX> {\n    router: Arc<QuicRouter>,\n    rcvd_pkts_q: Arc<RcvdPacketQueue>,\n    issued_cids: TX,\n}\n\nimpl<T> GenUniqueCid for QuicRouterRegistry<T>\nwhere\n    T: Send + Sync + 'static,\n{\n    fn gen_unique_cid(&self) -> ConnectionId {\n        core::iter::from_fn(|| Some(ConnectionId::random_gen_with_mark(8, 0x80, 0x7F)))\n            .find(|cid| {\n                let signpost = Signpost::from(*cid);\n                let entry = self.router.table.entry(signpost);\n\n                if matches!(entry, dashmap::Entry::Occupied(..)) {\n                    return false;\n                }\n\n                entry.insert(self.rcvd_pkts_q.clone());\n                true\n            })\n            .unwrap()\n    }\n}\n\nimpl<TX> RetireCid for QuicRouterRegistry<TX>\nwhere\n    TX: Send + Sync + 'static,\n{\n    fn retire_cid(&self, cid: ConnectionId) {\n        self.router.remove(&Signpost::from(cid));\n    }\n}\n\nimpl<TX> SendFrame<NewConnectionIdFrame> for QuicRouterRegistry<TX>\nwhere\n    TX: SendFrame<NewConnectionIdFrame>,\n{\n    fn send_frame<I: IntoIterator<Item = NewConnectionIdFrame>>(&self, iter: I) {\n        self.issued_cids.send_frame(iter);\n    }\n}\n\nimpl<RX> ReceiveFrame<RetireConnectionIdFrame> for QuicRouterRegistry<RX>\nwhere\n    RX: ReceiveFrame<RetireConnectionIdFrame, Output = ()>,\n{\n    type Output = ();\n\n    fn recv_frame(&self, frame: RetireConnectionIdFrame) -> Result<Self::Output, Error> {\n        self.issued_cids.recv_frame(frame)\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct QuicRouterComponent {\n    router: Arc<QuicRouter>,\n}\n\nimpl QuicRouterComponent {\n    pub fn new(router: Arc<QuicRouter>) -> Self {\n        Self { router }\n    }\n\n    pub fn router(&self) -> Arc<QuicRouter> {\n        self.router.clone()\n    }\n}\n\nimpl Component for QuicRouterComponent {\n    
fn reinit(&self, _quic_iface: &Interface) {}\n\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()> {\n        _ = cx;\n        Poll::Ready(())\n    }\n}\n"
  },
  {
    "path": "qinterface/src/component.rs",
    "content": "use std::{\n    any::{Any, TypeId},\n    collections::{HashMap, hash_map},\n    fmt::Debug,\n    hash::{BuildHasherDefault, Hasher},\n    task::{Context, Poll, ready},\n};\n\nuse crate::Interface;\n\npub mod alive;\npub mod location;\npub mod route;\n\npub trait Component: Any + Debug + Send + Sync {\n    /// Gracefully shutdown the component when IO is unbound.\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()>;\n\n    /// Re-initialize the component after the QuicIO has been rebound\n    ///\n    /// Normally, this method first shuts down the component,\n    /// then re-initializes it with the new QuicIO.\n    ///\n    /// Implementation may override this method for optimization.\n    fn reinit(&self, iface: &Interface);\n}\n\n// With TypeIds as keys, there's no need to hash them. They are already hashes\n// themselves, coming from the compiler. The IdHasher just holds the u64 of\n// the TypeId, and then returns it, instead of doing any bit fiddling.\n#[derive(Default)]\npub(super) struct IdHasher(u64);\n\nimpl Hasher for IdHasher {\n    fn write(&mut self, _: &[u8]) {\n        unreachable!(\"TypeId calls write_u64\");\n    }\n\n    #[inline]\n    fn write_u64(&mut self, id: u64) {\n        self.0 = id;\n    }\n\n    #[inline]\n    fn finish(&self) -> u64 {\n        self.0\n    }\n}\n\n#[derive(Default)]\npub struct Components {\n    pub(super) map: HashMap<TypeId, Box<dyn Component>, BuildHasherDefault<IdHasher>>,\n}\n\nimpl Components {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    pub fn get<C: Component>(&self) -> Option<&C> {\n        self.map\n            .get(&TypeId::of::<C>())\n            .and_then(|c| (c.as_ref() as &dyn Any).downcast_ref())\n    }\n\n    pub fn exist<C: Component>(&self) -> bool {\n        self.map.contains_key(&TypeId::of::<C>())\n    }\n\n    pub fn with<C: Component, T>(&self, f: impl FnOnce(&C) -> T) -> Option<T> {\n        self.get::<C>().map(f)\n    }\n\n    pub fn init_with<C: 
Component>(&mut self, init: impl FnOnce() -> C) -> &mut C {\n        let ref_mut = self\n            .map\n            .entry(TypeId::of::<C>())\n            .or_insert_with(|| Box::new(init()));\n        (ref_mut.as_mut() as &mut dyn Any).downcast_mut().unwrap()\n    }\n\n    pub fn try_init_with<C: Component, E>(\n        &mut self,\n        init: impl FnOnce() -> Result<C, E>,\n    ) -> Result<&mut C, E> {\n        let entry = self.map.entry(TypeId::of::<C>());\n        let ref_mut = match entry {\n            hash_map::Entry::Occupied(entry) => entry.into_mut(),\n            hash_map::Entry::Vacant(entry) => entry.insert(Box::new(init()?)),\n        };\n        Ok((ref_mut.as_mut() as &mut dyn Any).downcast_mut().unwrap())\n    }\n\n    pub fn poll_remove<C>(&mut self, cx: &mut Context<'_>) -> Poll<()>\n    where\n        C: Component,\n    {\n        let hash_map::Entry::Occupied(entry) = self.map.entry(TypeId::of::<C>()) else {\n            return Poll::Ready(());\n        };\n\n        ready!(entry.get().poll_shutdown(cx));\n        entry.remove();\n\n        Poll::Ready(())\n    }\n}\n"
  },
  {
    "path": "qinterface/src/device.rs",
    "content": "use std::{\n    collections::HashMap,\n    fmt::Debug,\n    net::IpAddr,\n    sync::{Arc, Mutex, OnceLock, RwLock},\n    time::Duration,\n};\n\nuse derive_more::{Deref, DerefMut};\npub use netdev::Interface;\npub use netwatcher::Error as WatcherError;\nuse netwatcher::WatchHandle;\nuse qbase::{\n    net::Family,\n    util::{UniqueId, UniqueIdGenerator},\n};\nuse tokio::{\n    sync::mpsc::{UnboundedReceiver, UnboundedSender},\n    time::MissedTickBehavior,\n};\nuse tokio_util::task::AbortOnDropHandle;\n\n#[allow(clippy::large_enum_variant)]\n#[derive(Debug, Clone, PartialEq, Eq)]\npub enum InterfaceEvent {\n    Added {\n        device: String,\n        new_interface: Interface,\n    },\n    Removed {\n        device: String,\n        old_interface: Interface,\n    },\n    Changed {\n        device: String,\n        old_interface: Interface,\n        new_interface: Interface,\n    },\n}\n\nimpl InterfaceEvent {\n    pub fn device(&self) -> &str {\n        match self {\n            InterfaceEvent::Added { device, .. } => device,\n            InterfaceEvent::Removed { device, .. } => device,\n            InterfaceEvent::Changed { device, .. } => device,\n        }\n    }\n\n    pub fn old_interface(&self) -> Option<&Interface> {\n        match self {\n            InterfaceEvent::Removed { old_interface, .. }\n            | InterfaceEvent::Changed { old_interface, .. } => Some(old_interface),\n            _ => None,\n        }\n    }\n\n    pub fn new_interface(&self) -> Option<&Interface> {\n        match self {\n            InterfaceEvent::Added { new_interface, .. }\n            | InterfaceEvent::Changed { new_interface, .. 
} => Some(new_interface),\n            _ => None,\n        }\n    }\n}\n\nimpl InterfaceEvent {\n    pub fn from_update<'i>(\n        old_interfaces: &'i HashMap<String, Interface>,\n        new_interfaces: &'i HashMap<String, Interface>,\n    ) -> impl Iterator<Item = Self> + 'i {\n        new_interfaces\n            .iter()\n            .filter_map(|(name, new_interface)| match old_interfaces.get(name) {\n                Some(old_interface) if new_interface != old_interface => {\n                    Some(InterfaceEvent::Changed {\n                        device: name.to_owned(),\n                        old_interface: old_interface.clone(),\n                        new_interface: new_interface.clone(),\n                    })\n                }\n                None => Some(InterfaceEvent::Added {\n                    device: name.to_owned(),\n                    new_interface: new_interface.clone(),\n                }),\n                _ => None,\n            })\n            .chain(\n                old_interfaces\n                    .iter()\n                    .filter(|(name, ..)| !new_interfaces.contains_key(*name))\n                    .map(|(name, old_interface)| InterfaceEvent::Removed {\n                        device: name.to_owned(),\n                        old_interface: old_interface.clone(),\n                    }),\n            )\n    }\n}\n\nfn scan_interfaces() -> HashMap<String, Interface> {\n    netdev::get_interfaces()\n        .into_iter()\n        .map(|mut iface| {\n            // compatibility with windows interface names\n            iface.name = iface\n                .name\n                .trim_start_matches('{')\n                .trim_end_matches('}')\n                .to_string();\n            iface\n        })\n        .map(|iface| (iface.name.clone(), iface))\n        .collect()\n}\n\ntype SubscribersMap = RwLock<HashMap<UniqueId, UnboundedSender<Arc<InterfaceEvent>>>>;\ntype InterfacesMap = RwLock<HashMap<String, 
Interface>>;\n\n#[derive(Debug, Deref, DerefMut)]\npub struct InterfaceEventReceiver {\n    id: UniqueId,\n    #[deref]\n    #[deref_mut]\n    receiver: UnboundedReceiver<Arc<InterfaceEvent>>,\n    subscribers: Arc<SubscribersMap>,\n}\n\nimpl Drop for InterfaceEventReceiver {\n    fn drop(&mut self) {\n        self.subscribers.write().unwrap().remove(&self.id);\n    }\n}\n\npub struct InterfacesMonitor {\n    interfaces: HashMap<String, Interface>,\n    receiver: InterfaceEventReceiver,\n}\n\nimpl InterfacesMonitor {\n    #[inline]\n    pub async fn update(&mut self) -> Option<(&HashMap<String, Interface>, Arc<InterfaceEvent>)> {\n        self.receiver.recv().await.map(|event| {\n            match event.as_ref() {\n                InterfaceEvent::Added {\n                    device,\n                    new_interface,\n                } => {\n                    self.interfaces\n                        .insert(device.clone(), new_interface.clone());\n                }\n                InterfaceEvent::Removed { device, .. } => {\n                    self.interfaces.remove(device);\n                }\n                InterfaceEvent::Changed {\n                    device,\n                    new_interface,\n                    ..\n                } => {\n                    self.interfaces\n                        .insert(device.clone(), new_interface.clone());\n                }\n            }\n            (self.interfaces(), event)\n        })\n    }\n\n    #[inline]\n    pub fn try_update(&mut self) -> Option<(&HashMap<String, Interface>, Arc<InterfaceEvent>)> {\n        self.receiver.try_recv().ok().map(|event| {\n            match event.as_ref() {\n                InterfaceEvent::Added {\n                    device,\n                    new_interface,\n                } => {\n                    self.interfaces\n                        .insert(device.clone(), new_interface.clone());\n                }\n                InterfaceEvent::Removed { device, .. 
} => {\n                    self.interfaces.remove(device);\n                }\n                InterfaceEvent::Changed {\n                    device,\n                    new_interface,\n                    ..\n                } => {\n                    self.interfaces\n                        .insert(device.clone(), new_interface.clone());\n                }\n            }\n            (self.interfaces(), event)\n        })\n    }\n\n    #[inline]\n    pub fn interfaces(&self) -> &HashMap<String, Interface> {\n        &self.interfaces\n    }\n\n    pub fn into_inner(self) -> (HashMap<String, Interface>, InterfaceEventReceiver) {\n        (self.interfaces, self.receiver)\n    }\n}\n\n#[derive(Debug)]\nstruct State {\n    interfaces: InterfacesMap,\n    subscrib_id_generator: UniqueIdGenerator,\n    subscribers: Arc<SubscribersMap>,\n}\n\nimpl Default for State {\n    fn default() -> Self {\n        Self {\n            interfaces: RwLock::new(scan_interfaces()),\n            subscrib_id_generator: UniqueIdGenerator::new(),\n            subscribers: Arc::new(RwLock::new(HashMap::new())),\n        }\n    }\n}\n\nimpl State {\n    fn check_network_changes(&self) {\n        let mut interfaces = self.interfaces.write().unwrap();\n        let subscribers = self.subscribers.read().unwrap();\n        let old_interfaces = interfaces.clone();\n        let new_interfaces = scan_interfaces();\n        for event in InterfaceEvent::from_update(&old_interfaces, &new_interfaces) {\n            let arc_event = Arc::new(event);\n            for sender in subscribers.values() {\n                let _ = sender.send(arc_event.clone());\n            }\n        }\n        *interfaces = new_interfaces.clone();\n    }\n\n    fn monitor(&self) -> (HashMap<String, Interface>, InterfaceEventReceiver) {\n        let mut subscribers = self.subscribers.write().unwrap();\n        let interfaces = self.interfaces.read().unwrap().clone();\n\n        let current_interfaces = interfaces;\n\n        
let (tx, rx) = tokio::sync::mpsc::unbounded_channel();\n        let id = self.subscrib_id_generator.generate();\n        subscribers.insert(id, tx);\n        let observer = InterfaceEventReceiver {\n            id,\n            receiver: rx,\n            subscribers: Arc::clone(&self.subscribers),\n        };\n\n        (current_interfaces, observer)\n    }\n\n    fn event_receiver(&self) -> InterfaceEventReceiver {\n        let mut subscribers = self.subscribers.write().unwrap();\n\n        let (tx, rx) = tokio::sync::mpsc::unbounded_channel();\n        let id = self.subscrib_id_generator.generate();\n        subscribers.insert(id, tx);\n        InterfaceEventReceiver {\n            id,\n            receiver: rx,\n            subscribers: Arc::clone(&self.subscribers),\n        }\n    }\n\n    fn interfaces(&self) -> HashMap<String, Interface> {\n        self.interfaces.read().unwrap().clone()\n    }\n\n    fn get(&self, name: &str) -> Option<Interface> {\n        self.interfaces.read().unwrap().get(name).cloned()\n    }\n}\n\npub struct Devices {\n    state: Arc<State>,\n    watcher: Mutex<Result<WatchHandle, WatcherError>>,\n    _timer: AbortOnDropHandle<()>,\n}\n\nimpl Debug for Devices {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Devices\")\n            .field(\"state\", &self.state)\n            .field(\"watcher\", &\"...\")\n            .field(\"_timer\", &self._timer)\n            .finish()\n    }\n}\n\nimpl Devices {\n    pub fn global() -> &'static Devices {\n        static DEVICES: OnceLock<Devices> = OnceLock::new();\n        DEVICES.get_or_init(Self::new)\n    }\n\n    pub fn new() -> Self {\n        let state = Arc::new(State::default());\n\n        let timer = AbortOnDropHandle::new(tokio::spawn({\n            let state = state.clone();\n            async move {\n                let mut interval = tokio::time::interval(Duration::from_secs(5));\n                
interval.set_missed_tick_behavior(MissedTickBehavior::Delay);\n                loop {\n                    interval.tick().await;\n                    state.check_network_changes();\n                }\n            }\n        }));\n\n        let watcher = netwatcher::watch_interfaces({\n            let state = state.clone();\n            move |_update| {\n                // TODO: use the update info to avoid full scan\n                state.check_network_changes();\n            }\n        });\n\n        if let Err(initial_watcher_error) = &watcher {\n            tracing::warn!(target: \"interface\", \"failed to start interfaces watcher: {initial_watcher_error}\");\n        }\n\n        Self {\n            state,\n            _timer: timer,\n            watcher: watcher.into(),\n        }\n    }\n\n    #[inline]\n    pub fn restart_watcher(&self) -> Result<(), WatcherError> {\n        let new_watcher = netwatcher::watch_interfaces({\n            let state = self.state.clone();\n            move |_update| {\n                // TODO: use the update info to avoid full scan\n                state.check_network_changes();\n            }\n        })?;\n        *self.watcher.lock().unwrap() = Ok(new_watcher);\n        Ok(())\n    }\n\n    #[inline]\n    pub fn on_interface_changed(&self) {\n        self.state.check_network_changes();\n    }\n\n    #[inline]\n    pub fn monitor(&self) -> InterfacesMonitor {\n        let (interfaces, receiver) = self.state.monitor();\n        InterfacesMonitor {\n            interfaces,\n            receiver,\n        }\n    }\n\n    #[inline]\n    pub fn event_receiver(&self) -> InterfaceEventReceiver {\n        self.state.event_receiver()\n    }\n\n    #[inline]\n    pub fn interfaces(&self) -> HashMap<String, Interface> {\n        self.state.interfaces()\n    }\n\n    pub fn get(&self, name: &str) -> Option<Interface> {\n        self.state.get(name)\n    }\n\n    pub fn resolve(&self, device: &str, family: Family) -> Option<IpAddr> {\n     
   let interface = self.get(device)?;\n        match family {\n            Family::V4 => interface\n                .ipv4\n                .first()\n                .map(|ipnet| ipnet.addr())\n                .map(IpAddr::V4),\n            Family::V6 => interface\n                .ipv6\n                .iter()\n                .map(|ipnet| ipnet.addr())\n                .find(|ip| !matches!(ip.octets(), [0xfe, 0x80, ..]))\n                .map(IpAddr::V6),\n        }\n    }\n}\n\nimpl Default for Devices {\n    #[inline]\n    fn default() -> Self {\n        Self::new()\n    }\n}\n"
  },
  {
    "path": "qinterface/src/iface.rs",
    "content": ""
  },
  {
    "path": "qinterface/src/io/factory.rs",
    "content": "use std::task::{Context, Poll, ready};\n\nuse crate::{BindUri, IO};\n\npub trait ProductIO: Send + Sync {\n    fn bind(&self, bind_uri: BindUri) -> Box<dyn IO>;\n\n    fn poll_rebind(&self, cx: &mut Context<'_>, quic_io: &mut Box<dyn IO>) -> Poll<()> {\n        _ = ready!(quic_io.poll_close(cx));\n        *quic_io = self.bind(quic_io.bind_uri());\n        Poll::Ready(())\n    }\n}\n\npub trait ProductIoExt: ProductIO {\n    fn rebind(&self, quic_io: &mut Box<dyn IO>) -> impl Future<Output = ()> {\n        async { core::future::poll_fn(|cx| self.poll_rebind(cx, quic_io)).await }\n    }\n}\n\nimpl<F, Q> ProductIO for F\nwhere\n    F: Fn(BindUri) -> Q + Send + Sync,\n    Q: IO + 'static,\n{\n    #[inline]\n    fn bind(&self, bind_uri: BindUri) -> Box<dyn IO> {\n        Box::new((self)(bind_uri))\n    }\n}\n"
  },
  {
    "path": "qinterface/src/io/handy.rs",
    "content": "use crate::BindUri;\n\n#[cfg(all(feature = \"qudp\", any(unix, windows)))]\npub mod qudp {\n    use std::{\n        error::{Error, Error as StdError},\n        fmt::Display,\n        io::{self, IoSliceMut},\n        net::SocketAddr,\n        sync::Arc,\n        task::{Context, Poll, ready},\n    };\n\n    use bytes::BytesMut;\n    use qbase::{\n        net::route::{Line, Link, Pathway},\n        util::Wakers,\n    };\n    use qudp::BATCH_SIZE;\n    use thiserror::Error;\n\n    use crate::{BindUri, IO, Route};\n\n    pub struct UdpSocketController {\n        bind_uri: BindUri,\n        send_wakers: Arc<Wakers<64>>,\n        recv_wakers: Arc<Wakers>,\n        io: Result<Result<qudp::UdpSocket, Closed>, BindFailed>,\n    }\n\n    #[derive(Debug, Clone, Copy, Error)]\n    #[error(\"UdpSocketController closed\")]\n    pub struct Closed(());\n\n    impl From<Closed> for io::Error {\n        fn from(error: Closed) -> Self {\n            io::Error::other(error)\n        }\n    }\n\n    #[derive(Debug, Clone)]\n    pub struct BindFailed(Arc<io::Error>);\n\n    impl Display for BindFailed {\n        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n            write!(f, \"failed to bind UdpSocketController\")\n        }\n    }\n\n    impl StdError for BindFailed {\n        fn source(&self) -> Option<&(dyn Error + 'static)> {\n            Some(self.0.as_ref())\n        }\n    }\n\n    impl From<BindFailed> for io::Error {\n        fn from(error: BindFailed) -> Self {\n            io::Error::other(error)\n        }\n    }\n\n    impl UdpSocketController {\n        pub fn bind(bind_uri: BindUri) -> Self {\n            let io = SocketAddr::try_from(&bind_uri)\n                .map_err(|e| {\n                    io::Error::new(\n                        io::ErrorKind::NotFound,\n                        format!(\"Failed to bind {bind_uri}: {e}\"),\n                    )\n                })\n                .and_then(qudp::UdpSocket::bind);\n      
      UdpSocketController {\n                bind_uri,\n                send_wakers: Arc::new(Wakers::new()),\n                recv_wakers: Arc::new(Wakers::new()),\n                io: io.map(Ok).map_err(|e| BindFailed(Arc::new(e))),\n            }\n        }\n\n        fn usc(&self) -> io::Result<&qudp::UdpSocket> {\n            self.io\n                .as_ref()\n                .map_err(|e| io::Error::from(e.clone()))\n                .and_then(|result| result.as_ref().map_err(|e| (*e).into()))\n        }\n    }\n\n    impl IO for UdpSocketController {\n        fn bind_uri(&self) -> BindUri {\n            self.bind_uri.clone()\n        }\n\n        fn bound_addr(&self) -> io::Result<SocketAddr> {\n            self.usc()?.local_addr()\n        }\n\n        fn max_segments(&self) -> io::Result<usize> {\n            Ok(BATCH_SIZE)\n        }\n\n        fn max_segment_size(&self) -> io::Result<usize> {\n            Ok(1500)\n        }\n\n        fn poll_send(\n            &self,\n            cx: &mut Context,\n            pkts: &[io::IoSlice],\n            route: Route,\n        ) -> Poll<io::Result<usize>> {\n            let io = self.usc()?;\n            self.send_wakers.combine_with(cx, |cx| {\n                debug_assert_eq!(route.ecn(), None);\n                io.poll_send(cx, pkts, &route.line)\n            })\n        }\n\n        fn poll_recv(\n            &self,\n            cx: &mut Context,\n            pkts: &mut [BytesMut],\n            route: &mut [Route],\n        ) -> Poll<io::Result<usize>> {\n            let io = self.usc()?;\n            self.recv_wakers.combine_with(cx, |cx| {\n                let dst = io.local_addr()?;\n                let len = route.len().min(pkts.len());\n                // one received line per buffer actually handed to the socket\n                let mut rcvd_lines = Vec::with_capacity(len);\n                rcvd_lines.resize_with(len, Line::default);\n                let mut bufs = pkts[..len]\n                    .iter_mut()\n                    .map(|p| IoSliceMut::new(p.as_mut()))\n     
               .collect::<Vec<_>>();\n                debug_assert_eq!(rcvd_lines.len(), bufs.len());\n                let nrcvd = ready!(io.poll_recv(cx, &mut bufs, &mut rcvd_lines))?;\n\n                for (idx, mut line) in rcvd_lines.into_iter().take(nrcvd).enumerate() {\n                    let pathway = Pathway::new(line.link.src.into(), dst.into());\n                    line.link = Link::new(line.src, io.local_addr()?).flip();\n                    route[idx] = Route::new(pathway.flip(), line);\n                }\n\n                Poll::Ready(Ok(nrcvd))\n            })\n        }\n\n        fn poll_close(&mut self, _cx: &mut Context) -> Poll<io::Result<()>> {\n            self.usc()?;\n            self.send_wakers.wake_all();\n            self.recv_wakers.wake_all();\n            self.io = Ok(Err(Closed(())));\n            Poll::Ready(Ok(()))\n        }\n    }\n}\n\npub mod unsupported {\n    use std::{\n        io,\n        net::SocketAddr,\n        task::{Context, Poll},\n    };\n\n    use bytes::BytesMut;\n    use qbase::net::route::Route;\n    use thiserror::Error;\n\n    use crate::{BindUri, IO};\n\n    #[derive(Debug, Clone)]\n    pub struct Unsupported {\n        bind_uri: BindUri,\n    }\n\n    #[derive(Debug, Clone, Copy, Error)]\n    #[error(\n        \"the qudp feature is not enabled or the target platform is not supported; provide your own ProductIO implementation instead of the default\"\n    )]\n    pub struct UnsupportedError(());\n\n    impl From<UnsupportedError> for io::Error {\n        fn from(error: UnsupportedError) -> Self {\n            io::Error::new(io::ErrorKind::Unsupported, error)\n        }\n    }\n\n    impl Unsupported {\n        pub fn bind(bind_uri: BindUri) -> Self {\n            Unsupported { bind_uri }\n        }\n    }\n\n    impl IO for Unsupported {\n        fn bind_uri(&self) -> BindUri {\n            self.bind_uri.clone()\n        }\n\n        fn bound_addr(&self) -> io::Result<SocketAddr> {\n            
Err(UnsupportedError(()).into())\n        }\n\n        fn max_segment_size(&self) -> io::Result<usize> {\n            Err(UnsupportedError(()).into())\n        }\n\n        fn max_segments(&self) -> io::Result<usize> {\n            Err(UnsupportedError(()).into())\n        }\n\n        fn poll_send(\n            &self,\n            _: &mut Context,\n            _: &[io::IoSlice],\n            _: Route,\n        ) -> Poll<io::Result<usize>> {\n            Poll::Ready(Err(UnsupportedError(()).into()))\n        }\n\n        fn poll_recv(\n            &self,\n            _: &mut Context,\n            _: &mut [BytesMut],\n            _: &mut [Route],\n        ) -> Poll<io::Result<usize>> {\n            Poll::Ready(Err(UnsupportedError(()).into()))\n        }\n\n        fn poll_close(&mut self, _: &mut Context) -> Poll<io::Result<()>> {\n            Poll::Ready(Ok(()))\n        }\n    }\n}\n\n#[cfg(all(feature = \"qudp\", any(unix, windows)))]\npub static DEFAULT_IO_FACTORY: fn(BindUri) -> qudp::UdpSocketController =\n    |bind_uri| qudp::UdpSocketController::bind(bind_uri);\n\n#[cfg(not(all(feature = \"qudp\", any(unix, windows))))]\npub static DEFAULT_IO_FACTORY: fn(BindUri) -> unsupported::Unsupported =\n    |bind_uri| unsupported::Unsupported::bind(bind_uri);\n\nconst _: () = {\n    use super::ProductIO;\n    const fn assert_product_interface_factory<F: ProductIO + Copy>(_: &F) {}\n    assert_product_interface_factory(&DEFAULT_IO_FACTORY);\n};\n"
  },
  {
    "path": "qinterface/src/io.rs",
    "content": "use std::{\n    any::Any,\n    future::Future,\n    io,\n    net::SocketAddr,\n    sync::Arc,\n    task::{Context, Poll},\n};\n\nuse bytes::BytesMut;\nuse qbase::net::route::Route;\n\npub mod handy;\n\nmod factory;\npub use factory::*;\n\nuse crate::bind_uri::BindUri;\n\n/// Network I/O trait\n///\n/// Provides a unified interface for different network transport implementations.\n/// Note that some implementations may not support all bind address types.\n///\n/// `dquic` uses [`ProductIO`] to create (bind) new [`IO`] instances.\n/// Read its documentation for more information.\n///\n/// Wrapping a new [`IO`] is easy,\n/// you can refer to the implementations in the [`handy`] module.\n///\n/// [`ProductIO`]: crate::io::ProductIO\npub trait IO: Send + Sync + Any {\n    /// Get the bind address that this interface is bound to\n    ///\n    /// This value cannot change after the interface is bound,\n    /// as it is used as the unique identifier for the interface.\n    fn bind_uri(&self) -> BindUri;\n\n    /// Get the actual address that this interface is bound to.\n    ///\n    /// For example, if this interface is bound to an [`BindUri`],\n    /// this function should return the actual IP address and port\n    /// address of this interface.\n    ///\n    /// Just like [`UdpSocket::local_addr`] may return an error,\n    /// sometimes an interface cannot get its own actual address,\n    /// then the implementation should return an error as well.\n    ///\n    /// [`UdpSocket::local_addr`]: std::net::UdpSocket::local_addr\n    fn bound_addr(&self) -> io::Result<SocketAddr>;\n\n    /// Maximum size of a single network segment in bytes\n    fn max_segment_size(&self) -> io::Result<usize>;\n\n    /// Maximum number of segments that can be sent in a single batch\n    fn max_segments(&self) -> io::Result<usize>;\n\n    /// Poll for sending packets\n    ///\n    /// Attempts to send multiple packets in a single operation.\n    /// Return the number of packets 
sent,\n    fn poll_send(\n        &self,\n        cx: &mut Context,\n        pkts: &[io::IoSlice],\n        route: Route,\n    ) -> Poll<io::Result<usize>>;\n\n    /// Poll for receiving packets\n    ///\n    /// Attempts to receive multiple packets in a single operation.\n    /// The number of packets received is limited by the smaller of\n    /// `pkts.capacity()` and `hdrs.len()`.\n    fn poll_recv(\n        &self,\n        cx: &mut Context,\n        pkts: &mut [BytesMut],\n        route: &mut [Route],\n    ) -> Poll<io::Result<usize>>;\n\n    /// Asynchronously destroy the IO.\n    ///\n    /// When it returns [`Poll::Ready`] (whether with `Ok` or `Err`),\n    /// it must indicate that the resource has been completely destroyed,\n    /// and the same [`BindUri`] can be successfully bound again.\n    ///\n    /// Even if this method is not called,\n    /// the implementation should ensure that [`IO`] does not\n    /// leak any resources when it is dropped.\n    fn poll_close(&mut self, cx: &mut Context) -> Poll<io::Result<()>>;\n}\n\npub trait IoExt: IO {\n    #[inline]\n    fn sendmmsg(\n        &self,\n        mut bufs: &[io::IoSlice<'_>],\n        route: Route,\n    ) -> impl Future<Output = io::Result<()>> + Send {\n        async move {\n            while !bufs.is_empty() {\n                let sent = core::future::poll_fn(|cx| self.poll_send(cx, bufs, route)).await?;\n                bufs = &bufs[sent..];\n            }\n            Ok(())\n        }\n    }\n\n    fn recvmmsg<'b>(\n        &self,\n        bufs: &'b mut Vec<BytesMut>,\n        route: &'b mut Vec<Route>,\n    ) -> impl Future<Output = io::Result<impl Iterator<Item = (BytesMut, Route)> + Send + 'b>> + Send\n    {\n        async move {\n            let rcvd = std::future::poll_fn(|cx| {\n                let max_segments = self.max_segments()?;\n                let max_segment_size = self.max_segment_size()?;\n                bufs.resize_with(max_segments, || 
BytesMut::zeroed(max_segment_size));\n                route.resize_with(max_segments, Route::empty);\n                self.poll_recv(cx, bufs, route)\n            })\n            .await?;\n\n            Ok(bufs\n                .drain(..rcvd)\n                .zip(route.drain(..rcvd))\n                .map(|(mut seg, route)| {\n                    (seg.split_to(seg.len().min(route.seg_size() as _)), route)\n                }))\n        }\n    }\n\n    #[inline]\n    fn close(&mut self) -> impl Future<Output = io::Result<()>> + Send {\n        async { core::future::poll_fn(|cx| self.poll_close(cx)).await }\n    }\n}\n\nimpl<I: IO + ?Sized> IoExt for I {}\n\npub trait RefIO: Clone + Send + Sync {\n    type Interface: IO + ?Sized;\n\n    fn iface(&self) -> &Self::Interface;\n\n    fn same_io(&self, other: &Self) -> bool;\n}\n\nimpl<I: IO + ?Sized> RefIO for Arc<I> {\n    type Interface = I;\n\n    #[inline]\n    fn iface(&self) -> &Self::Interface {\n        self.as_ref()\n    }\n\n    fn same_io(&self, other: &Self) -> bool {\n        Arc::ptr_eq(self, other)\n    }\n}\n"
  },
  {
    "path": "qinterface/src/lib.rs",
    "content": "pub mod bind_uri;\npub mod component;\npub mod device;\npub mod io;\npub mod manager;\n\nuse std::{\n    error::Error,\n    fmt::Debug,\n    net::SocketAddr,\n    sync::{Arc, Weak},\n    task::{Context, Poll},\n};\n\nuse bytes::BytesMut;\nuse qbase::{net::route::Route, util::UniqueId};\nuse thiserror::Error;\n\nuse crate::{\n    bind_uri::BindUri,\n    io::{IO, RefIO},\n    manager::InterfaceContext,\n};\n\n#[derive(Debug, Clone)]\npub struct BindInterface {\n    context: Arc<InterfaceContext>,\n}\n\nimpl BindInterface {\n    pub(crate) fn new(iface: InterfaceContext) -> Self {\n        Self {\n            context: Arc::new(iface),\n        }\n    }\n\n    pub fn bind_uri(&self) -> BindUri {\n        self.context.bind_uri()\n    }\n\n    pub fn close(&self) -> impl Future<Output = std::io::Result<()>> + Send {\n        core::future::poll_fn(|cx| self.context.poll_close(cx))\n    }\n\n    pub fn rebind(&self) -> impl Future<Output = ()> + Send {\n        core::future::poll_fn(|cx| self.poll_rebind(cx))\n    }\n\n    #[inline]\n    pub fn borrow(&self) -> Interface {\n        Interface {\n            bind_id: self.context.bind_id(),\n            bind_iface: self.clone(),\n        }\n    }\n\n    #[inline]\n    pub fn downgrade(&self) -> WeakBindInterface {\n        WeakBindInterface {\n            context: Arc::downgrade(&self.context),\n        }\n    }\n\n    #[inline]\n    pub fn borrow_weak(&self) -> WeakInterface {\n        self.borrow().downgrade()\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct Interface {\n    bind_id: UniqueId,\n    bind_iface: BindInterface,\n}\n\n#[derive(Debug, Error)]\n#[error(\"Interface has been rebinded\")]\npub struct RebindedError;\n\nimpl RebindedError {\n    pub fn is_source_of(mut error: &(dyn Error + 'static)) -> bool {\n        loop {\n            if error.is::<Self>() {\n                return true;\n            }\n            match error.source() {\n                Some(source) => error = source,\n            
    None => return false,\n            }\n        }\n    }\n}\n\nimpl From<RebindedError> for std::io::Error {\n    fn from(value: RebindedError) -> Self {\n        std::io::Error::new(std::io::ErrorKind::ConnectionReset, value)\n    }\n}\n\nimpl Interface {\n    #[inline]\n    fn with_io<T>(&self, f: impl FnOnce(&dyn IO) -> T) -> std::io::Result<T> {\n        self.bind_iface\n            .context\n            .with_bind_io(self.bind_id, f)\n            .map_err(Into::into)\n    }\n\n    #[inline]\n    pub fn bind_interface(&self) -> &BindInterface {\n        &self.bind_iface\n    }\n\n    #[inline]\n    pub fn downgrade(&self) -> WeakInterface {\n        WeakInterface {\n            bind_uri: self.bind_iface.bind_uri(),\n            bind_id: self.bind_id,\n            weak_iface: self.bind_iface.downgrade(),\n        }\n    }\n\n    pub fn same_io(&self, other: &Interface) -> bool {\n        self.bind_id == other.bind_id\n            && Arc::ptr_eq(&self.bind_iface.context, &other.bind_iface.context)\n    }\n}\n\nimpl RefIO for Interface {\n    type Interface = Self;\n\n    #[inline]\n    fn iface(&self) -> &Self::Interface {\n        self\n    }\n\n    fn same_io(&self, other: &Self) -> bool {\n        self.same_io(other)\n    }\n}\n\nimpl IO for Interface {\n    #[inline]\n    fn bind_uri(&self) -> BindUri {\n        self.bind_iface.bind_uri()\n    }\n\n    #[inline]\n    fn bound_addr(&self) -> std::io::Result<SocketAddr> {\n        self.with_io(|io| io.bound_addr())?\n    }\n\n    #[inline]\n    fn max_segment_size(&self) -> std::io::Result<usize> {\n        self.with_io(|io| io.max_segment_size())?\n    }\n\n    #[inline]\n    fn max_segments(&self) -> std::io::Result<usize> {\n        self.with_io(|io| io.max_segments())?\n    }\n\n    #[inline]\n    fn poll_send(\n        &self,\n        cx: &mut Context,\n        pkts: &[std::io::IoSlice],\n        route: Route,\n    ) -> Poll<std::io::Result<usize>> {\n        self.with_io(|io| io.poll_send(cx, pkts, 
route))?\n    }\n\n    #[inline]\n    fn poll_recv(\n        &self,\n        cx: &mut Context,\n        pkts: &mut [BytesMut],\n        route: &mut [Route],\n    ) -> Poll<std::io::Result<usize>> {\n        self.with_io(|io| io.poll_recv(cx, pkts, route))?\n    }\n\n    #[inline]\n    fn poll_close(&mut self, cx: &mut Context) -> Poll<std::io::Result<()>> {\n        self.bind_iface.context.poll_close(cx)\n    }\n}\n\n#[derive(Debug, Error)]\n#[error(\"Interface has been unbound\")]\npub struct UnboundError;\n\nimpl UnboundError {\n    pub fn is_source_of(mut error: &(dyn Error + 'static)) -> bool {\n        loop {\n            if error.is::<Self>() {\n                return true;\n            }\n            match error.source() {\n                Some(source) => error = source,\n                None => return false,\n            }\n        }\n    }\n}\n\nimpl From<UnboundError> for std::io::Error {\n    fn from(value: UnboundError) -> Self {\n        std::io::Error::new(std::io::ErrorKind::ConnectionReset, value)\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct WeakBindInterface {\n    context: Weak<InterfaceContext>,\n}\n\nimpl WeakBindInterface {\n    pub fn upgrade(&self) -> Result<BindInterface, UnboundError> {\n        Ok(BindInterface {\n            context: self.context.upgrade().ok_or(UnboundError)?,\n        })\n    }\n\n    pub fn borrow(&self) -> Result<WeakInterface, UnboundError> {\n        Ok(self.upgrade()?.borrow_weak())\n    }\n\n    pub fn same_io(&self, other: &WeakBindInterface) -> bool {\n        Weak::ptr_eq(&self.context, &other.context)\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct WeakInterface {\n    bind_uri: BindUri,\n    bind_id: UniqueId,\n    weak_iface: WeakBindInterface,\n}\n\nimpl From<Interface> for WeakInterface {\n    fn from(iface: Interface) -> Self {\n        iface.downgrade()\n    }\n}\n\nimpl WeakInterface {\n    pub fn upgrade(&self) -> Result<Interface, UnboundError> {\n        Ok(Interface {\n            bind_iface: 
self.weak_iface.upgrade()?,\n            bind_id: self.bind_id,\n        })\n    }\n\n    pub fn same_io(&self, other: &WeakInterface) -> bool {\n        self.bind_id == other.bind_id && self.weak_iface.same_io(&other.weak_iface)\n    }\n}\n\nimpl RefIO for WeakInterface {\n    type Interface = WeakInterface;\n\n    fn iface(&self) -> &Self::Interface {\n        self\n    }\n\n    fn same_io(&self, other: &Self) -> bool {\n        self.same_io(other)\n    }\n}\n\nimpl IO for WeakInterface {\n    fn bind_uri(&self) -> BindUri {\n        self.bind_uri.clone()\n    }\n\n    fn bound_addr(&self) -> std::io::Result<SocketAddr> {\n        self.upgrade()?.bound_addr()\n    }\n\n    fn max_segment_size(&self) -> std::io::Result<usize> {\n        self.upgrade()?.max_segment_size()\n    }\n\n    fn max_segments(&self) -> std::io::Result<usize> {\n        self.upgrade()?.max_segments()\n    }\n\n    fn poll_send(\n        &self,\n        cx: &mut Context,\n        pkts: &[std::io::IoSlice],\n        route: Route,\n    ) -> Poll<std::io::Result<usize>> {\n        self.upgrade()?.poll_send(cx, pkts, route)\n    }\n\n    fn poll_recv(\n        &self,\n        cx: &mut Context,\n        pkts: &mut [BytesMut],\n        route: &mut [Route],\n    ) -> Poll<std::io::Result<usize>> {\n        self.upgrade()?.poll_recv(cx, pkts, route)\n    }\n\n    fn poll_close(&mut self, cx: &mut Context) -> Poll<std::io::Result<()>> {\n        self.upgrade()?.poll_close(cx)\n    }\n}\n"
  },
  {
    "path": "qinterface/src/manager.rs",
    "content": "use std::{\n    any::Any,\n    fmt::Debug,\n    future::Future,\n    io, mem,\n    net::SocketAddr,\n    ops::{Deref, DerefMut},\n    sync::{Arc, OnceLock},\n    task::{Context, Poll, ready},\n};\n\nuse bytes::BytesMut;\nuse dashmap::{DashMap, Entry};\nuse futures::FutureExt;\nuse parking_lot::{RwLock, RwLockReadGuard, RwLockWriteGuard};\nuse qbase::{\n    net::route,\n    util::{UniqueId, UniqueIdGenerator},\n};\nuse tokio::sync::SetOnce;\nuse tracing::Instrument as _;\n\nuse crate::{\n    BindInterface, Interface, RebindedError, WeakBindInterface,\n    bind_uri::BindUri,\n    component::{Component, Components},\n    io::{IO, IoExt, ProductIO},\n};\n\n/// Global [`IO`] manager that manages the lifecycle of all interfaces.\n///\n/// Calling the [`InterfaceManager::bind`] method with a [`BindUri`] returns a [`BindInterface`], primarily used for listening on addresses.\n/// As long as [`BindInterface`] instances exist, the corresponding [`IO`] for that [`BindUri`] won't be automatically released.\n///\n/// For actual data transmission, you need [`Interface`], which can be obtained via [`InterfaceManager::borrow`] or [`BindInterface::borrow`].\n/// Like [`BindInterface`], it keeps the [`IO`] alive, but with one key difference: once a rebind occurs,\n/// any previous [`Interface`] for that [`BindUri`] becomes invalid, and attempting to send or receive packets\n/// will result in [`RebindedError] errors.\n#[derive(Default, Debug)]\npub struct InterfaceManager {\n    interfaces: DashMap<BindUri, InterfaceEntry>,\n    bind_id_generator: UniqueIdGenerator,\n}\n\n#[derive(Debug)]\nstruct InterfaceEntry {\n    weak_iface: WeakBindInterface,\n    dropped: Arc<SetOnce<()>>,\n}\n\nimpl InterfaceEntry {\n    fn is_dropped(&self) -> bool {\n        self.dropped.get().is_some()\n    }\n\n    fn dropped(&self) -> impl Future<Output = ()> + use<> {\n        let dropped = self.dropped.clone();\n        async move {\n            dropped.wait().await;\n        }\n    
}\n}\n\nimpl InterfaceManager {\n    #[inline]\n    pub fn global() -> &'static Arc<Self> {\n        static GLOBAL: OnceLock<Arc<InterfaceManager>> = OnceLock::new();\n        GLOBAL.get_or_init(Arc::default)\n    }\n\n    #[inline]\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    fn new_binding(\n        self: &Arc<Self>,\n        entry: Entry<BindUri, InterfaceEntry>,\n        factory: Arc<dyn ProductIO>,\n    ) -> BindInterface {\n        let context = InterfaceContext {\n            factory: factory.clone(),\n            binding: RwLock::new(Binding::new(\n                factory.bind(entry.key().clone()),\n                self.bind_id_generator.generate(),\n            )),\n            dropped: Arc::new(SetOnce::new()),\n            ifaces: self.clone(),\n            components: RwLock::new(Components::default()),\n        };\n        let dropped = context.dropped.clone();\n        let iface = BindInterface::new(context);\n        let weak_iface = iface.downgrade();\n\n        entry.insert(InterfaceEntry {\n            weak_iface,\n            dropped,\n        });\n\n        iface\n    }\n\n    pub async fn bind(\n        self: &Arc<Self>,\n        bind_uri: BindUri,\n        factory: Arc<dyn ProductIO>,\n    ) -> BindInterface {\n        // TODO: error: report an error when rebinding with a different factory\n        loop {\n            match self.interfaces.entry(bind_uri.clone()) {\n                // (1) new binding: context closed but not yet removed\n                Entry::Occupied(entry) if entry.get().is_dropped() => {\n                    return self.new_binding(Entry::Occupied(entry), factory);\n                }\n                // (2) new binding: no existing context\n                Entry::Vacant(entry) => {\n                    return self.new_binding(Entry::Vacant(entry), factory);\n                }\n                // try to reuse the existing binding\n                Entry::Occupied(entry) => match entry.get().weak_iface.upgrade() {\n                    
// (3) reuse the existing binding\n                    Ok(iface) => return iface.clone(),\n                    // (4) binding already released: wait for its context to finish dropping, then retry\n                    Err(..) => {\n                        let dropped_future = entry.get().dropped();\n                        drop(entry);\n                        dropped_future.await;\n                    }\n                },\n            }\n        }\n    }\n\n    #[inline]\n    pub fn borrow(&self, bind_uri: &BindUri) -> Option<Interface> {\n        self.interfaces\n            .get(bind_uri)\n            .and_then(|entry| Some(entry.weak_iface.upgrade().ok()?.borrow()))\n    }\n\n    #[inline]\n    pub fn get(&self, bind_uri: &BindUri) -> Option<BindInterface> {\n        self.interfaces\n            .get(bind_uri)\n            .and_then(|entry| entry.weak_iface.upgrade().ok())\n    }\n\n    #[inline]\n    pub fn unbind(self: &Arc<Self>, bind_uri: BindUri) -> impl Future<Output = ()> + Send + use<> {\n        let Entry::Occupied(entry) = self.interfaces.entry(bind_uri) else {\n            return std::future::ready(()).right_future();\n        };\n\n        match entry.get().weak_iface.upgrade() {\n            Ok(bind_iface) => {\n                let drop_future = bind_iface.context.as_ref().drop();\n                spawn_on_drop::SpawnOnDrop::new(Box::pin(drop_future)).left_future()\n            }\n            // Already being dropped via InterfaceContext's Drop impl; wait for it to finish\n            Err(..) 
=> entry.get().dropped().right_future(),\n        }\n        .left_future()\n    }\n}\n\nmod spawn_on_drop {\n    use std::{\n        future::Future,\n        pin::Pin,\n        task::{Context, Poll, ready},\n    };\n\n    use tracing::Instrument;\n\n    pub(crate) struct SpawnOnDrop<F: Future<Output: Send + 'static> + Unpin + Send + 'static> {\n        pub(crate) future: Option<F>,\n    }\n\n    impl<F: Future<Output: Send + 'static> + Unpin + Send + 'static> SpawnOnDrop<F> {\n        pub(crate) fn new(future: F) -> Self {\n            Self {\n                future: Some(future),\n            }\n        }\n    }\n\n    impl<F: Future<Output: Send + 'static> + Unpin + Send + 'static> Future for SpawnOnDrop<F> {\n        type Output = F::Output;\n\n        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n            match self.as_mut().get_mut().future.as_mut() {\n                Some(future) => {\n                    let output = ready!(Pin::new(future).poll(cx));\n                    self.future = None;\n                    Poll::Ready(output)\n                }\n                None => panic!(\"future polled after completion\"),\n            }\n        }\n    }\n\n    impl<F: Future<Output: Send + 'static> + Unpin + Send + 'static> Drop for SpawnOnDrop<F> {\n        fn drop(&mut self) {\n            if let Some(future) = self.future.take() {\n                // Best-effort: the wrapped future must run to completion, so if it is\n                // dropped unfinished, spawn it onto the runtime to let it finish.\n                tokio::spawn(future.in_current_span());\n            }\n        }\n    }\n}\n\nstruct Binding {\n    io: Box<dyn IO>,\n    id: UniqueId,\n    span: tracing::Span,\n}\n\nimpl Binding {\n    fn new(io: Box<dyn IO>, id: UniqueId) -> Self {\n        let bind_uri = io.bind_uri();\n        let span = tracing::debug_span!(\n            parent: None,\n            \"interface\",\n            %bind_uri,\n            bind_id = usize::from(id),\n        );\n        Self { io, id, span }\n    
}\n}\n\npub struct InterfaceContext {\n    factory: Arc<dyn ProductIO>,\n    binding: RwLock<Binding>,\n    // shared with [InterfaceEntry]\n    dropped: Arc<SetOnce<()>>,\n    ifaces: Arc<InterfaceManager>,\n    components: RwLock<Components>,\n}\n\nimpl InterfaceContext {\n    fn binding(&self) -> RwLockReadGuard<'_, Binding> {\n        self.binding.read_recursive()\n    }\n\n    fn binding_mut(&self) -> RwLockWriteGuard<'_, Binding> {\n        self.binding.write()\n    }\n\n    pub fn bind_id(&self) -> UniqueId {\n        self.binding().id\n    }\n\n    fn with_io<T>(&self, f: impl FnOnce(&dyn IO) -> T) -> T {\n        let binding = self.binding();\n        let _guard = binding.span.enter();\n        f(binding.io.as_ref())\n    }\n\n    pub(crate) fn with_bind_io<T>(\n        &self,\n        bind_id: UniqueId,\n        f: impl FnOnce(&dyn IO) -> T,\n    ) -> Result<T, RebindedError> {\n        let binding = self.binding();\n        if binding.id != bind_id {\n            return Err(RebindedError);\n        }\n        let _guard = binding.span.enter();\n        Ok(f(binding.io.as_ref()))\n    }\n\n    fn components(&self) -> RwLockReadGuard<'_, Components> {\n        self.components.read_recursive()\n    }\n\n    fn components_mut(&self) -> RwLockWriteGuard<'_, Components> {\n        self.components.write()\n    }\n\n    pub fn poll_close(&self, cx: &mut Context) -> Poll<io::Result<()>> {\n        let (mut binding, components) = (self.binding_mut(), self.components());\n        for (.., component) in &components.map {\n            ready!(component.poll_shutdown(cx));\n        }\n        binding.io.poll_close(cx)\n    }\n}\n\nimpl Debug for InterfaceContext {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"Interface\")\n            .field(\"bind_uri\", &self.binding().io.bind_uri())\n            .finish()\n    }\n}\n\nimpl BindInterface {\n    pub fn poll_rebind(&self, cx: &mut Context<'_>) -> Poll<()> {\n        
let context = self.context.as_ref();\n\n        // Downgrading the binding lock:\n        // A: rebind, reinit\n        // B:                rebind, reinit\n\n        // Releasing the binding lock:\n        // A: lock(B), lock(C), rebind, release(B), reinit, release(C)\n        // B:                                      lock(B),             lock(C), rebind, reinit\n\n        // Hold the read lock to block a subsequent rebind, so components never see inconsistent state\n        let (new_bind_id, components, span) = {\n            let mut binding = context.binding_mut();\n            let components = context.components();\n\n            ready!(context.factory.poll_rebind(cx, &mut binding.io));\n            binding.id = context.ifaces.bind_id_generator.generate();\n            binding.span = tracing::debug_span!(\n                parent: None,\n                \"interface\",\n                bind_uri = %binding.io.bind_uri(),\n                bind_id = usize::from(binding.id),\n            );\n            (binding.id, components, binding.span.clone())\n        };\n\n        let iface = Interface {\n            bind_id: new_bind_id,\n            bind_iface: self.clone(),\n        };\n        let _guard = span.enter();\n        for (.., component) in &components.map {\n            component.reinit(&iface);\n        }\n        Poll::Ready(())\n    }\n\n    pub fn insert_component_with<C: Component>(&self, init: impl FnOnce(&Interface) -> C) {\n        self.with_components_mut(|components, iface| {\n            components.init_with(|| init(iface));\n        });\n    }\n\n    pub fn with_components<T>(&self, f: impl FnOnce(&Components, &Interface) -> T) -> T {\n        let context = self.context.as_ref();\n        let (binding, components) = (context.binding(), context.components());\n        let _guard = binding.span.enter();\n\n        let iface = Interface {\n            bind_id: binding.id,\n            bind_iface: self.clone(),\n        };\n        f(components.deref(), &iface)\n    }\n\n    pub fn 
with_components_mut<T>(&self, f: impl FnOnce(&mut Components, &Interface) -> T) -> T {\n        let context = self.context.as_ref();\n        let (binding, mut components) = (context.binding(), context.components_mut());\n        let _guard = binding.span.enter();\n\n        let iface = Interface {\n            bind_id: binding.id,\n            bind_iface: self.clone(),\n        };\n        f(components.deref_mut(), &iface)\n    }\n}\n\nimpl Interface {\n    pub fn with_component<C: Component, T>(\n        &self,\n        f: impl FnOnce(&C) -> T,\n    ) -> Result<Option<T>, RebindedError> {\n        let context = self.bind_iface.context.as_ref();\n        let (binding, components) = (context.binding(), context.components());\n\n        if self.bind_id != binding.id {\n            return Err(RebindedError);\n        }\n\n        let _guard = binding.span.enter();\n        Ok(components.with(f))\n    }\n\n    pub fn with_components<T>(&self, f: impl FnOnce(&Components) -> T) -> Result<T, RebindedError> {\n        let context = self.bind_iface.context.as_ref();\n        let (binding, components) = (context.binding(), context.components());\n\n        if self.bind_id != binding.id {\n            return Err(RebindedError);\n        }\n\n        let _guard = binding.span.enter();\n        Ok(f(components.deref()))\n    }\n\n    pub fn get_component<C: Component + Clone>(&self) -> Result<Option<C>, RebindedError> {\n        self.with_component(C::clone)\n    }\n}\n\nimpl IO for InterfaceContext {\n    fn bind_uri(&self) -> BindUri {\n        self.binding().io.bind_uri()\n    }\n\n    fn bound_addr(&self) -> io::Result<SocketAddr> {\n        self.with_io(|io| io.bound_addr())\n    }\n\n    fn max_segment_size(&self) -> io::Result<usize> {\n        self.with_io(|io| io.max_segment_size())\n    }\n\n    fn max_segments(&self) -> io::Result<usize> {\n        self.with_io(|io| io.max_segments())\n    }\n\n    fn poll_send(\n        &self,\n        cx: &mut Context,\n        
pkts: &[io::IoSlice],\n        route: route::Route,\n    ) -> Poll<io::Result<usize>> {\n        self.with_io(|io| io.poll_send(cx, pkts, route))\n    }\n\n    fn poll_recv(\n        &self,\n        cx: &mut Context,\n        pkts: &mut [BytesMut],\n        route: &mut [route::Route],\n    ) -> Poll<io::Result<usize>> {\n        self.with_io(|io| io.poll_recv(cx, pkts, route))\n    }\n\n    fn poll_close(&mut self, cx: &mut Context) -> Poll<io::Result<()>> {\n        InterfaceContext::poll_close(self, cx)\n    }\n}\n\nmod dropping_io {\n    use thiserror::Error;\n\n    use super::*;\n\n    #[derive(Debug, Clone, Error)]\n    #[error(\"QuicIO is dropping and cannot be used anymore, you should never see this error\")]\n    pub(crate) struct DroppingIO {\n        pub(crate) bind_uri: BindUri,\n    }\n\n    impl DroppingIO {\n        pub(crate) fn to_io_error(&self) -> io::Error {\n            io::Error::new(io::ErrorKind::NotConnected, self.clone())\n        }\n    }\n\n    impl From<DroppingIO> for io::Error {\n        fn from(error: DroppingIO) -> Self {\n            error.to_io_error()\n        }\n    }\n\n    impl IO for DroppingIO {\n        fn bind_uri(&self) -> BindUri {\n            self.bind_uri.clone()\n        }\n\n        fn bound_addr(&self) -> io::Result<SocketAddr> {\n            Err(self.to_io_error())\n        }\n\n        fn max_segment_size(&self) -> io::Result<usize> {\n            Err(self.to_io_error())\n        }\n\n        fn max_segments(&self) -> io::Result<usize> {\n            Err(self.to_io_error())\n        }\n\n        fn poll_send(\n            &self,\n            _: &mut Context,\n            _: &[io::IoSlice],\n            _: route::Route,\n        ) -> Poll<io::Result<usize>> {\n            Poll::Ready(Err(self.to_io_error()))\n        }\n\n        fn poll_recv(\n            &self,\n            _: &mut Context,\n            _: &mut [BytesMut],\n            _: &mut [route::Route],\n        ) -> Poll<io::Result<usize>> {\n            
Poll::Ready(Err(self.to_io_error()))\n        }\n\n        fn poll_close(&mut self, _: &mut Context) -> Poll<io::Result<()>> {\n            Poll::Ready(Ok(()))\n        }\n    }\n}\n\nimpl Binding {\n    pub fn is_dropping(&self) -> bool {\n        (self.io.as_ref() as &dyn Any).is::<dropping_io::DroppingIO>()\n    }\n\n    pub fn take_io(&mut self) -> Option<Box<dyn IO>> {\n        if self.is_dropping() {\n            return None;\n        }\n        let bind_uri = self.io.bind_uri();\n        let dropping_io = Box::new(dropping_io::DroppingIO { bind_uri });\n        Some(mem::replace(&mut self.io, dropping_io))\n    }\n}\n\nimpl InterfaceContext {\n    fn drop(&self) -> impl Future<Output = ()> + Send + use<> {\n        let dropped = self.dropped.clone();\n        let Some(mut io) = self.binding_mut().take_io() else {\n            return std::future::ready(()).right_future();\n        };\n\n        let ifaces = self.ifaces.clone();\n        let bind_uri = io.bind_uri();\n        let components = mem::take(self.components_mut().deref_mut());\n\n        async move {\n            for (_, component) in components.map {\n                _ = core::future::poll_fn(|cx| component.poll_shutdown(cx)).await;\n            }\n            _ = io.close().await;\n\n            dropped.set(()).expect(\"duplicated drop, this is a bug\");\n            tokio::task::spawn_blocking(move || {\n                ifaces\n                    .interfaces\n                    .remove_if(&bind_uri, |_, entry| entry.is_dropped());\n            });\n        }\n        .left_future()\n    }\n}\n\nimpl Drop for InterfaceContext {\n    fn drop(&mut self) {\n        if !{ self.binding().is_dropping() } {\n            // Best-effort: schedule async cleanup before the context is dropped.\n            tokio::spawn(InterfaceContext::drop(self).in_current_span());\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::{\n        sync::{\n            Arc,\n            atomic::{AtomicUsize, 
Ordering},\n        },\n        task::{Context, Poll},\n    };\n\n    use futures::task::noop_waker_ref;\n\n    use super::*;\n    use crate::{\n        component::Component,\n        io::{IO, ProductIO},\n    };\n\n    #[derive(Debug)]\n    struct TestComponent {\n        shutdown_calls: Arc<AtomicUsize>,\n    }\n\n    impl Component for TestComponent {\n        fn poll_shutdown(&self, _cx: &mut Context<'_>) -> Poll<()> {\n            self.shutdown_calls.fetch_add(1, Ordering::SeqCst);\n            Poll::Ready(())\n        }\n\n        fn reinit(&self, _iface: &crate::Interface) {}\n    }\n\n    #[derive(Debug)]\n    struct TestIo {\n        bind_uri: BindUri,\n        close_calls: Arc<AtomicUsize>,\n    }\n\n    impl IO for TestIo {\n        fn bind_uri(&self) -> BindUri {\n            self.bind_uri.clone()\n        }\n\n        fn bound_addr(&self) -> io::Result<SocketAddr> {\n            Err(io::Error::new(io::ErrorKind::Unsupported, \"not needed\"))\n        }\n\n        fn max_segment_size(&self) -> io::Result<usize> {\n            Ok(1200)\n        }\n\n        fn max_segments(&self) -> io::Result<usize> {\n            Ok(1)\n        }\n\n        fn poll_send(\n            &self,\n            _cx: &mut Context,\n            _pkts: &[io::IoSlice],\n            _route: route::Route,\n        ) -> Poll<io::Result<usize>> {\n            Poll::Ready(Ok(0))\n        }\n\n        fn poll_recv(\n            &self,\n            _cx: &mut Context,\n            _pkts: &mut [BytesMut],\n            _route: &mut [route::Route],\n        ) -> Poll<io::Result<usize>> {\n            Poll::Pending\n        }\n\n        fn poll_close(&mut self, _cx: &mut Context) -> Poll<io::Result<()>> {\n            self.close_calls.fetch_add(1, Ordering::SeqCst);\n            Poll::Ready(Ok(()))\n        }\n    }\n\n    #[derive(Debug)]\n    struct TestFactory {\n        close_calls: Arc<AtomicUsize>,\n    }\n\n    impl ProductIO for TestFactory {\n        fn bind(&self, bind_uri: BindUri) 
-> Box<dyn IO> {\n            Box::new(TestIo {\n                bind_uri,\n                close_calls: self.close_calls.clone(),\n            })\n        }\n    }\n\n    #[test]\n    fn binding_take_io_is_idempotent_and_switches_to_dropping_io() {\n        let close_calls = Arc::new(AtomicUsize::new(0));\n        let bind_uri: BindUri = \"inet://127.0.0.1:0\".into();\n\n        let mut binding = Binding::new(\n            Box::new(TestIo {\n                bind_uri: bind_uri.clone(),\n                close_calls: close_calls.clone(),\n            }),\n            UniqueIdGenerator::new().generate(),\n        );\n\n        let first = binding.take_io();\n        assert!(first.is_some());\n        assert!(binding.is_dropping());\n\n        let second = binding.take_io();\n        assert!(second.is_none());\n\n        // Ensure the original IO wasn't closed by take_io itself.\n        assert_eq!(close_calls.load(Ordering::SeqCst), 0);\n    }\n\n    #[test]\n    fn poll_close_shuts_down_components_before_io_close() {\n        let shutdown_calls = Arc::new(AtomicUsize::new(0));\n        let close_calls = Arc::new(AtomicUsize::new(0));\n\n        let bind_uri: BindUri = \"inet://127.0.0.1:0\".into();\n        let mut components = Components::new();\n        components.init_with(|| TestComponent {\n            shutdown_calls: shutdown_calls.clone(),\n        });\n\n        let mut cx = Context::from_waker(noop_waker_ref());\n        let ctx = InterfaceContext {\n            factory: Arc::new(TestFactory {\n                close_calls: close_calls.clone(),\n            }),\n            binding: RwLock::new(Binding::new(\n                Box::new(TestIo {\n                    bind_uri,\n                    close_calls: close_calls.clone(),\n                }),\n                UniqueIdGenerator::new().generate(),\n            )),\n            dropped: Arc::new(SetOnce::new()),\n            ifaces: Arc::new(InterfaceManager::new()),\n            components: 
RwLock::new(components),\n        };\n\n        let r = ctx.poll_close(&mut cx);\n        assert!(matches!(r, Poll::Ready(Ok(()))));\n        assert_eq!(shutdown_calls.load(Ordering::SeqCst), 1);\n        assert_eq!(close_calls.load(Ordering::SeqCst), 1);\n\n        // Prevent Drop from spawning without a runtime.\n        let _ = ctx.binding_mut().take_io();\n    }\n}\n"
  },
  {
    "path": "qinterface/tests/auto_rebind.rs",
    "content": "mod common;\n\nuse std::{sync::Arc, time::Duration};\n\nuse common::*;\nuse qinterface::{\n    component::alive::RebindOnNetworkChangedComponent, device::Devices, manager::InterfaceManager,\n};\nuse tokio::time;\n\n#[test]\nfn rebind_on_network_changed_triggers_on_recoverable_failure() {\n    run(async {\n        let Some(bind_uri) = any_iface_bind_uri() else {\n            // No real network interface in this environment; skip.\n            return;\n        };\n\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n\n        let bind_iface = manager.bind(bind_uri.clone(), factory).await;\n        let before = bind_iface.borrow();\n\n        let probe = Arc::new(Probe::default());\n        bind_iface.insert_component_with(|iface| {\n            RebindOnNetworkChangedComponent::new(iface, Devices::global())\n        });\n        bind_iface.insert_component_with(|_iface| ProbeComponent::new(probe.clone()));\n\n        // The component calls try_rebind() once at init.\n        // If alive-check considers the interface unhealthy (recoverable error), it will rebind.\n        let _ = time::timeout(Duration::from_secs(2), async {\n            loop {\n                let now = bind_iface.borrow();\n                if !now.same_io(&before) {\n                    break;\n                }\n                time::sleep(Duration::from_millis(10)).await;\n            }\n        })\n        .await;\n\n        // If it did rebind, the probe should have seen reinit.\n        // If it didn't (alive-check passed), that's also acceptable on some systems.\n        let _reinit_calls = probe.reinit_calls.load(std::sync::atomic::Ordering::SeqCst);\n    })\n}\n"
  },
  {
    "path": "qinterface/tests/common/mod.rs",
    "content": "#![allow(unused)]\n\nuse std::{\n    future::Future,\n    io,\n    net::{IpAddr, Ipv4Addr, SocketAddr},\n    sync::{\n        Arc, Mutex,\n        atomic::{AtomicBool, AtomicUsize, Ordering},\n    },\n    task::{Context, Poll},\n    time::Duration,\n};\n\nuse bytes::BytesMut;\nuse qbase::net::route::{Line, Link, Pathway, Route};\nuse qinterface::{Interface, bind_uri::BindUri, component::Component, device::Devices, io::IO};\nuse tokio::{runtime::Runtime, sync::Notify, time};\n\npub fn run<F: Future>(future: F) -> F::Output {\n    static RT: std::sync::LazyLock<Runtime> = std::sync::LazyLock::new(|| {\n        tokio::runtime::Builder::new_multi_thread()\n            .enable_all()\n            .build()\n            .unwrap()\n    });\n\n    RT.block_on(async move {\n        match time::timeout(Duration::from_secs(30), future).await {\n            Ok(output) => output,\n            Err(_timedout) => panic!(\"test timed out\"),\n        }\n    })\n}\n\npub fn test_bind_uri() -> BindUri {\n    // inet scheme is easiest & does not require real interfaces\n    let base: BindUri = \"inet://127.0.0.1:0\".into();\n    base.alloc_port()\n}\n\npub fn any_iface_bind_uri() -> Option<BindUri> {\n    let devices = Devices::global();\n    let interfaces = devices.interfaces();\n\n    // prefer v4 for simplicity\n    for (name, iface) in &interfaces {\n        if !iface.ipv4.is_empty() {\n            return Some(format!(\"iface://v4.{name}:0\").as_str().into());\n        }\n    }\n\n    // fallback v6 (non-link-local selection happens in resolve())\n    for (name, iface) in &interfaces {\n        if !iface.ipv6.is_empty() {\n            return Some(format!(\"iface://v6.{name}:0\").as_str().into());\n        }\n    }\n\n    None\n}\n\n#[derive(Debug, Default)]\npub struct FakeIoState {\n    pub generation: AtomicUsize,\n    pub close_calls: AtomicUsize,\n}\n\n#[derive(Debug)]\npub struct FakeIo {\n    bind_uri: BindUri,\n    bound_addr: SocketAddr,\n    state: 
Arc<FakeIoState>,\n    closed: AtomicBool,\n    close_notify: Arc<Notify>,\n}\n\nimpl FakeIo {\n    pub fn new(bind_uri: BindUri, bound_addr: SocketAddr, state: Arc<FakeIoState>) -> Self {\n        Self {\n            bind_uri,\n            bound_addr,\n            state,\n            closed: AtomicBool::new(false),\n            close_notify: Arc::new(Notify::new()),\n        }\n    }\n\n    pub fn close_notify(&self) -> Arc<Notify> {\n        self.close_notify.clone()\n    }\n}\n\nimpl IO for FakeIo {\n    fn bind_uri(&self) -> BindUri {\n        self.bind_uri.clone()\n    }\n\n    fn bound_addr(&self) -> io::Result<SocketAddr> {\n        Ok(self.bound_addr)\n    }\n\n    fn max_segment_size(&self) -> io::Result<usize> {\n        Ok(1500)\n    }\n\n    fn max_segments(&self) -> io::Result<usize> {\n        Ok(1)\n    }\n\n    fn poll_send(\n        &self,\n        _cx: &mut Context,\n        pkts: &[io::IoSlice],\n        _route: Route,\n    ) -> Poll<io::Result<usize>> {\n        Poll::Ready(Ok(pkts.len()))\n    }\n\n    fn poll_recv(\n        &self,\n        _cx: &mut Context,\n        _pkts: &mut [BytesMut],\n        _route: &mut [Route],\n    ) -> Poll<io::Result<usize>> {\n        Poll::Pending\n    }\n\n    fn poll_close(&mut self, _cx: &mut Context) -> Poll<io::Result<()>> {\n        if !self.closed.swap(true, Ordering::SeqCst) {\n            self.state.close_calls.fetch_add(1, Ordering::SeqCst);\n            self.close_notify.notify_waiters();\n        }\n        Poll::Ready(Ok(()))\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct FakeFactory {\n    pub state: Arc<FakeIoState>,\n    pub base_port: u16,\n}\n\nimpl FakeFactory {\n    pub fn new() -> Self {\n        Self {\n            state: Arc::new(FakeIoState::default()),\n            base_port: 50000,\n        }\n    }\n}\n\nimpl qinterface::io::ProductIO for FakeFactory {\n    fn bind(&self, bind_uri: BindUri) -> Box<dyn IO> {\n        let generation = self.state.generation.fetch_add(1, 
Ordering::SeqCst) + 1;\n        let bound_addr = SocketAddr::new(\n            IpAddr::V4(Ipv4Addr::LOCALHOST),\n            self.base_port.saturating_add(generation as u16),\n        );\n        Box::new(FakeIo::new(bind_uri, bound_addr, self.state.clone()))\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum ProbeEventKind {\n    Reinit,\n    Shutdown,\n}\n\n#[derive(Debug, Clone)]\npub struct ProbeEvent {\n    pub kind: ProbeEventKind,\n    pub bind_uri: BindUri,\n}\n\n#[derive(Debug, Default)]\npub struct Probe {\n    pub shutdown_calls: AtomicUsize,\n    pub reinit_calls: AtomicUsize,\n    pub events: Mutex<Vec<ProbeEvent>>,\n    pub last_bind_uri: Mutex<Option<BindUri>>,\n}\n\n#[derive(Debug, Clone)]\npub struct ProbeComponent {\n    pub probe: Arc<Probe>,\n}\n\nimpl ProbeComponent {\n    pub fn new(probe: Arc<Probe>) -> Self {\n        Self { probe }\n    }\n}\n\nimpl Component for ProbeComponent {\n    fn poll_shutdown(&self, _cx: &mut Context<'_>) -> Poll<()> {\n        self.probe.shutdown_calls.fetch_add(1, Ordering::SeqCst);\n        let bind_uri = self\n            .probe\n            .last_bind_uri\n            .lock()\n            .unwrap()\n            .clone()\n            .unwrap_or_else(test_bind_uri);\n        self.probe.events.lock().unwrap().push(ProbeEvent {\n            kind: ProbeEventKind::Shutdown,\n            bind_uri,\n        });\n        Poll::Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        self.probe.reinit_calls.fetch_add(1, Ordering::SeqCst);\n        *self.probe.last_bind_uri.lock().unwrap() = Some(iface.bind_uri());\n        self.probe.events.lock().unwrap().push(ProbeEvent {\n            kind: ProbeEventKind::Reinit,\n            bind_uri: iface.bind_uri(),\n        });\n    }\n}\n\npub fn dummy_packet_header() -> Route {\n    let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 1);\n    let way = Pathway::new(addr.into(), addr.into());\n    let link = Link::new(addr, addr);\n    let 
line = Line::new(link, 64, None, 0);\n    Route::new(way, line)\n}\n"
  },
  {
    "path": "qinterface/tests/components.rs",
    "content": "mod common;\n\nuse std::sync::{\n    Arc,\n    atomic::{AtomicBool, AtomicUsize, Ordering},\n};\n\nuse common::*;\nuse qinterface::{Interface, component::Component, manager::InterfaceManager};\n\n#[derive(Debug, Default)]\nstruct RouterState {\n    shutdown_calls: AtomicUsize,\n    reinit_calls: AtomicUsize,\n}\n\n#[derive(Debug, Clone)]\nstruct RouterComponent {\n    state: Arc<RouterState>,\n}\n\nimpl Component for RouterComponent {\n    fn poll_shutdown(&self, _cx: &mut std::task::Context<'_>) -> std::task::Poll<()> {\n        self.state.shutdown_calls.fetch_add(1, Ordering::SeqCst);\n        std::task::Poll::Ready(())\n    }\n\n    fn reinit(&self, _iface: &Interface) {\n        self.state.reinit_calls.fetch_add(1, Ordering::SeqCst);\n    }\n}\n\n#[derive(Debug, Default)]\nstruct ClientState {\n    saw_router: AtomicBool,\n    missing_router_reinits: AtomicUsize,\n}\n\n#[derive(Debug, Clone)]\nstruct ClientComponent {\n    state: Arc<ClientState>,\n}\n\nimpl Component for ClientComponent {\n    fn poll_shutdown(&self, _cx: &mut std::task::Context<'_>) -> std::task::Poll<()> {\n        std::task::Poll::Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        let has_router = iface\n            .with_components(|cs| cs.get::<RouterComponent>().is_some())\n            .expect(\"reinit should always see a non-stale iface\");\n\n        if has_router {\n            self.state.saw_router.store(true, Ordering::SeqCst);\n        } else {\n            self.state\n                .missing_router_reinits\n                .fetch_add(1, Ordering::SeqCst);\n        }\n    }\n}\n\n#[test]\nfn component_dependency_missing_then_added_is_observable_on_rebind() {\n    run(async {\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n\n        let bind_uri = test_bind_uri();\n        let bind_iface = manager.bind(bind_uri, factory).await;\n\n        let client_state = 
Arc::new(ClientState::default());\n        bind_iface.insert_component_with(|_iface| ClientComponent {\n            state: client_state.clone(),\n        });\n\n        // First rebind: client exists, router missing\n        bind_iface.rebind().await;\n        assert!(!client_state.saw_router.load(Ordering::SeqCst));\n        assert!(client_state.missing_router_reinits.load(Ordering::SeqCst) > 0);\n\n        // Add dependency later, then rebind again: client should observe it.\n        let router_state = Arc::new(RouterState::default());\n        bind_iface.insert_component_with(|_iface| RouterComponent {\n            state: router_state.clone(),\n        });\n\n        bind_iface.rebind().await;\n        assert!(client_state.saw_router.load(Ordering::SeqCst));\n        assert!(router_state.reinit_calls.load(Ordering::SeqCst) > 0);\n    })\n}\n\n#[test]\nfn component_dependency_present_is_visible_inside_reinit() {\n    run(async {\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n\n        let bind_uri = test_bind_uri();\n        let bind_iface = manager.bind(bind_uri, factory).await;\n\n        let router_state = Arc::new(RouterState::default());\n        bind_iface.insert_component_with(|_iface| RouterComponent {\n            state: router_state.clone(),\n        });\n\n        let client_state = Arc::new(ClientState::default());\n        bind_iface.insert_component_with(|_iface| ClientComponent {\n            state: client_state.clone(),\n        });\n\n        bind_iface.rebind().await;\n        assert!(client_state.saw_router.load(Ordering::SeqCst));\n        assert!(router_state.reinit_calls.load(Ordering::SeqCst) > 0);\n    })\n}\n"
  },
  {
    "path": "qinterface/tests/lifecycle.rs",
    "content": "mod common;\n\nuse std::{io::ErrorKind, sync::Arc, time::Duration};\n\nuse common::*;\nuse qinterface::{io::IO, manager::InterfaceManager};\nuse tokio::time;\n\n#[test]\nfn unbind_destroys_and_weak_upgrade_fails() {\n    run(async {\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n        let state = factory.state.clone();\n\n        let bind_uri = test_bind_uri();\n        let bind_iface: qinterface::BindInterface = manager.bind(bind_uri.clone(), factory).await;\n        let weak_bind = bind_iface.downgrade();\n        let weak_iface = bind_iface.borrow_weak();\n\n        // unbind is async; ensure it completes\n        manager.unbind(bind_uri.clone()).await;\n\n        // existing strong handle remains upgradeable, but should be unusable\n        let err = bind_iface.borrow().bound_addr().unwrap_err();\n        assert_eq!(err.kind(), ErrorKind::NotConnected);\n\n        // ensure IO was actually closed\n        time::timeout(Duration::from_secs(2), async {\n            while state.close_calls.load(std::sync::atomic::Ordering::SeqCst) == 0 {\n                time::sleep(Duration::from_millis(10)).await;\n            }\n        })\n        .await\n        .expect(\"unbind did not close IO in time\");\n\n        drop(bind_iface);\n\n        time::timeout(Duration::from_secs(2), async {\n            loop {\n                if weak_bind.upgrade().is_err() && weak_iface.upgrade().is_err() {\n                    break;\n                }\n                time::sleep(Duration::from_millis(10)).await;\n            }\n        })\n        .await\n        .expect(\"weak upgrade should eventually fail after unbind + drop\");\n    })\n}\n\n#[test]\nfn auto_drop_when_last_ref_gone_allows_rebind() {\n    run(async {\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n        let state = factory.state.clone();\n\n        let bind_uri = 
test_bind_uri();\n\n        // Bind and create a borrowed Interface (strong ref)\n        let bind_iface: qinterface::BindInterface =\n            manager.bind(bind_uri.clone(), factory.clone()).await;\n        let iface = bind_iface.borrow();\n        drop(bind_iface);\n        drop(iface);\n\n        // Binding again must wait for the dropped signal, so this also verifies auto-drop.\n        let _bind_iface2 = time::timeout(Duration::from_secs(2), async {\n            manager.bind(bind_uri.clone(), factory.clone()).await\n        })\n        .await\n        .expect(\"rebind after auto-drop timed out\");\n\n        assert!(state.close_calls.load(std::sync::atomic::Ordering::SeqCst) > 0);\n    })\n}\n"
  },
  {
    "path": "qinterface/tests/locations.rs",
    "content": "mod common;\n\nuse std::{sync::Arc, time::Duration};\n\nuse common::*;\nuse qinterface::{\n    component::location::{AddressEvent, Locations, LocationsComponent},\n    manager::InterfaceManager,\n};\nuse tokio::time;\n\n#[test]\nfn locations_component_emits_closed_then_upsert_on_rebind() {\n    run(async {\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n\n        let bind_uri = test_bind_uri();\n        let bind_iface = manager.bind(bind_uri.clone(), factory).await;\n\n        let locations = Arc::new(Locations::new());\n        let mut observer = locations.subscribe();\n\n        bind_iface.insert_component_with(|iface| {\n            LocationsComponent::new(iface.downgrade(), locations.clone())\n        });\n\n        // initial upsert (bound_addr result) should be delivered to the subscriber\n        let (u_bind, ev) = time::timeout(Duration::from_secs(2), observer.recv())\n            .await\n            .expect(\"timeout waiting for initial upsert\")\n            .expect(\"observer closed\");\n        assert_eq!(u_bind, bind_uri);\n        assert!(matches!(ev, AddressEvent::Upsert(_)));\n\n        // trigger rebind\n        bind_iface.rebind().await;\n\n        // must see Closed then Upsert for same bind_uri\n        let (c_bind, c_ev) = time::timeout(Duration::from_secs(2), observer.recv())\n            .await\n            .expect(\"timeout waiting for closed\")\n            .expect(\"observer closed\");\n        assert_eq!(c_bind, bind_uri);\n        assert!(matches!(c_ev, AddressEvent::Closed));\n\n        let (u2_bind, u2_ev) = time::timeout(Duration::from_secs(2), observer.recv())\n            .await\n            .expect(\"timeout waiting for upsert\")\n            .expect(\"observer closed\");\n        assert_eq!(u2_bind, bind_uri);\n        assert!(matches!(u2_ev, AddressEvent::Upsert(_)));\n\n        // sanity: stale interface should not be able to touch component\n       
 let old_iface = bind_iface.borrow();\n        bind_iface.rebind().await;\n        let err = old_iface.with_components(|_c| ()).unwrap_err();\n        let _ = err;\n    })\n}\n"
  },
  {
    "path": "qinterface/tests/rebind.rs",
    "content": "mod common;\n\nuse std::{io::ErrorKind, sync::Arc};\n\nuse common::*;\nuse qinterface::{RebindedError, io::IO, manager::InterfaceManager};\n\n#[test]\nfn manual_rebind_makes_old_interface_stale() {\n    run(async {\n        let manager = InterfaceManager::global().clone();\n        let factory = Arc::new(FakeFactory::new());\n\n        let bind_uri = test_bind_uri();\n        let bind_iface = manager.bind(bind_uri.clone(), factory).await;\n\n        let old_iface = bind_iface.borrow();\n\n        // install a component so we can validate stale with_component\n        let probe = Arc::new(Probe::default());\n        bind_iface.insert_component_with(|_iface| ProbeComponent::new(probe.clone()));\n\n        // rebind -> new bind_id\n        bind_iface.rebind().await;\n        let new_iface = bind_iface.borrow();\n        assert!(!old_iface.same_io(&new_iface));\n\n        // Old iface IO operations should fail with ConnectionReset/RebindedError\n        let err = old_iface.bound_addr().unwrap_err();\n        assert_eq!(err.kind(), ErrorKind::ConnectionReset);\n        assert!(RebindedError::is_source_of(err.get_ref().unwrap()));\n\n        // Old iface component access should fail with RebindedError\n        let err = old_iface\n            .with_component::<ProbeComponent, _>(|_c| ())\n            .unwrap_err();\n        let _ = err; // it's exactly RebindedError\n\n        // New iface works\n        new_iface.bound_addr().expect(\"new iface should be usable\");\n        assert!(probe.reinit_calls.load(std::sync::atomic::Ordering::SeqCst) > 0);\n    })\n}\n"
  },
  {
    "path": "qmacro/Cargo.toml",
    "content": "[package]\nname = \"qmacro\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"dquic's proc macros\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n[lib]\nproc-macro = true\n\n[dependencies]\ndarling = \"0.23\"\nproc-macro2 = \"1\"\nsyn = \"2\"\nquote = \"1\"\n"
  },
  {
    "path": "qmacro/src/derive.rs",
    "content": "use darling::{FromMeta, ast::NestedMeta};\nuse proc_macro::TokenStream;\nuse proc_macro2::TokenStream as TokenStream2;\nuse quote::{ToTokens, format_ident, quote};\nuse syn::{Error, Expr, ExprRange, Ident, ItemEnum, Token, Variant, punctuated::Punctuated};\n\npub fn quic_parameters(item: TokenStream) -> Result<TokenStream2, Error> {\n    let r#enum = syn::parse::<ItemEnum>(item)?;\n    let enum_name = &r#enum.ident;\n\n    let mut try_from_varint_match_arms = quote! {};\n    let mut into_varint_match_arms = quote! {};\n    // TODO: validate\n    let mut validate_match_arms = quote! {};\n    let mut default_value_match_arms = quote! {};\n    let mut value_type_match_arms = quote! {};\n\n    for variant in &r#enum.variants {\n        let discriminant = match variant.discriminant.as_ref() {\n            Some((_eq, discriminant)) => discriminant,\n            None => {\n                return Err(Error::new_spanned(\n                    variant,\n                    \"Each variant must have a discriminant, e.g., `= 0`\",\n                ));\n            }\n        };\n\n        let ident = &variant.ident;\n        try_from_varint_match_arms.extend(quote! {\n            // u64 => Self\n            #discriminant => #enum_name::#ident,\n        });\n        into_varint_match_arms.extend(quote! {\n            // Self => u64\n            #enum_name::#ident => #discriminant,\n        });\n\n        let param_args = parse_variant_attrs(variant)?;\n        let validate =\n            (param_args.gen_validate(ident)).map_err(|msg| Error::new_spanned(variant, msg))?;\n        validate_match_arms.extend(quote! {\n            #enum_name::#ident => { #validate }\n        });\n\n        let default_value = param_args.gen_default_value();\n        default_value_match_arms.extend(quote! {\n            #enum_name::#ident => { #default_value }\n        });\n\n        let value_type = param_args.gen_value_type();\n        value_type_match_arms.extend(quote! 
{\n            #enum_name::#ident => #value_type,\n        });\n    }\n\n    Ok(quote! {\n        // TODO: try from\n        impl ::core::convert::TryFrom<VarInt> for #enum_name {\n            type Error = Error;\n\n            fn try_from(value: VarInt) -> Result<Self, Self::Error> {\n                Ok(match value.into_u64() {\n                    #try_from_varint_match_arms\n                    unknown => return Err(Error::UnknownParameterId(value))\n                })\n            }\n        }\n\n        impl From<#enum_name> for VarInt {\n            fn from(value: #enum_name) -> Self {\n                VarInt::from_u64(match value {\n                    #into_varint_match_arms\n                }).expect(\"All variants should have a valid discriminant\")\n            }\n        }\n\n        impl #enum_name {\n            pub fn validate(&self, value: &ParameterValue) -> Result<(), Error> {\n                match self {\n                    #validate_match_arms\n                }\n                Ok(())\n            }\n\n            pub fn default_value(&self) -> Option<ParameterValue> {\n                match self {\n                    #default_value_match_arms\n                }\n            }\n\n            pub fn value_type(&self) -> ParameterValueType {\n                match self {\n                    #value_type_match_arms\n                }\n            }\n        }\n    })\n}\n\nfn parse_variant_attrs(variant: &Variant) -> Result<ParamArgs, Error> {\n    let param_attr = variant\n        .attrs\n        .iter()\n        .find(|attr| attr.path().is_ident(\"param\"))\n        .ok_or_else(|| {\n            Error::new_spanned(\n                variant,\n                \"Each variant must have a `#[param(...)]` attribute\",\n            )\n        })?;\n\n    let param_metas = param_attr\n        .parse_args_with(Punctuated::<NestedMeta, Token![,]>::parse_terminated)?\n        .into_iter()\n        .collect::<Vec<_>>();\n\n    
ParamArgs::from_list(&param_metas).map_err(|de| de.into())\n}\n\n#[derive(darling::FromMeta)]\nstruct ParamArgs {\n    value_type: ParamType,\n    #[darling(default)]\n    default: Option<Expr>,\n    #[darling(default)]\n    bound: Option<ExprRange>,\n}\n\nimpl ParamArgs {\n    fn gen_validate(&self, id: &Ident) -> Result<TokenStream2, &'static str> {\n        let Some(bound) = &self.bound else {\n            return Ok(quote! {});\n        };\n\n        let value_type = format_ident!(\"{}\", format!(\"{:?}\", self.value_type));\n        let mut convert_value = quote! {\n            let ParameterValue::#value_type(v) = value else {\n                return Err(Error::InvalidValueType(\n                    Self::#id,\n                    value.value_type(),\n                ));\n            };\n        };\n\n        convert_value.extend(match self.value_type {\n            ParamType::VarInt => quote! { v.into_u64() },\n            ParamType::Duration => quote! { v.as_millis() as u64 },\n            _ => return Err(\"Bound is only applicable to VarInt or Duration types\"),\n        });\n\n        Ok(quote! {\n            let value = { #convert_value };\n            if !(#bound).contains(&value) {\n                return Err(Error::OutOfBounds (\n                    Self::#id,\n                    value,\n                    #bound,\n                ));\n            }\n        })\n    }\n\n    fn gen_default_value(&self) -> TokenStream2 {\n        match &self.default {\n            Some(default) => quote! { Some((#default).into()) },\n            None => quote! { None },\n        }\n    }\n\n    fn gen_value_type(&self) -> TokenStream2 {\n        let value_type = format_ident!(\"{}\", format!(\"{:?}\", self.value_type));\n        quote! 
{ ParameterValueType::#value_type }\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum ParamType {\n    VarInt,\n    Boolean,\n    Bytes,\n    Duration,\n    ResetToken,\n    ConnectionId,\n    PreferredAddress,\n}\n\nimpl FromMeta for ParamType {\n    fn from_string(lit: &str) -> ::darling::Result<Self> {\n        match lit {\n            \"VarInt\" => Ok(ParamType::VarInt),\n            \"Boolean\" => Ok(ParamType::Boolean),\n            \"Bytes\" => Ok(ParamType::Bytes),\n            \"Duration\" => Ok(ParamType::Duration),\n            \"ResetToken\" => Ok(ParamType::ResetToken),\n            \"ConnectionId\" => Ok(ParamType::ConnectionId),\n            \"PreferredAddress\" => Ok(ParamType::PreferredAddress),\n            __other => Err(::darling::Error::unknown_value(__other)),\n        }\n    }\n\n    fn from_expr(expr: &Expr) -> darling::Result<Self> {\n        match *expr {\n            Expr::Lit(ref lit) => Self::from_value(&lit.lit),\n            Expr::Group(ref group) => {\n                // syn may generate this invisible group delimiter when the input to the darling\n                // proc macro (specifically, the attributes) are generated by a\n                // macro_rules! (e.g. propagating a macro_rules!'s expr)\n                // Since we want to basically ignore these invisible group delimiters,\n                // we just propagate the call to the inner expression.\n                Self::from_expr(&group.expr)\n            }\n            Expr::Path(ref path) => return Self::from_string(&path.to_token_stream().to_string()),\n            _ => Err(darling::Error::unexpected_expr_type(expr)),\n        }\n        .map_err(|e| e.with_span(expr))\n    }\n}\n"
  },
  {
    "path": "qmacro/src/lib.rs",
    "content": "use proc_macro::TokenStream;\nuse syn::Error;\n\nmod derive;\n\n#[proc_macro_derive(ParameterId, attributes(param))]\npub fn quic_parameters(item: TokenStream) -> TokenStream {\n    TokenStream::from(derive::quic_parameters(item).unwrap_or_else(Error::into_compile_error))\n}\n"
  },
  {
    "path": "qprotocol/Cargo.toml",
    "content": "[package]\nname = \"qprotocol\"\nversion.workspace = true\nedition.workspace = true\ndescription = \"STUN, forward and QUIC packet routing protocol implementation for dquic\"\nreadme = \"README.md\"\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\n\n[dependencies]\nasync-trait = { workspace = true }\nbon = { workspace = true }\nbytes = { workspace = true }\ndashmap = { workspace = true }\nderive_more = { workspace = true }\nenum_dispatch = { workspace = true }\nfutures = { workspace = true }\nbitflags = { workspace = true }\nnom = { workspace = true }\nqbase = { workspace = true }\nqresolve = { workspace = true }\nqevent = { workspace = true }\nqinterface = { workspace = true, features = [\"qudp\"] }\nqudp = { workspace = true }\nrand = { workspace = true }\nrustls = { workspace = true }\nsmallvec = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"sync\", \"rt\", \"time\", \"macros\"] }\ntokio-util = { workspace = true, features = [\"rt\"] }\ntracing = { workspace = true }\nnetdev = { workspace = true }\n\n[dev-dependencies]\nclap = { workspace = true }\nrustls = { workspace = true, features = [\"ring\"] }\ntokio = { features = [\"fs\", \"rt-multi-thread\"], workspace = true }\ntokio-test = \"0.4\"\ntracing = { workspace = true }\n\n[dev-dependencies.tracing-subscriber]\nworkspace = true\nfeatures = [\"fmt\", \"ansi\", \"env-filter\", \"time\", \"tracing-log\"]\n\n[features]\n# Enable shorter TTL only for tests (especially integration tests in other crates).\ntest-ttl = []\n"
  },
  {
    "path": "qprotocol/src/dns.rs",
    "content": "\n"
  },
  {
    "path": "qprotocol/src/forward.rs",
    "content": "\n"
  },
  {
    "path": "qprotocol/src/io.rs",
    "content": "\n"
  },
  {
    "path": "qprotocol/src/lib.rs",
    "content": "pub mod dns;\npub mod forward;\npub mod io;\npub mod quic;\npub mod stun;\n"
  },
  {
    "path": "qprotocol/src/quic.rs",
    "content": "\n"
  },
  {
    "path": "qprotocol/src/stun/msg.rs",
    "content": "use std::{io, net::SocketAddr};\n\nuse bytes::BufMut;\nuse nom::{\n    Err, IResult, Parser,\n    combinator::map,\n    error::{Error, ErrorKind},\n    multi::many0,\n    number::streaming::{be_u8, be_u16},\n};\nuse qbase::net::{AddrFamily, Family, WriteSocketAddr, be_socket_addr};\nuse rand::RngExt;\nuse thiserror::Error;\n\npub const BINDING_REQUEST: u16 = 0x0001;\npub const BINDING_RESPONSE: u16 = 0x0101;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub struct TransactionId([u8; 16]);\n\nimpl AsRef<[u8]> for TransactionId {\n    fn as_ref(&self) -> &[u8] {\n        &self.0\n    }\n}\n\nimpl TransactionId {\n    pub fn from_slice(slice: &[u8]) -> Self {\n        let mut id = [0u8; 16];\n        id.copy_from_slice(slice);\n        TransactionId(id)\n    }\n\n    pub fn random() -> Self {\n        let mut id = [0u8; 16];\n        rand::rng().fill(&mut id);\n        TransactionId(id)\n    }\n}\n\n#[derive(Debug)]\npub enum Packet {\n    Request(Request),\n    Response(Response),\n}\n\n/// STUN数据包中的Attr类型：\n#[derive(Debug, Clone, PartialEq)]\npub enum Attr {\n    // 由服务器返回的外网映射地址\n    MappedAddress(SocketAddr),\n    // 客户端发起请求携带的指定响应地址\n    ResponseAddress(SocketAddr),\n    // 由客户端请求转发时，携带变换Ip:Port响应的指示\n    ChangeRequest(u8),\n    // 由服务器返回的Response消息的源地址，即服务器的地址\n    SourceAddress(SocketAddr),\n    // 由服务器返回的另一台的STUN服务器地址，\n    // 包括不同端口，供后续参考使用\n    ChangedAddress(SocketAddr),\n}\n\n#[derive(Debug)]\npub enum AttrType {\n    MappedAddress(Family),\n    ResponseAddress(Family),\n    // 由客户端请求转发时，携带变换Ip:Port响应的指示\n    ChangeRequest(u8),\n    // 由服务器返回的Response消息的源地址，即服务器的地址\n    SourceAddress(Family),\n    // 由服务器返回的另一台的STUN服务器地址，\n    // 包括不同端口，供后续参考使用\n    ChangedAddress(Family),\n}\n\n#[derive(Debug, Error)]\n#[error(\"Invalid attribute type: {0}\")]\npub struct InvalidAttrType(u8);\n\nimpl From<AttrType> for u8 {\n    fn from(value: AttrType) -> Self {\n        match value {\n            AttrType::MappedAddress(Family::V4) => 0,\n       
     AttrType::MappedAddress(Family::V6) => 1,\n            AttrType::ResponseAddress(Family::V4) => 2,\n            AttrType::ResponseAddress(Family::V6) => 3,\n            AttrType::SourceAddress(Family::V4) => 4,\n            AttrType::SourceAddress(Family::V6) => 5,\n            AttrType::ChangedAddress(Family::V4) => 6,\n            AttrType::ChangedAddress(Family::V6) => 7,\n            AttrType::ChangeRequest(flag_set) => 8 | flag_set,\n        }\n    }\n}\n\nimpl TryFrom<u8> for AttrType {\n    type Error = InvalidAttrType;\n\n    fn try_from(value: u8) -> Result<Self, Self::Error> {\n        match value {\n            0 => Ok(AttrType::MappedAddress(Family::V4)),\n            1 => Ok(AttrType::MappedAddress(Family::V6)),\n            2 => Ok(AttrType::ResponseAddress(Family::V4)),\n            3 => Ok(AttrType::ResponseAddress(Family::V6)),\n            4 => Ok(AttrType::SourceAddress(Family::V4)),\n            5 => Ok(AttrType::SourceAddress(Family::V6)),\n            6 => Ok(AttrType::ChangedAddress(Family::V4)),\n            7 => Ok(AttrType::ChangedAddress(Family::V6)),\n            8..12 => Ok(AttrType::ChangeRequest(value & 0x3)),\n            _ => Err(InvalidAttrType(value)),\n        }\n    }\n}\n\ntrait WriteAttr {\n    fn put_attr(&mut self, attr: &Attr);\n}\n\nimpl<T: BufMut> WriteAttr for T {\n    fn put_attr(&mut self, attr: &Attr) {\n        let typ: u8 = attr.typ().into();\n        match attr {\n            Attr::MappedAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n            }\n            Attr::ResponseAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n            }\n            Attr::ChangeRequest(flag) => {\n                self.put_u8(typ | *flag);\n            }\n            Attr::SourceAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n         
   }\n            Attr::ChangedAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n            }\n        };\n    }\n}\n\nimpl Attr {\n    pub fn typ(&self) -> AttrType {\n        match self {\n            Attr::MappedAddress(socket_addr) => AttrType::MappedAddress(socket_addr.family()),\n            Attr::ResponseAddress(socket_addr) => AttrType::ResponseAddress(socket_addr.family()),\n            Attr::ChangeRequest(flag_set) => AttrType::ChangeRequest(*flag_set),\n            Attr::SourceAddress(socket_addr) => AttrType::SourceAddress(socket_addr.family()),\n            Attr::ChangedAddress(socket_addr) => AttrType::ChangedAddress(socket_addr.family()),\n        }\n    }\n\n    fn be_attr(input: &[u8]) -> IResult<&[u8], Self> {\n        if input.is_empty() {\n            return Err(Err::Error(Error::new(input, ErrorKind::Eof)));\n        }\n        let (remain, typ) = be_u8(input)?;\n        let typ: AttrType = typ\n            .try_into()\n            .map_err(|_| Err::Error(Error::new(input, ErrorKind::Alt)))?;\n        match typ {\n            AttrType::MappedAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::MappedAddress(addr)))\n            }\n            AttrType::ResponseAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::ResponseAddress(addr)))\n            }\n            AttrType::SourceAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::SourceAddress(addr)))\n            }\n            AttrType::ChangedAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::ChangedAddress(addr)))\n            }\n            AttrType::ChangeRequest(flags) => Ok((remain, Attr::ChangeRequest(flags))),\n        }\n    
 }\n}\n\n#[derive(Debug, PartialEq, Clone)]\npub struct Request(Vec<Attr>);\n\n/// Only three kinds of Request are used so far: the empty default Request; one asking for a response from\n/// a changed IP and port; and one asking for a response from a changed port only. Clearly a Request can\n/// never carry more than one ChangeRequest attribute; to enforce this, all three kinds are constructed\n/// directly, and no other mutating functions are provided.\nimpl Default for Request {\n    fn default() -> Self {\n        Self(Vec::with_capacity(0))\n    }\n}\n\npub(crate) trait WriteRequest {\n    fn put_request(&mut self, request: &Request);\n}\n\nimpl<T: BufMut> WriteRequest for T {\n    fn put_request(&mut self, request: &Request) {\n        for attr in &request.0 {\n            self.put_attr(attr);\n        }\n    }\n}\n\npub fn be_request(input: &[u8]) -> IResult<&[u8], Request> {\n    many0(Attr::be_attr).map(Request).parse(input)\n}\n\npub const CHANGE_PORT: u8 = 0x01;\npub const CHANGE_IP: u8 = 0x02;\n\nimpl Request {\n    pub fn change_ip_and_port() -> Self {\n        let mut request = Request::default();\n        request.0.push(Attr::ChangeRequest(CHANGE_IP | CHANGE_PORT));\n        request\n    }\n\n    pub fn change_port() -> Self {\n        let mut request = Request::default();\n        request.0.push(Attr::ChangeRequest(CHANGE_PORT));\n        request\n    }\n\n    pub fn add_response_address(&mut self, addr: SocketAddr) -> &mut Self {\n        self.0.push(Attr::ResponseAddress(addr));\n        self\n    }\n\n    // Carry only the response address, without a ChangeRequest attribute\n    pub fn with_response_addr(addr: SocketAddr) -> Self {\n        Request(vec![Attr::ResponseAddress(addr)])\n    }\n\n    pub fn change_request(&self) -> Option<u8> {\n        for attr in &self.0 {\n            if let Attr::ChangeRequest(flags) = attr {\n                return Some(*flags);\n            }\n        }\n        None\n    }\n\n    pub fn response_address(&self) -> Option<&SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::ResponseAddress(addr) = attr {\n                return Some(addr);\n            }\n        }\n        None\n    }\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub struct Response(pub 
Vec<Attr>);\n\npub(crate) trait WriteResponse {\n    fn put_response(&mut self, response: &Response);\n}\n\nimpl<T: BufMut> WriteResponse for T {\n    fn put_response(&mut self, response: &Response) {\n        for attr in &response.0 {\n            self.put_attr(attr);\n        }\n    }\n}\n\npub fn be_response(input: &[u8]) -> IResult<&[u8], Response> {\n    many0(Attr::be_attr).map(Response).parse(input)\n}\n\nimpl Response {\n    pub fn with(attrs: Vec<Attr>) -> Self {\n        Response(attrs)\n    }\n\n    pub fn map_addr(&self) -> io::Result<SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::MappedAddress(addr) = attr {\n                return Ok(*addr);\n            };\n        }\n        Err(io::Error::other(\"No mapped address found in response\"))\n    }\n\n    pub fn changed_addr(&self) -> io::Result<SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::ChangedAddress(addr) = attr {\n                return Ok(*addr);\n            };\n        }\n        Err(io::Error::other(\"No changed address found in response\"))\n    }\n\n    pub fn source_addr(&self) -> io::Result<SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::SourceAddress(addr) = attr {\n                return Ok(*addr);\n            };\n        }\n        Err(io::Error::other(\"No source address found in response\"))\n    }\n}\n\npub fn be_packet(input: &[u8]) -> IResult<&[u8], (TransactionId, Packet)> {\n    let (remain, typ) = be_u16(input)?;\n    let (txid, remain) = remain.split_at(16);\n    let (remain, packet) = match typ {\n        BINDING_REQUEST => map(be_request, Packet::Request).parse(remain)?,\n        BINDING_RESPONSE => map(be_response, Packet::Response).parse(remain)?,\n        _ => return Err(Err::Error(Error::new(input, ErrorKind::Alt))),\n    };\n    Ok((remain, (TransactionId::from_slice(txid), packet)))\n}\n\npub trait WritePacket {\n    fn put_packet(&mut self, txid: &TransactionId, packet: &Packet);\n}\n\nimpl<T: 
BufMut> WritePacket for T {\n    fn put_packet(&mut self, txid: &TransactionId, packet: &Packet) {\n        match packet {\n            Packet::Request(request) => {\n                self.put_u16(BINDING_REQUEST);\n                self.put_slice(txid.as_ref());\n                self.put_request(request);\n            }\n            Packet::Response(response) => {\n                self.put_u16(BINDING_RESPONSE);\n                self.put_slice(txid.as_ref());\n                self.put_response(response);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn attr_deserialize() {\n        assert_eq!(\n            Attr::be_attr(&[4, 78, 34, 127, 0, 0, 1][..]),\n            Ok((\n                &[][..],\n                Attr::SourceAddress(\"127.0.0.1:20002\".parse().unwrap())\n            ))\n        );\n\n        assert_eq!(\n            Attr::be_attr(&[6, 78, 34, 127, 0, 0, 1][..]),\n            Ok((\n                &[][..],\n                Attr::ChangedAddress(\"127.0.0.1:20002\".parse().unwrap())\n            ))\n        );\n        assert_eq!(\n            Attr::be_attr(&[0, 48, 57, 127, 0, 0, 1][..]),\n            Ok((\n                &[][..],\n                Attr::MappedAddress(\"127.0.0.1:12345\".parse().unwrap())\n            ))\n        )\n    }\n\n    #[test]\n    fn request_serialize() {\n        let buf = [\n            4, 78, 34, 127, 0, 0, 1, 0, 48, 57, 127, 0, 0, 1, 6, 78, 34, 127, 0, 0, 1,\n        ];\n        let (remain, response) = be_response(&buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        assert_eq!(\n            response,\n            Response(vec![\n                Attr::SourceAddress(\"127.0.0.1:20002\".parse().unwrap()),\n                Attr::MappedAddress(\"127.0.0.1:12345\".parse().unwrap()),\n                Attr::ChangedAddress(\"127.0.0.1:20002\".parse().unwrap())\n            ])\n        );\n    }\n}\n"
  },
  {
    "path": "qprotocol/src/stun.rs",
    "content": "pub mod msg;\n"
  },
  {
    "path": "qrecovery/Cargo.toml",
    "content": "[package]\nname = \"qrecovery\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"The reliable transport part of QUIC, a part of dquic\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\nrust-version.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbytes = { workspace = true }\nderive_more = { workspace = true, features = [\"deref\"] }\nenum_dispatch = { workspace = true }\nfutures = { workspace = true }\nqbase = { workspace = true }\nqevent = { workspace = true }\nrand = { workspace = true }\nrustls = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"io-util\", \"time\"] }\ntracing = { workspace = true }\n\n[dev-dependencies]\ntokio = { workspace = true, features = [\"test-util\", \"macros\"] }\n"
  },
  {
    "path": "qrecovery/src/crypto.rs",
    "content": "//! The reliable transmission of the crypto stream.\nmod send {\n    use std::{\n        io,\n        pin::Pin,\n        sync::{Arc, Mutex},\n        task::{Context, Poll, Waker},\n    };\n\n    use bytes::{BufMut, Bytes};\n    use qbase::{\n        Epoch,\n        frame::CryptoFrame,\n        net::tx::{ArcSendWakers, Signals},\n        packet::{Package, PacketContent},\n        varint::{VARINT_MAX, VarInt},\n    };\n    use tokio::io::AsyncWrite;\n\n    use crate::send::SendBuf;\n\n    #[derive(Debug)]\n    pub(super) struct Sender {\n        sndbuf: SendBuf,\n        writable_waker: Option<Waker>,\n        flush_waker: Option<Waker>,\n        tx_wakers: ArcSendWakers,\n    }\n\n    impl Sender {\n        /// 不再长的像write，因为rust可以多返回值，因此在返回的结果里面将读到的数据返回.\n        /// 调用者一定要自行将其写入到buffer中发送。\n        /// 一旦这种函数成功使用，try_read_data就可以淘汰了\n        fn try_load_data<P>(&mut self, packet: &mut P) -> Result<(), Signals>\n        where\n            P: BufMut + ?Sized,\n            for<'b> (CryptoFrame, &'b [Bytes]): Package<P>,\n        {\n            let max_size = packet.remaining_mut();\n            let predicate = |offset: u64| CryptoFrame::estimate_max_capacity(max_size, offset);\n            self.sndbuf\n                .pick_up(predicate, usize::MAX)\n                .map(|(range, _is_fresh, data)| {\n                    let frame = CryptoFrame::new(\n                        VarInt::from_u64(range.start).unwrap(),\n                        VarInt::try_from(range.end - range.start).unwrap(),\n                    );\n                    (frame, data.as_slice()).dump(packet).unwrap();\n                })\n        }\n\n        fn on_data_acked(&mut self, crypto_frame: &CryptoFrame) {\n            self.sndbuf.on_data_acked(&crypto_frame.range());\n            if self.sndbuf.remaining_mut() > 0\n                && let Some(waker) = self.writable_waker.take()\n            {\n                waker.wake();\n            }\n        }\n\n        fn 
may_loss_data(&mut self, crypto_frame: &CryptoFrame) {\n            self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n            self.sndbuf.may_loss_data(&crypto_frame.range())\n        }\n    }\n\n    impl Sender {\n        fn poll_write(&mut self, cx: &mut Context<'_>, buf: &[u8]) -> Poll<io::Result<usize>> {\n            assert!(\n                self.writable_waker.is_none()\n                    || matches!(self.writable_waker, Some(ref waker) if waker.will_wake(cx.waker()))\n            );\n            assert!(\n                self.flush_waker.is_none()\n                    || matches!(self.flush_waker, Some(ref waker) if waker.will_wake(cx.waker()))\n            );\n            if self.sndbuf.written() + buf.len() as u64 > VARINT_MAX {\n                return Poll::Ready(Err(io::Error::new(\n                    io::ErrorKind::WouldBlock,\n                    \"The largest offset delivered on the crypto stream cannot exceed 2^62-1\",\n                )));\n            }\n\n            debug_assert!(self.sndbuf.has_remaining_mut());\n\n            self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n            self.sndbuf.write(Bytes::copy_from_slice(buf));\n            Poll::Ready(Ok(buf.len()))\n        }\n\n        fn poll_flush(&mut self, cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n            assert!(\n                self.flush_waker.is_none()\n                    || matches!(self.flush_waker, Some(ref waker) if waker.will_wake(cx.waker()))\n            );\n            if self.sndbuf.is_all_rcvd() {\n                Poll::Ready(Ok(()))\n            } else {\n                self.flush_waker = Some(cx.waker().clone());\n                Poll::Pending\n            }\n        }\n    }\n\n    pub(super) type ArcSender = Arc<Mutex<Sender>>;\n\n    /// Struct for crypto layer to send crypto data to the peer.\n    ///\n    /// To reduce memory reallocation, if the internal buffer is full, the [`write`] call will\n    /// be blocked until the data sent 
been acknowledged by peer.\n    ///\n    /// [`write`]: tokio::io::AsyncWriteExt::write\n    #[derive(Debug, Clone)]\n    pub struct CryptoStreamWriter(pub(super) ArcSender);\n    /// Struct for transport layer to send crypto data.\n    #[derive(Debug, Clone)]\n    pub struct CryptoStreamOutgoing(pub(super) ArcSender);\n\n    impl AsyncWrite for CryptoStreamWriter {\n        fn poll_write(\n            self: Pin<&mut Self>,\n            cx: &mut Context<'_>,\n            buf: &[u8],\n        ) -> Poll<io::Result<usize>> {\n            self.0.lock().unwrap().poll_write(cx, buf)\n        }\n\n        fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n            self.0.lock().unwrap().poll_flush(cx)\n        }\n\n        fn poll_shutdown(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n            // 永远不会关闭，直到Connection级别的关闭\n            Poll::Ready(Ok(()))\n        }\n    }\n\n    impl CryptoStreamOutgoing {\n        /// Try to load the crypto data  into the `packet`.\n        pub fn try_load_data_into<P>(&self, packet: &mut P, force: bool) -> Result<(), Signals>\n        where\n            P: BufMut + ?Sized,\n            for<'b> (CryptoFrame, &'b [Bytes]): Package<P>,\n        {\n            use std::ops::ControlFlow::*;\n            let mut inner = self.0.lock().unwrap();\n            if force {\n                inner.sndbuf.resend_flighting();\n            }\n            let (Continue(result) | Break(result)) =\n                core::iter::from_fn(|| Some(inner.try_load_data(packet))).try_fold(\n                    Err(Signals::empty()),\n                    |result, once| match (result, once) {\n                        (Err(_empty), Ok(())) => Continue(Ok(())),\n                        (Err(_empty), Err(signals)) => Break(Err(signals)),\n                        (Ok(()), Ok(())) => Continue(Ok(())),\n                        (Ok(()), Err(_no_more)) => Break(Ok(())),\n                    },\n                
);\n            result\n        }\n\n        pub fn package(self, epoch: Epoch) -> CryptoStreamPackage {\n            CryptoStreamPackage {\n                first_load: epoch == Epoch::Initial,\n                outgoing: self,\n            }\n        }\n\n        /// Called when the sent crypto frame is acknowledged by the peer.\n        ///\n        /// Acknowledgment of data may free up a segment in the [`SendBuf`], thus waking up the\n        /// writing task.\n        pub fn on_data_acked(&self, crypto_frame: &CryptoFrame) {\n            self.0.lock().unwrap().on_data_acked(crypto_frame)\n        }\n\n        /// Called when the sent crypto frame may be lost.\n        pub fn may_loss_data(&self, crypto_frame: &CryptoFrame) {\n            self.0.lock().unwrap().may_loss_data(crypto_frame)\n        }\n    }\n\n    pub struct CryptoStreamPackage {\n        first_load: bool,\n        outgoing: CryptoStreamOutgoing,\n    }\n\n    impl<P> Package<P> for CryptoStreamPackage\n    where\n        P: BufMut + ?Sized,\n        for<'b> (CryptoFrame, &'b [Bytes]): Package<P>,\n    {\n        fn dump(&mut self, packet: &mut P) -> Result<PacketContent, Signals> {\n            let force = self.first_load;\n            match self.outgoing.try_load_data_into(packet, force) {\n                Ok(()) => {\n                    self.first_load = false;\n                    Ok(PacketContent::EffectivePayload)\n                }\n                Err(signals) => Err(signals),\n            }\n        }\n    }\n\n    pub(super) fn create(tx_wakers: ArcSendWakers) -> ArcSender {\n        Arc::new(Mutex::new(Sender {\n            sndbuf: SendBuf::with_capacity(VARINT_MAX),\n            writable_waker: None,\n            flush_waker: None,\n            tx_wakers,\n        }))\n    }\n}\n\nmod recv {\n    use std::{\n        io,\n        pin::Pin,\n        sync::{Arc, Mutex},\n        task::{Context, Poll, Waker},\n    };\n\n    use bytes::{BufMut, Bytes};\n    use qbase::{\n        error::Error,\n  
      frame::{CryptoFrame, io::ReceiveFrame},\n        varint::VARINT_MAX,\n    };\n    use tokio::io::{AsyncRead, ReadBuf};\n\n    use crate::recv::RecvBuf;\n\n    #[derive(Debug)]\n    pub(super) struct Recver {\n        rcvbuf: RecvBuf,\n        read_waker: Option<Waker>,\n    }\n\n    impl Recver {\n        fn recv(&mut self, offset: u64, data: Bytes) {\n            assert!(offset + data.len() as u64 <= VARINT_MAX);\n            self.rcvbuf.recv(offset, data);\n            if self.rcvbuf.is_readable()\n                && let Some(waker) = self.read_waker.take()\n            {\n                waker.wake()\n            }\n        }\n\n        fn poll_read<T: BufMut>(\n            &mut self,\n            cx: &mut Context<'_>,\n            buf: &mut T,\n        ) -> Poll<io::Result<()>> {\n            assert!(\n                self.read_waker.is_none()\n                    || matches!(self.read_waker, Some(ref waker) if waker.will_wake(cx.waker()))\n            );\n            if self.rcvbuf.is_readable() {\n                self.rcvbuf.try_read(buf);\n                Poll::Ready(Ok(()))\n            } else {\n                self.read_waker = Some(cx.waker().clone());\n                Poll::Pending\n            }\n        }\n    }\n\n    pub(super) type ArcRecver = Arc<Mutex<Recver>>;\n\n    /// Struct for the crypto layer to read crypto data from the peer.\n    #[derive(Debug, Clone)]\n    pub struct CryptoStreamReader(pub(super) ArcRecver);\n    /// Struct for the transport layer to deliver received crypto data to the crypto layer.\n    #[derive(Debug, Clone)]\n    pub struct CryptoStreamIncoming(pub(super) ArcRecver);\n\n    impl AsyncRead for CryptoStreamReader {\n        fn poll_read(\n            self: Pin<&mut Self>,\n            cx: &mut Context<'_>,\n            buf: &mut ReadBuf<'_>,\n        ) -> Poll<io::Result<()>> {\n            self.0.lock().unwrap().poll_read(cx, buf)\n        }\n    }\n\n    impl ReceiveFrame<(CryptoFrame, Bytes)> for CryptoStreamIncoming {\n  
      type Output = ();\n\n        fn recv_frame(&self, (frame, data): (CryptoFrame, Bytes)) -> Result<Self::Output, Error> {\n            self.0.lock().unwrap().recv(frame.offset(), data);\n            Ok(())\n        }\n    }\n\n    pub(super) fn create() -> ArcRecver {\n        Arc::new(Mutex::new(Recver {\n            rcvbuf: RecvBuf::default(),\n            read_waker: None,\n        }))\n    }\n}\n\nuse qbase::net::tx::ArcSendWakers;\npub use recv::{CryptoStreamIncoming, CryptoStreamReader};\npub use send::{CryptoStreamOutgoing, CryptoStreamWriter};\n\n/// Crypto data stream.\n#[derive(Debug, Clone)]\npub struct CryptoStream {\n    sender: send::ArcSender,\n    recver: recv::ArcRecver,\n}\n\nimpl CryptoStream {\n    /// Create a new instance of [`CryptoStream`].\n    pub fn new(tx_wakers: ArcSendWakers) -> Self {\n        Self {\n            sender: send::create(tx_wakers),\n            recver: recv::create(),\n        }\n    }\n\n    /// Create a [`CryptoStreamWriter`] that belongs to this crypto stream.\n    pub fn writer(&self) -> CryptoStreamWriter {\n        CryptoStreamWriter(self.sender.clone())\n    }\n\n    /// Create a [`CryptoStreamReader`] that belongs to this crypto stream.\n    pub fn reader(&self) -> CryptoStreamReader {\n        CryptoStreamReader(self.recver.clone())\n    }\n\n    /// Create a [`CryptoStreamOutgoing`] that belongs to this crypto stream.\n    pub fn outgoing(&self) -> CryptoStreamOutgoing {\n        CryptoStreamOutgoing(self.sender.clone())\n    }\n\n    /// Create a [`CryptoStreamIncoming`] that belongs to this crypto stream.\n    pub fn incoming(&self) -> CryptoStreamIncoming {\n        CryptoStreamIncoming(self.recver.clone())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use qbase::{\n        frame::{CryptoFrame, io::ReceiveFrame},\n        varint::VarInt,\n    };\n    use tokio::io::{AsyncReadExt, AsyncWriteExt};\n\n    use super::CryptoStream;\n\n    #[tokio::test]\n    async fn test_read() {\n   
     let crypto_stream: CryptoStream = CryptoStream::new(Default::default());\n        crypto_stream\n            .writer()\n            .write_all(b\"hello world\")\n            .await\n            .unwrap();\n\n        crypto_stream\n            .incoming()\n            .recv_frame((\n                CryptoFrame::new(VarInt::from_u32(0), VarInt::from_u32(11)),\n                bytes::Bytes::copy_from_slice(b\"hello world\"),\n            ))\n            .unwrap();\n        let mut buf = [0u8; 11];\n        crypto_stream.reader().read_exact(&mut buf).await.unwrap();\n        assert_eq!(&buf[..], b\"hello world\");\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/journal/rcvd.rs",
    "content": "use std::{\n    collections::HashSet,\n    sync::{Arc, RwLock},\n};\n\nuse bytes::BufMut;\nuse qbase::{\n    frame::AckFrame,\n    net::tx::Signals,\n    packet::{InvalidPacketNumber, Package, PacketContent, PacketNumber, PacketWriter},\n    util::{IndexDeque, IndexError},\n    varint::{VARINT_MAX, VarInt},\n};\nuse tokio::time::{Duration, Instant};\n\n/// 收包记录有以下几种状态\n/// - Empty：收包记录为空，未收到该包\n/// - PacketReceived：（收包时间，最晚ack时间，过期时间）, 如果路径没有驱动 ack，由这里驱动\n/// - AckSent：（ack_eliciting，收包时间,淘汰时间，确认了这个包的包号集合），如果set里的任意包号被确认了，则转换成 AckConfirmed 状态\n/// - AckConfirmed：（ack_eliciting，收包时间，淘汰时间）\n#[derive(Debug, Clone, PartialEq, Eq, Default)]\nenum State {\n    #[default]\n    Empty,\n    PacketReceived(Instant, Option<Instant>, Instant),\n    AckSent(bool, Instant, Instant, HashSet<u64>),\n    AckConfirmed(bool, Instant, Instant),\n}\n\nimpl State {\n    // 是否要打包到 ack frame 中，如果需要，PacketReceived 状态转换成 AckSent 状态， AckSent 状态记录 pn\n    fn track_packet_in_ack_frame(&mut self, pn: u64) -> bool {\n        match self {\n            State::PacketReceived(recv_time, latest_ack_time, expire_time) => {\n                *self = State::AckSent(\n                    latest_ack_time.is_some(),\n                    *recv_time,\n                    *expire_time,\n                    [pn].into(),\n                );\n                true\n            }\n            State::AckSent(_, _, _, pns) => {\n                pns.insert(pn);\n                true\n            }\n            State::AckConfirmed(_, _, _) => true,\n            State::Empty => false,\n        }\n    }\n\n    fn could_expire(&self, now: Instant) -> bool {\n        match self {\n            State::Empty => true,\n            State::AckConfirmed(ack_eliciting, _, expire_time) => {\n                !ack_eliciting || *expire_time < now\n            }\n            _ => false,\n        }\n    }\n}\n\n/// 纯碎的一个收包记录，主要用于：\n/// - 记录包有无收到\n/// - 根据某个largest pktno，生成ack frame（ack frame不能超过buf大小）\n/// - 
确定记录不再需要，可以被丢弃，滑走\n#[derive(Debug, Default)]\nstruct RcvdJournal {\n    queue: IndexDeque<State, VARINT_MAX>,\n    max_ack_delay: Option<Duration>,\n    packet_include_ack: HashSet<u64>,\n    earliest_not_ack_time: Option<(u64, Instant)>,\n}\n\nimpl RcvdJournal {\n    fn with_capacity(capacity: usize, max_ack_delay: Option<Duration>) -> Self {\n        Self {\n            queue: IndexDeque::with_capacity(capacity),\n            max_ack_delay,\n            packet_include_ack: HashSet::new(),\n            earliest_not_ack_time: None,\n        }\n    }\n\n    fn decode_pn(&mut self, pkt_number: PacketNumber) -> Result<u64, InvalidPacketNumber> {\n        let expected_pn = self.queue.largest();\n        let pn = pkt_number.decode(expected_pn);\n        if pn < self.queue.offset() {\n            return Err(InvalidPacketNumber::TooOld);\n        }\n\n        match self.queue.get(pn) {\n            Some(State::Empty) | None => Ok(pn),\n            _ => Err(InvalidPacketNumber::Duplicate),\n        }\n    }\n\n    fn on_rcvd_pn(&mut self, pn: u64, is_ack_eliciting: bool, pto: Duration) {\n        let now = tokio::time::Instant::now();\n        let ack_time = if is_ack_eliciting {\n            Some(now + self.max_ack_delay.unwrap_or_default())\n        } else {\n            None\n        };\n        let expire_time = now + pto * 3;\n        if let Some(record) = self.queue.get_mut(pn) {\n            // assert!(matches!(record, State::Empty));\n            *record = State::PacketReceived(now, ack_time, expire_time);\n        } else if let Err(e @ IndexError::ExceedLimit(..)) = self\n            .queue\n            .insert(pn, State::PacketReceived(now, ack_time, expire_time))\n        {\n            panic!(\"packet number never exceed limit: {e}\")\n        }\n        if is_ack_eliciting && self.earliest_not_ack_time.is_none() {\n            self.earliest_not_ack_time = Some((pn, now));\n        }\n    }\n\n    fn on_rcvd_ack(&mut self, ack_frame: &AckFrame) {\n        let 
acked_pns: std::collections::HashSet<_> = ack_frame\n            .iter()\n            .flat_map(|range| range.clone())\n            .filter(|pn| self.packet_include_ack.contains(pn))\n            .collect();\n\n        self.packet_include_ack.retain(|pn| !acked_pns.contains(pn));\n\n        for record in self.queue.iter_mut() {\n            if let State::AckSent(ack_eliciting, recv_time, expire_time, pns) = record\n                && pns.iter().any(|pn| acked_pns.contains(pn))\n            {\n                *record = State::AckConfirmed(*ack_eliciting, *recv_time, *expire_time);\n            }\n        }\n        self.rotate_queue();\n    }\n\n    fn rotate_queue(&mut self) {\n        let now = tokio::time::Instant::now();\n        while self\n            .queue\n            .front()\n            .is_some_and(|(_pn, state)| state.could_expire(now))\n        {\n            self.queue.pop_front();\n        }\n    }\n\n    fn gen_ack_frame_util(\n        &mut self,\n        pn: u64,\n        largest: u64,\n        rcvd_time: Instant,\n        mut capacity: usize,\n    ) -> Result<AckFrame, Signals> {\n        let mut pkts = self\n            .queue\n            .enumerate_mut()\n            .rev()\n            .skip_while(|(pktno, _)| *pktno > largest);\n\n        // Minimum length with at least ACK frame type, largest, delay, range count, first_range (at least 1 byte for 0)\n        let largest = VarInt::from_u64(largest).unwrap();\n        let delay = rcvd_time.elapsed().as_micros() as u64;\n        let delay = VarInt::from_u64(delay).unwrap();\n        let mut first_range = 0_u32;\n        for (_, s) in pkts.by_ref() {\n            if s.track_packet_in_ack_frame(pn) {\n                first_range += 1;\n            } else {\n                break;\n            }\n        }\n        first_range = first_range.saturating_sub(1);\n\n        let first_range = VarInt::from(first_range);\n        // Frame type + Largest Acknowledged + First Ack Range + Ack Range Count\n  
      let min_len =\n            1 + largest.encoding_size() + delay.encoding_size() + first_range.encoding_size() + 1;\n        if capacity < min_len {\n            return Err(Signals::CONGESTION);\n        }\n        capacity -= min_len;\n\n        fn range_count_size_increment(range_count: usize) -> usize {\n            match range_count {\n                // the next value needs 2-byte encoding\n                len if len == (1 << 6) - 1 => 1, // 2 - 1\n                // the next value needs 4-byte encoding\n                len if len == (1 << 14) - 1 => 2, // 4 - 2\n                // the next value needs 8-byte encoding\n                len if len == (1 << 30) - 1 => 4, // 8 - 4\n                // no more room; this is unreachable\n                _ => 0,\n            }\n        }\n\n        let mut ranges = vec![];\n\n        use core::ops::ControlFlow::*;\n        let (Continue((gap, ack, last_is_acked)) | Break((gap, ack, last_is_acked))) = pkts\n            .try_fold(\n                // take_while consumes the first rejected element; if it was a gap, gap=1 here; if the iterator was simply exhausted, gap=1 does no harm either\n                (1, 0, false),\n                |(gap, ack, last_is_acked), (_pktno, state)| {\n                    let range_count = ranges.len();\n                    match (last_is_acked, state.track_packet_in_ack_frame(pn)) {\n                        // this range is over: check whether it fits, then start a new range\n                        (true, false) => {\n                            // correction: the encoded values are off by one\n                            let gap = VarInt::from_u32(gap - 1);\n                            let ack = VarInt::from_u32(ack - 1);\n                            let size = range_count_size_increment(range_count)\n                                + gap.encoding_size()\n                                + ack.encoding_size();\n                            if capacity < size {\n                                // last_is_acked is false, so this range will not be pushed\n                                return Break((0, 0, false));\n                            }\n                            capacity -= size;\n                            ranges.push((gap, ack));\n              
              Continue((1, 0, state.track_packet_in_ack_frame(pn)))\n                        }\n                        // the current packet is acked: increase ack, keep gap unchanged\n                        (false | true, true) => {\n                            Continue((gap, ack + 1, state.track_packet_in_ack_frame(pn)))\n                        }\n                        // both the current and the previous are gaps: increase gap\n                        (false, false) => {\n                            Continue((gap + 1, ack, state.track_packet_in_ack_frame(pn)))\n                        }\n                    }\n                },\n            );\n        // handle the last, unfinished range\n        if last_is_acked {\n            let gap = VarInt::from_u32(gap - 1);\n            let ack = VarInt::from_u32(ack - 1);\n            let size = range_count_size_increment(ranges.len())\n                + gap.encoding_size()\n                + ack.encoding_size();\n            if capacity > size {\n                // capacity -= size; unnecessary, never read later\n                ranges.push((gap, ack));\n            }\n        }\n        self.packet_include_ack.insert(pn);\n        if let Some((pn, _)) = self.earliest_not_ack_time\n            && largest >= pn\n        {\n            self.earliest_not_ack_time = None;\n        }\n        Ok(AckFrame::new(largest, delay, first_range, ranges, None))\n    }\n\n    fn need_ack(&self) -> Option<(u64, Instant)> {\n        let now = tokio::time::Instant::now();\n        let (_, earliest_not_ack_time) = self.earliest_not_ack_time?;\n        let max_ack_delay = self.max_ack_delay.unwrap_or_default();\n        if earliest_not_ack_time + max_ack_delay >= now {\n            return None;\n        }\n        let (largest, state) = self.queue.back()?;\n        let recv_time = match state {\n            State::PacketReceived(rt, _, _)\n            | State::AckSent(_, rt, _, _)\n            | State::AckConfirmed(_, rt, _) => *rt,\n            _ => return None,\n        };\n\n        Some((largest, recv_time))\n    
}\n}\n\n/// Records of received packets; decodes packet numbers and generates ack frames.\n// The received-packet queue is shared everywhere; checking whether a packet was received and generating an ack frame only need the read lock;\n// recording a newly received packet, or retiring old packets and sliding them away, needs the write lock.\n#[derive(Debug, Clone, Default)]\npub struct ArcRcvdJournal {\n    inner: Arc<RwLock<RcvdJournal>>,\n}\n\nimpl ArcRcvdJournal {\n    /// Create new empty records with the given `capacity`.\n    ///\n    /// The number of records can exceed the `capacity` specified at creation time, but the internal\n    /// implementation strives to avoid reallocation.\n    pub fn with_capacity(capacity: usize, max_ack_delay: Option<Duration>) -> Self {\n        Self {\n            inner: Arc::new(RwLock::new(RcvdJournal::with_capacity(\n                capacity,\n                max_ack_delay,\n            ))),\n        }\n    }\n\n    /// Decode the pn from the peer's packet into the actual packet number.\n    ///\n    /// See [`RFC`](https://www.rfc-editor.org/rfc/rfc9000.html#name-sample-packet-number-decodi)\n    /// for more details about decoding the packet number.\n    ///\n    /// If the packet is too old or has been received, or the pn is too big, this method will return\n    /// an error.\n    ///\n    /// Note that even if the packet number is successfully decoded, it does not mean that the\n    /// packet, or the frames in it, are valid.\n    ///\n    /// The registered packet must be valid, successfully decrypted, and the frames in it must be\n    /// valid.\n    // When a newly received packet is very old, it is most likely a duplicate and is dropped directly.\n    // If its packet number is the largest, the gaps before it are packets not yet received and must be recorded as such.\n    // Note that a valid packet number does not make the packet contents valid; reception is confirmed only\n    // after the packet is correctly decrypted and the frames in it are correctly parsed.\n    pub fn decode_pn(&self, encoded_pn: PacketNumber) -> Result<u64, InvalidPacketNumber> {\n        self.inner.write().unwrap().decode_pn(encoded_pn)\n    }\n\n    /// Register that the packet has been received.\n    ///\n    /// The registered packet must be valid, successfully decrypted, and the frames in it must be\n    /// valid.\n    // Record the packet as received once the packet number is valid, the packet is fully decrypted, and all frames in it are correct.\n    pub fn on_rcvd_pn(&self, pn: u64, 
is_ack_eliciting: bool, pto: Duration) {\n        self.inner\n            .write()\n            .unwrap()\n            .on_rcvd_pn(pn, is_ack_eliciting, pto);\n    }\n\n    /// Generate an ack frame which acks the received packets up to `largest`.\n    ///\n    /// The `Ack Delay` field of the generated frame is derived from `rcvd_time` in microseconds,\n    /// the `Largest Acknowledged` field is `largest`, and the ranges in the ack frame will not\n    /// exceed `largest`.\n    pub fn gen_ack_frame_util(\n        &self,\n        pn: u64,\n        largest: u64,\n        rcvd_time: Instant,\n        capacity: usize,\n    ) -> Result<AckFrame, Signals> {\n        self.inner\n            .write()\n            .unwrap()\n            .gen_ack_frame_util(pn, largest, rcvd_time, capacity)\n    }\n\n    pub fn on_rcvd_ack(&self, ack_frame: &AckFrame) {\n        self.inner.write().unwrap().on_rcvd_ack(ack_frame);\n    }\n\n    pub fn need_ack(&self) -> Option<(u64, Instant)> {\n        self.inner.read().unwrap().need_ack()\n    }\n\n    pub fn revise_max_ack_delay(&self, max_ack_delay: Duration) {\n        self.inner.write().unwrap().max_ack_delay = Some(max_ack_delay);\n    }\n\n    pub fn ack_package<'r>(&'r self, need_ack: Option<(u64, Instant)>) -> AckPackege<'r> {\n        AckPackege {\n            journal: self,\n            need_ack,\n        }\n    }\n}\n\npub struct AckPackege<'r> {\n    journal: &'r ArcRcvdJournal,\n    need_ack: Option<(u64, Instant)>,\n}\n\nimpl<'r, Target> Package<Target> for AckPackege<'r>\nwhere\n    Target: AsRef<PacketWriter<'r>> + ?Sized,\n    AckFrame: Package<Target>,\n{\n    fn dump(&mut self, target: &mut Target) -> Result<PacketContent, Signals> {\n        self.need_ack\n            .or_else(|| self.journal.need_ack())\n            .ok_or(Signals::TRANSPORT)\n            .and_then(|(largest_ack, rcvd_time)| {\n                self.journal.gen_ack_frame_util(\n                    
target.as_ref().packet_number(),\n                    largest_ack,\n                    rcvd_time,\n                    target.as_ref().remaining_mut(),\n                )\n            })?\n            .dump(target)\n            .unwrap();\n        Ok(PacketContent::NonAckEliciting)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_rcvd_pkt_records() {\n        let records = ArcRcvdJournal::with_capacity(16, None);\n        assert_eq!(records.decode_pn(PacketNumber::encode(1, 0)), Ok(1));\n        assert_eq!(records.inner.read().unwrap().queue.len(), 0);\n\n        let pto = Duration::from_millis(100);\n        records.on_rcvd_pn(1, true, pto);\n\n        assert_eq!(records.inner.read().unwrap().queue.len(), 2);\n        assert_eq!(\n            records.inner.read().unwrap().queue.get(0).unwrap(),\n            &State::Empty\n        );\n\n        assert!(matches!(\n            records.inner.read().unwrap().queue.get(1).unwrap(),\n            State::PacketReceived(_, _, _)\n        ));\n\n        let ack_frame = records.gen_ack_frame_util(0, 1, Instant::now(), 1200);\n\n        assert_eq!(&ack_frame.unwrap().largest(), &1);\n        assert!(\n            records\n                .inner\n                .read()\n                .unwrap()\n                .packet_include_ack\n                .contains(&0)\n        );\n\n        assert!(matches!(\n            records.inner.read().unwrap().queue.get(1).unwrap(),\n            State::AckSent(true, _, _, _)\n        ));\n\n        let ack_frame = AckFrame::new(0_u32.into(), 100_u32.into(), 0_u32.into(), vec![], None);\n\n        records.on_rcvd_ack(&ack_frame);\n\n        assert_eq!(records.inner.read().unwrap().queue.len(), 1);\n        let binding = records.inner.read().unwrap();\n        let record = binding.queue.get(1).unwrap();\n        assert!(matches!(record, State::AckConfirmed(_, _, _)));\n    }\n\n    #[test]\n    fn gen_ack_frame() {\n        let rcvd_state = 
State::PacketReceived(Instant::now(), None, Instant::now());\n        let unrcvd_state = State::Empty;\n        let mut queue = IndexDeque::with_capacity(45);\n        for idx in 1..11 {\n            queue.insert(idx, rcvd_state.clone()).unwrap();\n        }\n        for idx in 11..12 {\n            queue.insert(idx, unrcvd_state.clone()).unwrap();\n        }\n        for idx in 12..45 {\n            queue.insert(idx, rcvd_state.clone()).unwrap();\n        }\n        for idx in 45..50 {\n            queue.insert(idx, unrcvd_state.clone()).unwrap();\n        }\n        for idx in 50..55 {\n            queue.insert(idx, rcvd_state.clone()).unwrap();\n        }\n\n        let mut rcvd_jornal = RcvdJournal {\n            queue,\n            max_ack_delay: None,\n            packet_include_ack: Default::default(),\n            earliest_not_ack_time: None,\n        };\n\n        let ack = rcvd_jornal\n            .gen_ack_frame_util(0, 52, Instant::now(), 1000)\n            .unwrap();\n        assert_eq!(\n            ack.ranges(),\n            &vec![\n                (VarInt::from_u32(50 - 45 - 1), VarInt::from_u32(45 - 12 - 1)),\n                (VarInt::from_u32(12 - 11 - 1), VarInt::from_u32(11 - 1 - 1))\n            ]\n        );\n        assert_eq!(ack.first_range(), 2)\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/journal/sent.rs",
    "content": "use std::{\n    collections::VecDeque,\n    ops::DerefMut,\n    sync::{Arc, Mutex, MutexGuard},\n    time::Duration,\n};\n\nuse derive_more::{Deref, DerefMut};\nuse qbase::{\n    error::{ErrorKind, QuicError},\n    frame::{AckFrame, GetFrameType},\n    packet::PacketNumber,\n    util::IndexDeque,\n    varint::VARINT_MAX,\n};\nuse tokio::time::Instant;\n\n/// 记录发送的数据包的状态，包括\n/// - Flighting: 数据包正在传输中\n/// - Acked: 数据包已经被确认\n/// - Lost: 数据包丢失\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum SentPktState {\n    Skipped,\n    Flighting {\n        nframes: usize,\n        sent_time: Instant,\n        expire_time: Instant,\n        retran_time: Instant,\n    },\n    Retransmitted {\n        nframes: usize,\n        sent_time: Instant,\n        expire_time: Instant,\n    },\n    Acked {\n        nframes: usize,\n        sent_time: Instant,\n        expire_time: Instant,\n    },\n}\n\nimpl SentPktState {\n    #[allow(dead_code)]\n    fn skipped() -> Self {\n        Self::Skipped\n    }\n\n    fn new(nframes: usize, sent_time: Instant, retran_time: Instant, expire_time: Instant) -> Self {\n        Self::Flighting {\n            nframes,\n            sent_time,\n            retran_time,\n            expire_time,\n        }\n    }\n\n    fn nframes(&self) -> usize {\n        match self {\n            SentPktState::Skipped => 0,\n            SentPktState::Flighting { nframes, .. } => *nframes,\n            SentPktState::Retransmitted { nframes, .. } => *nframes,\n            SentPktState::Acked { nframes, .. 
} => *nframes,\n        }\n    }\n\n    fn be_acked(&mut self) -> usize {\n        match *self {\n            SentPktState::Skipped => 0,\n            SentPktState::Flighting {\n                nframes,\n                sent_time,\n                expire_time,\n                ..\n            } => {\n                *self = SentPktState::Acked {\n                    nframes,\n                    sent_time,\n                    expire_time,\n                };\n                nframes\n            }\n            SentPktState::Retransmitted {\n                nframes,\n                sent_time,\n                expire_time,\n                ..\n            } => {\n                *self = SentPktState::Acked {\n                    nframes,\n                    sent_time,\n                    expire_time,\n                };\n                nframes\n            }\n            SentPktState::Acked { .. } => 0,\n        }\n    }\n\n    fn maybe_lost(&mut self) -> usize {\n        match *self {\n            SentPktState::Flighting {\n                nframes,\n                sent_time,\n                expire_time,\n                ..\n            } => {\n                *self = SentPktState::Retransmitted {\n                    nframes,\n                    sent_time,\n                    expire_time,\n                };\n                nframes\n            }\n            Self::Retransmitted { nframes, .. 
} => nframes,\n            _ => 0,\n        }\n    }\n\n    fn should_retransmit_after(&mut self, now: &Instant) -> bool {\n        match *self {\n            SentPktState::Flighting {\n                sent_time,\n                retran_time,\n                expire_time,\n                ..\n            } if retran_time < *now => {\n                *self = SentPktState::Retransmitted {\n                    nframes: self.nframes(),\n                    sent_time,\n                    expire_time,\n                };\n                true\n            }\n            _ => false,\n        }\n    }\n\n    fn should_remain_after(&self, pn: u64, now: &Instant) -> bool {\n        match self {\n            SentPktState::Skipped => false,\n            SentPktState::Flighting { .. } => true,\n            SentPktState::Retransmitted { expire_time, .. } => {\n                if expire_time > now {\n                    true\n                } else {\n                    tracing::trace!(target: \"quic\", \"retransmitted packet {pn} is expired without ack\");\n                    false\n                }\n            }\n            SentPktState::Acked { .. 
} => false,\n        }\n    }\n}\n\n/// Records the frames that have been sent, doing its best to avoid memory allocation.\n/// `queue` records all the frames ever sent; `sent_packets` records how many frames each\n/// sequentially sent packet contains, along with the state of those packets.\n/// When a packet is sent, its frames are written in; when an acknowledgment arrives,\n/// the packet state is updated: acknowledged packets need nothing further, while lost\n/// packets must be resent.\n#[derive(Debug, Default, Deref, DerefMut)]\nstruct SentJournal<T> {\n    #[deref]\n    #[deref_mut]\n    queue: VecDeque<T>,\n    // Records each packet's contents as a mere count: the number of records in `queue` belonging to it.\n    sent_packets: IndexDeque<SentPktState, VARINT_MAX>,\n    largest_acked_pktno: u64,\n}\n\nimpl<T: Clone> SentJournal<T> {\n    fn on_packet_acked(&mut self, pn: u64) -> impl Iterator<Item = T> + '_ {\n        let mut len = 0;\n        let offset = self\n            .sent_packets\n            .enumerate()\n            .take_while(|(pkt_idx, _)| *pkt_idx < pn)\n            .map(|(_, s)| s.nframes())\n            .sum::<usize>();\n        if let Some(s) = self.sent_packets.get_mut(pn) {\n            len = s.be_acked();\n        }\n        self.queue\n            .range_mut(offset..offset + len)\n            .map(|f| f.clone())\n    }\n\n    fn may_loss_packet(&mut self, pn: u64) -> impl Iterator<Item = T> + '_ {\n        let mut len = 0;\n        let offset = self\n            .sent_packets\n            .enumerate()\n            // TODO(optimize): the caller loops over packets, and each call runs take_while from the start; this can be optimized\n            .take_while(|(pkt_idx, _)| *pkt_idx < pn)\n            .map(|(_, s)| s.nframes())\n            .sum::<usize>();\n        if let Some(s) = self.sent_packets.get_mut(pn) {\n            len = s.maybe_lost();\n        }\n        self.queue\n            .range_mut(offset..offset + len)\n            .map(|f| f.clone())\n    }\n\n    fn fast_retransmit(&mut self) -> impl Iterator<Item = T> + '_ {\n        self.resize();\n\n        let now = tokio::time::Instant::now();\n        self.sent_packets\n            .enumerate_mut()\n            .take_while(|(pn, _)| *pn < self.largest_acked_pktno)\n            .scan(0, move |sum, (_, s)| {\n                let start = *sum;\n                *sum += s.nframes();\n                
Some((s.should_retransmit_after(&now), start..*sum))\n            })\n            .filter(|(should_retran, _)| *should_retran)\n            .flat_map(|(_, r)| self.queue.range(r))\n            .cloned()\n    }\n}\n\nimpl<T> SentJournal<T> {\n    fn with_capacity(capacity: usize) -> Self {\n        Self {\n            queue: VecDeque::with_capacity(capacity * 4),\n            sent_packets: IndexDeque::with_capacity(capacity),\n            largest_acked_pktno: 0,\n        }\n    }\n\n    fn resize(&mut self) {\n        let now = Instant::now();\n        let (n, f) = self\n            .sent_packets\n            .enumerate()\n            .take_while(|(pn, s)| !s.should_remain_after(*pn, &now))\n            .fold((0usize, 0usize), |(n, f), (_, s)| (n + 1, f + s.nframes()));\n        self.sent_packets.advance(n);\n        _ = self.queue.drain(..f);\n    }\n}\n\n/// Records for sent packets and frames in them.\n///\n/// [`DataStreams`] need to be aware of frame acknowledgment or possible loss, and so does [`CryptoStream`].\n/// This structure records some frames (type T) in each packet sent, and feeds back the frames in\n/// these packets to [`DataStreams`] and [`CryptoStream`] when the packet is acknowledged or may be\n/// lost.\n///\n/// The interfaces are on the [`NewPacketGuard`] and [`SentRotateGuard`] structures; read their\n/// documentation for more. 
This structure only provides the methods to create them.\n///\n/// If multiple tasks are recording at the same time, the recording will become confusing, so the\n/// [`NewPacketGuard`] and the [`SentRotateGuard`] are designed to be `Guard`s, which means that they hold a\n/// [`MutexGuard`].\n///\n///\n/// [`DataStreams`]: crate::streams::DataStreams\n/// [`CryptoStream`]: crate::crypto::CryptoStream\n#[derive(Debug, Default)]\npub struct ArcSentJournal<T>(Arc<Mutex<SentJournal<T>>>);\n\nimpl<T> Clone for ArcSentJournal<T> {\n    fn clone(&self) -> Self {\n        Self(self.0.clone())\n    }\n}\n\nimpl<T> ArcSentJournal<T> {\n    /// Create a new empty journal with the given `capacity`.\n    ///\n    /// The number of records can exceed the `capacity` specified at creation time, but the internal\n    /// implementation strives to avoid reallocation.\n    pub fn with_capacity(capacity: usize) -> Self {\n        Self(Arc::new(Mutex::new(SentJournal::with_capacity(capacity))))\n    }\n\n    /// Return a [`SentRotateGuard`] to resolve the ack frame from peer.\n    pub fn rotate(&self) -> SentRotateGuard<'_, T> {\n        SentRotateGuard {\n            inner: self.0.lock().unwrap(),\n        }\n    }\n\n    /// Return a [`NewPacketGuard`] to get the next pn and record frames in the packet.\n    pub fn new_packet(&self) -> NewPacketGuard<'_, T> {\n        let inner = self.0.lock().unwrap();\n        let origin_len = inner.queue.len();\n        NewPacketGuard {\n            trivial: false,\n            origin_len,\n            inner,\n        }\n    }\n}\n\n/// Handle the peer's ack frame and feed back the frames in the acknowledged or possibly lost packets to other components.\npub struct SentRotateGuard<'a, T> {\n    inner: MutexGuard<'a, SentJournal<T>>,\n}\n\nimpl<T: Clone> SentRotateGuard<'_, T> {\n    /// Handle the [`Largest Acknowledged`] field of the ack frame from peer.\n    ///\n    /// [`Largest Acknowledged`]: 
https://www.rfc-editor.org/rfc/rfc9000.html#name-ack-frames\n    pub fn update_largest(&mut self, ack_frame: &AckFrame) -> Result<(), QuicError> {\n        if ack_frame.largest() > self.inner.sent_packets.largest() {\n            return Err(QuicError::new(\n                ErrorKind::ProtocolViolation,\n                ack_frame.frame_type().into(),\n                \"ack frame largest pn is larger than the largest pn sent\",\n            ));\n        }\n        if ack_frame.largest() > self.inner.largest_acked_pktno {\n            self.inner.largest_acked_pktno = ack_frame.largest();\n        }\n        Ok(())\n    }\n\n    /// Called when the packet sent is acked by peer, return the frames in that packet.\n    pub fn on_packet_acked(&mut self, pn: u64) -> impl Iterator<Item = T> + '_ {\n        self.inner.on_packet_acked(pn)\n    }\n\n    /// Called when the packet sent may be lost, return the frames in that packet.\n    pub fn may_loss_packet(&mut self, pn: u64) -> impl Iterator<Item = T> + '_ {\n        self.inner.may_loss_packet(pn)\n    }\n\n    pub fn fast_retransmit(&mut self) -> impl Iterator<Item = T> + '_ {\n        self.inner.fast_retransmit()\n    }\n}\n\nimpl<T> Drop for SentRotateGuard<'_, T> {\n    fn drop(&mut self) {\n        self.inner.resize();\n    }\n}\n\n/// Provide the [encoded] packet number to assemble a packet, and record the frames in the packet\n/// which will be sent.\n///\n/// One [`NewPacketGuard`] corresponds to a packet.\n///\n/// Even if the next packet number is obtained, the packet may not be sent out. 
If the packet is not\n/// sent out, the packet number will not be consumed.\n///\n/// Calling [`NewPacketGuard::record_trivial`] or [`NewPacketGuard::record_frame`] means that the packet\n/// corresponding to this [`NewPacketGuard`] will be sent, and the packet number will be consumed when the\n/// [`NewPacketGuard`] is dropped.\n///\n/// [encoded]: https://www.rfc-editor.org/rfc/rfc9000.html#name-sample-packet-number-encodi\n#[derive(Debug)]\npub struct NewPacketGuard<'a, T> {\n    trivial: bool,\n    origin_len: usize,\n    inner: MutexGuard<'a, SentJournal<T>>,\n}\n\nimpl<T> NewPacketGuard<'_, T> {\n    /// Provide a packet number and its [encoded] form to assemble a packet.\n    ///\n    /// Calling this method multiple times on the same [`NewPacketGuard`] will result in the same pn.\n    ///\n    /// [encoded]: https://www.rfc-editor.org/rfc/rfc9000.html#name-sample-packet-number-encodi\n    pub fn pn(&self) -> (u64, PacketNumber) {\n        let pn = self.inner.sent_packets.largest();\n        let encoded_pn = PacketNumber::encode(pn, self.inner.largest_acked_pktno);\n        (pn, encoded_pn)\n    }\n\n    /// Records trivial frames that do not need retransmission, such as Padding, Ping, and Ack.\n    /// However, this packet does occupy a packet number. 
Even if no other reliable frames are sent,\n    /// it still needs to be recorded, with the number of reliable frames in this packet being 0.\n    pub fn record_trivial(&mut self) {\n        self.trivial = true;\n    }\n\n    /// Records a frame in the packet being sent.\n    ///\n    /// Once this method or [`NewPacketGuard::record_trivial`] is called, the packet number will be consumed.\n    ///\n    /// When the packet is acked, or may be lost, the frames in the packet will be fed back to the\n    /// components which sent them.\n    pub fn record_frame(&mut self, frame: T) {\n        self.inner.deref_mut().push_back(frame);\n    }\n\n    pub fn build_with_time(mut self, retran_timeout: Duration, expire_timeout: Duration) {\n        let nframes = self.inner.queue.len() - self.origin_len;\n        let sent_time = tokio::time::Instant::now();\n        if self.trivial && nframes == 0 {\n            self.inner\n                .sent_packets\n                .push_back(SentPktState::Skipped)\n                .expect(\"packet number never overflow\");\n        } else if nframes > 0 {\n            self.inner\n                .sent_packets\n                .push_back(SentPktState::new(\n                    nframes,\n                    sent_time,\n                    sent_time + retran_timeout,\n                    sent_time + expire_timeout,\n                ))\n                .expect(\"packet number never overflow\");\n        }\n    }\n\n    pub fn build_trivial(mut self) {\n        assert_eq!(self.inner.queue.len(), self.origin_len);\n        assert!(self.trivial);\n        self.inner\n            .sent_packets\n            .push_back(SentPktState::Skipped)\n            .expect(\"packet number never overflow\");\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/journal.rs",
    "content": "//! The space that reliably transmits frames.\nuse std::time::Duration;\n\nmod rcvd;\npub use rcvd::*;\nmod sent;\npub use sent::*;\n\n/// The bundle of sent packet records and received packet records.\n///\n/// The generic `T` is the generic on [`ArcSentJournal`].\n///\n/// See [`ArcSentJournal`] and [`ArcRcvdJournal`] for more.\n#[derive(Debug, Default, Clone)]\npub struct Journal<T> {\n    sent: ArcSentJournal<T>,\n    rcvd: ArcRcvdJournal,\n}\n\nimpl<T> Journal<T> {\n    /// Create a [`Journal`] containing records with the given `capacity`.\n    pub fn with_capacity(capacity: usize, max_ack_delay: Option<Duration>) -> Self {\n        Self {\n            sent: ArcSentJournal::with_capacity(capacity),\n            rcvd: ArcRcvdJournal::with_capacity(capacity, max_ack_delay),\n        }\n    }\n\n    /// Get the [`ArcSentJournal`] of the space.\n    pub fn of_sent_packets(&self) -> ArcSentJournal<T> {\n        self.sent.clone()\n    }\n\n    /// Get the [`ArcRcvdJournal`] of the space.\n    pub fn of_rcvd_packets(&self) -> ArcRcvdJournal {\n        self.rcvd.clone()\n    }\n}\n\nimpl<T> AsRef<ArcSentJournal<T>> for Journal<T> {\n    fn as_ref(&self) -> &ArcSentJournal<T> {\n        &self.sent\n    }\n}\n\nimpl<T> AsRef<ArcRcvdJournal> for Journal<T> {\n    fn as_ref(&self) -> &ArcRcvdJournal {\n        &self.rcvd\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/lib.rs",
    "content": "//! Crate to implement reliable transmission.\n//!\n//! The structures in this crate don't have the ability to send or receive frames directly, but they\n//! provide interfaces to generate frames and write them into buffers, handle received frames, and\n//! handle frame acknowledgment and loss. This is what [`Incoming`], [`Outgoing`], [`DataStreams`],\n//! [`CryptoStreamIncoming`], [`CryptoStreamOutgoing`] and [`CryptoStream`] do.\n//!\n//! The [`reliable`] module of this crate provides the records for sent and received packets, and a\n//! reliable frame queue to ensure that the frames in it will be sent to the peer and confirmed.\n//!\n//! The sent records can provide a packet number for the new packet (although the QUIC packet number\n//! is incremented, the packet number stored in the packet header is encoded).\n//!\n//! The sent records are also responsible for processing the ack frames sent by the peer. From the\n//! peer's ack frames, it can be known which packets have been confirmed, and then the frames in\n//! these packets are fed back to [`DataStreams`] and [`CryptoStream`] for processing.\n//!\n//! The loss of packets is determined by congestion control, and the sent records can feed back the\n//! frames in possibly lost packets to [`DataStreams`] and [`CryptoStream`].\n//!\n//! The received records are used to generate the ack frame, and to decode the packet number in\n//! received packets.\n//!\n//! [`Incoming`]: crate::recv::Incoming\n//! [`Outgoing`]: crate::send::Outgoing\n//! [`DataStreams`]: crate::streams::DataStreams\n//! [`CryptoStreamIncoming`]: crate::crypto::CryptoStreamIncoming\n//! [`CryptoStreamOutgoing`]: crate::crypto::CryptoStreamOutgoing\n//! [`CryptoStream`]: crate::crypto::CryptoStream\npub mod crypto;\npub mod journal;\npub mod recv;\npub mod reliable;\npub mod send;\npub mod streams;\n"
  },
  {
    "path": "qrecovery/src/recv/incoming.rs",
    "content": "use std::ops::DerefMut;\n\nuse bytes::Bytes;\nuse qbase::{\n    error::{Error, QuicError},\n    frame::{MaxStreamDataFrame, ResetStreamFrame, StopSendingFrame, StreamFrame, io::SendFrame},\n};\n\nuse super::recver::{ArcRecver, Recver};\n\n/// A struct for the protocol layer to manage the receiving part of a stream.\n#[derive(Debug, Clone)]\npub struct Incoming<TX>(ArcRecver<TX>);\n\nimpl<TX> Incoming<TX>\nwhere\n    TX: SendFrame<StopSendingFrame> + SendFrame<MaxStreamDataFrame> + Clone + Send + 'static,\n{\n    /// Receive a stream frame from peer.\n    ///\n    /// The stream frame will be handed over to the receive state machine.\n    ///\n    /// The data in a stream frame is just a fragment of the data on the stream. The data transmitted\n    /// by different stream frames may not be continuous. The data will be assembled by [`RecvBuf`] into\n    /// continuous data for the application layer to read through [`Reader`].\n    ///\n    /// [`RecvBuf`]: crate::recv::RecvBuf\n    /// [`Reader`]: crate::recv::Reader\n    pub fn recv_data(\n        &self,\n        stream_frame: StreamFrame,\n        body: Bytes,\n    ) -> Result<(bool, usize), QuicError> {\n        let mut recver = self.0.recver();\n        let inner = recver.deref_mut();\n        let mut is_into_rcvd = false;\n        let mut fresh_data = 0;\n        if let Ok(receiving_state) = inner {\n            match receiving_state {\n                Recver::Recv(r) => {\n                    if stream_frame.is_fin() {\n                        let mut size_known = r.determin_size(&stream_frame)?;\n                        fresh_data = size_known.recv(stream_frame, body)?;\n                        if size_known.is_all_rcvd() {\n                            is_into_rcvd = true;\n                            *receiving_state = Recver::DataRcvd(size_known.upgrade());\n                        } else {\n                            *receiving_state = Recver::SizeKnown(size_known);\n                        }\n   
                 } else {\n                        fresh_data = r.recv(stream_frame, body)?;\n                    }\n                }\n                Recver::SizeKnown(r) => {\n                    fresh_data = r.recv(stream_frame, body)?;\n                    if r.is_all_rcvd() {\n                        is_into_rcvd = true;\n                        *receiving_state = Recver::DataRcvd(r.upgrade());\n                    }\n                }\n                _ => {}\n            }\n        }\n        Ok((is_into_rcvd, fresh_data))\n    }\n\n    /// Receive a stream reset frame from peer.\n    ///\n    /// If not all data sent by the peer has been received, receiving a stream reset frame will cause\n    /// any read calls to return an error, and the received data will be discarded.\n    pub fn recv_reset(&self, reset_frame: ResetStreamFrame) -> Result<usize, QuicError> {\n        // TODO: ResetStream also carries error information, e.g. the HTTP/3 error code; check whether it can be used\n        let mut sync_fresh_data = 0;\n        let mut recver = self.0.recver();\n        let inner = recver.deref_mut();\n        if let Ok(receiving_state) = inner {\n            match receiving_state {\n                Recver::Recv(r) => {\n                    sync_fresh_data = r.recv_reset(&reset_frame)?;\n                    *receiving_state = Recver::ResetRcvd(reset_frame);\n                }\n                Recver::SizeKnown(r) => {\n                    r.recv_reset(&reset_frame)?;\n                    *receiving_state = Recver::ResetRcvd(reset_frame);\n                }\n                _ => unreachable!(),\n            }\n        }\n        Ok(sync_fresh_data)\n    }\n}\n\nimpl<TX> Incoming<TX> {\n    pub fn new(recver: ArcRecver<TX>) -> Self {\n        Self(recver)\n    }\n\n    /// Called when a connection error occurred.\n    ///\n    /// After a connection error occurs, trying to read data from [`Reader`] will result in an\n    /// error.\n    ///\n    /// [`Reader`]: crate::recv::Reader\n    pub fn on_conn_error(&self, err: &Error) {\n  
      let mut recver = self.0.recver();\n        let inner = recver.deref_mut();\n        match inner {\n            Ok(receiving_state) => match receiving_state {\n                Recver::Recv(r) => r.wake_reader(),\n                Recver::SizeKnown(r) => r.wake_reader(),\n                _ => return,\n            },\n            Err(_) => return,\n        };\n        *inner = Err(err.clone());\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/recv/rcvbuf.rs",
    "content": "//! An implementation of the receiving buffer for stream data.\n\nuse std::collections::VecDeque;\n\nuse bytes::{Buf, BufMut, Bytes};\n\n/// A contiguous fragment of data; each fragment is a `Bytes`.\n#[derive(Debug, Default)]\nstruct Segment {\n    offset: u64,\n    data: Bytes,\n}\n\nimpl Segment {\n    fn new_with_data(offset: u64, data: Bytes) -> Self {\n        Segment { offset, data }\n    }\n\n    fn end(&self) -> u64 {\n        self.offset + self.data.len() as u64\n    }\n}\n\n/// Received data of a stream is stored in [`RecvBuf`].\n///\n/// The receiving buffer is relatively simple, as it receives segmented data\n/// that may not be continuous. It sequentially stores the received data\n/// fragments and then reassembles them into a continuous data stream for\n/// future reading by the application layer.\n///\n/// It implements the [`Buf`] trait and can operate on the **received continuous\n/// data** through the [`Buf`] trait. [`Buf::has_remaining`] returns `false` not\n/// only when the buffer is empty, but also when there is no readable continuous\n/// data in the buffer.\n#[derive(Default, Debug)]\npub struct RecvBuf {\n    nread: u64,\n    largest_offset: u64,\n    // segments[0].offset >= nread\n    segments: VecDeque<Segment>,\n}\n\nimpl RecvBuf {\n    /// Returns whether the receiving buffer is empty.\n    pub fn is_empty(&self) -> bool {\n        self.segments.is_empty()\n    }\n\n    /// Returns how many continuous bytes have been read.\n    ///\n    /// # Example\n    ///\n    /// ``` rust\n    /// # use bytes::{Bytes, BytesMut};\n    /// # use qrecovery::recv::RecvBuf;\n    /// let mut recvbuf = RecvBuf::default();\n    /// assert_eq!(recvbuf.nread(), 0);\n    ///\n    /// recvbuf.recv(0, Bytes::from(\"hello\"));\n    /// assert_eq!(recvbuf.nread(), 0);\n    /// // recvbuf:  hello\n    /// // offset=0  ^\n    ///\n    /// let mut dst = BytesMut::new();\n    /// recvbuf.try_read(&mut dst);\n    /// assert_eq!(recvbuf.nread(), 5);\n    /// // recvbuf:  hello\n    /// // offset=5 
      ^\n    /// ```\n    pub fn nread(&self) -> u64 {\n        self.nread\n    }\n\n    /// Returns the largest offset received.\n    ///\n    /// For a receiver in the SizeKnown state, this must be smaller than the `final_size`.\n    pub fn largest_offset(&self) -> u64 {\n        self.largest_offset\n    }\n\n    /// Receive a fragment of data, returning the consumption of the flow limit.\n    ///\n    /// # Example\n    ///\n    /// The following example demonstrates how [`RecvBuf`] works.\n    ///\n    /// The data \"hello, world!\" is split into four fragments.\n    /// ``` rust\n    /// # use bytes::{Bytes, BytesMut};\n    /// # use qrecovery::recv::RecvBuf;\n    /// let mut recvbuf = RecvBuf::default();\n    /// // data:    \"hello, world!\"\n    /// assert_eq!(recvbuf.recv(0, Bytes::from(\"hell\")), 4);\n    /// // recvbuf: \"hell\"\n    /// // new:     \"hell\"\n    /// assert_eq!(recvbuf.recv(7, Bytes::from(\"world\")), 8);\n    /// // recvbuf: \"hell\" \"world\"\n    /// // new:            \"world\"\n    /// assert_eq!(recvbuf.recv(3, Bytes::from(\"lo, \")), 0);\n    /// // recvbuf: \"hello, world\"\n    /// // new:         \"o, \"\n    /// assert_eq!(recvbuf.recv(7, Bytes::from(\"world!\")), 1);\n    /// // recvbuf: \"hello, world!\"\n    /// // new:                 \"!\"\n    /// let mut received = BytesMut::new();\n    /// recvbuf.try_read(&mut received);\n    /// assert_eq!(received.as_ref(), b\"hello, world!\");\n    /// ```\n    pub fn recv(&mut self, offset: u64, mut data: Bytes) -> u64 {\n        let previous_largest = self.largest_offset;\n\n        // skip over data that has already been read\n        let mut start = offset.max(self.nread);\n        data.advance(data.remaining().min((start - offset) as usize));\n\n        loop {\n            if data.is_empty() {\n                break;\n            }\n\n            // Insert fragments from front to back:\n            match self.segments.binary_search_by(|seg| seg.offset.cmp(&start)) {\n                // The new segment starts at exactly the same position as an existing segment, e.g.:\n                // | 
exist_seg | ... |\n                // | new_seg....................|\n                // Trim off the front part of new_seg that is covered, then continue the loop:\n                // | exist_seg | ... |\n                //             | new_seg........|\n                // In the vast majority of cases, this branch is entered first\n                Ok(exist_seg_index) => {\n                    let length_covered = data.len().min(self.segments[exist_seg_index].data.len());\n                    data.advance(length_covered);\n                    start += length_covered as u64;\n                }\n                // The start does not coincide with an existing segment: check whether it overlaps\n                // the previous and/or the next segment, and trim off the overlapping parts\n                //      | exist_seg1 |    | exist_seg2 |\n                // 1.                  | new_seg|\n                // 2. | new_seg |\n                // The binary search result seg_index may be the index of the previous seg, or the index of the next seg\n                // 1. If it is the previous seg's index, we must check whether the next seg exists, and trim ourselves if it does\n                // 2. If it is the next seg's index (only possible when index == 0), the same logic applies, so index 0 can be handled specially\n                Err(0) => {\n                    let uncovered = match self.segments.front() {\n                        // If it overlaps the next segment, split off the front (non-overlapping) part of data\n                        Some(next_seg) if start + data.len() as u64 > next_seg.offset => {\n                            // After splitting, start must equal next_seg.offset, so the next loop iteration enters the branch above\n                            // next_seg.offset < start + data.len()\n                            // next_seg.offset - start < data.len(), so no out-of-bounds\n                            data.split_to((next_seg.offset - start) as usize)\n                        }\n                        // If there is no overlap, or this is the first segment, take the whole data;\n                        // then on the next iteration data.is_empty() == true => break\n                        Some(..) 
| None => core::mem::take(&mut data),\n                    };\n                    let segment = Segment::new_with_data(start, uncovered);\n                    start += segment.data.len() as u64;\n                    self.largest_offset = self.largest_offset.max(segment.end());\n                    self.segments.push_front(segment);\n                }\n                // seg_index != 0 => seg_index > 0\n                // start > prev_seg.offset\n                Err(seg_index) => {\n                    // First, check whether it overlaps the previous seg\n                    // After this step, offset >= prev_seg.end()\n                    data = match self.segments.get(seg_index - 1) {\n                        // start > prev_seg.offset && end <= prev_seg.end()\n                        //  | ---prev_seg-- |\n                        //    | new_seg     |\n                        // This segment may be entirely covered by the previous one; break directly\n                        Some(prev_seg) if (start + data.len() as u64) <= prev_seg.end() => break,\n                        // start > prev_seg.offset && start < prev_seg.end()\n                        //  | ---prev_seg-- |\n                        //    | ---new_seg--- |\n                        // Trim off the part overlapping the previous seg; the remainder is guaranteed non-empty\n                        Some(prev_seg) if start < prev_seg.end() => {\n                            // After splitting, start must equal prev_seg.end(),\n                            // so the next loop iteration enters the branch above\n                            // start < prev_seg.end() => 0 < prev_seg.end() - start, so no out-of-bounds\n                            let length_covered = prev_seg.end() - start;\n                            start += length_covered;\n                            data.split_off(length_covered as usize)\n                        }\n                        // If there is no overlap, take data directly;\n                        // then on the next iteration data.is_empty() == true => break\n                        Some(..) 
| None => data,\n                    };\n\n                    let uncovered = match self.segments.get(seg_index) {\n                        // next_seg.offset >= prev_seg.end() && start >= prev_seg.end()\n                        //  | ---next_seg--- |\n                        //  | ---new_seg-- |\n                        // uncovered is the data in the range [prev_seg.end(), next_seg.offset)\n                        // If start == next_seg.offset, uncovered would be empty; continue directly\n                        Some(next_seg) if start == next_seg.offset => continue,\n                        //    | --next_seg--- |\n                        //  | ---new_seg-- |\n                        // If it overlaps the next segment, split off the non-overlapping part of data\n                        Some(next_seg) if start + data.len() as u64 > next_seg.offset => {\n                            // After splitting, start must equal next_seg.offset, so the next loop iteration enters the branch above\n                            // next_seg.offset < start + data.len()\n                            // next_seg.offset - start < data.len(), so no out-of-bounds\n                            data.split_to((next_seg.offset - start) as usize)\n                        }\n                        // If there is no overlap, or this is the first segment, take the whole data;\n                        // then on the next iteration data.is_empty() == true => break\n                        Some(..) 
| None => core::mem::take(&mut data),\n                    };\n\n                    let segment = Segment::new_with_data(start, uncovered);\n                    start += segment.data.len() as u64;\n                    self.largest_offset = self.largest_offset.max(segment.end());\n                    self.segments.insert(seg_index, segment);\n                }\n            }\n            // Enter the next loop iteration (this could also be written recursively)\n        }\n\n        self.largest_offset - previous_largest\n    }\n\n    /// Returns the length of continuous unread data.\n    pub fn available(&self) -> u64 {\n        use core::ops::ControlFlow;\n        let (ControlFlow::Continue(continuous_end) | ControlFlow::Break(continuous_end)) =\n            self.segments.iter().try_fold(self.nread, |offset, seg| {\n                if seg.offset == offset {\n                    ControlFlow::Continue(offset + seg.data.len() as u64)\n                } else {\n                    ControlFlow::Break(offset)\n                }\n            });\n        continuous_end - self.nread\n    }\n\n    /// Once the received data becomes continuous, it becomes readable. 
If the application\n    /// layer is blocked on reading, it needs to be notified that data is readable.\n    pub fn is_readable(&self) -> bool {\n        !self.segments.is_empty() && self.segments[0].offset == self.nread\n    }\n\n    /// Try to read continuous data from [`RecvBuf`] into the buffer passed in.\n    ///\n    /// If the following data is not continuous or there is no data, this method writes nothing and\n    /// returns 0.\n    ///\n    /// Otherwise, returns how much data was written to the buffer passed in.\n    ///\n    /// # Example\n    ///\n    /// ``` rust\n    /// # use bytes::{BytesMut, Bytes};\n    /// # use qrecovery::recv::RecvBuf;\n    /// let mut recvbuf = RecvBuf::default();\n    /// recvbuf.recv(0, Bytes::from(\"012\"));\n    /// recvbuf.recv(3, Bytes::from(\"345\"));\n    /// recvbuf.recv(7, Bytes::from(\"789\"));\n    /// // recvbuf:  012345 789\n    /// // readable: ^^^^^^\n    ///\n    /// let mut dst1 = BytesMut::new();\n    /// recvbuf.try_read(&mut dst1);\n    /// assert_eq!(dst1.as_ref(), b\"012345\");\n    ///\n    /// let mut dst2 = BytesMut::new();\n    /// recvbuf.recv(6, Bytes::from(\"6\"));\n    /// // recvbuf:  0123456789\n    /// // readable:       ^^^^\n    ///\n    /// recvbuf.try_read(&mut dst2);\n    /// assert_eq!(dst2.as_ref(), b\"6789\");\n    /// ```\n    pub fn try_read(&mut self, dst: &mut impl BufMut) -> usize {\n        let origin = dst.remaining_mut();\n        while let Some(seg) = self.segments.front_mut() {\n            if seg.offset != self.nread || !dst.has_remaining_mut() {\n                break;\n            }\n\n            let read = dst.remaining_mut().min(seg.data.len());\n            dst.put(seg.data.split_to(read));\n            self.nread += read as u64;\n            if seg.data.has_remaining() {\n                seg.offset += read as u64;\n            } else {\n                self.segments.pop_front();\n            }\n        }\n        origin - dst.remaining_mut()\n    }\n\n    /// Try to 
get the next continuous data segment.\n    ///\n    /// Compared with [`Self::try_read`], this method is more efficient\n    /// because it reduces some calculations and copies.\n    pub fn try_next(&mut self) -> Option<Bytes> {\n        if self.is_readable() {\n            let data = self.segments.pop_front().unwrap().data;\n            self.nread += data.len() as u64;\n            return Some(data);\n        }\n\n        None\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_no_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"hello\")), 5);\n        assert_eq!(buf.recv(6, Bytes::from(\"world\")), 6);\n\n        assert_eq!(buf.segments.len(), 2);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 6);\n\n        assert_eq!(buf.recv(5, Bytes::from(\" \")), 0);\n        assert_eq!(buf.segments.len(), 3);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 5);\n        assert_eq!(buf.segments[2].offset, 6);\n    }\n\n    #[test]\n    fn test_left_partially_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"01234\")), 5);\n        assert_eq!(buf.recv(2, Bytes::from(\"2345\")), 1); //left segment partially overlapped this\n        assert_eq!(buf.recv(6, Bytes::from(\"6789\")), 4); // no overlap\n\n        assert_eq!(buf.segments.len(), 3);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 5);\n        assert_eq!(buf.segments[2].offset, 6);\n        assert_eq!(buf.available(), 10);\n    }\n\n    #[test]\n    fn test_right_partially_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"hello\")), 5);\n        assert_eq!(buf.recv(6, Bytes::from(\"world!\")), 7);\n        assert_eq!(buf.recv(5, Bytes::from(\" wor\")), 0); // overlap right\n\n        
assert_eq!(buf.segments.len(), 3);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 5);\n        assert_eq!(buf.segments[2].offset, 6);\n        assert_eq!(buf.available(), 12);\n    }\n\n    #[test]\n    #[doc(alias = \"fully_overlap_left\")]\n    fn test_same_offset() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"01234\")), 5);\n        assert_eq!(buf.recv(0, Bytes::from(\"0123456789\")), 5);\n\n        assert_eq!(buf.segments.len(), 2);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 5);\n        assert_eq!(buf.available(), 10);\n    }\n\n    #[test]\n    fn test_fully_overlap_right() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"hello\")), 5);\n        assert_eq!(buf.recv(6, Bytes::from(\"world\")), 6);\n        assert_eq!(buf.recv(5, Bytes::from(\" world!\")), 1); // fully overlap right\n\n        assert_eq!(buf.segments.len(), 4);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 5);\n        assert_eq!(buf.segments[2].offset, 6);\n        assert_eq!(buf.segments[3].offset, 11);\n        assert_eq!(buf.available(), 12);\n    }\n\n    #[test]\n    fn test_left_fully_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"114514\")), 6);\n        assert_eq!(buf.recv(2, Bytes::from(\"45\")), 0); // left segment fully overlapped this\n        assert_eq!(buf.recv(2, Bytes::from(\"4514\")), 0); // left segment fully overlapped this\n        assert_eq!(buf.segments.len(), 1);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.available(), 6);\n    }\n\n    #[test]\n    fn test_right_fully_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"114514\")), 6);\n        assert_eq!(buf.recv(6, Bytes::from(\"1919810\")), 7);\n        
assert_eq!(buf.recv(8, Bytes::from(\"1981\")), 0); // right segment fully overlapped this\n        assert_eq!(buf.recv(8, Bytes::from(\"19810\")), 0); // right segment fully overlapped this\n\n        assert_eq!(buf.segments.len(), 2);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 6);\n        assert_eq!(buf.available(), 13);\n    }\n\n    #[test]\n    fn test_left_right_partially_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"012345\")), 6);\n        assert_eq!(buf.recv(7, Bytes::from(\"789\")), 4);\n        assert_eq!(buf.recv(6, Bytes::from(\"6\")), 0); // left and right partially overlapped this\n\n        assert_eq!(buf.segments.len(), 3);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 6);\n        assert_eq!(buf.segments[2].offset, 7);\n        assert_eq!(buf.available(), 10);\n    }\n\n    #[test]\n    fn test_left_right_fully_overlap() {\n        let mut buf = RecvBuf::default();\n        assert_eq!(buf.recv(0, Bytes::from(\"01234\")), 5);\n        assert_eq!(buf.recv(5, Bytes::from(\"56789\")), 5);\n        assert_eq!(buf.recv(2, Bytes::from(\"2345678\")), 0); // left and right fully overlapped this\n\n        assert_eq!(buf.segments.len(), 2);\n        assert_eq!(buf.segments[0].offset, 0);\n        assert_eq!(buf.segments[1].offset, 5);\n        assert_eq!(buf.available(), 10);\n    }\n\n    #[test]\n    fn test_recvbuf_read() {\n        let mut rcvbuf = RecvBuf::default();\n        assert_eq!(rcvbuf.recv(0, Bytes::from(\"hello\")), 5);\n        assert_eq!(rcvbuf.recv(6, Bytes::from(\"world\")), 6);\n\n        let mut dst = [0u8; 20];\n        let mut buf = &mut dst[..];\n        rcvbuf.try_read(&mut buf);\n        assert_eq!(buf.remaining_mut(), 15);\n\n        assert_eq!(rcvbuf.recv(5, Bytes::from(\" \")), 0);\n        rcvbuf.try_read(&mut buf);\n\n        assert_eq!(buf.remaining_mut(), 9);\n        
assert_eq!(dst[..11], b\"hello world\"[..]);\n    }\n}\n"
  },
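The `RecvBuf` tests above all assert one invariant: `recv` returns only the number of *fresh* (previously unseen) bytes, however the new segment overlaps what is already buffered. Below is a minimal standalone sketch of that accounting using plain `[start, end)` intervals; the `Ranges` type is illustrative and is not the crate's `RecvBuf`.

```rust
/// Illustrative sketch (NOT the crate's `RecvBuf`): tracks received byte
/// ranges as sorted, disjoint `[start, end)` intervals and reports how many
/// bytes of each arrival were fresh, mirroring the return values the
/// `RecvBuf` tests assert.
#[derive(Default)]
struct Ranges(Vec<(u64, u64)>); // sorted, non-overlapping intervals

impl Ranges {
    /// Record `[start, end)`; return the number of previously unseen bytes.
    fn recv(&mut self, start: u64, end: u64) -> u64 {
        let mut fresh = end.saturating_sub(start);
        // Existing intervals are disjoint, so overlaps can be subtracted one by one.
        for &(s, e) in &self.0 {
            let overlap = e.min(end).saturating_sub(s.max(start));
            fresh = fresh.saturating_sub(overlap);
        }
        self.0.push((start, end));
        self.0.sort_unstable();
        // Merge overlapping/adjacent intervals to keep the invariant.
        let mut merged: Vec<(u64, u64)> = Vec::new();
        for (s, e) in self.0.drain(..) {
            match merged.last_mut() {
                Some(last) if s <= last.1 => last.1 = last.1.max(e),
                _ => merged.push((s, e)),
            }
        }
        self.0 = merged;
        fresh
    }
}

fn main() {
    let mut buf = Ranges::default();
    assert_eq!(buf.recv(0, 5), 5);  // "01234": all fresh
    assert_eq!(buf.recv(0, 10), 5); // same offset: only the tail is fresh
    assert_eq!(buf.recv(2, 7), 0);  // fully covered: nothing fresh
    println!("fresh-byte accounting matches the test expectations");
}
```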
  {
    "path": "qrecovery/src/recv/reader.rs",
    "content": "use std::{\n    io::{self},\n    ops::DerefMut,\n    pin::Pin,\n    task::{Context, Poll},\n};\n\nuse bytes::Bytes;\nuse futures::Stream;\nuse qbase::{\n    frame::{MaxStreamDataFrame, StopSendingFrame, io::SendFrame},\n    varint::VARINT_MAX,\n};\nuse qevent::quic::transport::{GranularStreamStates, StreamSide, StreamStateUpdated};\nuse tokio::io::{AsyncRead, ReadBuf};\n\nuse super::recver::{ArcRecver, Recver};\nuse crate::streams::error::StreamError;\n\npub trait StopSending {\n    /// Tell peer to stop sending data with the given error code.\n    ///\n    /// If all data has been received (the stream has closed), or the stream has been reset, this method will do\n    /// nothing.\n    ///\n    /// Otherwise, a [`STOP_SENDING frame`] will be sent to the peer, and then the stream will be reset by peer,\n    /// neither new data nor lost data will be sent.\n    ///\n    /// Unlike TCP, stopping a QUIC stream needs an error code, which is used to indicate\n    /// the reason for the stopping. 
The error code should be a `u64` value,\n    /// defined by the application protocol using QUIC, such as HTTP/3 or gRPC.\n    ///\n    /// [`STOP_SENDING frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-stop_sending-frames\n    fn stop(&mut self, error_code: u64);\n}\n\n/// The reader part of a QUIC stream.\n///\n/// A QUIC stream is *reliable*, *ordered*, and *flow-controlled*.\n///\n/// This struct implements the [`AsyncRead`] trait, allowing you to read an ordered byte stream from\n/// the peer, like [`TcpStream`].\n///\n/// A read from the [`Reader`] into a non-empty buffer will block until some data\n/// is available, the stream is closed, or the stream is reset by the peer.\n///\n/// # Note\n///\n/// The stream must be closed before the [`Reader`] is dropped.\n///\n/// The [`read`] returning `Ok(0)` indicates that all data from the peer has been read and the stream has\n/// been `closed`; it is okay to drop the [`Reader`] after that.\n///\n/// Alternatively, if the [`read`] returns an error, it indicates that the stream has been `reset`, or\n/// closed due to other reasons. 
It's also okay to drop the [`Reader`] after that.\n///\n/// You can call [`stop`] to tell the peer to stop sending data with the given error code, the [`Reader`]\n/// will be consumed, and the error code will be sent to the peer.\n///\n/// # Example\n///\n/// The [`Reader`] is created by the `open_bi_stream`, `accept_bi_stream`, or `accept_uni_stream` methods\n/// of `QuicConnection` (in the `quic` crate).\n///\n/// The following example demonstrates how to read and write data on a QUIC stream:\n///\n/// ```rust, ignore\n/// # use tokio::io::{AsyncWriteExt, AsyncReadExt};\n/// # async fn example() -> std::io::Result<()> {\n/// let (reader, writer) = quic_connection.open_bi_stream().await?;\n///\n/// writer.write_all(b\"GET README.md\\r\\n\").await?;\n/// writer.shutdown().await?;\n///\n/// let mut response = String::new();\n/// let n = reader.read_to_string(&mut response).await?;\n/// println!(\"Response {} bytes: {}\", n, response);\n/// Ok(())\n/// # }\n/// ```\n///\n/// [`TcpStream`]: tokio::net::TcpStream\n/// [`read`]: tokio::io::AsyncReadExt::read\n/// [`stop`]: Reader::stop\n/// [`RESET_STREAM frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frames\n#[derive(Debug)]\npub struct Reader<TX> {\n    inner: ArcRecver<TX>,\n    qlog_span: qevent::telemetry::Span,\n    tracing_span: tracing::Span,\n}\n\nimpl<TX> Reader<TX> {\n    /// Create a new [`Reader`] from the given [`Recver`].\n    ///\n    /// This method is used by the `accept_bi_stream` and `accept_uni_stream` methods of\n    /// [`QuicConnection`](crate::QuicConnection).\n    pub(crate) fn new(inner: ArcRecver<TX>) -> Self {\n        Self {\n            inner,\n            qlog_span: qevent::telemetry::Span::current(),\n            tracing_span: tracing::Span::current(),\n        }\n    }\n\n    #[inline]\n    pub fn poll_read(\n        &mut self,\n        cx: &mut Context<'_>,\n        buf: &mut impl bytes::BufMut,\n    ) -> Poll<Result<(), StreamError>>\n    where\n        TX: 
SendFrame<MaxStreamDataFrame>,\n    {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut recver = self.inner.recver();\n        let receiving_state = recver.as_mut().map_err(|e| e.clone())?;\n        // The receiving-state evolution driven by application-layer reads is clearly visible here\n        match receiving_state {\n            Recver::Recv(r) => r.poll_read(cx, buf).map(Ok),\n            Recver::SizeKnown(r) => r.poll_read(cx, buf).map(Ok),\n            Recver::DataRcvd(r) => {\n                r.poll_read(buf);\n                if r.is_all_read() {\n                    r.upgrade();\n                    *receiving_state = Recver::DataRead;\n                }\n                Poll::Ready(Ok(()))\n            }\n            Recver::DataRead => Poll::Ready(Ok(())),\n            Recver::ResetRcvd(r) => {\n                qevent::event!(StreamStateUpdated {\n                    stream_id: r.stream_id().id(),\n                    stream_type: r.stream_id().dir(),\n                    old: GranularStreamStates::ResetReceived,\n                    new: GranularStreamStates::ResetRead,\n                    stream_side: StreamSide::Receiving\n                });\n                let reset_stream_error = (&*r).into();\n                *receiving_state = Recver::ResetRead(reset_stream_error);\n                Poll::Ready(Err(reset_stream_error.into()))\n            }\n            Recver::ResetRead(r) => Poll::Ready(Err((*r).into())),\n        }\n    }\n\n    #[inline]\n    pub fn poll_next(\n        self: Pin<&mut Self>,\n        cx: &mut Context<'_>,\n    ) -> Poll<Option<Result<Bytes, StreamError>>>\n    where\n        TX: SendFrame<MaxStreamDataFrame>,\n    {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut recver = self.inner.recver();\n        let receiving_state = recver.as_mut().map_err(|e| e.clone())?;\n        // The receiving-state evolution driven by application-layer reads is clearly visible here\n        match receiving_state {\n            Recver::Recv(r) => r.poll_next(cx).map(Ok).map(Some),\n
           Recver::SizeKnown(r) => r.poll_next(cx).map(Ok).map(Some),\n            Recver::DataRcvd(r) => {\n                let Some(data) = r.poll_next() else {\n                    return Poll::Ready(None);\n                };\n                if r.is_all_read() {\n                    r.upgrade();\n                    *receiving_state = Recver::DataRead;\n                }\n                Poll::Ready(Some(Ok(data)))\n            }\n            Recver::DataRead => Poll::Ready(None),\n            Recver::ResetRcvd(r) => {\n                qevent::event!(StreamStateUpdated {\n                    stream_id: r.stream_id().id(),\n                    stream_type: r.stream_id().dir(),\n                    old: GranularStreamStates::ResetReceived,\n                    new: GranularStreamStates::ResetRead,\n                    stream_side: StreamSide::Receiving\n                });\n                let reset_stream_error = (&*r).into();\n                *receiving_state = Recver::ResetRead(reset_stream_error);\n                Poll::Ready(Some(Err(reset_stream_error.into())))\n            }\n            Recver::ResetRead(r) => Poll::Ready(Some(Err((*r).into()))),\n        }\n    }\n}\n\nimpl<TX> StopSending for Reader<TX>\nwhere\n    TX: SendFrame<StopSendingFrame>,\n{\n    /// Tell peer to stop sending data with the given error code.\n    ///\n    /// If all data has been received(the stream has closed), or the stream has been reset, this method will do\n    /// nothing.\n    ///\n    /// Otherwise, a [`STOP_SENDING frame`] will be sent to the peer, and then the stream will be reset by peer.\n    ///\n    /// [`STOP_SENDING frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-stop_sending-frames\n    fn stop(&mut self, error_code: u64) {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        debug_assert!(error_code <= VARINT_MAX);\n        let mut recver = self.inner.recver();\n        let inner = recver.deref_mut();\n        if let 
Ok(receiving_state) = inner {\n            match receiving_state {\n                Recver::Recv(r) => {\n                    r.stop(error_code);\n                }\n                Recver::SizeKnown(r) => {\n                    r.stop(error_code);\n                }\n                _ => (),\n            }\n        }\n    }\n}\n\nimpl<TX> AsyncRead for Reader<TX>\nwhere\n    TX: SendFrame<MaxStreamDataFrame>,\n{\n    #[inline]\n    fn poll_read(\n        self: Pin<&mut Self>,\n        cx: &mut Context<'_>,\n        buf: &mut ReadBuf<'_>,\n    ) -> Poll<io::Result<()>> {\n        Reader::poll_read(self.get_mut(), cx, buf).map_err(io::Error::from)\n    }\n}\n\nimpl<TX> Stream for Reader<TX>\nwhere\n    TX: SendFrame<MaxStreamDataFrame>,\n{\n    type Item = Result<Bytes, StreamError>;\n\n    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        Reader::poll_next(self, cx)\n    }\n}\n\nimpl<TX> Drop for Reader<TX> {\n    fn drop(&mut self) {\n        let mut recver = self.inner.recver();\n        let inner = recver.deref_mut();\n        if let Ok(receiving_state) = inner {\n            match receiving_state {\n                Recver::Recv(r) if !r.is_stopped() => {\n                    #[cfg(debug_assertions)]\n                    tracing::warn!(\n                        target: \"quic\",\n                        \"The receiving {} is not stopped with error before dropped!\",\n                        r.stream_id(),\n                    );\n                    #[cfg(not(debug_assertions))]\n                    tracing::debug!(\n                        target: \"quic\",\n                        \"The receiving {} is not stopped with error before dropped!\",\n                        r.stream_id(),\n                    );\n                }\n                Recver::SizeKnown(r) if !r.is_stopped() => {\n                    #[cfg(debug_assertions)]\n                    tracing::warn!(\n                        target: \"quic\",\n     
                   \"The receiving {} is not stopped with error before dropped!\",\n                        r.stream_id()\n                    );\n                    #[cfg(not(debug_assertions))]\n                    tracing::debug!(\n                        target: \"quic\",\n                        \"The receiving {} is not stopped with error before dropped!\",\n                        r.stream_id()\n                    );\n                }\n                _ => (),\n            }\n        }\n    }\n}\n"
  },
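`Reader::poll_read` and `poll_next` drive the receiving state machine: once every byte of a `DataRcvd` stream has been read, the state is upgraded in place to `DataRead`, after which reads return immediately with no data (EOF, like `read` returning `Ok(0)`). Below is a standalone sketch of just that in-place transition; the enum and its fields are illustrative, not the crate's `Recver`.

```rust
/// Illustrative sketch (NOT the crate's `Recver`): a two-state tail of the
/// receiving state machine, showing the in-place upgrade from DataRcvd to
/// DataRead once the buffered data is drained.
enum RecvState {
    DataRcvd { unread: usize },
    DataRead,
}

impl RecvState {
    /// Read up to `n` bytes; transition to `DataRead` when the buffer drains.
    fn read(&mut self, n: usize) -> usize {
        match self {
            RecvState::DataRcvd { unread } => {
                let taken = n.min(*unread);
                *unread -= taken;
                if *unread == 0 {
                    // Same shape as `*receiving_state = Recver::DataRead;`
                    *self = RecvState::DataRead;
                }
                taken
            }
            // Reading in DataRead yields 0 bytes, i.e. end of stream.
            RecvState::DataRead => 0,
        }
    }
}

fn main() {
    let mut st = RecvState::DataRcvd { unread: 5 };
    assert_eq!(st.read(3), 3);  // partial read keeps DataRcvd
    assert_eq!(st.read(10), 2); // drains the buffer, upgrades to DataRead
    assert_eq!(st.read(1), 0);  // EOF from then on
}
```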
  {
    "path": "qrecovery/src/recv/recver.rs",
    "content": "use std::{\n    io,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n};\n\nuse bytes::{BufMut, Bytes};\nuse qbase::{\n    error::{Error, ErrorKind, QuicError},\n    frame::{\n        GetFrameType, MaxStreamDataFrame, ResetStreamError, ResetStreamFrame, StopSendingFrame,\n        StreamFrame, io::SendFrame,\n    },\n    sid::StreamId,\n    varint::{VARINT_MAX, VarInt},\n};\nuse qevent::quic::transport::{\n    GranularStreamStates, StreamDataLocation, StreamDataMoved, StreamSide, StreamStateUpdated,\n};\n\nuse super::rcvbuf;\n\n#[derive(Debug)]\npub(super) struct Recv<TX> {\n    stream_id: StreamId,\n    rcvbuf: rcvbuf::RecvBuf,\n    read_waker: Option<Waker>,\n    stop_state: Option<u64>,\n    broker: TX,\n    largest: u64,\n    max_stream_data: u64,\n}\n\nimpl<TX> Recv<TX>\nwhere\n    TX: SendFrame<MaxStreamDataFrame>,\n{\n    pub(super) fn poll_read(&mut self, cx: &mut Context<'_>, buf: &mut impl BufMut) -> Poll<()> {\n        if let Some(_reset) = self.stop_state {\n            // Though STOP_SENDING has been sent, the application layer can still read the data\n        }\n\n        if !self.rcvbuf.is_readable() {\n            self.read_waker = Some(cx.waker().clone());\n            return Poll::Pending;\n        }\n\n        let offset = self.rcvbuf.nread();\n        let length = self.rcvbuf.try_read(buf) as u64;\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset,\n            length,\n            from: StreamDataLocation::Transport,\n            to: StreamDataLocation::Application,\n        });\n\n        let threshold = 1_000_000;\n        if self.rcvbuf.nread() + threshold > self.max_stream_data {\n            let max_stream_data = (self.rcvbuf.nread() + threshold * 2).min(VARINT_MAX);\n            if max_stream_data > self.max_stream_data {\n                self.max_stream_data = max_stream_data;\n                self.broker.send_frame([MaxStreamDataFrame::new(\n        
            self.stream_id,\n                    VarInt::from_u64(max_stream_data).unwrap(),\n                )]);\n            }\n        }\n\n        Poll::Ready(())\n    }\n\n    pub(super) fn poll_next(&mut self, cx: &mut Context<'_>) -> Poll<Bytes> {\n        if !self.rcvbuf.is_readable() {\n            self.read_waker = Some(cx.waker().clone());\n            return Poll::Pending;\n        }\n\n        let offset = self.rcvbuf.nread();\n        let data = self.rcvbuf.try_next().expect(\"is_readable checked\");\n        let length = data.len() as u64;\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset,\n            length,\n            from: StreamDataLocation::Transport,\n            to: StreamDataLocation::Application,\n        });\n\n        let threshold = 1_000_000;\n        if self.rcvbuf.nread() + threshold > self.max_stream_data {\n            let max_stream_data = (self.rcvbuf.nread() + threshold * 2).min(VARINT_MAX);\n            if max_stream_data > self.max_stream_data {\n                self.max_stream_data = max_stream_data;\n                self.broker.send_frame([MaxStreamDataFrame::new(\n                    self.stream_id,\n                    VarInt::from_u64(max_stream_data).unwrap(),\n                )]);\n            }\n        }\n\n        Poll::Ready(data)\n    }\n}\n\nimpl<TX> Recv<TX>\nwhere\n    TX: SendFrame<StopSendingFrame>,\n{\n    pub(super) fn stop(&mut self, err_code: u64) {\n        if self.stop_state.is_none() {\n            self.stop_state = Some(err_code);\n            self.broker.send_frame([StopSendingFrame::new(\n                self.stream_id,\n                VarInt::from_u64(err_code).expect(\"app error code must not exceed 2^62!\"),\n            )]);\n        }\n    }\n}\n\nimpl<TX: Clone> Recv<TX> {\n    pub(super) fn determin_size(\n        &mut self,\n        stream_frame: &StreamFrame,\n    ) -> Result<SizeKnown<TX>, QuicError> {\n        if let Some(waker) = 
self.read_waker.take() {\n            waker.wake();\n        }\n\n        let final_size = stream_frame.offset() + stream_frame.len() as u64;\n        let received_largest_offset = self.rcvbuf.largest_offset();\n        if received_largest_offset > final_size {\n            return Err(QuicError::new(\n                ErrorKind::FinalSize,\n                stream_frame.frame_type().into(),\n                format!(\n                    \"{} received a wrong smaller final size {} than the largest rcvd data offset {}\",\n                    stream_frame.stream_id(),\n                    final_size,\n                    received_largest_offset\n                ),\n            ));\n        }\n\n        qevent::event!(StreamStateUpdated {\n            stream_id: self.stream_id.id(),\n            stream_type: self.stream_id.dir(),\n            old: GranularStreamStates::Receive,\n            new: GranularStreamStates::SizeKnown,\n            stream_side: StreamSide::Receiving\n        });\n        Ok(SizeKnown {\n            final_size,\n            stream_id: self.stream_id,\n            rcvbuf: std::mem::take(&mut self.rcvbuf),\n            stop_state: self.stop_state.take(),\n            broker: self.broker.clone(),\n            read_waker: self.read_waker.take(),\n        })\n    }\n}\n\nimpl<TX> Recv<TX> {\n    pub(super) fn new(stream_id: StreamId, buf_size: u64, broker: TX) -> Self {\n        Self {\n            stream_id,\n            rcvbuf: rcvbuf::RecvBuf::default(),\n            read_waker: None,\n            stop_state: None,\n            broker,\n            largest: 0,\n            max_stream_data: buf_size,\n        }\n    }\n\n    pub(super) fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    pub(super) fn recv(\n        &mut self,\n        stream_frame: StreamFrame,\n        body: Bytes,\n    ) -> Result<usize, QuicError> {\n        let data_start = stream_frame.offset();\n\n        let data_end = data_start + body.len() as u64;\n     
   if data_end > self.max_stream_data {\n            return Err(QuicError::new(\n                ErrorKind::FlowControl,\n                stream_frame.frame_type().into(),\n                format!(\n                    \"{} send {data_end} bytes which exceeds the stream data limit {}\",\n                    stream_frame.stream_id(),\n                    self.max_stream_data\n                ),\n            ));\n        }\n        let data_length = body.len() as u64;\n        let fresh_data = self.rcvbuf.recv(data_start, body);\n        qevent::event!(\n            StreamDataMoved {\n                stream_id: self.stream_id,\n                offset: data_start,\n                length: data_length,\n                from: StreamDataLocation::Network,\n                to: StreamDataLocation::Transport,\n            },\n            fresh_data\n        );\n        if self.largest < data_end {\n            self.largest = data_end;\n        }\n        if self.rcvbuf.is_readable()\n            && let Some(waker) = self.read_waker.take()\n        {\n            waker.wake()\n        }\n        Ok(fresh_data as _)\n    }\n\n    pub(super) fn recv_reset(\n        &mut self,\n        reset_frame: &ResetStreamFrame,\n    ) -> Result<usize, QuicError> {\n        let final_size = reset_frame.final_size();\n        if final_size < self.largest {\n            return Err(QuicError::new(\n                ErrorKind::FinalSize,\n                reset_frame.frame_type().into(),\n                format!(\n                    \"{} reset with a wrong smaller final size {final_size} than the largest rcvd data offset {}\",\n                    reset_frame.stream_id(),\n                    self.largest\n                ),\n            ));\n        }\n        self.wake_reader();\n        log_reset_event(self.stream_id, GranularStreamStates::Receive);\n        Ok((final_size - self.largest) as _)\n    }\n\n    pub(super) fn is_stopped(&self) -> bool {\n        self.stop_state.is_some()\n    
}\n\n    pub(super) fn wake_reader(&mut self) {\n        if let Some(waker) = self.read_waker.take() {\n            waker.wake()\n        }\n    }\n}\n\n/// Once the size of the data stream is determined, MAX_STREAM_DATA will no longer\n/// be updated. Receiving data on this stream is meaningless. At this point, it is\n/// also meaningless for the application layer to continue receiving data.\n#[derive(Debug)]\npub struct SizeKnown<TX> {\n    stream_id: StreamId,\n    rcvbuf: rcvbuf::RecvBuf,\n    read_waker: Option<Waker>,\n    stop_state: Option<u64>,\n    broker: TX,\n    final_size: u64,\n}\n\nimpl<TX> SizeKnown<TX> {\n    pub(super) fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    pub(super) fn recv(\n        &mut self,\n        stream_frame: StreamFrame,\n        data: Bytes,\n    ) -> Result<usize, QuicError> {\n        let data_start = stream_frame.offset();\n        let data_end = data_start + data.len() as u64;\n        if data_end > self.final_size {\n            return Err(QuicError::new(\n                ErrorKind::FinalSize,\n                stream_frame.frame_type().into(),\n                format!(\n                    \"{} send {data_end} bytes which exceeds the final_size {}\",\n                    stream_frame.stream_id(),\n                    self.final_size\n                ),\n            ));\n        }\n        if stream_frame.is_fin() && data_end != self.final_size {\n            return Err(QuicError::new(\n                ErrorKind::FinalSize,\n                stream_frame.frame_type().into(),\n                format!(\n                    \"{} change the final size from {} to {data_end}\",\n                    stream_frame.stream_id(),\n                    self.final_size\n                ),\n            ));\n        }\n        let data_length = data.len() as u64;\n        let fresh_data = self.rcvbuf.recv(data_start, data);\n        qevent::event!(\n            StreamDataMoved {\n                stream_id: 
self.stream_id,\n                offset: data_start,\n                length: data_length,\n                from: StreamDataLocation::Network,\n                to: StreamDataLocation::Transport,\n            },\n            fresh_data\n        );\n        if self.rcvbuf.is_readable()\n            && let Some(waker) = self.read_waker.take()\n        {\n            waker.wake()\n        }\n        Ok(fresh_data as usize)\n    }\n\n    pub(super) fn is_all_rcvd(&self) -> bool {\n        self.rcvbuf.nread() + self.rcvbuf.available() == self.final_size\n    }\n\n    #[allow(dead_code)]\n    pub(super) fn read(&mut self, mut buf: &mut [u8]) -> io::Result<usize> {\n        if self.rcvbuf.is_readable() {\n            let buflen = buf.remaining_mut();\n            self.rcvbuf.try_read(&mut buf);\n            Ok(buflen - buf.remaining_mut())\n        } else {\n            Err(io::ErrorKind::WouldBlock.into())\n        }\n    }\n\n    pub(super) fn poll_read(&mut self, cx: &mut Context<'_>, buf: &mut impl BufMut) -> Poll<()> {\n        if let Some(_reset) = self.stop_state {\n            // Though STOP_SENDING has been sent, the application layer can still read the data\n        }\n\n        if !self.rcvbuf.is_readable() {\n            self.read_waker = Some(cx.waker().clone());\n            return Poll::Pending;\n        }\n\n        let offset = self.rcvbuf.nread();\n        let length = self.rcvbuf.try_read(buf) as u64;\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset,\n            length,\n            from: StreamDataLocation::Transport,\n            to: StreamDataLocation::Application,\n        });\n        Poll::Ready(())\n    }\n\n    pub(super) fn poll_next(&mut self, cx: &mut Context<'_>) -> Poll<Bytes> {\n        if !self.rcvbuf.is_readable() {\n            self.read_waker = Some(cx.waker().clone());\n            return Poll::Pending;\n        }\n\n        let offset = self.rcvbuf.nread();\n        let data = 
self.rcvbuf.try_next().expect(\"is_readable checked\");\n        let length = data.len() as u64;\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset,\n            length,\n            from: StreamDataLocation::Transport,\n            to: StreamDataLocation::Application,\n        });\n        Poll::Ready(data)\n    }\n\n    pub(super) fn recv_reset(&mut self, reset_frame: &ResetStreamFrame) -> Result<(), QuicError> {\n        let final_size = reset_frame.final_size();\n        if final_size != self.final_size {\n            return Err(QuicError::new(\n                ErrorKind::FinalSize,\n                reset_frame.frame_type().into(),\n                format!(\n                    \"{} change the final size from {} to {final_size}\",\n                    reset_frame.stream_id(),\n                    self.final_size\n                ),\n            ));\n        }\n        self.wake_reader();\n        log_reset_event(self.stream_id, GranularStreamStates::SizeKnown);\n        Ok(())\n    }\n\n    pub(super) fn is_stopped(&self) -> bool {\n        self.stop_state.is_some()\n    }\n\n    pub(super) fn wake_reader(&mut self) {\n        if let Some(waker) = self.read_waker.take() {\n            waker.wake()\n        }\n    }\n}\n\nimpl<TX> SizeKnown<TX>\nwhere\n    TX: SendFrame<StopSendingFrame> + Clone + Send + 'static,\n{\n    pub(super) fn upgrade(&mut self) -> DataRcvd {\n        qevent::event!(StreamStateUpdated {\n            stream_id: self.stream_id.id(),\n            stream_type: self.stream_id.dir(),\n            old: GranularStreamStates::SizeKnown,\n            new: GranularStreamStates::DataReceived,\n            stream_side: StreamSide::Receiving\n        });\n        self.wake_reader();\n        DataRcvd {\n            stream_id: self.stream_id,\n            rcvbuf: std::mem::take(&mut self.rcvbuf),\n        }\n    }\n}\n\nimpl<TX> SizeKnown<TX>\nwhere\n    TX: SendFrame<StopSendingFrame>,\n{\n    /// 
Abort can be called multiple times at the application level,\n    /// but only the first call is effective.\n    pub(super) fn stop(&mut self, err_code: u64) {\n        if self.stop_state.is_none() {\n            self.stop_state = Some(err_code);\n            self.broker.send_frame([StopSendingFrame::new(\n                self.stream_id,\n                VarInt::from_u64(err_code).expect(\"app error code must not exceed 2^62!\"),\n            )]);\n        }\n    }\n}\n\n/// Once all the data has been received, STOP_SENDING becomes meaningless.\n/// If the application layer aborts reading, it will directly result in the termination\n/// of the lifecycle, leading to the release of all states and data. There is also no\n/// need for any further readable notifications to wake up. Subsequent reads will\n/// immediately return the available data until the end.\n#[derive(Debug)]\npub struct DataRcvd {\n    stream_id: StreamId,\n    rcvbuf: rcvbuf::RecvBuf,\n}\n\nimpl DataRcvd {\n    /// Unlike the previous states, when there is no more data, it no longer returns\n    /// \"WouldBlock\" but instead returns 0, which typically indicates the end.\n    #[allow(dead_code)]\n    pub(super) fn read(&mut self, mut buf: &mut [u8]) -> io::Result<usize> {\n        let buflen = buf.remaining_mut();\n        self.rcvbuf.try_read(&mut buf);\n        Ok(buflen - buf.remaining_mut())\n    }\n\n    /// Unlike the previous states, when there is no more data, it no longer returns\n    /// \"Pending\" but instead returns \"Ready\". However, in reality, nothing has been\n    /// read. 
This kind of result typically indicates the end.\n    pub(super) fn poll_read(&mut self, buf: &mut impl BufMut) {\n        let offset = self.rcvbuf.nread();\n        let length = self.rcvbuf.try_read(buf) as u64;\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset,\n            length,\n            from: StreamDataLocation::Transport,\n            to: StreamDataLocation::Application,\n        });\n    }\n\n    pub(super) fn poll_next(&mut self) -> Option<Bytes> {\n        let offset = self.rcvbuf.nread();\n        let data = self.rcvbuf.try_next()?;\n        let length = data.len() as u64;\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset,\n            length,\n            from: StreamDataLocation::Transport,\n            to: StreamDataLocation::Application,\n        });\n        Some(data)\n    }\n\n    pub(super) fn is_all_read(&self) -> bool {\n        self.rcvbuf.is_empty()\n    }\n}\n\nfn log_reset_event(stream_id: StreamId, old: GranularStreamStates) {\n    qevent::event!(StreamStateUpdated {\n        stream_id: stream_id.id(),\n        stream_type: stream_id.dir(),\n        old,\n        new: GranularStreamStates::ResetReceived,\n        stream_side: StreamSide::Receiving\n    });\n}\n\nimpl DataRcvd {\n    pub(super) fn upgrade(&self) {\n        qevent::event!(StreamStateUpdated {\n            stream_id: self.stream_id.id(),\n            stream_type: self.stream_id.dir(),\n            old: GranularStreamStates::DataReceived,\n            new: GranularStreamStates::DataRead,\n            stream_side: StreamSide::Receiving\n        });\n    }\n}\n\n/// Receiving stream state machine. In fact, here the state variables such as\n/// is_closed/is_reset are replaced by a state machine. 
This not only provides\n/// clearer semantics and aligns with the QUIC RFC specification but also\n/// allows the compiler to help us check that the state transitions are correct.\n#[derive(Debug)]\npub(super) enum Recver<TX> {\n    Recv(Recv<TX>),\n    SizeKnown(SizeKnown<TX>),\n    DataRcvd(DataRcvd),\n    ResetRcvd(ResetStreamFrame),\n    DataRead,\n    ResetRead(ResetStreamError),\n}\n\nimpl<TX> Recver<TX> {\n    pub(super) fn new(stream_id: StreamId, buf_size: u64, frames_tx: TX) -> Self {\n        Self::Recv(Recv::new(stream_id, buf_size, frames_tx))\n    }\n}\n\n/// The internal representations of [`Incoming`] and [`Reader`].\n///\n/// For the application layer, this structure is represented as [`Reader`]. The application can use it\n/// to read the data from the peer on the stream, or ask the peer to stop sending.\n///\n/// For the protocol layer, this structure is represented as [`Incoming`]. The protocol layer uses it to\n/// manage the status of the `Recver`, deliver received data to the application layer, and\n/// send frames to the peer.\n///\n/// [`Incoming`]: super::Incoming\n/// [`Reader`]: super::Reader\n#[derive(Debug, Clone)]\npub struct ArcRecver<TX>(Arc<Mutex<Result<Recver<TX>, Error>>>);\n\nimpl<TX> ArcRecver<TX>\nwhere\n    TX: SendFrame<StopSendingFrame> + SendFrame<MaxStreamDataFrame> + Clone + Send + 'static,\n{\n    #[doc(hidden)]\n    pub(crate) fn new(stream_id: StreamId, buf_size: u64, frames_tx: TX) -> Self {\n        ArcRecver(Arc::new(Mutex::new(Ok(Recver::new(\n            stream_id, buf_size, frames_tx,\n        )))))\n    }\n}\n\nimpl<TX> ArcRecver<TX> {\n    pub(super) fn recver(&'_ self) -> MutexGuard<'_, Result<Recver<TX>, Error>> {\n        self.0.lock().unwrap()\n    }\n}\n"
  },
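`Recv::poll_read` and `poll_next` share the same window-advance rule: once the reader comes within `threshold` (1,000,000 bytes) of the current `max_stream_data`, the limit is raised to `nread + 2 * threshold` (capped at `VARINT_MAX`) and announced to the peer via a MAX_STREAM_DATA frame. Below is a standalone sketch of just that rule; the `FlowCtl` struct is illustrative, while the threshold and cap mirror the constants in the source.

```rust
/// Illustrative sketch (NOT the crate's `Recv`): the receive-window advance
/// rule used by `poll_read`/`poll_next` before sending MAX_STREAM_DATA.
const VARINT_MAX: u64 = (1 << 62) - 1; // QUIC varint upper bound

struct FlowCtl {
    nread: u64,           // bytes the application has consumed
    max_stream_data: u64, // limit currently advertised to the peer
}

impl FlowCtl {
    /// Returns the new limit to advertise, if the window should advance.
    fn maybe_advance(&mut self) -> Option<u64> {
        let threshold = 1_000_000;
        if self.nread + threshold > self.max_stream_data {
            let new_limit = (self.nread + threshold * 2).min(VARINT_MAX);
            if new_limit > self.max_stream_data {
                self.max_stream_data = new_limit;
                // The real code queues a MaxStreamDataFrame here.
                return Some(new_limit);
            }
        }
        None
    }
}

fn main() {
    let mut fc = FlowCtl { nread: 0, max_stream_data: 1_500_000 };
    // 0 + 1M does not exceed 1.5M yet: no update is sent.
    assert_eq!(fc.maybe_advance(), None);
    fc.nread = 600_000;
    // 600k + 1M > 1.5M: advance the limit to 600k + 2M.
    assert_eq!(fc.maybe_advance(), Some(2_600_000));
}
```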
  {
    "path": "qrecovery/src/recv.rs",
    "content": "//! Types for receiving data on a Stream.\nmod incoming;\nmod rcvbuf;\nmod reader;\nmod recver;\n\npub use incoming::Incoming;\npub use rcvbuf::RecvBuf;\npub use reader::{Reader, StopSending};\npub use recver::ArcRecver;\n"
  },
  {
    "path": "qrecovery/src/reliable.rs",
    "content": "//! The reliable transmission for frames.\nuse std::{\n    collections::VecDeque,\n    sync::{Arc, Mutex, MutexGuard},\n};\n\nuse qbase::{\n    frame::{EncodeSize, FrameFeature, io::SendFrame},\n    net::tx::{ArcSendWakers, Signals},\n    packet::{Package, PacketContent},\n};\n\n/// A deque for data space to send reliable frames.\n///\n/// Like its name, it is just a queue. [`DataStreams`] or other components that need to send reliable\n/// frames write frames to this queue by calling [`SendFrame::send_frame`]. The transport layer can\n/// load the frames from the queue into the packet by calling [`try_load_frames_into`].\n///\n/// # Example\n/// ```rust, no_run\n/// use qbase::frame::{HandshakeDoneFrame, ReliableFrame, io::SendFrame};\n/// use qrecovery::reliable::ArcReliableFrameDeque;\n/// # let data_wakers = Default::default();\n/// let mut reliable_frame_deque = ArcReliableFrameDeque::<ReliableFrame>::with_capacity_and_wakers(10, data_wakers);\n/// reliable_frame_deque.send_frame([HandshakeDoneFrame]);\n/// ```\n///\n/// [`DataStreams`]: crate::streams::DataStreams\n/// [`try_load_frames_into`]: ArcReliableFrameDeque::try_load_frames_into\n#[derive(Debug, Default)]\npub struct ArcReliableFrameDeque<F> {\n    frames: Arc<Mutex<VecDeque<F>>>,\n    tx_wakers: ArcSendWakers,\n}\n\nimpl<F> Clone for ArcReliableFrameDeque<F> {\n    fn clone(&self) -> Self {\n        Self {\n            frames: self.frames.clone(),\n            tx_wakers: self.tx_wakers.clone(),\n        }\n    }\n}\n\nimpl<F> ArcReliableFrameDeque<F> {\n    /// Create a new empty deque with at least the specified capacity.\n    pub fn with_capacity_and_wakers(capacity: usize, tx_wakers: ArcSendWakers) -> Self {\n        Self {\n            frames: Arc::new(Mutex::new(VecDeque::with_capacity(capacity))),\n            tx_wakers,\n        }\n    }\n\n    fn frames_guard(&self) -> MutexGuard<'_, VecDeque<F>> {\n        self.frames.lock().unwrap()\n    }\n\n    /// Try to load the frame 
in the deque and encode it into the `packet`.\n    pub fn try_load_frames_into<P: ?Sized>(&self, packet: &mut P) -> Result<(), Signals>\n    where\n        for<'a> &'a F: Package<P>,\n    {\n        let mut deque = self.frames_guard();\n        if deque.is_empty() {\n            return Err(Signals::TRANSPORT);\n        }\n        while let Some(mut frame) = deque.front() {\n            frame.dump(packet)?;\n            deque.pop_front();\n        }\n        Ok(())\n    }\n}\n\nimpl<F, P: ?Sized> Package<P> for ArcReliableFrameDeque<F>\nwhere\n    for<'a> &'a F: Package<P>,\n{\n    fn dump(&mut self, packet: &mut P) -> Result<PacketContent, Signals> {\n        self.try_load_frames_into(packet)?;\n        Ok(PacketContent::EffectivePayload)\n    }\n}\n\nimpl<T, F> SendFrame<T> for ArcReliableFrameDeque<F>\nwhere\n    F: EncodeSize + FrameFeature,\n    T: Into<F>,\n{\n    fn send_frame<I: IntoIterator<Item = T>>(&self, iter: I) {\n        self.frames_guard().extend(iter.into_iter().map(Into::into));\n        self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/send/outgoing.rs",
    "content": "use std::ops::DerefMut;\n\nuse bytes::{BufMut, Bytes};\nuse qbase::{\n    error::Error as QuicError,\n    frame::{ResetStreamError, StreamFrame},\n    net::tx::Signals,\n    packet::Package,\n    sid::StreamId,\n    util::ContinuousData,\n    varint::VarInt,\n};\nuse qevent::quic::transport::{GranularStreamStates, StreamSide, StreamStateUpdated};\n\nuse super::sender::{ArcSender, Sender, SendingSender, StreamData};\n\n/// A struct for the protocol layer to manage the sending part of a stream.\n#[derive(Debug, Clone)]\npub struct Outgoing<TX>(ArcSender<TX>);\n\nimpl<TX: Clone> Outgoing<TX> {\n    /// Try to load data that the application wants to send into the packet.\n    ///\n    /// See [`DataStreams::try_load_data_into`] for more about this method.\n    ///\n    /// Returns the size of the data loaded, and whether the data is fresh.\n    ///\n    /// [`DataStreams::try_load_data_into`]: crate::streams::raw::DataStreams::try_load_data_into\n    // Consumes the tokens internally, returning the number of fresh bytes written to the buffer.\n    // Returning an error indicates that the stream wrote no data to the buffer.\n    pub fn try_load_data_into<P>(\n        &self,\n        packet: &mut P,\n        sid: StreamId,\n        flow_limit: usize,\n        tokens: usize,\n    ) -> Result<(usize, bool), Signals>\n    where\n        P: BufMut + ?Sized,\n        for<'a> (StreamFrame, &'a [Bytes]): Package<P>,\n    {\n        let origin_len = packet.remaining_mut();\n        let mut write = |(range, is_fresh, data, is_eos): StreamData| {\n            let mut frame = StreamFrame::new(sid, range.start, (range.end - range.start) as usize);\n\n            frame.set_eos_flag(is_eos);\n            let strategy = frame.encoding_strategy(origin_len);\n            frame.set_len_bit(strategy.len_bit());\n            packet.put_bytes(0, strategy.pre_padding());\n            (frame, data.as_slice()).dump(packet).unwrap();\n\n            (ContinuousData::len(data.as_slice()), 
is_fresh)\n        };\n\n        let predicate = |offset| {\n            StreamFrame::estimate_max_capacity(origin_len, sid, offset)\n                .map(|capacity| tokens.min(capacity))\n        };\n        let mut sender = self.0.sender();\n        let sending_state = sender.as_mut().or(Err(Signals::empty()))?; // Err: connection closed\n\n        match sending_state {\n            Sender::Ready(s) => {\n                let mut s: SendingSender<TX> = s.upgrade();\n                let (result, finished) = s\n                    .pick_up(predicate, flow_limit)\n                    .map(|payload @ (.., with_eos)| (Ok(write(payload)), with_eos))\n                    .map_err(|s| (Err(s), false))\n                    .unwrap_or_else(|x| x);\n                if finished {\n                    *sending_state = Sender::DataSent(s.upgrade());\n                } else {\n                    *sending_state = Sender::Sending(s);\n                }\n                result\n            }\n            Sender::Sending(s) => {\n                let (result, finished) = s\n                    .pick_up(predicate, flow_limit)\n                    .map(|payload @ (.., with_eos)| (Ok(write(payload)), with_eos))\n                    .map_err(|s| (Err(s), false))\n                    .unwrap_or_else(|x| x);\n                if finished {\n                    *sending_state = Sender::DataSent(s.upgrade());\n                }\n                result\n            }\n            Sender::DataSent(s) => s.pick_up(predicate, flow_limit).map(write),\n            _ => Err(Signals::TRANSPORT),\n        }\n    }\n}\n\nimpl<TX> Outgoing<TX> {\n    /// Create a new instance of [`Outgoing`].\n    pub fn new(sender: ArcSender<TX>) -> Self {\n        Self(sender)\n    }\n\n    /// Update the sending window to `max_stream_data`.\n    ///\n    /// Called when the [`MAX_STREAM_DATA frame`] belonging to the stream is received.\n    ///\n    /// [`MAX_STREAM_DATA frame`]: 
https://www.rfc-editor.org/rfc/rfc9000.html#name-max_stream_data-frames\n    pub fn update_window(&self, max_stream_data: u64) {\n        self.0.update_window(max_stream_data);\n    }\n\n    /// Called when the data sent to the peer is acknowledged.\n    ///\n    /// * `frame`: the stream frame that has been acknowledged.\n    ///\n    /// Returns `true` if the stream is completely acknowledged, i.e. all data has been sent and received.\n    pub fn on_data_acked(&self, frame: &StreamFrame) -> bool {\n        let mut sender = self.0.sender();\n        let inner = sender.deref_mut();\n        if let Ok(sending_state) = inner {\n            match sending_state {\n                Sender::Ready(_) => {\n                    unreachable!(\"no data can be acked before any data is sent\");\n                }\n                Sender::Sending(s) => {\n                    s.on_data_acked(frame);\n                }\n                Sender::DataSent(s) => {\n                    s.on_data_acked(frame);\n                    if s.is_all_rcvd() {\n                        qevent::event!(StreamStateUpdated {\n                            stream_id: frame.stream_id(),\n                            stream_type: frame.stream_id().dir(),\n                            old: GranularStreamStates::DataSent,\n                            new: GranularStreamStates::DataReceived,\n                            stream_side: StreamSide::Sending\n                        });\n                        *sending_state = Sender::DataRcvd;\n                        return true;\n                    }\n                }\n                // ignore recv\n                _ => {}\n            }\n        };\n        false\n    }\n\n    /// Called when the data sent to the peer may be lost.\n    ///\n    /// * `frame`: the stream frame that may be lost.\n    pub fn may_loss_data(&self, frame: &StreamFrame) {\n        let mut sender = self.0.sender();\n        let inner = sender.deref_mut();\n        if let Ok(sending_state) = inner {\n            match sending_state {\n                Sender::Ready(_) => {\n                    unreachable!(\"no data can be lost before any data is sent\");\n                }\n                Sender::Sending(s) => {\n                    s.may_loss_data(frame);\n                }\n                Sender::DataSent(s) => {\n                    s.may_loss_data(frame);\n                }\n                // ignore loss\n                _ => (),\n            }\n        };\n    }\n\n    pub fn revise_max_stream_data(&self, zero_rtt_rejected: bool, max_stream_data: u64) {\n        let mut sender = self.0.sender();\n        let inner = sender.deref_mut();\n        if let Ok(sending_state) = inner {\n            match sending_state {\n                Sender::Ready(s) => s.revise_max_stream_data(zero_rtt_rejected, max_stream_data),\n                Sender::Sending(s) => s.revise_max_stream_data(zero_rtt_rejected, max_stream_data),\n                Sender::DataSent(s) => s.revise_max_stream_data(zero_rtt_rejected, max_stream_data),\n                _ => (),\n            }\n        };\n    }\n\n    /// Called when the [`STOP_SENDING frame`] sent by the peer is received.\n    ///\n    /// If the stream has not been closed, the stream will be reset and a [`RESET_STREAM frame`]\n    /// carrying the `final_size` will be sent to the peer.\n    /// In this case, the method will return the `final_size`.\n    ///\n    /// If the stream has already been closed, `None` will be returned, and the method will do nothing.\n    ///\n    /// [`STOP_SENDING frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-stop_sending-frames\n    /// [`RESET_STREAM frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frames\n    pub fn be_stopped(&self, error_code: u64) -> Option<u64> {\n        let mut sender = self.0.sender();\n        let inner = sender.deref_mut();\n        match inner {\n            Ok(sending_state) => {\n                // 
THINK: sending_state.stream_id() -> StreamId, sending_state.state() -> GranularStreamStates\n                let (stream_id, old_state, final_size) = match sending_state {\n                    Sender::Ready(s) => {\n                        (s.stream_id(), GranularStreamStates::Ready, s.be_stopped())\n                    }\n                    Sender::Sending(s) => {\n                        (s.stream_id(), GranularStreamStates::Send, s.be_stopped())\n                    }\n                    Sender::DataSent(s) => (\n                        s.stream_id(),\n                        GranularStreamStates::DataSent,\n                        s.be_stopped(),\n                    ),\n                    _ => return None,\n                };\n                let reset = ResetStreamError::new(\n                    // TODO: many places in the codebase perform VarInt -> u64 -> VarInt conversion,\n                    //  which is redundant and may cause bugs; consider refactoring the call chain.\n                    VarInt::from_u64(error_code).expect(\"app error code must not exceed 2^62\"),\n                    VarInt::from_u64(final_size).expect(\"final size must not exceed 2^62\"),\n                );\n\n                qevent::event!(StreamStateUpdated {\n                    stream_id: stream_id.id(),\n                    stream_type: stream_id.dir(),\n                    old: old_state,\n                    new: GranularStreamStates::ResetSent,\n                    stream_side: StreamSide::Sending\n                });\n                *sending_state = Sender::ResetSent(reset);\n                Some(final_size)\n            }\n            Err(_) => None,\n        }\n    }\n\n    /// Called when the [`RESET_STREAM frame`] previously sent to the peer is acknowledged.\n    ///\n    /// [`RESET_STREAM frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frames\n    // TODO: stream id not from stream state, consider refactor. 
(many other places in qrecovery)\n    pub fn on_reset_acked(&self, sid: StreamId) {\n        let mut sender = self.0.sender();\n        let inner = sender.deref_mut();\n        if let Ok(sending_state) = inner {\n            match sending_state {\n                Sender::ResetSent(r) => {\n                    qevent::event!(StreamStateUpdated {\n                        stream_id: sid.id(),\n                        stream_type: sid.dir(),\n                        old: GranularStreamStates::ResetSent,\n                        new: GranularStreamStates::ResetReceived,\n                        stream_side: StreamSide::Sending\n                    });\n                    *sending_state = Sender::ResetRcvd(*r);\n                }\n                Sender::ResetRcvd(..) => {}\n                _ => unreachable!(\n                    \"If no RESET_STREAM has been sent, how can there be a received acknowledgment?\"\n                ),\n            }\n        }\n    }\n\n    /// When a connection-level error occurs, all data streams must be notified.\n    /// Their reading and writing should be terminated, accompanied by the connection error.\n    pub fn on_conn_error(&self, err: &QuicError) {\n        let mut sender = self.0.sender();\n        let inner = sender.deref_mut();\n        match inner {\n            Ok(sending_state) => match sending_state {\n                Sender::Ready(s) => s.wake_all(),\n                Sender::Sending(s) => s.wake_all(),\n                Sender::DataSent(s) => s.wake_all(),\n                _ => return,\n            },\n            Err(_) => return,\n        };\n        *inner = Err(err.clone());\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/send/sender.rs",
    "content": "use std::{\n    ops::Range,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker},\n};\n\nuse bytes::Bytes;\nuse qbase::{\n    error::Error,\n    frame::{ResetStreamError, ResetStreamFrame, StreamFrame, io::SendFrame},\n    net::tx::{ArcSendWakers, Signals},\n    sid::StreamId,\n    varint::{VARINT_MAX, VarInt},\n};\nuse qevent::{\n    RawInfo,\n    quic::transport::{\n        DataMovedAdditionalInfo, GranularStreamStates, StreamDataLocation, StreamDataMoved,\n        StreamSide, StreamStateUpdated,\n    },\n};\n\nuse super::sndbuf::SendBuf;\nuse crate::streams::error::StreamError;\n\nfn log_reset_event(sid: StreamId, from_state: GranularStreamStates) {\n    qevent::event!(StreamStateUpdated {\n        stream_id: sid.id(),\n        stream_type: sid.dir(),\n        old: from_state,\n        new: GranularStreamStates::ResetSent,\n        stream_side: StreamSide::Sending\n    });\n}\n\n/// The \"Ready\" state represents a newly created stream that is able to accept data from the application.\n/// Stream data might be buffered in this state in preparation for sending.\n/// An implementation might choose to defer allocating a stream ID to a stream until it sends the first\n/// STREAM frame and enters this state, which can allow for better stream prioritization.\n#[derive(Debug)]\npub struct ReadySender<TX> {\n    stream_id: StreamId,\n    sndbuf: SendBuf,\n    flush_waker: Option<Waker>,\n    shutdown_waker: Option<Waker>,\n    broker: TX,\n    tx_wakers: ArcSendWakers,\n    writable_waker: Option<Waker>,\n    metrics: Option<qbase::metric::ArcConnectionMetrics>,\n}\n\nimpl<TX> ReadySender<TX> {\n    pub(super) fn new(\n        stream_id: StreamId,\n        buf_size: u64,\n        broker: TX,\n        tx_wakers: ArcSendWakers,\n        metrics: Option<qbase::metric::ArcConnectionMetrics>,\n    ) -> ReadySender<TX> {\n        ReadySender {\n            stream_id,\n            sndbuf: SendBuf::with_capacity(buf_size),\n            
flush_waker: None,\n            shutdown_waker: None,\n            broker,\n            tx_wakers,\n            writable_waker: None,\n            metrics,\n        }\n    }\n\n    pub(super) fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    // /// Non-blocking write: returns a WouldBlock error if no send buffer space remains.\n    // /// There is no notification of when the buffer becomes writable again, so the caller\n    // /// can only keep retrying until the write succeeds. For demonstration purposes only.\n    // #[allow(dead_code)]\n    // fn write(&mut self, buf: &[u8]) -> io::Result<usize> {\n    //     if self.sndbuf.has_remaining_mut() {\n    //         self.tx_wakers.wake_all_by(Signals::WRITTEN);\n    //         self.sndbuf.write(Bytes::copy_from_slice(buf));\n    //         Ok(buf.len())\n    //     } else {\n    //         Err(io::ErrorKind::WouldBlock.into())\n    //     }\n    // }\n\n    pub(crate) fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamError>> {\n        if self.shutdown_waker.is_some() {\n            return Poll::Ready(Err(StreamError::EosSent));\n        }\n\n        if !self.sndbuf.has_remaining_mut() {\n            self.writable_waker = Some(cx.waker().clone());\n            return Poll::Pending;\n        }\n\n        Poll::Ready(Ok(()))\n    }\n\n    pub(crate) fn write(&mut self, data: Bytes) -> Result<(), StreamError> {\n        if self.shutdown_waker.is_some() {\n            return Err(StreamError::EosSent);\n        }\n\n        let data_len = data.len() as u64;\n\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset: self.sndbuf.written(),\n            length: data_len,\n            from: StreamDataLocation::Application,\n            to: StreamDataLocation::Transport,\n            raw: data.clone()\n        });\n\n        // Update metrics when application writes data\n        if let Some(metrics) = &self.metrics {\n            metrics.new_pending(data_len);\n        }\n\n        self.tx_wakers.wake_all_by(Signals::WRITTEN);\n        self.sndbuf.write(data);\n        Ok(())\n    }\n\n    pub(super) fn 
update_window(&mut self, max_stream_data: u64) {\n        if max_stream_data > self.sndbuf.max_data() {\n            if self.sndbuf.written() > self.sndbuf.max_data() {\n                self.tx_wakers.wake_all_by(Signals::WRITTEN);\n            }\n            self.sndbuf.extend(max_stream_data);\n            if self.sndbuf.has_remaining_mut()\n                && let Some(waker) = self.writable_waker.take()\n            {\n                waker.wake();\n            }\n        }\n    }\n\n    pub(super) fn revise_max_stream_data(&mut self, zero_rtt_rejected: bool, max_stream_data: u64) {\n        if zero_rtt_rejected {\n            self.sndbuf.forget_sent_state();\n        }\n        self.update_window(max_stream_data);\n    }\n\n    pub(super) fn poll_flush(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        if self.sndbuf.is_all_rcvd() {\n            Poll::Ready(())\n        } else {\n            self.flush_waker = Some(cx.waker().clone());\n            Poll::Pending\n        }\n    }\n\n    pub(super) fn poll_shutdown(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        // Even without any flow-control window, an empty StreamFrame carrying the fin bit can still be sent\n        self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n        self.shutdown_waker = Some(cx.waker().clone());\n        Poll::Pending\n    }\n\n    pub(super) fn wake_all(&mut self) {\n        if let Some(waker) = self.writable_waker.take() {\n            waker.wake();\n        }\n        if let Some(waker) = self.flush_waker.take() {\n            waker.wake();\n        }\n        if let Some(waker) = self.shutdown_waker.take() {\n            waker.wake();\n        }\n    }\n\n    pub(super) fn be_stopped(&mut self) -> u64 {\n        self.wake_all();\n        // ReadyState: no data is sent\n        debug_assert_eq!(self.sndbuf.sent(), 0);\n        self.sndbuf.sent()\n    }\n}\n\n/// State upgrade: ReadySender => SendingSender\nimpl<TX: Clone> ReadySender<TX> {\n    pub(super) fn upgrade(&mut self) -> SendingSender<TX> {\n        
qevent::event!(StreamStateUpdated {\n            stream_id: self.stream_id,\n            stream_type: self.stream_id.dir(),\n            old: GranularStreamStates::Ready,\n            new: GranularStreamStates::Send,\n            stream_side: StreamSide::Sending\n        });\n        SendingSender {\n            stream_id: self.stream_id,\n            sndbuf: std::mem::take(&mut self.sndbuf),\n            flush_waker: self.flush_waker.take(),\n            shutdown_waker: self.shutdown_waker.take(),\n            broker: self.broker.clone(),\n            tx_wakers: self.tx_wakers.clone(),\n            writable_waker: self.writable_waker.take(),\n            metrics: self.metrics.clone(),\n        }\n    }\n}\n\nimpl<TX> ReadySender<TX>\nwhere\n    TX: SendFrame<ResetStreamFrame>,\n{\n    /// Used by the application layer to cancel the sending stream\n    pub(super) fn cancel(&mut self, err_code: u64) -> ResetStreamError {\n        let final_size = self.sndbuf.sent();\n        let reset_stream_err = ResetStreamError::new(\n            VarInt::from_u64(err_code).expect(\"app error code must not exceed 2^62\"),\n            VarInt::from_u64(final_size).expect(\"final size must not exceed 2^62\"),\n        );\n        tracing::debug!(\n            target: \"quic\",\n            \"{} is canceled by app layer, with error code {err_code}\",\n            self.stream_id\n        );\n        self.broker\n            .send_frame([reset_stream_err.combine(self.stream_id)]);\n        log_reset_event(self.stream_id, GranularStreamStates::Ready);\n        reset_stream_err\n    }\n}\n\n#[derive(Debug)]\npub struct SendingSender<TX> {\n    stream_id: StreamId,\n    sndbuf: SendBuf,\n    flush_waker: Option<Waker>,\n    shutdown_waker: Option<Waker>,\n    broker: TX,\n    tx_wakers: ArcSendWakers,\n    writable_waker: Option<Waker>,\n    metrics: Option<qbase::metric::ArcConnectionMetrics>,\n}\n\npub type StreamData<'s> = (Range<u64>, bool, Vec<Bytes>, bool);\n\nimpl<TX> SendingSender<TX> {\n    pub(super) fn stream_id(&self) -> 
StreamId {\n        self.stream_id\n    }\n\n    pub(super) fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamError>> {\n        if self.shutdown_waker.is_some() {\n            return Poll::Ready(Err(StreamError::EosSent));\n        }\n\n        if !self.sndbuf.has_remaining_mut() {\n            self.writable_waker = Some(cx.waker().clone());\n            return Poll::Pending;\n        }\n\n        Poll::Ready(Ok(()))\n    }\n\n    pub(super) fn write(&mut self, data: Bytes) -> Result<(), StreamError> {\n        if self.shutdown_waker.is_some() {\n            return Err(StreamError::EosSent);\n        }\n\n        let data_len = data.len() as u64;\n\n        qevent::event!(StreamDataMoved {\n            stream_id: self.stream_id,\n            offset: self.sndbuf.written(),\n            length: data_len,\n            from: StreamDataLocation::Application,\n            to: StreamDataLocation::Transport,\n            raw: data.clone()\n        });\n\n        // Update metrics when application writes data\n        if let Some(metrics) = &self.metrics {\n            metrics.new_pending(data_len);\n        }\n\n        self.tx_wakers.wake_all_by(Signals::WRITTEN);\n        self.sndbuf.write(data);\n        Ok(())\n    }\n\n    /// Used by the transport layer\n    pub(super) fn update_window(&mut self, max_stream_data: u64) {\n        if max_stream_data > self.sndbuf.max_data() {\n            if self.sndbuf.written() > self.sndbuf.max_data() {\n                self.tx_wakers.wake_all_by(Signals::WRITTEN);\n            }\n            self.sndbuf.extend(max_stream_data);\n            if self.sndbuf.has_remaining_mut()\n                && let Some(waker) = self.writable_waker.take()\n            {\n                waker.wake();\n            }\n        }\n    }\n\n    pub(super) fn pick_up<P>(\n        &'_ mut self,\n        predicate: P,\n        flow_limit: usize,\n    ) -> Result<StreamData<'_>, Signals>\n    where\n        P: Fn(u64) -> Option<usize>,\n    {\n        let 
total_size = self.total_size();\n        let sent = self.sndbuf.sent();\n        self.sndbuf\n            .pick_up(&predicate, flow_limit)\n            .map(|(range, is_fresh, data)| {\n                (range.clone(), is_fresh, data, Some(range.end) == total_size)\n            })\n            .or_else(|signals| {\n                if total_size == Some(sent) {\n                    predicate(sent).ok_or(signals | Signals::CONGESTION)?;\n                    Ok((sent..sent, false, Vec::new(), true))\n                } else {\n                    Err(signals)\n                }\n            })\n            .map(|(range, is_fresh, data, is_eos)| {\n                qevent::event!(StreamDataMoved {\n                    stream_id: self.stream_id,\n                    offset: range.start,\n                    length: range.end - range.start,\n                    from: StreamDataLocation::Transport,\n                    to: StreamDataLocation::Network,\n                    ?additional_info: is_eos.then_some(DataMovedAdditionalInfo::FinSet),\n                    raw: RawInfo { data : data.as_slice() }\n                });\n                (range, is_fresh, data, is_eos)\n            })\n    }\n\n    pub(super) fn on_data_acked(&mut self, frame: &StreamFrame) {\n        self.sndbuf.on_data_acked(&frame.range());\n        if self.sndbuf.is_all_rcvd()\n            && let Some(waker) = self.flush_waker.take()\n        {\n            waker.wake();\n        }\n    }\n\n    pub(super) fn may_loss_data(&mut self, frame: &StreamFrame) {\n        self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n        self.sndbuf.may_loss_data(&frame.range())\n    }\n\n    pub(super) fn revise_max_stream_data(&mut self, zero_rtt_rejected: bool, max_stream_data: u64) {\n        if zero_rtt_rejected {\n            self.sndbuf.forget_sent_state();\n        }\n        self.update_window(max_stream_data);\n    }\n\n    pub(super) fn poll_flush(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        if 
self.sndbuf.is_all_rcvd() {\n            Poll::Ready(())\n        } else {\n            self.flush_waker = Some(cx.waker().clone());\n            Poll::Pending\n        }\n    }\n\n    pub(super) fn poll_shutdown(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n        self.shutdown_waker = Some(cx.waker().clone());\n        Poll::Pending\n    }\n\n    pub(super) fn total_size(&self) -> Option<u64> {\n        if self.shutdown_waker.is_some() {\n            Some(self.sndbuf.written())\n        } else {\n            None\n        }\n    }\n\n    pub(super) fn wake_all(&mut self) {\n        if let Some(waker) = self.writable_waker.take() {\n            waker.wake();\n        }\n        if let Some(waker) = self.flush_waker.take() {\n            waker.wake();\n        }\n        if let Some(waker) = self.shutdown_waker.take() {\n            waker.wake();\n        }\n    }\n\n    /// Used by the transport layer\n    pub(super) fn be_stopped(&mut self) -> u64 {\n        self.wake_all();\n        // Actually, the remaining data has not been acked and never will be\n        self.sndbuf.sent()\n    }\n}\n\nimpl<TX: Clone> SendingSender<TX> {\n    pub(super) fn upgrade(&mut self) -> DataSentSender<TX> {\n        qevent::event!(StreamStateUpdated {\n            stream_id: self.stream_id,\n            stream_type: self.stream_id.dir(),\n            old: GranularStreamStates::Send,\n            new: GranularStreamStates::DataSent,\n            stream_side: StreamSide::Sending\n        });\n        DataSentSender {\n            stream_id: self.stream_id,\n            sndbuf: std::mem::take(&mut self.sndbuf),\n            flush_waker: self.flush_waker.take(),\n            shutdown_waker: self.shutdown_waker.take(),\n            broker: self.broker.clone(),\n            tx_wakers: self.tx_wakers.clone(),\n            fin_state: FinState::Sent,\n        }\n    }\n}\n\nimpl<TX> SendingSender<TX>\nwhere\n    TX: SendFrame<ResetStreamFrame>,\n{\n    
pub(super) fn cancel(&mut self, err_code: u64) -> ResetStreamError {\n        let final_size = self.sndbuf.sent();\n        let reset_stream_err = ResetStreamError::new(\n            VarInt::from_u64(err_code).expect(\"app error code must not exceed 2^62\"),\n            VarInt::from_u64(final_size).expect(\"final size must not exceed 2^62\"),\n        );\n        tracing::debug!(\n            target: \"quic\",\n            \"{} is canceled by app layer, with error code {err_code}\",\n            self.stream_id\n        );\n        self.broker\n            .send_frame([reset_stream_err.combine(self.stream_id)]);\n        log_reset_event(self.stream_id, GranularStreamStates::Send);\n        reset_stream_err\n    }\n}\n\n#[derive(Debug, PartialEq)]\nenum FinState {\n    Sent,\n    Lost,\n    Rcvd,\n}\n\n#[derive(Debug)]\npub struct DataSentSender<TX> {\n    stream_id: StreamId,\n    sndbuf: SendBuf,\n    flush_waker: Option<Waker>,\n    shutdown_waker: Option<Waker>,\n    broker: TX,\n    // retran/fin\n    tx_wakers: ArcSendWakers,\n    fin_state: FinState,\n}\n\nimpl<TX> DataSentSender<TX> {\n    pub(super) fn stream_id(&self) -> StreamId {\n        self.stream_id\n    }\n\n    pub(super) fn pick_up<P>(\n        &'_ mut self,\n        predicate: P,\n        flow_limit: usize,\n    ) -> Result<StreamData<'_>, Signals>\n    where\n        P: Fn(u64) -> Option<usize>,\n    {\n        let total_size = self.sndbuf.written();\n        self.sndbuf\n            .pick_up(&predicate, flow_limit)\n            .map(|(range, is_fresh, data)| (range.clone(), is_fresh, data, range.end == total_size))\n            .or_else(|signals| {\n                if self.fin_state == FinState::Lost {\n                    self.fin_state = FinState::Sent;\n                    Ok((total_size..total_size, false, vec![], true))\n                } else {\n                    Err(signals)\n                }\n            })\n            .map(|(range, is_fresh, data, is_eos)| {\n                
qevent::event!(StreamDataMoved {\n                    stream_id: self.stream_id,\n                    offset: range.start,\n                    length: range.end - range.start,\n                    from: StreamDataLocation::Transport,\n                    to: StreamDataLocation::Network,\n                    ?additional_info: is_eos.then_some(DataMovedAdditionalInfo::FinSet),\n                    raw: RawInfo { data : data.as_slice() }\n                },);\n                (range, is_fresh, data, is_eos)\n            })\n    }\n\n    pub(super) fn on_data_acked(&mut self, frame: &StreamFrame) {\n        self.sndbuf.on_data_acked(&frame.range());\n        if frame.is_fin() {\n            self.fin_state = FinState::Rcvd;\n        }\n        if self.is_all_rcvd() {\n            if let Some(waker) = self.flush_waker.take() {\n                waker.wake();\n            }\n            if let Some(waker) = self.shutdown_waker.take() {\n                waker.wake();\n            }\n        }\n    }\n\n    pub(super) fn is_all_rcvd(&self) -> bool {\n        self.sndbuf.is_all_rcvd() && self.fin_state == FinState::Rcvd\n    }\n\n    pub(super) fn may_loss_data(&mut self, frame: &StreamFrame) {\n        self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n        if frame.is_fin() && self.fin_state != FinState::Rcvd {\n            self.fin_state = FinState::Lost;\n        }\n        self.sndbuf.may_loss_data(&frame.range())\n    }\n\n    pub(super) fn revise_max_stream_data(&mut self, zero_rtt_rejected: bool, max_stream_data: u64) {\n        if zero_rtt_rejected {\n            self.sndbuf.forget_sent_state();\n        }\n        self.sndbuf.extend(max_stream_data);\n    }\n\n    pub(super) fn poll_flush(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        debug_assert!(!self.is_all_rcvd());\n        self.flush_waker = Some(cx.waker().clone());\n        Poll::Pending\n    }\n\n    pub(super) fn poll_shutdown(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        
debug_assert!(!self.is_all_rcvd());\n        self.tx_wakers.wake_all_by(Signals::TRANSPORT);\n        self.shutdown_waker = Some(cx.waker().clone());\n        Poll::Pending\n    }\n\n    pub(super) fn wake_all(&mut self) {\n        if let Some(waker) = self.flush_waker.take() {\n            waker.wake();\n        }\n        if let Some(waker) = self.shutdown_waker.take() {\n            waker.wake();\n        }\n    }\n\n    pub(super) fn be_stopped(&mut self) -> u64 {\n        self.wake_all();\n        // Actually, the remaining data has not been acked and never will be\n        self.sndbuf.written()\n    }\n}\n\nimpl<TX> DataSentSender<TX>\nwhere\n    TX: SendFrame<ResetStreamFrame>,\n{\n    pub(super) fn cancel(&mut self, err_code: u64) -> ResetStreamError {\n        let final_size = self.sndbuf.sent();\n        let reset_stream_err = ResetStreamError::new(\n            VarInt::from_u64(err_code).expect(\"app error code must not exceed 2^62\"),\n            VarInt::from_u64(final_size).expect(\"final size must not exceed 2^62\"),\n        );\n        tracing::debug!(\n            target: \"quic\",\n            \"{} is canceled by app layer, with error code {err_code}\",\n            self.stream_id\n        );\n        self.broker\n            .send_frame([reset_stream_err.combine(self.stream_id)]);\n        log_reset_event(self.stream_id, GranularStreamStates::DataSent);\n        reset_stream_err\n    }\n}\n\n#[derive(Debug)]\npub(super) enum Sender<TX> {\n    Ready(ReadySender<TX>),\n    Sending(SendingSender<TX>),\n    DataSent(DataSentSender<TX>),\n    DataRcvd,\n    ResetSent(ResetStreamError),\n    ResetRcvd(ResetStreamError),\n}\n\nimpl<TX> Sender<TX> {\n    pub fn new(\n        stream_id: StreamId,\n        buf_size: u64,\n        broker: TX,\n        tx_wakers: ArcSendWakers,\n        metrics: Option<qbase::metric::ArcConnectionMetrics>,\n    ) -> Self {\n        Sender::Ready(ReadySender::new(\n            stream_id, buf_size, broker, tx_wakers, 
metrics,\n        ))\n    }\n}\n\n/// The internal state representations of [`Outgoing`] and [`Writer`].\n///\n/// For the application layer, this struct is represented as [`Writer`]. The application can use it to\n/// write data to the stream, or reset the stream.\n///\n/// For the protocol layer, this struct is represented as [`Outgoing`]. The protocol layer uses it to\n/// manage the state of the `Sender`, and to send data (stream frames), reset frames and other frames to the peer.\n///\n/// [`Outgoing`]: super::Outgoing\n/// [`Writer`]: super::Writer\n#[derive(Debug, Clone)]\npub struct ArcSender<TX>(Arc<Mutex<Result<Sender<TX>, Error>>>);\n\nimpl<TX> ArcSender<TX> {\n    #[doc(hidden)]\n    pub(crate) fn new(\n        stream_id: StreamId,\n        buf_size: u64,\n        broker: TX,\n        tx_wakers: ArcSendWakers,\n        metrics: Option<qbase::metric::ArcConnectionMetrics>,\n    ) -> Self {\n        ArcSender(Arc::new(Mutex::new(Ok(Sender::new(\n            stream_id, buf_size, broker, tx_wakers, metrics,\n        )))))\n    }\n}\n\nimpl<TX> ArcSender<TX> {\n    // Update the send window for an opened stream.\n    pub(crate) fn update_window(&self, max_stream_data: u64) {\n        assert!(max_stream_data <= VARINT_MAX);\n        match self.sender().as_mut() {\n            Ok(Sender::Ready(s)) => s.update_window(max_stream_data),\n            Ok(Sender::Sending(s)) => s.update_window(max_stream_data),\n            _ => {}\n        }\n    }\n\n    pub(super) fn sender(&self) -> MutexGuard<'_, Result<Sender<TX>, Error>> {\n        self.0.lock().unwrap()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use qbase::{role::Role, sid::Dir};\n\n    use super::*;\n\n    #[derive(Debug, Default, Clone)]\n    struct MockBroker(Arc<Mutex<Vec<ResetStreamFrame>>>);\n\n    impl SendFrame<ResetStreamFrame> for MockBroker {\n        fn send_frame<I: IntoIterator<Item = ResetStreamFrame>>(&self, iter: I) {\n            self.0.lock().unwrap().extend(iter);\n        }\n    }\n\n    fn 
create_test_sender() -> ArcSender<MockBroker> {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 1000;\n        let broker = MockBroker::default();\n        ArcSender::new(stream_id, buf_size, broker, Default::default(), None)\n    }\n\n    #[test]\n    fn test_ready_sender_new() {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 1000;\n        let broker = MockBroker::default();\n        let sender = ReadySender::new(stream_id, buf_size, broker, Default::default(), None);\n\n        assert_eq!(sender.stream_id, stream_id);\n        assert_eq!(sender.sndbuf.max_data(), buf_size);\n        assert!(sender.flush_waker.is_none());\n        assert!(sender.shutdown_waker.is_none());\n        assert!(sender.writable_waker.is_none());\n    }\n\n    #[test]\n    fn test_ready_sender_write() {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 10;\n        let broker = MockBroker::default();\n        let mut sender = ReadySender::new(stream_id, buf_size, broker, Default::default(), None);\n\n        let data = Bytes::from_static(b\"hello\");\n        let result = sender.write(data);\n        assert!(result.is_ok());\n\n        // Test write when buffer is full\n        let large_data = Bytes::from_static(include_bytes!(\"./sender.rs\"));\n        let result = sender.write(large_data);\n        assert!(result.is_ok());\n    }\n\n    #[tokio::test]\n    async fn test_ready_sender_poll_write() {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 10;\n        let broker = MockBroker::default();\n        let mut sender = ReadySender::new(stream_id, buf_size, broker, Default::default(), None);\n\n        let data = Bytes::from_static(b\"test\");\n\n        assert!(matches!(sender.write(data.clone()), Ok(())));\n\n        // Test poll_write when buffer is full\n        sender.sndbuf.forget_sent_state();\n        let mut cx = 
Context::from_waker(futures::task::noop_waker_ref());\n        let result = sender.poll_ready(&mut cx);\n        assert!(result.is_pending());\n    }\n\n    #[test]\n    fn test_sender_state_transitions() {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 1000;\n        let broker = MockBroker::default();\n        let mut ready = ReadySender::new(stream_id, buf_size, broker, Default::default(), None);\n\n        // Test transition to SendingSender\n        let mut sending = ready.upgrade();\n        assert_eq!(sending.stream_id, stream_id);\n        assert_eq!(sending.sndbuf.max_data(), buf_size);\n\n        // Test transition to DataSentSender\n        let data_sent = sending.upgrade();\n        assert_eq!(data_sent.stream_id, stream_id);\n        assert!(data_sent.fin_state == FinState::Sent);\n    }\n\n    #[test]\n    fn test_arc_sender() {\n        let sender = create_test_sender();\n\n        // Test buffer size revision\n        sender.update_window(2000);\n\n        // Test sender lock access\n        let guard = sender.sender();\n        assert!(guard.is_ok());\n    }\n\n    #[test]\n    fn test_data_sent_sender() {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 1000;\n        let broker = MockBroker::default();\n        let mut sender = DataSentSender {\n            stream_id,\n            sndbuf: SendBuf::with_capacity(buf_size),\n            flush_waker: None,\n            shutdown_waker: None,\n            broker,\n            tx_wakers: Default::default(),\n            fin_state: FinState::Sent,\n        };\n\n        // Test pick_up with empty buffer\n        let predicate = |_| Some(100);\n        let result = sender.pick_up(predicate, 1000);\n        assert!(result.is_err());\n    }\n\n    #[tokio::test]\n    async fn test_data_sent_sender_polling() {\n        let stream_id = StreamId::new(Role::Client, Dir::Bi, 0);\n        let buf_size = 1000;\n        let broker 
= MockBroker::default();\n        let mut sender = DataSentSender {\n            stream_id,\n            sndbuf: SendBuf::with_capacity(buf_size),\n            flush_waker: None,\n            shutdown_waker: None,\n            broker,\n            tx_wakers: Default::default(),\n            fin_state: FinState::Sent,\n        };\n\n        let mut cx = Context::from_waker(futures::task::noop_waker_ref());\n\n        // poll_flush stays pending while the sent data has not been fully acknowledged\n        let result = sender.poll_flush(&mut cx);\n        assert!(result.is_pending());\n\n        // poll_shutdown registers a waker while the sent data has not been fully acknowledged\n        let _ = sender.poll_shutdown(&mut cx);\n        assert!(sender.shutdown_waker.is_some());\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/send/sndbuf.rs",
"content": "use std::{\n    cmp::Ordering,\n    collections::VecDeque,\n    fmt::{Debug, Display},\n    ops::Range,\n};\n\nuse bytes::Bytes;\nuse qbase::net::tx::Signals;\n\n/// To indicate the state of a data segment, it is colored.\n#[derive(Default, PartialEq, Eq, Clone, Copy, Debug)]\nenum Color {\n    #[default]\n    Pending,\n    Flighting,\n    Recved,\n    Lost,\n}\n\nimpl Color {\n    fn prefix(&self) -> u64 {\n        match self {\n            Self::Pending => 0,\n            Self::Flighting => 0b01 << 62,\n            Self::Lost => 0b10 << 62,\n            Self::Recved => 0b11 << 62,\n        }\n    }\n}\n\n#[derive(PartialEq, PartialOrd, Eq, Clone, Copy)]\nstruct State(u64);\n\nimpl State {\n    #[allow(dead_code)]\n    const PREFIX: u64 = 0b11 << 62;\n    const SUFFIX: u64 = u64::MAX >> 2;\n\n    fn encode(pos: u64, color: Color) -> Self {\n        Self(color.prefix() | pos)\n    }\n\n    fn offset(&self) -> u64 {\n        self.0 & Self::SUFFIX\n    }\n\n    fn color(&self) -> Color {\n        match self.0 >> 62 {\n            0b00 => Color::Pending,\n            0b01 => Color::Flighting,\n            0b10 => Color::Lost,\n            0b11 => Color::Recved,\n            _ => unreachable!(\"impossible\"),\n        }\n    }\n\n    fn set_color(&mut self, value: Color) {\n        self.0 = (self.0 & Self::SUFFIX) | value.prefix();\n    }\n\n    fn decode(&self) -> (u64, Color) {\n        (self.offset(), self.color())\n    }\n}\n\nimpl Display for State {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"[{:?}: {:?}]\", self.offset(), self.color())\n    }\n}\n\nimpl Debug for State {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"[{:?}: {:?}]\", self.offset(), self.color())\n    }\n}\n\n/**\n * Self.0 holds the interval state information as a VecDeque.\n * Each element is a State: the low 62 bits are the offset, and the high 2 bits are the color,\n * which describes the interval from this State's offset up to the next State's offset.\n * Self.1 is the (exclusive) end position; the interval described by the last element of the\n * VecDeque runs from that State's offset up to Self.1.\n * This data structure was chosen because a 64-byte CPU cache line holds 8 entries that can be\n * processed at once, enough for many small streams, which is very efficient.\n * Even for large streams, adjacent intervals with the same state are merged, so there are never\n * many distinct intervals; it remains efficient compared with linked lists, skip lists, or segment trees.\n */\n#[derive(Default, Debug)]\nstruct BufMap(VecDeque<State>, u64);\n\nimpl BufMap {\n    fn size(&self) -> u64 {\n        self.1\n    }\n\n    // Append newly written data\n    fn extend_to(&mut self, pos: u64) -> u64 {\n        debug_assert!(pos < (1 << 62), \"pos({pos}) overflow\",);\n        debug_assert!(pos >= self.size(), \"pos({pos}) less than {}\", self.size());\n\n        if pos > self.size() {\n            let back = self.0.back();\n            match back {\n                Some(s) if s.color() == Color::Pending => {}\n                _ => self.0.push_back(State::encode(self.size(), Color::Pending)),\n            };\n            self.1 = pos;\n        }\n        self.size()\n    }\n\n    fn sent(&self) -> u64 {\n        match self.0.back() {\n            Some(s) if s.color() == Color::Pending => s.offset(),\n            _ => self.size(),\n        }\n    }\n\n    // Pick Lost/Pending data to send. Data closer to the front is sent with higher priority;\n    // lost data awaiting retransmission sits before Pending data, so it has the highest priority.\n    fn pick<P>(\n        &mut self,\n        predicate: P,\n        flow_limit: usize,\n        send_window_size: u64,\n    ) -> Result<(Range<u64>, bool), Signals>\n    where\n        P: Fn(u64) -> Option<usize>,\n    {\n        let mut signals = Signals::WRITTEN | Signals::TRANSPORT;\n        // First find the first sendable interval, recolor it Flighting, and keep the original State\n        self.0\n            .iter_mut()\n            .enumerate()\n            .find(|(.., state)| {\n                if state.offset() >= send_window_size {\n                    // The offset is already beyond the send window, so this interval cannot be sent\n                    signals |= Signals::FLOW_CONTROL;\n                    return false;\n                }\n                // Choose a Pending interval (if flow control allows), or a Lost interval\n                match state.color() {\n                    Color::Pending if flow_limit != 0 => return true,\n                    Color::Pending => {\n                
        signals &= !Signals::WRITTEN;\n                        signals |= Signals::FLOW_CONTROL\n                    }\n                    Color::Lost => return true,\n                    _ => {}\n                }\n                false\n            })\n            .and_then(|(idx, state)| {\n                // If the interval's offset does not satisfy the predicate, do not send this segment.\n                // The first segment found already has the smallest offset; if even the smallest\n                // offset cannot be sent, later segments certainly cannot be sent either\n                let Some(available) = predicate(state.offset()) else {\n                    signals |= Signals::CONGESTION;\n                    return None;\n                };\n\n                let allowance = if state.color() == Color::Lost {\n                    // Retransmissions are not subject to flow control\n                    available\n                } else {\n                    available.min(flow_limit)\n                };\n                Some((idx, allowance, state))\n            })\n            .map(|(index, allowance, state)| {\n                let origin_state = *state; // copying here gives back the mutable borrow of self.0\n                state.set_color(Color::Flighting);\n                (index, origin_state, allowance)\n            })\n            .map(|(index, origin_state, allowance)| {\n                // A suitable interval to send was found; check whether its length fits, and if it\n                // is too long, split the interval in two\n                let (start, color) = origin_state.decode();\n                let mut end = self\n                    .0\n                    .get(index + 1)\n                    .map(|s| s.offset())\n                    .unwrap_or(self.size())\n                    .min(send_window_size);\n\n                let mut i = self.same_before(index, Color::Flighting);\n                if start + (allowance as u64) < end {\n                    end = start + allowance as u64;\n                    if i < index {\n                        // Split in two: if an interval was about to be merged away, recycle its old State slot\n                        *self.0.get_mut(i + 1).unwrap() = State::encode(end, color);\n                    } else {\n                        self.0.insert(i + 1, State::encode(end, color));\n                    }\n                    i += 1;\n                } else {\n                    // TODO: possible optimization: if the next interval is Lost or Pending, chain them together\n                    self.merge_after(index, Color::Flighting);\n                }\n                // If i is still less than index, States up to index must be removed: a forward\n                // merge, done with a single drain\n                if i < index {\n                    self.0.drain(i + 1..=index);\n                }\n                (start..end, color == Color::Pending)\n            })\n            .ok_or(signals)\n    }\n\n    // An ack confirms data that need not be sent again; contiguously acked data at the head can be dropped.\n    // Find the intervals covered by the ack range, recolor them Recved, then check whether the\n    // neighboring intervals before and after can be merged, and merge them.\n    // An ack range can never cover Pending data: Pending data has not been sent, so it cannot be acked.\n    fn ack_rcvd(&mut self, range: &Range<u64>) {\n        let pos = self.0.binary_search_by(|s| s.offset().cmp(&range.start));\n        let (mut drain_start, need_insert_at_start, mut drain_end, mut pre_color) = match pos {\n            Ok(idx) => {\n                let s = self.0.get_mut(idx).unwrap();\n                let pre_color = s.color();\n                debug_assert!(\n                    pre_color != Color::Pending,\n                    \"Recved Range({:?}) covered Pending part from {}\",\n                    range,\n                    s.offset()\n                );\n                s.set_color(Color::Recved);\n                (\n                    self.same_before(idx, Color::Recved) + 1,\n                    false,\n                    idx + 1,\n                    pre_color,\n                )\n            }\n            Err(idx) => {\n                if idx == 0 {\n                    (0, false, 0, Color::Recved)\n                } else {\n                    let s = self.0.get(idx - 1).unwrap();\n                    let pre_color = s.color();\n                    debug_assert!(\n                        pre_color != Color::Pending,\n                        \"Recved Range({:?}) covered Pending part from {}\",\n                        range,\n                        s.offset()\n  
                  );\n                    (idx, pre_color != Color::Recved, idx, pre_color)\n                }\n            }\n        };\n\n        let mut need_insert_at_end = false;\n        loop {\n            let entry = self.0.get(drain_end);\n            match entry {\n                Some(s) => match s.offset().cmp(&range.end) {\n                    Ordering::Less => {\n                        debug_assert!(\n                            s.color() != Color::Pending,\n                            \"Recved Range({:?}) covered Pending parts from {}\",\n                            range,\n                            s.offset()\n                        );\n                        drain_end += 1;\n                        pre_color = s.color();\n                    }\n                    Ordering::Equal => {\n                        // TODO: on nightly, unchecked_sub would be better than overflowing_sub\n                        drain_end = self\n                            .same_after(drain_end.overflowing_sub(1).0, Color::Recved)\n                            .overflowing_add(1)\n                            .0;\n                        break;\n                    }\n                    Ordering::Greater => {\n                        need_insert_at_end = pre_color != Color::Recved;\n                        break;\n                    }\n                },\n                None => {\n                    debug_assert!(\n                        range.end <= self.size(),\n                        \"Recved Range({:?}) over {}\",\n                        range,\n                        self.size()\n                    );\n                    need_insert_at_end = range.end < self.size() && pre_color != Color::Recved;\n                    break;\n                }\n            }\n        }\n\n        if need_insert_at_start {\n            if drain_start < drain_end {\n                *self.0.get_mut(drain_start).unwrap() = State::encode(range.start, Color::Recved);\n            } else {\n                self.0\n                    .insert(drain_start, State::encode(range.start, Color::Recved));\n            }\n            drain_start += 1;\n        }\n        if need_insert_at_end {\n            if drain_start < drain_end {\n                *self.0.get_mut(drain_start).unwrap() = State::encode(range.end, pre_color);\n            } else {\n                self.0\n                    .insert(drain_start, State::encode(range.end, pre_color));\n            }\n            drain_start += 1;\n        }\n        if drain_start < drain_end {\n            self.0.drain(drain_start..drain_end);\n        }\n    }\n\n    // Find the first position that is not Recved: everything before it has been acknowledged,\n    // so the send buffer can advance to that position to free up more space\n    fn shift(&mut self) -> u64 {\n        loop {\n            let entry = self.0.front();\n            match entry {\n                Some(s) if s.color() == Color::Recved => _ = self.0.pop_front(),\n                Some(s) => return s.offset(),\n                None => return self.size(),\n            }\n        }\n    }\n\n    // Judge part of the data as lost; it may not really be lost, as the judgement can be wrong.\n    // Lost data must be retransmitted with priority.\n    // Walk the range covered by the loss; Recved intervals inside it are skipped, since only\n    // Flighting/Lost intervals can be lost.\n    // Then check whether the intervals before and after the Lost range can be merged, and merge them.\n    // Likewise, a Lost range can never cover Pending data: Pending data has not been sent, so it cannot be lost.\n    fn may_loss(&mut self, range: &Range<u64>) {\n        let pos = self.0.binary_search_by(|s| s.offset().cmp(&range.start));\n        let (mut drain_start, need_insert_at_start, mut drain_end, mut pre_color) = match pos {\n            Ok(idx) => {\n                let s = self.0.get_mut(idx).unwrap();\n                debug_assert!(\n                    s.color() != Color::Pending,\n                    \"Lost Range({:?}) covered Pending parts from {}\",\n                    range,\n                    s.offset()\n                );\n                if s.color() == Color::Recved {\n                    // Already Recved: nothing needs inserting at the front, just probe forward\n                    self.may_lost_from(idx + 1, range.end);\n                    return;\n                }\n\n              
  let pre_color = s.color();\n                let mut drain_start = idx;\n                if pre_color == Color::Flighting {\n                    s.set_color(Color::Lost);\n                    // Only when the color changed do we search backwards for adjacent Lost intervals to merge;\n                    // if it was already Lost, the preceding intervals must be non-Lost states that cannot merge\n                    drain_start = self.same_before(idx, Color::Lost) + 1;\n                } else {\n                    // Already Lost: this interval's state need not change; check whether the next one does.\n                    // If the next interval is also Lost, it can be removed, merging Lost backwards\n                    drain_start += 1;\n                }\n                // Certainly no insertion at the front; probe forward from drain_start, with pre_color as the current state\n                (drain_start, false, idx + 1, pre_color)\n            }\n            Err(idx) => {\n                if idx == 0 {\n                    // All earlier data is Recved, so nothing needs inserting at the front;\n                    // just try to turn everything from 0 onward into Lost, and we are done\n                    self.may_lost_from(idx, range.end);\n                    return;\n                } else {\n                    let s = self.0.get(idx - 1).unwrap();\n                    let pre_color = s.color();\n                    debug_assert!(\n                        pre_color != Color::Pending,\n                        \"Lost Range({:?}) covered Pending parts from {}\",\n                        range,\n                        s.offset()\n                    );\n                    if pre_color == Color::Recved {\n                        // Handled separately: just call may_lost_from(idx, range.end)\n                        self.may_lost_from(idx, range.end);\n                        return;\n                    }\n                    (idx, pre_color == Color::Flighting, idx, pre_color)\n                }\n            }\n        };\n\n        let mut need_insert_at_end = false;\n        loop {\n            // Starting from the entry at drain_end: check whether it exists and, if so, whether it still falls inside the Lost range\n            let entry = self.0.get(drain_end);\n            match entry {\n                Some(s) => match s.offset().cmp(&range.end) {\n                    Ordering::Less => {\n                        // The interval starting at s.offset is still inside the Lost range\n                        debug_assert!(\n                            s.color() != Color::Pending,\n                            \"Lost Range({:?}) covered Pending parts from {}\",\n                            range,\n                            s.offset()\n                        );\n                        if s.color() == Color::Recved {\n                            // s is Recved, so everything from the interval after s up to range.end is lost;\n                            // treat it as an independent may_lost range, leaving only the part before drain_end to handle\n                            self.may_lost_from(drain_end + 1, range.end);\n                            break;\n                        } else {\n                            // s is Lost/Flighting: color it Lost and keep probing forward\n                            drain_end += 1;\n                            pre_color = s.color();\n                        }\n                    }\n                    Ordering::Equal => {\n                        // The data before s is Lost; from the previous entry, check how many consecutive Lost states follow\n                        drain_end = self\n                            .same_after(drain_end.overflowing_sub(1).0, Color::Lost)\n                            .overflowing_add(1)\n                            .0;\n                        break;\n                    }\n                    Ordering::Greater => {\n                        // s.offset is greater than range.end, so the intervals from s onward are outside the Lost range.\n                        // If the interval before s is Flighting, it must be split: the front part Lost, the rest Flighting\n                        need_insert_at_end = pre_color == Color::Flighting;\n                        break;\n                    }\n                },\n                None => {\n                    // No entry found: we have reached the last interval\n                    debug_assert!(\n                        range.end <= self.size(),\n                        \"Lost Range({:?}) over {}\",\n                        range,\n                        self.size()\n                    );\n                    // If the previous interval's color is Flighting, it must be split: up to range.end becomes Lost, the rest stays Flighting\n                    
need_insert_at_end = range.end < self.size() && pre_color == Color::Flighting;\n                    break;\n                }\n            };\n        }\n\n        if need_insert_at_start {\n            if drain_start < drain_end {\n                *self.0.get_mut(drain_start).unwrap() = State::encode(range.start, Color::Lost);\n            } else {\n                self.0\n                    .insert(drain_start, State::encode(range.start, Color::Lost));\n            }\n            drain_start += 1;\n        }\n        if need_insert_at_end {\n            if drain_start < drain_end {\n                *self.0.get_mut(drain_start).unwrap() = State::encode(range.end, pre_color);\n            } else {\n                self.0\n                    .insert(drain_start, State::encode(range.end, pre_color));\n            }\n            drain_start += 1;\n        }\n        if drain_start < drain_end {\n            self.0.drain(drain_start..drain_end);\n        }\n    }\n\n    fn resend_flighting(&mut self) {\n        for state in self.0.iter_mut() {\n            if state.color() == Color::Flighting {\n                state.set_color(Color::Lost);\n            }\n        }\n    }\n}\n\nimpl BufMap {\n    fn same_before(&self, mut index: usize, color: Color) -> usize {\n        loop {\n            let pre = index.overflowing_sub(1).0;\n            match self.0.get(pre) {\n                Some(s) if s.color() == color => index = pre,\n                _ => break,\n            }\n        }\n        index\n    }\n\n    fn same_after(&self, mut index: usize, color: Color) -> usize {\n        loop {\n            let next = index.overflowing_add(1).0;\n            match self.0.get(next) {\n                Some(s) if s.color() == color => index = next,\n                _ => break,\n            }\n        }\n        index\n    }\n\n    fn merge_after(&mut self, index: usize, color: Color) {\n        let same_after = self.same_after(index, color);\n        if index < same_after {\n    
        self.0.drain(index + 1..=same_after);\n        }\n    }\n\n    // Loss helper: turn the interval at idx_start into Lost, then keep judging loss forward\n    fn may_lost_from(&mut self, mut idx_start: usize, end: u64) {\n        let mut idx = idx_start;\n        let mut pre_color = Color::Recved;\n        let mut need_insert_at_end = false;\n        loop {\n            let entry = self.0.get_mut(idx);\n            match entry {\n                Some(s) => match s.offset().cmp(&end) {\n                    Ordering::Less => {\n                        debug_assert!(\n                            s.color() != Color::Pending,\n                            \"Lost Range.end({end}) covered Pending parts from {}\",\n                            s.offset()\n                        );\n                        pre_color = s.color();\n                        if s.color() == Color::Recved {\n                            // Handled separately: just call may_lost_from(idx + 1, end)\n                            self.may_lost_from(idx + 1, end);\n                            break;\n                        } else {\n                            s.set_color(Color::Lost);\n                            idx += 1;\n                        }\n                    }\n                    Ordering::Equal => {\n                        idx = self\n                            .same_after(idx.overflowing_sub(1).0, Color::Lost)\n                            .overflowing_add(1)\n                            .0;\n                        break;\n                    }\n                    Ordering::Greater => {\n                        need_insert_at_end = pre_color == Color::Flighting;\n                        break;\n                    }\n                },\n                None => {\n                    debug_assert!(\n                        end <= self.size(),\n                        \"Lost Range.end({end}) over {}\",\n                        self.size()\n                    );\n                    need_insert_at_end = end < self.size() && pre_color == 
Color::Flighting;\n                    break;\n                }\n            }\n        }\n        if need_insert_at_end {\n            if idx_start + 1 < idx {\n                *self.0.get_mut(idx_start + 1).unwrap() = State::encode(end, pre_color);\n            } else {\n                self.0.insert(idx_start + 1, State::encode(end, pre_color));\n            }\n            idx_start += 1;\n        }\n        if idx_start + 1 < idx {\n            self.0.drain(idx_start + 1..idx);\n        }\n    }\n}\n\n/// Data to be reliably sent to the peer will first be cached in [`SendBuf`].\n///\n/// [`SendBuf`] records the state of data that has been sent and of data not yet sent.\n///\n/// The transport layer needs to notify it when the data it has sent is confirmed ([`on_data_acked`]) or lost\n/// ([`may_loss_data`]), to update the state of [`SendBuf`].\n///\n/// The transport layer can [`pick_up`] a piece of data that needs to be sent. The data may be new data,\n/// or old data that has been sent but has not been acknowledged.\n///\n/// The data picked up may not be contiguous; the [`receive buffer`] will assemble it into contiguous data before\n/// passing it to the application layer.\n///\n/// [`pick_up`]: SendBuf::pick_up\n/// [`on_data_acked`]: SendBuf::on_data_acked\n/// [`may_loss_data`]: SendBuf::may_loss_data\n/// [`receive buffer`]: crate::recv::RecvBuf\n#[derive(Default, Debug)]\npub struct SendBuf {\n    offset: u64,\n    // Queue of written data; unlike the receive queue, every segment here is contiguous with its neighbors\n    data: VecDeque<Bytes>,\n    // Limit on BufMap::size\n    max_data: u64,\n    state: BufMap,\n}\n\nimpl SendBuf {\n    /// Create a new [`SendBuf`] with the given size.\n    pub fn with_capacity(capacity: u64) -> Self {\n        Self {\n            offset: 0,\n            data: VecDeque::new(),\n            max_data: capacity,\n            state: BufMap::default(),\n        }\n    }\n\n    /// Write data to the [`SendBuf`].\n    ///\n    /// When [`SendBuf`] has buffered [`Self::max_data`] amount of data,\n    /// no more data should be written.\n    pub fn write(&mut self, data: Bytes) {\n        // debug_assert!(self.remaining_mut() > 0, \"Sendbuf buffers excess data\");\n        if !data.is_empty() {\n            self.state\n                .extend_to((self.written() + data.len() as u64).min(self.max_data));\n            self.data.push_back(data);\n        }\n    }\n\n    /// The maximum amount of data that can be sent in the [`SendBuf`].\n    ///\n    /// For [`DataStreams`], this is the flow control of the stream.\n    ///\n    /// For [`CryptoStream`], there should be no restrictions.\n    ///\n    /// [`DataStreams`]: crate::streams::DataStreams\n    /// [`CryptoStream`]: crate::crypto::CryptoStream\n    pub fn max_data(&self) -> u64 {\n        self.max_data\n    }\n\n    /// Forget all state of data that has been sent.\n    ///\n    /// This is usually called when 0-RTT is rejected by the server.\n    ///\n    /// All data sent should be resent as fresh data,\n    /// and max_data is also reset to 0 so that it can be corrected afterwards.\n    pub fn forget_sent_state(&mut self) {\n        self.state = BufMap::default();\n        self.max_data = 0;\n    }\n\n    /// Extend the [`Self::max_data`] limit.\n    pub fn extend(&mut self, max_data: u64) {\n        debug_assert!(max_data >= self.max_data, \"Cannot reduce sndbuf size\");\n        self.max_data = max_data;\n        self.state.extend_to(self.written().min(self.max_data));\n    }\n\n    /// Return whether the [`SendBuf`] is empty.\n    pub fn is_empty(&self) -> bool {\n        self.data.is_empty()\n    }\n\n    /// Return the total length of data that has been cumulatively written to the send buffer in the past.\n    ///\n    /// Note that the returned size may be larger than [`Self::max_data`].\n    pub fn written(&self) -> u64 {\n        self.offset + self.data.iter().map(|data| data.len() as u64).sum::<u64>()\n    }\n\n    /// Return the number of bytes that have been sent.\n    pub fn sent(&self) -> 
u64 {\n        self.state.sent()\n    }\n\n    /// Return the number of bytes that can be written without exceeding the [`Self::max_data`] limit.\n    ///\n    /// To prevent [`SendBuf`] from buffering excessive data, data should not be written when this method returns 0.\n    pub fn remaining_mut(&self) -> u64 {\n        self.max_data().saturating_sub(self.written())\n    }\n\n    /// Return whether there is remaining space to write data without exceeding the [`Self::max_data`] limit.\n    ///\n    /// When this method returns false, data should not be written.\n    pub fn has_remaining_mut(&self) -> bool {\n        self.max_data() > self.written()\n    }\n\n    // 无需close：不在写入即可，具体到某个状态，才有close\n    // 无需reset：状态转化间，需要reset，而Sender上下文直接释放即可\n    // 无需clean：Sender上下文直接释放即可，\n}\n\ntype Data<'s> = (Range<u64>, bool, Vec<Bytes>);\n\nimpl SendBuf {\n    /// Pick up data that can be sent.\n    ///\n    /// The selected data is subject to `predicate`, which accepts the starting position of the\n    /// data segment, returns whether the segment could be sent and the maximum amount of bytes could\n    /// take.\n    ///\n    /// If the data picked up is new (never sent before), how much data can be sent is also subject\n    /// to `flow_limit`.\n    ///\n    /// ### Returns\n    /// `None` if there is no data picked up.\n    ///\n    /// Otherwise, return a tuple:\n    /// * `Range<u64>`: the range of data picked up (start inclusive, end exclusive).\n    /// * `bool`: whether the data is new(not retransmitted).\n    /// * `(&[u8], &[u8])`: the data picked up, duo to the internal buffer is a ring buffer, the data\n    ///   picked up is in two parts, the begin of the second slice are the end of the first slice\n    pub fn pick_up<P>(&mut self, predicate: P, flow_limit: usize) -> Result<Data<'_>, Signals>\n    where\n        P: Fn(u64) -> Option<usize>,\n    {\n        self.state\n            .pick(predicate, flow_limit, self.max_data())\n            .map(|(range, 
is_fresh)| {\n                let iter = self\n                    .data\n                    .iter()\n                    .scan(self.offset, |offset, data| {\n                        let current_range = *offset..*offset + data.len() as u64;\n                        *offset += data.len() as u64;\n                        Some((current_range, data))\n                    })\n                    .filter(move |(slice, ..)| slice.end > range.start && slice.start < range.end)\n                    .map(move |(slice, data)| {\n                        if slice.start >= range.start && slice.end <= range.end {\n                            data.clone()\n                        } else {\n                            data.slice(\n                                (range.start.saturating_sub(slice.start)) as usize\n                                    ..(range.end.min(slice.end) - slice.start) as usize,\n                            )\n                        }\n                    });\n\n                (range, is_fresh, iter.collect())\n            })\n    }\n\n    /// Called when the `range` of data sent is acknowledged by the peer.\n    ///\n    /// The `range` is the range of data that has been acknowledged.\n    // ACK frames received from the peer at the transport layer confirm that certain packets have\n    // been received; the data carried by those packets is thereby acknowledged.\n    // An ACK can only confirm ranges in the Flighting/Lost state; confirming a Lost range means\n    // the earlier loss judgment was wrong.\n    pub fn on_data_acked(&mut self, range: &Range<u64>) {\n        self.state.ack_rcvd(range);\n        // Advance past the contiguously acknowledged prefix at the head to avoid wasting space.\n        let min_unrecved_pos = self.state.shift();\n        if self.offset < min_unrecved_pos {\n            let mut drain_len = (min_unrecved_pos - self.offset) as usize;\n            self.offset = min_unrecved_pos;\n\n            while !self.data.is_empty() && drain_len > 0 {\n                match drain_len {\n                    n if n >= self.data[0].len() => {\n                        drain_len -= self.data[0].len();\n                        self.data.pop_front().unwrap();\n                    }\n                    n => {\n   
                     self.data[0] = self.data[0].slice(n..);\n                        break;\n                    }\n                }\n            }\n        }\n    }\n\n    /// Called when the `range` of data sent may be lost.\n    ///\n    /// The `range` is the range of data that may be lost.\n    // ACK frames received at the transport layer indicate that some packets may have been lost,\n    // either because the packets sent after them have all been acknowledged, or because no\n    // acknowledgment arrived for quite a long time after the data was sent.\n    pub fn may_loss_data(&mut self, range: &Range<u64>) {\n        self.state.may_loss(range);\n    }\n\n    pub fn resend_flighting(&mut self) {\n        self.state.resend_flighting()\n    }\n\n    /// Return whether all data currently written has been received (acknowledged) by the peer.\n    pub fn is_all_rcvd(&self) -> bool {\n        self.data.is_empty()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use qbase::net::tx::Signals;\n\n    use super::{BufMap, Color, State};\n\n    #[test]\n    fn test_state() {\n        let state = State::encode(100, Color::Pending);\n        assert_eq!(state.offset(), 100);\n        assert_eq!(state.color(), Color::Pending);\n\n        let mut state = State::encode(100, Color::Pending);\n        state.set_color(Color::Flighting);\n        assert_eq!(state.color(), Color::Flighting);\n\n        let state = State::encode(100, Color::Pending);\n        assert_eq!(state.decode(), (100, Color::Pending));\n\n        // test Display\n        assert_eq!(format!(\"{state}\"), \"[100: Pending]\");\n        assert_eq!(format!(\"{state:?}\"), \"[100: Pending]\");\n    }\n\n    #[test]\n    fn test_bufmap_empty() {\n        let buf_map = BufMap::default();\n        assert!(buf_map.0.is_empty());\n    }\n\n    #[test]\n    fn test_bufmap_extend_to() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(100);\n        assert_eq!(buf_map.0, vec![State::encode(0, Color::Pending)]);\n        assert_eq!(buf_map.1, 100);\n\n        buf_map.0.get_mut(0).unwrap().set_color(Color::Flighting);\n        buf_map.extend_to(200);\n        assert_eq!(\n            
buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(100, Color::Pending)\n            ]\n        );\n        assert_eq!(buf_map.1, 200);\n    }\n\n    #[test]\n    fn test_bufmap_pick() {\n        let mut buf_map = BufMap::default();\n        let range = buf_map.pick(|_| Some(20), usize::MAX, u64::MAX);\n        assert_eq!(range, Err(Signals::TRANSPORT | Signals::WRITTEN));\n        assert!(buf_map.0.is_empty());\n\n        buf_map.extend_to(200);\n        let (range, is_fresh) = buf_map.pick(|_| Some(20), usize::MAX, u64::MAX).unwrap();\n        assert_eq!(range, 0..20);\n        assert!(is_fresh);\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(20, Color::Pending)\n            ]\n        );\n\n        let (range, is_fresh) = buf_map.pick(|_| Some(20), usize::MAX, u64::MAX).unwrap();\n        assert_eq!(range, 20..40);\n        assert!(is_fresh);\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(40, Color::Pending)\n            ]\n        );\n\n        buf_map.0.insert(2, State::encode(50, Color::Lost));\n        buf_map.0.insert(3, State::encode(120, Color::Pending));\n        let (range, is_fresh) = buf_map.pick(|_| Some(20), usize::MAX, u64::MAX).unwrap();\n        assert_eq!(range, 40..50);\n        assert!(is_fresh);\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(50, Color::Lost),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.0.get_mut(0).unwrap().set_color(Color::Recved);\n        let (range, is_fresh) = buf_map.pick(|_| Some(20), usize::MAX, u64::MAX).unwrap();\n        assert_eq!(range, 50..70);\n        assert!(!is_fresh);\n        assert_eq!(\n      
      buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(50, Color::Flighting),\n                State::encode(70, Color::Lost),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        let (range, is_fresh) = buf_map.pick(|_| Some(130), usize::MAX, u64::MAX).unwrap();\n        assert_eq!(range, 70..120);\n        assert!(!is_fresh);\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(50, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        let (range, is_fresh) = buf_map.pick(|_| Some(130), usize::MAX, u64::MAX).unwrap();\n        assert_eq!(range, 120..200);\n        assert!(is_fresh);\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(50, Color::Flighting),\n            ]\n        );\n\n        let result = buf_map.pick(|_| Some(130), usize::MAX, u64::MAX);\n        assert!(result.is_err());\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(50, Color::Flighting),\n            ]\n        );\n    }\n\n    #[test]\n    fn test_bufmap_sent() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(200);\n        assert_eq!(buf_map.sent(), 0);\n\n        assert!(buf_map.pick(|_| Some(120), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(buf_map.sent(), 120);\n\n        assert!(buf_map.pick(|_| Some(80), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(buf_map.sent(), 200);\n    }\n\n    #[test]\n    fn test_bufmap_recved() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(200);\n        assert!(buf_map.pick(|_| Some(120), usize::MAX, u64::MAX).is_ok());\n        buf_map.ack_rcvd(&(0..20));\n     
   assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(20, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.ack_rcvd(&(30..50));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(20, Color::Flighting),\n                State::encode(30, Color::Recved),\n                State::encode(50, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.ack_rcvd(&(25..55));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(20, Color::Flighting),\n                State::encode(25, Color::Recved),\n                State::encode(55, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.ack_rcvd(&(20..25));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(55, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.0.pop_front();\n        buf_map.ack_rcvd(&(20..55));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(55, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.ack_rcvd(&(30..70));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(70, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        buf_map.ack_rcvd(&(100..119));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(70, Color::Flighting),\n                
State::encode(100, Color::Recved),\n                State::encode(119, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n\n        assert!(buf_map.pick(|_| Some(130), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(70, Color::Flighting),\n                State::encode(100, Color::Recved),\n                State::encode(119, Color::Flighting),\n            ]\n        );\n\n        buf_map.ack_rcvd(&(119..150));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(70, Color::Flighting),\n                State::encode(100, Color::Recved),\n                State::encode(150, Color::Flighting),\n            ]\n        );\n        buf_map.ack_rcvd(&(150..200));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(70, Color::Flighting),\n                State::encode(100, Color::Recved),\n            ]\n        );\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_bufmap_invalid_recved() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(200);\n        assert!(buf_map.pick(|_| Some(120), usize::MAX, u64::MAX).is_ok());\n        buf_map.ack_rcvd(&(20..40));\n        buf_map.0.insert(2, State::encode(30, Color::Pending));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(20, Color::Recved),\n                // Warning: 30..40 is Pending and has never been sent, yet it will be marked Recved\n                State::encode(30, Color::Pending),\n                State::encode(40, Color::Flighting),\n                State::encode(120, Color::Pending)\n            ]\n        );\n        buf_map.ack_rcvd(&(0..50));\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_bufmap_recved_overflow() {\n        let mut buf_map = BufMap::default();\n        
buf_map.extend_to(200);\n        assert!(buf_map.pick(|_| Some(120), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n        buf_map.ack_rcvd(&(110..121));\n    }\n\n    #[test]\n    #[should_panic]\n    fn test_bufmap_recved_over_end() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(200);\n        assert!(buf_map.pick(|_| Some(200), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(buf_map.0, vec![State::encode(0, Color::Flighting)]);\n        buf_map.ack_rcvd(&(0..201));\n    }\n\n    #[test]\n    fn test_bufmap_lost() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(200);\n        assert!(buf_map.pick(|_| Some(120), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(0..20));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Lost),\n                State::encode(20, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(30..50));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Lost),\n                State::encode(20, Color::Flighting),\n                State::encode(30, Color::Lost),\n                State::encode(50, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.ack_rcvd(&(0..10));\n        buf_map.ack_rcvd(&(70..100));\n        buf_map.0.pop_front();\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n         
       State::encode(20, Color::Flighting),\n                State::encode(30, Color::Lost),\n                State::encode(50, Color::Flighting),\n                State::encode(70, Color::Recved),\n                State::encode(100, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(15..25));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(25, Color::Flighting),\n                State::encode(30, Color::Lost),\n                State::encode(50, Color::Flighting),\n                State::encode(70, Color::Recved),\n                State::encode(100, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(10..20));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(25, Color::Flighting),\n                State::encode(30, Color::Lost),\n                State::encode(50, Color::Flighting),\n                State::encode(70, Color::Recved),\n                State::encode(100, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(60..110));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(25, Color::Flighting),\n                State::encode(30, Color::Lost),\n                State::encode(50, Color::Flighting),\n                State::encode(60, Color::Lost),\n                State::encode(70, Color::Recved),\n                State::encode(100, Color::Lost),\n                State::encode(110, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.ack_rcvd(&(20..55));\n        assert_eq!(\n            
buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(20, Color::Recved),\n                State::encode(55, Color::Flighting),\n                State::encode(60, Color::Lost),\n                State::encode(70, Color::Recved),\n                State::encode(100, Color::Lost),\n                State::encode(110, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(40..80));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(20, Color::Recved),\n                State::encode(55, Color::Lost),\n                State::encode(70, Color::Recved),\n                State::encode(100, Color::Lost),\n                State::encode(110, Color::Flighting),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.ack_rcvd(&(20..120));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(20, Color::Recved),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(50..80));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(20, Color::Recved),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(2..10));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(20, Color::Recved),\n                State::encode(120, Color::Pending),\n            ]\n        );\n\n        buf_map.may_loss(&(30..50));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(10, Color::Lost),\n                State::encode(20, 
Color::Recved),\n                State::encode(120, Color::Pending),\n            ]\n        );\n    }\n\n    #[test]\n    fn test_bufmap_ack_and_lost_all() {\n        let mut buf_map = BufMap::default();\n        buf_map.extend_to(46);\n        assert!(buf_map.pick(|_| Some(46), usize::MAX, u64::MAX).is_ok());\n        assert_eq!(buf_map.0, vec![State::encode(0, Color::Flighting)]);\n\n        buf_map.ack_rcvd(&(0..2));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(2, Color::Flighting)\n            ]\n        );\n\n        buf_map.may_loss(&(0..46));\n        assert_eq!(\n            buf_map.0,\n            vec![\n                State::encode(0, Color::Recved),\n                State::encode(2, Color::Lost)\n            ]\n        )\n    }\n\n    #[test]\n    fn test_bufmap_ack_and_lost_all2() {\n        let mut buf_map = BufMap(vec![State::encode(2, Color::Flighting)].into(), 46);\n\n        buf_map.may_loss(&(0..46));\n        assert_eq!(buf_map.0, vec![State::encode(2, Color::Lost)])\n    }\n}\n"
  },
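The head-drain loop in `SendBuf::on_data_acked` above pops fully acknowledged front segments and trims a partially acknowledged one. A minimal, self-contained sketch of just that step (using `Vec<u8>` segments in place of `Bytes`, whose `slice` is zero-copy in the real code; `drain_front` is an illustrative name, not part of the crate):

```rust
use std::collections::VecDeque;

/// Drop `drain_len` bytes from the front of a segmented buffer: pop whole
/// segments while they fit, then trim the first remaining segment.
fn drain_front(data: &mut VecDeque<Vec<u8>>, mut drain_len: usize) {
    while !data.is_empty() && drain_len > 0 {
        if drain_len >= data[0].len() {
            // The whole front segment is acknowledged: pop it.
            drain_len -= data[0].len();
            data.pop_front();
        } else {
            // Only a prefix is acknowledged: keep the tail of the segment.
            data[0] = data[0][drain_len..].to_vec();
            break;
        }
    }
}

fn main() {
    let mut data: VecDeque<Vec<u8>> =
        VecDeque::from([b"hello".to_vec(), b"world".to_vec()]);
    // Acknowledge 7 bytes: drops "hello" and the "wo" prefix of "world".
    drain_front(&mut data, 7);
    assert_eq!(data, VecDeque::from([b"rld".to_vec()]));
    println!("remaining: {data:?}");
}
```

The real loop also advances `self.offset` first, so `written()` stays correct after segments are dropped.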
  {
    "path": "qrecovery/src/send/writer.rs",
"content": "use std::{\n    ops::DerefMut,\n    pin::Pin,\n    task::{Context, Poll, ready},\n};\n\nuse bytes::Bytes;\nuse futures::Sink;\nuse qbase::frame::{ResetStreamFrame, io::SendFrame};\nuse tokio::io::{self, AsyncWrite};\n\nuse super::sender::{ArcSender, Sender};\nuse crate::streams::error::StreamError;\n\npub trait CancelStream {\n    /// Cancels the stream with the given error code.\n    ///\n    /// If all data has been sent and acknowledged by the peer, or the stream has been reset, this\n    /// method will do nothing.\n    ///\n    /// Otherwise, a [`RESET_STREAM frame`] will be sent to the peer, and the stream will be reset;\n    /// neither new data nor lost data will be sent.\n    ///\n    /// Unlike TCP, canceling a QUIC stream needs an error code, which is used to indicate\n    /// the reason for the cancellation. The error code should be a `u64` value,\n    /// defined by the application protocol using QUIC, such as HTTP/3 or gRPC.\n    ///\n    /// [`RESET_STREAM frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frames\n    fn cancel(&mut self, err_code: u64);\n}\n\n/// The writer part of a QUIC stream.\n///\n/// This struct implements the [`AsyncWrite`] trait, allowing you to write data to the stream.\n///\n/// A QUIC stream is *reliable*, *ordered*, and *flow-controlled*.\n///\n/// The amount of data that can be sent is limited by flow control. The [`write`] call will be blocked\n/// if the amount of data written reaches the flow control limit.\n///\n/// The [`flush`] and [`shutdown`] calls will be blocked until all data written to [`Writer`] has\n/// been sent and acknowledged by the peer.\n///\n/// # Note\n///\n/// The stream must be cancelled or shut down before the [`Writer`] is dropped.\n///\n/// Calling [`shutdown`] means that no more new data will be written to the stream. 
If all\n/// of the data written to the stream has been sent and acknowledged by the peer, the stream will be\n/// `closed`, and the [`shutdown`] call completes with `Ok(())`.\n///\n/// Alternatively, if an operation on the [`Writer`] results in an error, it indicates that the stream\n/// has been cancelled for another reason, such as the connection being closed, or the peer asking the\n/// local endpoint to stop sending.\n///\n/// You can call [`cancel`] to cancel the stream with the given error code. The [`Writer`] will be\n/// consumed, and neither new data nor lost data will be sent anymore.\n///\n/// # Example\n///\n/// The [`Writer`] is created by the `open_bi_stream`, `open_uni_stream`, or `accept_bi_stream` methods of\n/// `QuicConnection` (in the `dquic` crate).\n///\n/// The following example demonstrates how to read and write data on a QUIC stream.\n///\n/// ```rust, ignore\n/// # use tokio::io::{AsyncWriteExt, AsyncReadExt};\n/// # async fn example() -> std::io::Result<()> {\n/// let (reader, writer) = quic_connection.open_bi_stream().await?;\n///\n/// writer.write_all(b\"GET README.md\\r\\n\").await?;\n/// writer.shutdown().await?;\n///\n/// let mut response = String::new();\n/// let n = reader.read_to_string(&mut response).await?;\n/// println!(\"Response {} bytes: {}\", n, response);\n/// Ok(())\n/// # }\n/// ```\n///\n/// [`write`]: tokio::io::AsyncWriteExt::write\n/// [`flush`]: tokio::io::AsyncWriteExt::flush\n/// [`shutdown`]: tokio::io::AsyncWriteExt::shutdown\n/// [`cancel`]: Writer::cancel\n/// [`STOP_SENDING frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-stop_sending-frames\n#[derive(Debug)]\npub struct Writer<TX> {\n    inner: ArcSender<TX>,\n    qlog_span: qevent::telemetry::Span,\n    tracing_span: tracing::Span,\n}\n\nimpl<TX> Writer<TX> {\n    pub(crate) fn new(inner: ArcSender<TX>) -> Self {\n        Self {\n            inner,\n            qlog_span: qevent::telemetry::Span::current(),\n            tracing_span: tracing::Span::current(),\n        }\n    
}\n}\n\nimpl<TX> CancelStream for Writer<TX>\nwhere\n    TX: SendFrame<ResetStreamFrame>,\n{\n    /// Cancels the stream with the given error code (resetting the stream).\n    ///\n    /// If all data has been sent and acknowledged by the peer (the stream has been closed), or the stream\n    /// has been reset, this method will do nothing.\n    ///\n    /// Otherwise, a [`RESET_STREAM frame`] will be sent to the peer, and the stream will be reset;\n    /// neither new data nor lost data will be sent.\n    ///\n    /// [`RESET_STREAM frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frames\n    fn cancel(&mut self, err_code: u64) {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut sender = self.inner.sender();\n        let inner = sender.deref_mut();\n        if let Ok(sending_state) = inner {\n            match sending_state {\n                Sender::Ready(s) => {\n                    *sending_state = Sender::ResetSent(s.cancel(err_code));\n                }\n                Sender::Sending(s) => {\n                    *sending_state = Sender::ResetSent(s.cancel(err_code));\n                }\n                Sender::DataSent(s) => {\n                    *sending_state = Sender::ResetSent(s.cancel(err_code));\n                }\n                _ => (),\n            }\n        };\n    }\n}\n\nimpl<TX> Writer<TX> {\n    /// Poll to check whether the [`Writer`] can cache an appropriate amount of additional data.\n    ///\n    /// Even without calling this method in advance, writing data can succeed.\n    /// However, this may cause the QUIC layer to cache excessive data.\n    #[inline]\n    pub fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamError>> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut sender = self.inner.sender();\n        let sending_state = sender.as_mut().map_err(|e| e.clone())?;\n        match sending_state {\n            Sender::Ready(s) => 
s.poll_ready(cx),\n            Sender::Sending(s) => s.poll_ready(cx),\n            Sender::DataSent(_) => Poll::Ready(Err(StreamError::EosSent)),\n            Sender::DataRcvd => Poll::Ready(Err(StreamError::EosSent)),\n            Sender::ResetSent(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n            Sender::ResetRcvd(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n        }\n    }\n\n    /// Write data to the stream.\n    ///\n    /// Although data written by this method can also be sent,\n    /// it is recommended to use the `Sink` or `AsyncWrite` API to avoid excessive data caching at the QUIC layer.\n    #[inline]\n    pub fn write(&mut self, buf: Bytes) -> Result<(), StreamError> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut sender = self.inner.sender();\n        let sending_state = sender.as_mut().map_err(|e| e.clone())?;\n        match sending_state {\n            Sender::Ready(s) => s.write(buf),\n            Sender::Sending(s) => s.write(buf),\n            Sender::DataSent(_) => Err(StreamError::EosSent),\n            Sender::DataRcvd => Err(StreamError::EosSent),\n            Sender::ResetSent(reset) => Err(StreamError::Reset(*reset)),\n            Sender::ResetRcvd(reset) => Err(StreamError::Reset(*reset)),\n        }\n    }\n\n    #[inline]\n    pub fn poll_write(\n        &mut self,\n        cx: &mut Context<'_>,\n        data: Bytes,\n    ) -> Poll<Result<(), StreamError>> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut sender = self.inner.sender();\n        let sending_state = sender.as_mut().map_err(|e| e.clone())?;\n        match sending_state {\n            Sender::Ready(s) => {\n                ready!(s.poll_ready(cx)?);\n                Poll::Ready(s.write(data))\n            }\n            Sender::Sending(s) => {\n                ready!(s.poll_ready(cx)?);\n                Poll::Ready(s.write(data))\n            }\n         
   Sender::DataSent(_) => Poll::Ready(Err(StreamError::EosSent)),\n            Sender::DataRcvd => Poll::Ready(Err(StreamError::EosSent)),\n            Sender::ResetSent(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n            Sender::ResetRcvd(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n        }\n    }\n\n    #[inline]\n    pub fn poll_flush(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamError>> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut sender = self.inner.sender();\n        let sending_state = sender.as_mut().map_err(|e| e.clone())?;\n        match sending_state {\n            Sender::Ready(s) => s.poll_flush(cx).map(Ok),\n            Sender::Sending(s) => s.poll_flush(cx).map(Ok),\n            Sender::DataSent(s) => s.poll_flush(cx).map(Ok),\n            Sender::DataRcvd => Poll::Ready(Ok(())),\n            Sender::ResetSent(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n            Sender::ResetRcvd(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n        }\n    }\n\n    #[inline]\n    #[doc(alias = \"poll_close\")]\n    pub fn poll_shutdown(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), StreamError>> {\n        let _span = (self.qlog_span.enter(), self.tracing_span.enter());\n\n        let mut sender = self.inner.sender();\n        let sending_state = sender.as_mut().map_err(|e| e.clone())?;\n        match sending_state {\n            Sender::Ready(s) => s.poll_shutdown(cx).map(Ok),\n            Sender::Sending(s) => s.poll_shutdown(cx).map(Ok),\n            Sender::DataSent(s) => s.poll_shutdown(cx).map(Ok),\n            Sender::DataRcvd => Poll::Ready(Ok(())),\n            Sender::ResetSent(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n            Sender::ResetRcvd(reset) => Poll::Ready(Err(StreamError::Reset(*reset))),\n        }\n    }\n}\n\nimpl<TX> AsyncWrite for Writer<TX> {\n    #[inline]\n    fn poll_write(\n        self: 
Pin<&mut Self>,\n        cx: &mut Context<'_>,\n        buf: &[u8],\n    ) -> Poll<io::Result<usize>> {\n        Writer::poll_write(self.get_mut(), cx, Bytes::copy_from_slice(buf))\n            .map_ok(|()| buf.len())\n            .map_err(io::Error::from)\n    }\n\n    #[inline]\n    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n        Writer::poll_flush(self.get_mut(), cx).map_err(io::Error::from)\n    }\n\n    #[inline]\n    fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n        Writer::poll_shutdown(self.get_mut(), cx).map_err(io::Error::from)\n    }\n}\n\nimpl<TX> Sink<Bytes> for Writer<TX> {\n    type Error = StreamError;\n\n    #[inline]\n    fn poll_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {\n        Writer::poll_ready(self.get_mut(), cx)\n    }\n\n    #[inline]\n    fn start_send(self: Pin<&mut Self>, item: Bytes) -> Result<(), Self::Error> {\n        Writer::write(self.get_mut(), item)\n    }\n\n    #[inline]\n    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {\n        Writer::poll_flush(self.get_mut(), cx)\n    }\n\n    #[inline]\n    fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {\n        Writer::poll_shutdown(self.get_mut(), cx)\n    }\n}\n\nimpl<TX> Drop for Writer<TX> {\n    fn drop(&mut self) {\n        let mut sender = self.inner.sender();\n        let inner = sender.deref_mut();\n        if let Ok(sending_state) = inner {\n            match sending_state {\n                Sender::Ready(s) => {\n                    #[cfg(debug_assertions)]\n                    tracing::warn!(\n                        target: \"quic\",\n                        \"The sending {} is not closed before dropped!\",\n                        s.stream_id(),\n                    );\n                    #[cfg(not(debug_assertions))]\n                    
tracing::debug!(\n                        target: \"quic\",\n                        \"The sending {} is not closed before dropped!\",\n                        s.stream_id(),\n                    );\n                }\n                Sender::Sending(s) => {\n                    #[cfg(debug_assertions)]\n                    tracing::warn!(\n                        target: \"quic\",\n                        \"The sending {} is not closed before dropped!\",\n                        s.stream_id(),\n                    );\n                    #[cfg(not(debug_assertions))]\n                    tracing::debug!(\n                        target: \"quic\",\n                        \"The sending {} is not closed before dropped!\",\n                        s.stream_id(),\n                    );\n                }\n                _ => (),\n            }\n        };\n    }\n}\n"
  },
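Every method on `Writer` above dispatches on the current `Sender` state, and `cancel` transitions the `Ready`/`Sending`/`DataSent` states to `ResetSent` while leaving terminal states alone. A pared-down, self-contained model of that transition (the real `Sender` variants carry buffers and frames; `SenderState` here is an illustrative stand-in, and `ResetRcvd` is omitted for brevity):

```rust
/// Simplified sender state machine driven by `Writer::cancel`.
#[derive(Debug, PartialEq)]
enum SenderState {
    Ready,
    Sending,
    DataSent,
    DataRcvd,
    ResetSent(u64), // holds the application error code carried by RESET_STREAM
}

impl SenderState {
    /// Cancelling only has an effect before all data is acknowledged or the
    /// stream is already reset; otherwise it is a no-op.
    fn cancel(&mut self, err_code: u64) {
        match self {
            SenderState::Ready | SenderState::Sending | SenderState::DataSent => {
                *self = SenderState::ResetSent(err_code);
            }
            _ => (), // DataRcvd / ResetSent: nothing to do
        }
    }
}

fn main() {
    let mut state = SenderState::Sending;
    state.cancel(42);
    assert_eq!(state, SenderState::ResetSent(42));

    // Cancelling again keeps the original error code.
    state.cancel(7);
    assert_eq!(state, SenderState::ResetSent(42));
    println!("final state: {state:?}");
}
```

This is why `cancel` is idempotent in practice: once a `RESET_STREAM` has been recorded, further cancellations fall through the `_` arm.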
  {
    "path": "qrecovery/src/send.rs",
    "content": "//! Types for sending data on a Stream.\nmod outgoing;\nmod sender;\nmod sndbuf;\nmod writer;\n\npub use outgoing::Outgoing;\npub use sender::ArcSender;\npub use sndbuf::SendBuf;\npub use writer::{CancelStream, Writer};\n"
  },
  {
    "path": "qrecovery/src/streams/error.rs",
    "content": "use std::io;\n\nuse qbase::{error::Error, frame::ResetStreamError};\nuse thiserror::Error;\n\n#[derive(Error, Debug, Clone, PartialEq, Eq)]\npub enum StreamError {\n    #[error(transparent)]\n    Connection(#[from] Error),\n    #[error(transparent)]\n    Reset(#[from] ResetStreamError),\n    #[error(\"EOS has been sent\")]\n    EosSent,\n}\n\nimpl From<StreamError> for io::Error {\n    fn from(value: StreamError) -> Self {\n        match value {\n            error @ (StreamError::Connection(..) | StreamError::Reset(..)) => {\n                io::Error::new(io::ErrorKind::BrokenPipe, error)\n            }\n            error @ StreamError::EosSent => io::Error::new(io::ErrorKind::Unsupported, error),\n        }\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/streams/io.rs",
    "content": "use std::{\n    collections::{BTreeMap, HashMap},\n    ops::{Deref, DerefMut},\n    sync::{\n        Arc, Mutex, MutexGuard,\n        atomic::{AtomicU8, Ordering},\n    },\n};\n\nuse derive_more::{Deref, DerefMut};\nuse qbase::{\n    error::Error as QuicError,\n    sid::{Dir, StreamId},\n};\n\nuse crate::{recv::Incoming, send::Outgoing};\n\n#[derive(Debug, Clone)]\npub(super) struct IOState(Arc<AtomicU8>);\n\nimpl IOState {\n    const SENDING: u8 = 0x1;\n    const RECEIVING: u8 = 0x2;\n\n    pub fn send_only() -> Self {\n        Self(Arc::new(AtomicU8::new(Self::SENDING)))\n    }\n\n    pub fn receive_only() -> Self {\n        Self(Arc::new(AtomicU8::new(Self::RECEIVING)))\n    }\n\n    pub fn bidirection() -> Self {\n        Self(Arc::new(AtomicU8::new(Self::SENDING | Self::RECEIVING)))\n    }\n\n    pub fn is_terminated(&self) -> bool {\n        self.0.load(Ordering::Acquire) == 0\n    }\n\n    pub fn shutdown_send(&self) {\n        self.0.fetch_and(!Self::SENDING, Ordering::Release);\n    }\n\n    pub fn shutdown_receive(&self) {\n        self.0.fetch_and(!Self::RECEIVING, Ordering::Release);\n    }\n}\n\n#[derive(Debug, Clone, Deref, DerefMut)]\npub(super) struct Output<TX> {\n    #[deref]\n    #[deref_mut]\n    pub(super) outgoings: BTreeMap<StreamId, (Outgoing<TX>, IOState)>,\n    pub(super) cursor: Option<(StreamId, usize)>,\n}\n\nimpl<TX> Output<TX> {\n    fn new() -> Self {\n        Self {\n            outgoings: BTreeMap::default(),\n            cursor: None,\n        }\n    }\n}\n\n/// ArcOutput wraps a Result: once a quic error occurs, it is replaced with Err.\n/// After a quic error, operations on it are ignored; no QuicError is raised and no\n/// panic occurs, because some async tasks may not have completed yet, and they only\n/// complete after it has been set to Err.\n#[derive(Debug, Clone)]\npub(super) struct ArcOutput<TX>(Arc<Mutex<Result<Output<TX>, QuicError>>>);\n\nimpl<TX> ArcOutput<TX> {\n    pub(super) fn new() -> Self {\n        Self(Arc::new(Mutex::new(Ok(Output::new()))))\n    }\n\n    pub(super) fn streams(&self) -> MutexGuard<'_, Result<Output<TX>, QuicError>> {\n        
self.0.lock().unwrap()\n    }\n\n    pub(super) fn guard(&'_ self) -> Result<ArcOutputGuard<'_, TX>, QuicError> {\n        let streams = self.streams();\n        match streams.as_ref() {\n            Ok(_) => Ok(ArcOutputGuard(streams)),\n            Err(e) => Err(e.clone()),\n        }\n    }\n}\n\npub(super) struct ArcOutputGuard<'a, TX>(MutexGuard<'a, Result<Output<TX>, QuicError>>);\n\nimpl<TX> Deref for ArcOutputGuard<'_, TX> {\n    type Target = Output<TX>;\n\n    fn deref(&self) -> &Self::Target {\n        match self.0.as_ref() {\n            Ok(output) => output,\n            Err(e) => unreachable!(\"output is invalid: {e}\"),\n        }\n    }\n}\n\nimpl<TX> DerefMut for ArcOutputGuard<'_, TX> {\n    fn deref_mut(&mut self) -> &mut Self::Target {\n        match self.0.as_mut() {\n            Ok(output) => output,\n            Err(e) => unreachable!(\"output is invalid: {e}\"),\n        }\n    }\n}\n\nimpl<TX> ArcOutputGuard<'_, TX> {\n    pub(super) fn insert(&mut self, sid: StreamId, outgoing: Outgoing<TX>, io_state: IOState) {\n        self.deref_mut().insert(sid, (outgoing, io_state));\n    }\n\n    pub(super) fn revise_max_stream_data(\n        &self,\n        zero_rtt_rejected: bool,\n        opened_bidi: u64,\n        opened_uni: u64,\n        bidi_snd_wnd_size: u64,\n        uni_snd_wnd_size: u64,\n    ) {\n        self.deref()\n            .iter()\n            .filter(|(sid, _)| {\n                sid.dir() == Dir::Bi && sid.id() < opened_bidi\n                    || sid.dir() == Dir::Uni && sid.id() < opened_uni\n            })\n            .for_each(|(sid, (outgoing, _))| match sid.dir() {\n                Dir::Bi => outgoing.revise_max_stream_data(zero_rtt_rejected, bidi_snd_wnd_size),\n                Dir::Uni => outgoing.revise_max_stream_data(zero_rtt_rejected, uni_snd_wnd_size),\n            });\n    }\n\n    pub(super) fn on_conn_error(&mut self, error: &QuicError) {\n        self.deref()\n            .values()\n            .for_each(|(o, 
_)| o.on_conn_error(error));\n        *self.0 = Err(error.clone());\n    }\n}\n\n/// ArcInput wraps a Result: once a quic error occurs, it is replaced with Err.\n/// After a quic error, operations on it are ignored; no QuicError is raised and no\n/// panic occurs, because some async tasks may not have completed yet, and they only\n/// complete after it has been set to Err.\n#[allow(clippy::type_complexity)]\n#[derive(Debug, Clone)]\npub(super) struct ArcInput<TX>(\n    Arc<Mutex<Result<HashMap<StreamId, (Incoming<TX>, IOState)>, QuicError>>>,\n);\n\nimpl<TX> Default for ArcInput<TX> {\n    fn default() -> Self {\n        Self(Arc::new(Mutex::new(Ok(HashMap::new()))))\n    }\n}\n\nimpl<TX> ArcInput<TX> {\n    #[allow(clippy::type_complexity)]\n    pub(super) fn streams(\n        &self,\n    ) -> MutexGuard<'_, Result<HashMap<StreamId, (Incoming<TX>, IOState)>, QuicError>> {\n        self.0.lock().unwrap()\n    }\n\n    pub(super) fn guard(&self) -> Result<ArcInputGuard<'_, TX>, QuicError> {\n        let guard = self.0.lock().unwrap();\n        match guard.as_ref() {\n            Ok(_) => Ok(ArcInputGuard { inner: guard }),\n            Err(e) => Err(e.clone()),\n        }\n    }\n}\n\n#[allow(clippy::type_complexity)]\npub(super) struct ArcInputGuard<'a, TX> {\n    inner: MutexGuard<'a, Result<HashMap<StreamId, (Incoming<TX>, IOState)>, QuicError>>,\n}\n\nimpl<TX> ArcInputGuard<'_, TX> {\n    pub(super) fn insert(&mut self, sid: StreamId, incoming: Incoming<TX>, io_state: IOState) {\n        match self.inner.as_mut() {\n            Ok(set) => set.insert(sid, (incoming, io_state)),\n            Err(e) => unreachable!(\"input is invalid: {e}\"),\n        };\n    }\n\n    pub(super) fn on_conn_error(&mut self, error: &QuicError) {\n        match self.inner.as_ref() {\n            Ok(set) => set.values().for_each(|(o, _)| o.on_conn_error(error)),\n            Err(e) => unreachable!(\"input is invalid: {e}\"),\n        };\n        *self.inner = Err(error.clone());\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/streams/listener.rs",
    "content": "use std::{\n    collections::VecDeque,\n    future::Future,\n    pin::Pin,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll, Waker, ready},\n};\n\nuse qbase::{\n    error::Error as QuicError,\n    frame::{ResetStreamFrame, io::SendFrame},\n    param::{ArcParameters, ParameterId},\n    sid::StreamId,\n};\n\nuse crate::{\n    recv::{ArcRecver, Reader},\n    send::{ArcSender, Writer},\n};\n\n#[derive(Debug)]\nstruct Listener<TX> {\n    // Streams actively created by the peer\n    #[allow(clippy::type_complexity)]\n    bi_streams: VecDeque<(StreamId, (ArcRecver<TX>, ArcSender<TX>))>,\n    uni_streams: VecDeque<(StreamId, ArcRecver<TX>)>,\n    bi_waker: Option<Waker>,\n    uni_waker: Option<Waker>,\n}\n\nimpl<TX> Listener<TX> {\n    fn new() -> Self {\n        Self {\n            bi_streams: VecDeque::with_capacity(4),\n            uni_streams: VecDeque::with_capacity(2),\n            bi_waker: None,\n            uni_waker: None,\n        }\n    }\n\n    fn push_bi_stream(&mut self, sid: StreamId, stream: (ArcRecver<TX>, ArcSender<TX>)) {\n        self.bi_streams.push_back((sid, stream));\n        if let Some(waker) = self.bi_waker.take() {\n            waker.wake();\n        }\n    }\n\n    fn push_recv_stream(&mut self, sid: StreamId, stream: ArcRecver<TX>) {\n        self.uni_streams.push_back((sid, stream));\n        if let Some(waker) = self.uni_waker.take() {\n            waker.wake();\n        }\n    }\n\n    #[allow(clippy::type_complexity)]\n    fn poll_accept_bi_stream(\n        &mut self,\n        cx: &mut Context<'_>,\n        arc_params: &ArcParameters,\n    ) -> Poll<Result<(StreamId, (Reader<TX>, Writer<TX>)), QuicError>> {\n        let mut params = arc_params.lock_guard()?;\n        let snd_buf_size = match params.get_remote(ParameterId::InitialMaxStreamDataBidiLocal) {\n            Some(value) => value,\n            None => {\n                ready!(params.poll_ready(cx));\n                return self.poll_accept_bi_stream(cx, arc_params);\n           
 }\n        };\n        if let Some((sid, (recver, sender))) = self.bi_streams.pop_front() {\n            sender.update_window(snd_buf_size);\n            // recver.update_window(rcv_buf_size);\n            Poll::Ready(Ok((sid, (Reader::new(recver), Writer::new(sender)))))\n        } else {\n            self.bi_waker = Some(cx.waker().clone());\n            Poll::Pending\n        }\n    }\n\n    fn poll_accept_recv_stream(\n        &mut self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<(StreamId, Reader<TX>), QuicError>> {\n        if let Some((sid, recver)) = self.uni_streams.pop_front() {\n            // recver.update_window(rcv_buf_size);\n            Poll::Ready(Ok((sid, Reader::new(recver))))\n        } else {\n            self.uni_waker = Some(cx.waker().clone());\n            Poll::Pending\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct ArcListener<TX>(Arc<Mutex<Result<Listener<TX>, QuicError>>>);\n\nimpl<TX> ArcListener<TX> {\n    pub(crate) fn new() -> Self {\n        Self(Arc::new(Mutex::new(Ok(Listener::new()))))\n    }\n\n    pub(crate) fn guard(&self) -> Result<ListenerGuard<'_, TX>, QuicError> {\n        let guard = self.0.lock().unwrap();\n        match guard.as_ref() {\n            Ok(_) => Ok(ListenerGuard { inner: guard }),\n            Err(e) => Err(e.clone()),\n        }\n    }\n\n    pub fn accept_bi_stream<'a>(&'a self, params: &'a ArcParameters) -> AcceptBiStream<'a, TX> {\n        AcceptBiStream {\n            listener: self,\n            params,\n        }\n    }\n\n    pub fn accept_uni_stream(&self) -> AcceptUniStream<'_, TX> {\n        AcceptUniStream { listener: self }\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub fn poll_accept_bi_stream(\n        &self,\n        cx: &mut Context<'_>,\n        arc_params: &ArcParameters,\n    ) -> Poll<Result<(StreamId, (Reader<TX>, Writer<TX>)), QuicError>> {\n        match self.0.lock().unwrap().as_mut() {\n            Ok(set) => set.poll_accept_bi_stream(cx, 
arc_params),\n            Err(e) => Poll::Ready(Err(e.clone())),\n        }\n    }\n\n    pub fn poll_accept_uni_stream(\n        &self,\n        cx: &mut Context<'_>,\n    ) -> Poll<Result<(StreamId, Reader<TX>), QuicError>> {\n        match self.0.lock().unwrap().as_mut() {\n            Ok(set) => set.poll_accept_recv_stream(cx),\n            Err(e) => Poll::Ready(Err(e.clone())),\n        }\n    }\n}\n\npub(crate) struct ListenerGuard<'a, TX> {\n    inner: MutexGuard<'a, Result<Listener<TX>, QuicError>>,\n}\n\nimpl<TX> ListenerGuard<'_, TX>\nwhere\n    TX: SendFrame<ResetStreamFrame> + Clone + Send + 'static,\n{\n    pub(crate) fn push_bi_stream(&mut self, sid: StreamId, stream: (ArcRecver<TX>, ArcSender<TX>)) {\n        match self.inner.as_mut() {\n            Ok(set) => set.push_bi_stream(sid, stream),\n            Err(e) => unreachable!(\"listener is invalid: {e}\"),\n        }\n    }\n\n    pub(crate) fn push_uni_stream(&mut self, sid: StreamId, stream: ArcRecver<TX>) {\n        match self.inner.as_mut() {\n            Ok(set) => set.push_recv_stream(sid, stream),\n            Err(e) => unreachable!(\"listener is invalid: {e}\"),\n        }\n    }\n\n    pub(crate) fn on_conn_error(&mut self, e: &QuicError) {\n        match self.inner.as_mut() {\n            Ok(set) => {\n                if let Some(waker) = set.bi_waker.take() {\n                    waker.wake();\n                }\n                if let Some(waker) = set.uni_waker.take() {\n                    waker.wake();\n                }\n            }\n            Err(e) => unreachable!(\"listener is invalid: {e}\"),\n        };\n        *self.inner = Err(e.clone());\n    }\n}\n\n/// Future to accept a bidirectional stream.\n///\n/// This future is created by the `accept_bi_stream` method of `QuicConnection`.\n///\n/// When the peer creates a new bidirectional stream, the future will resolve with a [`Reader`] and\n/// a [`Writer`] to read and write data on the stream.\n#[derive(Debug, Clone)]\npub struct AcceptBiStream<'a, TX> {\n    listener: &'a ArcListener<TX>,\n    params: &'a ArcParameters,\n}\n\nimpl<TX> Future for AcceptBiStream<'_, TX>\nwhere\n    TX: SendFrame<ResetStreamFrame> + Clone + Send + 'static,\n{\n    type Output = Result<(StreamId, (Reader<TX>, Writer<TX>)), QuicError>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.listener.poll_accept_bi_stream(cx, self.params)\n    }\n}\n\n/// Future to accept a unidirectional stream.\n///\n/// This future is created by the `accept_uni_stream` method of `QuicConnection`.\n///\n/// When the peer creates a new unidirectional stream, the future will resolve with a [`Reader`] to\n/// read data on the stream.\n#[derive(Debug, Clone)]\npub struct AcceptUniStream<'l, TX> {\n    listener: &'l ArcListener<TX>,\n}\n\nimpl<TX> Future for AcceptUniStream<'_, TX>\nwhere\n    TX: SendFrame<ResetStreamFrame> + Clone + Send + 'static,\n{\n    type Output = Result<(StreamId, Reader<TX>), QuicError>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.listener.poll_accept_uni_stream(cx)\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/streams/raw.rs",
    "content": "use std::{\n    sync::{\n        Arc,\n        atomic::{AtomicBool, Ordering::*},\n    },\n    task::{Context, Poll, ready},\n};\n\nuse bytes::{BufMut, Bytes};\nuse qbase::{\n    error::{Error, ErrorKind, QuicError},\n    flow::ArcSendControler,\n    frame::{\n        DataBlockedFrame, FrameType, GetFrameType, ResetStreamFrame,\n        STREAM_FRAME_MAX_ENCODING_SIZE, StreamCtlFrame, StreamFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::tx::{ArcSendWakers, Signals},\n    packet::{Package, PacketContent},\n    param::{ArcParameters, ParameterId, core::Parameters},\n    role::Role,\n    sid::{\n        ControlStreamsConcurrency, Dir, StreamId, StreamIds,\n        remote_sid::{AcceptSid, ExceedLimitError},\n    },\n    varint::VarInt,\n};\n\nuse super::{\n    Ext,\n    io::{ArcInput, ArcOutput, IOState},\n    listener::{AcceptBiStream, AcceptUniStream, ArcListener},\n};\nuse crate::{\n    recv::{ArcRecver, Incoming, Reader},\n    send::{ArcSender, Outgoing, Writer},\n};\n\n/// Manage all streams in the connection, send and receive frames, handle frame loss, and acknowledge.\n///\n/// This struct doesn't truly send and receive frames; it provides interfaces to generate frames\n/// that will be sent to the peer, to receive frames, and to handle frame loss and acknowledgment.\n///\n/// [`Outgoing`], [`Incoming`], [`Writer`] and [`Reader`] don't truly send and receive frames either.\n///\n/// # Send frames\n///\n/// ## Stream frame\n///\n/// When the application wants to send data to the peer, it will call the [`write`] method on [`Writer`]\n/// to write data to the [`SendBuf`].\n///\n/// The protocol layer will call [`try_load_data_into`] to read data from the streams into stream frames and\n/// write the frames into the quic packet.\n///\n/// ## Stream control frame\n///\n/// Unlike the stream frame, the stream control frame is much smaller in size.\n///\n/// The struct has a generic type `T`, which must implement the [`SendFrame`] trait. 
The trait has\n/// a method [`send_frame`], which will be called to send the stream control frame to the peer; see\n/// [`SendFrame`] for more details.\n///\n/// # Receive frames, handle frame loss and acknowledge\n///\n/// Received frames, lost frames, and acknowledgments will be delivered to the corresponding method.\n///\n/// | method on [`DataStreams`]                                | corresponding method                               |\n/// | -------------------------------------------------------- | -------------------------------------------------- |\n/// | [`recv_data`]                                            | [`Incoming::recv_data`]                            |\n/// | [`recv_stream_control`] ([`RESET_STREAM frame`])         | [`Incoming::recv_reset`]                           |\n/// | [`recv_stream_control`] ([`STOP_SENDING frame`])         | [`Outgoing::be_stopped`]                           |\n/// | [`recv_stream_control`] ([`MAX_STREAM_DATA frame`])      | [`Outgoing::update_window`]                        |\n/// | [`recv_stream_control`] ([`STREAM_DATA_BLOCKED frame`])  | none (the frame will be ignored)                   |\n/// | [`recv_stream_control`] ([`MAX_STREAMS frame`])          | [`ArcLocalStreamIds::recv_max_streams_frame`]      |\n/// | [`recv_stream_control`] ([`STREAMS_BLOCKED frame`])      | [`ArcRemoteStreamIds::recv_streams_blocked_frame`] |\n/// | [`on_data_acked`]                                        | [`Outgoing::on_data_acked`]                        |\n/// | [`may_loss_data`]                                        | [`Outgoing::may_loss_data`]                        |\n/// | [`on_reset_acked`]                                       | [`Outgoing::on_reset_acked`]                       |\n///\n/// # Create and accept streams\n///\n/// Stream frames and stream control frames have the function of creating streams. 
If a stream frame is\n/// received but the corresponding stream has not been created, a stream will be created passively.\n///\n/// [`AcceptBiStream`] and [`AcceptUniStream`] are provided to the application layer to `accept` a\n/// stream (obtain a passively created stream). These futures will be resolved when a stream is created\n/// by the peer.\n///\n/// Alternatively, sending a stream frame or a stream control frame will create a stream actively.\n/// [`OpenBiStream`] and [`OpenUniStream`] are provided to the application layer to `open` a stream.\n/// These futures will be resolved when the connection is established.\n///\n/// [`write`]: tokio::io::AsyncWriteExt::write\n/// [`SendBuf`]: crate::send::SendBuf\n/// [`send_frame`]: SendFrame::send_frame\n/// [`try_load_data_into`]: DataStreams::try_load_data_into\n/// [`recv_data`]: DataStreams::recv_data\n/// [`recv_stream_control`]: DataStreams::recv_stream_control\n/// [`on_data_acked`]: DataStreams::on_data_acked\n/// [`may_loss_data`]: DataStreams::may_loss_data\n/// [`on_reset_acked`]: DataStreams::on_reset_acked\n/// [`RESET_STREAM frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-reset_stream-frame\n/// [`STOP_SENDING frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-stop_sending-frames\n/// [`MAX_STREAM_DATA frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-max_stream_data-frame\n/// [`MAX_STREAMS frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-max_streams-frame\n/// [`STREAM_DATA_BLOCKED frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-stream_data_blocked-frame\n/// [`STREAMS_BLOCKED frame`]: https://www.rfc-editor.org/rfc/rfc9000.html#name-streams_blocked-frame\n/// [`OpenBiStream`]: crate::streams::OpenBiStream\n/// [`OpenUniStream`]: crate::streams::OpenUniStream\n/// [`ArcLocalStreamIds::recv_max_streams_frame`]: qbase::sid::ArcLocalStreamIds::recv_max_streams_frame\n/// [`ArcRemoteStreamIds::recv_streams_blocked_frame`]: 
qbase::sid::ArcRemoteStreamIds::recv_streams_blocked_frame\n///\n#[derive(Debug)]\npub struct DataStreams<TX> {\n    // This queue is shared with the frame_queue of the transmitter in the space, to make it easy to write frames into the transmitter\n    ctrl_frames: TX,\n\n    role: Role,\n    stream_ids: StreamIds<Ext<TX>, Ext<TX>>,\n    // The write ends of all streams; to send data, it must be requested from these streams\n    output: ArcOutput<Ext<TX>>,\n    // The read ends of all streams; received data is delivered to these streams\n    input: ArcInput<Ext<TX>>,\n    // Streams actively created by the peer\n    listener: ArcListener<Ext<TX>>,\n    tls_fin: AtomicBool,\n    tx_wakers: ArcSendWakers,\n\n    initial_max_stream_data_bidi_local: u64,\n    initial_max_stream_data_bidi_remote: u64,\n    initial_max_stream_data_uni: u64,\n\n    metrics: Option<qbase::metric::ArcConnectionMetrics>,\n}\n\nfn wrapper_error(fty: FrameType) -> impl FnOnce(ExceedLimitError) -> QuicError {\n    move |e| QuicError::new(ErrorKind::StreamLimit, fty.into(), e.to_string())\n}\n\nimpl<TX> DataStreams<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    /// Try to load data from streams into the `packet`,\n    /// with connection-level flow control limiting the max size of fresh data.\n    /// Returns `Ok(())` if data was loaded, or the `Signals` to wait for otherwise.\n    fn try_load_data_into_once<P, FTX>(\n        &self,\n        packet: &mut P,\n        flow_ctrl: &ArcSendControler<FTX>,\n        zero_rtt: bool,\n    ) -> Result<(), Signals>\n    where\n        P: BufMut + ?Sized,\n        for<'a> (StreamFrame, &'a [Bytes]): Package<P>,\n        FTX: SendFrame<DataBlockedFrame>,\n    {\n        // todo: use core::range instead in rust 2024\n        use core::ops::Bound::*;\n\n        if packet.remaining_mut() < STREAM_FRAME_MAX_ENCODING_SIZE {\n            return Err(Signals::CONGESTION);\n        }\n\n        let mut guard = self.output.streams();\n        let output = guard.as_mut().map_err(|_| Signals::empty())?; // connection closed\n\n        if zero_rtt && self.tls_fin.load(Acquire) {\n            return Err(Signals::TLS_FIN); // should load 1rtt\n        }\n\n        let Ok(mut credit) = 
flow_ctrl.credit(packet.remaining_mut()) else {\n            return Err(Signals::empty()); // connection closed\n        };\n\n        fn try_load_data_into_once<'s, P, TX: 's + Clone>(\n            streams: impl Iterator<Item = (StreamId, &'s (Outgoing<TX>, IOState), usize)>,\n            packet: &mut P,\n            flow_limit: usize,\n        ) -> Result<(StreamId, usize, usize), Signals>\n        where\n            P: BufMut + ?Sized,\n            for<'a> (StreamFrame, &'a [Bytes]): Package<P>,\n        {\n            let mut signals = Signals::TRANSPORT;\n            for (sid, (outgoing, _ios), tokens) in streams {\n                match outgoing.try_load_data_into(packet, sid, flow_limit, tokens) {\n                    Ok((data_len, is_fresh)) => {\n                        let remain_tokens = tokens - data_len;\n                        let fresh_bytes = if is_fresh { data_len } else { 0 };\n                        return Ok((sid, remain_tokens, fresh_bytes));\n                    }\n                    Err(s) => signals |= s,\n                }\n            }\n            Err(signals)\n        }\n\n        // Not every stream is allowed to send: e.g., if 0rtt is rejected, max_streams can shrink back, and streams beyond max_streams are then not allowed to send\n        let remote_role = self.stream_ids.remote.role();\n        let max_streams_bidi = self.stream_ids.local.opened_streams(Dir::Bi);\n        let max_streams_uni = self.stream_ids.local.opened_streams(Dir::Uni);\n        let stream_allowed = |sid: &StreamId| {\n            sid.role() == remote_role\n                || sid.dir() == Dir::Bi && sid.id() < max_streams_bidi\n                || sid.dir() == Dir::Uni && sid.id() < max_streams_uni\n        };\n\n        // These tokens come from a token bucket algorithm: for fairness among streams, tokens are issued to each stream periodically and do not accumulate\n        // Streams take turns packing data to send according to the tokens issued by the token bucket algorithm\n        const DEFAULT_TOKENS: usize = 4096;\n        let (sid, remain_tokens, fresh_bytes) = match &output.cursor {\n            // rev([..=sid]) + rev([sid+1..])\n            Some((sid, tokens)) if *tokens == 0 => 
try_load_data_into_once(\n                (output.outgoings.range(..=sid).rev())\n                    .chain(output.outgoings.range((Excluded(sid), Unbounded)).rev())\n                    .map(|(sid, outgoing)| (*sid, outgoing, DEFAULT_TOKENS))\n                    .filter(|(sid, ..)| stream_allowed(sid)),\n                packet,\n                credit.available(),\n            ),\n            // [sid] + rev([..sid]) + rev([sid+1..])\n            Some((sid, tokens)) => try_load_data_into_once(\n                Option::into_iter(\n                    output\n                        .outgoings\n                        .get(sid)\n                        .map(|outgoing| (*sid, outgoing, *tokens)),\n                )\n                .chain(\n                    (output.outgoings.range(..sid).rev())\n                        .chain(output.outgoings.range((Excluded(sid), Unbounded)).rev())\n                        .map(|(sid, outgoing)| (*sid, outgoing, DEFAULT_TOKENS)),\n                )\n                .filter(|(sid, ..)| stream_allowed(sid)),\n                packet,\n                credit.available(),\n            ),\n            // rev([..])\n            None => try_load_data_into_once(\n                (output.outgoings.range(..).rev())\n                    .map(|(sid, outgoing)| (*sid, outgoing, DEFAULT_TOKENS))\n                    .filter(|(sid, ..)| stream_allowed(sid)),\n                packet,\n                credit.available(),\n            ),\n        }?;\n\n        output.cursor = Some((sid, remain_tokens));\n        credit.post_sent(fresh_bytes);\n\n        // Update metrics when fresh data is sent\n        if fresh_bytes > 0\n            && let Some(metrics) = &self.metrics\n        {\n            metrics.on_data_sent(fresh_bytes as u64);\n        }\n\n        Ok(())\n    }\n\n    #[inline]\n    pub fn package(\n        self: &Arc<Self>,\n        flow_ctrl: ArcSendControler<TX>,\n        zero_rtt: bool,\n    ) -> StreamFramePackages<TX>\n    where\n 
       TX: SendFrame<DataBlockedFrame>,\n    {\n        StreamFramePackages {\n            data_stream: self.clone(),\n            flow_ctrl,\n            zero_rtt,\n        }\n    }\n\n    /// Try to load data from streams into the packet.\n    ///\n    /// # Fairness\n    ///\n    /// It's fair between streams.\n    ///\n    /// We have implemented a token bucket algorithm, and this method will read the data of each stream\n    /// sequentially. Starting from the first stream, when a stream exhausts its tokens (default is 4096,\n    /// depending on the priority of the stream), or there is no data to send, the method will move to\n    /// the next stream, and so on.\n    ///\n    /// # Flow control\n    ///\n    /// QUIC employs a limit-based flow control scheme where a receiver advertises the limit of total\n    /// bytes it is prepared to receive on a given stream or for the entire connection. This leads to\n    /// two levels of data flow control in QUIC, stream level and connection level.\n    ///\n    /// Stream-level flow control is limited by the [`write`] calls on [`Writer`]: if the application\n    /// wants to write more data than the stream's flow control limit, the [`write`] call will be\n    /// blocked until the sending window is updated.\n    ///\n    /// Connection-level flow control is limited by the `flow_ctrl` parameter of this method.\n    /// The amount of new data (never sent before) read from the streams is less than or equal to\n    /// the available credit.\n    ///\n    /// # Returns\n    ///\n    /// Returns `Ok(())` if data was loaded into the packet, or an `Err(Signals)` describing\n    /// why no data could be loaded.\n    ///\n    /// [`write`]: tokio::io::AsyncWriteExt::write\n    pub fn try_load_data_into<P, FTX>(\n        &self,\n        packet: &mut P,\n        
flow_ctrl: &ArcSendControler<FTX>,\n        zero_rtt: bool,\n    ) -> Result<(), Signals>\n    where\n        P: BufMut + ?Sized,\n        for<'a> (StreamFrame, &'a [Bytes]): Package<P>,\n        FTX: SendFrame<DataBlockedFrame>,\n    {\n        use core::ops::ControlFlow::*;\n\n        // Take the single latest error (if any)\n        let (Continue(result) | Break(result)) =\n            core::iter::from_fn(|| Some(self.try_load_data_into_once(packet, flow_ctrl, zero_rtt)))\n                .try_fold(Err(Signals::empty()), |result, once| match (result, once) {\n                    (_, Ok(())) => Continue(Ok(())),\n                    (Ok(()), Err(_no_more)) => Break(Ok(())),\n                    (Err(_), Err(signals)) => Break(Err(signals)),\n                });\n        result\n    }\n\n    /// Called when the stream frame is acked.\n    ///\n    /// Actually calls the [`Outgoing::on_data_acked`] method of the corresponding stream.\n    pub fn on_data_acked(&self, frame: StreamFrame) {\n        if let Ok(set) = self.output.streams().as_mut() {\n            let mut is_all_rcvd = false;\n            if let Some((o, s)) = set.get(&frame.stream_id()) {\n                is_all_rcvd = o.on_data_acked(&frame);\n\n                // Update metrics when data is acknowledged\n                let acked_len = frame.range().end - frame.range().start;\n                if acked_len > 0\n                    && let Some(metrics) = &self.metrics\n                {\n                    metrics.on_data_acked(acked_len);\n                }\n\n                if is_all_rcvd {\n                    s.shutdown_send();\n                    if s.is_terminated() {\n                        self.stream_ids.remote.on_end_of_stream(frame.stream_id());\n                    }\n                }\n            }\n\n            if is_all_rcvd {\n                set.remove(&frame.stream_id());\n            }\n        }\n    }\n\n    /// Called when the stream frame may be lost.\n    ///\n    /// Actually calls the 
[`Outgoing::may_loss_data`] method of the corresponding stream.\n    pub fn may_loss_data(&self, stream_frame: &StreamFrame) {\n        if let Some((o, _s)) = self\n            .output\n            .streams()\n            .as_mut()\n            .ok()\n            .and_then(|set| set.get(&stream_frame.stream_id()))\n        {\n            o.may_loss_data(stream_frame);\n        }\n    }\n\n    /// Called when the stream reset frame is acked.\n    ///\n    /// Actually calls the [`Outgoing::on_reset_acked`] method of the corresponding stream.\n    pub fn on_reset_acked(&self, reset_frame: ResetStreamFrame) {\n        if let Ok(set) = self.output.streams().as_mut()\n            && let Some((o, s)) = set.remove(&reset_frame.stream_id())\n        {\n            o.on_reset_acked(reset_frame.stream_id());\n            s.shutdown_send();\n            if s.is_terminated() {\n                self.stream_ids\n                    .remote\n                    .on_end_of_stream(reset_frame.stream_id());\n            }\n        }\n        // If the stream is bidirectional, the receiving part manages its own ending independently; in fact, the upper application decides whether the receiving part ends at the same time\n    }\n\n    /// Called when a stream frame from the peer is received locally.\n    ///\n    /// If the corresponding stream does not exist, `accept` the stream.\n    ///\n    /// Actually calls the [`Incoming::recv_data`] method of the corresponding stream.\n    pub fn recv_data(\n        &self,\n        (stream_frame, body): (StreamFrame, bytes::Bytes),\n    ) -> Result<usize, QuicError> {\n        let sid = stream_frame.stream_id();\n        // The peer must be the sender to send this frame\n        if sid.role() != self.role {\n            // A sid from the peer: check whether ids were skipped, and create the skipped streams\n            self.try_accept_sid(sid)\n                .map_err(wrapper_error(stream_frame.frame_type()))?;\n        } else {\n            // A sid of ours: it must be a bidirectional stream to receive data from the peer, otherwise it is an error\n            if sid.dir() == Dir::Uni {\n                return Err(QuicError::new(\n                    ErrorKind::StreamState,\n                    stream_frame.frame_type().into(),\n             
       format!(\"local {sid} cannot receive STREAM_FRAME\"),\n                ));\n            }\n        }\n\n        if let Ok(set) = self.input.streams().as_mut()\n            && let Some((incoming, s)) = set.get(&sid)\n        {\n            let (is_into_rcvd, fresh_data) = incoming.recv_data(stream_frame, body.clone())?;\n            if is_into_rcvd {\n                // Data fully received; ignore any subsequent ResetStreamFrame\n                s.shutdown_receive();\n                if s.is_terminated() {\n                    self.stream_ids.remote.on_end_of_stream(sid);\n                }\n                set.remove(&sid);\n            }\n            return Ok(fresh_data);\n        }\n        Ok(0)\n    }\n\n    /// Called when a stream control frame from the peer is received locally.\n    ///\n    /// If the corresponding stream does not exist, `accept` the stream first.\n    ///\n    /// Actually calls the corresponding method of the corresponding stream for the corresponding frame type.\n    pub fn recv_stream_control(\n        &self,\n        stream_ctl_frame: StreamCtlFrame,\n    ) -> Result<usize, QuicError> {\n        let mut sync_fresh_data = 0;\n        match stream_ctl_frame {\n            StreamCtlFrame::ResetStream(reset) => {\n                let sid = reset.stream_id();\n                // The peer must be the sender to send this frame\n                if sid.role() != self.role {\n                    self.try_accept_sid(sid)\n                        .map_err(wrapper_error(reset.frame_type()))?;\n                } else {\n                    // A stream we created must be bidirectional for the peer to send ResetStream, otherwise it is an error\n                    if sid.dir() == Dir::Uni {\n                        return Err(QuicError::new(\n                            ErrorKind::StreamState,\n                            reset.frame_type().into(),\n                            format!(\"local {sid} cannot receive RESET_STREAM frame\"),\n                        ));\n                    }\n                }\n                if let Ok(set) = 
self.input.streams().as_mut()\n                    && let Some((incoming, s)) = set.remove(&sid)\n                {\n                    sync_fresh_data = incoming.recv_reset(reset)?;\n                    s.shutdown_receive();\n                    if s.is_terminated() {\n                        self.stream_ids.remote.on_end_of_stream(reset.stream_id());\n                    }\n                }\n            }\n            StreamCtlFrame::StopSending(stop_sending) => {\n                let sid = stop_sending.stream_id();\n                // Only the receiving side of the stream may send this frame\n                if sid.role() != self.role {\n                    // For a peer-created unidirectional stream the receiving side is local, so the peer must not send StopSendingFrame\n                    if sid.dir() == Dir::Uni {\n                        return Err(QuicError::new(\n                            ErrorKind::StreamState,\n                            stop_sending.frame_type().into(),\n                            format!(\"remote {sid} must not send STOP_SENDING_FRAME\"),\n                        ));\n                    }\n                    self.try_accept_sid(sid)\n                        .map_err(wrapper_error(stop_sending.frame_type()))?;\n                }\n\n                if let Some(final_size) = self\n                    .output\n                    .streams()\n                    .as_mut()\n                    .ok()\n                    .and_then(|set| set.get(&sid))\n                    .and_then(|(outgoing, _s)| outgoing.be_stopped(stop_sending.app_err_code()))\n                {\n                    self.ctrl_frames.send_frame([StreamCtlFrame::ResetStream(\n                        stop_sending.reset_stream(VarInt::from_u64(final_size).unwrap()),\n                    )]);\n                }\n            }\n            StreamCtlFrame::MaxStreamData(max_stream_data) => {\n                let sid = max_stream_data.stream_id();\n                // Only the receiving side of the stream may send this frame\n                if sid.role() != self.role {\n                    // For a peer-created unidirectional stream the receiving side is local, so the peer must not send MaxStreamData\n                    if sid.dir() == Dir::Uni {\n                        return Err(QuicError::new(\n                            ErrorKind::StreamState,\n                            max_stream_data.frame_type().into(),\n                            format!(\"remote {sid} must not send MAX_STREAM_DATA_FRAME\"),\n                        ));\n                    }\n                    self.try_accept_sid(sid)\n                        .map_err(wrapper_error(max_stream_data.frame_type()))?;\n                }\n                if let Some((outgoing, _s)) = self\n                    .output\n                    .streams()\n                    .as_ref()\n                    .ok()\n                    .and_then(|set| set.get(&sid))\n                {\n                    outgoing.update_window(max_stream_data.max_stream_data());\n                }\n            }\n            StreamCtlFrame::StreamDataBlocked(stream_data_blocked) => {\n                let sid = stream_data_blocked.stream_id();\n                // Only the sending side of the stream may send this frame\n                if sid.role() != self.role {\n                    self.try_accept_sid(sid)\n                        .map_err(wrapper_error(stream_data_blocked.frame_type()))?;\n                } else {\n                    // A locally-created stream must be bidirectional for the peer to be the sender and emit StreamDataBlocked; otherwise it is an error\n                    if sid.dir() == Dir::Uni {\n                        return Err(QuicError::new(\n                            ErrorKind::StreamState,\n                            stream_data_blocked.frame_type().into(),\n                            format!(\"local {sid} cannot receive STREAM_DATA_BLOCKED_FRAME\"),\n                        ));\n                    }\n                }\n                // Serves only as a notification. For a sender that enlarges the window proactively this frame is of little use (perhaps a hint to grow the buffer further); for one that enlarges it passively, it is useful\n            }\n            StreamCtlFrame::MaxStreams(max_streams) => {\n                // Mainly updates how many uni/bidirectional streams we may create\n                _ = self.stream_ids.local.recv_frame(max_streams);\n            
}\n            StreamCtlFrame::StreamsBlocked(streams_blocked) => {\n                // Under some stream concurrency strategies, receiving this frame may trigger a MaxStreams update\n                _ = self.stream_ids.remote.recv_frame(streams_blocked);\n            }\n        }\n        Ok(sync_fresh_data)\n    }\n\n    /// Called when a connection error occurs.\n    ///\n    /// After this method is called, reads on [`Reader`] or writes on [`Writer`] will return an error,\n    /// and the resources will be released.\n    pub fn on_conn_error(&self, error: &Error) {\n        let mut output = match self.output.guard() {\n            Ok(out) => out,\n            Err(_) => return,\n        };\n        let mut input = match self.input.guard() {\n            Ok(input) => input,\n            Err(_) => return,\n        };\n        let mut listener = match self.listener.guard() {\n            Ok(listener) => listener,\n            Err(_) => return,\n        };\n\n        output.on_conn_error(error);\n        input.on_conn_error(error);\n        listener.on_conn_error(error);\n    }\n}\n\npub struct StreamFramePackages<TX> {\n    data_stream: Arc<DataStreams<TX>>,\n    flow_ctrl: ArcSendControler<TX>,\n    zero_rtt: bool,\n}\n\nimpl<TX, P> Package<P> for StreamFramePackages<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + SendFrame<DataBlockedFrame> + Clone + Send + 'static,\n    P: BufMut + ?Sized,\n    for<'a> (StreamFrame, &'a [Bytes]): Package<P>,\n{\n    #[inline]\n    fn dump(&mut self, packet: &mut P) -> Result<PacketContent, Signals> {\n        self.data_stream\n            .try_load_data_into_once(packet, &self.flow_ctrl, self.zero_rtt)?;\n        Ok(PacketContent::EffectivePayload)\n    }\n}\n\nimpl<TX> DataStreams<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    pub(super) fn new<LR, RR>(\n        role: Role,\n        local_params: &Parameters<LR>,\n        remote_params: &Parameters<RR>,\n        ctrl: Box<dyn ControlStreamsConcurrency>,\n        ctrl_frames: TX,\n        tx_wakers: ArcSendWakers,\n    
    metrics: Option<qbase::metric::ArcConnectionMetrics>,\n    ) -> Self {\n        use ParameterId::*;\n        Self {\n            role,\n            stream_ids: StreamIds::new(\n                role,\n                local_params\n                    .get::<u64>(InitialMaxStreamsBidi)\n                    .expect(\"unreachable: default value will be got if the value unset\"),\n                local_params\n                    .get::<u64>(InitialMaxStreamsUni)\n                    .expect(\"unreachable: default value will be got if the value unset\"),\n                remote_params\n                    .get::<u64>(InitialMaxStreamsBidi)\n                    .expect(\"unreachable: default value will be got if the value unset\"),\n                remote_params\n                    .get::<u64>(InitialMaxStreamsUni)\n                    .expect(\"unreachable: default value will be got if the value unset\"),\n                Ext(ctrl_frames.clone()),\n                ctrl,\n                tx_wakers.clone(),\n            ),\n            output: ArcOutput::new(),\n            input: ArcInput::default(),\n            listener: ArcListener::new(),\n            ctrl_frames,\n            tls_fin: AtomicBool::new(false),\n            tx_wakers,\n            initial_max_stream_data_bidi_local: local_params\n                .get::<u64>(ParameterId::InitialMaxStreamDataBidiLocal)\n                .expect(\"unreachable: default value will be got if the value unset\"),\n            initial_max_stream_data_bidi_remote: local_params\n                .get::<u64>(ParameterId::InitialMaxStreamDataBidiRemote)\n                .expect(\"unreachable: default value will be got if the value unset\"),\n            initial_max_stream_data_uni: local_params\n                .get::<u64>(ParameterId::InitialMaxStreamDataUni)\n                .expect(\"unreachable: default value will be got if the value unset\"),\n            metrics,\n        }\n    }\n\n    pub fn revise_params<Role>(&self, 
zero_rtt_rejected: bool, remote_params: &Parameters<Role>) {\n        if let Ok(output) = self.output.guard() {\n            // enter 1rtt state, old state must be 0rtt\n            self.tls_fin.store(true, Release);\n\n            let opened_bidi = self.stream_ids.local.opened_streams(Dir::Bi);\n            let opened_uni = self.stream_ids.local.opened_streams(Dir::Uni);\n            let opened_bidi_snd_wnd_size = remote_params\n                .get::<u64>(ParameterId::InitialMaxStreamDataBidiRemote)\n                .expect(\"unreachable: default value will be got if the value unset\");\n            let opened_uni_snd_wnd_size = remote_params\n                .get::<u64>(ParameterId::InitialMaxStreamDataUni)\n                .expect(\"unreachable: default value will be got if the value unset\");\n            output.revise_max_stream_data(\n                zero_rtt_rejected,\n                opened_bidi,\n                opened_uni,\n                opened_bidi_snd_wnd_size,\n                opened_uni_snd_wnd_size,\n            );\n            let max_streams_bidi = remote_params\n                .get::<u64>(ParameterId::InitialMaxStreamsBidi)\n                .expect(\"unreachable: default value will be got if the value unset\");\n            let max_streams_uni = remote_params\n                .get::<u64>(ParameterId::InitialMaxStreamsUni)\n                .expect(\"unreachable: default value will be got if the value unset\");\n            self.stream_ids.local.revise_max_streams(\n                zero_rtt_rejected,\n                max_streams_bidi,\n                max_streams_uni,\n            );\n        }\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub(super) fn poll_open_bi_stream(\n        &self,\n        cx: &mut Context<'_>,\n        arc_params: &ArcParameters,\n    ) -> Poll<Result<Option<(StreamId, (Reader<Ext<TX>>, Writer<Ext<TX>>))>, Error>> {\n        let mut output = self.output.guard()?;\n        let mut input = self.input.guard()?;\n   
     let mut params = arc_params.lock_guard()?;\n\n        let snd_buf_size = match params.remembered() {\n            Some(remembered) => remembered\n                .get(ParameterId::InitialMaxStreamDataBidiRemote)\n                .expect(\"unreachable: default value will be got if the value unset\"),\n            None => match params.get_remote(ParameterId::InitialMaxStreamDataBidiRemote) {\n                Some(value) => value,\n                None => {\n                    ready!(params.poll_ready(cx));\n                    // tail recursion should be optimized by compiler\n                    return self.poll_open_bi_stream(cx, arc_params);\n                }\n            },\n        };\n\n        let Some(sid) = ready!(self.stream_ids.local.poll_alloc_sid(cx, Dir::Bi)) else {\n            return Poll::Ready(Ok(None));\n        };\n\n        let arc_sender = self.create_sender(sid, snd_buf_size);\n        let arc_recver = self.create_recver(sid, self.initial_max_stream_data_bidi_local);\n        let io_state = IOState::bidirection();\n        output.insert(sid, Outgoing::new(arc_sender.clone()), io_state.clone());\n        input.insert(sid, Incoming::new(arc_recver.clone()), io_state);\n        Poll::Ready(Ok(Some((\n            sid,\n            (Reader::new(arc_recver), Writer::new(arc_sender)),\n        ))))\n    }\n\n    #[allow(clippy::type_complexity)]\n    pub(super) fn poll_open_uni_stream(\n        &self,\n        cx: &mut Context<'_>,\n        arc_params: &ArcParameters,\n    ) -> Poll<Result<Option<(StreamId, Writer<Ext<TX>>)>, Error>> {\n        let mut output = self.output.guard()?;\n        let mut params = arc_params.lock_guard()?;\n\n        let snd_buf_size = match params.remembered() {\n            Some(remembered) => remembered\n                .get(ParameterId::InitialMaxStreamDataUni)\n                .expect(\"unreachable: default value will be got if the value unset\"),\n            None => match 
params.get_remote(ParameterId::InitialMaxStreamDataUni) {\n                Some(value) => value,\n                None => {\n                    ready!(params.poll_ready(cx));\n                    // tail recursion should be optimized by compiler\n                    return self.poll_open_uni_stream(cx, arc_params);\n                }\n            },\n        };\n\n        let Some(sid) = ready!(self.stream_ids.local.poll_alloc_sid(cx, Dir::Uni)) else {\n            return Poll::Ready(Ok(None));\n        };\n\n        let arc_sender = self.create_sender(sid, snd_buf_size);\n        let io_state = IOState::send_only();\n        output.insert(sid, Outgoing::new(arc_sender.clone()), io_state);\n        Poll::Ready(Ok(Some((sid, Writer::new(arc_sender)))))\n    }\n\n    pub(super) fn accept_bi<'a>(\n        &'a self,\n        params: &'a ArcParameters,\n    ) -> AcceptBiStream<'a, Ext<TX>> {\n        self.listener.accept_bi_stream(params)\n    }\n\n    pub(super) fn accept_uni(&self) -> AcceptUniStream<'_, Ext<TX>> {\n        self.listener.accept_uni_stream()\n    }\n\n    fn try_accept_sid(&self, sid: StreamId) -> Result<(), ExceedLimitError> {\n        match sid.dir() {\n            Dir::Bi => self.try_accept_bi_sid(sid),\n            Dir::Uni => self.try_accept_uni_sid(sid),\n        }\n    }\n\n    fn try_accept_bi_sid(&self, sid: StreamId) -> Result<(), ExceedLimitError> {\n        let Ok(mut output) = self.output.guard() else {\n            return Ok(());\n        };\n        let Ok(mut input) = self.input.guard() else {\n            return Ok(());\n        };\n        let Ok(mut listener) = self.listener.guard() else {\n            return Ok(());\n        };\n        let result = self.stream_ids.remote.try_accept_sid(sid)?;\n\n        match result {\n            AcceptSid::Old => Ok(()),\n            AcceptSid::New(need_create) => {\n                for sid in need_create {\n                    let arc_recver =\n                        
self.create_recver(sid, self.initial_max_stream_data_bidi_remote);\n                    // buf_size will be revised by Listener::poll_accept_bi_stream\n                    let arc_sender = self.create_sender(sid, 0);\n                    let io_state = IOState::bidirection();\n                    input.insert(sid, Incoming::new(arc_recver.clone()), io_state.clone());\n                    output.insert(sid, Outgoing::new(arc_sender.clone()), io_state);\n                    listener.push_bi_stream(sid, (arc_recver, arc_sender));\n                }\n                Ok(())\n            }\n        }\n    }\n\n    fn try_accept_uni_sid(&self, sid: StreamId) -> Result<(), ExceedLimitError> {\n        let mut input = match self.input.guard() {\n            Ok(input) => input,\n            Err(_) => return Ok(()),\n        };\n        let mut listener = match self.listener.guard() {\n            Ok(listener) => listener,\n            Err(_) => return Ok(()),\n        };\n        let result = self.stream_ids.remote.try_accept_sid(sid)?;\n        match result {\n            AcceptSid::Old => Ok(()),\n            AcceptSid::New(need_create) => {\n                for sid in need_create {\n                    let arc_receiver = self.create_recver(sid, self.initial_max_stream_data_uni);\n                    let io_state = IOState::receive_only();\n                    input.insert(sid, Incoming::new(arc_receiver.clone()), io_state);\n                    listener.push_uni_stream(sid, arc_receiver);\n                }\n                Ok(())\n            }\n        }\n    }\n\n    fn create_sender(&self, sid: StreamId, buf_size: u64) -> ArcSender<Ext<TX>> {\n        ArcSender::new(\n            sid,\n            buf_size,\n            Ext(self.ctrl_frames.clone()),\n            self.tx_wakers.clone(),\n            self.metrics.clone(),\n        )\n    }\n\n    fn create_recver(&self, sid: StreamId, buf_size: u64) -> ArcRecver<Ext<TX>> {\n        ArcRecver::new(sid, buf_size, 
Ext(self.ctrl_frames.clone()))\n    }\n}\n"
  },
  {
    "path": "qrecovery/src/streams.rs",
"content": "//! The internal implementation of the QUIC stream.\n//!\n//! If you want to know how to create a stream, see the `QuicConnection` in another crate for more.\n//!\n//! If you want to know how to use a stream, see the [`Reader`] and [`Writer`] for more details.\n//!\n//! The structures in this module do not have the ability to actually send and receive frames, or\n//! to sense the loss or acknowledgment of frames. These functions are implemented by other modules. This\n//! module provides the ability to generate frames, process frames, handle frame loss and acknowledgment,\n//! and manage the state of all streams.\n//!\n//! [`DataStreams`] provides a large number of APIs for other modules to call to achieve the above functions.\n//! It corresponds to all streams on the connection.\n//!\n//! [`Incoming`] and [`Outgoing`] correspond to the input and output of a stream. They manage the sending and\n//! receiving state machines and provide APIs for DataStream to use.\n//!\n//! [`Incoming`]: crate::recv::Incoming\n//! 
[`Outgoing`]: crate::send::Outgoing\nuse std::{\n    fmt::Debug,\n    future::Future,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n};\n\nuse bytes::Bytes;\nuse derive_more::Deref;\npub use listener::{AcceptBiStream, AcceptUniStream};\nuse qbase::{\n    error::Error,\n    frame::{\n        StreamCtlFrame, StreamFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::tx::ArcSendWakers,\n    param::{ArcParameters, core::Parameters},\n    role::Role,\n    sid::{ControlStreamsConcurrency, StreamId},\n};\n\nuse crate::{recv::Reader, send::Writer};\npub mod error;\nmod io;\nmod listener;\npub mod raw;\n\n#[derive(Debug, Clone)]\npub struct Ext<T>(T);\n\nimpl<TX, F> SendFrame<F> for Ext<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n    F: Into<StreamCtlFrame>,\n{\n    fn send_frame<I: IntoIterator<Item = F>>(&self, iter: I) {\n        self.0.send_frame(iter.into_iter().map(Into::into));\n    }\n}\n\n/// Shared data streams, one for each connection.\n///\n/// App layer can use it to create and accept bidirectional or unidirectional streams.\n/// QUIC layer will read frames and data from the streams and send them to peer,\n/// and also write the frames and data received from peer to this data streams.\n///\n/// The `TX` is the frame sender, it should be able to send the [`StreamCtlFrame`], including:\n/// - [`StreamCtlFrame::MaxStreamData`]\n/// - [`StreamCtlFrame::MaxStreams`]\n/// - [`StreamCtlFrame::StreamDataBlocked`]\n/// - [`StreamCtlFrame::StreamsBlocked`]\n/// - [`StreamCtlFrame::StopSending`]\n/// - [`StreamCtlFrame::ResetStream`]\n///\n/// See [`raw::DataStreams`] for more details.\n#[derive(Debug, Clone, Deref)]\npub struct DataStreams<TX>(Arc<raw::DataStreams<TX>>)\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static;\n\nimpl<TX> DataStreams<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    /// Creates a new instance of [`DataStreams`].\n    ///\n    /// The 
`ctrl_frames` is the frame sender, read [`raw::DataStreams`] for more details.\n    pub fn new<LR, RR>(\n        role: Role,\n        local_params: &Parameters<LR>,\n        remote_params: &Parameters<RR>,\n        ctrl: Box<dyn ControlStreamsConcurrency>,\n        ctrl_frames: TX,\n        tx_wakers: ArcSendWakers,\n        metrics: Option<qbase::metric::ArcConnectionMetrics>,\n    ) -> Self {\n        Self(Arc::new(raw::DataStreams::new(\n            role,\n            local_params,\n            remote_params,\n            ctrl,\n            ctrl_frames,\n            tx_wakers,\n            metrics,\n        )))\n    }\n\n    /// Create a bidirectional stream, see the method of the same name on `QuicConnection` for more.\n    #[inline]\n    pub fn open_bi<'a>(&'a self, params: &'a ArcParameters) -> OpenBiStream<'a, TX> {\n        OpenBiStream {\n            streams: self,\n            params,\n        }\n    }\n\n    /// Create a unidirectional stream, see the method of the same name on `QuicConnection` for more.\n    #[inline]\n    pub fn open_uni<'a>(&'a self, params: &'a ArcParameters) -> OpenUniStream<'a, TX> {\n        OpenUniStream {\n            streams: self,\n            params,\n        }\n    }\n\n    /// accept a bidirectional stream, see the method of the same name on `QuicConnection` for more.\n    #[inline]\n    pub fn accept_bi<'a>(&'a self, params: &'a ArcParameters) -> AcceptBiStream<'a, Ext<TX>> {\n        self.0.accept_bi(params)\n    }\n\n    /// accept a unidirectional stream, see the method of the same name on `QuicConnection` for more.\n    #[inline]\n    pub fn accept_uni(&self) -> AcceptUniStream<'_, Ext<TX>> {\n        self.0.accept_uni()\n    }\n}\n\nimpl<TX> ReceiveFrame<StreamCtlFrame> for DataStreams<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    type Output = usize;\n\n    fn recv_frame(&self, frame: StreamCtlFrame) -> Result<Self::Output, Error> {\n        
self.0.recv_stream_control(frame).map_err(Error::Quic)\n    }\n}\n\nimpl<TX> ReceiveFrame<(StreamFrame, Bytes)> for DataStreams<TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    type Output = usize;\n\n    fn recv_frame(&self, frame: (StreamFrame, Bytes)) -> Result<Self::Output, Error> {\n        self.0.recv_data(frame).map_err(Error::Quic)\n    }\n}\n\n/// Future to open a bidirectional stream.\n///\n/// The creation of the stream is limited by the stream id. Once the stream id is available, the\n/// future will complete immediately.\n///\n/// If a connection error occurred, the future will return an error.\n///\n/// Although this is a bidirectional stream, the peer will not be aware of this stream until we send\n/// a frame on this stream.\npub struct OpenBiStream<'d, TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    streams: &'d raw::DataStreams<TX>,\n    params: &'d ArcParameters,\n}\n\nimpl<TX> Future for OpenBiStream<'_, TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    type Output = Result<Option<(StreamId, (Reader<Ext<TX>>, Writer<Ext<TX>>))>, Error>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.streams.poll_open_bi_stream(cx, self.params)\n    }\n}\n\n/// Future to open a unidirectional stream.\n///\n/// The creation of the stream is limited by the stream id. 
Once the stream id is available, the\n/// future will complete immediately.\n///\n/// If a connection error occurred, the future will return an error.\n///\n/// Note that the peer will not be aware of this stream until we send a frame on this stream.\npub struct OpenUniStream<'a, TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    streams: &'a raw::DataStreams<TX>,\n    params: &'a ArcParameters,\n}\n\nimpl<TX> Future for OpenUniStream<'_, TX>\nwhere\n    TX: SendFrame<StreamCtlFrame> + Clone + Send + 'static,\n{\n    type Output = Result<Option<(StreamId, Writer<Ext<TX>>)>, Error>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.streams.poll_open_uni_stream(cx, self.params)\n    }\n}\n"
  },
  {
    "path": "qresolve/Cargo.toml",
    "content": "[package]\nname = \"qresolve\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"dquic's dns abstractions\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\n\n[dependencies]\nfutures = { workspace = true }\ntokio = { workspace = true }\nqinterface = { workspace = true }\nqbase = { workspace = true }\n"
  },
  {
    "path": "qresolve/src/lib.rs",
    "content": "use std::{\n    fmt::{Debug, Display},\n    io,\n    sync::Arc,\n};\n\nuse futures::{FutureExt, TryFutureExt, future::BoxFuture, stream::BoxStream};\npub use qbase::net::{Family, addr::EndpointAddr};\n\npub type PublishFuture<'a> = BoxFuture<'a, io::Result<()>>;\n\npub trait Publish: Display + Debug {\n    fn publish<'a>(&'a self, name: &'a str, packet: &'a [u8]) -> PublishFuture<'a>;\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub enum Source {\n    Mdns { nic: Arc<str>, family: Family },\n    Http { server: Arc<str> },\n    System,\n    Dht,\n}\n\nimpl Display for Source {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        match self {\n            Source::Mdns { nic, family } => write!(f, \"MDNS Resolver({nic} {family})\"),\n            Source::Http { server } => write!(f, \"HTTP DNS Resolver({server})\"),\n            Source::System => write!(f, \"System DNS Resolver\"),\n            Source::Dht => write!(f, \"DHT\"),\n        }\n    }\n}\n\npub type Record = (Source, EndpointAddr);\npub type RecordStream = BoxStream<'static, Record>;\npub type ResolveResult = io::Result<RecordStream>;\npub type ResolveFuture<'r> = BoxFuture<'r, ResolveResult>;\n\n/// Resolves names into QUIC peer endpoints.\n///\n/// The result is a stream to allow implementations that yield endpoints over time\n/// (e.g. 
multi-source resolvers, H3x Dns, Mdns).\npub trait Resolve: Send + Sync + Display + Debug {\n    fn lookup<'l>(&'l self, name: &'l str) -> ResolveFuture<'l>;\n}\n\nuse futures::{StreamExt, stream};\n\n/// Default resolver backed by `tokio::net::lookup_host`.\n#[derive(Debug, Default, Clone, Copy)]\npub struct SystemResolver;\n\nimpl Display for SystemResolver {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(&Source::System, f)\n    }\n}\n\nimpl Resolve for SystemResolver {\n    fn lookup<'l>(&'l self, name: &'l str) -> ResolveFuture<'l> {\n        let source = Source::System;\n        tokio::net::lookup_host(name.to_owned())\n            .map_ok(|addrs| {\n                stream::iter(addrs.map(move |addr| {\n                    let ep = EndpointAddr::direct(addr);\n                    (source.clone(), ep)\n                }))\n                .boxed()\n            })\n            .boxed()\n    }\n}\n"
  },
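  {
    "path": "qresolve/examples/lookup.rs",
    "content": "//! A minimal sketch of consuming the [`Resolve`] trait through the default\n//! [`SystemResolver`]. Assumptions (not confirmed by the crate itself): this\n//! example file name is illustrative, the workspace `tokio` dependency enables\n//! the `rt` and `macros` features, and `EndpointAddr` derives `Debug`.\nuse futures::StreamExt;\nuse qresolve::{Resolve, SystemResolver};\n\n#[tokio::main(flavor = \"current_thread\")]\nasync fn main() -> std::io::Result<()> {\n    let resolver = SystemResolver;\n    // `lookup` takes a `host:port` pair because it is backed by\n    // `tokio::net::lookup_host`.\n    let mut records = resolver.lookup(\"localhost:4433\").await?;\n    // The result is a stream: records may arrive over time.\n    while let Some((source, endpoint)) = records.next().await {\n        println!(\"{source}: {endpoint:?}\");\n    }\n    Ok(())\n}\n"
  },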
  {
    "path": "qtraversal/Cargo.toml",
    "content": "[package]\nname = \"qtraversal\"\nversion.workspace = true\nedition.workspace = true\ndescription = \"NAT traversal utilities for QUIC, a part of dquic\"\nreadme = \"README.md\"\nrepository.workspace = true\nlicense.workspace = true\nkeywords.workspace = true\ncategories.workspace = true\n\n[dependencies]\nasync-trait = { workspace = true }\nbon = { workspace = true }\nbytes = { workspace = true }\ndashmap = { workspace = true }\nderive_more = { workspace = true }\nenum_dispatch = { workspace = true }\nfutures = { workspace = true }\nbitflags = { workspace = true }\nnom = { workspace = true }\nqbase = { workspace = true }\nqresolve = { workspace = true }\nqevent = { workspace = true }\nqinterface = { workspace = true, features = [\"qudp\"] }\nqudp = { workspace = true }\nrand = { workspace = true }\nrustls = { workspace = true }\nsmallvec = { workspace = true }\nthiserror = { workspace = true }\ntokio = { workspace = true, features = [\"sync\", \"rt\", \"time\", \"macros\"] }\ntokio-util = { workspace = true, features = [\"rt\"] }\ntracing = { workspace = true }\nnetdev = { workspace = true }\n\n[dev-dependencies]\nclap = { workspace = true }\nrustls = { workspace = true, features = [\"ring\"] }\ntokio = { features = [\"fs\", \"rt-multi-thread\"], workspace = true }\ntokio-test = \"0.4\"\ntracing = { workspace = true }\n\n[dev-dependencies.tracing-subscriber]\nworkspace = true\nfeatures = [\"fmt\", \"ansi\", \"env-filter\", \"time\", \"tracing-log\"]\n\n[features]\n# Enable shorter TTL only for tests (especially integration tests in other crates).\ntest-ttl = []\n\n[[example]]\nname = \"stun_client\"\n\n[[example]]\nname = \"stun_server\"\n"
  },
  {
    "path": "qtraversal/README.md",
    "content": "# qtraversal\n\n`qtraversal` is a NAT traversal library designed for QUIC. It implements sophisticated hole-punching strategies to establish peer-to-peer connections even behind difficult NATs (Symmetric, Restricted, etc.).\n\n## Features\n\n- **STUN Client**: Detects NAT type and external IP/Port.\n- **Hole Punching**: Implements various strategies including:\n    - Direct Connection (Full Cone)\n    - Reverse Punching\n    - Birthday Attack (for Symmetric NATs)\n    - Port prediction\n\n## STUN Configuration\n\nThe library uses `nat.genmeta.net:20004` as the default STUN server in examples. You can configure your own STUN server when initializing the client.\n\n## Usage\n\nSee `examples/` for details on how to use the `Client` and `Puncher`.\n\n"
  },
  {
    "path": "qtraversal/examples/stun_client.rs",
    "content": "use std::{io::Result, net::SocketAddr, sync::Arc};\n\nuse clap::Parser;\nuse qinterface::{\n    component::location::Locations,\n    io::{IO, ProductIO, handy::DEFAULT_IO_FACTORY},\n};\nuse qtraversal::{\n    nat::{client::StunClient, router::StunRouter},\n    route::ReceiveAndDeliverPacket,\n};\nuse tracing::info;\n#[derive(Parser, Debug)]\n#[command(version, about, long_about = None)]\npub struct Arguments {\n    #[arg(long, default_value = \"0.0.0.0:12345\")]\n    pub bind: SocketAddr,\n    #[arg(long, default_value = \"nat.genmeta.net:20004\")]\n    pub stun_svr: String,\n}\n\n#[tokio::main(flavor = \"current_thread\")]\nasync fn main() -> Result<()> {\n    init_logger().unwrap();\n    let args = Arguments::parse();\n\n    let stun_server = tokio::net::lookup_host(&args.stun_svr)\n        .await?\n        .find(|addr| addr.is_ipv4() == args.bind.is_ipv4())\n        .ok_or_else(|| std::io::Error::other(\"failed to resolve stun server\"))?;\n\n    let bind_uri = format!(\"inet://{}\", args.bind).into();\n    let iface: Arc<dyn IO> = Arc::from(DEFAULT_IO_FACTORY.bind(bind_uri));\n\n    let stun_router = StunRouter::new();\n    let stun_client = StunClient::new(iface.clone(), stun_router.clone(), stun_server, None);\n\n    let _task = ReceiveAndDeliverPacket::task()\n        .stun_router(stun_router)\n        .iface_ref(iface.clone())\n        .spawn();\n\n    let outer_addr = stun_client\n        .outer_addr()\n        .await\n        .expect(\"failed to get outer addr\");\n    info!(\"Outer addr: {} Agent addr {}\", outer_addr, stun_server);\n    // Ok(())\n    let nat_type = stun_client.nat_type().await;\n    let mut observer = Locations::global().subscribe();\n    while let Some(event) = observer.recv().await {\n        info!(\"Location event: {:?}\", event);\n        info!(\"Nat type: {:?}\", nat_type);\n    }\n    Ok(())\n\n    // unreachable!(\"Observer never return None\")\n}\n\nfn init_logger() -> std::io::Result<()> {\n    
tracing_subscriber::fmt()\n        .with_max_level(tracing::Level::DEBUG)\n        .init();\n    Ok(())\n}\n"
  },
  {
    "path": "qtraversal/examples/stun_server.rs",
    "content": "use std::{io::Result, net::SocketAddr, sync::Arc};\n\nuse clap::Parser;\nuse qinterface::io::{IO, ProductIO, handy::DEFAULT_IO_FACTORY};\nuse qtraversal::{\n    nat::{\n        router::StunRouter,\n        server::{StunServer, StunServerConfig},\n    },\n    route::{Forwarder, ReceiveAndDeliverPacket},\n};\nuse tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};\n#[derive(Parser, Debug)]\n#[command(version, about, long_about = None)]\npub struct Arguments {\n    #[arg(long, default_value = \"127.0.0.1:20002\")]\n    pub bind_addr1: SocketAddr,\n    #[arg(long, default_value = \"127.0.0.1:4433\")]\n    pub bind_addr2: SocketAddr,\n    #[arg(long, default_value = \"127.0.0.1:20002\")]\n    pub change_addr: SocketAddr,\n    #[arg(long, default_value = \"127.0.0.1:20002\")]\n    pub outer_addr1: SocketAddr,\n    #[arg(long, default_value = \"127.0.0.1:20002\")]\n    pub outer_addr2: SocketAddr,\n}\n\n#[tokio::main(flavor = \"current_thread\")]\nasync fn main() -> Result<()> {\n    let args = Arguments::parse();\n    init_logger(&args)?;\n\n    let factory: Arc<dyn ProductIO> = Arc::new(DEFAULT_IO_FACTORY);\n\n    let bind_uri1 = format!(\"inet://{}\", args.bind_addr1).into();\n    let iface1: Arc<dyn IO> = Arc::from(factory.bind(bind_uri1));\n    let stun_router1 = StunRouter::new();\n    let _iface1_recv_task = ReceiveAndDeliverPacket::task()\n        .stun_router(stun_router1.clone())\n        .forwarder(Forwarder::Server {\n            outer_addr: args.outer_addr1,\n        })\n        .iface_ref(iface1.clone())\n        .spawn();\n\n    let bind_uri2 = format!(\"inet://{}\", args.bind_addr2).into();\n    let iface2: Arc<dyn IO> = Arc::from(factory.bind(bind_uri2));\n    let stun_router2 = StunRouter::new();\n    let _iface2_recv_task = ReceiveAndDeliverPacket::task()\n        .stun_router(stun_router2.clone())\n        .forwarder(Forwarder::Server {\n            outer_addr: args.outer_addr2,\n        })\n        
.iface_ref(iface2.clone())\n        .spawn();\n\n    let server1 = StunServer::new(\n        iface1,\n        stun_router1,\n        StunServerConfig::builder()\n            .change_port(args.bind_addr2.port())\n            .change_address(args.change_addr)\n            .init(),\n    );\n    let server2 = StunServer::new(\n        iface2,\n        stun_router2,\n        StunServerConfig::builder()\n            .change_port(args.bind_addr1.port())\n            .change_address(args.change_addr)\n            .init(),\n    );\n    _ = tokio::try_join!(server1.spawn(), server2.spawn())?;\n    Ok(())\n}\n\nfn init_logger(args: &Arguments) -> std::io::Result<()> {\n    let log_name = args.bind_addr1.ip().to_string() + \"-stun.log\";\n    let file = std::fs::OpenOptions::new()\n        .create(true)\n        .write(true)\n        .truncate(true)\n        .open(log_name)?;\n\n    let _ = tracing_subscriber::registry()\n        .with(\n            tracing_subscriber::fmt::layer()\n                .with_target(true)\n                .with_ansi(false)\n                .with_writer(file),\n        )\n        .try_init();\n    Ok(())\n}\n"
  },
  {
    "path": "qtraversal/src/addr.rs",
    "content": "use std::{\n    collections::{HashMap, HashSet, hash_map::Entry},\n    net::SocketAddr,\n    ops::Deref,\n};\n\nuse futures::io;\nuse qbase::{\n    frame::{AddAddressFrame, RemoveAddressFrame},\n    net::{NatType, addr::EndpointAddr},\n};\nuse qinterface::bind_uri::BindUri;\nuse qresolve::Source;\n\n#[derive(Default)]\npub struct AddressBook {\n    local: HashMap<u32, (BindUri, AddAddressFrame)>,\n    remote: HashMap<u32, AddAddressFrame>,\n    local_endpoint: HashSet<(BindUri, EndpointAddr)>,\n    /// Remote endpoints with their DNS [`Source`] so the puncher can enforce\n    /// source-specific constraints (e.g. mDNS endpoints are tied to a NIC).\n    remote_endpoint: HashMap<EndpointAddr, Source>,\n    largest_seq_num: u32,\n}\n\nimpl AddressBook {\n    pub(crate) fn add_local_address(\n        &mut self,\n        bind: BindUri,\n        addr: SocketAddr,\n        tire: u32,\n        nat_type: NatType,\n    ) -> io::Result<AddAddressFrame> {\n        if self\n            .local\n            .values()\n            .any(|(_local, frame)| *frame.deref() == addr)\n        {\n            tracing::debug!(target: \"quic\", %addr, \"Duplicate local address\");\n            return Err(io::Error::other(\"Duplicate local address\"));\n        }\n        let frame = AddAddressFrame::new(self.largest_seq_num, addr, tire, nat_type);\n        self.local.insert(self.largest_seq_num, (bind, frame));\n        self.largest_seq_num += 1;\n        Ok(frame)\n    }\n\n    pub(crate) fn add_local_endpoint(\n        &mut self,\n        bind: BindUri,\n        addr: EndpointAddr,\n    ) -> io::Result<()> {\n        if !self.local_endpoint.insert((bind, addr)) {\n            return Err(io::Error::other(\"Duplicate local endpoint\"));\n        }\n        Ok(())\n    }\n\n    pub(crate) fn add_peer_endpoint(\n        &mut self,\n        endpoint: EndpointAddr,\n        source: Source,\n    ) -> io::Result<()> {\n        match self.remote_endpoint.entry(endpoint) {\n          
  Entry::Occupied(_) => return Err(io::Error::other(\"Duplicate remote endpoint\")),\n            Entry::Vacant(e) => {\n                e.insert(source);\n            }\n        }\n        Ok(())\n    }\n\n    pub(crate) fn remote_endpoint(&self) -> &HashMap<EndpointAddr, Source> {\n        &self.remote_endpoint\n    }\n\n    pub(crate) fn local_endpoint(&self) -> &HashSet<(BindUri, EndpointAddr)> {\n        &self.local_endpoint\n    }\n\n    pub(crate) fn remove_local_address(\n        &mut self,\n        addr: SocketAddr,\n    ) -> io::Result<RemoveAddressFrame> {\n        let Some(seq_num) = self\n            .local\n            .iter()\n            .find(|(_, (_local, frame))| *frame.deref() == addr)\n            .map(|(key, _)| *key)\n        else {\n            tracing::debug!(target: \"quic\", %addr, \"No matching local address to remove\");\n            return Err(io::Error::other(\"No matching local address\"));\n        };\n        self.local.remove(&seq_num);\n        Ok(RemoveAddressFrame {\n            seq_num: seq_num.into(),\n        })\n    }\n\n    pub(crate) fn get_local_address(&self, seq_num: &u32) -> Option<(BindUri, AddAddressFrame)> {\n        self.local.get(seq_num).cloned()\n    }\n\n    pub(crate) fn add_remote_address(&mut self, remote: AddAddressFrame) -> io::Result<()> {\n        match self.remote.entry(remote.seq_num()) {\n            Entry::Occupied(_) => {\n                tracing::debug!(target: \"quic\", remote_seq_num = remote.seq_num(), \"Duplicate remote address\");\n                return Err(io::Error::other(\"Duplicate remote address\"));\n            }\n            Entry::Vacant(entry) => {\n                entry.insert(remote);\n            }\n        }\n        Ok(())\n    }\n\n    pub(crate) fn remove_remote_address(&mut self, seq_num: u32) -> Option<AddAddressFrame> {\n        self.remote.remove(&seq_num)\n    }\n\n    pub(crate) fn pick_local_address(\n        &self,\n        remote: 
&AddAddressFrame,\n    ) -> io::Result<(BindUri, AddAddressFrame)> {\n        let mut addrs: Vec<_> = self\n            .local\n            .iter()\n            .filter(|(_seq, (_local, frame))| {\n                frame.tire() == remote.tire() && frame.is_ipv4() == remote.is_ipv4()\n            })\n            .map(|(_, addr)| addr.clone())\n            .collect();\n\n        if addrs.is_empty() {\n            tracing::debug!(target: \"quic\", ?remote, \"No matching local address for remote address\");\n            return Err(io::Error::other(\"No matching local address\"));\n        }\n\n        const NAT_PRIORITY: [NatType; 5] = [\n            NatType::FullCone,\n            NatType::RestrictedCone,\n            NatType::RestrictedPort,\n            NatType::Dynamic,\n            NatType::Symmetric,\n        ];\n\n        addrs.sort_by_key(|(_addr, frame)| {\n            NAT_PRIORITY\n                .iter()\n                .position(|&x| x == frame.nat_type())\n                .unwrap_or(usize::MAX)\n        });\n\n        let (bind, frame) = addrs\n            .iter()\n            .find(|(_, frame)| *frame != *remote)\n            .ok_or_else(|| io::Error::other(\"No matching local address\"))?;\n\n        Ok((bind.clone(), *frame))\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/future.rs",
    "content": "use std::{\n    mem,\n    ops::{Deref, DerefMut},\n    sync::{Mutex, MutexGuard},\n    task::{Context, Poll},\n};\n\nuse qbase::util::WakerVec;\n\n#[derive(Debug)]\nenum FutureState<T> {\n    Demand(WakerVec),\n    Ready(T),\n}\n\nimpl<T> Default for FutureState<T> {\n    fn default() -> Self {\n        Self::Demand(Default::default())\n    }\n}\n\n#[derive(Debug)]\npub struct ReadyFuture<'f, T>(MutexGuard<'f, FutureState<T>>);\n\nimpl<T> Deref for ReadyFuture<'_, T> {\n    type Target = T;\n\n    fn deref(&self) -> &Self::Target {\n        match self.0.deref() {\n            FutureState::Demand(..) => unreachable!(),\n            FutureState::Ready(item) => item,\n        }\n    }\n}\n\n/// A value which will be resolved in the future.\n///\n/// Unlike [`futures::Future`], this is a value, not a computation.\n///\n/// The value is expected to be assigned once and can be read multiple times (so `T` must be\n/// [`Clone`]). If `assign` is called again, the new value replaces the old one, and the old value\n/// is returned as [`Some`].\n///\n/// A task can attempt to get the value synchronously by calling [`try_get`], or asynchronously by\n/// calling [`get`]. There is also a [`poll_get`] method for the task to poll the value. 
Read their\n/// documentation for more details about the behavior.\n///\n/// # Examples\n/// ```rust, ignore\n/// # async fn some_work() -> &'static str { \"Hello World\" }\n/// # async fn test() {\n/// use std::sync::Arc;\n///\n/// let fut = Arc::new(Future::new());\n/// let t1 = tokio::spawn({\n///     let fut = fut.clone();\n///     async move {\n///         assert_eq!(*fut.get().await, \"Hello World\");\n///         // the value can be read multiple times\n///         assert_eq!(*fut.get().await, \"Hello World\");\n///         assert_eq!(*fut.get().await, \"Hello World\");\n///     }\n/// });\n///\n/// let t2 = tokio::spawn({\n///     let fut = fut.clone();\n///     async move {\n///         // do some work to get the value\n///         let value = some_work().await;\n///         fut.assign(value);\n///     }\n/// });\n///\n/// _ = tokio::join!(t1, t2);\n///\n/// // assigning again replaces the value and returns the old one\n/// assert_eq!(fut.assign(\"Hi World\"), Some(\"Hello World\"));\n/// # }\n/// ```\n///\n/// [`get`]: Future::get\n/// [`try_get`]: Future::try_get\n/// [`poll_get`]: Future::poll_get\n#[derive(Debug)]\npub struct Future<T> {\n    state: Mutex<FutureState<T>>,\n}\n\nimpl<T> Future<T> {\n    /// Create a new empty [`Future`].\n    #[inline]\n    #[allow(dead_code)]\n    pub fn new() -> Self {\n        Default::default()\n    }\n\n    /// Create a new [`Future`] with the given value in it.\n    ///\n    /// Since the future is meant to be assigned only once, this method is rarely useful: consider\n    /// using the value directly or sharing it via an [`Arc`] instead.\n    ///\n    /// [`Arc`]: std::sync::Arc\n    #[inline]\n    #[allow(dead_code)]\n    pub fn with(item: T) -> Self {\n        Self {\n            state: Mutex::new(FutureState::Ready(item)),\n        }\n    }\n\n    fn state(&'_ self) -> MutexGuard<'_, FutureState<T>> {\n        self.state.lock().unwrap()\n    }\n\n    /// Assign the value to the [`Future`].\n    ///\n   
 /// Return the old value as [`Some`] if the future is already assigned.\n    #[inline]\n    pub fn assign(&self, item: T) -> Option<T> {\n        match std::mem::replace(self.state().deref_mut(), FutureState::Ready(item)) {\n            FutureState::Demand(mut wakers) => {\n                mem::take(&mut wakers).wake_all();\n                None\n            }\n            FutureState::Ready(old) => Some(old),\n        }\n    }\n\n    /// Poll the value of the [`Future`].\n    ///\n    /// If the value is ready, the value will be returned as [`Poll::Ready`]. If the value is not\n    /// ready, this method will return [`Poll::Pending`] and the waker will be stored.\n    #[inline]\n    pub fn poll_get(&'_ self, cx: &mut Context<'_>) -> Poll<ReadyFuture<'_, T>> {\n        let mut state = self.state();\n        match state.deref_mut() {\n            FutureState::Demand(wakers) => {\n                wakers.register(cx.waker());\n                Poll::Pending\n            }\n            FutureState::Ready(..) => Poll::Ready(ReadyFuture(state)),\n        }\n    }\n\n    /// Try to get the value of the [`Future`].\n    ///\n    /// If the value is ready, the value will be returned as [`Some`]. If the value is not ready, this\n    /// method will return [`None`].\n    pub fn try_get(&'_ self) -> Option<ReadyFuture<'_, T>> {\n        let state = self.state();\n        match state.deref() {\n            FutureState::Demand(..) 
=> None,\n            FutureState::Ready(_) => Some(ReadyFuture(state)),\n        }\n    }\n\n    /// Get the value of the [`Future`] asynchronously.\n    #[inline]\n    #[allow(unused)]\n    pub async fn get(&'_ self) -> ReadyFuture<'_, T> {\n        std::future::poll_fn(|cx| self.poll_get(cx)).await\n    }\n\n    pub fn clear(&self) {\n        let mut state = self.state();\n        *state = match state.deref_mut() {\n            FutureState::Demand(wakers) => FutureState::Demand(mem::take(wakers)),\n            FutureState::Ready(_) => FutureState::Demand(WakerVec::default()),\n        };\n    }\n}\n\nimpl<T> Default for Future<T> {\n    fn default() -> Self {\n        Self {\n            state: Mutex::new(Default::default()),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n\n    use std::{sync::Arc, time::Duration};\n\n    use futures::future::join_all;\n    use tokio::{sync::Notify, time::timeout};\n\n    use super::*;\n\n    #[test]\n    fn new() {\n        let future = Future::new();\n        assert_eq!(future.try_get().as_deref(), None);\n        assert_eq!(future.assign(\"Hello world\"), None);\n        assert_eq!(future.try_get().as_deref(), Some(&\"Hello world\"));\n\n        let future = Future::with(\"Hello World\");\n        assert_eq!(future.try_get().as_deref(), Some(&\"Hello World\"));\n        assert_eq!(future.assign(\"Hi\"), Some(\"Hello World\"));\n    }\n\n    #[tokio::test]\n    async fn wait() {\n        let future = Arc::new(Future::<&str>::new());\n        let write = Arc::new(Notify::new());\n        let task = tokio::spawn({\n            let future = future.clone();\n            let write = write.clone();\n            async move {\n                core::future::poll_fn(|cx| {\n                    assert!(matches!(future.poll_get(cx), Poll::Pending));\n                    write.notify_one();\n\n                    Poll::Ready(())\n                })\n                .await;\n\n                assert_eq!(*future.get().await, \"Hello 
world\");\n            }\n        });\n\n        write.notified().await;\n        assert_eq!(future.assign(\"Hello world\"), None);\n\n        task.await.unwrap();\n    }\n\n    #[tokio::test]\n    async fn change() {\n        let future = Arc::new(Future::<&str>::new());\n        let write = Arc::new(Notify::new());\n        let task = tokio::spawn({\n            let future = future.clone();\n            let write = write.clone();\n            async move {\n                core::future::poll_fn(|cx| {\n                    assert!(matches!(future.poll_get(cx), Poll::Pending));\n                    write.notify_one();\n                    Poll::Ready(())\n                })\n                .await;\n\n                assert_eq!(*future.get().await, \"Hello world\");\n                assert_eq!(*future.get().await, \"Hello world\");\n                write.notify_one();\n            }\n        });\n\n        write.notified().await;\n        assert_eq!(future.try_get().as_deref(), None);\n        assert_eq!(future.assign(\"Hello world\"), None);\n        write.notified().await;\n        assert_eq!(future.assign(\"Changed\"), Some(\"Hello world\"));\n        task.await.unwrap();\n    }\n\n    #[tokio::test]\n    async fn multiple_wait() {\n        let future = Arc::new(Future::<&str>::new());\n        let timeout_task = tokio::spawn({\n            let future = future.clone();\n            async move {\n                let _ = timeout(Duration::from_millis(100), future.get()).await;\n                let _ = future.assign(\"Hello world\");\n            }\n        });\n\n        let task = tokio::spawn({\n            let future = future.clone();\n            async move {\n                assert_eq!(*future.get().await, \"Hello world\");\n            }\n        });\n\n        join_all([task, timeout_task]).await;\n    }\n\n    #[tokio::test]\n    async fn clear() {\n        let future = Arc::new(Future::<&str>::new());\n        future.assign(\"Hello world\");\n        
assert_eq!(*future.get().await, \"Hello world\");\n        future.clear();\n        assert_eq!(future.try_get().as_deref(), None);\n        let task = tokio::spawn({\n            let future = future.clone();\n            async move {\n                assert_eq!(*future.get().await, \"New Hello world\");\n            }\n        });\n        future.assign(\"New Hello world\");\n        task.await.unwrap();\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/lib.rs",
    "content": "use qbase::net::addr::EndpointAddr;\n\npub mod addr;\nmod future;\npub mod nat;\npub mod packet;\npub mod punch;\npub mod route;\n\npub type PathWay<E = EndpointAddr> = qbase::net::route::Pathway<E>;\n"
  },
  {
    "path": "qtraversal/src/nat/client.rs",
    "content": "use std::{\n    collections::HashMap,\n    fmt,\n    io::{self},\n    net::SocketAddr,\n    ops::{ControlFlow, Deref},\n    pin::pin,\n    sync::{\n        Arc, Mutex, MutexGuard,\n        atomic::{AtomicBool, AtomicU8, Ordering::SeqCst},\n    },\n    task::{Context, Poll, ready},\n    time::Duration,\n};\n\nuse futures::{FutureExt, StreamExt, stream::FuturesUnordered};\nuse qbase::net::{Family, addr::EndpointAddr};\npub use qbase::net::{NatType, NetFeature};\nuse qinterface::{\n    Interface, RebindedError, WeakInterface,\n    component::{\n        Component,\n        location::{IfaceLocations, LocationsComponent},\n    },\n    io::{IO, RefIO},\n};\nuse qresolve::Resolve;\nuse thiserror::Error;\nuse tokio::{sync::Notify, task::JoinSet};\nuse tokio_util::task::AbortOnDropHandle;\nuse tracing::Instrument;\n\nuse super::{router::StunRouter, tx::Transaction};\nuse crate::{\n    future::Future,\n    nat::{iface::StunIO, msg::Request, router::StunRouterComponent},\n};\n\n#[derive(Error, Clone)]\n#[error(transparent)]\npub struct ArcIoError(Arc<io::Error>);\n\nimpl fmt::Debug for ArcIoError {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        self.0.as_ref().fmt(f)\n    }\n}\n\nimpl From<io::Error> for ArcIoError {\n    fn from(source: io::Error) -> Self {\n        Self(source.into())\n    }\n}\n\nimpl From<ArcIoError> for io::Error {\n    fn from(source: ArcIoError) -> io::Error {\n        io::Error::other(source)\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\nenum ClientState {\n    Active = 0,\n    Inactive = 1,\n    Closing = 2,\n}\n\n#[derive(Debug, Clone)]\nstruct ArcClientState {\n    state: Arc<AtomicU8>,\n    observers: [Arc<Notify>; 3],\n}\n\nimpl ArcClientState {\n    pub fn new() -> Self {\n        Self {\n            state: Arc::new(AtomicU8::new(ClientState::Active as u8)),\n            observers: <[_; 3]>::default(),\n        }\n    }\n\n    pub fn try_update(&self, old_state: ClientState, new_state: 
ClientState) -> bool {\n        match self\n            .state\n            .compare_exchange(old_state as u8, new_state as u8, SeqCst, SeqCst)\n        {\n            Ok(_old) => {\n                self.observers[new_state as usize].notify_waiters();\n                true\n            }\n            Err(_current) => false,\n        }\n    }\n\n    pub fn get(&self) -> ClientState {\n        match self.state.load(SeqCst) {\n            0 => ClientState::Active,\n            1 => ClientState::Inactive,\n            2 => ClientState::Closing,\n            _ => unreachable!(),\n        }\n    }\n\n    pub fn set(&self, new_state: ClientState) -> ClientState {\n        let old_state = self.state.swap(new_state as u8, SeqCst);\n        if old_state != new_state as u8 {\n            self.observers[new_state as usize].notify_waiters();\n        }\n        match old_state {\n            0 => ClientState::Active,\n            1 => ClientState::Inactive,\n            2 => ClientState::Closing,\n            _ => unreachable!(),\n        }\n    }\n\n    pub fn wait(&self, expect: ClientState) -> impl futures::Future<Output = ()> + use<> {\n        let notify = self.observers[expect as usize].clone();\n        let state = self.state.clone();\n        async move {\n            let mut notified = pin!(notify.notified());\n            loop {\n                notified.as_mut().enable();\n                if state.load(SeqCst) == expect as u8 {\n                    return;\n                }\n                notified.as_mut().await;\n                notified.set(notify.notified());\n            }\n        }\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct StunClient<I: RefIO + 'static> {\n    #[allow(clippy::type_complexity)]\n    outer_addr: Arc<Future<Result<SocketAddr, ArcIoError>>>,\n    nat_type: Arc<Future<Result<NatType, ArcIoError>>>,\n    ref_iface: I,\n    // may be cloned into keep_alive_task\n    stun_router: StunRouter,\n    stun_agent: SocketAddr,\n    locations: 
Option<IfaceLocations<I>>,\n\n    state: ArcClientState,\n    tasks: Arc<Mutex<JoinSet<()>>>,\n}\n\npub type ClientLocationData = Result<EndpointAddr, ArcIoError>;\n\nimpl<I: RefIO + 'static> StunClient<I> {\n    pub fn new(\n        ref_iface: I,\n        stun_router: StunRouter,\n        stun_agent: SocketAddr,\n        locations: Option<IfaceLocations<I>>,\n    ) -> Self {\n        let client = Self {\n            nat_type: Default::default(),\n            outer_addr: Default::default(),\n            stun_agent,\n            ref_iface,\n            stun_router,\n            locations,\n            state: ArcClientState::new(),\n            tasks: Arc::new(Mutex::new(JoinSet::new())),\n        };\n        tracing::debug!(target: \"stun\", %stun_agent, \"created new STUN client\");\n        {\n            let mut tasks = client.lock_tasks();\n            tasks.spawn(client.keep_alive_task());\n            if !client.ref_iface.iface().bind_uri().is_temporary() {\n                tasks.spawn(client.nat_detect_task());\n            }\n        }\n        client\n    }\n\n    fn lock_tasks(&self) -> MutexGuard<'_, JoinSet<()>> {\n        self.tasks.lock().expect(\"StunClient tasks lock poisoned\")\n    }\n\n    fn keep_alive_task(&self) -> impl futures::Future<Output = ()> + use<I> {\n        let outer_addr = self.outer_addr.clone();\n        let stun_agent = self.stun_agent;\n        let stun_router = self.stun_router.clone();\n        tracing::debug!(target: \"stun\", %stun_agent, \"starting STUN client keep alive task\");\n        let ref_iface = self.ref_iface.clone();\n        let bind_uri = ref_iface.iface().bind_uri();\n\n        let locations = self.locations.clone();\n\n        let client_state = self.state.clone();\n\n        let keep_alive_task = async move {\n            let log_detect_result = |detect_result: &io::Result<SocketAddr>| match &detect_result {\n                Ok(new_outer_addr) => match outer_addr.try_get().as_deref().cloned() {\n             
       Some(Ok(old_outer)) if old_outer == *new_outer_addr => {\n                        tracing::trace!(target: \"stun\", %new_outer_addr,  \"Keep alive, outer addr unchanged\");\n                    }\n                    Some(old_state) => {\n                        tracing::debug!(target: \"stun\", ?old_state, %new_outer_addr, \"keep alive, outer addr changed\");\n                    }\n                    None => {\n                        tracing::debug!(target: \"stun\", %new_outer_addr, \"detected outer addr\");\n                    }\n                },\n                Err(error) => {\n                    tracing::trace!(target: \"stun\", ?error, \"Detect outer addr failed\");\n                }\n            };\n            tracing::trace!(target: \"stun\", \"Starting keep alive task\");\n            loop {\n                let detect_result = detect_outer_addr(\n                    ref_iface.clone(),\n                    stun_router.clone(),\n                    stun_agent,\n                    3,\n                    Duration::from_millis(300),\n                )\n                .await;\n\n                match &detect_result {\n                    Ok(_) => client_state.try_update(ClientState::Inactive, ClientState::Active),\n                    Err(_) => client_state.try_update(ClientState::Active, ClientState::Inactive),\n                };\n\n                log_detect_result(&detect_result);\n\n                let timeout = match detect_result {\n                    Ok(_) => Duration::from_secs(30),\n                    Err(_) => Duration::from_secs(1),\n                };\n\n                let detect_result = detect_result.map_err(ArcIoError::from);\n\n                if !bind_uri.is_temporary()\n                    && let Some(locations) = locations.as_ref()\n                {\n                    locations.r#for(&ref_iface, |locations, bind_uri| {\n                        let data = detect_result\n                            .clone()\n          
                  .map(|outer| EndpointAddr::with_agent(stun_agent, outer));\n                        locations.upsert::<ClientLocationData>(bind_uri, Arc::new(data));\n                    });\n                }\n\n                outer_addr.assign(detect_result);\n                tokio::time::sleep(timeout).await;\n            }\n        };\n        let bind_uri = self.ref_iface.iface().bind_uri();\n        keep_alive_task.instrument(tracing::debug_span!(\n            target: \"stun\",\n            \"keep_alive_task\",\n            %bind_uri,\n            %stun_agent,\n        ))\n    }\n\n    pub fn poll_outer_addr(&self, cx: &mut Context) -> Poll<io::Result<SocketAddr>> {\n        if self.state.get() == ClientState::Closing {\n            return Poll::Ready(Err(RebindedError.into()));\n        }\n        self.outer_addr\n            .poll_get(cx)\n            .map(|result| result.clone().map_err(io::Error::from))\n    }\n\n    pub async fn outer_addr(&self) -> io::Result<SocketAddr> {\n        core::future::poll_fn(|cx| self.poll_outer_addr(cx)).await\n    }\n\n    pub fn agent_addr(&self) -> SocketAddr {\n        self.stun_agent\n    }\n\n    pub fn get_outer_addr(&self) -> Option<io::Result<SocketAddr>> {\n        if self.state.get() == ClientState::Closing {\n            return Some(Err(RebindedError.into()));\n        }\n\n        self.outer_addr\n            .try_get()\n            .map(|result| result.clone().map_err(io::Error::from))\n    }\n\n    fn nat_detect_task(&self) -> impl futures::Future<Output = ()> + use<I> {\n        let nat_type = self.nat_type.clone();\n        let ref_iface = self.ref_iface.clone();\n        let stun_router = self.stun_router.clone();\n        let stun_agent = self.stun_agent;\n        let bind_uri = ref_iface.iface().bind_uri();\n        // Note: NAT detection used to create a new iface, but some servers can only open specific ports, so we probe from the listening port instead.\n        // Since Dynamic always creates a new iface for hole punching, polluting this iface has limited impact.\n        let task = async move {\n            tracing::debug!(target: 
\"stun\", \"starting NAT type detection\");\n            let timeout = Duration::from_millis(100);\n            _ = nat_type.assign(\n                detect_nat_type(ref_iface, stun_router, stun_agent, 30, timeout)\n                    .await\n                    .map_err(ArcIoError::from),\n            );\n        };\n\n        task.instrument(tracing::debug_span!(\n            target: \"stun\",\n            \"nat_type_task\",\n            %bind_uri,\n            %stun_agent,\n        ))\n    }\n\n    pub fn poll_nat_type(&self, cx: &mut Context) -> Poll<io::Result<NatType>> {\n        if self.state.get() == ClientState::Closing {\n            return Poll::Ready(Err(RebindedError.into()));\n        }\n        self.nat_type\n            .poll_get(cx)\n            .map(|result| result.clone().map_err(io::Error::from))\n    }\n\n    pub async fn nat_type(&self) -> io::Result<NatType> {\n        core::future::poll_fn(|cx| self.poll_nat_type(cx)).await\n    }\n\n    pub fn get_nat_type(&self) -> Option<io::Result<NatType>> {\n        if self.state.get() == ClientState::Closing {\n            return Some(Err(RebindedError.into()));\n        }\n        self.nat_type\n            .try_get()\n            .map(|result| result.clone().map_err(io::Error::from))\n    }\n\n    // fn restart(&mut self) -> io::Result<()> {\n    //     self.stun_router.clear();\n    //     *self = RunningClient::new(\n    //         self.ref_iface.clone(),\n    //         self.stun_router.clone(),\n    //         self.stun_agent,\n    //     );\n    //     Ok(())\n    // }\n\n    pub fn poll_close(&self, cx: &mut Context) -> Poll<()> {\n        if self.state.set(ClientState::Closing) == ClientState::Closing {\n            return Poll::Ready(());\n        }\n        self.lock_tasks().abort_all();\n        while ready!(self.lock_tasks().poll_join_next(cx)).is_some() {}\n        self.nat_type.clear();\n        self.outer_addr.clear();\n        Poll::Ready(())\n    }\n}\n\n#[derive(Debug)]\npub struct 
StunClientComponent {\n    client: Mutex<StunClient<WeakInterface>>,\n}\n\nimpl StunClientComponent {\n    pub fn new(client: StunClient<WeakInterface>) -> Self {\n        Self {\n            client: Mutex::new(client),\n        }\n    }\n\n    fn lock_client(&self) -> MutexGuard<'_, StunClient<WeakInterface>> {\n        self.client.lock().expect(\"StunClient lock poisoned\")\n    }\n\n    pub fn client(&self) -> StunClient<WeakInterface> {\n        self.lock_client().clone()\n    }\n}\n\nimpl Component for StunClientComponent {\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()> {\n        self.lock_client().poll_close(cx)\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        let mut client = self.lock_client();\n        if client.ref_iface.same_io(&iface.downgrade()) {\n            return;\n        }\n\n        let Ok(locations) = iface.with_component(|loc: &LocationsComponent| {\n            loc.reinit(iface);\n            loc.clone()\n        }) else {\n            return;\n        };\n\n        let new_client = StunClient::new(\n            iface.downgrade(),\n            client.stun_router.clone(),\n            client.stun_agent,\n            locations,\n        );\n        *client = new_client;\n    }\n}\n\ntype StunClientsMap<I> = HashMap<SocketAddr, StunClient<I>>;\n\n#[derive(Debug)]\nstruct StunClientsInner<I: RefIO + 'static> {\n    ref_iface: I,\n    clients: Arc<Mutex<StunClientsMap<I>>>,\n    resolver: Arc<dyn Resolve + Send + Sync>,\n    server: Arc<str>,\n    task: Option<AbortOnDropHandle<()>>,\n}\n\npub const DEFAULT_STUN_SERVER: &str = \"nat.genmeta.net:20004\";\n\nimpl<I: RefIO + 'static> StunClientsInner<I> {\n    pub const MIN_AGENTS: usize = 3;\n\n    pub fn new(\n        ref_iface: I,\n        router: StunRouter,\n        resolver: Arc<dyn Resolve + Send + Sync>,\n        server: Arc<str>,\n        agents: impl IntoIterator<Item = SocketAddr>,\n        locations: Option<IfaceLocations<I>>,\n    ) -> Self {\n        let 
new_stun_client = {\n            let ref_iface = ref_iface.clone();\n            move |agent_addr: SocketAddr| {\n                let local_addr = ref_iface.iface().local_addr().ok()?;\n                if local_addr.is_ipv4() != agent_addr.is_ipv4() {\n                    return None;\n                }\n                let stun_router = router.clone();\n                Some(StunClient::new(\n                    ref_iface.clone(),\n                    stun_router,\n                    agent_addr,\n                    locations.clone(),\n                ))\n            }\n        };\n\n        let clients: Arc<Mutex<StunClientsMap<I>>> = Arc::new(Mutex::new(\n            agents\n                .into_iter()\n                .filter_map(|agent| {\n                    tracing::trace!(target: \"stun\", %agent, \"Initializing STUN client for agent\");\n                    new_stun_client(agent).map(|client| (agent, client))\n                })\n                .collect(),\n        ));\n        let task = AbortOnDropHandle::new(tokio::spawn({\n            let clients = clients.clone();\n            let resolver = resolver.clone();\n            let server = server.clone();\n            let ref_iface = ref_iface.clone();\n            async move {\n                let lock_clients = || clients.lock().expect(\"StunClients mutex poisoned\");\n\n                let should_lookup_agents = |clients: &StunClientsMap<I>| match clients\n                    .values()\n                    .try_fold((0, 0), |(active, inactive), client| {\n                        match client.state.get() {\n                            ClientState::Active => ControlFlow::Continue((active + 1, inactive)),\n                            ClientState::Inactive => ControlFlow::Continue((active, inactive + 1)),\n                            ClientState::Closing => ControlFlow::Break(()),\n                        }\n                    }) {\n                    ControlFlow::Continue((active, _inactive)) => active 
< Self::MIN_AGENTS,\n                    ControlFlow::Break(_) => false,\n                };\n\n                let wait_too_few_agents = |clients: &StunClientsMap<I>| {\n                    let clients_len = clients.len();\n                    debug_assert!(clients_len >= Self::MIN_AGENTS);\n                    let mut stream = clients\n                        .iter()\n                        .map(|(.., client)| client.state.wait(ClientState::Inactive))\n                        .collect::<FuturesUnordered<_>>()\n                        .skip(clients_len.saturating_sub(Self::MIN_AGENTS));\n                    async move { _ = stream.next().await }\n                };\n\n                loop {\n                    while !{ should_lookup_agents(&lock_clients()) } {\n                        { wait_too_few_agents(&lock_clients()) }.await;\n                    }\n\n                    // Ensure two lookups are at least 10s apart, and cap each lookup at 10s so a stuck resolver cannot block this task\n                    let deadline = tokio::time::Instant::now() + Duration::from_secs(10);\n                    _ = tokio::time::timeout_at(deadline, async {\n                        let Ok(stream) = resolver.lookup(server.as_ref()).await else { return };\n                        let is_ipv4 = ref_iface.iface().bind_uri().family() == Family::V4;\n                        let mut stream = std::pin::pin!(stream);\n                        while let Some((_, addr)) = stream.next().await {\n                            let EndpointAddr::Direct { addr } = addr else { continue };\n                            if addr.is_ipv4() != is_ipv4 { continue }\n                            let done = {\n                                let mut clients = lock_clients();\n                                if clients.contains_key(&addr) { continue }\n                                if let Some(client) = new_stun_client(addr) {\n                                    tracing::debug!(target: \"stun\", %addr, \"discovered new STUN agent\");\n                                  
  clients.insert(addr, client);\n                                    !should_lookup_agents(&clients)\n                                } else { false }\n                            };\n                            if done { break }\n                        }\n                    }).await;\n                    tokio::time::sleep_until(deadline).await;\n                }\n            }\n        }));\n\n        Self {\n            ref_iface,\n            clients,\n            resolver,\n            server,\n            task: Some(task),\n        }\n    }\n\n    fn lock_clients(&self) -> MutexGuard<'_, StunClientsMap<I>> {\n        self.clients\n            .lock()\n            .expect(\"StunClientsComponentInner lock poisoned\")\n    }\n\n    pub fn poll_close(&mut self, cx: &mut Context<'_>) -> Poll<()> {\n        if let Some(task) = self.task.as_mut() {\n            task.abort();\n            _ = ready!(task.poll_unpin(cx));\n            self.task.take();\n        }\n\n        for (.., client) in self.lock_clients().iter() {\n            ready!(client.poll_close(cx))\n        }\n\n        Poll::Ready(())\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct StunClients<I: RefIO + 'static> {\n    clients: Arc<Mutex<StunClientsInner<I>>>,\n}\n\nimpl<I: RefIO + 'static> StunClients<I> {\n    pub fn new(\n        ref_iface: I,\n        router: StunRouter,\n        resolver: Arc<dyn Resolve + Send + Sync>,\n        server: impl Into<Arc<str>>,\n        agents: impl IntoIterator<Item = SocketAddr>,\n        locations: Option<IfaceLocations<I>>,\n    ) -> Self {\n        Self {\n            clients: Arc::new(Mutex::new(StunClientsInner::new(\n                ref_iface,\n                router,\n                resolver,\n                server.into(),\n                agents,\n                locations,\n            ))),\n        }\n    }\n\n    fn lock_clients(&self) -> MutexGuard<'_, StunClientsInner<I>> {\n        self.clients\n            .lock()\n            
.expect(\"StunClientsComponent lock poisoned\")\n    }\n\n    pub fn with_clients<T>(&self, f: impl FnOnce(&StunClientsMap<I>) -> T) -> T {\n        f(self.lock_clients().lock_clients().deref())\n    }\n\n    pub fn poll_close(&self, cx: &mut Context<'_>) -> Poll<()> {\n        self.lock_clients().poll_close(cx)\n    }\n}\n\npub type StunClientsComponent = StunClients<WeakInterface>;\n\nimpl Component for StunClientsComponent {\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()> {\n        self.lock_clients().poll_close(cx)\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        let mut clients = self.lock_clients();\n        if clients.ref_iface.same_io(&iface.downgrade()) {\n            return;\n        }\n\n        _ = iface.with_components(|components| {\n            let Some(router) = components.with(|router: &StunRouterComponent| {\n                router.reinit(iface);\n                router.router()\n            }) else {\n                return;\n            };\n            let locations = components.with(|locations: &LocationsComponent| {\n                locations.reinit(iface);\n                locations.clone()\n            });\n\n            let new_clients = StunClientsInner::new(\n                iface.downgrade(),\n                router,\n                clients.resolver.clone(),\n                clients.server.clone(),\n                clients.lock_clients().keys().copied(),\n                locations,\n            );\n            *clients = new_clients;\n        });\n    }\n}\n\nfn no_response_error() -> io::Error {\n    io::Error::new(io::ErrorKind::TimedOut, \"No response from STUN server\")\n}\n\nasync fn detect_outer_addr<I: RefIO>(\n    ref_iface: I,\n    stun_router: StunRouter,\n    stun_agent: SocketAddr,\n    retry_times: u8,\n    timeout: Duration,\n) -> io::Result<SocketAddr> {\n    let request = Request::default();\n    let response = Transaction::begin(ref_iface, stun_router, retry_times, timeout)\n        
.send_request(request, stun_agent)\n        .await?\n        .ok_or_else(no_response_error)?;\n    response.map_addr()\n}\n\npub static VISUALIZE_NAT_DETECTION: AtomicBool = AtomicBool::new(false);\n\nmacro_rules! visualize_nat_detection {\n    ($($tt:tt)*) => {{\n        if VISUALIZE_NAT_DETECTION.load(std::sync::atomic::Ordering::Relaxed) {\n            tracing::info!($($tt)*);\n        } else {\n            tracing::trace!(target: \"stun\", $($tt)*);\n        }\n    }};\n}\n\npub const RESTRICTED_RETRY_TIMES: u8 = 3;\n\nasync fn detect_nat_type<I: RefIO>(\n    ref_iface: I,\n    stun_router: StunRouter,\n    stun_agent: SocketAddr,\n    retry_times: u8,\n    timeout: Duration,\n) -> io::Result<NatType> {\n    let local_addr = ref_iface.iface().local_addr()?;\n    visualize_nat_detection!(\"Starting NAT detection with local address: {local_addr}\");\n    let stun_agent1 = stun_agent;\n\n    visualize_nat_detection!(\"Access Test: probing server {stun_agent1}\");\n    let request = Request::default();\n    let response = Transaction::begin(ref_iface.clone(), stun_router.clone(), retry_times, timeout)\n        .send_request(request, stun_agent1)\n        .await?;\n\n    let Some(response) = response else {\n        visualize_nat_detection!(\"Result: No response after {retry_times} attempts\");\n        visualize_nat_detection!(\n            \"Conclusion: The network feature is {:?}, NAT Type is {:?}\\n\",\n            NetFeature::Blocked,\n            NatType::Blocked\n        );\n        return Ok(NatType::Blocked);\n    };\n\n    let mut net_features = NetFeature::empty();\n\n    let mapped_addr1 = response.map_addr()?;\n    let stun_agent2 = response.changed_addr()?;\n    visualize_nat_detection!(\"Result: Received from {stun_agent1}, external addr: {mapped_addr1}\");\n    if mapped_addr1 == local_addr {\n        // Public IP\n        visualize_nat_detection!(\n            \"Conclusion: Address {local_addr} has public IP, Proceeding to filtering behavior 
test.\\n\"\n        );\n        visualize_nat_detection!(\n            \"Filtering Test: probing server {stun_agent2}. Request server to respond from a changed IP:port\",\n        );\n        net_features |= NetFeature::Public;\n        let request = Request::change_ip_and_port();\n        let response =\n            Transaction::begin(ref_iface.clone(), stun_router.clone(), retry_times, timeout)\n                .send_request(request, stun_agent2)\n                .await?;\n        if let Some(response) = response {\n            let mapped_addr2 = response.map_addr()?;\n            visualize_nat_detection!(\n                \"Result: received from {}, external addr: {mapped_addr2}\",\n                response.source_addr()?\n            );\n            visualize_nat_detection!(\"Conclusion: Destination IP independent filtering\\n\");\n        } else {\n            net_features |= NetFeature::Restricted;\n            visualize_nat_detection!(\"Result: No response after {retry_times} attempts\");\n            visualize_nat_detection!(\"Conclusion: Filters packets based on destination IP\\n\");\n        }\n        visualize_nat_detection!(\n            \"Filtering Test: probing server {stun_agent2}. 
Request server to respond from a changed port\",\n        );\n        let request = Request::change_port();\n        let response =\n            Transaction::begin(ref_iface.clone(), stun_router.clone(), retry_times, timeout)\n                .send_request(request, stun_agent2)\n                .await?;\n        if let Some(response) = response {\n            let mapped_addr2 = response.map_addr()?;\n            visualize_nat_detection!(\n                \"Result: received from {}, external addr: {mapped_addr2}\",\n                response.source_addr()?\n            );\n            visualize_nat_detection!(\"Conclusion: Destination port independent filtering\\n\");\n        } else {\n            net_features |= NetFeature::PortRestricted;\n            visualize_nat_detection!(\"Result: No response after {retry_times} attempts\");\n            visualize_nat_detection!(\"Conclusion: Filters packets based on destination port\\n\");\n        }\n        let nat_type = NatType::from(net_features);\n        visualize_nat_detection!(\n            \"NAT detection completed. 
Network features: {:?}, NAT Type: {:?}\",\n            net_features,\n            nat_type\n        );\n        Ok(nat_type)\n    } else {\n        // Private IP\n        visualize_nat_detection!(\"Conclusion: Address {local_addr} has private IP.\\n\");\n        visualize_nat_detection!(\"Mapping Test1: probing server {stun_agent2}\");\n        let request = Request::default();\n        let response =\n            Transaction::begin(ref_iface.clone(), stun_router.clone(), retry_times, timeout)\n                .send_request(request, stun_agent2)\n                .await?\n                .ok_or_else(no_response_error)?;\n\n        let stun_agent3 = response.changed_addr()?;\n        let mapped_addr2 = response.map_addr()?;\n        if mapped_addr1 != mapped_addr2 {\n            net_features |= NetFeature::Symmetric;\n            visualize_nat_detection!(\n                \"Result: Received from {stun_agent2}, external addr: {mapped_addr2}\"\n            );\n            visualize_nat_detection!(\n                \"Conclusion: The mapped address is different and destination-dependent.\\n\"\n            );\n\n            // Determine whether the port mapping follows a pattern\n            visualize_nat_detection!(\"Mapping Test2: probing server {stun_agent3}\");\n            let request = Request::default();\n            let response =\n                Transaction::begin(ref_iface.clone(), stun_router.clone(), retry_times, timeout)\n                    .send_request(request, stun_agent3)\n                    .await?;\n\n            let Some(response) = response else {\n                visualize_nat_detection!(\"Result: No response after {retry_times} attempts\");\n                visualize_nat_detection!(\n                    \"Conclusion: Unable to determine port mapping behavior due to lack of response from third server.\\n\"\n                );\n                return Ok(NatType::from(net_features));\n            };\n\n            let mapped_addr3 = response.map_addr()?;\n            let step1 = 
mapped_addr2.port() as i32 - mapped_addr1.port() as i32;\n            let step2 = mapped_addr3.port() as i32 - mapped_addr2.port() as i32;\n            visualize_nat_detection!(\n                \"Result: Received from {stun_agent3}, external addr: {mapped_addr3}\"\n            );\n            if step1 == step2 {\n                visualize_nat_detection!(\n                    \"Conclusion: The port changes regularly with step {step1}\\n\"\n                );\n            } else {\n                visualize_nat_detection!(\"Conclusion: The ports change randomly.\\n\");\n            }\n            Ok(NatType::from(net_features))\n        } else {\n            // Not symmetric\n            // Open test\n            // Ask server2 to reply from a changed IP and port, i.e. from server3; server3 may not respond\n            // server1: ip1:port1\n            // server2: ip2:port2\n            // server3: ip3:port1\n            // server4: ip1:port2\n            // server5: ip2:port1\n            // server6: ip3:port2\n            visualize_nat_detection!(\n                \"Filtering Test: probing server {stun_agent2}. 
Request server to respond from a changed IP and port\",\n            );\n            let request = Request::change_ip_and_port();\n            // The server may not respond; too long a timeout would make detection take too long\n            let response = Transaction::begin(\n                ref_iface.clone(),\n                stun_router.clone(),\n                RESTRICTED_RETRY_TIMES,\n                timeout,\n            )\n            .send_request(request, stun_agent2)\n            .await?;\n            if let Some(response) = response {\n                let mapped_addr2 = response.map_addr()?;\n                visualize_nat_detection!(\n                    \"Result: received from {}, external addr: {mapped_addr2}\",\n                    response.source_addr()?\n                );\n                visualize_nat_detection!(\"Conclusion: Destination IP independent filtering\\n\");\n            } else {\n                net_features |= NetFeature::Restricted;\n                visualize_nat_detection!(\n                    \"Result: No response after {RESTRICTED_RETRY_TIMES} attempts\"\n                );\n                visualize_nat_detection!(\"Conclusion: Filters packets based on destination IP\\n\");\n            }\n            visualize_nat_detection!(\n                \"Filtering Test: probing server {stun_agent2}. 
Request server to respond from a changed port\",\n            );\n            // Restricted test\n            // Ask server2 to reply from a changed port, i.e. from server5; it may not respond\n            // The server may not respond; too long a timeout would make detection take too long\n            let request = Request::change_port();\n            let response = Transaction::begin(\n                ref_iface.clone(),\n                stun_router.clone(),\n                RESTRICTED_RETRY_TIMES,\n                timeout,\n            )\n            .send_request(request, stun_agent2)\n            .await?;\n            if let Some(response) = response {\n                let mapped_addr2 = response.map_addr()?;\n                visualize_nat_detection!(\n                    \"Result: received from {}, external addr: {mapped_addr2}\",\n                    response.source_addr()?\n                );\n                visualize_nat_detection!(\"Conclusion: Destination port independent filtering\\n\");\n            } else {\n                net_features |= NetFeature::PortRestricted;\n                visualize_nat_detection!(\n                    \"Result: No response after {RESTRICTED_RETRY_TIMES} attempts\"\n                );\n                visualize_nat_detection!(\"Conclusion: Filters packets based on destination port\\n\");\n            }\n            // Dynamic test: probe server3\n            visualize_nat_detection!(\"Dynamic Test: probing server {stun_agent3}\");\n            let request = Request::default();\n            let response =\n                Transaction::begin(ref_iface.clone(), stun_router.clone(), retry_times, timeout)\n                    .send_request(request, stun_agent3)\n                    .await?;\n\n            if let Some(response) = response {\n                // Responded, but if the mapped address differs, the NAT is dynamic\n                let mapped_addr3 = response.map_addr()?;\n                visualize_nat_detection!(\n                    \"Result: received from {}, external addr: {mapped_addr3}\",\n                    response.source_addr()?\n                );\n                
if mapped_addr1 != mapped_addr3 {\n                    net_features |= NetFeature::Dynamic;\n                    visualize_nat_detection!(\n                        \"Conclusion: Mapping inconsistency indicates Address-Dependent Mapping, a Dynamic NAT type\\n\"\n                    );\n                } else {\n                    visualize_nat_detection!(\n                        \"Conclusion: The mapping address is consistent, not Dynamic\\n\"\n                    );\n                }\n            } else {\n                // No response is also treated as dynamic\n                net_features |= NetFeature::Dynamic;\n                visualize_nat_detection!(\"Result: No response after {retry_times} attempts\");\n                visualize_nat_detection!(\n                    \"Conclusion: Absence of server response may indicate Dynamic NAT behavior\\n\"\n                );\n            }\n            let nat_type = NatType::from(net_features);\n            visualize_nat_detection!(\n                \"NAT detection completed. Network features: {:?}, NAT Type: {:?}\",\n                net_features,\n                nat_type\n            );\n            Ok(nat_type)\n        }\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/nat/iface.rs",
    "content": "use std::{io, net::SocketAddr};\n\nuse bytes::{BufMut, BytesMut};\nuse qbase::net::route::{Line, Link, Route};\nuse qinterface::io::{IO, IoExt};\n\nuse crate::{\n    nat::msg::{Packet, TransactionId, WritePacket},\n    packet::{StunHeader, WriteStunHeader},\n};\n\npub trait StunIO: IO {\n    fn local_addr(&self) -> io::Result<SocketAddr> {\n        self.bound_addr()\n    }\n\n    fn send_stun_packet(\n        &self,\n        packet: Packet,\n        txid: TransactionId,\n        dst: SocketAddr,\n    ) -> impl Future<Output = io::Result<()>> + Send {\n        async move {\n            let mut buf = BytesMut::zeroed(128);\n            let (mut stun_hdr, mut stun_body) = buf.split_at_mut(StunHeader::encoding_size());\n\n            // put stun header\n            stun_hdr.put_stun_header(&StunHeader::new(0));\n\n            // put stun body\n            let origin = stun_body.remaining_mut();\n            stun_body.put_packet(&txid, &packet);\n            let consumed = origin - stun_body.remaining_mut();\n            buf.truncate(StunHeader::encoding_size() + consumed);\n\n            let bufs = &[io::IoSlice::new(&buf)];\n\n            // assemble packet header\n            let link = Link::new(self.bound_addr()?, dst);\n            let pathway = link.into();\n            let line = Line::new(link, 64, None, 0);\n            let hdr = Route::new(pathway, line);\n\n            self.sendmmsg(bufs, hdr).await\n        }\n    }\n}\n\nimpl<I: IO + ?Sized> StunIO for I {}\n"
  },
  {
    "path": "qtraversal/src/nat/msg.rs",
    "content": "use std::{io, net::SocketAddr};\n\nuse bytes::BufMut;\nuse nom::{\n    Err, IResult, Parser,\n    combinator::map,\n    error::{Error, ErrorKind},\n    multi::many0,\n    number::streaming::{be_u8, be_u16},\n};\nuse qbase::net::{AddrFamily, Family, WriteSocketAddr, be_socket_addr};\nuse rand::RngExt;\nuse thiserror::Error;\n\npub const BINDING_REQUEST: u16 = 0x0001;\npub const BINDING_RESPONSE: u16 = 0x0101;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub struct TransactionId([u8; 16]);\n\nimpl AsRef<[u8]> for TransactionId {\n    fn as_ref(&self) -> &[u8] {\n        &self.0\n    }\n}\n\nimpl TransactionId {\n    pub fn from_slice(slice: &[u8]) -> Self {\n        let mut id = [0u8; 16];\n        id.copy_from_slice(slice);\n        TransactionId(id)\n    }\n\n    pub fn random() -> Self {\n        let mut id = [0u8; 16];\n        rand::rng().fill(&mut id);\n        TransactionId(id)\n    }\n}\n\n#[derive(Debug)]\npub enum Packet {\n    Request(Request),\n    Response(Response),\n}\n\n/// Attribute types carried in a STUN packet:\n#[derive(Debug, Clone, PartialEq)]\npub enum Attr {\n    // The external mapped address returned by the server\n    MappedAddress(SocketAddr),\n    // The response address a client specifies in its request\n    ResponseAddress(SocketAddr),\n    // Flags carried by a client request asking the server to respond from a changed IP:port\n    ChangeRequest(u8),\n    // The source address of the server's Response message, i.e. the server's own address\n    SourceAddress(SocketAddr),\n    // The address of an alternate STUN server returned by the server,\n    // with a different port, for use in later tests\n    ChangedAddress(SocketAddr),\n}\n\n#[derive(Debug)]\npub enum AttrType {\n    MappedAddress(Family),\n    ResponseAddress(Family),\n    // Flags carried by a client request asking the server to respond from a changed IP:port\n    ChangeRequest(u8),\n    // The source address of the server's Response message, i.e. the server's own address\n    SourceAddress(Family),\n    // The address of an alternate STUN server returned by the server,\n    // with a different port, for use in later tests\n    ChangedAddress(Family),\n}\n\n#[derive(Debug, Error)]\n#[error(\"Invalid attribute type: {0}\")]\npub struct InvalidAttrType(u8);\n\nimpl From<AttrType> for u8 {\n    fn from(value: AttrType) -> Self {\n        match value {\n            AttrType::MappedAddress(Family::V4) => 0,\n       
     AttrType::MappedAddress(Family::V6) => 1,\n            AttrType::ResponseAddress(Family::V4) => 2,\n            AttrType::ResponseAddress(Family::V6) => 3,\n            AttrType::SourceAddress(Family::V4) => 4,\n            AttrType::SourceAddress(Family::V6) => 5,\n            AttrType::ChangedAddress(Family::V4) => 6,\n            AttrType::ChangedAddress(Family::V6) => 7,\n            AttrType::ChangeRequest(flag_set) => 8 | flag_set,\n        }\n    }\n}\n\nimpl TryFrom<u8> for AttrType {\n    type Error = InvalidAttrType;\n\n    fn try_from(value: u8) -> Result<Self, Self::Error> {\n        match value {\n            0 => Ok(AttrType::MappedAddress(Family::V4)),\n            1 => Ok(AttrType::MappedAddress(Family::V6)),\n            2 => Ok(AttrType::ResponseAddress(Family::V4)),\n            3 => Ok(AttrType::ResponseAddress(Family::V6)),\n            4 => Ok(AttrType::SourceAddress(Family::V4)),\n            5 => Ok(AttrType::SourceAddress(Family::V6)),\n            6 => Ok(AttrType::ChangedAddress(Family::V4)),\n            7 => Ok(AttrType::ChangedAddress(Family::V6)),\n            8..12 => Ok(AttrType::ChangeRequest(value & 0x3)),\n            _ => Err(InvalidAttrType(value)),\n        }\n    }\n}\n\ntrait WriteAttr {\n    fn put_attr(&mut self, attr: &Attr);\n}\n\nimpl<T: BufMut> WriteAttr for T {\n    fn put_attr(&mut self, attr: &Attr) {\n        let typ: u8 = attr.typ().into();\n        match attr {\n            Attr::MappedAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n            }\n            Attr::ResponseAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n            }\n            Attr::ChangeRequest(flag) => {\n                self.put_u8(typ | *flag);\n            }\n            Attr::SourceAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n         
   }\n            Attr::ChangedAddress(socket_addr) => {\n                self.put_u8(typ);\n                self.put_socket_addr(socket_addr);\n            }\n        };\n    }\n}\n\nimpl Attr {\n    pub fn typ(&self) -> AttrType {\n        match self {\n            Attr::MappedAddress(socket_addr) => AttrType::MappedAddress(socket_addr.family()),\n            Attr::ResponseAddress(socket_addr) => AttrType::ResponseAddress(socket_addr.family()),\n            Attr::ChangeRequest(flag_set) => AttrType::ChangeRequest(*flag_set),\n            Attr::SourceAddress(socket_addr) => AttrType::SourceAddress(socket_addr.family()),\n            Attr::ChangedAddress(socket_addr) => AttrType::ChangedAddress(socket_addr.family()),\n        }\n    }\n\n    fn be_attr(input: &[u8]) -> IResult<&[u8], Self> {\n        if input.is_empty() {\n            return Err(Err::Error(Error::new(input, ErrorKind::Eof)));\n        }\n        let (remain, typ) = be_u8(input)?;\n        let typ: AttrType = typ\n            .try_into()\n            .map_err(|_| Err::Error(Error::new(input, ErrorKind::Alt)))?;\n        match typ {\n            AttrType::MappedAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::MappedAddress(addr)))\n            }\n            AttrType::ResponseAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::ResponseAddress(addr)))\n            }\n            AttrType::SourceAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::SourceAddress(addr)))\n            }\n            AttrType::ChangedAddress(family) => {\n                let (remain, addr) = be_socket_addr(remain, family)?;\n                Ok((remain, Attr::ChangedAddress(addr)))\n            }\n            AttrType::ChangeRequest(flags) => Ok((remain, Attr::ChangeRequest(flags))),\n        }\n    
}\n}\n\n#[derive(Debug, PartialEq, Clone)]\npub struct Request(Vec<Attr>);\n\n/// Only three kinds of Request are used so far: the empty default Request; one asking the server to\n/// respond from a changed IP and port; and one asking it to respond from a changed port only.\n/// Since a Request may never carry more than one ChangeRequest attribute, all three are constructed\n/// directly, and no other mutating constructors are provided.\nimpl Default for Request {\n    fn default() -> Self {\n        Self(Vec::with_capacity(0))\n    }\n}\n\npub(crate) trait WriteRequest {\n    fn put_request(&mut self, request: &Request);\n}\n\nimpl<T: BufMut> WriteRequest for T {\n    fn put_request(&mut self, request: &Request) {\n        for attr in &request.0 {\n            self.put_attr(attr);\n        }\n    }\n}\n\npub fn be_request(input: &[u8]) -> IResult<&[u8], Request> {\n    many0(Attr::be_attr).map(Request).parse(input)\n}\n\npub const CHANGE_PORT: u8 = 0x01;\npub const CHANGE_IP: u8 = 0x02;\n\nimpl Request {\n    pub fn change_ip_and_port() -> Self {\n        let mut request = Request::default();\n        request.0.push(Attr::ChangeRequest(CHANGE_IP | CHANGE_PORT));\n        request\n    }\n\n    pub fn change_port() -> Self {\n        let mut request = Request::default();\n        request.0.push(Attr::ChangeRequest(CHANGE_PORT));\n        request\n    }\n\n    pub fn add_response_address(&mut self, addr: SocketAddr) -> &mut Self {\n        self.0.push(Attr::ResponseAddress(addr));\n        self\n    }\n\n    // Carries only the response address, without any ChangeRequest attribute\n    pub fn with_response_addr(addr: SocketAddr) -> Self {\n        Request(vec![Attr::ResponseAddress(addr)])\n    }\n\n    pub fn change_request(&self) -> Option<u8> {\n        for attr in &self.0 {\n            if let Attr::ChangeRequest(flags) = attr {\n                return Some(*flags);\n            }\n        }\n        None\n    }\n\n    pub fn response_address(&self) -> Option<&SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::ResponseAddress(addr) = attr {\n                return Some(addr);\n            }\n        }\n        None\n    }\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub struct Response(pub 
Vec<Attr>);\n\npub(crate) trait WriteResponse {\n    fn put_response(&mut self, response: &Response);\n}\n\nimpl<T: BufMut> WriteResponse for T {\n    fn put_response(&mut self, response: &Response) {\n        for attr in &response.0 {\n            self.put_attr(attr);\n        }\n    }\n}\n\npub fn be_response(input: &[u8]) -> IResult<&[u8], Response> {\n    many0(Attr::be_attr).map(Response).parse(input)\n}\n\nimpl Response {\n    pub fn with(attrs: Vec<Attr>) -> Self {\n        Response(attrs)\n    }\n\n    pub fn map_addr(&self) -> io::Result<SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::MappedAddress(addr) = attr {\n                return Ok(*addr);\n            };\n        }\n        Err(io::Error::other(\"No mapped address found in response\"))\n    }\n\n    pub fn changed_addr(&self) -> io::Result<SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::ChangedAddress(addr) = attr {\n                return Ok(*addr);\n            };\n        }\n        Err(io::Error::other(\"No changed address found in response\"))\n    }\n\n    pub fn source_addr(&self) -> io::Result<SocketAddr> {\n        for attr in &self.0 {\n            if let Attr::SourceAddress(addr) = attr {\n                return Ok(*addr);\n            };\n        }\n        Err(io::Error::other(\"No source address found in response\"))\n    }\n}\n\npub fn be_packet(input: &[u8]) -> IResult<&[u8], (TransactionId, Packet)> {\n    let (remain, typ) = be_u16(input)?;\n    let (txid, remain) = remain.split_at(16);\n    let (remain, packet) = match typ {\n        BINDING_REQUEST => map(be_request, Packet::Request).parse(remain)?,\n        BINDING_RESPONSE => map(be_response, Packet::Response).parse(remain)?,\n        _ => return Err(Err::Error(Error::new(input, ErrorKind::Alt))),\n    };\n    Ok((remain, (TransactionId::from_slice(txid), packet)))\n}\n\npub trait WritePacket {\n    fn put_packet(&mut self, txid: &TransactionId, packet: &Packet);\n}\n\nimpl<T: 
BufMut> WritePacket for T {\n    fn put_packet(&mut self, txid: &TransactionId, packet: &Packet) {\n        match packet {\n            Packet::Request(request) => {\n                self.put_u16(BINDING_REQUEST);\n                self.put_slice(txid.as_ref());\n                self.put_request(request);\n            }\n            Packet::Response(response) => {\n                self.put_u16(BINDING_RESPONSE);\n                self.put_slice(txid.as_ref());\n                self.put_response(response);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn attr_deserialize() {\n        assert_eq!(\n            Attr::be_attr(&[4, 78, 34, 127, 0, 0, 1][..]),\n            Ok((\n                &[][..],\n                Attr::SourceAddress(\"127.0.0.1:20002\".parse().unwrap())\n            ))\n        );\n\n        assert_eq!(\n            Attr::be_attr(&[6, 78, 34, 127, 0, 0, 1][..]),\n            Ok((\n                &[][..],\n                Attr::ChangedAddress(\"127.0.0.1:20002\".parse().unwrap())\n            ))\n        );\n        assert_eq!(\n            Attr::be_attr(&[0, 48, 57, 127, 0, 0, 1][..]),\n            Ok((\n                &[][..],\n                Attr::MappedAddress(\"127.0.0.1:12345\".parse().unwrap())\n            ))\n        )\n    }\n\n    #[test]\n    fn response_deserialize() {\n        let buf = [\n            4, 78, 34, 127, 0, 0, 1, 0, 48, 57, 127, 0, 0, 1, 6, 78, 34, 127, 0, 0, 1,\n        ];\n        let (remain, response) = be_response(&buf).unwrap();\n        assert_eq!(remain.len(), 0);\n        assert_eq!(\n            response,\n            Response(vec![\n                Attr::SourceAddress(\"127.0.0.1:20002\".parse().unwrap()),\n                Attr::MappedAddress(\"127.0.0.1:12345\".parse().unwrap()),\n                Attr::ChangedAddress(\"127.0.0.1:20002\".parse().unwrap())\n            ])\n        );\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/nat/router.rs",
    "content": "use std::{\n    net::SocketAddr,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{Context, Poll},\n};\n\nuse dashmap::DashMap;\nuse qbase::{net::route::Link, util::ArcAsyncDeque};\nuse qinterface::{Interface, WeakInterface, component::Component};\nuse tokio::sync::SetOnce;\nuse tracing::debug;\n\nuse super::msg::{self, Packet, Request, Response, TransactionId};\n\ntype ResponseRouter = Arc<DashMap<TransactionId, Arc<SetOnce<(Response, SocketAddr)>>>>;\n\n#[derive(Default, Debug, Clone)]\npub struct StunRouter {\n    request_router: ArcAsyncDeque<(Request, TransactionId, SocketAddr)>,\n    response_router: ResponseRouter,\n}\n\nimpl StunRouter {\n    pub fn new() -> Self {\n        Self::default()\n    }\n\n    pub fn deliver_stun_packet(&self, txid: TransactionId, packet: Packet, link: Link) {\n        match packet {\n            msg::Packet::Request(request) => {\n                self.request_router.push_back((request, txid, link.dst));\n            }\n            msg::Packet::Response(response) => {\n                if let Some((_id, recv_resp)) = self.response_router.remove(&txid) {\n                    let _ = recv_resp.set((response, link.dst));\n                } else {\n                    debug!(\n                        target: \"stun\",\n                        ?txid, %link, from =% link.dst,\n                        \"Unknown request transaction id\",\n                    );\n                }\n            }\n        }\n    }\n\n    pub async fn receive_request(&self) -> Option<(Request, TransactionId, SocketAddr)> {\n        self.request_router.pop().await\n    }\n\n    /// Close the router, causing any pending `receive_request()` to return `None`.\n    /// Called by `StunRouterComponent::reinit()` before replacing with a new router,\n    /// so that a running `StunServer` task can detect the rebind and exit cleanly.\n    pub fn close(&self) {\n        self.request_router.close();\n        self.response_router.clear();\n    }\n\n    
pub(super) fn register(\n        &self,\n        transaction_id: TransactionId,\n        future: Arc<SetOnce<(Response, SocketAddr)>>,\n    ) {\n        self.response_router.insert(transaction_id, future);\n    }\n\n    pub(super) fn remove(&self, transaction_id: &TransactionId) {\n        let _ = self.response_router.remove(transaction_id);\n    }\n}\n\n#[derive(Debug)]\nstruct StunRouterComponentInner {\n    router: StunRouter,\n    ref_iface: WeakInterface,\n}\n\n#[derive(Debug)]\npub struct StunRouterComponent {\n    inner: Mutex<StunRouterComponentInner>,\n}\n\nimpl StunRouterComponent {\n    pub fn new(ref_iface: WeakInterface) -> Self {\n        Self {\n            inner: Mutex::new(StunRouterComponentInner {\n                router: StunRouter::new(),\n                ref_iface,\n            }),\n        }\n    }\n\n    fn lock_inner(&self) -> MutexGuard<'_, StunRouterComponentInner> {\n        self.inner.lock().expect(\"StunRouter lock poisoned\")\n    }\n\n    pub fn ref_iface(&self) -> WeakInterface {\n        self.lock_inner().ref_iface.clone()\n    }\n\n    pub fn router(&self) -> StunRouter {\n        self.lock_inner().router.clone()\n    }\n}\n\nimpl Component for StunRouterComponent {\n    fn reinit(&self, iface: &Interface) {\n        let mut inner = self.lock_inner();\n        if inner.ref_iface.same_io(&iface.downgrade()) {\n            return;\n        }\n        // Close the old router so any running StunServer task can detect the rebind and exit.\n        inner.router.close();\n        *inner = StunRouterComponentInner {\n            router: StunRouter::new(),\n            ref_iface: iface.downgrade(),\n        };\n    }\n\n    fn poll_shutdown(&self, _cx: &mut Context<'_>) -> Poll<()> {\n        Poll::Ready(())\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/nat/server.rs",
    "content": "use std::{\n    io,\n    net::SocketAddr,\n    pin::Pin,\n    sync::Mutex,\n    task::{Context, Poll, ready},\n};\n\nuse qinterface::{Interface, WeakInterface, component::Component, io::RefIO};\nuse tokio_util::task::AbortOnDropHandle;\nuse tracing::{info, trace};\n\nuse super::{\n    msg::{Attr, Request, Response},\n    router::StunRouter,\n};\nuse crate::nat::{\n    iface::StunIO,\n    msg::{CHANGE_IP, CHANGE_PORT, Packet},\n    router::StunRouterComponent,\n};\n\n#[derive(Debug, Clone, Default)]\npub struct StunServerConfig {\n    change_port: Option<u16>,\n    change_address: Option<SocketAddr>,\n}\n\n#[bon::bon]\nimpl StunServerConfig {\n    #[builder(finish_fn = init)]\n    pub fn new(change_port: Option<u16>, change_address: Option<SocketAddr>) -> Self {\n        Self {\n            change_port,\n            change_address,\n        }\n    }\n}\n\n#[derive(Debug)]\npub struct StunServer<I: RefIO + 'static> {\n    ref_iface: I,\n    stun_router: StunRouter,\n    config: StunServerConfig,\n}\n\nimpl<I: RefIO + 'static> StunServer<I> {\n    pub fn new(ref_iface: I, stun_router: StunRouter, config: StunServerConfig) -> Self {\n        info!(\n            target: \"stun\",\n            local_addr = ?ref_iface.iface().local_addr(),\n            change_port = ?config.change_port,\n            change_address = ?config.change_address,\n            \"new stun server\",\n        );\n        Self {\n            ref_iface,\n            stun_router,\n            config,\n        }\n    }\n\n    pub fn spawn(self) -> AbortOnDropHandle<io::Result<()>> {\n        AbortOnDropHandle::new(tokio::spawn(async move {\n            serve_loop(self.ref_iface, self.stun_router, self.config).await\n        }))\n    }\n}\n\nasync fn serve_loop<I: RefIO>(\n    ref_iface: I,\n    stun_router: StunRouter,\n    config: StunServerConfig,\n) -> io::Result<()> {\n    info!(target: \"stun\", \"Server started\");\n    let local_addr = ref_iface.iface().local_addr()?;\n\n    while 
let Some((request, txid, src)) = stun_router.receive_request().await {\n        trace!(target: \"stun\", ?request, \"recv request\");\n        match (request.change_request(), request.response_address()) {\n            (Some(changes), _) => {\n                let Ok(addr) = select_change_target(src, changes, local_addr, &config) else {\n                    trace!(\n                        target: \"stun\",\n                        changes,\n                        change_port = ?config.change_port,\n                        change_address = ?config.change_address,\n                        \"drop request: server lacks requested change capability\",\n                    );\n                    continue;\n                };\n                let request = Request::with_response_addr(src);\n                trace!(target: \"stun\", ?request, to = %addr, \"send request\");\n                ref_iface\n                    .iface()\n                    .send_stun_packet(Packet::Request(request), txid, addr)\n                    .await?;\n            }\n            (None, Some(&response_addr)) => {\n                let mut attrs = vec![\n                    Attr::SourceAddress(local_addr),\n                    Attr::MappedAddress(response_addr),\n                ];\n                if let Some(addr) = config.change_address {\n                    attrs.push(Attr::ChangedAddress(addr));\n                }\n                let response = Response::with(attrs);\n                trace!(target: \"stun\", ?response, to = %response_addr, \"send response\");\n                ref_iface\n                    .iface()\n                    .send_stun_packet(Packet::Response(response), txid, response_addr)\n                    .await?;\n            }\n            _ => {\n                let mut attrs = vec![Attr::SourceAddress(local_addr), Attr::MappedAddress(src)];\n                if let Some(addr) = config.change_address {\n                    attrs.push(Attr::ChangedAddress(addr));\n     
           }\n                let response = Response::with(attrs);\n                trace!(target: \"stun\", ?response, to = %src, \"send response\");\n                ref_iface\n                    .iface()\n                    .send_stun_packet(Packet::Response(response), txid, src)\n                    .await?;\n            }\n        }\n    }\n\n    trace!(target: \"stun\", \"Request handler finished - no more requests\");\n    Ok(())\n}\n\nfn select_change_target(\n    src: SocketAddr,\n    changes: u8,\n    local_addr: SocketAddr,\n    config: &StunServerConfig,\n) -> io::Result<SocketAddr> {\n    let wants_ip = changes & CHANGE_IP != 0;\n    let wants_port = changes & CHANGE_PORT != 0;\n\n    match (wants_ip, wants_port) {\n        (false, false) => Ok(src),\n        (true, false) => {\n            // CHANGE_IP: respond from a different IP (complete change_address, port may differ)\n            config.change_address.ok_or_else(|| {\n                io::Error::new(io::ErrorKind::Unsupported, \"CHANGE_IP not supported\")\n            })\n        }\n        (false, true) => {\n            let port = config.change_port.ok_or_else(|| {\n                io::Error::new(io::ErrorKind::Unsupported, \"CHANGE_PORT not supported\")\n            })?;\n            Ok(SocketAddr::new(local_addr.ip(), port))\n        }\n        (true, true) => {\n            let addr = config.change_address.ok_or_else(|| {\n                io::Error::new(\n                    io::ErrorKind::Unsupported,\n                    \"CHANGE_IP and CHANGE_PORT not supported\",\n                )\n            })?;\n            Ok(addr)\n        }\n    }\n}\n\n#[derive(Debug)]\nstruct StunServerComponentInner {\n    ref_iface: WeakInterface,\n    config: StunServerConfig,\n    task: Option<AbortOnDropHandle<io::Result<()>>>,\n}\n\n#[derive(Debug)]\npub struct StunServerComponent {\n    inner: Mutex<StunServerComponentInner>,\n}\n\nimpl StunServerComponent {\n    pub fn new(\n        ref_iface: 
WeakInterface,\n        stun_router: StunRouter,\n        config: StunServerConfig,\n    ) -> Self {\n        let task =\n            Some(StunServer::new(ref_iface.clone(), stun_router.clone(), config.clone()).spawn());\n        Self {\n            inner: Mutex::new(StunServerComponentInner {\n                ref_iface,\n                config,\n                task,\n            }),\n        }\n    }\n\n    fn lock_inner(&self) -> std::sync::MutexGuard<'_, StunServerComponentInner> {\n        self.inner.lock().unwrap()\n    }\n}\n\nimpl Component for StunServerComponent {\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> Poll<()> {\n        let mut inner = self.lock_inner();\n        if let Some(task) = inner.task.as_mut() {\n            task.abort();\n            _ = ready!(Pin::new(task).poll(cx));\n            inner.task = None;\n        }\n        Poll::Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        let mut inner = self.lock_inner();\n        if inner.ref_iface.same_io(&iface.downgrade()) {\n            return;\n        }\n\n        _ = iface.with_components(|components| {\n            let Some(router) = components.with(|router: &StunRouterComponent| {\n                router.reinit(iface);\n                router.router()\n            }) else {\n                return;\n            };\n            if let Some(task) = inner.task.take() {\n                task.abort();\n            }\n\n            inner.ref_iface = iface.downgrade();\n            inner.task = Some(\n                StunServer::new(inner.ref_iface.clone(), router, inner.config.clone()).spawn(),\n            );\n        });\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/nat/tx.rs",
    "content": "use std::{io, net::SocketAddr, sync::Arc, time::Duration};\n\nuse qinterface::io::RefIO;\nuse tokio::{sync::SetOnce, time::timeout};\n\nuse super::{\n    msg::{Packet, Request, Response, TransactionId},\n    router::StunRouter,\n};\nuse crate::nat::iface::StunIO;\n\n#[derive(Clone)]\npub struct Transaction<I> {\n    stun_router: StunRouter,\n    ref_iface: I,\n    transaction_id: TransactionId,\n    pending_response: Arc<SetOnce<(Response, SocketAddr)>>,\n    retry_times: u8,\n    timeout: Duration,\n}\n\nimpl<I: RefIO> Transaction<I> {\n    pub fn begin(\n        ref_iface: I,\n        stun_router: StunRouter,\n        retry_times: u8,\n        timeout: Duration,\n    ) -> Self {\n        let pending_response = Arc::new(SetOnce::new());\n        let transaction_id = TransactionId::random();\n        stun_router.register(transaction_id, pending_response.clone());\n        Self {\n            stun_router,\n            ref_iface,\n            transaction_id,\n            pending_response,\n            retry_times,\n            timeout,\n        }\n    }\n    pub async fn send_request(\n        &self,\n        request: Request,\n        dst: SocketAddr,\n    ) -> io::Result<Option<Response>> {\n        let mut retry_times = self.retry_times;\n        while retry_times > 0 {\n            match timeout(self.timeout, self.do_tick(dst, request.clone())).await {\n                Ok(result) => return result.map(Some),\n                Err(_error) => retry_times -= 1,\n            }\n        }\n        Ok(None)\n    }\n\n    async fn do_tick(&self, dst: SocketAddr, request: Request) -> io::Result<Response> {\n        self.ref_iface\n            .iface()\n            .send_stun_packet(Packet::Request(request), self.transaction_id, dst)\n            .await?;\n        let (response, _src) = self.pending_response.wait().await.clone();\n        Ok(response)\n    }\n}\n\nimpl<I> Drop for Transaction<I> {\n    fn drop(&mut self) {\n        
self.stun_router.remove(&self.transaction_id);\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/nat.rs",
    "content": "pub mod client;\npub mod iface;\npub mod msg;\npub mod router;\npub mod server;\npub mod tx;\n"
  },
  {
    "path": "qtraversal/src/packet.rs",
    "content": "use bytes::BufMut;\nuse qbase::net::{\n    Family,\n    addr::{EndpointAddr, WriteEndpointAddr, be_endpoint_addr},\n};\n\nuse crate::PathWay;\n\nconst STUN_HEADER_MASK: u8 = 0b1111_1110;\nconst STUN_HEADER_BITS: u8 = 0b1100_0010;\n\nconst FORWARD_HEADER_MASK: u8 = 0b1110_0000;\nconst FORWARD_VERSION_MASK: u8 = 0b1111_0000;\nconst FORWARD_HEADER_BITS: u8 = 0b0110_0000;\nconst FORWARD_BIT: u8 = 0b0000_1000;\nconst FORWARD_FAMILY_BIT: u8 = 0b0000_0100;\nconst FORWARD_SRC_TYPE_BIT: u8 = 0b0000_0010;\nconst FORWARD_DST_TYPE_BIT: u8 = 0b0000_0001;\n\n#[derive(PartialEq, Eq, Debug)]\npub enum HeaderType {\n    Stun(u8),    // lowest bit\n    Forward(u8), // low 5 bits\n}\n\n// Stun Packet {\n//     Header Form (1) = 1,\n//     Fixed Bit (1) = 1,\n//     Stun Hdr (6), // Request 0b000010, Response 0b000011\n//     Version (32) = 0,\n//     DDIL(8) = 0, // disguised as a zero-length destination connection ID\n//     SDIL(8) = 0, // disguised as a zero-length source connection ID\n//     Ver(16), // 2 bytes: our own protocol version number, reserved for future upgrades\n//     ... Stun payload\n//   }\n#[derive(Clone, Copy)]\npub struct StunHeader {\n    version: u16,\n}\n\nimpl StunHeader {\n    pub fn new(version: u16) -> Self {\n        Self { version }\n    }\n\n    pub fn encoding_size() -> usize {\n        // first byte + Version(32) + DDIL + SDIL + Ver(16)\n        1 + 4 + 1 + 1 + 2\n    }\n}\n\npub fn be_stun_header(input: &[u8]) -> nom::IResult<&[u8], StunHeader> {\n    let (remain, version) = nom::number::streaming::be_u16(input)?;\n    Ok((remain, StunHeader { version }))\n}\n\npub trait WriteStunHeader {\n    fn put_stun_header(&mut self, stun_header: &StunHeader);\n}\n\nimpl<T: BufMut> WriteStunHeader for T {\n    fn put_stun_header(&mut self, stun_header: &StunHeader) {\n        self.put_u8(STUN_HEADER_BITS);\n        self.put_u32(0);\n        self.put_u8(0);\n        self.put_u8(0);\n        self.put_u16(stun_header.version);\n    }\n}\n\n// Forward Packet {\n//     Header Form (1) = 0,\n//     Fixed Bit (1) = 1,\n//     Spin Bit (1) = 1, // 1 indicates a forward header follows\n//     Remain (5), // mirrors the low 5 bits of the real QUIC packet's first byte, varying unpredictably for deeper disguise\n//     Version (4),\n//     
Forward (1) = 1,\n//     Family (1),  // 0 for IPv4, 1 for IPv6\n//     Src type(1), // 0 for direct, 1 for via agent\n//     Dst type(1), // 0 for direct, 1 for via agent\n//     Src endpoint, // Endpoint::Agent or Direct, depending on src type\n//     Dst endpoint, // Endpoint::Agent or Direct, depending on dst type\n//     ... Real Quic Packet\n//   }\n#[derive(Debug, Clone, Copy)]\npub struct ForwardHeader {\n    remain: u8,  // low 5 bits\n    version: u8, // high 4 bits\n    pathway: PathWay,\n}\n\nimpl ForwardHeader {\n    pub fn encoding_size(pathway: &PathWay) -> usize {\n        if matches!(pathway.remote(), EndpointAddr::Direct { .. }) {\n            return 0;\n        }\n        1 + 1 + pathway.local().encoding_size() + pathway.remote().encoding_size()\n    }\n\n    pub fn pathway(&self) -> PathWay {\n        self.pathway\n    }\n\n    pub fn new(version: u8, pathway: &PathWay, buffer: &[u8]) -> Self {\n        let remain = buffer[0] & 0b0001_1111;\n        Self {\n            remain,\n            version,\n            pathway: *pathway,\n        }\n    }\n}\n\npub trait WriteForwardHeader {\n    fn put_forward_header(&mut self, forward_header: &ForwardHeader);\n}\n\nimpl<T: BufMut> WriteForwardHeader for T {\n    fn put_forward_header(&mut self, forward_header: &ForwardHeader) {\n        self.put_u8(FORWARD_HEADER_BITS | forward_header.remain);\n        let mut flag = (forward_header.version << 4) | FORWARD_BIT;\n\n        if forward_header.pathway.local().ip().is_ipv6() {\n            flag |= FORWARD_FAMILY_BIT;\n        }\n        if matches!(forward_header.pathway.local(), EndpointAddr::Agent { .. }) {\n            flag |= FORWARD_SRC_TYPE_BIT;\n        }\n        if matches!(forward_header.pathway.remote(), EndpointAddr::Agent { .. 
}) {\n            flag |= FORWARD_DST_TYPE_BIT;\n        }\n        self.put_u8(flag);\n        self.put_endpoint_addr(forward_header.pathway.local());\n        self.put_endpoint_addr(forward_header.pathway.remote());\n    }\n}\n\npub fn be_forward_header(input: &[u8]) -> nom::IResult<&[u8], ForwardHeader> {\n    let (remain, first) = nom::number::streaming::be_u8(input)?;\n    let version = (first & FORWARD_VERSION_MASK) >> 4;\n    let flag = first & !FORWARD_VERSION_MASK;\n    let family = match flag & FORWARD_FAMILY_BIT {\n        0 => Family::V4,\n        _ => Family::V6,\n    };\n\n    let src_ep_typ = flag & FORWARD_SRC_TYPE_BIT;\n    let dst_ep_typ = flag & FORWARD_DST_TYPE_BIT;\n    let (remain, src) = be_endpoint_addr(remain, src_ep_typ, family)?;\n    let (remain, dst) = be_endpoint_addr(remain, dst_ep_typ, family)?;\n    let pathway = PathWay::new(src, dst);\n    Ok((\n        remain,\n        ForwardHeader {\n            remain: first,\n            version,\n            pathway,\n        },\n    ))\n}\n\n#[derive(Clone, Copy)]\npub enum Header {\n    Stun(StunHeader),\n    Forward(ForwardHeader),\n}\n\npub fn be_header_type(input: &[u8]) -> nom::IResult<&[u8], HeaderType> {\n    let (remain, first) = nom::number::streaming::be_u8(input)?;\n    if first & STUN_HEADER_MASK == STUN_HEADER_BITS {\n        let (remain, version) = nom::number::streaming::be_u32(remain)?;\n        if version == 0 {\n            let (remain, _) = nom::number::streaming::be_u8(remain)?;\n            let (remain, _) = nom::number::streaming::be_u8(remain)?;\n            return Ok((remain, HeaderType::Stun(first & 1)));\n        }\n    } else if first & FORWARD_HEADER_MASK == FORWARD_HEADER_BITS {\n        return Ok((remain, HeaderType::Forward(first & 0b0001_1111)));\n    }\n    Err(nom::Err::Error(nom::error::make_error(\n        input,\n        nom::error::ErrorKind::Alt,\n    )))\n}\n\npub fn be_header(input: &[u8]) -> nom::IResult<&[u8], Header> {\n    let (remain, ty) = 
be_header_type(input)?;\n    match ty {\n        HeaderType::Stun(_ty) => {\n            let (remain, stun_hdr) = be_stun_header(remain)?;\n            Ok((remain, Header::Stun(stun_hdr)))\n        }\n        HeaderType::Forward(_ty) => {\n            let (remain, forward_hdr) = be_forward_header(remain)?;\n            Ok((remain, Header::Forward(forward_hdr)))\n        }\n    }\n}\n\npub trait WriteHeader {\n    fn put_header(&mut self, header: &Header);\n}\n\nimpl<T: BufMut> WriteHeader for T {\n    fn put_header(&mut self, header: &Header) {\n        match header {\n            Header::Stun(stun_header) => {\n                self.put_stun_header(stun_header);\n            }\n            Header::Forward(forward_header) => {\n                self.put_forward_header(forward_header);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use bytes::BytesMut;\n\n    use super::*;\n\n    #[test]\n    fn test_stun_header() {\n        let stun_hdr = StunHeader::new(0);\n        let mut buf = BytesMut::with_capacity(StunHeader::encoding_size());\n        buf.put_stun_header(&stun_hdr);\n        let (remain, hdr) = be_header_type(&buf[..]).unwrap();\n        assert_eq!(hdr, HeaderType::Stun(0));\n        let (remain, stun_hdr) = be_stun_header(remain).unwrap();\n        assert_eq!(stun_hdr.version, 0);\n        assert_eq!(remain.len(), 0)\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/punch/predictor.rs",
    "content": "use std::{\n    collections::{HashMap, VecDeque},\n    future::poll_fn,\n    io,\n    net::SocketAddr,\n    str::FromStr,\n    sync::Arc,\n    time::Duration,\n};\n\nuse qbase::{frame::PunchHelloFrame, net::route::Link};\nuse qinterface::{\n    Interface,\n    bind_uri::{BindUri, Scheme},\n    component::route::{QuicRouter, QuicRouterComponent},\n    io::{IO, ProductIO},\n    manager::InterfaceManager,\n};\n\nuse crate::{\n    punch::{\n        scheduler::SCHEDULER,\n        tx::{PunchId, Transaction},\n    },\n    route::ReceiveAndDeliverPacket,\n};\n\nconst MAX_CONCURRENT_SOCKETS: usize = 60;\nconst MIN_PORT: u16 = 1024;\nconst PACKET_TTL: u8 = 64;\nconst FIRST_PROBE_ID: u32 = 1;\nconst MAX_PROBES: u32 = 300;\nconst PACING_INTERVAL: Duration = Duration::from_millis(20);\n\npub struct PortPredictor {\n    ifaces: Arc<InterfaceManager>,\n    factory: Arc<dyn ProductIO>,\n    quic_router: Arc<QuicRouter>,\n    bind_uri: BindUri,\n    dst: SocketAddr,\n    device: String,\n    probes: ProbeTable,\n    quota_held: u32,\n    probes_created: u32,\n}\n\n#[derive(Debug)]\nstruct PendingProbe {\n    bind_uri: BindUri,\n    iface: Interface,\n    port: u16,\n}\n\npub type PacketSendFn = Arc<\n    dyn Fn(\n            &Interface,\n            Link,\n            u8,\n            PunchHelloFrame,\n        )\n            -> std::pin::Pin<Box<dyn std::future::Future<Output = io::Result<()>> + Send + '_>>\n        + Send\n        + Sync,\n>;\n\nstruct ProbeTable {\n    pending: HashMap<u32, PendingProbe>,\n    active_ports: HashMap<u16, u32>,\n    order: VecDeque<u32>,\n    next_probe_id: u32,\n}\n\nimpl ProbeTable {\n    fn new() -> Self {\n        Self {\n            pending: HashMap::new(),\n            active_ports: HashMap::new(),\n            order: VecDeque::new(),\n            next_probe_id: FIRST_PROBE_ID,\n        }\n    }\n\n    fn len(&self) -> usize {\n        self.pending.len()\n    }\n\n    fn contains_port(&self, port: u16) -> bool {\n        
self.active_ports.contains_key(&port)\n    }\n\n    fn allocate_probe_id(&mut self) -> u32 {\n        let probe_id = self.next_probe_id;\n        self.next_probe_id = self.next_probe_id.wrapping_add(1);\n        if self.next_probe_id < FIRST_PROBE_ID {\n            self.next_probe_id = FIRST_PROBE_ID;\n        }\n        probe_id\n    }\n\n    fn insert(&mut self, probe_id: u32, bind_uri: BindUri, iface: Interface, port: u16) {\n        self.active_ports.insert(port, probe_id);\n        self.order.push_back(probe_id);\n        self.pending.insert(\n            probe_id,\n            PendingProbe {\n                bind_uri,\n                iface,\n                port,\n            },\n        );\n    }\n\n    fn take(&mut self, probe_id: u32) -> Option<PendingProbe> {\n        let probe = self.pending.remove(&probe_id)?;\n        self.active_ports.remove(&probe.port);\n        self.order.retain(|&id| id != probe_id);\n        Some(probe)\n    }\n\n    fn oldest_probe_id(&self) -> Option<u32> {\n        self.order.front().copied()\n    }\n\n    fn pending_probe_ids(&self) -> Vec<u32> {\n        self.pending.keys().copied().collect()\n    }\n\n    fn drain_bind_uris(&mut self) -> Vec<BindUri> {\n        self.active_ports.clear();\n        self.order.clear();\n        self.pending\n            .drain()\n            .map(|(_, probe)| probe.bind_uri)\n            .collect()\n    }\n}\n\nimpl PortPredictor {\n    pub fn new(\n        ifaces: Arc<InterfaceManager>,\n        factory: Arc<dyn ProductIO>,\n        quic_router: Arc<QuicRouter>,\n        bind_uri: BindUri,\n        dst: SocketAddr,\n    ) -> io::Result<Self> {\n        let device = match bind_uri.scheme() {\n            Scheme::Iface => bind_uri.as_iface_bind_uri().unwrap().1.to_string(),\n            Scheme::Inet => bind_uri.as_inet_bind_uri().unwrap().ip().to_string(),\n            _ => return Err(io::ErrorKind::Unsupported.into()),\n        };\n        tracing::debug!(\n            target: \"punch\",\n    
        bind_uri = %bind_uri,\n            dst = %dst,\n            device = %device,\n            \"Created port predictor\"\n        );\n        Ok(Self {\n            ifaces,\n            factory,\n            quic_router,\n            bind_uri,\n            dst,\n            device,\n            probes: ProbeTable::new(),\n            quota_held: 0,\n            probes_created: 0,\n        })\n    }\n\n    fn release_quota(&mut self, count: u32) -> io::Result<()> {\n        SCHEDULER\n            .lock()\n            .unwrap()\n            .release_port(count, self.dst, self.device.clone())?;\n        self.quota_held = self.quota_held.saturating_sub(count);\n        Ok(())\n    }\n\n    fn port_to_bind_uri(&self, port: u16) -> BindUri {\n        match self.bind_uri.scheme() {\n            Scheme::Iface => {\n                let (ip_family, device, _) = self.bind_uri.as_iface_bind_uri().unwrap();\n                let bind_uri = format!(\n                    \"iface://{ip_family}.{device}:{port}?{}=true\",\n                    BindUri::TEMPORARY_PROP\n                );\n                BindUri::from_str(bind_uri.as_str()).unwrap_or_else(|e| {\n                    panic!(\"Constructed invalid iface bind URI {bind_uri}: {e}\")\n                })\n            }\n            Scheme::Inet => {\n                let socket_addr = self.bind_uri.as_inet_bind_uri().unwrap();\n                let ip = socket_addr.ip();\n                let bind_uri = format!(\"inet://{ip}:{port}?{}=true\", BindUri::TEMPORARY_PROP);\n                BindUri::from_str(bind_uri.as_str())\n                    .unwrap_or_else(|e| panic!(\"Constructed invalid inet bind URI {bind_uri}: {e}\"))\n            }\n            _ => unreachable!(\"Unsupported bind URI scheme for port prediction\"),\n        }\n    }\n\n    async fn release_interface(&mut self, bind_uri: BindUri) {\n        self.ifaces.unbind(bind_uri).await;\n        if let Err(error) = self.release_quota(1) {\n            
tracing::warn!(target: \"punch\", %error, \"failed to release quota for interface\");\n        }\n    }\n\n    async fn release_probe(&mut self, probe_id: u32) -> bool {\n        let Some(probe) = self.probes.take(probe_id) else {\n            return false;\n        };\n        self.release_interface(probe.bind_uri).await;\n        true\n    }\n\n    fn check_and_claim(&mut self, tx: &Transaction) -> Option<(BindUri, Interface)> {\n        let (_, frame) = tx.try_punch_done()?;\n        let probe_id = frame.probe_id();\n        tracing::debug!(target: \"punch\", probe_id, \"punchDone received, attempting to claim probe\");\n        self.claim_probe(probe_id)\n    }\n\n    async fn evict_if_needed(\n        &mut self,\n        tx: &Transaction,\n    ) -> io::Result<Option<(BindUri, Interface)>> {\n        while self.probes.len() >= MAX_CONCURRENT_SOCKETS {\n            if let Some(result) = self.check_and_claim(tx) {\n                return Ok(Some(result));\n            }\n            let Some(oldest_id) = self.probes.oldest_probe_id() else {\n                break;\n            };\n            self.release_probe(oldest_id).await;\n            tracing::trace!(target: \"punch\", oldest_id, active_probes = self.probes.len(), \"evicted oldest probe\");\n        }\n        Ok(None)\n    }\n\n    fn claim_probe(&mut self, probe_id: u32) -> Option<(BindUri, Interface)> {\n        let probe = self.probes.take(probe_id)?;\n        Some((probe.bind_uri, probe.iface))\n    }\n\n    async fn finalize(\n        &mut self,\n        result: (BindUri, Interface),\n    ) -> io::Result<Option<(BindUri, Interface)>> {\n        if let Err(error) = self.release_all().await {\n            tracing::warn!(target: \"punch\", %error, \"failed to cleanup remaining probes after success\");\n        }\n        Ok(Some(result))\n    }\n\n    pub(super) async fn predict(\n        &mut self,\n        punch_id: PunchId,\n        tx: Arc<Transaction>,\n        packet_send_fn: PacketSendFn,\n    ) 
-> io::Result<Option<(BindUri, Interface)>> {\n        tracing::debug!(target: \"punch\", %punch_id, \"starting port prediction\");\n\n        while self.probes_created < MAX_PROBES {\n            // Check if PunchDone has been received for an active probe\n            if let Some(result) = self.check_and_claim(tx.as_ref()) {\n                if let Err(error) = self.release_quota(1) {\n                    tracing::warn!(target: \"punch\", %error, \"failed to release quota for claimed probe\");\n                }\n                return self.finalize(result).await;\n            }\n\n            // Evict oldest probe if at capacity\n            if let Some(result) = self.evict_if_needed(tx.as_ref()).await? {\n                if let Err(error) = self.release_quota(1) {\n                    tracing::warn!(target: \"punch\", %error, \"failed to release quota for claimed probe\");\n                }\n                return self.finalize(result).await;\n            }\n\n            // Create and send one probe\n            if let Err(error) = self.create_and_send_probe(punch_id, &packet_send_fn).await {\n                tracing::trace!(target: \"punch\", %punch_id, %error, \"probe creation failed, continuing\");\n            }\n\n            // Pacing: wait interval or return early if PunchDone arrives\n            if tx.try_punch_done().is_none() {\n                tokio::time::timeout(PACING_INTERVAL, tx.wait_punch_done())\n                    .await\n                    .ok();\n            }\n        }\n\n        // Final check before giving up\n        if let Some(result) = self.check_and_claim(tx.as_ref()) {\n            if let Err(error) = self.release_quota(1) {\n                tracing::warn!(target: \"punch\", %error, \"failed to release quota for claimed probe\");\n            }\n            return self.finalize(result).await;\n        }\n\n        if let Err(e) = self.release_all().await {\n            tracing::error!(target: \"punch\", %punch_id, %e, \"failed 
to cleanup resources\");\n        }\n        tracing::debug!(target: \"punch\", %punch_id, probes_created = self.probes_created, \"port prediction finished without match\");\n        Ok(None)\n    }\n\n    async fn create_and_send_probe(\n        &mut self,\n        punch_id: PunchId,\n        packet_send_fn: &PacketSendFn,\n    ) -> io::Result<()> {\n        self.acquire_quota(1).await?;\n\n        let (bind_uri, iface) = match self.create_interface().await {\n            Ok(result) => result,\n            Err(e) => {\n                if let Err(error) = self.release_quota(1) {\n                    tracing::warn!(target: \"punch\", %error, \"failed to release quota on interface creation failure\");\n                }\n                return Err(e);\n            }\n        };\n\n        let socket_addr = match iface.bound_addr() {\n            Ok(addr) => addr,\n            Err(_) => {\n                self.release_interface(bind_uri).await;\n                return Err(io::Error::new(\n                    io::ErrorKind::AddrNotAvailable,\n                    \"failed to get bound addr\",\n                ));\n            }\n        };\n\n        let port = socket_addr.port();\n        let probe_id = self.probes.allocate_probe_id();\n        let link = Link::new(socket_addr, self.dst);\n        let frame = PunchHelloFrame::new(punch_id.local_seq, punch_id.remote_seq, probe_id);\n\n        if packet_send_fn(&iface, link, PACKET_TTL, frame)\n            .await\n            .is_ok()\n        {\n            self.probes.insert(probe_id, bind_uri, iface, port);\n            self.probes_created += 1;\n            Ok(())\n        } else {\n            self.release_interface(bind_uri).await;\n            Err(io::Error::new(\n                io::ErrorKind::BrokenPipe,\n                \"failed to send probe\",\n            ))\n        }\n    }\n\n    async fn create_interface(&mut self) -> io::Result<(BindUri, Interface)> {\n        for _ in 0..10 {\n            let port = 
rand::random::<u16>() % (u16::MAX - MIN_PORT) + MIN_PORT;\n            if self.probes.contains_port(port) {\n                continue;\n            }\n            let bind_addr = self.port_to_bind_uri(port);\n            let bind_iface = self\n                .ifaces\n                .bind(bind_addr.clone(), self.factory.clone())\n                .await;\n\n            bind_iface.with_components_mut(|components, iface| {\n                components.init_with(|| QuicRouterComponent::new(self.quic_router.clone()));\n                components.init_with(|| {\n                    ReceiveAndDeliverPacket::builder(iface.downgrade())\n                        .quic_router(self.quic_router.clone())\n                        .init()\n                });\n            });\n\n            let iface = bind_iface.borrow();\n\n            match iface.bound_addr() {\n                Ok(_bound_addr) => {\n                    return Ok((bind_addr, iface));\n                }\n                Err(_) => {\n                    self.ifaces.unbind(bind_addr).await;\n                    continue;\n                }\n            }\n        }\n        tracing::warn!(target: \"punch\", bind_uri = %self.bind_uri, dst = %self.dst, \"failed to create interface after 10 attempts\");\n        Err(io::Error::new(\n            io::ErrorKind::AddrNotAvailable,\n            \"Failed to bind port after max retries\",\n        ))\n    }\n\n    async fn release_all(&mut self) -> io::Result<()> {\n        tracing::debug!(target: \"punch\", active_probes = self.probes.len(), \n                      \"starting resource cleanup\");\n        let probe_ids = self.probes.pending_probe_ids();\n        for probe_id in probe_ids {\n            self.release_probe(probe_id).await;\n        }\n        if self.quota_held > 0 {\n            let orphaned = self.quota_held;\n            tracing::warn!(target: \"punch\", orphaned, \"releasing orphaned quota without pending probes\");\n            
self.release_quota(orphaned)?;\n        }\n        tracing::debug!(target: \"punch\", \"resource cleanup completed\");\n        Ok(())\n    }\n\n    async fn acquire_quota(&mut self, count: u32) -> io::Result<u32> {\n        let count = count.min(MAX_PROBES - self.probes_created);\n        if count == 0 {\n            return Err(io::Error::new(\n                io::ErrorKind::ResourceBusy,\n                format!(\"Would exceed maximum limit of {}\", MAX_PROBES),\n            ));\n        }\n        let granted = poll_fn(|cx| {\n            SCHEDULER\n                .lock()\n                .unwrap()\n                .poll_allocate(cx, self.dst, self.device.clone(), count)\n        })\n        .await?;\n        self.quota_held += granted;\n        Ok(granted)\n    }\n}\n\nimpl Drop for PortPredictor {\n    fn drop(&mut self) {\n        let quota_held = self.quota_held;\n        self.quota_held = 0;\n        if quota_held > 0\n            && let Err(error) =\n                SCHEDULER\n                    .lock()\n                    .unwrap()\n                    .release_port(quota_held, self.dst, self.device.clone())\n        {\n            tracing::warn!(target: \"punch\", %error, quota_held, \"failed to release predictor quota during drop\");\n        }\n\n        let bind_uris = self.probes.drain_bind_uris();\n        let futures: Vec<_> = bind_uris\n            .into_iter()\n            .map(|bind_uri| self.ifaces.unbind(bind_uri))\n            .collect();\n        if !futures.is_empty() {\n            tokio::spawn(async move {\n                futures::future::join_all(futures).await;\n            });\n        }\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/punch/puncher.rs",
    "content": "use std::{\n    collections::HashSet,\n    io,\n    net::SocketAddr,\n    ops::Deref,\n    str::FromStr,\n    sync::{Arc, Mutex},\n    time::Duration,\n};\n\nuse dashmap::{DashMap, DashSet, Entry};\nuse qbase::{\n    frame::{\n        AddAddressFrame, PunchDoneFrame, PunchHelloFrame, PunchMeNowFrame, ReliableFrame,\n        RemoveAddressFrame,\n        io::{ReceiveFrame, SendFrame},\n    },\n    net::{\n        AddrFamily, NatType,\n        addr::EndpointAddr,\n        route::{Line, Link, Route},\n        tx::Signals,\n    },\n    packet::{\n        Package, PacketSpace, ProductHeader,\n        header::short::OneRttHeader,\n        io::{AssemblePacket, Packages, PadTo20},\n    },\n};\nuse qevent::telemetry::Instrument;\nuse qinterface::{\n    Interface, WeakInterface,\n    bind_uri::BindUri,\n    component::route::{QuicRouter, QuicRouterComponent},\n    io::{IO, IoExt, ProductIO},\n    manager::InterfaceManager,\n};\nuse tokio::{task::AbortHandle, time::timeout};\nuse tracing::Instrument as _;\n\nuse crate::{\n    PathWay,\n    addr::AddressBook,\n    nat::{client::StunClientComponent, router::StunRouterComponent},\n    punch::{\n        predictor::{PacketSendFn, PortPredictor},\n        tx::{AsPunchId, PunchId, Transaction},\n    },\n    route::ReceiveAndDeliverPacket,\n};\n\ntype StunClient<I = WeakInterface> = crate::nat::client::StunClient<I>;\n// type StunProtocol<IO = WeakQuicInterface> = crate::nat::protocol::StunProtocol<I>;\n\n// TTL\nconst HELLO_TTL: u8 = 64;\nconst DEFAULT_PROBE_ID: u32 = 0;\n#[cfg(any(test, feature = \"test-ttl\"))]\npub const KNOCK_TTL: u8 = 1;\n#[cfg(not(any(test, feature = \"test-ttl\")))]\npub const KNOCK_TTL: u8 = 5;\n\n// Timeout\nconst KNOCK_TIMEOUT: Duration = Duration::from_millis(100);\nconst PUNCH_TIMEOUT: Duration = Duration::from_secs(3);\nconst PUNCH_ME_NOW_TIMEOUT: Duration = Duration::from_secs(1);\nconst COLLISION_TIMEOUT: Duration = Duration::from_secs(3);\n// Birthday attack timeout: must exceed 
PortPredictor's full run time (~6s for 300 probes × 20ms)\nconst BIRTHDAY_TIMEOUT: Duration = Duration::from_secs(8);\n\n// Quantity\nconst MAX_RETRIES: usize = 5;\nconst COLLISION_PORTS: u32 = 800;\n\npub struct ArcPuncher<TX, PH, S>(Arc<Puncher<TX, PH, S>>);\n\nimpl<TX, PH, S> Clone for ArcPuncher<TX, PH, S> {\n    fn clone(&self) -> Self {\n        Self(self.0.clone())\n    }\n}\n\nimpl<TX, PH, S> ArcPuncher<TX, PH, S>\nwhere\n    TX: SendFrame<ReliableFrame> + Send + Sync + Clone + 'static,\n    PH: ProductHeader<OneRttHeader> + Send + Sync + 'static,\n    S: PacketSpace<OneRttHeader> + Send + Sync + 'static,\n{\n    pub fn new(\n        broker: TX,\n        product_header: PH,\n        packet_space: Arc<S>,\n        ifaces: Arc<InterfaceManager>,\n        iface_factory: Arc<dyn ProductIO>,\n        quic_router: Arc<QuicRouter>,\n        stun_servers: Arc<[SocketAddr]>,\n    ) -> Self {\n        Self(Arc::new(Puncher::new(\n            broker,\n            product_header,\n            packet_space,\n            ifaces,\n            iface_factory,\n            quic_router,\n            stun_servers,\n        )))\n    }\n}\n\npub struct Puncher<TX, PH, S> {\n    transaction: DashMap<PunchId, (AbortHandle, Arc<Transaction>)>,\n    punch_history: DashSet<PunchId>,\n    product_header: PH,\n    packet_space: Arc<S>,\n    ifaces: Arc<InterfaceManager>,\n    iface_factory: Arc<dyn ProductIO>,\n    quic_router: Arc<QuicRouter>,\n    stun_servers: Arc<[SocketAddr]>,\n    address_book: Mutex<AddressBook>,\n    punch_ifaces: DashMap<BindUri, Interface>,\n    broker: TX,\n}\n\nimpl<TX, PH, S> Puncher<TX, PH, S>\nwhere\n    TX: SendFrame<ReliableFrame> + Send + Sync + Clone + 'static,\n    PH: ProductHeader<OneRttHeader> + Send + Sync + 'static,\n    S: PacketSpace<OneRttHeader> + Send + Sync + 'static,\n{\n    pub fn new(\n        broker: TX,\n        product_header: PH,\n        packet_space: Arc<S>,\n        ifaces: Arc<InterfaceManager>,\n        iface_factory: Arc<dyn 
ProductIO>,\n        quic_router: Arc<QuicRouter>,\n        stun_servers: Arc<[SocketAddr]>,\n    ) -> Self {\n        Self {\n            transaction: DashMap::new(),\n            punch_history: DashSet::new(),\n            product_header,\n            packet_space,\n            ifaces,\n            iface_factory,\n            quic_router,\n            stun_servers,\n            address_book: Mutex::new(AddressBook::default()),\n            punch_ifaces: DashMap::new(),\n            broker,\n        }\n    }\n\n    pub async fn send_packet<P>(\n        &self,\n        iface: &(impl IO + ?Sized),\n        link: Link,\n        ttl: u8,\n        packages: P,\n    ) -> io::Result<()>\n    where\n        P: for<'b> Package<S::PacketAssembler<'b>>,\n        PadTo20: for<'b> Package<S::PacketAssembler<'b>>,\n    {\n        let mut buffer = [0; 128];\n        let sent_bytes = (|| {\n            let mut packet = self\n                .packet_space\n                .new_packet(self.product_header.new_header()?, &mut buffer)?;\n            packet.assemble_packet(&mut Packages((packages, PadTo20)))?;\n            let (sent_bytes, _props) = packet.encrypt_and_protect_packet();\n            Result::<_, Signals>::Ok(sent_bytes)\n        })()\n        .map_err(|s| io::Error::other(format!(\"Failed to assemble packet: {s:?}\")))?;\n\n        let line = Line::new(link, ttl, None, sent_bytes as u16);\n        let route = Route::new(link.into(), line);\n        iface\n            .sendmmsg(&[io::IoSlice::new(&buffer[..sent_bytes])], route)\n            .await\n    }\n\n    async fn collision(\n        &self,\n        iface: &Interface,\n        link: Link,\n        punch_id: PunchId,\n        ttl: u8,\n    ) -> io::Result<()>\n    where\n        PadTo20: for<'b> Package<S::PacketAssembler<'b>>,\n        PunchHelloFrame: for<'b> Package<S::PacketAssembler<'b>>,\n    {\n        tracing::debug!(target: \"punch\", %punch_id, %link, ttl, \"starting collision attack\");\n        let mut 
random_ports = HashSet::new();\n        let dst = link.dst;\n        let ip = dst.ip();\n        while random_ports.len() < COLLISION_PORTS as usize {\n            let port = rand::random::<u16>() % (u16::MAX - 1024) + 1024;\n            let dst = SocketAddr::new(ip, port);\n            if !random_ports.insert(port) {\n                continue;\n            }\n            let link = Link::new(link.src, dst);\n            let frame =\n                PunchHelloFrame::new(punch_id.local_seq, punch_id.remote_seq, DEFAULT_PROBE_ID);\n            self.send_packet(iface, link, ttl, frame).await?;\n        }\n        Ok(())\n    }\n}\n\nimpl<TX, PH, S> Drop for Puncher<TX, PH, S> {\n    fn drop(&mut self) {\n        for entry in self.transaction.iter() {\n            entry.value().0.abort();\n        }\n        self.transaction.clear();\n        self.punch_history.clear();\n        let futures: Vec<_> = self\n            .punch_ifaces\n            .iter()\n            .map(|entry| self.ifaces.unbind(entry.key().clone()))\n            .collect();\n        if !futures.is_empty() {\n            tokio::spawn(\n                async move {\n                    futures::future::join_all(futures).await;\n                }\n                .instrument_in_current()\n                .in_current_span(),\n            );\n        }\n        self.punch_ifaces.clear();\n    }\n}\n\nimpl<TX, PH, S> ArcPuncher<TX, PH, S>\nwhere\n    TX: SendFrame<ReliableFrame> + Send + Sync + Clone + 'static,\n    PH: ProductHeader<OneRttHeader> + Send + Sync + 'static,\n    S: PacketSpace<OneRttHeader> + Send + Sync + 'static,\n    for<'b> PunchDoneFrame: Package<S::PacketAssembler<'b>>,\n    for<'b> PunchHelloFrame: Package<S::PacketAssembler<'b>>,\n    for<'b> PadTo20: Package<S::PacketAssembler<'b>>,\n{\n    pub fn add_local_address(\n        &self,\n        bind_uri: BindUri,\n        local_addr: SocketAddr,\n        nat_type: NatType,\n        tire: u32,\n    ) -> io::Result<()> {\n        if 
nat_type == NatType::Dynamic {\n            let puncher = self.clone();\n            let ifaces = self.0.ifaces.clone();\n            let iface_factory = self.0.iface_factory.clone();\n            let stun_servers = self.0.stun_servers.clone();\n            let quic_router = self.0.quic_router.clone();\n\n            tokio::spawn(\n                async move {\n                    let (iface, stun_client) =\n                        dynamic_iface(&bind_uri, &ifaces, &iface_factory, &quic_router, &stun_servers)\n                            .await?;\n                    let dynamic_bind = iface.bind_uri();\n                    let outer = stun_client.outer_addr().await.inspect_err(|error| {\n                        tracing::warn!(target: \"punch\", %error, bind_uri = %dynamic_bind, \"failed to detect outer address for dynamic interface, unbinding\");\n                        let ifaces = ifaces.clone();\n                        let dynamic_bind = dynamic_bind.clone();\n                        tokio::spawn(async move { ifaces.unbind(dynamic_bind).await });\n                    })?;\n                    puncher\n                        .0\n                        .punch_ifaces\n                        .insert(dynamic_bind.clone(), iface.clone());\n\n                    let mut address_book = puncher.0.address_book.lock().unwrap();\n                    let frame =\n                        address_book.add_local_address(dynamic_bind.clone(), outer, tire, nat_type)?;\n                    tracing::trace!(target: \"punch\", bind_uri = %dynamic_bind, %outer, nat_type = ?nat_type, \"sending AddAddress frame for dynamic\");\n                    puncher\n                        .0\n                        .broker\n                        .send_frame([ReliableFrame::AddAddress(frame)]);\n                    Ok::<_, io::Error>(())\n                }\n                .instrument_in_current()\n                .in_current_span(),\n            );\n            return Ok(());\n        
}\n        let mut address_book = self.0.address_book.lock().unwrap();\n        let frame = address_book.add_local_address(bind_uri.clone(), local_addr, tire, nat_type)?;\n        tracing::trace!(target: \"punch\", bind_uri = %bind_uri, %local_addr, nat_type = ?nat_type, \"sending AddAddress frame\");\n        self.0.broker.send_frame([ReliableFrame::AddAddress(frame)]);\n        Ok(())\n    }\n\n    pub fn add_local_endpoint(\n        &self,\n        bind: BindUri,\n        addr: EndpointAddr,\n    ) -> io::Result<Vec<(BindUri, Link, PathWay)>> {\n        let mut address_book = self.0.address_book.lock().unwrap();\n        address_book.add_local_endpoint(bind.clone(), addr)?;\n        let mut ways = Vec::new();\n        for (remote_ep, source) in address_book.remote_endpoint().iter() {\n            if let Ok(way) = self.resolve_punch_connection(&bind, &addr, remote_ep, source) {\n                ways.push(way);\n            }\n        }\n        Ok(ways)\n    }\n\n    pub fn add_peer_endpoint(\n        &self,\n        endpoint: EndpointAddr,\n        source: qresolve::Source,\n    ) -> io::Result<Vec<(BindUri, Link, PathWay)>> {\n        let mut address_book = self.0.address_book.lock().unwrap();\n        address_book.add_peer_endpoint(endpoint, source.clone())?;\n        let mut ways = Vec::new();\n        for (bind, local_ep) in address_book.local_endpoint().iter() {\n            if let Ok(way) = self.resolve_punch_connection(bind, local_ep, &endpoint, &source) {\n                ways.push(way);\n            }\n        }\n        Ok(ways)\n    }\n\n    pub fn remove_local_address(&self, addr: SocketAddr) -> io::Result<()> {\n        let mut address_book = self.0.address_book.lock().unwrap();\n        let frame = address_book.remove_local_address(addr)?;\n        self.0\n            .broker\n            .send_frame([ReliableFrame::RemoveAddress(frame)]);\n        Ok(())\n    }\n\n    fn recv_remove_address_frame(&self, remove_address_frame: RemoveAddressFrame) 
{\n        let mut address_book = self.0.address_book.lock().unwrap();\n        address_book.remove_remote_address(remove_address_frame.deref().into_u64() as u32);\n    }\n\n    fn recv_add_address_frame(&self, add_address_frame: AddAddressFrame) -> io::Result<()> {\n        // The lock on address_book must be released before accessing the transaction map\n        // to avoid a deadlock with recv_punch_me_now, which holds the transaction lock\n        // while trying to acquire the address_book lock.\n        let (bind, local) = {\n            let mut address_book = self.0.address_book.lock().unwrap();\n            address_book.add_remote_address(add_address_frame)?;\n            let (bind, local) = address_book.pick_local_address(&add_address_frame)?;\n            (bind.clone(), local)\n        };\n\n        let punch_id = (&local, &add_address_frame).punch_id();\n        if self.0.punch_history.contains(&punch_id) {\n            tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", Some(local.nat_type()), Some(add_address_frame.nat_type())), \"punch already completed, skipping\");\n            return Ok(());\n        }\n        match self.0.transaction.entry(punch_id) {\n            Entry::Occupied(_) => {\n                tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", Some(local.nat_type()), Some(add_address_frame.nat_type())), \"dup transaction for punch\");\n                return Ok(());\n            }\n            Entry::Vacant(entry) => {\n                let tx = Arc::new(Transaction::new());\n                let task = tokio::spawn(\n                    {\n                        let puncher = self.clone();\n                        let tx = tx.clone();\n                        async move {\n                            let result = puncher\n                                .punch_actively(bind, &local, &add_address_frame, tx)\n                                .await;\n                            
puncher.0.punch_history.insert(punch_id);\n                            puncher.0.transaction.remove(&punch_id);\n                            result\n                        }\n                    }\n                    .instrument_in_current()\n                    .in_current_span(),\n                )\n                .abort_handle();\n                entry.insert((task, tx.clone()));\n            }\n        };\n        Ok(())\n    }\n\n    fn recv_punch_me_now(\n        &self,\n        pathway: PathWay,\n        punch_me_now_frame: PunchMeNowFrame,\n    ) -> io::Result<()> {\n        let punch_id = punch_me_now_frame.punch_id().flip();\n        if self.0.punch_history.contains(&punch_id) {\n            tracing::debug!(target: \"punch\", %punch_id, \"punch already completed, skipping\");\n            return Ok(());\n        }\n\n        let create_punch_task = || {\n            let tx = Arc::new(Transaction::new());\n            let task = tokio::spawn({\n                let puncher = self.clone();\n                let tx = tx.clone();\n                let address_book = self.0.address_book.lock().unwrap();\n                let (bind, local_address) = address_book\n                    .get_local_address(&punch_me_now_frame.remote_seq())\n                    .ok_or_else(|| {\n                        io::Error::new(io::ErrorKind::NotFound, \"local address not matched\")\n                    })?;\n                tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", Some(local_address.nat_type()), Some(punch_me_now_frame.nat_type())), \"received punch me now frame, start passive punch\");\n                async move {\n                    let result = puncher\n                        .punch_passively(bind, &local_address, &punch_me_now_frame, tx)\n                        .await;\n                    puncher.0.punch_history.insert(punch_id);\n                    puncher.0.transaction.remove(&punch_id);\n                    result\n           
     }\n                .instrument_in_current()\n                .in_current_span()\n            })\n            .abort_handle();\n            Ok::<_, io::Error>((task, tx.clone()))\n        };\n\n        match self.0.transaction.entry(punch_id) {\n            Entry::Occupied(mut entry) => {\n                if pathway.local() < pathway.remote() {\n                    let (task, tx) = create_punch_task()?;\n                    tx.store_punch_me_now(punch_me_now_frame);\n                    let old_task = entry.get().0.clone();\n                    old_task.abort();\n                    entry.insert((task, tx.clone()));\n                    tracing::trace!(target: \"punch\", %punch_id, \"new passive transaction for punch\");\n                } else {\n                    let tx = entry.get().1.clone();\n                    tracing::trace!(target: \"punch\", %punch_id, \"using existing active transaction to respond to PunchMeNow\");\n                    tx.store_punch_me_now(punch_me_now_frame);\n                }\n            }\n            Entry::Vacant(entry) => {\n                let (task, tx) = create_punch_task()?;\n                entry.insert((task, tx.clone()));\n                tracing::trace!(target: \"punch\", %punch_id, \"new passive transaction\");\n            }\n        };\n\n        Ok(())\n    }\n\n    async fn punch_actively(\n        &self,\n        bind_uri: BindUri,\n        local: &AddAddressFrame,\n        remote: &AddAddressFrame,\n        tx: Arc<Transaction>,\n    ) -> io::Result<()> {\n        let local_nat = local.nat_type();\n        let remote_nat = remote.nat_type();\n        let bind_addr = SocketAddr::try_from(bind_uri.clone())\n            .map_err(|e| io::Error::new(io::ErrorKind::InvalidInput, e))?;\n        let link = Link::new(bind_addr, *remote.deref());\n        let punch_id = (local, remote).punch_id();\n        tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), 
\"starting active punch\");\n\n        let mut punch_me_now = PunchMeNowFrame::new(\n            local.seq_num(),\n            remote.seq_num(),\n            *local.deref(),\n            local.tire(),\n            local_nat,\n        );\n        let ifaces = self.0.ifaces.clone();\n        let dynamic_iface = {\n            let ifaces = self.0.ifaces.clone();\n            let iface_factory = self.0.iface_factory.clone();\n            let quic_router = self.0.quic_router.clone();\n            let stun_servers = self.0.stun_servers.clone();\n            async move |bind_uri: &BindUri| {\n                dynamic_iface(\n                    bind_uri,\n                    &ifaces,\n                    &iface_factory,\n                    &quic_router,\n                    &stun_servers,\n                )\n                .await\n            }\n        };\n\n        let broker = self.0.broker.clone();\n        let punch_ifaces = &self.0.punch_ifaces;\n\n        // local \\ remote  ·FullCone    RestrictedCone    RestrictedPort  Symmetric    Dynamic\n        // FullCone         1               6                 6              6          6\n        // RestrictedCone   1               6                 6              6          6\n        // RestrictedPort   1               6                 6              7          6\n        // Symmetric        1               4                 3              /          8\n        // Dynamic          1               5                 5              2          5\n\n        // 1: Remote is FullCone\n        // Send direct Hello to remote, expecting Hello(Done).\n        // 2: Local Dynamic, Remote Symmetric -> New Interface & Birthday Attack\n        // Send PunchMeNow, expect PunchMeNow. After receiving, start collision, expect Hello(Done).\n        // 3: Local Symmetric, Remote RestrictedPort -> Birthday Attack\n        // Send PunchMeNow, expect PunchMeNow. 
Use random socket collision, expect Hello(Done).\n        // 4: Local Symmetric, Remote RestrictedCone -> Reverse Punching\n        // Send PunchMeNow, expect remote to open hole and respond PunchMeNow. Then send direct Hello, expect Hello(Done).\n        // 5: Local Dynamic\n        // New Interface, detect external address. Then send PunchMeNow and Hello, expect Hello(Done).\n        // 6: General Punching\n        // Send Hello with TTL and PunchMeNow. Expect Hello, then respond Hello(Done).\n        // 7: Local RestrictedPort, Remote Symmetric -> Birthday Attack (Hold Hole)\n        // Send packets to 300 random ports, then notify with PunchMeNow. Expect Hello, then respond Hello(Done).\n        // 8: Local Symmetric, Remote Dynamic\n        // Hold holes on 30 random ports, send PunchMeNow. Expect Collision, then respond PunchMeNow.\n        // Repeat until 300 sockets used.\n        use NatType::*;\n        let result: io::Result<()> = match (local_nat, remote_nat) {\n            (Blocked, _) | (_, Blocked) | (Symmetric, Symmetric) => {\n                return Err(io::Error::other(\"Unsupported nat type\"));\n            }\n            // 1: Remote is FullCone\n            // Send direct Hello to remote, expecting Hello(Done).\n            (_, FullCone) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: Remote FullCone, sending direct Hello\");\n                let iface = ifaces\n                    .borrow(&bind_uri)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                let time = Duration::from_millis(100);\n                for i in 0..5 {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending Hello expecting Hello(Done) or receiving Hello\");\n                    self.0\n                        .send_packet(\n                            
&iface,\n                            link,\n                            HELLO_TTL,\n                            PunchHelloFrame::new(\n                                punch_id.local_seq,\n                                punch_id.remote_seq,\n                                DEFAULT_PROBE_ID,\n                            ),\n                        )\n                        .await?;\n                    let timeout_duration = time * (1 << i);\n                    tokio::select! {\n                        _ = tokio::time::sleep(timeout_duration) => {\n                            // continue loop\n                        }\n                        Ok((_, punch_hello)) = async { Ok::<_, io::Error>(tx.wait_punch_hello().await) } => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"received Hello, sending broker PunchDone confirmation\");\n                            broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(&punch_hello))]);\n                            return Ok(());\n                        }\n                        _ = tx.wait_punch_done() => {\n                            tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch success\");\n                            return Ok(());\n                        }\n                    }\n                }\n                tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch failed\");\n                return Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"));\n            }\n            // 2. Local Dynamic, Remote Symmetric -> New Interface & Birthday Attack\n            // Send PunchMeNow, expect PunchMeNow. 
After receiving, start collision, expect Hello(Done).\n            (Dynamic, Symmetric) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: Local Dynamic, Remote Symmetric, new interface & birthday attack\");\n                // TODO: Creating a new iface is not strictly necessary; could reuse an available temporary address.\n                let (iface, stun_client) = dynamic_iface(&bind_uri).await?;\n\n                let bind_uri = iface.bind_uri();\n                punch_ifaces.insert(bind_uri.clone(), iface.clone());\n                let outer_addr = stun_client.outer_addr().await?;\n                punch_me_now.set_addr(outer_addr);\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending PunchMeNow expecting PunchMeNow then collision\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n\n                let link = Link::new(iface.bound_addr()?, link.dst);\n                let mut collided = false;\n                let result: io::Result<()> = loop {\n                    tokio::select! 
{\n                        _ = tokio::time::sleep(BIRTHDAY_TIMEOUT)=>\n                            break Err(io::Error::new(io::ErrorKind::TimedOut, \"Punch timeout\")),\n                        _ = tx.wait_punch_me_now(), if !collided => {\n                            collided = true;\n                            self.0.collision(&iface, link, punch_id, KNOCK_TTL).await?;\n                        }\n                        Ok((link, punch_hello)) = async { Ok::<_, io::Error>(tx.wait_punch_hello().await) } => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"received Hello, sending broker PunchDone confirmation\");\n                            broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(&punch_hello))]);\n                            break Ok(());\n                        }\n                        _ = tx.wait_punch_done() =>\n                            break Ok(()),\n                    };\n                };\n                // If punch failed, clean up the interface\n                if result.is_err() {\n                    punch_ifaces.remove(&bind_uri);\n                    ifaces.unbind(bind_uri).await;\n                }\n                result\n            }\n            // 3. Local Symmetric, Remote RestrictedPort -> Birthday Attack\n            // Send PunchMeNow, expect PunchMeNow. 
Use random socket collision, expect Hello(Done).\n            (Symmetric, RestrictedPort) => {\n                // Send PunchMeNow first\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending PunchMeNow expecting PunchMeNow then rush\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n\n                if timeout(COLLISION_TIMEOUT, tx.wait_punch_me_now())\n                    .await\n                    .is_ok()\n                {\n                    // Use new consolidated PortPredictor birthday attack\n                    let mut predictor = PortPredictor::new(\n                        ifaces.clone(),\n                        self.0.iface_factory.clone(),\n                        self.0.quic_router.clone(),\n                        bind_uri.clone(),\n                        link.dst,\n                    )?;\n\n                    // Create packet send function\n                    let puncher_ref = self.0.clone();\n                    let packet_send_fn: PacketSendFn = Arc::new(move |iface, link, ttl, frame| {\n                        let puncher = puncher_ref.clone();\n                        Box::pin(async move { puncher.send_packet(iface, link, ttl, frame).await })\n                    });\n\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"starting consolidated birthday attack\");\n                    match predictor\n                        .predict(punch_id, tx.clone(), packet_send_fn)\n                        .await\n                    {\n                        Ok(Some((bind_uri, iface))) => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %bind_uri, \"birthday attack succeeded\");\n                            self.0.punch_ifaces.insert(bind_uri.clone(), iface);\n                  
          return Ok(());\n                        }\n                        Ok(None) => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"birthday attack completed without success\");\n                        }\n                        Err(e) => {\n                            tracing::warn!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %e, \"birthday attack failed\");\n                        }\n                    }\n                }\n\n                return Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"));\n            }\n            // 4. Local Symmetric, Remote RestrictedCone -> Reverse Punching\n            // Send PunchMeNow, expect remote to open hole and respond PunchMeNow. Then send direct Hello, expect Hello(Done).\n            (Symmetric, RestrictedCone) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: Local Symmetric, Remote RestrictedCone, reverse punching\");\n                tracing::trace!(target: \"punch\", %punch_id, \"sending PunchMeNow expecting PunchMeNow then Hello\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n                if timeout(PUNCH_ME_NOW_TIMEOUT, tx.wait_punch_me_now())\n                    .await\n                    .is_err()\n                {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"wait for PunchMeNow timeout, try to connect blindly\");\n                }\n\n                let iface = ifaces\n                    .borrow(&bind_uri)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                for i in 0..5 {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, 
remote_nat), %link, \"sending Hello expecting Hello(Done)\");\n                    self.0\n                        .send_packet(\n                            &iface,\n                            link,\n                            HELLO_TTL,\n                            PunchHelloFrame::new(\n                                punch_id.local_seq,\n                                punch_id.remote_seq,\n                                DEFAULT_PROBE_ID,\n                            ),\n                        )\n                        .await?;\n                    if (timeout(KNOCK_TIMEOUT * (1 << i), tx.wait_punch_done()).await).is_ok() {\n                        tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch success\");\n                        return Ok(());\n                    }\n                }\n\n                tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch failed\");\n                return Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"));\n            }\n            // 5. Local Dynamic\n            // New Interface, detect external address. 
Then send PunchMeNow and Hello, expect Hello(Done).\n            (Dynamic, _) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: Local Dynamic, new interface & send PunchMeNow + Hello\");\n                // Use new iface, update PunchMeNow address.\n                // TODO: Creating a new iface is not strictly necessary; could reuse an available temporary address.\n                let (iface, stun_client) = dynamic_iface(&bind_uri).await?;\n                let outer_addr = stun_client.outer_addr().await?;\n                let bind_uri = iface.bind_uri();\n                punch_ifaces.insert(bind_uri.clone(), iface.clone());\n                punch_me_now.set_addr(outer_addr);\n                tracing::trace!(target: \"punch\", %punch_id, \"sending PunchMeNow + Hello expecting Hello(Done)\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n                let link = Link::new(iface.bound_addr()?, link.dst);\n                let time = Duration::from_millis(100);\n                for i in 0..MAX_RETRIES {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending Hello expecting Hello(Done)\");\n                    self.0\n                        .send_packet(\n                            &iface,\n                            link,\n                            HELLO_TTL,\n                            PunchHelloFrame::new(\n                                punch_id.local_seq,\n                                punch_id.remote_seq,\n                                DEFAULT_PROBE_ID,\n                            ),\n                        )\n                        .await?;\n                    let timeout_duration = time * (1 << i);\n                    tokio::select! 
{\n                        _ = tokio::time::sleep(timeout_duration) => {\n                            // continue loop\n                        }\n                        Ok((_, punch_hello)) = async { Ok::<_, io::Error>(tx.wait_punch_hello().await) } => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"received Hello, sending broker PunchDone confirmation\");\n                            broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(&punch_hello))]);\n                            return Ok(());\n                        }\n                        _ = tx.wait_punch_done() => {\n                            tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch success\");\n                            return Ok(());\n                        }\n                    }\n                }\n                // Punch failed, remove the interface\n                punch_ifaces.remove(&bind_uri);\n                ifaces.unbind(bind_uri).await;\n                Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"))\n            }\n            // 6. General Punching\n            // Send Hello with TTL and PunchMeNow. 
Expect Hello, then respond Hello(Done).\n            (FullCone | RestrictedCone, Symmetric)\n            | (FullCone | RestrictedCone | RestrictedPort, Dynamic)\n            | (_, RestrictedCone | RestrictedPort) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: General punching, send Hello with TTL & PunchMeNow\");\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending PunchMeNow + Hello expecting Hello then Hello(Done)\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n                let iface = ifaces\n                    .borrow(&bind_uri)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending Hello expecting Hello\");\n                self.0\n                    .send_packet(\n                        &iface,\n                        link,\n                        HELLO_TTL,\n                        PunchHelloFrame::new(\n                            punch_id.local_seq,\n                            punch_id.remote_seq,\n                            DEFAULT_PROBE_ID,\n                        ),\n                    )\n                    .await?;\n                if let Ok((_, punch_hello)) = timeout(PUNCH_TIMEOUT, tx.wait_punch_hello()).await {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending broker PunchDone confirmation\");\n                    broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(\n                        &punch_hello,\n                    ))]);\n                    tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"actively 
punch success\");\n                    return Ok(());\n                }\n                tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch failed\");\n                return Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"));\n            }\n            // 7. Local RestrictedPort, Remote Symmetric -> Birthday Attack (Hold Hole)\n            // Send packets to 300 random ports, then notify with PunchMeNow. Expect Hello, then respond Hello(Done).\n            (RestrictedPort, Symmetric) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: Local RestrictedPort, Remote Symmetric, birthday attack hold hole\");\n                let iface = ifaces\n                    .borrow(&bind_uri)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                self.0.collision(&iface, link, punch_id, KNOCK_TTL).await?;\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending PunchMeNow expecting Hello then Hello(Done)\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n                if let Ok((link, punch_hello)) =\n                    timeout(BIRTHDAY_TIMEOUT, tx.wait_punch_hello()).await\n                {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending broker PunchDone confirmation\");\n                    broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(\n                        &punch_hello,\n                    ))]);\n                    tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"punch success with collision\");\n                    return Ok(());\n                }\n                return 
Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"));\n            }\n            // 8. Local Symmetric, Remote Dynamic\n            // Hold holes on 30 random ports, send PunchMeNow. Expect Collision, then respond PunchMeNow.\n            // Repeat until 300 sockets used.\n            (Symmetric, Dynamic) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"strategy: Local Symmetric, Remote Dynamic, hold holes & send PunchMeNow\");\n\n                // Use new consolidated PortPredictor birthday attack\n                let mut predictor = PortPredictor::new(\n                    ifaces.clone(),\n                    self.0.iface_factory.clone(),\n                    self.0.quic_router.clone(),\n                    bind_uri.clone(),\n                    link.dst,\n                )?;\n                // Create packet send function\n                let puncher_ref = self.0.clone();\n                let packet_send_fn: PacketSendFn = Arc::new(move |iface, link, ttl, frame| {\n                    let puncher = puncher_ref.clone();\n                    Box::pin(async move { puncher.send_packet(iface, link, ttl, frame).await })\n                });\n\n                // Send initial PunchMeNow to notify peer\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending initial PunchMeNow for Dynamic strategy\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"starting consolidated birthday attack for Dynamic strategy\");\n                match predictor\n                    .predict(punch_id, tx.clone(), packet_send_fn)\n                    .await\n                {\n                    Ok(Some((bind_uri, iface))) => {\n                        
tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %bind_uri, \"birthday attack succeeded for Dynamic strategy\");\n                        self.0.punch_ifaces.insert(bind_uri.clone(), iface);\n                        return Ok(());\n                    }\n                    Ok(None) => {\n                        tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"birthday attack completed without success for Dynamic strategy\");\n                    }\n                    Err(e) => {\n                        tracing::warn!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %e, \"birthday attack failed for Dynamic strategy\");\n                    }\n                }\n                return Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"));\n            }\n        };\n        result\n    }\n\n    async fn punch_passively(\n        &self,\n        bind: BindUri,\n        local_address: &AddAddressFrame,\n        remote_address: &PunchMeNowFrame,\n        tx: Arc<Transaction>,\n    ) -> io::Result<()> {\n        use NatType::*;\n        let remote_nat = remote_address.nat_type();\n        let local_nat = local_address.nat_type();\n        let punch_id = PunchId::new(local_address.seq_num(), remote_address.local_seq());\n        tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"starting passive punch\");\n        let socket_addr = SocketAddr::try_from(bind.clone())\n            .map_err(|e| io::Error::new(io::ErrorKind::InvalidInput, e))?;\n        if local_nat == Blocked\n            || remote_nat == Blocked\n            || (local_nat == Symmetric && remote_nat == Symmetric)\n        {\n            return Err(io::Error::other(\"Unsupported nat type\"));\n        }\n        let link = Link::new(socket_addr, remote_address.address());\n\n        let 
ifaces = self.0.ifaces.clone();\n        let broker = self.0.broker.clone();\n        // Note: Receiving PunchMeNow implies we sent an AddAddress frame.\n        // For Dynamic NAT, we don't need to create a new interface here;\n        // it should have been created before sending AddAddress.\n        // 1. Local Dynamic, Remote Symmetric\n        // Remote has opened a hole. We use a new interface to collide, expecting Hello(Done).\n        // 2. Local RestrictedPort, Remote Symmetric\n        // We open holes on 300 random ports, send PunchMeNow. Expect Hello collision, then respond Hello(Done).\n        // 3. Local Symmetric, Remote RestrictedPort | Dynamic\n        // We use random socket collision to open a hole, expecting Hello(Done).\n        // 4. Local RestrictedCone, Remote Symmetric\n        // Reflect: send Hello, then PunchMeNow; wait for Hello, send Hello(Done).\n        // 5. General Punching\n        // Received PunchMeNow implies the remote has opened a hole. We send a direct Hello, expecting Hello(Done).\n\n        match (local_nat, remote_nat) {\n            // 1. Local Dynamic, Remote Symmetric\n            // Remote has opened a hole. We use a new interface to collide, expecting Hello(Done).\n            (Dynamic, Symmetric) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"passive strategy: Local Dynamic, Remote Symmetric, use new interface to collide\");\n                let iface = ifaces\n                    .borrow(&bind)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                let mut collided = false;\n                loop {\n                    tokio::select! 
{\n                        _ = tokio::time::sleep(BIRTHDAY_TIMEOUT) =>\n                            return Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\")),\n                        _ = tx.wait_punch_me_now(), if !collided => {\n                            collided = true;\n                            self.0.collision(&iface, link, punch_id, KNOCK_TTL).await?;\n                        }\n                        Ok((link, punch_hello)) = async { Ok::<_, io::Error>(tx.wait_punch_hello().await) } => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"received Hello, sending broker PunchDone confirmation\");\n                            broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(&punch_hello))]);\n                            return Ok(());\n                        }\n                        _ = tx.wait_punch_done() =>\n                            return Ok::<(), io::Error>(()),\n                    };\n                }\n            }\n            // 2. Local RestrictedPort, Remote Symmetric\n            // We open holes on 300 random ports, send PunchMeNow. 
Expect Hello collision, then respond Hello(Done).\n            (RestrictedPort, Symmetric) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"passive strategy: Local RestrictedPort, Remote Symmetric, open holes & send PunchMeNow\");\n                let iface = ifaces\n                    .borrow(&bind)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                self.0.collision(&iface, link, punch_id, KNOCK_TTL).await?;\n                let punch_me_now = PunchMeNowFrame::new(\n                    punch_id.local_seq,\n                    punch_id.remote_seq,\n                    *local_address.deref(),\n                    local_address.tire(),\n                    local_nat,\n                );\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending PunchMeNow expecting Hello then Hello(Done)\");\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n                if let Ok((link, punch_hello)) =\n                    tokio::time::timeout(BIRTHDAY_TIMEOUT, tx.wait_punch_hello()).await\n                {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending broker PunchDone confirmation\");\n                    broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(\n                        &punch_hello,\n                    ))]);\n                    return Ok(());\n                }\n            }\n            // 3. Local Symmetric, Remote RestrictedPort | Dynamic\n            // Use new consolidated PortPredictor birthday attack. 
Expect Hello(Done).\n            (Symmetric, RestrictedPort | Dynamic) => {\n                let mut predictor = PortPredictor::new(\n                    ifaces.clone(),\n                    self.0.iface_factory.clone(),\n                    self.0.quic_router.clone(),\n                    bind.clone(),\n                    link.dst,\n                )?;\n\n                // Create packet send function\n                let puncher_ref = self.0.clone();\n                let packet_send_fn: PacketSendFn = Arc::new(move |iface, link, ttl, frame| {\n                    let puncher = puncher_ref.clone();\n                    Box::pin(async move { puncher.send_packet(iface, link, ttl, frame).await })\n                });\n\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"starting consolidated birthday attack\");\n                match predictor\n                    .predict(punch_id, tx.clone(), packet_send_fn)\n                    .await\n                {\n                    Ok(Some((bind_uri, iface))) => {\n                        tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %bind_uri, \"birthday attack succeeded\");\n                        self.0.punch_ifaces.insert(bind_uri.clone(), iface);\n                        return Ok(());\n                    }\n                    Ok(None) => {\n                        tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"birthday attack completed without success\");\n                    }\n                    Err(e) => {\n                        tracing::warn!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %e, \"birthday attack failed\");\n                    }\n                }\n            }\n            // 4. 
Local RestrictedCone, Remote Symmetric\n            // Reflect: send Hello, then PunchMeNow; wait for Hello, send Hello(Done).\n            (RestrictedCone, Symmetric) => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"passive strategy: Local RestrictedCone, Remote Symmetric, reflect & send PunchMeNow\");\n                let iface = ifaces\n                    .borrow(&bind)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                let punch_me_now = PunchMeNowFrame::new(\n                    punch_id.local_seq,\n                    punch_id.remote_seq,\n                    *local_address.deref(),\n                    local_address.tire(),\n                    local_nat,\n                );\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"sending PunchMeNow expecting Hello then Hello(Done)\");\n                let punch_hello_frame =\n                    PunchHelloFrame::new(punch_id.local_seq, punch_id.remote_seq, DEFAULT_PROBE_ID);\n                self.0\n                    .send_packet(&iface, link, HELLO_TTL, punch_hello_frame)\n                    .await?;\n                broker.send_frame([ReliableFrame::PunchMeNow(punch_me_now)]);\n                if let Ok((link, punch_hello)) =\n                    tokio::time::timeout(PUNCH_TIMEOUT, tx.wait_punch_hello()).await\n                {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending broker PunchDone confirmation\");\n                    broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(\n                        &punch_hello,\n                    ))]);\n                    return Ok(());\n                }\n            }\n            // 5. 
General Punching\n            // Received PunchMeNow implies remote has opened hole. We send direct Hello, expecting Hello(Done).\n            _ => {\n                tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"passive strategy: General punching, send direct Hello\");\n                let iface = ifaces\n                    .borrow(&bind)\n                    .ok_or_else(|| io::Error::other(\"No interface found\"))?;\n                let time = Duration::from_millis(100);\n                for i in 0..MAX_RETRIES {\n                    tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), %link, \"sending Hello expecting Hello(Done)\");\n                    self.0\n                        .send_packet(\n                            &iface,\n                            link,\n                            HELLO_TTL,\n                            PunchHelloFrame::new(\n                                punch_id.local_seq,\n                                punch_id.remote_seq,\n                                DEFAULT_PROBE_ID,\n                            ),\n                        )\n                        .await?;\n                    let timeout_duration = time * (1 << i);\n                    tokio::select! 
{\n                        _ = tokio::time::sleep(timeout_duration) => {\n                            // continue loop\n                        }\n                        Ok((_, punch_hello)) = async { Ok::<_, io::Error>(tx.wait_punch_hello().await) } => {\n                            tracing::trace!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"received Hello, sending broker PunchDone confirmation\");\n                            broker.send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(&punch_hello))]);\n                            return Ok(());\n                        }\n                        _ = tx.wait_punch_done() => {\n                            tracing::debug!(target: \"punch\", %punch_id, nat_pair = %format!(\"{:?}->{:?}\", local_nat, remote_nat), \"passively punch success\");\n                            return Ok(());\n                        }\n                    }\n                }\n            }\n        };\n        Err(io::Error::new(io::ErrorKind::TimedOut, \"punch timeout\"))\n    }\n\n    fn resolve_punch_connection(\n        &self,\n        bind: &BindUri,\n        local: &EndpointAddr,\n        remote: &EndpointAddr,\n        source: &qresolve::Source,\n    ) -> io::Result<(BindUri, Link, PathWay)> {\n        if let qresolve::Source::Mdns { nic, family } = source {\n            let matches_iface = bind\n                .as_iface_bind_uri()\n                .is_some_and(|(lf, ln, _)| lf == *family && ln == nic.as_ref());\n            if !matches_iface {\n                return Err(io::Error::other(\n                    \"Bind URI does not match source constraint\",\n                ));\n            }\n        }\n        if local == remote {\n            return Err(io::Error::other(\"Local and remote endpoints are identical\"));\n        }\n\n        let (local_addr, remote_addr) = self.extract_addresses(bind, local, remote)?;\n\n        if local_addr.family() != 
remote_addr.family() {\n            return Err(io::Error::other(\n                \"Local and remote addresses must be in the same address family\",\n            ));\n        }\n\n        let link = Link::new(local_addr, remote_addr);\n        let pathway = PathWay::new(*local, *remote);\n\n        Ok((bind.clone(), link, pathway))\n    }\n\n    fn extract_addresses(\n        &self,\n        bind: &BindUri,\n        local: &EndpointAddr,\n        remote: &EndpointAddr,\n    ) -> io::Result<(SocketAddr, SocketAddr)> {\n        use EndpointAddr::*;\n        match (local, remote) {\n            (Direct { addr: local_addr }, Direct { addr: remote_addr }) => {\n                Ok((*local_addr, *remote_addr))\n            }\n            (\n                Agent { .. },\n                Agent {\n                    agent: remote_agent,\n                    ..\n                },\n            ) => {\n                let iface = self.0.ifaces.borrow(bind).ok_or_else(|| {\n                    io::Error::new(\n                        io::ErrorKind::NotFound,\n                        format!(\"Interface not found for bind URI: {:?}\", bind),\n                    )\n                })?;\n                Ok((iface.bound_addr()?, *remote_agent))\n            }\n            _ => Err(io::Error::other(\n                \"Unsupported endpoint type combination for punching\",\n            )),\n        }\n    }\n}\n\nimpl<TX, PH, S> ReceiveFrame<(BindUri, PathWay, Link, ReliableFrame)> for ArcPuncher<TX, PH, S>\nwhere\n    TX: SendFrame<ReliableFrame> + Send + Sync + Clone + 'static,\n    PH: ProductHeader<OneRttHeader> + Send + Sync + 'static,\n    S: PacketSpace<OneRttHeader> + Send + Sync + 'static,\n    for<'b> PunchDoneFrame: Package<S::PacketAssembler<'b>>,\n    for<'b> PunchHelloFrame: Package<S::PacketAssembler<'b>>,\n    for<'b> PadTo20: Package<S::PacketAssembler<'b>>,\n{\n    type Output = ();\n\n    fn recv_frame(\n        &self,\n        (_bind, pathway, link, frame): 
(BindUri, PathWay, Link, ReliableFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        tracing::debug!(target: \"punch\", %pathway, %link, frame = ?frame, \"received reliable punch frame\");\n        match frame {\n            ReliableFrame::AddAddress(add_address_frame) => {\n                _ = self.recv_add_address_frame(add_address_frame);\n            }\n            ReliableFrame::PunchMeNow(punch_me_now_frame) => {\n                _ = self.recv_punch_me_now(pathway, punch_me_now_frame);\n            }\n            ReliableFrame::RemoveAddress(remove_address_frame) => {\n                self.recv_remove_address_frame(remove_address_frame);\n            }\n            ReliableFrame::PunchDone(frame) => {\n                let punch_id = frame.punch_id().flip();\n                match self.0.transaction.entry(punch_id) {\n                    Entry::Occupied(mut entry) => {\n                        let tx = entry.get_mut().1.clone();\n                        _ = tx.recv_frame((link, frame));\n                    }\n                    Entry::Vacant(_) => {\n                        tracing::debug!(target: \"punch\", %punch_id, frame = ?frame, %link, \"received unexpected punch done frame\");\n                    }\n                }\n            }\n            frame => {\n                tracing::debug!(target: \"punch\", frame = ?frame, \"received unexpected reliable punch frame\");\n            }\n        };\n\n        Ok(())\n    }\n}\n\nimpl<TX, PH, S> ReceiveFrame<(BindUri, PathWay, Link, PunchHelloFrame)> for ArcPuncher<TX, PH, S>\nwhere\n    TX: SendFrame<ReliableFrame> + Send + Sync + Clone + 'static,\n    PH: ProductHeader<OneRttHeader> + Send + Sync + 'static,\n    S: PacketSpace<OneRttHeader> + Send + Sync + 'static,\n    for<'b> PunchDoneFrame: Package<S::PacketAssembler<'b>>,\n    for<'b> PunchHelloFrame: Package<S::PacketAssembler<'b>>,\n    for<'b> PadTo20: Package<S::PacketAssembler<'b>>,\n{\n    type Output = ();\n\n    fn 
recv_frame(\n        &self,\n        (_bind, pathway, link, frame): (BindUri, PathWay, Link, PunchHelloFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        tracing::debug!(target: \"punch\", %pathway, %link, frame = ?frame, \"received punch hello frame\");\n        let punch_id = frame.punch_id().flip();\n        match self.0.transaction.entry(punch_id) {\n            Entry::Occupied(mut entry) => {\n                let tx = entry.get_mut().1.clone();\n                _ = tx.recv_frame((link, frame));\n            }\n            Entry::Vacant(_) => {\n                tracing::trace!(target: \"punch\", %punch_id, frame = ?frame, %link, \"received unsolicited punch hello, replying with broker PunchDone\");\n                self.0\n                    .broker\n                    .send_frame([ReliableFrame::PunchDone(PunchDoneFrame::respond_to(&frame))]);\n            }\n        }\n\n        Ok(())\n    }\n}\n\n#[inline]\nasync fn dynamic_iface(\n    bind_uri: &BindUri,\n    ifaces: &Arc<InterfaceManager>,\n    iface_factory: &Arc<dyn ProductIO>,\n    quic_router: &Arc<QuicRouter>,\n    stun_servers: &[SocketAddr],\n) -> io::Result<(Interface, StunClient)> {\n    const MIN_PORT: u16 = 1024;\n    const MAX_PORT: u16 = u16::MAX;\n    let (ip_family, device, _port) = bind_uri.as_iface_bind_uri().ok_or_else(|| {\n        let error = \"Invalid bind URI, expected a bind URI with the iface scheme\";\n        io::Error::new(io::ErrorKind::InvalidInput, error)\n    })?;\n    let port = rand::random::<u16>() % (MAX_PORT - MIN_PORT) + MIN_PORT;\n    let bind_uri = format!(\n        \"iface://{ip_family}.{device}:{port}?{}=true\",\n        BindUri::TEMPORARY_PROP\n    );\n    let bind_uri = BindUri::from_str(bind_uri.as_str())\n        .map_err(|e| io::Error::new(io::ErrorKind::InvalidInput, e))?;\n\n    ifaces\n        .bind(bind_uri, iface_factory.clone())\n        .await\n        .with_components_mut(|components, iface| {\n            // Ensure this temporary iface 
can receive+deliver QUIC packets to the connection.\n            // Must use the connection-owned router.\n            components.init_with(|| QuicRouterComponent::new(quic_router.clone()));\n\n            let local_addr = iface.bound_addr()?;\n            let stun_server = *stun_servers\n                .iter()\n                .find(|addr| addr.is_ipv4() == local_addr.is_ipv4())\n                .ok_or_else(|| io::Error::other(\"No STUN server matches local address family\"))?;\n            let stun_router = components\n                .init_with(|| {\n                    let ref_iface = iface.downgrade();\n                    StunRouterComponent::new(ref_iface)\n                })\n                .router();\n            let stun_client = components\n                .init_with(|| {\n                    let client =\n                        StunClient::new(iface.downgrade(), stun_router.clone(), stun_server, None);\n                    StunClientComponent::new(client)\n                })\n                .client();\n            components.init_with(|| {\n                ReceiveAndDeliverPacket::builder(iface.downgrade())\n                    .quic_router(quic_router.clone())\n                    .stun_router(stun_router)\n                    .init()\n            });\n            Ok((iface.to_owned(), stun_client))\n        })\n}\n"
  },
  {
    "path": "qtraversal/src/punch/scheduler.rs",
    "content": "use std::{\n    collections::{HashMap, VecDeque},\n    io,\n    net::SocketAddr,\n    sync::{Arc, LazyLock, Mutex},\n    task::{Context, Poll, Waker},\n    time::Duration,\n};\n\nuse qbase::net::{AddrFamily, Family};\nuse tokio::time::Instant;\n\npub static SCHEDULER: LazyLock<Arc<Mutex<Scheduler>>> =\n    LazyLock::new(|| Arc::new(Mutex::new(Scheduler::new())));\n\nconst MAX_SOCKETS_PER_DEVICE: u32 = 300;\nconst MAX_PORTS_PER_DEVICE: u32 = 600;\nconst MAX_TOTAL_SOCKETS: u32 = 600;\nconst MAX_TOTAL_PORTS: u32 = 1200;\nconst PORT_COOLING_INTERVAL: Duration = Duration::from_secs(60);\n\npub struct Scheduler {\n    devices: HashMap<DeviceKey, DeviceLedger>,\n    pub(crate) total_sockets: u32,\n    pub(crate) total_ports: u32,\n    cooling: VecDeque<(Instant, u32)>,\n    waiters: VecDeque<Waker>,\n}\n\nimpl Scheduler {\n    fn new() -> Self {\n        Self {\n            devices: HashMap::new(),\n            total_sockets: 0,\n            total_ports: 0,\n            cooling: VecDeque::new(),\n            waiters: VecDeque::new(),\n        }\n    }\n\n    fn reap_cooling(&mut self) {\n        let now = Instant::now();\n        self.cooling.retain(|(time, count)| {\n            if now - *time > PORT_COOLING_INTERVAL {\n                self.total_ports = self.total_ports.saturating_sub(*count);\n                false\n            } else {\n                true\n            }\n        });\n    }\n\n    fn global_available(&self) -> u32 {\n        let by_socket = MAX_TOTAL_SOCKETS.saturating_sub(self.total_sockets);\n        let by_port = MAX_TOTAL_PORTS.saturating_sub(self.total_ports);\n        by_socket.min(by_port)\n    }\n\n    pub fn poll_allocate(\n        &mut self,\n        cx: &Context,\n        dest: SocketAddr,\n        device: String,\n        count: u32,\n    ) -> Poll<io::Result<u32>> {\n        self.reap_cooling();\n        let global_avail = self.global_available();\n\n        let key = DeviceKey::new(device.clone(), dest.ip().family());\n  
      let ledger = self.devices.entry(key).or_insert_with(DeviceLedger::new);\n        ledger.reap_cooling();\n        let device_avail = ledger.available();\n        let granted = global_avail.min(device_avail).min(count);\n\n        tracing::trace!(target: \"punch\",\n            global_avail, device_avail, granted, count,\n            total_sockets = self.total_sockets, total_ports = self.total_ports,\n            \"Poll allocate\"\n        );\n\n        if granted > 0 {\n            ledger.sockets += granted;\n            ledger.ports += granted;\n            *ledger.per_dest.entry(dest).or_insert(0) += granted;\n\n            self.total_sockets += granted;\n            self.total_ports += granted;\n\n            tracing::trace!(target: \"punch\", ?dest, device, granted, \"Port allocated\");\n            Poll::Ready(Ok(granted))\n        } else {\n            if !self.waiters.iter().any(|w| w.will_wake(cx.waker())) {\n                self.waiters.push_back(cx.waker().clone());\n            }\n            tracing::trace!(target: \"punch\", ?dest, device, count, \"Port allocation pending\");\n            Poll::Pending\n        }\n    }\n\n    pub fn release_port(&mut self, count: u32, dst: SocketAddr, device: String) -> io::Result<()> {\n        let key = DeviceKey::new(device.clone(), dst.ip().family());\n\n        let ledger = self.devices.get_mut(&key).ok_or_else(|| {\n            tracing::trace!(target: \"punch\", ?dst, device, \"Device not found\");\n            io::Error::other(\"device not found\")\n        })?;\n\n        if ledger.sockets < count {\n            tracing::trace!(target: \"punch\", sockets = ledger.sockets, count, ?dst, \"Insufficient sockets\");\n            return Err(io::Error::other(\"insufficient sockets\"));\n        }\n        let dest_count = ledger.per_dest.get(&dst).copied().unwrap_or(0);\n        if dest_count < count {\n            tracing::trace!(target: \"punch\", ?dst, dest_count, count, \"Socket count mismatch\");\n          
  return Err(io::Error::other(\"socket count mismatch\"));\n        }\n\n        // Device: release sockets immediately, ports enter cooling\n        ledger.sockets -= count;\n        let now = Instant::now();\n        ledger.cooling.push_back((now, count));\n        if dest_count > count {\n            ledger.per_dest.insert(dst, dest_count - count);\n        } else {\n            ledger.per_dest.remove(&dst);\n        }\n\n        // Global: release sockets immediately, ports enter cooling\n        self.total_sockets = self.total_sockets.saturating_sub(count);\n        self.cooling.push_back((now, count));\n\n        tracing::trace!(target: \"punch\", ?dst, device, count, \"Port released\");\n\n        for waker in self.waiters.drain(..) {\n            waker.wake();\n        }\n\n        Ok(())\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\nstruct DeviceKey {\n    device: String,\n    family: Family,\n}\n\nimpl DeviceKey {\n    fn new(device: String, family: Family) -> Self {\n        Self { device, family }\n    }\n}\n\nstruct DeviceLedger {\n    sockets: u32,\n    ports: u32,\n    cooling: VecDeque<(Instant, u32)>,\n    per_dest: HashMap<SocketAddr, u32>,\n}\n\nimpl DeviceLedger {\n    fn new() -> Self {\n        Self {\n            sockets: 0,\n            ports: 0,\n            cooling: VecDeque::new(),\n            per_dest: HashMap::new(),\n        }\n    }\n\n    fn reap_cooling(&mut self) {\n        let now = Instant::now();\n        self.cooling.retain(|(time, count)| {\n            if now - *time > PORT_COOLING_INTERVAL {\n                self.ports = self.ports.saturating_sub(*count);\n                false\n            } else {\n                true\n            }\n        });\n    }\n\n    fn available(&self) -> u32 {\n        let by_socket = MAX_SOCKETS_PER_DEVICE.saturating_sub(self.sockets);\n        let by_port = MAX_PORTS_PER_DEVICE.saturating_sub(self.ports);\n        by_socket.min(by_port)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n   
 use std::net::{IpAddr, Ipv4Addr};\n\n    use futures::task::noop_waker_ref;\n\n    use super::*;\n\n    fn test_addr() -> SocketAddr {\n        SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080)\n    }\n\n    fn test_cx() -> Context<'static> {\n        Context::from_waker(noop_waker_ref())\n    }\n\n    #[test]\n    fn test_scheduler_new() {\n        let s = Scheduler::new();\n        assert_eq!(s.total_sockets, 0);\n        assert_eq!(s.total_ports, 0);\n        assert!(s.devices.is_empty());\n        assert!(s.cooling.is_empty());\n    }\n\n    #[test]\n    fn test_allocation_success() {\n        let mut s = Scheduler::new();\n        let cx = test_cx();\n        let dest = test_addr();\n\n        let result = s.poll_allocate(&cx, dest, \"eth0\".into(), 10);\n        assert!(matches!(result, Poll::Ready(Ok(10))));\n\n        assert_eq!(s.total_sockets, 10);\n        assert_eq!(s.total_ports, 10);\n\n        let key = DeviceKey::new(\"eth0\".into(), dest.ip().family());\n        let ledger = s.devices.get(&key).unwrap();\n        assert_eq!(ledger.sockets, 10);\n        assert_eq!(ledger.ports, 10);\n        assert_eq!(*ledger.per_dest.get(&dest).unwrap(), 10);\n    }\n\n    #[test]\n    fn test_allocation_pending() {\n        let mut s = Scheduler::new();\n        let cx = test_cx();\n        let dest = test_addr();\n\n        // Fill device to its limit\n        let _ = s.poll_allocate(&cx, dest, \"eth0\".into(), MAX_SOCKETS_PER_DEVICE);\n\n        let result = s.poll_allocate(&cx, dest, \"eth0\".into(), 1);\n        assert!(matches!(result, Poll::Pending));\n        assert_eq!(s.waiters.len(), 1);\n    }\n\n    #[test]\n    fn test_release_port() {\n        let mut s = Scheduler::new();\n        let cx = test_cx();\n        let dest = test_addr();\n\n        let _ = s.poll_allocate(&cx, dest, \"eth0\".into(), 10);\n        assert!(s.release_port(10, dest, \"eth0\".into()).is_ok());\n\n        assert_eq!(s.total_sockets, 0);\n        
assert_eq!(s.cooling.len(), 1);\n\n        let key = DeviceKey::new(\"eth0\".into(), dest.ip().family());\n        let ledger = s.devices.get(&key).unwrap();\n        assert_eq!(ledger.sockets, 0);\n        assert!(!ledger.per_dest.contains_key(&dest));\n    }\n\n    #[test]\n    fn test_global_limits() {\n        let mut s = Scheduler::new();\n        let cx = test_cx();\n        let dest = test_addr();\n\n        // Fill eth0 to device limit (300)\n        let r1 = s.poll_allocate(&cx, dest, \"eth0\".into(), MAX_SOCKETS_PER_DEVICE);\n        assert!(matches!(r1, Poll::Ready(Ok(c)) if c == MAX_SOCKETS_PER_DEVICE));\n        assert_eq!(s.total_sockets, MAX_SOCKETS_PER_DEVICE);\n\n        // Fill eth1 with remaining global capacity\n        let remain = MAX_TOTAL_SOCKETS - MAX_SOCKETS_PER_DEVICE;\n        let r2 = s.poll_allocate(&cx, dest, \"eth1\".into(), remain);\n        assert!(matches!(r2, Poll::Ready(Ok(c)) if c == remain));\n        assert_eq!(s.total_sockets, MAX_TOTAL_SOCKETS);\n\n        // Global full → Pending\n        let r3 = s.poll_allocate(&cx, dest, \"eth0\".into(), 1);\n        assert!(matches!(r3, Poll::Pending));\n    }\n\n    #[test]\n    fn test_device_limits() {\n        let mut s = Scheduler::new();\n        let cx = test_cx();\n        let dest = test_addr();\n\n        let r = s.poll_allocate(&cx, dest, \"eth0\".into(), MAX_SOCKETS_PER_DEVICE);\n        assert!(matches!(r, Poll::Ready(Ok(c)) if c == MAX_SOCKETS_PER_DEVICE));\n\n        let key = DeviceKey::new(\"eth0\".into(), dest.ip().family());\n        assert_eq!(s.devices.get(&key).unwrap().sockets, MAX_SOCKETS_PER_DEVICE);\n\n        let r = s.poll_allocate(&cx, dest, \"eth0\".into(), MAX_SOCKETS_PER_DEVICE);\n        assert!(matches!(r, Poll::Pending));\n    }\n\n    #[test]\n    fn test_global_not_updated_on_device_pending() {\n        let mut s = Scheduler::new();\n        let cx = test_cx();\n        let dest = test_addr();\n\n        let _ = s.poll_allocate(&cx, dest, 
\"eth0\".into(), 10);\n\n        // Manually max out the device\n        let key = DeviceKey::new(\"eth0\".into(), dest.ip().family());\n        if let Some(ledger) = s.devices.get_mut(&key) {\n            ledger.sockets = MAX_SOCKETS_PER_DEVICE;\n            ledger.ports = MAX_PORTS_PER_DEVICE;\n        }\n\n        // Device full → Pending, global unchanged\n        let r = s.poll_allocate(&cx, dest, \"eth0\".into(), 1);\n        assert!(matches!(r, Poll::Pending));\n        assert_eq!(s.total_sockets, 10);\n        assert_eq!(s.total_ports, 10);\n    }\n\n    #[test]\n    fn test_mutex_protection() {\n        use std::sync::Arc;\n\n        use tokio::sync::Mutex;\n\n        let scheduler = Arc::new(Mutex::new(Scheduler::new()));\n        let mut handles = vec![];\n\n        for _ in 0..5 {\n            let s = Arc::clone(&scheduler);\n            handles.push(std::thread::spawn(move || {\n                let mut s = s.blocking_lock();\n                s.total_sockets += 1;\n                s.total_ports += 1;\n                true\n            }));\n        }\n\n        for h in handles {\n            assert!(h.join().unwrap());\n        }\n\n        let s = scheduler.try_lock().unwrap();\n        assert_eq!(s.total_sockets, 5);\n        assert_eq!(s.total_ports, 5);\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/punch/tx.rs",
    "content": "use std::fmt;\n\nuse qbase::{\n    frame::{AddAddressFrame, PunchDoneFrame, PunchHelloFrame, PunchMeNowFrame, io::ReceiveFrame},\n    net::route::Link,\n};\nuse tokio::sync::SetOnce;\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub(crate) struct PunchId {\n    pub local_seq: u32,\n    pub remote_seq: u32,\n}\n\nimpl PunchId {\n    pub fn new(local_seq: u32, remote_seq: u32) -> Self {\n        Self {\n            local_seq,\n            remote_seq,\n        }\n    }\n\n    pub fn flip(self) -> Self {\n        Self {\n            local_seq: self.remote_seq,\n            remote_seq: self.local_seq,\n        }\n    }\n}\n\nimpl fmt::Display for PunchId {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        write!(f, \"({}, {})\", self.local_seq, self.remote_seq)\n    }\n}\n\npub(crate) trait AsPunchId {\n    fn punch_id(&self) -> PunchId;\n}\n\nimpl AsPunchId for PunchHelloFrame {\n    fn punch_id(&self) -> PunchId {\n        PunchId::new(self.local_seq(), self.remote_seq())\n    }\n}\n\nimpl AsPunchId for PunchDoneFrame {\n    fn punch_id(&self) -> PunchId {\n        PunchId::new(self.local_seq(), self.remote_seq())\n    }\n}\n\nimpl AsPunchId for PunchMeNowFrame {\n    fn punch_id(&self) -> PunchId {\n        PunchId::new(self.local_seq(), self.remote_seq())\n    }\n}\n\nimpl AsPunchId for (&AddAddressFrame, &AddAddressFrame) {\n    fn punch_id(&self) -> PunchId {\n        PunchId::new(self.0.seq_num(), self.1.seq_num())\n    }\n}\n\npub(crate) struct Transaction {\n    punch_me_now_frame: SetOnce<PunchMeNowFrame>,\n    punch_hello_frame: SetOnce<(Link, PunchHelloFrame)>,\n    punch_done_frame: SetOnce<(Link, PunchDoneFrame)>,\n}\n\nimpl Transaction {\n    pub fn new() -> Self {\n        Self {\n            punch_me_now_frame: SetOnce::new(),\n            punch_hello_frame: SetOnce::new(),\n            punch_done_frame: SetOnce::new(),\n        }\n    }\n\n    pub async fn wait_punch_done(&self) -> (Link, PunchDoneFrame) 
{\n        *self.punch_done_frame.wait().await\n    }\n\n    pub fn try_punch_done(&self) -> Option<(Link, PunchDoneFrame)> {\n        self.punch_done_frame.get().copied()\n    }\n\n    pub async fn wait_punch_hello(&self) -> (Link, PunchHelloFrame) {\n        *self.punch_hello_frame.wait().await\n    }\n\n    pub async fn wait_punch_me_now(&self) -> PunchMeNowFrame {\n        *self.punch_me_now_frame.wait().await\n    }\n\n    pub fn store_punch_me_now(&self, frame: PunchMeNowFrame) {\n        _ = self.punch_me_now_frame.set(frame);\n    }\n}\n\nimpl ReceiveFrame<(Link, PunchHelloFrame)> for Transaction {\n    type Output = ();\n\n    fn recv_frame(\n        &self,\n        (link, frame): (Link, PunchHelloFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        _ = self.punch_hello_frame.set((link, frame));\n        Ok(())\n    }\n}\n\nimpl ReceiveFrame<(Link, PunchDoneFrame)> for Transaction {\n    type Output = ();\n\n    fn recv_frame(\n        &self,\n        (link, frame): (Link, PunchDoneFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        _ = self.punch_done_frame.set((link, frame));\n        Ok(())\n    }\n}\n\nimpl ReceiveFrame<(Link, PunchMeNowFrame)> for Transaction {\n    type Output = ();\n\n    fn recv_frame(\n        &self,\n        (_link, frame): (Link, PunchMeNowFrame),\n    ) -> Result<Self::Output, qbase::error::Error> {\n        _ = self.punch_me_now_frame.set(frame);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "qtraversal/src/punch.rs",
    "content": "pub(super) mod predictor;\npub mod puncher;\npub(super) mod scheduler;\npub(super) mod tx;\n"
  },
  {
    "path": "qtraversal/src/route.rs",
    "content": "use std::{\n    convert::identity,\n    io,\n    net::SocketAddr,\n    pin::Pin,\n    sync::{Arc, Mutex, MutexGuard},\n    task::{\n        Context,\n        Poll::{self, Ready},\n        ready,\n    },\n};\n\nuse bytes::BytesMut;\nuse qbase::{\n    net::{\n        addr::EndpointAddr,\n        route::{Line, Link, Route},\n    },\n    util::ArcAsyncDeque,\n};\nuse qinterface::{\n    Interface, WeakInterface,\n    component::{\n        Component,\n        route::{QuicRouter, QuicRouterComponent},\n    },\n    io::{IO, IoExt, RefIO},\n};\nuse smallvec::SmallVec;\nuse tokio_util::task::AbortOnDropHandle;\n\npub type ArcRecvQueue = ArcAsyncDeque<(BytesMut, PathWay, Link)>;\n\nuse crate::{\n    PathWay,\n    nat::{\n        client::{StunClients, StunClientsComponent},\n        router::{StunRouter, StunRouterComponent},\n    },\n    packet::{ForwardHeader, StunHeader},\n};\n\n#[derive(Debug, Clone)]\npub enum Forwarder<I: RefIO + 'static> {\n    Clients { stun_clients: StunClients<I> },\n    Server { outer_addr: SocketAddr },\n}\n\nimpl<I: RefIO> Forwarder<I> {\n    pub fn outers(&self) -> SmallVec<[SocketAddr; 8]> {\n        match self {\n            Forwarder::Clients { stun_clients } => stun_clients.with_clients(|clients| {\n                clients\n                    .values()\n                    .filter_map(|client| client.get_outer_addr()?.ok())\n                    .collect()\n            }),\n            Forwarder::Server { outer_addr } => SmallVec::from_iter([*outer_addr]),\n        }\n    }\n\n    pub fn should_forward(&self, dst: EndpointAddr) -> Option<SocketAddr> {\n        let outers = self.outers();\n\n        if outers.is_empty() {\n            return None;\n        }\n\n        let EndpointAddr::Agent {\n            agent,\n            outer: dst_outer,\n        } = dst\n        else {\n            return None;\n        };\n\n        for outer in outers {\n            if outer == dst_outer {\n                return None;\n            
}\n\n            if outer == agent {\n                return Some(dst_outer);\n            }\n        }\n\n        Some(agent)\n    }\n}\n\n#[derive(Debug)]\npub struct ForwardersComponent {\n    forward: Mutex<Forwarder<WeakInterface>>,\n}\n\nimpl ForwardersComponent {\n    pub fn new(forwarder: Forwarder<WeakInterface>) -> Self {\n        Self {\n            forward: Mutex::new(forwarder),\n        }\n    }\n\n    pub fn new_client(stun_clients: StunClients<WeakInterface>) -> Self {\n        Self::new(Forwarder::Clients { stun_clients })\n    }\n\n    pub fn new_server(outer_addr: SocketAddr) -> Self {\n        Self::new(Forwarder::Server { outer_addr })\n    }\n\n    fn lock_forwarders(&self) -> MutexGuard<'_, Forwarder<WeakInterface>> {\n        self.forward.lock().expect(\"Forwarder lock poisoned\")\n    }\n\n    pub fn forwarder(&self) -> Forwarder<WeakInterface> {\n        self.lock_forwarders().clone()\n    }\n}\n\nimpl Component for ForwardersComponent {\n    fn poll_shutdown(&self, _cx: &mut Context<'_>) -> Poll<()> {\n        Poll::Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        _ = iface.with_component(|clients: &StunClientsComponent| {\n            clients.reinit(iface);\n            *self.lock_forwarders() = Forwarder::Clients {\n                stun_clients: clients.clone(),\n            };\n        });\n    }\n}\n\n#[derive(Debug)]\npub struct ReceiveAndDeliverPacket {\n    task: Mutex<Option<AbortOnDropHandle<io::Result<()>>>>,\n    quic: bool,\n    stun: bool,\n    forward: bool,\n}\n\npub type ReceiveAndDeliverPacketComponent = ReceiveAndDeliverPacket;\n\n#[bon::bon]\nimpl ReceiveAndDeliverPacket {\n    #[builder(finish_fn = init)]\n    pub fn new(\n        #[builder(start_fn)] weak_iface: WeakInterface,\n        quic_router: Option<Arc<QuicRouter>>,\n        stun_router: Option<StunRouter>,\n        forwarder: Option<Forwarder<WeakInterface>>,\n    ) -> Self {\n        let enable_quic = quic_router.is_some();\n        let 
enable_stun = stun_router.is_some();\n        let enable_forward = forwarder.is_some();\n\n        let task = Self::task()\n            .maybe_quic_router(quic_router)\n            .maybe_stun_router(stun_router)\n            .maybe_forwarder(forwarder)\n            .iface_ref(weak_iface)\n            .spawn();\n        Self {\n            task: Mutex::new(Some(task)),\n            quic: enable_quic,\n            stun: enable_stun,\n            forward: enable_forward,\n        }\n    }\n\n    #[builder(finish_fn = spawn)]\n    pub fn task<I: RefIO + 'static>(\n        quic_router: Option<Arc<QuicRouter>>,\n        stun_router: Option<StunRouter>,\n        forwarder: Option<Forwarder<I>>,\n        iface_ref: I,\n    ) -> AbortOnDropHandle<io::Result<()>> {\n        AbortOnDropHandle::new(tokio::spawn(async move {\n            let iface = iface_ref.iface();\n            let bind_uri = iface.bind_uri();\n\n            let deliver_quic_packet = async |pkt: BytesMut, route: Route| {\n                let Some(quic_router) = quic_router.as_ref() else {\n                    return;\n                };\n\n                use qbase::packet::{self, Packet, PacketReader};\n                fn is_initial_packet(pkt: &Packet) -> bool {\n                    matches!(pkt, Packet::Data(packet) if matches!(packet.header, packet::DataHeader::Long(packet::long::DataHeader::Initial(..))))\n                }\n\n                let size = pkt.len();\n                let bind_uri = bind_uri.clone();\n                for (packet, way) in PacketReader::new(pkt, 8)\n                    .flatten()\n                    .filter(move |pkt| !(is_initial_packet(pkt) && size < 1100))\n                    .map(move |pkt| (pkt, (bind_uri.clone(), route.pathway(), route.link())))\n                {\n                    quic_router.deliver(packet, way).await;\n                }\n            };\n\n            let deliver_stun_packet = async |mut pkt: BytesMut, route: Route| {\n                let 
Some(stun_router) = stun_router.as_ref() else {\n                    return;\n                };\n\n                use crate::nat::msg::be_packet;\n                let pkt = pkt.split_off(StunHeader::encoding_size());\n                let Ok((.., (txid, packet))) = be_packet(&pkt) else {\n                    return;\n                };\n\n                stun_router.deliver_stun_packet(txid, packet, route.link());\n            };\n\n            let deliver_forward_packet =\n                async |mut pkt: BytesMut, mut route: Route, fhdr: ForwardHeader| {\n                    if let Some(forwarder) = forwarder.as_ref()\n                        && let Some(target) = forwarder.should_forward(fhdr.pathway().remote())\n                    {\n                        let bufs = &[io::IoSlice::new(&pkt)];\n                        let new_link = Link::new(iface.bound_addr()?, target);\n                        let new_line = Line::new(new_link, 64, None, pkt.len() as u16);\n                        let new_route = Route::new(route.link.into(), new_line);\n                        return iface.sendmmsg(bufs, new_route).await;\n                    };\n\n                    // split_off forward header, deliver the rest as quic packet\n                    let pkt = pkt.split_off(ForwardHeader::encoding_size(&fhdr.pathway()));\n                    route.seg_size = pkt.len() as _;\n                    let new_route = Route::new(fhdr.pathway().flip().map(Into::into), route.line);\n                    deliver_quic_packet(pkt, new_route).await;\n                    Ok(())\n                };\n\n            let (mut bufs, mut hdrs) = (vec![], vec![]);\n            loop {\n                use crate::packet::{Header, be_header};\n                for (pkt, hdr) in iface.recvmmsg(&mut bufs, &mut hdrs).await? 
{\n                    match be_header(&pkt) {\n                        // quic\n                        Err(_) => deliver_quic_packet(pkt, hdr).await,\n                        // stun\n                        Ok((_remain, Header::Stun(_stun_header))) => {\n                            deliver_stun_packet(pkt, hdr).await\n                        }\n                        // forward\n                        Ok((_remain, Header::Forward(forward_header))) => {\n                            deliver_forward_packet(pkt, hdr, forward_header).await?\n                        }\n                    }\n                }\n            }\n        }))\n    }\n}\n\nimpl ReceiveAndDeliverPacket {\n    fn lock_task(&self) -> MutexGuard<'_, Option<AbortOnDropHandle<io::Result<()>>>> {\n        self.task.lock().unwrap()\n    }\n\n    pub fn reinit(&self, iface: &Interface) {\n        _ = iface.with_components(|components| {\n            let quic_router = (self.quic)\n                .then(|| components.with(QuicRouterComponent::router))\n                .and_then(identity);\n            let stun_router = self\n                .stun\n                .then(|| components.with(StunRouterComponent::router))\n                .and_then(identity);\n            let forwarder = self\n                .forward\n                .then(|| components.with(ForwardersComponent::forwarder))\n                .and_then(identity);\n            *self.lock_task() = Some(\n                Self::task()\n                    .maybe_quic_router(quic_router)\n                    .maybe_stun_router(stun_router)\n                    .maybe_forwarder(forwarder)\n                    .iface_ref(iface.downgrade())\n                    .spawn(),\n            );\n        });\n    }\n}\n\nimpl Component for ReceiveAndDeliverPacket {\n    fn poll_shutdown(&self, cx: &mut Context<'_>) -> std::task::Poll<()> {\n        let mut task_guard = self.lock_task();\n        if let Some(task) = task_guard.as_mut() {\n            
task.abort();\n            _ = ready!(Pin::new(task).poll(cx));\n            *task_guard = None;\n        }\n        Ready(())\n    }\n\n    fn reinit(&self, iface: &Interface) {\n        self.reinit(iface);\n    }\n}\n"
  },
  {
    "path": "qtraversal/tests/detect.rs",
    "content": "use std::{\n    io,\n    sync::{Arc, LazyLock},\n};\n\nuse qinterface::io::{IO, ProductIO, handy::DEFAULT_IO_FACTORY};\nuse qtraversal::{\n    nat::{\n        client::{NatType, StunClient},\n        router::StunRouter,\n    },\n    route::ReceiveAndDeliverPacket,\n};\nuse tracing::{Instrument, info_span};\nuse tracing_subscriber::{prelude::__tracing_subscriber_SubscriberExt, util::SubscriberInitExt};\n\n#[derive(Debug, Clone, Copy)]\npub struct TestCase {\n    pub bind_addr: &'static str,\n    pub outer_addr: &'static str,\n    pub nat_type: NatType,\n}\n\npub const STUN_AGENT: &str = \"10.10.0.64:20002\";\n\npub const CASES: [TestCase; 10] = [\n    TestCase {\n        bind_addr: \"192.168.0.98:6001\",\n        outer_addr: \"10.10.0.98:6001\",\n        nat_type: NatType::FullCone,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.96:6002\",\n        outer_addr: \"10.10.0.96:6002\",\n        nat_type: NatType::RestrictedCone,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.88:6003\",\n        outer_addr: \"10.10.0.88:6003\",\n        nat_type: NatType::RestrictedPort,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.86:6004\",\n        outer_addr: \"10.10.0.86:6004\",\n        nat_type: NatType::Dynamic,\n    },\n    TestCase {\n        bind_addr: \"192.168.0.84:6005\",\n        outer_addr: \"10.10.0.84:6005\",\n        nat_type: NatType::Symmetric,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.48:6006\",\n        outer_addr: \"10.10.0.48:6006\",\n        nat_type: NatType::FullCone,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.46:6007\",\n        outer_addr: \"10.10.0.46:6007\",\n        nat_type: NatType::RestrictedCone,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.38:6008\",\n        outer_addr: \"10.10.0.38:6008\",\n        nat_type: NatType::RestrictedPort,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.36:6009\",\n        outer_addr: \"10.10.0.36:6009\",\n        nat_type: 
NatType::Dynamic,\n    },\n    TestCase {\n        bind_addr: \"172.16.0.34:6010\",\n        outer_addr: \"10.10.0.34:6010\",\n        nat_type: NatType::Symmetric,\n    },\n];\n\npub fn init_tracing() -> io::Result<()> {\n    let file = std::fs::OpenOptions::new()\n        .create(true)\n        .write(true)\n        .truncate(true)\n        .open(\"tests.log\")?;\n\n    let filter = tracing_subscriber::filter::filter_fn(|metadata| {\n        !metadata.target().contains(\"netlink_packet_route\")\n    });\n\n    _ = tracing_subscriber::registry()\n        .with(tracing_subscriber::Layer::with_filter(\n            tracing_subscriber::fmt::layer()\n                .with_target(true)\n                .with_ansi(false)\n                .with_file(true)\n                .with_line_number(true),\n            filter.clone(),\n        ))\n        .with(tracing_subscriber::Layer::with_filter(\n            tracing_subscriber::fmt::layer().with_writer(file),\n            filter,\n        ))\n        .try_init();\n    Ok(())\n}\n\nfn run<F: Future<Output: Send + 'static> + Send + 'static>(\n    test_name: &'static str,\n    f: F,\n) -> F::Output {\n    static RT: LazyLock<tokio::runtime::Runtime> = LazyLock::new(|| {\n        init_tracing().expect(\"failed to init tracing\");\n        tokio::runtime::Builder::new_multi_thread()\n            .enable_all()\n            .build()\n            .unwrap()\n    });\n    RT.block_on(f.instrument(info_span!(\"test\", test_name)))\n}\n\nasync fn test_detect_case(case: usize) {\n    let stun_agent = STUN_AGENT.parse().unwrap();\n    let case = CASES[case];\n    let bind_uri = format!(\"inet://{}\", case.bind_addr);\n    let iface: Arc<dyn IO> = Arc::from(DEFAULT_IO_FACTORY.bind(bind_uri.into()));\n    let stun_router = StunRouter::new();\n    let stun_client = StunClient::new(iface.clone(), stun_router.clone(), stun_agent, None);\n\n    let _route_task = ReceiveAndDeliverPacket::task()\n        .stun_router(stun_router)\n        
.iface_ref(iface.clone())\n        .spawn();\n\n    let outer_addr = stun_client\n        .outer_addr()\n        .await\n        .expect(\"failed to get outer addr\");\n    tracing::info!(\"Outer addr: {} Agent addr {}\", outer_addr, stun_agent);\n    let nat_type = stun_client\n        .nat_type()\n        .await\n        .expect(\"failed to get nat type\");\n    tracing::info!(case.bind_addr, case.outer_addr, ?nat_type, ?case.nat_type);\n    assert!(nat_type == case.nat_type);\n}\n\nmacro_rules! test_detect {\n    (async fn $test_name:ident = test_detect_case($case:expr) $($tt:tt)*) => {\n\n        #[test]\n        #[ignore] // run manually\n        fn $test_name() {\n            run(stringify!($test_name), async move {\n                test_detect_case($case).await\n            })\n        }\n\n        test_detect!($($tt)*);\n    };\n    () => {}\n}\n\n// ip netns exec nsa cargo test --package qtraversal test_detect -- --include-ignored --nocapture\ntest_detect! {\n    async fn test_detect_full_cone_client = test_detect_case(0)\n    async fn test_detect_restricted_cone_client = test_detect_case(1)\n    async fn test_detect_port_restricted_client = test_detect_case(2)\n    async fn test_detect_dynamic_client = test_detect_case(3)\n    async fn test_detect_symmetric_client = test_detect_case(4)\n    async fn test_detect_full_cone_server = test_detect_case(5)\n    async fn test_detect_restricted_cone_server = test_detect_case(6)\n    async fn test_detect_port_restricted_server = test_detect_case(7)\n    async fn test_detect_dynamic_server = test_detect_case(8)\n    async fn test_detect_symmetric_server = test_detect_case(9)\n}\n"
  },
  {
    "path": "qtraversal/tools/build_nat.sh",
    "content": "#!/bin/bash\n  \n# set -x\nset -e\n# 创建局域网的网桥\nip link add brlan1 type bridge\nip link set dev brlan1 up\niptables -A FORWARD -o brlan1 -m comment --comment \"allow packets to pass from lxd lan bridge\" -j ACCEPT\niptables -A FORWARD -i brlan1 -m comment --comment \"allow input packets to pass to lxd lan bridge\" -j ACCEPT\n  \nip link add brlan2 type bridge\nip link set dev brlan2 up\niptables -A FORWARD -o brlan2 -m comment --comment \"allow packets to pass from lxd lan bridge\" -j ACCEPT\niptables -A FORWARD -i brlan2 -m comment --comment \"allow input packets to pass to lxd lan bridge\" -j ACCEPT\n  \n# 创建广域网的网桥\nip link add brwan type bridge\nip link set dev brwan up\niptables -A FORWARD -o brwan -m comment --comment \"allow packets to pass from lxd wan bridge\" -j ACCEPT\niptables -A FORWARD -i brwan -m comment --comment \"allow input packets to pass to lxd wan bridge\" -j ACCEPT\n  \n# 创建内网主机Host A,多网卡\nip netns add nsa\nip netns exec nsa ip link set lo up\n  \nfunction create_new(){\n    devpair=$1  # aveth0\n    devbr=$2    # brlan1\n    virtnet=$3  # nsa\n    devhost=$4  # eth0\n    devaddr=$5  # 192.168.0.98\n    gateway=$6  # 192.168.0.1\n    routemap=$7 # 101\n \n    dveth0=$devpair\"0\"\n    dveth1=$devpair\"1\"\n \n    ip link add $dveth0 type veth peer name $dveth1\n      \n    ip link set dev $dveth1 master $devbr\n    ip link set dev $dveth1 up\n      \n    ip link set dev $dveth0 netns $virtnet\n    ip netns exec $virtnet ip link set dev $dveth0 name $devhost\n    ip netns exec $virtnet ip addr add $devaddr/24 dev $devhost\n    ip netns exec $virtnet ip link set dev $devhost up\n    ip netns exec $virtnet ip route add default via $gateway dev $devhost src $devaddr table $routemap\n    ip netns exec $virtnet ip rule add from $devaddr table $routemap\n}\n \ncreate_new \"aveth0\" \"brlan1\" \"nsa\" \"eth0\" \"192.168.0.98\" \"192.168.0.1\" \"101\"\ncreate_new \"aveth1\" \"brlan1\" \"nsa\" \"eth1\" \"192.168.0.96\" \"192.168.0.1\" 
\"102\"\ncreate_new \"aveth2\" \"brlan1\" \"nsa\" \"eth2\" \"192.168.0.88\" \"192.168.0.1\" \"103\"\ncreate_new \"aveth3\" \"brlan1\" \"nsa\" \"eth3\" \"192.168.0.86\" \"192.168.0.1\" \"104\"\ncreate_new \"aveth4\" \"brlan1\" \"nsa\" \"eth4\" \"192.168.0.84\" \"192.168.0.1\" \"105\"\n \n# Open Internel, FullCone\ncreate_new \"aveth5\" \"brwan\" \"nsa\" \"eth5\" \"10.10.0.108\" \"10.10.0.1\" \"201\"\n# Open Internel, RestrictedCone\ncreate_new \"aveth6\" \"brwan\" \"nsa\" \"eth6\" \"10.10.0.106\" \"10.10.0.1\" \"202\"\n# Open Internet,PortRestrictedCone\ncreate_new \"aveth7\" \"brwan\" \"nsa\" \"eth7\" \"10.10.0.104\" \"10.10.0.1\" \"203\"\n# Open Internet,UDPBlocked\ncreate_new \"aveth8\" \"brwan\" \"nsa\" \"eth8\" \"10.10.0.102\" \"10.10.0.1\" \"204\"\n \ncreate_new \"aveth9\" \"brlan2\" \"nsa\" \"eth9\" \"172.16.0.48\" \"172.16.0.1\" \"301\"\ncreate_new \"avetha\" \"brlan2\" \"nsa\" \"etha\" \"172.16.0.46\" \"172.16.0.1\" \"302\"\ncreate_new \"avethb\" \"brlan2\" \"nsa\" \"ethb\" \"172.16.0.38\" \"172.16.0.1\" \"303\"\ncreate_new \"avethc\" \"brlan2\" \"nsa\" \"ethc\" \"172.16.0.36\" \"172.16.0.1\" \"304\"\ncreate_new \"avethd\" \"brlan2\" \"nsa\" \"ethd\" \"172.16.0.34\" \"172.16.0.1\" \"305\"\n  \nip netns exec nsa ip route add default via 192.168.0.1\n  \nip netns exec nsa iptables -t filter -P OUTPUT DROP\nip netns exec nsa iptables -t filter -P INPUT DROP\nip netns exec nsa iptables -t filter -A OUTPUT ! -p udp -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT ! 
-p udp -j ACCEPT\n# eth0:192.168.0.98, NAT, FullCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth0 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth0 -j ACCEPT\n# eth1:192.168.0.96, NAT, RestrictedCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth1 -m recent --rdest --set --name pubtrack1 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth1 -m recent --rsource --rcheck --seconds 300 --name pubtrack1 -j ACCEPT\n# eth2:192.168.0.88, NAT, PortRestrictedCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth2 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT\n# eth3:192.168.0.86, NAT, Dynamic\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth3 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth3 -m state --state ESTABLISHED,RELATED -j ACCEPT\n# eth4:192.168.0.84, NAT, Symmetric\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth4 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth4 -m state --state ESTABLISHED,RELATED -j ACCEPT\n# eth5:10.10.0.108, Open Internet, FullCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth5 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth5 -j ACCEPT\n# eth6:10.10.0.106, Open Internet, RestrictedCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth6 -m recent --rdest --set --name pubtrack6 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth6 -m recent --rsource --rcheck --seconds 300 --name pubtrack6 -j ACCEPT\n# eth7:10.10.0.104, Open Internet, PortRestrictedCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o eth7 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth7 -m state --state ESTABLISHED,RELATED -j ACCEPT\n# eth8:10.10.0.102, Open Internet, UDPBlocked\n# default rule DROP\n# eth9:172.16.0.48, NAT, FullCone\nip netns exec nsa iptables 
-t filter -A OUTPUT -p udp -o eth9 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i eth9 -j ACCEPT\n# etha:172.16.0.46, NAT, RestrictedCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o etha -m recent --rdest --set --name pubtrack1 -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i etha -m recent --rsource --rcheck --seconds 300 --name pubtrack1 -j ACCEPT\n# ethb:172.16.0.38, NAT, PortRestrictedCone\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o ethb -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i ethb -m state --state ESTABLISHED,RELATED -j ACCEPT\n# ethc:172.16.0.36, NAT, Dynamic\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o ethc -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i ethc -m state --state ESTABLISHED,RELATED -j ACCEPT\n# ethd:172.16.0.34, NAT, Symmetric\nip netns exec nsa iptables -t filter -A OUTPUT -p udp -o ethd -j ACCEPT\nip netns exec nsa iptables -t filter -A INPUT -p udp -i ethd -m state --state ESTABLISHED,RELATED -j ACCEPT\n  \n# 创建内网主机B\nip netns add nsb\nip netns exec nsb ip link set lo up\n  \nip link add bveth0 type veth peer name bveth1\n  \nip link set dev bveth1 master brlan1\nip link set dev bveth1 up\n  \nip link set dev bveth0 netns nsb\nip netns exec nsb ip link set dev bveth0 name eth0\nip netns exec nsb ip addr add 192.168.0.100/24 dev eth0\nip netns exec nsb ip link set dev eth0 up\nip netns exec nsb ip route add 192.168.0.1 dev eth0\nip netns exec nsb ip route add default via 192.168.0.1\n  \n# 创建外网主机Host O\nip netns add nso\nip netns exec nso ip link set lo up\n  \nip link add oveth00 type veth peer name oveth01\n  \nip link set oveth00 netns nso\nip netns exec nso ip link set dev oveth00 name eth0\nip netns exec nso ip addr add 192.168.0.1/24 dev eth0\nip netns exec nso ip link set dev eth0 up\nip netns exec nso ip rule add from 192.168.0.1/24 dev eth0\nip netns exec nso sysctl -w net.ipv4.conf.eth0.proxy_arp=1\n  \nip 
link set dev oveth01 master brlan1\nip link set dev oveth01 up\n  \nip link add oveth10 type veth peer name oveth11\n  \nip link set oveth10 netns nso\nip netns exec nso ip link set dev oveth10 name eth1\n# ip netns exec nso ip addr add 10.10.0.1/24 dev eth1\nip netns exec nso ip addr add 10.10.0.98/24 dev eth1\nip netns exec nso ip addr add 10.10.0.96/24 dev eth1\nip netns exec nso ip addr add 10.10.0.88/24 dev eth1\nip netns exec nso ip addr add 10.10.0.86/24 dev eth1\nip netns exec nso ip addr add 10.10.0.84/24 dev eth1\nip netns exec nso ip link set dev eth1 up\nip netns exec nso ip route add default dev eth1\n  \nip link set dev oveth11 master brwan\nip link set dev oveth11 up\n  \nip netns exec nso iptables -A FORWARD -j LOG --log-prefix \"FORWARD:\" --log-level 3\nip netns exec nso iptables -t nat -A PREROUTING -j LOG --log-prefix \"DNAT:\" --log-level 3\nip netns exec nso iptables -t nat -A POSTROUTING -j LOG --log-prefix \"SNAT:\" --log-level 3\n  \n# 192.168.0.98 nat to 10.10.0.98, 许出许进，再通过HOST A中设计iptables规则可成为FullCone\nip netns exec nso iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.98 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.98\nip netns exec nso iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.98 -s 10.10.0.1/24 -j DNAT --to-destination 192.168.0.98\n# 192.168.0.96 nat to 10.10.0.96, 许出许进，确保映射地址无论如何不会变，再通过HOST A中设计iptables规则可成为RestrictedCone\nip netns exec nso iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.96 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.96\nip netns exec nso iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.96 -s 10.10.0.1/24 -j DNAT --to-destination 192.168.0.96\n# 192.168.0.88 nat to 10.10.0.88, 许出许进，确保映射地址无论如何不会变，再通过HOST A中设计iptables规则可成为PortRestrictedCone\nip netns exec nso iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.88 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.88\nip netns exec nso iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.88 -s 10.10.0.1/24 -j DNAT --to-destination 192.168.0.88\n# 192.168.0.86 nat to 
10.10.0.86, 若是先进后出的，端口随机映射；否则只进行IP映射，可成为Dynamic\nip netns exec nso iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.86 -s 10.10.0.1/24 -m recent --rsource --set --name strangers -j DNAT --to-destination 192.168.0.1  # 注意：故意DNAT到一个错误的地址\nip netns exec nso iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.86 -d 10.10.0.1/24 -m recent --rdest --rcheck --seconds 3600 --name strangers -j SNAT --to-source 10.10.0.86 --random\nip netns exec nso iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.86 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.86\n# 192.168.0.84 nat to 10.10.0.84, 许出不许进，出的时候，端口随机映射，可成为Symmetric\nip netns exec nso iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.84 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.84 --random\n  \n# 创建外网主机Host N\nip netns add nsn\nip netns exec nsn ip link set lo up\n  \nip link add nveth00 type veth peer name nveth01\n  \nip link set nveth00 netns nsn\nip netns exec nsn ip link set dev nveth00 name eth0\nip netns exec nsn ip addr add 172.16.0.1/24 dev eth0\nip netns exec nsn ip link set dev eth0 up\nip netns exec nsn ip rule add from 172.16.0.1/24 dev eth0\nip netns exec nsn sysctl -w net.ipv4.conf.eth0.proxy_arp=1\n  \nip link set dev nveth01 master brlan2\nip link set dev nveth01 up\n  \nip link add nveth10 type veth peer name nveth11\n  \nip link set nveth10 netns nsn\nip netns exec nsn ip link set dev nveth10 name eth1\n# ip netns exec nsn ip addr add 10.10.0.2/24 dev eth1\nip netns exec nsn ip addr add 10.10.0.48/24 dev eth1\nip netns exec nsn ip addr add 10.10.0.46/24 dev eth1\nip netns exec nsn ip addr add 10.10.0.38/24 dev eth1\nip netns exec nsn ip addr add 10.10.0.36/24 dev eth1\nip netns exec nsn ip addr add 10.10.0.34/24 dev eth1\nip netns exec nsn ip link set dev eth1 up\nip netns exec nsn ip route add default dev eth1\n  \nip link set dev nveth11 master brwan\nip link set dev nveth11 up\n  \nip netns exec nsn iptables -A FORWARD -j LOG --log-prefix \"FORWARD:\" --log-level 3\nip netns exec nsn iptables -t nat 
-A PREROUTING -j LOG --log-prefix \"DNAT:\" --log-level 3\nip netns exec nsn iptables -t nat -A POSTROUTING -j LOG --log-prefix \"SNAT:\" --log-level 3\n  \n# 172.16.0.48 nat to 10.10.0.48, 许出许进，再通过HOST A中设计iptables规则可成为FullCone\nip netns exec nsn iptables -t nat -A POSTROUTING -o eth1 -s 172.16.0.48 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.48\nip netns exec nsn iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.48 -s 10.10.0.1/24 -j DNAT --to-destination 172.16.0.48\n# 172.16.0.46 nat to 10.10.0.46, 许出许进，确保映射地址无论如何不会变，再通过HOST A中设计iptables规则可成为RestrictedCone\nip netns exec nsn iptables -t nat -A POSTROUTING -o eth1 -s 172.16.0.46 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.46\nip netns exec nsn iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.46 -s 10.10.0.1/24 -j DNAT --to-destination 172.16.0.46\n# 172.16.0.38 nat to 10.10.0.38, 许出许进，确保映射地址无论如何不会变，再通过HOST A中设计iptables规则可成为PortRestrictedCone\nip netns exec nsn iptables -t nat -A POSTROUTING -o eth1 -s 172.16.0.38 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.38\nip netns exec nsn iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.38 -s 10.10.0.1/24 -j DNAT --to-destination 172.16.0.38\n# 172.16.0.36 nat to 10.10.0.36, 若是先进后出的，端口随机映射；否则只进行IP映射，可成为Dynamic\nip netns exec nsn iptables -t nat -A PREROUTING -i eth1 -d 10.10.0.36 -s 10.10.0.1/24 -m recent --rsource --set --name strangers -j DNAT --to-destination 172.16.0.1  # 注意：故意DNAT到一个错误的地址\nip netns exec nsn iptables -t nat -A POSTROUTING -o eth1 -s 172.16.0.36 -d 10.10.0.1/24 -m recent --rdest --rcheck --seconds 3600 --name strangers -j SNAT --to-source 10.10.0.36 --random\nip netns exec nsn iptables -t nat -A POSTROUTING -o eth1 -s 172.16.0.36 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.36\n# 172.16.0.34 nat to 10.10.0.34, 许出不许进，出的时候，端口随机映射，可成为Symmetric\nip netns exec nsn iptables -t nat -A POSTROUTING -o eth1 -s 172.16.0.34 -d 10.10.0.1/24 -j SNAT --to-source 10.10.0.34 --random\n  \n# Host S\nip netns add nss\nip netns exec nss ip link set lo up\n  \ncreate_new 
\"sveth0\" \"brwan\" \"nss\" \"eth0\" \"10.10.0.64\" \"10.10.0.1\" \"401\"\ncreate_new \"sveth1\" \"brwan\" \"nss\" \"eth1\" \"10.10.0.66\" \"10.10.0.1\" \"402\"\ncreate_new \"sveth2\" \"brwan\" \"nss\" \"eth2\" \"10.10.0.68\" \"10.10.0.1\" \"403\"\n  \n# 创建内网主机H\nip netns add nshub\nip netns exec nshub ip link set lo up\n \nip link add hubveth0 type veth peer name hubveth1\n \nip link set dev hubveth1 master brwan\nip link set dev hubveth1 up\n \nip link set dev hubveth0 netns nshub\nip netns exec nshub ip link set dev hubveth0 name eth0\nip netns exec nshub ip addr add 10.10.0.1/24 dev eth0\nip netns exec nshub ip link set dev eth0 up\n# ip netns exec nshub ip rule add from 10.10.0.1/24 dev eth0\nip netns exec nshub ip route add default dev eth0\n"
  },
  {
    "path": "qtraversal/tools/clear_nat.sh",
    "content": "#!/bin/bash\r\n \r\n# set -x\r\nset -e\r\n  \r\nip netns exec nsa ip route flush table 101\r\nip netns exec nsa ip route flush table 102\r\nip netns exec nsa ip route flush table 103\r\nip netns exec nsa ip route flush table 104\r\nip netns exec nsa ip route flush table 105\r\nip netns exec nsa ip route flush table 201\r\nip netns exec nsa ip route flush table 202\r\nip netns exec nsa ip route flush table 203\r\nip netns exec nsa ip route flush table 204\r\nip netns exec nsa ip route flush table 301\r\nip netns exec nsa ip route flush table 302\r\nip netns exec nsa ip route flush table 303\r\nip netns exec nsa ip route flush table 304\r\nip netns exec nsa ip route flush table 305\r\nip netns exec nsa ip route flush cache\r\n  \r\nip netns exec nss ip route flush table 401\r\nip netns exec nss ip route flush table 402\r\nip netns exec nss ip route flush table 403\r\nip netns exec nss ip route flush cache\r\n  \r\nip netns del nsa\r\nip netns del nsb\r\nip netns del nso\r\nip netns del nss\r\nip netns del nsn\r\nip netns del nshub\r\n  \r\nip link del brlan1\r\nip link del brlan2\r\nip link del brwan\r\n  \r\niptables -D FORWARD -o brlan1 -m comment --comment \"allow packets to pass from lxd lan bridge\" -j ACCEPT\r\niptables -D FORWARD -i brlan1 -m comment --comment \"allow input packets to pass to lxd lan bridge\" -j ACCEPT\r\n  \r\niptables -D FORWARD -o brlan2 -m comment --comment \"allow packets to pass from lxd lan bridge\" -j ACCEPT\r\niptables -D FORWARD -i brlan2 -m comment --comment \"allow input packets to pass to lxd lan bridge\" -j ACCEPT\r\n  \r\niptables -D FORWARD -o brwan -m comment --comment \"allow packets to pass from lxd wan bridge\" -j ACCEPT\r\niptables -D FORWARD -i brwan -m comment --comment \"allow input packets to pass to lxd wan bridge\" -j ACCEPT\r\n  \r\n# ip link del aveth1\r\n# ip link del bveth1\r\n# ip link del oveth1\r\n"
  },
  {
    "path": "qtraversal/tools/dockerfile",
    "content": "ARG TARGETPLATFORM=linux/amd64\r\nFROM --platform=$TARGETPLATFORM ubuntu:24.04\r\nENV DEBIAN_FRONTEND=noninteractive \\\r\n    CARGO_HOME=/usr/local/cargo \\\r\n    PATH=/usr/local/cargo/bin:$PATH\r\n\r\n# # 1. Use the Aliyun APT mirror\r\n# RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list && \\\r\n#     sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list\r\n\r\n# 2. Install system packages in a separate layer (leverages the Docker cache)\r\nRUN apt-get update && apt-get install -y \\\r\n    build-essential \\\r\n    curl \\\r\n    git \\\r\n    iproute2 \\\r\n    iptables \\\r\n    libssl-dev \\\r\n    pkg-config \\\r\n    tcpdump \\\r\n    && rm -rf /var/lib/apt/lists/*\r\n\r\n# 3. Install Rust (separate layer)\r\nRUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly --profile minimal --no-modify-path\r\n\r\n# # 4. Configure the Cargo registry mirror (TOML format)\r\n# RUN mkdir -p $CARGO_HOME && \\\r\n#     printf '[source.crates-io]\\nreplace-with = \"tuna\"\\n\\n[source.tuna]\\nregistry = \"sparse+https://mirrors.tuna.tsinghua.edu.cn/crates.io-index/\"\\n\\n[net]\\ngit-fetch-with-cli = true\\nretry = 2\\n' > $CARGO_HOME/config.toml\r\n\r\n# RUN rustup override set nightly\r\n\r\n# 5. Verify the toolchain\r\nRUN rustc --version && cargo --version"
  },
  {
    "path": "qtraversal/tools/run_stun.sh",
    "content": "qtraversal/tools/build_nat.sh\ncargo build --example stun_server --release\nip netns exec nss nohup target/release/examples/stun_server --bind-addr1 10.10.0.64:20002 --bind-addr2 10.10.0.64:20003 --change-addr 10.10.0.66:20002 --outer-addr1 10.10.0.64:20002  --outer-addr2 10.10.0.64:20003 &\nip netns exec nss nohup target/release/examples/stun_server --bind-addr1 10.10.0.66:20002 --bind-addr2 10.10.0.66:20003 --change-addr 10.10.0.68:20002 --outer-addr1 10.10.0.66:20002  --outer-addr2 10.10.0.66:20003 &\nip netns exec nss nohup target/release/examples/stun_server --bind-addr1 10.10.0.68:20002 --bind-addr2 10.10.0.68:20003 --change-addr 10.10.0.64:20002 --outer-addr1 10.10.0.68:20002  --outer-addr2 10.10.0.68:20003 &\n"
  },
  {
    "path": "qudp/Cargo.toml",
    "content": "[package]\nname = \"qudp\"\nversion = \"0.5.0\"\nedition.workspace = true\ndescription = \"High-performance UDP encapsulation for QUIC\"\nreadme.workspace = true\nrepository.workspace = true\nlicense.workspace = true\nkeywords = [\"async\", \"socket\", \"udp\", \"gso\", \"gro\"]\ncategories.workspace = true\nrust-version.workspace = true\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbytes = { workspace = true }\ncfg-if = { workspace = true }\nlibc = \"0.2\"\nqbase = { workspace = true }\ntracing = { workspace = true }\nsocket2 = { workspace = true }\ntokio = { workspace = true, features = [\"net\"] }\nnix = { version = \"0.31\", features = [\"socket\", \"uio\", \"net\"] }\n\n[target.'cfg(windows)'.dependencies]\nwindows-sys = { version = \"0.61\", features = [\n    \"Win32_Foundation\",\n    \"Win32_System_IO\",\n    \"Win32_Networking_WinSock\",\n] }\n\n[dev-dependencies]\nclap = { workspace = true }\ntokio = { workspace = true, features = [\"test-util\", \"macros\"] }\n\n[dev-dependencies.tracing-subscriber]\nworkspace = true\nfeatures = [\"env-filter\", \"time\"]\n\n[[example]]\nname = \"send\"\npath = \"examples/send.rs\"\n\n[[example]]\nname = \"receive\"\npath = \"examples/receive.rs\"\n\n[features]\ngso = []\n"
  },
  {
    "path": "qudp/examples/receive.rs",
    "content": "use clap::Parser;\nuse qudp::UdpSocket;\n\n#[derive(Parser, Debug)]\n#[command(version, about, long_about = None)]\nstruct Args {\n    #[arg(short,long, default_value_t = String::from(\"127.0.0.1:12345\"))]\n    bind: String,\n}\n\n#[tokio::main(flavor = \"current_thread\")]\nasync fn main() {\n    tracing_subscriber::fmt()\n        .with_max_level(tracing::level_filters::LevelFilter::TRACE)\n        .init();\n\n    let args = Args::parse();\n    let addr = args.bind.parse().unwrap();\n\n    let socket = UdpSocket::bind(addr).expect(\"failed to create socket\");\n    let mut receiver = socket.receiver();\n    loop {\n        match receiver.recv().await {\n            Ok(n) => {\n                tracing::info!(\n                    \"Received {} packets, dst {}, src {} len {}\",\n                    n,\n                    receiver.lines[0].dst,\n                    receiver.lines[0].src,\n                    receiver.lines[0].seg_size\n                );\n            }\n            Err(e) => {\n                tracing::error!(\"Receive failed: {}\", e);\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "qudp/examples/send.rs",
    "content": "use std::io::IoSlice;\n\nuse clap::Parser;\nuse qbase::net::route::{Line, Link};\nuse qudp::UdpSocket;\n\n#[derive(Parser, Debug)]\n#[command(version, about, long_about = None)]\nstruct Args {\n    #[arg(long, default_value_t = String::from(\"127.0.0.1:0\"))]\n    src: String,\n\n    #[arg(long, default_value_t = String::from(\"127.0.0.1:12345\"))]\n    dst: String,\n\n    #[arg(long, default_value_t = 3600)]\n    msg_size: usize,\n\n    #[arg(long, default_value_t = 100)]\n    msg_count: usize,\n}\n\n#[tokio::main(flavor = \"current_thread\")]\nasync fn main() {\n    tracing_subscriber::fmt()\n        .with_max_level(tracing::level_filters::LevelFilter::TRACE)\n        .init();\n\n    let args = Args::parse();\n    let addr = args.src.parse().unwrap();\n    let socket = UdpSocket::bind(addr).expect(\"failed to create socket\");\n    let dst = args.dst.parse().unwrap();\n\n    let send_hdr = Line::new(\n        Link::new(socket.local_addr().expect(\"failed to get local addr\"), dst),\n        64,\n        None,\n        args.msg_size as u16,\n    );\n\n    let payload = vec![8u8; args.msg_size];\n    let payloads = vec![IoSlice::new(&payload[..]); args.msg_count];\n\n    match socket.send(&payloads, send_hdr).await {\n        Ok(n) => tracing::info!(\"Sent {} packets, bytes: {}\", n, n * args.msg_size),\n        Err(e) => tracing::error!(\"Send failed: {}\", e),\n    }\n}\n"
  },
  {
    "path": "qudp/src/lib.rs",
    "content": "use std::{\n    future::Future,\n    io::{self, IoSlice, IoSliceMut},\n    net::SocketAddr,\n    pin::Pin,\n    sync::atomic::AtomicI32,\n    task::{Context, Poll, ready},\n};\n\nuse bytes::BytesMut;\nuse qbase::net::route::Line;\nuse socket2::{Domain, Socket, Type};\nuse tokio::io::Interest;\npub const BATCH_SIZE: usize = 64;\ncfg_if::cfg_if! {\n    if #[cfg(unix)]{\n        #[path = \"unix.rs\"]\n        mod unix;\n    } else if #[cfg(windows)] {\n        #[path = \"windows.rs\"]\n        mod windows;\n    } else {\n        compile_error!(\"Unsupported platform\");\n    }\n}\n\n#[derive(Debug)]\npub struct UdpSocket {\n    io: tokio::net::UdpSocket,\n    ttl: AtomicI32,\n}\n\nimpl UdpSocket {\n    pub fn bind(addr: SocketAddr) -> io::Result<Self> {\n        let domain = if addr.is_ipv4() {\n            Domain::IPV4\n        } else {\n            Domain::IPV6\n        };\n\n        let socket = Socket::new(domain, Type::DGRAM, None)?;\n        socket.set_nonblocking(true)?;\n        Self::config(&socket, addr)?;\n        let io = tokio::net::UdpSocket::from_std(socket.into())?;\n        let usc = Self {\n            io,\n            ttl: AtomicI32::new(Line::DEFAULT_TTL as i32),\n        };\n        Ok(usc)\n    }\n\n    pub fn local_addr(&self) -> io::Result<SocketAddr> {\n        self.io.local_addr()\n    }\n\n    pub fn poll_send_ready(&self, cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n        self.io.poll_send_ready(cx)\n    }\n\n    pub fn poll_recv_ready(&self, cx: &mut Context<'_>) -> Poll<io::Result<()>> {\n        self.io.poll_recv_ready(cx)\n    }\n\n    pub fn poll_send(\n        &self,\n        cx: &mut Context<'_>,\n        bufs: &[IoSlice<'_>],\n        line: &Line,\n    ) -> Poll<io::Result<usize>> {\n        loop {\n            ready!(self.poll_send_ready(cx))?;\n            self.set_ttl(line.ttl as i32)?;\n            match self\n                .io\n                .try_io(Interest::WRITABLE, || self.sendmsg(bufs, line))\n    
        {\n                Ok(n) => return Poll::Ready(Ok(n)),\n                Err(e) if e.kind() == io::ErrorKind::WouldBlock => continue,\n                Err(e) => return Poll::Ready(Err(e)),\n            }\n        }\n    }\n\n    pub fn poll_recv(\n        &self,\n        cx: &mut Context,\n        bufs: &mut [IoSliceMut<'_>],\n        lines: &mut [Line],\n    ) -> Poll<io::Result<usize>> {\n        loop {\n            ready!(self.poll_recv_ready(cx)?);\n            let f = || self.recvmsg(bufs, lines);\n            let ret = self.io.try_io(Interest::READABLE, f);\n            if matches!(&ret, Err(e) if e.kind() == io::ErrorKind::WouldBlock) {\n                continue;\n            } else {\n                return Poll::Ready(ret);\n            }\n        }\n    }\n\n    #[allow(unreachable_code)]\n    pub fn bind_device(&self, _device: &str) -> io::Result<()> {\n        // #[cfg(any(target_os = \"android\", target_os = \"fuchsia\", target_os = \"linux\"))]\n        // android and linux support bind_device_by_index, which is called by codes below\n        #[cfg(target_os = \"fuchsia\")]\n        {\n            let socket = socket2::SockRef::from(&self.io);\n            return socket.bind_device(Some(_device.as_bytes()));\n        }\n        #[cfg(any(\n            target_os = \"ios\",\n            target_os = \"visionos\",\n            target_os = \"macos\",\n            target_os = \"tvos\",\n            target_os = \"watchos\",\n            target_os = \"illumos\",\n            target_os = \"solaris\",\n            target_os = \"linux\",\n            target_os = \"android\",\n        ))]\n        {\n            let socket = socket2::SockRef::from(&self.io);\n            let index = nix::net::if_::if_nametoindex(_device)?;\n            let index = std::num::NonZeroU32::new(index)\n                .expect(\"Already checked by nix::net::if_::if_nametoindex\");\n            match self.io.local_addr()? {\n                SocketAddr::V4(..) 
=> socket.bind_device_by_index_v4(Some(index))?,\n                SocketAddr::V6(..) => socket.bind_device_by_index_v6(Some(index))?,\n            }\n            return Ok(());\n        }\n        Ok(())\n    }\n}\n\npub trait Io {\n    fn config(io: &socket2::Socket, addr: SocketAddr) -> io::Result<()>;\n\n    fn sendmsg(&self, bufs: &[IoSlice<'_>], line: &Line) -> io::Result<usize>;\n\n    fn recvmsg(&self, bufs: &mut [IoSliceMut<'_>], line: &mut [Line]) -> io::Result<usize>;\n\n    fn set_ttl(&self, ttl: i32) -> io::Result<()>;\n}\n\nimpl UdpSocket {\n    pub fn send<'a>(&'a self, iovecs: &'a [IoSlice<'a>], line: Line) -> Send<'a> {\n        Send {\n            socket: self,\n            iovecs,\n            line,\n        }\n    }\n\n    pub fn receiver(&self) -> Receiver<'_> {\n        Receiver {\n            socket: self,\n            iovecs: (0..BATCH_SIZE)\n                .map(|_| {\n                    let mut buf = BytesMut::with_capacity(1500);\n                    buf.resize(1500, 0);\n                    buf\n                })\n                .collect::<Vec<_>>(),\n            lines: (0..BATCH_SIZE).map(|_| Line::default()).collect::<Vec<_>>(),\n        }\n    }\n}\n\npub struct Send<'a> {\n    pub socket: &'a UdpSocket,\n    pub iovecs: &'a [IoSlice<'a>],\n    pub line: Line,\n}\n\nimpl Future for Send<'_> {\n    type Output = io::Result<usize>;\n\n    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        let this = self.get_mut();\n        this.socket.poll_send(cx, this.iovecs, &this.line)\n    }\n}\n\npub struct Receiver<'u> {\n    pub socket: &'u UdpSocket,\n    pub iovecs: Vec<BytesMut>,\n    pub lines: Vec<Line>,\n}\n\nimpl Receiver<'_> {\n    #[inline]\n    pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<io::Result<usize>> {\n        let mut bufs = self\n            .iovecs\n            .iter_mut()\n            .map(|b| IoSliceMut::new(b))\n            .collect::<Vec<_>>();\n\n        
self.socket.poll_recv(cx, &mut bufs, &mut self.lines)\n    }\n\n    #[inline]\n    pub async fn recv(&mut self) -> io::Result<usize> {\n        core::future::poll_fn(|cx| self.poll_recv(cx)).await\n    }\n}\n"
  },
  {
    "path": "qudp/src/unix.rs",
    "content": "use std::{\n    io::{self, IoSlice},\n    net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6},\n    os::fd::{AsFd, AsRawFd},\n};\n\nuse nix::{\n    cmsg_space,\n    sys::socket::{\n        ControlMessageOwned, SockaddrLike, SockaddrStorage,\n        sockopt::{self},\n    },\n};\nuse qbase::net::route::Line;\nuse socket2::Socket;\n\nuse crate::{Io, UdpSocket};\n\nconst OPTION_ON: bool = true;\nconst OPTION_OFF: bool = false;\n\nimpl Io for UdpSocket {\n    fn config(socket: &Socket, addr: SocketAddr) -> io::Result<()> {\n        let io = socket.as_fd();\n        nix::sys::socket::setsockopt(&io, sockopt::RcvBuf, &(2 * 1024 * 1024))?;\n        match addr {\n            SocketAddr::V4(_) => {\n                #[cfg(any(target_os = \"freebsd\", target_os = \"macos\", target_os = \"ios\"))]\n                {\n                    nix::sys::socket::setsockopt(&io, sockopt::IpDontFrag, &OPTION_ON)?;\n                    nix::sys::socket::setsockopt(&io, sockopt::Ipv4RecvDstAddr, &OPTION_ON)?;\n                }\n                #[cfg(any(\n                    target_os = \"android\",\n                    target_os = \"linux\",\n                    target_os = \"freebsd\",\n                    target_os = \"netbsd\"\n                ))]\n                nix::sys::socket::setsockopt(&io, sockopt::Ipv4Ttl, &(Line::DEFAULT_TTL as i32))?;\n                nix::sys::socket::setsockopt(&io, sockopt::Ipv4PacketInfo, &OPTION_ON)?;\n            }\n            SocketAddr::V6(_) => {\n                nix::sys::socket::setsockopt(&io, sockopt::Ipv6V6Only, &OPTION_OFF)?;\n                nix::sys::socket::setsockopt(&io, sockopt::Ipv6RecvPacketInfo, &OPTION_ON)?;\n                nix::sys::socket::setsockopt(&io, sockopt::Ipv6DontFrag, &OPTION_ON)?;\n                nix::sys::socket::setsockopt(&io, sockopt::Ipv6Ttl, &(Line::DEFAULT_TTL as i32))?;\n            }\n        }\n\n        socket.bind(&addr.into())\n    }\n\n    #[cfg(any(\n        
target_os = \"android\",\n        target_os = \"linux\",\n        target_os = \"freebsd\",\n        target_os = \"netbsd\"\n    ))]\n    fn sendmsg(&self, buffers: &[IoSlice<'_>], line: &Line) -> io::Result<usize> {\n        use nix::{\n            errno::Errno,\n            sys::socket::{MsgFlags, MultiHeaders, SockaddrIn, SockaddrIn6, sendmmsg},\n        };\n\n        use super::BATCH_SIZE;\n        let slices: Vec<_> = buffers\n            .iter()\n            .take(BATCH_SIZE)\n            .map(std::slice::from_ref)\n            .collect();\n\n        let batch_size = slices.len();\n        if batch_size == 0 {\n            return Ok(0);\n        }\n        #[cfg(feature = \"gso\")]\n        let (cmsgs, space) = (\n            vec![nix::sys::socket::ControlMessage::UdpGsoSegments(\n                &line.seg_size,\n            )],\n            Some(cmsg_space!(libc::c_int)),\n        );\n        #[cfg(not(feature = \"gso\"))]\n        let (cmsgs, space) = (Vec::new(), None);\n\n        macro_rules! 
send_batch {\n            ($ty:ty, $addr:expr) => {{\n                let sock_addr = <$ty>::from($addr);\n                let addrs = vec![Some(sock_addr); BATCH_SIZE];\n                let mut data = MultiHeaders::<$ty>::preallocate(BATCH_SIZE, space);\n                match sendmmsg(\n                    self.io.as_raw_fd(),\n                    &mut data,\n                    &slices,\n                    &addrs,\n                    &cmsgs,\n                    MsgFlags::empty(),\n                ) {\n                    Ok(ret) => Ok(ret.count()),\n                    Err(e @ (Errno::EINTR | Errno::EAGAIN | Errno::ENOBUFS)) => {\n                        Err(io::Error::new(io::ErrorKind::WouldBlock, e))\n                    }\n                    Err(e) => Err(e.into()),\n                }\n            }};\n        }\n\n        match line.dst {\n            SocketAddr::V4(v4) => send_batch!(SockaddrIn, v4),\n            SocketAddr::V6(v6) => send_batch!(SockaddrIn6, v6),\n        }\n    }\n\n    #[cfg(any(\n        target_os = \"macos\",\n        target_os = \"ios\",\n        target_os = \"watchos\",\n        target_os = \"tvos\"\n    ))]\n    fn sendmsg(&self, slices: &[IoSlice<'_>], send_line: &Line) -> io::Result<usize> {\n        use nix::{\n            errno::Errno,\n            sys::socket::{MsgFlags, SockaddrIn, SockaddrIn6, sendmsg},\n        };\n        let mut sent_packet = 0;\n        for slice in slices.iter() {\n            macro_rules! 
send_batch {\n                ($ty:ty, $addr:expr) => {{\n                    let sock_addr = <$ty>::from($addr);\n                    match sendmsg(\n                        self.io.as_raw_fd(),\n                        &[*slice],\n                        &[],\n                        MsgFlags::empty(),\n                        Some(&sock_addr),\n                    ) {\n                        Ok(_send_bytes) => sent_packet += 1,\n                        Err(_) if sent_packet > 0 => return Ok(sent_packet),\n                        Err(Errno::EINTR) => continue,\n                        Err(e @ (Errno::EAGAIN | Errno::ENOBUFS)) => {\n                            return Err(io::Error::new(io::ErrorKind::WouldBlock, e));\n                        }\n                        Err(e) => {\n                            return Err(e.into());\n                        }\n                    }\n                }};\n            }\n\n            match send_line.dst {\n                SocketAddr::V4(v4) => send_batch!(SockaddrIn, v4),\n                SocketAddr::V6(v6) => send_batch!(SockaddrIn6, v6),\n            }\n        }\n        Ok(sent_packet)\n    }\n\n    #[cfg(any(\n        target_os = \"android\",\n        target_os = \"linux\",\n        target_os = \"freebsd\",\n        target_os = \"netbsd\"\n    ))]\n    fn recvmsg(\n        &self,\n        bufs: &mut [std::io::IoSliceMut<'_>],\n        recv_lines: &mut [Line],\n    ) -> io::Result<usize> {\n        use nix::sys::socket::{MsgFlags, recvmmsg};\n\n        use super::BATCH_SIZE;\n        let mut msgs: Vec<_> = bufs\n            .iter_mut()\n            .map(|buf| [std::io::IoSliceMut::new(&mut buf[..])])\n            .collect();\n\n        let cmsg_buffer = cmsg_space!(libc::in_pktinfo, libc::in6_pktinfo, libc::c_int);\n        let mut data = nix::sys::socket::MultiHeaders::<SockaddrStorage>::preallocate(\n            BATCH_SIZE,\n            Some(cmsg_buffer),\n        );\n\n        let res = match recvmmsg(\n        
    self.io.as_raw_fd(),\n            &mut data,\n            &mut msgs,\n            MsgFlags::MSG_DONTWAIT,\n            None,\n        ) {\n            Ok(results) => results.collect::<Vec<_>>(),\n            Err(e) => {\n                if matches!(e, nix::errno::Errno::EAGAIN | nix::errno::Errno::EINTR) {\n                    return Err(io::Error::new(io::ErrorKind::WouldBlock, e));\n                }\n                return Err(e.into());\n            }\n        };\n\n        let local_port = self.local_addr()?.port();\n        let mut count = 0;\n\n        for recv_msg in res {\n            let src_addr = recv_msg.address.unwrap().to_socketaddr();\n            let link = qbase::net::route::Link::new(src_addr, recv_lines[count].dst);\n            let mut recv_line = Line {\n                link,\n                ttl: 0,\n                ecn: None,\n                seg_size: recv_msg.bytes as u16,\n            };\n            for cmsg in recv_msg.cmsgs().unwrap() {\n                parse_cmsg(cmsg, &mut recv_line);\n            }\n            recv_line.dst.set_port(local_port);\n            recv_lines[count] = recv_line;\n            count += 1;\n        }\n\n        Ok(count)\n    }\n\n    #[cfg(any(\n        target_os = \"macos\",\n        target_os = \"ios\",\n        target_os = \"watchos\",\n        target_os = \"tvos\"\n    ))]\n    fn recvmsg(\n        &self,\n        bufs: &mut [std::io::IoSliceMut<'_>],\n        recv_lines: &mut [Line],\n    ) -> io::Result<usize> {\n        use nix::sys::socket::{MsgFlags, recvmsg};\n        let mut cmsg_space = cmsg_space!(libc::in_pktinfo, libc::in6_pktinfo, libc::c_int);\n        let result = recvmsg::<SockaddrStorage>(\n            self.io.as_raw_fd(),\n            bufs,\n            Some(&mut cmsg_space),\n            MsgFlags::empty(),\n        );\n\n        match result {\n            Ok(recv_msg) => {\n                if let Ok(cmsgs) = recv_msg.cmsgs() {\n                    for cmsg in cmsgs {\n             
           parse_cmsg(cmsg, &mut recv_lines[0]);\n                    }\n                }\n                recv_lines[0].dst.set_port(self.local_addr()?.port());\n                recv_lines[0].src = recv_msg.address.unwrap().to_socketaddr();\n                recv_lines[0].seg_size = recv_msg.bytes as u16;\n                Ok(1)\n            }\n            Err(e) => {\n                if matches!(e, nix::errno::Errno::EAGAIN | nix::errno::Errno::EINTR) {\n                    // actually, it's not an error, just a signal to retry\n                    return Err(io::Error::new(io::ErrorKind::WouldBlock, e));\n                }\n                Err(e.into())\n            }\n        }\n    }\n\n    fn set_ttl(&self, ttl: i32) -> io::Result<()> {\n        use std::sync::atomic::Ordering::{Acquire, SeqCst};\n\n        if ttl == self.ttl.load(Acquire) {\n            return Ok(());\n        }\n        let local = self.local_addr()?;\n        let io = self.io.as_raw_fd();\n        let ret = match local.ip() {\n            IpAddr::V4(_) => unsafe {\n                libc::setsockopt(\n                    io,\n                    libc::IPPROTO_IP,\n                    libc::IP_TTL,\n                    &ttl as *const _ as *const libc::c_void,\n                    std::mem::size_of_val(&ttl) as libc::socklen_t,\n                )\n            },\n            IpAddr::V6(_) => unsafe {\n                libc::setsockopt(\n                    io,\n                    libc::IPPROTO_IPV6,\n                    libc::IPV6_UNICAST_HOPS,\n                    &ttl as *const _ as *const libc::c_void,\n                    std::mem::size_of_val(&ttl) as libc::socklen_t,\n                )\n            },\n        };\n        if ret != 0 {\n            return Err(io::Error::last_os_error());\n        }\n\n        self.ttl.store(ttl, SeqCst);\n        Ok(())\n    }\n}\n\nfn parse_cmsg(cmsg: ControlMessageOwned, line: &mut Line) {\n    match cmsg {\n        
ControlMessageOwned::Ipv4PacketInfo(pktinfo) => {\n            let ip = IpAddr::V4(Ipv4Addr::from(pktinfo.ipi_addr.s_addr.to_ne_bytes()));\n            line.link.dst.set_ip(ip);\n        }\n        ControlMessageOwned::Ipv6PacketInfo(pktinfo6) => {\n            let ip = IpAddr::V6(Ipv6Addr::from(pktinfo6.ipi6_addr.s6_addr));\n            line.link.dst.set_ip(ip);\n        }\n        _ => {}\n    }\n}\n\ntrait ToSocketAddr {\n    fn to_socketaddr(&self) -> SocketAddr;\n}\n\nimpl ToSocketAddr for SockaddrStorage {\n    fn to_socketaddr(&self) -> SocketAddr {\n        match self.family() {\n            Some(nix::sys::socket::AddressFamily::Inet) => {\n                let sockaddr_in = self.as_sockaddr_in().unwrap();\n                let v4_addr = SocketAddrV4::new(sockaddr_in.ip(), sockaddr_in.port());\n                SocketAddr::V4(v4_addr)\n            }\n            Some(nix::sys::socket::AddressFamily::Inet6) => {\n                let sockaddr_in6 = self.as_sockaddr_in6().unwrap();\n                let v6_addr = SocketAddrV6::new(\n                    sockaddr_in6.ip(),\n                    sockaddr_in6.port(),\n                    sockaddr_in6.flowinfo(),\n                    sockaddr_in6.scope_id(),\n                );\n                SocketAddr::V6(v6_addr)\n            }\n            _ => panic!(\"Unsupported address family\"),\n        }\n    }\n}\n"
  },
  {
    "path": "qudp/src/windows.rs",
    "content": "use std::{\n    ffi::c_int,\n    io, mem,\n    net::{IpAddr, Ipv4Addr, SocketAddr},\n    os::windows::io::AsRawSocket,\n    ptr,\n};\n\nuse libc::c_uchar;\nuse qbase::net::route::{Line, Link};\nuse socket2::Socket;\nuse windows_sys::Win32::Networking::WinSock::{self, SOCKET};\n\nuse crate::{Io, UdpSocket};\n\nconst CMSG_LEN: usize = 128;\n#[derive(Copy, Clone)]\n#[repr(align(8))] // Conservative bound for align_of<WinSock::CMSGHDR>\npub(crate) struct Aligned<T>(pub(crate) T);\n\nimpl Io for UdpSocket {\n    fn config(socket: &Socket, addr: SocketAddr) -> std::io::Result<()> {\n        const OPTION_ON: c_int = 1;\n        const OPTION_OFF: c_int = 0;\n        let io = socket.as_raw_socket().try_into().unwrap();\n\n        setsockopt(io, WinSock::SOL_SOCKET, WinSock::SO_RCVBUF, 2 * 1024 * 1024);\n        match addr {\n            SocketAddr::V4(_) => {\n                setsockopt(io, WinSock::IPPROTO_IP, WinSock::IP_RECVTOS, OPTION_ON);\n                setsockopt(io, WinSock::IPPROTO_IP, WinSock::IP_PKTINFO, OPTION_ON);\n                setsockopt(io, WinSock::IPPROTO_IP, WinSock::IP_RECVTTL, OPTION_ON);\n                setsockopt(io, WinSock::IPPROTO_IP, WinSock::IP_RECVDSTADDR, OPTION_ON);\n                setsockopt(\n                    io,\n                    WinSock::IPPROTO_IP,\n                    WinSock::IP_TTL,\n                    Line::DEFAULT_TTL as c_int,\n                );\n            }\n            SocketAddr::V6(_) => {\n                setsockopt(io, WinSock::IPPROTO_IPV6, WinSock::IPV6_V6ONLY, OPTION_OFF);\n                setsockopt(io, WinSock::IPPROTO_IPV6, WinSock::IPV6_HOPLIMIT, OPTION_ON);\n                setsockopt(\n                    io,\n                    WinSock::IPPROTO_IPV6,\n                    WinSock::IPV6_RECVTCLASS,\n                    OPTION_ON,\n                );\n                setsockopt(io, WinSock::IPPROTO_IPV6, WinSock::IPV6_PKTINFO, OPTION_ON);\n            }\n        }\n        if let Err(e) = 
socket.bind(&addr.into()) {\n            tracing::error!(target: \"qudp\", \"Failed to bind socket: {}\", e);\n            return Err(io::Error::new(io::ErrorKind::AddrInUse, e));\n        }\n        Ok(())\n    }\n\n    fn sendmsg(&self, bufs: &[std::io::IoSlice<'_>], line: &Line) -> std::io::Result<usize> {\n        let dst = socket2::SockAddr::from(line.dst);\n        let mut count = 0;\n\n        for buf in bufs {\n            let mut ctrl_buf = Aligned([0; CMSG_LEN]);\n            let mut data = WinSock::WSABUF {\n                buf: buf.as_ptr() as *mut _,\n                len: buf.len() as _,\n            };\n\n            let ctrl = WinSock::WSABUF {\n                buf: ctrl_buf.0.as_mut_ptr(),\n                len: ctrl_buf.0.len() as _,\n            };\n\n            let mut wsa_msg = WinSock::WSAMSG {\n                name: dst.as_ptr() as *mut _,\n                namelen: dst.len(),\n                lpBuffers: &mut data,\n                Control: ctrl,\n                dwBufferCount: 1,\n                dwFlags: 0,\n            };\n\n            let mut cmsg = unsafe { first_cmsg(&mut wsa_msg).as_mut() };\n            let mut cmsg_len = 0;\n            if !line.src.ip().is_unspecified() {\n                let src = socket2::SockAddr::from(line.src);\n                match src.family() {\n                    WinSock::AF_INET => {\n                        let src_ip =\n                            unsafe { ptr::read(src.as_ptr() as *const WinSock::SOCKADDR_IN) };\n                        let pktinfo = WinSock::IN_PKTINFO {\n                            ipi_addr: src_ip.sin_addr,\n                            ipi_ifindex: 0,\n                        };\n\n                        cmsg = append_cmsg(\n                            &wsa_msg,\n                            cmsg,\n                            WinSock::IPPROTO_IP,\n                            WinSock::IP_PKTINFO,\n                            pktinfo,\n                            &mut cmsg_len,\n      
                  );\n                    }\n                    WinSock::AF_INET6 => {\n                        let src_ip =\n                            unsafe { ptr::read(src.as_ptr() as *const WinSock::SOCKADDR_IN6) };\n                        let pktinfo = WinSock::IN6_PKTINFO {\n                            ipi6_addr: src_ip.sin6_addr,\n                            ipi6_ifindex: unsafe { src_ip.Anonymous.sin6_scope_id },\n                        };\n\n                        cmsg = append_cmsg(\n                            &wsa_msg,\n                            cmsg,\n                            WinSock::IPPROTO_IPV6,\n                            WinSock::IPV6_PKTINFO,\n                            pktinfo,\n                            &mut cmsg_len,\n                        );\n                    }\n                    _ => {\n                        return Err(io::Error::from(io::ErrorKind::InvalidInput));\n                    }\n                }\n            }\n\n            if let Some(ecn) = line.ecn {\n                let is_ipv4 = line.dst.is_ipv4()\n                    || matches!(line.dst.ip(), IpAddr::V6(addr) if addr.to_ipv4_mapped().is_some());\n                if is_ipv4 {\n                    _ = append_cmsg(\n                        &wsa_msg,\n                        cmsg,\n                        WinSock::IPPROTO_IP,\n                        WinSock::IP_ECN,\n                        ecn,\n                        &mut cmsg_len,\n                    );\n                } else {\n                    _ = append_cmsg(\n                        &wsa_msg,\n                        cmsg,\n                        WinSock::IPPROTO_IPV6,\n                        WinSock::IPV6_TCLASS,\n                        ecn,\n                        &mut cmsg_len,\n                    );\n                }\n            }\n\n            wsa_msg.Control.len = cmsg_len as _;\n            if cmsg_len == 0 {\n                wsa_msg.Control = WinSock::WSABUF {\n             
       buf: ptr::null_mut(),\n                    len: 0,\n                };\n            }\n\n            let mut len = 0;\n            let ret = unsafe {\n                WinSock::WSASendMsg(\n                    self.io.as_raw_socket() as usize,\n                    &wsa_msg,\n                    0,\n                    &mut len,\n                    ptr::null_mut(),\n                    None,\n                )\n            };\n\n            if ret != 0 {\n                let e = io::Error::last_os_error();\n                if e.kind() != io::ErrorKind::WouldBlock {\n                    return Err(e);\n                }\n            }\n            count += 1;\n        }\n        Ok(count as usize)\n    }\n\n    fn recvmsg(\n        &self,\n        bufs: &mut [std::io::IoSliceMut<'_>],\n        lines: &mut [Line],\n    ) -> std::io::Result<usize> {\n        let wsa_recvmsg_ptr = wsarecvmsg_ptr().expect(\"valid function pointer for WSARecvMsg\");\n\n        let mut ctrl_buf = Aligned([0; CMSG_LEN]);\n        let mut source: WinSock::SOCKADDR_INET = unsafe { mem::zeroed() };\n\n        let ctrl = WinSock::WSABUF {\n            buf: ctrl_buf.0.as_mut_ptr(),\n            len: ctrl_buf.0.len() as _,\n        };\n\n        // Keep the buffer descriptor in a named local: a WSABUF created inline\n        // would be dropped at the end of the `let` statement, leaving\n        // `lpBuffers` dangling by the time WSARecvMsg is called below.\n        let mut data = WinSock::WSABUF {\n            buf: bufs[0].as_mut_ptr(),\n            len: bufs[0].len() as _,\n        };\n\n        let mut wsa_msg = WinSock::WSAMSG {\n            name: &mut source as *mut _ as *mut _,\n            namelen: mem::size_of_val(&source) as _,\n            lpBuffers: &mut data,\n            Control: ctrl,\n            dwBufferCount: 1,\n            dwFlags: 0,\n        };\n\n        let mut len = 0;\n        unsafe {\n            let rc = (wsa_recvmsg_ptr)(\n                self.io.as_raw_socket() as usize,\n                &mut wsa_msg,\n                &mut len,\n                ptr::null_mut(),\n                None,\n            );\n            if rc == -1 {\n                return Err(io::Error::last_os_error());\n            }\n   
     }\n\n        let addr = unsafe {\n            let (_, addr) = socket2::SockAddr::try_init(|addr_storage, len| {\n                *len = mem::size_of_val(&source) as _;\n                ptr::copy_nonoverlapping(&source, addr_storage as _, 1);\n                Ok(())\n            })?;\n            addr.as_socket()\n        };\n\n        let mut ecn_bits = 0;\n        let mut dst_ip = None;\n        let mut cmsg: Option<&mut WinSock::CMSGHDR> = unsafe { first_cmsg(&mut wsa_msg).as_mut() };\n        while let Some(cur_cmsg) = cmsg {\n            // [header (len)][data][padding(len + sizeof(data))] -> [header][data][padding]\n            match (cur_cmsg.cmsg_level, cur_cmsg.cmsg_type) {\n                (WinSock::IPPROTO_IP, WinSock::IP_PKTINFO) => {\n                    let pktinfo = cmsg_decode::<WinSock::IN_PKTINFO>(cur_cmsg);\n                    let ip4 = Ipv4Addr::from(u32::from_be(unsafe { pktinfo.ipi_addr.S_un.S_addr }));\n                    dst_ip = Some(ip4.into());\n                }\n                (WinSock::IPPROTO_IPV6, WinSock::IPV6_PKTINFO) => {\n                    let pktinfo = cmsg_decode::<WinSock::IN6_PKTINFO>(cur_cmsg);\n                    // Addr is stored in big endian format\n                    dst_ip = Some(IpAddr::from(unsafe { pktinfo.ipi6_addr.u.Byte }));\n                }\n                (WinSock::IPPROTO_IP, WinSock::IP_ECN) => {\n                    ecn_bits = cmsg_decode::<c_int>(cur_cmsg);\n                }\n                (WinSock::IPPROTO_IPV6, WinSock::IPV6_ECN) => {\n                    ecn_bits = cmsg_decode::<c_int>(cur_cmsg);\n                }\n                _ => {}\n            }\n            cmsg = unsafe { next_cmsg(&wsa_msg, cur_cmsg).as_mut() };\n        }\n        let dst = if let Some(ip) = dst_ip {\n            crate::SocketAddr::new(ip, self.local_addr()?.port())\n        } else {\n            self.local_addr()?\n        };\n        lines[0] = Line {\n            link: Link::new(addr.unwrap(), dst),\n     
       ttl: Line::DEFAULT_TTL,\n            ecn: Some(ecn_bits as u8),\n            seg_size: len as u16,\n        };\n        Ok(1)\n    }\n\n    fn set_ttl(&self, ttl: i32) -> io::Result<()> {\n        use std::sync::atomic::Ordering::{Acquire, SeqCst};\n\n        if ttl == self.ttl.load(Acquire) {\n            return Ok(());\n        }\n\n        let local = self.local_addr()?;\n        let socket = self.io.as_raw_socket() as usize;\n\n        match local.ip() {\n            IpAddr::V4(_) => setsockopt(socket, WinSock::IPPROTO_IP, WinSock::IP_TTL, ttl),\n            IpAddr::V6(_) => setsockopt(\n                socket,\n                WinSock::IPPROTO_IPV6,\n                WinSock::IPV6_UNICAST_HOPS,\n                ttl,\n            ),\n        };\n        self.ttl.store(ttl, SeqCst);\n        Ok(())\n    }\n}\n\nfn append_cmsg<'a, V: Copy>(\n    msg: &'a WinSock::WSAMSG,\n    mut cmsg: Option<&'a mut WinSock::CMSGHDR>,\n    level: libc::c_int,\n    ty: libc::c_int,\n    data: V,\n    cmsg_len: &mut usize,\n) -> Option<&'a mut WinSock::CMSGHDR> {\n    let space = cmsg_space(mem::size_of_val(&data));\n    let next = cmsg.take().expect(\"no available cmsghdr\");\n    next.cmsg_level = level as _;\n    next.cmsg_type = ty as _;\n    next.cmsg_len = cmsg_data_len(mem::size_of_val(&data)) as _;\n    unsafe {\n        ptr::write(cmsg_data(next) as *const V as *mut V, data);\n    }\n    *cmsg_len += space;\n    unsafe { next_cmsg(msg, next).as_mut() }\n}\n\nfn cmsg_decode<T: Copy>(cmsg: &mut WinSock::CMSGHDR) -> T {\n    unsafe { ptr::read(cmsg_data(cmsg) as *const T) }\n}\n\nconst fn cmsghdr_align(length: usize) -> usize {\n    (length + mem::align_of::<WinSock::CMSGHDR>() - 1) & !(mem::align_of::<WinSock::CMSGHDR>() - 1)\n}\n\nfn cmsgdata_align(length: usize) -> usize {\n    (length + mem::align_of::<usize>() - 1) & !(mem::align_of::<usize>() - 1)\n}\n\nfn cmsg_data_len(len: usize) -> usize {\n    mem::size_of::<WinSock::CMSGHDR>() + len\n}\n\nfn cmsg_space(len: 
usize) -> usize {\n    let total = mem::size_of::<WinSock::CMSGHDR>() + len;\n    let align = mem::align_of::<WinSock::CMSGHDR>();\n    (total + align - 1) & !(align - 1)\n}\n\nunsafe fn first_cmsg(msg: &mut WinSock::WSAMSG) -> *mut WinSock::CMSGHDR {\n    if msg.Control.len as usize >= mem::size_of::<WinSock::CMSGHDR>() {\n        msg.Control.buf as *mut WinSock::CMSGHDR\n    } else {\n        ptr::null_mut::<WinSock::CMSGHDR>()\n    }\n}\n\nfn next_cmsg(msg: &WinSock::WSAMSG, cmsg: &WinSock::CMSGHDR) -> *mut WinSock::CMSGHDR {\n    let next = (cmsg as *const _ as usize + cmsghdr_align(cmsg.cmsg_len)) as *mut WinSock::CMSGHDR;\n    let max = msg.Control.buf as usize + msg.Control.len as usize;\n    if unsafe { next.offset(1) } as usize > max {\n        ptr::null_mut()\n    } else {\n        next\n    }\n}\n\nfn cmsg_data(cmsg: &mut WinSock::CMSGHDR) -> *mut libc::c_uchar {\n    (cmsg as *const _ as usize + cmsgdata_align(mem::size_of::<WinSock::CMSGHDR>())) as *mut c_uchar\n}\n\nfn setsockopt(io: SOCKET, level: libc::c_int, name: libc::c_int, value: libc::c_int) {\n    unsafe {\n        WinSock::setsockopt(\n            io,\n            level,\n            name,\n            &value as *const _ as _,\n            mem::size_of_val(&value) as _,\n        )\n    };\n}\n\nfn wsarecvmsg_ptr() -> &'static WinSock::LPFN_WSARECVMSG {\n    static WSARECVMSG_PTR: std::sync::OnceLock<WinSock::LPFN_WSARECVMSG> =\n        std::sync::OnceLock::new();\n    WSARECVMSG_PTR.get_or_init(|| {\n        let s = unsafe { WinSock::socket(WinSock::AF_INET as _, WinSock::SOCK_DGRAM as _, 0) };\n        if s == WinSock::INVALID_SOCKET {\n            tracing::warn!(\n                target: \"qudp\",\n                \"Failed to create socket for WSARecvMsg function pointer: {}\",\n                io::Error::last_os_error()\n            );\n            return None;\n        }\n        // Detect if OS expose WSARecvMsg API based on\n        // 
https://github.com/Azure/mio-uds-windows/blob/a3c97df82018086add96d8821edb4aa85ec1b42b/src/stdnet/ext.rs#L601\n        let guid = WinSock::WSAID_WSARECVMSG;\n        let mut wsa_recvmsg_ptr = None;\n        let mut len = 0;\n\n        // Safety: Option handles the NULL pointer with a None value\n        let ret = unsafe {\n            WinSock::WSAIoctl(\n                s as _,\n                WinSock::SIO_GET_EXTENSION_FUNCTION_POINTER,\n                &guid as *const _ as *const _,\n                mem::size_of_val(&guid) as u32,\n                &mut wsa_recvmsg_ptr as *mut _ as *mut _,\n                mem::size_of_val(&wsa_recvmsg_ptr) as u32,\n                &mut len,\n                ptr::null_mut(),\n                None,\n            )\n        };\n\n        if ret == -1 {\n            tracing::warn!(\n                target: \"qudp\",\n                \"Failed to get WSARecvMsg function pointer: {}\",\n                io::Error::last_os_error()\n            );\n        } else if len as usize != mem::size_of::<WinSock::LPFN_WSARECVMSG>() {\n            tracing::warn!(\n                target: \"qudp\",\n                \"WSARecvMsg function pointer size mismatch: expected {}, got {}\",\n                mem::size_of::<WinSock::LPFN_WSARECVMSG>(),\n                len\n            );\n            wsa_recvmsg_ptr = None;\n        }\n\n        unsafe {\n            WinSock::closesocket(s);\n        }\n\n        wsa_recvmsg_ptr\n    })\n}\n"
  },
  {
    "path": "tests/keychain/gen_key.sh",
    "content": "# gen root key\nopenssl ecparam -name secp384r1 -genkey -noout -out rootCA-ECC.key\n# gen self-signed cert\nopenssl req -new -x509 -days 3650 -key rootCA-ECC.key -sha384 -out rootCA-ECC.crt\n\n# gen server private key\nopenssl ecparam -name secp384r1 -genkey -noout -out quic-test-net-ECC.key\n# create csr \nopenssl req -new -key quic-test-net-ECC.key -out quic-test-net.csr\n# gen server cert with v3\ncat <<EOT > openssl.cnf\n[v3_req]\nbasicConstraints = CA:FALSE\nkeyUsage = nonRepudiation, digitalSignature, keyEncipherment\nsubjectAltName = @alt_names\n\n[alt_names]\nDNS.1 = quic.test.net\nEOT\n\nopenssl x509 -req \\\n  -extfile openssl.cnf -extensions v3_req \\\n  -in quic-test-net.csr \\\n  -CA rootCA-ECC.crt -CAkey rootCA-ECC.key -CAcreateserial \\\n  -out quic-test-net-ECC.crt -days 365 -sha384\n\n# view info in quic-test-net-ECC.crt\nopenssl x509 -in quic-test-net-ECC.crt -text -noout\n"
  },
  {
    "path": "tests/keychain/localhost/ca.cert",
    "content": "-----BEGIN CERTIFICATE-----\nMIIBkjCCATmgAwIBAgIUX2XYA8QU1FAkS19dimLJliUQEe4wCgYIKoZIzj0EAwIw\nFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTI1MDQxNjA3Mzk1NFoXDTM1MDQxNDA3\nMzk1NFowFDESMBAGA1UEAwwJbG9jYWxob3N0MFkwEwYHKoZIzj0CAQYIKoZIzj0D\nAQcDQgAEL4+GuiGFoN5syeBqmjbuciQrJfuq4NhiHw+g2K/0wDUrLOdPpNFzv4Dl\noQxneGfGp1qgja+AhimYk+zeFIWqRqNpMGcwHQYDVR0OBBYEFObmop7JSIFCq/lg\n20SaK4hAGLXgMB8GA1UdIwQYMBaAFObmop7JSIFCq/lg20SaK4hAGLXgMA8GA1Ud\nEwEB/wQFMAMBAf8wFAYDVR0RBA0wC4IJbG9jYWxob3N0MAoGCCqGSM49BAMCA0cA\nMEQCIFjNfmSQAaNt1wt86kfb80w8g+RNIoSHk8yHN8tNM0lqAiB95+L021D+58Uf\nc7z4m2eojR5BFV2lIdsbx8tMBN5RRA==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "tests/keychain/localhost/ca.key",
    "content": "-----BEGIN EC PARAMETERS-----\nBggqhkjOPQMBBw==\n-----END EC PARAMETERS-----\n-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIAXxESTjZZV9fAKLeBtFDoORO3H96YobgtSQDAivT9a9oAoGCCqGSM49\nAwEHoUQDQgAEL4+GuiGFoN5syeBqmjbuciQrJfuq4NhiHw+g2K/0wDUrLOdPpNFz\nv4DloQxneGfGp1qgja+AhimYk+zeFIWqRg==\n-----END EC PRIVATE KEY-----\n"
  },
  {
    "path": "tests/keychain/localhost/ca.srl",
    "content": "422450828B69F288653F12FD94827000BD65DF26\n"
  },
  {
    "path": "tests/keychain/localhost/client.cert",
    "content": "-----BEGIN CERTIFICATE-----\nMIIBpDCCAUqgAwIBAgIUQiRQgotp8ohlPxL9lIJwAL1l3yUwCgYIKoZIzj0EAwIw\nFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTI1MDUyNzEwMTMxNloXDTM1MDUyNTEw\nMTMxNlowETEPMA0GA1UEAwwGY2xpZW50MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcD\nQgAEf8aJWKMK8lRW7L/GyEDJhHVRLC+9unFdVwN+Pjwuj2i88zjfB7aXdJC+gZ8/\n2PRc63f3twonhXV6XKjGjwUWUKN9MHswFwYDVR0RBBAwDoIGY2xpZW50hwR/AAAB\nMAsGA1UdDwQEAwIHgDATBgNVHSUEDDAKBggrBgEFBQcDAjAdBgNVHQ4EFgQU76ne\n9bENwxHFwP9nGT9VPC1RTj4wHwYDVR0jBBgwFoAU5uainslIgUKr+WDbRJoriEAY\nteAwCgYIKoZIzj0EAwIDSAAwRQIgBN7Hq276bzZHijQB9vUJC7xDGyNs5/EL9Nm4\nDgWKaocCIQCZcu350d5Zk55+gHuYtXwWO4dGfWS9FDZvGWR0g8db9w==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "tests/keychain/localhost/client.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgF2YeSXeF2GnsG516\nxEnZA9VL5LqJKWPZbIH6G74H8i6hRANCAAR/xolYowryVFbsv8bIQMmEdVEsL726\ncV1XA34+PC6PaLzzON8Htpd0kL6Bnz/Y9Fzrd/e3CieFdXpcqMaPBRZQ\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "tests/keychain/localhost/server.cert",
    "content": "-----BEGIN CERTIFICATE-----\nMIIBgjCCASigAwIBAgIUQiRQgotp8ohlPxL9lIJwAL1l3yYwCgYIKoZIzj0EAwIw\nFDESMBAGA1UEAwwJbG9jYWxob3N0MB4XDTI2MDQxNzA4MjUyOFoXDTM2MDQxNDA4\nMjUyOFowFDESMBAGA1UEAwwJbG9jYWxob3N0MFkwEwYHKoZIzj0CAQYIKoZIzj0D\nAQcDQgAEmoNwUXTOqO7yUjQfmTI+dg8lmteiIILzg8miSYraPKJsdCeMGiQrpLzM\nViZyfg5VVpG3ajJYnzswe2v7dacpnqNYMFYwFAYDVR0RBA0wC4IJbG9jYWxob3N0\nMB0GA1UdDgQWBBRgxdcCl/SpSR2hNzOhpReEGo0syzAfBgNVHSMEGDAWgBTm5qKe\nyUiBQqv5YNtEmiuIQBi14DAKBggqhkjOPQQDAgNIADBFAiBCsin9ppCSLZBgDkCn\nTfMn94pQ4YQ5R6SWPhv3jytAqAIhAIqAp6Q+urPUDu6xXT2zzl6xWY5+m4t26aFF\nexuP4i9p\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "tests/keychain/localhost/server.key",
    "content": "-----BEGIN EC PARAMETERS-----\nBggqhkjOPQMBBw==\n-----END EC PARAMETERS-----\n-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIHgBydWpZJOQDbkIu/EWCn/2NYmF77bKjw1Xy8rGvHUdoAoGCCqGSM49\nAwEHoUQDQgAEmoNwUXTOqO7yUjQfmTI+dg8lmteiIILzg8miSYraPKJsdCeMGiQr\npLzMViZyfg5VVpG3ajJYnzswe2v7dacpng==\n-----END EC PRIVATE KEY-----\n"
  },
  {
    "path": "tests/keychain/quic.test.net/quic-test-net-ECC.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIIC4zCCAmmgAwIBAgIUeNy6M1upjE0Bf+BRjgbhx6fXFw8wCgYIKoZIzj0EAwMw\ngZMxCzAJBgNVBAYTAkNOMQswCQYDVQQIDAJISzELMAkGA1UEBwwCSEsxFTATBgNV\nBAoMDGdtLXF1aWMgdGVhbTEQMA4GA1UECwwHZ20tcXVpYzEbMBkGA1UEAwwSZ20t\ncXVpYyBtYWludGFpbmVyMSQwIgYJKoZIhvcNAQkBFhVxdWljX3RlYW1AZ2VubWV0\nYS5uZXQwHhcNMjQwODI5MDgzNjAyWhcNMjUwODI5MDgzNjAyWjCBmzELMAkGA1UE\nBhMCQ04xEjAQBgNVBAgMCUd1YW5nZG9uZzERMA8GA1UEBwwIU2hlbnpoZW4xFTAT\nBgNVBAoMDGdtLXF1aWMgdGVhbTEQMA4GA1UECwwHZ20tcXVpYzEWMBQGA1UEAwwN\ncXVpYy50ZXN0Lm5ldDEkMCIGCSqGSIb3DQEJARYVcXVpY190ZWFtQGdlbm1ldGEu\nbmV0MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEjBGFuP8QGBP5aM7ItEFzuwnG+ekJ\nHnzJhdJRd+FaGyaMmjBKF/KKVNas9EzI8fVmRItcrhb1mJOdg1ad8SGl+fNi3Oi1\nn/6CRdHCfbUfV1cOJM9O9QnTffn9aZQaC5Noo3QwcjAJBgNVHRMEAjAAMAsGA1Ud\nDwQEAwIF4DAYBgNVHREEETAPgg1xdWljLnRlc3QubmV0MB0GA1UdDgQWBBRrMVbA\npSCmPnSRuNVHVPo7ZCaeLTAfBgNVHSMEGDAWgBTk3utiwIFAIkmjR0g8LLc6ehdg\noTAKBggqhkjOPQQDAwNoADBlAjEAiddVWk2O74NiOR+A+OActVu9ZSbeaPEUsV3V\n9u1hAB8ybflgPsCb/YFLB3cZB6OVAjAdEW9SEZVXIUvuf9VK5AL2SBCumUg1G+jT\n5e1IIh6HAEuCOfh4eTDXVpm2H00Fi8s=\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "tests/keychain/quic.test.net/quic-test-net-ECC.key",
    "content": "-----BEGIN EC PRIVATE KEY-----\nMIGkAgEBBDD2+LMedQoNvJBnDcq1+9KFI2XfE489S9kzB/DoJW/3pzkG5Jq0Jlme\n1PFoLZtfN3OgBwYFK4EEACKhZANiAASMEYW4/xAYE/lozsi0QXO7Ccb56QkefMmF\n0lF34VobJoyaMEoX8opU1qz0TMjx9WZEi1yuFvWYk52DVp3xIaX582Lc6LWf/oJF\n0cJ9tR9XVw4kz071CdN9+f1plBoLk2g=\n-----END EC PRIVATE KEY-----\n"
  },
  {
    "path": "tests/keychain/quic.test.net/quic-test-net.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIIBlTCCARsCAQAwgZsxCzAJBgNVBAYTAkNOMRIwEAYDVQQIDAlHdWFuZ2Rvbmcx\nETAPBgNVBAcMCFNoZW56aGVuMRUwEwYDVQQKDAxnbS1xdWljIHRlYW0xEDAOBgNV\nBAsMB2dtLXF1aWMxFjAUBgNVBAMMDXF1aWMudGVzdC5uZXQxJDAiBgkqhkiG9w0B\nCQEWFXF1aWNfdGVhbUBnZW5tZXRhLm5ldDB2MBAGByqGSM49AgEGBSuBBAAiA2IA\nBIwRhbj/EBgT+WjOyLRBc7sJxvnpCR58yYXSUXfhWhsmjJowShfyilTWrPRMyPH1\nZkSLXK4W9ZiTnYNWnfEhpfnzYtzotZ/+gkXRwn21H1dXDiTPTvUJ0335/WmUGguT\naKAAMAoGCCqGSM49BAMCA2gAMGUCMBQlx6hnMv66mnbBZDF47v4hGdB7gxsOSEx8\nEKBxsrcp7CkvL1siECJNun953MeZNQIxAJS36WwoUhhetA4YEog4lDGHeQ55f3os\n4UjLeXOWKjswtxUISLB2xZMVm6kgb2vQqw==\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "tests/keychain/root/rootCA-ECC.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIICujCCAkCgAwIBAgIUfOA7KV6d4qkIqNA/6Rjb4Nf+3m4wCgYIKoZIzj0EAwMw\ngZMxCzAJBgNVBAYTAkNOMQswCQYDVQQIDAJISzELMAkGA1UEBwwCSEsxFTATBgNV\nBAoMDGdtLXF1aWMgdGVhbTEQMA4GA1UECwwHZ20tcXVpYzEbMBkGA1UEAwwSZ20t\ncXVpYyBtYWludGFpbmVyMSQwIgYJKoZIhvcNAQkBFhVxdWljX3RlYW1AZ2VubWV0\nYS5uZXQwHhcNMjQwODI5MDczODUyWhcNMzQwODI3MDczODUyWjCBkzELMAkGA1UE\nBhMCQ04xCzAJBgNVBAgMAkhLMQswCQYDVQQHDAJISzEVMBMGA1UECgwMZ20tcXVp\nYyB0ZWFtMRAwDgYDVQQLDAdnbS1xdWljMRswGQYDVQQDDBJnbS1xdWljIG1haW50\nYWluZXIxJDAiBgkqhkiG9w0BCQEWFXF1aWNfdGVhbUBnZW5tZXRhLm5ldDB2MBAG\nByqGSM49AgEGBSuBBAAiA2IABO8rQjanzN5m3ZhflmnY6rx8Q4a5+CZQQPxRPt1f\nT6LTjK0NEdA9SnbITkU5OQo518UXsgMvrsO7zpIOH/HhwfYhVccxbMKXFzSOAYIE\nIum/QtQyULy533javmOCJOogcqNTMFEwHQYDVR0OBBYEFOTe62LAgUAiSaNHSDws\ntzp6F2ChMB8GA1UdIwQYMBaAFOTe62LAgUAiSaNHSDwstzp6F2ChMA8GA1UdEwEB\n/wQFMAMBAf8wCgYIKoZIzj0EAwMDaAAwZQIxALmDdA9EIap8KjKmWAGSSXDfV5wl\nvwsciftrtl662l6GEu4uvI8lNpBqwEaEjvc2NAIwDkvRMnJnb8cmGScVa67dNSzU\n8pM+auAM3NYjU2wRQmNKvKgtynG4Vkg974BnIwvp\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "tests/keychain/root/rootCA-ECC.key",
    "content": "-----BEGIN EC PRIVATE KEY-----\nMIGkAgEBBDBa1GkhjbxHLCEC8/xuXT8uERSDGrfH+JvG2iQwz/w7voZjgEWnRZ2I\njf0GKl1Q9FGgBwYFK4EEACKhZANiAATvK0I2p8zeZt2YX5Zp2Oq8fEOGufgmUED8\nUT7dX0+i04ytDRHQPUp2yE5FOTkKOdfFF7IDL67Du86SDh/x4cH2IVXHMWzClxc0\njgGCBCLpv0LUMlC8ud942r5jgiTqIHI=\n-----END EC PRIVATE KEY-----\n"
  },
  {
    "path": "tests/keychain/root/rootCA-ECC.srl",
    "content": "78DCBA335BA98C4D017FE0518E06E1C7A7D7170F\n"
  },
  {
    "path": "tests/keychain/start-quic-server.sh",
    "content": "cargo run --example server -- ./  \\\n  --root \\\n  --key ${path_to}/keychain/quic.test.net/quic-test-net-ECC.key \\\n  --cert ${path_to}/keychain/quic.test.net/quic-test-net-ECC.crt \\\n  --keylog\n"
  }
]